Cloud misconfigurations remain the most common cause of cloud security incidents. Industry analyses such as the Verizon Data Breach Investigations Report consistently find that the large majority of cloud breaches stem from misconfiguration or misuse, not sophisticated exploits. An S3 bucket left public, a security group open to 0.0.0.0/0, an unencrypted database — these are configuration errors that could be caught before the infrastructure is provisioned.
Infrastructure as Code (IaC) makes this possible. When your infrastructure is defined in Terraform, CloudFormation, or Kubernetes manifests, you can scan it for known misconfigurations the same way you scan application code for vulnerabilities. This post covers two tools that complement each other: Checkov for broad coverage with 1000+ built-in checks, and Open Policy Agent (OPA) for organization-specific policies written in the Rego language.
Checkov: Broad Coverage Out of the Box
Checkov is an open-source static analysis tool for IaC, created by Bridgecrew (now part of Palo Alto Networks' Prisma Cloud). It supports:

- Terraform (HCL and plan JSON)
- CloudFormation (JSON and YAML)
- Kubernetes manifests (YAML)
- Helm charts (rendered templates)
- Dockerfiles
- Serverless Framework
- ARM templates (Azure)
Running Checkov
Basic usage against a Terraform directory:
```bash
# Install
pip install checkov

# Scan a Terraform directory
checkov -d ./terraform --framework terraform

# Scan with specific checks
checkov -d ./terraform --check CKV_AWS_18,CKV_AWS_145,CKV_AWS_19

# Scan and output SARIF for GitHub integration
checkov -d ./terraform --output sarif --output-file-path checkov.sarif

# Scan a Kubernetes manifest
checkov -f ./k8s/deployment.yaml --framework kubernetes

# Scan a Dockerfile
checkov -f ./Dockerfile --framework dockerfile
```
Built-in Checks and Compliance Frameworks
Checkov ships with 1000+ built-in checks organized by provider and compliance framework:
| Framework | Check Count (approx.) | Examples |
|---|---|---|
| CIS AWS Benchmark | 120+ | S3 bucket logging, IAM password policy, VPC flow logs |
| CIS Azure Benchmark | 100+ | Storage account encryption, network security groups |
| CIS GCP Benchmark | 80+ | Compute firewall rules, Cloud SQL encryption |
| SOC 2 | 90+ | Encryption at rest, access logging, backup configuration |
| PCI-DSS | 70+ | Network segmentation, encryption, access control |
| HIPAA | 80+ | PHI encryption, audit logging, access controls |
| CIS Kubernetes | 60+ | Pod security, RBAC, network policies |
Each check has a unique ID (e.g., CKV_AWS_18 = "Ensure the S3 bucket has access logging enabled") and maps to specific compliance controls.
Example output:
```text
Passed checks: 42, Failed checks: 8, Skipped checks: 0

Check: CKV_AWS_145: "Ensure that S3 Buckets are encrypted with KMS"
    FAILED for resource: aws_s3_bucket.data_export
    File: /s3.tf:15-25

Check: CKV_AWS_18: "Ensure the S3 bucket has access logging enabled"
    FAILED for resource: aws_s3_bucket.data_export
    File: /s3.tf:15-25

Check: CKV_AWS_19: "Ensure the EBS volume has encryption enabled"
    FAILED for resource: aws_ebs_volume.app_data
    File: /ebs.tf:1-8
```
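Checkov can also emit machine-readable JSON (`--output json`), which a small script can use to gate a pipeline or post a summary. The dictionary below is a simplified stand-in for the real report — the `results.failed_checks` and `summary` fields follow Checkov's JSON output shape, but verify the exact structure against your Checkov version:

```python
import json

# Simplified stand-in for `checkov --output json` output; the real report
# carries more fields per check (guideline URL, line ranges, etc.).
raw = json.dumps({
    "check_type": "terraform",
    "results": {
        "failed_checks": [
            {"check_id": "CKV_AWS_145", "resource": "aws_s3_bucket.data_export", "file_path": "/s3.tf"},
            {"check_id": "CKV_AWS_19", "resource": "aws_ebs_volume.app_data", "file_path": "/ebs.tf"},
        ],
    },
    "summary": {"passed": 42, "failed": 2, "skipped": 0},
})

report = json.loads(raw)
for check in report["results"]["failed_checks"]:
    print(f'{check["check_id"]}: {check["resource"]} ({check["file_path"]})')
print(f'Failed: {report["summary"]["failed"]}')
```

In CI, a non-zero `summary.failed` count would be the signal to fail the job.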
Custom Checkov Policies: Python-Based
When built-in checks don't cover your organization's requirements, you can write custom checks in Python:
```python
# custom_checks/ebs_encryption.py
from checkov.common.models.enums import CheckResult, CheckCategories
from checkov.terraform.checks.resource.base_resource_check import BaseResourceCheck


class EBSVolumeEncryption(BaseResourceCheck):
    def __init__(self):
        name = "Ensure EBS volumes are encrypted with a customer-managed KMS key"
        id = "CKV_CUSTOM_1"
        supported_resources = ["aws_ebs_volume"]
        categories = [CheckCategories.ENCRYPTION]
        super().__init__(
            name=name,
            id=id,
            categories=categories,
            supported_resources=supported_resources,
        )

    def scan_resource_conf(self, conf):
        # Check that encryption is enabled
        encrypted = conf.get("encrypted", [False])
        if isinstance(encrypted, list):
            encrypted = encrypted[0]
        if not encrypted:
            return CheckResult.FAILED

        # Check that a KMS key is specified (not the default AWS-managed key)
        kms_key = conf.get("kms_key_id", [None])
        if isinstance(kms_key, list):
            kms_key = kms_key[0]
        if kms_key is None or kms_key == "":
            return CheckResult.FAILED

        return CheckResult.PASSED


check = EBSVolumeEncryption()
```
Register custom checks by pointing Checkov to the directory:
```bash
checkov -d ./terraform --external-checks-dir ./custom_checks
```
Custom Checkov Policies: YAML-Based
For simpler checks, Checkov supports a YAML-based policy format:
```yaml
# custom_checks/require_tags.yaml
metadata:
  id: "CKV_CUSTOM_2"
  name: "Ensure all resources have required tags"
  category: "GENERAL_SECURITY"
scope:
  provider: aws
definition:
  and:
    - cond_type: "attribute"
      resource_types:
        - "aws_instance"
        - "aws_s3_bucket"
        - "aws_rds_cluster"
        - "aws_ebs_volume"
      attribute: "tags.Environment"
      operator: "exists"
    - cond_type: "attribute"
      resource_types:
        - "aws_instance"
        - "aws_s3_bucket"
        - "aws_rds_cluster"
        - "aws_ebs_volume"
      attribute: "tags.Owner"
      operator: "exists"
    - cond_type: "attribute"
      resource_types:
        - "aws_instance"
        - "aws_s3_bucket"
        - "aws_rds_cluster"
        - "aws_ebs_volume"
      attribute: "tags.CostCenter"
      operator: "exists"
```
OPA: Organization-Specific Policies in Rego
Open Policy Agent (OPA) is a general-purpose policy engine. Unlike Checkov, which is purpose-built for IaC scanning, OPA can evaluate policies against any structured data — Terraform plans, Kubernetes admission requests, API authorization decisions, CI/CD pipeline metadata.
OPA policies are written in Rego, a declarative query language. The learning curve is steeper than Checkov's YAML format, but the expressiveness is significantly greater.
Rego Language Basics
Rego evaluates rules against input data. For Terraform scanning, the input is typically the JSON output of terraform plan:
```bash
# Generate Terraform plan as JSON
terraform plan -out=tfplan.binary
terraform show -json tfplan.binary > tfplan.json
```
A basic Rego policy:
```rego
# policy/s3.rego
package terraform.s3

import rego.v1

# Deny S3 buckets without encryption
deny contains msg if {
  resource := input.resource_changes[_]
  resource.type == "aws_s3_bucket"
  not has_encryption(resource)
  msg := sprintf(
    "S3 bucket '%s' must have server-side encryption enabled",
    [resource.address]
  )
}

has_encryption(resource) if {
  resource.change.after.server_side_encryption_configuration != null
}
```
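If Rego is unfamiliar, it can help to see the same check in ordinary code. This Python sketch evaluates the identical condition against a hand-written excerpt of plan JSON; only the fields the policy touches (`resource_changes[].change.after`) are included, and the bucket names are illustrative:

```python
import json

# Minimal excerpt of `terraform show -json` output, just enough to show
# the shape the Rego policy above reads.
plan = json.loads("""
{
  "resource_changes": [
    {
      "address": "aws_s3_bucket.data_export",
      "type": "aws_s3_bucket",
      "change": {"after": {"server_side_encryption_configuration": null}}
    },
    {
      "address": "aws_s3_bucket.logs",
      "type": "aws_s3_bucket",
      "change": {"after": {"server_side_encryption_configuration": {"rule": []}}}
    }
  ]
}
""")

# Same logic as the Rego deny rule, expressed imperatively.
violations = [
    f"S3 bucket '{rc['address']}' must have server-side encryption enabled"
    for rc in plan["resource_changes"]
    if rc["type"] == "aws_s3_bucket"
    and rc["change"]["after"].get("server_side_encryption_configuration") is None
]
print(violations)
```

Only `aws_s3_bucket.data_export` is flagged; the `logs` bucket has an encryption configuration, so the rule body fails for it and no message is produced.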
Conftest: Running OPA Against Terraform Plans
Conftest is a utility that wraps OPA for testing structured data files. It's the standard way to run OPA policies against Terraform plans in CI:
```bash
# Install conftest
brew install conftest  # or download a release binary

# Run policies against a Terraform plan via stdin
terraform show -json tfplan.binary | conftest test --policy ./policy -

# Run with a specific namespace
conftest test --policy ./policy --namespace terraform.s3 tfplan.json

# Output in structured format for CI
conftest test --policy ./policy --output json tfplan.json
```
OPA for Kubernetes: Gatekeeper
OPA Gatekeeper is a Kubernetes admission controller that evaluates OPA policies against every resource create/update request.
ConstraintTemplate defines the policy logic:
```yaml
# constraint-template-required-labels.yaml
apiVersion: templates.gatekeeper.sh/v1
kind: ConstraintTemplate
metadata:
  name: k8srequiredlabels
spec:
  crd:
    spec:
      names:
        kind: K8sRequiredLabels
      validation:
        openAPIV3Schema:
          type: object
          properties:
            labels:
              type: array
              items:
                type: string
  targets:
    - target: admission.k8s.gatekeeper.sh
      rego: |
        package k8srequiredlabels

        import rego.v1

        violation contains {"msg": msg, "details": {"missing_labels": missing}} if {
          provided := {label | input.review.object.metadata.labels[label]}
          required := {label | label := input.parameters.labels[_]}
          missing := required - provided
          count(missing) > 0
          msg := sprintf(
            "Resource %s/%s is missing required labels: %v",
            [input.review.object.kind, input.review.object.metadata.name, missing]
          )
        }
```
Constraint applies the template with specific parameters:
```yaml
# constraint-require-labels.yaml
apiVersion: constraints.gatekeeper.sh/v1beta1
kind: K8sRequiredLabels
metadata:
  name: all-must-have-owner
spec:
  match:
    kinds:
      - apiGroups: [""]
        kinds: ["Namespace", "Pod", "Service"]
      - apiGroups: ["apps"]
        kinds: ["Deployment", "StatefulSet"]
  parameters:
    labels:
      - "app.kubernetes.io/name"
      - "app.kubernetes.io/owner"
      - "app.kubernetes.io/environment"
```
Any Kubernetes resource that doesn't include the required labels will be rejected at admission time — it never runs in the cluster.
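The violation rule above is plain set arithmetic: required labels minus provided labels. The same computation in Python, with hypothetical Deployment labels:

```python
# Required labels come from the Constraint's parameters; the provided
# labels here stand in for a Deployment's metadata.labels.
required = {"app.kubernetes.io/name", "app.kubernetes.io/owner", "app.kubernetes.io/environment"}
provided = {"app.kubernetes.io/name": "patient-portal", "team": "platform"}

missing = required - provided.keys()
if missing:
    print(f"admission denied, missing labels: {sorted(missing)}")
```

Because `missing` is non-empty, Gatekeeper would reject this Deployment with a message listing the two absent labels.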
Combining Checkov and OPA
Checkov and OPA serve different purposes and work well together:
| Aspect | Checkov | OPA/Conftest |
|---|---|---|
| Strength | Broad coverage, compliance frameworks | Custom organizational policies |
| Rule format | Python or YAML (accessible) | Rego (powerful but steeper curve) |
| Best for | CIS benchmarks, known best practices | Business rules, naming conventions, cost controls |
| Runtime enforcement | CI only | CI + Kubernetes admission (Gatekeeper) |
| Terraform input | HCL files or plan JSON | Plan JSON (recommended; conftest can also parse raw HCL) |
A practical approach:
- Run Checkov for broad coverage (CIS benchmarks, compliance frameworks)
- Run OPA/conftest for organization-specific rules (tagging standards, naming conventions, cost constraints, team-specific restrictions)
- Use Gatekeeper for runtime enforcement in Kubernetes
CI Integration
GitHub Actions Pipeline
```yaml
# .github/workflows/iac-security.yml
name: IaC Security Scan

on:
  pull_request:
    paths:
      - 'terraform/**'
      - 'k8s/**'
      - 'Dockerfile'

jobs:
  checkov:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4

      - name: Checkov Terraform scan
        uses: bridgecrewio/checkov-action@master
        with:
          directory: ./terraform
          framework: terraform
          output_format: sarif
          output_file_path: checkov-terraform.sarif
          soft_fail: false

      - name: Checkov Kubernetes scan
        uses: bridgecrewio/checkov-action@master
        with:
          directory: ./k8s
          framework: kubernetes
          output_format: sarif
          output_file_path: checkov-k8s.sarif
          soft_fail: false

      - name: Checkov Dockerfile scan
        uses: bridgecrewio/checkov-action@master
        with:
          file: ./Dockerfile
          framework: dockerfile
          soft_fail: false

      - name: Upload SARIF results
        if: always()
        uses: github/codeql-action/upload-sarif@v3
        with:
          sarif_file: checkov-terraform.sarif

  opa-terraform:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4

      - name: Setup Terraform
        uses: hashicorp/setup-terraform@v3
        with:
          terraform_version: 1.6.0

      - name: Terraform init and plan
        working-directory: ./terraform
        run: |
          terraform init -backend=false
          terraform plan -out=tfplan.binary
          terraform show -json tfplan.binary > tfplan.json
        env:
          AWS_ACCESS_KEY_ID: ${{ secrets.AWS_ACCESS_KEY_ID }}
          AWS_SECRET_ACCESS_KEY: ${{ secrets.AWS_SECRET_ACCESS_KEY }}

      - name: Install conftest
        run: |
          wget -q https://github.com/open-policy-agent/conftest/releases/download/v0.46.0/conftest_0.46.0_Linux_x86_64.tar.gz
          tar xzf conftest_0.46.0_Linux_x86_64.tar.gz
          sudo mv conftest /usr/local/bin/

      - name: Run OPA policies
        run: |
          conftest test ./terraform/tfplan.json \
            --policy ./policy/terraform \
            --output json \
            --all-namespaces
```
Handling Violations
When a scan finds violations, the pipeline should:
- Fail the PR check — prevent merge until the violation is resolved.
- Provide actionable output — which resource, which file, which line, what's wrong, how to fix it.
- Support exceptions — allow documented, approved exceptions for legitimate cases.
Checkov supports inline suppressions:
```hcl
resource "aws_s3_bucket" "static_website" {
  # This bucket is intentionally public for static website hosting
  #checkov:skip=CKV_AWS_18:Access logging not required for public static content
  #checkov:skip=CKV_AWS_145:Public content does not require KMS encryption
  bucket = "example-static-site"
}
```

Note that the skip comments go inside the resource block they apply to.
For OPA, exceptions are handled through policy data:
```rego
# policy/exceptions.rego
package terraform.exceptions

import rego.v1

# Resources exempt from specific policies
exception_list := {
  "aws_s3_bucket.static_website": ["encryption", "logging"],
  "aws_instance.bastion": ["private_subnet"]
}

is_exempt(resource_address, policy) if {
  exemptions := exception_list[resource_address]
  policy in exemptions
}
```
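A deny rule can then call `is_exempt` before reporting. The same filtering pattern, sketched as a Python CI wrapper (the violation tuples are illustrative):

```python
# Mirrors exception_list from policy/exceptions.rego: a violation is
# reported only when the (resource, policy) pair is not exempt.
exception_list = {
    "aws_s3_bucket.static_website": ["encryption", "logging"],
    "aws_instance.bastion": ["private_subnet"],
}

def is_exempt(resource_address: str, policy: str) -> bool:
    return policy in exception_list.get(resource_address, [])

violations = [
    ("aws_s3_bucket.static_website", "encryption"),  # covered by an approved exception
    ("aws_s3_bucket.data_export", "encryption"),     # real finding
]
effective = [v for v in violations if not is_exempt(*v)]
print(effective)
```

Only the `data_export` finding survives the filter; the exempted bucket is silently skipped, which is exactly what the Rego helper achieves inside a deny rule.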
Policy-as-Code Workflow
Policies are code and should follow the same development practices:
Version Control
Store policies in a dedicated repository or directory with clear ownership:
```text
policy/
├── terraform/
│   ├── aws/
│   │   ├── s3.rego
│   │   ├── rds.rego
│   │   ├── ec2.rego
│   │   └── iam.rego
│   ├── general/
│   │   ├── tagging.rego
│   │   └── naming.rego
│   └── exceptions.rego
├── kubernetes/
│   ├── pod-security.rego
│   └── network-policy.rego
├── checkov/
│   ├── custom_checks/
│   │   ├── ebs_encryption.py
│   │   └── require_tags.yaml
│   └── .checkov.yaml
└── tests/
    ├── terraform/
    │   ├── s3_test.rego
    │   └── tagging_test.rego
    └── kubernetes/
        └── pod_security_test.rego
```
Testing Policies
OPA includes a testing framework. Tests are Rego rules whose names are prefixed with test_, conventionally kept in a companion package alongside the policy they exercise:
```rego
# policy/tests/terraform/s3_test.rego
package terraform.s3_test

import rego.v1

import data.terraform.s3

test_deny_unencrypted_s3 if {
  result := s3.deny with input as {
    "resource_changes": [{
      "type": "aws_s3_bucket",
      "address": "aws_s3_bucket.test",
      "change": {
        "after": {
          "server_side_encryption_configuration": null
        }
      }
    }]
  }
  count(result) > 0
}

test_allow_encrypted_s3 if {
  result := s3.deny with input as {
    "resource_changes": [{
      "type": "aws_s3_bucket",
      "address": "aws_s3_bucket.test",
      "change": {
        "after": {
          "server_side_encryption_configuration": {
            "rule": [{
              "apply_server_side_encryption_by_default": [{
                "sse_algorithm": "aws:kms"
              }]
            }]
          }
        }
      }
    }]
  }
  count(result) == 0
}
```
Run tests:
```bash
opa test ./policy -v
```
Policy Exceptions Workflow
Maintain a structured exception process:
```yaml
# policy/exceptions/approved.yaml
exceptions:
  - resource: "aws_s3_bucket.public_docs"
    policy: "s3_encryption"
    reason: "Public documentation site. No sensitive data."
    approved_by: "security-team"
    approved_date: "2025-09-15"
    expires: "2026-03-15"
    ticket: "SEC-142"

  - resource: "aws_security_group.legacy_app"
    policy: "no_wide_ingress"
    reason: "Legacy application requires port 443 from all IPs. Migration planned for Q1 2026."
    approved_by: "security-team"
    approved_date: "2025-08-01"
    expires: "2026-03-31"
    ticket: "SEC-128"
```
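The `expires` field only matters if something enforces it. A small script run in CI can fail the build when an exception lapses; this sketch hard-codes the two entries above rather than parsing the YAML (loading the file itself would need PyYAML):

```python
from datetime import date

# Entries mirrored from policy/exceptions/approved.yaml (dates as in the example).
exceptions = [
    {"resource": "aws_s3_bucket.public_docs", "expires": "2026-03-15"},
    {"resource": "aws_security_group.legacy_app", "expires": "2026-03-31"},
]

def expired(entries, today):
    """Return exceptions whose expiry date has passed; CI should fail on these."""
    return [e for e in entries if date.fromisoformat(e["expires"]) < today]

# On 2026-04-01 both exceptions have lapsed and must be re-approved or removed.
print(expired(exceptions, date(2026, 4, 1)))
```

Running this on a schedule keeps the exception list honest: an expired entry either gets re-reviewed or the underlying violation resurfaces.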
Drift Prevention
IaC scanning catches misconfigurations at authoring time, but infrastructure can drift from its declared state through manual changes (console clicks, ad-hoc CLI commands).
Reconciliation scans compare the running infrastructure against the Terraform state:
```bash
# Detect drift with Terraform
terraform plan -detailed-exitcode

# Exit code 0 = no changes
# Exit code 1 = error
# Exit code 2 = changes detected (drift)
```
Schedule this in CI as a daily or weekly cron job:
```yaml
# .github/workflows/drift-detection.yml
name: Infrastructure Drift Detection

on:
  schedule:
    - cron: '0 6 * * 1' # Weekly on Monday at 6 AM UTC

jobs:
  drift-check:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: hashicorp/setup-terraform@v3

      - name: Terraform init
        working-directory: ./terraform
        run: terraform init

      - name: Detect drift
        id: drift
        working-directory: ./terraform
        run: |
          terraform plan -detailed-exitcode -out=drift.plan 2>&1 | tee drift-output.txt
          # $? would report tee's status; PIPESTATUS[0] holds terraform's exit code
          echo "exit_code=${PIPESTATUS[0]}" >> "$GITHUB_OUTPUT"
        continue-on-error: true

      - name: Alert on drift
        if: steps.drift.outputs.exit_code == '2'
        run: |
          curl -X POST "${{ secrets.SLACK_WEBHOOK }}" \
            -H 'Content-Type: application/json' \
            -d "{
              \"text\": \"⚠️ Infrastructure drift detected. $(grep 'Plan:' drift-output.txt)\"
            }"
```
Case Study: Healthcare Startup — HIPAA-Compliant AWS Infrastructure
Background
A healthcare startup building a patient portal needed HIPAA compliance for their AWS infrastructure before processing Protected Health Information (PHI). Their infrastructure comprised 3 Terraform modules (networking, compute, data), 8 Kubernetes services, and 12 Dockerfiles. Stripe Systems implemented a policy-as-code framework using Checkov and OPA.
Initial Scan: 47 Violations
The first Checkov scan, run with the built-in checks that map to HIPAA controls plus the custom checks described below, flagged 47 violations:

```bash
checkov -d ./terraform --framework terraform --compact --external-checks-dir ./custom_checks
```
Breakdown:
| Category | Count | Examples |
|---|---|---|
| Encryption | 14 | Unencrypted EBS volumes, RDS without encryption, S3 without KMS |
| Network security | 11 | Security groups with 0.0.0.0/0 ingress, public subnets for data services |
| Logging/Monitoring | 9 | No CloudTrail, no VPC flow logs, no S3 access logging |
| Access control | 8 | Overly permissive IAM policies, no MFA enforcement |
| Backup/Recovery | 5 | No RDS automated backups, no cross-region backup replication |
Custom Checkov Policy: Unencrypted EBS Volumes
The built-in Checkov check CKV_AWS_3 verifies that EBS volumes have encryption enabled but doesn't enforce customer-managed KMS keys (HIPAA requires organizations to manage their own encryption keys for PHI). We wrote a custom check:
```python
# custom_checks/hipaa_ebs_cmk.py
from checkov.common.models.enums import CheckResult, CheckCategories
from checkov.terraform.checks.resource.base_resource_check import BaseResourceCheck


class HIPAAEBSCustomerManagedKey(BaseResourceCheck):
    def __init__(self):
        name = "Ensure EBS volumes use customer-managed KMS keys for HIPAA compliance"
        id = "CKV_HIPAA_CUSTOM_1"
        supported_resources = ["aws_ebs_volume", "aws_launch_template"]
        categories = [CheckCategories.ENCRYPTION]
        super().__init__(
            name=name,
            id=id,
            categories=categories,
            supported_resources=supported_resources,
        )

    def scan_resource_conf(self, conf):
        if self.entity_type == "aws_ebs_volume":
            encrypted = conf.get("encrypted", [False])
            if isinstance(encrypted, list):
                encrypted = encrypted[0]
            if not encrypted:
                return CheckResult.FAILED

            kms_key = conf.get("kms_key_id", [None])
            if isinstance(kms_key, list):
                kms_key = kms_key[0]
            if not kms_key or kms_key.startswith("alias/aws/"):
                return CheckResult.FAILED
            return CheckResult.PASSED

        if self.entity_type == "aws_launch_template":
            block_devices = conf.get("block_device_mappings", [])
            if not block_devices:
                return CheckResult.FAILED
            for bd in block_devices:
                ebs = bd.get("ebs", [{}])
                if isinstance(ebs, list):
                    ebs = ebs[0] if ebs else {}
                encrypted = ebs.get("encrypted", False)
                kms_key = ebs.get("kms_key_id", None)
                if not encrypted or not kms_key:
                    return CheckResult.FAILED
            return CheckResult.PASSED


check = HIPAAEBSCustomerManagedKey()
```
Custom OPA Policy: Tagging Standards
HIPAA requires tracking which systems handle PHI. We enforced this through mandatory tagging:
```rego
# policy/terraform/hipaa_tagging.rego
package terraform.hipaa.tagging

import rego.v1

required_tags := ["Environment", "DataClassification", "Owner", "HIPAAScope"]

valid_data_classifications := ["PHI", "PII", "Confidential", "Internal", "Public"]

valid_hipaa_scopes := ["in-scope", "out-of-scope"]

hipaa_resource_types := [
  "aws_instance", "aws_rds_cluster", "aws_rds_cluster_instance",
  "aws_s3_bucket", "aws_ebs_volume", "aws_elasticache_cluster",
  "aws_dynamodb_table", "aws_sqs_queue", "aws_sns_topic",
  "aws_lambda_function", "aws_ecs_service", "aws_eks_cluster"
]

deny contains msg if {
  resource := input.resource_changes[_]
  resource.type in hipaa_resource_types
  resource.change.actions[_] in ["create", "update"]
  tags := object.get(resource.change.after, "tags", {})
  required := required_tags[_]
  not tags[required]
  msg := sprintf(
    "Resource '%s' (type: %s) is missing required tag '%s'. All HIPAA-scoped resources must have tags: %v",
    [resource.address, resource.type, required, required_tags]
  )
}

deny contains msg if {
  resource := input.resource_changes[_]
  resource.type in hipaa_resource_types
  resource.change.actions[_] in ["create", "update"]
  tags := object.get(resource.change.after, "tags", {})
  classification := tags.DataClassification
  not classification in valid_data_classifications
  msg := sprintf(
    "Resource '%s' has invalid DataClassification tag '%s'. Valid values: %v",
    [resource.address, classification, valid_data_classifications]
  )
}

deny contains msg if {
  resource := input.resource_changes[_]
  resource.type in hipaa_resource_types
  resource.change.actions[_] in ["create", "update"]
  tags := object.get(resource.change.after, "tags", {})
  scope := tags.HIPAAScope
  not scope in valid_hipaa_scopes
  msg := sprintf(
    "Resource '%s' has invalid HIPAAScope tag '%s'. Valid values: %v",
    [resource.address, scope, valid_hipaa_scopes]
  )
}
```
OPA Policy: No Public S3 Buckets
```rego
# policy/terraform/hipaa_s3.rego
package terraform.hipaa.s3

import rego.v1

deny contains msg if {
  resource := input.resource_changes[_]
  resource.type == "aws_s3_bucket_public_access_block"
  resource.change.actions[_] in ["create", "update"]
  config := resource.change.after
  not config.block_public_acls
  msg := sprintf("S3 public access block '%s' must set block_public_acls to true", [resource.address])
}

deny contains msg if {
  resource := input.resource_changes[_]
  resource.type == "aws_s3_bucket_public_access_block"
  resource.change.actions[_] in ["create", "update"]
  config := resource.change.after
  not config.block_public_policy
  msg := sprintf("S3 public access block '%s' must set block_public_policy to true", [resource.address])
}

deny contains msg if {
  resource := input.resource_changes[_]
  resource.type == "aws_s3_bucket_public_access_block"
  resource.change.actions[_] in ["create", "update"]
  config := resource.change.after
  not config.ignore_public_acls
  msg := sprintf("S3 public access block '%s' must set ignore_public_acls to true", [resource.address])
}

deny contains msg if {
  resource := input.resource_changes[_]
  resource.type == "aws_s3_bucket_public_access_block"
  resource.change.actions[_] in ["create", "update"]
  config := resource.change.after
  not config.restrict_public_buckets
  msg := sprintf("S3 public access block '%s' must set restrict_public_buckets to true", [resource.address])
}
```
OPA Policy: No Wide Security Group Ingress
```rego
# policy/terraform/hipaa_network.rego
package terraform.hipaa.network

import rego.v1

deny contains msg if {
  resource := input.resource_changes[_]
  resource.type == "aws_security_group_rule"
  resource.change.actions[_] in ["create", "update"]
  rule := resource.change.after
  rule.type == "ingress"
  cidr_blocks := object.get(rule, "cidr_blocks", [])
  cidr := cidr_blocks[_]
  cidr == "0.0.0.0/0"
  msg := sprintf(
    "Security group rule '%s' allows ingress from 0.0.0.0/0. HIPAA requires restricted network access to systems handling PHI.",
    [resource.address]
  )
}

deny contains msg if {
  resource := input.resource_changes[_]
  resource.type == "aws_security_group"
  resource.change.actions[_] in ["create", "update"]
  ingress := resource.change.after.ingress[_]
  cidr := ingress.cidr_blocks[_]
  cidr == "0.0.0.0/0"
  msg := sprintf(
    "Security group '%s' has inline ingress rule allowing 0.0.0.0/0. Use specific CIDR ranges.",
    [resource.address]
  )
}
```
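Both rules match the literal string 0.0.0.0/0, which is also what Checkov's built-in checks look for. If you want to catch ranges that are too broad without being fully open, the check generalizes to a prefix-length threshold. A hypothetical Python version using the stdlib ipaddress module (the /16 cutoff is an arbitrary example, not from the case study):

```python
import ipaddress

# Flag any ingress CIDR broader than a chosen prefix length, not just 0.0.0.0/0.
MAX_PREFIX = 16  # assumption for illustration: anything wider than a /16 is "too open"

def too_wide(cidr: str) -> bool:
    return ipaddress.ip_network(cidr).prefixlen < MAX_PREFIX

for cidr in ["0.0.0.0/0", "10.0.0.0/8", "10.1.2.0/24"]:
    print(cidr, "wide" if too_wide(cidr) else "ok")
```

The same idea can be expressed in Rego with `net.cidr_contains` or by parsing the prefix length from the CIDR string.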
CI Pipeline Integration
The pipeline ran both Checkov and conftest on every PR that modified Terraform:
```yaml
# .github/workflows/iac-hipaa.yml
name: HIPAA IaC Compliance

on:
  pull_request:
    paths:
      - 'terraform/**'
      - 'policy/**'

jobs:
  checkov:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4

      - name: Checkov HIPAA scan
        uses: bridgecrewio/checkov-action@master
        with:
          directory: ./terraform
          framework: terraform
          external_checks_dirs: ./custom_checks
          output_format: cli,sarif
          output_file_path: console,checkov.sarif

      - uses: github/codeql-action/upload-sarif@v3
        if: always()
        with:
          sarif_file: checkov.sarif

  opa:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: hashicorp/setup-terraform@v3

      - name: Terraform plan
        working-directory: ./terraform
        run: |
          terraform init -backend=false
          terraform plan -out=tfplan.binary
          terraform show -json tfplan.binary > tfplan.json

      - name: Install conftest
        run: |
          wget -q https://github.com/open-policy-agent/conftest/releases/download/v0.46.0/conftest_0.46.0_Linux_x86_64.tar.gz
          tar xzf conftest_0.46.0_Linux_x86_64.tar.gz
          sudo mv conftest /usr/local/bin/

      - name: Run HIPAA OPA policies
        run: |
          conftest test ./terraform/tfplan.json \
            --policy ./policy/terraform \
            --all-namespaces \
            --output table

  policy-tests:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4

      - name: Install OPA
        run: |
          curl -L -o opa https://openpolicyagent.org/downloads/v0.60.0/opa_linux_amd64_static
          chmod +x opa && sudo mv opa /usr/local/bin/

      - name: Run policy tests
        run: opa test ./policy -v
```
Results: 47 → 0 Violations
Remediation took 3 weeks:
| Week | Focus | Violations Remaining |
|---|---|---|
| 0 | Initial scan | 47 |
| 1 | Encryption (EBS, RDS, S3 KMS), network security groups | 22 |
| 2 | Logging (CloudTrail, VPC flow logs, S3 access logs), IAM policies | 8 |
| 3 | Backups, tagging, remaining items | 0 |
After remediation, the clean scan output:
```bash
checkov -d ./terraform --framework terraform --external-checks-dir ./custom_checks
```

```text
Passed checks: 127, Failed checks: 0, Skipped checks: 2
```

```bash
conftest test ./terraform/tfplan.json --policy ./policy/terraform --all-namespaces
```

```text
15 tests, 15 passed, 0 warnings, 0 failures
```
The 2 skipped Checkov checks were documented exceptions (a public-facing ALB required 0.0.0.0/0 on port 443, approved by the security team with compensating controls documented).
Ongoing Enforcement
With the policies integrated into CI, any new Terraform change that introduces a HIPAA violation is blocked before it can be applied. In the 4 months following the initial remediation, the pipeline caught and prevented 31 violations — misconfigurations that would have reached production without the automated checks.
The weekly drift detection scan identified 3 instances of manual console changes (a developer modified a security group directly) which were reverted and addressed through the Terraform workflow.
Conclusion
IaC scanning is the highest-impact security control you can implement for cloud infrastructure. The time investment is modest — Checkov works out of the box with built-in compliance frameworks, and OPA policies can be built incrementally starting with the highest-risk areas.
Start with Checkov's built-in checks against your compliance framework (CIS, SOC 2, HIPAA, PCI-DSS). Add custom OPA policies for organization-specific rules that Checkov doesn't cover. Run both in CI as required checks on infrastructure PRs. Add drift detection as a scheduled job.
The combination provides broad coverage (Checkov) plus deep customization (OPA) plus runtime enforcement (Gatekeeper), creating a defense-in-depth approach to infrastructure security.