SOC 2 Type II audits examine whether your security controls work consistently over a defined observation period — typically 6 to 12 months. Unlike Type I, which captures a point-in-time snapshot, Type II requires evidence that controls operated effectively throughout the audit window. For engineering teams, this means your practices around access control, change management, monitoring, and incident response need to be demonstrably consistent, not just documented.
This post covers the specific Trust Service Criteria relevant to engineering, the artifacts auditors request, the common gaps that delay audits, and a practical approach to building compliance into your development workflow.
Trust Service Criteria: What Applies to Engineering
SOC 2 is organized around five Trust Service Criteria (TSC):
- Security (Common Criteria) — required for every SOC 2 audit. Covers access control, change management, risk assessment, monitoring, and incident response.
- Availability — uptime commitments, redundancy, disaster recovery, capacity planning.
- Processing Integrity — data processing is complete, accurate, timely, and authorized.
- Confidentiality — protection of data designated as confidential (distinct from personal data).
- Privacy — collection, use, retention, and disposal of personal information per stated privacy notices.
Most SaaS companies include Security and Availability. If you handle sensitive customer data, add Confidentiality. If you process personal data, add Privacy.
Engineering teams are directly responsible for controls under Security and Availability. Processing Integrity and Privacy often require collaboration with product and legal teams, but the technical implementation falls on engineering.
Type I vs Type II: Why the Observation Period Matters
A Type I report says: "On October 15, 2025, these controls existed and were suitably designed." It's a snapshot.
A Type II report says: "From January 1 to December 31, 2025, these controls operated effectively." It requires evidence across the entire observation period.
The distinction matters because customers and enterprise procurement teams almost universally require Type II. A Type I report demonstrates intent; a Type II report demonstrates execution. If your engineering team enforced branch protection on July 1 but your observation period started January 1, you have a six-month gap that auditors will flag.
Minimum observation periods vary by auditor, but 6 months is common for a first Type II report, with 12 months as the standard for subsequent reports.
What Auditors Actually Examine in Your Engineering Org
Access Control
Auditors verify that access to production systems, source code repositories, CI/CD pipelines, and cloud provider consoles follows the principle of least privilege.
Specific evidence they request:
- RBAC configuration: who has access to what, and is it role-based rather than individual grants?
- Least privilege enforcement: do developers have production database access? Can interns push to main?
- Periodic access reviews: quarterly reviews of who has access, with documented removal of access for departed employees or role changes.
- MFA enforcement: is multi-factor authentication required for all accounts with access to production infrastructure, code repositories, and cloud consoles?
- SSH key and API token management: are personal access tokens rotated? Are SSH keys tied to individuals?
A common audit request:
"Provide a list of all users with write access to your production Kubernetes cluster, the date access was granted, and the business justification for each."
If you can't produce this from your IAM configuration or access management tool, you have a gap.
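If producing that listing is painful, a lightweight pre-audit check against an export of your access data can surface gaps before the auditor does. A minimal sketch in Python — the field names and the `prod-admins` group are illustrative, not a real IAM schema:

```python
# Hypothetical access export; replace with your IAM/RBAC tooling's real output.
users = [
    {"name": "alice", "groups": ["prod-admins"], "mfa": True,
     "granted": "2025-01-10", "justification": "on-call SRE"},
    {"name": "bob", "groups": ["prod-admins"], "mfa": False,
     "granted": "2025-03-02", "justification": None},
]

def audit_prod_access(users, prod_group="prod-admins"):
    """Return users with production access that would fail an audit check:
    missing MFA or missing a documented business justification."""
    findings = []
    for u in users:
        if prod_group not in u["groups"]:
            continue
        if not u["mfa"]:
            findings.append((u["name"], "no MFA"))
        if not u["justification"]:
            findings.append((u["name"], "no documented justification"))
    return findings

print(audit_prod_access(users))
# → [('bob', 'no MFA'), ('bob', 'no documented justification')]
```

Run a check like this on a schedule and the quarterly access review becomes a confirmation step rather than an investigation.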
Change Management
Every change to production must be traceable to an authorized request, reviewed by someone other than the author, and deployable through an automated pipeline.
Evidence includes:
- Pull request history: all changes merged to production branches have at least one approval from a reviewer.
- Branch protection rules: direct pushes to main/production branches are blocked.
- Deployment audit trail: who initiated each deployment, what commit was deployed, when, and to which environment.
- Rollback capability: can you revert a deployment, and is there evidence of tested rollback procedures?
Here is a GitHub branch protection configuration that satisfies most change management controls:
```json
{
  "required_pull_request_reviews": {
    "required_approving_review_count": 1,
    "dismiss_stale_reviews": true,
    "require_code_owner_reviews": true
  },
  "required_status_checks": {
    "strict": true,
    "contexts": ["ci/build", "ci/test", "security/semgrep", "security/trivy"]
  },
  "enforce_admins": true,
  "required_linear_history": true,
  "allow_force_pushes": false,
  "allow_deletions": false
}
```
The `enforce_admins: true` setting is frequently missed. Without it, repository administrators can bypass all protections — and auditors will ask about this.
Monitoring and Incident Response
Auditors verify that you can detect anomalies, that logs are centralized and retained, and that you have a documented incident response process that has been tested.
Required artifacts:
- Centralized logging: application logs, infrastructure logs, and access logs aggregated in a single system (ELK, Datadog, CloudWatch, etc.).
- Log retention policy: typically 90 days minimum for active queries, 1 year for archival. Some compliance frameworks require longer.
- Alerting configuration: alerts for failed deployments, unauthorized access attempts, resource exhaustion, error rate spikes.
- Incident response runbooks: documented procedures for common incident types (service outage, data breach, unauthorized access).
- Incident postmortems: evidence that incidents were investigated and remediated.
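A runbook is easier to follow consistently when severity classification is mechanical rather than judgment-in-the-moment. A minimal sketch — the incident types, severity levels, and SLA minutes here are assumptions for illustration, not SOC 2 requirements:

```python
# Illustrative severity mapping; tune levels and SLAs to your own IR policy.
SEVERITY_RULES = {
    "data_breach": "SEV1",
    "unauthorized_access": "SEV1",
    "service_outage": "SEV2",
    "error_rate_spike": "SEV3",
}

# Acknowledgement SLA in minutes per severity (assumed values).
RESPONSE_SLA_MINUTES = {"SEV1": 15, "SEV2": 60, "SEV3": 240}

def classify(incident_type: str) -> tuple:
    """Map an incident type to a severity level and its acknowledgement SLA.
    Unknown types default to the lowest severity rather than being dropped."""
    sev = SEVERITY_RULES.get(incident_type, "SEV3")
    return sev, RESPONSE_SLA_MINUTES[sev]

print(classify("data_breach"))  # → ('SEV1', 15)
```

Encoding the mapping this way also gives auditors a single artifact showing that classification is defined, versioned, and consistently applied.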
Vulnerability Management
Auditors look for a systematic approach to identifying and remediating vulnerabilities within defined SLAs.
Controls include:
- Dependency scanning: automated scanning of third-party libraries (Snyk, Dependabot, npm audit).
- Container image scanning: scanning base images and application images for known CVEs (Trivy, Grype).
- Patching SLAs: critical vulnerabilities remediated within 7 days, high within 30 days, medium within 90 days (adjust per your risk tolerance).
- Penetration testing: annual third-party penetration test with documented remediation of findings.
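The patching SLAs above are straightforward to encode, which lets you check scan findings against the deadlines automatically instead of tracking them by hand. A minimal sketch using the windows from the list:

```python
from datetime import date, timedelta

# SLA windows from the text: critical 7 days, high 30, medium 90.
PATCHING_SLA_DAYS = {"critical": 7, "high": 30, "medium": 90}

def remediation_due(severity: str, discovered: date) -> date:
    """Return the date by which a finding must be remediated under the SLA."""
    return discovered + timedelta(days=PATCHING_SLA_DAYS[severity])

def is_overdue(severity: str, discovered: date, today: date) -> bool:
    """True if a finding has blown its remediation SLA."""
    return today > remediation_due(severity, discovered)

print(remediation_due("critical", date(2025, 6, 1)))  # → 2025-06-08
print(is_overdue("medium", date(2025, 1, 1), date(2025, 4, 15)))  # → True
```

Wiring a check like this into the scanner's output is exactly the kind of evidence auditors want for "vulnerabilities remediated within defined SLAs."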
Data Protection
- Encryption at rest: all datastores (databases, object storage, EBS volumes) encrypted. Auditors will ask for the specific encryption mechanism (AES-256, KMS key ARN).
- Encryption in transit: TLS 1.2+ for all external and internal communications. mTLS between services is strong evidence.
- Key management: encryption keys managed through a dedicated KMS, not hardcoded or stored in environment variables.
- Backup testing: backups exist and have been tested for restoration within your defined Recovery Time Objective (RTO).
Engineering Artifacts That Satisfy Controls
SOC 2 auditors work with evidence. The following artifacts map directly to controls:
| Control Area | Engineering Artifact |
|---|---|
| Access Control | IAM policies, RBAC configs, access review spreadsheets, MFA enforcement settings |
| Change Management | PR history with approvals, branch protection configs, deployment logs |
| Monitoring | Alert configurations, log retention policies, incident postmortems |
| Vulnerability Management | Scan reports (Snyk/Trivy), patching tickets, penetration test reports |
| Data Protection | KMS key configs, TLS certificates, backup restoration test records |
| Asset Management | Terraform state files, infrastructure inventory, service catalog |
Terraform state files are particularly useful — they provide a point-in-time record of your infrastructure configuration that auditors can cross-reference against your stated controls.
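One way to turn a state file into an auditor-friendly inventory is to count managed resources by type. A minimal sketch against Terraform's v4 state layout (a top-level `resources` list with `mode` and `type` fields) — validate against your own state before relying on it:

```python
import json
from collections import Counter

# Toy state document shaped like Terraform's v4 schema (assumption: your
# real state has the same top-level "resources"/"mode"/"type" structure).
state_json = """
{
  "version": 4,
  "resources": [
    {"mode": "managed", "type": "aws_s3_bucket", "name": "exports", "instances": [{}]},
    {"mode": "managed", "type": "aws_db_instance", "name": "primary", "instances": [{}]},
    {"mode": "data", "type": "aws_ami", "name": "base", "instances": [{}]}
  ]
}
"""

def resource_inventory(state: dict) -> Counter:
    """Count managed resources by type, skipping data sources: a quick
    infrastructure inventory to cross-reference against your asset register."""
    return Counter(
        r["type"] for r in state.get("resources", []) if r.get("mode") == "managed"
    )

inventory = resource_inventory(json.loads(state_json))
print(dict(inventory))  # → {'aws_s3_bucket': 1, 'aws_db_instance': 1}
```

The same traversal extends naturally to spot-checks, such as flagging any `aws_s3_bucket` without an accompanying encryption configuration.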
Common Engineering Gaps That Delay Audits
These are the gaps we see most frequently when engineering teams begin SOC 2 preparation:
1. Shared credentials. A single AWS root account password shared among three founders. Shared service account keys for CI/CD. A common database password in a .env file.
2. No MFA on CI/CD systems. GitHub accounts without MFA, Jenkins instances accessible with username/password, ArgoCD without SSO integration.
3. Manual deployments. SSH into production servers to run deployment scripts. kubectl apply from a developer laptop. Manual database migrations.
4. No logging retention policy. Logs exist but are rotated after 7 days. No centralized log aggregation — logs live on individual EC2 instances.
5. No branch protection. Developers push directly to main. No required reviews. Force pushes allowed.
6. No vulnerability scanning. Dependencies haven't been audited. Container images use latest tags from Docker Hub without scanning.
7. No documented incident response. The team handles incidents ad hoc. No runbooks, no postmortem process, no severity classification.
8. No access reviews. Former contractors still have AWS access. An intern still has write access to the production database.
9. Encryption gaps. An RDS instance running without encryption because it was created before the team established standards. An S3 bucket without default encryption.
10. No backup testing. Automated backups exist but have never been restored to verify integrity.
Building SOC 2 Compliance Into Your CI/CD Pipeline
The most effective approach treats compliance controls as pipeline stages rather than quarterly checklists.
```yaml
# .github/workflows/soc2-compliant-pipeline.yml
name: SOC 2 Compliant CI/CD

on:
  pull_request:
    branches: [main]
  push:
    branches: [main]

jobs:
  code-review-gate:
    runs-on: ubuntu-latest
    steps:
      - name: Verify PR approval
        if: github.event_name == 'push'
        env:
          GH_TOKEN: ${{ secrets.GITHUB_TOKEN }}
        run: |
          # -R is required because gh runs here without a checked-out repo.
          PR_NUMBER=$(gh pr list -R "${{ github.repository }}" --state merged \
            --json number,mergeCommit \
            --jq ".[] | select(.mergeCommit.oid == \"${{ github.sha }}\") | .number")
          APPROVALS=$(gh pr view "$PR_NUMBER" -R "${{ github.repository }}" --json reviews \
            --jq '[.reviews[] | select(.state == "APPROVED")] | length')
          if [ "$APPROVALS" -lt 1 ]; then
            echo "ERROR: No approved reviews found for PR #$PR_NUMBER"
            exit 1
          fi

  security-scan:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Secret scanning
        uses: gitleaks/gitleaks-action@v2
        env:
          GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
      - name: SAST scan
        uses: returntocorp/semgrep-action@v1
        with:
          config: >-
            p/owasp-top-ten
            p/secrets
      - name: Dependency scan
        env:
          SNYK_TOKEN: ${{ secrets.SNYK_TOKEN }}
        run: |
          npm audit --audit-level=high
          npx snyk test --severity-threshold=high

  container-scan:
    runs-on: ubuntu-latest
    needs: [security-scan]
    steps:
      - uses: actions/checkout@v4
      - name: Build image
        run: docker build -t app:${{ github.sha }} .
      - name: Scan with Trivy
        uses: aquasecurity/trivy-action@master
        with:
          image-ref: app:${{ github.sha }}
          exit-code: 1
          severity: CRITICAL,HIGH

  deploy:
    runs-on: ubuntu-latest
    needs: [code-review-gate, security-scan, container-scan]
    if: github.ref == 'refs/heads/main'
    steps:
      - name: Deploy with audit trail
        run: |
          # Emit the audit record as valid JSON (echoing "key: value" lines
          # into a .json file does not produce a parseable document).
          # Note: github.event.pull_request.number is empty on push events;
          # recover the PR number via `gh pr list` if you need it in the record.
          jq -n \
            --arg deployer "${{ github.actor }}" \
            --arg commit "${{ github.sha }}" \
            --arg timestamp "$(date -u +%Y-%m-%dT%H:%M:%SZ)" \
            '{deployer: $deployer, commit: $commit, timestamp: $timestamp}' \
            > deployment-log.json
          # Deploy command here
      - name: Upload deployment record
        uses: actions/upload-artifact@v4
        with:
          name: deployment-audit-${{ github.sha }}
          path: deployment-log.json
          retention-days: 365
```
This pipeline produces evidence for four control areas: change management (PR approval verification), vulnerability management (secret, SAST, dependency, and container scanning), deployment audit trail (logged deployer, commit, and timestamp), and artifact retention (365-day retention on deployment records).
Compliance Automation Platforms: Vanta, Drata, Secureframe
These platforms connect to your infrastructure (AWS, GitHub, GCP, Okta, etc.) and continuously monitor controls.
What they automate well:
- Access review evidence collection (pulling IAM users, roles, MFA status)
- Infrastructure configuration monitoring (encryption, network security groups)
- Employee onboarding/offboarding tracking via HR integrations
- Policy document management and employee acknowledgment tracking
- Continuous control monitoring with alerts for drift
What they cannot automate:
- Writing your actual security policies (they provide templates, but auditors expect customization)
- Fixing the control gaps they identify
- Application-level security controls (business logic authorization, data handling)
- Custom CI/CD pipeline compliance checks
- Incident response execution
- Penetration testing
Expect to pay $15,000–$40,000/year for the platform and $30,000–$80,000 for the audit firm, depending on scope and company size.
Cost and Timeline: Realistic Expectations
For a 20–50 person engineering team at a SaaS company:
| Phase | Duration | Key Activities |
|---|---|---|
| Readiness assessment | 2–4 weeks | Gap analysis, control mapping, remediation plan |
| Remediation | 2–4 months | Implementing controls, configuring tools, writing policies |
| Observation period | 6–12 months | Controls operating, evidence accumulating |
| Audit fieldwork | 4–6 weeks | Auditor evidence requests, walkthroughs, testing |
| Report issuance | 2–4 weeks | Draft review, management response, final report |
Total timeline from zero to Type II report: 10–18 months for a first-time audit.
Common cost breakdown:
- Compliance platform (Vanta/Drata): $20,000–$35,000/year
- Audit firm: $30,000–$80,000
- Penetration testing: $10,000–$25,000
- Engineering time for remediation: 1–3 FTEs for 2–4 months
- Ongoing maintenance: 0.25–0.5 FTE
The engineering time is the largest cost and the one most frequently underestimated.
Case Study: Series B SaaS Company — SOC 2 Type II in 6 Months
Background
A Series B SaaS company with 35 engineers needed a SOC 2 Type II report to close three enterprise deals. Their target was completing the observation period and audit within 6 months. The Stripe Systems engineering team performed the readiness assessment and led the remediation effort.
Gap Analysis: 23 Control Gaps Identified
The readiness assessment reviewed infrastructure, CI/CD pipelines, access management, monitoring, and data protection. We identified 23 control gaps. The top 10 — the ones that required the most engineering effort — were:
| # | Gap | Severity | Control Area |
|---|---|---|---|
| 1 | AWS root credentials shared among 3 co-founders | Critical | Access Control |
| 2 | No branch protection on main branch in 4 repositories | Critical | Change Management |
| 3 | No centralized logging — logs on individual EC2 instances | High | Monitoring |
| 4 | CI/CD (GitHub Actions) accessible without MFA for 8 accounts | Critical | Access Control |
| 5 | Production database accessible from developer laptops via static IP allowlist | High | Access Control |
| 6 | No dependency scanning — last audit of node_modules was "never" | High | Vulnerability Mgmt |
| 7 | Manual deployments via SSH for 2 legacy services | High | Change Management |
| 8 | No incident response runbook or postmortem process | High | Monitoring |
| 9 | S3 bucket with customer data exports had no default encryption | Critical | Data Protection |
| 10 | 3 former contractors still had AWS IAM access | Critical | Access Control |
Remediation Plan and Execution
Week 1–2: Access Control (Critical)
Revoked all shared root credentials. Created individual IAM users with MFA enforcement via an SCP (Service Control Policy):
```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "DenyAllExceptSTSWithoutMFA",
      "Effect": "Deny",
      "NotAction": [
        "iam:CreateVirtualMFADevice",
        "iam:EnableMFADevice",
        "iam:GetUser",
        "iam:ListMFADevices",
        "iam:ListVirtualMFADevices",
        "iam:ResyncMFADevice",
        "sts:GetSessionToken"
      ],
      "Resource": "*",
      "Condition": {
        "BoolIfExists": {
          "aws:MultiFactorAuthPresent": "false"
        }
      }
    }
  ]
}
```
Removed 3 former contractor accounts. Implemented quarterly access review calendar with a documented review template.
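The review itself can be partly mechanized: pull the account list from the identity provider, pull the active roster from HR, and diff them. A minimal sketch (the names and data shapes are illustrative):

```python
# Hypothetical exports: accounts in the identity provider vs. HR's roster.
iam_users = {"alice", "bob", "contractor-ck"}
active_employees = {"alice", "bob", "carol"}

def access_review(iam_users: set, active_employees: set) -> dict:
    """Flag accounts with no matching active employee (candidates for
    immediate revocation) and employees with no account (worth checking
    for shadow access paths like shared credentials)."""
    return {
        "revoke": sorted(iam_users - active_employees),
        "no_account": sorted(active_employees - iam_users),
    }

print(access_review(iam_users, active_employees))
# → {'revoke': ['contractor-ck'], 'no_account': ['carol']}
```

Archiving each quarter's diff output alongside the signed review template gives the auditor both the control and its evidence in one place.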
Week 2–4: Change Management
Enabled branch protection across all repositories:
```bash
#!/bin/bash
REPOS=("api-service" "web-app" "worker-service" "shared-lib")

for repo in "${REPOS[@]}"; do
  gh api -X PUT "repos/org/$repo/branches/main/protection" \
    --input - <<EOF
{
  "required_pull_request_reviews": {
    "required_approving_review_count": 1,
    "dismiss_stale_reviews": true
  },
  "required_status_checks": {
    "strict": true,
    "contexts": ["ci/test", "security/scan"]
  },
  "enforce_admins": true,
  "restrictions": null,
  "allow_force_pushes": false
}
EOF
done
```
Migrated the 2 legacy services from SSH-based deployments to GitHub Actions with deployment audit logging.
Week 3–6: Monitoring and Incident Response
Deployed a centralized logging stack using Datadog:
- Application logs shipped via the Datadog agent
- AWS CloudTrail logs forwarded to Datadog
- Log retention set to 13 months (covering the full audit window plus buffer)
- Alerts configured for: unauthorized API calls, failed login attempts, deployment failures, error rate thresholds
Created incident response runbooks for five scenarios: service outage, data breach, unauthorized access, dependency vulnerability disclosure, and DDoS. Conducted a tabletop exercise for each.
Week 4–8: Vulnerability Management
Integrated Snyk into all CI pipelines with the following policy:
```yaml
# .snyk policy file
# Note: severity gating is enforced via CLI flags in CI
# (snyk test --severity-threshold=high --fail-on=upgradable);
# the .snyk file itself carries the documented ignore rules.
version: v1.5.0
ignore:
  SNYK-JS-EXAMPLE-0000000:
    - '*':
        reason: 'Risk accepted — not exploitable in our context'
        expires: 2026-06-01
```
Added Trivy for container image scanning. Established patching SLAs: critical within 7 days, high within 30 days, medium within 90 days.
Week 4–6: Data Protection
Encrypted the unencrypted S3 bucket (required migrating existing objects). Verified all RDS instances used AES-256 encryption via AWS KMS. Tested backup restoration for all databases and documented recovery time.
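The pass/fail rule for a restoration test is simply the measured restore time against the RTO. A trivial sketch — the 4-hour RTO here is an assumed value, not one from the audit:

```python
from datetime import timedelta

# Assumed RTO for illustration; use the value from your own availability commitments.
RTO = timedelta(hours=4)

def restore_within_rto(restore_duration: timedelta, rto: timedelta = RTO) -> bool:
    """A restoration test passes only if the measured restore time meets the RTO."""
    return restore_duration <= rto

print(restore_within_rto(timedelta(hours=2, minutes=30)))  # → True
print(restore_within_rto(timedelta(hours=5)))              # → False
```

Recording the measured duration (not just "restore succeeded") is what turns the test into audit evidence for the availability criteria.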
CI/CD Pipeline Changes
The final pipeline included these gates:
```text
PR Created
  → Automated tests (unit + integration)
  → Semgrep SAST scan
  → Snyk dependency scan
  → Minimum 1 approval required
  → All checks must pass before merge

Merge to main
  → Docker image build
  → Trivy container scan (fail on CRITICAL/HIGH)
  → Deploy to staging
  → Smoke tests

Production Deploy
  → Manual approval gate (deployment manager)
  → Deploy with audit trail (actor, commit SHA, timestamp, PR number)
  → Post-deploy health checks
  → Deployment record archived (365-day retention)
```
Audit Outcome
The observation period began once all controls were operational (week 8). After a 6-month observation period, the audit firm conducted 4 weeks of fieldwork. Key outcomes:
- 0 exceptions in access control (all accounts had MFA, no shared credentials, quarterly reviews completed)
- 0 exceptions in change management (100% of production changes traced to approved PRs)
- 1 minor observation in vulnerability management (2 medium-severity findings exceeded the 90-day SLA by 11 days — documented as a management response with revised tracking)
- Clean Type II report issued, satisfying the requirements for all three enterprise deals
The total elapsed time from kickoff to report issuance was 9.5 months. Engineering effort peaked at 2 FTEs during the first 8 weeks of remediation, then dropped to approximately 0.25 FTE for ongoing maintenance during the observation period.
Key Takeaways
SOC 2 Type II is fundamentally an engineering exercise for software companies. The policies and procedures matter, but auditors spend most of their time evaluating technical controls and the evidence those controls produce.
The most efficient path is to build compliance into your existing development workflow — branch protection, automated scanning, deployment audit trails, centralized logging — rather than maintaining a parallel compliance process. When your CI/CD pipeline enforces the controls and produces the evidence, the audit becomes a verification exercise rather than a scramble for documentation.
Start with the access control and change management controls. These are where auditors find the most gaps, and they require the longest observation periods to demonstrate consistent operation.