Code review is the most important quality gate in a software team, and it is also the most common bottleneck. Every team has the same problem: senior engineers are the reviewers, they have their own work to do, and PRs queue for hours, sometimes days, waiting for review. The bottleneck is not laziness or lack of process. It is a structural problem: the number of PRs grows linearly with team size, but the number of qualified reviewers does not.
At Stripe Systems, we addressed this by building an AI-assisted code review pipeline that handles the mechanical portion of code review (style violations, common bugs, test coverage gaps, documentation checks) so human reviewers can focus exclusively on the parts that require judgment: business logic correctness, architecture fitness, and security.
This post covers the implementation in detail: the two-tier review model, the GitHub Actions pipeline, the prompts for different project types, how we tuned the system to reduce false positives, and the metrics we have measured over six months across three production projects.
The Code Review Bottleneck, Quantified
Before building the AI review system, we measured our review cycle times across three projects over two months:
| Metric | NestJS Backend | Flutter Mobile | React Frontend |
|---|---|---|---|
| Avg PRs per week | 18 | 12 | 15 |
| Avg review wait time | 5.1 hours | 3.8 hours | 4.6 hours |
| Avg review duration | 42 min | 38 min | 35 min |
| Avg review iterations | 2.3 | 1.9 | 2.1 |
| % of comments on style/formatting | 31% | 28% | 34% |
| % of comments on common patterns | 24% | 22% | 26% |
| % of comments on logic/architecture | 45% | 50% | 40% |
The key insight: 50-60% of review comments addressed issues that do not require human judgment. Style violations, missing null checks, unhandled error cases, missing tests for new endpoints. These are mechanical checks that follow rules, not judgment. If AI could handle these, human reviewers could focus their limited time on the 40-50% of issues that genuinely require experience and context.
What AI Code Review Can and Cannot Catch
Being honest about boundaries prevents both over-investment and disappointment.
What AI Catches Well
Style violations. Naming conventions, import ordering, indentation inconsistencies, missing type annotations. Linters catch some of these, but AI catches higher-level style issues: inconsistent error handling patterns across methods, mixing async/await with .then() chains in the same file, inconsistent DTO naming conventions.
Common bug patterns. Missing null/undefined checks before property access, unhandled promise rejections, missing break statements in switch cases, array operations without empty array checks, off-by-one errors in loop bounds, using == instead of === in JavaScript.
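To make this concrete, here is an illustrative snippet (hypothetical code, not from our codebase) showing two of the patterns the AI flags mechanically:

```typescript
// Illustrative only (hypothetical code): the kind of diff hunk the AI flags.
interface Coupon {
  percentage: number;
}

async function getDiscountLabel(
  findByCode: (code: string) => Promise<Coupon | null>,
  code: string,
): Promise<string> {
  const coupon = await findByCode(code);
  // Flagged: `coupon` can be null here (no check before property access),
  // and `==` is used where `===` is expected.
  if (coupon.percentage == 0) {
    return 'No discount';
  }
  return `${coupon.percentage}% off`;
}
```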
Test coverage gaps. New endpoints without corresponding test files, modified functions without updated tests, new error paths without error case tests. AI checks the diff for new public methods and cross-references with test files.
Documentation gaps. Public methods without JSDoc comments, new API endpoints without Swagger decorators, changed parameters without updated documentation.
Dependency issues. Importing from banned packages, using deprecated API methods, circular dependencies between modules.
What AI Cannot Catch
Business logic correctness. AI does not know that a discount of more than 40% requires manager approval, or that shipping to certain postal codes requires an additional surcharge. Business rules are domain knowledge that lives in requirements documents and the team's heads, not in the code.
Architecture fitness. Whether a new service should exist independently or be part of an existing module is a judgment call. AI can check that existing architecture rules are followed (e.g., "services don't import from controllers"), but it cannot evaluate whether the architectural approach is appropriate for the business need.
Performance implications. AI might flag an N+1 query if the pattern is obvious, but it cannot evaluate whether a specific query will be slow with production data volumes. Performance assessment requires understanding of data distribution, access patterns, and infrastructure constraints.
Security vulnerabilities in business context. AI catches generic security issues (SQL injection from string concatenation, missing input validation). It does not catch context-dependent vulnerabilities: an endpoint that returns user data without checking that the requesting user has permission to see it, or a rate limit that is too generous for a specific business use case.
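For example, a hypothetical endpoint like the one below (illustrative names, not our code) reads as perfectly clean to a generic review pass:

```typescript
// Illustrative only. Nothing here is syntactically wrong; whether the caller is
// allowed to read another user's orders is a business/authorization question the
// AI has no context for, which is why it stays in the human review tier.
import { Controller, Get, Param } from '@nestjs/common';
import { OrderService } from './order.service'; // hypothetical service

@Controller('users')
export class UserOrdersController {
  constructor(private readonly orderService: OrderService) {}

  @Get(':userId/orders')
  async getOrders(@Param('userId') userId: string) {
    // Missing: a check that the requesting user may view this userId's orders.
    return this.orderService.findByUserId(userId);
  }
}
```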
The Two-Tier Review Model
Our model splits code review into two tiers:
PR Submitted
      │
      ▼
┌───────────────────────┐
│  Tier 1: AI Review    │
│  (automated, <2 min)  │
│                       │
│  • Style violations   │
│  • Common bugs        │
│  • Test gaps          │
│  • Doc gaps           │
│  • Architecture rule  │
│    violations         │
└───────────┬───────────┘
            │
            ▼
  Developer addresses
  AI feedback (if any)
            │
            ▼
┌───────────────────────┐
│  Tier 2: Human Review │
│  (senior engineer)    │
│                       │
│  • Business logic     │
│  • Architecture fit   │
│  • Security review    │
│  • Performance        │
│  • Overall design     │
└───────────────────────┘
Tier 1 runs automatically on every PR. The developer addresses any valid AI comments before requesting human review. Tier 2 is the traditional human review, but the reviewer now knows that mechanical checks have already been done and can focus entirely on judgment-based review.
This model works because it matches the strengths of each reviewer type. AI is fast, tireless, and consistent at pattern matching. Humans are slow, fatigable, and inconsistent at pattern matching, but they have context, judgment, and domain knowledge that AI lacks.
Building the Pipeline
The implementation uses GitHub Actions, an LLM API, and the GitHub API for posting review comments. Here is the complete workflow.
Step 1: PR Webhook Triggers Review
A GitHub Actions workflow triggers on pull request events:
name: AI Code Review
on:
pull_request:
types: [opened, synchronize, ready_for_review]
permissions:
contents: read
pull-requests: write
jobs:
ai-review:
if: github.event.pull_request.draft == false
runs-on: ubuntu-latest
timeout-minutes: 10
steps:
- name: Checkout
uses: actions/checkout@v4
with:
fetch-depth: 0
- name: Setup Node.js
uses: actions/setup-node@v4
with:
node-version: '20'
cache: 'npm'
- name: Install review dependencies
run: npm ci --prefix .github/ai-review
- name: Get PR diff
id: diff
env:
GH_TOKEN: ${{ secrets.GITHUB_TOKEN }}
run: |
gh pr diff ${{ github.event.pull_request.number }} \
--repo ${{ github.repository }} > pr_diff.txt
- name: Get changed files list
id: files
env:
GH_TOKEN: ${{ secrets.GITHUB_TOKEN }}
run: |
gh pr view ${{ github.event.pull_request.number }} \
--repo ${{ github.repository }} \
--json files --jq '.files[].path' > changed_files.txt
- name: Gather context files
run: node .github/ai-review/gather-context.js
- name: Run AI review
env:
LLM_API_KEY: ${{ secrets.LLM_API_KEY }}
LLM_API_URL: ${{ secrets.LLM_API_URL }}
PR_NUMBER: ${{ github.event.pull_request.number }}
REPO: ${{ github.repository }}
GH_TOKEN: ${{ secrets.GITHUB_TOKEN }}
run: node .github/ai-review/review.js
Step 2: Diff Extraction and Context Assembly
The gather-context.js script assembles the context the AI needs beyond the diff itself:
// gather-context.js - simplified version
const fs = require('fs');
const path = require('path');
async function gatherContext() {
const changedFiles = fs.readFileSync('changed_files.txt', 'utf8')
.split('\n')
.filter(Boolean);
const context = {
diff: fs.readFileSync('pr_diff.txt', 'utf8'),
changedFiles,
relatedFiles: [],
architectureRules: '',
codingStandards: '',
};
// Load architecture rules if they exist
const rulesPath = '.github/ai-review/architecture-rules.md';
if (fs.existsSync(rulesPath)) {
context.architectureRules = fs.readFileSync(rulesPath, 'utf8');
}
// Load coding standards
const standardsPath = '.github/ai-review/coding-standards.md';
if (fs.existsSync(standardsPath)) {
context.codingStandards = fs.readFileSync(standardsPath, 'utf8');
}
// For each changed file, find related files (imports, tests)
for (const file of changedFiles) {
const related = await findRelatedFiles(file);
context.relatedFiles.push(...related);
}
// Deduplicate and limit context size
context.relatedFiles = [...new Set(context.relatedFiles)];
// Truncate if total context exceeds token limit
const maxContextChars = 80000; // ~20k tokens
let totalChars = context.diff.length;
const includedRelated = [];
for (const relFile of context.relatedFiles) {
const content = fs.readFileSync(relFile, 'utf8');
if (totalChars + content.length < maxContextChars) {
includedRelated.push({ path: relFile, content });
totalChars += content.length;
}
}
context.relatedFiles = includedRelated;
fs.writeFileSync(
'review_context.json',
JSON.stringify(context, null, 2),
);
}
async function findRelatedFiles(filePath) {
const related = [];
const dir = path.dirname(filePath);
const basename = path.basename(filePath, path.extname(filePath));
// Look for corresponding test file
const testPatterns = [
`${dir}/${basename}.spec.ts`,
`${dir}/${basename}.test.ts`,
`${dir}/__tests__/${basename}.test.ts`,
`test/${dir}/${basename}.test.ts`,
];
for (const pattern of testPatterns) {
if (fs.existsSync(pattern)) {
related.push(pattern);
}
}
  // Look for related interface/type files (skip files deleted in this PR)
  if (!fs.existsSync(filePath)) return related;
  const content = fs.readFileSync(filePath, 'utf8');
const importMatches = content.matchAll(
/from\s+['"](\.[^'"]+)['"]/g,
);
for (const match of importMatches) {
const importPath = path.resolve(dir, match[1]);
const candidates = [
`${importPath}.ts`,
`${importPath}/index.ts`,
];
for (const candidate of candidates) {
if (fs.existsSync(candidate)) {
related.push(candidate);
}
}
}
return related;
}
gatherContext();
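For reference, the resulting review_context.json has this shape (expressed as a TypeScript type inferred from the script above):

```typescript
// Shape of review_context.json as written by gather-context.js above.
interface ReviewContext {
  diff: string;                                      // output of `gh pr diff`
  changedFiles: string[];                            // paths from `gh pr view --json files`
  relatedFiles: { path: string; content: string }[]; // test files and local imports, size-capped
  architectureRules: string;                         // architecture-rules.md, or '' if absent
  codingStandards: string;                           // coding-standards.md, or '' if absent
}
```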
Step 3: AI First Pass
The review.js script sends the assembled context to the LLM and parses the structured response:
// review.js - simplified version
const fs = require('fs');
async function runReview() {
const context = JSON.parse(
fs.readFileSync('review_context.json', 'utf8'),
);
// Select the appropriate prompt template based on project type
const projectType = detectProjectType(context.changedFiles);
const promptTemplate = loadPromptTemplate(projectType);
const prompt = buildPrompt(promptTemplate, context);
const response = await callLLM(prompt);
const reviewComments = parseReviewResponse(response);
// Filter by confidence threshold
const filteredComments = reviewComments.filter(
(c) => c.confidence >= 0.7,
);
// Post comments to GitHub
await postReviewComments(filteredComments);
}
function detectProjectType(changedFiles) {
const extensions = changedFiles.map((f) =>
f.split('.').pop(),
);
if (extensions.includes('dart')) return 'flutter';
if (extensions.includes('tsx') || extensions.includes('jsx'))
return 'react';
return 'nestjs'; // default for .ts files
}
function buildPrompt(template, context) {
return template
.replace('{{DIFF}}', context.diff)
.replace('{{ARCHITECTURE_RULES}}', context.architectureRules)
.replace('{{CODING_STANDARDS}}', context.codingStandards)
.replace(
'{{RELATED_FILES}}',
context.relatedFiles
.map((f) => `--- ${f.path} ---\n${f.content}`)
.join('\n\n'),
);
}
async function callLLM(prompt) {
const response = await fetch(process.env.LLM_API_URL, {
method: 'POST',
headers: {
'Content-Type': 'application/json',
Authorization: `Bearer ${process.env.LLM_API_KEY}`,
},
body: JSON.stringify({
model: 'claude-sonnet-4-20250514',
max_tokens: 4096,
temperature: 0.2,
messages: [{ role: 'user', content: prompt }],
}),
});
const data = await response.json();
return data.content[0].text;
}
function parseReviewResponse(response) {
// Extract JSON array from response
const jsonMatch = response.match(/\[[\s\S]*\]/);
if (!jsonMatch) return [];
try {
return JSON.parse(jsonMatch[0]);
} catch {
console.error('Failed to parse review response as JSON');
return [];
}
}
async function postReviewComments(comments) {
const { Octokit } = require('@octokit/rest');
const octokit = new Octokit({ auth: process.env.GH_TOKEN });
const [owner, repo] = process.env.REPO.split('/');
const prNumber = parseInt(process.env.PR_NUMBER, 10);
// Get the PR to find the latest commit SHA
const { data: pr } = await octokit.pulls.get({
owner,
repo,
pull_number: prNumber,
});
if (comments.length === 0) {
// Post a summary comment indicating clean review
await octokit.issues.createComment({
owner,
repo,
issue_number: prNumber,
      body: '🤖 **AI Review:** No issues found. Ready for human review.',
});
return;
}
// Post as a review with inline comments
const reviewComments = comments.map((c) => ({
path: c.file,
line: c.line,
body: formatReviewComment(c),
}));
await octokit.pulls.createReview({
owner,
repo,
pull_number: prNumber,
commit_id: pr.head.sha,
event: 'COMMENT',
    body: `🤖 **AI Review:** Found ${comments.length} issue(s). Please address before human review.\n\nReact with 👍 or 👎 on each comment to help us improve.`,
comments: reviewComments,
});
}
function formatReviewComment(comment) {
  const severityEmoji = {
    critical: '🔴',
    major: '🟡',
    minor: '🔵',
  };
  return [
    `${severityEmoji[comment.severity] || '⚪'} **[${comment.category}]** ${comment.severity}`,
'',
comment.description,
'',
comment.suggestion
? `**Suggestion:** ${comment.suggestion}`
: '',
'',
    '_React with 👍 if helpful, 👎 if not._',
]
.filter(Boolean)
.join('\n');
}
runReview();
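One helper referenced above, loadPromptTemplate, is omitted from the simplified listing. A minimal sketch, assuming one markdown template per project type under .github/ai-review/prompts/ (that path is our illustration, not a requirement):

```typescript
// Minimal sketch of loadPromptTemplate. Assumes templates are checked in at
// .github/ai-review/prompts/{nestjs,flutter,react}.md (path is an assumption);
// `fs` is already required at the top of review.js.
function loadPromptTemplate(projectType) {
  return fs.readFileSync(`.github/ai-review/prompts/${projectType}.md`, 'utf8');
}
```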
Step 4: Structured Output Format
The LLM is instructed to return a JSON array. Here is the expected structure:
[
{
"file": "src/payments/payment.service.ts",
"line": 47,
"category": "bug",
"severity": "critical",
"confidence": 0.92,
"description": "The `processRefund` method accesses `payment.transaction.gatewayId` without checking whether `payment.transaction` is null. If the payment was created but the transaction failed to initialize, this will throw a TypeError in production.",
"suggestion": "Add a null check: `if (!payment.transaction) { throw new PaymentTransactionNotFoundError(payment.id); }` before accessing `gatewayId`."
},
{
"file": "src/payments/payment.controller.ts",
"line": 23,
"category": "standard",
"severity": "minor",
"confidence": 0.85,
"description": "The `@ApiResponse` decorator is missing for the 404 case. All endpoints should document their error responses per our coding standards.",
"suggestion": "Add `@ApiResponse({ status: 404, description: 'Payment not found' })` to the endpoint decorator."
}
]
The confidence field is critical for filtering. We ask the model to self-rate its confidence on each finding, and we only post comments above a 0.7 threshold. This eliminates most speculative or uncertain findings.
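In TypeScript terms, each element of the array looks like this (field names and values taken from the example above and the prompt templates below):

```typescript
// Type of each element in the model's JSON array.
type ReviewComment = {
  file: string;       // path relative to the repo root
  line: number;       // line number in the diff
  category: string;   // e.g. "bug", "architecture", "testing", "documentation", "security", "database"
  severity: 'critical' | 'major' | 'minor';
  confidence: number; // model's self-rated confidence, 0.0 to 1.0
  description: string;
  suggestion?: string; // concrete fix; optional when none applies
};
```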
Prompt Templates by Project Type
Different project types have different rules, patterns, and common issues. We maintain separate prompt templates for each.
NestJS Backend Prompt
Review this pull request diff for a NestJS backend application
using TypeORM and PostgreSQL.
Architecture Rules:
{{ARCHITECTURE_RULES}}
Coding Standards:
{{CODING_STANDARDS}}
Related files for context:
{{RELATED_FILES}}
Diff to review:
{{DIFF}}
Check for:
1. BUGS: null/undefined access, unhandled promise rejections,
missing error cases in switch statements, incorrect type
assertions, race conditions in async code
2. ARCHITECTURE: violations of the architecture rules listed above,
circular dependencies, wrong layer accessing wrong layer
3. TESTING: new public methods without test files, modified logic
without updated tests, new error paths without error tests
4. DOCUMENTATION: missing JSDoc on public methods, missing Swagger
decorators on endpoints, missing @ApiResponse for error cases
5. SECURITY: SQL injection risks, missing input validation,
sensitive data in logs, hardcoded credentials
6. DATABASE: missing indexes for query patterns, N+1 query risks,
missing transactions for multi-step operations
Return a JSON array. Each element:
{
"file": "path/to/file.ts",
"line": <line number in the diff>,
"category": "bug|architecture|testing|documentation|security|database",
"severity": "critical|major|minor",
"confidence": <0.0 to 1.0>,
"description": "<specific description referencing the code>",
"suggestion": "<specific code or action to fix>"
}
Rules:
- Only report issues you are confident about (confidence >= 0.7)
- Reference specific variable names, function names, and line numbers
- Do not comment on formatting or style (linter handles this)
- Do not suggest refactoring beyond the scope of this diff
- If the diff is clean, return an empty array []
- Maximum 15 issues
Flutter Mobile Prompt
Review this pull request diff for a Flutter mobile application
using BLoC pattern and freezed for state management.
Architecture Rules:
{{ARCHITECTURE_RULES}}
Coding Standards:
{{CODING_STANDARDS}}
Related files for context:
{{RELATED_FILES}}
Diff to review:
{{DIFF}}
Check for:
1. BUGS: missing null checks on nullable types, incorrect state
emissions in BLoC, missing dispose/close calls for streams and
controllers, incorrect BuildContext usage across async gaps
2. ARCHITECTURE: BLoC logic in widgets, direct API calls from
widgets (should go through repositories), widgets depending on
concrete implementations instead of abstractions
3. TESTING: new widgets without widget tests, new BLoCs without
unit tests, modified state transitions without updated tests
4. UI/UX: missing loading states, missing error handling in UI,
hardcoded strings (should use localization), missing
accessibility labels
5. PERFORMANCE: unnecessary rebuilds (missing const constructors,
wrong BlocBuilder granularity), large widget builds that
should be split
Return a JSON array with the same structure as above.
React Frontend Prompt
Review this pull request diff for a React frontend application
using TypeScript, React Query for data fetching, and Zustand
for state management.
Architecture Rules:
{{ARCHITECTURE_RULES}}
Coding Standards:
{{CODING_STANDARDS}}
Related files for context:
{{RELATED_FILES}}
Diff to review:
{{DIFF}}
Check for:
1. BUGS: missing dependency array entries in useEffect/useMemo/
useCallback, stale closures, incorrect conditional rendering,
missing key props in lists, unhandled error states in queries
2. ARCHITECTURE: business logic in components (should be in hooks
or services), direct API calls in components (should use React
Query hooks), prop drilling beyond 2 levels (use context or state)
3. TESTING: new components without test files, new hooks without
hook tests, modified render logic without updated tests
4. ACCESSIBILITY: missing aria labels, missing alt text on images,
non-semantic HTML elements for interactive content, missing
keyboard navigation support
5. PERFORMANCE: missing React.memo on expensive components, missing
useMemo for expensive computations, unnecessary re-renders from
object/array literals in JSX props
Return a JSON array with the same structure as above.
Sample AI Review Comment in Practice
Here is a representative example of what the AI review produces on a payment handler diff:
Diff submitted:
+ async processPayment(orderId: string, paymentDto: ProcessPaymentDto) {
+ const order = await this.orderRepository.findById(orderId);
+ const paymentIntent = await this.razorpayAdapter.createPaymentIntent({
+ amount: order.totalAmount,
+ currency: order.currency,
+ receipt: order.orderNumber,
+ });
+
+ const payment = this.paymentRepository.create({
+ orderId: order.id,
+ gatewayIntentId: paymentIntent.id,
+ amount: order.totalAmount,
+ status: PaymentStatus.PENDING,
+ });
+
+ await this.paymentRepository.save(payment);
+ return { paymentIntentId: paymentIntent.id, clientSecret: paymentIntent.clientSecret };
+ }
AI review comment posted:
🔴 **[bug]** critical

The `processPayment` method accesses `order.totalAmount` and `order.currency` without checking whether `order` is null. `findById` returns `Order | null` per the repository interface. If the order does not exist (deleted between cart creation and payment, or an invalid `orderId`), this will throw a `TypeError: Cannot read properties of null (reading 'totalAmount')` in production instead of a meaningful error response.

**Suggestion:** Add a null check after the `findById` call: `const order = await this.orderRepository.findById(orderId); if (!order) { throw new EntityNotFoundError('Order', orderId); }`

_React with 👍 if helpful, 👎 if not._
This is exactly the type of issue that a human reviewer would catch when alert, but might miss when reviewing the fifth PR of the day at 4:30 PM. The AI catches it every time.
Tuning the System: Reducing False Positives
The initial deployment had a 22% false positive rate: more than one in five AI comments was not useful. This erodes developer trust quickly. A developer who sees three unhelpful comments in a row starts ignoring all AI comments.
We reduced the false positive rate to 8% over three months through these changes:
Confidence threshold adjustment. We started with a 0.5 threshold and raised it to 0.7. This eliminated most speculative findings where the AI was unsure but reported anyway.
Project-specific rules. Generic prompts produce generic findings. Adding our specific architecture rules, coding standards, and common patterns to the prompt context reduced irrelevant comments significantly.
Feedback loop. Developers react with 👍 or 👎 on each AI comment. Monthly, we export the comments and their reactions and analyze them:
-- Monthly analysis query for AI review feedback
SELECT
r.category,
r.severity,
COUNT(*) as total_comments,
SUM(CASE WHEN r.reaction = 'thumbs_up' THEN 1 ELSE 0 END) as helpful,
SUM(CASE WHEN r.reaction = 'thumbs_down' THEN 1 ELSE 0 END) as unhelpful,
ROUND(
SUM(CASE WHEN r.reaction = 'thumbs_down' THEN 1 ELSE 0 END)::numeric
/ NULLIF(COUNT(*), 0) * 100,
1
) as false_positive_pct
FROM ai_review_comments r
WHERE r.created_at >= NOW() - INTERVAL '30 days'
GROUP BY r.category, r.severity
ORDER BY false_positive_pct DESC;
Common patterns in false positives:
- **Architecture comments on test files:** The AI applied production architecture rules to test helper files. Fix: exclude test files from architecture rule checks.
- **Missing null checks on already-validated inputs:** The AI flagged missing null checks on DTO properties that were already validated by class-validator. Fix: added to the prompt that DTO fields with validation decorators are guaranteed non-null after the validation pipe runs (see the illustrative DTO after this list).
- **Documentation comments on internal methods:** The AI flagged missing JSDoc on private helper methods. Fix: clarified that documentation requirements apply to public methods only.
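To make the second pattern concrete, here is an illustrative DTO (hypothetical fields): once the global validation pipe has run, class-validator guarantees these properties are present and well-formed, so a "missing null check" comment on them downstream is noise.

```typescript
// Hypothetical DTO for illustration. With a global ValidationPipe, these
// decorators reject requests where the fields are missing or malformed, so
// service code can safely assume they are non-null.
import { IsInt, IsPositive, IsString } from 'class-validator';

export class ProcessPaymentDto {
  @IsString()
  paymentMethodId: string;

  @IsInt()
  @IsPositive()
  amountInCents: number;
}
```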
Each fix was a prompt adjustment, not a code change. This is the advantage of a prompt-based system: tuning is fast and does not require redeployment.
Architecture Rule Enforcement
One of the most valuable aspects of AI review is encoding architecture rules into the prompt. This turns informal team conventions into automated checks.
Our architecture rules document (.github/ai-review/architecture-rules.md):
# Architecture Rules
## Layer Dependencies
- Controllers MAY import from Services
- Services MAY import from Repositories and Adapters
- Services MUST NOT import from Controllers
- Repositories MUST NOT import from Services or Controllers
- Adapters MUST NOT import from any application layer
## Module Boundaries
- Feature modules MUST NOT import from other feature modules
- Cross-feature communication uses SharedModule services or events
- The SharedModule MUST NOT import from any feature module
## Data Flow
- Entities are internal to the repository/service layer
- Controllers receive DTOs and return DTOs, never entities
- API responses MUST NOT include database IDs as primary identifiers
(use UUIDs or public-facing IDs)
## Error Handling
- All errors thrown from services MUST extend BaseApplicationError
- Controllers MUST NOT catch errors; the global exception filter
handles them
- Repository methods return null for not-found cases; services
decide whether to throw
## External Services
- All external API calls go through adapter classes
- Adapters live in src/adapters/{service-name}/
- Services depend on adapter interfaces, not implementations
- Adapter methods MUST include timeout and retry configuration
The AI review checks every PR diff against these rules. When an engineer creates a service that imports from a controller, or a repository that throws an HTTP exception, the AI catches it before the human reviewer has to spend time on it.
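For example, a single import line like the following (hypothetical) is enough for the AI to raise an architecture finding that cites the violated rule:

```typescript
// src/payments/payment.service.ts (hypothetical diff line)
// Violates "Services MUST NOT import from Controllers"; the AI flags it and
// quotes the rule before a human reviewer has to.
import { PaymentController } from './payment.controller';
```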
This is particularly valuable for onboarding: new engineers learn the architecture rules through AI feedback on their PRs, in context, with specific examples from their own code.
Metrics After Six Months
We measured these metrics across all three projects over six months of AI-assisted review:
Review Cycle Time
Before AI review:
Average: 4.2 hours (PR submitted → approved)
p50: 3.1 hours
p90: 8.7 hours
After AI review:
Average: 1.7 hours (PR submitted → approved)
p50: 1.2 hours
p90: 3.4 hours
Reduction: 60%
The reduction comes from two sources: (1) developers fix mechanical issues before requesting human review, reducing review iterations, and (2) human reviewers spend less time on mechanical checks and complete reviews faster.
Senior Engineer Review Time
Before: 45 minutes average per PR
After: 18 minutes average per PR
Reduction: 60%
Senior engineers report that they no longer check for style, documentation, or common patterns. They read the AI review comments to confirm the mechanical checks were done, then focus on business logic, architecture, and overall design.
Defects Caught in Review
Before: ~8.2 defects per 100 PRs (found by human reviewers)
After: ~11.1 defects per 100 PRs (AI + human combined)
Increase: 35%
The AI finds issues that humans skip when fatigued: missing null checks in deeply nested code, unhandled error cases in less-common code paths, missing test cases for error scenarios. Humans still find the complex bugs: incorrect business logic, race conditions that require understanding the full system, and security issues in business context.
False Positive Rate
Month 1: 22% (developers frustrated, trust declining)
Month 2: 14% (after confidence threshold and test file fixes)
Month 3: 10% (after DTO validation context added)
Month 4-6: 8% (stable, with monthly prompt refinements)
An 8% false positive rate means roughly 1 in 12 AI comments is not useful. Developers find this acceptable: they spend a few seconds reading and dismissing the occasional irrelevant comment, and the time saved on the remaining valid comments more than compensates.
Developer Satisfaction
We survey the team quarterly on AI review satisfaction (1-5 scale):
| Question | Month 1 | Month 3 | Month 6 |
|---|---|---|---|
| "AI review saves me time" | 3.2 | 4.1 | 4.3 |
| "AI review catches real issues" | 3.5 | 4.0 | 4.2 |
| "AI review false positives are acceptable" | 2.4 | 3.6 | 4.0 |
| "I trust AI review to handle mechanical checks" | 2.8 | 3.9 | 4.1 |
The trust trajectory is notable: it started low (2.4-3.5 range) and grew steadily as the false positive rate dropped. Trust is earned through consistent, accurate feedback, not through marketing.
The Feedback Mechanism
The 👍/👎 reaction system is simple but effective. Here is how it works end to end:
1. The AI posts a review comment with the reaction prompt at the bottom.
2. The developer reads the comment. If it is helpful, they react with 👍; if not, 👎.
3. A scheduled GitHub Action runs weekly, exporting all AI comments and their reactions to a database.
4. Monthly, the team lead reviews the 👎 comments, categorizes the failure modes, and adjusts prompts.
# .github/workflows/ai-review-feedback.yml
name: Collect AI Review Feedback
on:
schedule:
    - cron: '0 9 * * 1' # Every Monday at 09:00 UTC
jobs:
collect-feedback:
runs-on: ubuntu-latest
steps:
- name: Checkout
uses: actions/checkout@v4
- name: Collect reactions on AI comments
env:
GH_TOKEN: ${{ secrets.GITHUB_TOKEN }}
run: node .github/ai-review/collect-feedback.js
- name: Generate feedback report
run: node .github/ai-review/generate-report.js
- name: Post report to team channel
env:
SLACK_WEBHOOK: ${{ secrets.SLACK_WEBHOOK }}
run: node .github/ai-review/post-report.js
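The collect-feedback.js script is not shown in the post; a minimal sketch with @octokit/rest could look like the following (the bot-comment marker, lookback window, and output file are assumptions; the production script writes to the database queried earlier):

```typescript
// collect-feedback.js: minimal sketch, not the production script.
const fs = require('fs');
const { Octokit } = require('@octokit/rest');

async function collectFeedback() {
  const octokit = new Octokit({ auth: process.env.GH_TOKEN });
  const [owner, repo] = process.env.GITHUB_REPOSITORY.split('/');

  // All inline PR review comments updated in the last 7 days.
  const since = new Date(Date.now() - 7 * 24 * 60 * 60 * 1000).toISOString();
  const comments = await octokit.paginate(octokit.pulls.listReviewCommentsForRepo, {
    owner,
    repo,
    since,
    per_page: 100,
  });

  const rows = [];
  for (const comment of comments) {
    // Keep only comments posted by the AI reviewer (identified by its footer).
    if (!comment.body.includes('React with 👍 if helpful')) continue;

    const { data: reactions } =
      await octokit.reactions.listForPullRequestReviewComment({
        owner,
        repo,
        comment_id: comment.id,
      });

    rows.push({
      commentId: comment.id,
      path: comment.path,
      thumbsUp: reactions.filter((r) => r.content === '+1').length,
      thumbsDown: reactions.filter((r) => r.content === '-1').length,
    });
  }

  fs.writeFileSync('feedback.json', JSON.stringify(rows, null, 2));
}

collectFeedback();
```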
The monthly review meeting is short (30 minutes). The team looks at the false positive patterns, agrees on prompt changes, and implements them immediately. The prompt changes are version-controlled, so we can track which adjustments improved the false positive rate.
Limitations and Honest Assessment
After six months of production use across three projects at Stripe Systems, here is our candid assessment:
AI review is not a replacement for human review. It is a filter that reduces the mechanical load on human reviewers. Every PR still needs a human reviewer for business logic, architecture, and security.
The initial investment is not trivial. Setting up the pipeline, writing project-specific prompts, and tuning false positives took approximately 3 weeks of one engineer's time. The ROI is positive, but it is not instant.
Prompt maintenance is ongoing. As the project evolves (new architecture decisions, new patterns, new libraries), the prompts need updating. This is approximately 2-3 hours per month, which is modest but not zero.
Token costs are real but manageable. At current API pricing, our review pipeline costs approximately $80-120 per month across all three projects (around 45 PRs per week total, so roughly $0.50 per reviewed PR). This is less than one hour of a senior engineer's time, and it saves many more hours than that.
The system is only as good as its rules. If your architecture rules are vague or your coding standards are informal, the AI review will also be vague. The process of writing explicit rules for the AI prompt has the side benefit of forcing the team to make implicit conventions explicit.
Getting Started
For teams considering a similar implementation:
1. **Start with one project.** Pick the project with the most PRs and the most mechanical review overhead. Prove the value before expanding.
2. **Write your architecture rules first.** The prompt is only as good as the rules you give it. Spend time making your conventions explicit.
3. **Set a high confidence threshold initially.** Start at 0.8 and lower it as you build trust. High false positive rates in the first week will kill adoption.
4. **Implement the feedback loop from day one.** Without developer feedback, you cannot improve the system. The 👍/👎 mechanism is simple and effective.
5. **Measure before and after.** Track review cycle time, review duration, defect escape rate, and developer satisfaction. Without data, you are guessing about impact.
6. **Expect three months to stabilize.** The first month will have high false positives. The second month will be better. By the third month, the system should be trusted and stable.
AI-assisted code review is not a silver bullet. It is a well-understood engineering tool that solves a well-understood problem (the mechanical review bottleneck) and frees human reviewers to do the work that actually requires human judgment. The implementation is straightforward, the ROI is measurable, and the adoption curve is predictable. The only requirement is honesty about what it can and cannot do.