Code reviews get a bad reputation. They're often seen as bureaucratic slowdowns—hoops to jump through before merging. But done right, they're one of the most valuable practices a team can have.
Here's how I approach them.
Teaching, Not Gatekeeping
The worst code reviews are the ones where the reviewer is just looking for reasons to block. They nitpick formatting, bikeshed naming, and make the author feel like they're defending a thesis.
I try to do the opposite. Every review is a teaching opportunity—for both sides. When I flag something, I explain why it matters, not just that it's wrong. And I stay open to learning that my way isn't the only way.
The goal isn't to prove I know more than the author. It's to ship better code together.
What I Look For First
I don't read code top-to-bottom. I triage.
Security first. SQL injection, auth bypasses, secrets in code—these are non-negotiable. They get flagged immediately, regardless of anything else in the PR.
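To make the SQL-injection case concrete, here's a minimal sketch of the pattern that earns an instant Critical flag — user input concatenated into SQL text. The function and table names are hypothetical; the parameterized form in the trailing comment (node-postgres style) is the usual fix.

```javascript
// Flagged on sight: attacker-controlled input becomes part of the SQL itself.
function buildOrderQuery(userId) {
  return `SELECT * FROM orders WHERE user_id = '${userId}'`;
}

const malicious = "1' OR '1'='1";
console.log(buildOrderQuery(malicious));
// The injected clause makes the WHERE condition true for every row.
// The fix is a parameterized query, e.g. with node-postgres:
//   client.query('SELECT * FROM orders WHERE user_id = $1', [userId]);
```

The review comment doesn't just say "injection risk" — it shows the malicious input that breaks the query and the parameterized call that closes the hole.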
Then performance. N+1 queries, missing indexes, unbounded loops. These won't break things today, but they'll bite you at scale.
Then maintainability. Is this code going to be readable in six months? Does it follow the patterns established elsewhere in the codebase? Are there obvious simplifications?
Finally, style. Naming, formatting, minor preferences. I try to keep these to a minimum—if it's not in a linter, it's probably not worth fighting over.
Severity Matters
Not all feedback is equal. I explicitly label severity:
- Critical: This will cause a security vulnerability or data loss. Must fix before merge.
- High: This will cause bugs or significant performance issues. Should fix before merge.
- Medium: This will cause maintenance headaches. Worth addressing.
- Low: This is a suggestion or nitpick. Take it or leave it.
This does two things. First, it tells the author where to focus their energy. Second, it prevents me from blocking on things that don't actually matter.
If I can't justify a "high" or "critical" rating, I ask myself: does this really need to block the PR?
Feedback That's Actually Useful
Bad feedback: "This is wrong."
Good feedback: "This query will run for every item in the list, causing N+1 database hits. Consider using eager loading: Order.findAll({ include: Customer })."
The difference:
- Specific: Points to the exact issue
- Explains impact: Says why it matters
- Suggests fix: Gives a concrete next step
I always try to include a code example when suggesting changes. It removes ambiguity and often reveals whether my suggestion actually works.
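As an illustration of why that example matters, here's the N+1 pattern from the feedback above, sketched with a fake data layer that counts queries. The `Order`/`Customer` shapes are illustrative stand-ins for an ORM, not real library calls.

```javascript
// Toy in-memory "database" with a query counter.
let queryCount = 0;
const db = {
  orders: [{ id: 1, customerId: 10 }, { id: 2, customerId: 11 }],
  customers: { 10: { name: 'Ada' }, 11: { name: 'Grace' } },
};
function findOrders() { queryCount++; return db.orders; }
function findCustomer(id) { queryCount++; return db.customers[id]; }
function findOrdersWithCustomers() {
  queryCount++; // one JOIN-style query fetches orders and customers together
  return db.orders.map(o => ({ ...o, customer: db.customers[o.customerId] }));
}

// N+1: one query for the list, then one more per item.
queryCount = 0;
findOrders().forEach(o => findCustomer(o.customerId));
console.log(queryCount); // 3 queries for 2 orders

// Eager loading: a single query, no matter how long the list gets.
queryCount = 0;
findOrdersWithCustomers();
console.log(queryCount); // 1 query
```

With two orders the difference is trivial; with two thousand, the N+1 version issues 2,001 queries where eager loading still issues one — which is exactly the "explains impact" part of the feedback.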
The Checklist
I've distilled my process into a one-page checklist that I use for every review. It covers:
- Security (input validation, auth, secrets)
- Performance (N+1, caching, async)
- Code quality (SRP, naming, DRY)
- Testing (coverage, edge cases, mocking)
- Documentation (comments, JSDoc, README)
You can download the PDF checklist for free—it's what I actually use.
What Makes Reviews Bureaucratic
Some warning signs that your review process has gone off the rails:
Long review queues. If PRs sit for days waiting for review, something's broken. Reviews should happen same-day when possible.
Review by committee. One or two reviewers is usually enough. More than that and you get conflicting feedback and endless revision cycles.
Blocking on style. If your linter doesn't catch it, consider whether it's worth the friction. Consistency matters, but so does shipping.
No reviewer context. The reviewer should understand the why behind the change. If they don't, the PR description is the problem, not the code.
The Payoff
Good code reviews do more than catch bugs. They:
- Spread knowledge across the team (no silos)
- Establish patterns that compound over time
- Build trust between teammates
- Make onboarding faster (new devs learn by reviewing)
The investment pays off. A 30-minute review can prevent a 3-hour debugging session—or worse, a production incident.
Want to see what this looks like in practice? Check out my sample code review—it's the actual format I deliver for clients. Or grab the checklist and try it yourself.