Most teams assume slow PRs mean slow reviewers. A closer look at PR timelines shows something different.
A recently merged PR might span 54 working hours, with roughly 40 minutes of active work. The rest is wait time between transitions. This pattern appears consistently in teams once they scale beyond small groups. The bottleneck is not the review itself, but the gaps between stages. A PR moves through seven stages, and delays accumulate at every step.
This guide breaks down those stages, identifies where delays occur, and outlines fixes to reduce review time. For what to examine once a PR is open, see the code review best practices guide. For a deeper understanding of how AI can handle the repetitive parts of review, read the complete guide on what AI code review is and how it works.
What is a code review process?
A code review process defines the stages a PR moves through from open to merge, the actor responsible at each stage, and the conditions for handoff. Cutting cycle time depends less on faster reviewers and more on shortening the wait between stages, especially when reviewers pick up a PR and when authors respond to comments.
Most teams operate without a defined process. A PR opens, a reviewer picks it up when available, and it merges once both parties agree. This works at a small scale with fewer PRs and familiar code. At a large scale, the lack of structure increases queue times, reduces consistency in review quality, and makes cycle time unpredictable. The fix is to define the process and measure each stage.
Measure your current PR lifecycle before changing anything
Before redesigning a code review process, establish a baseline. Select a recently merged PR that took longer than expected and review its event timeline in GitHub or GitLab. Record timestamps for key states such as branch push, PR open, CI start and finish, reviewer assignment, first activity, comments, author response, approval, and merge. Calculate the duration between each step to see where time is spent.
In most teams, two or three transitions account for over 80 percent of the elapsed time, while actual review work is under 5 percent. This trace shifts the discussion from opinion to specific gaps and targeted fixes.
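For a single PR, the trace is simple enough to script. The sketch below computes the duration of each transition from a list of lifecycle timestamps and surfaces the two longest; the event names and timestamps are illustrative, not pulled from any real PR.

```python
from datetime import datetime

# Hypothetical event timeline for one merged PR. In practice, copy these
# timestamps from the PR's event log in GitHub or GitLab.
events = [
    ("push",              "2024-05-06T09:12:00"),
    ("pr_open",           "2024-05-06T11:40:00"),
    ("ci_green",          "2024-05-06T11:52:00"),
    ("reviewer_assigned", "2024-05-06T11:53:00"),
    ("reviewer_opened",   "2024-05-07T15:10:00"),
    ("first_comment",     "2024-05-07T15:25:00"),
    ("author_response",   "2024-05-08T10:05:00"),
    ("merge",             "2024-05-08T10:40:00"),
]

def transition_durations(events):
    """Return (from_state, to_state, hours) for each consecutive pair."""
    out = []
    for (a, t1), (b, t2) in zip(events, events[1:]):
        delta = datetime.fromisoformat(t2) - datetime.fromisoformat(t1)
        out.append((a, b, round(delta.total_seconds() / 3600, 1)))
    return out

durations = transition_durations(events)
# The two longest transitions are where the process needs attention first.
worst = sorted(durations, key=lambda d: d[2], reverse=True)[:2]
```

With these sample timestamps, the two worst gaps are "assigned to opened" and "comment to author response", which previews the pattern the next section describes.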
Instead of doing this manually for individual PRs, you can use an engineering metrics platform like DevDynamics to track PR lifecycle and delivery metrics across your organization, helping you identify bottlenecks, understand delays, and measure improvements over time.
Map the 7 stages of the PR lifecycle
Every PR moves through seven stage transitions. Each transition is a handoff between two actors or systems, and each is a candidate for accumulated wait time. The actors involved are the author, the CI system, the routing system, the reviewer, and the merge system.
| # | Stage transition | From | To |
|---|---|---|---|
| 1 | Push to PR open | Author writing code | Author opening the PR |
| 2 | PR open to CI green | PR creation event | CI pipeline completion |
| 3 | CI green to reviewer assigned | CI completion | Reviewer routing |
| 4 | Assigned to reviewer opens it | Routing event | Reviewer attention |
| 5 | Opened to first comment | Reviewer reading the diff | Reviewer posting a comment |
| 6 | First comment to author response | Reviewer comment | Author edit or reply |
| 7 | Author response to merge | Author resolution | Merge system |
Well-functioning teams complete most transitions within a few hours during working days. Teams with cycle time issues typically have at least two transitions that stretch into double-digit hours. Which two are longest varies from team to team, but the pattern is predictable enough that the next section names the usual culprits directly.
Identify the 2 stages where most of the wait time accumulates
Across teams, transition 4 (assigned → reviewer opens) and transition 6 (first comment → author response) account for most of the cycle time on a typical PR. Both share the same cause: a notification interrupts the recipient in the middle of another task, gets deferred, and the PR waits.
Transition 4 is typically the longest. The system assigns the PR to a reviewer who is in a meeting, paired on another change, or already reviewing two earlier PRs. The notification surfaces hours later. By the time the reviewer opens the PR, the author has switched context to the next ticket and is no longer available for a fast back-and-forth.
Transition 6 shows the same pattern in reverse. A reviewer comments late in the day, the author responds the next morning, and the reviewer replies later. One comment cycle can take 18 to 24 hours, despite under 10 minutes of actual work.
Target durations for these two transitions on high-velocity teams:
- Transition 4 (assigned → reviewer opens): under 2 hours during working hours.
- Transition 6 (first comment → author response): under 3 hours during working hours.
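Because both targets are expressed in working hours, raw wall-clock deltas overstate overnight and weekend gaps. A minimal sketch of working-hours measurement, assuming a 09:00 to 18:00 weekday schedule (adjust the constants to your team's hours):

```python
from datetime import datetime, timedelta

WORK_START, WORK_END = 9, 18  # assumed 09:00-18:00 working day, Mon-Fri

def working_hours_between(start_iso, end_iso):
    """Count only weekday working hours between two timestamps,
    stepping minute by minute so an overnight gap does not inflate
    the measurement. Coarse but good enough for a baseline."""
    cur = datetime.fromisoformat(start_iso)
    end = datetime.fromisoformat(end_iso)
    minutes = 0
    while cur < end:
        if cur.weekday() < 5 and WORK_START <= cur.hour < WORK_END:
            minutes += 1
        cur += timedelta(minutes=1)
    return round(minutes / 60, 2)

# A comment at 17:00 Monday answered at 10:00 Tuesday is a 17-hour
# wall-clock gap but only 2.0 working hours, which is under the
# transition 6 target.
gap = working_hours_between("2024-05-06T17:00:00", "2024-05-07T10:00:00")
```

Measured this way, a response the next morning can still hit the target; a response the next afternoon cannot.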
Hitting these two targets reduces total cycle time by more than half on most teams. No other single change has a comparable impact, including faster reviewers, better diff tooling, or smaller PRs. Both targets require structural changes, not effort changes.
Refacto eliminates the wait time at transition 5. It reviews every PR as soon as it opens, adds inline comments on bugs, security issues, and logic gaps with full codebase context, and gives reviewers concrete findings to act on. Try it on your next PR →
Apply a structural fix to each stage transition
Each transition has a known set of structural fixes. None of them requires additional effort from the team. They require a one-time process change followed by measurement.
| Transition | Where time leaks | Structural fix |
|---|---|---|
| 1. Push to PR open | Author drifts into side cleanups before opening the PR | PR opens within 30 minutes of the last meaningful commit; enforced by a personal rule, not a tool |
| 2. PR open to CI green | Slow CI pipeline, flaky tests, sequential jobs | Parallelize CI jobs, quarantine flaky tests, run lint and type checks before the full test suite |
| 3. CI green to assigned | Manual reviewer selection, ad hoc Slack pings | CODEOWNERS for high-risk paths, automatic round-robin for everything else |
| 4. Assigned to opened | Reviewer is buried in their own work and the notification gets queued | Two scheduled review windows per developer per day; dashboard alert for any PR unopened over 2 hours |
| 5. Opened to first comment | Reviewer starts from a cold context and must read the entire diff before commenting | AI code review posts inline comments before the human reviewer arrives, so that the reviewer reacts to existing findings |
| 6. Comment to author response | Author has switched context to the next ticket | Digest of unresolved comments delivered at the start of each scheduled work block |
| 7. Author response to merge | Stale checks or manual approval steps block the merge | Auto-merge when all conversations resolve, all required checks pass, and approval count is met |
Implement one change at a time, measure results over two weeks, then move to the next transition. Total cycle time is the sum of the transition waits, so each fix removes its share of the delay directly, and the fixes reinforce one another: a faster response keeps both author and reviewer in context, which shortens the next transition too.
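Since total cycle time is just the sum of the transition waits, the payoff of each fix can be estimated before committing to it. The numbers below are illustrative, reusing the kind of baseline a trace produces:

```python
# Baseline transition durations in hours (illustrative numbers keyed by
# transition number 1-7 from the table above).
baseline = {1: 2.5, 2: 0.8, 3: 0.5, 4: 27.3, 5: 0.3, 6: 18.7, 7: 0.6}

def cycle_time(durations):
    """Total cycle time is simply the sum of the transition waits."""
    return round(sum(durations.values()), 1)

# Fixing only transitions 4 and 6, the two biggest leaks, and bringing
# them under their targets:
after_fixes = {**baseline, 4: 1.5, 6: 2.0}
```

With these sample numbers, fixing just the two worst transitions drops the total from roughly 50 hours to under 10, which is why the earlier section claims those two targets alone cut cycle time by more than half.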
Use author preparation to influence 5 of the 7 stages
Discussions of code review speed tend to focus on reviewer behavior, but the author has more leverage over cycle time than the reviewer does. The author directly controls transitions 1, 6, and 7 and influences transitions 4 and 5 through the quality of the PR. That covers five of the seven stages.
A well-prepared PR moves through the process faster because it gives the reviewer fewer reasons to pause. A well-prepared PR includes the following:
- A description that explains the intent before the implementation, so that reviewers do not have to infer the why from the diff.
- A diff under 400 lines, or a clear explanation of why it cannot be split, along with sections that need close attention.
- A self-review before opening the PR, removing dead code, debug logs, and out-of-scope changes.
- A link to the relevant ticket, design document, or incident.
- Author availability during the first 4 hours after opening, enabling quick back-and-forth while context is still fresh.
PRs that meet these conditions move through review with shorter transitions because the reviewer spends less time gathering context, and the author resolves comments while the change is still in working memory.
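The checklist above can be enforced mechanically before a PR opens. This is a hypothetical pre-open check, not a standard GitHub or GitLab feature; the field names are assumptions chosen for illustration:

```python
MAX_DIFF_LINES = 400  # threshold from the checklist above

def pr_readiness_issues(pr):
    """Return a list of author-preparation problems to fix before
    opening the PR. An empty list means the PR is ready."""
    issues = []
    if not pr.get("description", "").strip():
        issues.append("missing description explaining intent")
    if pr.get("diff_lines", 0) > MAX_DIFF_LINES and not pr.get("split_rationale"):
        issues.append(f"diff over {MAX_DIFF_LINES} lines with no explanation")
    if not pr.get("ticket_link"):
        issues.append("no linked ticket, design doc, or incident")
    if not pr.get("self_reviewed"):
        issues.append("no self-review pass")
    return issues

ready = pr_readiness_issues({
    "description": "Round invoice totals before summing line items",
    "diff_lines": 250,
    "ticket_link": "PAY-123",
    "self_reviewed": True,
})  # an empty list: this PR is ready to open
```

A check like this could run as a local git hook or a CI step that comments on the PR instead of blocking it.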
Define explicit exceptions for hotfixes, reverts, and feature flags
A code review process should define explicit exceptions for cases where the standard path does not apply. Defining these exceptions in advance prevents arguments during incidents and reduces the temptation to bypass the process for changes that should go through it.
Three cases justify bypassing the standard process:
- Hotfixes for active production incidents. Skip the queue, get a single fast review from the on-call engineer, merge, and complete a formal post-incident review afterward.
- Reverts of previously reviewed changes. A revert PR undoes work that has already passed review. A single quick approval is sufficient.
- Experimental code behind a feature flag. If the change cannot affect a user, the review can be lighter. Apply the full process when the flag is ready to flip.
All other changes go through the full process, including those the author considers trivial. Small typo fixes and one-line config changes have a long history of causing production incidents, and the cost of reviewing them is small compared to the cost of skipping review on a load-bearing line.
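Encoding the exceptions as a routing rule removes the in-the-moment judgment call. A sketch, assuming label names like `hotfix` and `feature-flag` as team conventions rather than platform features:

```python
def review_path(pr):
    """Pick the review path for a PR. Label names are assumed team
    conventions; everything that matches no exception gets the full
    process, including 'trivial' changes."""
    labels = set(pr.get("labels", []))
    if "hotfix" in labels:
        return "single fast review by on-call, post-incident review after merge"
    if pr.get("is_revert"):
        return "single quick approval"
    if "feature-flag" in labels and pr.get("flag_off", False):
        return "lighter review now, full process before the flag flips"
    return "full review process"
```

Note the order: the hotfix rule wins over everything else, and the feature-flag exception applies only while the flag is off, matching the rules above.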
Track one PR end-to-end through a tuned process
The following timeline shows what a tuned code review process looks like in practice. The PR is a 250-line change touching three files in a payments service.
| Time | Event | Actor |
|---|---|---|
| 09:42 | Author finishes the change and runs tests locally | Author |
| 09:48 | Author self-reviews the diff and removes a debug log | Author |
| 09:51 | PR opens with linked ticket and a clear description | Author |
| 09:52 | CI starts: lint, type check, unit tests, integration tests | CI |
| 09:53 | Refacto AI posts 3 inline comments on a missing null check and a potential N+1 query | AI reviewer |
| 09:58 | CI finishes green | CI |
| 09:58 | CODEOWNERS automatically assigns the reviewer for the payments service | Routing |
| 10:14 | Reviewer opens the PR during their morning review window | Reviewer |
| 10:22 | Reviewer reads the diff and the AI comments, posts 2 design comments | Reviewer |
| 10:31 | Author addresses the AI comments and one design comment, replies to the other | Author |
| 10:38 | Reviewer re-reviews, accepts the reply, approves | Reviewer |
| 10:39 | Auto-merge triggers, PR merges to main | Merge system |
The total cycle time is 57 minutes, with only about 18 minutes of active human effort. The improvement comes from shorter transitions rather than rushed work. The reviewer was already in their scheduled review window when the routing system assigned the PR. The AI review was available before the reviewer opened it, and auto-merge triggered as soon as the conditions were met.
This timeline is reproducible for any team that tightens each of the seven stage transitions. The work remains the same as in a slower PR; only the waiting time is eliminated.
Where to start on Monday morning
Pick a PR from last week that took longer than it should have. Pull the event timeline and calculate the duration of each of the seven transitions. Identify the two longest. Apply the structural fix from the table above to the worst one. Measure for two weeks. Move to the next transition.
This is the entire playbook. The diagnosis is mechanical, the fixes are structural, and the wins compound. For a checklist of what reviewers should examine when they open a PR, see the complete 2026 code review checklist guide. For step-by-step instructions on integrating AI review into your pipeline, see the GitHub setup guide.
The fastest way to shorten cycle time is to remove the wait at transitions 4 and 5. Refacto AI reviews every PR the moment it opens, posts context-aware comments on bugs and security issues, and gives the human reviewer a head start instead of a cold read. Start your free trial!