Review Metrics

Tags: Flow

Overview

Our review and collaboration report package provides a way for software teams to see the ground truth of what’s happening in the code review process. The package is split into three sets of metrics: Submit, Review, and Team Collaboration.

The four Review Metrics, found in the Review collaboration report, include:

  • Reaction Time: The time it takes for the reviewer to respond to a comment addressed to them.
  • Involvement: The percentage of pull requests that the reviewer participated in.
  • Influence: The ratio of follow-on commits made after the reviewer commented.
  • Review Coverage: The percentage of hunks commented on by the reviewer.

These metrics are designed to promote healthy collaboration and provide prescriptive guidance to improve the productivity of the team’s code review process as a whole.

As with any data point, these metrics should be used in context. “What’s right” and “what’s normal” will vary depending on your team’s culture.

Reaction Time

Are reviewers responding to comments in a timely manner? 


Reaction Time is the time it takes for a reviewer to respond to a comment addressed to them. Reaction Time for the reviewer is the same concept as Responsiveness for the submitter.  
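As a rough illustration of the definition, here is a minimal sketch in Python, assuming a simplified comment model (the field names and the reaction_time helper are hypothetical, not Flow’s actual implementation):

from datetime import datetime, timedelta

# Hypothetical, simplified records: a comment addressed to a reviewer, and their reply.
comment_to_reviewer = {"author": "submitter", "created": datetime(2024, 5, 1, 9, 0)}
reviewer_reply = {"author": "reviewer", "created": datetime(2024, 5, 1, 10, 30)}

def reaction_time(comment: dict, reply: dict) -> timedelta:
    """Time between a comment addressed to a reviewer and the reviewer's response."""
    return reply["created"] - comment["created"]

print(reaction_time(comment_to_reviewer, reviewer_reply))  # prints 1:30:00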

In practice, the goal is to drive this metric down. You generally want people to respond to each other in a timely manner, work together to find the right solution, and get it to production. If someone addresses you directly, you want to respond to them within an hour or so; taking more than eight hours is counterproductive under most circumstances.

However, like everything we do, Reaction Time is context-dependent. 

An engineer may be in the zone and shouldn’t realistically stop. In some cases, it may be inappropriate for them to stop (they’re in a meeting, working on an extremely important ticket, or handling an outage). 

But when it’s a “my work” versus “their work” situation, as soon as you exit your flow state — breaking for lunch or coffee — you should take the time to respond to those comments. 

Involvement

Are some people more involved in reviews than others?


This number will change according to your view. If you’re looking at an individual’s home team, they may show 75% Involvement, indicating they reviewed three out of four PRs. But if you zoom out to view the whole organization, that same individual’s Involvement rate will be much lower.

Involvement is very context-dependent. Not everyone can review everyone else’s code (imagine an HTML developer reviewing a complex query optimization). Architects and team leads are usually expected to have more Involvement to ensure consistency.

However, you should find a Goldilocks zone for each individual and the team they’re on and manage significant or sustained changes to their Involvement.

The Review collaboration report also shows Involvement for groups of individuals: organizations and teams. The calculations for these are worth a special mention.

For a team, Involvement helps you understand how often the team reviews its teammates’ PRs (versus outsourcing review to another team). Having extra people (e.g., architects) review a PR is great, but when that replaces the team itself reviewing the PRs, it is something to watch out for.

This metric can be incredibly helpful in highly matrixed organizations. For example, if your teams are set up organizationally by role (Software, DBA, QA, Front End, Back End, etc.) but your project teams are cross-functional, you likely want someone from the organizational team to review a PR to be sure it abides by standards and best practices, and someone from the project team to make sure the work is likely to meet the business objectives. Setting up both an organizational team structure and a project team structure, and making sure that both have high Involvement, is a good way to accomplish this.

Team Involvement is calculated like this:

 [# PRs reviewed by a team member] / [# reviewed PRs].

For your entire organization, Involvement measures how often someone in the organization reviewed the PRs. Once your teams are all set up properly, everyone in your organization is on the team, so Involvement becomes just [# reviewed PRs] / [# total PRs]. If you’re a math whiz, you’ll notice that equation is the complement of Unreviewed PRs. Well done!
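Expressed as code, the two calculations above might look like the following; a minimal sketch in Python, assuming you already have PR records tagged with their reviewers (the data structures are illustrative, not Flow’s internals):

def team_involvement(prs, team_members):
    """[# PRs reviewed by a team member] / [# reviewed PRs]."""
    reviewed = [pr for pr in prs if pr["reviewers"]]
    if not reviewed:
        return 0.0
    reviewed_by_team = [pr for pr in reviewed if set(pr["reviewers"]) & set(team_members)]
    return len(reviewed_by_team) / len(reviewed)

def org_involvement(prs):
    """[# reviewed PRs] / [# total PRs] -- the complement of Unreviewed PRs."""
    if not prs:
        return 0.0
    return sum(1 for pr in prs if pr["reviewers"]) / len(prs)

prs = [
    {"id": 1, "reviewers": ["alice"]},
    {"id": 2, "reviewers": ["carol"]},
    {"id": 3, "reviewers": []},              # an unreviewed PR
    {"id": 4, "reviewers": ["alice", "bob"]},
]
print(team_involvement(prs, {"alice", "bob"}))  # 0.666... (2 of the 3 reviewed PRs)
print(org_involvement(prs))                     # 0.75 (3 of 4 PRs were reviewed)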

Influence

How often do people update their code based on the reviewer’s comments?


Influence is the ratio of follow-on commits made after a reviewer posted a comment. It’s the sibling of Receptiveness. The Influence metric looks at whether your comments elicited a follow-on commit.

Influence doesn’t try to assign specific credit. That is to say, no one person gets the credit for being influential.  We understand that it’s the discussion itself that deserves the credit, so all participants in the discussion prior to the follow-on commit get Influence credit counted toward the metric.
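To make the credit-sharing idea concrete, here is a minimal sketch in Python under simplified assumptions (a flat, timestamped list of comments and follow-on commits; the structures are illustrative only): every participant who commented before a follow-on commit shares the credit.

from datetime import datetime

# Hypothetical, simplified review timeline.
comments = [
    {"author": "alice", "at": datetime(2024, 5, 1, 9, 0)},
    {"author": "bob",   "at": datetime(2024, 5, 1, 9, 30)},
    {"author": "carol", "at": datetime(2024, 5, 1, 14, 0)},  # posted after the follow-on commit
]
follow_on_commits = [{"at": datetime(2024, 5, 1, 11, 0)}]

def influence_credit(comments, follow_on_commits):
    """Everyone who commented before a follow-on commit shares the Influence credit."""
    credited = set()
    for commit in follow_on_commits:
        credited |= {c["author"] for c in comments if c["at"] < commit["at"]}
    return credited

print(sorted(influence_credit(comments, follow_on_commits)))  # ['alice', 'bob'] -- carol commented too late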

In practice, there’s a Goldilocks zone with this metric: too low may be a signal that an individual isn’t making substantive comments, and too high may be a signal that an individual is acting as a gatekeeper or a crutch. 

Architects and team leads should have higher Influence metrics. Once you find the right level for each individual and team, manage significant or sustained changes as they could indicate a shift in the team dynamic that warrants a manager’s attention.

Review Coverage

How much of each PR has been reviewed?


Review Coverage is the number of hunks in a PR that received a comment, as a percentage of the total hunks in the pull request. A typical PR will contain multiple files and multiple edits (aka hunks) in those files.
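As a back-of-the-envelope illustration, here is a minimal sketch in Python of that percentage (the hunk identifiers are made up; the real metric is computed from the pull request’s actual diff):

def review_coverage(total_hunks, commented_hunks):
    """Hunks that received a comment, as a percentage of all hunks in the PR."""
    if not total_hunks:
        return 0.0
    return 100.0 * len(set(commented_hunks) & set(total_hunks)) / len(set(total_hunks))

# A PR with 8 hunks across its files, 6 of which got at least one reviewer comment.
hunks = [f"hunk-{i}" for i in range(1, 9)]
commented = ["hunk-1", "hunk-2", "hunk-3", "hunk-5", "hunk-7", "hunk-8"]
print(review_coverage(hunks, commented))  # 75.0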

Like a teacher who puts tick marks on every page of a term paper to indicate they read it, a good reviewer will put a comment on the majority of the edits in a PR, even if it’s a simple “LGTM”. In practice, 100% Review Coverage is overkill.

As a manager, you want to watch Review Coverage so that when coverage rises and falls, both at the individual level and at the team level, you can provide guidance. The goal is to drive this number up, encouraging team members to take the time to review each change in the code, not just skim it as a whole. Small changes in average Review Coverage can make a big difference.



If you need help, please email support@pluralsight.com for 24/7 assistance.