As part of the review and collaboration report package, the Submitter Metrics make it easy to understand how PR submitters are responding to and incorporating feedback during the code review process. The four Submitter Metrics, found in the Review collaboration report, are:
- Responsiveness: The time it takes a submitter to respond to a comment on their PR with either another comment or a code revision.
- Comments Addressed: Frequency with which a submitter responds to a reviewer’s comment.
- Receptiveness: Frequency with which the submitter accepts reviewer input, as denoted by code revisions.
- Unreviewed PRs: Frequency with which pull requests were submitted that had no comments and were self-merged.
These metrics are designed to promote healthy collaboration and provide prescriptive guidance for improving the productivity of the team’s code review process as a whole. As with any data point, these metrics should be used in context: “what’s right” and “what’s normal” will vary depending on your team’s situation.
Are people responding to feedback in a timely manner?
Responsiveness is the average time it takes to respond to a reviewer’s comment with either another comment or a code revision. It looks at the time between the last comment of the reviewer and the submitter’s response.
In practice, the goal is to drive this metric down. It’s up to the manager to determine how quickly people should respond to comments, but depending on your deployment frequency or deadlines, you may find that less than four hours is ideal, while more than 24 hours (regardless of timezone) is counterproductive under most circumstances.
However, like everything we do, responsiveness is context-dependent.
The submitter may be in the zone and shouldn’t stop. In some cases, it may be inappropriate for them to stop (they’re in a meeting, working on an extremely important ticket, or handling an outage).
But when it’s “my work” versus “their work,” as soon as you exit your flow state (say, to break for lunch or coffee), take the time to respond to those comments.
A “response” can be a comment or a code revision. If I say, “Change foo to bar,” you don’t need to say “Okay” and then make the suggested change; the change itself is your response. Similarly, if you don’t agree with my suggestion, you can respond with a comment. Both options are viable “responses.”
Are people acknowledging feedback from their teammates?
Comments Addressed is the percentage of reviewer comments the PR submitter responded to with a comment or a code revision.
This metric is different from Responsiveness because it looks at how broadly the submitter responded to the reviewer’s comments (instead of how quickly they responded to them).
As a manager, you want to drive this number up. If a reviewer thought it was worthwhile to make a comment, it’s generally worthwhile to respond to it. It’s best to use this metric as a prompt to encourage thorough reviews rather than managing to an absolute target.
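To make the definition concrete, here is a minimal sketch of the percentage described above. It assumes each reviewer comment record carries a flag indicating whether the submitter replied or pushed a revision addressing it; that flag, and the dict shape, are hypothetical.

```python
def comments_addressed(comments):
    """Percentage of reviewer comments the submitter responded to,
    with either a reply or a code revision."""
    if not comments:
        return None  # no reviewer comments, so the metric is undefined
    addressed = sum(1 for c in comments if c["responded"])
    return round(100 * addressed / len(comments), 1)
```

For example, a PR with four reviewer comments, three of which received a reply or revision, would score 75.0.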
Are people incorporating feedback from their teammates?
Receptiveness is the ratio of follow-on commits to comments. In short, this metric looks at whether the PR submitter is taking people’s feedback and incorporating that into their code.
This is a Goldilocks metric, so you’ll want to manage the outliers — and as always, context matters. A good developer is always open to improvements, but not all suggestions are worth implementing.
If Receptiveness is too low, it could be a sign that a developer is closed to input regardless of merit. It could also be a sign of “rubber-stamping,” a work pattern in which the reviewer, trusting the submitter too readily, approves the PR without a thorough review.
Alternatively, if Receptiveness is too high, you may be seeing a developer who fails to stand their ground, or someone relying on the review process to shake out bugs that could easily be caught in development.
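The ratio and the “Goldilocks” reading can be sketched as follows. The band thresholds here are illustrative placeholders, not product defaults; real targets depend on team norms.

```python
def receptiveness(follow_on_commits, reviewer_comments):
    """Ratio of follow-on commits to reviewer comments on a PR."""
    if reviewer_comments == 0:
        return None  # no feedback to incorporate
    return follow_on_commits / reviewer_comments

# Illustrative outlier check for a Goldilocks metric: flag both extremes.
def receptiveness_flag(ratio, low=0.25, high=1.5):
    if ratio is None:
        return "no feedback"
    if ratio < low:
        return "possibly dismissive"   # little code changes despite comments
    if ratio > high:
        return "possibly over-accommodating"  # reworking on every comment
    return "in range"
```

As with the other metrics, the flags are prompts for a conversation, not verdicts.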
Are PRs getting the proper level of review?
The Unreviewed PRs metric is the percentage of pull requests that were merged by their own submitter without receiving any comments; in other words, the share of PRs that didn’t get any review.
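Computed over a set of PRs, the metric looks something like the sketch below. The `comments`, `author`, and `merged_by` fields are assumed names for illustration only.

```python
def unreviewed_pr_rate(prs):
    """Percentage of PRs that were self-merged with zero comments."""
    if not prs:
        return 0.0
    unreviewed = sum(
        1 for pr in prs
        if pr["comments"] == 0 and pr["merged_by"] == pr["author"]
    )
    return 100 * unreviewed / len(prs)
```

Note that both conditions must hold: a PR with no comments that a teammate merged, or a self-merged PR that drew comments, does not count as unreviewed here.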
In an ideal world, PRs are never merged without being reviewed; even small PRs, or PRs from senior developers, can be a huge source of bugs. Many organizations establish a policy and configure their tooling to programmatically reject unreviewed PRs.
But for those who don’t enforce a policy: Every single time the PR goes out without being reviewed, a manager should know.
As a manager, you should drive this number to zero and take seriously the rare instance in which an engineer felt compelled to push code straight from their laptop to production without anyone ever looking at the change.