The review and collaboration metrics are intended to provide insight into how individuals collaborate with their peers during the code review process.
The recently released PR benchmarks—calculated from a study of over a half-million PRs—provide visibility into how other organizations are doing across the Review and Submit Fundamental metrics.
This post provides a list of the metrics that were analyzed in this study, their definitions, and the industry benchmarks that were identified.
Benchmarks and definitions
The Submitter Metrics quantify how submitters are responding to comments, engaging in discussion, and incorporating suggestions. The Reviewer Metrics provide a gauge for whether reviewers are providing thoughtful, timely feedback.
Each metric has a “typical” and “leading” benchmark, where typical is where the bulk of organizations are (the median) and leading represents organizations in the 90th percentile.
Responsiveness is the average time it takes to respond to a comment with either another comment or a code revision. The typical industry benchmark for Responsiveness is 6 hours and the leading benchmark is 1.5 hours.
Comments Addressed is the percentage of Reviewer comments that were responded to with a comment or a code revision. The typical industry benchmark for Comments Addressed is 30% and the leading benchmark is 45%.
Receptiveness is the ratio of follow-on commits to comments. It’s important to remember that Receptiveness is a ‘goldilocks’ metric—you’d never expect this metric to go up to 100%, and if it did, that would be indicative of a fairly unhealthy dynamic where every single comment led to a change. The expected range for Receptiveness is 10-20%.
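To make the Submitter Metric definitions concrete, here is a minimal sketch of how they could be computed from per-comment review data. The data structure and field names are hypothetical, not from any particular tool; each reviewer comment is paired with the submitter’s first response (a comment or a code revision), if any.

```python
from datetime import datetime, timedelta

# Hypothetical review events for a set of PRs: when each reviewer comment
# was posted, when the submitter first responded (None if never), and
# whether the comment led to a follow-on commit.
comments = [
    {"posted": datetime(2024, 1, 1, 9, 0), "responded": datetime(2024, 1, 1, 11, 0), "led_to_commit": True},
    {"posted": datetime(2024, 1, 1, 10, 0), "responded": datetime(2024, 1, 1, 12, 0), "led_to_commit": False},
    {"posted": datetime(2024, 1, 2, 9, 0), "responded": None, "led_to_commit": False},
]

answered = [c for c in comments if c["responded"] is not None]

# Responsiveness: average time from a comment to the first response.
responsiveness = sum(
    (c["responded"] - c["posted"] for c in answered), timedelta()
) / len(answered)  # 2 hours in this sample

# Comments Addressed: share of reviewer comments that got any response.
comments_addressed = len(answered) / len(comments)  # 2 of 3

# Receptiveness: ratio of follow-on commits to comments.
receptiveness = sum(c["led_to_commit"] for c in comments) / len(comments)  # 1 of 3
```

In practice the equivalent of this calculation would run over every comment in the measurement window, but the ratios are the same as the definitions above.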
Unreviewed PRs is the percentage of PRs submitted that had no comments. The typical industry benchmark for Unreviewed PRs is 20% and the leading benchmark is 5%.
Reaction Time is the average time it took to respond to a comment. The typical industry benchmark for Reaction Time is 18 hours and the leading benchmark is 6 hours.
Involvement is the percentage of PRs a reviewer participated in. The typical industry benchmark for Involvement is 80% and the leading benchmark is 95%. However, it’s important to note that Involvement is highly context-dependent. At an individual or team level, “higher” is not necessarily better, as it can point to a behavior where people are overly involved in the review process. But there are certain situations where you’d expect Involvement to be very high, sometimes from a particular person on the team and other times from a group that’s working on a specific project.
Influence is the ratio of follow-on commits to comments made in PRs. The expected range for Influence is 20-40%.
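The Reviewer Metrics that are ratios over PRs can be sketched the same way. Again, the per-PR records and field names below are illustrative assumptions, not the schema of any specific tool; “us” stands for whichever reviewer or group is being measured.

```python
# Hypothetical per-PR data: total comment count, whether our reviewer
# participated, how many comments they made, and how many of those
# comments led to a follow-on commit.
prs = [
    {"comments": 4, "reviewed_by_us": True,  "our_comments": 3, "follow_on_commits": 1},
    {"comments": 0, "reviewed_by_us": False, "our_comments": 0, "follow_on_commits": 0},
    {"comments": 2, "reviewed_by_us": True,  "our_comments": 2, "follow_on_commits": 1},
    {"comments": 3, "reviewed_by_us": True,  "our_comments": 1, "follow_on_commits": 0},
]

# Unreviewed PRs: share of PRs that received no comments at all.
unreviewed_prs = sum(p["comments"] == 0 for p in prs) / len(prs)  # 1 of 4

# Involvement: share of PRs the reviewer participated in.
involvement = sum(p["reviewed_by_us"] for p in prs) / len(prs)  # 3 of 4

# Influence: ratio of follow-on commits to the reviewer's comments.
total_our_comments = sum(p["our_comments"] for p in prs)
influence = sum(p["follow_on_commits"] for p in prs) / total_our_comments  # 2 of 6
```

Reaction Time follows the same averaging pattern as Responsiveness above, just measured from the reviewer’s side.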