Industry benchmarks

Tags: Flow

Industry benchmarks are reference points drawn from the software development industry. Use them to see how your team compares with industry norms; this can help you and your team identify potential areas of growth.


Where do industry benchmarks come from?

These reference points come from research conducted at Flow that examined over 7 million anonymized, redacted commits made by nearly 88,000 software engineers over the course of 2016.

The study set out to understand the contribution patterns of software developers, with an in-depth look at the four code fundamental metrics: Active Days per Week (Coding Days), Commits per Active Day, Impact, and Efficiency.
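For illustration only, the two activity-based metrics can be approximated directly from commit timestamps. This is a minimal sketch, not Flow's actual implementation, and Impact and Efficiency are composite metrics that can't be reduced to a one-line formula, so they aren't shown:

    from datetime import datetime

    # Hypothetical sample: one engineer's commit timestamps over one week.
    commits = [
        datetime(2016, 3, 7, 9, 15), datetime(2016, 3, 7, 14, 2),
        datetime(2016, 3, 8, 10, 40),
        datetime(2016, 3, 10, 11, 5), datetime(2016, 3, 10, 16, 30),
        datetime(2016, 3, 10, 17, 45),
    ]

    # Coding Days: distinct calendar days with at least one commit.
    coding_days = {c.date() for c in commits}
    active_days_per_week = len(coding_days)                    # 3

    # Commits per Active Day: total commits divided by coding days.
    commits_per_active_day = len(commits) / len(coding_days)   # 2.0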

Study Summary

Empirically, productive engineers are more active on a daily basis and commit more frequently throughout the day.

Three contribution profile types emerged:

  • Leading contributors - These contributors are in the upper 10th percentile of engineers.
  • Primary contributors - These contributors tend to be the primary, full-time engineers.
  • Occasional contributors - These contributors are usually team leads, QA engineers, DevOps engineers, architects, or database engineers: team members who aren't expected to be as active in the code base as your primary engineers.

The study demonstrates that engineers who commit more frequently (i.e., leading contributors) also tend to complete more monthly deliverables (pull requests), while the group with the lowest commit profile (i.e., occasional contributors) tends to complete fewer deliverables over time.

Review and collaboration benchmarks

Our review and collaboration package provides a way for software teams to see the ground truth of what's happening in the code review process. The package is split into three sets of metrics: Submit, Review, and Team Collaboration. These metrics are intended to provide insight into how individuals collaborate with their peers during the code review process.

The recently released PR benchmarks—calculated from a study of over a half-million PRs—provide visibility into how other organizations are doing across the Review and Submit Fundamental metrics.

This article lists the metrics analyzed in the study, their definitions, and the industry benchmarks that were identified.

Benchmarks and definitions

The Submitter Metrics quantify how submitters are responding to comments, engaging in discussion, and incorporating suggestions. The Reviewer Metrics provide a gauge for whether reviewers are providing thoughtful, timely feedback.

Each metric has a “typical” and a “leading” benchmark: typical represents the median organization, where the bulk of organizations fall, and leading represents organizations in the 90th percentile.
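As a minimal sketch of that idea, assuming "typical" is the median of per-organization values and "leading" is the 90th percentile of performance, the benchmarks could be derived as follows. The data is hypothetical; note that for time-based metrics such as Responsiveness, better means a lower value, so the leading figure comes from the fast end of the distribution:

    import statistics

    def benchmarks(values, lower_is_better=False):
        """Return (typical, leading) benchmarks for a list of values."""
        ordered = sorted(values)
        typical = statistics.median(ordered)
        idx = round(0.9 * (len(ordered) - 1))  # 90th-percentile position
        # For time-based metrics, the leading value sits at the low end.
        leading = ordered[len(ordered) - 1 - idx] if lower_is_better else ordered[idx]
        return typical, leading

    # Hypothetical per-organization Responsiveness values, in hours.
    responsiveness_hours = [8, 6, 5, 7, 2, 1.5, 6, 9, 3, 6, 12]
    print(benchmarks(responsiveness_hours, lower_is_better=True))  # (6, 2)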

Submitter Metrics

Responsiveness is the average time it takes to respond to a comment with either another comment or a code revision. The typical industry benchmark for Responsiveness is 6 hours and the leading benchmark is 1.5 hours. 

Comments Addressed is the percentage of Reviewer comments that were responded to with a comment or a code revision. The typical industry benchmark for Comments Addressed is 30% and the leading benchmark is 45%.

Receptiveness is the ratio of follow-on commits to comments. It’s important to remember that Receptiveness is a ‘goldilocks’ metric: you’d never expect it to reach 100%, and if it did, that would indicate a fairly unhealthy dynamic where every single comment led to a change. The expected range for Receptiveness is 10-20%.

Unreviewed PRs is the percentage of PRs submitted that had no comments. The typical industry benchmark for Unreviewed PRs is 20% and the leading benchmark is 5%.
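As a purely illustrative sketch, all four submitter metrics can be computed from per-comment review events. The data structure and field names below (response_after, led_to_commit) are hypothetical, invented for this example rather than taken from Flow:

    from datetime import timedelta

    # Hypothetical review data: each reviewer comment records how long the
    # submitter took to respond (None = never) and whether the response was
    # a follow-on commit rather than a reply.
    prs = [
        {"comments": [
            {"response_after": timedelta(hours=2), "led_to_commit": True},
            {"response_after": timedelta(hours=5), "led_to_commit": False},
            {"response_after": None, "led_to_commit": False},
        ]},
        {"comments": []},  # an unreviewed PR
        {"comments": [
            {"response_after": timedelta(hours=8), "led_to_commit": True},
        ]},
    ]

    all_comments = [c for pr in prs for c in pr["comments"]]
    answered = [c for c in all_comments if c["response_after"] is not None]

    # Responsiveness: average time to respond to a comment.
    responsiveness = sum((c["response_after"] for c in answered), timedelta()) / len(answered)

    # Comments Addressed: share of reviewer comments that got any response.
    comments_addressed = len(answered) / len(all_comments)

    # Receptiveness: ratio of follow-on commits to comments.
    receptiveness = sum(c["led_to_commit"] for c in all_comments) / len(all_comments)

    # Unreviewed PRs: share of PRs with no comments at all.
    unreviewed_prs = sum(not pr["comments"] for pr in prs) / len(prs)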

Read more about Submitter Metrics.

Reviewer Metrics

Reaction Time is the average time it takes to respond to a comment. The typical industry benchmark for Reaction Time is 18 hours and the leading benchmark is 6 hours.

Involvement is the percentage of PRs a reviewer participated in. The typical industry benchmark for Involvement is 80% and the leading benchmark is 95%. However, Involvement is a highly context-dependent metric. At an individual or team level, higher is not necessarily better, as it can point to people being overly involved in the review process. That said, there are situations where you’d expect Involvement to be very high, sometimes from a particular person on the team and other times from a group working on a specific project.

Influence is the ratio of follow-on commits to comments made in PRs. The expected range for Influence is 20-40%.
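In the same illustrative spirit, the three reviewer metrics can be sketched from hypothetical reviewer activity. Again, the names below are invented for this example, not Flow's implementation:

    from datetime import timedelta

    team_pr_count = 10
    prs_participated_in = 8  # PRs where this reviewer commented

    # Each comment the reviewer made: how long they took to react, and
    # whether the comment led to a follow-on commit by the submitter.
    review_comments = [
        {"reaction": timedelta(hours=4),  "led_to_commit": True},
        {"reaction": timedelta(hours=20), "led_to_commit": False},
        {"reaction": timedelta(hours=12), "led_to_commit": True},
    ]

    # Reaction Time: average time to respond with a comment.
    reaction_time = sum((c["reaction"] for c in review_comments), timedelta()) / len(review_comments)

    # Involvement: share of the team's PRs the reviewer participated in.
    involvement = prs_participated_in / team_pr_count  # 0.8, i.e. the "typical" 80%

    # Influence: ratio of follow-on commits to comments made.
    influence = sum(c["led_to_commit"] for c in review_comments) / len(review_comments)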

Read more about Reviewer Metrics.


If you need help, please email support@pluralsight.com for 24/7 assistance.