Industry benchmarks

Tags: Flow

Industry benchmarks are reference points for metrics based on the software development industry. Use industry benchmarks to see how your team compares with the industry. This helps you and your team identify potential areas of growth.

Who can use this?

  • Core
  • Plus


Where industry benchmarks come from

These reference points come from research conducted by Flow. The researchers studied over 7 million anonymized, redacted commits from nearly 88,000 software engineers, with a data set that spans all of 2016.

The study sought to understand the contribution patterns of software developers, with an in-depth look at the four coding metrics: Coding days, Commits per day, Impact, and Efficiency.

Study summary

Productive engineers are more active on a daily basis and commit more frequently throughout the day.

Three contribution profile types emerged:

  • Leading contributors: Contributors in the upper 10th percentile of engineers.
  • Primary contributors: Contributors that tend to be the primary, full-time engineers.
  • Occasional contributors: Contributors that tend to be team leads, QA, DevOps, architects, and database engineers, or any team member who is not expected to be as active in the code base as your primary engineers.

The study demonstrates that leading contributors, the engineers who commit most frequently, tend to complete more deliverables each month. Occasional contributors, the group with the lowest commit activity, tend to complete fewer deliverables over time.


Benchmark metrics

Benchmark metrics provide a way for software teams to see the ground truth of what’s happening in the code review process. Benchmark metrics include Submit, Review, and Team collaboration metrics.

Review and Team collaboration metrics are intended to provide insight into how individuals collaborate with their peers during the code review process.

The pull request benchmarks were calculated from a study of over a half-million PRs. These metrics provide visibility into how other organizations are doing across Review and Submit metrics.

Benchmarks and definitions

The Submit metrics quantify how submitters respond to comments, engage in discussion, and incorporate suggestions. The Review metrics gauge whether reviewers provide thoughtful, timely feedback.

Each metric has a typical and a leading benchmark. The typical benchmark is the median: it's where the bulk of organizations sit. The leading benchmark represents organizations in the 90th percentile.
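
To illustrate that relationship, here is a minimal sketch assuming a hypothetical set of per-organization scores for a higher-is-better metric such as Comments Addressed. Flow's actual data set and aggregation method are not shown here.

```python
# A minimal sketch of how typical and leading benchmarks relate to a
# distribution of per-organization scores. The values below are
# hypothetical; Flow's real data set and aggregation are not public.
import statistics

# Hypothetical per-organization "Comments Addressed" percentages.
org_scores = [18, 22, 25, 28, 30, 31, 34, 38, 45, 52]

typical = statistics.median(org_scores)               # 50th percentile: the median organization
leading = statistics.quantiles(org_scores, n=10)[-1]  # last cut point: the 90th percentile

print(f"Typical benchmark: {typical}%")
print(f"Leading benchmark: {leading:.0f}%")
```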


Submit metrics

Responsiveness is the average number of hours it takes for a submitter to respond after a reviewer action. The typical industry benchmark for Responsiveness is 6 hours and the leading benchmark is 1.5 hours.

Note: This benchmark only applies to the Flow Enterprise Server version of Responsiveness. Flow Cloud calculates Responsiveness differently and this benchmark does not apply.
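
To illustrate the general idea, the sketch below averages the gap between each reviewer action and the submitter's next response. The timestamps and pairing logic are assumptions for illustration only; this does not reproduce either Flow calculation.

```python
# A minimal sketch of the idea behind Responsiveness: the average number
# of hours between a reviewer action and the submitter's next response.
# The event pairs are hypothetical and the pairing logic is an assumption.
from datetime import datetime

# Hypothetical (reviewer_action, submitter_response) timestamp pairs.
events = [
    (datetime(2024, 1, 8, 9, 0),   datetime(2024, 1, 8, 13, 30)),  # 4.5 h
    (datetime(2024, 1, 9, 15, 0),  datetime(2024, 1, 10, 8, 0)),   # 17.0 h
    (datetime(2024, 1, 10, 10, 0), datetime(2024, 1, 10, 11, 15)), # 1.25 h
]

gaps = [(response - action).total_seconds() / 3600 for action, response in events]
responsiveness = sum(gaps) / len(gaps)

print(f"Responsiveness: {responsiveness:.1f} hours")  # compare: typical 6 h, leading 1.5 h
```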

Unreviewed PRs is the percentage of PRs submitted with no comments. The typical industry benchmark for Unreviewed PRs is 20% and the leading benchmark is 5%.

Comments Addressed is the percentage of Reviewer comments that were responded to with a comment or a code revision. The typical industry benchmark for Comments Addressed is 30% and the leading benchmark is 45%.

Receptiveness is the ratio of follow-on commits to comments. The ideal number for Receptiveness is not 100%. A Receptiveness that is too high indicates an unhealthy dynamic where every single comment leads to a change. The expected range for Receptiveness is 10-20%.
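
The three ratio-style Submit metrics can be illustrated together. In the sketch below, the PR record shape is a hypothetical simplification; Flow derives these numbers from actual review activity.

```python
# A minimal sketch of the ratio-style Submit metrics over a set of PRs.
# The record shape is hypothetical: "comments" counts reviewer comments,
# "addressed" counts comments answered with a reply or a code revision,
# and "follow_on_commits" counts commits pushed after comments.
prs = [
    {"comments": 4, "addressed": 2, "follow_on_commits": 1},
    {"comments": 0, "addressed": 0, "follow_on_commits": 0},  # an unreviewed PR
    {"comments": 5, "addressed": 3, "follow_on_commits": 1},
]

unreviewed_pct = 100 * sum(pr["comments"] == 0 for pr in prs) / len(prs)

total_comments = sum(pr["comments"] for pr in prs)  # assumed nonzero in this sample
comments_addressed_pct = 100 * sum(pr["addressed"] for pr in prs) / total_comments
receptiveness_pct = 100 * sum(pr["follow_on_commits"] for pr in prs) / total_comments

print(f"Unreviewed PRs:     {unreviewed_pct:.0f}%  (typical 20%, leading 5%)")
print(f"Comments Addressed: {comments_addressed_pct:.0f}%  (typical 30%, leading 45%)")
print(f"Receptiveness:      {receptiveness_pct:.0f}%  (expected range 10-20%)")
```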

Note: Comments addressed and Receptiveness metrics are only available in Flow Enterprise Server.


Review metrics

Reaction time is the average number of hours it takes for an individual or team to respond to a set of PRs as a reviewer. The typical industry benchmark for Reaction time is 18 hours and the leading benchmark is 6 hours.

Note: This benchmark only applies to the Flow Enterprise Server version of Reaction time. Flow Cloud calculates Reaction time differently and this benchmark does not apply.

Involvement is the percentage of PRs a reviewer commented on, committed to, reviewed, or approved. The typical industry benchmark for Involvement is 80% and the leading benchmark is 95%.

However, it’s important to note that Involvement is a highly context-dependent metric. Higher is not always better: a very high value can indicate a team that is too involved in reviews. High Involvement is expected in certain situations, such as a group working on a specific project.

Influence is the ratio of follow-on commits to comments made in PRs. The expected range for Influence is 20-40%.
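
A similar sketch, again with a hypothetical, simplified record shape, shows how Involvement and Influence could be computed for a single reviewer.

```python
# A minimal sketch of the ratio-style Review metrics for one reviewer.
# The record shape is hypothetical: "involved" marks PRs the reviewer
# commented on, committed to, reviewed, or approved; "comments" and
# "follow_on_commits" feed the Influence ratio.
prs = [
    {"involved": True,  "comments": 3, "follow_on_commits": 1},
    {"involved": True,  "comments": 2, "follow_on_commits": 1},
    {"involved": False, "comments": 0, "follow_on_commits": 0},
]

involvement_pct = 100 * sum(pr["involved"] for pr in prs) / len(prs)

total_comments = sum(pr["comments"] for pr in prs)  # assumed nonzero in this sample
influence_pct = 100 * sum(pr["follow_on_commits"] for pr in prs) / total_comments

print(f"Involvement: {involvement_pct:.0f}%  (typical 80%, leading 95%)")
print(f"Influence:   {influence_pct:.0f}%  (expected range 20-40%)")
```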

Note: Involvement and Influence metrics are only available in Flow Enterprise Server.


If you need help, please contact Pluralsight Support.