Use this article to learn more about some of Flow's most common metrics.
Coding metrics
Coding days
- What: Any day where a developer committed code.
- Why: Coding days answers the question, “How often are my team members able to create solutions in the codebase?” This is a simple metric that provides insight into blockers that may be limiting a team’s ability to deliver. A rough sketch of the count appears below.
- Learn more about Coding days.
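For illustration only, coding days can be pictured as the number of distinct dates on which an author committed. The commit-log shape and names below are assumptions for the sketch, not Flow's internals:

```python
from datetime import date

# Hypothetical commit log: (author, commit date) pairs.
commits = [
    ("ana", date(2024, 5, 1)),
    ("ana", date(2024, 5, 1)),
    ("ana", date(2024, 5, 3)),
    ("ben", date(2024, 5, 2)),
]

def coding_days(commits):
    """Count the distinct days on which each author committed code."""
    days_by_author = {}
    for author, day in commits:
        days_by_author.setdefault(author, set()).add(day)
    return {author: len(days) for author, days in days_by_author.items()}

print(coding_days(commits))  # {'ana': 2, 'ben': 1}
```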
Rework
- What: Code that is deleted or rewritten within 30 days of being written.
- Why: This metric tells you how much and how often an individual is rewriting their own work. Large deviations from your rework baseline are an early indicator that something may be affecting the health of your development lifecycle.
- Learn more about Rework.
Commits per day
- What: The number of commits a developer made on days when they were able to commit. A developer receives credit for a commit on any branch when the commit is pushed to the remote Git server.
- Why: This metric helps visualize how team members commit their work. Are they making multiple small commits, or keeping their code locally and submitting one large commit at the end of a sprint? Teams running CI/CD, or teams that place a heavy emphasis on small and frequent solutions, place a high value on this metric. A rough sketch of the calculation follows below.
- Learn more about Commits per day.
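Reusing the same hypothetical commit-log shape, commits per day divides total commits by the number of days with at least one commit; this is a sketch of the idea, not Flow's implementation:

```python
from datetime import date

# Hypothetical commit log: (author, commit date) pairs.
commits = [
    ("ana", date(2024, 5, 1)),
    ("ana", date(2024, 5, 1)),
    ("ana", date(2024, 5, 3)),
    ("ben", date(2024, 5, 2)),
]

def commits_per_day(commits):
    """Average commits per active day: total commits divided by the
    number of distinct days with at least one commit."""
    stats = {}
    for author, day in commits:
        entry = stats.setdefault(author, {"commits": 0, "days": set()})
        entry["commits"] += 1
        entry["days"].add(day)
    return {author: e["commits"] / len(e["days"]) for author, e in stats.items()}

print(commits_per_day(commits))  # {'ana': 1.5, 'ben': 1.0}
```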
Efficiency
- What: The percentage of all contributed code that is productive work.
- Why: Efficiency answers the question, “How much of an author’s work is not rewriting their own code?” It relates directly to Rework, the quantity of work the author rewrites. A rough sketch of the calculation follows below.
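A minimal sketch of the arithmetic implied by this definition, treating efficiency as the productive share of all contributed lines (the weighting Flow actually applies may differ):

```python
def efficiency(raw_lines, rework_lines):
    """Share of contributed code that is productive, i.e. not a rewrite
    of the author's own recent work."""
    if raw_lines == 0:
        return 0.0
    productive_lines = raw_lines - rework_lines
    return productive_lines / raw_lines

# 1,200 lines contributed, 180 of which reworked the author's own recent code.
print(efficiency(1200, 180))  # 0.85 -> 85% of the contributed code was productive
```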
Help others
- What: Code where a developer modifies someone else’s work that is less than 30 days old.
- Why: This metric helps you determine which team members are spending more time helping others than on their own work.
Impact
- What: Severity of edits to the codebase, as compared to repository history.
- Why: Impact attempts to answer the question, “Roughly how much cognitive load did the engineer carry when implementing these changes?” It can help identify team members' patterns in taking on projects that are similar in nature and encourage them to diversify their work.
- Learn more about Impact.
Legacy refactor
- What: Code that updates or edits code older than 30 days.
- Why: This metric shows you how much time an individual or team spends paying down technical debt.
New work
- What: Brand new code that does not replace other code.
- Why: This metric shows you how much of an individual’s or team’s code goes toward writing new features and products.
- Learn more about New work.
Productive throughput
- What: Any code that is not Rework is productive work. Productive throughput is the net lines of code after Rework.
- Why: Productive throughput is a simple output metric that shows you how much of the code you contribute sticks around longer than 30 days.
Raw throughput
- What: All code contributed, regardless of work type.
- Why: This metric shows you how much code is written, across all work types.
Commit complexity
- What: A measure of the riskiness associated with a commit. The calculation includes factors such as the size of the commit, the number of files touched, and how concentrated the edits are.
- Why: This metric allows team leads to identify and review the most anomalous work first, which maximizes their most precious resources: time and attention.
- Learn more about Commit complexity.
tt100 Productive
- What: The amount of time it takes to contribute 100 lines of productive code, after Rework.
- Why: This metric can help you identify how an individual is progressing on any given project. If they are writing lots of code very quickly but taking longer to write productive code, it can be a signal of a blocker.
- Learn more about tt100.
tt100 Raw
- What: The amount of time it takes to contribute 100 lines of raw code, before Rework.
- Why: This metric can be used with tt100 Productive to see how much time it takes an individual to write productive code in relation to raw code; the sketch below illustrates both variants.
- Learn more about tt100.
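A minimal sketch of the arithmetic behind both tt100 variants, assuming you already have an author's active hours and line counts for a window; how Flow measures active time is not shown here:

```python
def tt100(active_hours, lines_contributed):
    """Hours needed to contribute 100 lines of code.
    Pass raw line counts for tt100 Raw, and post-Rework (productive)
    line counts for tt100 Productive."""
    if lines_contributed <= 0:
        return float("inf")
    return active_hours / (lines_contributed / 100)

# Example: 20 active hours in a week, 800 raw lines, 500 of them productive.
print(tt100(20, 800))  # 2.5 hours per 100 raw lines (tt100 Raw)
print(tt100(20, 500))  # 4.0 hours per 100 productive lines (tt100 Productive)
```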
Submit metrics
Learn more about Submit metrics.
Responsiveness
- What: The time it takes a submitter to respond to a comment on their pull request with either another comment or a code revision.
- Why: This metric answers the question, “Are people responding to feedback in a timely manner?” Driving this metric down helps ensure PRs are reviewed and merged in an appropriate time frame. A rough sketch of the calculation follows below.
- Learn more about Responsiveness.
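For illustration, responsiveness can be pictured as the gap between a reviewer's comment and the submitter's next reply or code revision; the data shape and timestamps below are hypothetical:

```python
from datetime import datetime

def responsiveness(comment_time, submitter_events):
    """Time from a reviewer's comment to the submitter's next response
    (a reply or a code revision). Returns None if no response followed."""
    later = [t for t in submitter_events if t > comment_time]
    return (min(later) - comment_time) if later else None

comment = datetime(2024, 5, 6, 9, 0)
responses = [datetime(2024, 5, 6, 13, 30), datetime(2024, 5, 7, 10, 0)]
print(responsiveness(comment, responses))  # 4:30:00
```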
Unreviewed PRs
- What: The percentage of PRs without comments or approvals.
- Why: This metric shows you the proportion of PRs merged without being reviewed. It indicates whether you're limiting the risk of the solutions you provide customers by limiting the amount of work that does not get a second pair of eyes. A rough sketch of the calculation follows below.
- Learn more about Unreviewed PRs.
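A rough sketch of this percentage, using a hypothetical list of merged PRs with comment and approval counts:

```python
def unreviewed_pr_rate(merged_prs):
    """Share of merged PRs that received neither comments nor approvals."""
    if not merged_prs:
        return 0.0
    unreviewed = sum(
        1 for pr in merged_prs if pr["comments"] == 0 and pr["approvals"] == 0
    )
    return unreviewed / len(merged_prs)

merged_prs = [
    {"comments": 3, "approvals": 1},
    {"comments": 0, "approvals": 0},
    {"comments": 0, "approvals": 1},
    {"comments": 0, "approvals": 0},
]
print(unreviewed_pr_rate(merged_prs))  # 0.5 -> half of the merged PRs were unreviewed
```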
PR iteration time
- What: The time from the first comment on a pull request to the final commit on a pull request.
- Why: PR iteration time helps you understand how long it takes to implement changes requested on pull requests.
- Learn more about PR iteration time.
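A rough sketch of this time span, assuming you have the first comment timestamp and the PR's commit timestamps (hypothetical data, not Flow's implementation):

```python
from datetime import datetime

def pr_iteration_time(first_comment_at, commit_times):
    """Time from the first comment on a PR to the final commit on that PR.
    Returns None if no commit landed after the first comment."""
    final_commit = max(commit_times, default=None)
    if final_commit is None or final_commit <= first_comment_at:
        return None
    return final_commit - first_comment_at

first_comment = datetime(2024, 5, 6, 9, 0)
commit_times = [datetime(2024, 5, 5, 17, 0), datetime(2024, 5, 7, 11, 0)]
print(pr_iteration_time(first_comment, commit_times))  # 1 day, 2:00:00
```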
Iterated PRs
- What: The percentage of pull requests with at least one follow-on commit.
- Why: Iterated PRs helps you see how many of your team’s pull requests require more work before being merged.
- Learn more about Iterated PRs.
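A rough sketch of this percentage, with hypothetical per-PR follow-on commit counts:

```python
def iterated_pr_rate(prs):
    """Share of PRs with at least one follow-on commit (a commit added
    after the PR was opened for review)."""
    if not prs:
        return 0.0
    iterated = sum(1 for pr in prs if pr["follow_on_commits"] > 0)
    return iterated / len(prs)

prs = [{"follow_on_commits": 2}, {"follow_on_commits": 0}, {"follow_on_commits": 1}]
print(iterated_pr_rate(prs))  # ~0.67 -> two of the three PRs were iterated on
```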
Comments addressed
Note: Comments addressed is only available in Flow Enterprise Server.
- What: The percentage of comments to which a submitter responds.
- Why: This metric helps answer the question, “Are people acknowledging feedback from their teammates?” Use this metric to prompt and encourage healthy discussion in reviews.
- What's normal for the Industry?
  - Leading Contributors: 45%
  - Typical Contributors: 30%
Receptiveness
Note: Receptiveness is only available in Flow Enterprise Server.
- What: The percentage of comments the submitter accepts, as denoted by code revisions.
- Why: This answers the question, “Are people incorporating feedback from their teammates?” It looks at whether the PR submitter takes people’s feedback and incorporates it into their code.
- What's normal for the Industry?
  - Typical Range: 10%-20%
Review metrics
Learn more about Review metrics.
Reaction time
- What: The time it takes for reviewers to review a pull request or respond to a comment.
- Why: This metric answers the question, “Are reviewers responding to comments in a timely manner?” In practice, the goal is to drive this metric down. You generally want people to be responding to each other in a timely manner, working together to find the right solution and getting it to production.
- Learn more about Reaction time.
Thoroughly reviewed PRs
- What: The percentage of merged pull requests with at least one regular or robust comment.
- Why: Having thorough comments is positively related to code review quality and healthy team collaboration. Having too many pull requests merged without thorough review could be a sign of rubber-stamping during the code review process.
- Learn more about Thoroughly reviewed PRs.
Involvement
Note: Involvement is only available in Flow Enterprise Server.
- What: The percentage of pull requests that reviewers participated in.
- Why: Involvement aims to show you how many pull requests are reviewed and by whom. This is very context-dependent and answers the question, “Are some people more involved in reviews than others?”
- What's normal for the Industry?
  - Leading Contributors: 95%
  - Typical Contributors: 80%
Influence
Note: Influence is only available in Flow Enterprise Server.
- What: The ratio of follow-on commits made after reviewers commented.
- Why: This metric shows how often someone updates code based on reviewer comments. It gives you insight into your review process and helps you identify trends in how often submitters respond to comments with code revisions and additions.
- What's normal for the Industry?
  - Typical Range: 20%-40%
Review coverage
Note: Review coverage is only available in Flow Enterprise Server.
- What: The percentage of hunks commented on by reviewers.
- Why: This metric shows you how much of the code in the PR was actually reviewed by team members, as opposed to comments on the PR itself. It answers the question, “How much of each PR has been reviewed?” A rough sketch of the calculation follows below.
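A rough sketch of a hunk-level coverage percentage, assuming the diff has already been split into hunks and you know which ones received reviewer comments; Flow's actual parsing is not shown here:

```python
def review_coverage(total_hunks, commented_hunks):
    """Percentage of a PR's diff hunks that received at least one
    reviewer comment."""
    if total_hunks == 0:
        return 0.0
    return commented_hunks / total_hunks

# A PR whose diff contains 12 hunks, 3 of which were commented on by reviewers.
print(review_coverage(12, 3))  # 0.25 -> a quarter of the PR's hunks were reviewed
```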
Team collaboration metrics
Learn more about Team collaboration metrics.
Time to merge
- What: The time it takes from when pull requests are opened to when they are merged.
- Why: This metric gives you visibility into how long, on average, it takes your team to merge a PR.
- Learn more about Time to merge.
Time to first comment
- What: The time between when a pull request is opened and the time the first reviewer comments.
- Why: This metric answers the question, “On average, how long does it take for a reviewer to comment on a PR?” The lower this number, the fewer wait-states your team encounters and the faster you can move work through code review.
- Learn more about Time to first comment.
Follow-on commits
- What: The number of code revisions added to a pull request after it is opened for review.
- Why: Knowing the number of follow-on commits that are added to an open PR gives you visibility into the strength of your code review process. If you are seeing a trend of lots of follow-on commits being added, there may be a need for more planning and testing.
PR activity level
- What: A measure of how active a pull request is, on a scale of Low, Modest, Normal, Elevated, and High, calculated from the comment count, comment length, and recency of comments.
- Why: This metric helps you gauge how much chatter is happening around a PR without having to read the actual comments. Identify outliers, such as long-running PRs or PRs with unusually low or high activity, and nudge them forward in the review process.
Raw activity
- What: A raw count of comments and follow-on commits associated with a PR. Unlike the activity level, a PR's raw activity does not change as the pull request ages. Pull requests can be ordered by this count in increasing or decreasing order.
- Why: Allows you to find the most historically active PRs regardless of their age.
Ticket activity level
- What: A measure of how active a ticket is, on a scale of Low, Modest, Normal, Elevated, and High, calculated from the comment count, comment length, and recency of comments.
- Why: Similar to PR activity, Ticket activity visualizes where people’s attention is going in the review process. Identify outliers to keep features and projects on track for delivery, and review comments and discussions for tone, content, and material scope creep.
Sharing Index
- What: Measures how broadly information is being shared amongst a team by looking at who's reviewing whose PRs.
- Why: The Sharing Index helps you understand whether you are increasing the number of people that participate in code review, or whether you're consolidating the group of people that do code review for your organization. It can be helpful in identifying knowledge silos and/or individuals that are “alone on an island” when it comes to code review.
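Flow's exact Sharing Index formula isn't reproduced here. As a loose stand-in for the idea of review breadth, the sketch below counts how many distinct teammates review each submitter's PRs (the data and names are hypothetical):

```python
def review_breadth(reviews):
    """For each PR submitter, count the distinct reviewers who reviewed
    their PRs. A very narrow spread can point to knowledge silos."""
    reviewers_by_submitter = {}
    for submitter, reviewer in reviews:
        reviewers_by_submitter.setdefault(submitter, set()).add(reviewer)
    return {s: len(r) for s, r in reviewers_by_submitter.items()}

# Hypothetical (submitter, reviewer) pairs taken from merged PRs.
reviews = [("ana", "ben"), ("ana", "ben"), ("ana", "cat"), ("ben", "ana")]
print(review_breadth(reviews))  # {'ana': 2, 'ben': 1}
```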
Number of PRs reviewed
- What: Total number of PRs that were reviewed.
- Why: Gives you visibility into the total number of PRs that were reviewed.
Number of users reviewed
- What: Total number of submitters a user reviewed PRs for.
- Why: Understand whether the code review is being distributed to many different team members.