Here’s a brief overview of how things work behind the scenes:
For each PR, we calculate a Complexity metric. This is closely related to how we derive Impact, but tuned a bit for typical pull request patterns.
We look at:
- The amount of code in the change
- What percentage of the work is edits to old code
- The surface area of the change (think ‘number of edit locations’)
- The number of files affected
- The severity of changes when old code is modified
- How this change compares to others from the project history
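The signals above could be combined in many ways; here is a minimal sketch of one plausible scoring function. All field names, weights, and log transforms are assumptions for illustration — the actual formula is not described in this post.

```python
import math
from dataclasses import dataclass

@dataclass
class PrStats:
    """Raw signals for one pull request (all field names are illustrative)."""
    lines_changed: int    # total lines added + removed
    pct_old_code: float   # fraction of the change that edits existing code, 0-1
    edit_locations: int   # distinct edit locations ("surface area")
    files_affected: int   # number of files touched
    severity: float       # 0-1 severity score for modifications to old code

def complexity(pr: PrStats) -> float:
    # A simple weighted combination; log1p damps the size-based signals
    # so one huge file doesn't dominate. Weights here are made up.
    return (
        math.log1p(pr.lines_changed)
        + 2.0 * pr.pct_old_code
        + 0.5 * math.log1p(pr.edit_locations)
        + 0.3 * math.log1p(pr.files_affected)
        + 1.5 * pr.severity
    )
```

The key design property is monotonicity: a PR that is larger, touches more old code, or spreads across more locations always scores at least as high.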
Once we had a formula that passed the sniff test, we computed it for several million PRs and got a distribution that looks something like this:
We selected some appropriate breakpoints for Low, Medium, and High:
The first falls between the 45th and 50th percentiles, where there’s a slight jump in the Complexity data. The second falls in the 80th-85th percentile range, where the surface area of pull requests tends to spike prominently.
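Turning those percentile breakpoints into labels is straightforward. Here is a sketch assuming the cut points sit in the middle of the windows described above (the exact values used are not published):

```python
import numpy as np

def label_complexity(scores, low_pct=47.5, high_pct=82.5):
    """Bucket raw Complexity scores into Low/Medium/High.

    low_pct and high_pct are illustrative cut points chosen inside the
    45th-50th and 80th-85th percentile windows mentioned in the post.
    """
    scores = np.asarray(scores, dtype=float)
    low_cut, high_cut = np.percentile(scores, [low_pct, high_pct])
    labels = np.where(scores < low_cut, "Low",
             np.where(scores < high_cut, "Medium", "High"))
    return labels, (low_cut, high_cut)
```

Because the breakpoints are percentiles of the observed distribution rather than fixed thresholds, the buckets stay meaningful as the population of PRs shifts over time.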
We then sampled a bunch of PRs to see whether these breakpoints conformed to ‘kitchen logic’ — general intuition about what’s Complex and what’s not. They held up well.
Each PR is categorized as having low, medium, or high Complexity. Complexity can help you and your team understand which code needs additional review before it makes its way to production.
If you need help, please email firstname.lastname@example.org for 24/7 assistance.