How do you calculate Complexity?

Tags: Flow

Here’s a brief overview of how this works behind the scenes:

For each PR, we calculate a Complexity metric. This is closely related to how we derive Impact, but tuned a bit for typical pull request patterns.

We look at:

  1. The amount of code in the change
  2. What percentage of the work is edits to old code
  3. The surface area of the change (think ‘number of edit locations’)
  4. The number of files affected
  5. The severity of changes when old code is modified
  6. How this change compares to others from the project history
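
To make the list above concrete, here is a minimal sketch of how inputs like these could be combined into a single raw score. The field names and weights are hypothetical illustrations, not Flow's actual formula; the real metric also normalizes against the project's own history (factor 6).

```python
from dataclasses import dataclass

@dataclass
class PullRequestStats:
    """Hypothetical per-PR inputs; the names are illustrative, not Flow's schema."""
    lines_changed: int            # 1. amount of code in the change
    pct_edits_to_old_code: float  # 2. share of the work that edits existing code (0-1)
    edit_locations: int           # 3. surface area: number of distinct edit locations
    files_affected: int           # 4. number of files touched
    old_code_severity: float      # 5. severity of changes when old code is modified (0-1)

def raw_complexity(pr: PullRequestStats) -> float:
    """Combine the inputs into one unscaled score using made-up weights."""
    return (
        1.0 * pr.lines_changed
        + 50.0 * pr.pct_edits_to_old_code
        + 5.0 * pr.edit_locations
        + 10.0 * pr.files_affected
        + 40.0 * pr.old_code_severity
    )
```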

Once we had a formula that passed the sniff test, we calculated it for several million PRs and got a distribution that looks something like this:


We selected breakpoints for Low, Medium, and High:

The first falls between the 45th and 50th percentiles, where there’s a slight jump in the Complexity data. The second falls in the 80th to 85th percentile range, where the surface area of pull requests tends to spike prominently.
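
For intuition on how breakpoints like these turn a raw score into a category, here is a sketch that ranks a PR’s score against the project’s historical distribution and buckets it at roughly those percentiles. The cutoff values and the lookup itself are illustrative assumptions, not Flow’s implementation.

```python
import bisect

def complexity_band(score: float, historical_scores: list[float]) -> str:
    """Bucket a raw score into Low/Medium/High via percentile breakpoints.

    The ~47.5th and ~82.5th percentile cutoffs mirror the ranges described
    above; the exact numbers here are assumptions for illustration.
    """
    ranked = sorted(historical_scores)
    # Percentile of this score within the historical distribution.
    percentile = 100.0 * bisect.bisect_left(ranked, score) / len(ranked)
    if percentile < 47.5:
        return "Low"
    if percentile < 82.5:
        return "Medium"
    return "High"

history = [12.0, 35.5, 88.0, 140.0, 300.0]   # made-up historical raw scores
print(complexity_band(95.0, history))        # -> Medium
```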

We then sampled a set of PRs to check whether these breakpoints matched ‘kitchen logic’, that is, general intuition about what is and isn’t Complex. They held up well.

Each PR is categorized as low, medium, or high Complexity. Complexity can help you and your team identify code that needs additional review before it makes its way to production.



If you need help, please email support@pluralsight.com for 24/7 assistance.