Related: How do we know when a team or engineer is performing at the highest level? Is it subjective? Can it be measured?
tt100 stands for "Time to 100": the amount of time it takes an engineer to write 100 productive lines of code. It's a way to assess how quickly the codebase is being meaningfully changed.
How the time part is calculated
We start by looking at how much of the code 'sticks' over an extended period, such as a month, while screening out noise (e.g. someone committing a massive library). We then look at how many days each developer was actively coding, dropping empty days from the calculation, and only count those spans of time, so that days spent planning, on holiday, and so on don't move the needle. We assume 8 hours per active day, with some nuance in how blank days are handled.
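A minimal sketch of that time estimate, assuming daily per-developer line counts as input; the threshold, field names, and 8-hour assumption are illustrative, not Flow's actual implementation:

```python
from datetime import date

NOISE_THRESHOLD = 5000     # e.g. a vendored library dumped in one day
HOURS_PER_ACTIVE_DAY = 8   # assumed working hours on an active day

def active_hours(daily_lines: dict[date, int]) -> int:
    """Estimate coding hours: count only days with real, non-noise activity."""
    active_days = sum(
        1 for lines in daily_lines.values()
        if 0 < lines <= NOISE_THRESHOLD  # drop empty days and bulk imports
    )
    return active_days * HOURS_PER_ACTIVE_DAY
```

For example, a stretch with one normal day, one empty day, and one massive-library day counts as a single 8-hour active day.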
How it's useful
Flow is proposing time to 100 (tt100) as a valuable metric for benchmarking the strength of teams and individual engineers. It measures velocity by how quickly code is being produced, as opposed to something less concrete like tickets. But hang on: we all know mere effort is not the same as forward progress, which is why we use two different forms:
- tt100 Raw: The amount of time required to create 100 lines of any type of code, regardless of quality.
- tt100 Productive: Time required to create 100 lines of code, minus Churn. In other words, forward progress.
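The two forms can be sketched as simple ratios; the function names and inputs here are hypothetical, chosen only to make the definitions concrete:

```python
def tt100_raw(hours_worked: float, lines_written: int) -> float:
    """Hours to produce 100 lines of any type of code."""
    return hours_worked / lines_written * 100

def tt100_productive(hours_worked: float, lines_written: int, churned: int) -> float:
    """Hours to produce 100 lines that stick: written lines minus churn."""
    return hours_worked / (lines_written - churned) * 100
```

With 40 active hours, 2,000 lines written, and 1,200 of them churned, tt100 Raw is 2 hours while tt100 Productive is 5 hours.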
Let’s look at an example of how this works. Here we can see tt100 statistics for an engineer, who we’ll call Tom:
Looking at these metrics, we see a couple things:
1. Tom’s tt100 Raw is trending down over time. He’s writing more code, faster, when measured in an absolute sense. In January it took him over 2 hours to write 100 lines of code, and by June he’s cut that in half to ~1 hour.
2. However, Tom’s tt100 Productive is drifting in the wrong direction. Even though it takes him less time these days to write raw code, it used to take Tom ~3 hours to produce 100 productive lines, and now it’s taking him almost twice as long to reach those same 100 productive lines.
The delta between these two tt100 metrics is important: it’s a sign that Tom is checking in a ton of work but actually struggling compared to before. Knowing this, the team lead can intervene before the problem snowballs out of control. Together, they can discuss how to get things back on track.
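The gap between the two metrics directly reveals how much written code is sticking. Since both share the same hours worked, their ratio cancels to the fraction of lines that survive churn; the figures below are hypothetical:

```python
# Hypothetical tt100 figures for one engineer, in hours per 100 lines.
raw, productive = 1.0, 5.0

# raw = H / L * 100 and productive = H / (L - C) * 100,
# so raw / productive = (L - C) / L: the fraction of lines that stick.
sticking_fraction = raw / productive  # here, only 20% of written lines stick
```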
Common culprits for this kind of shift include both engineering and non-engineering factors:
- Dead end implementation paths
- Poor or shifting product requirements
- Lots of requested edits during code review
In the example shown above, this was the result of ill-defined requirements: Tom is working really hard, but a trial-and-error approach to defining feature requirements isn’t serving anyone. Knowing how much work is being wasted here, Tom and the team lead can now meet with the product owner and find out whether these late-stage adjustments are worth the extra expense and delayed delivery.
Historically, these kinds of conversations have been difficult to have in the absence of concrete data. With tt100 metrics, this becomes purely tactical: “The constant spec changes are causing this work to take 1.8 times longer than it normally does to deliver. We need to work toward clearer targets, or be comfortable with some significant delays.”