The Productivity Metric

Update, 18 Dec 2007: I've posted a followup to this essay: Value Velocity: A Better Productivity Metric?.

I'm often asked, "What productivity metrics should I report from my team?" I usually don't give a direct answer, because the answer is complicated. But okay, you've pushed me enough times. Here's the answer--it works for any team, agile or not.

Report dollars earned over time. (Or dollars saved, or ROI, or IRR, or other business value metric.) It's the only accurate measurement. Don't believe me? Read on...

Productivity is defined as "amount of output per unit of time," or, more succinctly, "output/time."

"Time" is easy to measure on a development team, but "output" is much harder. The classic measurement is "source lines of code," aptly abbreviated as "SLOC" (rhymes with "crock"). The problem with SLOC is that, given two programs with the same features, the smaller one is probably better designed and easier to maintain. Furthermore, SLOC correlates with cost of development and number of defects: the more SLOC, the higher the cost and the greater the number of defects.
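The problem can be made concrete with a toy sketch (the code below is illustrative, not from the essay): two snippets with identical behavior, where a naive SLOC count scores the bloated one as "more productive."

```python
# Two hypothetical implementations of the same feature: summing a list of prices.
compact = """def total(prices):
    return sum(prices)
"""

verbose = """def total(prices):
    result = 0
    for price in prices:
        result = result + price
    return result
"""

def sloc(source):
    """Naive SLOC count: the number of non-blank source lines."""
    return len([line for line in source.splitlines() if line.strip()])

print(sloc(compact), sloc(verbose))  # 2 5 -- the metric rewards the bloated version
```

Both functions deliver exactly the same feature, but the verbose one scores more than twice as high on SLOC while being harder to read and maintain.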

Don't measure lines of code. You'll just reward defects and high costs.

So, by defining productivity as cranking out tons of code, we encourage sloppy design, high costs, and lots of defects. That's not what we want.

There's an idea called "feature points," or "function points," that tries to get around the limitations of SLOC, but it still depends on implementation details.

Okay, so measuring the amount of code doesn't work. What about velocity?

Velocity is an Extreme Programming term that's been adopted by many agile practitioners. Velocity is the sum of the estimates of the stories completed in an iteration. If the programmers estimated perfectly, it would simply measure the number of hours the programmers worked, minus interruptions. In practice, estimates are never 100% accurate, so velocity actually measures a strange combination of estimate accuracy and hours worked. It's a great planning tool, but as a metric, it has serious flaws.
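As a sketch (the function and data are mine, not part of XP), velocity is just the sum of the estimates of the stories that got finished, which is exactly why inflating the estimates raises it without producing any extra output:

```python
def velocity(stories):
    """Sum the estimates of the stories completed this iteration.

    stories: list of (estimate, completed) pairs; estimates in points or ideal hours.
    """
    return sum(estimate for estimate, completed in stories if completed)

iteration = [(3, True), (5, True), (2, False), (8, True)]
print(velocity(iteration))  # 16: only completed stories count

# Same work, estimates doubled: velocity "improves" to 32 with no extra output.
inflated = [(2 * estimate, completed) for estimate, completed in iteration]
print(velocity(inflated))  # 32
```

Nothing about the team's actual output changed between the two numbers; only the estimates did.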

Velocity is not productivity. It's hours worked and estimate accuracy.

The only ways to increase velocity are for programmers to work more hours, decrease interruptions, or provide inflated estimates. Decreasing interruptions is good, but often out of the programmers' control. The things that are in their control (working more hours, inflating estimates) aren't behaviors we want. Even if estimates are never changed after they've been made, the easiest way to increase velocity is to cut back on refactoring, testing, and other important code quality tasks. That only hurts in the long run.

A closely related idea is to simply count the number of stories. Since you're supposed to split and combine stories that are too big or too small, this is about the same as measuring velocity, and has the same problems.

Similarly, some people suggest that you count the number of features that the programmers deliver. The problem here is coming up with an objective and commonly accepted definition of "feature." (And if this is going to be a performance measurement, you had better believe that any definition will be hotly debated.) That's what the feature points crowd tries to do, but ultimately there doesn't seem to be any way to do so that doesn't end up talking about implementation details like "number of screens," "number of database tables," and "number of reports."

There is one way of defining output for a programming team that does work: look at the impact of the team's software on the business. You can measure revenue, return on investment, or some other number that reflects business value.

Of course, this value-based measure of productivity requires the whole team to focus on delivering software that makes a difference to the company. It means releasing software quickly so that the team gets out of the red. It means acknowledging the supreme importance of product vision and understanding how users will really benefit from the software. It means working closely with business folks to come up with effective, low cost compromises. It means delivering software that is resilient to change and can continue to grow and support features for years to come.

I can live with that.

PS: The Poppendiecks deserve a lot of credit for influencing my thinking on this issue. Also, Martin Fowler said something similar two years ago.

PPS: I would love to discuss feature points further with a staunch defender. Contact me.
