When Capers Jones published Assessment and Control of Software Risks (Yourdon Press, 1994), he identified the most serious software risk in IT projects as “Inaccurate Metrics,” and the second most serious software risk as “Inadequate Measurement”. I remember being startled when I first read that back in 1995—they certainly weren’t what I would have chosen—and other authorities in the field criticized his choices. Yet, in the intervening years, I have moved closer and closer to Jones’ point of view.
Let me explain.
“Metric” is just a shorthand term for “that which is measured.” The idea is that by defining and measuring one or more metrics associated with some process, you can reach conclusions about that process. For example, in a manufacturing plant that makes widgets, a useful metric might be the number of faulty widgets that are detected and scrapped, while an even more useful metric might be the number of faulty widgets that aren’t detected at the plant but that are returned or complained about by unhappy customers.
The first metric tells you something about the quality of your manufacturing process, while the second tells you something about the effectiveness of your in-plant quality testing.
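As a quick sketch of how these two metrics work together (the counts here are hypothetical, purely for illustration):

```python
# Hypothetical monthly counts for a widget plant (illustrative numbers only).
widgets_produced = 10_000
scrapped_in_plant = 120      # faulty widgets caught by in-plant testing
returned_by_customers = 30   # faulty widgets that escaped to customers

total_faulty = scrapped_in_plant + returned_by_customers

# Metric 1: overall defect rate -- quality of the manufacturing process.
defect_rate = total_faulty / widgets_produced

# Metric 2: escape rate -- effectiveness of in-plant quality testing.
escape_rate = returned_by_customers / total_faulty

print(f"Defect rate: {defect_rate:.1%}")   # 1.5%
print(f"Escape rate: {escape_rate:.1%}")   # 20.0%
```

Note that both metrics are counts of concrete, observable events, which is exactly what makes them useful.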
Likewise, in managing your IT project, you would like to have metrics that you can use to measure your project. Unfortunately, as Jones noted, most projects have problems with this.
For a metric to be truly useful, it needs to have three characteristics. First, it needs to be informative or predictive; that is, it needs to give you relevant and useful information either about what you've achieved so far or about when you're going to meet some milestone. This may seem obvious, but the single most popular metric in software engineering, the number of source lines of code (SLOC), has little informative value, particularly in light of practices such as refactoring.
What’s more, the number of source lines of code has no real predictive value, since it’s perfectly possible for the SLOC value to increase constantly yet for the project to get no closer to completion and, in some cases, to get farther away.
Second, the metric needs to be objective; that is, the metric’s value shouldn’t depend on who’s doing the measuring. Again, that seems obvious, yet the second most popular metric in software engineering is probably the “walking around” metric: You walk around to each developer and ask her or him, “How close to completion is [the module, class, subsystem, etc.] you’re working on?” And when he or she answers, “Oh, about 70 percent done,” you ask, “When do you think you’ll be finished, then?” to which he or she answers, “Oh, in about two weeks.”
This leads to two key laws of metrics:
Weinberg’s Law of Metrics: “That which gets measured gets fudged.”
The Metric Law of 90s: “The first 90 percent of a development project takes 90 percent of the schedule. The remaining 10 percent of the project takes the other 90 percent of the schedule.”
Weinberg’s Law simply notes that if you’re asking me to pull a metric out of my, ah, head, I am most likely going to give you something that makes me look good—or at least lets me avoid any blame. The Law of 90s reflects the tendency in software projects to do all the easy parts first, which leads to a false sense of progress and completion, and thus inflated values for self-reported metrics.
Third, if possible, the metric should be automated; that is, you should be able to calculate the metric with the click of a mouse or the press of a key. This is important for several critical reasons. First, it goes a long way toward establishing the metric’s objectivity, since the value returned won’t—or at least shouldn’t—care who clicks the mouse/presses the key. Next, it makes collecting the metric painless and undemanding of human effort. This is important because of a third law of metrics:
The Metric Law of Least Resistance: “The more human effort required to calculate a metric, the less often (and less accurately) it will be calculated, until it is abandoned or ignored altogether.”
Finally, the time spent by your IT engineers collecting and calculating that metric is time that they are not spending doing their actual jobs.
So, for your IT project, you want metrics that are informative (and, if possible, predictive), objective, and automated. In other words, you want to press a button and get a report that gives you some degree of information about how much progress has been made and when the project is likely to be completed.
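As a sketch of what such a push-button report might draw on (the data structure and numbers here are hypothetical; in practice the counts would come from your version control system, test runner, and issue tracker):

```python
from dataclasses import dataclass

@dataclass
class ProjectSnapshot:
    """Hypothetical automated counts, pulled without human intervention
    from version control, the test runner, and the issue tracker."""
    requirements_total: int
    requirements_verified: int   # requirements with passing acceptance tests
    defects_open: int
    defects_closed: int

def progress_report(s: ProjectSnapshot) -> str:
    verified_pct = s.requirements_verified / s.requirements_total
    return (
        f"Requirements verified: {s.requirements_verified}/{s.requirements_total} "
        f"({verified_pct:.0%})\n"
        f"Defects: {s.defects_open} open, {s.defects_closed} closed"
    )

print(progress_report(ProjectSnapshot(200, 130, 45, 310)))
```

The point is not these particular metrics but their properties: each count is objective (no one is asked how done they feel), and the report costs nothing to produce, so it will actually get produced.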
This discussion continues in Part II of this article.
[This is adapted from an article I originally wrote for the online version of Baseline.]
About the Author: Bruce F. Webster is Principal and Founder at Bruce F. Webster & Associates, as well as an Adjunct Professor in the BYU Computer Science Department. He works with organizations to help them with troubled or failed information technology (IT) projects. He has also worked on several dozen legal cases as a consultant and as a testifying expert, both in the United States and Japan. He can be reached at 303.502.4141 or at firstname.lastname@example.org.