
Lies, Damned Lies, and Project Metrics (Part II)

July 11, 2013

[Part I is here.]

In my previous post, I talked about the use of metrics in IT project management and the three qualities of an ideal metric: informative and preferably predictive, objective, and automated. The ideal set of metrics would tell you when your IT project is going to ship; these metrics would give you the same answer no matter who calculated them; and, in fact, the computer should calculate them for you or for anyone else who asks.

And then you woke up.

A predictive IT project metric (or a set thereof) would be able to measure—or at least estimate—the gap between where the system under development is now and where it needs to be to ship or go into production.


What you want to know, first and foremost, is the amount of time that will pass from now (“current spot”) to when the project ships/enters production (“project end”). But that in turn depends on many factors, including:

  • The amount of invention (novel problem solving) that still has to occur.
  • The amount of discovery (e.g., running into roadblocks and dead ends) that still has to occur.
  • The adequacy of the current architecture, design and implementation.
  • The amount of actual coding that still has to occur.
  • The amount of quality engineering (testing, reviews, etc.) that still has to occur.
  • Any and all remaining external dependencies (availability of resources, availability of technologies, deliveries from vendors and other projects, etc.).
  • The talent, experience and productivity of your IT engineers and managers, as well as turnover among those employees.
  • The amount of business process re-engineering required to put this system into production, as well as the degree of resistance or cooperation among the affected business units.
  • The complexity, cohesion and comprehensibility of the overall system.
  • The amount of analysis (gathering relevant subject-matter information) that still has to occur.

This is not an exhaustive list, but it gives you an idea of the challenges you face. Imagine trying to derive all this information from counting the number of lines of source code created so far, or the number of object classes, or the number of open and closed defects. It just won’t work. And yet metrics such as those are commonly gathered, reported and relied upon as if they revealed anything meaningful about the project’s overall progress.
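To make that point concrete, here is a toy example (all numbers invented) of the kind of “percent done” figure that defect counts can produce, and why it misleads:

```python
# Illustrative only: a naive "percent done" metric derived from defect
# counts, of the kind the post argues against. All figures are hypothetical.

def naive_percent_done(closed: int, open_: int) -> float:
    """Closed defects as a share of all defects found so far."""
    total = closed + open_
    return 100.0 * closed / total if total else 0.0

# Week 10: 80 closed, 20 open -- the metric reads "80% done".
print(round(naive_percent_done(80, 20)))   # 80

# Week 11: five more defects are fixed, but testing a new subsystem
# uncovers 60 new ones. The same metric now reads "53% done" -- the
# project appears to have moved backward, because the number tracks
# discovery to date, not the work remaining.
print(round(naive_percent_done(85, 75)))   # 53
```

The metric swings with what has been *found*, not with what is *left*; none of the factors in the list above (remaining invention, discovery, external dependencies) appear anywhere in the formula.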

This list also doesn’t directly address such common problems as scope creep, conflicting requirements, changes in business or market needs, budget constraints, or internal politics. Still, the items in the list above could themselves be considered useful metrics; that is, if you could measure this information, you would have a very good sense of where the project stands.

These items would certainly be informative and even predictive—but it remains unclear how to make them “objective,” much less “automated.” In effect, we’re back to the “70 percent done” question and answer, though perhaps in more detail.

Now, I have known organizations that are quite skilled at predicting how long a project will take and how much it will cost. But these are organizations that confine themselves to niche markets and, in effect, implement the same application over and over again, using a rigorous and standardized methodology, usually with extensive up-front analysis and specification (particularly of user interface and functionality).

Even then there are no guarantees; look at the number of troubled and failed enterprise resource planning installations that appear in the news on a regular basis. And, of course, this is of little use for organizations that are creating one-off applications, either custom or commercial.

One solution, I believe, lies in a combination of two approaches: instrumentation and heuristics. By “instrumentation,” I mean creating a system whereby you can automatically track and monitor as many aspects and activities as possible of the entire software development or infrastructure project lifecycle. And by “heuristics,” I mean analyzing the information gathered via instrumentation to discover which characteristics best predict ongoing performance and completion of the project.
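As a minimal sketch of that combination (not a recipe): assume the instrumentation automatically records one weekly signal, here the open-defect count, standing in for whatever your tooling actually captures, and the heuristic is a simple least-squares trend projected forward to zero. The signal, the numbers, and the choice of heuristic are all illustrative.

```python
# Instrumentation: a weekly time series recorded automatically.
# Heuristic: fit y = a + b*week by least squares, then solve for y = 0
# to estimate weeks of work remaining. Purely illustrative.

def weeks_to_zero(history: list[float]) -> float:
    """Estimate weeks until the tracked signal reaches zero.

    Returns float('inf') if the trend is flat or rising (no predicted end).
    """
    n = len(history)
    xs = range(n)
    mean_x = sum(xs) / n
    mean_y = sum(history) / n
    b = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, history)) / \
        sum((x - mean_x) ** 2 for x in xs)
    a = mean_y - b * mean_x
    if b >= 0:
        return float('inf')        # not converging
    return -a / b - (n - 1)        # weeks from the last sample

open_defects = [120, 110, 95, 85, 70, 60]   # hypothetical weekly counts
print(round(weeks_to_zero(open_defects), 1))   # 4.8
```

A real heuristic layer would combine many such signals (commit activity, review throughput, requirement churn) and, crucially, would be calibrated against completed projects to learn which signals actually predict completion, which is the hard part.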

More in the next post.

[Adapted from an article originally written for the online version of Baseline.]


About the Author:

Webster is Principal and Founder at Bruce F. Webster & Associates, as well as an Adjunct Professor for the BYU Computer Science Department. He works with organizations to help them with troubled or failed information technology (IT) projects. He has also worked in several dozen legal cases as a consultant and as a testifying expert, both in the United States and Japan. He can be reached at 303.502.4141 or at
