Lies, Damned Lies, and Project Metrics (Part III)

July 12, 2013

[Here are links to Part I and Part II]

In the prior two parts (links above), I covered the ideal qualities of metrics (informative and, preferably, predictive; objective; and automated), and why it’s so hard to come up with useful metrics for IT management. Let’s now talk about two concepts that may help you monitor and predict your IT project’s course: instrumentation and heuristics.

My first job after graduating from college was with General Dynamics in San Diego. I worked on several projects there, but one of the most interesting involved tanks and trucks. At the time, the Soviets were putting plywood shells — shaped and painted to look like tanks — atop large numbers of trucks to confuse NATO attempts (via satellite photos) to track Soviet tank movements along the European/Soviet Union border.

This project’s goal was to bounce side-looking radar signals off vehicles many miles away, that is, on the other side of the border, and determine whether the vehicles had tracks (a tank) or wheels (a truck). Signal-processing techniques yielded 25 or so different characteristics in the returning radar signal; Bayesian analysis allowed us to use just three of those characteristics to distinguish accurately and rapidly between tanks and trucks.
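
For the technically inclined, here is a toy sketch of the kind of Bayesian classification involved, distinguishing two classes from three measured characteristics. It’s in Python with invented numbers; the actual radar characteristics and data are not something I can reproduce here.

```python
# A toy illustration (not the original radar work): classify "tank" vs "truck"
# from three numeric signal characteristics using Gaussian naive Bayes.
# All numbers below are invented for the sake of the example.
import math
from statistics import mean, stdev

# Training data: known vehicles, three measured characteristics each.
samples = {
    "tank":  [(9.1, 4.2, 0.8), (8.7, 4.5, 0.9), (9.4, 3.9, 0.7)],
    "truck": [(5.2, 7.1, 2.3), (4.8, 6.8, 2.6), (5.5, 7.4, 2.1)],
}

def fit(samples):
    """Estimate per-class mean and standard deviation for each characteristic."""
    model = {}
    for label, rows in samples.items():
        columns = list(zip(*rows))
        model[label] = [(mean(c), stdev(c)) for c in columns]
    return model

def gaussian(x, mu, sigma):
    return math.exp(-((x - mu) ** 2) / (2 * sigma ** 2)) / (sigma * math.sqrt(2 * math.pi))

def classify(model, observation):
    """Pick the class with the highest (naive) likelihood for the observation."""
    scores = {}
    for label, params in model.items():
        score = 1.0
        for x, (mu, sigma) in zip(observation, params):
            score *= gaussian(x, mu, sigma)
        scores[label] = score
    return max(scores, key=scores.get)

model = fit(samples)
print(classify(model, (9.0, 4.1, 0.8)))   # -> "tank"
```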

The solution to IT project management metrics may lie along the same lines, namely gathering as much information as possible and then finding which characteristics best identify or predict the state of the project.

In other words, instead of deciding ahead of time which metrics are likely relevant, you first want simply to gather as many metrics, or characteristics, of your IT project as you can, then process them to find which combination of metrics is most accurate for determining actual status and predicting completion.

This is where instrumentation comes in. Instrumentation simply means setting up your IT project so you can track as much information as possible about its different aspects and characteristics, preferably in an objective, automated manner. One place to start is your SCM (software-configuration management) system, which should be tracking, not just source-code changes, but changes and versions for all project deliverables (requirements, specifications, models, diagrams, test plans and results, and so on).

This means that your instrumentation will likely need to be able to go into your SCM system programmatically and extract that information: characteristics such as size, changes, attributes, and authorship for the various files and deliverables.
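
What that extraction looks like will depend on your particular SCM system. As one example (a sketch, not a prescription), if you happen to be using Git, a short Python script run inside a working copy can pull per-file commit counts, lines added and removed, and the number of distinct authors:

```python
# A minimal sketch of SCM instrumentation, assuming a Git repository:
# for each file, count commits, lines added/removed, and distinct authors.
import subprocess
from collections import defaultdict

def collect_file_stats(repo_path="."):
    log = subprocess.run(
        ["git", "log", "--numstat", "--format=AUTHOR:%ae"],
        cwd=repo_path, capture_output=True, text=True, check=True,
    ).stdout

    stats = defaultdict(lambda: {"commits": 0, "added": 0, "removed": 0, "authors": set()})
    author = None
    for line in log.splitlines():
        if line.startswith("AUTHOR:"):
            author = line[len("AUTHOR:"):]
        elif line.strip():
            added, removed, path = line.split("\t", 2)
            entry = stats[path]
            entry["commits"] += 1
            entry["added"] += int(added) if added != "-" else 0      # "-" marks binary files
            entry["removed"] += int(removed) if removed != "-" else 0
            entry["authors"].add(author)
    return stats

if __name__ == "__main__":
    busiest = sorted(collect_file_stats().items(),
                     key=lambda kv: kv[1]["commits"], reverse=True)[:10]
    for path, entry in busiest:
        print(f"{path}: {entry['commits']} commits, +{entry['added']}/-{entry['removed']}, "
              f"{len(entry['authors'])} authors")
```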

Next, you want to extract information from your defect-management system (you do have one, don’t you?). You would gather information about the number of defects reported, closed to date, deferred, marked as duplicate, marked as enhancements and so on. You would track both total numbers and new reports for each unit of time (say, day or week), as well as the severity and priority of reported defects.
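
Again, the details depend on the tracker. As a hypothetical example, if your defect-management system can export its records to CSV, a short script can roll them up by week, status, and severity; the column names below are my own invention, not any particular tracker’s:

```python
# A sketch of defect-tracker instrumentation, assuming a CSV export with
# (hypothetical) columns: id, reported, status, severity, priority.
# Dates are assumed to be in ISO format (YYYY-MM-DD).
import csv
from collections import Counter
from datetime import date

def weekly_defect_counts(csv_path):
    new_per_week = Counter()    # new reports per ISO week
    by_status = Counter()       # open / closed / deferred / duplicate / enhancement ...
    by_severity = Counter()
    with open(csv_path, newline="") as f:
        for row in csv.DictReader(f):
            year, week, _ = date.fromisoformat(row["reported"]).isocalendar()
            new_per_week[f"{year}-W{week:02d}"] += 1
            by_status[row["status"]] += 1
            by_severity[row["severity"]] += 1
    return new_per_week, by_status, by_severity

if __name__ == "__main__":
    new_per_week, by_status, by_severity = weekly_defect_counts("defects.csv")
    print("New defects per week:", dict(sorted(new_per_week.items())))
    print("By status:", dict(by_status))
    print("By severity:", dict(by_severity))
```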

In conjunction with your SCM and defect-management systems, you also want to gather information about your change-control process, that is, the meetings you hold to prioritize defects and to approve or defer features. Another area to track is human information such as organizational structure, turnover, task assignments, background and qualifications of individuals, and meeting schedules. Again, as far as possible, this information-gathering should be objective and automated.
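
One way to keep all of this from becoming an unmanageable pile is to reduce each source to a periodic snapshot, so that every week (or day) yields a single record per project. Here is a sketch of what such a record might contain; the field names are purely illustrative, and you should add or drop characteristics as your own projects dictate:

```python
# A sketch of a weekly project snapshot combining the sources discussed above.
# Field names are illustrative only.
from dataclasses import dataclass

@dataclass
class WeeklySnapshot:
    project: str
    week: str                         # e.g. "2013-W28"
    # SCM characteristics
    commits: int = 0
    files_changed: int = 0
    lines_added: int = 0
    lines_removed: int = 0
    active_authors: int = 0
    # Defect-management characteristics
    defects_reported: int = 0
    defects_closed: int = 0
    defects_deferred: int = 0
    high_severity_open: int = 0
    # Change-control and human characteristics
    change_requests_approved: int = 0
    change_requests_deferred: int = 0
    staff_on_project: int = 0
    staff_departures: int = 0
    meetings_held: int = 0

snapshot = WeeklySnapshot(project="billing-rewrite", week="2013-W28",
                          commits=42, defects_reported=17, staff_on_project=9)
```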

At this point, you’re probably throwing up your hands and saying, “Wait! Stop! That’s too much!”

My response to that is: We are willing to spend millions, tens of millions, even hundreds of millions of dollars on projects that fail, so why are we so reluctant to spend a tiny fraction of that to help those projects succeed? (The same short-sightedness applies to software quality assurance, but that’s a subject for another post.)

Beyond that, the advantage of gathering so much information is that it makes it very difficult to fudge that information. A developer or manager might be able to do that with one or two particular characteristics but not with all the characteristics you’re monitoring. What’s more, there’s a good chance that attempts to fudge a particular characteristic will show up due to inconsistencies with the other information being gathered.

Now that you have all this information, you start to work with it to develop heuristics (from the Greek heurisko, to find or discover — think eureka). Your goal is to find which combination of characteristics best predicts the current state, level of progress and ultimate completion of the IT project. Applying Henderson’s Maxim — “Start out stupid and work up from there” — you want to try this first with a small project, gathering all the information about that project while monitoring the actual progress of that project, including problems and successes.

Use your best human and numerical analysis to figure out which combination of characteristics appears to predict most accurately the project’s progress and completion. Note that when we started the radar project at General Dynamics, we didn’t travel to Europe and collect radar signals from Soviet tanks and trucks, assuming we could even tell them apart; we went out to Camp Pendleton in San Diego and collected radar signals from known tanks and trucks. Likewise, you want to start out with a project where you clearly know the progress and results, independent of the information you’re gathering.
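
As a starting point for that numerical analysis, take the finished project, line up each gathered characteristic against the known weekly progress, and rank the characteristics by how strongly they correlate with it. Here is a minimal sketch, with all data invented for illustration; correlation is the crudest possible heuristic, but it tells you which characteristics deserve a closer look.

```python
# A sketch of the calibration step: given weekly values for several candidate
# characteristics and the known actual progress of a finished project,
# rank the characteristics by absolute correlation with that progress.
# All data below is invented for illustration.
import math

def pearson(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Known actual progress (percent complete) of a small, finished project, by week.
actual_progress = [5, 12, 20, 31, 40, 55, 68, 80, 92, 100]

# Candidate characteristics gathered for the same ten weeks.
characteristics = {
    "tests_passing": [10, 25, 40, 80, 120, 170, 230, 280, 330, 360],
    "defects_open":  [3, 8, 15, 22, 25, 24, 20, 14, 7, 2],
    "lines_of_code": [2000, 5000, 9000, 12000, 13000, 13500, 13800, 13900, 14000, 14050],
    "meetings_held": [2, 3, 2, 4, 3, 2, 3, 2, 3, 2],
}

ranked = sorted(characteristics.items(),
                key=lambda kv: abs(pearson(kv[1], actual_progress)),
                reverse=True)
for name, values in ranked:
    print(f"{name}: r = {pearson(values, actual_progress):+.2f}")
```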

Having done this initial calibration, repeat this process with several small and midsized projects, adjusting your heuristics, if necessary, based on your observations. Finally, you can start applying your instrumentation and heuristics to your large-scale projects. By this point, you should not only have a set of heuristics for tracking the status of your project, you should also be able to drill down and figure out where the problems and bottlenecks actually are.

These may not be ultimate answers, but they are places to start.

[This is adapted from an article I originally wrote for the online version of Baseline.]

About the Author:

Webster is Principal and Founder at Bruce F. Webster & Associates, as well as an Adjunct Professor for the BYU Computer Science Department. He works with organizations to help them with troubled or failed information technology (IT) projects. He has also worked in several dozen legal cases as a consultant and as a testifying expert, both in the United States and Japan. He can be reached at 303.502.4141 or at bwebster@bfwa.com.
