I read your column with great interest as I’m involved in an IT project to measure productivity. May I ask you a quick question? Are there any mature metrics that can measure tester productivity improvement month by month, accurate to 1%?
Here’s the response I sent back:
Well, for starters you have to define what you mean by “tester productivity.” Number of test scripts run? Number of defects found? Number of defects closed? Number of defects reopened? (And do you weight the “defects found/closed/re-opened” by criticality and/or severity?) Number of reported defects replicated? Number of hard-to-replicate, yet critical/severe defects that can now be replicated (and thus fixed)? Some combination (possibly a weighted function) of all of the above?
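Just to make the "weighted function" idea concrete, here is a minimal sketch. Every metric name and every weight in it is an assumption I made up for illustration; nothing here is a standard or recommended formula:

```python
# Hypothetical weighted "tester productivity" score.
# The severity weights and the 0.1 scripts factor are arbitrary
# illustrations, not recommendations.

SEVERITY_WEIGHTS = {"critical": 5.0, "major": 2.0, "minor": 1.0}

def weighted_score(defects_found, defects_reopened, scripts_run):
    """Collapse several raw counts into one number.

    defects_found, defects_reopened: dicts mapping severity -> count.
    scripts_run: plain count of executed test scripts.
    """
    found = sum(SEVERITY_WEIGHTS[s] * n for s, n in defects_found.items())
    # Reopened defects count against the score.
    reopened = sum(SEVERITY_WEIGHTS[s] * n for s, n in defects_reopened.items())
    return found - reopened + 0.1 * scripts_run

score = weighted_score(
    defects_found={"critical": 2, "major": 3, "minor": 10},
    defects_reopened={"major": 1},
    scripts_run=40,
)
# found = 2*5 + 3*2 + 10*1 = 26; reopened = 2; scripts = 4; score = 28.0
```

Notice that every choice baked into a formula like this (which metrics, which weights, which sign) is really an answer to the "what are you trying to accomplish?" question, just hidden in code.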
In other words, what is it exactly that you’re trying to accomplish? To make your testing team more effective? More efficient? To shorten the test cycle? To spend less on testing? To close more defects (and defer fewer open ones) for each system release? To have fewer defects discovered after a system release? Jerry Weinberg says that “quality is value to some person.” Who are the people you’re worrying about, what qualities — functionality, performance, reliability, etc. — do they value, and to what extent?
Once you’ve defined all that, there remains the question of whether you can measure it to 1% accuracy (or even 10% accuracy) month over month and still preserve any meaning in the measurement. It’s possible (and common) in metrics to have “false accuracy” — you believe you’re measuring something to a certain precision, but at that level you’re mostly just reading random or insignificant noise.
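A tiny simulation makes the false-accuracy point vivid. The numbers are invented for illustration: assume a genuine 1%-per-month improvement, but suppose each monthly measurement carries ±10% noise (from whatever source — sampling, judgment calls in defect classification, release-cycle timing):

```python
# Illustrative simulation: a real 1% month-over-month improvement
# buried in +/-10% measurement noise. All figures are assumptions.
import random

random.seed(1)  # fixed seed so the run is repeatable

true_value = 100.0
measurements = []
for month in range(12):
    true_value *= 1.01                   # real improvement: 1% per month
    noise = random.uniform(-0.10, 0.10)  # measurement noise: +/-10%
    measurements.append(true_value * (1 + noise))

# Month-over-month "improvements" computed from the noisy readings
deltas = [(b - a) / a for a, b in zip(measurements, measurements[1:])]
```

Run this and the apparent month-over-month swings dwarf the real 1% signal: reporting such numbers to 1% precision measures the noise, not the testers.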
Finally, we come back (as always) to Weinberg’s law of metrics: that which can be measured can be fudged (or exploited). For example, read this story over at the Daily WTF: The Defect Black Market.
Hope this is of some help, though I tend to doubt it.
Thoughts from the rest of you? ..bruce..