Fooled by success: the dangers of delivering projects on time
One of my favorite books is Fooled by Randomness by Nassim Nicholas Taleb. Taleb’s thesis, which he explains and defends well, is that we often attribute to talent and insight great results that were actually more a matter of luck: a fortunate random outcome that might well have turned out otherwise. Taleb’s examples are largely taken from his own professional experience on Wall Street, where he saw mediocre (in his opinion) traders succeed in a particular market situation and count themselves brilliant, only to be “blown up” (take such great losses that they were fired or financially wiped out) when the market situation changed.
Taleb’s thesis has, I believe, application to large-scale IT projects as well. We focus largely on projects that struggle or fail, and if we’re really wise, we conduct project post-mortems to learn what went wrong and how to do better next time. However, when a large IT project succeeds, we tend to chalk it up to our collective brilliance without ever doing a post-mortem analysis to learn whether we really caused it to succeed. That is, if we had started from the same point and done the same things, would the project still have been as successful?
Because the answer isn’t always “yes.”
Some large IT projects are more successful than they deserve to be. That is, if you could roll back time and start the same project over and over again, it might end up succeeding only 50 percent, 30 percent or even 10 percent of the time. In my earlier posts on IT project metrics, I listed some of the reasons why it’s hard to predict just where an IT project stands and how long it will take to complete. Those same factors (among others) represent areas of risk throughout the history of a given IT project, particularly where they involve human creativity, insight and effort. Because of the extensive interconnectedness of IT project tasks and deliverables, and the critical paths that result, a single slip in a single area (say, delivery and configuration of production hardware, fixing a critical defect or scaling a key algorithm) can cause the entire project to slip on a day-for-day basis.
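To make that arithmetic concrete, here is a minimal sketch, with entirely made-up per-item probabilities, of how independent critical-path risks compound. Even when each individual item is very likely to go right, the chance that all of them go right can fall to a coin flip or worse.

```python
# Hypothetical illustration: if a schedule has several independent
# critical-path risks, the chance that *all* of them go right is the
# product of their individual probabilities.

from math import prod

# Assumed per-item odds of staying on schedule -- e.g. hardware
# delivery, a critical defect fix, scaling a key algorithm, and so on.
critical_path_odds = [0.95, 0.90, 0.92, 0.90, 0.85, 0.95, 0.90, 0.88]

p_on_time = prod(critical_path_odds)
print(f"Chance every critical-path item goes right: {p_on_time:.0%}")  # ~45%
```

With these invented numbers, a project whose single worst risk is still 85 percent likely to go right ships on time less than half the time.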
But sometimes, with projects that really shouldn’t succeed (projects attempting too much, too fast, with too many risks), enough things go right, particularly along the critical paths, and enough superhuman effort is made by those involved, that the project does indeed go into production on time and possibly even under budget. Upper management is thrilled; the development team looks great; and all’s right in heaven.
And that’s when the real trouble begins.
Why? Because most likely no one has actually done the analysis to see why this project ended up succeeding, and whether it would be likely to succeed if repeated under the same circumstances. This is the point that Taleb hammers home: the need to re-run the same sequence of events, in simulation or analysis, using reasonable probabilities for each task’s success or failure and accounting for the impact of each outcome. That tells you whether the project’s success was a fluke or a reasonable expectation.
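As a minimal sketch of that kind of re-run (not Taleb’s own method, just a Monte Carlo stand-in), the following assumes a hypothetical list of critical-path tasks, each with a made-up probability of going right and a made-up slip if it goes wrong, plus a small schedule buffer. A real analysis would use the project’s own risk register and estimates.

```python
import random

# Hypothetical critical-path tasks:
# (name, probability the task goes right, days of slip if it goes wrong)
critical_path = [
    ("hardware delivery",   0.90, 15),
    ("critical defect fix", 0.85, 10),
    ("algorithm scaling",   0.80, 20),
    ("data migration",      0.90, 12),
]
SCHEDULE_BUFFER_DAYS = 10  # assumed slack before the delivery date is missed

def run_once() -> bool:
    """Simulate one replay of the project; True means it shipped on time."""
    slip = sum(days for _, p, days in critical_path if random.random() > p)
    return slip <= SCHEDULE_BUFFER_DAYS

trials = 100_000
on_time = sum(run_once() for _ in range(trials))
print(f"Simulated on-time delivery rate: {on_time / trials:.0%}")
```

With these made-up inputs the project ships on time only about two-thirds of the time; a team that happened to land in the lucky two-thirds could easily mistake the outcome for skill.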
Since no one has done that analysis, upper management usually assumes that it was a reasonable expectation (due, no doubt, to their leadership) and that subsequent IT projects started under similar circumstances should likewise succeed. And so, right at the point when those in the IT trenches, who are usually pretty clear on what a “near-run thing” the project really was (to paraphrase Wellington at Waterloo), wipe their brows over a disaster averted, upper management tells them to do it again, but faster or better, or both. Since the odds were against the original project succeeding as it did, the chances of this follow-up project succeeding are likely even smaller. Thus the danger of delivering on time and under budget. What an irony!
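The same toy arithmetic from earlier shows why. If “faster or better” shaves even five points off each critical-path item’s odds, the reductions compound (again, all numbers are hypothetical):

```python
from math import prod

original   = [0.95, 0.90, 0.92, 0.90, 0.85, 0.95, 0.90, 0.88]
# Assume schedule compression costs each item five points of probability.
compressed = [p - 0.05 for p in original]

print(f"Original project:  {prod(original):.0%}")    # ~45%
print(f"Follow-up project: {prod(compressed):.0%}")  # ~29%
```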
In IT projects, we often stress the need for managing expectations, particularly of upper-level executives and business-side sponsors. However, we tend to do this at the start of and during the project itself. We don’t usually think about the need to do so at a project’s end, particularly when a project has been successful. But that is just as critical a time to manage expectations, to clearly lay out the odds against the project having succeeded as it did, and the risks in assuming that similar projects under similar circumstances will produce similar results.
In short, while we should be grateful for any IT project success, we should not be fooled by it. We need to be clear ourselves — and make clear to others who matter — just how and why the project succeeded.
[Adapted from an article I originally wrote for the online version of Baseline.]