
Active risk management: doing IT projects right

September 9, 2013


In a prior post, I talked about IT project risk management and gave a real-world example of doing it wrong, with the expected consequences. But some organizations do it right, and it’s worth looking at those examples as well.

Some years back, I spent a few months at a client site reviewing a couple of major IT projects. The main focus was on an already-deployed system that had been having some issues, but I was also asked by management to look at another system under development (we’ll call it “System X”) and identify any major risks.

I quickly found that there was little for me to add. Not because System X had no risks, but because the project team itself was actively and aggressively identifying and managing those risks. For starters, I found that the System X team had one person whose major responsibility was risk management for the project. I found this novel and refreshing; while I had heard of the idea of having a person in charge of risk management on large IT projects, until then I don't think I had ever met someone who actually had that as his or her principal responsibility.

Second, I found out (from this person) that the System X team was using a web-based collaboration tool (Microsoft SharePoint) for the project, and within that they had a whole subsystem devoted to risks. Anyone on the team could submit what she or he felt was a meaningful risk. Information about the risk included what aspect of the software development lifecycle was involved, the estimated likelihood of the risk occurring, and the likely impact to the project should the risk come to pass.

Each risk was then put into one of six categories:

  • Identified: a risk that has been identified but not yet dealt with in any way (the initial category for all risks).
  • Accept and Monitor: Take no active steps regarding this risk, but continue to watch for its appearance.
  • Mitigate: Take active steps to either eliminate the risk or mitigate its impact.
  • Closed – Realized: The risk already came to pass and there’s nothing to do about it now.
  • Closed – Unrealized: The risk did not come to pass, and it doesn’t look as though it ever will.
  • Closed – Consolidated: The risk is either a duplicate of another identified risk or so closely tied to it as not to warrant separate tracking.

When I looked at the risks subsection, I found that the System X team had identified well over 200 risks to the project and was very actively evaluating, classifying, and addressing them.
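The risk records and lifecycle categories described above map naturally onto a small data model. The following is a minimal sketch of what such a tracker might look like; the field names, the 0–1 likelihood scale, the 1–5 impact scale, and the likelihood-times-impact "exposure" score are all my own assumptions, not details of the System X team's actual SharePoint setup.

```python
from dataclasses import dataclass, field
from enum import Enum

class RiskStatus(Enum):
    # The six categories used by the System X team.
    IDENTIFIED = "Identified"
    ACCEPT_AND_MONITOR = "Accept and Monitor"
    MITIGATE = "Mitigate"
    CLOSED_REALIZED = "Closed - Realized"
    CLOSED_UNREALIZED = "Closed - Unrealized"
    CLOSED_CONSOLIDATED = "Closed - Consolidated"

@dataclass
class Risk:
    title: str
    lifecycle_phase: str         # which SDLC aspect is involved, e.g. "testing"
    likelihood: float            # estimated probability of occurring, 0.0-1.0 (assumed scale)
    impact: int                  # likely impact if realized, 1 (minor) to 5 (severe) (assumed scale)
    status: RiskStatus = RiskStatus.IDENTIFIED

    @property
    def exposure(self) -> float:
        # A common, simple ranking score: likelihood times impact.
        return self.likelihood * self.impact

def open_risks(risks: list[Risk]) -> list[Risk]:
    """Return risks still requiring attention, worst exposure first."""
    closed = {RiskStatus.CLOSED_REALIZED,
              RiskStatus.CLOSED_UNREALIZED,
              RiskStatus.CLOSED_CONSOLIDATED}
    return sorted((r for r in risks if r.status not in closed),
                  key=lambda r: r.exposure, reverse=True)
```

The key design point, visible even in this toy version, is that closed risks stay in the list rather than being deleted: the history of realized, unrealized, and consolidated risks remains visible to everyone on the project.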

The project team in turn used this risk information, as well as other reports and metrics, to generate a confidence-level status for the major project milestones. They expressed the status as: green; green-falling; yellow-rising; yellow; yellow-falling; red-rising; or red. Green, yellow, and red conveyed the snapshot status of that milestone; "rising" or "falling" indicated whether the trend was improving or declining.
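The seven-state scheme above can be sketched as a small function. This is an illustration of the logic, not the team's actual tooling; the `Color` enum and trend labels are my own assumptions. Note that the seven states imply green cannot "rise" and red cannot "fall": green with an improving trend is simply green, and red with a declining trend is simply red.

```python
from enum import Enum

class Color(Enum):
    GREEN = 1   # best snapshot status
    YELLOW = 2
    RED = 3     # worst snapshot status

def milestone_status(color: Color, trend: str) -> str:
    """Combine a snapshot color with a trend ('improving', 'steady', or 'declining')
    into one of the seven reported states."""
    label = color.name.lower()
    if trend == "improving" and color is not Color.GREEN:
        return f"{label}-rising"
    if trend == "declining" and color is not Color.RED:
        return f"{label}-falling"
    return label
```

So a milestone that is still green but trending worse would be reported as "green-falling", an early warning that a plain green/yellow/red snapshot would miss.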

Of course, the problem with such confidence-level reporting is that it can become overly optimistic as it gets reported up the chain – the "thermocline of truth" that I've written about elsewhere. But the fact that the current list of risks is completely visible to everyone involved with the project (including upper management) tends to dampen the temptation to pass up only the good news. In other words, risk visibility is critical to the success of such an approach, which is why the risks need to be online (rather than in a report) and readily accessible by all involved.

Computer scientist Adele Goldberg once quipped, “Only optimists build complex systems.” I might amend that to say that while optimists start most complex systems projects, only the cautiously optimistic finish them. Active risk management is critical to the success of any major IT project.

For more insights and suggestions on this subject, I would strongly recommend Waltzing with Bears: Managing Risk on Software Projects by DeMarco and Lister.

And remember: expose and discuss risks, don’t bury them.

[Adapted from an article originally written for the online version of Baseline.]

About the Author:

Webster is Principal and Founder at Bruce F. Webster & Associates, as well as an Adjunct Professor for the BYU Computer Science Department. He works with organizations to help them with troubled or failed information technology (IT) projects. He has also worked in several dozen legal cases as a consultant and as a testifying expert, both in the United States and Japan. He can be reached at 303.502.4141.
