
Some thoughts on “Up or Out”

April 29, 2008

[UPDATE: Here are some more observations from Ruby-coloured glasses.]

Alex Papadimoulis over at The Daily WTF (one of my favorite IT blogs) has posted a lengthy and thoughtful solution to the problems I raised in my post on the “Dead Sea effect”. Specifically, he refers to the “Up or Out” model, pioneered over a century ago by Paul Cravath and well-known to anyone who has worked in a law firm or one of the Big Eight Six Five Four (how many are left now?) Consulting Firms.

In fact, I’m quite familiar with it, because I spent two years as a Director (one level below Partner) at PricewaterhouseCoopers, which definitely used the “Up or Out” model. In addition, I’ve been serving off and on as an expert witness in IT-related litigation for nearly a decade and so have spent many long hours in law firms and with lawyers (ranging from associates to partners), and as Alex notes, law firms tend to use this same model as well.

So, how well does this model work in developing and maintaining top talent? Well, that depends upon what you define as “talent”. I haven’t read Cravath’s book, so I don’t know what he originally proposed. But in modern law firms and large consulting firms, “talent” is mostly defined as “bringing in clients with lots of money” and “keeping the staff under you fully occupied with billable hours” (though, to be fair, it also includes your actual performance over the course of multiple engagements). Let me explain.

Modern “up or out” firms typically have four to six basic job titles, sometimes with subdivisions. (For example, in our division at PwC, we had — as I recall — Partners, Directors, Senior Managers, Managers, Senior Associates, and Associates.) Ideal engagements follow the “pyramid model” — say, a Partner, one or two Directors, two to four Managers, and four to eight Associates. (You scale up all the numbers for larger engagements, and so on.) In a firm like this, everyone below Partner is typically on a fixed salary, but has a billable rate for clients that was roughly some multiple of their salary. So, for example, in such a firm an associate might be paid $70,000/year but might be billed out to clients at $140/hour (equivalent of $280,000/year, assuming 2000 hours/year in billable time).
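The salary-to-billings arithmetic above can be sketched in a few lines of Python (the figures are the illustrative ones from the paragraph above, not actual PwC numbers):

```python
# Illustrative "up or out" billing arithmetic.
# Salary and rate figures are the hypothetical examples from the text.

BILLABLE_HOURS_PER_YEAR = 2000  # the standard assumption used above

def annual_billings(hourly_rate, hours=BILLABLE_HOURS_PER_YEAR):
    """Revenue an associate generates for the firm if fully billable."""
    return hourly_rate * hours

salary = 70_000   # what the associate is paid
rate = 140        # what the client is charged per hour

billings = annual_billings(rate)
multiple = billings / salary

print(f"Billings: ${billings:,}/year")         # $280,000/year
print(f"Multiple of salary: {multiple:.1f}x")  # 4.0x
```

The gap between that 4x billing multiple and the associate’s actual salary is, of course, where the firm’s profit (and the partners’ compensation) comes from.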

The two key words are “leverage” and “utilization”. You want an engagement with leverage — that is, one in which you could apply the pyramid model, staffing it primarily with Managers and Associates. You also want an engagement that would allow high utilization — that is, it would keep those Associates (and hopefully the Managers) billing 30+ hours/week (80% utilization) throughout the course of the engagement.
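As a quick sketch of the utilization calculation: the “30+ hours/week = 80%” figure above implies roughly a 37.5-hour base week, which is my assumption here, not something stated in the text.

```python
# Utilization = billable hours / available hours.
# The 37.5-hour base week is an assumption inferred from the
# "30+ hours/week (80% utilization)" figure in the text.

def utilization(billable_hours_per_week, available_hours_per_week=37.5):
    """Fraction of available time actually billed to clients."""
    return billable_hours_per_week / available_hours_per_week

print(f"{utilization(30):.0%}")  # 80%
```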

The “Up or Out” process itself can be pretty brutal. Here’s a simplified example, again typical of such firms. Once a year, each level would do stack rankings on a fixed curve of all those on the level beneath them. So, for example, the Managers would have to stack-rank all the Associates — if you had 20 Associates, you would have to rate them #1 through #20, with no ties. Then you would have to apply a fixed curve — say, 10% A, 20% B, 40% C, 20% D, and 10% F. That would mean that of those 20 Associates, only #s 1 & 2 could be A-rated, #s 3-6 would be B-rated, and so on, regardless of how much or little separated them. Those not making the minimum grade (whatever that was) would be invited to leave. Even those who were above the cut but who did not score high enough for a certain number of years running would be informed that there was no promotion to Manager in their future and that they should start looking elsewhere. The Directors would repeat the same process for the Managers, and the Partners for the Directors.
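The mechanics of that fixed curve are simple enough to sketch in code. This is just an illustration of the grading step described above, using the same 20-associate example and 10/20/40/20/10 curve:

```python
# Sketch of fixed-curve stack ranking: rank everyone best-to-worst,
# then force-fit the 10% A / 20% B / 40% C / 20% D / 10% F curve,
# regardless of how much or little separates adjacent people.

def apply_curve(ranked_names, curve=(("A", 0.10), ("B", 0.20),
                                     ("C", 0.40), ("D", 0.20), ("F", 0.10))):
    """ranked_names: ordered best (#1) to worst. Returns name -> grade."""
    n = len(ranked_names)
    grades = {}
    start = 0
    for grade, fraction in curve:
        count = round(n * fraction)
        for name in ranked_names[start:start + count]:
            grades[name] = grade
        start += count
    # Any leftover from rounding lands in the lowest grade.
    for name in ranked_names[start:]:
        grades[name] = curve[-1][0]
    return grades

ranked = [f"Associate {i}" for i in range(1, 21)]  # 1 = best, 20 = worst
grades = apply_curve(ranked)
print(grades["Associate 1"])   # A
print(grades["Associate 3"])   # B
print(grades["Associate 20"])  # F
```

Note that the curve, not the underlying performance, determines the grade distribution: with 20 people, exactly two get an A and exactly two get an F, every year, no matter what.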

(All that said, let me add that the mentoring aspect that Alex mentions was very real and extremely valuable. I learned most of what I know about being an expert witness from PwC — specifically from Jeff Parmet, the PwC Partner who hired and supervised me, as well as from other PwC Directors and Partners, not to mention the significant in-house training sessions that PwC ran each year. I enjoyed my two years at PwC and am grateful for them, particularly when I find myself up against an expert witness who has not had such a background.)

Now, if we look at applying “Up or Out” to an in-house IT department, two immediate questions arise. First, what do we define as “Up”? Most large organizations really don’t have a career track for IT engineers above “senior programmer”, except for maybe a handful of “architect” slots and possibly a Chief Technical Officer (CTO) position. The usual “promotion” is to move onto a management track, which a lot of IT engineers don’t really want and aren’t necessarily very good at. This is, I believe, one of the factors behind the Dead Sea effect — your talented IT engineers see very few choices ahead within the organization that let them keep doing what they do best, so they leave; the less talented/skilled IT engineers, on the other hand, are content to remain in their current positions and not advance at all.

I have previously proposed to organizations that they implement a non-management technical track all the way to the CxO level, because it’s a lot cheaper to keep your best people on salary than to hire them (or their equivalents) back as consultants at 2x to 6x their salaries. Such a track might look like this: Associate Engineer -> Engineer -> Senior Engineer -> Technical Officer -> Senior Technical Officer -> Executive Technical Officer -> Chief Technical Officer (only one of these). The ranks from Technical Officer and up would have salaries, perks, and benefits equivalent to progressing through management (VP, Senior VP, Executive VP), but without actual management/head count responsibilities. Architects, mentors, and project overseers would be drawn from these ranks. I honestly believe that this upper-level technical track would save most organizations anywhere from millions to hundreds of millions of dollars in failed or late IT projects because they would constantly disrupt the ‘thermocline of truth’ before it even formed.
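The back-of-the-envelope version of that retention argument looks like this (the senior-engineer salary is a hypothetical figure of mine; the 2x-6x consultant multiplier is from the paragraph above):

```python
# Rough cost of losing a salaried engineer and hiring back the
# equivalent skill set as a consultant at 2x-6x the salary.
# The $150,000 salary is an illustrative assumption, not from the text.

def consultant_premium(salary, multiplier):
    """Extra annual cost of a consultant over the salaried engineer."""
    return salary * multiplier - salary

salary = 150_000  # hypothetical senior-engineer salary
for m in (2, 6):
    print(f"At {m}x: ${consultant_premium(salary, m):,}/year extra")
```

And that premium doesn’t even count the cost of the failed or late projects themselves, which is where the real millions go.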

The second question is, how, and on what basis, do we evaluate the IT engineers? This is a trickier question than it would appear at first. That’s because whatever criteria you select for evaluation will then become the key factors that the IT engineers will “game” to. Suppose, for an example, that you judge IT engineers on the basis of accurate schedule prediction and on-time completion of subprojects. You will suddenly have a group of hyper-conservative IT engineers who maximize schedule estimates and minimize completion criteria in order to ensure that they have a great (for them) track record. There’s actually some upside to that — you’ll dampen the inherent (and often excessive) optimism found in many IT engineers — but you have to be willing to live with the consequences as well (such as very long, drawn-out projects).

As for the ‘how’ question, I think that a group evaluation from two levels up might work the best; that is, have all the Senior Engineers evaluate the Associate Engineers, the Technical Officers evaluate the Engineers, and so on. I think the extra distance will allow for a more objective evaluation; IT engineers are notorious for being competitive. I don’t even mind stack ranking so long as there is no fixed curve. If you’re doing your recruiting and hiring correctly, you shouldn’t have many (if any) D- or F-level IT engineers.

So, Alex’s proposal has definite merit, but has some potential pitfalls as well. I’m curious to see what more details he may suggest for it. ..bruce..

About the Author:

Webster is Principal and Founder at Bruce F. Webster & Associates, as well as an Adjunct Professor for the BYU Computer Science Department. He works with organizations to help them with troubled or failed information technology (IT) projects. He has also worked in several dozen legal cases as a consultant and as a testifying expert, both in the United States and Japan. He can be reached at 303.502.4141 or at

Comments (8)


  1. arandomJohn says:


    How does your proposed technical track interrupt the thermocline of truth? I guess I’m not sure I understand who would be managing projects and what the reporting structure would look like.

    Also, doesn’t the evaluation scheme you propose (two levels up) fall prey to thermoclines? I know that my second level managers never had any interaction with me and I was completely unaware of whether they were conscious of my skills and strengths.

    The real problem with evaluations is what you mention: you’ll get what you measure. If all you measure is utilization, that is what you’ll get: lots of hours billed. It seems that it is very difficult to come up with a good set of metrics that can’t be gamed. What would be ideal is a system in which those being evaluated didn’t know the metrics and/or weights applied to them but trusted the system to be fair and line up well with their impression of who the good engineers were.

  2. bfwebster says:


    Good challenges all.

    First, on personnel reviews, read carefully: I’m not talking about _managers_ doing the evaluation, I’m talking about people on the (non-managerial) technical track doing the evaluations. As a Director at PwC, I certainly knew and often worked with the Senior Managers, Managers, Senior Associates, and even the Associates.

    If (for example) the Senior Engineers don’t know and can’t evaluate the Associate Engineers in their same IT group, then you have serious problems with your organization. Beyond that, the Sr Engineers are certainly free to ask for feedback and insight from the Engineers between them and the Associate Engineers. But I still hold to my opinion that there should be a layer of separation there. Ditto on up the tech track.

    Second, on the thermocline: in my experience, the barrier most often happens in the transition from engineering to management, because management is responsible for and has to report on the project’s progress. Both as a consultant (on troubled IT projects) and as an expert witness (on IT systems failure lawsuits), I have found that the best way to get to the truth is to go down in the trenches and interview the engineers. The technical career path means that your most competent engineers — which usually means your most honest ones — will be free to go into projects and find out what’s really going on without the temptation — as a manager over the project — to fudge. I’ve filled that exact role as a consultant in several different companies and have often been the bearer of bad news to senior management.

    On measuring: I’m not sure that this is what you were saying, but I certainly don’t advocate using utilization as a measurement (unless, of course, you’re in a law or consulting firm and your goal is to ensure an income stream), and it would make no sense in an in-house IT shop anyway. My sense is that the best means of evaluation probably parallels what I suggested for hiring excellent engineers in the first place: have a relevant set of engineers at level X each interview each engineer at level X-2 and then get together to reach a consensus. ..bruce..

  3. Good article,

    The “up or out” model, I find, works well in competitive markets where you are basically working on billable time and results. You either do your job better than your peers or the company tries someone else.

    In other areas, like the IT work you suggest, people don’t usually go in with the same drive and ambition to move up. Many people like to stay in a comfortable, secure position. If the position is one that needs to be filled, and the person does the job well and within your parameters, stick with the person you have and go forward.

    If however it is a competitive position that others want and can either do better or for less money, then you better constantly be upping your game and not stagnating.

  4. Jason M Baker says:

    What you describe is largely how Google works. You have a track of different levels of software engineer (from Software Engineer I to Distinguished Engineer). To advance to the next level, you need to get performance reviews from engineers two levels above you. Promo committees are instructed to place your manager’s opinion *below* your colleagues’.

  5. bfwebster says:


    Thanks for that info about Google — it’s nice to know someone came up with the same ideas. I’d be curious to learn how well people on the tech track think it works. ..bruce..
