Thursday, 9 October 2008

How To Screw Up A Project

TechNewsWorld has an article entitled Signposts on the US Government's Trail of IT Failures, which details the causes of the many multi-million, and even multi-billion, dollar software disasters that have afflicted the US Government in the last decade.

The causes are not confined to Governments, nor to large projects: but the effects scale up, so what causes a budget over-run on a small project will cause an utter disaster on a big one.

I'll paraphrase and simplify them:
  1. Not consulting with stakeholders - those who have a vested interest in getting a system that helps them. Too often, requirements are specified by customer personnel who won't use the system, and don't really know what it should be doing. You can even get a bun-fight between different stakeholders with contradictory requirements. Get these conflicts resolved, or at least identified, early, or you'll fail.
  2. Being insufficiently flexible, and unable to learn what is really required - or even possible - during development. In the course of building something, great hairy unknowns become better and better understood, and things which were thought to be easy may turn out to be hard, or worse, both impossible and unnecessary: something that will break the budget to provide, but which the customer doesn't actually want. Except that the contract calls for it.
  3. Lack of incremental progress reporting, with funding released no more than one step ahead of demonstrated progress. There's a temptation to take a failing project and keep on developing until the money runs out. Projects that are doomed from an early stage should be shot to put them out of their misery, not kept on as festering, rotting undead, shambling along till they finally disintegrate.
  4. "Just a little bit more" - feeping creaturism. Creeping featurism, where additional requirements are added on, with no additional budget. Sometimes it's better to develop a system that works - but doesn't do everything - than not deliver a system at all. And then upgrade in a planned way with further releases. A good architecture will have a lot of "fitted for but not with" flexibility, a foundation that will take a 12 storey structure even if the spec only calls for 8. Just in case.
  5. Doing what has always been done, but with computers, rather than analysing what should be done, and implementing that. The world is full of systems that replicate pre-computer methodology, with all sorts of bottlenecks and inefficiencies that were unavoidable when using pen and paper, but are now just embarrassments.
  6. Setting-to-work costs not being budgeted for. An example would be data cleansing - making sure existing databases have good data in them, or you'll just be delivering greater and greater quantities of used food; a sketch of one such check follows this list. And making sure that the parallel-running period, where the old system is gently phased out and the new one gently phased in, is planned and budgeted for.
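
To make the data-cleansing point concrete, here is a minimal sketch in Python of the kind of audit pass you might run over a legacy table before migration. The record layout and the rules (customer_id, email, the regex) are hypothetical, invented for illustration rather than taken from any project mentioned above:

    import re

    # Hypothetical legacy layout: each row is a dict, e.g. from csv.DictReader.
    EMAIL_RE = re.compile(r"^[^@\s]+@[^@\s]+\.[^@\s]+$")

    def audit_legacy_rows(rows):
        """Report the data problems a migration would trip over.

        Returns a dict mapping problem type to the offending row indices.
        """
        problems = {"missing_id": [], "duplicate_id": [], "bad_email": []}
        seen_ids = set()
        for i, row in enumerate(rows):
            cust_id = (row.get("customer_id") or "").strip()
            if not cust_id:
                problems["missing_id"].append(i)
            elif cust_id in seen_ids:
                problems["duplicate_id"].append(i)
            else:
                seen_ids.add(cust_id)
            email = (row.get("email") or "").strip()
            if email and not EMAIL_RE.match(email):
                problems["bad_email"].append(i)
        return problems

    if __name__ == "__main__":
        sample = [
            {"customer_id": "42", "email": "a@example.com"},
            {"customer_id": "", "email": "not-an-email"},
            {"customer_id": "42", "email": "b@example.com"},
        ]
        print(audit_legacy_rows(sample))
        # {'missing_id': [1], 'duplicate_id': [2], 'bad_email': [1]}

Running an audit like this early, before the cut-over is scheduled, is precisely the sort of setting-to-work cost that item 6 says belongs in the budget from day one.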
I've seen projects both great and small perish miserably through failures in one or more of these areas.

As they say in the Classics, Read The Whole Thing.

9 comments:

Bad hair days said...

Do you know the book "The Deadline" from Tom DeMarco?

RadarGrrl said...

7. Not making it DOS compatible.

Sevesteen said...

I'd add a couple of my own--

Inflexible upper management directives. We aren't allowed shareware, and open source is looked on with suspicion. On the other hand, purchasing anything that isn't already part of a recognized tier is painful.

This leads to preferring in-house development to existing commercial or open-source software. I've seen homebuilt collaboration software that could easily have been replaced by NNTP newsgroups or common discussion forum software, and a custom system to move files around that could have been replaced by a simple batch file.

Not doing realistic scalability testing before a major rollout. I don't know how many times I've seen something that worked fine with 5 users on the same local network rolled out as enterprise software, then fail miserably when there are a couple hundred users across slow network links.

Zoe Brain said...

Sevesteen - anyone who doesn't do a "proof of principle" test with dummy loads on the CPUs and worst-case traffic on the lines is negligent.

Even if the hardware isn't available, a software simulation of the system with conservative assumptions (i.e. 200% of the expected worst-case transient peak load) can literally save millions, if performed early enough.

This is not just important, but utterly vital, in real-time systems.
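
A rough illustration of what such a simulation can look like - a minimal sketch in Python, not any real project's model. It drives a single FIFO server with Poisson arrivals at 50%, 100% and 200% of an assumed worst-case peak; the figures (40 transactions/sec peak, 20 ms mean service time) are made up for the example:

    import random

    # All figures are hypothetical, for illustration only.
    PEAK_TPS = 40.0           # assumed worst-case transient peak
    MEAN_SERVICE_S = 0.020    # assumed 20 ms mean service time
    SIM_SECONDS = 60.0

    def simulate(arrival_rate, mean_service, horizon, seed=1):
        """FIFO single-server queue (M/M/1): Poisson arrivals, exponential service.

        Returns the mean and worst transaction latency over the run - enough
        to show whether the backlog is stable or growing without bound.
        """
        rng = random.Random(seed)
        clock = 0.0          # arrival time of the current transaction
        free_at = 0.0        # when the server clears its current backlog
        latencies = []
        while clock < horizon:
            clock += rng.expovariate(arrival_rate)   # next Poisson arrival
            start = max(clock, free_at)              # queue if server is busy
            free_at = start + rng.expovariate(1.0 / mean_service)
            latencies.append(free_at - clock)
        return sum(latencies) / len(latencies), max(latencies)

    if __name__ == "__main__":
        for load in (0.5, 1.0, 2.0):   # 50%, 100%, 200% of assumed peak
            mean_lat, worst_lat = simulate(PEAK_TPS * load,
                                           MEAN_SERVICE_S, SIM_SECONDS)
            print(f"{load:4.0%} of peak: mean {mean_lat * 1000:8.1f} ms, "
                  f"worst {worst_lat * 1000:8.1f} ms")

With these made-up numbers the model is comfortable at the expected peak, but its backlog grows without bound at 200% of it - exactly the kind of result worth having in hand before any hardware is ordered.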

Zoe Brain said...

Nat - OK, I should have seen that one coming. But please, a drink warning next time, OK?

Zoe Brain said...

BHD - indeed I do. Also the Software Death March.

Bad hair days said...

Thank you. I really loved the Tom DeMarco book, but Death March sounds like useful advice - one more to read in the near future. (Now reading "Gehirn und Geschlecht", which could interest you too, as you speak German (ISBN 978-3-540-71627-3).)

RadarGrrl said...

It sounds more fun if I don't give you a warning. How far did the coffee travel?

Anonymous said...

I've published a lot of articles on the reasons why projects fail on PM Hut...

This is a fairly subjective issue: some say that the #1 reason is an incompetent team, others say it's an incompetent PM, lack of stakeholder support, etc...

The most-read article I've published on this issue is this one: why projects fail; it lists Executive Level Non-Support as the #1 reason.