Software Estimation Best Practices

Why Do We Keep Having the Same Problems?

The thirty years I have spent in software have bridged a period of remarkable and ever-accelerating change. Mercifully, coding an online system on a black-and-white CRT that accesses an IMS database is mostly a quaint memory. Technology, tools, and processes have all evolved. Why is it, then, that we continue to have the same problems we experienced in the Information Technology Dark Ages? Here are the symptoms:

  • Software projects continue to overshoot their schedules
  • Quality problems have neither disappeared nor lessened to an acceptable level
  • Budgets are regularly exceeded, sometimes wildly
  • Project estimates are inaccurate

I see two principal reasons. I’m certain there are others.

Our Focus on Technology

We are not Luddites resisting change; we love technology and embrace it wholeheartedly. We have a rich array of programming and testing tools at our disposal. Why, then, have problems with cost, schedule, and quality persisted?

One reason is that we focus on technical solutions to problems with many non-technical components. Suppose you have the choice of coding a project in COBOL or Visual Basic. (Suspend your disbelief for a moment and accept that both languages are suitable for the task at hand.) You will produce far less code in VB than in COBOL. You may see some slight reduction in cost and schedule, but it will not approach the 40–50% reduction in code volume that you will see if you choose VB over COBOL.

The reason is fairly simple. On a project of any size, coding and unit testing are not where most effort is expended. One commonly cited figure puts coding and unit testing at 30% of total project effort. This means that a 50% reduction in coding effort yields only a 15% reduction in project effort. While we want and need more effective tools for coding and testing, they have little impact on the remaining 70% of project effort.
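The arithmetic above can be sketched in a few lines; the 30% coding share and 50% reduction are the figures cited in the text, not measurements of any particular project:

```python
# If coding/unit testing is 30% of total project effort, halving that
# portion shrinks the whole project only modestly.
coding_share = 0.30      # coding + unit test as a fraction of total effort
coding_reduction = 0.50  # e.g., a more concise language halves coding effort

# Only the coding portion shrinks; the other 70% is untouched.
project_reduction = coding_share * coding_reduction
print(f"Total project effort reduced by {project_reduction:.0%}")
```

Running this prints a 15% reduction, which is why a dramatic coding-level improvement barely moves the project-level numbers.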

Research done by QSM in 2006 for IT projects and repeated in 2011 for Engineering Class projects found that the key differentiators between low-cost, quick-to-market, high-quality projects and their opposites were how well those projects dealt with the human factors of software development.1 As a side note, the choice of programming language was poorly correlated with success or failure.

Human factors are fuzzy, unlike the choice of a tool, and it can be hard for those of us who enjoy technology to admit that non-technical factors have a greater influence on our projects than technology does. But take a step back for a moment and remember a project where the project manager was ineffective, planning was poor, and team cohesiveness was non-existent. It wasn't a pleasant experience, and very likely the project wasn't fast to market or high in quality and productivity.

Deliberate Ignorance

There is an old myth about King Canute and the sea. One version of the story portrays the king as very full of himself and his authority. Canute had his throne placed on the seashore and commanded the tide not to come in. The tide ignored him and the king nearly drowned.

Software has been studied for a long time now, and we know quite a bit about how it works:

  • We know that the relationship between cost/effort and schedule is non-linear. Within a given range, the optimal team size and schedule can be determined.
  • We know that adding staff to reduce schedule is ineffective and costly.  
  • We know that organizations have characteristic staffing and productivity patterns that define their current capabilities. Unfortunately, we also know that this information is frequently ignored.  
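The first point can be illustrated with Putnam's software equation, the model behind QSM's work, in which size = productivity × effort^(1/3) × schedule^(4/3). Solving for effort at a fixed size shows effort climbing steeply as the schedule is compressed. The size and productivity values below are illustrative assumptions, not QSM benchmarks:

```python
# Sketch of the non-linear effort/schedule trade-off implied by
# Putnam's software equation:
#   size = productivity * effort**(1/3) * schedule**(4/3)
# Solving for effort at fixed size and productivity gives
# effort proportional to 1 / schedule**4.

def effort_for_schedule(size, productivity, months):
    """Person-months of effort implied by a target schedule in months."""
    return (size / (productivity * months ** (4 / 3))) ** 3

SIZE = 50_000        # e.g., lines of code (assumed)
PRODUCTIVITY = 300   # process productivity parameter (assumed)

for months in (10, 12, 14):
    effort = effort_for_schedule(SIZE, PRODUCTIVITY, months)
    print(f"{months} months -> {effort:.0f} person-months")
```

Because effort varies with the fourth power of schedule compression, squeezing this hypothetical project from 14 months to 10 roughly quadruples the effort, which is why "just add people to go faster" is so costly.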

The reasons are many; but ignoring past experience only guarantees that projects will continue to fail. 

“Facts do not cease to exist because they are ignored.” 2

1 The QSM Software Almanac, IT Metrics Edition, 2006.
2 Aldous Huxley, Complete Essays 2, 1926-29.
