Software Estimation Best Practices


For More Accurate Software Estimates, Avoid Hidden Risk Buffers

A colleague of mine recently sent me a blog post explaining the difference between project contingency and padding.  The blogger, Ms. Brockmeier, made the distinction that padding is what often gets added to an individual’s estimate of the effort required to perform a task (in her example, a software development task) to account for project ‘unknowns’.  The estimator determines the most likely required effort, then pads it with a little more effort to arrive at an estimate to which he or she can commit.  Thus, padding represents an undisclosed effort reserve (and an implied schedule reserve) to buffer against potential risk.  Contingency reserve, she explains, is “an amount of money in the budget or time in the schedule seen and approved by management.  It is documented.  It is measured and therefore managed.”  Ms. Brockmeier is correct in promoting contingency as the better management tool.  The challenge is having a method to measure and document this contingency and the known unknowns it is buffering.

Implicit Risk Buffer

Padding is a natural result of bottom-up, effort-based estimation techniques.  Estimating low-level WBS elements creates more opportunity for padding, because the number of unknowns grows with the task list.  The estimator is consciously or unconsciously assessing the risk of each task, considering its dependencies and complexities.  Implicit in the effort estimate are: (1) an assessment of product size and complexity, and (2) a productivity valuation.
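
To make the distinction concrete, here is a minimal sketch in Python (the task names and percentages are invented for illustration) contrasting hidden per-task padding with a single documented contingency reserve:

```python
# Hypothetical illustration: padding each task versus one documented
# contingency reserve. Task names and numbers are invented for the example.

tasks = {"design": 80, "build": 200, "test": 120}  # most-likely effort (hours)

# Padding: each estimator quietly adds a personal risk buffer.
PAD = 0.25  # 25% hidden buffer per task
padded_total = sum(effort * (1 + PAD) for effort in tasks.values())

# Contingency: the raw estimates are reported as-is, and one visible
# reserve is documented and managed at the project level.
CONTINGENCY = 0.25
raw_total = sum(tasks.values())
reserve = raw_total * CONTINGENCY

print(f"Padded total (buffer invisible):  {padded_total:.0f} hours")
print(f"Raw total + documented reserve:   {raw_total:.0f} + {reserve:.0f} hours")
```

The totals are identical; the difference is that only the second buffer is visible, and can therefore be measured, managed, and released if the risk never materializes.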

Blog Post Categories 
Risk Management Estimation

Webinar Replay Now Available: Successful Estimating Processes Using the SLIM API

If you were unable to attend our most recent webinar, Successful Estimating Processes Using the SLIM API, a replay is now available.

How do best-in-class development organizations achieve maximum return on investment from their estimation programs? By leveraging the SLIM API to integrate estimation tools with detail-oriented products, development teams can simplify estimation processes and broaden the estimation program user base. Presented by Carl Engel of IBM Global Services, Scott Lancaster of State Street, and Larry Putnam, Jr. of QSM, this webinar explores two successful implementations of the SLIM API between third-party tools and the SLIM Suite.

Carl Engel is the Estimating Program Manager for IBM's Global Business Services, responsible for the development and deployment of performance benchmarking and estimating processes, methods, and tools, including support for nearly 1,000 SLIM Suite users. Carl has been with IBM for 12 years as an Associate Partner and previously served as the program manager for IBM's project management methodology and tools. He is an IBM certified Executive Project Manager and PMP with over 30 years of program and project management experience, primarily in very large-scale efforts in the nuclear industry and U.S. National Laboratories.

Blog Post Categories 
Webinars Estimation

Losses Loom Larger Than Gains

Anyone who has gambled (and lost) knows the sting of losing.  In 1979, Daniel Kahneman and Amos Tversky, pioneers in the field of behavioral economics, theorized that losses loom larger than gains: essentially, a person who loses $100 loses more satisfaction than someone who wins $100 gains. Behavioral economics weaves psychology and economics together to map the irrational man, the foil of economics' rational man.
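
Prospect theory puts numbers on "looming larger." As a rough illustration, here is the value function Tversky and Kahneman later fitted in 1992 (the parameter values are theirs; the code is just a sketch):

```python
# Tversky & Kahneman's (1992) value function: gains and losses are felt
# relative to a reference point, and losses are weighted more heavily.
ALPHA = 0.88   # diminishing sensitivity to gains
BETA = 0.88    # diminishing sensitivity to losses
LAMBDA = 2.25  # loss-aversion coefficient: losses weigh ~2.25x

def subjective_value(x: float) -> float:
    """Felt value of a monetary gain (x > 0) or loss (x < 0)."""
    if x >= 0:
        return x ** ALPHA
    return -LAMBDA * ((-x) ** BETA)

print(subjective_value(100))   # ~57.5: the pleasure of winning $100
print(subjective_value(-100))  # ~-129.5: the pain of losing $100
```

By this model, losing $100 is felt more than twice as strongly as winning $100.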

How can I leverage this theory for software development?

According to the QSM IT Software Almanac (2006), worst-in-class projects took 5.6 times as long to complete as their best-in-class counterparts, used roughly 15 times as much effort with a median team size of 17, and were less likely to track defects.

One way to leverage your worst-in-class projects is to use them as history files in SLIM-Estimate, which adjusts PI, defect tuning, and other settings to match how you have developed software in the past. Don Beckett recently discussed how to tune effort for best in class analysis and design.

Another way to leverage your worst-in-class projects is to build a "project graveyard," that is, a database of your organization's worst projects, and load it into SLIM-Metrics. There you can analyze duration, peak staff, average staff, and defects to view your own organization's weaknesses. Depending on how well documented your SLIM-DataManager database is, you could also analyze some of the custom metrics that ship with SLIM-Metrics, such as who the project was built for (the customer metric) and its complexity.
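
The graveyard idea is easy to prototype outside any particular tool (SLIM-Metrics handles this natively). Here is a minimal, tool-agnostic sketch, with project records invented purely for illustration:

```python
# A toy "project graveyard": median metrics across an organization's
# worst projects, plus a slice by a custom metric (customer type).
from statistics import median

graveyard = [
    # (name, duration_months, peak_staff, defects, customer)
    ("billing-rewrite", 22, 25, 410, "internal"),
    ("portal-v2",       18, 19, 355, "external"),
    ("data-migration",  26, 17, 290, "internal"),
]

print("median duration (months):", median(p[1] for p in graveyard))
print("median peak staff:", median(p[2] for p in graveyard))
print("median defects:", median(p[3] for p in graveyard))

# Slice by who the project was built for (the customer metric):
internal = [p[0] for p in graveyard if p[4] == "internal"]
print("internal-customer projects:", internal)
```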

Blog Post Categories 
SLIM-Metrics SLIM-DataManager

Webinar: Successful Estimating Processes Using the SLIM API

On April 12, 2012 at 1:00 PM EDT, QSM will host a webinar focused on two successful implementations of the SLIM API, presented by IBM's Carl Engel, State Street's Scott Lancaster, and QSM's Larry Putnam, Jr.

How do best-in-class development organizations achieve maximum return on investment from their estimation programs? By leveraging the SLIM API to integrate estimation tools with detail-oriented products, development teams can simplify estimation processes and broaden the estimation program user base. Presented by Carl Engel of IBM Global Services, Scott Lancaster of State Street, and Larry Putnam, Jr. of QSM, this webinar explores two successful implementations of the SLIM API between third-party tools and the SLIM Suite.

Carl Engel is the Estimating Program Manager for IBM's Global Business Services, responsible for the development and deployment of performance benchmarking and estimating processes, methods, and tools, including support for nearly 1,000 SLIM Suite users. Carl has been with IBM for 12 years as an Associate Partner and previously served as the program manager for IBM's project management methodology and tools. He is an IBM certified Executive Project Manager and PMP with over 30 years of program and project management experience, primarily in very large-scale efforts in the nuclear industry and U.S. National Laboratories.

Blog Post Categories 
Webinars SLIM Suite

Software Cost Estimation Article in The DACS Journal

The February issue of the DACS Journal of Software Technology focuses on Software Cost Estimation and Systems Acquisition. My contribution, which you can read here, addresses the challenges faced by estimators and the value of establishing a historical baseline to support smarter planning, counter unrealistic expectations, and maximize productivity.

Using several recent studies, my paper addresses the following questions:

  • What is estimation accuracy, and how important is it really?
  • What is the connection between the Financial Crisis of 2008 and software estimation?
  • Why do small team projects outperform large team projects?
  • How can you find the optimal team size for your project?

Read the full article.

Blog Post Categories 
Estimation Articles

Part III: The Caveats

In Part 1 of How Much Estimation?, we noted that there is an optimal amount of time and effort to spend producing an estimate, based on the target cost of a project and the business practice being supported.

In Part 2: Estimate the Estimate, we saw that the formula for this optimum (as measured at NASA) calculates the Cost of Estimate from the Target Cost raised to the power 0.35 (approximately the cube root of the Target Cost).  The factor that defines the business practice (either by early lifecycle phase or perhaps by the “expected precision” of the estimate) is a linear multiplier ranging from 24 to 115.
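
Read this way, the rule is simple enough to compute on a napkin. Here is a minimal sketch, assuming the business-practice factor simply multiplies the power term and that the low end of the 24 to 115 range corresponds to early, rough estimates (both assumptions are my reading of the description above):

```python
# DSN-style rule of thumb from the post:
#   cost_of_estimate = factor * target_cost ** 0.35,  with 24 <= factor <= 115

def cost_of_estimate(target_cost: float, factor: float) -> float:
    """Suggested spend on producing the estimate, in the same currency units."""
    return factor * target_cost ** 0.35

for target in (100_000, 1_000_000, 10_000_000):
    low = cost_of_estimate(target, 24)    # early-phase, rough estimate (assumed)
    high = cost_of_estimate(target, 115)  # late-phase, high-precision estimate (assumed)
    print(f"${target:>10,} project: spend ${low:>6,.0f} to ${high:>6,.0f} estimating")
```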

Those Caveats!

I mentioned that there were caveats with the calculation.  Here they are:

Blog Post Categories 
Estimation SLIM-Estimate

Part II: Estimate the Estimate

In Part 1 of How Much Estimation?, we observed that spending either too much or too little time and effort on estimating is less than optimal.  Combining:

  • The cost of producing an estimate—which is a function of the number of people working on the estimate and how long they work
  • The cost of variance in the results of the estimate—that is, how much the estimate varies from experienced actuals and what that variance will likely cost the project.  This is typically a function of the number of unknowns at the time of estimating for which the project cannot easily adjust and which will require additional unplanned resources of time, effort, and staff.

We get a U-shaped curve, at the bottom of which is the optimal time: we’ve spent enough time and effort to minimize the sum of the cost of estimate and the cost of variance.
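
To see the shape, here is a toy model; the functional forms and constants are invented purely to make the U visible, not measured from any real project:

```python
# Toy U-shaped curve: the cost of estimating rises with the time spent,
# while the cost of variance falls as unknowns get resolved.

def total_cost(weeks: float) -> float:
    cost_of_estimate = 5_000 * weeks          # people x time spent estimating
    cost_of_variance = 400_000 / (1 + weeks)  # shrinks as unknowns resolve
    return cost_of_estimate + cost_of_variance

# Scan for the bottom of the U.
candidates = [w / 10 for w in range(1, 301)]  # 0.1 to 30.0 weeks
optimum = min(candidates, key=total_cost)
print(f"optimum near {optimum:.1f} weeks, total cost ${total_cost(optimum):,.0f}")
# -> optimum near 7.9 weeks, total cost $84,444
```

The exact numbers are meaningless, but the minimum between the two extremes is the point we are after.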

The question is: how do we calculate this point?  It will not be the same for a very large, complex project as for a very small, simple one.  Nor do we want a complicated, time-consuming approach to calculating the cost of estimate; it should be quick and simple.

NASA’s Deep Space Network (DSN) project developed a mechanism for this calculation based on two simple parameters:

Target Cost of Project

This is the goal cost of the project as first envisaged in the project concept.  It is NOT the estimated cost of the project (which has not been calculated yet).  Projects on which we expect and plan to spend a lot of money should clearly have more time and effort spent on estimating, simply because more is at risk.

Blog Post Categories 
Estimation

How Much Estimation?

How much time and effort should we spend to produce an estimate?  Project estimation, like any other activity, must balance the cost of the activity with the value produced.

There are two extreme situations that organizations must avoid:

The Drive-By Estimate

The drive-by estimate occurs when a senior executive corners a project manager or developer and requires an immediate answer to the estimation questions: “When can we get this project done?”, “How much will it cost?”, and “How many people do we need?”  (The equally pertinent questions “How much functionality will we deliver?” and “What will the quality be?” seem to get much less attention.)

Depending on the pressure applied, the estimator must cough up some numbers rather quickly. Since the estimate has not been given much time and attention, it is usually of low quality. Making a critical business decision based on such a perfunctory estimate is dangerous and often costly.

The Never-Ending Estimate

Less common is the estimation process that goes on and on.  In order to make an estimate “safer,” an organization may seek to remove uncertainty from the project and from the data used to create the estimate.  One way to do that is to analyze the situation more and more: in general, the more time and effort we spend producing an estimate, the more precise and defensible the result.  The trouble is that the work required to remove all the uncertainty is pretty much the same work required to run the project.  So companies can end up in the odd situation where, in order to decide whether they should do the work and what resources to allocate to it, they actually do the work and use up the resources.

Blog Post Categories 
Estimation

"The Difference Engine" by Phillip Armour in Communications of the ACM

January's Communications of the ACM featured an article by QSM consultant Phillip Armour. "The Difference Engine" focuses on building teams of differently skilled people. The article draws in part on The Difference, a book by Scott Page, Professor of Complex Systems at the University of Michigan, which shows the power of cognitive diversity in building systems and solving problems. Phil will elaborate on this subject in an upcoming series on the QSM blog, so stay tuned!

Download the PDF

Phil is a regular contributor to Communications of the ACM. You can read more of his articles here.


Blog Post Categories 
QSM News Articles

Webinar Replay Now Available: Shifting to Agile Methods - The Keys for Long-Term Success

If you were unable to attend our webinar, Shifting to Agile Methods - The Keys for Long-Term Success, a replay is now available. 

Changes to the software development process, such as moving toward Agile methods, must demonstrate sustainable results over time, not just short-term wins.  There are two keys to long-term success that should be considered up front: the new process must be repeatable and measurable.

In this session, AccuRev’s Chris Lucca and QSM’s Larry Putnam, Jr. explore these two keys to success.  

Specifically, they cover:

  • The state of software development projects yesterday versus today, and the impact on the software development process
  • The techniques and tools that can help a team to build a process that is repeatable and scalable, even across a distributed team
  • Which metrics and measurement processes are important for measuring the results and improvements of implementing repeatable and scalable processes
  • How to use metrics to estimate project schedules, resources and reliability, and monitor project progress and forecast completion
  • Ways to benchmark the results at project completion for time to market, cost performance and reliability – all of which provide the business case for continued investments in technology and repeatable and scalable processes

View the webinar replay.

View recordings of all of our past webinars.

Blog Post Categories 
Webinars Agile