Practical Software Estimation Measurement


An Empirical Examination of Brooks' Law

Building on some interesting research performed by QSM's Don Beckett, I take a look at how Brooks' Law stacks up against a sample of large projects from our database:

Does adding staff to a late project only make it later? It's hard to tell. Large team projects, on the whole, did not take notably longer than average. For small projects the strategy had some benefit, keeping deliveries at or below the industry average, but this advantage disappeared at the 100,000 line of code mark. At best, aggressive staffing may keep a project's schedule within the normal range of variability.

Contrary to Brooks' law, for large projects the more dramatic impacts of bulking up on staff showed up in quality and cost. Software systems developed using large teams had more defects than average, which would adversely affect customer satisfaction and, perhaps, repeat business. The cost was anywhere from 3 times greater than average for a 50,000 line of code system up to almost 8 times greater for a 1 million line of code system. Overall, mega-staffing a project is a strategy with few tangible benefits that should be avoided unless you have a gun pointed at your head. One suspects some of these projects found themselves in that situation: between a rock and a hard place.

How do managers avoid these types of scenarios? Software development remains a tricky blend of people and technical skills, but having solid data at your fingertips and challenging the conventional wisdom wisely can help you avoid costly mistakes.

Read the full post here.


Will My Project Finish on Time?

Events are said to be independent when the outcome of one event does not affect the probability of the other.

On the other hand, two events are dependent when the occurrence or nonoccurrence of one event does affect the probability of the other event.

This is an important distinction, as we shall see.

When using the multiplication rule for independent events, sometimes we use the percent chance of success and other times the percent chance of failure; which one depends on what we are trying to calculate. If we want the probability that all the independent events will fail, we multiply the chances of failure. If we want the chance that all the independent events will succeed, we multiply the chances of success. And if we want the probability that one or more of the events will fail, we use one minus the product of the chances of success, since one minus the chance that all of the events succeed is the chance that at least one fails.

Simple example: A software development project is going to proceed concurrently with the development of a new piece of hardware required to implement the software. Scheduled completion dates for both developments have been determined and a project plan has been created. Both projects can proceed independently until their respective completions (probably an unwarranted assumption, but I said this is a simple example!). Both projects must succeed in order for overall success to be achieved.
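A quick sketch of that arithmetic in Python (the 90 percent and 80 percent on-time probabilities below are invented for illustration, not figures from any actual plan):

```python
# Hypothetical, independent probabilities that each effort finishes on schedule.
p_software_on_time = 0.90   # assumed value, for illustration only
p_hardware_on_time = 0.80   # assumed value, for illustration only

# Multiplication rule for independent events:
# chance that BOTH finish on time (overall project success)
p_both_on_time = p_software_on_time * p_hardware_on_time            # 0.72

# chance that BOTH are late
p_both_late = (1 - p_software_on_time) * (1 - p_hardware_on_time)   # 0.02

# chance that AT LEAST ONE is late = 1 - chance that all succeed
p_at_least_one_late = 1 - p_both_on_time                            # 0.28

print(f"P(overall success)   = {p_both_on_time:.2f}")
print(f"P(both late)         = {p_both_late:.2f}")
print(f"P(at least one late) = {p_at_least_one_late:.2f}")
```

Even though each effort looks likely to succeed on its own, the chance that the overall plan holds is noticeably lower than either probability by itself.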


Finding Defects Efficiently

Several weeks ago I read an interesting study on finding bugs in giant software programs:

The efficiency of software development projects is largely determined by the way coders spot and correct errors.

But identifying bugs efficiently can be a tricky business when the various components of a program can contain millions of lines of code. Now Michele Marchesi from the University of Cagliari and a few pals have come up with a deceptively simple way of efficiently allocating resources to error correction.

...Marchesi and pals have analysed a database of Java programs called Eclipse and found that the size of these programs follows a lognormal distribution. In other words, the database, and by extension any large project, is made up of lots of small programs but only a few big ones.

So how are errors distributed among these programs? It would be easy to assume that the errors are evenly distributed per 1000 lines of code, regardless of the size of the program.

Not so, say Marchesi and co. Their study of the Eclipse database indicates that errors are much more likely in big programs. In fact, in their study, the top 20 per cent of the largest programs contained over 60 per cent of the bugs.

That points to a clear strategy for identifying the most errors as quickly as possible in a software project: just focus on the biggest programs.
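One way to see why this strategy pays off is the shape of the size distribution itself. The sketch below uses made-up lognormal parameters (not values fitted to the Eclipse data) to show that the largest 20 per cent of modules already hold well over half of the code; even under the naive "even defects per 1,000 lines" assumption they would therefore hold most of the bugs, and the study's finding that defect density rises with size only strengthens the case.

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed lognormal size distribution for modules (illustrative parameters,
# not fitted to Eclipse).
sizes = rng.lognormal(mean=6.0, sigma=1.2, size=5_000)

# Take the largest 20 per cent of modules by size.
largest_first = np.argsort(sizes)[::-1]
top_20_percent = largest_first[: len(sizes) // 5]

code_share = sizes[top_20_percent].sum() / sizes.sum()
print(f"Top 20% of modules contain roughly {code_share:.0%} of all code")
# Even if defects were spread evenly per 1,000 lines, those modules would hold
# about the same share of the bugs; Marchesi's result says the share is higher still.
```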

Nicole Tedesco adds her thoughts:


Practical Software Sizing Methods

Sizing is arguably the most challenging part of any software estimate. Without a notion of functional size, managers may find it difficult to negotiate realistic schedules based on their demonstrated ability to deliver software. They are unable to show empirically why the twelve-person team that worked so well on a 150,000 ESLOC project over six months not only fails to deliver a 75,000 ESLOC project in half the time, but produces an error-ridden product that infuriates the customer. Unlike manufacturing shoes, software development is full of non-linear relationships between size, time, effort, and defects. What data-driven estimation does successfully is arm managers with the ability to sanity-check their current plans against past performance and negotiate achievable outcomes based on a realistic assessment of how much functionality can be built within a set time frame and resource profile.
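To make that non-linearity concrete, here is a rough sketch built on a simplified form of Putnam's software equation (size proportional to effort^(1/3) times time^(4/3)); the productivity coefficient is back-calculated from the hypothetical 150,000 ESLOC project above, and all numbers are illustrative rather than QSM benchmark values.

```python
# Simplified Putnam-style relationship: size = C * effort**(1/3) * time**(4/3).
# C is calibrated from the hypothetical first project; all figures are illustrative.

people = 12
time_1, size_1 = 6.0, 150_000          # months, ESLOC
effort_1 = people * time_1             # 72 person-months

C = size_1 / (effort_1 ** (1 / 3) * time_1 ** (4 / 3))

# Same twelve-person team asked to deliver in half the time:
time_2 = 3.0
effort_2 = people * time_2             # 36 person-months
size_2 = C * effort_2 ** (1 / 3) * time_2 ** (4 / 3)

print(f"Deliverable size in {time_2:.0f} months: {size_2:,.0f} ESLOC")
# Roughly 47,000 ESLOC -- far short of the 75,000 ESLOC that linear
# "half the size, half the time" intuition would expect.
```

The specific coefficient matters less than the shape: with staffing held constant, halving the schedule cuts deliverable functionality by far more than half.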

So how do project managers get a better handle on size? The best place to start is with establishing a practical method for size estimation.

Read the full white paper!


30 Years of Innovation

Progress, far from consisting in change, depends on retentiveness. When change is absolute... no direction is set for possible improvement... when experience is not retained, as among savages, infancy is perpetual. Those who cannot remember the past are condemned to repeat it.

- George Santayana, The Life of Reason

The French have a saying: "Plus ça change, plus c'est la même chose" (the more things change, the more they stay the same).

That time-tested axiom aptly summarizes QSM's 30 years of experience in the software industry. In the three decades since a senior Army Colonel first explored the relationship between software size, schedule, effort, and defects, Larry Putnam's original work has been refined, retested, and ultimately reinforced by the dizzying pace of modern software development. Tools and methods du jour continue to replace their predecessors in quick succession, but our research shows a reassuring constancy in the fundamentals of software development.

In retrospect, it is not surprising that Larry's work stood the test of time. His approach - practical, results-oriented software measurement - was dictated by a feeling familiar to beleaguered developers: pain. When he arrived at the Army Computer Systems Command in the mid-1970s, software cost estimation relied on a simplistic productivity measure: lines of code per person-month of effort. Dividing the estimated size of the contemplated software product by this ratio yielded total effort; dividing that effort by the planned staffing level gave a schedule estimate that could be tweaked as needed.
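For contrast, that old linear recipe looked something like this (the size, productivity, and staffing figures are invented for illustration):

```python
# The simplistic mid-1970s calculation, with made-up inputs.
estimated_size = 50_000      # lines of code (assumed)
productivity = 100           # lines of code per person-month (assumed)
planned_staff = 10           # people (assumed)

total_effort = estimated_size / productivity   # 500 person-months
schedule = total_effort / planned_staff        # 50 months

print(f"Effort:   {total_effort:.0f} person-months")
print(f"Schedule: {schedule:.0f} months")
# Straight-line arithmetic like this ignores the non-linear size, schedule,
# effort, and defect trade-offs that Putnam's work went on to quantify.
```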

System Characteristics