Practical Software Estimation Measurement

Blogs

Code Counters and Size Measurement

Regardless of which size measures (Effective SLOC, function points, objects, modules, etc.) your organization uses to measure software size, code counters provide a fast and easy way to measure developed functionality. If your organization uses Effective (new and modified) SLOC, the output from an automated code counter can generally be used "as is". If you use more abstract size measures (function points or requirements, for example), code counts can be used to calculate gearing factors such as average SLOC/FP or SLOC/requirement.
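As a sketch of the idea, a gearing factor can be derived from completed projects that were counted in both units. The project names and figures below are invented purely for illustration:

```python
# Derive an average gearing factor (SLOC per function point) from
# completed projects that were counted in both units.
# All project figures here are hypothetical.
projects = [
    {"name": "Billing", "sloc": 52_000, "fp": 400},
    {"name": "Portal",  "sloc": 30_000, "fp": 250},
    {"name": "Reports", "sloc": 18_000, "fp": 150},
]

# Per-project ratios, then a simple average across projects.
ratios = [p["sloc"] / p["fp"] for p in projects]
avg_gearing = sum(ratios) / len(ratios)

print(f"Average gearing factor: {avg_gearing:.0f} SLOC/FP")
```

The resulting factor can then be used to convert function-point estimates into SLOC for tools that expect a line-based size measure.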

The QSM Code Counters page has been updated and extended to include both updated version information and additional code counters. Though QSM neither endorses nor recommends the use of any particular code counting tool, we hope the code counter page will be a useful resource that supports both size estimation and the collection of historical data.

Blog Post Categories 
Benchmarking Software Sizing Estimation

QSM Database Update

It’s time to update QSM’s industry trends and we need your help! Contributing data ensures that the database reflects a wide spectrum of project types, languages, and development methods. It helps us conduct ground-breaking research and improve our suite of estimation, tracking, and benchmarking tools. Contributors benefit from the ability to sanity-check estimates, ongoing projects, and completed projects against the best industry trends in the business.

 We're validating over 400 new projects, but we can always use more – especially in the Real Time, Microcode, and Process Control application domains. So what do you need to do to ensure your firm is represented in the next trend line update? That’s easy! Simply send us your DataManager (.smp files) or completed SLIM-Control (.scw) workbooks. Here’s the recommended minimum data set:

  • Project Name
  • Status = “Completed” only – no estimates or in-progress projects
  • Application type and sub-type (if applicable)
  • Phase 3 time, either calculated from the phase start/end dates or entered as a value (e.g., 3.2 months)
  • Phase 3 effort
  • Effective Size (the number of new and/or modified functional size units used to measure the application – objects, SLOC, function points, database tables). Please include a gearing factor if the project was sized in something other than Source Lines of Code

Additional information allows us to perform more sophisticated queries.

Blog Post Categories 
QSM News

Replay Now Available for QSM's High Performance Benchmark Consortium Webinar

Our recent webinar, "Introduction to the High Performance Benchmark Consortium," was a great success and we are already looking forward to planning our next presentation.  Joe Madden received a lot of insightful questions regarding our new consulting program.  We are aware that your time is valuable and scheduling can be a challenge, so we have recorded a replay, including Q&A, for anyone who was unable to attend the scheduled webinar.

To view the replay, click here.

Blog Post Categories 
Webinars Benchmarking Consulting

High Performance Benchmark Consortium Webinar Announced

I am pleased to announce that on Thursday, February 25 at 1:00 PM EST, QSM will be hosting a webinar based on our new High Performance Benchmark Consortium.

QSM has introduced a program specifically designed to help software development or acquisition organizations quantify and demonstrate performance improvement over time. The High Performance Benchmark Consortium is for clients who want to be best in class software producers and are willing to be active participants in the program. In today’s economic environment it is more important than ever for both suppliers and acquirers to compete more effectively and provide value to their customers. Members of the Consortium gain access to proprietary research that leverages the QSM historical benchmark database of over 8,000 validated software projects.

Presented by benchmarking expert and head of QSM Consulting, Joe Madden, this webinar will discuss:

  • the major components of the program
  • the different levels of membership participation
  • the benefits of being a member
  • sample deliverables that a typical member would receive

To register for this event, simply follow this link and click "Register."

Blog Post Categories 
Webinars Benchmarking

Performance Benchmarking Tables

QSM consultant Paul Below has posted some quick performance benchmarking tables for IT, engineering class, and real time software.

The tables contain average values for the following metrics at various size increments:

  • Schedule (months)
  • Effort (Person Months)
  • Average Staff (FTE)
  • Mean Time to Defect (Days)
  • SLOC / PM

Two insights that jump out right away:

1. Application complexity is a big productivity driver. IT (Business) software solves relatively straightforward and well-understood problems. As algorithmic complexity increases, average duration, effort, and team size increase rapidly compared with IT systems of the same size.

2. Small teams and small projects produce fewer defects. Projects over 100,000 effective (new and modified) source lines of code all averaged a Mean Time to Defect of under one day. We see this over and over again in the QSM database: small projects with small teams consistently produce higher reliability at delivery.

Blog Post Categories 
Benchmarking

Using Control Bounds to Assess Ongoing Projects

When he created control charts in the 1920s, Walter Shewhart was concerned with two types of mistakes:

  • Assuming common causes were special causes
  • Assuming special causes were common causes

Since it is not possible to make the rate of both of these mistakes go to zero, managers who want to minimize the risk of economic loss from both types of error often use some form of Statistical Process Control.
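A classic Shewhart-style control chart flags points outside limits set at the mean plus or minus three standard deviations. A minimal sketch, using invented weekly defect counts:

```python
import statistics

# Weekly defect counts from a hypothetical ongoing project.
samples = [12, 15, 11, 14, 13, 16, 12, 14]

mean = statistics.mean(samples)
sigma = statistics.stdev(samples)  # sample standard deviation

# Shewhart-style three-sigma control limits.
ucl = mean + 3 * sigma
lcl = max(0.0, mean - 3 * sigma)  # defect counts cannot go negative

# Points outside the limits suggest a special (assignable) cause.
out_of_control = [x for x in samples if x > ucl or x < lcl]
print(f"mean={mean:.1f}, UCL={ucl:.1f}, LCL={lcl:.1f}, outliers={out_of_control}")
```

Points inside the limits are treated as common-cause variation; only points outside them warrant a search for a special cause.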

SLIM-Control control bounds

The control bounds in SLIM-Control perform a related, but not identical function.

Read more...

Blog Post Categories 
SLIM-Control

"Managing Productivity with Proper Staffing" Webinar Replay Available

Just before the holidays, we hosted our first in-house webinar, "Managing Productivity with Proper Staffing Strategies." Confronted with challenges presented by the current economy, we see more and more systems development groups trying to do more with less. The ultimate goal is to maximize productivity and minimize defects, but many teams struggle to get there. It is possible, but the most effective methods for achieving maximum efficiency are counter-intuitive. People tend to assume that more effort will produce more product; in fact, using less effort is often more effective. Presented by industry expert John Bryant, this webinar explains and demonstrates how to maximize productivity while minimizing cost and defects.

John Bryant has over forty years of IT experience.  He spent the last several years using the SLIM Suite of Tools to improve the software development process by properly estimating, tracking, and analyzing software development efforts.  His expertise includes project management, teaching and mentoring from initial project evaluation and planning through construction. 

In case you missed it, you can view the replay of this webinar here.

Blog Post Categories 
Defects Team Size Webinars

Calculating Mean Time to Defect

MTTD is Mean Time to Defect: the average time between defects (mean is the statistical term for average). A related term is MTTF, or Mean Time to Failure, usually defined as the average time between defects serious enough to cause the system to fail.

Is MTTD hard to compute?  Does it require difficult metrics collection? Some people I have spoken to think so.  Some texts think so, too.  For example:

Gathering data about time between failures is very expensive.  It requires recording the occurrence time of each software failure.  It is sometimes quite difficult to record the time for all the failures observed during testing or operation.  To be useful, time between failures data also requires a high degree of accuracy.  This is perhaps the reason the MTTF metric is not widely used by commercial developers.


But this is not really true.  The MTTD or MTTF can be computed from basic defect metrics.   All you need is:

  • the total number of defects or failures and
  • the total number of months, weeks, days, or hours during which the system was running or being tested and metrics were recorded.  You do not need the exact time of each defect.

Here is an example. I will compute MTTD two ways to demonstrate that the results are identical. This table contains defect metrics for the first three days of operation of a system that runs 24 hours a day, five days a week:

Blog Post Categories 
Defects

An Empirical Examination of Brooks' Law

Building on some interesting research performed by QSM's Don Beckett, I take a look at how Brooks' Law stacks up against a sample of large projects from our database:

Does adding staff to a late project only make it later? It's hard to tell. Large team projects, on the whole, did not take notably longer than average. For small projects the strategy had some benefit, keeping deliveries at or below the industry average, but this advantage disappeared at the 100,000 line of code mark. At best, aggressive staffing may keep a project's schedule within the normal range of variability.

Contrary to Brooks' law, for large projects the more dramatic impacts of bulking up on staff showed up in quality and cost. Software systems developed using large teams had more defects than average, which would adversely affect customer satisfaction and, perhaps, repeat business. The cost was anywhere from 3 times greater than average for a 50,000 line of code system up to almost 8 times greater for a 1 million line of code system. Overall, mega-staffing a project is a strategy with few tangible benefits that should be avoided unless you have a gun pointed at your head. One suspects some of these projects found themselves in that situation: between a rock and a hard place.

How do managers avoid these types of scenarios? Software development remains a tricky blend of people and technical skills, but having solid data at your fingertips and challenging the conventional wisdom wisely can help you avoid costly mistakes.

Read the full post here.

Blog Post Categories 
Metrics Team Size

Will My Project Finish on Time?

Events are said to be independent when the outcome of one event does not affect the other.

On the other hand, two events are dependent when the occurrence or nonoccurrence of one event does affect the probability of the other event.

This is an important distinction, as we shall see.

When using the multiplication rule for independent events, we sometimes use the percent chance of success and other times the percent chance of failure; the choice depends on what we are trying to calculate. To find the probability that all of the independent events will fail, multiply the chances of failure. To find the probability that all of them will succeed, multiply the chances of success. And to find the probability that one or more of the events will fail, take one minus the product of the chances of success (one minus the chance that all events succeed is the chance that at least one fails).

Simple example: A software development project is going to proceed concurrently with the development of a new piece of hardware required to implement the software. Scheduled completion dates for both developments have been determined and a project plan has been created. Both projects can proceed independently until their respective completions (probably an unwarranted assumption, but I said this is a simple example!). Both projects must succeed in order for overall success to be achieved.
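The arithmetic for this example can be sketched directly. The success probabilities below are assumptions for illustration, not figures from the original post:

```python
# Probability that two independent developments both succeed,
# and that one or more fails. Probabilities are illustrative.
p_software = 0.80   # chance the software project finishes on time
p_hardware = 0.70   # chance the hardware is ready on time

# Multiplication rule for independent events.
p_both_succeed = p_software * p_hardware       # overall success
p_one_or_more_fail = 1 - p_both_succeed        # at least one fails

print(f"P(overall success)  = {p_both_succeed:.2f}")
print(f"P(one or more fail) = {p_one_or_more_fail:.2f}")
```

Even with individually reasonable odds (80% and 70%), the chance of overall success is only 56%, which is why chained dependencies erode schedule confidence so quickly.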

Blog Post Categories 
Risk Management MasterPlan