Practical Software Estimation Measurement

Blogs

SLIM-Estimate and IBM Rational Focal Point Integration Now Available

QSM and IBM Rational are pleased to announce yet another point of integration between SLIM and Rational tools. Most projects overrun their schedules and budgets because good estimates were lacking at the time commitments were made. Project and portfolio management tools like Rational Focal Point are useful for analyzing proposals and building a solid business case on which to base project approvals. These products focus primarily on justifying the project through the business benefits and/or savings derived from implementing the proposal’s business requirements. They do not assess the risk of failing to meet a business stakeholder’s desired project schedule and budget. QSM’s integration with Rational Focal Point brings this powerful capability to market and helps identify high-risk proposals before they enter an organization’s project stream. It can significantly reduce the schedule slippage and cost overruns that can reach tens of millions of dollars for many large Fortune 1000 organizations.

Read more about the SLIM-Estimate and Rational Focal Point integration here, or learn more about QSM's IBM Rational Solutions on our Partners Page.

Blog Post Categories 
SLIM-Estimate, IBM Rational

The Size-Productivity Paradox, Part I

From time to time, questions from clients get us thinking:

After yesterday's Web presentation on the QSM Benchmarking Consortium, I went to your Web site and found the paper "Performance Benchmark Tables." I noticed that delivery rates in both SLOC/PM and FP/PM increase as average project size increases. This seems counterintuitive: are the Performance Benchmark Tables correct?

That's a great question. Our data definitely shows an upward trend in productivity as application size increases. This is true whether we use measures like QSM's PI (Productivity Index) or ratio-based productivity measures (SLOC or FP per person-month of effort). The QSM industry benchmark trends behave the same way: as projects get larger, average productivity increases.

Paul Below recently took another look at productivity data using several popular statistical software packages. The question he was trying to answer was, “Does productivity (measured as SLOC/PM) always increase with system size, or could the size-productivity relationship actually behave differently in certain regions of the size spectrum?” To answer it, he used regression residuals to evaluate the size/productivity trend.
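The approach is standard regression diagnostics. As a minimal sketch in Python (the numbers below are illustrative, not QSM data), you can fit a straight line to productivity versus size in log-log space and then inspect the residuals:

    import numpy as np

    # Illustrative data: effective size (SLOC) and productivity (SLOC/PM).
    size = np.array([5_000, 12_000, 30_000, 75_000, 150_000, 400_000])
    prod = np.array([150, 220, 310, 450, 600, 900])

    # Fit a straight line in log-log space: log(prod) = a + b*log(size).
    log_size, log_prod = np.log(size), np.log(prod)
    b, a = np.polyfit(log_size, log_prod, 1)

    # Residuals = observed minus fitted. Random scatter supports a single
    # upward trend; runs of same-signed residuals in one size region would
    # suggest the relationship bends there.
    residuals = log_prod - (a + b * log_size)
    for s, r in zip(size, residuals):
        print(f"size={s:>7,} SLOC  residual={r:+.3f}")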

Blog Post Categories 
Productivity

Code Counters and Size Measurement

Regardless of which size measures (Effective SLOC, function points, objects, modules, etc.) your organization uses to measure software size, code counters provide a fast and easy way to measure developed functionality. If your organization uses Effective (new and modified) SLOC, the output from an automated code counter can generally be used "as is". If you use more abstract size measures (function points or requirements, for example), code counts can be used to calculate gearing factors such as average SLOC/FP or SLOC/requirement.
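As a hypothetical illustration of the gearing-factor arithmetic (the project names and counts below are invented), the factor is simply counted SLOC divided by the abstract size measure, averaged over projects that have both counts:

    # Hypothetical projects with both a code count and a function point count.
    projects = [
        {"name": "Billing rewrite", "effective_sloc": 54_000, "function_points": 480},
        {"name": "Claims portal",   "effective_sloc": 31_500, "function_points": 300},
        {"name": "Batch feeds",     "effective_sloc": 12_800, "function_points": 115},
    ]

    total_sloc = sum(p["effective_sloc"] for p in projects)
    total_fp = sum(p["function_points"] for p in projects)

    # Average gearing factor: SLOC per function point.
    gearing_factor = total_sloc / total_fp
    print(f"Gearing factor: {gearing_factor:.1f} SLOC/FP")

    # The factor converts a function point size into the SLOC terms
    # a code counter reports.
    estimated_fp = 250
    print(f"Estimated size: {estimated_fp * gearing_factor:,.0f} SLOC")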

The QSM Code Counters page has been updated and extended to include both updated version information and additional code counters. Though QSM neither endorses nor recommends the use of any particular code counting tool, we hope the code counter page will be a useful resource that supports both size estimation and the collection of historical data.

Blog Post Categories 
Benchmarking, Software Sizing, Estimation

QSM Database Update

It’s time to update QSM’s industry trends and we need your help! Contributing data ensures that the database reflects a wide spectrum of project types, languages, and development methods. It helps us conduct ground-breaking research and improve our suite of estimation, tracking, and benchmarking tools. Contributors benefit from the ability to sanity-check estimates, ongoing projects, and completed projects against the best industry trends in the business.

We're validating over 400 new projects, but we can always use more – especially in the Real Time, Microcode, and Process Control application domains. So what do you need to do to ensure your firm is represented in the next trend line update? That’s easy! Simply send us your DataManager (.smp) or completed SLIM-Control (.scw) workbooks. Here’s the recommended minimum data set:

  • Project Name
  • Status = “Completed” only – no estimates or in-progress projects
  • Application type and sub-type (if applicable)
  • Phase 3 time. This can be calculated from the phase start/end dates or entered directly as a value (e.g., 3.2 months) – see the sketch after this list
  • Phase 3 effort
  • Effective Size (the number of new and/or modified functional size units used to measure the application – objects, SLOC, function points, database tables). Please include a gearing factor if the project was sized in something other than Source Lines of Code
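For the phase time item above, a minimal sketch of the date arithmetic (the dates are invented for illustration):

    from datetime import date

    # Hypothetical phase 3 start and end dates.
    phase3_start = date(2009, 6, 1)
    phase3_end = date(2009, 9, 7)

    # Convert elapsed days to months using an average month length.
    DAYS_PER_MONTH = 30.44  # 365.25 / 12
    duration_months = (phase3_end - phase3_start).days / DAYS_PER_MONTH
    print(f"Phase 3 time: {duration_months:.1f} months")  # about 3.2 months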

Additional information allows us to perform more sophisticated queries:

Blog Post Categories 
QSM News

Replay Now Available for QSM's High Performance Benchmark Consortium Webinar

Our recent webinar, "Introduction to the High Performance Benchmark Consortium," was a great success, and we are already looking forward to planning our next presentation. Joe Madden received many insightful questions about our new consulting program. We know your time is valuable and scheduling can be a challenge, so we have recorded a replay, including the Q&A, for anyone who was unable to attend.

To view the replay, click here.

Blog Post Categories 
Webinars, Benchmarking, Consulting

High Performance Benchmark Consortium Webinar Announced

I am pleased to announce that on Thursday, February 25 at 1:00 PM EST, QSM will be hosting a webinar based on our new High Performance Benchmark Consortium.

QSM has introduced a program specifically designed to help software development or acquisition organizations quantify and demonstrate performance improvement over time. The High Performance Benchmark Consortium is for clients who want to be best in class software producers and are willing to be active participants in the program. In today’s economic environment it is more important than ever for both suppliers and acquirers to compete more effectively and provide value to their customers. Members of the Consortium gain access to proprietary research that leverages the QSM historical benchmark database of over 8,000 validated software projects.

Presented by benchmarking expert and head of QSM Consulting, Joe Madden, this webinar will discuss:

  • the major components of the program
  • the different levels of membership participation
  • the benefits of being a member
  • sample deliverables that a typical member would receive

To register for this event, simply follow this link and click "Register."

Blog Post Categories 
Webinars, Benchmarking

Performance Benchmarking Tables

QSM consultant Paul Below has posted some quick performance benchmarking tables for IT, engineering class, and real time software.

The tables contain average values for the following metrics at various size increments:

  • Schedule (months)
  • Effort (Person Months)
  • Average Staff (FTE)
  • Mean Time to Defect (Days)
  • SLOC/PM

Two insights that jump out right away:

1. Application complexity is a big productivity driver. IT (Business) software solves relatively straightforward and well-understood problems. As algorithmic complexity increases, average duration, effort, and team size increase rapidly compared with IT systems of the same size.

2. Small teams and small projects produce fewer defects. Projects over 100,000 effective (new and modified) source lines of code all averaged Mean Times to Defect of under one day. We see this over and over again in the QSM database: small projects with small teams consistently produce higher reliability at delivery.

Blog Post Categories 
Benchmarking

Using Control Bounds to Assess Ongoing Projects

When he created control charts in the 1920s, Walter Shewhart was concerned with two types of mistakes:

  • Assuming common causes were special causes
  • Assuming special causes were common causes

Since it is not possible to make the rate of both of these mistakes go to zero, managers who want to minimize the risk of economic loss from both types of error often use some form of Statistical Process Control.
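As a minimal sketch of the Shewhart idea (the data are invented, and this is not SLIM-Control's actual control-bound calculation), an XmR-style individuals chart estimates process variation from the average moving range and flags points outside three-sigma limits:

    import statistics

    # Hypothetical weekly defect counts for a process under observation.
    weekly_defects = [12, 9, 14, 11, 10, 13, 12, 45, 11, 10]

    mean = statistics.mean(weekly_defects)

    # Estimate sigma from the average moving range (d2 = 1.128 for n = 2),
    # the usual approach for an individuals (XmR) chart.
    moving_ranges = [abs(b - a) for a, b in zip(weekly_defects, weekly_defects[1:])]
    sigma_hat = statistics.mean(moving_ranges) / 1.128

    ucl = mean + 3 * sigma_hat          # upper control limit
    lcl = max(mean - 3 * sigma_hat, 0)  # lower limit (counts cannot be negative)

    for week, value in enumerate(weekly_defects, start=1):
        note = "  <- possible special cause" if not (lcl <= value <= ucl) else ""
        print(f"week {week:>2}: {value:>3}{note}")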

SLIM-Control control bounds

The control bounds in SLIM-Control perform a related, but not identical, function.


Blog Post Categories 
SLIM-Control

"Managing Productivity with Proper Staffing" Webinar Replay Available

Just before the holidays, we hosted our first in-house webinar, "Managing Productivity with Proper Staffing Strategies." Confronted with the challenges of the current economy, more and more systems development groups are trying to do more with less. The ultimate goal is to maximize productivity and minimize defects, but many teams struggle to get there. It is possible, but the most effective methods for achieving maximum efficiency are counterintuitive: people assume more effort will produce more product, when in fact less effort is often more effective. Presented by industry expert John Bryant, this webinar explains and demonstrates how to maximize productivity while minimizing cost and defects.

John Bryant has over forty years of IT experience. He has spent the last several years using the SLIM Suite of Tools to improve the software development process by properly estimating, tracking, and analyzing software development efforts. His expertise includes project management, teaching, and mentoring, from initial project evaluation and planning through construction.

In case you missed it, you can view the replay of this webinar here.

Blog Post Categories 
Defects, Team Size, Webinars

Calculating Mean Time to Defect

MTTD is Mean Time to Defect: the average time between defects (mean is the statistical term for average). A related term is MTTF, or Mean Time to Failure, usually defined as the average time between defects serious enough to cause the system to fail.

Is MTTD hard to compute? Does it require difficult metrics collection? Some people I have spoken with think so, and some texts agree. For example:

Gathering data about time between failures is very expensive.  It requires recording the occurrence time of each software failure.  It is sometimes quite difficult to record the time for all the failures observed during testing or operation.  To be useful, time between failures data also requires a high degree of accuracy.  This is perhaps the reason the MTTF metric is not widely used by commercial developers.


But this is not really true. The MTTD or MTTF can be computed from basic defect metrics. All you need is:

  • the total number of defects or failures and
  • the total number of months, weeks, days, or hours during which the system was running or being tested and metrics were recorded.  You do not need the exact time of each defect.
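In other words, the computation reduces to a single division per metric. A minimal sketch in Python, with invented numbers:

    # Hypothetical counts: the exact time of each defect is not needed.
    hours_per_day = 24      # the system runs around the clock
    days_observed = 3       # first three days of operation
    total_defects = 9       # all defects logged in that window
    total_failures = 2      # the subset serious enough to halt the system

    total_hours = hours_per_day * days_observed  # 72 hours

    print(f"MTTD: {total_hours / total_defects:.1f} hours")   # 8.0 hours
    print(f"MTTF: {total_hours / total_failures:.1f} hours")  # 36.0 hours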

Here is an example. I will compute MTTD and MTTF two ways to demonstrate that the results are identical. This table contains defect metrics for the first three days of operation of a system that runs 24 hours a day, five days a week:

Blog Post Categories 
Defects