Software Estimation Best Practices

Ask Carol: Collecting Metrics Willy-Nilly Doesn't Make Sense

QSM hosts a free advice column for software professionals who seek help to solve project management, communication and general software project issues. Carol Dekkers is a QSM consultant and IT measurement and project management expert who speaks internationally on topics related to software development. Send your questions to Ask Carol!

Dear Carol: 

My manager frequently says “You can’t manage what you can’t measure” and then tells us to collect more and more metrics, willy-nilly. We’ve done this for years now and it just doesn’t seem to make sense to collect all this data that’s never used, when we’ve got better things to do like developing software. What can we do to make him stop this unproductive exercise?

– Over Metricated

Dear Over: 

This is a common situation in software development: management pursues metrics without a clear plan for using them.

The saying “you can’t manage what you can’t measure” is a paraphrase of Tom DeMarco’s quote, “You can't manage what you can't control, and you can't control what you don't measure.”1 What is interesting is that DeMarco’s follow-up quote 13 years later is seldom cited: “Metrics cost a ton of money. It costs a lot to collect them badly and a lot more to collect them well... At its best... metrics can inform and guide developers, and help organizations to improve.  At its worst, it can do actual harm… And there is an entire range between the two extremes...”2

It sounds to me like your manager has the right intent (monitoring and controlling software development through metrics) but has not clearly articulated or communicated the purpose to you or your team.  Before suggesting that your manager is collecting too many or the wrong metrics (which could be the case), I would approach him and explain that knowing how the data will be used would help you provide the most accurate data possible.  When the purpose of metrics is put into context, the data collection process often comes into better focus and everyone benefits.  Sometimes simple miscommunication is all that makes the metrics seem haphazard and fuels the resistance to collecting them.

A great approach to software measurement is the Goal Question Metric (GQM) approach, developed by Victor Basili of the University of Maryland and later the focus of work at the Software Engineering Institute at Carnegie Mellon University.  Measurement programs often spring up out of a need for one particular measure (such as quality or productivity) and then accumulate additional measures from that starting point.  Without a plan, these programs grow in many directions, producing exactly the “willy-nilly” feeling you describe.  When one steps back and applies GQM (first establish the goals for measurement, then ask the questions whose answers will tell you whether the goals are being met, and then choose the metrics that will answer those questions), all of the collected metrics should fall into place.  If data is being collected that does not answer the questions, or if the metrics answer questions that are not aligned with the goals, then those metrics (or measures) probably should not be collected.

Here is an example of how the GQM might work:

Goal:  Improve software development productivity on .NET enhancement projects by 15% within 12 months.

Questions (to determine if goal is being met):

  1. What is the current level of productivity on .NET enhancement projects?
  2. What is the incremental level of productivity (over various projects throughout the 12 months)?
  3. How much improvement has been made (towards the 15%)?
  4. Which improvement steps led to the gains?  (That is, which actions raised productivity, and which made no difference or had a negative impact and should be stopped?)

Metrics (to answer the questions):

  1. Productivity expressed as output / input = function points (size of the software product produced) / hours expended to produce that output
  2. Relative difference in productivity = (new productivity – old productivity) / old productivity × 100%
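As a quick illustration, the two metrics above can be computed directly.  The figures below are hypothetical, not from any real project:

```python
# Sketch of the two GQM metrics above, using hypothetical project data.

def productivity(function_points, hours):
    """Output / input: function points delivered per hour expended."""
    return function_points / hours

def relative_improvement(new, old):
    """Percent change in productivity relative to the baseline."""
    return (new - old) / old * 100

# Hypothetical baseline and current .NET enhancement projects:
baseline = productivity(function_points=120, hours=2400)  # 0.05 FP/hour
current = productivity(function_points=150, hours=2500)   # 0.06 FP/hour

print(round(relative_improvement(current, baseline), 1))  # prints 20.0
```

Tracking this percentage across projects over the 12 months answers questions 2 and 3 directly: it shows how much of the 15% target has been reached.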

Perhaps your manager can provide you with some insights about the metrics he is collecting, and you may be able to productively contribute your own ideas about complementary data or even about streamlining the data collection process.

1 Tom DeMarco, Controlling Software Projects: Management, Measurement, and Estimation, Yourdon Press, 1982.

2 Tom DeMarco, “Mad About Measurement,” in Why Does Software Cost So Much?, Dorset House, 1995.
