Outsourcing was supposed to make government IT executives’ lives easier. Yet in too many cases, it’s had the opposite effect, leading to cost overruns, inefficiencies, and solutions that do not work. Remember the initial rollout of Healthcare.gov? Exactly.
It doesn’t have to be this way. Believe it or not, there’s a proven solution that has stood the test of time. In 1977, Lawrence Putnam Sr. discovered the “physics” of how engineers build software by successfully modeling the nonlinear relationship between the five core metrics of software: product size, process productivity, schedule duration, effort, and reliability.
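The commonly published simplified form of Putnam's software equation, Size = Productivity × Effort^(1/3) × Duration^(4/3), makes that nonlinearity concrete: at fixed size and productivity, effort scales with the inverse fourth power of schedule. A minimal sketch (units and numbers are hypothetical; commercial tools use their own calibration):

```python
def effort_for_schedule(size, productivity, duration):
    """Effort implied by the simplified Putnam software equation
    Size = Productivity * Effort^(1/3) * Duration^(4/3).

    size: product size (e.g., implementation units)
    productivity: process productivity parameter (calibration constant)
    duration: schedule duration (any consistent time unit)
    """
    return (size / (productivity * duration ** (4.0 / 3.0))) ** 3

# Compressing the schedule by 20% multiplies effort by (1/0.8)^4, roughly
# 2.44x: the classic reason "just finish faster" blows up cost.
baseline = effort_for_schedule(50_000, 2_000, 12)
compressed = effort_for_schedule(50_000, 2_000, 12 * 0.8)
print(compressed / baseline)  # ratio is (1.25)**4 = 2.44140625
```

The fourth-power schedule sensitivity falls straight out of the equation, which is why schedule compression is the single most expensive lever a program can pull.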
The five core metrics make a powerful tool that can be used at each phase of the software acquisition life cycle to help government IT program managers make more objective, quantitative decisions.
In the first phase, the five core metrics are used to develop an independent “should cost” estimate with a parametric estimation tool, including an assessment of the expected effort, staffing, and schedule duration needed to deliver the required scope of functionality at a target reliability. The independent government estimate should explore all of the viable options. Done right, this leads to reasonable program parameters and expectations that can be specified in the request for proposal when it is issued. Note: in the chart below, product size is measured in implementation units (IU), where one IU is equivalent to writing a logical source line of code or performing a technical step in configuring a commercial off-the-shelf package.
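As a rough illustration of the should-cost trade space (all sizes, productivity values, and schedules below are hypothetical, and this is a far cry from a calibrated parametric tool), the simplified Putnam equation can lay out the effort and average staffing implied by several candidate schedules:

```python
def shouldcost_options(size, productivity, schedules):
    """For each candidate schedule, compute the implied effort and average
    staffing under the simplified Putnam equation
    Size = Productivity * Effort^(1/3) * Duration^(4/3)."""
    options = []
    for months in schedules:
        effort = (size / (productivity * months ** (4.0 / 3.0))) ** 3
        options.append({
            "schedule_months": months,
            "effort_pm": round(effort, 1),           # person-months
            "avg_staff": round(effort / months, 1),  # average headcount
        })
    return options

# Hypothetical program: 40,000 implementation units, productivity 150.
for opt in shouldcost_options(40_000, 150, [12, 15, 18]):
    print(opt)
```

Laying the viable options side by side like this is what lets an evaluator see, before the RFP goes out, how sharply effort and staffing fall as the schedule is relaxed.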
During the second phase it is very important to ensure the RFP (1) quantifies the scope of required functionality, (2) identifies any key management constraints, and (3) requires vendors to report regular, well-defined status metrics, such as construction progress versus plan and defects discovered.
The third phase is the analytical process of objectively assessing the bidders and scoring their cost and technical proposals.
A cost evaluation should weed out vendors who appear to be lowballing to win, as well as those who appear to be padding their estimates.
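One way to operationalize that screen is to compare each bid against the independent government estimate and flag the outliers. The 30% band and the dollar figures below are purely illustrative; a real evaluation would set its own thresholds:

```python
def screen_bids(bids, independent_estimate, band=0.30):
    """Flag bids falling outside +/- `band` of the independent government
    estimate: far below suggests lowballing, far above suggests padding."""
    findings = {}
    for vendor, cost in bids.items():
        ratio = cost / independent_estimate
        if ratio < 1.0 - band:
            findings[vendor] = "possible lowball"
        elif ratio > 1.0 + band:
            findings[vendor] = "possible padding"
        else:
            findings[vendor] = "within band"
    return findings

# Costs in $M, hypothetical, against a $10M independent estimate.
print(screen_bids({"Vendor A": 6.0, "Vendor B": 10.5, "Vendor C": 15.0},
                  independent_estimate=10.0))
```

A flagged bid is not automatically disqualifying; it is a prompt to ask the vendor to justify the variance with their own quantitative data.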
The technical evaluation should assess the skill of the development team, not the proposal writer. It should take a hard look at whether bidders can provide quantitative data (i.e., the five core metrics) for each of their past-performance qualifications to demonstrate they are capable of performing the work.
The fourth phase is about assessing progress against the contract baseline to ensure that the program is on track. If changes in direction are proposed, they need to be understood and quantified in order to evaluate the impact to schedule and cost.
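Quantifying a proposed change is where the same equation earns its keep: holding productivity and schedule fixed in Size = Productivity × Effort^(1/3) × Duration^(4/3) gives Effort ∝ Size^3, so even modest scope growth has an outsized cost impact. A sketch under that simplifying assumption, with hypothetical numbers:

```python
def effort_after_growth(current_effort, size_growth_pct):
    """Effort implied by scope growth at a FIXED schedule.

    Under Size = Productivity * Effort^(1/3) * Duration^(4/3), holding
    productivity and duration constant gives Effort proportional to Size^3.
    """
    factor = (1.0 + size_growth_pct / 100.0) ** 3
    return current_effort * factor

# A 10% scope change on a 500 person-month baseline costs far more than 10%:
print(round(effort_after_growth(500, 10), 1))  # 665.5 person-months, +33%
```

Putting a number like that in front of a change-control board turns "it's a small tweak" into an explicit schedule-and-cost decision.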
Remember that openness and trust are important components of the vendor/customer relationship. The phases described above allow government IT program managers to have a better understanding of how applications are being developed so they can make sure they are receiving a high quality product without overpaying. Likewise, the vendor gets the opportunity to potentially develop a long-term relationship with the agency by sharing valuable quantitative information from beginning to end. It’s a win-win for everyone.