This post was originally published on LinkedIn. Join the QSM LinkedIn Group and Company Page to stay up-to-date with more content like this.
If you were thinking about purchasing a driverless car, and the salesperson told you that there’s a “slight” chance the car will fail while you’re on the road, would you still feel comfortable putting down your money? Or, if you faced an emergency, would you trust an automated robot to perform open-heart surgery, rather than the hands of a skilled surgeon?
While these questions might seem like the stuff of a science fiction novel, they’re quickly becoming part of our normal, everyday world. We’re hearing a great deal about artificial intelligence and how it is taking over tasks once performed by humans. AI is powered by software, and that software is becoming increasingly vital to our lives. This makes ensuring its reliability more important than ever.
But here’s a sobering thought: right now, IT organizations are shipping software that is, on average, 95% reliable out the door. That’s right; today, a 5% unreliability gap is considered “good enough.”
That may be acceptable when you’re dealing with traditional web and mobile applications, or some other piece of technology that doesn’t have a significant impact on a person’s life or business. But when the stakes rise – SpaceX testing a rocket, NASA launching a mission, or even something more Earthbound, like a driverless car – and the human factor is taken out of the loop, that software must be incredibly reliable and leave very little room for error.
It’s not just lives and businesses that are at stake, either; our country’s security is also dependent on software reliability. Most federal agencies have begun to deploy software automation solutions that relieve IT managers of much of the burden involved in day-to-day network management and security. But what if that software is only 95% reliable? What if a savvy hacker finds his or her way into that 5% window?
If we can’t close that window completely, we need to at least get the software that’s rolling out the door to 99% reliability – minimum.
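To make those numbers concrete, here’s a back-of-the-envelope sketch in Python. It assumes – purely for illustration, since “reliability” can be defined in several ways – that “95% reliable” means a 95% chance of failure-free operation each time the software runs. Under that assumption, the gap between 95% and 99% compounds dramatically with repeated use:

```python
# Back-of-the-envelope: if "X% reliable" means an X% chance of
# failure-free operation on each independent run, the odds of a
# flawless record over many runs fall off geometrically.

def survival_probability(per_run_reliability: float, runs: int) -> float:
    """Probability of zero failures across `runs` independent uses."""
    return per_run_reliability ** runs

for reliability in (0.95, 0.99):
    for runs in (10, 100, 500):
        p = survival_probability(reliability, runs)
        print(f"{reliability:.0%} reliable, {runs:3d} runs -> "
              f"{p:.2%} chance of no failures")

# 95% reliable software has only a ~0.59% chance of surviving 100 runs
# without a failure; at 99%, it's ~36.6%. Neither inspires confidence
# when each "run" is a drive to work.
```

Even at 99%, the window only narrows; it doesn’t close – which is exactly why that figure should be a floor, not a ceiling.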
Cutting Corners Isn’t an Option
I acknowledge that’s not an easy proposition, given certain constraints. For one, organizations in every industry are under enormous pressure to deliver solutions in extraordinarily tight timeframes. Many are also working with incredibly tight budgets, which compels them to aim for that 95% “good enough” threshold and not necessarily any higher.
But software reliability is not something that organizations can afford to cut corners on. They need to find ways to work within those constraints while still focusing on creating rock-solid products.
Throwing tons of people at the problem is not the solution. In fact, increasing headcount unnecessarily can be detrimental to a project and create more problems than it solves. We’ve all heard the saying that too many cooks spoil the broth, and it’s true – too many people trying to fix a software problem can introduce new vulnerabilities and end up making the software less reliable, as the quick sketch below suggests. Adding to the team may also require pulling people off another project, potentially compromising the reliability of other solutions in the pipeline and creating a vicious circle.
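Here’s that sketch – a minimal illustration of the communication overhead behind the “too many cooks” effect, the same observation Fred Brooks made in The Mythical Man-Month: pairwise communication channels grow quadratically with team size.

```python
# The number of pairwise communication channels among n people is
# n * (n - 1) / 2 -- it grows quadratically, so doubling a team far
# more than doubles the coordination (and miscommunication) surface.

def communication_channels(team_size: int) -> int:
    """Pairwise channels among `team_size` people."""
    return team_size * (team_size - 1) // 2

for n in (5, 10, 20, 40):
    print(f"{n:2d} people -> {communication_channels(n):3d} channels")

# 5 people -> 10 channels; 40 people -> 780 channels.
# Every channel is a chance for a requirement or a fix to be
# misunderstood -- one way "more help" becomes less reliability.
```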
Mandated deadlines can also hamper attempts to deliver an exceptional finished product. Yes, deadlines are important – but we’ve all experienced the pain of deadlines that are simply unrealistic, forcing us to ship software that hasn’t been fully tested. Projects dictated by these kinds of constraints run a real risk of falling well below even 95% reliability.
Estimation = Reliability
This is why the practice of software estimation has become even more important today. Software estimation helps ensure that projects remain on track, on budget, and, above all, completed to satisfaction. By “satisfaction,” I’m not saying “mostly done right.” I’m talking about projects without major defects or vulnerabilities – projects that meet the minimum required reliability.
Estimating and planning at the outset of any software development project, even those that employ agile methodologies, can help developers conceptualize the types of resources they will need to successfully build a reliable product. They’ll be able to gauge the tools and number of people necessary and develop accurate timelines for completion of the project. They’ll also identify cost drivers (team meetings, bug fixes, etc.) that will inevitably impact the development process.
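To give a feel for what estimation surfaces, here’s a simplified sketch built on the Putnam software equation – the model behind QSM’s SLIM tools. The size and productivity figures below are hypothetical, chosen only to show the shape of the tradeoff: because effort varies with the inverse fourth power of schedule, even modest schedule compression inflates the effort (and the budget, and the defect risk) required to deliver the same system.

```python
# Simplified Putnam software equation:
#     size = C * effort**(1/3) * schedule**(4/3)
# Rearranged for effort, it implies effort ~ 1 / schedule**4.
# SIZE and C below are hypothetical, for illustration only.

def effort_person_years(size: float, c: float, schedule_years: float) -> float:
    """Effort implied by the software equation, in person-years."""
    return (size / (c * schedule_years ** (4 / 3))) ** 3

SIZE = 100_000  # hypothetical system size, in source lines of code
C = 10_000      # hypothetical productivity parameter

for schedule in (1.50, 1.35, 1.20):
    effort = effort_person_years(SIZE, C, schedule)
    print(f"{schedule:.2f}-year schedule -> {effort:5.1f} person-years")

# 1.50 years -> ~197.5 person-years; 1.20 years -> ~482.3.
# Compressing the schedule by 20% roughly 2.4x's the required effort --
# which is why a mandated deadline without a re-estimated budget is a
# reliability problem waiting to happen.
```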
All of this helps increase efficiency and build a better – and more reliable – product. That may take a little more time in both development and testing, but it will ultimately save organizations a lot of money and hassle. Most important, the final result will be a high quality, dependable product.
We explore the importance of software estimation and planning in our new 2017 QSM Software Almanac, which you can download for free. It’s something you’ll want to read – especially before you sign for that new driverless Tesla.