The laws of flight have much in common with the “physics” of software development. Just as the principles of aerodynamics come down to thrust, drag, lift and weight, developing software has always been about the relationship between size (scope), effort (cost), duration (schedule) and the often overlooked measurement of quality (reliability).
In aviation parlance, the expression “flying by the seat of your pants” literally meant that you felt the plane through your “seat”. Early aircraft had limited navigational aids, mainly a rudimentary compass (which worked only when flying level) and a simple string to assess airflow relative to the plane’s fuselage. Pilots gauged the degree of an ascent or descent by G-force sensation, or judged airspeed by the severity of the aircraft’s vibrations. Until the invention of the gyroscope there was no artificial horizon (now the centerpiece of a modern instrument panel), so flying without sight of the real one was quite dangerous.
Early aeronauts plotted their routes using individual skill, celestial navigation or fixed landmarks, and their own real-time perceptions rather than depending on mechanical tools. However, fog or low visibility would render the limited instruments they had useless, leaving the flight itself a risky endeavor. A successful trip meant you landed at your destination in one piece, and it depended largely upon the talent and judgment of the aviator, the visibility and weather, and perhaps no small amount of luck.
So while doing something by the seat of your pants may carry a derogatory connotation, that judgment is not entirely justified in the broader context of the expression. Improvising and trusting your senses isn’t always bad, and blindly trusting instruments or data can be just as perilous. In fact, many modern pilots rely so heavily on their instrumentation that the FAA requires training to counter that overdependence – as the recent crash of Asiana Flight 214 at San Francisco showed. The point is that trust in data and tools is ill-advised unless it is combined with experience, skill and the hard lessons of history.
For example, many QSM clients find their projects significantly off course, or in outright estimation tailspins, when they rely too heavily on the initial, subjective inputs of the programmers who estimate what it will take to do their own work. Not that this practice is inherently bad, but the variances with such biased inputs can be extremely wide (anywhere from 100-500%). A lot of estimate padding can occur; conversely, the all-too-human flaw of optimistic personal judgment can leave estimates grossly under what it will actually take to complete the project.
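To get a feel for how wide that band can be, the toy simulation below (not a QSM model; every number in it is hypothetical) applies a single systematic bias, padding or optimism, to a bottom-up estimate and shows how far the resulting “actuals” can stray:

```python
import random

# Toy illustration (not a QSM model; all numbers are hypothetical): a single
# systematic bias (padding or optimism) colors a bottom-up estimate, and the
# resulting "actuals" can land anywhere in a very wide band.

random.seed(7)

PLANNED_EFFORT = 400   # person-days from the bottom-up estimate
NUM_PROJECTS = 10

for i in range(1, NUM_PROJECTS + 1):
    # bias < 1.0 means the team padded; bias > 1.0 means it was optimistic
    bias = random.uniform(0.4, 2.5)
    actual = PLANNED_EFFORT * bias
    overrun_pct = (actual / PLANNED_EFFORT - 1) * 100
    print(f"project {i:2d}: actual {actual:6.0f} person-days "
          f"({overrun_pct:+5.0f}% vs. estimate)")
```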
For a pilot, the phrase “dead reckoning” (a weirdly ironic choice of words) describes navigating from a known position by applying best guesses about heading, speed, and elapsed time, periodically verified against known way-markers, or “fixes”. In software estimation, it is perfectly fine to use analogous comparisons as you conceive and plan a project: “this project is like that other one we did,” and so forth. But it is always safer and more prudent to sanity-check estimates against actual data and impartial comparisons to account for any new uncertainties or risks associated with unproven teams, unprecedented projects, new technology and the like. Sometimes you will have no examples at all to draw on, or will simply count or consider the wrong things.
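One simple form of that sanity check is to ask what productivity a proposed estimate implies and see where it falls against the projects you have actually completed. The sketch below is an illustration of the idea, not QSM’s SLIM algorithm, and the history it uses (function points and person-months) is made up:

```python
# A hedged sketch of a sanity check against history (illustrative only).

history = [
    # (size in function points, effort in person-months) from completed projects
    (250, 30), (400, 55), (600, 95), (900, 160), (1200, 230),
]

def productivity(size_fp: float, effort_pm: float) -> float:
    """Delivered function points per person-month."""
    return size_fp / effort_pm

historical = sorted(productivity(s, e) for s, e in history)

# The analogy-based estimate we want to sanity-check (hypothetical numbers).
proposed_size_fp = 800
proposed_effort_pm = 60
implied = productivity(proposed_size_fp, proposed_effort_pm)

beats = sum(1 for p in historical if p < implied) / len(historical)
print(f"implied productivity: {implied:.1f} FP per person-month")
print(f"that beats {beats:.0%} of our completed projects")
if beats > 0.9:
    print("warning: the estimate assumes near-record productivity; probe it")
```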
At the heart of the QSM approach is the use of the core metrics of software development: size, effort, duration and quality. When estimating projects and making ongoing decisions about software delivery, these measures should always be considered together. At QSM we cross-reference project parameters with the intelligence contained in our one-of-a-kind database of completed projects to run virtual project simulations within our SLIM tools, without actually having to execute the projects themselves. Think of it as a project flight simulator: not the virtual training devices that play a big part in how the Air Force entrusts multi-million-dollar jets to 20-year-olds, but QSM’s predictive model that pioneered software estimation.
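To give a feel for how a parametric model ties the core metrics together, here is a simplified, Putnam-style “software equation” of the kind such tools are built around. This is the textbook form, not QSM’s calibrated SLIM implementation, and the productivity parameter below is an invented placeholder:

```python
# Simplified software equation (textbook form, illustrative values only):
#
#     size = productivity * effort**(1/3) * duration**(4/3)

def effort_person_years(size: float, productivity: float, duration_years: float) -> float:
    """Solve the software equation for total effort, given size and schedule."""
    return (size / (productivity * duration_years ** (4 / 3))) ** 3

SIZE = 100_000          # e.g., effective source lines of code (hypothetical)
PRODUCTIVITY = 17_000   # placeholder value, not a calibrated parameter

for months in (18, 15, 12):
    effort = effort_person_years(SIZE, PRODUCTIVITY, months / 12)
    print(f"{months}-month schedule -> roughly {effort:5.1f} person-years of effort")
```

Even in this toy form the tradeoff among the core metrics is plain: hold size constant and compress the schedule, and the effort required climbs steeply (with roughly the fourth power of the compression in this formulation).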
The fact is that human senses and judgment can be fooled. This is where QSM emphatically stresses the use of objective, unbiased, apolitical data (your own company’s, if possible), which is very often counterintuitive. Everyone knows about the infamous Mythical Man-Month, right? There is immense value in capturing actual results over time to refresh reference points and to use real (not theoretical) data based on your own situations and outcomes. Then learn to leverage what the data says and how it informs your evolving estimation process. The results can be transformational for your organization, not only saving countless resources (financial and emotional) and preventing frustration and anxiety, but also helping you deliver something on time that actually works!
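In practice, “using your own data” can begin as simply as back-solving a productivity reference point from projects you have already finished and carrying a typical value into your next estimate. The sketch below reuses the simplified equation from the earlier example; the completed projects listed are invented for illustration, not real QSM data:

```python
# A minimal calibration sketch (invented projects; same simplified equation
# form as the earlier example).

from statistics import median

completed = [
    # (size in ESLOC, effort in person-years, duration in years)
    (60_000, 18.0, 1.2),
    (120_000, 55.0, 1.6),
    (90_000, 30.0, 1.4),
]

def implied_productivity(size: float, effort_py: float, duration_yr: float) -> float:
    """Productivity parameter implied by one finished project."""
    return size / (effort_py ** (1 / 3) * duration_yr ** (4 / 3))

calibrated = median(implied_productivity(*p) for p in completed)
print(f"productivity parameter calibrated from our own history: {calibrated:,.0f}")
```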
The parallels between flying and software projects run deep:
- A superior estimation process, like flying a plane, depends on skillfully using both instruments and data (history and parametric estimation tools) and ability (experience, common sense and technique)
- The stakes can be incredibly high depending on how many souls you have on board (or stakeholders on your project)
- Turbulent projects (flights) can scare the heck out of you and take years off your life; worse yet, they can sabotage a career or ruin a company, crashing and burning with “fatal” consequences
- In-flight project adjustments are critical as conditions dictate (changes to scope or management constraints, issues impacting planned productivity, etc.)
- Even slight course corrections early in the flight can mean a huge difference in terms of where you end up
- Start and stay in control: take off with a smart flight (project) plan and then make adjustments as needed to safely reach your intended destination
This takes us to the crux of the art and science of estimation. QSM has led the industry for over three decades with a sophisticated parametric tool that takes data and produces defensible estimation scenarios: not just “more accurate” estimates, but infinitely more useful ones. Doing this depends on both independent measurement and skill. Our tools by themselves are not going to be the silver bullet.
Anyone launching a software project should endeavor to quantify its scope, counting the right things (to size it appropriately) at all stages of the “flight”, not only to make sure the flight plan has an endpoint in mind, but to confirm you have enough fuel in the plane and are prepared to adjust for the inevitable changes that will occur.
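Continuing the fuel analogy, a mid-flight check can be as simple as re-forecasting the effort needed to finish from the productivity you are actually demonstrating and the scope that remains, then comparing it with the budget that is left. The figures below are hypothetical and the calculation is a sketch of the idea, not a QSM tool:

```python
# A hedged mid-flight check (illustrative numbers): re-forecast remaining
# effort from demonstrated productivity and remaining scope, then compare
# it with the effort budget that is left.

budget_pm = 120.0        # total effort budget, person-months
spent_pm = 70.0          # effort burned so far
planned_size = 900       # planned scope, e.g. function points
delivered_size = 450     # scope actually completed so far
scope_growth = 100       # new scope added since takeoff (hypothetical)

demonstrated_productivity = delivered_size / spent_pm   # FP per person-month
remaining_scope = planned_size + scope_growth - delivered_size
forecast_remaining_pm = remaining_scope / demonstrated_productivity
remaining_budget_pm = budget_pm - spent_pm

print(f"demonstrated productivity: {demonstrated_productivity:.1f} FP/PM")
print(f"forecast effort to finish: {forecast_remaining_pm:.0f} PM "
      f"vs. {remaining_budget_pm:.0f} PM of budget remaining")
if forecast_remaining_pm > remaining_budget_pm:
    print("course correction needed: cut scope, extend schedule, or add fuel")
```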
What do you think is more dangerous – liftoff or landing? Statistics show that takeoffs carry more hazards (initial load balance, full fuel tanks, longer runway requirements, wake turbulence from departing aircraft, and so on). As with flying itself, software estimation can be more precarious in those early stages of the SDLC, when requirements are unclear and people are not quite sure but have to launch anyway.
Almost anyone who promotes sound development principles will contend that you can and must leverage good estimation early in the lifecycle based on the best available data you have. It’s a heck of a lot better than flying completely blind and hoping for the best: what goes up must eventually come down.
Enjoy the flight and safe landings!