Precise prediction is very difficult. Long ago, when FORTRAN and COBOL were still the new kids on the block, Ware wrote a science-fiction short story called “The Last Programmer.” Its hero, an estimable gentleman, had become the “last programmer” at his research institution and was being retired early with a gold watch. The retirement date was set for 1984, a then-future year popularized by George Orwell’s 1949 novel, Nineteen Eighty-Four. The assumption was that by 1984 the scientists and engineers would be working directly with the computers. Programmers would no longer be needed.
It is now 1996 and we know how that prediction worked out. The gold watches seem to have gone away, but the number of programmers has steadily increased. The retirement date of the last one is not in sight. In fact, worldwide there are perhaps 12,000,000 professional software personnel, according to a guess by Capers Jones in the September 1995 issue of Computer magazine. However, parts of the programming task have migrated to end users, as the story expected. Jones believes there are more than 30,000,000 end users who can program, in the sense of building applications with spreadsheets and other tools.
We predict—change
Nevertheless, in spite of our dubious past record of predictions, let us charge ahead. We predict more of most everything: iterative development (also known as rapid application development, incremental development, spiral development, and periodic release), reuse, networking, multimedia, groupware, client/server (fewer mainframes), and business process reengineering. The common characteristic of “most everything” is that it takes more software.
We also expect the pace of change, already torrid, to pick up. One reason for this acceleration is worldwide competition. Even more critical is the fact that those organizations that succeed in getting the most out of their computer systems gain an enormous competitive advantage.
A metric for productivity
A key need in the future, as well as now, is a valid metric for software development productivity. The metric conventionally used for this purpose, source lines of code per person-month, does not include the critically important factor of development time. As we discussed in our October 1995 column, Larry developed the process productivity metric more than 20 years ago.
Process Productivity = (Size in SLOC) / [(Effort/B)^(1/3) x (Time)^(4/3)]
(Where B is a constant related to size.)
This equation, derived from past projects, not from our fevered imagination, tells us two things:
- Time is part of the productivity relationship.
- It is a very important part. The exponents tell us it has four times the weight of effort.
No wonder conventional productivity (Size/Effort) has given the industry poor results. It has ignored the most important variable, development time. This metric, process productivity, measures the overall productivity of a software development project, from management to testers.
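To make the arithmetic concrete, here is a minimal sketch of the calculation in Python. The project figures, the units (person-years and years), and the value of B are invented for this illustration; in practice B comes from calibration tables keyed to project size.

```python
def process_productivity(size_sloc, effort_person_years, time_years, b=0.39):
    """Process productivity per the software equation, solved for the
    productivity term: Size = PP * (Effort/B)^(1/3) * (Time)^(4/3).
    The units and b = 0.39 are assumptions made for this sketch."""
    return size_sloc / ((effort_person_years / b) ** (1 / 3) * time_years ** (4 / 3))

# Invented figures: 75,000 SLOC delivered in 1.5 years with 40 person-years.
print(f"1.5-year schedule: {process_productivity(75_000, 40, 1.5):,.0f}")

# Time carries the 4/3 exponent, so stretching the schedule to 2.0 years
# lowers the computed figure far more than a comparable change in effort would.
print(f"2.0-year schedule: {process_productivity(75_000, 40, 2.0):,.0f}")
```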
A good metric for productivity improves managers’ ability to manage the software function internally. It is essential to planning and estimating projects, controlling them when they are under way, and gauging the improvement of the software process from one project to the next, or comparing projects and organizations. Larry derived the process productivity metric from project data collected in the 1970s. At that time most projects were developing new design and code. They were following the waterfall model of development: requirements, specifications, functional design, detailed design, coding, test planning, and various stages of testing.
Now, various forms of iterative development are supplanting the waterfall model. Attempts at reuse are reducing the percentage of new design and code that developers produce. Fewer projects are starting from scratch with all new code. The arrival of objects and other forms of pre-existing code affects the amount of new code to be developed. Purchased software packages are being integrated with new code. Much so-called maintenance work is really adding new functions to an established program. Calling this work a series of releases would be more realistic.
The existing process productivity metric may have to be rethought to some degree to adapt it to these accelerating changes. For instance, should reused code be subtracted from system size in calculating the process productivity index, leading to an index based largely on new code? Or should all code, new and reused, be included in the computation, leading to much higher indexes?
As a practical matter, it has proved difficult to exclude reused code from the computation, because there is considerable work involved in searching for, evaluating, and incorporating existing code, and regression testing the reused modules.
These questions, of course, cannot be answered arbitrarily. The real answer is: it depends on what you are trying to measure. For example, if we are trying to show the benefit of reuse in terms of the process productivity index (PI), we include the reused code in the calculation. On the other hand, if we are trying to tune the software equation to predict how much time and effort a new project will need, then we want a PI based on the design and code we actually work on: new and modified.
In any event, we must collect data on future projects, accomplished in new ways with new technology, and let the data guide our answers. At present, our practice is to count all code, new and reused (breaking out the modified and unmodified portions), and calculate the process productivity index based on the new and modified code.
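A short sketch, again with invented numbers, shows how far the two counting conventions diverge. The conversion from this raw process productivity figure to the published index scale is omitted; only the relative effect of the counting choice is of interest here.

```python
def process_productivity(size_sloc, effort_person_years, time_years, b=0.39):
    # Same relationship as in the earlier sketch; units and b are assumptions.
    return size_sloc / ((effort_person_years / b) ** (1 / 3) * time_years ** (4 / 3))

# Invented release figures, in SLOC.
new_code = 30_000        # written for this release
modified_code = 10_000   # pre-existing code that had to be changed
reused_code = 60_000     # pre-existing code taken as-is
effort, time = 25, 1.25  # person-years, years

# Our practice: base the figure on new plus modified code only.
worked_on = process_productivity(new_code + modified_code, effort, time)

# Alternative: credit all delivered code, which shows the economic
# benefit of reuse as a much higher figure.
all_code = process_productivity(new_code + modified_code + reused_code, effort, time)

print(f"new + modified only: {worked_on:,.0f}")
print(f"all delivered code:  {all_code:,.0f}")
```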
With application generators and libraries of reusable objects, we need to keep track of the portion of the functionality attributable to such facilities. We need to be able to include it at appropriate times to show the economic benefit these facilities provide. When we do include it in this way, higher productivity indexes result.
Parenthetically, those high indexes mean that some organizations are well along toward mastering new ways of providing software. Their success increases the competitive pressure on lagging organizations.
One of the very important uses of the process productivity metric and the return-on-investment figures that can be based on it is to show chief executive officers and line executives how well the information systems organization is being managed. The productivity indexes the organization achieves on successive projects can be watched over the years to see if it is improving. Its index can be compared with that of other IS organizations or with industry averages to see how it stacks up competitively. (Of course, more uncertainty arises when the metric is used across company boundaries, because practices differ among companies.) Within the company the indexes of various IS departments or projects can be compared to see who needs help—tools, training, methods. As a hard number, derived from other hard numbers (size, effort, time), the process productivity index admits little argument so long as its specific use is carefully thought through.
A metric for added value
The degree to which an IS organization meets modern needs, however, reaches beyond its internal efficiency. After all, it is part of a larger organization. That larger organization has to make its way in an increasingly harsh economic world, whether it is a private organization in the marketplace or a public agency seeking appropriations from hard-up legislatures.
“Enhancing an isolated business function to realize lower costs, improved efficiency or some other narrow functional objective such as modernization, may degrade overall results,” notes Paul A. Strassmann, himself a former chief information officer. [1, 2] To measure the contribution of IS to the objectives of the entire organization, Strassmann devised the Information Productivity Index:
IPI = [(Operating Profit after Tax) - (Shareholder Equity x Cost of Capital)] / [(Sales, General, and Administrative Costs) + (Research and Development Expenses)]
In 1994, cost of capital was taken as 11.8 percent.
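A minimal sketch, with invented figures in millions of dollars, shows how the index is computed; the 11.8 percent cost of capital is the 1994 figure mentioned above.

```python
def information_productivity_index(profit_after_tax, shareholder_equity,
                                   sga_costs, rd_expenses,
                                   cost_of_capital=0.118):
    """Strassmann's IPI: economic value added, divided by the cost of
    managing the business (SG&A plus R&D). Inputs here are invented."""
    economic_value_added = profit_after_tax - shareholder_equity * cost_of_capital
    return economic_value_added / (sga_costs + rd_expenses)

ipi = information_productivity_index(profit_after_tax=420,
                                     shareholder_equity=2_000,
                                     sga_costs=550,
                                     rd_expenses=150)
print(f"IPI = {ipi:.2f}")  # (420 - 236) / 700 = 0.26
```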
Strassmann believes his index measures the effectiveness with which a company manages information. He believes it is a better metric than the percent of spending on the IS function, because it is based on overall results. Moreover, the data entering into the computation of the index is often publicly available.
Using public data, Computerworld computed the IPI for hundreds of companies. The index ranged from a high of 1.41 to a low of -0.57. The top 100 companies all had positive indexes, averaging 0.43. Microsoft made the top 100, but with a below-average score of 0.33; yet it sells packaged software to everybody else and people write books about its methods. Wal-Mart Stores had an index of 0.10; yet it is renowned for systematizing the retail business. Strassmann himself calls Wal-Mart a “successful case.” [3] AT&T and IBM had negative indexes; yet they are well known for their work in software and the use of information technology.
The Information Productivity Index certainly sorts out companies along a scale.
Note that four actions over which management has control can increase the index (a brief numerical illustration follows the list):
- Improving profits
- Reducing capital employed in the business
- Reducing various kinds of overhead
- Reducing R&D spending
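Using the same invented base-case figures, the four levers can be seen at work in the arithmetic; each scenario below changes one input by 10 percent in the direction the list suggests, and each raises the index above the base case.

```python
# Effect of each action on the index, using the invented base-case
# figures from the sketch above (all in millions of dollars).
def ipi(profit, equity, sga, rd, cost_of_capital=0.118):
    return (profit - equity * cost_of_capital) / (sga + rd)

base = dict(profit=420, equity=2_000, sga=550, rd=150)
scenarios = {
    "base case":           base,
    "improve profits 10%": {**base, "profit": 462},
    "reduce capital 10%":  {**base, "equity": 1_800},
    "reduce overhead 10%": {**base, "sga": 495},
    "reduce R&D 10%":      {**base, "rd": 135},
}
for name, figures in scenarios.items():
    print(f"{name:22} IPI = {ipi(**figures):.3f}")
```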
Strassmann makes much in his writings of reducing overhead by streamlining processes. The actions grouped under business process reengineering reduce these overheads. These same actions may reduce the capital assets needed in the business. Increasing marketing costs, however, swells the overhead in the denominator of the index; yet executives may judge that spending necessary to maintain market share or to attain that holy of holies, dominant market share. In that case the expectation is that profits will follow and the IPI will rise with them.
Similarly, reducing R&D spending may not always be in the best interest of the enterprise. Executives may judge this cost necessary to maintain or increase market share.
The attempt to apply one index to all industries may be over-ambitious. The wide variation of the Sales-to-Assets ratio among industries suggests that a single index may not fit every case. Consequently, whether the IPI is the ultimate metric for gauging the effect of information systems on a company’s overall fortunes remains an open question.
Enterprise-wide effectiveness
Chief information officers will be judged more and more in the coming period on whether they are contributing to their company’s strategic performance.
The current situation seems to be that CIOs are not highly regarded by CEOs (though, of course, there are many exceptions). “General managers are tired of being told that information technology (IT) can create competitive advantage and enable business transformation,” according to Michael J. Earl and David F. Feeny, who have been interviewing CEOs and CIOs at length in a series of research projects. [4]
“What they observe and experience are IS project failures, unrelenting hype about IT, and rising information processing costs.”
The two authors find that CEOs are polarized between those who grasp that IT is a strategic resource for adding business value and those who see it simply as a cost to be reduced. The authors believe CIOs have a major role to play in moving CEOs from the cost bench to the strategic dugout. It is no longer enough for a CIO just to run a good shop.
A CEO, by the nature of his position as head of an enterprise, tends to be focused outward, to see the organization in relation to its customers and suppliers. He feels the necessity of adding value, a portion of which can be taken as profits. Line executives are, by necessity, oriented to their respective functions.
A CIO has his hands full, keeping up with the explosive advance in technology and running a difficult operation efficiently. Sometimes he is only minimally knowledgeable about the strategic issues affecting his own company. The point is not to systematize every function of the business, piling on technology regardless of the business strategy at stake. The point is to identify the functions that add the most value to the entire business.
“It is only through dialogue with the CEO and other executives that the CIO can tease out the motivations, meaning, and priorities; know the mind of the business; sense the impending changes; and maintain the relevance and timeliness of the IS effort,” Earl and Feeny contend. Only in this way can the CIO, himself, become oriented to the business strategies in play.
A knowledgeable CEO can aid this process by, for example, including the CIO in appropriate meetings.
A pertinent metric, such as one that tells an organization whether it is using information effectively, can also aid this process. Perhaps Strassmann’s Information Productivity Index answers that need. At a minimum it focuses attention on the problem. Maybe high marketing or R&D costs are in order under the circumstances. Or perhaps the index can be modified to fit the differing ratios in different industries. Or perhaps some other metric will better show CEOs that IS is contributing added value to the entire enterprise.
- 1. Paul A. Strassmann, “Keep Improving,” Computerworld, Oct. 9, 1995, Section 2, p. 54.
- 2. Paul A. Strassmann, The Business Value of Computers, The Information Economics Press, New Canaan, Connecticut, 1990.
- 3. Paul A. Strassmann, “Lower Transaction Costs—The Key to CALS,” CrossTalk, Jan. 1993, pp. 2-7.
- 4. Michael J. Earl and David F. Feeny, “Is Your CIO Adding Value?” Sloan Management Review, Spring 1994, pp. 11-20.