The Clinger-Cohen Act, 10 Years Later: Measuring Efficiency
In part two of our four-part series on the landmark law overhauling information technology procurement, we look at the difficulty of gauging efficiency in IT acquisition.
It's clear that congressional expectations for cost savings from the Clinger-Cohen Act have not come to pass. So what about the other half of the "sense of Congress" message included in the legislation: that agencies also achieve a 5 percent increase in efficiency? Is the federal government doing measurably better at actually applying IT investments?
The agency most logically aligned to provide an answer to that question is the Office of Management and Budget, which assumed the mantle of information technology management once the CCA was signed. Under Section 5101 of the law, OMB shoulders the responsibility to "oversee the use of information resources to improve the efficiency and effectiveness of governmental operations."
Despite that designation, however, neither OMB nor anyone else can categorically declare whether the goal of Congress has been met. Efficiency in IT procurement does not easily lend itself to measurement. To arrive at a specific figure, one would have to mine enough data to fill an ore cart, tracking every expenditure from every chief information officer within the Beltway and beyond, and taking inventory of what was bought, whom it benefited and how. That, after all, is the fundamental thrust of the legislation: moving the IT community away from cumbersome and myopic acquisition and wringing an actual return from its investments.
Yet proving such a return is notoriously difficult and requires, among other things, a snapshot against which to compare. That lack of a comparative mooring point may be the most crippling factor in trying to prove whether the 5 percent challenge has been met.
Rewind the tape to when the CCA was first passed, during the age of what Sen. William Cohen, R-Maine, aptly called "computer chaos." The IT world was long on horror stories and short on real benchmarking data. As the February 1997 meeting minutes of the First Practices Forum (as opposed to "best practices," which had yet to emerge) of the newly formed CIO Council committee indicate, "few agencies have existing capital investment processes, and generally those are for investments other than information technology."
The CCA, either by design or simple omission, mandated no particular construct for capturing initial measurement data. The IT "problem" was passed to OMB as a rather formless entity, one to be addressed through the nebulous goal of becoming 5 percent better each year.
To its credit, OMB wasted no time in sizing up the breadth of this complex governance test, issuing a data call of sorts that asked all 28 newly appointed CIOs to provide "a statement listing the major information systems investments for which new or continued funding is requested."
Guiding resource decisions was, of course, the primary focus of the data call, but if the intent was also to capture the state of IT across the federal space, OMB's early efforts fell short. One problem was its reluctance to impose a particular model for documenting the relationships between business processes and information technology. By endorsing a medley of models, OMB perpetuated some of the complexity of drafting an appropriate data collection instrument, and it may therefore have missed an opportunity to set conditions and frame the effort appropriately. Of course, that may have been OMB's precise intent: to leave the intricate architectural details to the information officers now officially at the helm.
Next week: Becoming Enterprise Architects