Public Management
The 1993 Government Performance and Results Act was originally conceived by Republicans on the Senate Governmental Affairs Committee, passed at the beginning of the Clinton administration, and quickly endorsed by Vice President Al Gore's National Performance Review. Given that Texas is at the forefront of performance measurement, the idea is likely to stay with us no matter who wins the November election.
Just this spring, the first annual performance reports mandated under the Results Act (as it is known to members of Congress; GPRA to the executive branch) appeared, presenting quantitative measures of performance against the goals agencies set for themselves in earlier rounds of the process.
While every agency has published a report, the extent to which performance measurement has become part of agencies' lives varies widely. At one end are organizations such as the Transportation Department and the Education Department's Office of Student Financial Assistance. Under the leadership of Deputy Secretary Mort Downey, DoT's agencies have focused on performance in an ongoing way; Downey estimates he spends about a quarter of his time dealing with performance goals. DoT has reported that for 77 percent of its performance measures, goals were either met or the numbers were moving in the right direction. Thus, for example, auto fatalities per 100 million vehicle miles are down from 1.7 in 1993 to 1.5 today (the goal was 1.6); injuries are down from 137 to 119 (goal: 127). Public transit injuries per 100 million vehicle miles are down from 129 to 112 (goal: 123). At the college student loan program, Chief Operating Officer Greg Woods has focused constantly on three big-picture performance measures: customer satisfaction (among students and school loan administrators), processing costs and employee satisfaction. He has even tied his contractors' payments to achievement of the department's goals.
For many agencies, though, the Results Act has been that classic Washington exercise: the "drill." Anyone who's been in government knows what the drill is. Some outside force, such as Congress or the Office of Management and Budget, announces a requirement. The purpose of the requirement is to affect the agency's behavior, but officials have no intention of letting that happen. So they set up a staff to go through the exercise of complying with the requirement. This staff is kept as isolated as possible, shielding the organization's working activities from the requirement. The staff prepares paper documents that zip back and forth between it and the originator of the requirement. The organization's behavior doesn't change.
The movement toward performance measures for government organizations is potentially the single most important management reform in the public sector. It could significantly improve the performance of government. But for that to happen, we need to shift our focus from the Results Act to results.
The way many in Washington view performance measurement efforts has two fundamental flaws. The first is seeing the primary goal as after-the-fact accountability rather than before-the-fact performance improvement. In conversations with people involved with the Results Act, it becomes clear that many of them see the law mostly as an effort to measure how well a program is doing so that the political system can make budgetary or other decisions based on the program's success or failure. This approach is far too static because it assumes performance measurement will not affect an organization's performance. Instead, we should view performance measurement primarily as a way to improve an organization's performance.
What are the performance benefits of performance measures? Extensive psychological research confirms the common-sense observation that giving people a goal improves their performance by serving as a motivator. Performance measures also give employees signals about which tasks, among the many before them, the organization wishes them to focus on: what gets measured gets noticed. Performance measurement can also improve performance by:
- Allowing managers to see what works and what doesn't by comparing performance across organizational units using different approaches.
- Providing an objective way to evaluate individual employee performance.
- Giving managers information they can use to press people for improvement in certain areas.
For any of these effects to occur, however, performance measurement is insufficient. We need performance management: the use of measures by senior executives and line managers in the everyday leadership of the organization.
This is why the "drill" approach of isolating performance measurement within a special staff is worse than ineffective: It provides performance data that an organization's critics can use to beat up on it, without giving the organization the golden opportunity to use the measures to do a better job.
The second flaw is centering congressional attention on Results Act reporting requirements rather than on agency performance results. Performance measurement enthusiasts have mourned the general dearth of congressional and organized group attention to the Results Act. Achieving such interest is crucial to improving agency performance, since improvement often requires wrenching organizational changes that many will not pursue without political pressure.
There are barriers to getting legislators interested in performance measures under any circumstances, but the purveyors of performance measurement have made things unnecessarily difficult. They have encouraged legislators to take an arcane approach centered on rating agency plans rather than agency results, an approach institutionalized by the House Committee on Government Reform's report cards on the quality of agency reports. At a recent Results Act seminar for environmental groups, participants were surprised to learn that the legislation was something more than a bunch of bureaucratic reporting requirements. Getting any significant number of legislators or organized groups interested in the Results Act is unlikely; the prospects are much brighter for getting them interested in results.
Performance measurement is crucial. Let's not foul it up.
Steven Kelman, Weatherhead professor of public management at Harvard University's John F. Kennedy School of Government, was administrator of OMB's Office of Federal Procurement Policy from 1993 to 1997.