On a Wing and a Prayer
The phrase "faith-based programs" is in vogue these days, as the debate rages about organizations with religious affiliations getting government subsidies. Perhaps a better definition of faith-based programs is, "programs and activities that are believed to be effective, but for which there is no evidence of positive impact, or for which no one has bothered to find out." These are the programs that truly are operating on faith. There are lots of them. Their existence, and their persistence, is one of the key reasons government keeps recycling different ways to approach "performance-informed" budgeting and management, as a colleague once put it.
In principle, there's nothing wrong with designing, enacting and funding programs in the belief that they will work. To solve the difficult problems government takes on, federal officials often need to take leaps of faith, think creatively, move into new areas, and step away from the known and safe into the unknown but promising.
It is easy to criticize a new idea, especially if it's one that has been around government for a while. People always are discovering new approaches that resemble something that was tried before. That's one reason experienced budget examiners seem like naysayers. I was no exception. In fact, when I worked at the Office of Management and Budget during the Clinton administration, I was disinvited to a planning meeting on an education initiative. Gene Sperling, head of the National Economic Council, said, "We'd like one meeting where you don't tell us how what we want to do has been tried before and why it failed."
When the government takes leaps of faith with taxpayers' money, it should set up the machinery to evaluate the new initiative. That way, managers can make timely modifications in response to performance data, expand the program intelligently if it is working well, or put it aside if it is failing and can't be fixed. Government is loath to set up and use a high-quality evaluation system to support these kinds of decisions. But such evaluations are among the most valuable components of performance-informed budgeting and management.
Over the decades, as all jaundiced students of government management innovation know, reformers have put new labels on the notion of performance management and budgeting (Hoover Commission, PPBS, ZBB, MBO, Grace Commission, NPR, GPRA). Generally, they trash whatever went before, declaring that nobody has ever successfully used performance information to drive management and budgeting. They announce that their reform will at long last make government effective. Then they implement their reform badly or incompletely. The next team, which usually wants to put its own stamp on this valuable component of governing, then abandons its predecessor's approach in favor of some innovation of its own.
The government is in the midst of yet another such effort, but this one might last long enough to have a real impact. This effort is a combination of the president's management agenda, which focuses in part on reporting on management performance and integrating budget and performance in decision-making, and PART, the Program Assessment Rating Tool.
Here are some reasons for cautious optimism:
- The initiatives have won consistent and visible support from the president and former OMB Director Mitch Daniels. Daniels' departure in June might become a problem for the initiatives, but early signals indicate no flagging of commitment.
- These initiatives are being implemented with a startling degree of candor about how hard it will be to make them succeed and how serious and sustained a multiyear effort will be needed to do so.
- They are intimately and publicly related to high stakes budget decision-making, giving them teeth and commanding attention at agencies.
- They build on a statutory base, the 1993 Government Performance and Results Act, which has created the necessary precursor processes of setting goals and objectives, measuring performance, and, most importantly, reporting on performance. These processes have been waiting a decade for a serious strategy that links them to high stakes program management and budgeting decisions and thereby gives them meaning.
These initiatives must lead directly to more and better-focused investment in the rigorous evaluation of programs. Such investment runs in cycles. It was at a high point in the 1970s, but the concept fell on hard times in the 1980s, when many agencies let their evaluation capabilities atrophy.
GPRA should have triggered a new wave of program evaluation, but it didn't. The administration has yet to launch a major push for more evaluation. PART summaries describe myriad programs with "results not known" or "results not demonstrated." That alone shows the need for a high quality evaluation system.
It would be especially exciting if members of Congress, in their appropriations, authorization and tax-writing processes, adopted their own forms of publicly visible performance-informed decision-making. Then performance information could play its proper role at both ends of the decision-making avenue. One end alone isn't good enough.
Already there are signs of hope. The Education Department recently released a reportedly rigorous study of a popular Clinton administration program, 21st Century Schools, which funds after-school services for elementary and secondary school children. The study found little or no objective evidence of educational or other intended effects. In an April 2002 article in The Washington Post, Jay Mathews reported that the controversy around the study "is a classic example of how advocates try to discredit even the best educational research when it seems to threaten their ideas." Mathews also noted that by factoring the study findings into its budget proposal, the administration is "doing what social scientists have long begged policy-makers to do: collect good data and use it in their decisions."
I was disappointed in the study's findings, since I had a role in the decisions on this program in the last administration, and I had faith that it would work. But the study apparently was a good one, because the Bush administration used the performance data to justify its proposal to reduce spending for the program. Another equally valid approach would have been to use the study to support legislative or administrative improvements to the program, without reductions.
Initiatives of a prior administration are always at some risk with the next crew, regardless of party. Still, the data were analyzed and used, and published for all to see. That transparency is one of the things that focusing on performance should lead to.
Performance management and budgeting is the essential counterbalance to faith-based programming. Agencies must be willing to take risks. They must be equally willing to measure and evaluate performance and hold programs accountable for what they do or do not achieve. Otherwise, they don't have a prayer.
Barry White, a private consultant, is also director of Government Performance Projects for the Council for Excellence in Government. For more than half of his 31- year federal career, he was an executive at OMB. The views expressed are his own.