The Moneyball movement in government is gaining steam again. It started with the 2011 release of the film version of the Michael Lewis book, which brought to dramatic life the story of Billy Beane of the Oakland A’s and his effort to replace (or at least substantially augment) baseball scouts’ intuition about players with data-driven decision-making.
Soon, the Office of Management and Budget was endorsing the Moneyball approach, arguing that government needed to be relentlessly focused on goals, not processes, and rigorous in gathering evidence about program performance. And ultimately, the Obama administration declared, that evidence should be used to drive budget decisions.
Fast forward almost two years, and two high-ranking officials in the Obama and George W. Bush administrations put the question starkly in the July/August “Ideas Issue” of The Atlantic: “Can Government Play Moneyball?” John Bridgeland, a Bush policy adviser, and Peter Orszag, Obama’s first OMB chief, argue that less than $1 of every $100 the government spends goes toward analyzing whether that money is being spent wisely.
While the Moneyball approach seems trendy, the idea of measuring the performance of government programs isn’t new. Even in its modern incarnation, it’s more than two decades old. I actually wrote a cover story in Government Executive on the phenomenon in June 1992.
So why are we still talking about getting started? Part of the problem is it’s really hard. Back in 1992, the cautionary tale about the perils of performance measurement involved the Job Training Partnership Act, which provided states and localities with funds for job training, remedial education and job search assistance. It had simple performance measures: states were rated on how many people they placed in jobs, and at what cost.
Getting people actual jobs seems like a perfectly good performance measure, right? Yes, up to a point. The problem was that local offices started cherry-picking clients they could place most quickly and tailoring their services to them. And those folks didn’t necessarily stay in the jobs they found. So the Labor Department had to overhaul performance measures to factor in how many people were still in their jobs three months after being placed.
Even if you develop the right performance measures -- which can take years of trial and error -- you still face the challenge of getting people to act on them. Every program has congressional backers, and often they’re not interested in learning whether their pet project is working. If it’s delivering a benefit to a particular constituency, then mission accomplished.
So should we just give up on the idea of ever directing increasingly scarce federal dollars to programs that actually work? Not necessarily. Bridgeland and Orszag argue for holding politicians’ feet to the fire by creating a Moneyball Index to rate members of Congress on their support for programs that have been shown not to work.
Even in the absence of such a tool, John Kamensky of the IBM Center for the Business of Government argued recently that there are several reasons to think the push for evidence-based budgeting may be gaining traction. Among them:
- Agencies are collecting more data than ever.
- Tools to analyze the information are getting more sophisticated.
- Leaders across the political spectrum are beginning to pay attention.
On top of those factors, the budget situation is leading directly to increased interest in squeezing the most out of every federal dollar.
Of course, sustained progress in this area would require an ongoing commitment to assessing and improving federal performance in both the executive and legislative branches. That’ll be the trickiest part of all.
Twenty-one years ago, then-OMB management chief Frank Hodsoll told me, “We’re a long way from doing [performance-based budgeting] in any meaningful sense in the federal government.” We’re still pretty far away, but we may be getting closer, little by little.