How to Address the Question Every Program Manager Should Answer: Does It Work?
In one of his first actions, President Biden encouraged agency heads to support “evidence-building plans,” more commonly called “learning agendas.” Here’s the back story.
In 2017, Congress gave tax breaks to investors in designated Opportunity Zones around the country. The idea was that this would generate substantial private sector investment in more than 8,700 lower-income census tracts. Did it work as intended? This is one of many questions raised by the Housing and Urban Development Department in its most recent research agenda, which is required by law.
HUD has pioneered the use of economic- and housing-related research and evaluation agendas since 2013. But in 2019, a new law required all federal agencies to develop evidence-building research agendas to assess how well their programs work. Dubbed “learning agendas,” the first set is due to Congress in early 2022.
What’s been the experience of early pioneers in developing learning agendas? What are some good models to follow? What are the potential pitfalls? A new IBM Center report by a research team composed of Kathy Newcomer, Karol Olejniczak, and Nick Hart tackles these questions.
First, some background: A 2017 congressional commission recommended agencies use more “evidence” in making policy decisions. One element of the recommendation was for agencies to develop research-based, evidence-building evaluation plans to methodically examine the effectiveness and impact of various programs. These plans are colloquially called “learning agendas,” a term used by pioneering agencies in the Obama administration.
The commission’s recommendations were incorporated into the bipartisan Foundations for Evidence-Based Policymaking Act of 2018, signed into law in January 2019, which required agencies to develop learning agendas in conjunction with their strategic plans, which are refreshed every four years. As a result, the first set of completed learning agendas will be due to Congress in early 2022.
Agencies were also required to designate chief evaluation officers to lead the development of their learning agendas. The Office of Management and Budget supported these officers with guidance in 2020 that includes, for example, standards for judging program evaluation practices.
Early signals from the Biden administration suggest strong support for continuing this effort. For example, in a memorandum shortly after taking office, President Biden declared: “Heads of agencies shall ensure that the scientific-integrity policies of their agencies consider, supplement, and support their plans for forming evidence-based policies, including the evidence-building plans required [by law in the Evidence Act].”
What Is a Learning Agenda?
OMB guidance issued in 2019 describes a learning agenda as a multi-year evidence-building plan: “a systematic plan for identifying and addressing policy questions relevant to the programs, policies, and regulations of the agency.” While OMB did not prescribe a format, it did note that such plans would need to address the following elements:
- A list of policy-relevant questions for which the agency intends to develop evidence to support policymaking.
- A list of data the agency intends to collect, use, or acquire to facilitate the use of evidence in policymaking.
- A list of methods and analytical approaches that may be used to develop evidence to support policymaking.
The guidance also asks agencies to identify potential challenges to developing evidence, such as statutory restrictions; to develop annual evaluation plans that implement the multi-year learning agenda; and to assess their own capacity to actually carry out the agenda.
Avoiding a Compliance Exercise
The IBM Center report’s authors note that the greatest fear of evidence-based policymaking proponents is that learning agendas will become a compliance exercise to “placate oversight officials” rather than a meaningful tool. They also note the risk that, if program officials are not engaged in the development process, “the substance reaches a level of abstraction that makes implementation difficult.”
Based on their observations of successful early adopters, such as HUD and the Small Business Administration, they found that developing and implementing learning agendas “requires participation from a range of stakeholders and internal program staff.” Such participation, they found, grounds the agenda in insights about programs that can lead to short-term operational improvements. It also generates success stories that demonstrate the value of evidence-building efforts.
SBA, for example, collected feedback internally from program managers and externally from trade groups, think tanks, and researchers. The agency organized its plan around the four priorities in its strategic plan, with long-term and short-term efforts to address questions such as: “What impact does lending have on long-term job creation, revenue growth, and export sales?” SBA’s agenda also identifies research the agency intends to fund, the relevant databases that researchers could access for such projects, and relevant literature for reference by the evidence-building community.
Emerging Practices
Newcomer, Olejniczak, and Hart identify three emerging practices for developing an effective learning agenda:
- Agency leaders and program managers need to identify and agree upon their agency’s key mission objectives and goals. If there is not a shared understanding about core mission objectives, it is difficult to agree on relevant research questions and priorities.
- Staff and stakeholders have to be willing to participate in the learning agenda development process and to commit to using the activities that result from it to promote evidence-based decision making and learning within the agency. Developing a plan, then failing to provide the resources to implement it or to use its results in making decisions, is a recipe for a compliance exercise.
- Agency leaders need to define the “unit of analysis” for which the agenda will be developed. Will it be organized around agency programs, organizational divisions, or broad policy outcomes? Will it be developed in conjunction with other federal agencies that have related programs or desired policy outcomes, such as climate change? Will it be developed in conjunction with state and local governments, or with other sectors such as nonprofits, on issues that require collaboration to solve, such as addressing the social determinants of health?
Sorting these issues out in advance of actually developing a learning agenda will make the resulting product more meaningful and actionable for both evaluators and decision makers.
Next Steps
Building on emerging practices, the research team identified a set of desired characteristics for learning agenda-building exercises: making the agenda user-oriented by including program managers as co-designers in the development process, making the development process both interactive and iterative, and soliciting grassroots input so the resulting evidence plans are grounded in information that actually exists or can be collected.
Newcomer, Olejniczak, and Hart also describe a seven-step process, using design sprint methods, for developing a learning agenda that reflects the three characteristics above that make a learning agenda-building exercise effective. The steps are grounded in intentional and broad engagement of stakeholders. They include, for example, developing a stakeholder map to ensure key players are identified, pinpointing key points in agency decision processes and their timing so that evidence is available when decisions will be made, and cataloging the needs of various decision makers.
The authors found that co-developing these elements of a learning agenda increases joint ownership of the result and raises the likelihood that the agenda will be used to support evidence-building that is relevant to decision makers. After all, the ultimate goal is to help program managers, agency heads, budget officers, the public, and Congress answer the question: Does it work? And if not, what’s next?