Outcome Evaluation of Policy Research: Recommendations and Best Practices for Funders, Researchers, and Evaluators

The recommendations and best practices that follow are excerpted from the last chapter of my Master’s Thesis, ‘Who Used Our Findings: Framing Collaborations between Foundations and Think Tanks through Practices in the Evaluation of Policy Influence’, defended at Georgetown University on April 22, 2016. They were distilled from 22 semi-structured interviews with experts in foundations, think tanks, and independent evaluation firms.

The study set out to examine the collaborations between foundations and think tanks in order to identify whether the former are driving the evaluation of the latter, and to determine whether and how the influence of public policy research is being assessed at that intersection. The motivation for the study was grounded in a paradox: while many collaborations between foundations and think tanks are longstanding and well known, there has been no attempt to date to frame how they are affected by the increasing reliance of organized philanthropy on impact and outcome assessment.

Against that general landscape, the main hypothesis of the study, that think tanks supported by foundations are more likely to conduct outcome evaluation, was supported by four case studies in which foundations can clearly be identified as the driving force behind the construction of monitoring and evaluation capacities and the conduct of evaluative research. When think tanks build such systems, they significantly improve their ability to make a persuasive argument about their influence and contribution to the policymaking environment, and that argument is an important part of their fundraising strategy. At the same time, the Think Tank Initiative and other examples of think tank evaluation may have opened the field to studies on the impact of prominent think tanks, like the Brookings Institution, the Bipartisan Policy Center, and the Pew Research Center, that have recently been commissioned by the Hewlett Foundation.

Recommendations

1: Learn from the field of strategic communications

One of the most recurrent themes in attributing value to the contributions of think tanks was that evidence ought to be packaged strategically to stay influential. Some of the interviewees, like Justin Milner at the Urban Institute, mentioned the production of visualizations, videos, and blog posts.

Jackie Kaye highlighted the benefits of connecting policy research with the lessons of strategic messaging and communications from a more experimental perspective:

If you have policy research findings, wouldn’t it be great to do randomized control mini-experiments, where half of the people you want to influence get a two-page summary of the findings, and half of the people get a video and a two-page summary of the findings? I think that the work to evaluate policy research needs to get a little more creative.

(Personal interview with Jackie Kaye, Wellspring Advisors, February 22, 2016)

Similarly, Julia Coffman saw that the “wave of the future” may lie in the power of digital analytics for capturing “audience exposure to information and electronic consumption”. Even short of that, public policy research organizations could make more frequent use of the market research tools that gauge the behaviors, attitudes, and beliefs of target audiences.
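
To make Kaye’s suggestion concrete, here is a minimal sketch of how such a mini-experiment could be randomized and summarized, assuming a list of target contacts and a binary uptake measure (for instance, whether a contact later cited or requested the findings). All names, outcomes, and the uptake definition are hypothetical illustrations, not data from the study.

```python
import random

def assign_arms(contacts, seed=42):
    """Randomly split contacts into two dissemination arms:
    'summary' (two-page summary only) and 'summary+video'."""
    rng = random.Random(seed)  # fixed seed so the split is reproducible
    shuffled = contacts[:]
    rng.shuffle(shuffled)
    midpoint = len(shuffled) // 2
    return {
        "summary": shuffled[:midpoint],
        "summary+video": shuffled[midpoint:],
    }

def uptake_rate(outcomes, arm_members):
    """Share of an arm's contacts with a recorded uptake event."""
    hits = sum(1 for c in arm_members if outcomes.get(c))
    return hits / len(arm_members) if arm_members else 0.0

# Hypothetical data: contacts to influence and observed uptake.
contacts = ["aide_01", "aide_02", "analyst_03", "staffer_04", "staffer_05", "aide_06"]
outcomes = {"aide_01": True, "staffer_04": True, "staffer_05": False}

arms = assign_arms(contacts)
for arm, members in arms.items():
    print(arm, round(uptake_rate(outcomes, members), 2))
```

With a large enough pool of contacts, the difference in uptake rates between the two arms could then be assessed with a standard two-proportion test.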

2: Think about audiences

A recurring recommendation linked to the above is to focus on the users of information. This recommendation unfolds along multiple dimensions: the information consumed can consist of evaluation findings or of the policy research itself, while the intended users can be internal to the organization, members of a partner organization, or decision makers. For that reason, the recommendation to understand the work culture and information needs of users can take many forms. Some organizations focus on understanding the needs of policymakers at the national and local levels; others have focused on fostering an understanding of how policies at the federal level constrain the possible decisions of other policymakers; still others have in mind what can be done to make evaluation findings more useful beyond the context that generated them.

As a design principle, the focus on users can be adopted in the earliest phases of research design by addressing the information needs of specific audiences and concentrating dissemination on them. That intensive focus also lowers the barriers to measuring uptake in terms of opinions, behaviors, attitudes, and use.

3: Map the inclusion of public policy research in larger initiatives

Some respondents expressed the hope that the findings of public policy research will be integrated purposefully and intentionally into larger initiatives that combine different tactics, such as grassroots mobilization and awareness campaigns. In this context, evaluation is seen as a test that can help identify the most effective placement of new information in a planning sequence.

Best practices for funders

1: Build anchor institutions by aggregation

Offer financial incentives for collaboration among grantees working in the same area, with the objective of resolving their disagreements and finding a common message that can be disseminated through a collaborative platform.

2: Include unnatural allies in the coalition

Including communities and voices that are usually on the other side of an issue, like a hunter and an environmentalist, can reduce the perception that a message is ideological and strengthen its credibility.

Best practices for researchers

1: Package information differently

Push beyond the traditional publication of information in books and PDF reports toward formats that summarize information for easy consumption: videos, infographics, visualizations, stories, or shorter texts.

2: Target specific audiences

Although federal legislation is usually targeted because of its high profile, lesser-known policymakers at the state and local levels, as well as mid-rank employees in government agencies, carry significant weight in policy implementation. Targeting them at the right time with information that is relevant and connected to their sphere of influence is hard, but it can make a difference.

3: Connect the dots

Policymakers are rarely aware of the consequences of their actions for the work of other agencies. Relevant information about unintended consequences in other policymaking communities can promote more nuanced, self-aware and responsible decisions.

4: Create multidisciplinary teams

Varied skillsets allow a team to cover the full lifecycle of a project, from planning and research through communication and evaluation.

5: Collaborate with advocates

When there are clear legislative goals, collaborate with advocates who can actively amplify the message. Including advocates can also make it easier to assess which groups and communities have been informed by a project or are using its evidence to support their arguments.

Best practices for evaluators

1: Avoid superimposing strict models

Although logic models and theories of change can be useful tools for identifying and testing assumptions and assigning intended outcomes to a project, many funders avoid superimposing their own model, and focus instead on offering support to organizations with similar goals, including support for capacity building in evaluation and planning.

2: Establish intermediate outcomes

Outcome evaluation of policy research cannot be tracked through legislative change alone. Interviewees indicated that intermediate outcomes signaling changes in attitudes, behaviors, discourse, and opinions can also be measured, with tools such as public polling or discourse analysis of public political debates and of bill proposals that were not passed.
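
As one illustration of what a lightweight discourse-analysis indicator could look like, the sketch below counts how often chosen framing terms appear in debate transcripts over time. The transcripts, terms, and years are hypothetical placeholders, not material from the interviews.

```python
import re

# Hypothetical debate transcripts keyed by year.
transcripts = {
    2014: "The bill ignores evidence on early childhood outcomes ...",
    2015: "Members cited new evidence on early childhood development twice ...",
    2016: "Evidence-based early childhood policy dominated the floor debate ...",
}

# Framing terms whose uptake we want to trace (assumed, not from the study).
terms = ["evidence", "early childhood"]

def term_frequency(text, term):
    """Count case-insensitive occurrences of a whole phrase."""
    return len(re.findall(re.escape(term), text, flags=re.IGNORECASE))

# A rising count across years would be one crude signal of discourse change.
for year, text in sorted(transcripts.items()):
    counts = {t: term_frequency(text, t) for t in terms}
    print(year, counts)
```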

3: Use evaluation to better understand the world

In the realm of public policy research, the assessment of influence is usually not a summative judgment of whether a program delivered what it promised. Rather, it is an examination of how the surrounding environment interacts with the objectives of a program, in order to understand whether the message has to be repeated, translated, or shifted.

4: Identify and address key decision points

Evaluation is more likely to be informative and useful when it has been designed to address program-related information needs and is delivered on time for a decision. This often implies designing the evaluation and the program as interdependent processes.

5: Make Social Network Analysis part of the influence assessment toolkit

The Robert Wood Johnson Foundation has adopted SNA as a way to measure the social outreach of academics, but also to give advocacy organizations insight into which groups and communities are underexposed to a certain message. References to the use of SNA for assessing policy influence are growing: see Bogenschneider and Corbett (2010, 300) and the guide published by Network Impact and the Center for Evaluation Innovation (2014).
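
As a sketch of that use of SNA, assuming the open-source networkx library and an entirely hypothetical network of organizations, the snippet below detects communities and flags those with little exposure to a message:

```python
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities

# Hypothetical network: edges are working relationships between organizations.
G = nx.Graph()
G.add_edges_from([
    ("think_tank", "foundation"), ("think_tank", "advocacy_a"),
    ("advocacy_a", "advocacy_b"), ("foundation", "advocacy_b"),
    ("local_gov", "school_board"), ("local_gov", "parents_assoc"),
    ("school_board", "parents_assoc"),
])

# Organizations known to have received the message (assumed data).
exposed = {"think_tank", "foundation", "advocacy_a"}

# Detect communities, then flag those where exposure is low.
for community in greedy_modularity_communities(G):
    share = len(community & exposed) / len(community)
    if share < 0.5:  # arbitrary threshold for "underexposed"
        print("underexposed community:", sorted(community))
```

In a real assessment, the nodes, edges, and exposure data would come from the advocacy organization's own contact and dissemination records, and the threshold would be set to fit the campaign.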