Master’s Thesis: How Do Foundations Evaluate Think Tanks?

Interest in distinguishing what works from what doesn’t is widespread in the philanthropic community, particularly among organizations devoted to policy change in the United States and around the globe. With the goal of maximizing the efficiency of their grants, prominent foundations such as the Bill & Melinda Gates Foundation and the Annie E. Casey Foundation have confronted the need for specific monitoring and evaluation methods that can be applied to the work of the nonprofits they support.

Credit: W. K. Kellogg Foundation, Logic Model Development Guide (2004)

However, little attention has been paid so far to the relationships between research-oriented organizations and the foundations that support them, with good reason: it is extremely hard to prove the impact and utilization of a policy report.

My Master’s thesis will be entirely devoted to this complex intersection. Under the preliminary title “How Do Funders Decide? Monitoring and Evaluation as a Standardizing Interface Between Foundations and Think Tanks”, the study will consist of a set of semi-structured interviews with program officers at both foundations and think tanks, as well as with independent evaluation experts.

The following sections are an excerpt from the Thesis Proposal submitted to the Graduate School of Arts and Sciences at Georgetown University.

Research Problem

While many of the major American foundations are seeking to maximize the effect of their giving with advocacy strategies that include specific requirements for the evaluation and monitoring of grantees, the impact of policy research is particularly difficult to measure. This study will ask the following questions:

  • How do foundation officers decide whether a think tank deserves to be funded?
  • Does that judgment rest on formal feedback (reports, narratives, and objective data provided through monitoring and evaluation frameworks) or on informal factors such as trust, habit, reputation, and changes in leadership?

Literature Review

Many questions remain unanswered in organized philanthropy, a field notorious for its self-sufficiency. Some of them are relevant to this project and will be put to experts and program officers on both sides of the funding relationship: How useful is the evaluation of think tanks for foundation officers? What are the benefits of good think tank evaluation? What type of organizational learning does it enable? How do evaluations contribute to the decision to fund grantees? How can such feedback promote the utilization of policy research in advocacy coalitions?

Experts have ventured into this issue with contributions that provide a solid body of practical knowledge to build upon. In 2006, James McGann produced a “Guide of Best Practices for Funding and Evaluating Think Tanks in Developing Countries”, commissioned by the Hewlett Foundation in preparation for the Think Tank Initiative (TTI), launched in 2008. As the only study that has explicitly addressed the problem of how to inform decisions to fund policy research organizations, it focuses strongly on the political context of African countries, which makes it unclear whether domestic grantmaking practices have been influenced by this work and by the TTI experience in general.

However, an external evaluation of the TTI conducted by the British Overseas Development Institute in 2013 identified discrepancies among program owners that seem indicative of the ongoing debate in the United States: while organizations require long-term funding in order to remain sustainable, some donors prefer a ‘campaign for policy change’ approach, which involves funding “aligned with advocacy objectives”. Recognizing that “there are no common approaches or accepted practices regarding the evaluation of grantmaking in the area of advocacy and policy change”, the Casey Foundation published its “Guide to Measuring Advocacy and Policy” in 2007, intended to contribute to standardization and to help fill the void in “what expectations are meaningful and appropriate for investments […], what kind of outcomes are possible and realistic, and what kind of strategic adjustments in programmatic approaches or funding allocations might be needed”.

To what extent these models have been applied to programs developed by think tanks is especially difficult to assess because of the confidentiality of evaluations. But the persistent interest in policy change seems to conflict with the metrics think tanks consistently use to measure their programs. Congressional appearances and media citations were the basis of Andrew Rich’s landmark 2004 study of think tank profiles. Even with the addition of social media metrics, these indicators come closer to measuring popularity than to tracking utilization and actual changes in policy.

David Devlin-Foltz argues that confronting the impact of policy research deliverables with skepticism helps test which research is worth funding and guides planning and evaluation (Bumgarner et al., 2006). A similar skepticism can drive further questions about the practice of policy research evaluation and its uses for organizational learning and decision making, which may offer insight into how funders can support high-quality policy research that matters while preserving the much-needed autonomy of policy analysis.
