Monitoring Research Uptake and Policy Influence

20 December 2018
Annie Holmes

How can research programmes accurately account for the uptake and impact they achieve? To unravel this knotty question, a group of 26 funders, researchers and uptake practitioners met for a roundtable discussion in London on 12 November 2018, hosted by the Wellcome Trust and ODI. 

The discussion was timely for STRIVE, as we approach the end of our 8-year grant and take stock both of the changes to which we can claim some contribution and of the lessons of implementing a research uptake model that we adapted, appropriately enough, from ODI’s RAPID approach. We took away new ideas and tools, as well as the reassurance that others face similar challenges and have made similar advances. 

Participants also shared useful resources throughout the day.

The day was organized around three questions. For each question, two short presentations sparked conversation. We took note of some key points. 

1. What kind of research impact should programmes aim for and what should they be accountable for?  


  • Jessica Romo, Evaluation and Learning Manager, Wellcome Trust  
  • Sonia Sezille, Programme Manager, Anti-Corruption Evidence Research Consortium, SOAS University of London 

Characteristics of knowledge translation and research uptake for which programmes may be held accountable include:

  • Quality of knowledge translation and uptake strategies
  • Ethics
  • Open research standards: findings should be accessible and re-usable within reasonable timeframes 
  • Critical engagement with publics 
  • Realistic impact goals

The particular type of impact matters: funders do not want to stifle exploratory research and blue-sky thinking. The funding application should indicate what kind of impact is intended. 

At what point do funders expect impact? To complicate thinking around this vexed question, research suggests that it takes 17 years, on average, for basic research to translate into impact. Funders are beginning to look back over past impact case studies to assess the longer-term legacy of earlier and ongoing research. 

DFID’s emphasis on M&E and impact in budgeting and reporting has helped to put research uptake on the table as a requirement. Research programme consortia such as STRIVE no longer have to persuade academics to take this seriously, and there is greater understanding of nuance: in defining contribution rather than attribution, in understanding pathways of change, and in valuing impact on practice and discourse as well as on legislation. On the other hand, technical advisors still have to contend with upstream demands for metrics such as “number of policies changed”.

2. How can programmes build plausible cases of the contribution of their research? 


  • Ella Spencer, Head of Monitoring, Evaluation and Learning, International Growth Centre  
  • Ruth Mayne, Senior Researcher on Influencing and its Effectiveness, Oxfam GB  

Factors supporting plausibility include: 

  • Transparency 
  • Nuance and complexity  
  • Realism (vs optimism) about what is achievable as well as pathways to achieve change 
  • Credible research and researchers 

The IGC has developed a scale from 1 to 4 for rating contribution, with Level 3 indicating internalization by policy makers and Level 4 some kind of policy change. They advocate transparency about impact: not claiming too much, highlighting complexity and being clear about input from a range of funders. In some situations, claiming impact can adversely affect outcomes. Be creative and open to opportunity, they advise: a video of a conversation between a researcher and a policy maker in Ghana helped to advance the impact of one project. 

For Oxfam GB, the emphasis is on citizen voice and pro-poor impact. They use a variety of methods to monitor and evaluate impact. With process tracing, for example, they involved external researchers to assess contribution: staff reconstructed the hypotheses behind each theory of change, and the exercise resulted in a meta-review of 24 initiatives. 

Universities are driven by the Research Excellence Framework (REF). Introducing scoring on impact in the most recent round has certainly shifted the conversation, but there is still reluctance to put funding into “the impacty bit”, along with a tendency towards “heroic optimism” about what can be achieved. Researchers and research groups have differing levels of skill, interest and home-institution support for research uptake, so support and guidance can be crucial.  

3. How can programmes support and incentivise learning about research uptake and policy influence?

Effective methods include:

  • Dedicated resources for research uptake and M&E staff, activities and training
  • Realistic timeframes for developing strategies at the start of a programme
  • Investment in analysis of M&E data
  • Connectivity, including forums for sharing across projects at and beyond national levels

Overall, participants emphasized a balance between the science and the art that our field requires. We need to be more systematic about (for instance) brokerage, boundary partners and network analysis, but also savvy, creative and agile in analyzing and responding to context and power. Within MEL – monitoring, evaluation and learning – we stand to gain from analyzing what has not worked or where we see no change, rather than focusing strenuously on the positives. Critical reflection with partners and across projects and sectors is essential.

With thanks to ODI and the Wellcome Trust for organizing this exchange and ongoing discussions.