Blog: Evaluation, but at whose cost?

In the music, arts and wider education sectors, I’m noticing a growing demand for hard evidence of impact from evaluation and research. Language like ‘robust’, ‘measurable’, ‘KPIs’, ‘stats’ and ‘dashboards’ increasingly appears in evaluation EOIs (expressions of interest), and my feeds are full of ‘impact at a glance’ infographics.

But scratch beneath the surface and what’s often being reported isn’t impact at all. It’s outputs: numbers reached, sessions delivered, partnerships formed, and so on.

On paper, evaluation is framed as a tool for learning and improvement. In practice, it’s often driven by fundraising, advocacy and the need for positive stories. Even well-intentioned, socially just projects and organisations can fall into extractive, top-down evaluation practices, focused on collecting data ‘from’ participants rather than working with them to ask how the evaluation process itself might benefit them.

I’ve recently been challenged and inspired by Sahibzada Mayed’s writing on trauma mining, research justice, and decoloniality in research and evaluation, which feels urgent and necessary in the sectors where I work.

This has pushed me to unlearn aspects of my evaluation practice. My interest now lies in reframing evaluation not just as learning, but as a tool for challenging inequitable systems, and for genuinely involving participants not only in programme co-design but in evaluation co-design too.

Because impact isn’t just something we measure or present in infographics. It’s about what changes in people’s lives, defined and described in the voices of those affected.

I work with organisations to design evaluation and learning processes with participants, particularly children and young people. If you would like an informal conversation about how this could work in your organisation, please do reach out.
