Dark Matter of Funding — Session 4

Cassie Robinson.
4 min read · Dec 23, 2021


“The definition of impact that has that legitimacy to say this is a good future, this is a bad future — how do you track the social legitimacy of that? I don’t know what that future looks like, so I think we’re bridging into the fog, an unknown landscape and I think we need to be more honest about that. And who has the right, the authority to define what good looks like? Which good is it? For whom is that good? Present populations? community? the public? future generations? non-human systems? There’s an interesting question about the theory of impact that I would argue is built on a conversation about command and control — out of a thesis of control. Most of our logic models and management systems are built on a theory of command and control.” — Indy Johar

In our fourth session of the Dark Matter of Funding series, we were joined by Tim Hobbs, CEO of the Dartington Service Design Lab. Continuing our exploration of the meaning, value and purpose of impact in funding and social change work, the session, led once again by Indy Johar, Co-Founder and Director of Dark Matter Labs, explored the intersection of impact and evidence in driving and understanding change.

Tim discussed the multiple approaches he and the team at Dartington take (and have historically taken) to their work on the practical application of research and evidence to improve outcomes for children and young people. The first is a more traditional, relatively top-down, evidence- and impact-informed approach, which seeks to understand whether certain interventions work, whether outcomes can be attributed to particular activities, and what the mechanisms are for change.

The second, informed by the shortcomings of the first, is an approach that emphasises co-design and co-production, involving people in informing decisions and generating evidence. This kind of work can build capacity and resources within communities, and increases the likelihood that people will engage meaningfully with the interventions being designed, because those interventions reflect their needs and preferences.

The third approach is one that is complexity-informed, that is, understanding the dynamics at play in complex systems, how these evolve over time, and how outcomes can be emergent properties of these complex systems. This approach avoids some of the narrowness of more simplistic approaches, highlights local variations and contextual nuances, and can help to consider intended and unintended consequences of any intervention.

However, each of these approaches has downsides as well as benefits. The more traditional evidence approach leads to a strong focus on proof and a narrow emphasis on delivery in isolation, which distracts from broader systemic and contextual influences. It can also produce aggregated estimates of impact that hide significant variation, and even discrimination, within the evidenced interventions. The co-design approach can become overly narrow: focusing too closely on the particularities of one context can make it difficult for others to learn from and build upon the work. Nor does it necessarily escape the power dynamics that plague more top-down approaches; the imbalances of power might manifest differently, but that doesn't mean they aren't there. Finally, the complexity-informed approach is, unsurprisingly, complex, sometimes to the point of being overwhelming. It can lead to the conclusion that evidence is never generalisable, nor knowledge transferable between different messy and complex systems, which in turn allows harmful practices to be repeated. Furthermore, focusing on the whole can take attention away from particular parts of the system.

Tim argued that there is a danger of becoming polarised about these different approaches, with people taking strong positions in favour of one or another. He sees this as setting up a false duality that fails to appreciate the nuance required in understanding what questions need to be asked in what circumstances. In some cases or contexts, the right question may well be whether an activity is impactful or whether it 'works', but in other contexts different questions may matter more, such as 'what matters to people in their lives?', 'are particular inequalities emerging through the work?', or 'how can we build capabilities to learn and improve through the work?'

This last theme — of all the different questions other than ‘does it work?’ that might be asked about an intervention — came up over and over again throughout the conversation. Tim advocated a return to curiosity, and an understanding of evidence and experimentation that emphasises uncertainty, asking questions, and the likelihood of being wrong — all leading to an iterative approach to interventions.

Other topics that were discussed over the course of the hour included:

  • The (im)possibility of universal evidence
  • The rapidity of learning required to respond to accelerating crises
  • Who owns the learning and impact agenda?
  • The impact of precarity and reactivity on organisations’ ability to move beyond an evidence and impact paradigm
  • The relationship between accountability and evidence — and the question of who is accountable to whom?

Thank you to Olivia for pulling a lot of these strands together for this blog post.

