How to measure complex social change? Don’t make it so complex
A few years ago, Save the Children International set out to rethink its measurement systems for campaigning. As we tried to ensure the system was credible and rigorous in the eyes of our monitoring and evaluation experts, we began to realise that complex, time-consuming assessment systems do not necessarily support quick adaptation and learning. Long evaluations are seldom read, and unread evaluations do not lead to learning.
Our campaigners focus on complex social issues such as discrimination and child marriage, and they can usually tell an accurate story of how change is happening if you ask them the right questions (and recognise that they are time-poor and dislike M&E jargon). With this as a starting point, we found ourselves striking a delicate balance between providing rigorous, honest assessments of our contributions and generating timely learning.
We see this as a spectrum of complexity, rigour and time. More complexity and rigour make the process slower and more costly; less allows rapid and continuous adjustment. Similar ideas of measuring ‘along the way’ appear on various platforms, including this Stanford Social Innovation Review article on measuring advocacy.
As we continue to move back and forth along the spectrum, we seem to be settling somewhere in the middle: well-known methodologies (such as contribution analysis and theory of change) are simplified, and a balance is struck between upwards accountability and learning. At its core, the system is less about the tools and more about facilitating a thought process and a culture change.
How have we simplified methodologies?
Our measurement centres on three core principles – better planning, good questions and storytelling.
Better planning – during planning we try to start with a clearer picture of what is inside and outside of our control, and we set indicators accordingly. Inspired by papers from ODI, the Annie E. Casey Foundation and Oxfam America, as well as many theory of change toolkits, we have developed menus of indicators and use the spheres of control, influence and concern, pictured in the diagram.
Theories of change and the spheres of influence guide planning discussions, along with time spent unpacking assumptions and a recognition that the context is likely to have changed before the ink is dry on the plan.
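To make the idea concrete, here is a minimal sketch of what a menu of indicators organised by sphere could look like. The three sphere names come from the approach above; the example indicators, the `INDICATOR_MENU` structure and the `indicators_for` helper are all hypothetical illustrations, not our actual menus.

```python
# A minimal sketch of an indicator "menu" keyed by sphere. The sphere names
# come from the approach described above; the indicators are invented.
INDICATOR_MENU = {
    "control": [  # activities and outputs we deliver ourselves
        "number of campaign actions delivered",
        "policy briefs produced and shared",
    ],
    "influence": [  # changes we contribute to but do not control
        "decision makers publicly commit to reform",
        "coalition partners adopt shared messaging",
    ],
    "concern": [  # long-term change we care about but cannot attribute
        "reduction in the national child marriage rate",
    ],
}

def indicators_for(sphere: str) -> list[str]:
    """Return the menu of candidate indicators for a given sphere."""
    return INDICATOR_MENU.get(sphere, [])

print(indicators_for("influence"))
```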
Good questions – we are steering away from lengthy reporting templates and monitoring frameworks and towards a set of good learning questions. When a learning discussion is facilitated well, the by-product is often good data. If we validate this data with external partners, it adds the rigour that we need.
Learning questions
- What has been the most significant change, and why is it significant in this context? (including progress AND setbacks)
- Did we make course corrections? If so, why?
- What is Save the Children’s contribution to the change? (Who else was involved and would it have happened regardless?)
- What assumptions have we made?
- Is our approach still relevant?
Storytelling – for important achievements we document the chronology of events and the state of civic space, and we rank our contribution and the potential impact on children. The stories are a quick snapshot, and we send them to external partners for validation.
Using the CIVICUS monitor, we are asking why we see more or less access to decision makers in certain contexts. It doesn’t follow that we have less access in restricted spaces; sometimes we have more.
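For illustration, a story snapshot could be captured in a simple structured record like the sketch below. The elements (chronology, civic space, contribution and impact ranks, external validation) come from the approach above, but the `CampaignStory` fields, scales and sample values are assumptions, not our actual template. The CIVICUS monitor categories (open, narrowed, obstructed, repressed, closed) are real.

```python
from dataclasses import dataclass, field

@dataclass
class CampaignStory:
    """One story snapshot; field names and scales are illustrative only."""
    title: str
    chronology: list[str] = field(default_factory=list)  # key events, in order
    civic_space: str = "narrowed"   # CIVICUS monitor rating: "open", "narrowed",
                                    # "obstructed", "repressed" or "closed"
    contribution_rank: int = 0      # e.g. 1 (marginal) to 5 (decisive) - assumed scale
    impact_rank: int = 0            # potential impact on children - assumed scale
    validated_by: list[str] = field(default_factory=list)  # external partners

story = CampaignStory(
    title="Raising the legal minimum age of marriage",
    chronology=["coalition formed", "draft bill tabled", "bill passed"],
    civic_space="obstructed",
    contribution_rank=3,
    impact_rank=4,
    validated_by=["partner organisation"],
)
print(story.title, story.civic_space, story.contribution_rank)
```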
How can we use the data?
The data we draw out of this process can tell a number of different stories depending on how we carve it up. Some ideas we are working with include looking at correlations between civic space and access to decision makers (and asking why) and mapping contribution and impact on an axis (a rough sketch of both follows below). Now the challenge is to make sure that what is interesting is also relevant.
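As a rough illustration of carving the data up, the sketch below correlates a civic space rating with access to decision makers and places each story on a contribution/impact axis. The data, the `CIVIC_SPACE_SCALE` ordering and the cut-off values are all invented for the example.

```python
# Requires Python 3.10+ for statistics.correlation.
from statistics import correlation

# Map CIVICUS categories to numbers, from most closed (1) to most open (5);
# this ordering is an assumption made purely for the sake of the sketch.
CIVIC_SPACE_SCALE = {"closed": 1, "repressed": 2, "obstructed": 3,
                     "narrowed": 4, "open": 5}

stories = [  # made-up story data for illustration
    {"civic_space": "open",       "access": 4, "contribution": 2, "impact": 3},
    {"civic_space": "obstructed", "access": 5, "contribution": 4, "impact": 4},
    {"civic_space": "closed",     "access": 2, "contribution": 3, "impact": 5},
]

# Does more open civic space go with more access to decision makers?
space = [CIVIC_SPACE_SCALE[s["civic_space"]] for s in stories]
access = [s["access"] for s in stories]
print(f"civic space vs access: r = {correlation(space, access):.2f}")

# Place each story on a contribution/impact axis, e.g. for a 2x2 chart.
for s in stories:
    quadrant = (("high" if s["contribution"] >= 3 else "low") + " contribution, "
                + ("high" if s["impact"] >= 3 else "low") + " impact")
    print(f"{s['civic_space']:>10}: {quadrant}")
```

In practice, a weak or negative correlation is simply the cue to ask ‘why?’ in the next learning discussion, not a conclusion in itself.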
A few observations
The ingredients for successful social change seldom produce the same results when repeated in different contexts or at different times. The same is true of designing measurement systems, so repeating this process is unlikely to have the same impact in a different organisation. However, a few observations are worth considering within the M&E and campaigning communities of practice.
We all have to start with some sense of direction (or a plan), but we should try not to fall into the trap of over-planning, and instead embrace the uncertainty that complex social change processes bring. Alongside this, we need to accept that we can never have complete information, no matter how thorough our systems are.
While we are very happy to use toolkits, we try very hard not to create them. The sector is filled with unread toolkits, and we don’t expect time-poor campaigners to read them, which is why we stick to short concepts and graphics and, at a push, a few simple Word documents.
What has been most helpful is that our peers have reviewed and validated the system. This process has been important for keeping our thinking up to date and for internal credibility.
What is frustrating (or exciting, depending on your energy levels) is that the system will never be complete. Although we started out with the aim of redesigning the measurement systems, it has become clear that this will be a continuous process of adaptation, capacity building and change.
Essentially, it all boils down to simplicity, encouraging people to think differently and being prepared to chuck it all out and start again!
For more information, visit https://campaigns.savethechildren.net/impact or contact measuringimpact@savethechildren.org.