What's Working(ish): Evaluation

--

By Francisca Rojas + Lindsay Cole

Seeing the forest and the trees

Register for our upcoming discussion about Evaluation on May 15, 2024, 8:00–9:30am Pacific time, here.

Evaluating the complexity of transformative public innovation

For more than a decade, evaluation theorists and practitioners have been working in earnest on approaches for evaluating complexity, acknowledging that complex challenges have characteristics that call for unique evaluative mindsets, approaches, and processes. Complex systems, the playground of public innovators working toward transformation, are nonlinear, emergent, adaptive, uncertain, dynamic, and co-evolutionary. Yet many public sector innovation initiatives have tried to demonstrate their value and impact using the same measurement approaches applied to other government activities: return on investment, business cases, project management milestones, or predetermined quantitative targets.

This is a struggle because the purpose and focus of innovation work are fundamentally different from those of day-to-day government work. Goals often shift as practitioners learn more about how to frame problems and refine desired outcomes through the innovation process itself, and measurement and evaluation work needs to reflect that this agility is critical to systems transformation. Many public sector innovation initiatives also delay or neglect evaluation for a variety of reasons, including: a lack of appropriate and accessible methods and frameworks that can be quickly taken up; the absence of evaluation skills and roles on innovation teams, or of budget to hire evaluation support; and pressure to keep producing activities and outputs without taking the time to gather up impacts and learning.

Creative evaluators offer a different orientation to evaluating complexity: one that is concrete, inventive, flexible, and specific. It requires understanding what type of strategic approach to innovation is being used and then designing an appropriate evaluative strategy from there. It is also important to reframe the idea of ‘rigour’ when evaluating complexity so that it means: quality of thinking; credible and legitimate claims with transferable findings; attention to cultural context and responsiveness to stakeholder values; and the quality and value of the learning process. (With thanks to Mark Cabaj, Michael Quinn Patton, and the talented folks at FSG working on strategic learning and evaluation).

A range of robust evaluation frameworks suited to working with complexity is currently underutilized in government and can be brought into wider use in public innovation work. These include participatory action research, developmental evaluation, equitable evaluation, principles-focused evaluation, transformative learning, and custom frameworks and approaches to evaluating public sector innovation that practitioners are generating. As an example of what’s possible, in developmental evaluation practitioners track what they are learning and what insights are emerging throughout the iterations of their practice by asking: What did we do? What happened? What does that mean? Which key relationships and insights have emerged? And then, now what? What capacities do we want to deepen and cultivate?

It is important to choose and use evaluation methods that can be applied throughout the innovation process, not just at the culmination or delivery of an initiative. This shortens feedback loops on what is working and enables learning, reflecting, and adapting along the way. Discernment in right-sizing evaluation methods for the different stages and purposes of an innovation process is critical; otherwise evaluation can send the wrong signals about impacts. Evaluation approaches that work alongside different innovation activities, such as user research, problem (re)framing, ideation, prototyping, user testing, and iteration, are also important for this right-sizing, because different innovation stages and activities have different objectives and need to be evaluated as such. Not everything should be assessed against the scaling out of policy, program, or service solutions, yet much evaluation of public innovation is oriented in that direction. Evaluation in public sector innovation must often be low-cost and easy to run, integrate into innovation processes and workflows, and generate actionable insights that can be quickly taken up.

The time horizons in evaluating public innovation need to be carefully considered, and often adjusted compared to typical public sector evaluation approaches. Some of the most significant shifts in transformative innovation work result from people changing mindsets, paradigms, and values, which then cascade into decision-making, organizational rules, hiring and procurement practices, and community-organizing approaches. These shifts take time to show up and become visible at scale, and they are not contained within a specific innovation initiative or activity but ripple out into other areas of work. Measurement and evaluation processes need to get better at seeing the early signals of these shifts so that they can be supported and amplified. As a field we must also get much better at measuring and telling stories about these deeper impacts, and at making them visible so that they start to matter, and be valued, in our transformative innovation work.

Questions we are exploring

Some questions that we are holding as we look forward to our Pushing the Boundaries of Public Sector Innovation community of practice conversation about evaluation are:

  • How might shifting our evaluation approach support more radical theories of change, and the inner/outer work of transformation (topics of earlier discussions)?
  • What are the most significant ways that evaluating transformative innovation is different from evaluating more standard public sector policy, programs, or services?
  • Which evaluation methods have been shown to generate the most useful insights about transformative public innovation? How can we learn, practice, and become more skillful in their use?
  • How might we use evaluation practice to help us ensure that our innovation interventions are not causing harm somewhere else in the system?
  • Who holds the evaluation role/responsibility? What skills, orientations, capacities are needed? How might we make sure that this work and role is properly prioritized, resourced, and supported?
  • How can the results of evaluation be shared and mobilized, and help to make the field of transformative public innovation more robust, credible, and impactful?

This blog post is part of the Pushing the Boundaries of Public Sector Innovation (PB PSI) community of practice (CoP). We are people working in- and alongside public sector organizations who share a curiosity and commitment to work more ambitiously, systemically, and respectfully on the biggest social and ecological challenges of our time. These posts are written from the diverse perspectives of different members of the CoP as we learn and explore together. Find out more about the project and/or join the CoP, here.

--

Lindsay Cole (she/her)
Pushing the Boundaries of Public Sector Innovation

Lindsay Cole is a Postdoctoral Research Fellow exploring transformative public innovation at Emily Carr University and UBC.