I was lucky enough to be asked to participate in a recent workshop on community and public engagement (CE/PE) evaluation in Kenya. I work in communications and am new to CE/PE, so – feeling somewhat intimidated – I set off to Naivasha, armed with a crash course in background reading on the discipline. With a whole new vocabulary swirling in my head, as well as new questions about the interface between communications and CE/PE, I felt even less prepared to be sitting in the same room as expert practitioners and academics from fellow Wellcome Trust-funded research institutes in Kenya, Thailand, Vietnam, Laos and Malawi!

Mesh, which is a truly great resource on the subject, hosted the workshop. The goals included starting to build an evidence base of CE/PE effectiveness across different contexts, and creating a community of practice. CE at the Africa Health Research Institute (AHRI), where I am based, focuses mostly on informed consent and on deepening understanding of our work in the areas where we do research. Over the next five years we plan to significantly expand this programme and develop robust evaluation systems. There was lots to be inspired by at the workshop. Over the course of two days we discussed four case studies, three papers in progress, theoretical and methodological approaches and frameworks, and cross-cutting themes and challenges. From puppet theatre for children to ambitious internship programmes, radio shows, science cafés and exhibitions, there is a huge amount of exciting work happening.

I’ll admit that I was somewhat overwhelmed by how much there is to learn and to consider when designing engagement activities and their evaluation. However, friendly conversations over dinner, during evening jogs and in between sessions revealed how willing everyone is to help and share. I was also struck by the different backgrounds that people come to CE/PE from – adding to its richness and interdisciplinary feel.

Below are a few of my key ‘take homes’ from the workshop:

There is no ‘magic bullet’ or ‘one-size-fits-all’ approach to CE/PE evaluation.

That said, much can be learned from different contexts and applied to our own – and there are years of experience to draw on from our fellow Wellcome Major Overseas Programmes. Underlying all of our discussions was the subtext of ethics. What does ethical engagement and evaluation look like in the context of (for our purposes) biomedical research, usually among vulnerable and resource-poor populations? I found the strong academic thread that informs the KEMRI-Wellcome Trust Research Programme’s study-specific and programme-wide CE/PE and evaluation work a useful place to start. Their work also begins to answer the call to apply the same rigour, or ‘science’, to engagement activities that is applied to the medical research or intervention that often goes hand-in-hand with engagement.

There are multiple complexities, narratives and perspectives to consider.

Emory’s Jim Lavery asked us to shift perspective as he introduced an architecture analogy for thinking through CE/PE evaluation. In the simplest terms: depending on who you are and what use you have for it, you might have a completely different opinion on a building’s form and function to, for example, the architect – or community of architects – who designed it, the person who uses it, or the person or people responsible for cleaning it. (From my past life as an online editor I was immediately reminded of discussions and decisions taken around design versus ‘user experience’.) Similarly, when evaluating engagement, you need to take into account that your results will depend greatly on who you are asking. During the workshop we were given an overview of Realist Evaluation, a useful methodology that takes seriously the context and complexity of intervention and change. Reflecting on Jim’s architecture analogy, I was reminded of the questions that Realist Evaluation asks us to consider: What works? For whom? In what circumstances? And why?

Image Caption: KEMRI-Wellcome Trust Research Programme’s (KWTRP) Dorcas Kamuya and Emory University’s Jim Lavery discuss KWTRP’s engagement evaluation case study, which Dorcas presented to the workshop

Whose voice matters?

Related to the above point: whose perspectives do we then take into account, and regard as ‘useful’? I found this question particularly interesting in the context of AHRI’s elected Community Advisory Board (CAB), which we engage to help inform our study design. Can we regard our CAB as representative? If so, how do we ensure that the least ‘represented’ voices are heard? Also, what can we realistically hope for from our participants? Jim broke this down into four categories: collaboration, cooperation, toleration and opposition. While we might hope for active collaboration, toleration is more likely.

The ‘ceiling of accountability’

Regina Makwinja from the Malawi-Liverpool-Wellcome Trust Clinical Research Programme (MLW) introduced what I felt was a useful way to frame where our responsibility begins and ends: the ‘ceiling of accountability’. In the example she gave, the ultimate goals of MLW’s school health clubs, exhibition and internships included ethical research practice, an increased number of Malawian scientists, greater Malawian research capacity, and access to good-quality science education. Is it enough to inspire school children and expose them to science and its opportunities? Where then does our ‘ceiling of accountability’ lie, can it shift over time, and how do you even begin to measure some of these outcomes? We will be implementing a new Schools Engagement Programme at AHRI, and will need to grapple with similar questions.

The power of real-time evaluation

It may seem an obvious point to make at an evaluation workshop, but there is real value in being flexible and open to change, as informed by evaluation feedback. Some of this can, and should, be done in real time, as demonstrated by MORU’s Fishy Clouds theatre production which, based on early feedback, introduced a basic explanatory synopsis to aid audience understanding. Being reflexive, in the best possible sense of the word.

Overall, I found the work being reflected on, as well as the people at the workshop, motivating. I am looking forward to applying these learnings as we start work on developing AHRI’s CE/PE and evaluation programmes. I am also looking forward to continuing the discussions on Mesh! Thank you to everyone for sharing their experience so generously.

This resource resulted from the March 2017 Mesh Evaluation workshop. For more information and links to other resources that emerged from the workshop (which will be built upon over time) visit the workshop page.

For a comprehensive summary of Mesh's evaluation resources, and to learn how to navigate them, visit the Mesh evaluation page.

This work, unless stated otherwise, is licensed under a Creative Commons Attribution 4.0 International License

  • Juliet (julieodhiambo), 18 Apr 2017

    Good job!

  • Hannah Keal (hannahkeal), 4 Apr 2017

    Thank you Dorcas, and for all the help and sharing of ideas from you and colleagues at Kilifi!

  • Dorcas Kamuya (dkamuya), 31 Mar 2017

    Thanks Hannah for the nice reflection on the workshop, and for highlighting some key areas and themes that emerged. I am certainly looking forward to hearing more about AHRI's public and community engagement plans, and their evaluation.