How do we measure the impact of culture change, communication, and storytelling strategies? Join Alan Jenkins on a journey to examine leading methodologies for evaluating cultural and communication campaigns through concrete examples and case studies.

Alan Jenkins is the founder of The Opportunity Agenda and a Professor of Practice at Harvard Law School, where he teaches courses on Race and the Law, Communication, and Social Justice.

hls.harvard.edu

www.opportunityagenda.org

Resources

Slide Presentation PDF

Measuring the Success of Your Communications Strategy

Impact Playbook: Best Practices for Understanding The Impact Of Media

Storytelling Matters: Measuring the Social Impact of Entertainment on Audiences

Full Transcript

Mallika Dutt: Welcome to Leadership Moves presented by Interconnected. I’m Mallika Dutt. In this episode, we’ll be talking with Alan Jenkins about the leading methodologies for evaluating cultural and communication campaigns. He is a Professor of Practice at Harvard Law School, where he teaches courses on race and the law, communication, and social justice. What better person to show us, through concrete examples and case studies, how we can measure the impact of culture change?

Mallika Dutt: Hello everybody, welcome, welcome, welcome. I’m in New York City, where it’s snowing, and I am so delighted to see all of you. So let’s get to the content of the day. I am delighted to invite Alan Jenkins back to take us through a journey of how we measure the impact of our culture change, communications, and storytelling approaches and strategies. You know, this whole question of evaluation and monitoring is one that many of us in the social justice world explore and are often challenged by, because this work doesn’t lend itself to measurement as easily as more quantitative outcomes like health outcomes. And still it’s really important for us to have ways to explore whether what we’re doing is effective, whether we’re having the impact that we desire, and what kinds of indicators we might create.

So I’m really delighted that Alan Jenkins has joined us again. And it’s because Alan straddles so many of the worlds that we are straddling. So he is currently a Professor of Practice at Harvard Law School where he teaches communications. He also was at the Ford Foundation, so he sat in the place of philanthropy. He was the Founder and the Executive Director of The Opportunity Agenda, and so has created a lot of campaigns, communications, message boards, and he is an artist and storyteller himself. So it’s pretty amazing to have somebody like Alan who straddles and brings together all of these intersecting worlds to lead us through this journey of discovering how we might get better at really understanding whether what we’re doing is working, not working, and how we might be more effective in our work. I want to also take a moment to welcome Kathy and all the other folks from Ford who are joining us today. This community is an extraordinary community – really an absolutely extraordinary community of leaders around the world. And Kathy is the Director of the BUILD Program and so I just want to say, hey, hello, thank you for joining us today, and thank you to the Ford Foundation for making this community possible. So without further ado, I’m going to hand this over to Alan Jenkins.

Alan Jenkins: Great. Thanks so much Mallika, and hello to everyone. As Mallika mentioned, I’ve been on multiple sides of this question of evaluating narrative change and communications and cultural strategies generally: as a funder receiving reports and the like, and as an NGO president having to both assess our own work and report it to others. I’ve also been on the boards of non-profit media companies that were trying to look at metrics and the impact that they were having. So this is a challenging issue, but the good news is there are lots of solutions and best practices out there. I’m going to present some ideas to you, but I’m really hoping that this will be a conversation, and also that we’ll have time to break into small groups so that you all can do some specific problem solving for your own situations and country contexts and issues that you’re working on. So I want us to jump straight in by asking some of you to share one communication evaluation challenge or issue that you’re struggling with when it comes to narrative change, the subject of today’s session. I just want to hear from some of you about what you’re struggling with, so we can make sure that our time together is responsive to your needs. So great, Eve?

Eve Tahmincioglu: Yeah, I figured I’d jump in. I’m Eve Tahmincioglu, Director of Communications for the Economic Policy Institute. We had a good year from a communications perspective, but one thing we’re looking at going forward is reaching new key audiences, well, audiences that we have reached but want to reach more of: a younger audience, and also what I’m calling the swayables. So folks that, you know, aren’t in unions or are disillusioned with unions and progressive economics. How do we get those swayables? That’s one of the challenges we’re looking at and going to work on, but, you know, any advice you could give would be much appreciated.

Alan Jenkins: Great. Thank you Eve, and thanks for starting us off. And I’m going to tweak your question for our purposes, which is: how do we assess whether we’re actually reaching those audiences that we want to reach? How do we measure that? How do you engage in midcourse correction, right? But I’m happy to talk to you more offline about your core question, and I know my former organization, The Opportunity Agenda, is working with your organization now. Okay, let’s hear from a couple more folks: narrative change, evaluation questions, challenges that you all are dealing with. Some of you got up early in the morning or stayed up late at night for this session, so I’m sure there are some specifics that you’re looking to solve.

Leila Nachawati: So I am Leila Nachawati, and I work with the Association for Progressive Communications. This year has been quite intense, and we have been working online for decades – for 20 years now. But now there are more and more people online, and we work supporting human rights organizations and people who are trying to shift their work online. So we are thinking about how to strategically address this. We are also going through some internal changes, where we as the communications team are trying to be more strategically focused: to be less about responding to needs and more proactive in identifying audiences. But I actually find it difficult to segment audiences. I think we all talk about dividing audiences, but at the end of the day, it’s difficult to actually plan who you are targeting and how. So I’m really interested to know how to assess this and whether we’re doing things the right way.

Alan Jenkins: Great. Thank you so much. Let’s hear one more.

Mallika Dutt: Alan, there’s a comment in the chat from Hoi Cecil that says: knowing what data to collect and setting up the proper collection system.

Alan Jenkins: Excellent, excellent. Okay, good, good. Okay. Well, let me jump in. I’m going to share my screen and share some ideas and best practices with you that will be responsive to some of what you all have raised, but not all, and then we’ll jump out and discuss. So, this question of why we measure narrative change: I do think it’s important, and this is reflected a bit in the challenges that some of you identified. There are a few different reasons. One is internal learning and improvement and internal accountability, right. Are we making a difference in the world? Are we doing what we said we were going to do? And that thing that we said we were going to do, if we’re doing it, is it making the change that we want to see, both change in the narrative of the audience that we’re targeting, and also is that narrative change actually improving people’s lives? Because that’s what all of us are in this business to do.

A different but related purpose is accountability to stakeholders. Funders obviously loom large in that respect, but there is also inspiration for supporters. And this goes to the question that was asked about data collection. We’re often going to be collecting data through a variety of means, which I’m going to talk about in a moment, for all of these purposes, but then we need to be sharing and analyzing that data for the different purposes that we’re using it for. So what we generate internally for our own learning may or may not make sense to external stakeholders. If we simply give them raw internet usage data or public opinion data, or what have you, it’s not necessarily going to make sense to them or tell the story that we want to tell. Internally, we want to be developing our culture of learning so that we’re constantly looking at that data, and with external stakeholders, and to some extent internal ones, we also want to inspire. We want to tell the story of why our work matters, and so it’s important to separate out those functions a bit. We don’t want to be mixing public relations communications with evaluation data. We want to make sure that our evaluation is as objective as possible, so that when something goes wrong or isn’t working, we’re able to acknowledge it, recognize it, and make changes based on it. And that’s going to be different from the stories that we may pluck out to reach and inspire external audiences. So we’re going to talk a little bit about that distinction and what it means.

So this idea of learning, improvement, and internal accountability is crucial. In the best case scenario, we all want to be learning organizations where we’re constantly assessing what we are learning out in the field, what’s working across all of the activities that we use, whether it’s organizing or litigation or research, and narrative change strategies should fall within that. What are we learning? Let’s share that across our staff and our internal consultants and others. And let’s be constantly asking the question: how can we be more impactful? Do we need to make changes? Are there things that are not working? Are there things that are working even better than we expected, that we need to double down on and really emphasize? That internal structure requires setting aside time and resources and people to collect and report data.

Even for our own staff, it’s often the case that just reporting raw data is not going to be useful, so that requires its own functions. And here are two quick examples. I apologize that most of my examples are US-based, because those are the case studies that I have access to, but I’ve tried along the way to incorporate some examples from other countries as well. The first example of this internal learning comes from the US movement to abolish the death penalty, where death penalty opponents had hit a wall: they were not making any progress in reducing support for the death penalty among US residents, nor among lawmakers. They had been using the story of abolition, the belief, which I hold and many people hold, that the death penalty is always wrong, it’s immoral, it’s a human rights violation, and it should never be used. That convinced a certain block of people, voters in this instance, but it was not reaching other large numbers. What their research and assessment showed was both that that message wasn’t working for a lot of their audiences, and that they could add to it a message about exoneration: the idea that the death penalty, whether you think it’s a good or bad idea, is rife with mistakes, so the likelihood that someone who is innocent of the crime they were convicted of will be executed is high.

And they found that that gave them access to large numbers of new audiences who were not willing to listen to the abolition argument, but were willing to listen to the exoneration argument, and that reduced their support for the death penalty in all instances. At that point, they began to make significant progress. There is still a battle to be fought, to be sure, but a lot of the progress of the last two decades has come from layering on that exoneration narrative. Another well-known example comes from the marriage equality movement here in the United States, where the evaluation data was showing the movement that arguing that gay and lesbian couples should be able to marry because it would give them access to the rights and responsibilities and benefits of marriage was not working. It wasn’t convincing large numbers of straight people, who said in surveys: oh, you know, straight people get married for love, gay people get married for rights and benefits. So the movement shifted in response to their evaluation data. They shifted their narrative to one of love and inclusion, belonging, and family. And we know that that turned out to be very, very successful, not only in convincing the US Supreme Court, but in moving hearts, minds, and policy around the country.

So I wanted to give you two concrete examples of using evaluation and assessment data for the learning process. Those of you who were part of my earlier presentation saw this idea of building a movement narrative, which includes beginning with the voices and insights and values of people directly affected and of advocates, but then being informed by research (that orange box), and that’s where the data and assessment and evaluation come in. Then you keep going around the circle to implementation, experience, and evaluation. So once you’re actually trying out strategies, you’re constantly asking: is it working, and how is it working, and then adapting. That’s why it’s a circle, because you’re constantly having to adapt, and when big events happen, such as the pandemic, you need to reassess and make sure that your narrative strategies are still working, or don’t need to change, or that more could be done. Then there’s accountability to external folks and inspiration for supporters. That often comes in the form of an annual report, which you see an example of here from, I believe, United We Dream, but it also means making sure you have the kind of anecdotal stories of, you know, here’s the op-ed that we placed. These are on the left of your screen.

Here is the media appearance that our executive director did that reached millions of people. Here is the person who was exonerated and who helped to tell the story of why the death penalty is always wrong. But also, here’s the data, which you see on the right. So you’re using the same sources of information that we’ll discuss, but you’re telling your story in different ways: always accurate, always aligned with the facts, but recognizing that different audiences are going to receive the same information differently. And this goes to the idea of both quantitative and qualitative information. You want to be collecting hard data (this is an example from the Innocence Project of how many people were exonerated, how many people did we exonerate), but also interviews and focus groups and the like. At the front end of that should always be planning. In other words, your evaluation system, your narrative evaluation tools, should always be built in at the earliest possible moment, even though you might have to change them over time.

So that means discussing with your staff and leadership and partners: what is your theory of narrative change? In other words, if we change perceptions of this group with this audience, then we will see better treatment, or a change in policies, or an upholding of human rights, or advocates better equipped to be successful. Articulating that is important. Many of you know the concept of SMART planning. The English acronym SMART stands for specific, measurable, attainable, realistic, and time-bound: going from your big goal of ending all child poverty to a goal that is specific and can be measured. That applies to all of your work, but it also should apply to your narrative change goals, and I’ll show you an example of that. So here’s how SMART goals typically work in practice. You might have a broad goal like saving children from food-borne illness, but you see that SMART goals are things like: pass legislation this year (time-bound and specific) to ensure that every child in the state has access to quality healthcare. So you’ll know in a year whether you achieved that goal or not, or the progress that you made towards it. That’s the idea of a SMART goal: going from something that can feel very general to something that is actually measurable.

So in the communications context, it might look like this. If your goal is to end mass incarceration, a narrative goal might be: increase public support for alternatives to incarceration by 20% among millennial voters in two years, as measured by public opinion polls. Let’s break that down. You’ve picked a metric; that’s not the only way to measure narrative change, and there are other examples below, but it is one example. It’s specific, and it is time-bound, meaning that it’s in two years. And in this instance, and this is something I recommend, it actually indicates how you’re going to measure it. So maybe you have the resources to do a public opinion poll, or maybe there are enough polls being done by journalism organizations and others that you can assess it. Maybe you need to pick another indicator. And I’ll just give you a moment to look at some of these other examples.
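To make that concrete, here is a minimal sketch of how a SMART narrative goal like the one above could be written down as data and checked against new polling numbers. The class, field names, and figures are hypothetical illustrations, not a prescribed format.

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class SmartNarrativeGoal:
    """One SMART narrative-change goal: specific, measurable, time-bound."""
    description: str     # the specific shift we want
    audience: str        # who we are trying to move
    baseline_pct: float  # support when the campaign starts, from a poll
    target_pct: float    # support we committed to reach
    deadline: date       # the time-bound part
    measured_by: str     # how we said we would measure it

    def progress(self, latest_pct: float) -> float:
        """Fraction of the promised gain achieved so far (can exceed 1.0)."""
        return (latest_pct - self.baseline_pct) / (self.target_pct - self.baseline_pct)

# Hypothetical numbers: a 20-point gain among millennial voters in two years.
goal = SmartNarrativeGoal(
    description="public support for alternatives to incarceration",
    audience="millennial voters",
    baseline_pct=35.0,
    target_pct=55.0,
    deadline=date(2023, 1, 1),
    measured_by="public opinion polls",
)
print(f"{goal.progress(latest_pct=43.0):.0%} of the promised gain")  # -> 40%
```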

So you get the point. I think this goes to a couple of your questions about how you go from "we want to reach youth" to much more specific, measurable goals. Now, let’s just return to mass incarceration. This might be one of six or seven or more narrative change SMART goals that you have for the year or the two-year period. But the more specific you can be about them and about the audience (here it is millennial voters), the more measurable it’s going to be. And that realistic part is important. You want to reach, you want to be ambitious, but you also don’t want to set a goal that is completely unattainable, which will just end up being demoralizing, and that sometimes requires estimation. You may not know at the beginning of the campaign what’s possible. The US marriage equality folks actually shot low: they had a 20-year timeline, and it only took them 11 years to get everything that they were trying to get. This next part is really important. This is, in my view, kind of the best analysis of the categories of narrative change metrics that you should be thinking about.

Activity metrics measure: did we do what we said we were going to do? If we said we were going to have 10 press conferences, did we have 10 press conferences? And if we didn’t, what was the reason? Was it lack of resources? Was it changed circumstances, et cetera? It could be social media posts, it could be rallies, it could be poems: a wide range of activities. Reach metrics are both: did we reach the audience we were targeting (millennial voters, for example, in a particular jurisdiction), and what was the size of that audience? So if we know that in the United States (and you can pick another country) there are 25 million voters who are millennials and we’re seeking to reach them, our reach metrics tell us, in part, how many of those folks we reached, and I’ll be talking about some ways of assessing that. Next are engagement metrics: measuring when and how others interact with our communications and narrative change campaigns. That, of course, is easiest in the social media context, because you can look at likes and shares and all of that. But also, if you have a rally, or you speak to a religious congregation, or what have you, and you’re doing petition signing afterwards, for example,

you can figure out the extent to which people engaged with your content, with your message. And then finally there are impact metrics: measuring the changes you’ve achieved, both in attitudes and behaviors and in people’s lives. That, of course, is the hardest to measure, for reasons that we’ll talk about, but there are ways to do it.

So, some of the tools that we use, and this goes to one of the questions that was asked. One is having a regular process for downloading with staff: oh, I spoke to this group, this religious congregation, and they loved it when I talked about the narrative of welcoming the stranger in the Bible, in the Old Testament, but when I spoke to this other congregation, they rejected it because it didn’t really fit with their faith stories, and actually the golden rule, “Do unto others, as you would have them do unto you,” was much more powerful. Just literally collect that information; it could be just an Excel sheet, right, that you use as a database to keep that kind of information. And also track activities. When I was running The Opportunity Agenda, every time one of our staff did a presentation like this one, we would collect the numbers on who participated and from what organizations, and we kept that in a database for our own purposes and also to report to our stakeholders. Social media and online data tracking is obviously crucial; there are free tools for doing that, but also expensive ones. Content analysis: what this means is, alright, if we’re working on behalf of Dalits or Roma or indigenous peoples, how are they depicted in news stories now? How are they depicted six months later, and two years later? That requires some level of expertise: to walk through and see what language was used, what the depictions were, whether they were positive or negative, whether they were all about crime and poverty or also about investment and thriving communities. But it can be very important, and it can be done with policy discourse and with cultural discourse.
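As a rough illustration of the before-and-after comparison that content analysis involves, here is a minimal keyword-counting sketch. The frame categories and keywords are hypothetical; real content analysis, as described here, relies on trained coders and a proper codebook rather than simple keyword counts.

```python
from collections import Counter

# Hypothetical frame keywords; a real codebook would be built by experts.
FRAMES = {
    "crime/poverty": ["crime", "arrest", "poverty", "blight"],
    "investment/thriving": ["business", "investment", "thriving", "leader"],
}

def code_articles(articles: list[str]) -> Counter:
    """Count how often each frame's keywords appear across a set of stories."""
    counts = Counter()
    for text in articles:
        lower = text.lower()
        for frame, words in FRAMES.items():
            counts[frame] += sum(lower.count(w) for w in words)
    return counts

# Hypothetical snippets standing in for full news stories.
baseline = code_articles(["Arrests rose again as poverty deepened in the district."])
followup = code_articles(["A thriving new business corridor is drawing investment."])
print("baseline:", baseline)
print("six months later:", followup)
```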

The Opportunity Agenda has done reports on entertainment media depictions of immigrants, for example. So that is both quantitative and qualitative: how much coverage was there, and was it helpful or harmful in terms of our narrative? Surveys and focus groups are an important tool, but they also can be costly. So one thing, as I noted, is to see whether you have a topic that’s hot and polling is being done in your region; then you want to be monitoring that to see if there’s attitude change. Interviews and ethnography: actually speaking to people. Yeah, you know, after this big story came out, people started asking me about my experiences and whether I was experiencing discrimination, as an example. Legal and policy tracking: did the laws that we were seeking to change, change? And then there’s this idea of integrated evaluations and case studies. When you have the resources, and this is something that you should lean on your funders to resource: if they’re funding the project, you should also be asking them to fund some resources for evaluation, to actually assess at least at the middle and the end, to interview people, but also to look at the data. So here are just some examples. Here is an example of activity metrics from the Center for Community Change. These were publicly available on the web, which is why I’m using them, but full disclosure, I used to be a board member at Community Change. You see they’re setting out the number of partner convenings that they did; that’s one of their strategies for doing multiple things, including narrative change. Some examples of reach metrics: on the left, from the Innocence Project, the number of visits to their website and stories in popular media outlets.

I would also want to know: what was the circulation of those media outlets? On the right, you see that some of that data is available. The top right image is regarding newspaper outlets in India. This is publicly available; I found it through a Google search, and more of that data is available, but you can also purchase more detailed data if you have the resources. On the bottom is data, again from the web, on the most popular social networks among fixed internet users in Mexico by age group. So that’s outside data that you can use to help analyze your own data. Engagement metrics: this is a public report by Participant Media. They make films and television, and they’re monitoring, based on your social media content, what you did after seeing one of their films, and then comparing that to people who didn’t experience their content. They’ve really broken it down very specifically: information seeking, information sharing, taking individual action, encouraging community action. So they can report with a lot of specificity some of the things that were done by people who saw their films that were not done by people who did not see their films.

This can also be affordable, just to note. This is Google Trends, which is a free resource. I did a search of the phrase “defund the police,” which in the United States has been very much in play, starting essentially from the summer, when the killings of George Floyd and Breonna Taylor and others occurred. And you see that that phrase was not searched for at all in Google until about Memorial Day, at the end of May, in the United States. Then suddenly there was this jump, and it’s come down over time. So this is a free tool for assessing the extent to which, in this instance, a phrase that was not really used before this movement took off entered the public discourse, and we can track that over time. And again, that’s free. Then there are impact metrics, which are in many ways the most challenging, because it’s so hard to prove causation. If you’ve got multiple things going on (television shows and public discourse and maybe public events), it can be very difficult to determine whether, but for your activities, the discourse would have changed. But we do have here perhaps a remarkable example from the marriage equality movement, which shows the change in public attitudes towards marriage equality over a period of, it looks like, about 15 years. So can the marriage equality movement say that, writ large, they were responsible for that change? You might not be able to prove it scientifically, but with the kind of mixed methodologies of interviews and tracking data, and seeing the extent to which shifts in narrative strategy led to shifts in attitudes, I think a very strong case can be made. On the right, you have a couple of other examples. The upper right is again the Innocence Project: an example of laws that were passed, told through a human example, someone who had benefited from them.

On the bottom right was a campaign. Long story, but the Hallmark network in the US refused to run an ad featuring a queer couple, and pressure was brought to bear on them, communications pressure, and they made a change. The organization GLAAD was one of the organizations that pushed that. So that’s an impact metric. Again, hard to prove causation, but I think important nonetheless in telling the story. So before I get to challenges, why don’t I jump out and let’s take questions and comments, and make sure that we get to actually talk about the things that are on your minds.
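For readers who want to pull the same Google Trends data programmatically rather than through the trends.google.com interface, here is a short sketch using pytrends, an unofficial third-party Python client for Google Trends; treat it as illustrative, since unofficial clients can break when Google changes its endpoints.

```python
# pip install pytrends  (unofficial Google Trends client)
from pytrends.request import TrendReq

pytrends = TrendReq(hl="en-US", tz=360)
pytrends.build_payload(["defund the police"], timeframe="2020-01-01 2020-12-31", geo="US")
interest = pytrends.interest_over_time()  # search interest per week, scaled 0-100

# The week the phrase peaked (late May 2020 in the example above).
print(interest["defund the police"].idxmax())
```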

Jennifer Radloff: My name is Jenny. I’m, also like Leila, from the Association for Progressive Communications, and I was wondering if you could comment a bit about something that I’m curious about, which is the more intimate, personal data that would come out of an evaluation around communication. For example, we work a lot in collaboration and in partnership, so through our communications, we often deepen relationships with our existing partners, or we find new partners through what we communicate out about, and core to our work really is movement building. So I’m just wondering if you’ve got any comments about that in relation to measuring communication change. I suppose it’s a kind of advocacy question that’s built into what I’m asking.

Alan Jenkins: Yeah, I’m not sure I completely understand your question. You talked about more intimate details that come out of the effort, and about movement building. But can you clarify for me what your question is?

Jennifer Radloff: It is about relationships that can arise out of, or deepen in terms of, what one is communicating; and for us the core of our work is movement building, bringing partnerships, cross-issue partnerships, so, yeah.

Alan Jenkins: Got it. So I can tell you that in my former organization, we kept track of all of our partnerships: here’s everyone we engaged with and in what way, here are the people who are on our email list who have actually asked for things, here are the people we’ve been with at meetings and the like. And we actually had a three-tiered system: close partners, allies who were occasionally engaged, and then people who simply received our information. We would also capture and ask. We did an annual online survey of everybody in our database, with open-ended questions about any benefits, did they use our materials, and the like, and from that we would get stories: oh, you know, at your convening we met this other group, and then we collaborated on something else. And so we would capture all of that. That part was qualitative. In other words, we didn’t know what percentage of our partners had had some kind of epiphany like that, but we would capture those stories. And then sometimes we would ask them: can we interview you to talk about why that was, what happened, where it went from there, all of that. So I hope that’s helpful. Jennifer Morales also has a hand up.

Jennifer Morales: So I guess I would like, since you have been in the philanthropic world, maybe to get some reassurance. Sometimes our measures seem so amorphous and hard to hold on to, something like trying to change how people talk about a certain thing; maybe there’s polling, maybe there isn’t. What kind of data is of strong interest to funders? What feels like proof, I guess I’m asking, that their money, their investment in us, made change for people?

Alan Jenkins: Yeah, that’s a great question. And, you know, my answer will not make you completely happy, but I think that, number one, that breakdown of activity, reach, engagement, and impact helps a lot with funders. It gives them assurance that, okay, you’re looking at this in an analytical way; you’re not just throwing out stuff to see what’s going to stick. And in my experience, if you can pick a few, I’ll call them metrics, but a few measures that you think are realistic and that are good assessments of the progress you’re making, it doesn’t have to be an ironclad peer-reviewed study in order to make the point with your funders. I think you could, for example, pick three places where... Jennifer, would you tell me in just a sentence about the work that you’re doing, or give me an example?

Jennifer Morales: Yeah, I’m from Family Values at Work, and we’ve been working for the past 18 years on changing the conversation around care, specifically, trying to promote paid leave and paid sick days policies around the US.

Alan Jenkins: Got it. So, yeah, the work I know well. The good news is my former organization, The Opportunity Agenda, has been doing a bunch of opinion research that hopefully you can glom onto. But just to answer your question: you could pick three places where you’re working with audiences and do a set of informal focus groups over the course of a year or of 18 months, asking the same questions and recording them in the same kind of way. And, you know, we now have Zoom, so we’re Zoom-ready. Then get a sense of whether people answered those questions differently over time. Give people some options, like: on a scale of one to five, how much opposition are you finding, or how much openness are you finding? Capture that, and also obviously keep track, as I know you do, of progress that’s measured in other ways. Let’s look at legislative discussions: did the language of what was introduced, even if it didn’t pass, better reflect the story that you all want to tell the audiences that you’re trying to reach? In my experience, funders, and also board members for that matter, and others, will be responsive to that. I think you also want to be realistic about your own timeframe. Is this something that we think we can do in a grant period, or are there certain benchmarks that we believe we can hit, especially in some places, over the course of an 18-month grant, when actually our goal is five years?
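A minimal sketch of the wave-over-wave scoring suggested here, assuming the same one-to-five openness question is asked in each round of informal focus groups; all responses below are hypothetical.

```python
from statistics import mean

# Hypothetical answers to the same question ("on a scale of one to five,
# how much openness are you finding?") asked in each wave of focus groups.
waves = {
    "month 0":  [2, 2, 3, 1, 2, 3],
    "month 9":  [3, 2, 3, 3, 4, 2],
    "month 18": [4, 3, 4, 3, 4, 5],
}

for wave, scores in waves.items():
    print(f"{wave}: mean openness {mean(scores):.1f} (n={len(scores)})")
```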

I think being realistic with donors about that is important, and recognizing that you’re going to have to remind them, right, because they have their own stakeholders. Your program officer has got to explain to the people above them: why should I make another grant for paid family or medical leave when we haven’t seen the progress that we were hoping for? So reminding stakeholders of that longer timeframe, I think, is important. And over time you might be able to shorten, or you might have to extend, that timeline. I hope that’s helpful. Alyssa.

Alyssa Pinkerton: Hi. Thanks Alan. My name is Alyssa. I work with High Country News. We’re a non-profit media organization, and we’re just in the middle of a number of changes, including intentionally broadening our audience. So this is a really exciting conversation; thank you for hosting it. I have a thousand questions, but I’ll just stick to a couple. As a fundraiser with the organization, I’m driving some meetings to really evaluate our metrics. My question for you is: how often would you recommend we collect data? And in nonprofits, I think we’re always struggling with wearing too many hats, so how do you share this load, and who is the person responsible for data collection and assessment? Those are my two questions.

Alan Jenkins: Yeah. So a couple of thoughts, and then I might ask you a follow-up question. Part of it is developing a culture of collecting information, which means, when you onboard people, making clear it is a part of your job, and in staff meetings, devoting maybe one out of every three staff meetings to the report-back. This is not just a narrative issue, right, this is an evaluation and data issue: typically, almost all program staff are going to have to have some responsibilities there. But in my experience, we had to designate someone to be the nagger-in-chief. It wasn’t that that person was responsible for collecting all the data (we had a database; it was just an Excel spreadsheet), but there was somebody whose responsibility was, before every third staff meeting, to reach out and say, hey, Julie, I know you did a training last week, can you enter the data? And it needs to be somebody with enough juice in the organization, or someone who works for someone with enough power, so that people are responsive. If you have an intern nagging people, it’s going to be a lower priority than if it’s somebody who either works for the boss or has a position of authority. So I think that part is important. And then, sorry, remind me of the first part of your question?

Alyssa Pinkerton: How often do you collect?

Alan Jenkins: Yeah, it really depends on what you’re measuring. For example, we would look every month at our own internal web and online data in Google Analytics. We would be looking at, for example: hey, we sent out this e-blast, this email message, and we would do A/B testing, right. We would send out a message with two different types of language, and then we’d look at whether people clicked and opened one of them more than the other. We would look at that every month, but that’s because that data was continually available to us. Something like polling, or assessing polling of our core audiences, we would have to do maybe semi-annually, or sometimes we would only do it when we could raise the money to do it. So it really needs to be keyed to your capacity. And then the final point is, of course: what’s your theory of change, and what frequency of assessment fits with it? If you’re trying to change perceptions of the death penalty over decades, you don’t need to be checking every week to see if attitudes have changed, but you do need to be checking every year. So it’s a matter of finding the right fit there. Hopefully that’s helpful. Okay, any other questions? If not, I’m going to have you all try something out.
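For teams doing the kind of A/B testing described here, a natural follow-up question is whether the difference between two variants is more than noise. Below is a minimal two-proportion z-test sketch; the e-blast counts are hypothetical.

```python
from math import erf, sqrt

def two_proportion_p(clicks_a: int, sent_a: int, clicks_b: int, sent_b: int) -> float:
    """Two-sided p-value for whether two click-through rates differ."""
    rate_a, rate_b = clicks_a / sent_a, clicks_b / sent_b
    pooled = (clicks_a + clicks_b) / (sent_a + sent_b)
    se = sqrt(pooled * (1 - pooled) * (1 / sent_a + 1 / sent_b))
    z = (rate_a - rate_b) / se
    # Normal-approximation tail probability, two-sided.
    return 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))

# Hypothetical e-blast: variant A vs. variant B, 5,000 recipients each.
p = two_proportion_p(clicks_a=120, sent_a=5000, clicks_b=168, sent_b=5000)
print(f"p-value: {p:.3f}")  # a small value suggests a real difference, not noise
```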

Elisabetta Micaro: Hi. Hello everyone. Sorry, I have a question, but I arrived at the webinar very late, so maybe you have already explained this; if you have, just ignore my question. First of all, I work for AWID, which is the Association for Women’s Rights in Development. Part of our work is also work online, through our website and our social media, and we also try to do movement support type of work, in all sorts of ways, as I said, on social media. One of the big questions that we have is: beyond how many likes, how many followers we have, how do we get a better sense of who is among our followers, and how our messages get beyond those followers, the impact they are having? As I said, you’ve probably already talked about this at the beginning; sorry, I missed the initial part.

Alan Jenkins: So, I have not addressed that specific question. Online is the easiest space in which to do that, because you have the information to contact those people, and so you can do an online survey. You don’t want to do it too often, but you can do a survey of your members and followers. You’ll get a percentage of those people, not all of them, but you want to get a sense of whether you feel it’s a representative response, and you make it anonymous. Ask people the kinds of questions that you want to know about: are you an advocate, an organizer, a researcher, or someone else? How old are you? What part of the region do you live in? All that stuff. One approach is to offer an incentive. I think we gave away an iPhone one time, like the new iPhone that was coming out; it was often new technologies. So that was, I don’t know, 500 dollars, 1,000 dollars, which is meaningful, but as an incentive to get back really important, rich data, it wasn’t huge. And it was also fun. And then, when someone won, we would put their picture on our website, all that stuff.

So that’s easier online, but there are other ways of doing it: you can just literally pass around a sign-up sheet if you do a physical gathering, once we’re able to do that again. And if people want to be anonymous, just ask them: will you please answer these three questions, kind of thing. Those are all things that you can do, and in our experience, people are willing to do that. You have to make your own decision about whether people need the safety of anonymity, and also what their trust level is with you. So when you as an organization reach out to them, are they going to know you and say, oh right, I’ve been engaged with them and I’m happy to fill out this form? By giving away the iPhone, we encouraged even people we had reached who didn’t know who we were. Some of them said that: like, I guess I get your newsletter, but I don’t really know you as an organization. That was helpful to us as well.

Okay, any other questions? So what I’m going to ask you all to do is to take a moment and take out whatever you write with. I still use a pen; I know that’s old school. Thinking about your own narrative measurement and assessment needs, I want you to write down as many of the following as you can. Think about what might be an activity metric: one thing that you could measure over a specific period of time that is a narrative change activity, that you could report back internally or externally. A reach metric: remember, reach is about whether you reached the audience you were seeking. Was it the right audience, and in what numbers? An engagement metric, which, remember, is what people did in response to receiving your narrative or message. And an impact metric. Then I am going to ask for volunteers to raise your hand and share with us one of the metrics you came up with.

Nana Sekyiamah: Hi, my name is Nana Sekyiamah. I’m based in Accra, Ghana. I also work for AWID. One of my projects for this year is to organize a festival, a feminist realities festival, and so one metric I came up with as an activity metric is to co-organize selected sessions with at least eight partners who represent particular minority constituencies.

Alan Jenkins: Great. So that’s really specific. It’s measurable. You know when the festival is happening, so you have a defined time period. And I’m going to put you on the spot a little, Nana: if you were to follow that through, are there any engagement or impact metrics that you might think about including?

Nana Sekyiamah: Absolutely. One of our broader goals is to increase our membership from particular constituencies, and so for me, an engagement metric will be recruiting into our membership x number of people from those priority constituencies.

Alan Jenkins: Yeah. That’s great. Thank you. A number of folks mentioned on the front end wanting to grow constituencies or to reach more youth or the like, and I think, Nana, you give us a great example: the more specific we can be about who we’re prioritizing, the more easily it can be measured. That doesn’t mean we’re ignoring everyone else. If your membership grows exponentially, but not primarily from those target groups, that doesn’t mean you haven’t accomplished something amazing and worth measuring and reporting. But it doesn’t speak to that particular metric that you identified with that level of specificity. So from a learning standpoint, you can then reassess: wow, we added a bunch of members, but we actually failed to increase our membership from these particular groups; what do we need to do differently to achieve that? So thank you very much. That’s a great example. Let’s hear from someone else in a country other than the United States.

Mallika Dutt: Alan, while we’re waiting for that, Alyssa Pinkerton has a question asking for more examples of activity metrics; she’s not particularly clear on what those might be.

Alan Jenkins: Got it. So activity metrics tend to be closest to what you, as a staff, are agreeing to, and telling your stakeholders you’re going to accomplish, in terms of narrative change. Let’s take Nana’s example: over two years, we’re going to have three festivals, in which we’ll do three sessions apiece convening groups from this community. Those numbers are the activity metrics. So then you get to measure: did you actually do it, did you have the three festivals that you said you were going to have? That’s one example. Another example of activity metrics would be: we’re going to give 10 interviews with journalists who reach the audience that we want to reach. So you’re able to determine whether we as an organization did those 10 interviews, or did 20, or, if we only did three, what the reason was. What can we learn from our inability to reach them, or from the fact that only regional news outlets wanted to speak with us when we were hoping to speak with national news outlets? That also is important.
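Activity metrics ultimately boil down to a planned-versus-actual comparison, which can be captured in something as simple as the sketch below; the targets and counts are hypothetical, echoing the examples above.

```python
# Hypothetical activity metrics: what we committed to vs. what we actually did.
planned = {"festivals": 3, "journalist interviews": 10, "press conferences": 10}
actual  = {"festivals": 3, "journalist interviews": 3, "press conferences": 10}

for activity, target in planned.items():
    done = actual.get(activity, 0)
    status = "met" if done >= target else f"short by {target - done}: why?"
    print(f"{activity}: {done}/{target} ({status})")
```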

Hopefully that helps. Okay, alright, well, I’m going to lift my geographic restrictions. Let’s hear from anyone who can share a metric with us.

Sharon Harrison: Hi. First off, I am HR, so work with me on this; communications is slightly foreign to me. I’m going to give you what I know for a fact my staff does. We have a global digital engagement team, and they look through our social media. Their engagement work is to investigate, basically, what is trending more when we send something out; if any material is being put together, we use it in that capacity to find out which one is actually reaching better. We have different audiences, so we do different things. We also do engagement at the partner level, to find out which of our materials are being used more and downloaded more by our partners. So we do that as well. Then our funders, and this is what causes, I think, the most controversy: the quantity and the quality of our data is very different for the program’s team as opposed to how the funders want it represented. That is always something that we struggle with, because we can show you what we do, and we can actually see the effects that it has, but a funder wants you to put it into numbers, which are sometimes really hard to, let’s say, garner. And that, I think, is one of the biggest issues that we have. But just judging by the fact that they’re able to do all of this data collection, to find out what is really our niche and where we need to go from there, it seems to me that we’ve got this in a really good place, from an outsider perspective. So that’s what I can say.

Alan Jenkins: Yeah. Well, thank you, Sharon, for that, and also for your background; your Zoom background makes me feel like I’m in the tropics. Can I ask you just one follow-up question? What you described, both reach and engagement metrics online, sound like they’re numbers. So what is it that the funders are looking for that it’s hard for you all to provide?

Sharon Harrison: Well, let’s say we’re doing something very specific. We were told to garner information about how our VAE guide, our video as evidence program, is doing. In order to get more funding for the video as evidence program, we had to prove that our manuals were being used and downloaded in whatever languages we offer them in. So we can get some information, but keep in mind some of our partners aren’t going to tell us that they’re sharing the materials. So it’s not like, okay, 16 partners actually pulled it down, but 45 of them are using it. The funders want that information to a tee, and sometimes it’s very hard, because some of these grassroots organizations are not on their computers noting: oh, by the way, I downloaded a manual, I paid 25 dollars for it or whatever, and shared it with 15 other people.

Alan Jenkins: Yeah. Thank you. That’s really helpful. So here’s a thought, because I bet this is something that a lot of folks are struggling with. One thing that sometimes works, and this is especially true when you’re providing technical support or free or low-cost resources, is to essentially have a trust, a covenant if you will, an agreement where people say: okay, if I use your materials, I agree that when you call me every six months, someone on our staff will make the time to explain to you how we’ve used them. (Optimally, it would be online, but to your point, not everybody has the ability to do that.) And then you’ve got, again, that nagger-in-chief: you have to have somebody who reaches out to them and says, hey, we’re going to call you next week and ask you about this, so please do what you need to do to get the materials together.

Not everyone will do it, but in my experience, lots and lots of folks will, because they want to contribute back, in a way that is manageable for them and isn’t taking them by surprise. So that often is a way you can get back that data. And sometimes you then get the stories you wouldn’t have gotten, where people say: not only did we use your materials for this training, but one of our members came back and said that they achieved this huge success using your materials. That’s something that you need to capture as well. And we used to often say to people: okay, hey, could we call you back to talk a little bit more about this when we go to write our proposal or our grant report, kind of thing. So, it’s work, I don’t mean to suggest that it doesn’t require effort, but those are approaches that are achievable. So thank you.

Sharon Harrison: I appreciate that. Thank you.

Alan Jenkins: Great. Thank you. We’re out of time.

Mallika Dutt: Alan, I’m going to have you answer one last question, and then we’ll close it down. Oneye has asked: what is the best way to measure reach and impact when we use broadcast media?

Alan Jenkins: Yeah, that’s great. In terms of reach, data on broadcast media reach is available. Sometimes you have to dig, but, for instance, I showed that slide about India; that was newspaper reach. Those numbers get reported in the entertainment industry, and sometimes you can find them for free; sometimes there is a usually reasonable cost to see what viewership was, and also what online reach was, because any broadcast report typically also has an online component, and that information is available. You will not necessarily be able to say that my interview on this news outlet reached this number of people, but you can say: I did an interview in primetime, or in the lunchtime hour, with this news outlet, and here is what this news outlet’s numbers were, their reach and ratings with different demographics. So that part is a little easier. It takes some digging. Sometimes they will make that available to advertisers, because if you’re placing an ad, you want to know who you’re reaching. So that part can be done.
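A back-of-envelope sketch of that reach arithmetic, combining a time slot’s published ratings with a demographic share; every number here is a hypothetical stand-in for what a ratings report would provide.

```python
# Hypothetical figures from a ratings report for the interview's time slot.
outlet_viewers = 1_200_000    # average viewers in that slot
target_demo_share = 0.30      # share of those viewers in our target demographic
online_clip_views = 45_000    # views of the segment posted online afterwards

estimated_total_reach = outlet_viewers + online_clip_views
estimated_target_reach = estimated_total_reach * target_demo_share

print(f"estimated total reach: {estimated_total_reach:,}")
print(f"estimated reach in target demographic: {estimated_target_reach:,.0f}")
```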

The impact part is very difficult, for all the reasons we described, but part of it is looking before, during, and after a media interview: was there more attention towards us, did more people like us on Facebook or sign up on our webpage or add themselves to our database, did I get additional calls from reporters or from advocates, did that legislator who would not meet with me for years suddenly become willing to take my call? You may not be able to say with certainty that that was because of your interview, but those of us who’ve done mainstream media interviews often feel the difference afterwards. And so the challenge from an evaluation standpoint is to actually capture that difference in specifics. Like: oh, you want to meet with me now? Okay, you didn’t last week. I assume you’re not telling them that, but you’re writing it down, right: I assume that that’s because of this media interview. So those tend to be the ways to measure mainstream media or broadcast media impact. Alright, I’m going to stop there. Thank you all so much. And Mallika, I’ll turn it back to you.

Mallika Dutt: I’m going to read you a message from Sharon that says: thank you for an amazing presentation, Alan; you are very engaging, wish my professors were more like you. I think that’s a great way for us to close out this great presentation on evaluating narrative strategy. So grateful to you for joining us again.

This series of Leadership Moves is supported by the BUILD Program of the Ford Foundation. Stay connected at Mallikadutt.com.


“Inter-Connected Theme” composed by Devadas, © Mallika Dutt, LLC 2021.

Production team: Mallika Dutt, Devadas Labrecque, Ambika Pressman.