Building a social impact measurement framework: How to accurately assess your social impact programs

In this video, you'll learn how to build a social impact measurement framework and different ways to measure social impact.
 
We chat with Nicole McPhail and Emily Hazell from Darwin Pivot, and explore social impact indicators and how to use data to assess your social impact programs.
 
This is part three of our series on measuring social impact. 
 
What we discussed:

Karl Yeh (00:00):

My two special guests today are Nicole McPhail and Emily Hazell. They are the co-founders of Darwin Pivot.

This is part three of our discussion on social impact measurement, and today we're talking about framework and insights.

So let's get right into it.

How do you build a social impact measurement framework?

Nicole McPhail (00:48):

Emily and I are pretty big advocates of making sure that you're starting at the top and really understanding what you're trying to solve for with your program or strategy.

And when we say the top, we don't even mean the program level. We're talking about understanding what your strategy is doing in service to one of your biggest stakeholders, which would be the executives in your company.

Because if you can understand how you can solve for that, then you can think about metrics on a couple different levels.

One would be how you're supporting the business, and then the second would be how you're doing operationally. And we've talked about this before.

 

The third way is typically companies trying to figure out how they can measure the outputs and outcomes in the community, which is really challenging.

So having provided that context, and stepping back to your original question: you've got to figure out what you're solving for, you've got to figure out what your stakeholders care about, and then you can start to map out and align both the strategic measures of success and the programmatic measures of success, and how you can actually accomplish them.

For instance, if you're looking at employees: how much are they giving back, how often, where, why? And then identifying some of those opportunities and gaps there.

Emily Hazell (02:34):

And I think in terms of building, you can take a very hands-on, easy approach by actually starting to storyboard and write this down, asking yourself the question: what am I solving for?

And start at the top and then move down from there and start to compartmentalize and bucket some of these components of what you're trying to achieve.

And it can help you really organize your train of thought around it, especially when you're just starting off or taking over a legacy program.

Actually writing it down seems like an obvious step, but a lot of the time people aren't doing that, and going through that process can really help.

Nicole McPhail (03:14):

And to give an example of this, say your company is really focusing a lot more effort on DEI initiatives.

As part of your CSR strategy, you should certainly be thinking about how you can solve for that with diverse opportunities that support organizations in the community working on this.

So at a very transactional level, when you're reporting out to your executives, if they care about this you can talk about what percentage of, say, volunteering opportunities go to these types of causes, and by what groups. That's just one bucket.

And then you have another bucket, which is maybe around employee engagement.

 

And then you could look at participation rates and then qualitative data around sentiment and the pride that they have working for a company that has a program like this.

So just break it apart, and it will feel a lot simpler when you know exactly what you're solving for and think about the data you actually have that will tell that story.

And this is a big thing, don't make up data and don't be misleading about the stories that you're trying to show.
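
To make the bucketing idea concrete, here's a minimal sketch (in Python) of a measurement framework laid out as a simple data structure. The bucket names, indicators, and data sources are illustrative assumptions, not recommendations from the episode:

```python
# A measurement framework sketched as a data structure: each bucket maps
# a stakeholder question to the indicators and data sources behind it.
# All names below are hypothetical examples.
framework = {
    "business_alignment": {
        "question": "What share of volunteering opportunities support DEI-related causes?",
        "indicators": ["pct_volunteer_hours_to_dei_causes", "pct_grants_to_dei_orgs"],
        "source": "giving/volunteering platform exports",
    },
    "employee_engagement": {
        "question": "How engaged are employees with the program?",
        "indicators": ["participation_rate", "repeat_participation_rate"],
        "source": "platform participation logs",
    },
    "sentiment": {
        "question": "Do employees feel pride in the program?",
        "indicators": ["survey_pride_score", "qualitative_interview_themes"],
        "source": "pulse surveys and interviews",
    },
}

# Print the framework as a quick storyboard.
for bucket, spec in framework.items():
    print(f"{bucket}: {spec['question']}")
    print(f"  indicators: {', '.join(spec['indicators'])}")
    print(f"  source:     {spec['source']}")
```

Writing the framework down this way, even informally, makes the gaps obvious: any bucket without a real data source behind it is something to prioritize later.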

Emily Hazell (04:26):

Well, and again, by going through that process of actually bucketing and writing it out, it also allows you to see more clearly some of the gaps in your data.

And so you can just make note of that and it can be something that you prioritize afterward. But it helps you come up with a roadmap for where you are and where you want to be.

Nicole McPhail (04:48):

And I think that when you are doing this, you do need to keep in mind that there is a culture component. And I'm not talking about corporate culture, I'm talking about culture in its real sense.

The behaviors that you're seeing in one country might be drastically different in another country. So it has to also be localized.

You can't have a centralized dashboard that isn't factoring in the nuance from specific countries and cultures too, because it's just not going to be effective and you'll be really disappointed in your results.

Karl Yeh (05:30):

And so let's talk a little bit about:

What are the social impact indicators?

Nicole McPhail (05:36):

Well, this depends on how you're defining social impact at your specific company.

Social impact is a term that gets thrown around, just like CSR.

It could mean everything from all of the pillars that fall under CSR, even down to supply chain, or social impact could just be a granting program that you have.

To Emily's point, the indicators have to be broken down into buckets and the indicators vary based on the components of your strategy and what you're solving for.

But at a very high level, the indicators are basically, as it sounds, little indications of how you are doing against whatever goal you're trying to achieve.

And those should be tracked and measured consistently over time.

So while the metrics might change, the indicators are basically the informants of the health of whatever it is you're trying to do.

Emily Hazell (06:35):

Maybe just to build off of that, and it's something that we talked about in a previous episode: it's again going back to the basics.

And so make sure that the social impact indicators you're including, or striving to include, are reliable and valid.

Reliable in that, over time, they will consistently tell you how trends are changing. And valid in that they are actually measuring what you set out to measure.

And that takes flexing your critical thinking skills a little, making sure that the metrics you're actually looking at are measuring what you want them to measure.

And if they're not, then you'll have to rejig your approach a little bit and make sure that you are aligning with that overarching goal and that overarching framework.

 

Nicole McPhail (07:29):

And to give an example of validity and consistency: it doesn't have to be high-level metrics.

So it doesn't just have to be the giving and the volunteering.

A really great metric, if you have a goal around indoctrinating people into the social good component of your program, would be to consistently track how many people are engaging in some sort of social good action, whether it's giving, volunteering, or doing missions through Benevity.

When are they doing this?

Are they doing it in their first three months? And then tracking that over time to understand your engagement strategies for net new employees.

That's a really great metric that you could see.

And then look at it over time: okay, well, we tried something different, and these are the variances in the behaviors.

And then after a year you can actually really start to think about these insights in a more meaningful way.
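
As a concrete illustration of that indicator, here's a minimal sketch assuming you can export hire dates and first social-good action dates from your platform. The column names and data are hypothetical:

```python
import pandas as pd

# Hypothetical export: one row per employee, with their hire date and the
# date of their first social-good action (a gift, a volunteer shift, etc.).
df = pd.DataFrame({
    "employee_id": [1, 2, 3, 4, 5],
    "hire_date": pd.to_datetime(
        ["2023-01-09", "2023-01-16", "2023-02-01", "2023-02-13", "2023-04-03"]),
    "first_action_date": pd.to_datetime(
        ["2023-02-20", None, "2023-03-10", "2023-07-01", "2023-05-01"]),
})

# Indicator: share of new hires whose first social-good action happens
# within 90 days of starting. NaT (no action yet) compares as False.
df["engaged_in_90_days"] = (
    (df["first_action_date"] - df["hire_date"]).dt.days <= 90
)

# Track the indicator per hiring cohort (here, by quarter) over time.
by_cohort = df.groupby(df["hire_date"].dt.to_period("Q"))["engaged_in_90_days"].mean()
print(by_cohort)
```

Because the indicator is defined the same way for every cohort, you can compare quarters after you change your engagement strategy and see whether the behavior actually moved.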

Karl Yeh (08:31):

So what are some different ways to measure social impact?

Nicole McPhail (08:35):

Methodology, in the sense that we're talking about it, is just how we're collecting the information. If you have a technology that does a lot for you, it's not just the giving and the volunteering and the grants and the pro-social behaviors that can be tracked through it.

You can also use Google Analytics to understand how people are utilizing the technology, and understand their interests and so on from that angle.

And then collection also comes from surveys and interviews, and there are different ways that you can think about this.

But if you are planning on collecting data in a more qualitative way through surveys and interviews, do some research on that approach because even the framing of questions in a survey can impact how valid your results are in the first place.

If you have a bad survey, or you're not giving it to enough people, it's a waste of your time.

Emily Hazell (09:41):

You have to be intentional when you're designing the survey.

And again, if there are examples out there that you can use, use them, because the cadence and the question order, all of those things, can have an impact.

And so make sure that you have an understanding of that and don't just willy-nilly throw a survey out there because it can become misleading and then you end up making decisions based on data that's actually not really that accurate or not well designed.

Nicole McPhail (10:18):

No, and I absolutely hear what you're saying. I think in terms of the surveying, you also have to remember the timing of when you're giving the survey.

Qualitative data is so important to help dissect and overlay the quantitative data that you have. And Emily talked a lot about this in a different episode.

For instance, if a merger and acquisition is happening, or a mass layoff is happening, and you send out an employee giving, volunteering, or social impact survey, the emotions that people are experiencing might impact the answers that you're getting.

So surveys seem like no big deal, but they are really important.

You have to be strategic.

Do your research before you start sending those out to collect your data.

But I would say those are the two main ways, the technology and then the qualitative through surveys and interviews.

Karl Yeh (11:20):

You've built your program, you've built your frameworks. So how do you go about knowing your program is actually doing well, that it's working and achieving not just the program goals but, I guess, the business goals as well?

How do you know if your program is succeeding?

 

Nicole McPhail (11:33):

And I think we have to ensure that we're doing our research before even getting into designing a strategy; metrics come after you understand your priorities.

External research to understand benchmarks, to really know where you should be, is going to be a really important first step, so that you can set realistic goals and put a stake in the ground.

And to find this information, there are a bunch of ways. ACCP's Giving in Numbers does a benchmarking report every year, and they talk about the different engagement levels and donations, or donations as a percentage of pretax profit, per industry and per company size.

So you can use that as a way to benchmark yourself.

Don't limit yourself to that. I always say go aspirational when you're benchmarking. But that's pretty much it on the external side.

 

And then also, if you're part of a company that has a technology like Benevity or some of the others, you have a really great community there where you can share best practices and where you're putting that stake.

And in the absence of those, say you're a smaller company trying to figure this out: just set a goal and then work towards that goal.

And you'll see over time, as you're consistently tracking, whether or not that stake is too far out and you need to pull it back, and you can modify.

But it's really important to have that context.

Otherwise, you're sharing your dashboards and reports with people and they don't even know what they mean.

Like, "Okay, well, how are we tracking against? Where should we be? This is us." That's what I would say is just make sure you do your research and have sufficient bench benchmarks.

Emily Hazell (13:23):

If you don't measure it and you don't track it, then you have no way of knowing where you're at. How would you know if the program's improving if you're not actually tracking progress over time?

Again, you might start with a goal and then check in the next quarter, or in six months, or in a year, and realize, "Well, that goal was unrealistic." That's okay.

You can reframe it and work back to it.

But you have to start somewhere in order to get that initial measuring stick, so that you can go back, take those learnings, be open-minded, and keep the conversation going to make sure that you're building the most inspirational program that you can.

Nicole McPhail (14:08):

And I would argue, too, that sometimes we have the inclination to set a percentage increase over time, like every year we'll increase X by 2%.

Okay, well, that works well for budgets, if you're trying to pre-plan and get buy-in from investors.

But who's to say that you can't do more?

Once you start to understand, to Emily's point, how you're operating, you have a lot more information.

Like, we can increase this by 7%, because now we've looked at the data and understand these are the changes we need to make, and this is where we're going next.

But until you start tracking it, you don't have that information to understand cause and effect as much as you can at that point.

Cause and effect is hard because there's so many variables, but you catch my drift.

Karl Yeh (14:59):

Do you have anything else to add in terms of building frameworks or looking at the insights from your programs?

Nicole McPhail (15:09):

So many.

You're going to have to keep us under control here, Karl, and stop us when we're slipping.

But I think one big thing is that something you see at an aggregate level shouldn't be how you're making decisions.

Emily can probably talk more on this one, but you really need to know how to slice and dice in order to discover new insights that can inform you on how well you're doing and whether or not the assumptions you're making potentially at that aggregate level number are accurate.

Emily, do you want to jump on that? Do you have any thoughts?

Emily Hazell (15:51):

Yeah, I think you need both the aggregate numbers and the smaller, finer-resolution numbers; both of them work together.

And so you need the aggregate numbers, as you said, for putting a stake in the ground and understanding the broader perspective.

You can't just be looking at one individual behavior.

So you need those aggregate numbers at the company-wide or country-wide level, depending on how big a slice you can look at.

But you then have to layer finer-resolution data onto those.

And so, again, think of it as kind of zooming in and zooming out. You need to zoom in to actually understand why. Well, first of all, there are going to be differences and discrepancies across the whole company, depending on how many locations, offices, teams, etc. you have.

So with an aggregate number, you're making a lot of assumptions about all of your employees or metrics if you're looking at one number for the entire company.

 

But you use that at a high level and then you zoom in so that you can start to understand, okay, well what's going on in this team or this branch or this location?

And if, for example, there's an outlier in a particular location, then you can use those outliers to ask, well, what is going on there?

So you're using different levels of data to help you ask smarter questions to get more to the root cause so that you can actually then solve for what you're trying to solve for and make better decisions and have a more integrated program.

But you wouldn't know any of that unless you started diving deeper into the nuances of the data. You don't want the outliers to dictate a lot, but for me, when I'm looking at data, an outlier sparks a little bit of interest: why is that going on there?

So you use that to spark your interest and be curious.
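
Here's a minimal sketch of that zoom-out, zoom-in idea: compute the company-wide participation rate, then the same rate per location, and flag locations that sit far from the aggregate. The locations, rates, and outlier threshold are all made up for illustration:

```python
import pandas as pd

# Hypothetical participation data: one row per employee, with their office
# location and whether they took part in the program this quarter.
df = pd.DataFrame({
    "location": ["Calgary"] * 40 + ["Toronto"] * 40 + ["London"] * 20,
    "participated": [True] * 12 + [False] * 28    # Calgary: 30%
                  + [True] * 20 + [False] * 20    # Toronto: 50%
                  + [True] * 1  + [False] * 19,   # London:  5%
})

overall = df["participated"].mean()                           # zoomed out
by_location = df.groupby("location")["participated"].mean()   # zoomed in

# Flag locations that sit far from the company-wide rate. Outliers are
# prompts to ask "why is that going on there?", not decisions in themselves.
outliers = by_location[(by_location - overall).abs() > 0.20]

print(f"Company-wide participation: {overall:.0%}")
print(outliers)
```

With these numbers the aggregate is 33%, while the per-location view surfaces London at 5% as the outlier worth asking questions about.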

Nicole McPhail (17:45):

This is a really great point.

And to give one example, think about corporate matching programs. For those of you who aren't super familiar with these, there are typically caps on how much money an individual can give that the company will match.

The typical cap would be $1,000 to $5,000 per employee, where the company will match those donations up to that amount.

Some companies have massive caps, up to $100,000. So if you look at, say, a bunch of executives who are giving large sums of money to the universities they went to, or whatever organization, and then you look at the average donation amount at your company, it's going to skew it.

The company might have a lot of frontline employees whose salaries are far less and who want to give a little less money.

But if you look at it from an aggregate level and it says that your average donation is $500, you might use that to figure out how to promote your program, or to set expectations on what people are giving, when really the average employee is only giving, say, $100 a year.

So that's just one example of really looking at that data and zeroing in to understand what all of it means.

And similarly, I've seen companies accidentally make decisions by setting thresholds, so you can't make a donation unless you give at least, say, $50.

And they're looking at their average donation amount, thinking, well, a lot of people are giving this amount of money. But there are so many people who don't have the funds to give at the level where they put their thresholds, and that's limiting participation, and they never even realized it.

That's just one example of it in action.
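
To see how the skew Nicole describes plays out, here's a minimal sketch with made-up numbers: two large matched gifts pull the mean far above the median, and a quick check shows how many gifts a minimum-donation threshold would exclude:

```python
import statistics

# Made-up donation amounts: two large executive gifts alongside many
# smaller gifts from frontline employees.
donations = [25_000, 10_000] + [100] * 40 + [25] * 30

print(f"mean:   ${statistics.mean(donations):,.2f}")    # ~$552, skewed upward
print(f"median: ${statistics.median(donations):,.2f}")  # $100, the typical gift

# Before setting a minimum donation threshold, check how many existing
# gifts would fall below it; a high floor quietly excludes participants.
threshold = 50
share_below = sum(d < threshold for d in donations) / len(donations)
print(f"gifts below ${threshold}: {share_below:.0%}")   # ~42% would be blocked
```

Reporting the median, or the average excluding the largest matched gifts, gives a more honest picture of what the typical employee actually gives.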

Emily Hazell (19:45):

I love that example, and as you were talking, I was actually thinking: it's like comparing something I was donating to the annual donations that Oprah makes, or something.

Comparatively, you have to look at it proportionally and make sure that you're taking context into account: the team, the level you're at in the company, how long you've been at the company.

You can start to look at other types of demographics as well.

And so, again, going back to the earlier episode, you want to be making sure that you are comparing apples to apples not apples to oranges.

Because then you're just going to go down a completely wrong direction, to your point, Nicole, because you weren't looking at the data with that intention and you weren't being careful in interpreting it.

Karl Yeh (20:37):

And I know we can talk about this and you both can talk about this for hours on end, but if our audience wants to connect with you, what's the best place to reach you both and learn more about Darwin Pivot?

Nicole McPhail (20:49):

Check us out on our website, darwinpivot.com, or email us at info@darwininc.com.

Connect with Nicole McPhail

Connect with Emily Hazell