Revolutionize Philanthropy: Standardized Reporting's Game-Changing Impact
Watch the episode:
Read what we discussed:
Karl Yeh:
So, today my special guest is Heather King. She is the Vice President of Evidence and Implementation, as well as the Chief Ontologist for Impact Genome. Thank you, Heather, for joining us today.
Heather King:
Thank you. I'm happy to be here.
Karl Yeh:
So, today we're going to be talking about the article that you co-authored called The Promise of Impact Science, and we'll leave a link to that article in the description below.
So, the article mentioned hardening social science. I guess that means more tangible results for reporting.
So, how does this relate to, I guess, social impact measurement?
How does the "Promise of Impact Science" relate to social impact measurement?
Heather King:
So, impact science is really trying to get us all on the same page and use data to make decisions in the social impact space.
So right now, there's a lot of different ways that programs and other stakeholders are measuring impact, and that's really what this is about, right?
Impact Science is about the impact.
It's trying to get us toward results, and so we have to measure those results, right?
But programs are doing that in so many different ways. Their measures are often appropriate and make sense for that specific program, but they tend to be really idiosyncratic, which makes it hard to compare across programs and aggregate information. So everybody ends up doing their own thing in their own way.
That might work for individual programs, but if we're trying to look at this as a whole sector and move the needle on these really big problems, we have to have a way of communicating and getting on the same page about what impact is actually happening.
So, it's kind of like a world in which credit scores don't exist, right? If you're trying to get a loan and a credit score doesn't exist, what do you do?
You have to write the lender maybe a really persuasive letter.
That's really the only way you're going to be able to have them trust you and show like, okay, I can pay this loan off, right?
That's the world that nonprofits and funders are living in right now.
No standard measure of a nonprofit's effectiveness
There's no standard measure for whether a nonprofit is effective, how much it costs to get an outcome, what the quality of their data is for impact, and so that's really where they are.
It's really hard to look across different programs and say, here's where we are in terms of impact, and we're measuring it all in a way that is standardized and can be aggregated and compared.
So, they're trying to do this in a world where there is no standard, and they still have to function. They still have to write that persuasive letter to get the funding and show that they're getting results.
Karl Yeh:
I think for some nonprofits and causes, that's hundreds of letters being written.
Heather King:
So many.
Yeah. Yeah, I've seen numbers saying that on average nonprofits spend about $20 raising $100 to fund their programs.
I think in other industries it's $2 or $5; it's nowhere near $20. So yeah, they're spending a lot of time writing those letters.
Karl Yeh:
So, how does Impact Genome look to solve this?
How does Impact Genome look to solve this measurement challenge?
Heather King:
So, what we're doing is we're solving that standardization issue.
So, we're trying to modernize the social impact sector by creating the equivalent of a credit score for Impact.
Though it's not quite as simple as that; it's not a single score.
We talked about this a lot in the other podcasts that we did a few months ago where we're looking at standardized data points that are kind of like the pieces of the credit score, right?
So we're standardizing things like cost per outcome, we're standardizing how effective a program is, we're standardizing the data quality, and that's kind of like the pieces of a credit score, like how many hard inquiries have you had?
What's your ratio for credit card debt?
Are you late on your payments?
All of those pieces are kind of the equivalent of what we're doing at Impact Genome.
This is important, as I said before, because with standards you can compare programs. As a funder, you can understand what you should be spending to get the impact that you want, based on actual benchmarks from programs on the ground, and you can see which programs have low cost and high effectiveness.
What program strategies are they using? That's something else that we standardize at Impact Genome, and it all comes back to this question of: what should I expect to be paying as a funder to support the impact that I want to see?
So for programs, they can actually get a better idea of where they stand relative to their peers. I actually found out about Impact Genome, because I was an evaluator at a nonprofit.
I was asked to register our program, and because I was able to standardize our impact data through the registry at impactgenome.org, they were able to benchmark us and we were able to see where we stood in terms of evidence quality, cost per outcome, and effectiveness compared to our peers.
I was completely blown away by this, because there was nowhere else to get that kind of information, and it was so helpful to go to our board members and our funders and be able to say, "This is how we compare to programs like us."
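To make that kind of peer benchmarking a little more concrete, here is a minimal sketch in Python. The metric names, the peer figures, and the median-based comparison are illustrative assumptions for this transcript, not Impact Genome's actual data model or methodology.

```python
# Illustrative only: hypothetical programs and metrics, not Impact Genome data.
from dataclasses import dataclass

@dataclass
class ProgramReport:
    name: str
    outcomes_achieved: int     # verified outcomes produced in the period
    total_cost: float          # program spending over the same period
    participants_served: int   # people who went through the program

    @property
    def cost_per_outcome(self) -> float:
        return self.total_cost / self.outcomes_achieved

    @property
    def effectiveness(self) -> float:
        # share of participants who achieved the target outcome
        return self.outcomes_achieved / self.participants_served

def median(values):
    ordered = sorted(values)
    return ordered[len(ordered) // 2]

def benchmark(program: ProgramReport, peers: list) -> dict:
    """Compare one program's metrics to the median of its peer group."""
    return {
        "cost_per_outcome": round(program.cost_per_outcome, 2),
        "peer_median_cost_per_outcome": round(median(p.cost_per_outcome for p in peers), 2),
        "effectiveness": round(program.effectiveness, 2),
        "peer_median_effectiveness": round(median(p.effectiveness for p in peers), 2),
    }

# Hypothetical peer group registered against the same standardized outcome.
peers = [
    ProgramReport("Peer A", outcomes_achieved=400, total_cost=120_000, participants_served=500),
    ProgramReport("Peer B", outcomes_achieved=150, total_cost=60_000, participants_served=200),
    ProgramReport("Peer C", outcomes_achieved=900, total_cost=360_000, participants_served=1_000),
]
ours = ProgramReport("Our program", outcomes_achieved=240, total_cost=96_000, participants_served=300)

print(benchmark(ours, peers))
```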
Karl Yeh:
What are some of the challenges to standardized reporting and measurement?
Heather King:
So, standardization is not easy.
If you think about anything else that's standardized, even a credit score, I mean, that took a long time to develop and a lot of effort to get it right.
In my opinion, it's really just about putting that time and effort in to get feedback from the right stakeholders and to do the right research, so this is something we've been working on at Impact Genome for about eight years now.
Then the next challenge really is adoption.
So that's kind of what the article in Stanford was about: "Let's all get on the same page about this, let's all try to adopt this idea of standardization," and then being able to use that standardized data to make decisions.
That's what we're working toward right now.
So, we've begun to demonstrate the value of standardization in the sector with our clients. So for example, one of our funders was able to produce 14,000 more outcomes in just one year with the same amount of funding by using standardization to find more effective programs.
Karl Yeh:
In the article, it mentions the impact sciences. So, what exactly are they? Now, I know there are three, right?
There's data standardization; you talked about aggregation and synthesis methodology; and prediction, matching and benchmarking. Can you go into those three?
What are the Impact Sciences?
Heather King:
So it's good that we started out talking about standardization, because that's really the foundation of everything else.
I think what we're trying to do is create data about social impact that is standardized, so that you can use methodologies from all these other fields, like data science and econometrics, where they've been able to build out a knowledge base and a set of methods that help them predict, because that's ultimately where you're trying to get to, right?
So if you think about other sciences, you have DNA, you have the periodic table, you have species; those are the building blocks of the science.
So without them, it's really hard to predict anything. The point of science is not just to tell us where we've been or even where we are right now, but where we're going.
So in social impact, we want to be able to understand what social impact is happening right now, whether at a funder portfolio or sector level, but also which programs are going to be most likely to produce impact at what cost, and what are the essential elements of those programs so they can be integrated into other places and produce similar results.
So it's really hard to do that right now, because we don't have that standard definition of, I mean, other than what we're doing at Impact Genome, a standard definition of what an outcome is, what the strategies are, who's working on those outcomes, what is the context around that and what is the cost?
So once you have that, think about in data science, for example, you can put all kinds of data into a predictive model for the climate, and you can figure out, okay, if this variable changes, then this is going to happen, but if this one stays the same, then that's going to happen.
Think of the power of that in social impact, where the foundation is the standardized data. That's kind of where we need to start, and then we can borrow all these methodologies from other places.
Karl Yeh:
I think one of the things that I was thinking about in terms of challenges to this is how do you get all these, I guess, nonprofits to buy in?
Because I think that's a hard thing.
There's a lot of nonprofits around the world, and I'm sure they are already thinking about ... maybe they're thinking about standardization, but a lot of them are just like we mentioned earlier, just writing a whole bunch of grants trying to get funding for their programs.
Heather King:
Yeah. Yeah, I mean, we want this to really make it easier for the funders, but for the nonprofits too.
So, think about the common application for colleges, right?
College applications are basically asking the same kinds of things, and so they standardized that a couple decades back so that you filled out one application and then all these different colleges would accept it, because again, they're all trying to get the same information from you.
So, that's what our Impact Genome Registry is trying to do.
So we're not asking nonprofits to change the way that they're measuring impact, or even necessarily do anything extra. What we want is to help them take all of that information and kind of translate it.
So, the registry is almost like this Rosetta Stone where you have a set of fields like TurboTax, you're filling out this registration form, you're putting in the information that you already have about your program, and then that is all translated into the standardized language, and then that can go into our benchmarks and that can go into our registry.
Then the idea is that funders can find them in the registry based on their standardized data.
- So, what outcome are they going for?
- Who are they working with?
- Where are they working?
- How much is it costing for them to produce an outcome?
And the funders can search for them. Then over time, this can cut out that whole grant writing process, because again, funders, they want to know those same things.
They might have different ways of asking the questions, but the information is roughly the same.
So, what we eventually want to do is create a marketplace where we can essentially sell outcome credits, and that is, again, only possible through the standardization, because you have to have a common definition of what that credit means.
But then in an ideal world, we would get to a place where nonprofits can put their credits on a marketplace, funders can go and find those credits and purchase them, and we cut out all of this grant writing and we can just focus more on doing the work and less on trying to raise money.
So we want to make it easier for nonprofits, and we want to make it easier for funders, so that they don't have to go through the whole process again. If no credit score existed and everybody were writing persuasive letters, in that world the lender would be going through and having to read all the letters.
That's where the funders are too: they're having to go through and read all these applications and try to figure out, well, how do I make this decision when all of the information is presented in a different way?
Karl Yeh:
So, what does the application of this look like?
So, what are I guess the steps that you're now trying to get nonprofits and funders involved?
I guess maybe the first step is to understand or raise awareness and then actually get involved and participate and implement?
Getting buy-in from non-profits and funders
Heather King:
Yeah, so if you're a nonprofit, you can go to impactgenome.org and register for free.
It's not a super long process.
Again, it's taking information you already have, and then you can get verified and you can get into our registry so any funder can find you.
We do this with our clients.
So we have about 3,000 programs that have been verified, and we have 2.2 million programs in the US and Canada that we got from tax data.
So nonprofits should come and claim their impact, tell us a little bit more about their program, and then they can be in the registry and discoverable by funders.
Then funders work with us to take all of that information in their portfolio and make decisions about it.
So, I can give you an example from one of our clients.
So this is a big funder of financial health programs and financial literacy, and so they wanted to continue to do that work, but they didn't have a super clear idea of what that meant and how to know whether they're actually funding the outcomes they wanted to support.
So, we had their programs registered with Impact Genome.
We found that most of their portfolio actually was focused on financial literacy or programs that helped people learn about personal finance.
But what the funder really wanted to do was move the needle on financial stability, so helping people pay their bills on time. So, those are really different, right?
There's one piece of this, which is, I know about personal finance, I have financial literacy, I know how to build a budget, I know how to work with my checking account, things like that.
I know how to balance my checkbook, if people even do that anymore, but that's the idea, right?
But then you have financial stability, which is like, can I pay my bills on time? Those are really different things.
People talk about all of that under the umbrella of financial health, but this is an example of how, with impact science, you can get really precise about what outcome you're actually trying to target.
Then once you have that, you're able to use something like our registry to help find programs that actually do that.
So this funder was able to find programs that were focused on financial stability, and that was their first win, making sure that their portfolio was aligned to their goals. So, that's the power of standardization just off the bat.
But then the second win was making the most of their funding.
So because we could benchmark the cost that it takes to achieve a single outcome, we were able to help them find programs that could produce more outcomes for the same cost.
So, that resulted in an 11% increase in the number of people that they were able to help for the same amount of money.
Then third, we used impact science to predict which program strategies would be most effective for achieving that financial stability outcome.
So not only do we standardize outcomes at Impact Genome, but we also standardize the program strategies.
So we standardized findings from over 200 research studies, and we found that a personalized coaching model was a really promising strategy.
That came out in a lot of successful programs.
So we actually tested this in a real program on the ground, and we worked with this funder to create a couple cohorts in this nonprofit.
So we had one cohort that did the coaching, we had one that did the same program without coaching, and then we had another comparison group that was just their walk-in service.
Anybody could come in and get some financial help, whether that was somebody calling the credit card companies for them and trying to negotiate their debt down, or other support that wasn't coaching but was still kind of a personalized model.
So what we saw was when coaching was used, participants were twice as likely to get a financial stability outcome than if coaching wasn't used.
So, that's an example right there: if you're able to translate a program's set of strategies into a standardized language, then you can be really precise when you're testing whether or not that strategy works.
Then you can have information now to say, "Well, if you use the strategy, there are real results that can happen because of that."
Then it's a lot easier to implement a specific strategy in an existing program than it is to kind of throw the whole program out and implement a brand new program.
So, the idea there is that we can help make tweaks to programs on the ground using impact science, so that it's more realistic for them to modify what they're doing, with an evidence base behind it, to increase their results.
So we're actually going to publish that in Stanford in the coming months, so we can talk about that then.
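To make the "more outcomes for the same money" idea from this example concrete, here is a small back-of-the-envelope sketch. The budget, program names, and cost-per-outcome figures are invented for illustration; they are not the funder's or Impact Genome's actual numbers, and real benchmark-driven allocation is of course richer than this.

```python
# Illustrative only: hypothetical budget and cost-per-outcome benchmarks.
budget = 1_000_000.0  # fixed annual funding to allocate

# Benchmarked cost per outcome for programs targeting the same standardized
# outcome (made-up values).
cost_per_outcome = {
    "Program A": 520.0,
    "Program B": 410.0,
    "Program C": 350.0,
}

def outcomes_for(allocation: dict) -> float:
    """Total outcomes a funding allocation buys, given benchmarked costs."""
    return sum(amount / cost_per_outcome[name] for name, amount in allocation.items())

# Before: funding split evenly, without reference to cost per outcome.
even_split = {name: budget / 3 for name in cost_per_outcome}

# After: same budget, tilted toward the programs with lower cost per outcome.
tilted = {"Program A": 150_000.0, "Program B": 350_000.0, "Program C": 500_000.0}

before, after = outcomes_for(even_split), outcomes_for(tilted)
print(f"Even split: {before:,.0f} outcomes")
print(f"Benchmark-informed split: {after:,.0f} outcomes ({after / before - 1:.0%} more)")
```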
Karl Yeh:
You know, what's really interesting is when you talk about real time, because once a funder provides the dollar amount, unless you get regular reporting from your cause or nonprofit, and even then, you don't really see anything in real time, right?
So, that's really fascinating.
I could see that if funders were actually more confident that ... the money actually goes both farther and is more effective in the specific causes, based on that standardization, it would just make things a lot easier to decide who to give the money to. It would also give the top-performing nonprofits and causes more leverage and ability to do more, because they don't have to write a whole bunch of grant letters; they've got their scores. It could maybe even change the dynamic so that funders are actively looking at the ones with the highest scores and wanting to fund them, because they're doing the best work.
Moving toward standardized reporting
Heather King:
Yeah, exactly. I think there's a lot of power to it, and I also don't want it to seem like it's punishing the nonprofits. You know what I mean?
So, the other thing that this can do is it just creates a lot of transparency.
So for example, we have funders that are really interested in helping the nonprofits improve, and you can do a better job of that when you know what specifically needs to be improved, right?
So for example, we benchmark the quality of evaluation evidence. So there are some funders, including this one that I just gave the example of, that really wanted to fund nonprofits that had high quality evidence.
Not every funder wants to leave out nonprofits that don't have that. But once you know it's the evidence quality, and specifically that the evaluation design needs to be a pre-post design and not just a point in time, and that they need to survey their whole population and not just a subset, you know exactly what to improve. Those are things that we have also standardized, and they can increase the evidence quality score.
So, that's something really specific that funders might even be able to say, "Well, we want to support your evaluation practices.
We know precisely what needs to be improved, and now we can give you a grant to go and do that," and everybody can be on the same page about how to help that nonprofit improve.
So, that's the other part of this: everybody is kind of in their own place on this.
Everybody's doing their own thing, and that's part of what we're trying to solve.
But when you have that standardization, you can be really precise and specific, and funders can use that also to help bring everybody up and not just the ones that are high performing right now.
Karl Yeh:
Just one final question.
I was thinking about it too. How could, let's say ... I don't even know what to call this, nonprofits or causes that are just getting going, they're just starting up, how would they benefit from this?
Benefits to new non-profits
Heather King:
We've heard a lot from nonprofits that just going through the registration process can be really helpful, because you have to take what you know about your program and do this kind of translation.
Specifically for the program strategies, we've found that it's a really helpful conversation for a nonprofit to have internally. I've experienced this myself as an evaluator working in a nonprofit: the people on the ground running the program have one idea about what the program is, the people writing the grants might have a different idea of what the program is, and then the leadership might have yet another idea of what the program is actually doing.
So because we're providing this language, again, it's like dropdown menus almost, where we're giving you the pieces and then you're telling us, here's the pieces that we use and here's how important they are to our program.
That can help internally just align on what that program model is.
So, a lot of what I think nonprofits are probably doing when they're just getting started is asking, what is our program model?
How do we all get on the same page about this?
And then eventually, how do we scale it?
But you have to know what those building blocks are, you have to know what those puzzle pieces are, and so just going through the registration process can be really helpful for existing programs, but also emerging programs to have that conversation.
Then also when you have the standardization, as I mentioned, you can benchmark against other peers.
So, that can be really helpful too for an emerging nonprofit to understand, okay, well, we are 80% effective, but the benchmark is 90% effective, so that gives us a goalpost for where we can try to move toward based on what's happening in the sector.
Not just based on guesses or gut feeling; it provides some real data that they can use to make decisions.
Karl Yeh:
So Heather, we can talk about this for a while, but if funders and nonprofits and causes want to learn more or participate, what's the best place to learn more?
Heather King:
Yeah, so go to impactgenome.org.
Right on our front page, there's a button that says view the registry, and that's a place where anyone including funders can go and see who has been registered and verified with Impact Genome.
Then likewise, nonprofits can go to impactgenome.org and they can create an account and go through the registration process.
Again, we're not asking anything that they wouldn't already have. It's just about translating it into that common language, and then they can get verified and then they go onto the registry and funders can find them.