Historically, philanthropy was often steered by intuition. Funders relied on personal values, trusted relationships, and lived experience to decide which grant seekers to support, and success was judged anecdotally, with little formal measurement.
Of course, passion and empathy will always be crucial in philanthropy, but decision-making is becoming more grounded in data. With big data now on the scene, funders no longer have to rely solely on experience and intuition when making complex decisions; evidence can complement both.
Measurement benefits both funders and nonprofits: funders gain confidence that their donations will make a difference, while grant recipients know that demonstrating impact in concrete terms helps them secure future funding.
In this article, we review how the insights that data provides can lead to more effective giving, some frameworks for measuring impact, and how grant management tools can make the process easier.
In recent years, philanthropy has edged closer to impact investing and Environmental, Social, and Governance (ESG) principles.
Funders are demonstrating outcomes and impact in more concrete terms than ever. In fact, about 90% of charities with turnovers greater than £500,000 measure impact due to increasing pressure from private donors, grantmaking trusts, and commissioners, and 80% of charities agree that impact measurement improves efficiency and service.
Most donors don’t have the time to scrutinise every organisation they could support with as much rigour as they would like. Evaluating nonprofits requires reliable, comparable information, yet clear data on effectiveness, impact, or costs isn’t always easy to find. Decisions are often based on partial information or whatever details happen to be visible in the moment.
As discussed by the Stanford Social Innovation Review (SSIR), this is ultimately a problem of choice architecture – the way options are presented to decision makers and how that framing affects decisions. Choice architecture is often weak in philanthropic contexts, so donors struggle to assess possibilities effectively.
SSIR discusses the results of a study that explored how changes in choice architecture affect donation decisions. The study was conducted by Impact Genome and ideas42, with support from the Fidelity Charitable Trustees’ Initiative.
Several years ago, the Impact Genome Fund developed a standardised system of impact metrics covering thousands of nonprofits and compiled them into a comprehensive registry. The dataset includes five indicators designed to help donors make more informed choices:
The central question of the study was this: Does providing donors with consistent, comparable impact data meaningfully improve decision-making?
Participants were asked to choose between charities to donate to and decide how much to donate. They were provided with the five indicators above, while the control group had no access to any data.
Participants who viewed the data chose charities with stronger performance, and 74% reported that comparing charities in this way was helpful.
The presence of metrics didn’t significantly affect how much was donated. However, participants who were confident in their decisions donated much more – and so did those who said it was easy to identify which charities had the best combination of qualities. So, indirectly, clarity about performance may increase donation sizes.
How can organisations start measuring impact more accurately? Here are several widely used frameworks.
Organisations can adapt the Business for Societal Impact (B4SI) framework, formerly the London Benchmarking Group (LBG) standard. It categorises inputs, outputs, and impacts, shifting the focus from spending to the benefits achieved.
Inputs are defined based on four questions:
Outputs are categorised as follows:
Defining community impacts involves specifying the type of impact – for example, on behaviour, attitude, skills, or quality of life – as well as its depth: did the activity lead to an improvement, or to a transformation? In a corporate context, the framework also addresses business impacts, such as whether processes, services, or job-related skills have improved.
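To make this concrete, here is a minimal sketch of how an activity might be recorded along these lines. The field names, figures, and depth categories are illustrative assumptions, not the official B4SI taxonomy.

```python
from dataclasses import dataclass, field
from enum import Enum

class ImpactDepth(Enum):
    """Illustrative depth-of-impact scale: did the activity improve or transform?"""
    IMPROVEMENT = "improvement"
    TRANSFORMATION = "transformation"

@dataclass
class ActivityRecord:
    """One community activity recorded along B4SI-style lines (field names are illustrative)."""
    name: str
    inputs: dict               # e.g. {"cash_gbp": 25_000, "staff_hours": 120}
    outputs: dict              # e.g. {"participants_reached": 340}
    impact_type: str           # e.g. "skills", "behaviour", "quality of life"
    impact_depth: ImpactDepth
    business_impacts: list = field(default_factory=list)  # e.g. ["improved job-related skills"]

# A hypothetical example record:
record = ActivityRecord(
    name="Digital skills workshops",
    inputs={"cash_gbp": 25_000, "staff_hours": 120},
    outputs={"participants_reached": 340},
    impact_type="skills",
    impact_depth=ImpactDepth.IMPROVEMENT,
)
```

However it is stored, the point is the same: keep the benefit achieved, not just the spend, at the centre of each record.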
The Theory of Change framework maps the necessary conditions and assumptions for achieving long-term social impact, often at an organisational or sectoral level.
It starts with an end-goal, such as eradicating youth unemployment in a region, and reasons backwards through preconditions (like policy changes that improve access to employment), outcomes (e.g., sustained employment), interventions (e.g., skill development programmes), and contextual factors like economic barriers. It articulates the "why" and causal logic behind how change happens.
A logic model is a programme-specific tool, typically visualised as a table or diagram. It illustrates a linear ‘if-then’ progression: inputs (resources like staff and funding) lead to activities (e.g., workshops), producing outputs (e.g., 500 participants trained), short-term outcomes (e.g., skill gains), and long-term impact (e.g., higher employment rates).
Both approaches aid impact measurement by defining trackable indicators. However, a Theory of Change is broader, guiding overall strategy and external alignment, while a logic model is narrower, operational, and focused on execution. Nonprofits benefit from developing a Theory of Change first to set the vision, then logic models for each programme to operationalise it.
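As an illustration, a logic model’s ‘if-then’ chain can be written down as a simple ordered structure. The sketch below uses a hypothetical employability programme; every stage and value is an invented example.

```python
# A minimal, illustrative logic model for a hypothetical employability programme.
logic_model = {
    "inputs":     ["two trainers", "£40,000 grant"],                    # resources
    "activities": ["weekly job-skills workshops"],                      # what is delivered
    "outputs":    ["500 participants trained"],                         # direct, countable results
    "outcomes":   ["participants report improved interview skills"],    # short-term changes
    "impact":     ["higher employment rates in the region"],            # long-term goal
}

# Read the chain as a linear 'if-then' progression:
stages = list(logic_model)
for earlier, later in zip(stages, stages[1:]):
    print(f"If the {earlier} are in place, then the {later} should follow.")
```

Writing the chain out this way also makes each stage an obvious place to attach a trackable indicator.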
Maintaining a balance between data and human insight is important because of the nuances of human lives and social change. Quantitative metrics might show what is happening, but frontline intuition and community voices often explain why. Data should therefore inform rather than dictate strategy.
A data-driven impact strategy starts with understanding what information you already have and how reliably it’s captured. In many organisations, relevant data is scattered across different teams and multiple sources like programme reports, financial reports, grant applications, surveys, spreadsheets, and email threads. Therefore, the first task is to dissolve these silos and create a shared view of the organisation’s evidence base.
Data mapping is the process of identifying every source of information that contributes to your impact narrative and understanding how it’s collected and stored.
Typical sources include:
Mapping these sources provides a complete picture of the information used to prove effectiveness. It also reveals inconsistencies, duplication, or missing data fields.
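A lightweight way to start is a simple source inventory, kept in a spreadsheet or in code. The sketch below is purely illustrative; the source names, owners, and fields are hypothetical.

```python
# An illustrative inventory of impact-evidence sources; names, owners, and fields are hypothetical.
data_sources = [
    {"source": "beneficiary surveys", "owner": "programmes team", "format": "CSV export",
     "fields": ["respondent_id", "outcome_score", "date"]},
    {"source": "grant applications",  "owner": "grants team",     "format": "grant management system",
     "fields": ["applicant_id", "requested_amount", "target_outcomes"]},
    {"source": "financial reports",   "owner": "finance",         "format": "spreadsheet",
     "fields": ["grant_reference", "spend_to_date", "budget"]},
]

# One gap a mapping exercise quickly reveals: sources with no shared identifier cannot be linked.
for src in data_sources:
    if not any(f.endswith("_id") for f in src["fields"]):
        print(f"No linking identifier found in: {src['source']}")
```

Even a small inventory like this makes inconsistencies and missing fields visible long before any analysis begins.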
Once data sources are mapped, the next priority is ensuring data quality. Organisations should assess accuracy, completeness, and consistency; for example, checking whether indicators are measured the same way across programmes and checking the consistency of formatting and naming conventions.
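Some of these checks can be automated. The sketch below assumes outcome records loaded into pandas; the programme names, indicator names, and values are hypothetical.

```python
import pandas as pd

# Hypothetical outcome records from two programmes.
df = pd.DataFrame({
    "programme": ["Youth Skills", "Youth Skills", "Adult Literacy", "Adult Literacy"],
    "indicator": ["employment_rate", "employment_rate", "Employment Rate", None],
    "value":     [0.42, 0.45, 0.51, 0.48],
})

# Completeness: how many records have no indicator recorded at all?
print("Missing indicators:", df["indicator"].isna().sum())

# Consistency: is the same indicator named the same way across programmes?
raw = df["indicator"].dropna()
normalised = raw.str.strip().str.lower().str.replace(" ", "_")
if normalised.nunique() < raw.nunique():
    print("Inconsistent naming detected: the same indicator is spelled differently.")
```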
To move from scattered evidence to actionable insights, organisations need a single source of truth. A unified database brings all relevant impact, financial, and operational data into one place, making it possible to automate reporting, run analyses, and detect trends across funding programmes or cycles.
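As a minimal sketch of what ‘unified’ can mean in practice, the snippet below loads two hypothetical extracts into a single SQLite file and answers a cross-system question with one query. The tables, columns, and figures are assumptions for illustration only.

```python
import sqlite3
import pandas as pd

# Hypothetical extracts from two separate systems.
outcomes = pd.DataFrame({"grant_id": [1, 2], "outcome_score": [0.72, 0.58]})
finance  = pd.DataFrame({"grant_id": [1, 2], "total_spend":   [50_000, 80_000]})

# Consolidate both into one SQLite file, acting as the 'single source of truth'.
conn = sqlite3.connect("impact.db")
outcomes.to_sql("outcomes", conn, if_exists="replace", index=False)
finance.to_sql("finance", conn, if_exists="replace", index=False)

# One query now answers a question that previously spanned two systems:
# roughly how much is spent per unit of outcome on each grant?
query = """
    SELECT o.grant_id,
           f.total_spend * 1.0 / o.outcome_score AS spend_per_outcome_point
    FROM outcomes AS o
    JOIN finance  AS f ON f.grant_id = o.grant_id
"""
print(pd.read_sql_query(query, conn))
conn.close()
```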
Building the infrastructure is only half the battle. To ensure that insights are used consistently, a data-driven approach needs to be embedded in the organisation’s culture. Here are some tactics for doing so.
Programme officers and leadership need to be able to interpret data, question assumptions, and understand what the data is (and isn’t) telling them. Targeted training helps staff connect the figures to real-world outcomes and recognise how evidence can inform strategy.
Providing intuitive dashboards and accessible visual tools allows staff to explore impact data without needing technical expertise. When insights are available to everyone (and not just analysts), learning becomes a part of everyday decision-making.
Regular data review sessions help teams to look objectively at what’s working and what needs adjustment. Making this a routine reinforces the expectation that decisions will be guided by evidence rather than habit.
A cloud-based grant management system like Flexigrant helps organisations collect data and track progress throughout the entire grant lifecycle with minimal administrative burden. These centralised platforms bring together many data points, including EDI and financial data, to provide complete visibility.
Flexigrant can generate custom reports and real-time visualisations of key metrics, and users can monitor milestones and measure the success of each grant programme. Our Data Replication Service consolidates data into a single SQL database, simplifying analysis and improving accuracy.
Data-driven philanthropy is ultimately about discovering what works and doing more of it. As we’ve seen, research shows that having a consistent framework for understanding impact helps donors to make better decisions and helps nonprofits gain more funding. Common frameworks include Theories of Change, Logic Models, and the B4SI Framework.
To implement a data-driven approach, first map your existing data sources, then identify gaps where data points you need to measure are not being collected and decide how that information will be captured. Ensure data quality for accuracy, and unify all grant data in a single database.
An advanced grant management platform like Flexigrant helps you capture, centralise, and visualise data, and generate reports with ease. To learn more or request a demo, contact us today.