Closing the gap between current practice and what the field needs
Impact valuation has moved from academic curiosity to boardroom tool. Hundreds of organisations across the globe now use value factors to translate the impacts of organisational activities on societal outcomes into monetary terms, informing investment committees, board strategy, and portfolio decisions. That adoption is welcome and long overdue. When impact numbers show up in board packs and investor memos, the underlying value factors need to hold up under scrutiny. Ensuring that impact valuation is done robustly matters more than moving fast.
Getting it right, however, does not mean converging on one universal set of value factors. Standardization of value factors matters: it lowers the barrier to entry and builds trust and comparability. But it must preserve the flexibility to adapt value factors, or choose different ones, when a particular business context requires it. More important still is being clear about what question each valuation answers, building the infrastructure that makes impact data actionable, and holding every framework to a set of non-negotiable requirements. This post sets out those requirements.
What value factors are supposed to do
Definition: A value factor is an expression of the relative importance, worth or usefulness of changes in natural, human, social, and economic or produced capitals to people.
Value factors translate societal outcomes into comparable units (expressed in monetary values) so decision-makers can weigh trade-offs. A health gain from cleaner air, a livelihood improvement from higher wages, and an ecosystem loss from deforestation are fundamentally different things. Value factors exist to make them commensurable, so that an investor or manager can compare one against another and make an informed choice.
Different decision-making contexts, however, ask fundamentally different questions. A damage-cost perspective asks what is the cost of the harm? A mitigation-cost perspective asks what would it cost to prevent it? A well-being perspective asks how does this affect people’s lives? A GDP perspective asks what is the macroeconomic consequence? Different valuation approaches answer different questions. The same impact driver, say a tonne of CO₂, carries a different value under each lens, and each value is legitimate for its intended purpose.
The field of impact valuation has made real progress, and several initiatives are working to standardize value factors, such as the Capitals Coalition. That work is welcome, provided the standards meet the needs of the people who actually use them. The requirements that follow are meant to apply to any framework producing or endorsing value factors, not to any single initiative.
The gap between current practice and what decision-makers need
Before laying out requirements, it is worth being honest about the patterns that currently limit the impact valuation field's credibility and usefulness, because naming the gaps clearly is the first step to closing them. Each of these gaps is also an opportunity.
The most fundamental issue is internal inconsistency. Many current frameworks combine GDP-based estimates, remediation costs, stated preferences, health impacts, and economic damage costs into a single set of value factors. When different factors use fundamentally different valuation logics, the results cannot be meaningfully compared or aggregated. A health-related factor expressed in QALYs and a climate factor expressed in GDP damage (the current method of the Social Cost of Carbon, or SCC) are measuring different things, and adding them together implies a false equivalence. This problem is compounded when frameworks also conflate impact assessment (identifying and quantifying what changed) with impact valuation (assigning a societal value to that change). These are distinct analytical steps that require different expertise and different data. When they are merged, the boundary between “what happened” and “what it is worth” becomes unclear, making results harder to audit and harder to improve.
A related challenge is that new methods sometimes enter standardization processes before they have been tested extensively in practice. Standardization should codify what works, not serve as a testing ground for novel approaches. When untested methods get embedded in standards, they create risk for every organization that adopts them. Some value factors that look clean on paper produce confusing or misleading results when applied across dozens of sectors and geographies. The organizations that have conducted hundreds of valuations worldwide have learned things that cannot be derived from theory alone, and that operational knowledge has not always found its way into current standards-development processes.
Beyond methodology, there are practical gaps that slow everything down. Practitioners today still lack access to a complete, publicly available, and internally consistent set of value factors they can actually use. Some partial sets exist, and some consistent approaches have been developed, but the field has not yet produced the open infrastructure that widespread adoption requires. And even where value factors are available, the tools to make them operational, including platforms, calculators, sector-specific applications, and decision-making templates, remain scarce.
Perhaps the most consequential gap, however, is conceptual. The field operates under an implicit assumption that once measurement is available, better decisions will follow. This is not how it works. Without clear decision-making use cases (deal screening, due diligence, trade-off analysis, engagement strategy, materiality assessment) and without management frameworks that embed impact valuation into actual workflows, even well-designed value factors will sit unused. The field is over-indexed on measuring for the sake of measuring, and under-indexed on the management systems and decision architectures that turn measurement into action. Closing this gap is where the real opportunity lies.
Ten requirements for value factors that work in practice
Building on our experience developing a consistent set of value factors as part of Valuing Impact's eQALY valuation method, and on our experience deploying impact valuation across thousands of organisations over the last decade, we have identified the requirements that would deliver the greatest benefit to the field of impact valuation and accounting.
The following requirements are interdependent: they reinforce each other, and weakness in one undermines the others. Consistency without comparability produces factors built the same way but expressing incommensurable values. Comprehensive coverage without consistency means cherry-picking convenient factors from incompatible sources. Multiple lenses without a management-first orientation leaves decision-makers without guidance on which lens to use. Configurability without transparency becomes a licence to shop for favourable numbers. And none of it matters if the factors are not publicly accessible, regularly updated, and validated against real-world experience. These are not a checklist where meeting seven out of ten is good enough; they form a system.
The requirements are organised around three questions that any credible set of value factors should be able to answer.
Value factors are necessary, but not sufficient
This post focuses on value factors, which are where impact accounting starts: they underpin every impact statement and shape the decisions that follow. Without credible value factors, there is nothing to operationalize. However, value factors alone do not deliver impact accounting. They need to sit within a broader ecosystem that includes at least four other elements.
First, an impact accounting framework that defines how impact accounts are prepared, what they contain, and how they relate to financial accounts. Initiatives like the IFVI/VTPC (International Foundation for Valuing Impacts / Valuation Technical & Practitioner Committee) and the Capitals Protocol IVSB (Impact Value Standard Board) both contribute here. Valuing Impact has also published its own Impact Statement Framework openly.
Second, impact assessment methods that quantify what actually changed before the valuation step, whether through LCA, activity-based approaches, direct measurement, or other techniques. The right method depends on the topic, and this is a field with established and evolving practice.
Third, tools and platforms that operationalise the full chain from data collection through assessment to valuation and reporting, making the process accessible beyond specialist consultants.
Fourth, decision-making integration, meaning the templates, workflows, and organisational processes that embed impact results into actual management decisions, from deal screening to board reporting. Without all four of these elements working alongside credible value factors, impact accounting remains incomplete.
What good looks like
The requirements above are not aspirational. Parts of the field are already meeting them. Several well-being-anchored methods, including WALY (developed by Bayer), Wellby (used in UK government appraisal), eQALY (developed by Valuing Impact), SVI, and QALY-based approaches endorsed by GIIN, are converging on consistent, comparable, well-being-relevant valuation. These approaches differ in detail but share a common architecture that satisfies the consistency and comparability requirements by design.
Accessible tools are starting to emerge as well: platforms that operationalize value factors for non-specialists and embed them directly into decision-making workflows. Some organizations are already running impact valuation through actual management processes, from deal screening through to portfolio reporting, demonstrating that the management-first orientation is achievable in practice, not just in theory.
These examples suggest that the ten requirements are within reach. The opportunity for the standards being developed now is to build on what is already working and accelerate it, rather than starting from scratch or settling for a lower bar. The practitioners and organizations leading this work have generated a body of evidence that standards-setters can draw on directly.
A constructive path forward
The ongoing efforts to develop technical standards for impact accounting, led by the Capitals Coalition and the independent Impact Value Standards Board that they have set up, including its collaboration with practitioner coalitions, represent a genuine opportunity. Standardization done well accelerates adoption, builds trust, and creates the common language that the field needs.
The ten requirements outlined in this post should serve as a benchmark for any organization claiming to produce or endorse value factors. They are grounded in over a decade of operational experience with impact valuation across more than a hundred organizations worldwide. And they are reinforced by a growing body of practice showing that consistent, comparable, well-being-anchored, management-oriented value factors are both possible and effective.
The impact valuation field is at an inflection point. The decisions made in the next few years about what goes into the standards will shape impact valuation and accounting for a generation. Those decisions should be guided by what actually works in practice. The bar should be high, because the stakes are.
Appendix - Full requirements description
ARE WE MEASURING IT RIGHT?
- Consistency: Every value factor within a framework should be constructed the same way: the same pathway logic, the same formula structure, the same methodological architecture. The valuation pathway, meaning the mechanism that converts an assessed impact into a monetary value, should be consistent across all factors and across all capitals (natural, human, social). Expect: A single, documented valuation mechanism applied to every value factor, with the same definition of what a valuation pathway is. If the formula or logic differs between impact categories, this should be explicitly flagged and justified, not buried in technical documentation.
- Comparability: Within a given set, all value factors should use the same valuation technique, so that results are not apples and oranges. If the lens is well-being impact, every factor should use well-being. If it is GDP contribution or damage cost, every factor should use that. Mixing valuation perspectives within a single set makes aggregation meaningless. Consistent construction (see Consistency) does not guarantee this; a single methodological architecture can accommodate different valuation perspectives. Comparability requires that a set commits to one. Both the IFVI/Capitals Coalition general methodology and the Capitals Protocol define impact as a change in well-being, and several approaches (WALY, Wellby, eQALY, SVI, QALY-based methods) have demonstrated that well-being-anchored valuation is both feasible and practical. Expect: A single, stated valuation technique applied across all factors within a set. Any two value factors from the same set should produce results that can be placed side by side and aggregated without caveats about different underlying perspectives, units, or scales.
- Comprehensiveness: Value factors should cover the full range of well-being dimensions across all capitals (natural, human, social), at a resolution useful for decision-making. Partial coverage creates blind spots and invites cherry-picking of whichever impacts tell the most convenient story. A comprehensive framework provides a systematic way to assess whether all material impact pathways have been considered, even where some carry more uncertainty than others. Expect: A complete map of impact pathways across all capitals, with value factors available for each material pathway. Where factors are missing, the gaps should be documented and flagged, not silently omitted.
- Separation of assessment and valuation: Impact assessment — how an activity leads to an outcome — and impact valuation — what that outcome is worth — should be clearly split as distinct analytical steps with separate assumptions and quality controls. Bundling both into one “value factor” obscures the logic at each stage and makes results harder to audit or improve. The assessment can use any established technique (LCA, input-output modelling, surveys, activity-based approaches); the valuation approach is independent from it and should be standardised separately. Expect: Clear documentation of where impact assessment ends and impact valuation begins. Value factors should address the valuation step specifically, with pathway modelling treated as a separate, upstream input.
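To make the separation concrete, here is a minimal Python sketch of the two-step logic. Every name and coefficient below (the `DALY_PER_TONNE` dose-response figure, the per-DALY value) is a hypothetical placeholder, not a figure from any published framework; the point is only the structure, where the assessment step produces an outcome in natural units and the valuation step does nothing but price that unit.

```python
from dataclasses import dataclass

@dataclass
class AssessedOutcome:
    """Result of the impact assessment step: what changed, in natural units."""
    pathway: str     # e.g. "pm25 -> respiratory_health"
    quantity: float  # e.g. DALYs, tonnes, hours
    unit: str

def assess_pm25_emissions(tonnes_pm25: float) -> AssessedOutcome:
    """Assessment step: converts an activity into an outcome.
    The dose-response coefficient is an illustrative placeholder."""
    DALY_PER_TONNE = 0.06  # placeholder, not a published coefficient
    return AssessedOutcome(
        pathway="pm25 -> respiratory_health",
        quantity=tonnes_pm25 * DALY_PER_TONNE,
        unit="DALY",
    )

def value_outcome(outcome: AssessedOutcome, value_factors: dict) -> float:
    """Valuation step: applies a value factor to an assessed outcome.
    It never re-models the pathway; it only prices the outcome's unit."""
    return outcome.quantity * value_factors[outcome.unit]

# Illustrative value factor: monetary value per DALY (placeholder figure).
factors = {"DALY": 100_000.0}

outcome = assess_pm25_emissions(tonnes_pm25=10.0)
monetised = value_outcome(outcome, factors)
```

Because the two steps are separate functions with separate inputs, either one can be audited, swapped, or improved without touching the other, which is exactly what bundling both into a single "value factor" prevents.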
WILL IT SERVE THE DECISIONS THAT MATTER?
- Multiple valuation lenses: No single lens captures the full picture. A tonne of CO₂ has a damage cost, a mitigation cost, a market price, a business value, and a well-being impact. Each answers a different question. Different sets of value factors, each internally comparable, can be used together to support more advanced analysis: combining damage-cost and mitigation-cost factors informs cost-benefit analysis, and using well-being impact alongside business risk enables double materiality assessment. A credible framework should be explicit about which lens it applies and ideally support multiple lenses in parallel, so decision-makers can match the perspective to their question. In practice, this means working with different sets of value factors for the same impact drivers. Expect: Frameworks should state which valuation technique each set of factors uses and offer guidance on which lens fits which decision-making context. Multiple parallel sets should be available for the same impact drivers, so they can be combined to serve more advanced analytical needs.
- Management-first orientation: Different valuation factors answer fundamentally different questions. The question being asked should be clarified first — whether deal screening, due diligence, portfolio construction, engagement, trade-off analysis, or materiality assessment — before the valuation technique is chosen. Each decision context requires a different perspective, and sometimes more than one in parallel. Value factors should be chosen to match the decision context, not the other way around. Expect: Every set of value factors should be accompanied by clear guidance on the decision-making contexts it supports and how results translate into management actions. Value factors without a defined use case are measurement for measurement’s sake.
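The idea of parallel, internally comparable sets matched to a decision context can be sketched in a few lines of Python. All figures below are illustrative placeholders, not published value factors, and the context-to-lens mapping is an assumed example of the guidance a framework would provide.

```python
# One set of value factors per valuation lens (USD per tonne CO2e).
# Figures are hypothetical placeholders for illustration only.
VALUE_FACTOR_SETS = {
    "damage_cost":     {"co2e_tonne": 200.0},  # what is the cost of the harm?
    "mitigation_cost": {"co2e_tonne": 100.0},  # what would it cost to prevent it?
    "market_price":    {"co2e_tonne": 80.0},   # what does the market charge today?
}

# Guidance mapping decision contexts to the lens (or lenses) they need.
LENS_FOR_CONTEXT = {
    "cost_benefit_analysis":   ["damage_cost", "mitigation_cost"],
    "carbon_pricing_exposure": ["market_price"],
}

def value_driver(driver: str, quantity: float, context: str) -> dict:
    """Value one impact driver under every lens the decision context calls for."""
    return {
        lens: quantity * VALUE_FACTOR_SETS[lens][driver]
        for lens in LENS_FOR_CONTEXT[context]
    }

# A 1,000-tonne footprint viewed through the lenses a cost-benefit analysis needs.
result = value_driver("co2e_tonne", 1_000.0, "cost_benefit_analysis")
```

The design choice worth noting is that the lens is selected from the decision context, never the other way around: the question comes first, and the value factors follow.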
CAN WE TRUST IT OVER TIME?
- Operational validation before standardisation: No value factor should enter a standard without evidence that it produces meaningful, usable results across real-world applications in diverse sectors, geographies, and organisation types. Theoretical soundness is necessary but not sufficient. A decade of operational experience with existing value factors already exists — the priority should be to build on what has been tested, not to invent new factors from scratch. The question is whether the factor survives contact with actual business and investment decisions. Expect: Published evidence of real-world application across multiple sectors and geographies, with documented cases of how the results informed actual decisions. Peer review alone does not constitute operational validation.
- Transparency and configurability: Every value factor should make its assumptions, data sources, and modelling choices fully explicit and accessible (for instance building on the Governance for Valuation document from the Capitals Coalition). Users should be able to adjust contested parameters (value of a statistical life, discount rates, dose-response functions) and recalculate factors for their context, rather than accepting a fixed number as settled truth. Static, opaque value factors are not fit for serious decision-making. Expect: Full documentation of all assumptions and data sources, plus a mechanism (model, tool, or platform) that allows users to modify key parameters and regenerate value factors accordingly. If a user cannot trace and adjust the number, the factor fails this requirement.
- Regular updating: Value factors should follow a defined update cycle, ideally annual, with clear documentation of what changed and why. The social cost of carbon, health value factors, and income-related factors all shift as science and economic conditions evolve. An outdated value factor is not a neutral error — it actively distorts decision-making. Expect: A published update schedule, a version history with change logs, and a clear process for incorporating new evidence (following again the Governance for Valuation). Factors without a defined update commitment should be treated as provisional.
- Public accessibility: Value factors should be publicly available as a common good. If the goal of standardisation is widespread adoption, locking value factors behind proprietary access, consulting engagements, or membership paywalls defeats the purpose. Public accessibility also enables independent scrutiny, which strengthens the credibility of the factors over time. The field cannot standardise what practitioners cannot access. Expect: The full set of value factors, including their underlying assumptions and calculation models, should be freely available for download and use. Restricted access should be the exception, not the default, and any restrictions should be justified and time-limited.
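As a closing illustration of the transparency and configurability requirement, here is a minimal Python sketch of a value factor whose contested parameters are explicit and adjustable. Every parameter name and default figure (the placeholder value of a statistical life, dose-response rate, and discount rate) is a hypothetical assumption, not a published number; the point is that a user can regenerate the factor from changed inputs rather than accept a fixed number as settled truth.

```python
from dataclasses import dataclass, replace

@dataclass(frozen=True)
class MortalityFactorParams:
    """Explicit, user-adjustable inputs. All defaults are placeholders."""
    value_of_statistical_life: float = 5_000_000.0  # USD, placeholder
    deaths_per_tonne: float = 1e-4                  # placeholder dose-response
    discount_rate: float = 0.03                     # annual, for delayed effects
    years_until_effect: float = 10.0

def mortality_value_factor(p: MortalityFactorParams) -> float:
    """Recompute the USD-per-tonne factor from explicit parameters,
    discounting the deferred health effect to present value."""
    present_vsl = p.value_of_statistical_life / (1 + p.discount_rate) ** p.years_until_effect
    return p.deaths_per_tonne * present_vsl

default_factor = mortality_value_factor(MortalityFactorParams())

# A user who contests the discount rate regenerates the factor for their
# context instead of accepting the default as settled truth.
adjusted_factor = mortality_value_factor(
    replace(MortalityFactorParams(), discount_rate=0.0))
```

A factor published only as a final number fails this test by construction; one published with its parameter model, as sketched here, lets any user trace and adjust it.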
