One of the most common questions CHMI receives is “how do I know which programs are actually ‘working’?” In other words, which programs are achieving the kind of health and financial protection results that matter to policymakers, donors, investors, and other program managers?
In response, CHMI started the Reported Results initiative, which captures self-reported statements of programmatic performance results. This was the first step in a broader plan to expand the understanding of the impact of profiled programs.
Based on the findings from the first phase of results collection, CHMI has launched a joint effort with the Impact Reporting and Investment Standards (IRIS) initiative of the Global Impact Investing Network (GIIN) to develop a set of standardized, quantifiable health performance measures. These metrics will be designed to help organizations, their investors, and other stakeholders assess and report on social performance.
The framework will provide a common language that allows comparison across a broad spectrum of organizations that have social impact as a primary driver. The University of Toronto is also playing a key role, guiding the research on other performance measurement frameworks and gathering feedback from our key stakeholder groups.
The effort convened a Working Group with expertise in health and metrics design to lead this work. To ensure a diversity of perspectives, the Working Group includes participants from a range of health programs, research institutions, funder groups, and standard-setting bodies. Members of the Working Group recently met in New York to prioritize the indicator categories and their associated metrics, and to further refine the definitions of core metric concepts.
In the lead-up to the New York meeting, members were divided into three tracks based on core issues identified through the CHMI Reported Results framework: pro-poor targeting, scale, and affordability; health outputs and health outcomes; and clinical quality, user satisfaction, and financial sustainability. Each track was tasked with identifying the core questions, issues, and metrics relevant to its topic, to be presented in New York.
The conversations kicked off with findings from the Working Group discussions, along with additional input from other technical health and metrics experts who had provided separate one-on-one feedback. The group then moved into a prioritization discussion to identify which metrics to recommend for inclusion, which to hold off on for now, and which were considered important but needed further refinement.
The metrics identified will be used in conjunction with the Reported Results framework to guide program implementers and funders toward practical, feasible approaches to performance measurement. The effort is expected to produce the final set of metrics for dissemination this summer, along with a paper outlining the methodology behind the framework and identifying areas that were not included but are still considered important for assessing impact.
Programs with performance measurement experience that are interested in participating in the piloting phase are encouraged to contact Christina Synowiec at firstname.lastname@example.org.