The 2011 mHealth Summit, held last week at the National Harbor just outside Washington, DC, generated a great deal of buzz and excitement about the potential of mobile health, both in the US and around the world. But therein lies the problem: everyone is discussing the potential of mHealth, yet few are discussing its impact – largely because few programs have systematically measured it. In response, CHMI organized a session on Thursday, December 8, to address some of the questions surrounding mHealth evaluation: What has already been done in mHealth evaluation? What is still lacking? What challenges are specific to mHealth when it comes to measuring impact?
Three presentations kicked off the conversation:
- Shelly Batra from Operation ASHA, which uses netbooks, fingerprint technology, and an innovative delivery model to fight TB in Indian slums, presented several concrete impacts the program has achieved, including doubling TB case detection rates in South Delhi, decreasing default rates on medication adherence from around 60% to 3%, and reducing the cost of TB care.
- Steve Ollis from D-tree International, a program that uses clinical decision support software on phones to improve adherence to established medical protocols, spoke about D-tree's 2006 evaluation of its work in Tanzania (a new evaluation will be published soon), which showed that the software increased adherence to protocols from 61% to almost 85% without increasing the time it takes to see a patient.
- Maria Belenky from Results for Development spoke about CHMI’s Reported Results Initiative, which aims to create a new framework that will allow programs to report impact in 10 key areas, whether self-tracked or evaluated by a third-party organization. Maria noted that evidence of user satisfaction and improvements in efficiency are the two most commonly reported impacts for technology-enabled programs (see graph below).
Types of results being reported by technology-enabled programs under the Reported Results Initiative.
The discussion surfaced a number of challenges specific to mHealth evaluation. For one, technology is almost always part of a larger strategy, and evaluations must take this into account. Operation ASHA’s use of fingerprint scanners is just one aspect of its innovative business model – so when carrying out evaluations, the organization makes an extra effort to isolate the mHealth effect by comparing clinics that are identical except for their use of technology.
Another challenge for mHealth evaluation is the field's rapid rate of change. Formal randomized controlled trials (RCTs) can take years to complete – by that time, the technology used in the original study may be obsolete or may have changed to such an extent that it merits another study. Several participants therefore agreed on the need for alternative forms of evaluation that can accurately measure impact in a shorter timeframe. Others have begun to recognize this need as well: earlier this year, the Robert Wood Johnson Foundation, the McKesson Foundation, and several others put out a call for white papers on such alternative forms of evaluation.
Both Operation ASHA and D-tree have taken steps to track their impact, and hopefully the Reported Results Initiative will help them and other organizations track and share comparable data. But challenges remain, and workable solutions will require a larger conversation among the mHealth community, the impact evaluation community, and funders.
We strongly encourage our readers to share their thoughts on mHealth evaluation in the comments section below.