Summer in the Field: Three Effective Ways to Use Routinely Collected Data to Evaluate Health Programs — Insights from Rwanda’s Mass Drug Administration Program
During the summer of 2024, I travelled from Boston to my home country, Rwanda, to collect data for my doctoral dissertation as part of the 2024 Summer in the Field Fellowship Program. I partnered with the Rwanda Biomedical Center (RBC), an implementation agency of the Rwanda Ministry of Health, to evaluate the coverage of the mass drug administration (MDA) program for schistosomiasis and soil-transmitted helminthiasis, the two most common neglected tropical diseases (NTDs) in Rwanda.
MDA involves periodically distributing free medication to entire populations in highly endemic regions, irrespective of individual infection status. By treating everyone, the program interrupts transmission cycles, reducing the risk of infection and reinfection while diminishing the disease reservoir. MDA typically involves several steps: planning and coordination with health authorities; community mobilization and education to ensure high participation; distribution of medications by healthcare workers or trained volunteers to targeted populations; and monitoring and evaluation to assess coverage, effectiveness and adverse effects.
In Rwanda, the MDA program aims to combat NTDs, particularly schistosomiasis and soil-transmitted helminthiasis, which affect millions of people worldwide. Using routinely collected data offers a cost-effective way to assess such programs, but challenges such as data inconsistencies can undermine accurate evaluation. My research reveals three effective ways to use routinely collected data to evaluate health programs, drawing on insights from Rwanda’s MDA program: engaging directly with data managers and program staff, triangulating data from multiple sources, and conducting qualitative interviews for contextual understanding.
Engaging directly with data managers and program staff
Collaborating closely with those who handle the data daily is essential. Data managers and program staff hold deep insight into data collection processes, potential pitfalls and system limitations. By engaging with them, evaluators can better understand the data’s context and address inconsistencies.
In Rwanda’s MDA program, engaging with data managers revealed that discrepancies stemmed from system limitations in estimating target populations. For example, districts used different methodologies to estimate their target populations, which made MDA coverage data inconsistent from one district to the next.
Triangulating data from multiple sources
Relying on a single data source can lead to incomplete conclusions. Triangulating data by comparing information from multiple sources enhances reliability and validity. In the context of Rwanda’s MDA program, combining data from the Health Management Information System (HMIS) with program reports provided a more comprehensive picture.
For instance, while data from the HMIS showed certain coverage levels, program reports indicated different figures. This discrepancy prompted further investigation. Program staff began validating reports by directly contacting health facilities to clarify the data and ask specific questions. Through these calls, they discovered that some data entries in the HMIS were incomplete or delayed due to data entry errors. By reaching out to the facilities, staff could correct these errors and update the data accordingly.
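As a rough sketch of what this cross-checking can look like in practice, the snippet below compares treatment counts for the same districts from two sources and flags those whose figures diverge beyond a tolerance. All figures, column names, district labels and the 10 percent threshold are invented for illustration; they are not the RBC’s actual data or validation rules.

```python
import pandas as pd

# Hypothetical "people treated" counts for the same districts,
# drawn from two different sources.
hmis = pd.DataFrame({
    "district": ["District A", "District B", "District C"],
    "treated_hmis": [9500, 12000, 7800],
})
reports = pd.DataFrame({
    "district": ["District A", "District B", "District C"],
    "treated_report": [9600, 15300, 7750],
})

TOLERANCE = 0.10  # flag gaps larger than 10 percent of the HMIS figure

merged = hmis.merge(reports, on="district")
merged["rel_diff"] = (
    (merged["treated_report"] - merged["treated_hmis"]).abs()
    / merged["treated_hmis"]
)
flagged = merged[merged["rel_diff"] > TOLERANCE]

# Flagged districts are the ones worth a follow-up call to the
# health facility, as the program staff did during validation.
print(flagged)
```

A simple screen like this does not say which source is right; it only narrows down where a phone call or record review is worth the effort.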
Conducting qualitative interviews for contextual understanding
Numbers tell part of the story, but qualitative interviews add depth by revealing the experiences and perceptions of those involved. Speaking with healthcare providers, facility leaders, community health workers and beneficiaries uncovers factors affecting program outcomes that quantitative data alone cannot reveal.
In the case of Rwanda’s MDA program, interviews with healthcare providers revealed inconsistencies in how the target population was estimated. Because coverage is calculated as the number of people treated divided by the estimated target population, these inconsistencies skew the denominator, leading to reported coverage rates exceeding 100 percent in many districts. This anomaly was not evident from the quantitative data alone. Insights from qualitative interviews with program staff and data managers uncovered that strong political support for the program puts pressure on local health facilities to set low target population estimates so they can surpass their goals. This practice results in inflated coverage figures that do not accurately reflect the program’s reach.
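A toy calculation makes the mechanism concrete. All numbers below are invented; the point is only that lowering the denominator mechanically pushes reported coverage past 100 percent even when true coverage is lower.

```python
# Illustration of the denominator problem:
# coverage = people treated / estimated target population.
people_treated = 11000   # hypothetical count from treatment registers
true_target = 12500      # hypothetical actual eligible population
lowered_target = 9800    # hypothetical deliberately low estimate

true_coverage = people_treated / true_target * 100
reported_coverage = people_treated / lowered_target * 100

print(f"True coverage:     {true_coverage:.1f}%")      # 88.0%
print(f"Reported coverage: {reported_coverage:.1f}%")  # 112.2%
```

The same treatment count thus appears either well short of universal coverage or implausibly above it, depending entirely on the target estimate chosen.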
Why it matters
NTDs such as schistosomiasis and soil-transmitted helminthiasis cause chronic health problems and hinder economic development. Using routinely collected data effectively ensures that resources are used efficiently, programs reach those in need and health outcomes improve.
Data misinterpreted because of inaccuracies or missing context can have real consequences for MDA programs: a program may be falsely judged effective or ineffective, and interventions may fail to reach vulnerable populations.
Evaluating health programs using routinely collected data presents challenges, but these can be overcome by engaging with data managers, triangulating multiple data sources and conducting qualitative interviews. Together, these methods provide a comprehensive understanding of program performance, ensuring that health programs like Rwanda’s MDA program can be accurately assessed and improved, directly impacting lives by informing better health policies and practices.
Reflecting on this experience, I realized that evaluating a health program extends beyond numerical analysis: it is about understanding the ecosystem of people, processes and contexts that shape the data.
Learn more about the Summer in the Field Fellowship Program.