My colleague Seungman Cha has a paper out this week, which I co-authored with him and others. It’s a trial-based cost-benefit analysis (CBA) of a community-led total sanitation (CLTS) intervention in rural south-western Ethiopia. We estimated intervention delivery costs from financial records and recurrent costs from the trial’s surveys. All outcome data (health, time savings, cost of illness) come from the trial – the trial effects paper is accepted pending revisions at AJTMH, but the protocol is here. Avoided mortality comprised ~60% of benefits, and the base case benefit–cost ratio (BCR) was 3.7. In probabilistic sensitivity analysis, 95% of BCR estimates fell within the range 1.9–5.4.
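As a methods aside: a probabilistic sensitivity analysis of this kind typically re-draws every uncertain input from an assumed distribution and recomputes the BCR many times, then reports percentiles of the resulting distribution. Here is a minimal Monte Carlo sketch in Python – the distributions and numbers below are entirely hypothetical placeholders, not the paper’s actual inputs:

```python
import numpy as np

rng = np.random.default_rng(42)
n = 10_000

# Hypothetical per-household benefit streams (NOT the paper's inputs):
# avoided-mortality value, cost of illness averted, and time savings,
# each drawn independently from a lognormal distribution.
mortality_benefit = rng.lognormal(mean=np.log(60), sigma=0.3, size=n)
illness_benefit = rng.lognormal(mean=np.log(15), sigma=0.3, size=n)
time_benefit = rng.lognormal(mean=np.log(25), sigma=0.3, size=n)
benefits = mortality_benefit + illness_benefit + time_benefit

# Hypothetical per-household costs (delivery + recurrent), also uncertain.
costs = rng.lognormal(mean=np.log(27), sigma=0.2, size=n)

# One BCR per draw; summarise the distribution with percentiles.
bcr = benefits / costs
lo, mid, hi = np.percentile(bcr, [2.5, 50, 97.5])
print(f"median BCR {mid:.1f}, 95% interval {lo:.1f}-{hi:.1f}")
```

The point of the exercise is the interval, not the point estimate: a BCR whose 2.5th percentile stays above 1 (as in the paper’s 1.9–5.4 range) is robustly favourable across the assumed input uncertainty.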
nb. the paper was already under review at IJERPH when the recent controversy about prioritising open access waivers for high-income (!) countries came out. I won’t be submitting more papers to MDPI journals, or reviewing for them, until they report results of the apparently ongoing internal investigation and ensure nothing like that happens again.
Obviously I think you should read the paper. In case you need some persuading, here are three important things about it. The first is a broad conclusion about the intervention, and the other two are about (the lack of) ex post and/or trial-based economic evaluations in the sanitation sector.
1. Upgrading low-quality latrines
The story of this intervention in Ethiopia is about upgrading poor-quality latrines, rather than ending open defecation (OD). More detail is in the effects paper when it comes out. OD was already <5% at baseline, so this is not your typical setting for CLTS. About 75% of intervention group households used private latrines, almost all of which were unimproved; the other 25% used neighbours’ or communal latrines. What the intervention achieved (Figure 1 in the paper) was (i) ~100% coverage of private latrines of varying quality, and (ii) substantial upgrading from unimproved pits to improved (and “partially improved”) pits. Definitions are in the paper, but “partially improved” in this study essentially meets the JMP definition of improved. Achieving ~70% coverage of ~improved latrines, as opposed to the unimproved latrines typically achieved under CLTS, was probably a key factor in the observed effect on longitudinal prevalence of child diarrhoea. High baseline usage of latrines of one type or another is also why the value of time savings comprised <30% of total benefits, which is small compared to many such studies.
2. First fully trial-based economic evaluation of a sanitation intervention
Our paper reports a “single study” economic evaluation, meaning that all key* parameter values come from the specific setting rather than being assumptions from the literature. I know of only one other sanitation study that does this: a cost-effectiveness analysis based on a case-control study of a latrine intervention in Kabul, Afghanistan, in the late 1990s (Meddings et al., 2004). Our paper is therefore the only fully trial-based economic evaluation of a sanitation intervention, despite such study designs being fairly common in public health. Trial-based economic evaluations are valuable because they show the economic performance of an intervention under real conditions, with high internal validity. They are also not hard to bolt onto existing trials. In my opinion, many or most impact evaluations (RCTs or otherwise) should include an accompanying economic evaluation if they are to influence investment decisions. It is surprising that researchers do not do them and funders do not demand them, as others have argued recently (Whittington et al., 2020). It is not enough to know whether interventions are effective – we also need to know whether their benefits justify their costs. More importantly, we need to compare the relative economic performance of competing WASH intervention options.
3. Very few ex post economic evaluations of real interventions more broadly
Our Ethiopia study presents an ex post CBA of a specific sanitation intervention – that is, the intervention actually happened. It is quite surprising just how many studies in this literature concern hypothetical interventions. In addition to the Afghanistan study above, I know of only four other examples of ex post economic evaluations of sanitation interventions. Two studies in India combine primary cost data from the setting with health impact estimates from secondary sources (Hutton et al., 2020; Dickinson et al., 2014). The East Asian studies synthesised by Hutton et al. (2014) also combine primary cost data with secondary outcomes (and are immensely detailed in the country-level reports), though they focus primarily on technologies rather than interventions. Finally, a further Indian study by Spears (2013) combines secondary data on both costs and outcomes.
Hypothetical studies can be very informative, such as a recent one which explored how the extent of uptake (and other factors) influences the economic performance of CLTS (Radin et al., 2020). However, to make investment decisions about which sanitation interventions are most efficient, we need more studies that evaluate interventions which actually happened! Interestingly, the coverage increase for improved latrines achieved by the intervention in our Ethiopian study (~35%) was the same as in the “high-uptake” scenario of the Radin et al. (2020) hypothetical study, and our headline result is almost identical. However, note the discussion of definitions above – the intervention increased coverage of “JMP-improved” latrines by ~60%. That’s quite a lot of upgrading, and a fair amount of new construction as well.
In conclusion, read our paper and reflect on toilet upgrading in rural areas! But more importantly, if you’re currently running or planning an impact evaluation, strongly consider adding a cost-effectiveness or cost-benefit analysis to the protocol. The incremental effort of collecting good-quality cost data is very low compared to the overall research cost of your study. As is the incremental effort in carrying out a cost-effectiveness or cost-benefit analysis. Effectiveness estimates only tell us so much – economic evaluations help us make decisions about investing scarce resources. If the intervention “works”, one of the first things you’ll be asked is how much it costs…
* OK, the case fatality rates come from the Global Burden of Disease study, but very few WASH trials are powered to have mortality as an outcome. Likewise the estimate for value of a statistical life (VSL) is secondary – there are precious few VSL studies in LMICs, let alone undertaken as part of a trial such as this. My point is that the key sources of data for benefits are primary (health effect, value of time, and cost of illness).