The growing use of controlled experiments to test whether programs for the poor are effective is an exciting trend in development economics research and in development practice. Much like a drug trial tries to answer the question of whether a drug is working by comparing the health outcomes of a randomly selected treatment group with those of a control group, a randomized controlled trial of a development program tries to see whether it improves some measurable outcome or outcomes for the program’s beneficiaries.
The main benefit of evaluating a development program with an experiment is that it can help ensure that we aren’t spending limited time and resources on programs that don’t help people. We could write a whole blog just updating readers on new and exciting experiments in the development field. One experiment that helped people think differently about development was done by Michael Kremer and colleagues in Kenya. They provided randomly selected schools with deworming medicine for their students and compared the attendance at these schools to attendance at schools that hadn’t gotten the medicine. The deworming medicine cut student absence by a quarter. Even students who hadn’t taken the medicine attended school more, because they were less likely to fall sick from worms they’d contracted from their friends.
While experiments like this one are key to helping us know what to do to help the poor, some scholars (such as Angus Deaton, Martin Ravallion, Dani Rodrik and others) have been skeptical of the widespread use of experiments in development research and practice.
They say that it will be hard to convince organizations to evaluate their programs, given the high chances that they will find out that their programs are not having a measurable impact on the lives of their beneficiaries. This, combined with a lack of funding for evaluation and a lack of consensus about its importance, means that even now, the fraction of development programs evaluated with an experimental design is small.
Even if organizations or governments can be convinced to evaluate their programs, the results may not be useful beyond the initial context. Experiments face a problem researchers discuss under the heading of “external validity” – the concern that what works in one context may not work in another. For example, just because scholarship competitions helped girls stay in secondary school in Kenya does not imply that these competitions will work in poor neighborhoods in the US.
Another problem is that of scale. Since running experiments can be expensive (you have to hire people to design the project, collect the data and analyze it), they are often used to evaluate small projects. However, the fact that a program works well on a small scale does not mean it will work on a large scale – even within the same country. An example of this might be a cash transfer program. We can imagine a situation in which handing out monthly cash benefits to 100 randomly selected people might improve their health and wellbeing. But trying to do the same thing on a large scale could prove a lot more difficult – for one thing, the person running the program now has a bigger incentive to steal the cash, since he has far more of it under his control.
One problem that isn’t mentioned much is that of organizational capacity and dedication. In order to evaluate a program, funders and implementers need to have the know-how and dedication to run a program in the first place. While this might seem like a strange claim, I’ve encountered many organizations that claim to be operating on behalf of poor people but that aren’t really doing much on the ground. This is especially true as you travel farther from the big cities in poor countries. After all, living in remote places in poor countries can mean a pretty difficult life, so often the people who can escape it do. This leaves few people to work with the very poor.
In these sorts of situations, experiments can’t help much. Any program that is experimented with will probably fail, not necessarily because it was a bad idea, but because organizations aren’t really able to implement the programs as intended. This problem seems even more pressing when we consider that the people most in need of help are often served by the least capable organizations, or none at all.
Put another way, it isn’t until organizational capacity in very poor and remote places improves that we’ll be able to engage organizations there in experimenting to find programs that can have positive impacts for their beneficiaries. This leads me to a question that I think more development researchers and practitioners should be pondering: How do we attract capable and dedicated organizations to work with the poorest people?