Author: Lycia Lima
A couple of years ago, we were approached to carry out an impact evaluation of an early childhood program aimed at improving child development. The program was publicly recognized as successful and had even served as the basis for scaling up a similar intervention at the national level. When we began working with the program team, however, we realized that the program managers’ perception of success rested mostly on anecdotal evidence rather than on robust evidence that the program was producing the desired changes.
To design an impact evaluation, the first step is to outline the program’s theory of change in order to map the links between program activities, outputs, and the desired short-, medium-, and long-term outcomes. Without a theory of change, the researchers carrying out an impact evaluation have no guidance on which outcomes and impacts to investigate, and are likely to design survey instruments that fail to capture all the relevant dimensions. If an impact evaluation is not grounded in a robust theory of change, there is a risk that important factors will be overlooked and that the evaluation will erroneously conclude that the program has no impact, when in fact it affects dimensions that were simply not measured.
Despite having been in operation for several years, the program we were evaluating did not have a well-structured theory of change. Our first step, therefore, was to organize meetings in which several people engaged with the program in some way collaborated on developing one. Surprisingly, the exercise sparked an intense debate, revealing that there was no consensus about the program’s causal chains.
In my experience, this is very common: programs that are already established and running often lack clarity in their causal chains. In the case of the program we were evaluating, the managers realized that impacts were likely to occur only if all the causal links outlined in the theory of change held, along with their underlying assumptions. This convinced them of the importance of proper implementation, and made them realize that they did not have an appropriate monitoring system to track how their programs were being implemented. They have since improved their monitoring systems, and their programs are now well-structured, evidence-based interventions that use M&E tools in all phases of the project cycle.
Our initial commitment was to measure the impact of the program, and this triggered a theory of change exercise that proved very rich and shed light on the importance of using M&E tools every step of the way. The main takeaway is that M&E tools are interconnected and serve different purposes throughout the policy cycle. To truly implement an evidence-based approach, one should be aware that it is not necessary to choose among M&E tools: most of them generate complementary information. Together they give program managers a full view of their program’s performance, allowing them to make well-informed decisions and implement a truly evidence-based management approach.
The copyright of all the information, documents and materials presented here is held by IFAD, which reserves all rights.