Source: International Food Policy Research Institute (IFPRI)
Date: Nov 2009
Assessing impacts of public investments has long captured the interest and attention of the development community. This paper presents the evolution of different methods and approaches used for ex ante appraisal, monitoring, project evaluation, and impact assessment over the last five decades. Among these tools, impact assessment (IA) conducted retrospectively comes closest to providing the proof of development effectiveness. It is defined as the systematic analysis of the significant or lasting changes in people’s lives brought about by a given action or series of actions in relation to a counterfactual.
There are three basic types of retrospective IAs: macro-level IAs, which assess the contribution of development efforts to an impact goal aggregated at the sector or system level; micro-level impact evaluations (IEs), which estimate the average effect of an intervention on outcomes at the beneficiary level; and micro-level ex post impact analyses, which measure the total effects of a development effort after its outputs are scaled up. Ex post IAs have expanded over the decades in both breadth and depth of analysis in response to evolving development themes and methodological advances. The increased emphasis on learning from evaluations has also prompted responses from both the quantitative and qualitative camps of the evaluation community.
The paper argues that generating robust knowledge to inform development policies and investment decisions requires a hierarchical and cumulative approach to “improving the proof,” applying a variety of rigorous impact assessment methods incrementally at the project, program, and system levels. Subjecting as many development interventions as resources allow to rigorous impact assessment based on a common framework can help build a critical body of evidence on the impacts of development interventions. That evidence can then be subjected to meta-analyses that assimilate results across studies and build a knowledge base on what works and what does not.