by Paola Elice, Impact Evaluation Specialist, World Bank
“We have been implementing women and child-friendly spaces in refugee camps across the world, but we have never rigorously evaluated their impact. We operate based on best intentions, and what seems to work – we now need to focus on how to better measure the impact we are having.”
Remarks by Ninette Kelley during a World Bank–UNHCR internal meeting in mid-2020. Kelley is the author of People Forced to Flee: History, Change and Challenge, a 2022 UNHCR report that draws on the lessons of history to inspire improved responses to forced displacement.
Measuring project impact in forced displacement contexts can be challenging from a research standpoint. Displaced populations move frequently, making it difficult to collect data from them over time. Baseline information may also be missing because it was not collected when refugees arrived. Further, displaced people and host communities often participate in several programmes at the same time, so distinguishing the impact of a specific programme from the others can be difficult. Last but not least, investing in rigorous evaluation is rarely a priority in these settings, given that limited resources are devoted primarily to humanitarian relief.
Despite these known challenges, governments, donors and implementing organizations are increasingly interested in understanding the impact of projects, as part of the growing push to use evidence to inform financing decisions and ensure resources are used efficiently, as laid out in the 2018 Global Compact on Refugees.
This growing demand has propelled a rise in impact evaluations in forced displacement settings, including studies funded by Building the Evidence on Protracted Forced Displacement, a research partnership between the UK Foreign, Commonwealth and Development Office, UNHCR, and the World Bank. The partnership also funded my recent paper, Impact Evaluations in Forced Displacement Contexts: A Guide for Practitioners*, which explains the importance of extending the use of impact evaluation to forced displacement projects.
Collectively, these and other studies are generating important lessons on the effectiveness of programmes in various sectors. For example, the delivery of basic information on the benefits of hosting refugees proved effective at promoting social cohesion in an urban setting in Uganda. In another instance, the provision of work tasks was shown to improve refugees’ psychosocial wellbeing significantly more than cash-only assistance in Bangladesh. A graduation programme in Mozambique was found to lower food insecurity and increase household income and savings. Similarly, evaluations of graduation programmes yielded positive results across a range of outcomes in other countries, as discussed in this podcast during Fragility Forum 2022.
Importantly, impact evaluations also show when programmes are not delivering results, such as in the Democratic Republic of the Congo, where evaluations of community-driven reconstruction programmes found that the construction of schools and health centers did not lead to better education or health outcomes. In Eastern Democratic Republic of the Congo, the provision of vouchers to purchase essential household items improved outcomes associated with psychological wellbeing but did not improve resilience and social cohesion among displaced households and hosts.
It is also necessary to address skepticism about the rigour of impact evaluations in forced displacement and humanitarian contexts, given the complexity and cost factors outlined at the start of this post. But ongoing work shows that these challenges can be addressed.
For example, not all impact evaluations need to be randomized evaluations: a comparison group can be identified by leveraging sudden changes that can occur during a programme’s implementation, as was done to evaluate the impact of a variation in cash assistance to refugees returning to Afghanistan, or the timing of a new policy in Colombia granting access to formal markets and safety nets.
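To make the quasi-experimental logic concrete, here is a minimal sketch of a difference-in-differences comparison, one common way to exploit a sudden programme change of the kind described above. The function and all the numbers are hypothetical illustrations, not data from any of the studies cited in this post.

```python
# Sketch of a difference-in-differences (DiD) estimate: compare the change
# in an outcome for households affected by a sudden programme change
# (treated) against the change for unaffected households (comparison).
# All data below are invented for illustration.

def did_estimate(treated_pre, treated_post, control_pre, control_post):
    """Return (treated change in means) minus (comparison change in means)."""
    mean = lambda xs: sum(xs) / len(xs)
    return (mean(treated_post) - mean(treated_pre)) - (
        mean(control_post) - mean(control_pre)
    )

# Hypothetical household outcome scores before and after the change
treated_pre = [10, 12, 11]
treated_post = [15, 16, 17]
control_pre = [10, 11, 12]
control_post = [11, 12, 13]

effect = did_estimate(treated_pre, treated_post, control_pre, control_post)
print(effect)  # treated gained 5 on average, comparison gained 1, so 4.0
```

The key assumption, of course, is that without the programme change both groups would have followed parallel trends; a real evaluation would test that assumption rather than take it for granted.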
Further, not all randomized evaluations need a pure comparison group. In a humanitarian context, it does not make sense to ask whether to provide shelter, food, or access to clean sanitation and hygiene facilities, but rather how best to provide such basic support (for example, this evaluation in Niger compares providing money in cash and digitally), or whether simple interventions layered on top of cash transfers can maximize the benefits of cash (factorial designs). For evaluations of non-essential interventions, randomization can offer a fair way to roll out a programme: many organizations start small and expand their reach over time (phase-in design), and demand for a programme often exceeds the resources available (over-subscription design).
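The phase-in and over-subscription designs above can be sketched as a simple lottery: applicants are randomly ordered, early cohorts receive the programme first, and later cohorts serve as a temporary comparison group until their turn. The household IDs, cohort size and seed below are hypothetical, for illustration only.

```python
# Sketch of a phase-in lottery for an over-subscribed programme: randomly
# assign applicants to cohorts of equal size; later cohorts act as the
# comparison group until the programme reaches them. Illustrative only.
import random

def phase_in_assignment(applicants, slots_per_phase, seed=2022):
    rng = random.Random(seed)  # fixed seed makes the lottery reproducible
    shuffled = applicants[:]   # copy so the original list is untouched
    rng.shuffle(shuffled)
    # Slice the shuffled list into consecutive cohorts of slots_per_phase
    return [
        shuffled[i : i + slots_per_phase]
        for i in range(0, len(shuffled), slots_per_phase)
    ]

applicants = [f"HH-{i:03d}" for i in range(1, 13)]  # 12 applicant households
phases = phase_in_assignment(applicants, slots_per_phase=4)
for n, cohort in enumerate(phases, start=1):
    print(f"Phase {n}: {cohort}")
```

Because assignment to early versus late cohorts is random, comparing outcomes between them yields an unbiased estimate of programme impact while everyone eventually receives the programme, which is what makes the design a fair rationing rule.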
Finally, impact evaluations can be cost effective. They can leverage existing data collection efforts (for example, impact evaluations in Bangladesh make use of listing data from the Cox’s Bazar panel survey), monitoring and evaluation budgets, and programme administrative data. They can also be adapted to the specific context, as discussed in this blog.
Impact evaluation not only provides the evidence needed by governments, UNHCR and other agencies to make operational and policy decisions; as I have discussed in this post, its application to forced displacement contexts is bound to strengthen the discipline itself by pushing researchers to innovate, advance research ethics and promote cost-effectiveness analysis.
*My paper on Impact Evaluations in Forced Displacement Contexts: A Guide for Practitioners was prepared as a background report for the UNHCR report People Forced to Flee: History, Change and Challenge.