GeoEvidence Explorer: Revealing gaps in the evidence base behind the use of geospatial data for humanitarian operations

The GeoEvidence Explorer is an interactive tool from Caribou that maps the impact of geospatial data on humanitarian operations. The Explorer draws on a diverse evidence base dominated by grey literature, allowing users to scan for insights on specific geospatial and humanitarian use cases. This blog examines the quality of the evidence we found, the data gaps we identified, and why more effort is needed to build knowledge about the impacts of geospatial data.

Most literature reviews of impact evidence focus on the impacts on an intervention's final recipients. In this case, we looked at the impact on operations, as our goal was to help humanitarian actors understand the potential use cases for geospatial data in their work. We did not find peer-reviewed impact evaluations focused on operational impact. Of the 62 pieces of evidence reviewed, only 10 qualify as peer-reviewed academic resources, and these typically evaluate interventions in which the use of geospatial data is just one component. The few peer-reviewed articles we found either assess the methodology, providing information on impact only incidentally, or are impact evaluations of programs in which geospatial data was used but was not the primary focus.

Why is evidence on the operational impacts of geospatial data so limited?

  • Focus on outcomes for final recipients: most evaluations prioritize measuring the impacts of humanitarian interventions on final recipients rather than examining the tools and processes that support the operational efficiency of those interventions. As a result, geospatial data’s specific impact on operations is rarely isolated or assessed.
  • Lack of time and resources for evaluations: the urgency and fast-paced nature of humanitarian work leaves little capacity to design and conduct rigorous evaluations, especially on operational aspects like data use. Many humanitarian evaluations are of mixed quality, have limited focus, and lack a thorough assessment of effectiveness.
  • Limited publication of internal analyses: while organizations may conduct internal evaluations, such as cost-benefit analyses, these are often not published or shared, possibly due to concerns about scrutiny from funders. This ultimately results in a fragmented evidence base.
  • Challenges in designing experimental impact evaluations: assessing the operational impact of geospatial data would require comparing scenarios with and without its use, which is logistically challenging and ethically complex in humanitarian settings.
  • Assumed value of geospatial data: there’s a widespread belief that using geospatial data inherently improves operations, which reduces the perceived need for formal evaluations.

Faced with an evidence base consisting mostly of grey literature, we adjusted our expectations and inclusion criteria to allow for observational evidence. To address the vast differences in standards within observational evidence, we added a "quality of methodology" variable to track how rigorous and replicable each approach to assessing impacts was.
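To make this concrete, the sketch below shows one way such a coding scheme could be represented; the field names and quality levels are illustrative assumptions rather than the Explorer's actual schema.

```python
from dataclasses import dataclass
from enum import Enum

class EvidenceType(Enum):
    PEER_REVIEWED = "peer-reviewed"
    GREY_LITERATURE = "grey literature"

class MethodQuality(Enum):
    LOW = "low"        # anecdotal; methodology not described
    MEDIUM = "medium"  # methodology described but hard to replicate
    HIGH = "high"      # rigorous and replicable assessment of impacts

@dataclass
class EvidenceRecord:
    title: str
    evidence_type: EvidenceType
    humanitarian_use_case: str     # e.g. "needs assessment"
    geospatial_data_source: str    # e.g. "satellite imagery"
    method_quality: MethodQuality  # the "quality of methodology" variable

# Tagging one hypothetical grey-literature case study
record = EvidenceRecord(
    title="Field report on satellite-assisted needs assessment",
    evidence_type=EvidenceType.GREY_LITERATURE,
    humanitarian_use_case="needs assessment",
    geospatial_data_source="satellite imagery",
    method_quality=MethodQuality.MEDIUM,
)
print(record.method_quality.value)  # -> "medium"
```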

In most cases, the evidence we found lacked key details about the data used (the source of the geospatial data and any other data involved) or about the methodology used to process the data for operational purposes. In other instances, impacts on operations are only briefly mentioned, because those resources were published to share an experience rather than to evaluate it.

The lack of rigorous evaluations can, on the one hand, lead to underutilization of geospatial data because there is no convincing justification of its benefits. On the other hand, it may also lead organizations to simply assume that geospatial data is effective and efficient, which may not hold for every use case.

What are the risks of not knowing?

  • Overlooking support needs: assuming geospatial data always adds value can lead organizations to overlook issues such as insufficient training for staff to use the data effectively.
  • Missed opportunities for innovation: rigorous evaluations can reveal unexpected benefits or limitations, driving innovation. Without these insights, organizations may fail to explore alternative applications or address gaps in current approaches.
  • Resource misallocation: without understanding where geospatial data truly adds value, organizations risk diverting resources to tools or processes that don’t significantly enhance operations, leaving other critical needs underfunded.
  • Limited learning and improvement: assuming success without evaluation hinders opportunities to learn from challenges or optimize data use. For example, an organization might miss insights on how to better integrate geospatial data into decision-making.
  • Lost opportunities for advocacy: robust evidence is critical for advocating for resources and support. Without rigorous evaluations, it’s harder to make a compelling case for the strategic value of geospatial data in operations.

In the humanitarian sector, the unpredictable and dynamic nature of real-world challenges makes it unlikely that experimental evaluations will ever become standard practice. The constraints of time, resources, and ethical considerations often render such rigorous methodologies infeasible. However, the sector has long embraced the habit of monitoring processes, which presents a valuable opportunity for documenting outcomes.

By making modest yet strategic improvements to existing monitoring practices, humanitarian actors can unlock the potential for evidence-based approaches that are both practical and impactful. Creative, collaborative, and systematic evidence collection, interpretation, and sharing methods can bridge the gap between observational data and actionable insights, fostering better decision-making even in complex, fast-changing environments.

How can observational evidence be leveraged effectively to inform practice, given the absence of more rigorous studies?

  • Enhance transparency in case studies: The evidence base for geospatial data benefits when users share detailed accounts of their experiences with geospatial data, including successes, challenges, and the contexts in which the data was applied. Detailed case studies that document the geospatial data used, methodologies applied, and operational outcomes observed are the foundations of the impact evidence base.
  • Embrace mixed-methods approaches: Combining qualitative insights with available quantitative data, such as system logs or administrative records, creates richer narratives about the data’s impact. This can help triangulate findings and increase their credibility.
  • Create a repository of lessons learned: Establishing a centralized database or platform for humanitarian actors to share case studies, observational findings, and lessons related to geospatial data could help identify trends and common factors that influence success (a minimal sketch of this idea follows the list).
  • Engage stakeholders in interpretation: Participatory approaches can validate observational findings with practitioners, beneficiaries, and data users to ensure insights resonate with their experiences and realities.
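As a rough illustration of the repository idea above, the sketch below shows how pooled lessons-learned entries might be aggregated to surface recurring benefits and data sources; the entries and field names are hypothetical and not drawn from any existing platform.

```python
from collections import Counter
from typing import Dict, List

# Hypothetical lessons-learned entries pooled from several organizations.
entries: List[Dict[str, str]] = [
    {"use_case": "flood response", "data_source": "satellite imagery",
     "reported_benefit": "faster targeting of assistance"},
    {"use_case": "flood response", "data_source": "drone imagery",
     "reported_benefit": "faster targeting of assistance"},
    {"use_case": "camp management", "data_source": "satellite imagery",
     "reported_benefit": "better shelter planning"},
]

# Count recurring benefits and data sources across entries to spot
# common factors that appear to influence success.
benefit_counts = Counter(e["reported_benefit"] for e in entries)
source_counts = Counter(e["data_source"] for e in entries)

print(benefit_counts.most_common())  # which benefits recur most often
print(source_counts.most_common())   # which data sources recur most often
```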

What’s next?

Building on the potential of enhanced observational evidence, the next step for the humanitarian sector could be to ensure that the necessary structures and resources are in place to improve the availability and quality of evidence. This requires a concerted effort from both funders and organizations to prioritize the generation and dissemination of actionable insights. To that end, we've outlined three key steps that funders and organizations can take.

  • Prioritize funding for impact evaluations: Donors could allocate specific funding streams for evaluations of geospatial data’s impact on operations, ensuring resources for both evaluation design and execution are included in project budgets.
  • Support pilots with experimental designs: Fund small-scale pilot programs that explicitly compare operations with and without geospatial data. These pilots can serve as testbeds for generating evidence and refining evaluation approaches.
  • Encourage open data and knowledge sharing: Promote the publication of internal evaluations, cost-benefit analyses, and operational studies by creating incentives or anonymized reporting mechanisms, and host forums in which past experiences can be openly discussed.

The work on the GeoEvidence Explorer highlighted a critical gap in the humanitarian sector: the lack of robust evidence on the operational impacts of geospatial data. While the challenges of conducting rigorous evaluations in this field are substantial, they should not deter efforts to document and share observational insights. Addressing this gap requires collaborative action from organizations, practitioners, and funders alike to prioritize learning and innovation. Ultimately, a better understanding of geospatial data’s role in enhancing operations can drive more effective, efficient, and equitable humanitarian outcomes. If you have questions about the Explorer or are interested in discussing research priorities, please contact [email protected].
