Over the past few weeks I've been thinking a lot about the role of artificial intelligence (A.I.) in disaster risk reduction (DRR). Given the speed at which A.I. is being integrated into our daily lives, one could argue it is only a matter of time before our preparedness for, response to and recovery from disasters are heavily influenced by A.I. systems and processes.
Among the broader discussions around A.I. in disaster risk reduction and management is a more specific concern around how this new technology might impact the most marginalised [1] and vulnerable in society. Whilst there are a number of obvious benefits to rolling out this technology to help respond to and recover from the best-understood impacts of disasters, there are a number of concerns around communities and activities that are less well understood.
If we understand that A.I. works by combining large amounts of data with intelligent processing and algorithms, then its use in disaster risk reduction or emergency management poses some immediate issues.
This is the foundation of a more in-depth piece of research I am currently working on. However, I wanted to share some of my initial thoughts on the pros and cons of A.I. in relation to marginalised groups in DRR:
- A.I. is expected to help accurately predict certain hazards (including natural hazards and pandemics) [2]. This capacity to identify potential threats could contribute greatly to identifying marginalised groups at risk within the areas impacted, and help ensure safe and effective preparedness, response and recovery phases.
- A.I., when implemented properly and with the correct data, could potentially identify marginalised groups that would otherwise be missed by traditional methods. A.I. could, for example, use satellite imagery alongside pre-programmed vulnerability criteria to help identify at-risk populations in under-served areas exposed to flash flooding, such as informal settlements [3]. This could also allow for targeted and coordinated interventions for those communities before, during and after disasters.
- A.I. could also enable more effective communication strategies in disasters. Delivering fast, relevant and up-to-date information is a huge challenge for disaster comms implemented the traditional way. With properly fed and regulated A.I., however, the messaging and content of emergency comms could be hugely improved [4].
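To make the second point above concrete, here is a deliberately simple sketch of what "pre-programmed vulnerability criteria" might look like in practice. Everything in it is hypothetical: the area names, the exposure scores (standing in for satellite-derived flood maps), the criteria and the weights are all placeholders for illustration, not validated values.

```python
# Toy illustration (hypothetical data and criteria): combining a
# hazard-exposure score with simple vulnerability criteria to rank
# areas for targeted intervention.

# Each record: area name, flood exposure on a 0-1 scale (in practice
# derived from e.g. satellite flood mapping), plus vulnerability flags.
areas = [
    {"name": "Area A", "flood_exposure": 0.9, "informal_settlement": True,  "early_warning_access": False},
    {"name": "Area B", "flood_exposure": 0.7, "informal_settlement": False, "early_warning_access": True},
    {"name": "Area C", "flood_exposure": 0.4, "informal_settlement": True,  "early_warning_access": False},
]

def priority_score(area):
    # Boost the score when vulnerability criteria are met; the weights
    # here are arbitrary placeholders chosen for the example.
    score = area["flood_exposure"]
    if area["informal_settlement"]:
        score += 0.3
    if not area["early_warning_access"]:
        score += 0.2
    return score

for area in sorted(areas, key=priority_score, reverse=True):
    print(f'{area["name"]}: priority {priority_score(area):.2f}')
```

Note how Area C, despite lower raw flood exposure than Area B, ranks above it once the vulnerability criteria are applied; that is precisely the kind of community a purely hazard-based method might miss.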
But for every pro, there’s at least one con. A.I. as it currently stands is unregulated, prone to inaccuracies and easily manipulated. Before A.I. is seriously considered as a player within the disaster risk reduction and management space, there needs to be a considerable discussion around these many flaws.
Some of the cons I can see at this early stage are:
- There’s a considerable risk that A.I. (as it currently exists) could actually worsen the marginalisation of vulnerable and at-risk groups. Given that A.I. is only as intelligent as the data it’s being ‘fed’, much of its success depends on the validity and quality of that data. For example, if an algorithm is trained on data that is biased against a certain group, it may make decisions that are unfair or discriminatory towards that group [5].
- The very basis of A.I. effectiveness is that it uses a huge amount of data. The scale of that data use may (rightly) raise privacy concerns, particularly if personal data is being collected and analysed. Marginalised or at-risk groups may be particularly vulnerable here, especially in countries where those groups experience state-backed or state-linked harassment or discrimination.
- As we’ve started to witness, there is considerable scepticism and concern around A.I. and its unchecked implementation [6]. This scepticism risks building into an overall lack of trust in the technology, which would make its implementation in disaster or crisis situations difficult, if not impossible.
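The first of the concerns above can be shown with a toy example. The data is entirely synthetic and the "model" is nothing more than per-group rates learned from past records, but it illustrates the mechanism: a system trained on records in which one group was historically under-served will simply reproduce that gap.

```python
# Toy illustration (synthetic data): how skewed training data can make a
# simple model systematically under-serve one group.

from collections import defaultdict

# Hypothetical historical aid records as (group, received_aid) pairs.
# Group B is under-represented and was rarely reached in the past.
history = ([("A", True)] * 80 + [("A", False)] * 20 +
           [("B", True)] * 2 + [("B", False)] * 18)

counts = defaultdict(lambda: [0, 0])  # group -> [times aided, total records]
for group, received in history:
    counts[group][0] += received
    counts[group][1] += 1

# A naive model that estimates "likelihood of receiving aid" from past
# rates inherits the historical gap rather than correcting it.
rates = {g: aided / total for g, (aided, total) in counts.items()}
print(rates)
```

Group A comes out at 0.8 and Group B at 0.1, so any prioritisation built on these rates would keep directing resources away from the group that was already missed, which is the feedback loop the bullet point warns about.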
Overall, while A.I. has the potential to be a powerful tool in disaster risk reduction, its use in policy aimed at marginalised or at-risk groups raises serious questions. It is important to carefully consider the potential risks and ensure that the technology is used in an ethical and transparent manner.
[1] For the purposes of this paper, marginalised groups can include (but are not limited to) women and girls, those with physical or mental disabilities, older people, and ethnic and religious minorities. In addition, hyper-marginalised people refers to any group experiencing additional vulnerabilities because of cultural and societal attitudes and discrimination, including (but not limited to) LGBTQIA+ people, First Nations/indigenous people, sex workers and those within the informal economy, those experiencing homelessness, refugees, migrants and transient populations.
[2] https://studyfinds.org/artificial-intelligence-future-disasters/
[3] https://www.unglobalpulse.org/2021/04/fusing-ai-into-satellite-image-analysis-to-inform-rapid-response-to-floods/
[4] https://public.wmo.int/en/resources/bulletin/artificial-intelligence-disaster-risk-reduction-opportunities-challenges-and
[5] https://fra.europa.eu/en/publication/2022/bias-algorithm
[6] https://www.theguardian.com/technology/2023/mar/31/ai-research-pause-elon-musk-chatgpt