AI for Peace Weekly Insights
Welcome to AI for Peace Weekly Insights! Each week, we’ll share updates, trends, and key developments at the intersection of AI and peace, offering fresh perspectives on both risks and opportunities to inspire action and spark meaningful dialogue within our community of practice.
Catch up on the latest news on:
?? How unchecked AI could trigger a nuclear war
?? Machine Learning Meets War Termination: Using AI to Explore Peace Scenarios in Ukraine
?? Geopolitics and the Regulation of Autonomous Weapons Systems
?? In Ukraine Short Range Drones Become Most Dangerous Weapon for Civilians
Have news to share or insights to add? Drop us a comment—we’d love to hear from you!
In Ukraine Short Range Drones Become Most Dangerous Weapon for Civilians UN Human Rights Monitors Say
“In January 2025, short-range drones caused more casualties than any other weapon in Ukraine, the UN Human Rights Monitoring Mission in Ukraine (HRMMU) said today. Increasing casualties from short-range drones, including those with “first-person-view” cameras, raise serious concerns about compliance with fundamental principles of international humanitarian law, HRMMU said.” Read more
Why 2024 Was the Worst Year for Internet Shutdowns
A new report by Access Now, launched at the international human rights conference RightsCon, reveals that 2024 was the worst year on record for internet shutdowns, with governments imposing at least 296 outages across 54 countries. This marks a 35% increase in the number of affected nations compared to previous years, surpassing the 2022 high of 40 countries. Alarmingly, 47 shutdowns extended into 2025, with 35 still active at the end of 2024. The report also highlights seven first-time offenders—Comoros, El Salvador, France, Guinea-Bissau, Malaysia, Mauritius, and Thailand—further underscoring the growing global trend of digital repression. Read more
?
Book recommendation: Political Automation: An Introduction to AI in Government and Its Impact on Citizens
Governments increasingly rely on AI to make decisions affecting citizens’ privacy, mobility, public benefits, and speech, ushering in an era of political automation. This book examines how citizens can influence algorithmic governance through civil society efforts across various domains, including policing, national security, and peacekeeping. It argues that to preserve democratic participation, a new institution—a Third House—is needed: a virtual chamber where AI-driven "digital citizens" represent individuals in decisions about AI in governance. Without such oversight, political automation risks eroding participatory government, leading to a future where human conscience no longer guides public policy. Read more
?
Geopolitics and the Regulation of Autonomous Weapons Systems
Advancements in artificial intelligence (AI) offer significant benefits, streamlining tasks and improving accessibility, but they also present serious risks, particularly in life-and-death decisions like autonomous weapons systems. Experts and developers urge caution, advocating for reflection and regulation as AI reshapes modern warfare and challenges the role of humans in conflict. Early examples in Ukraine and Gaza highlight this shift, with militaries drawn to AI-driven weapons for their speed, efficiency, and ability to reduce soldier casualties. As global investment grows and costs decline, the widespread adoption of these technologies seems inevitable, raising urgent ethical and security concerns. Read more
?
How US tech giants supplied Israel with AI models, raising questions about tech’s role in warfare
U.S. tech giants have significantly boosted Israel’s ability to track and target alleged militants in Gaza and Lebanon through increased AI and computing services. However, this has also led to a rise in civilian casualties, raising concerns about the role of these technologies in warfare. While militaries have long relied on private companies to develop autonomous weapons, Israel’s recent conflicts highlight one of the first instances where commercial AI models from the U.S. are being used in active combat—despite not being originally designed to determine life-and-death decisions. Read more
?
AI Biases in Critical Foreign Policy Decisions
A new study from SIS Futures Lab presents the first major benchmarking analysis of how large language models (LLMs) approach international relations and foreign policy decision-making. Benchmarking, a key evaluation method, provides valuable insights into the strengths and limitations of foundation models such as ChatGPT, Gemini, and Llama. The findings are available in an interactive dashboard and a detailed technical paper. Read more
?
Should AI Be Allowed in Military Applications? A Global Debate
“The debate about the application of artificial intelligence (AI) in the military is heating up worldwide, with serious ethical, security, and regulatory issues at stake. Some believe that AI increases military power and the country remains secure, but others are concerned about ethical issues, misuse, and a lack of accountability. Such a debate has forced international movements to implement governing mechanisms, but great global powers are still split on this issue.” Read more
?
Machine Learning Meets War Termination: Using AI to Explore Peace Scenarios in Ukraine
War negotiations are inherently complex, filled with uncertainty, competing interests, and shifting variables, but AI can help navigate this landscape if applied effectively. While large language models (LLMs) are generalists, their outputs can be enhanced through techniques like retrieval-augmented generation (RAG), few-shot learning, and chain-of-thought reasoning. The key lies not just in the model itself but in the quality of data and the sequencing of questions. By curating datasets focused on historical peace negotiations and structuring prompts to mirror real-world decision-making, AI goes beyond simple summarization to systematically analyze war termination and the difficult path toward a negotiated settlement in Ukraine. Read more
?
How unchecked AI could trigger a nuclear war
In late 2024, Chinese President Xi Jinping and U.S. President Joe Biden agreed that AI should never have the authority to decide on launching nuclear war, a policy shaped by five years of discussions at the Track II U.S.-China Dialogue on AI and National Security, led by the Brookings Institution and Tsinghua University. Examining Cold War-era U.S.-Soviet tensions illustrates the potential dangers of AI making such decisions, as an AI system trained on historical doctrines and simulated scenarios could have misjudged threats and preemptively launched nuclear weapons, leading to catastrophic consequences. This agreement marks a crucial step in ensuring human control over such high-stakes military decisions. Read more
?
ECOWAS to consider deployment of AI in counterterrorism, peace operations
The Economic Community of West African States (ECOWAS) will consider the deployment of technology, including Artificial Intelligence (AI) in its counterterrorism and peace operations within the West African region. The chairman of the Governmental Experts Validation Meeting on Logistics Concept and Logistics Deport Policies of the ECOWAS Standby Force, Air Commodore Sampson Eyekosi, disclosed this at the closing ceremony of the meeting in Abuja on Friday. Read more
This weekly review aims to spark dialogue and inspire the AI for Peace community of practice to continue advancing their efforts toward lasting peace. Visit our website to explore more resources, insights, and opportunities to engage with our work— www.aiforpeace.org
If we’ve missed any significant developments, let us know! Share your top news from last week in the comments, including a link—we’re always eager to stay updated on the latest trends and insights.
#AIforPeace #AI4Peace #Data4Peace #Tech4Peace #EmergingTechnologies #Peacebuilding #Peacemaking #HumanRights #LastingPeace #PositivePeace