How AI Is Used In War Today



From autonomous drones to facial recognition algorithms designed to recognize perpetrators of war crimes, the conflict in Ukraine has become a testing ground for the use of artificial intelligence in warfare.

Much of this technology has been provided by Western companies. Just as David triumphed over Goliath thanks to his technological advantage, the hope is that their support will give Ukraine an advantage over its far larger Russian opponent.

From the bow and arrow to the atomic bomb, war has always been a key driver of technological advancement. But with one US general describing its impact in Ukraine as “the most significant fundamental change in the character of war ever recorded in history,” let's take a look at what it might mean for the way conflicts are fought and won in the future.


How Is AI Being Used In Ukraine?

The Ukrainian government has put technology in general – and AI in particular – on the front line of its war strategy since the start of the conflict in 2022.

When war broke out in the country, where some 300,000 people are employed in the tech sector, many startups pivoted to developing products that would aid the war effort.

The government even set up a funding platform, Brave1, to allow companies to pitch defense technology products to investors. It is reported to have received thousands of submissions.

One of the major breakthroughs we’ve seen in Ukraine is the use of autonomous drones. While unmanned aerial vehicles (UAVs) have been used in many conflicts in the past decade, these are usually piloted by humans remotely. Today, in Ukraine, drones have reportedly been used that are capable of tracking and engaging enemies without human interaction—in other words, true killer robots.

There’s also BAD One, a dog-like autonomous robot designed by British company Alliance, which moves stealthily through combat zones and uses thermal vision to detect enemy positions as well as minefields. It can also carry ammunition to resupply soldiers while they are engaged in fighting.

And the Ukrainian military has also demonstrated the use of an autonomous machine gun, which reportedly uses AI to spot and target enemies moving in the field.

Not all the AI being used in Ukraine is focused on defeating the enemy; there are humanitarian use cases, too.

AI is being used to help with the resettlement of refugees and people displaced by conflict by tracking the condition of roads and infrastructure and monitoring supply routes used for delivering food and essential supplies.

It’s also used to analyze siloed data streams that give clues that can help with clearing landmines in what has become the most heavily mined country in the world.

Elsewhere, AI is helping to collect evidence that could help hold those who commit war crimes to account. Allegations and reports are linked to satellite imagery showing troop movements, and videos uploaded to social media are used to build cases for future prosecutions.

And facial recognition technology, supplied by US company Clearview, has been used to identify Russian soldiers crossing the border, to make it easier to trace and hold them to account in the aftermath.
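The article doesn't describe Clearview's internals, but facial recognition systems of this kind generally work by converting each face image into a numeric embedding vector and comparing embeddings by similarity. A minimal sketch of that comparison step, with made-up toy vectors and an illustrative threshold (not Clearview's actual parameters), might look like this:

```python
import math

def cosine_similarity(a, b):
    """Similarity between two embedding vectors (1.0 = identical direction)."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def is_match(query_embedding, enrolled_embedding, threshold=0.8):
    """Declare a match only above a confidence threshold.

    The threshold and vector sizes here are illustrative; real systems
    use embeddings with hundreds of dimensions and tuned thresholds.
    """
    return cosine_similarity(query_embedding, enrolled_embedding) >= threshold

# Toy example: two photos of the same person should produce similar
# embeddings; photos of different people should not.
same_person = is_match([0.9, 0.1, 0.4], [0.88, 0.12, 0.41])
different_person = is_match([0.9, 0.1, 0.4], [-0.2, 0.9, 0.1])
```

The design choice that matters here is the threshold: set it too low and the system produces false identifications, which is exactly why using such tools to accuse individuals of crimes is contentious.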


The Ethics Of AI In Warfare

Of course, all of this raises many ethical questions. Should tech companies, for any reason, be developing tools that openly go against the principles of “harmless” AI?

Take Palantir, for example – a company that has a history of attracting criticism for creating spying tools. It has been linked to many of the methods used by Ukraine to deploy AI-augmented surveillance.

It isn’t hard to imagine that the battlefield has provided a location where R&D can take place without becoming bogged down under too much ethical oversight or scrutiny. Is there a risk that this technology will be deployed outside of warzones at some point in the future? It’s a threat that seems foolish to ignore.

A big question is whether it is ever ethically justified to allow machines to make the decision to kill. Ukraine’s AI machine gun technology, for example, is capable of identifying targets as enemies but still requires a human operator to authorize it to fire. But will this always be the case?
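The distinction drawn above, between a system that identifies a target and one that is permitted to engage it, is what engineers call a "human-in-the-loop" design. A minimal sketch of that pattern, with entirely hypothetical names and thresholds, separates the automated detection step from the human authorization step:

```python
from dataclasses import dataclass

@dataclass
class Detection:
    label: str         # what the classifier thinks it sees
    confidence: float  # classifier confidence, 0.0 to 1.0

def identify_target(detection, threshold=0.9):
    """Automated step: flag a detection as a candidate target."""
    return detection.label == "enemy" and detection.confidence >= threshold

def may_fire(detection, operator_authorized):
    """Human-in-the-loop gate: even a flagged target requires a human's
    explicit authorization before any engagement."""
    return identify_target(detection) and operator_authorized

# A high-confidence detection is flagged, but nothing happens
# until a person explicitly authorizes it:
d = Detection(label="enemy", confidence=0.97)
# may_fire(d, operator_authorized=False) stays False
# may_fire(d, operator_authorized=True) becomes True
```

Removing the human from that final gate, so that `may_fire` depends only on the classifier's output, is precisely the "red line" the surrounding paragraphs worry about.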

And while the robot dogs mentioned above are reportedly used for reconnaissance and resupply, China has demonstrated a similar technology equipped with a machine gun and capable of engaging in combat.

There’s also a risk of overemphasizing the role of AI and technology in warfare. Though it can certainly provide material for propaganda and perhaps cause fear among the enemy, it shouldn’t be forgotten that the vast majority of fighting and killing is still being done by humans, up to their knees in mud with the sound of artillery ringing in their ears.

Underplaying the horror and danger of their situation by portraying the war in Ukraine as being fought mainly by robots and software risks “tech-washing” the brutality of warfare.


AI And The Future Of War

History tells us that any rules or principles of fair play often prove inconsequential in warfare. While, from what we have seen, AI in Ukraine seems to stop short of actually killing people, we know that technology that crosses that red line is under development.

Some might say that robotic warfare could potentially save human lives by avoiding the need to put humans into combat zones in the first place. But what happens when one side runs out of robots? Do they surrender or start sending people into the fray?

Advances made in AI warfare in Ukraine remind us that technological and ethical developments don’t always progress at the same speed. While AI might reduce human casualties, it also raises stark questions about the level of power and autonomy that we’re willing to hand over to robots. How we answer those questions could have big implications for how AI impacts society during peacetime and war.



About Bernard Marr

Bernard Marr is a world-renowned futurist, influencer and thought leader in the fields of business and technology, with a passion for using technology for the good of humanity. He is a best-selling author of over 20 books, writes a regular column for Forbes and advises and coaches many of the world’s best-known organisations.

He has a combined following of 4 million people across his social media channels and newsletters and was ranked by LinkedIn as one of the top 5 business influencers in the world. Bernard’s latest book is ‘Generative AI in Practice’.


