The Coming Swarm...
CREDIT: https://www.businessinsider.com/us-closer-ai-drones-autonomously-decide-kill-humans-artifical-intelligence-2023-11

I was an enthusiastic participant in creating the first 100-robot swarm. I also led work on what I believe was the first experimentally proven autonomous "killer" robot. Fortunately, the performance testing used a high-speed camera gimbal system, to which a paintball gun was later added. It never fired lethal weapons, but the results were both compelling and sobering: not a single Naval Reserve participant in the experiment made it through the test without being successfully detected, chased, and targeted. That test occurred in San Diego coastal bunkers more than 20 years ago. Later, we enlisted ground robots and drones to work together as part of an Office of Naval Research funded Bold Alligator exercise at Camp Lejeune:

https://youtu.be/ieR-txpbMiY?si=KUKxdQ_9fETwTdwJ

Much more recently, the @DARPA OFFSET program showed that a large number of air and ground vehicles can work together to identify targets: https://youtu.be/W34NPbGkLGI

Business Insider just ran an article discussing a recent exercise in South Korea with drones and tanks, and it pointed out the possibility of automated targeting:

US Closer to Using AI-Drones That Can Autonomously Decide to Kill Humans (businessinsider.com)

The threat of an autonomous kill capability is nothing new, but the intersection of swarms, AI, and autonomy is a new locus of concern. Swarms are hard to defeat: they are resilient and fault tolerant, and with the addition of AI they become adaptive and can learn. In a variety of international forums I have argued not against the idea of lethal autonomy, but against its pragmatic reality. In other words, it is the nitty-gritty engineering issues that worry me. I don't trust a robot's perception to be nearly as good as a human's, and I don't believe the moral reasoning on board the robot can be trusted. In one of these forums I argued that we need to enable an appropriate level of distrust rather than push for unjustified trust. My hope was that understanding the real benefits and limitations of the technology would help us adapt to it, assist it, and compensate for its failings.

If we want to ensure moral behavior, we need ways to measure it. What if AI could assess human behavior? When interviewed by Peter Singer for his book Wired for War, I argued that AI lacked the acuity to make moral decisions and decide when to fire. That was almost twenty years ago, and things have changed. I am not saying we should yet trust AI to be moral; I still have reservations. But I now see AI as a means to judge and measure behavior more objectively. In fact, I have recently come to think it could one day be considered morally wrong not to have an AI ethics module watching the combat scene and weighing in on the behavior of humans and robots. After all, humans acting in the throes of war may not be the best arbiters of ethical behavior. Historically, we have a pretty bad record. AI could do better.

We should not trust AI over human moral judgment, but I can see interesting possibilities for moral judgment derived from a combination of human heart and machine proficiency. It is not morally wrong to develop effective AI tools for combat, but we find maximum value at the intersection of human insight and machine proficiency, not by eliminating the human. I've written a great deal about our studies of mixed-initiative control. For a slew of tasks including search and rescue, reconnaissance, radiation detection, plume tracing, and landmine detection, we showed that balancing human and robotic initiative achieved better results than eliminating the human. The key was dynamic autonomy and a two-way window into the mind of each participant to enable shared understanding.
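To make the idea of dynamic autonomy concrete, here is a toy sketch of mixed-initiative arbitration. This is purely illustrative, not the actual system from our studies: the `Proposal` structure, confidence scores, and arbitration rules are all my own assumptions. The point it captures is that authority slides between human and robot based on confidence, while safety-critical actions are never taken autonomously.

```python
# Toy sketch of mixed-initiative arbitration (illustrative assumption,
# not the author's fielded system). Authority shifts dynamically between
# human and robot, but critical actions always require a human in the loop.

from dataclasses import dataclass
from typing import Optional


@dataclass
class Proposal:
    action: str              # proposed action, e.g. "advance", "hold"
    confidence: float        # self-assessed confidence, 0.0 to 1.0
    critical: bool = False   # True for lives-on-the-line decisions


def arbitrate(human: Optional[Proposal], robot: Proposal) -> str:
    """Decide which proposal to act on.

    - Critical actions are never taken autonomously: with no human
      input, the robot falls back to a safe hold.
    - Otherwise the more confident party takes initiative, with a bias
      toward the human on ties (humans are slow but worth listening to).
    """
    if robot.critical and human is None:
        return "hold"            # no autonomous critical/lethal action
    if human is None:
        return robot.action      # routine task: robot proceeds alone
    if human.confidence >= robot.confidence:
        return human.action      # human keeps the initiative
    return robot.action          # robot takes the initiative


# The robot is more confident about a routine maneuver, so it leads...
print(arbitrate(Proposal("hold", 0.4), Proposal("advance", 0.9)))   # advance
# ...but a critical action with no human in the loop defaults to safety.
print(arbitrate(None, Proposal("engage", 0.99, critical=True)))     # hold
```

A real system would of course need calibrated confidence estimates and richer shared state (the "two-way window" above); the sketch only shows the arbitration skeleton.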

David Wood, a futurist, warns of technology snowballing out of control, but as Missy Cummings explains, this is a complex issue where fear is not the path to understanding. Let's embrace swarm intelligence, drones, and AI with humanity-centered design. Humans are slow, but we are still worth listening to, especially for critical decisions where lives are on the line.

#ethicsandai #ethicsinai #wiredforwar #autonomousvehicles #autonomousdriving #selfdrivingcars #selfdriving #selfdrive #darpa #dod #defense #defenseindustry #defenseinnovation #swarmintelligence #swarm #swarmrobots #robotics #roboticsinnovation #roboticsengineering #autonomousrobots #drones #dronesforgood #dronetechnology #dronesafety #droneswarm #ai #aicommunity #aichallenges #defensetech #defensetechnology

Jin W.

Generative AI Prompt Curator | Pharmacist | Connector of people across technology, healthcare and finance

1y

Only found this now. Damn

Scott Stewart

Senior Technical Staff at Control Vision, Inc.

1y

We're more likely to get corporate moral reasoning and machine efficiency in swarms.


In a past life I looked at this in depth. I ran a program that thoroughly scared me after it ended. Ultimately, competition will come down to speed, agility, and capability between machines, as humans will no longer be able to keep up. Fifteen years ago, multiple humans were already overmatched by a single simple machine. Speaking as a former special forces soldier (another life as well), having been in combat and seen many conflicts: humans have a right to fight humans. Machines possess no such self-awareness; their death is a dollar cost. If a machine can overmatch a human, why give it lethality when it only needs to inhibit or capture? This is why I addressed the Godfather of Silicon Valley, John Hennessy, with the question about the regrets of Kalashnikov and Oppenheimer. His answer was standard: technology will be the answer. No one thought that what Hitler did to the Jews was possible either. My answer: it is until it isn't. I fear that Pandora's box is too late to close now; that race has already started. Engineering ethics should be a mandatory course, since we now possess the ability to build beautiful, lethal, and intelligent hardware that will not fail to elegantly do its job.


Great achievement David. Congratulations.

Tim McGuinness, Ph.D., DFin, MCPO, MAnth

Partner @ wiSource | Director-Board Member @ SCARS Institute | Partner @ Emeritus Council | Strategic Analyst, Advisor, Public Speaker, Scientist, Polymath, Volunteer Advocate, Author, Roboticist, and Navy Veteran

1y

The simple answer is we cannot control it. But it gets worse. Just wait until criminals start to use drone swarms in their activities. Gangs are already using drones to watch their territories for police and locals who do not accept their rule! This is only going to get worse. On a related topic: https://www.dhirubhai.net/feed/update/urn:li:linkedInArticle:7136194536170508288
