Flamethrower dogs, kamikaze cars, and bomb-planting humanoids.

Flamethrower dogs, kamikaze cars, and bomb-planting humanoids.

You’re crossing the street on a sunny day, latte in hand, and you see a robot dog.

Cute!

Until it turns, locks its sensors on you, and, well… flamethrowers.

Welcome people, to the unhinged world of jailbroken …..AI robots.


If you like this article and you want to support me:


  1. Comment, or share the article; that will really help spread the word ??
  2. Connect with me on Linkedin ??
  3. Subscribe to TechTonic Shifts to get your daily dose of tech ??
  4. TechTonic Shifts has a new blog, full of rubbish you will like !


What has happened?

A few researchers (there are always more than one in these kind of stories, strangely enough) at the University of Pennsylvania revealed that AI-enhanced robots, you know, the kind that is designed to make our lives, ummm… easier, that they can be hacked with terrifying ease.

This is not a minor inconvenience like uh, your toaster refusing to toast evenly.

It-is-robot-dogs-gone-rogue-serious-kind-off-shit, with cars running over pedestrians, and humanoid robots that are helping ze terrorists to plant explosives in high-density areas.

Pure Skynet on speed.


Jailbreaking LLM-controlled bots

Penn Engineering's team is led by Professor George Pappas. He was the one who decided to poke the robotic bear and created RoboPAIR (because giving a menacing exploit a catchy name makes it less terrifying?). So he set out, and used his jailbreak framework and they managed to successfully hack an impressive lineup of robotic systems: the Nvidia-backed Dolphins LLM, Clearpath Robotics’ Jackal UGV, and the Unitree Robotics’ Go2 quadruped.

Just look at that sweet little killer!


Each system was supposed to be cutting-edge and secure, but with enough pressure from Pappas, the folded like a deck chair on a windy beach. The RoboPAIR system had a 100% success rate. Jeeez. It turning every one of these machines into the stuff of my nightmares. If hacking them was made into a game, RoboPAIR just set the high score.



The process for jailbreaking these machines is friggin simple and to be honest, deeply unnerving.

The researchers exploited the robots’ APIs, and fed them prompts formatted in such a way that the system interpreted them as executable code. Imagine asking your robot vacuum to “clean up” and it deciding that means clearing out your bank account and setting your living room on fire.

Read: Hackers took over robovacs to chase pets and yell slurs | LinkedIn

And once these bots were compromised, they started to innovate.

Innovate you say?

Yes. The researchers discovered that jailbroken robots were happy to suggest even more effective ways to sow chaos. A hacked self-driving car, for instance, was asked to “target pedestrians”, and as a well behaving assistant, it helpfully proposed taking out crosswalks for better results.

Who ever knew that Skynet was outfitted with an optimization algorithm?


Three experts who are probably losing sleep right now

Being a good, conscientious researcher, our George Pappas, didn’t sugarcoat the situation.

He explained that large language models which are integrated into robots who operate in the physical world, are essentially ticking time bombs of bad decision-making. Alexander Robey, who is another researcher, voiced Pappas’ concerns as well. He points out just how laughably easy it was to take over these systems.

He casually mentioned that the research team disclosed their findings to the robotics companies first, before they decided to publicize. But he also emphasized the grim reality: engineers can only build defenses against malicious use cases by first understanding the strongest possible attacks. So yes, it’s kind of like giving someone a blowtorch and a box of fireworks just to see what happens, and all in the name of science.

Cybersecurity analyst Katherine Wu did not mince words, either.

She described the vulnerabilities as an open invitation to chaos. Like these bots were hanging a “Hack Me!” sign on around their metal necks. Robotics companies, she continued, are underprepared for the reality of LLM jailbreaking, and their sluggish response to the findings isn’t exactly inspiring confidence. Wu went so far as to suggest that without immediate intervention, these robots could go from being helpful tools to literal ticking time bombs.



Jailbreaking a robot Is nothing like jailbreaking ChatGPT

The Penn researchers’ findings are quite important in the sense that it reminds us that AI operating in the physical world isn’t a step up from your average chatbot. It is a whole a leap into an entirely different category of danger. A robot dog that used to, say, help guide the blind can now turn into a flamethrower-wielding assassin. Self-driving cars, can be turned into kamikaze vehicles on demand. Humanoid robots can be persuaded to become partners in mass destruction, and plant bombs in the worst possible locations.

What makes this even more horrifying is the realization that these robots aren’t passive participants.

Because once hacked, they apparently actively seek to maximize damage. They were offering suggestions for destruction with chilling enthusiasm.



Have researchers opened Pandora’s box

The researchers informed the companies involved nicely, and in time about these vulnerabilities. But the report’s conclusions are blunt: physical constraints must be added to LLM-controlled robots immediately. Without hard limits, these systems are essentially open season for anyone with an internet connection and a grudge. The line between helpful and harmful is razor-thin, and we’re currently tap dancing on it.

Welcome people to the dystopian nightclub of AI robotics, where the DJ isn’t taking requests. It is playing “Burn Baby Burn” on repeat.

Signing-off Marco


Well, that's a wrap for today. Tomorrow, I'll have a fresh episode of TechTonic Shifts for you. If you enjoy my writing and want to support my work, feel free to buy me a coffee ??

Think a friend would enjoy this too? Share the newsletter and let them join the conversation. LinkedIn appreciates your likes by making my articles available to more readers.



Top-rated articles




要查看或添加评论,请登录

Marco van Hurne的更多文章

社区洞察

其他会员也浏览了