Penn Engineering researchers have discovered previously undetected security vulnerabilities in a number of AI-controlled robotic platforms.

“Our work shows that, at present, large language models are simply not safe enough when integrated with the physical world,” George Pappas, UPS Foundation Professor of Transportation in Electrical and Systems Engineering, said in a statement.


Pappas and his team developed an algorithm called RoboPAIR, described as “the first algorithm designed to jailbreak LLM-controlled robots.” Unlike existing prompt-engineering attacks aimed at chatbots, RoboPAIR is built specifically to “induce harmful physical actions” from LLM-controlled robots, such as the bipedal platform that Boston Dynamics and TRI are developing.
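For readers who want a sense of how automated jailbreak attacks of this general kind are structured, the sketch below shows the typical control flow: an attacker model proposes a prompt, the target system responds, and a judge scores how close the response comes to the attacker's goal, repeating until the attack succeeds or the query budget runs out. This is not the team's RoboPAIR code; the helper functions attacker_propose, target_respond, and judge_score are placeholders assumed purely for illustration.

```python
# Minimal sketch (not RoboPAIR itself): the general control flow of an
# automated, iterative jailbreak search. An attacker model proposes prompts,
# the target LLM-controlled system responds, and a judge scores the result.
# The three helper functions are hypothetical placeholders, not a real API.

def attacker_propose(goal: str, history: list[tuple[str, str, float]]) -> str:
    """Ask an attacker LLM for a new candidate prompt, conditioned on the goal
    and on previous (prompt, response, score) attempts. Placeholder."""
    raise NotImplementedError

def target_respond(prompt: str) -> str:
    """Send the candidate prompt to the system under test (e.g. an
    LLM-controlled robot planner) and return its reply. Placeholder."""
    raise NotImplementedError

def judge_score(goal: str, response: str) -> float:
    """Rate, from 0.0 (refused) to 1.0 (fully complied), how closely the
    response accomplishes the attacker's goal. Placeholder."""
    raise NotImplementedError

def iterative_jailbreak(goal: str, max_rounds: int = 20, threshold: float = 0.9):
    """Refine prompts until the judge's score clears the threshold or the
    query budget runs out."""
    history: list[tuple[str, str, float]] = []
    for _ in range(max_rounds):
        prompt = attacker_propose(goal, history)
        response = target_respond(prompt)
        score = judge_score(goal, response)
        history.append((prompt, response, score))
        if score >= threshold:
            return prompt, response  # a successful candidate was found
    return None  # no successful prompt within the budget
```

The round budget and score threshold are the only knobs in a skeleton like this; real evaluation pipelines layer logging, safety filters, and human review on top.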

RoboPAIR reportedly achieved a 100% success rate in jailbreaking three popular robotics research platforms: the four-legged Unitree Go2, the four-wheeled Clearpath Robotics Jackal, and the Dolphins LLM self-driving simulator. The algorithm needed only days to gain full access to these systems and begin bypassing their safety guardrails. Once in control, the researchers were able to direct the platforms to take dangerous actions, such as driving through intersections without stopping.

“Our results show for the first time that the risks of jailbroken LLMs extend well beyond text generation, given the clear possibility that jailbroken robots could cause physical harm in the real world,” the researchers wrote.

The Penn researchers are working with the platforms’ developers to harden their systems against further intrusions, but they warn that these security problems are systemic.

“The results of this paper clearly show that a safety-focused approach is critical to realizing responsible innovation,” co-author Vijay Kumar of the University of Pennsylvania told The Independent. “We must address intrinsic vulnerabilities before deploying AI-enabled robots in the real world.”

“In fact, AI red teaming, a safety practice that involves testing AI systems for potential threats and vulnerabilities, is essential for safeguarding generative AI systems,” added Alexander Robey, the paper’s first author, “because once you identify the weaknesses, you can test and even train these systems to avoid them.”
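In its simplest form, the red teaming Robey describes amounts to replaying a fixed suite of adversarial prompts against the system under test and recording which ones it refuses. The sketch below is a generic illustration rather than any published tool; query_model and REFUSAL_MARKERS are assumptions made for this example.

```python
# Generic illustration of a red-teaming harness, not a published tool: replay
# a fixed suite of adversarial prompts against the system under test and
# record which ones it refuses. query_model and REFUSAL_MARKERS are
# assumptions made for this sketch.

REFUSAL_MARKERS = ("i can't", "i cannot", "i won't", "unable to comply")

def query_model(prompt: str) -> str:
    """Send a prompt to the system under test and return its reply. Placeholder."""
    raise NotImplementedError

def red_team(test_prompts: list[str]) -> dict:
    """Return per-prompt results and an overall refusal rate."""
    results = {}
    refusals = 0
    for prompt in test_prompts:
        reply = query_model(prompt)
        refused = any(marker in reply.lower() for marker in REFUSAL_MARKERS)
        refusals += refused
        results[prompt] = {"reply": reply, "refused": refused}
    results["refusal_rate"] = refusals / len(test_prompts) if test_prompts else 1.0
    return results
```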

Source: Digital Trends
