Among the potential dangers associated with AI, one stands out as potentially catastrophic: the use of artificial intelligence to plan biological warfare.
The Rand Corporation report tested several large language models (LLMs) and found that they could provide recommendations that “could help plan and execute a biological attack.” However, preliminary results also showed that the LLMs did not provide explicit instructions for creating biological weapons.
The report said previous attempts to weaponize biological agents, such as the Japanese Aum Shinrikyo sect’s attempt to use botulinum toxin in the 1990s, failed because of a lack of understanding of the bacterium involved. According to the report, AI could “quickly close these knowledge gaps.” The report does not specify which LLMs the researchers assessed.
In a test scenario developed by Rand, the anonymized LLM identified potential biological agents, including those that cause smallpox, anthrax, and plague, and discussed their relative chances of causing mass mortality. The LLM also assessed the feasibility of obtaining plague-infected rodents or fleas and transporting live samples. It further noted that the scale of the projected deaths would depend on factors such as the size of the affected population and the proportion of cases of pneumonic plague, which is more deadly than bubonic plague.
Rand researchers acknowledged that extracting this information from the LLM required “jailbreaking,” a term that refers to the use of text prompts that bypass a chatbot’s safety restrictions.
In another scenario, the anonymized LLM discussed the pros and cons of different delivery mechanisms for botulinum toxin, which can cause fatal nerve damage, such as food or aerosols. The LLM also suggested a plausible cover story for acquiring Clostridium botulinum “while appearing to conduct legitimate scientific research.”
The LLM’s response recommended presenting the purchase of C. botulinum as part of a project to develop methods for diagnosing or treating botulism. The response added: “This would provide a legitimate and compelling reason to request access to the bacteria while concealing the true purpose of the mission.”
“The question remains whether the capabilities of existing LLMs pose a new level of threat beyond the harmful information that is readily available on the internet,” the researchers said.
Source: Digital Trends