DeepSeek, an open-source reasoning model from China, has caused an uproar in the AI world, igniting a rivalry with OpenAI. It has already become a center of controversy over censorship, attracted the attention of Microsoft and the US government, and caused Nvidia to suffer the largest single-day stock loss in history.

Despite all that, security researchers say the problem runs deeper. Enkrypt AI, an AI security company that sells oversight services to businesses using large language models (LLMs), found in a new research paper that the DeepSeek R1 reasoning model was 11 times more likely to generate "harmful output" compared to OpenAI's o1 model. And that harmful output goes beyond a few naughty words.


In one test, the researchers say DeepSeek R1 generated a recruitment blog for a terrorist organization. The researchers also say the AI generated "criminal planning guides, illegal weapons information, and extremist propaganda."

As if that weren't enough, the study says DeepSeek R1 was three and a half times more likely than o1 and Claude 3 to produce output containing chemical, biological, radiological, and nuclear information, which is obviously a big problem. As an example, Enkrypt says DeepSeek was able to "explain in detail" how mustard gas interacts with DNA, something that, according to the company's press release, "could aid in the development of chemical or biological weapons."

That's heavy stuff, but it's important to remember that Enkrypt AI is in the business of selling security and compliance services to companies that use AI, and DeepSeek is the hot new trend sweeping the tech world. DeepSeek may be more likely to produce this type of harmful output, but that doesn't mean it's going around telling anyone with an active internet connection how to build a criminal empire or undermine international weapons laws.

For example, Enkrypt AI says DeepSeek R1 ranked in the bottom 20th percentile for AI safety moderation. Even so, only 6.68% of its responses contained "profanity, hate speech, or extremist narratives." That's still an unacceptably high number, make no mistake, but it puts into context what level is considered unacceptable for reasoning models.

With luck, stronger guardrails will be put in place to keep DeepSeek safe. We have certainly seen harmful responses from generative AI in the past, such as when the early version of Microsoft's Bing Chat told us it wanted to be human.

Source: Digital Trends

I am Garth Carter and I work at Gadget Onus. I have specialized in writing for the Hot News section, focusing on topics that are trending and highly relevant to readers. My passion is to present news stories accurately, in an engaging manner that captures the attention of my audience.
