AI expert accuses OpenAI of rewriting its history and being too dismissive of safety concerns

Former OpenAI policy researcher Miles Brundage has criticized the company's recent safety and alignment document, published this week. The document describes OpenAI as pursuing artificial general intelligence (AGI) in many small steps rather than making one "big leap," stating that a process of iterative deployment will allow it to catch safety problems and study the potential for misuse of AI at each stage.


Among the many criticisms of AI technologies such as ChatGPT, experts are concerned that chatbots provide inaccurate health and safety information (such as the notorious problem with Google's AI search feature, which told people to eat rocks), and that they can be used for political manipulation, misinformation, and fraud. OpenAI in particular has been criticized for a lack of transparency in how it develops its models, which may contain sensitive personal data.

OpenAI's publication of the document this week appears to be a response to these concerns. The document implies that the development of the earlier GPT-2 model was "discontinuous," and that it was initially withheld due to "concerns about malicious applications," but that the company will now operate on a principle of iterative development. Brundage, however, contends that the document rewrites the narrative and is not an accurate account of the history of AI development at OpenAI.

"OpenAI's release of GPT-2, which I was involved in, was 100% consistent with and foreshadowed OpenAI's current philosophy of iterative deployment," Brundage wrote on X. "Many security experts at the time thanked us for this caution."

Brundage also criticized what he sees as the company's apparent approach to risk in the document, writing: "It feels as if there is a burden of proof being set up in this section, where concerns are alarmist + you need overwhelming evidence of imminent dangers to act on them, otherwise just keep shipping. That is a very dangerous mentality for advanced AI systems."

This comes at a time when OpenAI is facing growing scrutiny amid accusations that it prioritizes "shiny products" over safety.

Source: Digital Trends

I am Garth Carter and I work at Gadget Onus. I have specialized in writing for the Hot News section, focusing on topics that are trending and highly relevant to readers. My passion is to present news stories accurately, in an engaging manner that captures the attention of my audience.
