An OpenAI safety researcher has resigned after saying he had lost confidence that the company would behave responsibly when developing artificial general intelligence (AGI), systems supposedly capable of surpassing humans at various tasks.

As detailed by Business Insider, Daniel Kokotajlo, an OpenAI researcher who worked on the company's ethics team, resigned last April after losing enthusiasm for the general AI project.

The researcher further stated that he had asked the company to suspend development in order to analyze and manage the impact these types of models can have on society. In his forum post, Kokotajlo explained that he believes "most people advocating a pause are trying to argue against a 'selective pause' and for a real pause that applies to the large laboratories at the forefront of progress," and that this "selective pause" will ultimately not apply to large companies such as OpenAI.

The reason is obvious: AI companies, as well as large companies that also have the capability to develop AI models, are constantly competing with each other to release new AI products as quickly as possible. A pause at OpenAI would mean a delay in AGI development compared to other companies.

OpenAI and its desire to develop general AI

ChatGPT, a generative artificial intelligence application available on the Google Play Store.

Interestingly, the development of general artificial intelligence was one of the main reasons for the OpenAI soap opera and the unfortunate firing of Sam Altman, the company's CEO: the board of directors feared that the executive would approve the launch of very powerful artificial intelligence products without a prior assessment of the consequences.

In this regard, several leaders and experts in the field of artificial intelligence, including Elon Musk, called for a suspension of the development and training of new AI models, at least until it could be assessed whether they really can have a negative impact on society. The purpose of the pause, which was expected to last six months, was to create a series of common safety protocols.

At the moment, no company has presented a general AI model that threatens humanity, although OpenAI is known to be working on one, with an AGI project internally called Q* (Q-Star) that is reportedly even capable of performing mathematical operations.

Source: Hiper Textual

I am Garth Carter and I work at Gadget Onus. I have specialized in writing for the Hot News section, focusing on topics that are trending and highly relevant to readers. My passion is to present news stories accurately, in an engaging manner that captures the attention of my audience.
