The departure of OpenAI chief scientist Ilya Sutskever has led to the dissolution of the team focused on the long-term risks of artificial intelligence. The group that was supposed to protect humanity from AI lasted less than a year.
According to CNBC, citing a person familiar with the matter, some team members are being reassigned to other divisions within the company.
OpenAI announced the creation of the Superalignment team last July. In a statement, the company said the division would spend the next four years solving the problem of steering and controlling AI systems “that are much smarter than us.”
“But the vast power of superintelligence could also be very dangerous, and could lead to the disempowerment of humanity or even human extinction,” the statement said.
The ChatGPT developer planned to dedicate 20% of its computing resources to the effort, with Ilya Sutskever and Jan Leike co-leading the team charged with guarding against AI-related risks.
Leike also recently announced his departure from OpenAI. The former executive said he did not share management’s view of the company’s development priorities, arguing that OpenAI should make safety a priority in its development of generative AI.
This week, OpenAI co-founder Ilya Sutskever posted on his X account that he was leaving the company after nearly a decade. The position of chief scientist will be taken over by Jakub Pachocki, who previously served as the company’s director of research.
Author:
Akhmed Sadulayev
Source: RB