OpenAI co-founder Ilya Sutskever attended the annual NeurIPS 2024 AI conference for two reasons: to speak on several panels and to receive an award for the best paper that has “withstood the test of time.”
The expert’s statements attracted more attention than the prestigious award: Sutskever said that superintelligent AI will at some point become “unpredictable” or may want to “get rights,” TechCrunch writes.
According to Sutskever, once AI reaches a level where it is “qualitatively different” from today’s models, it “will have agency in the full sense of the word” (current AIs are “very weakly agentive”).
Once models gain some autonomy and self-awareness, they will begin to truly “reason” even from limited data, and reasoning will make them unpredictable.
“It’s not a bad end result if you have AIs and all they want is to coexist with us and just have rights,” Sutskever said.
His remarks land with particular weight against the backdrop of other news from the United States: local media reported that former OpenAI employee Suchir Balaji died by suicide in late November at the age of 26.
The researcher was known for criticizing the company’s AI policies; in his view, the “technology will bring more harm than good to society.”
Ilya Sutskever co-founded OpenAI in 2015, where he led work on the GPT language models and participated in the development of DALL-E. In 2024 he announced his departure from the company and founded Safe Superintelligence (SSI), a lab dedicated to AI safety. SSI raised $1 billion in September.
Author:
Ekaterina Alipova
Source: RB
