Miles Brundage left OpenAI to pursue AGI (Artificial General Intelligence) research in the nonprofit sector. He maintains that AI will soon be able to do on a computer everything a person can do, and that all that is needed is research unhindered by corporate policy.

The former head of OpenAI’s AGI department explained his departure from the company as a desire for “independence”


“I wanted to be more independent and less biased. I didn’t want my views to be dismissed, rightly or wrongly, as those of someone promoting a corporation’s products,” Business Insider quoted Brundage as saying in an interview with the Hard Fork podcast.

In the coming years, Brundage said, the industry will develop “systems that can, in principle, do everything that a person can do remotely on a computer.” This includes controlling a mouse and keyboard, or presenting an avatar as a “real person” in a video chat.

“Governments should think about what this means in terms of taxes on employees or how it can help in terms of investment in education,” he says.

When large companies will achieve AGI is the subject of heated debate in the industry. Most experts, like Brundage, believe it is a matter of years. Dario Amodei, CEO of OpenAI’s main competitor Anthropic, believes the first iterations of the technology could appear as early as 2026.

Brundage, who announced his departure from OpenAI last month after just over six years at the company, is better placed than most to understand OpenAI’s timeline.

During his time at the company, he advised its executives and board members on how to prepare for the arrival of AGI. He was also responsible for some of OpenAI’s biggest safety innovations, including the practice of bringing in outside experts to find potential problems in the company’s products.


OpenAI has seen a succession of prominent AI safety researchers and leaders depart, some of whom have expressed concerns about the balance between AGI development and safety.

Brundage said his departure was certainly not motivated by doubts about OpenAI’s success in the AGI space (“I’m pretty sure there’s no other lab that’s at the same level”), but he did hint that safety concerns were increasingly becoming obstacles to releasing some innovations.

“I couldn’t work on everything I wanted to, which often related to industry-wide issues. It’s not just about what we do within OpenAI, but also about what rules should exist in general,” he explained.

Author: Ekaterina Alipova

Source: RB
