A group of OpenAI whistleblowers has filed a complaint with the U.S. Securities and Exchange Commission (SEC) alleging that the company prohibits its employees from speaking out about the risks associated with artificial intelligence. The agency is being asked to open an investigation.
OpenAI bans employees from discussing AI risks – The Washington Post
This was reported by The Washington Post, citing a letter to the SEC. According to the document, OpenAI required employees to sign agreements that waived their rights to whistleblower compensation and obliged them to obtain the employer's permission before disclosing information to federal authorities.
According to one of the whistleblowers, the company’s employment contracts contain provisions making clear that it does not want its employees speaking to federal regulators.
“I don’t believe AI companies can create technology that is safe and in the public interest if they shield themselves from scrutiny and dissent,” the anonymous employee added.
Whistleblower attorney Stephen Cohn noted that such agreements effectively threaten employees with prosecution if they report any wrongdoing to the authorities.
According to the lawyer, this is contrary to federal law. Moreover, the agreement does not provide an exception for disclosure of violations of the law, which also violates SEC regulations.
OpenAI spokesperson Hannah Wong responded to the allegations by saying that the company’s policies protect employees’ right to disclose information and that the developer of ChatGPT welcomes discussion of the technology’s impact on humanity.
“We believe that debates [about the impact of AI] are important, and we have already made significant changes to our departure process, removing the confidentiality clauses,” the company’s spokesperson said.
In May, it became known that OpenAI had disbanded its long-term AI risk team. The group, which was meant to protect humanity from the dangers of artificial intelligence, had existed for less than a year.
Jan Leike, one of the team’s former leaders, said that safety was not among the generative AI developer’s priorities.
Photo: Mehaniq/Shutterstock