OpenAI on Thursday reversed a controversial policy that had effectively forced former employees to choose between signing a non-disparagement agreement that would never expire and keeping their vested equity in the company.
CNBC obtained an internal memo that was addressed to former employees and shared with current ones.
The memo, written to each former employee, stated that when leaving OpenAI, “you may have been informed that you were required to execute a general release agreement that included a non-disparagement provision in order to retain the Vested Units [of equity].”
“Regardless of whether you executed the Agreement, we write to notify you that OpenAI has not canceled, and will not cancel, any Vested Units,” the memo said.
According to the document, OpenAI will not enforce any other non-disparagement or non-solicitation provisions in the employee’s contract.
“As we shared with employees, we are making important updates to our departure process,” a spokesman for OpenAI told CNBC.
“We have not and will never take away vested equity, even if persons did not sign the exit forms. We’ll remove nondisparagement clauses from our standard departure paperwork, and we’ll release former employees from existing nondisparagement obligations unless the nondisparagement provision was mutual,” the statement said, adding that former employees would also be informed of the change.
“We’re incredibly sorry that we’re only changing this language now; it doesn’t reflect our values or the company we want to be,” the spokesman for OpenAI stated.
Bloomberg was first to report OpenAI’s release of former employees from the non-disparagement provisions. Vox was first to report on the NDA provisions themselves.
The reversal comes as OpenAI has faced mounting criticism over the past week.
On Monday, one week after OpenAI introduced a range of audio voices for ChatGPT, the company said it was pulling one of the viral chatbot’s voices, “Sky.”
“Sky” sparked outrage because it resembled actress Scarlett Johansson’s voice in the film “Her,” about artificial intelligence. Johansson claims that OpenAI copied her voice even though she had refused to let the company use it.
“We’ve heard questions about how we chose the voices in ChatGPT, especially Sky,” the Microsoft-backed startup wrote on X. “We are working to pause the use of Sky while we address them.”
Also last week, OpenAI disbanded its team focused on the long-term risks of artificial intelligence, just one year after announcing it, a person familiar with the matter told CNBC on Friday.
The person, who spoke to CNBC on the condition of anonymity, said some of the team members are being reassigned to other teams within the company.
The news came just days after both team leaders, OpenAI co-founder Ilya Sutskever and Jan Leike, announced their departure. On Friday, Leike stated that OpenAI’s “safety culture and processes have taken a backseat to shiny products.”
OpenAI’s Superalignment team, established last year, had focused on “scientific and technical breakthroughs to steer and control AI systems much smarter than us.” At the time, OpenAI said it would devote 20% of its computing capacity to the effort over four years.
The company declined to comment on the record, instead pointing CNBC to co-founder and CEO Sam Altman’s recent post on X, in which he expressed sadness over Leike’s departure and said the company still had more work to do.
On Saturday, OpenAI co-founder Greg Brockman posted a statement on X, attributed to both himself and Altman, saying the company had “raised awareness of the risks and opportunities of AGI [artificial general intelligence], so that the world can better prepare for it.”