A Safety Check for OpenAI
Safety concerns are swirling at OpenAI, putting its co-founder, Sam Altman, in the spotlight.
OpenAI’s fear factor
The tech world’s collective eyebrows rose last week when Ilya Sutskever, the OpenAI co-founder who briefly led a rebellion against Sam Altman, resigned as chief scientist. Some observers downplayed the departure, noting that Sutskever hadn’t been in the office in months and that he appeared to have left on cordial terms.
But contentious comments by another departing executive have raised questions about whether the company, one of the leading developers of artificial intelligence tools, is too lax on safety.
“Safety culture and processes have taken a backseat to shiny products,” Jan Leike, who resigned from OpenAI last week, wrote on the social network X. Along with Sutskever, Leike oversaw the company’s so-called superalignment team, which was tasked with making sure products didn’t become a threat to humanity.
Sutskever said in his departing note that he was confident OpenAI would build artificial general intelligence — A.I. as sophisticated as the human brain — that was “both safe and beneficial” to humanity. But Leike was far more critical on his way out.
Leike spoke for many safety-focused OpenAI employees, according to Vox. One former worker, Daniel Kokotajlo, told the online publication: “I gradually lost trust in OpenAI leadership and their ability to responsibly handle AGI, so I quit.” (Such concerns were why Sutskever pushed OpenAI’s board to fire Altman as C.E.O. last year, though Sutskever later said he regretted that move.)
Vox reports that such employees have been worried about OpenAI speedily pushing out ever-more-sophisticated technology — and about Altman reportedly raising money from autocratic regimes, including Saudi Arabia, to build an A.I. chips venture.
Another issue was OpenAI’s policies for departing employees, which included nondisclosure and nondisparagement clauses. Under that language, former workers risked losing any already vested equity if they spoke out against the company.