OpenAI’s fear factor
Eyebrows rose across the tech world last week when Ilya Sutskever, the OpenAI co-founder who briefly led a board rebellion against Sam Altman, resigned as chief scientist. Some downplayed his departure, noting that Sutskever hadn’t been in the office in months and that he appeared to leave on cordial terms.
But comments by another departing executive raised questions about whether the company, one of the leading developers of artificial intelligence tools, is too lax on safety.
“Safety culture and processes have taken a backseat to shiny products,” Jan Leike, who resigned from OpenAI last week, wrote on the social network X. Sutskever and Leike oversaw the company’s “superalignment” team, which was tasked with making sure its products didn’t become a threat to humanity.
Sutskever said in his departure note that he was confident OpenAI would build artificial general intelligence — A.I. as sophisticated as the human brain — that was “both safe and beneficial” to humanity. But Leike was far more critical:
Over the past few months my team has been sailing against the wind. Sometimes we were struggling for compute and it was getting harder and harder to get this crucial research done.
Leike spoke for many safety-first OpenAI employees, according to Vox. One former worker, Daniel Kokotajlo, told the online publication that “I gradually lost trust in OpenAI leadership and their ability to responsibly handle AGI, so I quit.” (Such concerns were why Sutskever pushed OpenAI’s board to fire Altman as C.E.O. last year, though Sutskever later said he regretted that move.)
Vox reported that some employees are worried about OpenAI rushing out ever more sophisticated technology, and about Altman reportedly raising money from autocratic regimes like Saudi Arabia to build an A.I. chip venture.
Also at issue were OpenAI’s policies for departing employees, which included nondisclosure and nondisparagement clauses. The language suggested that former workers risked losing their already vested equity if they spoke out.
After an uproar on social media, Altman wrote on X that he was “embarrassed” by the clauses. He said the company was removing that language from its exit documents, adding that it had never canceled fully vested equity.
More broadly, Altman and OpenAI’s president, Greg Brockman, sought to allay concerns. “We believe both in delivering on the tremendous upside and working to mitigate the serious risks; we take our role here very seriously and carefully weigh feedback on our actions,” the two wrote on X this weekend.
Any perception of disregard for safety at OpenAI could become a problem. The company’s investors, including Microsoft, clearly sided with Altman over Sutskever last year in supporting his reinstatement as C.E.O.
But the notion that OpenAI is being nonchalant about any threats its products might pose could lead to tighter regulation, and become a serious impediment in the company’s race against its competitors.