Ilya Sutskever, who once pushed for Sam Altman’s removal as OpenAI CEO and later regretted it, has launched Safe Superintelligence Inc.
Ilya Sutskever, who co-founded OpenAI and served as its chief scientist, is starting a new AI company focused on safety. On Wednesday, he announced Safe Superintelligence Inc. (SSI), a startup with “one goal and one product”: building a safe and powerful AI system.
The announcement describes SSI as a startup that pursues safety and capabilities in tandem, letting the company advance its AI system quickly while keeping safety first. It also points to the outside pressure that AI teams at companies like OpenAI, Google, and Microsoft often face; SSI’s “singular focus,” it says, allows it to avoid being “distracted by management tasks or product timelines.”
SSI’s Focus on Safety and Strategic Direction
“Our business model ensures that safety, security, and progress are protected from short-term business pressures,” the announcement states. “This allows us to grow calmly.” Along with Sutskever, SSI was co-founded by Daniel Gross, a former AI lead at Apple, and Daniel Levy, who used to work as a technical staff member at OpenAI.
Last year, Sutskever pushed to remove OpenAI CEO Sam Altman. Sutskever left OpenAI in May and suggested he was starting a new project. Soon after he left, AI researcher Jan Leike also quit OpenAI, saying safety had been overlooked in favor of new products. Gretchen Krueger, a policy researcher at OpenAI, also mentioned safety worries when she announced she was leaving.
While OpenAI deepens its partnerships with Apple and Microsoft, SSI is unlikely to follow suit anytime soon. In an interview with Bloomberg, Sutskever said that SSI’s first product will be safe superintelligence, and that the company “won’t focus on anything else” until that goal is achieved.