Ilya Sutskever Launches Safe Superintelligence Inc.
Ilya Sutskever, co-founder and former chief scientist of OpenAI, has launched a new artificial intelligence company named Safe Superintelligence Inc. (SSI). The company's stated goal is to develop a superintelligent AI system with safety as its absolute top priority.

The formation of Safe Superintelligence Inc. was officially announced on June 19, 2024, via a post by Ilya Sutskever on X (formerly Twitter), as reported by outlets including The New York Times, Time, and CNBC. SSI's singular mission is to develop AI that significantly surpasses human intelligence while ensuring it remains safe and beneficial. Uniquely, SSI intends to pursue only that one goal, creating safe superintelligence, insulating its work from the short-term commercial pressures, management overhead, and product cycles that affect many current AI labs. The company aims to tackle the core technical challenges of building safe superintelligence, framing the safety it has in mind as closer to nuclear safety than to the operational "trust and safety" work familiar from consumer platforms.
Joining Ilya Sutskever as co-founders of Safe Superintelligence Inc. are:
- Daniel Gross: an investor and entrepreneur who previously led Apple's AI and search initiatives, working there until 2017.
- Daniel Levy: a researcher and former member of technical staff at OpenAI who previously collaborated with Sutskever.
Sutskever will serve as SSI's chief scientist, responsible for the company's research direction. The company has established offices in Palo Alto, California, and Tel Aviv, Israel.
Sutskever's departure from OpenAI in May 2024 followed his involvement in the November 2023 attempt to oust CEO Sam Altman. After Altman was reinstated, Sutskever expressed regret for his part in the board's actions; he subsequently lost his board seat and his role at the company diminished. Before leaving, Sutskever co-led OpenAI's "Superalignment" team alongside Jan Leike, a group focused specifically on the long-term safety challenges of highly capable AI. Leike resigned shortly after Sutskever, publicly stating concerns that safety culture and processes had "taken a backseat to shiny products" at OpenAI.
The launch of SSI occurs amidst growing concerns within the AI community regarding the safety and ethical implications of advanced AI systems. Sutskever's new venture underscores the critical importance of safety in AI development, aiming to directly address the technical hurdles of creating verifiably safe superintelligence. This sharp focus on safety is particularly relevant given the recent high-profile departures from OpenAI, like Jan Leike's, which highlighted tensions between rapid capability advancement and long-term risk mitigation.