OpenAI co-founder Ilya Sutskever has launched a new AI start-up focused on “building safe superintelligence,” just a month after leaving OpenAI following a failed coup attempt against CEO Sam Altman.
On Wednesday, Sutskever, one of the world’s most esteemed AI researchers, unveiled Safe Superintelligence (SSI) Inc. According to a statement published on X, the new venture aims to be “the world’s first direct SSI lab, with a singular goal and product: a safe superintelligence.”
Sutskever co-founded SSI in the US with former OpenAI employee Daniel Levy and AI investor and entrepreneur Daniel Gross, who was previously a partner at Y Combinator, the well-known Silicon Valley start-up incubator once led by Altman.
Gross holds stakes in companies such as GitHub and Instacart, along with AI ventures including Perplexity.ai, Character.ai, and CoreWeave. The investors behind SSI remain undisclosed.
The trio emphasized that their sole focus is on developing safe superintelligence, a form of machine intelligence that could surpass human cognitive abilities. They stressed that SSI would not be burdened by revenue demands from investors, freeing the company to attract top talent for its mission.
When OpenAI was founded in 2015 as a not-for-profit research lab, it had a similar mission to SSI: creating superintelligent AI for the benefit of humanity. Although Altman claims this is still OpenAI’s guiding principle, the company has evolved into a rapidly growing business under his leadership.
Sutskever and his co-founders stated that SSI’s “singular focus ensures no distractions from management overhead or product cycles. Our business model ensures that safety, security, and progress are all insulated from short-term commercial pressures.” The company will be headquartered in both Palo Alto and Tel Aviv.
Sutskever is widely recognized as a leading AI researcher. He played a crucial role in OpenAI’s early advances in the emerging field of generative AI, which involves developing software that can produce text, images, and other media in response to human queries.
The announcement of his new venture follows a tumultuous period at OpenAI, marked by leadership clashes and safety concerns.
In November, OpenAI’s directors, who at the time included Sutskever, abruptly removed Altman as CEO, a move that stunned investors and employees. Altman returned days later under a reconstituted board that did not include Sutskever.
After the failed coup, Sutskever remained at OpenAI for several more months before departing in May. Upon his resignation, he said he was excited about a personally meaningful project, the details of which he promised to reveal in due time.
This isn’t the first time OpenAI employees have branched out to create “safe” AI systems. In 2021, Dario Amodei, a former head of AI safety at OpenAI, launched his own start-up, Anthropic. This company has since raised $4 billion from Amazon and additional millions from venture capitalists, reaching a valuation of over $18 billion.
While Sutskever has publicly expressed confidence in OpenAI’s current leadership, Jan Leike, another recent departure who worked closely with Sutskever, cited irreconcilable differences with the company’s leadership. He noted that “safety culture and processes have taken a back seat to shiny products” and has since joined OpenAI rival Anthropic.