Former OpenAI Chief Scientist Ilya Sutskever Launches New AI Company Safe Superintelligence

Ilya Sutskever

Palo Alto, CA - June 20, 2024: Ilya Sutskever, co-founder and former chief scientist of OpenAI, announced on Wednesday the launch of his new artificial intelligence company, Safe Superintelligence. As its name suggests, the company is dedicated to building safe superintelligent AI, setting itself apart from the competitive rush of the generative AI boom.

The announcement was made by Sutskever in a post on X, where he emphasized the company's unique focus and operational approach. "Our singular focus means no distraction by management overhead or product cycles, and our business model means safety, security, and progress are all insulated from short-term commercial pressures," Sutskever stated in his post.

Safe Superintelligence is described on its website as an American firm with offices in Palo Alto, California, and Tel Aviv, Israel. The company aims to prioritize AI safety and security, addressing growing concerns about the ethical and safe deployment of AI technologies.

Joining Sutskever as co-founders are former OpenAI researcher Daniel Levy and Daniel Gross, co-founder of Cue and a former AI lead at Apple. Their combined experience in AI research and industry is expected to drive the company's mission forward.

Sutskever's departure from OpenAI in May followed a tumultuous period for the company. He played a key role in the dramatic firing, and subsequent rehiring, of CEO Sam Altman in November of last year. Sutskever was removed from the OpenAI board after Altman's return, a development that preceded his decision to pursue new ventures in the AI sector.

With the establishment of Safe Superintelligence, Sutskever and his team are poised to contribute significantly to the ongoing discourse on AI safety and ethical practices, providing a counterbalance to the rapid commercialization of generative AI technologies by some of the world's largest tech companies.
