The Path of OpenAI's Fission
Author: Flagship
Image Source: Generated by Boundless AI
How valuable is the title of "former OpenAI employee" in the market?
On February 25 local time, Business Insider reported that Mira Murati, former Chief Technology Officer of OpenAI, had just announced her new company, Thinking Machines Lab, which is raising a $1 billion financing round at a valuation of $9 billion.
Currently, Thinking Machines Lab has not disclosed any product or technology timelines or specific details. The only public information about the company is its team of over 20 former OpenAI employees and its vision: to build a future in which "everyone has access to knowledge and tools, allowing AI to serve people's unique needs and goals."
Mira Murati and Thinking Machines Lab
The capital appeal of OpenAI entrepreneurs has formed a "snowball effect." Before Murati, the company SSI, founded by former OpenAI Chief Scientist Ilya Sutskever, had already achieved a valuation of $30 billion solely based on its OpenAI genes and a single idea.
Since Elon Musk's departure from OpenAI in 2018, former OpenAI employees have founded over 30 new companies, raising more than $9 billion in total financing. These companies have formed a complete ecosystem covering AI safety (Anthropic), infrastructure (xAI), and vertical applications (Perplexity).
This reminds one of the wave of Silicon Valley entrepreneurship that emerged after PayPal was acquired by eBay in 2002, when founders like Musk and Peter Thiel left to create legendary companies such as Tesla, LinkedIn, and YouTube—known as the "PayPal Mafia." The departing employees of OpenAI are also forming their own "OpenAI Mafia."
However, the script for the "OpenAI Mafia" is more radical: while the "PayPal Mafia" took 10 years to create two companies worth over $100 billion, the "OpenAI Mafia" has spawned five companies with valuations over $10 billion in just two years since the launch of ChatGPT, including Anthropic valued at $61.5 billion, Ilya Sutskever's SSI valued at $30 billion, and Musk's xAI valued at $24 billion. It is likely that within the next three years, a $100 billion unicorn will emerge from the "OpenAI Mafia."
The new wave of talent fission initiated by the "OpenAI Mafia" is impacting the entire Silicon Valley and even reshaping the global power landscape of AI.
The Fission Path of OpenAI
Among the 11 co-founders of OpenAI, only Sam Altman and Wojciech Zaremba, the head of the language and code generation team, remain in their positions.
2024 proved to be a peak year for departures from OpenAI. During the year, Ilya Sutskever (May 2024), John Schulman (August 2024), and others left in succession. The OpenAI safety team shrank from 30 to 16 members, a reduction of 47%; key executives, including Chief Technology Officer Mira Murati and Chief Research Officer Bob McGrew, departed one after another; within the technical team, core talents such as Alec Radford, chief designer of the GPT series, and Tim Brooks, head of Sora (who joined Google), also left; deep learning expert Ian Goodfellow joined Google, and Andrej Karpathy left for the second time to start an education company.
"Gathering is a fire, scattering is a sky full of stars."
Among the core technical backbones who joined OpenAI before 2018, over 45% have chosen to go their own way, and these new factions have disassembled and restructured OpenAI's technological gene pool into three strategic groups.
First is the "direct lineage" that carries on OpenAI's genes, a group of ambitious would-be OpenAI 2.0s.
Mira Murati's Thinking Machines Lab has almost completely transplanted OpenAI's R&D structure: John Schulman is responsible for the reinforcement learning framework, Lilian Weng leads the AI safety system, and even the neural architecture diagram of GPT-4 has been directly used as the technical blueprint for new projects.
Their "Open Science Declaration" directly addresses OpenAI's recent trend towards closure, planning to create a "more transparent AGI development path" through the continuous public release of technical blogs, papers, and code. This has triggered some chain reactions in the AI industry: three top researchers from Google DeepMind have jumped ship to join.
Ilya Sutskever's Safe Superintelligence Inc. (SSI), by contrast, has chosen a different path. Sutskever founded the company with two other researchers, Daniel Gross and Daniel Levy, abandoning all short-term commercialization goals to focus on building "irreversible safe superintelligence," a nearly philosophical technical framework. The company had barely been established when firms like a16z and Sequoia Capital decided to invest $1 billion to "buy into" Sutskever's ideals.
Ilya Sutskever and SSI
Another faction consists of the "disruptors" who left before ChatGPT.
Dario Amodei's Anthropic has evolved from being the "opposition to OpenAI" to becoming its most dangerous competitor. Its Claude 3 series models are on par with GPT-4 in multiple tests. Additionally, Anthropic has established an exclusive partnership with Amazon AWS, which means it is gradually eroding OpenAI's foundation in terms of computing power. The chip technology jointly developed by Anthropic and AWS could further weaken OpenAI's bargaining power in GPU procurement from Nvidia.
Another representative of this faction is Musk. Although he left OpenAI in 2018, several founding members of his company xAI also previously worked at OpenAI, including Igor Babuschkin and Kyle Kosic (who later returned to OpenAI). With Musk's powerful resources, xAI threatens OpenAI in terms of talent, data, and computing power. By integrating real-time social data streams from Musk's X platform, xAI's Grok-3 can instantly capture trending events on X to generate answers, while ChatGPT's training data is limited to 2023, creating a significant timeliness gap. This data closed loop is difficult for OpenAI to replicate, given its reliance on the Microsoft ecosystem.
However, Musk's positioning for xAI is not as a disruptor of OpenAI, but rather to reclaim the original intent of "OpenAI." xAI adheres to a "maximum open-source" strategy, for example, the Grok-1 model is open-sourced under the Apache 2.0 license, attracting global developers to participate in ecosystem building. This stands in stark contrast to OpenAI's recent trend towards closed-source (e.g., GPT-4 only providing API services).
The third faction consists of "breakthrough players" who are reconstructing industry logic.
Perplexity, founded by former OpenAI research scientist Aravind Srinivas, is one of the first companies to use AI large models to transform search engines. Perplexity directly generates answers through AI, replacing the list of links on search pages, and now has over 20 million searches per day, with financing exceeding $500 million (valued at $9 billion).
Adept's founder is David Luan, former Vice President of Engineering at OpenAI, who participated in the technical research of language, supercomputing, and reinforcement learning, as well as safety and policy formulation for the GPT-2, GPT-3, CLIP, and DALL-E projects. Adept focuses on developing AI Agents, aiming to help users automate complex tasks (such as generating compliance reports, designing blueprints, etc.) through the combination of large models and tool invocation capabilities. Its developed ACT-1 model can directly operate office software, Photoshop, and more. Currently, the core founding team of this company, including David Luan, has already joined Amazon's AGI team.
Covariant is an intelligent robotics startup valued at $1 billion. Its founding team comes from OpenAI's dissolved robotics team, with technical genes rooted in GPT model development experience. It focuses on developing robotic foundational models, aiming to achieve autonomous operation of robots through multimodal AI, especially focusing on warehouse logistics automation. However, three members of the "OpenAI Mafia" in Covariant's core founding team, Pieter Abbeel, Peter Chen, and Rocky Duan, have all joined Amazon.
Some "OpenAI Mafia" startups
Source: Public information, organized by Flagship
The leap of AI technology from "tool attributes" to "productivity factors" has spawned three types of industrial opportunities: substitution scenarios (such as disrupting traditional search engines), incremental scenarios (such as intelligent transformation in manufacturing), and reconstruction scenarios (such as breakthroughs in life sciences). The common characteristics of these scenarios are: the potential to build data flywheels (user interaction data feeding back into models), deep interaction with the physical world (robot action data/biological experiment data), and the gray space of ethical regulation.
The technological spillover from OpenAI is providing the underlying power for this industrial transformation. Its early open-source strategy (such as partial open-sourcing of GPT-2) has created a "dandelion effect" of technology diffusion, but as technological breakthroughs enter deeper waters, closed-source commercialization has become an inevitable choice.
This contradiction has given rise to two phenomena: on one hand, departing talents are migrating technologies like the Transformer architecture and reinforcement learning to vertical scenarios (such as manufacturing and biotechnology), building barriers through scenario data; on the other hand, giants are achieving technological positioning through talent acquisitions, forming a "technology harvesting" closed loop.
When the Moat Becomes a Watershed
The "OpenAI Mafia" is advancing rapidly, while the old employer OpenAI is "struggling."
In terms of technology and products, the release date of GPT-5 has been repeatedly postponed, and the mainstream ChatGPT product is generally perceived by the market as failing to keep pace with industry developments in terms of innovation speed.
In the market, the newcomer DeepSeek has begun to catch up with OpenAI, with model performance close to ChatGPT's but training costs only about 5% of GPT-4's. This low-cost replication path is dismantling OpenAI's technological barriers.
However, the rapid growth of the "OpenAI Mafia" is largely due to internal conflicts within OpenAI.
Currently, OpenAI's core research team can be said to be in disarray, with only Sam Altman and Wojciech Zaremba remaining among the 11 co-founders, and 45% of core researchers having left.
Wojciech Zaremba
Co-founder Ilya Sutskever has left to establish SSI, founding member Andrej Karpathy has publicly shared Transformer optimization experiences, and Tim Brooks, head of the Sora video generation project, has moved to Google DeepMind. In the technical team, more than half of the authors of the early GPT versions have left, most of whom have joined OpenAI's competitors.
At the same time, according to data compiled by Lightcast, which tracks recruitment information, OpenAI's own hiring focus seems to have changed. In 2021, 23% of the company's job postings were for general research positions. By 2024, general research accounted for only 4.4% of its job postings, reflecting a shift in the status of research talent within OpenAI.
The organizational culture conflict brought about by the commercialization transformation has become increasingly evident. While the employee scale has expanded by 225% over three years, the early hacker spirit has gradually been replaced by a KPI system, with some researchers openly stating that they are "forced to shift from exploratory research to product iteration."
This strategic oscillation has led OpenAI into a dual dilemma: it needs to continuously produce breakthrough technologies to maintain its valuation while also facing competitive pressure from former employees who quickly replicate results using their methodologies.
The key to victory in the AI industry lies not in parameter breakthroughs in laboratories, but in who can inject technological genes into the industry's capillaries—reconstructing the underlying logic of the business world through the flow of answers in search engines, the motion trajectories of robotic arms, and the molecular dynamics of biological cells.
Is Silicon Valley Dividing OpenAI?
The rapid rise of the "OpenAI Mafia" and the "PayPal Mafia" is largely attributed to the favorable legal environment in California.
Since California legislated against non-compete agreements in 1872, its unique legal environment has become a catalyst for innovation in Silicon Valley. According to California Business and Professions Code Section 16600, any clause that restricts professional freedom is deemed invalid, and this system design directly promotes the free movement of technical talent.
Silicon Valley programmers have an average tenure of only 3-5 years, far lower than in other tech hubs. This high-frequency turnover creates a "knowledge spillover" effect: former employees of Fairchild Semiconductor, for example, founded 12 semiconductor giants, including Intel and AMD, laying the industrial foundation of Silicon Valley.
The law prohibiting non-compete agreements may seem insufficient to protect innovative companies, but it actually promotes innovation. The mobility of technical personnel accelerates the diffusion of technology and lowers the barriers to innovation.
In April 2024, the U.S. Federal Trade Commission (FTC) issued a rule to fully ban non-compete agreements, further releasing the innovative vitality of the U.S. market. In the first year of implementation, an estimated 8,500 new companies would be created and 3,000 to 5,000 additional patents filed, with patent filings projected to rise by an average of 17,000 to 29,000 per year, an annual growth rate of 11-19%, over the following decade.
Capital is also an important driving force behind the rise of the OpenAI Mafia.
Silicon Valley's venture capital accounts for over 30% of the total in the U.S., with firms like Sequoia Capital and Kleiner Perkins forming a complete financing chain from seed rounds to IPOs. This capital-intensive model has generated a dual effect.
First, capital is the engine driving innovation; angel investors provide not only funds but also industry resource integration. When Uber was founded, its seed funding was only $200,000 from the two founders, with just three registered taxis. After receiving $1.25 million in angel investment, it began rapid financing, reaching a valuation of $40 billion by 2015.
The long-term focus of venture capital on the tech industry has also driven the sector's upgrading. Sequoia Capital invested in Apple in 1978 and Oracle in 1984, establishing its influence in the semiconductor and computer fields; in 2020, it began to invest deeply in artificial intelligence, participating in cutting-edge projects like OpenAI. Large corporate capital, such as Microsoft's multibillion-dollar investment in AI, has shortened the commercialization cycle of generative AI technology from several years to just months.
Capital also gives innovative companies a higher tolerance for failure. Accelerators filter out failing projects as quickly as they scale successful ones. According to startup analysis firm Startuptalky, the global failure rate for startups is 90%, while Silicon Valley's is 83%. Although startup success is rare, within the venture capital network, lessons from failure can quickly be converted into nutrients for new projects.
Image Source: startuptalky.com
However, capital has also, to some extent, altered the development paths of these innovative companies.
Top AI projects have achieved valuations exceeding $1 billion even before releasing products, which indirectly makes it harder for other small and medium-sized innovative teams to access resources. This structural imbalance is even more pronounced geographically: research from data provider Dealroom shows that the venture capital received in the U.S. Bay Area in a single quarter ($24.7 billion) equals the combined total of the world's second- to fifth-ranked venture capital centers (London, Beijing, Bangalore, Berlin). Meanwhile, emerging markets like India have seen a 133% increase in financing, but 97% of the funds flow to "unicorn" companies valued at over $1 billion.
Additionally, capital has a strong "path dependence," favoring fields with quantifiable returns, which has left many emerging basic scientific innovations struggling to secure strong financial support. In quantum computing, for example, Guo Guoping, founder of the Chinese quantum computing startup Origin Quantum, had to sell his house to fund the company in its early, cash-strapped days. His first fundraising attempt was in 2015, when data from China's Ministry of Science and Technology showed that the country's total investment in research was less than 2.2% of GDP, with basic research accounting for only 4.7% of R&D spending.
Beyond the lack of support, large capital is also using money as a lure to lock in top talent, pushing CTO-level salaries at startups into seven figures (in U.S. dollars for American companies, in renminbi for Chinese ones) and creating a cycle in which giants monopolize talent and capital chases the giants.
However, the hefty pre-product valuations of these "OpenAI Mafia" companies also carry certain risks.
Mira Murati's and Ilya Sutskever's companies have each secured billions of dollars in funding on the strength of a single idea. This reflects investors' trust premium on the technical capabilities of OpenAI's top team, but that trust carries its own risks: whether AI technology can remain in a phase of exponential growth, and whether vertical scenario data can form monopolistic barriers. When these two risks meet real-world challenges (such as a slowdown in multimodal model breakthroughs and surging costs of acquiring industry data), overheated capital may trigger a reshuffling of the industry.