Imagine it’s the American Revolution. A rag-tag band of rebels is fighting one of the world’s greatest superpowers.
Miraculously, after a long and bloody campaign, they seem to be winning. They’ve made some key allies. They’re scrappy and creative. They’re starting to win key battles, and it looks like they could actually unseat the enemy and win the war.
Then the Continental Congress hauls George Washington in and summarily fires him.
OpenAI is currently in chaos, and no one knows what will happen next. Altman may be returning to OpenAI, though the exact impact that will have is unknown. But ironically, the OpenAI board’s monumentally dumb business decision may ultimately help the company realize its original mission.
What Happened to Sam
A few weeks ago, OpenAI looked unstoppable. The company’s breakthrough success with ChatGPT made it one of the most powerful and influential AI companies on the planet.
In early November, the company announced new developer features that seemed certain to cement its lead as the generative AI platform of choice for thousands of startups and large enterprises.
That’s remarkable, given that OpenAI is a minnow. The company has only 770 employees. Its biggest rival, Alphabet (parent company to Google), has 118,899.
Despite its tiny size, OpenAI was on track to achieve a valuation of $80 billion through a private share sale. That works out to an unheard-of $104 million in market value per OpenAI employee.
At the center of OpenAI’s rise was Sam Altman, the company’s visionary (if somewhat strange) CEO. Altman did things like joking on Reddit that OpenAI had achieved Artificial General Intelligence (the holy grail of AI research), a prank that sent the tech world into a tizzy.
But he also oversaw the launch of one of the fastest-growing apps in history, as well as the rollout of enterprise features that looked certain to continue revolutionizing the world of AI — and making it more central to everything people do.
That success made the OpenAI board’s decision to fire Altman all the more bizarre. No one at the company or beyond appeared to see it coming.
As Casey Newton describes in his Platformer newsletter, no one on the board has yet given a clear reason for the sudden firing, instead lurching from vague messages about “lack of communication,” to thinly veiled statements about AI safety, to pleas for Altman to return.
We may never know what really happened. But one thing is nearly certain: OpenAI’s strange structure had a lot to do with Altman’s ousting.
A Strange Hybrid
As its name suggests, OpenAI was originally launched as a non-profit. Its 2015 mission statement reads like a tech utopian manifesto:
“OpenAI is a non-profit artificial intelligence research company. Our goal is to advance digital intelligence in the way that is most likely to benefit humanity as a whole, unconstrained by a need to generate financial return. Since our research is free from financial obligations, we can better focus on a positive human impact.”
Clearly, a lot changed between 2015, when OpenAI was “free from financial obligations,” and 2023, when the company launched a leading API for enterprise users and went on a $13 billion money-raising spree.
So what happened?
Back in 2015, the idea of building an Artificial General Intelligence with a few billion dollars in funding might have seemed reasonable. The machine learning revolution was sweeping the tech world, and machines were already getting smarter. It likely seemed like some new algorithm or processing breakthrough would chart a path forward.
As Large Language Models (LLMs) emerged as the most promising tech for taking steps towards AGI, though, all that changed. LLMs derive their intelligence primarily from processing massive amounts of data. That makes them monumentally expensive to train and operate. Training GPT-4, OpenAI’s flagship model, cost an estimated $60 to $100 million. And running ChatGPT reportedly costs as much as $700,000 per day.
Clearly, those kinds of costs aren’t reasonable for a scrappy non-profit. So as OpenAI matured, it changed its structure from a non-profit to a strange hybrid.
Essentially, the OpenAI we know today is a hybrid of a for-profit startup and a non-profit research lab. It retains aspects of its non-profit origins — including a board with the power to pull the plug on any potentially dangerous research — but also has the flexibility to raise capital through external investment.
One side effect of this strange structure is an unusual role for Altman. Unlike nearly every CEO of a fast-growing Silicon Valley startup, Altman has no equity in OpenAI. While he does have a board seat, he’s ultimately beholden to the whims of the company’s strange hybrid board — as we discovered in spectacular fashion last week.
This structure might make sense given OpenAI’s history, but it also introduced some major issues.
No One at the Helm
The history of Silicon Valley innovation is littered with stories of founders who pursued a singular vision that no corporate board would ever support. That’s because, for better or worse, founders often retain enough shares in their companies to ensure that they can call the shots.
Facebook’s mid-2000s blitz-scaling and its controversial $1 billion purchase of Instagram in 2012 (a company with just 13 employees at the time) seemed crazy in the moment, but founder Mark Zuckerberg reportedly used his controlling interest in the company to push them through. Looking back, the Instagram purchase was one of the best bargains in tech history.
Likewise, Google’s decision to acquire Android and build it into its own mobile operating system looked like a giant money hole in 2008. The company’s founders pushed it through, though, and effectively bought Google a seat at the mobile table, keeping its lucrative search business alive for another decade.
Granted, founders aren’t perfect. One could argue that both social media and mobile devices could benefit from a bit more oversight. But still, founders who retain a controlling interest in their companies at least have the power to set and execute a consistent agenda — and to make unpopular or controversial decisions without the need to convince others of their vision.
Without a controlling stake in OpenAI, Altman lacked that power. And although the exact reasons for his ouster are still emerging, it’s clear that this lack of control was an Achilles’ heel for the company.
A Brilliant Dispersal
So what happens next? Immediately after his ouster, Altman accepted a high-profile job at Microsoft, OpenAI’s largest investor. More than 700 OpenAI employees reportedly signed an open letter saying they would follow him. Days later, reports indicated that Altman would likely return to OpenAI through an as-yet-unknown deal.
Even if Altman pulls a Steve Jobs and successfully returns to OpenAI, the governance cat is out of the bag. The company is unlikely to maintain its original meteoric trajectory, especially with the sword of Damocles of a watchful (and perhaps vengeful) board hanging over Altman’s every decision.
At best, it can forge ahead while its staff members quietly polish up their resumes and fend off calls from recruiters offering them ludicrous sums of money.
And even if many of OpenAI’s staff land at Microsoft, it’s unlikely that all of them will love life at a corporate behemoth enough to stay long-term. Instead, many will likely do what Silicon Valley talent has done for generations: flit from company to company, join competitors, or found AI companies of their own.
A Beautiful Demise
Ultimately, that dispersal of talent and knowledge may prove incredibly impactful for the broader AI industry.
Very few people truly understand the inner workings of today’s most advanced LLMs, and know how to create good ones. A disproportionate number of those people likely work at OpenAI.
If they disperse to other companies — both in the AI space and beyond — California’s strong prohibition on non-compete agreements means that they’ll take their hard-won knowledge and expertise with them.
Instead of 770 experts concentrated in a single company, we could see 770 companies, each harboring a “seed” of advanced AI knowledge in the form of a former OpenAI staff member. The end result could be a broader ecosystem of distributed AI innovation, where a single monolith once stood.
There is enormous precedent for this. The slow demise of Fairchild Semiconductor led to the founding of companies including Intel, AMD, National Semiconductor and many others. The knowledge dispersed from Fairchild effectively created the industry that gives Silicon Valley its name. It’s a story that has repeated itself so often in the history of the Valley that “creative destruction” is a universally understood term in the region’s lexicon.
Here’s the deepest irony, though: if OpenAI’s knowledge disperses around the economy, it might bring the company closer than ever to achieving its original mission of “advancing digital intelligence in the way that is most likely to benefit humanity as a whole.”
Revolutions — whether political or technological — rarely stay contained. If OpenAI’s implosion plants the seeds for better AI talent and knowledge in healthcare firms, government, transportation, or green energy, those “seeds” could grow in a way that has far more impact on humanity than the creation of ever-better chatbots, which seemed to be OpenAI’s main mission up until Altman’s ouster.
The OpenAI board’s decision to kneecap their own company may go down as one of the dumbest governance moves in history. It’s unquestionably bad for OpenAI as a company, its investors, and the startups and businesses that rely on its services.