Men at some time are masters of their fates: The fault, dear Brutus, is not in our stars, But in ourselves, that we are underlings.
So, if you were in tech or even somewhat online, this weekend must have been a whirlwind of news. I’ve been rewriting and adding to this article for the best part of a day and a half, because it’s the saga that never ends.
It reminds me less of corporate battles or egos and more of the Greek tragedies.
OpenAI might have been the highest-flying startup of all time. I use the term advisedly, since its product went from zero to $1.3 Billion in annual run-rate revenues in 9 months. It started a worldwide conversation about the fears and benefits of Artificial Intelligence, from the US Senate to the UN to the EU to the UK to India to … Everyone, and I mean everyone, was discussing this.
A breakthrough that was compared to fire, to language, to the printing press, and, in its least flattering comparison, to the internet or software itself: that was OpenAI’s legacy.
They did this under the leadership of Sam Altman, who has been the public face of this intense scrutiny over the past few years, and also the face of its progress. The modern Oppenheimer, he was called.
On Friday the 17th of November, the Board of OpenAI made the decision to fire the CEO, Sam Altman. Then Greg Brockman, the legendary President of OpenAI and previously the Chairman, quit.
The Board that did this was helmed by another historic figure, Ilya Sutskever, someone Elon Musk called the linchpin and secret sauce to making GPT work, and who has since said he regrets his actions and wants to work to bring the team back together.
There’s no precedent. This is like Steve Jobs getting fired, except it happened after the iPhone took the world by storm.
The drama was extraordinary. Sam gets fired Friday afternoon, in a letter saying Greg is no longer chairman but stays at the company. Microsoft, which has put $10 Billion into the company, learns about this 1 minute before us. Greg quits a couple of hours later. More researchers quit. Everyone, from Satya Nadella to the tech luminaries to everyone on Twitter to OpenAI employees, is asking for any clarification, and none is forthcoming.
Then the Board tried to backtrack and court Sam and Greg to come back, pushed by the employees and by Mira Murati, the interim CEO. They negotiated over the weekend, had it all fall apart, courted almost everyone from Dario Amodei of rival Anthropic to Nat Friedman, who used to run GitHub, and eventually hired Emmett Shear, the ex-CEO of Twitch, as the new CEO. Meanwhile Sam and Greg, along with a bunch of others, joined Microsoft to head a new AI lab. Oh, and most importantly, most of the employees are threatening to quit unless the Board resigns and brings Sam back.
This is the most insane weekend ever to happen in tech. The closest analogy I can think of is Lehman Brothers, except it’s happening to the company that’s the strongest in its segment. Maybe the FTX saga, but that was actual fraud. Perhaps Theranos? Also fraud.
This is unprecedented.
It’s Shakespearean. A story of power and money and human nature.
Power
At the best of times VCs are lax with corporate governance. I’ve written before about how its absence was the most egregious warning sign in the FTX saga.
Non-profit boards and companies don’t usually mix, but it’s not exactly a complete no-no. We’ve found plenty of ways to inject “do good things” vibes into corporations, whether it’s non-profit boards or trusts or “don’t be evil” mottos, or public benefit corporations like Anthropic.
This is nuanced. Novo Nordisk, Ikea and Hershey’s are all controlled by nonprofit foundations. There are arguments that some of these aren’t quite the same, that they’re partly tax shields, but the point remains. Or take the Tata Group, the Indian conglomerate controlled by philanthropic trusts overseeing $150B in revenues.
So it’s not that the structure itself was doomed to fail. We have keiretsus and chaebols and whatever-the-structure-is-in-China, plenty of corporate structures to satisfy one and all. Samsung’s or Toyota’s structure makes OpenAI’s look simple. They even have factions with opposing aims. That’s, in fact, what shows like Succession are about. That’s the reason there’s corporate infighting in the first place.
Power into will, will into appetite; And appetite, an universal wolf
What none of them had was a Board that took management decisions like firing the CEO for no apparent reason.
As incoming CEO Emmett Shear put it: “The board did not remove Sam over any specific disagreement on safety, their reasoning was completely different from that.”
The problem with the power wasn’t that it was misallocated. The problem was that “doing the best thing for humanity” is an insanely vague goal that anyone can fill with their own conscience. While Ilya recanted later and unburdened his conscience, the best anyone else could do was offer solace.
And unless you have Mahatma Gandhi or Martin Luther King on there, you’re probably not going to trust their judgement on what “doing the best thing for humanity” means.
Note, I think the non-profit board was wrong, but what I despise is the incompetence in how they did what they did. It’s perfectly okay for them to want to stop work in its tracks because they didn’t like the way it was heading relative to their goal: ensuring “safe artificial general intelligence is developed and benefits all of humanity”.
But that’s not what happened. On a random Friday, after new releases that marked perhaps the height of their success, and from an incredible position of strength, they imploded the company in about as spectacular a fashion as possible. The only real comparison is perhaps the last season of Game of Thrones.
Money
The problem with non-profits creating AI is that, by and large, making AI costs a lot of money. You need to buy GPUs. GPUs are expensive. And it’s really difficult to get $10 Billion to throw at a lark in the hope that it will “benefit humanity”.
Well, that’s not strictly true. VCs do this all the time: put money into companies with a low probability of success and incredibly vague, high-minded missions, to make the world a better place and also to make an incredibly large amount of money.
So they had to take money from people who wanted to make more money. The thing with doing that is, despite Silicon Valley’s insistence that it’s mostly above money and focused on missions, you can’t call and tell them “hey, you know what, thanks for all the money, but we think we’re going to stop being so commercial. Please send more cheques for GPUs. Thanks”. Satya Nadella seems like a nice guy, but this wouldn’t make him happy.
Plus, and this is important, despite making more than a billion dollars in 12 months, OpenAI still needs incredible amounts of money. Sam was trying to raise a round even as he was sacked.
Vaulting ambition, which o'erleaps itself and falls on the other.
Money, in this instance, is a weird form of power. Mostly because OpenAI continues to need it, like all technology companies. And they can’t get it if they want to act like a research laboratory that wants to disengage from the world like Castalia.
You can’t say a) we need the money to research, b) we can’t tell you what we will do with it, c) we won’t try and make any money or help you gain from the investment in any way, d) don’t worry it’s for the benefit of humanity. The investors have investors of their own who will ask them questions like “why” or “what the hell” and they’ll have to try and explain your altruism in a boardroom wearing a tie, and that’s hard. They’ll get fired. And the net happiness in the world would drop.
I guess this is obvious, but then it seemingly wasn’t obvious to those who sat on the OpenAI non-profit board.
Human nature
Despite the GPUs and billions being needed, the weird thing about the AI boom is that most of the knowledge required is shallow and sits inside people’s heads. Those people have other parts to their brains, like love or loyalty or avarice, which sometimes push them to do things.
So when they saw a) an $80 billion secondary transaction which would’ve made them rich vanish, and b) no reason why any of it actually had to happen, they naturally got upset and angry.
These are people who love a mission and joined for a mission and wanted to get filthy rich from a mission, and none of that will happen if they think they’re being led by a confederacy of dunces.
So when 700+ out of 770 employees threaten to walk out the door, that’s a problem. For the Board, and also for Emmett Shear, who was brought on board to be the new interim CEO. After all, you can kind of keep yourself warm with GPUs on the cold San Francisco nights, but that’s not going to help you keep the customers you have, or get new ones, if you’re basically fighting the winds and the tide just to keep your employees.
And in a historically strong labour market, AI researchers are in an even stronger position than anyone else. They, quite literally, can quit and get a job in a minute. They have the hottest skillsets in the universe at the moment.
Ignorance is the curse of God; knowledge is the wing wherewith we fly to heaven
Them walking out the door a) does reduce the existential risk from OpenAI, because OpenAI will cease to exist, and b) increases the proliferation of AI, spreading it out from one place that had a clearer safety lens than anything else that existed outside it.
Lessons
The lessons most people are focused on are the ones about corporate governance: if you have opposing points of view, you’ll have clashes.
I think this is wrong, since Boards always have opposing points of view. That’s the entire reason they exist. In this case the clash was more existential, between people worried about AI and those less so, but this is still much smaller than the delta between capital and labour, or between employee Board members and their VC investors.
No, the problem wasn’t the clash; the problem was the incompetence. They made an unprecedented move without telling any of the other stakeholders, and then said nothing to anyone for the next three days while the whole world fell apart around them!
The OpenAI Board’s absolute incompetence is a perfect example of why, if you’re going to have vague goals like “benefit humanity”, you had better have exceptional people who can choose accordingly. Everyone else needs measurable KPIs.
The Board literally informed the leadership team that allowing the company to be destroyed “would be consistent with the mission.”
This is an irreconcilable difference. Like putting Greta Thunberg on the board of Exxon.
There’s no easy lesson here, beyond don’t do that!
I usually have a triangle of reasons why dumb things are done inside organisations - Bureaucracy, Incompetence and Malice.
Bureaucracy is usually a great answer in larger companies, because it explains a ton.
Malice is also a good answer in specific instances, like FTX or Madoff.
Incompetence, though, is the one we somehow forget exists when we deal with smart people. But it’s still the lead singer of the band.
It’s the adage: Never attribute to malice that which is adequately explained by stupidity.
Like most tragedies, though, this saga offers oh so many observations. In no particular order:
If you’re going to fire the most beloved CEO of your high-flying startup, you should explain why
If you get money from others saying “we will make cool things and make even more money”, then you can’t turn around and say “psych!” without people getting upset
If you want people to agree with you, you should explain what you’re thinking and why
Employees are the most valuable asset of almost any company
You should explain your thinking about high stakes Board decisions to everyone
What's “legally accurate” isn't a perfect reflection of what should be done in most instances
Hubris destroys
Microsoft had the option of either freely licensing everything OpenAI produced until it hit AGI, or essentially owning OpenAI, and I’m sure they’re indifferent, since either way they win
Google is nowhere to be seen, presumably busy working on thinking about Gemini
Everyone else from Salesforce to Nvidia made a play for OpenAI talent, because of course they did
When the traitorous eight left Shockley, Silicon Valley started. This weekend, though, it was as if Shockley himself was fired from his own company, and then everyone else quit behind him.
A beloved leader unceremoniously ousted by knaves, one conspirator later inspiring pathos, the entire team rallying to show support for their ousted leader, while those who pushed him out seethe and roar at their impotence.
This is theatre. This is also human nature, red in tooth and claw.