There’s only one story on everyone’s mind this week in AI…
The sudden, controversial firing of OpenAI CEO Sam Altman—and the chaos that continues to unfold as a result of the event.
In Episode 73 of The Marketing AI Show, Marketing AI Institute founder/CEO Paul Roetzer dived deep into what’s going on—and what comes next.
Obviously, this story is developing at lightspeed. But here’s what has happened as of Tuesday morning, November 21.
However, even if new developments arise, you’ll want to read the following analysis from Roetzer to understand the full context of the situation—no matter what happens next.
Understanding the history and structure of OpenAI as an organization is critical to understanding what happened here—and where it goes from here.
It’s easy to forget that OpenAI started as a non-profit back in 2015 when it was founded by Altman, Ilya Sutskever (OpenAI’s Chief Scientist), Greg Brockman (until these events, OpenAI’s President), and Elon Musk.
The non-profit’s stated goal at the time was to “advance digital intelligence in the way that is most likely to benefit humanity as a whole.”
In 2018, OpenAI published a charter, which outlines the organization’s strategy around the dawn of artificial general intelligence (AGI). In the charter, OpenAI defines AGI as “highly autonomous systems that outperform humans at most economically valuable work.”
According to the charter, “the timeline to AGI remains uncertain” but OpenAI’s mission is to ensure that AGI “benefits all of humanity” and their “primary fiduciary duty is to humanity.”
Roetzer says it’s critical to understand the concept of AGI and OpenAI’s commitment to building safe, beneficial AGI in order to understand what’s happening right now.
In 2019, OpenAI announced OpenAI LP, a capped-profit company that sits underneath the non-profit OpenAI.
OpenAI said it needed to form the entity to raise and invest massive amounts of money into compute and talent so it could scale fast enough to fulfill its mission.
“We're four years after the founding of OpenAI as a non-profit,” says Roetzer. “They are now launching a for-profit entity underneath that non-profit, because they believe they're going to need billions of dollars that they cannot raise just through donations to train the most advanced models.”
Importantly, the company says in the announcement that:
“Going forward (in this post and elsewhere), ‘OpenAI’ refers to OpenAI LP (which now employs most of our staff), and the original entity is referred to as ‘OpenAI Nonprofit.’”
So, the OpenAI you know today is the for-profit company formed in 2019, not the non-profit formed in 2015, unless otherwise stated.
The post announcing the for-profit wing of OpenAI also makes it very clear that “the mission comes first.”
Said OpenAI in the announcement:
“We've designed OpenAI LP to put our overall mission, ensuring the creation and adoption of safe and beneficial AGI, ahead of generating returns for investors.”
This is a critical point, says Roetzer. The for-profit OpenAI is designed to advance the OpenAI charter, and the company is controlled by OpenAI’s non-profit board.
“So that means employees who have stock options, investors who have invested in the company, they all sign an agreement that if there comes a point that the benefit of humanity is more important than the work they're doing, they accept that their value may go to zero for what they have,” says Roetzer.
OpenAI’s charter and the OpenAI LP announcement make it clear that they are worried about the potential harm and rapid change that AI can cause—especially if AGI is achieved.
They state in the OpenAI LP announcement that they would merge with a value-aligned organization even if it means paying out investors “to avoid a competitive race which would make it hard to prioritize safety.”
“When you think about the rapid growth of OpenAI over the last 12+ months, you can start to see where there’s friction between what is happening and what the charter states,” says Roetzer.
The organization appears to have been increasingly challenged by the tension between the rapid, market-incentivized innovation of the for-profit wing and the non-profit wing’s mission to develop potentially super-powerful AGI as responsibly as possible.
On November 13, speaking to The Financial Times, Altman said OpenAI is working on GPT-5, though there is no timeline yet for its release.
On November 16, at the Asia-Pacific Economic Cooperation (APEC) Summit, he said he’d recently been “in the room” when the company “[pushed] the veil of ignorance back and the frontier of discovery forward.”
We also know from multiple sources, says Roetzer, that Altman has been looking to raise billions of dollars from SoftBank and investors in the Middle East to build a chip company that would compete with Nvidia. There are also rumors he’s in talks to build a device, potentially to compete with the iPhone.
“It is not clear, despite whatever speculation you might hear, how informed the board was of these efforts,” says Roetzer. “Nor is it clear what the board knew about what Sam saw in that room when they ‘pushed back the veil of ignorance.’”
OpenAI currently appears to be in a state of open revolt.
700+ employees have signed a letter saying they’ll quit if the board doesn’t reinstate Altman—and many already have. Many are likely following Altman and Brockman to Microsoft. For now, Emmett Shear appears to be interim CEO. And, we still don’t know exactly why the board fired Altman as of writing.
So, what happens next? Roetzer has some thoughts.
OpenAI is highly reliant on Microsoft to do what it does. Through its previous investments in the company, Microsoft controls some of OpenAI’s IP. Microsoft’s cloud infrastructure is also critical to training OpenAI’s models.
“It's really hard to imagine a scenario where OpenAI comes out of this as anything more than a shadow of its former self and a non-profit research lab that has no computing power and no talent,” says Roetzer.
Almost overnight, companies, startups, and users who rely on OpenAI technology are all asking the same question:
“What is the future of OpenAI?”
They have to be concerned that the chaos at the company will completely disrupt the tools they’ve built using OpenAI technology or the products they pay for, like ChatGPT Plus.
“So I would think that the phones are ringing off the hook, or the emails are burning up for Anthropic, Cohere, Google, and Amazon,” says Roetzer. “All the other companies who stand to benefit, whose customers may want a more stable company.”
“The open source crowd just got the greatest argument they could have ever wanted not to centralize AI into a few closed tech companies,” says Roetzer.
If we can’t trust a major company like OpenAI to govern itself, how can we trust it to govern AGI and superintelligence?
“This talent, hundreds of the top AI researchers in the world, are going to disperse,” says Roetzer. They’ll go to other major AI and technology companies, not just Microsoft, or start their own.
Innovation will still be largely centralized due to the need for highly advanced chips from players like Nvidia. But it will also accelerate within the players that can afford these chips.
“So I think it accelerates the development of the frontier models, but they probably are still centralized in four to five companies,” says Roetzer.
Roetzer expects the U.S. government to be watching this with great interest.
“And it is absolutely going to impact their next move and the speed with which they make their next move,” says Roetzer. “Because they now see how dangerous this is. How quickly these things can get out of control when you're relying on these few select companies.”
Roetzer says there’s one key question on his mind:
What did Sam witness in that room he talked about at the Asia-Pacific Economic Cooperation (APEC) Summit?
We referenced this above, but he said he’d recently been “in the room” when the company “[pushed] the veil of ignorance back and the frontier of discovery forward.”
Has OpenAI made a breakthrough in the pursuit of AGI?
When we think about the full context here, including the mission in OpenAI’s charter, “it really makes me wonder, what have they done?” says Roetzer. “Has there been some milestone in the development of AI that we’re not privy to yet? And that played a role in the board’s thinking and action?”
“I think we'll soon learn why the board made the move it did. They can't obviously keep this secret forever.”