In nearly 2 hours of conversation, Altman covered everything from his shocking firing to Elon Musk's lawsuit against the company to GPT-5.
Which of Altman's comments should you be paying attention to?
I asked Marketing AI Institute founder/CEO Paul Roetzer on Episode 89 of The Artificial Intelligence Show.
Why bother unpacking Altman's interview?
Because it's one of the best ways to get a glimpse of the future, says Roetzer.
"One of the ways I learn most about AI is listening to the people leading these firms."
Understanding the technology behind AI is critical. But so is understanding the human side of the decisions being made in AI.
This interview stood out for how raw it was. It jumped right into Altman's firing. "You can tell there are significant scars for Sam mentally," says Roetzer.
Some speculate that Altman was fired because the company developed AGI. And that Ilya Sutskever supported the firing because of fears over the technology.
During the interview, Altman very clearly put that rumor to bed:
"Ilya has not seen AGI. None of us have seen AGI. We’ve not built AGI."
Altman said he expects some things to "go theatrically wrong with AI."
He worries that public backlash against AI could target him, too. He told Fridman:
"I don't know what the percent chance is that that I eventually get shot, but it's not zero."
There was plenty of discussion about Musk's lawsuit against OpenAI.
During that discussion, Altman confirmed that, contrary to Musk's side of the story, it was Musk who chose to part ways with OpenAI. He did so because he wanted to do the very thing he's now criticizing the company for:
Raise funding to build a huge company to pursue AGI.
At one point, Altman reiterated to Fridman:
"Elon chose to part ways." Not OpenAI.
"You can tell Sam is just disappointed in how this has played out," says Roetzer. He said multiple times how much he looked up to Musk. And it's clear having his former hero now targeting him is personally painful.
What Altman didn't say also spoke volumes.
Fridman asked Altman:
"Do you think training AI should be or is fair use under copyright law?"
His answer was less than forthcoming.
"Sam completely dodged it," says Roetzer.
He said (from the episode transcript):
"I think the question behind that question is, do people who create valuable data deserve to have some way that they get compensated for use of it, and that I think the answer is yes. I don’t know yet what the answer is. People have proposed a lot of different things. We’ve tried some different models. But if I’m like an artist for example, A, I would like to be able to opt out of people generating art in my style. And B, if they do generate art in my style, I’d like to have some economic model associated with that."
Altman also talked about potential AI disruption to labor markets.
He noted that his framework for thinking about this isn't job-related, it's task-related. "The way I think about it is not what percent of jobs AI will do, but what percent of tasks will AI do over what time horizon."
This aligns with how he's talked about the subject before, says Roetzer.
(And it's especially important context given a controversial quote on marketing work from Altman we previously reported on.)
Fridman talked about how impressive GPT-4 was, prompting Altman to reply:
"I think it kind of sucks."
This is actually quite important, says Roetzer.
Altman is already thinking 3-5 years out. "He looks at the current stuff and says 'This is obsolete.'"
While he did not give a timeline for GPT-5, the speculation is that it arrives midway through 2024. When it does, GPT-4 will feel as obsolete as GPT-3 does today. And the leap from GPT-4 to GPT-5 will feel just as mind-blowing.
This is at a time when most people are still trying to understand and apply GPT-4 to their business, says Roetzer.
So, you should take comments like this seriously. They indicate just how fast Altman sees innovation happening in the near future.
"He has a very, very strong history of being directionally correct in what he thinks the world will look like and timelines that will happen," says Roetzer.
Perhaps the most interesting part of the interview was what Altman refused to talk about, says Roetzer.
He was asked about Q* and immediately clammed up.
"When Sam got fired, there was this belief that they had had this scientific breakthrough they were calling Q*," says Roetzer. Commentators speculated that Q* was about doing math in new ways that enhanced AI reasoning.
Reasoning is a core unlock, says Roetzer. If models get much better at reasoning, that has huge ripple effects.
"Reasoning is fundamental to our human intelligence," says Roetzer.
But good luck learning more about that...
The moment Fridman asked about it, Altman abruptly answered:
"We're not ready to talk about that."