After a week that began quietly, we're here with a multitude of AI developments. Mike and Paul look into Silicon Valley's intense AI competition, where tech companies like Google, Microsoft, and Meta have rapidly shifted gears to focus on AI development. They also look at the implications of Google's new Gemini and read into the events within OpenAI, exploring the aftermath.
Listen or watch below, and keep scrolling for show notes and the transcript.
This episode is brought to you by Algomarketing:
Algomarketing connects ambitious B2B enterprises to the competitive advantages of the autonomous. Their workforce solutions unlock the power of algorithmic marketing through innovation, big data, and optimal tech stack performance. Visit Algomarketing.com/aipod to find out how Algomarketing can help you deliver deeper insights, faster executions, and streamlined operations through the power of AI.
00:03:32 — New York Times research explores the AI arms race
00:14:41 — Meet Gemini, Google’s long-awaited GPT-4 competitor
00:28:02 — A Brief Wrap-Up of the OpenAI saga
00:34:05 — GrokAI is now rolling out
00:42:14 — The European Union agrees on final details of landmark AI law, the AI Act
00:45:00 — Mistral releases a new LLM with a torrent link and zero fanfare
00:50:06 — Microsoft announces what’s next for Copilot
00:53:18 — Runway partners with Getty Images to build a safe AI video generation model
00:56:59 — Andrej Karpathy and the “hallucination” problem
The New York Times takes us inside the AI arms race that started after ChatGPT’s release
A new report from The New York Times provides some juicy details about the chaos inside big tech companies like Google and Microsoft as they raced to respond to the launch of ChatGPT.
According to interviews with more than 80 executives and researchers, the Times found that “over 12 months, Silicon Valley was transformed” as Google, Microsoft, Meta, and others quickly pivoted to focus on building AI-powered products.
As the Times puts it: “The instinct to be first or biggest or richest — or all three — took over. The leaders of Silicon Valley’s biggest companies set a new course and pulled their employees along with them.”
In the process, players like Google, Microsoft, and Meta put aside some concerns about long-term AI safety to release powerful AI products as quickly as possible.
Meet Gemini, Google’s long-awaited GPT-4 competitor.
Google just announced its biggest AI launch so far with the release of Gemini, its new AI model that is a competitor to OpenAI’s GPT-4. The launch came with tons of buzz—and some controversy.
First, the buzz: Google claims Gemini outperforms GPT-4 on thirty out of thirty-two standard measures of performance. If true, that makes Gemini one of, if not the, most powerful commercial model out there.
Gemini is multimodal, so it can use different types of inputs, like text, images, and audio, to produce outputs. In a widely shared demo video (more on that in a second), Google showed off what appeared to be some stunning capabilities. These included Gemini's ability to identify images, solve puzzles, and perform logical and spatial reasoning tasks in real time as the user held a voice conversation with the system.
Now, here comes the controversy. After some digging, Bloomberg found that the demo video had been edited significantly to display Gemini's capabilities in the best light. The conversation in the video was a representation of a text conversation, not a voice conversation. And what appears to be Gemini recognizing what's happening in moving video in real time actually consisted of feeding the system still images.
Not to mention, Google showed only brief prompts to get incredible answers—prompts that were shortened from what was really used. And, response times were sped up for brevity.
So, it appears the system actually does have impressive performance. But it also appears Google may have exaggerated its capabilities in the demo video—which is not uncommon in Silicon Valley.
A new report shows what really happened at OpenAI to cause Sam Altman’s firing.
We’re now learning what went down in the recent chaos at OpenAI, thanks to new reporting from The New York Times.
Altman was suddenly fired in mid-November by OpenAI’s board at the time. It sounds like theories (including our own) about why Altman was fired were broadly correct:
The board was concerned about AI safety. Specifically, several board members had backgrounds in areas of AI safety that caused them to worry powerful AI technology could, without proper guardrails, do immense harm to humanity—even becoming an existential threat to the species as a whole.
It appears several of these board members had long-simmering concerns that OpenAI was moving too fast and paying too little attention to safety. Those concerns came to the fore with the release of ChatGPT.
As the popularity of ChatGPT, and of OpenAI itself, exploded, certain board members became increasingly worried that Altman was not sharing all his plans with the board, including a possible initiative to build AI chips with the help of investors in the Middle East.
This led the board to quickly, and secretly, vote to fire Altman, then spring that decision on him with little notice or preamble.
Altman wasn’t the only one caught unawares. OpenAI’s main partner and patron, Microsoft, was caught completely by surprise. It appears that, after having conversations with Altman to determine if he’d done anything wrong, CEO Satya Nadella decided to throw Microsoft’s weight behind Altman.
That resulted in a chaotic battle to reinstate Altman, a battle that nearly resulted in most OpenAI employees resigning in solidarity with him. As we’ve reported, it’s a battle that Altman eventually won—ousting the board members who rebelled against him.
Disclaimer: This transcription was written by AI, thanks to Descript, and has not been edited for content.
[00:00:00] Paul Roetzer: People feel like they just completely missed something, or that they haven't moved fast enough to catch up.
[00:00:06] Paul Roetzer: This generative AI age we find ourselves in is new to everyone. I mean we're talking about the emergence of a technology that just happened like seven months ago.
[00:00:17] Paul Roetzer: Welcome to the Marketing AI Show, the podcast that helps your business grow smarter by making artificial intelligence approachable and actionable. You'll hear from top authors, entrepreneurs, researchers, and executives as they share case studies, strategies, and technologies that have the power to transform your business and your career.
[00:00:37] Paul Roetzer: My name is Paul Roetzer. I'm the founder of Marketing AI Institute, and I'm your host.
[00:00:47] Paul Roetzer: Welcome to episode 76 of the Marketing AI Show. I'm your host, Paul Roetzer, along with my co-host, Mike Kaput. Good morning, Mike.
[00:00:56] Mike Kaput: Good morning, Paul. It is, I [00:01:00] don't know. Last week was weird. Like it started off kind of slow. Like sometimes I wonder like, Oh, what are we going to talk about next week? And then it just like hit where we have at least 10 things that normally would make rapid fire that did not make rapid fire this week, which is absolutely wild.
[00:01:16] Paul Roetzer: So, all right. So we're going to get into this. We have a lot to cover. today's episode is brought to us by Algo Marketing. Algo Marketing connects ambitious B2B enterprises to the competitive advantages of the autonomous. Discover our workforce solutions and unlock the power of algorithmic marketing through innovation, big data, and optimal tech stack performance.
[00:01:37] Paul Roetzer: Visit Algomarketing, that's A-L-G-O, marketing.com forward slash aipod. And find out how Algomarketing can help you deliver deeper insights, faster executions, and streamlined operations through the power of AI. So I have good news and maybe bad news for some people to start us off. So good [00:02:00] news is next week. So we are recording this right now, it is Monday, December 11th at 11:
[00:02:05] Paul Roetzer: 20 a.m. Eastern Time. Next week, so this would be the December 19th episode, is going to be a special edition. We're going to have Cathy McPhillips, our chief growth officer, back. And we are going to do another, like, top questions about AI episode. So it's like a special episode, which actually was our most popular episode this year.
[00:02:25] Paul Roetzer: I think the total downloads for the year, the 15 questions everyone's asking about AI was the most popular. So we're bringing that back. So that'll be really cool December 19th, but that also means we won't be doing this. I am actually going to take a couple of weeks off. Hopefully Mike, you are also taking some time off for the holidays.
[00:02:41] Paul Roetzer: But December 15th at noon, I am like shutting down my brain. so we are not going to do, during the holidays, the couple of weeks, we are not going to have our regular format. we'll be back with that January 9th. But so again, this is, December 19th. We will do [00:03:00] kind of a 15, top questions.
[00:03:01] Paul Roetzer: Everyone's asking about AI, which hopefully will be really valuable to you. And then we'll be back January 9th with our next episode. So I know that like people are going to start like dropping new models and we may have to like show up and do a special edition and come off a vacation for, I don't know, but I'm going to try really, really hard not to do that.
[00:03:19] Paul Roetzer: So, okay. So with that said, we have a lot to cover in what. at the moment is our final, episode of 2023 under this format. So Mike, take it away.
[00:03:32] Mike Kaput: All right, we'll try to move fast through some of these topics, but first up we've got a new report from the New York Times that provides us with some juicy details into the chaos within big tech companies like Google and Microsoft as they raced to deal with the launch of ChatGPT.
[00:03:51] Mike Kaput: So according to interviews with over 80 executives and researchers at these companies, the Times found that quote over 12 [00:04:00] months, Silicon Valley was transformed as Google, Microsoft, Meta, and others quickly pivoted to focus on building AI powered products when ChatGPT dropped just over a year ago. As the Times put it, quote, the instinct to be first or biggest or richest or all three took over.
[00:04:20] Mike Kaput: The leaders of Silicon Valley's biggest companies set a new course and pulled their employees along with them. In the process, it sounds like players like Google, Microsoft, and Meta may have put aside some of their concerns about long term AI safety in order to release powerful AI products as quickly as possible in the wake of ChatGPT.
[00:04:43] Mike Kaput: So Paul, when you and I read this, we found there were some really interesting points to kind of pull out of this for the audience and connect the dots. Could you kind of walk us through why this is so important to understand what happened in these major tech companies when ChatGPT dropped? [00:05:00]
[00:05:00] Paul Roetzer: Yeah, I feel like this article is just sort of a synopsis of the stuff we've covered in this podcast throughout the year.
[00:05:08] Paul Roetzer: it's, it's written by four, well, bylined by four journalists at the New York Times, including Cade Metz. And so when I posted on LinkedIn about it, I said, this reads like the next edition of Genius Makers by Cade Metz. it truly does. Like, to me, this obviously has been in the works for a while.
[00:05:29] Paul Roetzer: I think they have been, you know, again, 80 sources. This has been, they've been deeply embedded for a long time to pull this off, and they obviously are sourced at the very, very highest levels, like directly with the first parties involved in these conversations, because in some cases, they're telling the story of conversations between Yann LeCun and Mark Zuckerberg.
[00:05:51] Paul Roetzer: There's two people in that conversation. Like they're, they are talking to Yann and or Mark about this. So, you know, I think that [00:06:00] for me, it demonstrated how unprepared big tech was for the ChatGPT moment, how, how even the companies that had the thousands of AI researchers, had the massive labs dedicated to artificial intelligence, how they really just were not ready for that moment and how it's been very challenging for them to evolve.
[00:06:23] Paul Roetzer: And so again, Like when I go out and do talks and you're talking to CEOs, and even when I'm talking to leaders of tech companies or product teams of tech companies, I'll often say like, listen, give yourself a break here. Like, people feel like they just completely missed something, or that they haven't moved fast enough to catch up.
[00:06:43] Paul Roetzer: And then I think when you read articles like this, you realize like it's happened to everyone from Meta to Google to even, I mean, in some cases with an OpenAI themselves, that everyone has really struggled and that this generative AI age we find ourselves in is new to everyone. So, you know, I've [00:07:00] often said.
[00:07:00] Paul Roetzer: ChatGPT is basically a year old now, like in terms of publicly available. GPT-4 wasn't introduced until March of this year. So for many enterprises, many marketers, many business leaders, I mean we're talking about the emergence of a technology that just happened like seven months ago. And so when we go meet with all these enterprises that don't have AI roadmaps yet, or just maybe in the formative days of building AI consoles, haven't figured out how to do education and training about this stuff yet.
[00:07:30] Paul Roetzer: You know, I think people have to be realistic about how quickly this has all moved. The op, the alternative side to that though is I feel like, it's only going to move faster. So as much as I'm saying like, hey, it's okay if you haven't caught up, that doesn't mean you can't double down on efforts to catch up going into next year.
[00:07:53] Paul Roetzer: So I'll just call out a few of the ones that jumped out to me. I think that everyone [00:08:00] should read this article because I really do think it gives incredible context into what's, what was going on within these companies. So a few of the excerpts that I had highlighted. So the first: Sundar Pichai, Google's chief executive, had decided to ready a slate of products based on artificial intelligence immediately.
[00:08:16] Paul Roetzer: So this is like right after ChatGPT. He turned to Mr. Walker, the same lawyer he was trusting to defend the company in a profit threatening antitrust case in D. C. Mr. Walker knew he would need to persuade the Advanced Technology Review Council, which I had not heard of previously, as Google called the group of executives to throw off their customary caution and do as they were told.
[00:08:38] Paul Roetzer: It was an edict and edicts didn't happen very often at Google, but Google was staring at a real crisis. Its business model was potentially at risk. So we heard earlier this year about this code red moment where the founders of Google all of a sudden re emerged. This was the first time I was hearing a story about that happening.
[00:08:58] Paul Roetzer: And it actually goes into [00:09:00] a meeting that occurred within it and what Sundar kind of laid out for everybody. So again, like cool insight into Google and actually gives you sort of a prelude of what led to Gemini and why they merged the two research labs at Google. We had Google Brain and Google DeepMind at this time last year.
[00:09:16] Paul Roetzer: There was two distinct research labs for AI at Google. They were then merged following this meeting and following, you know, this time period. And that led to Gemini, which we're going to talk about in a minute. another excerpt, this one about OpenAI said in mid November, 2022. So just a couple of weeks before ChatGPT, Mr.
[00:09:37] Paul Roetzer: Altman, Greg Brockman, and others met in a top floor conference room to discuss the problems with their breakthrough tech. Yet again, suddenly Mr. Altman made the decision. They would release the old, less powerful technology. The plan was to call it Chat with GPT-3.5, and put it out by the end of the month.
[00:09:56] Paul Roetzer: They referred to it as a low key research preview. It didn't [00:10:00] feel like a big deal decision to anyone in the room. On November 29th, the night before the launch, Mr. Brockman hosted drinks for the team. He didn't think ChatGPT would attract a lot of attention, he said. His prediction? No more than one tweet thread with 5, 000 likes.
[00:10:18] Paul Roetzer: So again, we talked about how ChatGPT was sort of a spur of the moment decision and how Sam Altman kind of gave this guidance, but I had never seen this level of detail into exactly how that happened. So again, a topic we talked about a lot throughout the year. Now all of a sudden you have these layers of, of detail from the, from the people that were in the room, basically, you know, through them.
[00:10:41] Paul Roetzer: The other one that I found extremely fascinating was the one I referenced kind of earlier, Zuckerberg and LeCun meeting together. So it says, as they waited in line for lunch at a cafe in Meta's Frank Gehry-designed headquarters, Dr. LeCun delivered a warning to Mr. [00:11:00] Zuckerberg. Now, keep in mind at this moment, Mark Zuckerberg is betting everything on the metaverse.
[00:11:07] Paul Roetzer: So LeCun comes into town, so then continue. He said meta should match OpenAI's technology and also push forward with work on an AI assistant that could do stuff on the internet on your behalf. This part I found completely fascinating. Websites like Facebook and Instagram could become extinct, he warned.
[00:11:25] Paul Roetzer: AI was the future. Mr. Zuckerberg didn't say much, but he was listening. The article went on to basically say how this convinced Zuckerberg, and he said, Okay, let's go. And they already had been building LLAMA, but it was called something else in Paris, in the research lab in Paris, I believe. And they green lighted the building of what became LLAMA.
[00:11:45] Paul Roetzer: which they released. And then now we have LLAMA 2, which up until yesterday was the most powerful open source model in the world. We'll get to that in a minute. but this is the origin of that. So Zuckerberg had been [00:12:00] told by everybody that the $10 billion bet he was making on the Metaverse was like the wrong play and they needed to be doing AI, and he wasn't listening, and apparently standing in line at a cafe
[00:12:10] Paul Roetzer: changed Zuckerberg's mind to reinvest in AI instead of the metaverse, for now. Crazy. And then the final one was, they talked, they told the story about Microsoft and Satya Nadella and how he came to not just think that OpenAI was interesting, but actually like the future. So that excerpt was a year later.
[00:12:31] Paul Roetzer: So this is, again, it's telling the story of 2021 when they put their initial investment in and now we're, you know, fast forward to 2022. So a year later, Mr. Nadella got a peek at what would become GPT-4. Mr. Nadella asked it to translate a poem written in Persian by Rumi, who died in 1273, into Urdu. Is that, am I saying that right, Mike?
[00:12:50] Paul Roetzer: Do you know Urdu? Yep. He asked it to transliterate the Urdu into English characters. It did that too. Quote from Nadella, Then I said, [00:13:00] God, this thing. Mr. Nadella recalled in an interview. From that moment, he was all in. Microsoft's $1 billion investment in OpenAI had already grown to $3 billion. Now Microsoft was planning to increase that to $10 billion.
[00:13:16] Paul Roetzer: So again, like, this is just like the surface of, this is probably like a 4 or 5 thousand word article. but those kinds of insights that I have not heard these stories anywhere. So they may have been talked about before, but these sound like truly embedded sources where they were able to get the highest level access.
[00:13:33] Paul Roetzer: I assume this is all going in a book, and they're just like, Maybe they're releasing it now because of everything that's going on. They chose to start putting out some of these excerpts, but every aspect of this reads like it's going to be in a book and knowing Cade Metz and, you know, his past writing, I wouldn't be shocked if, if we don't learn about a new book coming out sometime soon.
[00:13:55] Paul Roetzer: So, you know, I think that it just gives you context of what's been happening. [00:14:00] How, how crazy truly the last 12 months have been in business, how unprecedented. What has happened has been, and I am, I'm very confident in saying, I don't even think that this year begins to prepare us for what's going to happen in 2024.
[00:14:19] Paul Roetzer: Like, I think that as jarring and kind of amazing as the last 12 months have been, I think the next 12 months are, are going to completely change the way we think about, AI with the way we think about business, the way we think about knowledge work. And this story does a really good job of kind of showing how we're getting there.
[00:14:41] Mike Kaput: And in case you thought the year was going to end a little calmer than it started, we're getting some huge announcements before we hit December 31st. So Google just announced its biggest AI launch to date with the release of Gemini, its new AI model, which is a [00:15:00] competitor to OpenAI's GPT-4. This launch came with tons of buzz and some controversy.
[00:15:08] Mike Kaput: So first let's talk about the buzz. The buzz is around the fact that Google claims Gemini outperforms GPT-4 on 30 out of 32 standard measures of performance. If true, that makes Gemini one of, if not the most powerful, kind of readily available commercial model out there. Now, Gemini is also multimodal, so it can use different types of inputs like text, images, audio, to produce outputs.
[00:15:35] Mike Kaput: Now, there was a widely shared demo video, which we're going to unpack a little more in a second where Google showed off what appeared to be some really stunning capabilities. These included Gemini's ability to identify images, solve puzzles, and do things like perform tasks of logical and spatial reasoning and do it all in real time via a user having a voice conversation with.
[00:15:59] Mike Kaput: Gemini. [00:16:00] Now, here's the controversial part. After some digging, Bloomberg found out that this demo video that's been seen millions of times already had been edited pretty significantly to display Gemini's capabilities in the best light possible. So, the conversation that the person is having with the system in the video is actually a representation of a text conversation, not one that was a voice conversation.
[00:16:27] Mike Kaput: And what appears to be Gemini recognizing things happening in a moving video in real time actually consisted of feeding the system still images, then stitching all that together. Not to mention, Google shows in the video only really short conversational prompts that are apparently being used to get really, really incredible answers.
[00:16:50] Mike Kaput: But these prompts were actually shortened from what was really used and all the response times were sped up to make the video really punchy and for brevity's sake. [00:17:00] So it does appear this system has really impressive performance on some standard benchmarks. But it also looks like Google exaggerated the capabilities in this demo video.
[00:17:11] Mike Kaput: Now, that's not uncommon in Silicon Valley, but people are focusing negatively on that as an aspect of this launch. So Paul, I want to break this down into a couple pieces. First, can you talk us through how significant Gemini as a model is, like, is it living up to the hype of being as powerful, if not more powerful, than GPT-4?
[00:17:36] Paul Roetzer: it does appear. I mean, I think there's a lot to be talked about on the technical side about how they did their research and how they compared it to GPT-4 and until they open it up, like, and until honestly, Ultra is available, like the most advanced model, we're not going to really know. So it does appear that it's, it's going to be comparable to GPT-4, you know, if [00:18:00] not more powerful.
[00:18:01] Paul Roetzer: And I think the key differentiator is going to be that, one, their data, but, they built it multimodal from the ground up, meaning the entire model is multimodal. That is not how GPT-4 is built. GPT-4 is a language model that is enhanced with other models that enable image and video and audio. So that is the the core differentiator that they stress in some of their videos is like how it's built is it is a single multimodal model from the foundation up.
[00:18:32] Paul Roetzer: So that's what in theory would enable some of the things that were shown in the video as well as advanced reasoning capabilities and some other unique things. The question becomes, so their middle model, is it Pro? I forget what the middle one is. They have, is it named Pro? yeah, I believe so.
[00:18:51] Paul Roetzer: Yeah. Okay. So that's the one that's going to power Bard. so I think your first experience with it is going to be by using Google [00:19:00] Bard, where you'll be able to see the mid-range model that's going to be available. The most powerful, the Ultra, won't be released until, um. You know, they didn't say exactly, but sometime probably early next year, Q1, I would assume, and then we'll get a better sense.
[00:19:15] Paul Roetzer: The question I had was like, What are the chances that GPT-5's out by that time? So now they just gave OpenAI a runway to, to one up because now they generally know what's coming from Google and we know that they're working on GPT-5 at OpenAI. So it, I don't know, it'd be kind of funny. Like I, my feeling all along was like, you don't have to beat GPT-4.
[00:19:35] Paul Roetzer: GPT-4 is like a year and a half old in terms of, like, it was red teamed for like seven months before it came out in March. So you don't need to beat a two year old model. You need to, you need to beat what they're building next. so their research did show it will be more powerful. Now, again, how they did it, like how they prompted it and stuff is different.
[00:19:56] Paul Roetzer: It's not apples to apples, but it's [00:20:00] not by much, which, which really goes to show how truly groundbreaking GPT-4 was and is. that we're going to be basically two years removed from it being built before we have Ultra, Gemini Ultra, and it's only going to be just a little bit better, it would appear, which is a pretty wild thing to consider.
[00:20:24] Mike Kaput: Yeah, it's pretty surprising, and maybe we're being, we're spoiled now that we have GPT-4, but yeah, you would expect something that's been in the works for so long to maybe be a bit more of a step above. Speaking of a step above, what about this, demo video that appears to be showing capabilities that today, Gemini doesn't exactly have.
[00:20:48] Mike Kaput: What were your thoughts around some of the controversy people were pointing out with the videos? I mean, some people have gone as far as to say they were just fake or like really coming at Google over the fact they had edited these [00:21:00] videos.
[00:21:01] Paul Roetzer: Yeah, I mean, to me, honestly, I just didn't really get the, all the, the kind of reaction, the crazy reaction to it.
[00:21:09] Paul Roetzer: so when I watched the videos, my first reaction was they were just demonstrations. There, I did not assume they were all filmed in one take, nor did I assume that those capabilities were available now, because if they were, they would launch Ultra today. So, my initial feeling was, this is what we always get with these models.
[00:21:31] Paul Roetzer: It's like the AI companies demo something and then six to nine months later we get the thing and it's not capable of what the demo video showed nine months earlier, but it's like directionally accurate. So my immediate take, when I saw the demo, the six minute demo that everyone's referring to with the blue duck and all that stuff was like, wow, that's going to be pretty cool if it's able to do that when it comes out. I did not assume it could do it today, but so I was actually kind of more,
[00:21:58] Paul Roetzer: so, yeah, I was [00:22:00] kind of taken aback that people actually thought it was real. Maybe it's just the way they did it and that they didn't disclose it, but, again, I go back to the Microsoft Copilot demo video from March. They don't disclose anything that it's not real and like, that it can't do all of these things that it's showing.
[00:22:15] Paul Roetzer: And yet, you know, we didn't get Copilot truly until November of this year. So, you know, March to November is eight months by my quick math. And it still doesn't do everything it showed in that demo video in March. So I don't know. I just felt like it was people, you know, complaining because it's what people like to do online.
[00:22:36] Paul Roetzer: I would probably look beyond that. Honestly, you can be upset at them thinking it was misleading. Fine. but I tend to focus more on there's a very good chance it's going to have the capabilities they show in that video when it finally comes out in March or April, whenever it comes out. I would be focusing more of my energy on what does that mean?
[00:22:58] Paul Roetzer: If it can actually do [00:23:00] those things, which is highly likely that it's going to be able to do things very close to what it showed in that demo video. What does that mean to business, to the future of knowledge work, like all the stuff we talk about all the time. We are going to have models capable of what they show in that video next year.
[00:23:18] Paul Roetzer: Whether it's Gemini right away, or it's GPT-5, or it's Mistral, or Llama 3. This is the future, and that future is happening in the next, like, six months. So, I would probably personally just say, I get it. Like, I get that it's kind of annoying and it was a bit misleading. Move past that. Start worrying about what does this mean when it has these capabilities, not if it has these capabilities.
[00:23:43] Paul Roetzer: They have the ability to do the things that are shown in there. If it was able to do it today, they would have rolled this out last week in the three city tour we talked about on last episode. Remember last episode we said, Hey, Sundar kind of randomly like canceled this [00:24:00] launch. They were going to be in these three cities.
[00:24:01] Paul Roetzer: They were going to demo this. Well, the reason they didn't do that is because it wasn't ready to be demoed live. So again, it's just like, it's kind of obvious if you connect the dots, what's going on here. They thought maybe they were going to be ready to show this thing, have it be able to do the things they showed in that demo video.
[00:24:15] Paul Roetzer: It wasn't going to be there yet. So they had to delay the launch until early next year, but they put together a marketing video to show what the capabilities are going to be. And in a developer post, they actually told you that's exactly what they did. They didn't hide this like it wasn't, I don't know.
[00:24:29] Paul Roetzer: So, But a couple other interesting quick notes. So one, Jeff Dean, the chief scientist at Google DeepMind did tweet out over the weekend where the name came from, which I thought was kind of interesting. So he said, Gemini is Latin for twins. The Gemini effort came about because we had different teams working on language modeling and we knew we wanted to start to work together.
[00:24:50] Paul Roetzer: The twins are the folks in the legacy Google Brain team, many from the PaLM 2 effort. PaLM 2 is the model that currently powers Bard. And the [00:25:00] legacy DeepMind team, many from the Chinchilla effort that became Gemini, that started to work together on the ambitious multimodal project we called Gemini.
[00:25:10] Paul Roetzer: Eventually joined by many people from across Google. Gemini was also the NASA project that was the bridge to the Moon between the Mercury and Apollo programs. Which, if you want to read into this a little bit: what's the Apollo program? They're building the bridge to the Apollo program. This is only version one.
[00:25:31] Paul Roetzer: Like, they're very clear, this is Gemini 1.0. The thing I start to, not worry about, but think about, imagine, is: what's the Apollo program here? Like, what is the real play here when this thing really develops? And that leads into my final thought, which is, again, we keep going back to who wins in this?
[00:25:52] Paul Roetzer: Like, where does this all go? What differentiates Gemini from GPT-5 and, you know, Llama [00:26:00] 3 and Claude 3.0, whatever. And I always go back to: Google has the data; OpenAI and Anthropic don't. None of them do. So when you think about it, they have Android, Pixel, YouTube, Search, Maps, Nest, Fitbit, Gmail, Docs, Sheets, Cloud.
[00:26:17] Paul Roetzer: All this data nobody else has. So now they have a more powerful multimodal foundation model, and they have the data. Plus, if you look into the technical specs of how they trained this model, they used their own chips, their own innovation. They also used their own data centers, and they used an innovative approach where they were training this thing across data centers.
[00:26:42] Paul Roetzer: Nobody else has what Google has. And so, again, I'm not giving investing advice here, but I bought more Google stock. I just think the video could be off. People could be pissed about the video. I get it. You could be disappointed that it's not going to be available until [00:27:00] early next year. I get it.
[00:27:01] Paul Roetzer: But step back and look at the foundation of what they're now building. And again, know that they've only been doing this, like, really doing this, for probably 10 months. And the other thing that I look at is nobody else has DeepMind, and Demis Hassabis and Shane Legg and that team. And when you look on the surface of what they announced, you don't really see yet the competitive advantage of DeepMind built into this.
[00:27:26] Paul Roetzer: The capabilities DeepMind has that have been applied in other areas of AI, other projects, you're not seeing those all played out here yet. And so now imagine this kind of model, plus all the innovation DeepMind still has sitting in their lab, when those are built in. I have a hard time seeing a future where Google isn't, if not the dominant player here, certainly
[00:27:51] Paul Roetzer: You know, one of the two or three that I don't see them losing, so I'm still very bullish on what Google builds next [00:28:00] year.
[00:28:02] Mike Kaput: So we did also want to quickly cover kind of a wrap up of the whole chaos and saga of OpenAI, because we're now learning kind of what really went down in the recent events at the company, thanks to some new reporting from the New York Times.
[00:28:18] Mike Kaput: So as we've discussed, you know, Sam Altman, CEO, was suddenly fired in mid-November by OpenAI's board at the time. Now it sounds like theories, including many of the ones we've discussed on episodes of this podcast, about why Altman was fired were broadly correct. The Times is reporting that the board was indeed concerned about AI safety.
[00:28:40] Mike Kaput: Specifically, several board members had backgrounds in areas of AI safety that caused them to worry powerful AI technology could, without the proper guardrails, do a lot of harm to humanity or even become an existential threat as a whole to humanity. And it appears that several of these [00:29:00] board members had long simmering concerns that OpenAI was moving too fast and paying too little attention to safety.
[00:29:07] Mike Kaput: Those concerns became much more intense and came to the fore once ChatGPT was released, because, as the Times reports, as ChatGPT and OpenAI's popularity exploded, certain board members apparently became increasingly worried that Altman was not sharing all of his plans with the board. And that included the details of a possible initiative to build AI chips with the help of investors in the Middle East.
[00:29:34] Mike Kaput: All of this led the board, it looks like, to quickly and secretly vote to fire Altman and then spring that decision on him with little announcement or preamble. Altman wasn't the only one, it turns out, who was caught unawares. OpenAI's main partner and patron, Microsoft, was completely caught by surprise.
[00:29:53] Mike Kaput: And it sounds like from this story that after having conversations with Altman to see if he'd [00:30:00] actually done anything wrong, Microsoft CEO Satya Nadella decided to throw Microsoft's weight behind Altman. And this is what ultimately resulted in the chaotic battle to get Altman reinstated. That nearly resulted in most OpenAI employees resigning in solidarity with him.
[00:30:20] Mike Kaput: As we've previously reported, that battle was one that Altman eventually came out of victorious, and he ousted some of the board members who rebelled against him. So there are probably still many, many more details that will come out, Paul, but it sounds like this is kind of wrapping up phase one of what happened with OpenAI and giving our audience a sense of: okay, this was about AI safety.
[00:30:47] Mike Kaput: This was about a lot of competing visions for the future of the company and the danger of the technology they were building. What were your thoughts kind of wrapping up this topic?
[00:30:58] Paul Roetzer: Yeah, you [00:31:00] know, I think like you said, it kind of summarizes everything we've sort of assumed and that you've seen reported.
[00:31:04] Paul Roetzer: I do feel like it's just probably chapter one of the story. I think, you know, moving into 2024, more information is going to come out. I listened to a podcast interview where Trevor Noah interviewed Sam Altman on his What Now? podcast. And it was like an hour of Sam, first person, explaining what was going on, his feelings, his emotions, his thoughts about it, how he's still kind of reeling from the whole thing.
[00:31:28] Paul Roetzer: So if you're intrigued by this topic, I would go listen to that podcast. I think it's a really fascinating inside look. It's a very human interview from Sam, but he does not give more details. I really don't know that he truly knows why it happened, and he is definitely bitter about it. He said he's kind of grateful for it because, you know, I think it helped him.
[00:31:52] Paul Roetzer: It was actually fascinating, I love the way it ended. Trevor, who's a great interviewer, obviously, [00:32:00] said at the end: I would encourage you, Sam, to remember how it felt when you lost your job, because you're building technology that's going to take jobs from people, and I think it's important that you remember how it felt.
[00:32:20] Paul Roetzer: When your technology does that to people. I just thought, kudos to Trevor for putting him on the spot and saying it. But I thought he did a really good job of kind of bringing it around. And I think, you know, that might end up being the biggest outcome from all of this: what is Sam Altman's perspective
[00:32:43] Paul Roetzer: post all of this? I don't know that we're going to learn that much that's interesting. I don't know that some smoking gun is going to come out, or some massive conspiracy is going to be unveiled or anything like that. It's probably not going to be that interesting. The theories and conspiracies were probably going to be more interesting than what actually happened. But, [00:33:00] you know, the impact it has on the people, what happens to Ilya as a result of this?
[00:33:04] Paul Roetzer: Does he stay there? Does he go? It's been very, very quiet on that front. How does Sam evolve? You don't even get a sense, listening to Sam, that he really thinks he's going to be the CEO there that much longer. Like, I don't know, I thought that was another weird thing. His talking point has been: it became very apparent to me that I'm not needed there, basically, that this company can truly function without me as the CEO.
[00:33:30] Paul Roetzer: I just thought that was a weird perspective to be so vocal about, to where you can almost feel him kind of like, I'm going to be here, I'll right the ship, but maybe I won't be here, kind of feeling. So, I don't know. It is a fascinating topic. I think there's lots more to come, but I don't know that the most interesting stuff, like I said, is going to be some grand conspiracy about what happened or anything like that.
[00:33:55] Paul Roetzer: I think it might be more of a human story, honestly, about how this changed the [00:34:00] people involved and how that might affect the future of AI.
[00:34:05] Mike Kaput: So in other news, Elon Musk has posted on X that his ChatGPT competitor called Grok AI is being rolled out to all X Premium Plus subscribers in the U. S. in beta format.
[00:34:19] Mike Kaput: He readily admitted that there will be many issues with Grok at first, but that users can expect rapid improvement almost every day. Musk also claimed access would expand to all English-language users in about a week or so, and that Japanese is the next priority language since it's the second biggest user base.
[00:34:39] Mike Kaput: He also said that hopefully all languages will be available with Grok by early 2024. Paul, you have some personal experience with Grok that you have some thoughts on. I'd love to get your take on where Grok is at today and where it might be going.
[00:34:59] Paul Roetzer: Yeah, so [00:35:00] I did get access when it started rolling out on, it was like the 13th night or Friday night or something like that.
[00:35:04] Paul Roetzer: And honestly, I wasn't even sure how to do it. I was like, where's the access? Do I go to a website? Do I go to an app? And then I realized there was this icon, that looks exactly like the Deutsche Bank logo, all of a sudden in the bottom tray of my Twitter/X app, and that's how you do it.
[00:35:18] Paul Roetzer: So you click this little icon right in the middle, where, I don't know, Spaces maybe was before, or some other useless app within there. So my first take is: this is not a ChatGPT competitor. It's unfair to ChatGPT to consider this a competitor to ChatGPT. At the moment it is not a ChatGPT competitor.
[00:35:39] Paul Roetzer: Not even remotely close. The first option, and I'll pull it up to make sure I get this right, see if they haven't changed it yet. So if you click on it, it says, Ask Grok anything, start typing or start from a suggestion. And to give you a sense of this app and what it's for, the first [00:36:00] option that it prompts you with is roast me, Grok.
[00:36:04] Paul Roetzer: Roast @PaulRoetzer based on their posts and be vulgar. That is the first recommended use case for this app: to roast you and be vulgar. So, there's no chat history. It supposedly keeps track of your chats, but there's no way to see them. I've asked it how; it just says scroll back. Well, there's nothing to scroll back to every time you come out of the app.
[00:36:29] Paul Roetzer: It's gone. So there's no functional chat history. I can't read you the first thing I asked it because it's not there. Oh no, actually, I took a screenshot, because I wondered how I was going to be able to go back and see this. So let me see if I can pull this up real quick, because it was actually kind of funny.
[00:36:46] Paul Roetzer: I said, why did Elon Musk really buy Twitter? This was my first question to Grok. It said, Elon Musk's acquisition of Twitter can be seen as a strategic move to control the narrative and shape public opinion. By purchasing the platform, he [00:37:00] gained the power to influence the flow of information and steer discussions in a direction that aligns with his personal and business interests.
[00:37:06] Paul Roetzer: I think Grok is being a little too honest here. This move also allowed him to promote his other ventures, such as Tesla and SpaceX, and potentially attract more investors and customers. Pretty funny. That's probably very accurate, but I don't think it's what Elon would like it to be saying. So, no chat history. And when it gives you answers, it gives you this, like, wheel of tweets below it that you can kind of scroll through.
[00:37:31] Paul Roetzer: Completely random, like just user XYZ123, and it'll cite them as a source for something. So you can't trust anything it says. Any of the tweets it would direct you to, I would never click on; it's going to, like, a sort of spam account, basically. So I guess my overall take is: it's basically a novelty.
[00:37:55] Paul Roetzer: If I didn't have to do this for a living, I would cancel my $22-a-month subscription [00:38:00] right now. It is not worth whatever Premium Plus costs, whatever you're paying. I don't know what else you get with Premium Plus, but the only reason I paid for it was to get Grok, so I could test it. Their big push is that Twitter data is a differentiator.
[00:38:14] Paul Roetzer: My big thing is like, is that a good thing? Like, there's so much noise and hate and misinformation on this platform right now that I don't know how that's a good differentiator. So yeah, it has real time data, but from who? Like, just based on who they're surfacing tweets from. I don't want stuff from those people.
[00:38:34] Paul Roetzer: Like, I want to be able to know it's verified sources. So, there's no apparent algorithm that elevates reliable sources. And, based on my experience in this app in the last six months, what they would consider reliable sources and what I consider reliable sources are very, very, very different things. So, I'm sure it's going to get better.
[00:38:56] Paul Roetzer: Whatever better is, maybe more vulgar and better at roasting me. I [00:39:00] don't know what better is. But my takeaway, in the most objective way I can put this, is I don't see a single business application for this product. This is not a ChatGPT competitor, because there's no truly functional use of it other than,
[00:39:15] Paul Roetzer: apparently, real-time news that then cites accounts I would never follow or click on. So they have to fix the whole algorithm to fix the whole thing, and I just don't see that happening. I feel like they're going to go in the opposite direction of where it should go. Now, one funny anecdote: there was a tweet where Grok was asked something, and it actually replied,
[00:39:43] Paul Roetzer: I'm afraid I cannot fulfill that request as it goes against OpenAI's use case policy. We cannot create or assist in creating malware or any form of harmful content. Instead, I can provide you with information on how to protect your system, blah, blah, blah. So there it appears citing OpenAI's thing. So people are like, [00:40:00] hold on a second.
[00:40:00] Paul Roetzer: Are they just, like, scraping OpenAI's model? Like, what is going on? And so Igor Babuschkin, he's one of the lead people on Grok, he said, and this goes back to what we talked about last week with Google search: The issue here is that the web is full of ChatGPT outputs, so we accidentally picked up some of them when we trained Grok on a large amount of web data.
[00:40:26] Paul Roetzer: This was a huge surprise to us when we first noticed it. For what it's worth, the issue is very rare, and now that we're aware of it, we'll make sure future versions of Grok don't have this problem. Don't worry, no OpenAI code was used in the making of Grok. So then Perplexity AI's CEO, Aravind, who, I know you're a big fan of Perplexity, replied, ChatGPT content is corrupting the web, which I thought was fascinating based on our conversation last week.
[00:40:55] Paul Roetzer: And then to wrap this, you just gotta love Elon. So the [00:41:00] @ChatGPTapp account, which is the actual account for ChatGPT on Twitter, tweeted, we have a lot in common, and retweeted the "I'm afraid I cannot fulfill that request as it goes against OpenAI's use case policy" reply. Elon Musk replied to that tweet from ChatGPT and said, well, son, since you scraped all the data from this platform for your training, you ought to know.
[00:41:21] Paul Roetzer: So now we have name calling between Elon Musk and the ChatGPT app. You can't make this stuff up. How he has time to worry about these little things is just shocking to me. So anyway, long story short, I know this is supposed to be a rapid fire item. Go ahead and pay for Premium Plus if you want to play around with it.
[00:41:45] Paul Roetzer: It is truly for novelty use at this moment. I would love for someone to tell me I'm wrong and that there is some actual functional use for this right now, but I just don't see it from a business perspective. [00:42:00] I would never use this as an alternative to Claude 2, or hell, even Bard. So yeah, that's my take.
[00:42:10] Paul Roetzer: Keep an eye on it, but otherwise, yeah.
[00:42:14] Mike Kaput: So in some other big news, the European Union has agreed on the details of the AI Act, which is its major law to regulate artificial intelligence, which has been in the works for a couple of years now. Now, the law still needs some final approvals, it sounds like, but it appears as if the general structure and scope of the law
[00:42:33] Mike Kaput: have now been agreed upon by legislators. The final details of the law have not yet been released. However, based on some earlier drafts of the law, it is expected that the legislation will regulate high risk AI use cases and do things like require big AI companies to follow more stringent transparency requirements.
[00:42:54] Mike Kaput: chatbots and software that create Manipulated images, i. e. [00:43:00] deepfakes, would also have to make clear that these are AI generated under the new guidelines. Paul, we've talked about the AI Act, for a while now, so it seems like this is a major step forward, but we're still not seeing the final details of the legislation.
[00:43:16] Mike Kaput: Is that your read on this?
[00:43:18] Paul Roetzer: Yeah, I think, you know, just even in the last couple of weeks, we highlighted that this thing had sort of run into some roadblocks, and it was largely around how to treat these foundation models, and whether you regulate the model itself or the use of the model, the applications of the model.
[00:43:33] Paul Roetzer: And so, you know, there was a lot of momentum toward the idea that this might not actually happen. And so it seems like they may have been able to find some agreement at the end to get it to the finish line. So I think it's something we'll dive more into early next year, once it's truly finalized and we can really get into it.
[00:43:51] Paul Roetzer: The one thing I'll call out: I saw Yann LeCun, who, again, runs the AI research lab at Meta slash Facebook. He tweeted, the EU [00:44:00] AI Act negotiations ended. One contentious issue was the regulation of foundation models, particularly open source ones. Kudos to the French, German, and Italian governments for not giving up on open source models.
[00:44:13] Paul Roetzer: Again, context: Llama 2, the most powerful open source model in the world, is from Yann LeCun and his team, so they didn't want these regulated. Now back to Yann's tweet. Juicy part, quote: The legislation ultimately included restrictions for foundation models but gave broad exemptions to open source models, which are developed using code that's freely available for developers to alter for their own products and tools.
[00:44:40] Paul Roetzer: The move could benefit open source AI companies in Europe that lobbied against the law, including France's Mistral, which we'll talk about in a moment, and Germany's Aleph Alpha, as well as Meta, which released the open source model Llama. That, Mike, will lead us into our next rapid fire item. [00:45:00]
[00:45:00] Mike Kaput: So that company you just mentioned, Mistral, they are an open source AI company, and they just released a major new large language model.
[00:45:09] Mike Kaput: But what's curious about this is they did it with basically no fanfare. In contrast with Google's big Gemini announcement, the company actually took a guerrilla marketing approach to the announcement of what they're calling their MoE 8x7B model. Mistral simply posted a torrent link to go download the model and gave basically no context or commentary about it at all.
[00:45:36] Mike Kaput: Now, this approach caused tons of buzz about Mistral's new model and attracted significant media attention. And that's probably part of the point behind doing it this way. But it is really worth remembering and worth further discussion, especially as we get into 2024. Mistral is a Paris-based startup and a major open source AI player.
[00:45:58] Mike Kaput: They are valued [00:46:00] at $2 billion, and they raised what is reportedly the largest seed round in European history at $118 million. So Paul, can you talk to us a little bit more about Mistral generally, and specifically what's going on with this counterintuitive model release?
[00:46:20] Paul Roetzer: This is a company that has definitely been under the radar, at least within business circles, certainly with within even our podcast, I'm not sure that we've even talked about Mistral before.
[00:46:29] Paul Roetzer: I don't know that we've mentioned them, but they've been a company I've been watching now for a few months. We talk about where these people all come from; this is Yann LeCun's tree. The three founders come from Meta AI and Google DeepMind, no surprises there.
[00:46:47] Paul Roetzer: The company was founded in May 2023. As you mentioned, a massive seed round, and they actually just closed their Series A today, $415 million more, which put the value of the company at approximately $2 billion. [00:47:00] My way of explaining them, I think the best way to understand them, is they're what OpenAI was supposed to be.
[00:47:06] Paul Roetzer: Now, I don't know if their mission is the same, like protect humanity, but they are building the most advanced models and then just releasing them to everybody. You mentioned it's an MoE model, also something I don't think we've discussed on the show previously. What that means is mixture of experts.
[00:47:24] Paul Roetzer: So, there's an article from the Google Brain team in 2022, which we'll put in the show notes, that sort of explains mixture of experts. I will do my best for a moment to give you my understanding of it, which I may refine in future shows. But what we've talked about before is what happens when you prompt one of these models.
[00:47:44] Paul Roetzer: So if you think about the large language model as a brain, in essence, when you give it a prompt, it fires every neuron in the brain; all the parameters in the model fire. So it's trying to pull from its entire training data to fulfill your prompt and give you an [00:48:00] output. What mixture of experts allows the model to do is use a select subset of the parameters.
[00:48:06] Paul Roetzer: So rather than firing every neuron in the brain, it's firing the neurons that it thinks are going to be best at providing the answer or the action based on your prompt. And so mixture of experts is sort of a step forward, allowing them to build models that can perform at the quality and output of massive models, but not require as much computational power and energy to do it.
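To make the routing idea concrete, here is a minimal Python sketch of top-k expert routing. This is a generic illustration of the technique, not Mistral's actual architecture; the dimensions, the toy one-matrix "experts," and the variable names are all made up for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy dimensions: an 8-dim token vector routed across 4 "experts,"
# of which only the top 2 fire per token (sparse activation).
d_model, n_experts, top_k = 8, 4, 2

# Each expert is just a weight matrix here; in a real model it's a full MLP.
experts = [rng.standard_normal((d_model, d_model)) for _ in range(n_experts)]
router = rng.standard_normal((d_model, n_experts))  # learned gating weights

def moe_layer(x):
    """Route token vector x to its top-k experts and mix their outputs."""
    logits = x @ router                        # one score per expert
    chosen = np.argsort(logits)[-top_k:]       # indices of the top-k experts
    weights = np.exp(logits[chosen] - logits[chosen].max())
    weights /= weights.sum()                   # softmax over the chosen experts
    # Only the chosen experts run; the rest stay idle, which is where
    # the compute savings come from.
    return sum(w * (x @ experts[i]) for w, i in zip(weights, chosen))

token = rng.standard_normal(d_model)
out = moe_layer(token)
print(out.shape)  # (8,)
```

The key design point is that the router picks experts per token, so a model can hold many experts' worth of parameters while only a fraction of them are active on any single forward pass.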
[00:48:34] Paul Roetzer: And so again, it's done by finding the right parameters to fire to fulfill the request. So I think the big takeaway here is: this is a company to watch. They're opening up things that are going to offer benefits to people who want to build on top of open source. It is also fair to say, I think, that it's Pandora's box.
[00:48:58] Paul Roetzer: They are now putting [00:49:00] models out into the world, completely open source, that it would appear are not only on par with GPT-3.5, but our friend Matt Shumer tweeted: looks like Mistral has a model that's even better than 8x7B, and they're serving it to alpha users of their API. It's frighteningly close to GPT-4 and beats all other models tested.
[00:49:25] Paul Roetzer: This is their medium size; large will likely beat GPT-4. So here we are talking all about Gemini and these marketing videos, and if we don't keep our eye on the ball, we're going to miss what's actually happening, which is that people are open sourcing stuff that's going to be as powerful as GPT-4, now and in 2024. And when that happens, it completely democratizes the good and the bad of these things.
[00:49:50] Paul Roetzer: These things are released with no guardrails; they're not made safe out of the box. So it's [00:50:00] just something we have to keep an eye on, and we'll probably be talking a lot more about it in 2024.
[00:50:06] Mike Kaput: So going into 2024, Microsoft has also announced what's next for Copilot, which is its AI assistant that performs all sorts of tasks across all the major Microsoft apps.
[00:50:19] Mike Kaput: So these updates include things like the ability to use OpenAI's latest model, which is called GPT-4 Turbo, right within Copilot. It also includes the ability to use OpenAI's DALL-E 3 model within Copilot to generate images. And it also sounds like we're going to see some type of data analysis capability, similar to what is available in ChatGPT. Microsoft announced Code Interpreter, which will allow you to perform more accurate calculations, do coding tasks, perform data analysis, and create visualizations using Copilot.
[00:50:55] Mike Kaput: Last but not least, Bing, it was announced, will also have what Microsoft is [00:51:00] calling Deep Search, which uses GPT-4 to deliver optimized search results on complex topics. Paul, what did you make of these end-of-year updates for Copilot? And just broadly, can you give us some idea of what's going on with Copilot and its adoption?
[00:51:21] Paul Roetzer: I've talked with quite a number of large enterprises lately who have access to Copilot and in almost all cases that I can think of, it's being beta tested by a small group of people. So while these capabilities exist. The things, the three things I'm keeping an eye on going into next year are adoption of Copilot.
[00:51:41] Paul Roetzer: So it's great that there are five or ten people in a big enterprise trying Copilot, but when is it actually going to be integrated into the organization? How is it going to be rolled out? So one is adoption of Copilot at a high level. Again, I think you need at minimum 300 licenses before they'll even return your phone [00:52:00] call or your chat or email or whatever.
[00:52:02] Paul Roetzer: So this is only a large enterprise play at the moment. Is it going to be moved down to small and midsize businesses next year? Like, when is the average company going to be able to get access to Copilot? So adoption is one. Two is impact once adoption starts increasing. So what does it mean to the companies?
[00:52:22] Paul Roetzer: What does it mean to knowledge work when we start seeing wide-scale adoption and integration of Copilot? And then the third is, the OpenAI saga spooked Microsoft. They were already, you know, working on some things to make it so they weren't as reliant on OpenAI for their AI initiatives and products and services.
[00:52:44] Paul Roetzer: But I am sure that they accelerated whatever was going on there when they realized how dysfunctional, I guess, OpenAI was at that moment. So I think there's going to be a lot more [00:53:00] innovation coming from Microsoft, you know, from a chip perspective, from a smaller model perspective, from finely tuned models. We're just going to see a lot more, inside and outside of their partnership with OpenAI.
[00:53:13] Paul Roetzer: So those are kind of three of the things I'll be watching next year with Microsoft.
[00:53:18] Mike Kaput: So AI company Runway, who we talk about quite often on this podcast, has announced that it's partnering with Getty Images in order to launch a new video model for enterprise customers. According to Runway, quote, this model will combine the power of Runway with Getty Images' world-class, fully licensed creative content library, providing a new way to bring ideas and stories to life through video in enterprise-ready and safe ways.
[00:53:45] Mike Kaput: It sounds like companies will then be able to use this model to build their own custom models that generate video content using again all that approved, licensed content from Getty to do so. Paul, what did you make of this announcement and kind of [00:54:00] where Runway is going in terms of building out enterprise grade capabilities?
[00:54:04] Paul Roetzer: It aligns with what we've talked about throughout the year: that licensing is going to be a key aspect of the future of building these models. There are all these open questions about copyright and, you know, infringement and intellectual property overall. And the most obvious path is that these companies are going to start or accelerate their licensing programs, where they're using data that they know is safe for them to use.
[00:54:30] Paul Roetzer: So I think it's a sign of that. Potentially the more interesting thing was, actually just this morning, I believe, they introduced something called General World Models, which is a research initiative at Runway. And I think this is something that's going to be big going into next year. We've touched on this idea a little bit.
[00:54:49] Paul Roetzer: It is one of the drivers of Yann LeCun's work. What we have today are large language models that basically predict the next word in a sequence; that's what language models do. [00:55:00] We've seen ChatGPT, through GPT-4V, get the ability to see images and to generate images.
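For listeners who want to see what "predict the next word in a sequence" means mechanically, here is a deliberately tiny sketch. Real LLMs use neural networks trained on enormous corpora rather than raw counts, but the prediction objective is the same spirit; the corpus and names here are invented for the example.

```python
from collections import Counter, defaultdict

# A toy bigram model: for each word, count which word follows it in the
# training text, then "predict" the most frequent successor.
corpus = "the cat sat on the mat and the cat slept".split()

following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def predict_next(word):
    """Return the most frequently observed next word after `word`."""
    return following[word].most_common(1)[0][0]

print(predict_next("the"))  # "cat" — seen twice after "the", vs "mat" once
```

Everything a plain language model produces is generated this way, one most-likely next token at a time, which is why giving models grounding beyond text is such an active research question.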
[00:55:08] Paul Roetzer: We're seeing Runway make progress on the generation of videos. Pika Labs, which I think we touched on last week, maybe, they're making breakthroughs in the generation of video. We're seeing a lot of innovation with audio. And what all these research labs are trying to do is give the AI an understanding of the world the way we would understand the world.
[00:55:33] Paul Roetzer: And so the general belief by many in the AI research world is that large language models on their own cannot achieve general intelligence. They have to understand the world around them. So this general world model is basically trying to give these things the ability to predict outcomes. So, you know, if a car is driving down the street, to predict what that person standing on the side is going to do.
[00:55:58] Paul Roetzer: Well, to do that, you need, like, [00:56:00] for us humans, we have instinct, we have common sense, we have reasoning about the world. From the time you're a toddler, if you touch something hot, you're not going to touch it again. No one told you that; you just touched the hot thing and it hurt. Or you went by a dog and it tried to bite you; you're not going to go by a dog again.
[00:56:16] Paul Roetzer: So you learn from the world around you. It's not like your training data, per se. You're not fed this stuff right away. And so I think that's what's really interesting here: can we give these things, through new training methods, instinct, common sense, true reasoning about the world, where you can predict what the outcomes in the world will be?
[00:56:37] Paul Roetzer: If they can make major progress on that, and again, they're all trying to do it, that is where I don't even think there's a debate anymore about whether we're at AGI. Like, this is a commonly agreed-upon thing: if we can give this to these AIs, then they truly are in that general intelligence realm. So just something to keep an eye on for sure.
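[Editor's note: The "predict the next word in a sequence" framing Paul describes can be illustrated with a toy sketch. A bigram counter is a vastly simplified stand-in for a real transformer-based LLM, but the core loop is the same: given context, pick the most probable next token. The training sentence and function names here are made up for illustration.]

```python
# Toy illustration of how language models "predict the next word."
# A bigram model counts which word follows which in the training text,
# then predicts the most frequent follower. Real LLMs condition on much
# longer context with learned weights, but the objective is analogous.
from collections import Counter, defaultdict


def train_bigrams(text: str) -> dict[str, Counter]:
    """Count which word follows which in the training text."""
    words = text.lower().split()
    model = defaultdict(Counter)
    for current, nxt in zip(words, words[1:]):
        model[current][nxt] += 1
    return model


def predict_next(model: dict[str, Counter], word: str) -> str:
    """Return the most frequent word seen after `word`."""
    followers = model.get(word.lower())
    return followers.most_common(1)[0][0] if followers else "<unknown>"


model = train_bigrams("the cat sat on the mat and the cat slept")
print(predict_next(model, "the"))  # "cat" follows "the" twice, "mat" once
```

Note the model has no notion of what a cat or a mat *is*; it only knows word statistics, which is exactly the gap the world-model research Paul mentions is trying to close.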
[00:56:59] Mike Kaput: So in [00:57:00] our last topic today, Paul, you had highlighted a tweet, or a post on X, from Andrej Karpathy, who is a major player in AI. Did you want to maybe unpack what you saw for us? It's about large language models and their tendency to do what we call hallucination, or make things up.
[00:57:18] Paul Roetzer: Yeah, I'm still kind of thinking about this tweet.
[00:57:22] Paul Roetzer: It was one of those where I read it and I just kind of sat back for a minute. I was like, wow, this is probably really significant. And so I'm not going to overanalyze this for you. I just want to kind of leave this with you as we wrap up our final weekly episode of 2023.
[00:57:39] Paul Roetzer: I think this starts to set the stage for the kinds of things we're going to be dealing with going into 2024. So remember Andrej, who we just talked about, we've talked a lot about him this year, but certainly a couple episodes ago, we went deep on his intro to LLMs, intro to large language models. So if you haven't listened to that episode, go back, go watch his large language [00:58:00] model video.
[00:58:01] Paul Roetzer: But I think the importance here is that when we're out giving talks, when we're getting asked questions, Mike, I'm sure you get this all the time. People ask me about the hallucination issue. Like, can we trust ChatGPT? Can we trust these models? Cause they make stuff up. And people think that hallucination is like this flaw or this bug in these models.
[00:58:20] Paul Roetzer: And it's like, that's what's preventing them from adopting it: we can't rely on these tools in business because they make stuff up. And what I always say is, a lot of these researchers think that it's a feature, not a bug. Like, they think it's good that it hallucinates. But the way he explains hallucination was a totally different way of thinking about it.
[00:58:41] Paul Roetzer: So I'm just going to read this tweet, and like I said, I'm not going to really analyze it for you. I just want you to think about what he's saying here and start to ponder this as we, we kind of move into our planning and building for next year. So he says, quote, on the hallucination problem, I always struggle a bit with when [00:59:00] I'm asked about the hallucination problem in large language models.
[00:59:04] Paul Roetzer: Because in some sense, hallucination is all large language models do. They are dream machines. We direct their dreams with prompts. The prompts start the dream, and based on the large language model or LLM's hazy recollection of its training documents, most of the time the result goes someplace useful.
[00:59:24] Paul Roetzer: It's only when the dreams go into deemed factually incorrect territory that we label it a hallucination. It looks like a bug, but it's just the LLM doing what it always does. At the other end of the extreme, consider a search engine. It takes the prompt and just returns one of the most similar training documents it has in its database verbatim.
[00:59:48] Paul Roetzer: You could say that this search engine has a creativity problem. It will never respond with something new. An LLM is 100 percent dreaming and has the hallucination [01:00:00] problem. A search engine is 0 percent dreaming and has the creativity problem. All that said, I realize that what people actually mean is they don't want an LLM assistant, a product like ChatGPT, to hallucinate.
[01:00:14] Paul Roetzer: An LLM assistant is a more complex system than just the LLM itself, even if one is at the heart of it. There are many ways to mitigate hallucinations in these systems, using Retrieval Augmented Generation, RAG, which we talked about recently, to more strongly anchor the dreams in real data through in context learning is maybe the most common one.
[01:00:38] Paul Roetzer: Disagreements between multiple samples, reflection, verification chains, decoding uncertainty from activations, tool use: all are active and very interesting areas of research. TL;DR, too long, didn't read. I know I'm being super pedantic, which, I had to look this up, means someone who annoys others by correcting small errors, [01:01:00] caring too much about minor details, or emphasizing their own expertise, especially in some narrow or boring subject matter.
[01:01:05] Paul Roetzer: So I'm being super pedantic here. But the LLM has no hallucination problem. Hallucination is not a bug. It is LLMs' greatest feature. The LLM assistant has a hallucination problem, and we should fix it. Okay, I feel much better now. So, again, just this idea that hallucination is what they do, that they're dream machines, is such a fascinating way to explain what they do. And that search has a creativity problem is so true.
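[Editor's note: The Retrieval Augmented Generation (RAG) mitigation Karpathy mentions can be sketched in a few lines. This is a toy illustration only: the corpus, function names, and word-overlap scoring are stand-ins for a real vector store and embedding search, and no actual LLM is called.]

```python
# Minimal sketch of the RAG idea: instead of letting the model "dream"
# purely from its hazy training recollections, retrieve relevant documents
# and prepend them to the prompt, anchoring the answer via in-context learning.


def retrieve(query: str, corpus: list[str], k: int = 2) -> list[str]:
    """Rank documents by naive word overlap with the query
    (a stand-in for embedding-based similarity search)."""
    query_words = set(query.lower().split())
    scored = sorted(
        corpus,
        key=lambda doc: len(query_words & set(doc.lower().split())),
        reverse=True,
    )
    return scored[:k]


def build_grounded_prompt(query: str, corpus: list[str]) -> str:
    """Assemble a prompt instructing the LLM to answer only from retrieved context."""
    context = "\n".join(f"- {doc}" for doc in retrieve(query, corpus))
    return (
        "Answer using ONLY the context below. "
        "If the answer is not in the context, say you don't know.\n"
        f"Context:\n{context}\n\nQuestion: {query}\nAnswer:"
    )


corpus = [
    "Gemini is Google's multimodal model announced in December 2023.",
    "Runway builds AI video generation tools.",
    "Mistral released an open model via a torrent link.",
]
print(build_grounded_prompt("What did Mistral release?", corpus))
```

The grounded prompt constrains the "dream": the assistant system, not the raw LLM, is what gets fixed, which is exactly the distinction Karpathy draws.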
[01:01:37] Paul Roetzer: Like, nothing new is going to be created from search. It's just going to literally pull from everything it knows. And so I think, again, as we move forward into next year, we try to figure out what the use cases are for these tools, with the features and bugs that they have.
[01:01:54] Paul Roetzer: to understand the limitations. may actually be the benefits, [01:02:00] and to find the use cases where you can take advantage of it with all its flaws. And that's why often when I'm on stage, I always say, Copywriting is the least interesting thing to me that these things do. Like having it replace my writing.
[01:02:15] Paul Roetzer: is the thing I'm, I have zero interest in. It's all these other things. It's ideation, strategy, being a true assistant to bounce ideas off of. Like, I don't need it to be perfect when I use it in those ways. I need it to just help me, to enhance my creativity, my own innovation. And I think the more that companies and professionals and leaders start to think about what are they capable of now?
[01:02:38] Paul Roetzer: that we can infuse into our business, like find the use cases where you can find value in them. I think that that's, you know, an admirable thing to pursue early next year is don't get caught up in the things they don't do or the things they do poorly, focus on the things they do very well. And that comes from an understanding of the technology.
[01:02:58] Paul Roetzer: So, I'll kind of [01:03:00] leave it at that. So that, as I said, this is our final kind of weekly format episode of this year. Mike and I will be back together January 9th. Next week, December 19th, Cathy McPhillips and I will be here. with a kind of top 15 or 20 questions. Everyone's asking about AI. A reminder to subscribe to the newsletter, and follow Mike and I on Twitter and LinkedIn.
[01:03:21] Paul Roetzer: LinkedIn in particular; even during the holidays, I'll probably still be putting up relevant information there. But the newsletter is a great way to stay updated. Mike, do you want to give a quick rundown of some of the things that are going to be in this week's newsletter that we didn't even have time to get to today?
[01:03:35] Mike Kaput: Yeah, of course. So as a quick reminder, go to MarketingAIInstitute.com/newsletter to subscribe if you have not subscribed yet. Every single week, tons of links make it into the newsletter that we did not cover on this show. Many of these are very important. The newsletter is a fantastic way to get up to speed very quickly with AI.
[01:03:57] Mike Kaput: For instance, this week we are going to [01:04:00] feature some content around the NeurIPS conference, which is a huge research-focused machine learning conference where we can expect plenty of important AI news and updates. We have multiple big announcements from Meta around AI safety, new tools, and new techniques for how AI systems can learn from skilled human activities.
[01:04:23] Mike Kaput: And then Google also has a couple other huge announcements that we cover in the newsletter that have been a bit overshadowed by all the talk around Gemini. So there's lots more going on than what we have just discussed in this episode. And I'd highly recommend you go ahead and try out the newsletter as a way to get up to speed very quickly on everything happening in AI in any given week.
[01:04:46] Paul Roetzer: And now, Mike, we just hope that nothing too crazy happens that calls us back for an emergency episode before the end of the year. All right. Well, thanks everyone for being with us for another [01:05:00] episode, and really throughout the year. I mean, I think the podcast, you know, we went from what, 5,000 downloads last year.
[01:05:06] Paul Roetzer: I think we're on path for 250 or 300, 000 downloads this year. So it wouldn't be possible without all of you listening or watching on the YouTube channel. So we appreciate everybody and all the wonderful feedback throughout the year. Mike, great job as always. Thanks for carrying the news for us every week and bringing it to everybody.
[01:05:22] Paul Roetzer: And, yeah, I'll be back next week. So I'm not going to sign off for the year, but, thanks again, Mike, and thanks to everybody for being with us.
[01:05:31] Mike Kaput: Thanks everyone.
[01:05:32] Paul Roetzer: Thanks for listening to the Marketing AI Show. If you like what you heard, you can subscribe on your favorite podcast app, and if you're ready to continue your learning, head over to www.marketingaiinstitute.com. Be sure to subscribe to our weekly newsletter, check out our free monthly webinars, and explore dozens of online courses and professional certifications.
[01:05:53] Paul Roetzer: Until next time, stay curious and explore AI.[01:06:00]