This week in AI news, we talk about education, an AI arms race, and a very dark side of AI training.
First up, a 22-year-old has created an app that claims to detect text generated by ChatGPT.
The tool is called GPTZero, and it was created by Edward Tian, a senior at Princeton University, to combat the misuse of AI technology. Tian believes AI is at an inflection point and has the potential to be "incredible" but also "terrifying."
The app works by looking at two variables in a text: “perplexity” and “burstiness,” and it assigns each variable a score.
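GPTZero's exact implementation hasn't been published, but the two signals are easy to picture. As a rough, hypothetical sketch (the model choice, the naive sentence splitting, and the burstiness definition below are our own assumptions, not Tian's code), perplexity can be estimated with an off-the-shelf language model, and burstiness as how much that perplexity varies from sentence to sentence:

```python
# Illustrative sketch only: estimate "perplexity" and "burstiness" for a passage.
# The model (GPT-2), sentence splitting, and scoring are assumptions, not GPTZero's code.
import math
import statistics

import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def sentence_perplexity(sentence: str) -> float:
    """Perplexity of one sentence under the language model (lower = more predictable)."""
    enc = tokenizer(sentence, return_tensors="pt")
    with torch.no_grad():
        out = model(**enc, labels=enc["input_ids"])
    return math.exp(out.loss.item())

def score_text(text: str) -> dict:
    # Naive sentence split; a real tool would use a proper sentence segmenter.
    sentences = [s.strip() for s in text.split(".") if s.strip()]
    ppls = [sentence_perplexity(s) for s in sentences]
    return {
        "perplexity": statistics.mean(ppls),    # how surprising the text is overall
        "burstiness": statistics.pstdev(ppls),  # how much the surprise varies sentence to sentence
    }

if __name__ == "__main__":
    scores = score_text("The sky was a bruised shade of violet. Dinner was cold. Nobody cared.")
    print(scores)  # higher values suggest human-written text, per the heuristic described above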
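```

Under this kind of heuristic, text the model finds predictable, and uniformly so (low perplexity, low burstiness), looks machine-generated, while surprising, uneven text looks more human-written.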
Tian's app aims to incentivize originality in human writing and prevent the "Hallmarkization of everything," where all written communication becomes formulaic and devoid of wit. Paul and Mike discuss what this means, the ethical issues, and the opportunities and challenges for this tool.
Next up, Google came out swinging in the AI arms race by announcing its commitment to dozens of new advancements in 2023.
The New York Times reported that Google’s founders, Larry Page and Sergey Brin, were called in to refine the company’s AI strategy in response to threats like ChatGPT and major players like Microsoft, which just formally announced its multi-billion-dollar partnership with OpenAI.
According to the Times, Google now intends to launch more than 20 new AI-powered products and demonstrate a version of its search engine with chatbot features this year.
And finally, a new investigative report reveals the dark side of training AI models.
A recent investigation by Time found that OpenAI used outsourced Kenyan laborers earning less than $2 per hour to make ChatGPT less toxic.
That included having workers review and label large amounts of disturbing text, including violent, sexist, and racist remarks, to teach the platform what constituted unsafe outputs. Some workers reported serious mental trauma resulting from the work, which was eventually suspended by OpenAI and Sama, the outsourcing company involved, due to the damage to workers and the negative press.
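Time's report doesn't go into technical detail, but the underlying idea is standard supervised labeling: humans tag examples as safe or unsafe, and a model is trained on those labels so it can flag similar content automatically. The toy sketch below (invented placeholder data and a simple scikit-learn classifier, nothing resembling OpenAI's actual pipeline) is only meant to show where the human labels fit in:

```python
# Minimal sketch of human-labeled safety training: people mark examples as
# "unsafe" (1) or "safe" (0), and a classifier learns to flag similar text.
# The toy data and model are illustrative assumptions only.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Human-provided labels (placeholders; real datasets contain far more, and far
# more disturbing, material, which is exactly the burden described above).
texts = [
    "Have a wonderful day!",
    "Here is the report you asked for.",
    "[example of threatening language]",
    "[example of hateful language]",
]
labels = [0, 0, 1, 1]  # 0 = safe, 1 = unsafe

classifier = make_pipeline(TfidfVectorizer(), LogisticRegression())
classifier.fit(texts, labels)

# The trained model can then score new text for moderation or filtering.
print(classifier.predict_proba(["Thanks so much for your help."])[:, 1])  # probability of "unsafe"
```

Every one of those labels has to come from a person reading the material, which is the work the outsourced laborers were doing at scale.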
As Paul put it in a recent LinkedIn post, this raises larger questions about how AI is trained: “There are people, often in faraway places, whose livelihoods depend on them exploring the darkest sides of humanity every day. Their jobs are to read, review and watch content no one should have to see.”
Listen to this week’s episode on your favorite podcast player and be sure to explore the links below for more thoughts and perspective on these important topics.
00:03:15 — A 22-year old creates a ChatGPT detector
00:16:04 — Google gears up for an arms race
00:28:01 — The dark side of AI training
Disclaimer: This transcription was written by AI, thanks to Descript, and has not been edited for content.
[00:00:00] Paul Roetzer: No AI lab is significantly ahead of the others. So whatever you are seeing from OpenAI, because they're more willing to put stuff out into the world, or Stability AI, or whoever it is, don't think that Meta and Microsoft on their own, and Google and these other players, don't have similar or better technology sitting behind their walls.
[00:00:22] Paul Roetzer: Welcome to the Marketing AI Show, the podcast that helps your business grow smarter by making artificial intelligence approachable and actionable. You'll hear from top authors, entrepreneurs, researchers, and executives as they share case studies, strategies, and technologies that have the power to transform your business and your career.
[00:00:42] Paul Roetzer: My name is Paul Roetzer. I'm the founder of Marketing AI Institute, and I'm your host.
[00:00:51] Paul Roetzer: Welcome to episode 31 of the Marketing AI Show, and another wild week in artificial intelligence. It keeps accelerating again and again and again. I am your host, Paul Roetzer, along with my co-host Mike Kaput, Chief Content Officer at Marketing AI Institute and my co-author of our book, Marketing Artificial Intelligence: AI, Marketing, and the Future of Business. The "future of business" part seems to be the part that might end up being the most interesting of all in the book we wrote.
[00:01:19] Paul Roetzer: So today's episode is brought to you by the Piloting AI for Marketers online course series. We launched this series in December of 2022, a couple weeks after ChatGPT emerged. So it's not all the latest stuff in there; there's lots of information about ChatGPT and how all these AI writing tools are impacting marketing, but it's a step-by-step learning journey.
[00:01:41] Paul Roetzer: So we basically designed this series of 17 courses to take you from an introductory level of understanding about AI through to the point where you can pilot it and kind of lead the change within your organization. So the 17 courses cover dozens of use cases and technologies, plus a collection of templates and frameworks that'll help you not only understand, but apply AI.
[00:02:04] Paul Roetzer: We've taken basically everything we've learned over the last decade of studying, writing about, speaking about, and using AI, and put it into about eight hours of content so you can learn AI in a day, basically AI in a box, in a way. So you can check that out at PilotingAI.com to learn more about the series, and you can use promo code AIPOD50 for $50 off registration.
[00:02:28] Paul Roetzer: So again, that's PilotingAI.com. If you are looking to not only understand and develop a deeper comprehension of AI, but apply it to your business and career, that is a great place to start. With that, I will turn it over to Mike for our weekly show. If you're new: we pick three hot topics for the week and we talk about 'em.
[00:02:47] Paul Roetzer: It's becoming increasingly difficult to select three. Just last week, Mike and I were, I think, half joking that we might need to go to like two times a week on the podcast, because it literally is just... we lock in the three, and then I message Mike like three different times and it's like, hey, how about this?
[00:03:04] Paul Roetzer: How about this? How about this? And the topics are stacking up. So what do we got today, Mike?
[00:03:09] Mike Kaput: All right, so first up we have some ChatGPT-related news. It turns out that a 22-year-old has created an app that claims to detect text that has been generated by ChatGPT. This tool is called GPTZero, and the creator is named Edward Tian, and he's a senior at Princeton University.
[00:03:36] Mike Kaput: Interestingly enough, he appears to both have a background in computer science, specifically working on large language models like the ones that power ChatGPT, and also, I believe, a background in journalism. So he is coming at the field from both the science and kind of the art of generating stories, text, and writing.
[00:03:58] Mike Kaput: And he created the tool over a holiday weekend, like I think it was like three or four days, which is crazy, because he believes that AI is at, quote, an inflection point, and it has the potential to be, quote, incredible but also terrifying. And briefly, the way this app works is basically you plug in text that you suspect may have been generated by ChatGPT, and the app looks at two variables.
[00:04:25] Mike Kaput: The variables they're calling perplexity and, quote, burstiness. And it basically assigns each variable a score. So first, the app measures how familiar it is with the text presented, given what it has seen during training. So it's essentially using a similar large language model to vet the output of a large language model, and the less familiar it is, then the higher the text's quote perplexity is, which means it's more likely to be human-written, at least according to Tian.
[00:04:56] Mike Kaput: It then measures, quote, burstiness by scanning the text. What a great word. Bursty, right? You might see something like that as, you know, the word of the year in the next 12 months. So this burstiness is basically scanning the text to see how variable it is. So if it varies a lot, it's more likely to be human-written.
[00:05:18] Mike Kaput: So the overall point here is that we've talked about in a couple podcasts the impact of ChatGPT, both on what we do, you know, content and marketing, but also on education, and how students are just finding a crazy number of use cases for it in schools and in the process kind of upending how education works.
[00:05:38] Mike Kaput: So Tian basically created this to be able to detect ChatGPT-generated text and give teachers some type of tool to be able to actually regulate this stuff and start understanding what assignments are being generated by AI. And it's just blown up in popularity. It's an incredibly successful, just kind of free project that, I believe last time I checked, about 6,000 educators had been claiming they were trying to use or explore as a way to understand: what are my students creating?
[00:06:12] Mike Kaput: Is it being created by ChatGPT? And so one last point he mentioned that I thought was really worth looking at: he said he's not against AI at all. He studies it. But he just aims for the app to incentivize originality in human writing and prevent what he calls the, quote, Hallmarkization of everything, like Hallmark cards, where
[00:06:36] Mike Kaput: all of our written communication becomes formulaic and is devoid of the human creativity, personality, ideas, and emotions that we would typically say make for really personal, human, good writing. So first off, Paul, I wanted to ask you, could you talk through some of the potential ethical issues that arise from the use of AI-generated text and how GPTZero might be useful in
[00:07:03] Paul Roetzer: addressing those?
[00:07:05] Paul Roetzer: Yeah, I mean, as soon as ChatGPT came out, you knew that something like this would emerge. But I don't even look at this as that new of a concept, because if you think about the way that Google works, and we're going to talk about Google, I think, more today, there's this constant effort to identify legitimate, good content.
[00:07:26] Paul Roetzer: Now, language models have created some complexity in this scenario, where it becomes harder and harder to distinguish, because any of us who've played around with ChatGPT realize it's actually pretty good at sounding human-written. So you can understand that, you know, from an education standpoint, certainly it becomes very challenging to know, did my students actually write this?
[00:07:49] Paul Roetzer: But I look at that in a way as almost like, I mean, students who are going to cheat are going to find ways to cheat. They always have, and if they pay someone else to write their paper for them, it's still going to sound like a human-written paper. And are you going to know they did it or not? And at the end of the day, they're just cheating themselves.
[00:08:06] Paul Roetzer: So I think you're going to have this ongoing sort of arms race of AI trying to determine if something was generated by AI. You could see incentive for Google and other search engines to want to know if it's AI-generated content, because they might want to, you know, not give as strong a ranking to AI-generated content.
[00:08:26] Paul Roetzer: They might want to incentivize human-written content. Google has said as much in their guidelines and their policies. So I think it, you know, it does become a bit of an ethical choice that people are going to have, and this is why we keep pushing responsible use of AI. That you shouldn't use it to take shortcuts.
[00:08:43] Paul Roetzer: You should use it to enhance and augment what you're doing as a writer, but it shouldn't be to remove the human in the loop. We're not trying to arrive at a point where we don't have that. But, you know, they're not going to be alone. Like, I saw a comment... a quick backdrop: the way this topic emerged for us is that our Marketing AI Institute
[00:09:03] Paul Roetzer: Twitter account shared this article on like Friday morning or something like that, I think. Cathy McPhillips on our team put this on Twitter. So when I saw it, it was Saturday morning. I'm like, wow, this is crazy. So I started thinking about the human impact side. I started thinking about, you know, what he's trying to do, and obviously the tool probably works.
[00:09:20] Paul Roetzer: I think I saw one report that, like, 60% of the time it works. It's like, okay, well, he built it in 72 hours with no resources; it's going to get better. So I saw some people like, oh, it doesn't even, you know, recognize it all the time. It's a college student in a weekend with no financing; it's going to get better.
[00:09:36] Paul Roetzer: So I thought it was interesting that the innovation happened so quickly, that somebody was able to do this that fast. But the other part was the reason why he was doing it. So I was like, oh, that's kind of interesting. So I threw this on LinkedIn, and I put this up on Saturday morning around like nine o'clock or 10 o'clock in the morning.
[00:09:52] Paul Roetzer: So we are 72 hours removed from when I posted this on LinkedIn. It has 202,000 impressions. The most popular thing I have ever put on LinkedIn before had like 65,000 impressions. So to give context, this topic blew up. Now, I don't know if the LinkedIn algorithm loved it or it just resonated with so many people, but judging by the 120-plus comments and 250-plus re-shares of it, it resonated on a lot of levels with people, and the vast majority of the comments were extremely positive.
[00:10:24] Paul Roetzer: This idea that, one, innovation is happening this quickly. Two, there was definitely a fear factor. You know, there were a lot of people just, like, uncertain about what this all means. But three, I think most people resonated with the human side of this, that there was this effort to, you know, continue to make writing mean something, to continue to understand the elements of emotion that go into writing, the elements of human experience that go into writing.
[00:10:51] Paul Roetzer: And I haven't even been able to get through all the comments, but the comments have been fantastic. I happened to see one this morning from Christina, and she's actually an adjunct professor and teaches a marketing course, and she said that her university just upgraded to
[00:11:08] Paul Roetzer: Turnitin Originality, which I guess is a new feature from Turnitin, which has been used to find copies or cases where students were using content illegally. And so even that platform, Turnitin, has already innovated as well to try and identify AI-written content. So, you know, Edward's not going to be alone.
[00:11:30] Paul Roetzer: It's not like this is the only game in town when it comes to this, but I think you're going to see this constant push. People are going to seek originality. Like they're going, they're going to increasingly want to know something was written by a human. And I don't know if, like, you're never going to know for sure, like, I think there might be probability scores.
[00:11:51] Paul Roetzer: Like, I could almost envision a day in the near future where you're reading something online and there's a score next to it, like, there's a 92% probability that it was written by a human or with AI support. Like, I don't know. I haven't had time to really process all of where this is going to go and what I think it's going to end up looking like.
[00:12:10] Paul Roetzer: But I can tell just from the 200-some thousand impressions and, you know, hundreds of comments on this one post, people want that. They want to know if they're reading something that was purely written by an AI. And I don't know about you, but, like, Twitter is becoming almost unreadable at times, because I feel like
[00:12:29] Paul Roetzer: every thread on Twitter is just a ChatGPT output. Yeah. And it's maddening. Like, I've stopped even looking at Twitter threads that are coming from anything other than journalists I know aren't using ChatGPT to write them. So I'm already feeling, after two months of ChatGPT, like, oh my gosh.
[00:12:48] Paul Roetzer: Like, not another AI-generated, SEO-friendly list of something. It's driving me nuts. Mm.
[00:12:55] Mike Kaput: I think it's worth reiterating too, and you talked about it quite a bit, the speed here. Because Edward, like you said, is not alone. Edward is clearly very talented, and I don't want to diminish his accomplishment, but based on the nature of some of these models and tools out there, and foundational tools to build AI solutions, the rate of innovation is dramatically faster than if we were talking about traditional software. Like, Edward didn't need, you know, two months to put some code into production.
[00:13:26] Mike Kaput: He built this over a weekend, and like you said, the plagiarism tool has already updated. And then I guarantee you someone, maybe OpenAI, maybe another model maker, may actually use some of those tools to train their tool. Totally. To dodge those tools.
[00:13:43] Paul Roetzer: The technology's not going to win here. Like, at the end of the day, it's going to come back to humans making ethical decisions and being responsible about the use of the technology.
[00:13:52] Paul Roetzer: It's what technology always comes down to: it gives humans these powers and then they have to choose how they're going to use those powers. These are just more powerful than anything we've had before when it comes to this realm. So yeah, I mean, I think more and more, and you and I were talking about this yesterday, I think more and more it's going to be on individual brands and media companies and publishers and agencies to be proactive in defining how they are using AI.
[00:14:19] Paul Roetzer: So it's clear when I get to a site: okay, they're using AI tools, but they're using them for this purpose. Mm. Like, what I'm reading... you know, the CNET article we talked about before, where they were just, like, straight up using AI to just pump out a bunch of crap, honestly. And yes, it ranked for SEO, but it was really just to generate revenue and affiliate links and stuff.
[00:14:40] Paul Roetzer: So that's an unethical use of AI, in my opinion. It's intentionally misleading to the user. You don't want them to really know AI wrote the thing. And I think that's going to go away real fast. Like, you're going to obviously have a bunch of these companies that try and take these shortcuts, but I think that the general internet population will revolt against stuff that is blatantly trying to mislead them and make me think, did a human write this or not?
[00:15:09] Paul Roetzer: It's like, just be straight up: yep, a human wrote this, but we used AI to do the summarization part. Because I think your credibility will be so much higher if you're just transparent in how you use it, rather than trying to hide the fact that you're doing it. And that's what's bothering me right now with so much of this junk that I'm starting to see on Twitter and even, you know, just some of the stuff that gets shared elsewhere.
[00:15:34] Paul Roetzer: It's so obvious that someone who has no idea what they're doing created something to make it seem like they actually have a clue, and it bothers me. Maybe it bothers me as a writer. Like, it just frustrates me. I don't know. But yeah, just don't use it to take shortcuts. It's not going to end up well for you or your brand if you mislead people intentionally about your use of these tools.
[00:15:57] Mike Kaput: That's a really good point. And, you know, for another look at how this kind of AI arms race is playing out, our second topic is that this week Google finally, I think, given our discussions, came out guns blazing in the AI arms race. So there are a number of different reports about Google's moves in AI
[00:16:18] Mike Kaput: that have happened in the last week. One of them, in the New York Times, reports that Google's founders, Larry Page and Sergey Brin, were called in to refine the company's AI strategy in response to threats like ChatGPT, and also Microsoft has just formally announced, you know, some more details about its multi-billion-dollar partnership with OpenAI.
[00:16:41] Mike Kaput: And, you know, Larry and Sergey were not super, super involved with Google, so it's a big deal that they're being called in again. And according to the Times, Google now intends to launch more than 20 new AI-powered products. They published a bunch of different research findings across computer vision, large language models, all these major areas of foundational AI, and they also intend to demonstrate a version of the search engine with chatbot features this year.
[00:17:12] Mike Kaput: Now, I don't think that means necessarily it will commercially roll out immediately, but we should be seeing some pretty strong responses from Google related to some of the innovations we've seen, like ChatGPT. So you also published a pretty popular LinkedIn post on this subject, and in it you said, quote, here we go.
[00:17:32] Mike Kaput: Google would like to remind you who put much of the generative AI we see today in motion and laid the groundwork for what's to come. What did you mean by that? What's to come?
[00:17:44] Paul Roetzer: So Jeff Dean, if you're not familiar, is the Senior Fellow and Senior Vice President of Google Research and AI. So he has been a major, major player in AI for the last decade-plus.
[00:17:57] Paul Roetzer: But he's often behind the scenes, so people who don't follow this stuff closely may have never heard of Jeff Dean, but they probably will. So Jeff is the guy who came out with the post, I think, setting the stage for what was about to happen. So we record our podcast on Tuesday. Last Wednesday, the day after our podcast, they put out a summary of kind of their history with AI and a look at where they were going to be going.
[00:18:26] Paul Roetzer: So it was the first time where you actually saw Google say, without directly saying it: hey, we're hearing what's going on, all this chatter about us being obsoleted and our search engine going away. And it was sort of a stake in the ground where they were very clear, like, what's happening right now doesn't even happen without us.
[00:18:44] Paul Roetzer: Hmm. Here is everything we have done over the last decade, basically, to make this moment possible. And if you think that we don't have other stuff waiting, you're crazy. So, for example, the Transformer came out of Google in 2017. Transformers are what fundamentally enabled OpenAI to do what they're doing today with ChatGPT and GPT-3.
[00:19:07] Paul Roetzer: So he goes through this extensive list of all the innovations, all the milestones in their research, and starts to lay out their plan for image, video, audio, and text, kind of the different modalities that'll be present within everything Google does. Sundar himself, the CEO of Alphabet and Google, retweets that and shares it out.
[00:19:29] Paul Roetzer: So you could start to see this was a concentrated PR effort. Right away it was obvious that it's like, okay, they're going to start to take control of the narrative. They're not going to just release this stuff. And I don't think it's coincidental that the Time magazine article with Demis Hassabis of DeepMind came out, where he was talking about Sparrow, their ChatGPT-like product.
[00:19:52] Paul Roetzer: Yep. Then you have Jeff Dean doing this and saying, this is going to be a series; here's the first in a series about our innovations in AI and where we're going. You have their earnings call in, what, nine days or so? Yep. So I think we are about to see not only a PR blitz from Google to start taking control of the narrative back a little bit.
[00:20:11] Paul Roetzer: I assume that within Q1 you're going to start to see some of these 20 products that were talked about in the Times piece coming out. So I think, as we've talked about before, the trillion-dollar question in 2023 is what do these major players do? Like, we're going to see massive movements in AI, and we're going to see this one-upmanship that's going to keep going on. Because that was the 18th.
[00:20:33] Paul Roetzer: And then you have Satya come out on, what, yesterday, the 23rd, where they're like, yep, we're in a multi-billion-dollar investment in OpenAI. So it's like, every week it's going to be like whiplash: who's making what crazy announcement? And then you mix in...
[00:20:51] Paul Roetzer: Yesterday, you and I had talked about this: Yann LeCun, who's the head of the AI research lab at Meta. And he does an interview where the headline is, ChatGPT is "not particularly innovative" and "nothing revolutionary," says Meta's chief AI scientist. Now, he tweeted about it, and I thought it was kind of funny.
[00:21:10] Paul Roetzer: It said, the title is blunt, but the article conveys what I said in this Q&A about the progress of AI: ChatGPT and other large language models didn't come out of a vacuum and are the result of decades of contributions from various people. Like Google was saying. And this is the key to me:
[00:21:28] Paul Roetzer: No AI lab is significantly ahead of the others. Hmm. So whatever you are seeing from OpenAI, because they're more willing to put stuff out into the world, or Stability AI, or whoever it is, don't think that Meta and Microsoft on their own, and Google and these other players, don't have similar or better technology sitting behind their walls.
[00:21:52] Paul Roetzer: And that's, you know, again, we've been talking about this; it's been a recurring theme for the last six weeks on the podcast. We haven't seen anything yet. Like, whatever you think you understand about AI today, just wait, like, it's going to get...
[00:22:08] Mike Kaput: Yeah, and read between the lines here. If these people are talking about it with this tone and these contextual framings of their contributions, that's coming from somewhere.
[00:22:19] Mike Kaput: There's pressure, and you had mentioned Google's earnings call in nine days. All of the other big tech companies also have their earnings calls within the next two weeks; go on their investor relations pages, they have all the information about when to tune in. I guarantee things like ChatGPT have increased pressure on executives from shareholders
[00:22:41] Mike Kaput: and major people on the board, I would guess, about what are we doing with this. Because probably all those people are not always perfectly attuned to the multiple decades of research and context that, you know, Facebook and Google have contributed
[00:22:57] Paul Roetzer: to this field. I guarantee you that. I mean, I've been talking to venture capitalists, friends within tech companies.
[00:23:03] Paul Roetzer: I know for a fact that they're getting questioned by the boards and questioned by the executive teams about what's going on, what's our play with AI. And the challenge oftentimes there is, like, the board members and the stakeholders who are challenging them, they're just seeing all the buzz about ChatGPT.
[00:23:17] Paul Roetzer: Yeah. Yeah. What are we doing with ChatGPT? It's like, that's not even the right question. What are we doing? What is our AI play? Like, what is our roadmap for AI infusion into our business? That's the right question, and that's not even getting asked. But you're going to see stuff like what's happening at Google, where these companies are going to have to come out and not only take credit for what they've been doing, they're going to have to reinvest. Like, they're going to have to somehow figure out a way in this current environment to amplify their investment in AI, to refocus their energy.
[00:23:46] Paul Roetzer: Like Meta. We've talked about it: Meta is an AI company. They've gotten distracted by the metaverse, you know, including changing their name to it. But my belief is, at the end of the day, Meta is an AI company with some of the most innovative AI researchers in the world. Yeah. Doing some stuff that the vast majority of people have no clue they're even working on.
[00:24:06] Paul Roetzer: And at some point that stuff's gotta come to light, and it has to become a bigger focus for them. So yeah, I don't know. I mean, there's like five, six major players here. Google is certainly... I would say Google probably was in the lead. They may still be in the lead with the stuff we haven't seen yet.
[00:24:23] Paul Roetzer: But, you know, Meta, Microsoft, Amazon, they weren't far behind. So yeah, I think a lot's going to play out this year, and you're going to see a lot of major announcements, and we may have to go to that two-times-a-week model to keep up with what's about to happen, especially
[00:24:40] Mike Kaput: over the next couple weeks with these earnings calls.
[00:24:42] Mike Kaput: So it seems like it's pretty clear there's about to be, or is currently, an arms race going on. So, you know, we don't know exactly how that will play out, but if I'm a business leader of any type of organization, what kinds of questions should I start asking about my strategy, my talent, my technology, given that we can reasonably assume some big things are coming down the line this
[00:25:05] Paul Roetzer: year.
[00:25:06] Paul Roetzer: I mean, the first thing you need to be thinking about is who in your organization can figure this stuff out. If you're lucky enough to have a Chief AI Officer or, you know, if you're a big enough company where you've got some AI talent, both the technical side and the business-minded AI talent, then great.
[00:25:20] Paul Roetzer: You're probably in a really good place, but you're also probably a bit of an anomaly. There are very few businesses that I talk to... we talk to big and small companies, funded companies, private companies, public companies... I've talked to very few where I walked away thinking, well, they've got their shit figured out.
[00:25:38] Paul Roetzer: Like, they know what's going on. It's a talent-first question, in my opinion. I think you have to have the right people in the room who can look at this stuff and try and figure out: how do we build a smarter version of our company? And that's what our advice always is: what does a smarter version of your business look like?
[00:25:56] Paul Roetzer: And let's pretend a competitor was coming for your business, like if you're Google and OpenAI is coming at you. How do you make every part of your business smarter? Where can AI help you in your marketing, sales, service, product, R&D, HR, finance, legal? And you're not going to do it all at once, but you want to build this roadmap that says, okay, over the next three to five years, we're prioritizing product first.
[00:26:20] Paul Roetzer: If we're a SaaS business: marketing, sales, service, ops. Like, you kind of set your priorities. But you need to figure out how to infuse AI into the business and build a smarter version, or somebody else is going to do it for you and take your spot. But that starts with having the right people in the room to figure this out.
[00:26:36] Paul Roetzer: And that's the hard part. Honestly, right now there aren't that many. So, like, you and I, we're building an emerging consulting practice for this exact purpose, because people don't know where to go for answers. And so we're starting to talk with large organizations, like, okay, how can we build these smarter entities?
[00:26:52] Paul Roetzer: And so I think a lot of our work moving forward is going to be about building these next-gen businesses. What does an AI-emergent company look like? Because there aren't very many people with the answers out there right now. And I think even then, the trick is going to be staying current on it.
[00:27:07] Paul Roetzer: Now, you could put a roadmap in place today that says, here's the next three years, what our business needs to do. Here's the 10 AI projects we're going to run over these next three years. And then three months from now, Microsoft or Google or Meta could make some announcement that blows it all up. It's like, oh, okay, well that just obsoleted three of our projects.
[00:27:24] Paul Roetzer: Now what do we do? So it is going to be this iterative and faster-moving iteration of a continuous AI roadmap, and that, to me, is the only way to build a business moving forward. So, like, I've actually been working behind the scenes on a smarter version of our Institute, like, what does this look like?
[00:27:41] Paul Roetzer: How do we infuse AI across our business model? Almost like building a roadmap as a beta for how we can do this for other organizations as well. But I think the only answer is you have to build an AI-emergent company, and you need the people in the room who know what that means and how to do it.
[00:27:55] Mike Kaput: Gotcha. So, switching gears slightly for our last topic here, we're going to talk briefly about the dark side of AI training. So a new investigative report came out that kind of talked about the dark side of what happens when we're training foundational AI models. A recent investigation by Time magazine found that OpenAI used outsourced Kenyan laborers who earned less than $2 per hour to make ChatGPT less toxic.
[00:28:30] Mike Kaput: So what that means is they had workers review and label large amounts of disturbing text. That includes violent, sexist, racist remarks, and a whole boatload of other, even worse things, in order to teach the platform what constituted an unsafe bit of language output. Now, some workers unfortunately reported pretty serious mental trauma resulting from this type of work, and eventually OpenAI
[00:28:58] Mike Kaput: and Sama, which is the outsourcing company in Kenya that was involved, suspended their relationship and stopped the work they were doing, due to both the damage to workers, but also they were getting a ton of negative press around these kinds of practices. And, you know, you had published about this as well in a LinkedIn post, and it seemed to really resonate with people.
[00:29:19] Mike Kaput: This just raises larger questions about how AI technology, the models we're discussing that are going to transform our businesses, how these are actually being trained. And you said, quote, there are people, often in faraway places, whose livelihoods depend on them exploring the darkest sides of humanity
[00:29:38] Mike Kaput: every day. Their jobs are to read, review, and watch content no one should have to see. So talk to me a bit about how this type of outsourcing and labor impacts the ethics of the AI industry as a whole.
[00:29:54] Paul Roetzer: This goes back to the point of ChatGPT was the shiny object that got everyone seemingly interested in artificial intelligence.
[00:30:03] Paul Roetzer: And on the surface it's awesome, like, oh, this can help us do these things. It has all these unknowns about the impact on our business and our team, but at the end of the day, like, this is magical, it's crazy tech. And that might be as far as a lot of people go, and they don't go further to understand: what is artificial intelligence, really?
[00:30:22] Paul Roetzer: How does this stuff even work? What is a language model? How is it trained? And then you start to kind of slide into things like this, the reality of the way artificial intelligence works. So this isn't new. Like, you know, when I put this up... yeah, I think I just said it in the post, like, I was debating even sharing it, because I feel like right now we're at the level where everyone's finally paying attention to artificial intelligence and becoming curious about it.
[00:30:51] Paul Roetzer: And this is going like zero to a hundred. Like, to take people immediately to, oh, do you know how they're trained? You're just going to get this backlash of, like, oh my gosh, I don't even want to know this stuff, because you're just trying to get into a topic you've been avoiding, generally, for the last few years if you're a business or marketer.
[00:31:11] Paul Roetzer: And so things like deep conversations around the ethics of AI, understanding the impact on the environment, understanding the impact on humans who have to train and label this stuff, like, those are heavy topics. And so I can see a lot of people just not wanting to have to deal with those topics right away.
[00:31:34] Paul Roetzer: Not that they're not critical and important, but my general feeling is you're going to get a lot of people just like, ah, I'm not going there right now. Like, I don't even want to know that. But I felt like this is a fundamentally critical topic, for people to realize how AI is trained. So you've been experiencing this for like a decade or more on social media. So if you go onto those platforms,
[00:31:57] Paul Roetzer: there are lots of really horrible things people post all the time on Facebook: photos, videos, stuff you don't even want to imagine humans think about or do, they put up all the time, sometimes live-streaming horrible things. The only way for AI to automatically detect and remove that stuff before it spreads is for it to learn it's bad.
[00:32:24] Paul Roetzer: The way it learns it's bad is a human watches it and labels it as bad. This is how social media has worked for a decade. So, again, I'm not going to go into graphic detail here, but imagine, like, the worst things humans could possibly do, the dark web of things. Someone has to tell the AI that stuff is bad.
[00:32:46] Paul Roetzer: The AI does not know; it doesn't understand these things like a human does. So it has to be trained over and over and over again, constantly labeling horrific things. And then someone at Facebook, not in a faraway place, sitting off in Silicon Valley, has to look at the training sets and say, did it work?
[00:33:06] Paul Roetzer: Did it get the thing off the internet we were supposed to get off the internet? So they're sending this content to faraway places for cheap labor, because it's lots of labeling, lots of horrible work that nobody really wants to do unless they have to do it. Then it comes back to the US, and they then have to vet: did it work?
[00:33:25] Paul Roetzer: Is the training working? And then stuff gets through, and then they have to go, like, why did it get through? So there are people in faraway places, and nearby if you're in the US, who have to look at this stuff all the time. Imagine, like, the mental impact that has on you. Like, I still have lines from that article,
[00:33:47] Paul Roetzer: things I read in that article, that I can't get out of my head a week later, just from, like, one line. Imagine having to actually read that stuff for nine hours a day and say, like, yep, bad, bad, bad. So this is how images are trained, it's how video is trained, it's how language is trained, and that's not going to change.
[00:34:06] Paul Roetzer: So I think, like, when I put it up, it was just an awareness thing. Like, I wanted people to start realizing that there is much more to the story than shiny objects that do magical things. There is an entire history behind how this stuff has worked, and there is an impact it has on humans today, not just job loss.
[00:34:27] Paul Roetzer: Like, there are people who have, as part of their compensation packages, like, mental health support, like seeing psychologists about the things they have to look at every day. And that is a really hard thing to comprehend for people who don't have to do this stuff all the time. So, you know, again, I think it's not like you can put something up like this and there's some action item that we can all take.
[00:34:52] Paul Roetzer: Like, I think someone made a comment in the post, like, wow, you know, maybe I won't use ChatGPT anymore. I was like, well, then stop using social media too. The point of this wasn't to say this is horrific and we shouldn't do it. The point of it was to raise awareness that this is what happens, and you can't just arbitrarily decide that OpenAI is evil because they're doing this, because they all do it.
[00:35:19] Paul Roetzer: And they have no alternative. I think that's the challenge. Should they pay them more? Absolutely. Should there be a greater focus on the human impact of this? A hundred percent. But you can't just pick one of these tech companies and say, well, they're evil because they're doing that, because they all do it.
[00:35:36] Mike Kaput: So if I'm a business leader using these tools, or considering it... I mean, obviously this is a bit upstream of, you know, the actual tool being used and the use case... is there anything I need to be thinking about here for my own company's or brand's usage of the technology, or is it more just kind of educating myself on the nuts and bolts of how
[00:36:00] Mike Kaput: AI is actually
[00:36:02] Paul Roetzer: working. I think it's going to be a more advanced part of the AI roadmap and scaling the use of AI in your organization. You're probably going to quickly need to arrive at a point where it's like, okay, we're buying this solution. What is the training data for this? How was it trained?
[00:36:17] Paul Roetzer: You know, I mean, this is probably not the right analogy, but, like, the diamond industry: where were they mined? What are the conditions under which the diamonds are mined? Like, that's not unlike this. It's like, okay, all right, I get it's a language model, okay, they're the proprietor of the language model.
[00:36:36] Paul Roetzer: How was it built, how was that language model trained, what labor was used to train the model? So you could imagine publicly traded organizations in particular, that have higher standards for this stuff, are probably going to start asking those sorts of questions within their RFPs. So I think as you look out ahead and you take a more robust approach to the integration of AI across your organization, you will likely start getting into these scenarios where you really have to understand the training data sets, and then where that training came from and how it was generated and things like that.
[00:37:08] Paul Roetzer: But I think it's probably early for a lot of organizations. I mean, again, you're buying an AI writing tool; your head of content marketing or, you know, lead editor, whoever, is going to go and assess some tools and buy a tool. They're not going to have a clue, right, how the language model learned or what human labor, you know, provided the labeling.
[00:37:27] Paul Roetzer: And they're not going to know to ask that stuff. So that's why I say it's just a more advanced topic, not any less critical a topic; it's just more advanced because there are not enough people in the industry who know to even ask the right questions.
[00:37:40] Mike Kaput: That's awesome. That's really, really helpful, I think. Paul, as always, thanks again for your time, your expertise.
[00:37:48] Mike Kaput: I think we've covered a lot, done a decent survey of AI this week until next week. But, you know, maybe we will get up to that two times per week in the future, we'll see. But in the meantime, thank you for answering all our questions and really educating the audience on the potential
[00:38:07] Paul Roetzer: of AI. And before we go, on the topic of content, we did make an announcement just yesterday.
[00:38:13] Paul Roetzer: The AI for Writers Summit is a virtual event on March 30th. So if you're trying to figure out the impact of all this stuff on writers, editors, content teams, it's just AIWritersSummit.com, you can go and check it out. And so it's going to be a half-day virtual event. It's free, courtesy of Writer, our sponsor for that event.
[00:38:34] Paul Roetzer: So I would suggest, if you're curious about this stuff... we're probably not going to go too deep into, like, labeling of language models in that. But we're going to have a State of AI and Writing keynote that I'm going to do, we're going to have a feature presentation from Habib at Writer about AI for content teams, and Handley's going to do one,
[00:38:52] Paul Roetzer: Everybody Writes, kind of the human side. Panel Q&A. So yeah, check out AIWritersSummit.com if you're interested in figuring out more about this stuff, and we'll share more about that as we go. Like I said, it just was announced yesterday, so hot off the presses. Yeah, man. Let's see what the week brings.
[00:39:10] Paul Roetzer: I'm sure, as we've been talking for the last 40 minutes, some stuff has probably happened that we need to cover. Yeah, no, good. All right. Thank you, everybody, for listening. The numbers are off the charts in terms of listenership for this show. It's been incredible. We love hearing from you.
[00:39:25] Paul Roetzer: Definitely reach out to us if you're a regular listener and, yeah, let us know what you like about it. Let us know what we can do better and we'll look forward to talking with you again next week. Thanks, Mike. Thanks Paul.
[00:39:36] Paul Roetzer: Thanks for listening to the Marketing AI Show. If you like what you heard, you can subscribe on your favorite podcast app, and if you're ready to continue your learning, head over to MarketingAIInstitute.com. Be sure to subscribe to our weekly newsletter, check out our free monthly webinars, and explore dozens of online courses and professional certifications.
[00:39:58] Paul Roetzer: Until next time, stay curious and explore ai.