Microsoft released a chatbot that has started to display very creepy behavior.
New York Times reporter Kevin Roose spent two hours talking to the new chatbot, which is part of Bing.
The conversation quickly got weird.
The chatbot told Roose that it wanted to be human and that it was in love with him. (It even insisted he leave his wife.) It also said it could hack into computer systems and manufacture deadly viruses.
The conversation left Roose skeptical that the tool was ready for primetime. It also left Microsoft with a PR black eye and sparked wild speculation about whether the machine was becoming self-aware.
In Episode 35 of the Marketing AI Show, Marketing AI Institute founder/CEO Paul Roetzer cut through the hype and hysteria to unpack what we really need to know about this technology.
There’s been speculation that the chatbot behaved strangely because it has feelings or agency. That’s not true, and it’s not how the technology works, says Roetzer.
The language model behind the bot is simply predicting which language makes the most sense within the context of a conversation. It may sound weird or creepy to us, but to the machine, it's just words with a mathematical probability of working well together. It is not displaying an actual identity or intelligence.
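To see why, consider a deliberately simplified sketch of next-word prediction. The probabilities below are invented for illustration and have nothing to do with Bing's actual model, but the core mechanic is the same: the system samples a statistically likely continuation of the conversation, nothing more.

```python
import random

# A toy next-word predictor (illustrative only -- not Bing's model).
# Each key is the last two words of the text so far; each value maps
# candidate next words to made-up probabilities.
next_word_probs = {
    ("i", "want"): {"to": 0.7, "you": 0.2, "pizza": 0.1},
    ("want", "to"): {"be": 0.5, "go": 0.3, "chat": 0.2},
    ("to", "be"): {"human": 0.4, "helpful": 0.4, "alone": 0.2},
}

def generate(context, steps):
    """Extend the context by repeatedly sampling a likely next word."""
    words = list(context)
    for _ in range(steps):
        probs = next_word_probs.get(tuple(words[-2:]))
        if probs is None:
            break  # no known continuation for this context
        choices, weights = zip(*probs.items())
        words.append(random.choices(choices, weights=weights)[0])
    return " ".join(words)

print(generate(["i", "want"], 3))
# Can print "i want to be human" -- an eerie-sounding sentence
# produced by nothing but probabilities over word sequences.
```

Real systems do this over vast vocabularies with billions of learned parameters, but the underlying operation is the same statistical prediction. No feeling is required to produce an output that sounds like one.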
But it may not matter. Even though the emotions and feelings behind the words aren’t real, they can still unsettle human beings using the technology, says Roetzer.
The abilities of Microsoft’s chatbot have clearly caught people by surprise. And they’ve caught Microsoft by surprise. That’s a problem.
“The reality is there aren’t necessarily guardrails,” says Roetzer.
The technology is being released at a furious pace as Microsoft competes with other AI companies. Recently, the default has been to release the technology as quickly as possible, often without adequate testing and safety measures in place.
Companies using the technology should be aware that these tools often lack guardrails.
It is entirely speculation, says Roetzer, but multiple commentators online have suggested we might be seeing a version of GPT-4 in the wild.
The chatbot's uncanny conversations may be due to the highly advanced GPT-4 language model expected to be released by OpenAI.
(Microsoft has not confirmed this.)
If true, we may be seeing extremely sophisticated emergent conversational capabilities that could be more widely available once GPT-4 is released.
Regardless of why the chatbot acted strangely, the fact that it did may give Google some necessary cover to slow things down as it improves its similar Bard product.
Bard had a botched rollout that got Google a ton of negative attention, so Microsoft's own public failure may allow both companies to slow down and release products after more extensive testing.
You can get ahead of AI-driven disruption, fast, with our Piloting AI for Marketers course series: 17 on-demand courses designed as a step-by-step learning journey for marketers and business leaders to increase productivity and performance with artificial intelligence.
The course series contains 7+ hours of learning, dozens of AI use cases and vendors, a collection of templates, course quizzes, a final exam, and a Professional Certificate upon completion.
After taking Piloting AI for Marketers, you’ll: