The simplest way to understand artificial intelligence is to think about it as a set of technologies and algorithms that are designed to make machines smart, to give them humanlike capabilities (e.g. vision, hearing, speech, writing, understanding, movement).
Specifically, machine learning, a primary type of AI, makes machines better at making predictions.
Sometimes, what may seem on the surface like a simple prediction can have an immeasurably profound impact on the future.
For example, the effort to build truly autonomous vehicles, which would transform society and save millions of lives, is insanely complex. However, when you break it down to its most basic goal, companies like Tesla are trying to build AI systems that predict what a good, focused human driver would do.
So the autonomous system doesn’t have to be explicitly programmed with what to do in every situation; it just needs to learn, through billions of miles of training, “what would a human driver do?”
How about writing? How could an AI system learn to write as well as, or better than, a human?
Well, in theory, it would simply need to continuously predict the next word based on what has already been written.
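To make that idea concrete, here is a minimal sketch of next-word prediction using the openly released GPT-2 model via the Hugging Face transformers library. The library, the "gpt2" model name, and the example prompt are illustrative assumptions for this sketch, not details from the article.

```python
# A minimal sketch of next-word prediction with the open GPT-2 model.
# Assumes the Hugging Face "transformers" library and PyTorch are installed.
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")

prompt = "The meeting has been moved to"
inputs = tokenizer(prompt, return_tensors="pt")

with torch.no_grad():
    # The model outputs a score for every word in its vocabulary
    # at every position in the prompt.
    logits = model(**inputs).logits

# The scores at the final position are the model's guesses for the next word.
next_token_id = logits[0, -1].argmax()
print(tokenizer.decode(next_token_id))
```

Repeating this step, appending each predicted word and predicting again, is how a model like this continues a sentence the way Gmail's suggestions do.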
Sounds crazy, but if you use Gmail, you see it done every day when the system suggests words to finish your sentences. Or, when you reply to text messages using the suggested responses on your iPhone.
There aren’t humans hiding away somewhere at Google and Apple feverishly predicting what you’ll want to say next and sending you recommendations. No, it’s AI. You don’t care as a consumer that it’s AI, but you sure do appreciate the convenience it provides.
And that’s just the beginning. There is a race to train AI systems to generate human language at scale. When achieved, the implications, both good and bad, are immense.
OpenAI, an AI research company founded as a non-profit and backed by the likes of Elon Musk, Peter Thiel and Reid Hoffman, builds AI models to do just that.
It started with GPT and GPT-2, AI language generation models that automatically produced human-sounding language at scale. GPT-2 wowed the world when it was released in 2019 with its ability to construct long-form content in different styles after being trained on huge amounts of content from the internet.
The GPT-2 model had such substantial implications for malicious use that OpenAI originally chose not to release the trained model. The organization hoped that by limiting the release it would give the AI community more time to discuss the larger effects of such systems.
Yet, in May 2020, OpenAI introduced a dramatically more powerful model called (predictably) GPT-3.
GPT-3 is able to produce human-like text.
In early experiments, the model has been used to produce everything from coherent blog posts to press releases to technical manuals, often with a high degree of accuracy. To do that, GPT-3 uses 175 billion parameters in its language model, compared to GPT-2's 1.5 billion.
It's still early days for GPT-3, and the validity of the model hasn't been fully explored. But one thing should give marketers pause:
The speed of improvement in OpenAI's language models.
The first GPT model came out in 2018. GPT-2 was released with greatly expanded capabilities in 2019. Just a year later, GPT-3 uses more than 100x as many parameters as its predecessor and is beginning to display incredible content creation capabilities, including turning text into code and evaluating investment memos.
This technology raises major opportunities and challenges for marketers.
Depending on whether and how the technology is commercialized, brands may be able to build AI-powered content programs at scale. They may also be able to dramatically reduce the costs associated with content creation.
But, brands will also have to be wary of bias that comes with AI content models. It's all too easy for AI models to accidentally generate discriminatory or offensive content. (GPT-3 has already run into this issue.) Content at scale sounds great on paper, but becomes difficult to police in practice.
Not to mention...
What happens to content creators when AI can automatically generate human-like content at scale?
We're optimistic AI will create more jobs than it makes obsolete in the marketing industry at large. But professionals who predominantly create content may need to reevaluate their roles and skills, should this technology become widely commercially available.
The full story of GPT-3 is just beginning, and much is still unclear. But it provides the starkest example yet of just how powerful certain types of AI have become—and how they could seriously impact brands and marketers.
Editor’s Note: This post was originally published in March 2019. It has been updated and republished in August 2020 to include GPT-3 information. Mike Kaput contributed to the updated article.