OpenAI just outlined some known issues with ChatGPT, and they have big implications.
In a blog post, OpenAI published thoughts on how AI should behave and who should decide how it behaves. They also revealed more details about how ChatGPT is trained.
The post makes one thing abundantly clear:
Even the builders of AI technology are behind when it comes to building more responsible AI. And the post suggests that we may soon be responsible for AI outputs ourselves.
In Episode 35 of the Marketing AI Show, Marketing AI Institute founder/CEO Paul Roetzer breaks down what's going on here.
The post was clearly strategic. OpenAI is likely getting hammered on AI safety topics, says Roetzer.
“They needed to try and take control of the narrative.”
There’s no question that OpenAI is trying to solve issues around how AI should behave. But it’s also clear the company doesn’t remotely have all the answers, either about how these technologies should behave or about who should decide.
Right now, there are a lot of human decisions going into what a tool like ChatGPT will and won’t show you. That inherently injects bias into the process, which OpenAI appears painfully aware of—and is taking public heat for.
The company outlines a few steps it can take to make these systems more responsible, including soliciting more public input into how the tool works. But even that doesn’t remove bias.
“There’s no easy answer here,” says Roetzer.
OpenAI alludes to one possible solution, and it’s one with big implications:
We believe that AI should be a useful tool for individual people, and thus customizable by each user up to limits defined by society. Therefore, we are developing an upgrade to ChatGPT to allow users to easily customize its behavior.
This will mean allowing system outputs that other people (ourselves included) may strongly disagree with. Striking the right balance here will be challenging–taking customization to the extreme would risk enabling malicious uses of our technology and sycophantic AIs that mindlessly amplify people’s existing beliefs.
“What they’re saying is that, rather than us trying to create a single system that everyone can use, we’re going to actually remove a lot of the guardrails from this thing,” says Roetzer.
That way, individuals can customize their guardrails to their own preferences.
This is certainly one possible solution. But it makes things messy very quickly once we start seeing a largely unbound ChatGPT out in the wild. It also means each of us, not OpenAI, may soon be responsible for policing the tool’s outputs.
As a result, Roetzer imagines governments will begin to get involved here.
Most business and marketing leaders don’t realize that OpenAI’s actual mission is to create artificial general intelligence (AGI). It’s not to create fun tools like ChatGPT.
That means the goal is to build general AI agents that act like humans and can do many human-like things, not just write a blog post or answer a question. All the major research labs are working toward this goal in one way or another.
In the process, we marketers and businesspeople get interesting tools to use in our businesses. But that’s not the point of the company. The point is to create human-like intelligence. Leaders should filter news about OpenAI through that lens.
You can get ahead of AI-driven disruption, and fast, with Piloting AI for Marketers, a series of 17 on-demand courses designed as a step-by-step learning journey for marketers and business leaders to increase productivity and performance with artificial intelligence.
The course series contains 7+ hours of learning, dozens of AI use cases and vendors, a collection of templates, course quizzes, a final exam, and a Professional Certificate upon completion.
After taking Piloting AI for Marketers, you’ll: