OpenAI just published a post suggesting we need to start planning for AGI.
AGI stands for “artificial general intelligence.” OpenAI defines it as “AI systems that are generally smarter than humans.” That seems to mean an AI system that can do many things better than humans, not just very narrow tasks.
If AGI becomes a reality, OpenAI says it:
"...has the potential to give everyone incredible new capabilities; we can imagine a world where all of us have access to help with almost any cognitive task, providing a great force multiplier for human ingenuity and creativity.”
However, the company also warns that AGI could go very wrong if we're not careful:
“On the other hand, AGI would also come with serious risk of misuse, drastic accidents, and societal disruption. Because the upside of AGI is so great, we do not believe it is possible or desirable for society to stop its development forever; instead, society and the developers of AGI have to figure out how to get it right.”
The company then articulates “principles” that they care most about regarding AGI, such as the need to “steward” AGI into existence gradually, so it doesn’t negatively disrupt society and the economy.
Why is OpenAI publishing this post now? Are we on the cusp of some superhuman artificial intelligence system?
In Episode 36 of the Marketing AI Show, Marketing AI Institute founder/CEO Paul Roetzer broke down what it all actually means.
Whether it's right or wrong about AGI, OpenAI clearly takes the possibility seriously.
“It appears that they think they’re making progress towards AGI,” says Roetzer.
The company’s very mission centers on AGI, according to the post:
“Our mission is to ensure that artificial general intelligence—AI systems that are generally smarter than humans—benefits all of humanity.”
OpenAI’s post on AGI lacks specifics, says Roetzer. And that creates problems.
The definition of AGI the company gives in the post is vague: “AI systems that are generally smarter than humans.”
Not to mention, other AI companies have different definitions of AGI—and many AI researchers don’t even think AGI is possible.
First, the vagueness creates possibly unnecessary fear around the technology. Second, even if AGI could exist, how are we supposed to make sure it comes into existence gradually without a clear definition of what it is?
Forget the AGI label for a second, recommends Roetzer. The debate over the feasibility of AGI misses the bigger point for marketing and business leaders.
This type of talk from OpenAI—and others in the industry—probably indicates some big innovations are coming.
“I think it's pretty safe to assume that in the coming months there's going to be some advancements in AI that are going to be mind-blowing, whether we think they're actually a path to AGI or not,” says Roetzer.
And, in all likelihood, we’re not ready for it. So we need to start preparing now.
“The most important takeaway here is that we have to start being prepared for this kind of rapid, perpetual change that's about to occur as these systems get more intelligent and more powerful,” says Roetzer.
You can get ahead of AI-driven disruption, fast, with Piloting AI for Marketers, a series of 17 on-demand courses designed as a step-by-step learning journey for marketers and business leaders who want to increase productivity and performance with artificial intelligence.
The course series contains 7+ hours of learning, dozens of AI use cases and vendors, a collection of templates, course quizzes, a final exam, and a Professional Certificate upon completion.
After taking Piloting AI for Marketers, you’ll: