When people like entrepreneur Elon Musk (@elonmusk), Oxford philosopher Nick Bostrom and Demis Hassabis (@demishassabis), cofounder and CEO of Google's DeepMind, get in a room together, you pay attention.
Musk, Bostrom, Hassabis and six other top minds in the field of AI gathered on-stage at the Beneficial AI 2017 Conference on January 7 to discuss the potential and peril of cognitive machines. The conference was hosted by the Future of Life Institute, an organization created by top academics and businesspeople (including Jaan Tallinn, a cofounder of Skype) to safeguard life in a world where technology develops at a breakneck pace.
The conference covered topics like how law will be affected by AI and the possibility of human-level artificial intelligence. As part of those discussions, a panel was conducted on the topic of “superintelligence,” or an intelligence “much smarter than the best human brains in practically every field, including scientific creativity, general wisdom and social skills,” according to Bostrom, who studies the subject.
Superintelligence is the next stage of AI evolution after what's called artificial general intelligence, or AGI. AGI is artificial intelligence that is as intelligent as, or more intelligent than, a human being across the board. Right now, neither AGI nor superintelligence exists. All we have is artificial narrow intelligence (ANI): systems that outperform humans in one or several areas, but not in every area.
Learn more about key artificial intelligence definitions here.
Even though we don't have AGI or superintelligence, these are serious discussion topics among AI experts, some of whom read this blog. While an AI thousands or millions of times smarter than people can sound like a purely philosophical thought experiment, it's a real concern for some, including members of the Beneficial AI 2017 Conference panel.
Given the topic’s value—and its current and future implications for real-world AI applications used by marketers, executives and entrepreneurs—we wanted to see what we could learn from the panel and have extracted some of the most valuable takeaways below. You can watch the full video of the panel here.
Superintelligence: Science or Fiction?
The Beneficial AI 2017 Conference panel on superintelligence included the following top names in AI and computer science: Elon Musk, Stuart Russell, Bart Selman, Ray Kurzweil, David Chalmers, Nick Bostrom, Demis Hassabis, Sam Harris and Jaan Tallinn, with Future of Life Institute cofounder Max Tegmark moderating.
Together, the panel discussed questions about the likelihood of superintelligence. Is it even possible given the laws of physics? If it's possible, when is it coming? What happens if or when it does?
When surveyed about the likelihood of superintelligent AI, every respondent, tellingly, deemed it a possibility. This alone is worth considering. The viewpoint is based on several assumptions. The result is potentially a superintelligence with vast power to reshape our entire planet, or, as some theorize, to put it in existential danger. Whatever the outlook, these nine minds on-stage firmly believe there is no reason why superintelligence shouldn't be possible.
This is why initiatives like OpenAI and the Beneficial AI 2017 Conference exist: if superintelligence is a question of when, not if, preparations to handle its development must begin now.
Preparing for Superintelligence
As noted on the panel, just because superintelligence is possible does not mean it will happen. As with preparing for a catastrophic asteroid strike on Earth, the probability may be incredibly small, but the consequences are so dire that the possibility merits serious thought. Panelists were asked "Will it actually happen?" and given the choices of "yes," "no" and "it's complicated."
Every panelist answered "yes" without hesitation, except for a clearly joking Musk and Bostrom, who said "probably." Harris added one caveat: the only way he thought it would not happen was if a catastrophic, world-changing event prevented the rise of superintelligence.
On the question of how long it might take to go from human-level AI to superintelligence, the panel disagreed. Several panelists thought years would pass between human-level AI and superintelligence; others thought the jump could happen within days of AI reaching human-comparable capabilities.
Selman, however, made the important point that AI is not one technology, but a suite of related and connected disciplines: "I think we'll go beyond human-level capabilities in a number of areas, but not in all at the same time," he noted. "It will be an uneven process."
The debate over superintelligence highlights two important truths for marketers, executives and entrepreneurs:
Then again, you might not have to worry about your business if Musk is right:
“I think if [artificial intelligence] reaches a threshold where it’s as smart as the smartest most inventive human, then I mean it could be only a matter of days before it’s smarter than the sum of humanity.”
There’s plenty more where that came from. Watch the whole video for even more insight into the possibilities and perils of superintelligence.