AI can now flawlessly fake everything from celebrity endorsements to "live" footage of catastrophic events. And the scariest part?
It's just getting started.
In fact, telling what’s real and what’s not is increasingly difficult in today’s world of hyper-realistic deepfakes and synthetic content.
But some leading AI companies, rivals though they are, are attempting to fix the problem.
In the past couple of weeks, Meta, OpenAI, and Google have announced they’ll join Microsoft, Adobe, and others in embracing Content Credentials, a technical standard for media provenance from C2PA.
C2PA (short for Coalition for Content Provenance and Authenticity) is a standards organization working with more than 100 member companies on ways to identify where online content comes from.
The C2PA standard (named after the organization) enables publishers and companies to embed metadata into media to verify its origin. That metadata can reveal, for instance, whether an image was created with an AI tool.
(For example, you can now view additional metadata in any image generated by ChatGPT’s DALL-E 3 capabilities or the OpenAI API and see the AI tools used to generate it.)
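If you're curious what that looks like under the hood, here's a rough sketch, not official C2PA tooling, that simply checks whether a file's raw bytes contain the markers Content Credentials manifests are typically wrapped in. Real verification means parsing and cryptographically validating the manifest with the official C2PA SDKs or the open-source c2patool; the filename below is just a placeholder for illustration.

```python
# Crude heuristic only: looks for the byte markers that usually accompany an
# embedded C2PA manifest (a JUMBF box plus the "c2pa" manifest-store label).
# It does NOT validate the manifest or prove anything about the file's origin.
from pathlib import Path

def has_c2pa_markers(path: str) -> bool:
    data = Path(path).read_bytes()
    return b"jumb" in data and b"c2pa" in data

if __name__ == "__main__":
    # "generated.png" is a placeholder filename for illustration.
    print(has_c2pa_markers("generated.png"))
```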
And Meta is going even further than that.
The company says it’s “building industry-leading tools that can identify invisible markers at scale – specifically, the ‘AI generated’ information in the C2PA and IPTC technical standards – so we can label images from Google, OpenAI, Microsoft, Adobe, Midjourney, and Shutterstock as they implement their plans for adding metadata to images created by their tools.”
Could this initiative solve the deepfake problem?
That’s what I asked Marketing AI Institute founder/CEO Paul Roetzer on Episode 84 of The Artificial Intelligence Show.
Deepfakes are going to be a huge problem this year.
“My biggest concern for near term AI misuse is disinformation, misinformation, and synthetic content,” says Roetzer. “Because the average citizen has no idea that AI is capable of doing these things.”
The more advanced deepfakes become, the more likely they are to become a major societal problem, especially as many parts of the world enter election season in 2024.
But this approach has significant limitations.
AI leaders coming together matters, says Roetzer. But there are some issues.
“It’s critical that they’re doing something in a unified way, where they’re trying to find ways to address this. But it does appear that even this unified approach has massive limitations.”
Metadata used to identify AI-generated content can be inconsistently applied or easily removed. People can easily screenshot or screen-record AI-generated content. And social networks need to be willing to detect, flag, and remove this content.
Not to mention, open-source tools usually aren’t far behind commercially available ones, and they probably won’t come with guardrails, watermarks, or labels.
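To see how fragile that metadata is, consider this minimal sketch (filenames are placeholders): re-encoding an image with a common library like Pillow writes a fresh file from the decoded pixels, and embedded provenance data doesn't come along unless you explicitly copy it over, much like taking a screenshot.

```python
# A minimal illustration of how easily provenance metadata is lost:
# re-saving an image drops segments the encoder doesn't explicitly preserve,
# including embedded Content Credentials. Filenames are placeholders.
from PIL import Image

with Image.open("generated_with_credentials.png") as img:
    # save() re-encodes from the decoded pixels; EXIF/XMP/C2PA segments are
    # not carried over unless passed explicitly.
    img.save("stripped_copy.png")
```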
There’s only one way to really address this.
It’s good to have companies working on this, says Roetzer. But he doesn’t think any of their efforts will solve the problem in a uniform way.
“The only true way to address this is through AI literacy that makes people aware that this stuff exists and it’s possible to develop very realistic videos, images, and audio,” he says.
That’s a monumental task. We live in a society where people want to believe whatever fits their view of the world.
We have to accept that, moving forward, we live in a world filled with misinformation, disinformation, and synthetic media…
And we have to accept that we can no longer trust anything we see online without verification.