First, a report in The Wall Street Journal kicks off by saying:
“Companies that simply use generative AI—say, by using OpenAI’s tech as part of a service delivered to a customer—are likely to be responsible for the outputs of such systems.”
And it seems like the legality of the output of these systems is seriously in question. Another report in The New York Times reveals:
“OpenAI, Google, and Meta ignored corporate policies, altered their own rules, and discussed skirting copyright law” in order to train their models. The article then cites several ways these companies acquired potentially copyrighted data.
How should business leaders navigate the legal and ethical minefields around generative AI?
I got the answer from Marketing AI Institute founder/CEO Paul Roetzer on Episode 91 of The Artificial Intelligence Show.
The first thing to know is this:
If you're in the United States, you don't own the copyright to anything created solely by generative AI. At least, that's according to the latest guidance from the US Copyright Office.
That means you cannot copyright content generated entirely by AI. Prompting a system alone is not enough to establish human authorship.
Now, this could change. But, as of today, it's the guidance provided by the US government.
“To my knowledge, I don’t believe they have updated any additional guidance or given any additional indications about how copyright law may evolve in the United States,” says Roetzer.
So, if you need to copyright something in the US, you can't rely on generative AI alone to create it.
Second, you may need to consider the legal risks of using these tools.
It's possible courts will determine that OpenAI, Google, and others trained models illegally. That could then make customers who use those tools liable for the outputs.
Some big tech companies are already trying to get ahead of this. Microsoft and Google, for instance, say they'll cover your legal costs if you're sued for copyright infringement over outputs from their generative AI tools.
“I don’t know if that’s going to make your legal department feel much better about it, though,” says Roetzer.
The New York Times story makes it clear that each of these companies cut corners when training their models. OpenAI used its Whisper speech-to-text tool to transcribe YouTube videos into training data. Google trained models on YouTube content that belongs to its creators, not to Google. And Meta executives said openly that negotiating usage rights with publishers would take too long.
“So basically everyone is violating all of these rights and just sort of going full steam ahead,” says Roetzer.
It helps to remember:
We still have to wait for courts to actually decide these things.
Roetzer thinks it's likely the companies will pay fines and we'll all move on. But that's not the point, he says. The point is that you need to start planning for the legal implications of generative AI today.
How do you start doing that?
The first step: Get legal involved.
“This isn’t something you’re going to solve on your own,” says Roetzer. “You’ve got to have the lawyers in the room.”
Second, you need generative AI policies.
You need policies for both your internal team and all your service providers. They should extend to anyone you contract with to create content, and existing contracts should be reviewed as well.
Remember: if your contractor is using AI to create content, you can't copyright that content. And agencies or contractors may not even know themselves how much AI is being used in their work.
“Our experience has been that a lot of agencies and freelancers are just trying to figure this stuff out,” says Roetzer.
Roetzer says to ask yourself three big questions when creating generative AI policies:
The good news? There are plenty of generative AI policy templates out there to get you started.