On October 4, 2022, the White House announced its “AI Bill of Rights,” a blueprint outlining the “five principles that should guide the design, use, and deployment of automated systems.”
What it is
The Blueprint for an AI Bill of Rights, released by the White House’s Office of Science and Technology Policy, outlines five principles for responsible AI development and use:
- Safe and effective systems, or the right to be protected from AI systems that cause harm or malfunction.
- Algorithmic discrimination protections, or the right not to be discriminated against by the decision-making logic of the algorithms that power AI systems.
- Data privacy, or the right to control how your data is used and be protected from its misuse.
- Notice and explanation, or the right to know how an AI system works and why it does what it does.
- Human alternatives, consideration, and fallback, or the ability to opt out of an automated system and access a human alternative where appropriate.
The AI Bill of Rights also:
- Offers guidance on how to put the principles into practice.
- Is not (at the moment) legally binding.
Why it matters
In Episode 20 of the Marketing AI Show, Marketing AI Institute founder/CEO Paul Roetzer and I unpacked the implications of this major artificial intelligence milestone:
- Consumers often don’t understand AI’s power and impact. “AI is just everywhere in our lives today and the average consumer has no clue how it works or what underpins the technology,” Roetzer told me. We need help understanding what responsible AI looks like.
- Tech companies don’t have all the answers. The burden of building and using AI responsibly falls on technology companies, which don’t always have incentives to build systems that prioritize people over profit. While the AI Bill of Rights isn’t legal regulation, it signals that policymakers are looking to develop and inform points of view on responsible AI.
- And AI is moving fast. In the near future, every single business will use AI in some way or become obsolete, says Roetzer. Business leaders need to get up to speed now on how AI can—and will—impact their companies and customers.
How to apply it
Business leaders should think about applying the AI Bill of Rights in several ways:
- Read the AI Bill of Rights, or an in-depth summary, to understand the five core principles it outlines and what they mean for your business.
- Use it to inform an AI ethics policy. An AI ethics policy communicates to consumers and employees that you’re seriously formulating a point of view on the risks and rewards presented by this technology.
- Get the book. We talk more about how to establish an AI ethics policy in our book, Marketing Artificial Intelligence: AI, Marketing, and the Future of Business.
- Don’t delay. “Marketers and business leaders cannot afford to push this off for a few more years,” says Roetzer. “That’s too late.” Rapid advances in AI, especially around generative AI like DALL-E 2, are raising thorny legal and ethical questions your business needs to start resolving.
Final thoughts
“This is the kind of stuff you have got to be thinking about from the ground up and figuring out who within the organization needs to be involved in these conversations.
“To truly scale [AI], you’re going to have to become an AI emergent company where AI is infused into everything. And there’s going to be a lot of hard decisions that are going to have to be made.”
- Paul Roetzer, Founder/CEO of Marketing AI Institute