AI needs to be held accountable—by people and other AI.
That’s the takeaway from Zeze Peters, an actual rocket scientist and founder/CEO of Beam.city, in his talk at the Marketing AI Conference (MAICON) 2021.
In it, Peters shares how businesses can begin to hold AI tools and systems accountable, and prevent serious issues down the line.
There’s no question some very smart people are scared of AI’s potential to go wrong, says Peters. He cites Elon Musk’s fears that the “danger of AI is much more than the danger of nuclear warheads.”
Much of the fear around AI comes down to explainability. Right now, many AI systems, even dangerous ones, are “black boxes.” We can’t see inside them to fully understand how or why they make decisions. Because AI can devise its own pathways to goals, even the engineers who build these systems don’t always know why they do what they do.
We need to make AI understandable by opening the black box, says Peters. There are four key ways to do that.
In some advanced cases, says Peters, you may actually need a second AI to audit your first AI system and keep it in check. (The idea is loosely analogous to generative adversarial networks, in which two models improve by checking each other’s outputs during training.)
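Peters doesn’t get into implementation details, but the auditing idea is simple enough to sketch. In the toy Python below, every model, name, and threshold is invented for illustration, not taken from the talk: a “primary” model scores customers, and a second “auditor” model independently re-scores them and flags big disagreements for human review.

```python
import random

def primary_model(customer):
    """Pretend first AI: scores a customer's likelihood to convert (0-1)."""
    return min(1.0, customer["visits"] * 0.1 + random.uniform(0, 0.3))

def auditor_model(customer, primary_score):
    """Pretend second AI: re-scores independently, returns the disagreement."""
    independent_score = min(1.0, customer["purchases"] * 0.2)
    return abs(primary_score - independent_score)

# Disagreement above this invented threshold triggers human review.
FLAG_THRESHOLD = 0.4

customers = [
    {"id": 1, "visits": 8, "purchases": 0},
    {"id": 2, "visits": 2, "purchases": 3},
]

for c in customers:
    score = primary_model(c)
    disagreement = auditor_model(c, score)
    if disagreement > FLAG_THRESHOLD:
        print(f"Customer {c['id']}: score {score:.2f} flagged for review "
              f"(disagreement {disagreement:.2f})")
    else:
        print(f"Customer {c['id']}: score {score:.2f} accepted")
```

The point of the sketch isn’t the scoring math, which is made up; it’s the structure. One system makes decisions, a separate system checks them, and a human sees the cases where the two disagree.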
This may be a lot for non-technical business leaders and marketers to take in. But you need to start building this understanding so that your AI adoption efforts benefit, rather than harm, your business.
Even if you’re starting from square one, you can get your bearings by asking a simple question, then finding people who can help you answer it:
What is your AI, and what does it do?
PS - Have you heard about the world’s leading marketing AI conference? Click here to see the incredible programming planned for MAICON 2022.