Welcome to our Marketing AI Conference (MAICON) 2021 speaker series. We’ll introduce you to our speakers, tell you a little about why they’ll be at MAICON, what resonated with us in articles or resources of theirs, and how to connect with them prior to the event.
Karen Hao is the senior AI editor at MIT Technology Review, covering the field’s cutting-edge research and its impacts on society.
She writes a weekly newsletter called The Algorithm, which was named one of the best newsletters on the internet in 2019 by The Webby Awards.
Her work has also won a Front Page Award and been short-listed for the Sigma and Ambies Awards. Prior to joining the publication, she was a tech reporter and data scientist at Quartz and an application engineer at the first startup to spin out of Google X. She received her B.S. in mechanical engineering and minor in energy studies from MIT.
Longtime Marketing AI Institute followers and MAICON 2019 attendees know Karen well. During her 2019 keynote at MAICON, she helped the audience answer the question, “What is artificial intelligence?” The question may seem basic, but the answer can be complicated.
In the broadest sense, AI refers to machines that can learn, reason, and act for themselves. They can make their own decisions when faced with new situations, in the same way that humans and animals can.
But, what would have been considered AI in the past may not be considered AI today. Because of this, the boundaries of AI can get really confusing, and the term often gets mangled to include any kind of algorithm or computer program.
Watch Karen’s keynote above for a great primer as we prepare for MAICON 2021. In it, you'll learn how to determine whether something actually uses AI by asking a series of simple questions.
Some of Karen Hao's Work
Big Tech’s guide to talking about AI ethics
An excerpt: AI researchers often say good machine learning is really more art than science. The same could be said for effective public relations. Selecting the right words to strike a positive tone or reframe the conversation about AI is a delicate task: done well, it can strengthen one’s brand image, but done poorly, it can trigger an even greater backlash.
Why we like it: Hao identified some real issues in the AI space—moving too quickly, not thinking about our customers or the ethics of AI, and more. We love this post and its definitions of 50+ terms because, quite simply, “it’s funny because it’s true.” Take a look and see if you recognize yourself or your company in any of these terms. Then consider some deeper conversations on how to set yourself on a better course.
How Facebook got addicted to spreading misinformation
An excerpt: Zuckerberg’s obsession with getting the whole world to use Facebook had found a powerful new weapon. Teams had previously used design tactics, like experimenting with the content and frequency of notifications, to try to hook users more effectively. Their goal, among other things, was to increase a metric called L6/7, the fraction of people who logged in to Facebook six of the previous seven days. L6/7 is just one of myriad ways in which Facebook has measured “engagement”—the propensity of people to use its platform in any way, whether it’s by posting things, commenting on them, liking or sharing them, or just looking at them. Now every user interaction once analyzed by engineers was being analyzed by algorithms. Those algorithms were creating much faster, more personalized feedback loops for tweaking and tailoring each user’s news feed to keep nudging up engagement numbers.
Why we like it: When the general public thinks about AI, especially the negative connotations of artificial intelligence in their lives, Cambridge Analytica is often at the forefront. This article examines Facebook’s intentions, what happened, and what’s next. This will be a major talking point of Hao’s keynote at MAICON 2021.
We need to design distrust into AI systems to make them safer
An excerpt: The negatives are really linked to bias. That’s why I always talk about bias and trust interchangeably. Because if I’m over-trusting these systems and these systems are making decisions that have different outcomes for different groups of individuals—say, a medical diagnosis system has differences between women versus men—we’re now creating systems that augment the inequities we currently have. That’s a problem. And when you link it to things that are tied to health or transportation, both of which can lead to life-or-death situations, a bad decision can actually lead to something you can’t recover from. So we really have to fix it.
Why we like it: We talk at the Marketing AI Institute about how AI can give you superpowers, and how we can grow smarter with AI. However, we need to think about bias, ethics, and “excessive faith,” as Ayanna Howard says in this article. It’s an important conversation that needs to continue; it is not a one-and-done. With marketers using AI-powered technology more and more, our marketing AI community is in a position to pave the way for responsible AI in our industry.
Learn More About Karen Hao
- Follow Karen on Twitter
- Follow Karen on LinkedIn
- Read Karen’s work at MIT Technology Review (subscription required)
Karen Hao's Keynote at MAICON 2021
Responsible AI: Ethics, Innovation, and Lessons Learned from Big Tech
In this conversation with Karen Hao, senior AI editor, MIT Technology Review, we explore the ethical development and application of AI. Drawing on her expansive research and writing, Hao offers an inside look at the policies and practices of major tech companies, and shares lessons learned that you can use to ensure your company’s AI initiatives put people over profits.
Join us at MAICON 2021 on September 13-14, 2021 to hear Karen and 20+ other AI and marketing leaders. Prices increase on August 6, so secure your pass today. BLOG20 saves 20% off current prices.