

Facebook, AI and Fake News: What Marketers Need to Know



After a contentious U.S. presidential election, you’re likely hearing the term “fake news” a lot. What marketers might not realize is that fake news—and the fight against it—affects you.


That’s because sites you rely on daily to get out your marketing messages are employing artificial intelligence to fight fake news. And this application of AI has implications that go way beyond what shows up in your newsfeed. Here’s why.


What Is Fake News and How Is Facebook Involved?


Fake news is news that’s untrue or contains major factual errors. Numerous fake news stories gained massive traction during election season, Reuters reports, including one that falsely claimed the Pope had endorsed Donald Trump. After the election, Facebook came under fire for not doing enough to filter out these fake stories, which reached millions of readers and, some claimed, influenced the results.


Facebook’s Mark Zuckerberg said that it’s “extremely unlikely” fake news had an effect on the election. But the social network has also acknowledged its role as a “new kind of platform” with a “greater responsibility than just building technology that information flows through.”


Data seems to bear out that Facebook plays an outsized role in how U.S. users are informed. Sixty-six percent of Facebook users say they get their news from the site, according to Pew Research Center. Given Facebook’s reach, this is estimated to be 44% of the general population in the United States.


Given this reliance on Facebook for basic information, a failure to address fake news could profoundly impact the general public’s relationship with the truth about people, places, events and political outcomes. To address the problem, Facebook is experimenting with artificial intelligence. But this raises its own set of complicated questions.

Facebook and Artificial Intelligence


Facebook, Reuters reports, plans to use automation to battle fake news. It already uses automation to handle videos flagged as offensive by users. In 2015, TechCrunch reported, Facebook released an update to fight hoax stories that worked by penalizing stories flagged as fake by a large number of users. And in 2016, Facebook experimented with a machine learning algorithm to identify fake news.
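To make the idea of a "machine learning algorithm to identify fake news" concrete, here is a minimal, purely illustrative sketch of a headline classifier. It is not Facebook’s actual system; it assumes the scikit-learn library and a small set of hypothetical labeled examples.

```python
# Purely illustrative sketch of a text classifier for hoax headlines.
# NOT Facebook's actual system; training data below is hypothetical.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical training sample: headlines labeled 1 (hoax) or 0 (legitimate).
headlines = [
    "Pope endorses presidential candidate in shocking statement",
    "Local council approves new library budget",
    "Celebrity secretly replaced by clone, insiders say",
    "Stock markets close slightly higher after quiet trading day",
]
labels = [1, 0, 1, 0]

# TF-IDF turns each headline into word-frequency features; logistic regression
# then learns which word patterns correlate with the hoax label.
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(headlines, labels)

# Score a new headline: the output is a probability, not a verdict, which is
# one reason human review still matters.
print(model.predict_proba(["Pope endorses new tax plan"])[0][1])
```

A real system would train on millions of labeled examples and many more signals than headline text, but the basic shape is the same: features in, a probability out, and a human decision about where to set the threshold.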


Given the volume of posts published every second on Facebook, artificial intelligence’s ability to work far faster than humans and at scale seems like a natural fit to at least partially address this problem. But the use of AI to fight fake news raises some thorny questions. According to Reuters, Facebook’s director of AI research, Yann LeCun:


“...said in general news feed improvements provoked questions of tradeoffs between filtering and censorship, freedom of expression and decency and truthfulness.


‘These are questions that go way beyond whether we can develop AI,’ said LeCun. ‘Tradeoffs that I’m not well placed to determine.’”


Facebook’s experiments to fight fake news provide some important lessons for marketers who are interested in artificial intelligence’s potential.


Implications for Marketers


There are several important lessons to learn from the war against fake news so far.

1. Understand how non-traditional media companies work.


Facebook is a prime example of how murky the waters are around media, social media and digital distribution platforms. Sites like Facebook offer unprecedented reach and engagement opportunities, but they play by their own rulebook—a rulebook dictated by incentives that may not align with those of a marketer, entrepreneur or executive.


This isn’t good or bad; it’s simply reality. Marketers should act with that knowledge in mind when they build campaigns, strategies and recommendations. Facebook and Google in particular wield massive, arbitrary power over what content consumers see.

2. Understand that artificial intelligence solutions are rarely transparent.


Artificial intelligence technologies are exciting and many hold great promise for marketers. But these systems are made by humans and built with the assumptions of their creators. How algorithms and machine learning systems work is rarely (at least currently) transparent.


Marketers who evaluate AI tools must understand that these tools don’t always work as advertised. Even the ones that do may make suboptimal assumptions about their subject matter that impact results.

3. Remember that the future is a combination of AI and humans.


Facebook’s fake news efforts nicely highlight the need for robust human-AI collaboration to solve complex challenges. It’s a lesson marketers should keep in mind as they learn about and implement AI.


Possibilities abound when you focus less on how artificial intelligence and machine learning will automate jobs out of existence—and more on how these technologies can augment the tasks marketers already do.
