At the Marketing AI Institute, we read dozens of articles on artificial intelligence every week to uncover the most valuable ones for our subscribers and we curate them for you here. We call it 3 Links in 3 Minutes. Enjoy!
One of the top names in artificial intelligence, Kai-Fu Lee, venture capitalist and author of AI Superpowers: China, Silicon Valley, and the New World Order, has a lot to say about AI’s impact on the workforce. This week, he published an article in TIME that cuts through the hype. Here are the highlights.
First, AI can outperform humans at certain tasks, but this doesn’t mean it will be replacing all human workers. Lee lists four types of jobs that are not at risk: creative, strategic, empathetic, and the unknown jobs that will be created by AI.
For many marketers, at least part of our role falls into one of these categories. AI is expertly skilled at learning and optimizing on a level beyond human capability. However, it cannot invent like scientists and artists; it cannot strategize like executives, diplomats, and marketers; and it cannot react with compassion or babysit your children.
However, Lee reminds us there are certain risks that come with the adoption of AI. Inequality could be multiplied for economic classes and between countries. Security and privacy will create many challenges moving forward and must be prioritized by governments and companies.
According to Microsoft’s CTO Kevin Scott, understanding AI is part of being an informed citizen of the 21st century.
In an interview with VentureBeat, Scott explained, “You don’t want to be someone to whom AI is sort of this thing that happens to you. You want to be an active agent in the whole ecosystem.”
However, he admits that staying up to date with the rapidly changing environment can be a challenge.
One aspect of AI that has drawn significant interest from the general public is facial recognition technology—and the government’s involvement with it. Just last week, the American Civil Liberties Union (ACLU) called for all major tech companies to refrain from sharing facial recognition technology and findings with governments, to prevent religious and ethnic discrimination.
In the interview, Scott also calls for AI experts to do more to educate people on the positive outcomes that can stem from technology like facial recognition software. For example, he shares how it can be used to improve building security, identify who’s in a meeting, or verify that people handling dangerous machinery are certified to do so.
If you want to stay up to date on the happenings of artificial intelligence and machine learning, you should read Scott’s forthcoming book (the title is still to be announced). When writing the book, Scott framed it as if he were defining AI for his grandfather, “a former appliance repairman, farmer, and boiler room mechanic during World War II.”
The World Economic Forum (WEF) is in full gear this week, and it’s making serious moves on the subject of artificial intelligence. Besides forming an AI council, the WEF is encouraging citizens around the world to stop just talking about AI and ethics and start acting.
In an official post by Dharmesh Syal, chief technology officer of BCG Digital Ventures, he makes the case for the importance of ethics in a swiftly evolving landscape: “The good news is, it’s not too late; we’ve only seen a glimpse of what AI is capable of. The only way to make sure we don’t create a monster that could turn against us is to incorporate ethical safeguards into the architecture of the AI we’re creating today.”
To do so, he offers three strategies for anyone building AI. First, include humans in any sensitive scenarios—algorithms are more accurate when they employ a “human-in-the-loop” (HITL) system. Second, put safeguards in place so machines can self-correct, to avoid another episode like Facebook’s fake news problem. And, lastly, create an ethics code. As Apple CEO Tim Cook put it best, “The best regulation is self-regulation.”