I am a data detective who helps customers solve their data mysteries.
Artificial intelligence is the science of helping computers perform tasks that mimic human intelligence using mathematics.
I disagree. Mathematics, which is the core of AI, is the most profound thing humanity has ever worked on. And we’re still discovering and rediscovering mathematics, even today. Just a few years ago, modern scientists finally decoded some Babylonian clay tablets which have a very different, more accurate interpretation of trigonometry than the Greek-derived trigonometry. We still have so much to learn, and it’s those discoveries that have knock-on effects in data science, machine learning, and AI.
Consider that much of what we’re doing in AI today, in application, has its roots in research from decades ago, as far back as the 1940s and 1950s.
I started my journey in the early 2000s, when I was working at a financial services startup and digital marketing analytics became relevant. I was a poor math student, so I had to relearn linear algebra, statistics, and probability over the years.
In 2013, I fully embraced the R programming language and started doing data science. By 2017, I was digging—painfully—into TensorFlow.
"AI is nothing more than cooking, metaphorically, and data is your ingredients. If your ingredients are rotten, it doesn’t matter how nice a kitchen you have or how talented a chef you are. Your meal will be bad."
AI offers unprecedented scale and capability to transform our otherwise unmanageable amounts of data into something we can use, a problem we’ve been facing for several decades now. So much of our data goes unused.
Humanity. Humanity is the problem; AI is just another tool that humans can use or misuse.
I’m particularly concerned about blind trust in AI and assumption of a lack of bias. Machines learn from what we give them. If we give them biased data, they will learn biases and replicate them at massive scale. Consider healthcare data for African-Americans. Virtually none of it is useful, not because our data collection is bad, but because we have societal, systemic racism against African-Americans such that using existing data as training data would reinforce the poor health outcomes they already have.
That’s not a problem AI can solve; that’s a problem that only the people who train AI can know to look for and mitigate to the best of their abilities.
Something I say in all my keynotes: AI is math, not magic.
There’s no magic here. AI is just the application of mathematics to data at a very large scale.
Machines will be able to mimic behaviors, but not understand them. This is especially true of emotions, so anything that relies on emotions—empathy, judgment (the willingness to bend the rules), general life experience, relationships—is still likely to remain the domain of humans.
And frankly, anything we don’t understand as humans, our machines won’t understand. We still don’t understand emotions beyond empirical data and some neurological data, but our science of emotions is still early. We still cannot create life at all, nor do we understand it. We still don’t understand gravity.
Anything out of our reach is out of reach of our machines until we have data we can train them on.
I follow many of the academic labs, like the MIT machine learning lab, and prominent coders and researchers like Lex Fridman and Hadley Wickham. I also follow other practitioners, people like Dr. Hilary Mason, Shingai Manjengwa, Carla Gentry, and others. But most of all, I read research as much as I can.
The Lottery Ticket Hypothesis paper by Jonathan Frankle and Michael Carbin.
Their research indicates that they were able to find subnetworks within deep learning networks that were just as effective, but at 10% of the size. That’s incredible, because it offers a glimpse at what could be possible inside today’s networks. If there’s a subset network, then like a fractal, if you can just get the right piece of the network, you could save 90% of your compute cost and still achieve the same results.
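The core pruning idea behind the paper can be sketched with one-shot magnitude pruning—a simplified stand-in for the iterative prune-and-retrain procedure the authors actually use. The weight matrix, sizes, and the 10% figure here are purely illustrative:

```python
import numpy as np

# Hypothetical weight matrix standing in for one trained layer
# of a deep network (values are illustrative, not real weights).
rng = np.random.default_rng(42)
weights = rng.normal(size=(100, 100))

# Magnitude pruning: keep only the largest-magnitude 10% of weights
# and zero the rest -- a candidate "winning ticket" subnetwork.
keep_fraction = 0.10
threshold = np.quantile(np.abs(weights), 1 - keep_fraction)
mask = np.abs(weights) >= threshold
pruned = weights * mask

# Roughly 90% of the weights are now zero, so in principle most of
# the storage and compute for this layer could be skipped.
sparsity = 1 - mask.mean()
```

In the paper itself, pruning is applied iteratively and the surviving weights are rewound to their original initialization and retrained—that retraining step, not the pruning alone, is what the Lottery Ticket Hypothesis is about.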
You can read that paper here.
"Engineers are notorious for being bluntly honest, and when you talk to a company that’s the real deal, they’ll feel comfortable letting you chat with the folks in the machinery. A company that has something to hide won’t ever let you talk to an engineer unsupervised, if at all."
I’d make that poor bastard sit down and learn mathematics, computer science, psychology, and anthropology.
That’s such a difficult question to answer, because we don’t know what the future looks like, other than that a substantial amount of any kind of repetitive work will be done by machines.
The Brookings Institution said it best: AI will take tasks, not jobs. But take away enough tasks and you can reduce headcount pretty substantially. I'd say follow the Bezos principle: focus on what doesn't change. What things in life will remain the same? People will always want better, faster, and cheaper. People will always choose pleasure over pain. People will always opt for the easiest path, the path of least resistance.
Learning psychology, anthropology, and mathematics will give you the right blend of hard and soft skills to make sense of the world.
Start small. Find a test case, something that matters but that you have the data for, and use that as the starting point for a pilot program. You’re not going to roll out a massive project at scale in the first go. Make sure you start easy, so that you can see where your people and process gaps are.
The biggest challenge by far is your data. Most organizations have absolutely terrible quality data. AI is nothing more than cooking, metaphorically, and data is your ingredients. If your ingredients are rotten, it doesn’t matter how nice a kitchen you have or how talented a chef you are. Your meal will be bad.
It’s essential for marketers to learn the basics and terminology of AI, to be able to ask insightful questions. Most vendors know that they can EASILY fool the average marketer, who has no defense against a fancy sales pitch because they literally have no idea what the technology can and can’t do.
Spend a lot of time listening to podcasts, reading blogs, and watching videos on YouTube so that you’re comfortable with the lexicon and the conceptual ideas in AI like regression, classification, dimension reduction, etc.
Then, ask the vendor of choice to spend some time with one of their engineers without a salesperson present. Engineers are notorious for being bluntly honest, and when you talk to a company that’s the real deal, they’ll feel comfortable letting you chat with the folks in the machinery. A company that has something to hide won’t ever let you talk to an engineer unsupervised, if at all.
Basic analytics. There are agencies out there who have junior people literally copying and pasting data from one reporting system to another. Those jobs are going away, period. They should have gone away five years ago.
Have ethics as people and companies. You can’t create ethical AI if you’re an unethical company.
Personalization is all about two things: behavior and consent.
First, give customers the ability to offer consent in clear, meaningful ways. That will shape what they give you. Second, focus less on PII (personally identifiable information) and more on the behaviors people exhibit.
A classic example of this is Bronies. People make the assumption that only girls aged 8-11 would care about My Little Pony, but there's an entire market segment of 26- to 40-year-old men who are deeply in love with the My Little Pony franchise. If you focus on the demographic, you'll miss a profitable market segment. If you focus on the behavior, you'll sell to the people who want to buy, no matter who they are.
They can’t. Brands aren’t human. Humans are human. Brands can get out of the way as much as possible and let their human employees behave like humans instead of crappy, flesh-based robots.
"Have ethics as people and companies. You can’t create ethical AI if you’re an unethical company."
Four steps.
First, get tools. Use free, open source software like R or Python, and the IDE of your choice.
Second, get knowledge. Take a course; I'm fond of IBM's CognitiveClass.ai portal, which offers completely free classes and even little badges of completion.
Third, get experience. Once you know what you're doing, try out competitions on Kaggle and put your skills to the test on real-world data.
Fourth, get more experience. Get an internship, do a project or three for free, and build up your portfolio.
WALL-E. It's a good look at robotics and also a potential outcome for humans.
None at present.
My own, which I write.
AI for Marketers: An Introduction and Primer.
AI is math, not magic. I can’t say that enough. Once you understand the technologies and what they do, you’re much more aware of what the limitations and possibilities are. You can’t ever create something from nothing; everything in AI comes from data that it has been trained on.
Most important, when you understand how AI works, you know when it is and isn’t applicable. Just as a frying pan isn’t terribly good for making soup, AI solves some problems really well and some problems really poorly. It’s not magic, it’s not a panacea, and it won’t make non-AI problems (like culture, people, broken processes, etc.) go away—it’ll make them worse.
Think of AI like coffee—it makes things go faster. Which also means that it will make mistakes faster and worse if your underlying data and assumptions aren’t correct.