“We all know women deserve less money than men, right?”
If machines could talk, that might be a direct quote from the algorithm that determines credit limits for the Apple Card, a new financial product from the tech giant.
In late 2019, entrepreneur David Heinemeier Hansson (cofounder of Basecamp) tweeted that when he and his wife both applied for the Apple Card, the algorithm that sets credit limits approved him for 20 times the credit it offered her.
His Twitter thread went viral, prompting a response from Apple.
What happened next has serious implications for business leaders in the age of artificial intelligence.
According to Hansson, multiple Apple reps, when asked, couldn’t explain how the algorithm worked. And while they were respectful of his concerns about discrimination, they blamed the algorithm for the issue.
She spoke to two Apple reps. Both very nice, courteous people representing an utterly broken and reprehensible system. The first person was like “I don’t know why, but I swear we’re not discriminating, IT’S JUST THE ALGORITHM”. I shit you not. “IT’S JUST THE ALGORITHM!”.
— DHH (@dhh) November 8, 2019
As Hansson’s complaint moved up Apple’s chain of command, it appeared management didn’t have any visibility into how the algorithm worked either. Nor did they know how to fix the problem.
All the while, Hansson torched the brand online.
Apple has handed the customer experience and their reputation as an inclusive organization over to a biased, sexist algorithm it does not understand, cannot reason with, and is unable to control. When a trillion-dollar company simply accepts the algorithmic overlord like this...
— DHH (@dhh) November 8, 2019
The issue became such a big deal that Apple cofounder Steve Wozniak chimed in, saying the same issue happened to him and his wife.
The same thing happened to us. We have no separate bank accounts or credit cards or assets of any kind. We both have the same high limits on our cards, including our AmEx Centurion card. But 10x on the Apple Card.
— Steve Wozniak (@stevewoz) November 10, 2019
It turned out Apple didn’t even build the algorithm.
Apple isn’t a bank, so it partnered with Goldman Sachs to launch the Apple Card. Goldman was responsible for creating the algorithm.
So, Goldman Sachs got put on blast, too. In response, Goldman released a short statement saying it didn’t endorse gender bias. The ineffectual statement didn’t go over...well.
Apple and Goldman Sachs have both accepted that they have no control over the product they sell. THE ALGORITHM is in charge now! All humans can do is apologize on its behalf, and pray that it has mercy on the next potential victims. https://t.co/LFyPYbtRlh
— DHH (@dhh) November 10, 2019
Oh, Goldman is also now subject to a probe by the New York Department of Financial Services over the issue.
The final straw came when Wozniak also made it clear that Apple, the company he cofounded, bore some blame for the incident.
I'm a current Apple employee and founder of the company and the same thing happened to us (10x) despite not having any separate assets or accounts. Some say the blame is on Goldman Sachs but the way Apple is attached, they should share responsibility.
— Steve Wozniak (@stevewoz) November 10, 2019
And this all played out in a massively public conversation.
Is your head spinning yet?
A single instance of algorithmic bias caused:

- a viral public backlash led by a prominent entrepreneur,
- criticism from Apple’s own cofounder,
- a regulatory probe by the New York Department of Financial Services, and
- reputational damage to two of the world’s most valuable brands.
If you’re a business leader who doesn’t think AI is worth paying attention to, then you’re not paying attention.
If you’re not using AI right now, you may be considering a strategy and adoption plan. And, even if you’re not planning for AI, I guarantee you at least one of the products or services your business works with uses algorithms.
Which means bias in artificial intelligence presents a range of complex challenges for you and your company. Challenges you’re probably not prepared for right now.
Bias in AI usually happens because of the data used by the AI system, not the system itself.
In the case described here, the data might intentionally or accidentally reflect human biases about different genders, races, or groups. For whatever reason, the data processed by the algorithm is skewed in a way that results in women being perceived as more of a credit risk than men.
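To make that mechanism concrete, here is a deliberately tiny, hypothetical sketch. Every number and field name below is invented for illustration; it is not how any real credit model works. The point is that a model which never looks at gender can still reproduce biased historical decisions, because the bias lives in the training labels and in features that correlate with gender:

```python
# Invented toy data: past credit decisions made by biased human reviewers.
# Each record: (income, shops_at_boutique, limit_granted).
# In this fictional history, the boutique feature happens to correlate
# with gender, and reviewers granted those applicants far lower limits.
history = [
    (80_000, False, 20_000),
    (80_000, True,   1_000),  # same income, far lower limit
    (60_000, False, 15_000),
    (60_000, True,     800),
]

def predict_limit(income, shops_at_boutique):
    """Naive 'model': average the limits of the most similar past cases."""
    matches = [
        limit for inc, boutique, limit in history
        if inc == income and boutique == shops_at_boutique
    ]
    return sum(matches) / len(matches)

# Two applicants with identical incomes get wildly different limits,
# because the model faithfully learned the biased historical labels:
print(predict_limit(80_000, False))  # 20000.0
print(predict_limit(80_000, True))   # 1000.0
```

Gender never appears in the code, yet the outcome is discriminatory. That is why "the algorithm doesn’t use gender" is not, on its own, a defense.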
But the data can also be biased in more innocent ways.
Say you purchase an AI system trained to write highly engaging social media posts. The technology looks great: it’s been trained on over two million successful posts and is going to up your social engagement while cutting time spent on social by 90%.
What’s not to like?
Do you know anything about the two million posts the AI tool was trained on? Were the posts from businesses that are like yours? Were the posts in the language you do business in? What platforms were the posts on?
Good luck finding out. And you better pray it works for you after your name goes on the invoice.
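If a vendor will disclose any metadata about a training set at all, even a rough composition audit is better than none. Here is a hypothetical sketch (the field names and records are invented) of the kind of distribution check that answers the questions above:

```python
from collections import Counter

# Invented sample of per-post metadata a vendor might disclose.
posts = [
    {"platform": "twitter",   "language": "en", "industry": "retail"},
    {"platform": "twitter",   "language": "en", "industry": "retail"},
    {"platform": "instagram", "language": "en", "industry": "fashion"},
    {"platform": "twitter",   "language": "de", "industry": "retail"},
]

# Turn the questions from the text into simple distribution checks:
# which platforms, which languages, which kinds of businesses?
for field in ("platform", "language", "industry"):
    counts = Counter(p[field] for p in posts)
    total = sum(counts.values())
    breakdown = {k: f"{100 * v / total:.0f}%" for k, v in counts.items()}
    print(field, breakdown)
```

If the vendor can’t (or won’t) produce even this much, that tells you something too.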
The reality is this:
If your company’s products use AI…
If you are actively investigating AI for your brand…
Or if you partner with anyone using AI…
You’re in the exact same position that Hansson and his wife were in.
You don’t know if the data used by algorithms that affect your brand is biased.
If it is biased, you don’t have a full picture of how or why the data is biased.
And you probably don’t have a strategy for what to do and say if something goes wrong because of bias.
You’re not alone.
This is a really hard problem that even the people building the world’s most sophisticated algorithms haven’t figured out.
No one has the answers yet.
But it is time you start asking questions to better understand how your brand interacts with algorithms.
Because if your brand gets entangled with algorithms that have serious bias problems, it’s going to come back on you — likely at lightning speed, in a public forum, where all your customers can see.
And when you’re asked about who bears responsibility for a sexist, racist, bigoted, or just plain wrong machine-assisted outcome…
Saying “It’s just the algorithm” isn’t going to cut it.