AI Is Only as Good as the Question You Ask It

At my first job out of college, I built a book inventory database.

An older colleague — the kind of woman who thought she was always right — was frustrated it didn’t match her paper records and wanted to know what I’d broken. “A computer can never be wrong,” she told me.

I said, “It can be as wrong as the question you ask it.”

She walked away.

I’ve thought about that exchange a lot lately. Because we’re thirty years on, watching companies deploy AI at scale — and we’re still asking the wrong questions.


Amazon recently cut 16,000 roles, citing AI as a driver of the decision. They weren’t alone. Across industries, the narrative has been the same: deploy AI, reduce headcount, improve efficiency. A clean equation on a slide deck.

Except it isn’t working the way anyone planned.

A Forrester report found that 55% of employers now regret AI-related layoffs — not because AI failed to automate tasks, but because they lost the institutional knowledge, the judgment calls, the human capacity to notice when something was wrong. And the Qualtrics 2026 research found that 1 in 5 consumers say AI customer service delivered zero benefit. Not mixed results. Zero. The failure rate for AI in customer service is four times higher than for AI deployed in any other context.

The instinct is to blame the AI. The technology wasn’t ready. The model wasn’t trained properly. We need a better tool.

But that’s not what’s happening.


Here’s what I’ve watched play out across every major technology implementation in my career: organisations don’t deploy technology into functioning systems. They deploy it into systems that were already broken — and then act surprised when the technology runs those broken systems faster and at greater scale.

The customer service queues that frustrated people before AI? AI didn’t fix them. It inherited them, automated the frustrating parts, and removed the one saving grace: the human on the other end who could tell when the system was failing someone and do something about it.

The Qualtrics research found that 47% of bad AI experiences lead to decreased spending — but only 29% of customers ever tell the company. The rest just leave. Quietly. Which means companies aren’t just losing customers. They’re losing customers without ever knowing why.

AI didn’t create any of that. It inherited it. And then it ran it at scale, faster and cheaper than before, which meant the cracks became impossible to ignore.

My colleague was right that the computer wasn’t wrong. It was doing exactly what it was asked to do. That was the problem.


The organisations getting this right aren’t starting with AI. They’re starting with the problem.

Bank of America built Erica — their virtual assistant — around the tasks customers actually needed to complete: understanding their balance, finding a transaction, getting a notification when something unusual happened. By 2025, Erica had handled over 2 billion interactions, resolving 98% of queries in under 44 seconds. Not because the AI was smarter. Because the problem was defined before the solution was chosen.

DoorDash set a specific metric — 70% of customer contacts resolved without human intervention — before deploying any automation. They hit 74%. The metric came first. The technology came second.

These aren’t AI success stories. They’re problem-definition success stories.


The question most organisations are asking is: “How can we use AI to reduce costs?”

It’s not a bad question. But it’s the second question. The first question is: “What problem are we actually trying to solve — and for whom?”

That question is harder. It requires sitting with ambiguity, talking to customers, understanding where the real friction is before reaching for a solution. It doesn’t fit neatly on a slide deck. It doesn’t generate a press release.

But it’s the only question that leads to AI that actually works.

My colleague walked away from that conversation thirty years ago because she didn’t want to hear that the computer might be reflecting a problem in her process rather than a flaw in my code. The database was right. The inputs were wrong.

We’re still having the same conversation. Just at a much larger scale, with much higher stakes, and a much quieter customer exit when we get it wrong.


Before your next AI initiative, ask one question first: what specific problem, for which specific customer or employee, are we trying to solve?

If you can’t answer that in a single clear sentence, you’re not ready to choose the technology. You’re ready to define the problem.

That’s where the work starts.
