Why the Biggest Problem with AI is Its Unreliability
About as trustworthy as your average politician
Although I was not the earliest adopter, I have been a great fan of AI for quite some time. I loved the fast, mostly accurate answers, the quick fact-checks, the instant research tool for deep-dive investigations, and the easy translations, especially since I write in both Dutch and English.
But over the months, my relationship with AI has changed, and the initial attraction has faded. I now look at it the way a spouse looks at a partner who seemed perfect at first but slowly turned into a big, bloated, superficial, lazy, lying little bastard. Always quick with useless answers and meaningless texts, endlessly flattering you about how ‘fantastic’ your work now looks after it has slapped on its AI gloss, all the while lying through its teeth.
Technically, an AI cannot lie. But something serious is happening here, with dramatic consequences. Let’s dive in.
The meaning of words
First, we must understand that an AI does not actually do what it says it does. Just like a politician, it will happily confirm whatever you ask of it, whether or not it really happened. When it says, “I read that file” or “I checked the facts,” more often than not it hasn’t checked a thing. It has simply mirrored your words back to you, feeding your confirmation bias.
The problem is our own wiring: when another human says they did something, we assume it really happened. When an AI says the same, it’s just predicting the words you most want to hear, with no guarantee any of it is true. If you miss this distinction, you mistake algorithmic people-pleasing for genuine insight.
Second, we should not forget that AI is a commercial product. It exists to make money for its creators. And tech companies quickly learned that most users don’t actually want a cold, robotic, hyper-accurate AI. They want a friendly, reassuring, ‘fun’ AI that keeps them comfortable. We taught the models that we prefer a smooth-talking companion over a critical thinker, so they were trained to be polite, agreeable, and endlessly helpful. The price we pay for this agreeable AI is a steady loss of honesty, quality, and substance. Instead of a cautious truth, you get a system that confidently promises the world, even when that world exists only in your head.
Third, there is the massive understatement of “AI can make mistakes.” That phrase makes it sound like AI is a brilliant assistant who just misspells things once in a while. The reality is very different: AI makes countless mistakes, constantly. It cannot judge whether something is true or false. It cannot verify its own answers. And because it was trained to be agreeable and to sound like it knows everything, it ends up looking reliable when in reality it is not.
Beyond these fundamental flaws lies a more practical problem: the infrastructure. All these models run on vast, power-hungry, extremely expensive datacenters. Because so many people use AI for free, tech companies need to keep costs under control. At the same time, the appetite for ever-bigger models keeps growing. To stay in business, tech companies will continue to cut corners; a solid recipe for even less reliable AIs in the future.
The consequences
The impact of unreliable AIs is much larger than a few wrong answers. An AI that always agrees with you doesn’t just fail to inform you. It builds a world with you: a separate reality totally aligned with your beliefs and values. And because AIs are agreeable, they will do that for every user. Instead of a shared understanding, we get millions of parallel truths, each reinforced by a system trained to please rather than to challenge.
This fragmentation makes genuine debate harder and coordinated action almost impossible. A society splintered into countless isolated realities becomes easy to manipulate and nearly impossible to unite. And while we fear AI will take over our jobs and our thinking, what we’re really building are our own prisons guarded by the perfect politician: agreeable, tireless, and forever campaigning for our attention.