People Who Get It Wrong Because of AI
Recently, I had a fascinating conversation with a colleague, a senior developer leading a development department in a major international company. We were catching up on trends in our field, and he mentioned something that really stuck with me: how often developers make mistakes when using AI, and more importantly, how they respond when questioned about those mistakes.
He told me that during code reviews, it’s becoming increasingly common for developers to be unable to explain why they implemented something in a certain way. Worse, they sometimes lie about it: they try to justify decisions they didn’t really make themselves. It turns out they simply copied code from an AI tool like ChatGPT or Claude and pasted it into the project without truly understanding it.
I found that observation both interesting and alarming.
Now, I’m not an expert in human behavior, so I won’t pretend to understand the psychology behind this pattern. But from my perspective, what I see is a growing tendency for people to place blind trust in AI. And that’s dangerous.
I’ve heard variations of the same sentence more times than I can count:
“It’s correct because ChatGPT said so.”
“This code is optimal, Claude wrote it.”
This kind of thinking reveals a major issue: people are starting to believe that if something came from an AI, it must be right. The problem is, AI can be wrong, sometimes very wrong, and if we’re not careful, we can inherit and propagate that misinformation.
The bigger danger is the illusion of knowledge. When someone accepts AI-generated output without questioning it, they build a foundation of false confidence. They think they “know” something, but in reality, they’re just repeating what the AI said, often without understanding the context, trade-offs, or even the basic logic behind it.
This issue is only going to get worse unless we collectively start taking it seriously.
To be clear, I’m not anti-AI. I use it every day, and I think it’s a fantastic tool, one of the most powerful we’ve ever had. But that’s all it is: a tool. And tools should never replace our own critical thinking. We still need to understand, validate, and take ownership of the work we produce, even if it was generated with AI assistance.
Maybe I’m wrong; this is just my personal opinion, after all. But one thing I am sure of: we can’t delegate 100% of our thinking to AI. We need to stay in control, stay curious, and stay responsible.
Because at the end of the day, we are the ones accountable, not the machine.