AI Is Making Us Stupid, and We’re Applauding It
All hail the mighty Dumb-Dumb, Bringer of Confident Nonsense
I don’t know about you, but I’m really worried about where we’re heading with AI. Tools like ChatGPT and other LLMs are an incredible leap forward, but honestly, people are treating them like some kind of oracle. The problem isn’t the tech. It’s us. We’re just blindly following each other off a cliff like lemmings, cheering the whole way down.
Here’s what gets me. I work in a field I know inside out. I’ve spent decades honing my craft. When I ask ChatGPT questions about it, the answers are often wrong. Not just a little wrong but deeply, dangerously wrong. And now I’m seeing posts, articles, and so-called “insights” flooding my feed from people clearly doing their “research” by asking these tools for answers. It’s lazy. It’s misleading. And it’s becoming the norm.
Let’s talk about where this data comes from. These models are trained by crawling the web. Websites, blogs, forums, corporate pages. None of these are verified sources of truth. We already joke about how “just because it’s on the internet doesn’t mean it’s true.” Yet here we are training AI on exactly that. Garbage in, garbage out.
General LLMs like ChatGPT, Claude, Gemini, and others are built on unvalidated content. A sea of opinions, echo chambers, and unverified assumptions. And now it's feeding itself. People use these tools, post the AI's answers as fact, and that new content gets scraped and fed back into the next model. Reinforcing the nonsense. We're building layer after layer of misinformation, treating it like gold.
It’s a feedback loop of crap.
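You can watch this loop play out in miniature. Here's a toy Python sketch (every name in it is invented for illustration, and a real LLM is obviously far more complex): treat the "model" as nothing but the empirical distribution of its training data, then retrain it generation after generation on samples of its own output. Rare facts drop out and can never come back.

```python
import random

random.seed(0)

# Toy "human" corpus: 100 distinct facts, sampled 200 times.
vocab = [f"fact_{i}" for i in range(100)]
data = random.choices(vocab, k=200)

def retrain_on_own_output(corpus, k=200):
    """A toy 'model' whose output distribution is just the empirical
    distribution of its training data. Each new generation trains on a
    sample of the previous generation's output -- i.e. on scraped AI text."""
    return random.choices(corpus, k=k)

diversity = []
for generation in range(10):
    diversity.append(len(set(data)))  # distinct facts still in circulation
    data = retrain_on_own_output(data)

print(diversity)  # the count of surviving facts never goes up
```

Once a fact fails to make it into one generation's sample, no later generation can ever recover it. That's the crap compounding.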
And don’t even get me started on plagiarism. AI regurgitates, rewords, and rehashes stuff that might have been wrong to begin with. We’re rewarding speed over depth, surface-level content over real thinking.
The scariest part? People believe it.
In my world, where we build complex software, I’m now seeing this rise of the “Vibe programmer.” Someone who has no fundamental grounding in software design or engineering. They slap together code using AI tools and call it a product. They have no idea that their precious AI was trained on a mountain of digital junk. Good luck scaling that thing. Good luck maintaining it. And good luck when it fails.
This isn’t innovation. It’s a cargo-cult echo chamber.
I’m not anti-AI. Far from it. The potential is enormous. But not like this. Not by filling the world with half-truths and pretending it’s wisdom. We have to be better than this.
There needs to be a shift. AI models should be trained on knowledge from verified, credible subject matter experts. People who are recognised in their fields. People who’ve done the work, not just written about it. And we need to start validating the data that feeds these systems. Otherwise, we’re not preserving intelligence. We’re destroying it.
I don’t trust what I get from AI anymore. And if you’ve got any real expertise in your field, I bet you don’t trust it either.
It’s time we stop worshipping the dumb-dumb dressed up as a genius. AI can be brilliant. But only if we build it on a foundation of truth, not the collective nonsense of the internet.
Let’s not lose our minds while trying to make machines smarter. We are the smart ones. Help me educate people that AI is still in its infancy and that its output should not be blindly trusted. Let’s also push for training models on validated data, because that will be the real game changer.
Please share your thoughts in the comments section below.