By Simon Crawford-Welch, PhD
The “hands-off-the-wheel” problem
We keep calling it artificial intelligence, and lately it’s behaving less like intelligence and more like a very polite, always-available thinking substitute. And like any substitute, it works best when it’s occasional… not when it becomes your primary diet.
The real issue isn’t that AI can write essays, summarize research, draft emails, or generate ideas. It’s that it can do these things fast enough to bypass the mental friction that used to force us to think. That friction… confusion, effort, uncertainty, trial and error… isn’t a bug in human cognition. It’s the gym. Remove it, and the muscles atrophy.
This isn’t a moral panic about technology. It’s a predictable psychological outcome. When humans are given the option to offload thinking, most of us will. Not because we’re lazy, but because our brains are wired to conserve energy. AI didn’t invent this tendency. It just supercharged it.
What the research is starting to show
Early research is already pointing in a troubling direction.
Heavy reliance on generative AI is consistently associated with reduced critical thinking and cognitive effort. Surveys of knowledge workers find that over 60% report thinking less independently when using AI. Studies of AI-assisted writing show lower brain engagement and weaker recall. And large-scale studies of 600+ participants link frequent AI use to significantly lower critical-thinking scores, driven by cognitive offloading. In short, when AI replaces thinking instead of supporting it, performance may improve… but reasoning declines.
In controlled experiments where participants used large language models to complete writing tasks, researchers observed lower cognitive engagement compared to those who worked unaided. Over time, participants relying on AI showed weaker recall of what they’d written and less ownership of their ideas. The work still looked polished… but the thinking behind it was thinner.
Workplace research shows a similar pattern. Knowledge workers who report higher trust in AI tools also report exerting less critical-thinking effort, particularly on routine tasks. Those routine tasks used to function as daily mental reps… small but essential workouts for judgment, reasoning, and synthesis. Remove them, and you don’t just save time. You lose practice.
The danger isn’t that AI makes people incapable of thinking. It’s that it slowly retrains people not to.

Cognitive offloading: a familiar trap, now on steroids
We’ve seen versions of this before.
Heavy GPS use correlates with weaker spatial memory. Relying on calculators reduces mental arithmetic skills. Offloading memory to devices changes how well we encode and recall information. None of this means tools are bad. It means that skills you don’t use decay.

Generative AI takes cognitive offloading to a new level. It doesn’t just store or retrieve information. It constructs arguments, explanations, and conclusions. When you outsource that layer, you’re not skipping busywork. You’re skipping the very process that builds understanding. The result is a subtle but dangerous shift: from “I understand this” to “I can produce something that looks like understanding.”
The “plausible nonsense” effect
There’s another uniquely risky feature of generative AI: confidence.
AI doesn’t say, “I’m not sure.” It says, “Here you go,” in clean paragraphs, with professional tone, logical flow, and just enough nuance to sound smart. That presentation triggers a powerful cognitive bias: if it sounds coherent, we assume it’s correct.
Critical thinking, by contrast, is slow and uncomfortable. It asks annoying questions. It challenges assumptions. It notices gaps. AI can imitate that voice without doing the actual work, which trains users to accept surface-level plausibility instead of demanding substance.
Over time, people don’t just stop checking facts. They stop asking whether the argument itself makes sense.
A generation already under pressure
AI didn’t arrive in a vacuum. It landed in an environment where attention is already fragmented, reading stamina is declining, and educational outcomes are under strain. Across multiple countries, standardized assessments have shown drops in reading comprehension and math performance compared to pre-pandemic levels. These declines have many causes… disrupted schooling, increased screen time, chronic distraction, and social media chief among them.

AI enters this picture as a coping mechanism. When someone feels behind, overwhelmed, or underprepared, AI offers instant relief. It fills the gap quickly and quietly. The short-term outcome is productivity. The long-term cost is skill erosion. Instead of rebuilding capacity, we mask the weakness.
What actually deteriorates
The phrase “generation of dummies” sounds harsh, but it’s less an insult than a diagnosis. The decline isn’t in raw intelligence. It’s in default behaviors. People get worse at staying with hard problems long enough to reach clarity. They get worse at detecting weak logic because the output looks finished. They get worse at forming original arguments because the machine always provides a first draft. They get worse at tolerating uncertainty because AI always produces an answer… even when there isn’t a good one.

Perhaps most concerning is the confidence inversion. The people most likely to over-trust AI are often the ones least equipped to evaluate it. When effort drops, so does skepticism. You end up with confident conclusions built on unexamined assumptions. That’s how you get organizations full of elegant documents and fragile thinking.
The cruel joke: it works
Here’s the uncomfortable truth: AI is effective enough to reward cognitive laziness. It produces “pretty good” output at almost no mental cost. And humans respond to incentives. If you can get the grade, finish the report, or sound competent without struggling, your brain learns that struggle is optional. We used to have friction that forced learning. Now we have convenience that allows skipping. It’s like having a personal trainer who offers to lift the weights for you. Very helpful. Completely useless if your goal is strength.
Intelligence requires resistance
Thinking well has always required resistance. Difficult texts. Conflicting evidence. Unclear problems. Slow reasoning. AI removes resistance by design. That’s why it feels magical… and why it’s dangerous when used indiscriminately. When every question gets an instant answer, curiosity fades. When every task gets an instant solution, judgment weakens. When every blank page gets an instant draft, originality declines. Over time, the habit forms: don’t think first… prompt first.
How this ends if we’re not careful
If this trajectory continues unchecked, we don’t get a future of smarter humans amplified by tools. We get a future of competent-looking outputs produced by increasingly shallow reasoning. We get managers who can’t explain their own decisions. Graduates who can’t defend their arguments. Professionals who sound sharp but fold under scrutiny. The tragedy isn’t incompetence. It’s dependency.
Using AI without losing your brain
AI isn’t the villain. Unthinking use is.
The solution is simple, but uncomfortable: do the thinking before you prompt. Wrestle with the problem. Sketch the argument. Identify the uncertainty. Then use AI to challenge, refine, or stress-test your ideas… not replace them. If AI is doing the foundational cognition, you’re not saving time. You’re borrowing it from the future at a steep interest rate. It’s called artificial intelligence for a reason. The intelligence doesn’t live in the machine. It lives in the human choosing how to use it.
When we hand over the hard parts, we don’t become advanced. We become dependent… just faster. And the generation raised on instant answers will eventually face a moment no model can handle for them: a real problem, with no prompt, no template, and no obvious solution. That’s when you find out whether you learned to think… or just learned to submit.
Simon Crawford-Welch, PhD, is the founder of The Critical Thought Lab. His latest book, “Artificial Authority: When Leadership Is Performed Instead of Carried,” is scheduled for release in March 2026. He is also the author of “American Chasms: Essays on the Divided States of America” and “The Wisdom of Pooh: Timeless Insights for Success & Happiness” (all available on Amazon). www.linkedin.com/in/simoncrawfordwelch