We Trained a Generation to Obey — Then Gave Them AI

Something weird is happening. People aren’t just relying on AI — they’re deferring to it. Like it’s not just smarter, but more right.

You ask a question. You get an answer. And you don’t question the answer.

That’s not an AI problem. That’s a human problem.

Is AI making us dumb?

A recent ETH Zurich study found that when people lean on AI-generated answers during reasoning tasks, their performance drops by 20–30%. Even when the AI is wrong, people default to trusting the machine and shut off their own analysis.

Brookings, MIT, and Stanford have all raised similar red flags: over-reliance on AI seems to erode metacognition — the ability to reflect on and improve your own thinking. This isn’t just about students cheating on essays. It’s about a society outsourcing its brain.

But here’s the twist:

AI didn’t make us dumb. It exposed how lazy we already were.

We built educational systems that train obedience, memorization, and gaming tests. Then we handed everyone a chatbot that rewards passive prompting. And now we’re shocked people aren’t thinking deeply?

What happens when this becomes generational?

If kids grow up using AI before they learn to think independently, we’re in trouble. Prompting becomes a crutch. Critical thinking atrophies.

By age 12, a child should be learning how to debate, reflect, and compare perspectives. Instead, we’re seeing classrooms that reward the fastest ChatGPT summary or the most polished AI slide deck.

We’re not raising thinkers. We’re raising AI operators. And that’s a dangerous distinction.

So what do we actually do?

This isn’t a call to ban AI. It’s a call to build real safeguards — not firewalls, but frameworks.

1. Start with how we raise kids

Don’t make AI the enemy. But don’t make it the answer, either. Here’s what actually works:

  • Critical thinking journals: Let kids write reflections on AI answers — what they agree with, what they’d change, what bias might exist.

  • AI vs. human debates: Kids research what the AI says and what independent sources say, then argue it out.

  • Prompt literacy training: Teach kids how to ask better questions: not just "What's the capital of France?" but "How would a Parisian describe their identity, versus a tourist?"

If you raise a child by prompting AI, good luck rewiring their habits at 25.

2. Rewire adult habits, too

The decay isn’t just in kids. It’s everywhere.

We Google less. We sit with questions less. We reach for tools before we even try to think.

Some fixes:

  • AI shadowing: Encourage people to try solving a problem first, then compare with AI — not the other way around.

  • Justification prompts: Before accepting AI’s answer, require a 1-minute written justification of why it makes sense.

  • Reflection logs: Keep a short weekly note on when AI helped — and when it misled. Build awareness, not dependency.

3. Measure the right things

The current system still rewards regurgitation, not reasoning. If we want smarter humans, we need to actually track thinking.

New metrics for the AI era:

  • Argument strength (written/oral): Can students defend a position, anticipate rebuttals, synthesize perspectives?

  • Prompt quality over result quality: Are people asking thoughtful, layered questions — or just farming outputs?

  • Long-term retention checks: Can a student recall and build on ideas 2 weeks later — or did it all go in one prompt and out the other?

What about careers where people’s lives are at stake?

You don’t want a doctor who blindly accepts GPT’s diagnosis. You want one who reads it, considers it, and disagrees if needed.

Same with engineers. AI might suggest a structural design — but if you can’t spot the flaw, the bridge falls.

So how do we integrate AI in high-stakes careers without breaking trust?

  • AI second opinion tools: Treat AI as a check, not a guide. Doctors see what it suggests — then explain why they’re overriding it.

  • Peer + AI cross-verification: One human, one AI, then one human reviewer. Triple-filtered trust.

  • Simulation-based upskilling: Run real-time challenges where AI gives flawed advice — and see who catches it.

This builds a culture of thoughtful disagreement, not passive deferral.

Institutional responsibility: not just talk, real reform

It’s easy to say "teach critical thinking." It’s harder to redesign schools.

Here’s what actual reform could look like:

  • Funding for AI-integrated learning design: Pay educators and researchers to build new curricula that teach with and against AI.

  • Assessment reform: Move away from fixed-answer tests. Use oral defenses, projects, collaborative debate formats.

  • Prompt literacy modules: Just as we teach essay writing, we should teach prompt crafting: layered, ethical, nuanced.

Don’t just fear the tool. Fund the counterbalance.

Final thought: this is on us

AI isn’t the villain. It’s just fast.

If we don’t redesign how people think — if we don’t prioritize curiosity, analysis, and productive doubt — it’ll flatten us.

Not because it’s too powerful. But because we were never taught to resist its pull.

And in the end, that’s not AI’s failure. It’s ours.
