We Trained a Generation to Obey, Then Gave Them AI

Something weird is happening. People aren’t just relying on AI; they’re deferring to it. Like it’s not just smarter, but more right.

You ask a question. You get an answer. And you don’t question the answer.

That’s not an AI problem. That’s a human problem.

Is AI making us dumb?

A recent ETH Zurich study found that when people use AI-generated answers during reasoning tasks, their performance drops by 20 to 30 percent. They accept the answer even when the AI is wrong. In other words, people default to trusting the machine and stop thinking for themselves.

Brookings, MIT, and Stanford have all raised similar red flags. Over-reliance on AI seems to erode metacognition: the ability to reflect on and improve your own thinking. This isn’t just about students cheating on essays. It’s about a society outsourcing its brain.

But here’s the twist.

AI didn’t make us dumb. It just exposed how lazy we already were.

We built educational systems that reward obedience, memorization, and gaming tests. Then we handed everyone a chatbot that rewards passive prompting. And now we’re surprised that people aren’t thinking deeply?

What happens when this becomes generational?

If kids grow up using AI before they learn to think independently, we’re in trouble. Prompting becomes a crutch. Critical thinking atrophies.

By age 12, a child should be learning how to debate, reflect, and compare perspectives. Instead, we’re seeing classrooms that reward the fastest ChatGPT summary or the most polished AI slide deck.

We’re not raising thinkers. We’re raising AI operators. That’s a dangerous distinction.

So what do we actually do?

This isn’t a call to ban AI. It’s a call to build real safeguards. Not firewalls. Frameworks.

1. Start with how we raise kids

Don’t make AI the enemy. But don’t make it the answer either.

Things that actually work:

  • Critical thinking journals
    Let kids write reflections on AI responses. What do they agree with? What would they challenge? Where might bias show up?

  • AI vs human debates
    Students research both sides. What the AI says, and what credible sources say. Then they argue it out.

  • Prompt literacy training
    Teach students to ask layered, thoughtful questions. Not just “What’s the capital of France?” but “How would a Parisian describe their identity compared to a tourist?”

If you raise a kid by prompting AI, good luck breaking that habit at 25.

2. Rewire adult habits too

This isn’t just a kid problem. Adults are falling into the same trap.

We Google less. We sit with questions less. We reach for tools before we try to think.

Some fixes:

  • AI shadowing
    Try solving the problem yourself first. Then compare your answer to what the AI says. Not the other way around.

  • Justification prompts
    Before accepting an AI answer, write down why it makes sense to you. Even a few lines help.

  • Reflection logs
    Keep a simple weekly log. When did AI help? When did it mislead? Track patterns and build awareness.

3. Measure the right things

Right now, our systems reward quick answers. Not actual reasoning.

If we want to build smarter people, we need to start measuring thought.

New metrics to track:

  • Argument strength
    Can students defend a position, respond to counterpoints, and build something coherent?

  • Prompt quality over output quality
    Are people asking thoughtful, layered questions? Or just chasing easy outputs?

  • Long-term retention checks
    Can they recall and build on ideas two weeks later? Or did it all evaporate after one prompt?

What about careers where mistakes cost lives?

You don’t want a doctor who blindly accepts GPT’s diagnosis. You want someone who thinks critically and can explain why they disagree.

Same with engineers. AI might suggest a structural plan. But if you can’t spot the flaw, the bridge collapses.

How do we build trust without dependence?

  • AI as second opinion
    Use AI to double-check, not lead. A doctor sees what it suggests, then explains their call.

  • Peer and AI cross-verification
    One human, one AI, then one human reviewer. Triple filter.

  • Simulation-based training
    Use AI to inject mistakes on purpose. See who catches them in real time.

This builds a culture of thoughtful resistance, not passive agreement.

Institutions need to do more than talk

It’s easy to say “teach critical thinking.” It’s harder to actually change schools.

Real reform looks like this:

  • Fund AI-integrated curriculum design
    Pay educators to build learning models that use AI and challenge it too.

  • Fix assessments
    Move beyond multiple choice. Use oral defenses, longform projects, and real-time argumentation.

  • Teach prompt literacy
    If we teach essay writing, we need to teach prompt writing. It’s now just as important.

Don't fear the tool. Build the counterbalance.

Final thought

AI isn’t the villain. It’s just fast. If we don’t redesign how people think, if we don’t rewire curiosity, reflection, and productive doubt, it’s going to flatten us.

Not because it’s powerful.

Because we were never taught to push back.

That’s not AI’s failure. That’s ours.
