
Untools

July 22, 2025

Why AI is making us worse thinkers (and how to avoid it)

Exploring how AI usage erodes our critical thinking and learning to treat it as a thinking partner for better outcomes.

Hi there, it's Adam from Untools.

With AI being so widespread these days, I think it’s important to talk about its impact on how we think as well as how much we think when we use it.

I use AI in various forms daily and at times I’ve definitely felt the lure of just taking its outputs and using them with a sort of blind trust. Maybe you’ve felt something similar. Giving in to that is dangerous, though.

What I’ve found in my experience and in recently published research is that we’re generally better off treating AI as a thinking partner rather than a tool we delegate tasks to. In today’s post, we’ll explore why that is and how to work with AI more collaboratively.

The AI thinking paradox

If you use AI regularly, you've probably noticed it gets things wrong from time to time: it can hallucinate and sometimes doesn’t produce very accurate outputs. On top of that, AI misses a lot of our context and nuances.

That means we need a lot of critical thinking to question and verify AI’s outputs. The problem is that research shows frequent use of AI actually erodes critical thinking skills. That’s the paradox: using AI a lot weakens the very skill that’s necessary for working with it well.

One Microsoft study of 319 knowledge workers showed that using generative AI “can inhibit critical engagement with work … and diminished skill for independent problem-solving”.

And this study by Michael Gerlich from SBS Swiss Business School found that “participants who reported higher usage of AI tools consistently showed lower scores on critical thinking assessments.”

The main problem causing this is called “cognitive offloading”.

The dangers of cognitive offloading

Cognitive offloading means delegating your thinking to AI rather than thinking with it. Sure, it’s faster and easier, but it comes at the expense of your critical thinking skills and of what AI could actually do for you.

Recognising when you’ve been cognitively offloading is the first step to improving how you work with AI. Take a moment to reflect and ask yourself:

  • How often do you critically evaluate AI’s output?

  • How much do you question its answers and cross-check them against reliable sources?

  • How well could you explain the reasoning behind the solutions or decisions that AI suggests?

If your answers lean towards “not often” or “not very well”, you might be offloading too much to AI. When you consistently offload thinking to AI, you gradually lose confidence in your own judgment and become less able to spot when AI gets things wrong.

The good news is that a change in how you approach AI can help you get more out of it without compromising your thinking abilities.

How to work with AI as a thinking partner

It all boils down to moving away from delegation towards what I would call ‘cautious collaboration’. AI works best (and you work best with AI) when you treat it as a thinking partner, a sounding board, a peer who gives you feedback.

Why ‘cautious’? Remember that AI doesn’t know your full context and can be prone to hallucinations and inaccuracies. It’s useful to think in terms of a generation/verification loop: whatever AI generates, a human should verify.
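For the programmatically inclined, the generate/verify loop can be sketched in a few lines of Python. This is purely illustrative: `call_model`, `generate_and_verify`, and the verification check are my own stand-ins, not a real API — swap in whichever AI client and review step you actually use.

```python
# Illustrative sketch of the generation/verification loop:
# every AI output passes through an explicit verification step before use.

def call_model(prompt: str) -> str:
    """Placeholder for an actual LLM call; replace with your provider's client."""
    return f"[model output for: {prompt[:40]}...]"

def generate_and_verify(task: str, verify) -> str:
    """Ask AI for suggestions, then gate them behind a verification function."""
    suggestion = call_model(f"Suggest options for: {task}. Do not decide for me.")
    if not verify(suggestion):
        raise ValueError("Suggestion rejected during verification")
    return suggestion

# A trivial automated check stands in for human review here;
# in practice, the "verify" step is you reading critically.
result = generate_and_verify("naming a new feature flag",
                             verify=lambda s: len(s) > 0)
print(result)
```

The point of the structure isn’t the code itself — it’s that verification is a named, non-optional step rather than something you do when you happen to remember.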

There are a few principles I follow when working with AI:

  • Be specific: Think about what context AI might need to produce a relevant output. What exactly do you want the output to be?

  • Default to suggestions, not decisions: I treat AI as an assistant and ask it for suggestions instead of final decisions or results. This naturally keeps me in control and follows the generate/verify loop.

  • Challenge your thinking: I’ll often share my existing thinking (or a draft of my work) with AI and ask it to critique it: spot weaknesses, point out what I’m missing. Interestingly, this is the generate/verify loop in reverse.

Let’s see how this can be done in practice with a few examples.

Example 1: Writing a social media post

🚫 Delegation: “Write a LinkedIn post about our Q4 results.”

✅ Collaboration: “Help me share our Q4 results with our LinkedIn audience in an authentic way that credits the team without being boastful. Here is my first draft – review it and suggest what can be improved.”

There are a few things going on in this example: we wrote our own draft first instead of mindlessly delegating it to AI. Then we’re asking it to give us feedback while providing more context on how we want the post to sound. Being specific with AI also greatly improves the output.

Example 2: Code review

🚫 Delegation: “This part of the code results in an error, fix it.”

✅ Collaboration: “I get this error when I run this code. Analyse it step by step. Then give me possible causes and debugging approaches.”

Here we’re getting the AI to brainstorm possible causes and fixes, giving us a chance to review them and pick the best one. Also notice we’re asking it to reason step by step, which helps us understand what’s going on.

Example 3: Preparing for a meeting

🚫 Delegation: “I want to convince our client to approve the presented strategy. Give me arguments supporting it.”

✅ Collaboration: “We’ll be presenting our risk-averse client with this strategy. Review it and identify points that the client might object to. Formulate possible objections and help me rehearse the conversation.”

With the collaboration approach in this example, we ask AI to argue against our position instead of doing our work for us. In the process, we can formulate our own arguments and understand both sides well instead of just letting AI do the thinking we need to do.

How this approach avoids cognitive offloading

Notice the common theme in these examples: we engage our critical thinking before interacting with AI. When you create a draft first or analyse the problem before asking AI, that’s thinking you haven’t delegated.

When you develop your own thoughts first, you also have a baseline, something concrete, to compare AI’s output to. And then you can use AI to improve your work and thinking while keeping the cognitive ownership of it.

Practical tip: Give AI a framework, not just a task

I've found that the easiest way to avoid cognitive offloading is to structure the problem yourself before asking AI to help solve it.

When you give AI a generic task, you're basically saying "do my thinking for me." But when you create a framework first, you outline how to think about the problem and then use AI to work through it with you. In my experience, this makes a big difference.

Here’s an example of a framework you could use for decision making or problem solving:

Current situation: [Status quo and your context]
Goals: [What we want to achieve]
Constraints: [E.g. budget, timeline, other resources]
Options to consider:
- [Option 1]
- [Option 2] 
- [Option 3]
Success criteria: [How we'll know if this worked]

For each option, analyze pros, cons, and implementation steps.
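If you want to keep a framework like this reusable, the same template can be expressed as a small script. This is a minimal sketch under my own assumptions — the function name `build_framework_prompt` and the example values are illustrative, not a prescribed format:

```python
# Illustrative sketch: assembling a user-authored framework into one prompt,
# so the structure (your thinking) stays separate from the AI's contribution.

def build_framework_prompt(situation, goals, constraints, options, criteria):
    """Combine framework fields you wrote yourself into a single prompt string."""
    option_lines = "\n".join(f"- {o}" for o in options)
    return (
        f"Current situation: {situation}\n"
        f"Goals: {goals}\n"
        f"Constraints: {constraints}\n"
        f"Options to consider:\n{option_lines}\n"
        f"Success criteria: {criteria}\n\n"
        "For each option, analyze pros, cons, and implementation steps."
    )

# Hypothetical example values — the thinking in each field is yours.
prompt = build_framework_prompt(
    situation="Legacy reporting tool is slow and hard to maintain",
    goals="Cut report generation time in half",
    constraints="Two engineers, eight weeks, no new licenses",
    options=["Optimize existing queries",
             "Rebuild on a new stack",
             "Buy an off-the-shelf tool"],
    criteria="P95 report time under 30 seconds",
)
print(prompt)
```

Notice that the AI never sees the template until every field is filled in by you — the framework forces the offload-prone thinking to happen first.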

I would encourage you to build your own framework for each specific problem you’re solving. When you build the framework, you're doing the important thinking about things like constraints, available options, and success criteria. AI becomes your thinking partner to work through each part, not your replacement for figuring out the approach.

You keep ownership of the thinking because the structure comes from you. AI just helps you think through each piece more thoroughly.


Coming up for Vault 💎 members: AI collaboration toolkit

Next week, Vault members will receive a comprehensive guide on AI collaboration techniques, plus a printable reference toolkit. This will include:

  • Research-backed prompting fundamentals: How to work with examples, reasoning process and other principles that improve AI output quality.

  • AI collaboration techniques: Specific techniques for engaging with AI in a collaborative way.

  • Thinking tools prompt templates: Practical prompt templates for popular thinking tools like Second-order thinking or Six thinking hats.

  • Printable reference cards: A PDF toolkit you can keep handy, share with your team, or use in workshops.

Become a Vault member

Along with new monthly deep dives and guides, Vault members get instant access to all previously published premium content.


Small changes, better thinking

The shift from delegation to collaboration can sometimes be subtle, but even small changes in how you approach AI can make a big difference. Start with one technique that feels most relevant to your work, and gradually build from there. The goal isn't to use AI perfectly but to use it in a way that makes you think better, not less.

Coming next

Next month, we will look at cognitive biases and how they might be impacting your thinking. We’ll explore the most common biases, the research behind them and how to avoid them skewing our thinking.

Until next time,
Adam

P.S. What's your experience been with AI so far? Are you finding it helps or hinders your thinking? Leave a comment or reply to this email.
