Is AI making us lazy thinkers?
Exploring the potential price we're paying for cognitive convenience.
When calculators became widespread, people worried we'd lose our ability to do mental math.
In reality, calculators freed us to focus on higher-level mathematical concepts.
But there was a tradeoff—many of us probably can't do complex calculations in our heads as easily as our grandparents could.
Arguably, there are some similarities between calculator use and generative AI. As we become reliant on GenAI for tasks that typically require high-level thinking, reasoning, and research, we remove the need to do that critical thinking ourselves.
I’m guilty of it.
Case in point: My co-worker Keshav sent me an interesting TechCrunch article today on “Is AI making us dumb?” because he thought it would make a good topic for our blog.
The first thing I did?
Hopped into Claude and asked for its opinion on humans becoming reliant on AI for critical thinking tasks. Very meta.
I’ll admit, when it comes to writing I enjoy using GenAI to do the initial brain dump stuff: gathering ideas, forming through-lines, and thinking big before narrowing the focus.
But AI makes that part so easy—almost too easy—that I find myself defaulting to using it before I’ve even done any thinking myself. Which leads to the question…
When we rely on AI like this, are we losing the ability to think critically for ourselves?
The allure of instant answers vs. the value of mental struggle
Let’s be honest: the temptation to use AI for routine tasks (okay, pretty much everything) is real.
A recent study by Microsoft and Carnegie Mellon University found that people choose GenAI because it:
Helps them feel more confident about skills they normally lack, such as spelling, grammar, and letter writing.
Gets tasks done more quickly and efficiently.
Lets them reserve brain power for higher-level tasks.
But when we use GenAI as a first resort, we're potentially skipping crucial mental processes.
There are valuable things that happen in our brains when we sit with a problem:
The initial confusion and discomfort that forces our brains to work harder
The connections we make to other problems we've solved
The dead ends we explore that might lead to unexpected insights
The "aha" moments that strengthen our problem-solving muscles
A calculator just computes—it doesn't reason, synthesise, or create.
But GenAI can do all these things. So we might be outsourcing not just the grunt work, but the very cognitive processes that make us better thinkers.
Microsoft’s paper also states that by mechanising routine tasks and leaving only exceptions to humans, we “deprive the user of the routine opportunities to practice their judgement and strengthen their cognitive musculature, leaving them atrophied and unprepared when the exceptions do arise.”
In other words:
When we rely on AI to think for us, we risk not being able to use our brains well when AI isn’t an option.
How to know if you’re using AI as a crutch instead of a tool
If you’re not yet ready to lose all brain cells to the AI overlords, it might be helpful to recognise when you’re becoming overly reliant on AI.
When GenAI is a crutch:
You ask AI questions before spending even 30 seconds thinking about them yourself
When faced with a problem, your first instinct is to open ChatGPT rather than think it through
You accept AI's answers without questioning them or checking if they make sense
You find yourself unable to explain how you reached a solution (because you didn't—AI did)
You feel anxious when AI tools are unavailable, as if you can't work without them
You've stopped saving useful resources or keeping notes because "I can just ask AI about this later"
If any of these sound familiar, there are a few ways you can try to break the habit and find a healthy balance.
How to use GenAI as a tool:
Set a "think first" rule: Spend 5 minutes trying to solve or think through a problem before asking AI
Use AI to expand your thinking rather than replace it—ask "What am I missing?" rather than "What's the answer?"
Create "AI-free" time blocks in your day for deep thinking and problem-solving
Document your own process and thinking, even when using AI
What does this all mean? Some final thoughts
AI might be changing the types of thinking we value
Just like GPS made us value getting somewhere quickly over understanding the route, AI might be shifting what we consider valuable thinking.
We’re starting to optimise for quick answers rather than deep understanding. Why spend an hour deeply understanding one concept when AI can give us surface-level knowledge of five?
We’re getting better at asking questions but worse at finding answers
We're developing a new skill: the ability to frame questions to get the best out of AI—a bit like learning to write the perfect Google search.
But this might be coming at the cost of our ability to methodically work through problems.
It's the difference between knowing how to ask for directions and knowing how to read a map. The former is useful, but the latter teaches you about geography, spatial relationships, and problem-solving.
The importance of intentional AI use
This is about developing what we might call "AI literacy." Just like we had to learn when to use a calculator (it's fine for checking your tax math, maybe not great when learning basic multiplication), we need to develop good judgement about AI use.
For example, a student using ChatGPT to write an entire essay isn’t just acting unethically—they’re also missing out on learning how to think critically, construct arguments, and write well.
But a student who’s struggling with a physics concept and uses ChatGPT to unpack the topic by asking probing questions? That enhances learning, and it’s a great way to use AI.
Sometimes the struggle of finding an answer yourself is more valuable than the answer itself.
The key is being intentional about these choices rather than defaulting to AI for everything just because it's available.