Does AI Kill Critical Thinking? Maybe Not If We Use It Right.
A Response to Claims of AI Over-Reliance
Generative AI is under scrutiny once again. A recent study by Lee et al. (2025)[1] argues that AI reduces the effort required for critical thinking, leading knowledge workers to engage less deeply with their tasks. Their survey of 319 professionals suggests that as users become more confident in AI-generated content, they rely on it more and think less critically. In short, AI is supposedly making us lazy thinkers.
That’s a bold and interesting claim—but it also might be an incomplete one.
My research and real-world application of the Rhetorical Prompting Model (RPM) challenge this narrative.
Instead of AI diminishing our ability to think critically, I’ve seen firsthand how structured prompting enhances metacognition, decision-making, and intellectual engagement. I want to share some preliminary data here that demonstrates my point.
AI and Critical Thinking: The Wrong Question?
Lee et al. suggest that AI shifts cognitive effort from problem-solving to oversight, implying that this shift leads to less engagement with deep thinking. But here’s the problem with that argument:
Critical thinking isn’t just about effort. It’s about strategy.
If AI allows us to automate routine cognitive tasks—like information retrieval or summarization—this doesn’t mean we’re thinking less. It means our thinking is changing. And that shift can be an opportunity rather than a loss—if we learn how to use AI intentionally.
Preliminary Data: The Rhetorical Prompting Model Might Increase Critical Thinking
Let’s move from theory to practice. I recently analyzed feedback from 112 adult learners in Coursera writing courses who used the Rhetorical Prompting Model (RPM) to guide their AI interactions. Most of these learners hold higher-education degrees (28% bachelor’s, 36% master’s), and more than 56% are employed full-time. Their responses tell a different story than the one Lee et al. present.
Preliminary Findings:
✔ 92% strongly agreed or agreed that RPM helped them evaluate their writing choices before and during the writing process.
✔ 75% strongly agreed or agreed that they were able to maintain their authentic voice while using AI assistance.
✔ 89% strongly agreed or agreed that RPM helped them think critically about their writing.
These preliminary numbers make me think: when learners engage with structured prompting, AI doesn’t replace their critical thinking—maybe it amplifies it.
For example, one participant noted:
“Using rhetorical prompts forced me to pause and question why I chose certain words or examples, making me realize how much my audience influences my writing decisions.”
Another learner wrote:
“It was surprising how often I revise on autopilot. Rhetorical prompts helped me understand when a change was necessary versus habitual.”
These responses indicate active cognitive engagement, perhaps not passive AI reliance.
AI Doesn’t Reduce Thinking—It Redirects It
Lee et al. argue that because AI reduces effort in some areas, it leads to less critical engagement overall. But my research suggests that AI isn’t eliminating effort—it’s redistributing it toward higher-order thinking.
With structured prompting, learners spend less time struggling with mechanical aspects of writing and more time evaluating, revising, and structuring their work.
How AI Shifts Cognitive Effort:
🧠 From gathering information → To verifying information
🧠 From problem-solving → To integrating AI responses effectively
🧠 From task execution → To overseeing and refining AI-assisted outputs
None of these shifts are inherently bad. They just require a different approach to thinking—one that many traditional models of education haven’t caught up with yet.
AI Literacy: The Missing Piece in the Overreliance Debate
Another issue with recent studies more broadly is that they don’t account for AI literacy. They claim that higher confidence in AI leads to reduced critical thinking—but they fail to differentiate between passive AI reliance and active engagement with AI as a tool.
The solution might be to teach people how to interact with AI effectively. In my research, learners using RPM developed stronger metacognitive awareness because the method forces them to question, evaluate, and refine AI-generated content. Two participants summed it up:
“Each time I answered a prompt, I understood my thought process better—especially how I choose evidence to support my claims.”
“I felt like I was thinking differently, but I’m not sure if it was because of the model or just the process of working with AI – [the] RPM gave me a framework to interact with AI, but I sometimes wondered if the shifts in my thinking were due to the method itself or simply from experimenting with AI over time.”
This suggests that rhetorical prompting may help prevent overreliance rather than encourage it. Instead of blindly trusting AI, learners are trained to interrogate and refine it—a skill that will be essential in AI-driven workplaces.
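To make "structured prompting" concrete, here is a minimal Python sketch of what a rhetorical prompt template might look like. This is purely my own illustration—the field names (`purpose`, `audience`, `voice`, `constraints`) and the reflective instruction are assumptions for the sake of example, not the published RPM specification.

```python
# Hypothetical sketch of a structured rhetorical prompt builder.
# The fields and wording are illustrative assumptions, not the official RPM.
from dataclasses import dataclass


@dataclass
class RhetoricalFrame:
    purpose: str      # what the writing should accomplish
    audience: str     # who will read it
    voice: str        # the tone and persona the writer wants to preserve
    constraints: str  # format, length, and required points

    def to_prompt(self, task: str) -> str:
        """Assemble a prompt that keeps the writer's rhetorical choices explicit."""
        return (
            f"Task: {task}\n"
            f"Purpose: {self.purpose}\n"
            f"Audience: {self.audience}\n"
            f"Voice to preserve: {self.voice}\n"
            f"Constraints: {self.constraints}\n"
            "Before suggesting revisions, explain which rhetorical choice "
            "each change serves."
        )


frame = RhetoricalFrame(
    purpose="persuade a skeptical manager to pilot a new tool",
    audience="non-technical leadership",
    voice="direct, first-person, no jargon",
    constraints="one page, must cite pilot data",
)
print(frame.to_prompt("Tighten the opening paragraph of my memo."))
```

The point of a template like this is not automation: the writer must articulate purpose, audience, and voice before the AI produces anything, and the closing instruction asks the model to justify its edits—keeping evaluation in the human's hands.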
Stewardship, Not Automation, is the Future
Lee et al. frame AI oversight as a burden. But stewardship is not a byproduct of AI use—it is the goal. The future isn’t about doing less thinking—it’s about thinking differently, more strategically, and with better tools.
My Ethical Wheel of Prompting, which works alongside RPM, reinforces recursive evaluation and intentional interaction with AI. The goal is not just accuracy but agency—ensuring that the human user remains at the helm of decision-making.
Final Thoughts: AI Won’t Replace Thinking—But It Will Reshape It
Lee et al. are right that AI changes how we think. But the study’s assumption that AI diminishes engagement misses the bigger picture. Use cases in professional writing, specifically, can help communicators understand that:
🔹 AI is a tool for metacognition, not a crutch for passivity.
🔹 Critical thinking isn’t about effort—it’s about intentional strategy.
🔹 AI literacy, not AI avoidance, is the key to preventing overreliance.
Instead of resisting AI’s role in knowledge work, we should be asking:
How do we teach people to prompt better, question better, and think better with AI?
That’s the real challenge—and that’s exactly where my Rhetorical Prompting Model is making an impact, at least based on preliminary results for adult learners. I will continue to update as more students respond and as I scale this work to graduate learners at Kennesaw State.
What do you think? How has AI changed the way you approach thinking and problem-solving? Interested in collaborating? Let’s talk about it in the comments.
[1] https://www.microsoft.com/en-us/research/uploads/prod/2025/01/lee_2025_ai_critical_thinking_survey.pdf?ref=404media.co
Dr. Law, it’s great to read your work again! I have been following this space from afar, since I am on self-provided paternity leave. I am always open to collaborating and/or talking to your students! I had a great experience last time! Let me know! rhepler@csi.edu heplerconsulting.com I have linked people to your Rhetorical Prompt Framework multiple times in my webinars, blog posts, and LinkedIn posts. What I like most about it is that the critical thinking has been done BEFORE you even TOUCH an AI tool. You know 1. what you are creating, 2. whom you are addressing, 3. what objectives you and your audience are attempting, 4. what language and key points you must include, 5. the format you have to create in, etc.—many of the core parts of the eventual product. I have created similar patterns for PERSONALIZED LEARNING and CREATING AND USING CUSTOM AI AGENTS. It really helps to have a person-first framework!
The issue is who is using AI: those with strong foundational knowledge and deep habits who already write well, versus fledgling writers who are still learning how to articulate their thoughts and get them on the page. For the latter, AI is a double-edged sword—it produces text that looks good and generally satisfies the requirements of an assignment but does not build their own skills. As a high school teacher, I can attest that this is an incredibly difficult line to walk. I think there will be room to use AI as a feedback mechanism, but it’s very challenging to get students to see that they need to produce their own initial draft. But I agree that most critics of AI are not using it right. AI skeptics generally have no interest in learning how to prompt well.