What the Alarm Misses: Why AI Isn’t Ruining Student Writing—But Shallow Reporting Might Be
A Letter to the Anxious Reader
I read the New York Magazine Intelligencer piece1 “Rampant AI Cheating Is Ruining Education Alarmingly Fast” the way I imagine many educators did—tight-jawed, half-nodding, and wholly unsettled. Not because the piece got it all wrong. In fact, it got one thing deeply right: faculty are concerned, tired, and unsure how to respond to what feels like a seismic shift in student learning.
But I worry that headlines like these do something far more damaging than ChatGPT ever could. They flatten students into cheaters, flatten teachers into victims, and flatten a nuanced, discipline-rich, and rapidly evolving reality into a single narrative of collapse.
This rebuttal is not an attempt to deny that some students cheat with AI. They do. But it is an attempt to counterbalance the alarm with evidence—and to invite a more honest, more hopeful conversation about what it means to write, to learn, and to teach in the age of generative AI.
What the Article Gets Right (Briefly)
The Intelligencer article captures real anxieties. Faculty are seeing polished but lifeless essays. Students are turning in assignments that feel AI-scrubbed. And across campuses, there’s a policy vacuum where clarity should be. The piece is right to acknowledge that some students cheat. The tools have changed. The stakes haven’t.
But it’s one thing to identify a moment of crisis. It’s another to generalize from a few sensational cases, ignore the data, and declare that the sky is falling.
What the Article Gets Wrong: Anecdote ≠ Evidence
Where the article falters is in turning one student’s story into a parable of decay. “Roy” Lee, a self-professed serial cheater and startup founder, is portrayed as the future of higher ed. But a single student isn’t a dataset. He’s a narrative hook. And building a national indictment of AI-integrated learning on a single case is not journalism. It’s moral theater.
The real picture looks very different when we zoom out.
What the Data Actually Says
From Anthropic:2 Their 2025 education report analyzed over a million student interactions with Claude and found that students overwhelmingly used AI for Creating (39.8%) and Analyzing (30.2%), the higher-order categories of Bloom’s taxonomy—not for shortcuts, but for deep thinking.
From OpenAI:3 Their February 2025 report shows that 1 in 3 U.S. students aged 18–24 uses ChatGPT, and 1 in 4 messages is learning-related. Most students aren’t cheating—they’re self-teaching in the absence of formal instruction.
From my classrooms:4 I teach rhetorical prompting to undergraduate and graduate students the way I teach writing: as a recursive process.5 Using my Rhetorical Prompting Method, students learn to guide, revise, and ethically evaluate AI output with my Ethical Wheel of Prompting. They annotate, critique, and reflect—not because I ban AI, but because I teach with it. Both models have been tested, vetted, and revised since January 2023 with hundreds of university students and thousands of Coursera students.6 They are also licensed CC BY, so please use them; I always welcome feedback and am glad when parts of my work prove useful to others.
Reframing the Conversation: From Cheating to Cognitive Offloading
Perhaps we should stop asking how many students are cheating and start asking how many are trying to think, with instructor support. Cognitive offloading7 is a well-documented learning strategy, and generative AI simply expands the scaffolding. Research psychologist Dr. Michelle Miller discusses this concept in relation to LLMs in her February 2025 audio Substack.
For many students, especially those who are neurodivergent or overextended, AI is a way into the work, not around it. I’ve watched students write more, not less, because the blank page no longer stares back in silence.
Offloading isn’t the death of thinking. It’s often the beginning of it.
What Educators and Journalists Owe Students
As I wrote in The Conversation on May 7, 2025, “AI isn’t replacing student writing—but it is reshaping it.” That reshaping opens space for rhetorical thinking, reflection, and deeper authorship—if we let it.
Journalists must do better, too. Anecdotes about cheating don’t tell the whole story. We need journalism that asks: Who’s teaching ethical AI use? Who’s left out of access? And how can we reimagine assessment in a world of generative assistance?
The Real Crisis Isn’t AI—It’s Cynicism
As Nick Potkalisky and I argue in Education & AI, critical thinking isn’t dead—it’s evolving. And the students who use AI aren’t undermining learning. They’re showing us how it’s changing.
AI is not the end of education. It’s a prompt. And how we respond to it will shape not just our pedagogy, but also our purpose.
I encourage colleagues not to flatten that opportunity into panic. Instead, let’s teach toward it. As always, I welcome your feedback, thoughts, and insights to keep the conversation going.
📊 AI Resource Use Disclosure
This post was collaboratively drafted using GPT-4.5, with substantial iterative revisions by the author. Estimated computational resources:
~2,865 words → 85.9 Wh electricity / 14.3 L water
Notes

Walsh, James D. “Rampant AI Cheating Is Ruining Education Alarmingly Fast.” New York Magazine Intelligencer, 5 May 2025, https://nymag.com/intelligencer/article/openai-chatgpt-ai-cheating-education-college-students-school.html. Accessed via Wayback Machine, 7 May 2025.
Anthropic. Anthropic Education Report: How University Students Use Claude. 8 Apr. 2025, https://www.anthropic.com/news/anthropic-education-report-how-university-students-use-claude. Accessed 8 May 2025.
OpenAI. Building an AI-Ready Workforce: A Look at College Student ChatGPT Adoption in the US. Feb. 2025, https://openai.com/research/openai-edu-ai-ready-workforce. Accessed 8 May 2025.
Law, Jeanne Beatrix. “Bits on Bots: Continuing the Conversation on Generative AI.” Macmillan Learning, 6 Mar. 2024, https://community.macmillanlearning.com/t5/bits-blog/bits-on-bots-continuing-the-conversation-on-generative-ai/ba-p/22735.
Law, Jeanne Beatrix. “Bits on Bots: Process, Post-Process, and AI—Navigating the New(ish) Normal.” Macmillan Learning, 8 May 2024, https://community.macmillanlearning.com/t5/bits-blog/bits-on-bots-process-post-process-and-ai-navigating-the-new-ish/ba-p/23202.
Law, Jeanne Beatrix, and Nick Potkalisky. “Does AI Kill Critical Thinking? Maybe Not—If We Use It Right.” Education & AI: Research and Practice, 26 Mar. 2024, https://edu-ai.org/does-ai-kill-critical-thinking-maybe-not-if-we-use-it-right/. Accessed 8 May 2025.
Laura Dumin, Sarah Silverman, and Lance Cummings frequently discuss on LinkedIn, Medium, and Substack how this behavior enhances the writing process for neurodiverse writers; their work is well worth following.


