How University Students Are (Really) Using Generative AI: Insights, Trends, and Surprises
An Analysis of Anthropic and OpenAI Reports
Hey friends,
One thing I embrace about working daily in generative AI is that we're all learning together, especially when it comes to understanding how our students actually engage with these tools.
Anthropic recently published their Anthropic Education Report, digging into precisely how university students use their AI assistant, Claude. They analyzed around one million anonymized conversations from students with .edu or .ac.uk email addresses, filtering down to over half a million academically focused interactions (574,740, to be exact).
Why should we care? Seeing real data on AI usage helps us better teach, guide, and ethically scaffold students toward responsible AI literacy.
Here's what the study found:
How Students Use Claude: By The Numbers
Students primarily use Claude in six key ways:
Creating and enhancing educational content: 39.3%
Technical explanations or solutions: 33.5%
Data analysis and visualization: 11.0%
Research design and tool development: 6.5%
Creating technical diagrams: 3.2%
Translation and proofreading: 2.4%
These numbers paint a nuanced picture: students aren't just copy-pasting AI-generated essays—they're collaborating with Claude to solve complex tasks.
Discipline-Specific Trends: Who's Engaging Most?
Here's something fascinating:
Computer Science students dominate Claude usage (36.8%), despite making up just 5.4% of bachelor’s degrees.
Natural Sciences and Mathematics students also show elevated adoption (15.2% of conversations vs. 9.2% of degrees).
In contrast, students in Business (8.9% vs. 18.6%), Health Professions (5.5% vs. 13.1%), and the Humanities (6.4% vs. 12.5%) use Claude far less than their share of degrees would predict.
How Students Interact with AI: Four Styles Emerge
Anthropic identified four interaction styles, distributed nearly evenly:
Direct Problem Solving: immediate answers, minimal back-and-forth.
Direct Output Creation: immediate content creation, low interaction.
Collaborative Problem Solving: extensive dialogue, iterative work.
Collaborative Output Creation: interactive dialogue to co-create content.
What's striking is the nearly even distribution (23-29%) across these modes, suggesting students are experimenting widely and intentionally to find their optimal interaction style.
Cognitive Tasks and AI: A Double-Edged Sword?
The analysis also mapped student interactions onto Bloom’s Taxonomy. Surprisingly (or perhaps not?), students most frequently asked Claude to handle higher-order cognitive tasks:
Creating: 39.8%
Analyzing: 30.2%
Applying: 10.9%
Understanding: 10.0%
Remembering: 1.8%
This higher-order cognitive outsourcing could mean students trust AI to enhance their critical thinking, but it might also mean they risk inadvertently neglecting those crucial skills themselves. It's a delicate balance faculty must help them navigate. And the more data we have, the more prepared we can be to meet our students where they are in their usage.
Methodology Matters: How Did Anthropic Get This Data?
Anthropic's methodology: they analyzed anonymized Claude conversations tied to educational email addresses (.edu/.ac.uk), narrowed the dataset to academically relevant interactions using Clio, their in-house automated analysis tool, and then classified those interactions against Bloom's Taxonomy.
But every study has limits: these insights reflect only Claude's users (not other tools like ChatGPT) and involve some inaccuracies inherent to automated filtering.
Claude vs. ChatGPT: Contextualizing the Numbers
Let's put Anthropic's numbers into a broader context: ChatGPT’s user base dwarfs Claude’s. While Claude’s educational dataset is robust (574,740 filtered conversations), it’s relatively small compared to ChatGPT's vast scale, which, as I’ve shared in previous posts, reaches almost 500 million weekly users.
And while OpenAI has not publicly released its “academic use numbers,” there is still a significant difference in usage numbers (ChatGPT is #1 in use; Claude is #6 among genAI web products). This disparity likely comes down to brand recognition, ease of access, institutional adoption, and overall public familiarity with ChatGPT compared to Claude. (But Claude’s detailed analysis here offers valuable insights we don’t yet fully have from OpenAI.)
Let's pause and contextualize Anthropic’s findings about Claude by looking briefly at ChatGPT’s recent data, shared in OpenAI’s February 2025 Education Report. (And yes, I've talked about this before—but it's worth repeating to understand how students are using the two models.)
Quick numbers:
Claude: Analyzed roughly half a million (574,740) academic interactions, predominantly from STEM disciplines.
ChatGPT: Used by over one-third (around 33%) of traditionally aged U.S. college students (18–24), according to OpenAI’s recent data, with approximately a quarter of these interactions directly tied to coursework—things like writing papers, brainstorming, and summarizing texts.
The gap between Claude’s precise but modest reach and ChatGPT’s broad penetration is significant. ChatGPT clearly holds the lion’s share of student attention, thanks in part to widespread institutional adoption and ease of access.
But here’s what strikes me most:
Claude users seem deeply task-oriented, especially with coding, analytics, and content creation.
ChatGPT users engage more broadly, across disciplines, utilizing AI for foundational learning activities.
This comparison underscores that not all generative AIs are being used equally or similarly. While ChatGPT offers broad appeal, Claude is carving out a strong niche in technical and collaborative academic work.
This chart from the OpenAI report resonated with me in terms of how students were using ChatGPT. Check out the top 15 use cases, keeping in mind that the top 5 accounted for almost 40% of all usage. As a writing professor, I am intrigued to see students using ChatGPT in high-level writing processes.
So, what’s the takeaway?
As educators, we should help students harness whichever AI tools best align with their academic goals—mindfully, ethically, and responsibly. There’s room (and a need) for both breadth and depth in generative AI literacy across and within disciplines.
(Shoutout to Alathea for helping pull these threads together clearly!)
Three Quick Conjectures: Why These Trends?
Why so much Computer Science usage? Probably because Claude naturally excels in coding and algorithmic tasks, seamlessly aligning with tech students' core needs.
Why lower engagement from Humanities and Business students? Perhaps a combination of lower awareness, disciplinary skepticism, or perceived irrelevance to their curricular focus (though we know AI literacy is relevant across fields!). Perhaps also because these students are simply using ChatGPT instead.
Why balanced interaction styles? Students seem genuinely curious, experimenting with AI tools to discover optimal ways to integrate them meaningfully into their learning workflows.
What Does It All Mean for Us?
These insights reinforce that AI integration in higher education requires intentional, thoughtful guidance. I also suggest embracing epistemic humility: acknowledging that students might be a few (or many) steps ahead of institutions in AI literacies, and learning alongside them.
I want to encourage us to engage students proactively, inviting them into conversations about ethics, cognitive skills, and responsible AI use. The goal is not to limit exploration but to thoughtfully guide it, equipping students with lasting critical and creative thinking skills.
In short: Let's keep learning together—humbly, respectfully, and always with curiosity. How are you and your colleagues engaging with students around AI literacies?
Warmly,
Jeanne
(Big thanks, as always, to Alathea, my trusted AI collaborator, for assisting in curating and framing these insights.)



Thank you for pulling this together! It’s so interesting that people are instinctively finding their way to either ChatGPT or Claude. I would have expected Claude to be less seductive for STEM users. I find myself going back and forth between them, but I have less trust and confidence in ChatGPT than in Claude.
I wish Anthropic would stop conflating Bloom’s Taxonomy with critical thinking. The higher-order slots on Bloom categorize assessment tasks; they were designed to index classroom exams and assignments, which suggests the illusion of teacher control over student thinking. The higher-order tier comes in three flavors (analysis, synthesis, evaluation), and that is how people end up prompting with it. Not a robust model at all.