The narrative about students and AI has settled into two camps. In one, students are using these tools to cheat on essays and dilute their education. In the other, AI is a transformational learning partner that will replace the inefficiency of traditional instruction. Both stories are too clean.
What students are actually doing with AI in their day-to-day academic work is messier, more pragmatic and less ideological than either side suggests. A look at how the tools are being used, semester by semester, on real coursework, reveals a set of behaviors that have more in common with how previous generations used Wikipedia, Stack Overflow and graphing calculators than they do with the apocalypse or the revolution.
The unglamorous uses are the dominant ones
Surveys of undergraduate AI use that look beyond the headlines find that the largest category by far is not essay writing. It is summarization and explanation: pasting a difficult passage from a textbook into a chatbot and asking for a simpler restatement, asking it to explain a math problem step by step, requesting a list of the main arguments in an assigned reading.
This is less dramatic than the “AI wrote my paper” story, but it is also more consequential for how classrooms work. The traditional model of college reading assumes that the student will struggle through a difficult text, make sense of it, and arrive in class prepared to discuss. When a chatbot can produce a cleanly organized summary in 30 seconds, the question of what “reading” means at the assignment level starts to shift.
Some students use the summary as a substitute for reading the text. Others use it as a scaffold – they read the passage themselves, generate the summary, and use the comparison between the two to identify what they missed. The same tool, in the same dorm room, produces opposite educational outcomes depending on the habits of the student using it.
Coding assignments and the new normal
In computer science courses, the integration of AI into student workflows has happened faster than in any other discipline. The reason is structural. The output of a coding assignment is a working program, and a working program produced with AI assistance is often indistinguishable from a working program produced without it. Faculty have responded by changing what they evaluate – more emphasis on code reviews, on explaining design decisions out loud, on building during in-class proctored sessions.
The students who are getting the most out of AI in programming courses tend to use it as a more aggressive autocomplete. They write the first version themselves, generate alternatives, compare what the assistant produced with what they wrote, and incorporate the cleaner pieces. Students who lean on the tool from the beginning – describing the problem and asking for the full solution – tend to do well on assignments and poorly on exams, where the assistant is absent.
The gap between assignment performance and exam performance has become a key signal for instructors, and it has changed how grades are weighted in introductory courses. A C grade from a final coding interview where the student has to talk through their reasoning now carries more information than an A on a take-home project.
Writing assignments are the trickier case
The use of AI on writing assignments has produced the most public anxiety, and the picture is genuinely complicated. The simplest pattern – paste prompt, copy response, submit – is detectable enough that most students who try it stop within a semester. Assignments have evolved in response to include personal observations, references to specific class discussions, and in-class drafting requirements that are difficult to fake.
The more interesting pattern is partial AI use that is harder to characterize. A student might write a first draft themselves, then ask an AI to suggest a stronger opening, then rewrite the suggestion in their own words. Is that cheating? It depends on the course, the instructor, the syllabus language, and frankly the standards the student grew up with. There is no consensus, and a single university often has different working norms in different departments.
What is emerging in practice, at least at institutions that have tried to think this through carefully, is a continuum rather than a binary. Some instructors require disclosure of any AI use. Some require that AI not be used for the generation of first drafts. Some encourage AI as a tutor for grammar and structure but treat content generation as out of bounds. The most useful thing an instructor can do for their students is be explicit about which point on the continuum they expect, and most are not yet that explicit.
What students are doing outside of assignments
The use of AI in coursework is only one part of the picture. The other, less visible part is the use of AI as a study companion outside of any specific assignment.
Students preparing for an organic chemistry exam might walk through reaction mechanisms with a chatbot, having it generate practice problems and check their answers. Students struggling with a difficult reading might ask the assistant to play the role of an interlocutor – to challenge their interpretation, ask follow-up questions, push them to defend a claim. Students who are stuck on a particular passage of code can describe what they want it to do in plain language and ask for a starting point.
None of these uses is detectable by an instructor, and most of them are entirely consistent with the goals of the course. They look less like cheating than like having a patient tutor on call. Students who use AI this way report being more engaged with their material, not less, because the friction of getting unstuck has dropped.
The risk is that this friction is itself part of the educational experience. Sitting with a confusing passage for 20 minutes and slowly making sense of it is a different cognitive activity from reading a clear explanation of the same passage. Whether the slower path matters depends on what the course is trying to teach. For a survey course outside a student's major, probably less. For a technical foundation course that the student will build on for years, probably more.
The new academic literacy
The students who seem to be benefiting most from AI tools share a habit that earlier generations had to learn for different tools. They treat the assistant's output as a draft, not as truth. They check what it tells them against the source material. They notice when it confidently produces something wrong, which it still does regularly, particularly for technical content where the model has overfit to common patterns.
This kind of skepticism is the same intellectual habit a strong student brings to Wikipedia, to Stack Overflow, to a textbook from 1995. The form is different; the habit is the same. The students who had internalized it before encountering AI are doing fine. The students who had not are getting worse outcomes than they would have without the tool.
The implication for educators is uncomfortable. The single most important thing the system can teach right now is not how to use AI tools, which the students will figure out on their own. It is how to verify what they read – how to triangulate a source, how to recognize confident wrongness, how to know what you do not know. That work has always been part of college education, and it has always been done less seriously than the rhetoric suggests.
What the next few years probably look like
The arguments about AI in classrooms will keep happening, but the underlying student behavior is likely to stabilize. Tools will continue to improve, faculty will continue to adjust assignments, and a generation of students will graduate having grown up with the assistant as a peripheral piece of their working life, no more dramatic than a calculator or a search engine.
The classrooms that adapt best are the ones that have already started thinking about what AI cannot do well: in-person discussion, hands-on lab work, oral examination, real-time problem solving under pressure. The classrooms that hold tightest to the take-home written essay as the primary measure of student understanding are the ones that will keep being surprised by how the tools are being used.
None of this is the future of education. It is just the next thing. The students figuring it out in real time, by trial and error, will be the ones who set the norms the institutions eventually catch up to.