Historians are familiar with revolutionary change. Investigating pivotal moments in the past is central to what we do. The Print Revolution accelerated the spread of information, facilitating social and religious upheaval. The Industrial Revolution mechanized production, putting entire professions out of work and reshaping class relations. As AI has gradually entered our lives, from autocorrect on our phones to the AI-generated responses that now appear by default in Google searches to platforms that can read, research, and write, we would be remiss not to recognize that we are living through a comparable transformation.
I came face to face with this change in the classroom in Fall 2025 while teaching a first-year course on the History of Money. Of 82 students, roughly three-quarters came from outside history, especially Business, Economics, and Political Science. The main writing assignment, which I had used for a decade, asked students to compare Max Weber’s “Protestant ethic” thesis with two case studies, to assess whether the evidence supported his argument about the emergence of capitalism. I had extra TA hours available, so I asked my teaching assistants to verify footnotes before I graded the papers.
What we found was striking. In paper after paper, footnotes did not correspond accurately to sources. Page numbers were incorrect, references led to irrelevant illustrations, or broad page ranges were cited without justification. Some papers included direct quotes that did not exist anywhere in the assigned texts. What was going on?
These kinds of citation errors are evidence of AI use, and investigating the papers revealed how students were engaging with these tools, and how some were abusing them. Many students relied on AI to summarize readings, generate outlines, and suggest arguments or examples. Others went further, submitting AI-generated text as their own, adding filler to meet word counts, fabricating references, or even producing entire essays. The particular context of that assignment, with its fixed topic, familiar sources, additional time for verification, and large number of non-history students, made the problem visible. But the broader lesson was unmistakable: we cannot simply ignore increasing AI use among students; we must rethink how we teach and assess research and writing.
Faced with finalizing syllabi for the next term, I contacted colleagues across Canada, describing my experience and asking for departmental policies and suggestions for AI-resistant assignments. Their responses were both validating and discouraging. Like me, many instructors were uncertain how to proceed. Some had left assignments unchanged; others had shifted entirely to in-class assessments. Many were experimenting: incorporating presentations, using assignments based strictly on course material, scaffolding research essays, or assigning alternative formats such as visual projects and group videos. A few were confronting the issue directly by teaching students how to use AI responsibly, for instance by critiquing its outputs or crafting effective prompts.
Despite these efforts, many of us feel we are working in isolation. Institutional guidance remains limited. Academic integrity policies are struggling to keep pace with the rate of change, and those we do have are often ambiguous (not all parts of campus see AI as a problem) and non-committal (responsibility is frequently pushed back onto individual instructors to define their own policies). At a broader level, some administrators frame AI as a tool for efficiency rather than a challenge requiring scrutiny. Faculty who attempt to detect or limit AI use may find their increased workload unrecognized, as a recent article from the UK suggests.
It’s easy to share my story, but much harder to offer guidance as we move forward. As my experience with the Protestant Ethic assignment showed me, verifying AI use is inherently challenging. Detection tools are unreliable, and it is often impossible to meet the burden of proof required to pursue academic misconduct cases. Instead, we rely on indirect indicators: nonexistent sources, superficial analysis, or weak engagement with course material. Yet these signals are likely to become less visible as AI improves and as students become more adept at writing prompts. We also need to recognize that our students face a difficult choice: use AI to produce polished essays, outperforming their peers and strengthening their graduate school applications, or do their own work and risk lower grades. In this environment, those who choose to do their own work may be disadvantaged, effectively inverting the logic of our evaluation systems.
Indeed, the focus is already shifting from prohibiting AI use to defining its misuse. Students see the advantages of the tool: summaries of historical topics can provide orientation (even if they are drawn from sources such as Wikipedia), and AI-generated flash cards can help them prepare for exams. They use AI for brainstorming, and libraries encourage its use to locate sources; literature-review tools such as Consensus are now being integrated directly into LLMs. Yet even if we can draw a line between AI assistance and submitting AI-generated work, the fact remains that students who use AI to generate ideas and sources are bypassing foundational intellectual work. And studies are beginning to show that heavy reliance on AI may diminish cognitive engagement over time.
At the same time, a retreat to exclusively in-class assessments risks creating a new problem. If we abandon research papers, do the assessments that replace them teach the same skills? Some of my strongest students have expressed disappointment at the loss of opportunities to conduct independent research, and I am left wondering how we will recruit students to MA programs if they reach the end of their undergraduate degrees without substantial research experience. Our students need our guidance: they want to be rewarded for not cheating, but a blanket condemnation of AI tools makes little sense to them. If we are teaching research and writing, we do them a disservice if we do not discuss proper use of the tools that surround them.
History reminds us that technological revolutions cannot be halted. Just as manuscript copyists could not prevent the spread of print and the stockingers could not stop mechanization, we need to accept that the AI Revolution is upon us and that a purely defensive posture is insufficient. In fact, this moment may create new opportunities for the humanities. Skills such as critical evaluation, contextual understanding, and the ability to interrogate sources may become even more valuable in a world saturated with AI-generated content. Assessing AI outputs and refining prompts may emerge as essential competencies. And I still find solace in the occasional indication that AI is intrinsically incapable of performing certain tasks correctly, highlighting the continued importance of human expertise.
In the end, I think we must remain informed and alert: read news and opinion pieces on the topic, talk to your students, and try the tools yourself to understand how they work. In fact, this post is a little long; perhaps I’ll ask AI to tighten the prose. (As my students would say, jk!)
Jill Walshaw, Associate Professor of History, University of Victoria