Why I'm Saying No to Generative AI
There is no value-add to using generative AI for historical work
The Important Work is a space for writing instructors at all levels—high school, college, and beyond—to share reflections about teaching writing in the era of generative AI. We hope to spark conversations and share ideas about the challenges ahead, both through regular posts and through comments on these posts. If you have comments, questions, or just want to start a conversation about this week’s post, please do so in the comments at the end of the post.
This week’s post is by Cate Denial. Cate is the Bright Distinguished Professor of American History and Director of the Bright Institute at Knox College in Galesburg, Illinois. Cate is a pedagogical consultant who works with individuals, departments, and institutions in Australia, Canada, Ireland, the U.K. and the U.S. Cate’s new book, A Pedagogy of Kindness, argues that instructors and institutions of higher education must urgently focus on compassion in the classroom. You can read more of Cate’s work on her blog and find her on Bluesky.
If you’re interested in sharing a reflection for The Important Work, you can find all the information here. Your reflection does not have to be about using AI in the classroom—we are interested in any way that you’re thinking about the important work these days. If you’ve redesigned assignments to avoid using AI, if you have strong feelings about when and how AI should or should not be used in the classroom, if you do something that you think works very well without AI, we want to hear about that too. —Jane Rosenzweig
When I first heard about ChatGPT I felt a jolt of panic. Here was a product that had the potential to upend so much of what I did as a history professor, teaching undergraduates in courses that involved a lot of writing. The many ways in which it could be abused spooled out before me like a ticker-tape of woe. But I have learned by now—through the age of MOOCs, the adoption of Zotero and other useful software, the lifeboat learning of the pandemic—that while panic is an absolutely normal first response to change, I do best when I slow down, think carefully, and purposefully reorient myself toward my values.
Central among these is the concept of trust. I do not believe students are, as a whole, up to no good. I do not believe they all have nefarious intentions. While outlier students might pop up now and again, I do not believe it makes sense to anticipate that everyone is out to cheat. Instead I believe my students when they tell me about their educational experiences, and I believe in them, as collaborators on the way to meaningful learning.
What would it look like to trust my students in an age of generative AI? An informal sampling of some of my students told me that most knew about ChatGPT and its ilk, and had either messed around with it themselves or knew someone else who had. But those same students expressed little awareness of the context in which ChatGPT existed, be that context ethical or technological. I trusted my students to be critical thinkers, so I put together a slate of readings to inform conversation between us about what ChatGPT could do.
I shared a link to Mark Riedl’s excellent “A Very Gentle Introduction to Large Language Models Without the Hype” in order to discuss the fact that generative AI apps work by predicting text, not by thinking. I drove this point home by having my students take out their phones, open a text message, and, using only the predictive text in that app, write the history of the day before. The resulting stories were ridiculous and funny, and demonstrated that while predictive text could string together grammatically correct sentences, it couldn’t actually make sense.
We tackled ethics by reading about the ecological costs of data centers, the labor violations involved in cleaning up the data sets from which generative AI apps drew, questions of access, and popular messaging about generative AI. “I have never heard about this before,” said one student. “Thank you for bringing this issue to my attention! I didn't know anything about it!” said another.* At the end of our lengthy conversation about these issues, I had my students write their own generative AI policies, to which I would hold them for the rest of term. In the first class in which I used this approach, everyone rejected AI.
I also supported my students in their decisions by changing my approach to my courses. Rather than assuming that students had all the time they needed to complete assignments as homework, I began to offer in-class workshop days. These served multiple functions. They provided time for students to work on their papers, so they would be less likely to run out of time at the end of the assignment period and turn to something like ChatGPT out of desperation. They also allowed me to enter into one-on-one conversations with students multiple times about their work. I saw their outlines, their drafts, and their edits. I had to remove some content from my courses to make this possible, but the resulting papers were of such a high quality that they more than made up for that loss.
This, then, was how my relationship to ChatGPT began. It is not where that relationship has ended. My own ethical wrangling with the material and environmental costs of generative AI, and with the harm it causes to other humans, continued. I now feel comfortable banning students from using generative AI for class, not because I distrust them, but because there is no value-add to using generative AI for historical work.
There is nothing that I do in my professional capacity—a capacity I am modeling and passing along to students—with which generative AI can help. When asked historical questions, it can weave superficially related words together into incorrect answers. It cannot easily read human handwriting, and even in those instances where it has done so, human fact-checking is still required. It can only work with digital files, which excludes the vast majority of the sources we have.
And when it comes to writing history, I feel very strongly that the value of searching for and using words to brainstorm, draft, redraft, polish our writing, and discover our own thinking is paramount. Not everyone finds writing easy, and I have a responsibility to meet every student where they are and to help them, in as many creative ways as I can, become better at it. But if employers want “good” writing (for some value of good), and if my students can only produce it by asking generative AI to do the work, then I’ve done them a disservice. Why wouldn’t an employer turn to a machine to write something in that case? (At least while that machine does the job for free.) Tracking down primary sources, collecting objects, listening to stories, deciphering handwriting, analyzing ideas, and thinking through what we think about those ideas . . . that is beautifully human work. I will no longer apologize for wanting humans to do it.
My position may change again in the future. Teaching is a wonderfully iterative process, and as new information becomes available it makes new decisions possible. Perhaps the accuracy of transcription software will increase; perhaps funding for the digitization of sources will do likewise. What will remain the same is my commitment to supporting students in becoming excellent and ethical historians, whatever the tools—pencil, manuscript, tapestry, pottery shard, oral tradition, painting, laptop—we have at hand.
* Student quotes anonymized and used with permission.
Thank you for sharing this! One thing that particularly stood out for me was your use of workshop days. This is something I've also done as a community college writing professor, starting a few years back now, and like you, I've found that it works really, really well. I've been thinking about *why* it works, and I thought I would share some reflections on that, in case it may be helpful to anyone who is thinking of trying something similar.
- As you say, part of it is simply the time. For my classes, it probably adds up to about 10-15 hours over the course of the term, and for students who have a lot of demands on their time, that makes a difference.
- Part of it is also the time over time. In other words, the way that the time gets spread out over the term supports a more effective writing process.
- Part of it is also the space and the social support of working in that space with others. This is big for many of my students, and while I'm happy to see them working intently, I'm also happy to see them having positive interactions with each other during workshop time.
- Part of it is what they get from their check-ins with me. I think there are several dimensions to this. It's about getting regular, timely, formative feedback. It's also about getting that feedback in person, which seems to work better for a lot of students. But most importantly, I think, it's about (a) building a relationship with me that (b) recognizes and welcomes them as individuals, (c) supports their growth as thinkers and writers, and (d) provides a consistent structure for them to think and talk about the work that they're doing and what they're learning from it.
- And part of it, frankly, is what I get from my check-ins with them. I get to know them a lot better. I get a better sense of how they work and how they think, what works for them and what doesn't. What I learn from these conversations has been immensely helpful for my teaching.
- For all of us, my sense is that workshop time has helped to make the work of writing feel more human. While the practical and logistical side is important, I suspect that this is ultimately why it has helped students do better work.
I really, really appreciate this line within the overall piece: "My position may change again in the future."
Anyone drawing hard lines right now seems naive at best, as there is so much movement in this arena; if nothing else, humility feels like the most necessary of mindsets.