The Important Work is a space for writing instructors at all levels—high school, college, and beyond—to share reflections about teaching writing in the era of generative AI. We hope to spark conversations and share ideas about the challenges ahead, both through regular posts and through comments on these posts. If you have questions or want to start a conversation about this week’s post, please share them in the comments at the end.
This week’s post is by Stephanie Kratz, who is Distinguished Professor of English at Heartland Community College in Normal, Illinois, where she teaches composition and literature in online, face-to-face, and hybrid formats. She is interested in faculty professional development, alternative grading, and instructional design. You can find Stephanie on LinkedIn and through her website.
If you’re interested in sharing a reflection for The Important Work, you can find all the information here. Your reflection does not have to be about using AI in the classroom—we are interested in any way that you’re thinking about the important work these days. If you’ve redesigned assignments to avoid using AI, if you have strong feelings about when and how AI should or should not be used in the classroom, if you do something that you think works very well without AI, we want to hear about that too. —Jane Rosenzweig
“Your post reads like it is AI-generated. I'm not saying that it is, but I wanted to point it out so you are aware.”
Have you found yourself writing similar feedback on student work? I spend more and more of my time doing so. With the widespread availability of AI writing tools, faculty face difficult questions: Is the research paper dead? How can we stop students from cheating? What does this mean for higher education? As I stand next to these monumental questions and wonder about the future of my life’s work, I am overwhelmed. Even as an experienced community college English teacher of 29 years, what can I do in the face of such foundational change? The answer, it seems, is very little; what I can influence is the individual students in my classes learning about writing this semester. As I navigate student AI use, I have learned to lead with curiosity, not accusation.
Most of my teaching is in online, asynchronous general education courses of 22 students each. I read a lot of student writing (our course load is five/five), and I have become adept at identifying writing that was written by someone other than my student. AI-generated writing is often highly polished and lacking in personality. Some AI-generated essays I’ve received refer to things we didn’t even cover in our course. In short, some AI writing is a dead giveaway; however, it is not always so easy to spot.
AI-generated texts frequently include inconsistencies in writing style. Student writing doesn’t usually improve by leaps and bounds between the rough and final drafts, so when I see it, I notice. It’s in the voice, the word choice, and the sentence structure. When something feels off, I trust my instincts; I use a combination of the student’s previous work, my experience as a writing professor, and AI detectors to identify cheating (more on detectors below). And when I think an essay was written by AI, my next step is to invite students into a conversation about their writing process. However, I did not always do so.
Last semester in week two, a composition student submitted a suspiciously well-written and polished essay with a surface-level discussion of ideas. The AI detector flagged it as 78% AI-generated. Much has been written about the inaccuracy of AI detectors, and I have found their reliability inconsistent; they often disagree not only about the percentage of AI text but also about whether a human wrote it at all. I still use them because there is something comforting about a quantitative data point, but I am also deeply uncomfortable with them. Research has shown that AI detectors are biased against non-native speakers, Black students, and neurodivergent students.
In this case, however, even without the detector, I had other reasons to believe this student was trying to get away with something: he submitted the essay two weeks early, before we had even covered all the material, and he had not been engaged with the course up to that point. In fact, this was the first assignment he had submitted all semester. I graded it as zero points, wrote a comment bluntly stating that the essay was not acceptable because of the 78% score, and offered a re-do option. I thought that the re-do option was a gift! The student responded angrily: “AI written? Those checkers r not always rite. how dare u? This is my work. i can’t believe u would insult me like this.”
As I read his error-laden response, I began to second-guess myself. Had I been too harsh? I knew that AI detectors are less than foolproof. Was I being a bad teacher? But then, before I could respond to his angry outburst, he withdrew from my course. In hindsight, I saw his defensiveness as a sign of his guilt. Worse, though, he fled. I had lost a learning opportunity with him.
In other AI cases, students are not intentionally trying to cheat; they are experimenting with AI without fully understanding its ethical implications. When I received more suspicious essays, I responded differently, having learned from the tone of my last exchange. I decided to guide rather than punish. A literature student told me she had used AI as a proofreading tool, not realizing how significantly it had altered her original writing. She admitted she didn’t know where to draw the line between acceptable use and dishonesty. Another composition student relied on AI too much when he was overwhelmed with an assignment. Rather than consult with me or a tutor, he had turned to AI out of frustration.
I responded by acknowledging the ubiquity of AI in the world and encouraging students to think critically about responsible AI use. I asked, “How can you make use of AI as a tool for brainstorming or feedback while maintaining your own voice?” and “What is one way to break down this assignment into more manageable steps?” As my students learn about writing, I am learning about how to integrate AI into my teaching.
In spite of my initial discomfort with AI, I decided to embrace its existence and designed an AI-aided assignment. After writing a rough draft of an essay, the assignment prompts students to ask an AI tool for advice on how to revise it. Then they write a reflection about which suggestions they will use or abandon, and why. This assignment has opened up conversations about AI use rather than shutting them down.
I decided to integrate AI into my teaching partly because I felt I had no choice. But it also became increasingly obvious that students with no experience with AI tools—perhaps because their professors banned their use outright and accused them of cheating without proof that AI detectors cannot provide—would be at a disadvantage in their careers.
I read blogs like The Important Work and newsletters like Teaching from The Chronicle of Higher Education. These resources have given me more ideas for adapting my assignments to AI. Right now, I’m part of a faculty group discussing the books AI 2041 by Kai-Fu Lee and Chen Qiufan and Teaching with AI: A Practical Guide to a New Era of Human Learning by José Antonio Bowen and C. Edward Watson. Have I solved the problems of how AI will affect higher education? Of course not, but I am improving my daily interactions with students.
Perhaps my biggest lesson so far is this: trust students first. My comments now look more like this: “AI tools can impact how your writing comes across to others. These tools are not ‘bad’ or ‘good,’ but they carry a sort of impersonal tone. They need to be used critically as a support without letting them take over your writing. Did you use AI tools or grammar assistance to write this? If so, you should work to make the writing sound less impersonal and add more of yourself to the discussion.” I have found that emphasizing students’ writing development and how AI will shape their futures encourages strong writing habits that reduce over-reliance on AI.
It is too soon to determine whether my AI-adapted assignments are the reason that I’m seeing less unauthorized AI use. Perhaps I’m not noticing AI as much because the technology is evolving to make it professor-proof. Or maybe students welcome the opportunity to discuss its use in an academic setting, and maybe they appreciate me making my learning visible. Discussions help us hone our thinking about AI, warts and all. I’m striving to be more transparent and less perfect with students. As for writing instruction, the emphasis may need to shift even more towards the process instead of the product, since a passable, if boring, essay can now be written in seconds. Trying to control students’ use of AI seems futile. For better or worse, the AI future is here. How will we help students manage it responsibly?
I acknowledge the use of Grammarly to help me organize my blog post. I entered the following prompt: “Help me organize my ideas” with a rough draft to help focus my essay.
This is a wonderful account of how to engage with students about their writing process post-ChatGPT, right down to the acknowledgment at the end.
I struggled with pre-ChatGPT forms of academic dishonesty, including a case where I think someone engaged in contract cheating, all the way down to having paid for three rough drafts as well as a final paper.
I love that your initial comment starts the discussion without assuming anything inappropriate has happened. I wish I had started the conversation with the suspected contract cheater similarly.
Genuinely appreciative of the openness of this—the generosity it affords to others walking parallel paths right now? Invaluable.