I am a proud Luddite in the classroom
There is no middle ground
The Important Work is a space for writing instructors at all levels—high school, college, and beyond—to share reflections about teaching writing in the era of generative AI.
This week’s post is by Clayton Kistner. Clayton is in his 7th year of teaching high school English in the Kansas City metro, where he currently teaches courses designed for struggling readers and for students not attending 2- or 4-year universities upon graduation. He holds a Bachelor of Science in Secondary Education - English from Kansas State University and a Master of Science in Curriculum & Instruction from Emporia State University.
If you’re interested in sharing a reflection for The Important Work, you can find all the information here. —Jane Rosenzweig
The unease that rises in my gut when I open an LLM has been there from the beginning, when I misheard a colleague recommending a new website (ChatGDP?) and queried it with the prompt my students would be writing on the next day. As the words ticked onto the screen, I felt a series of emotions, none of which I would place on the positive half of a good-bad spectrum. I was immediately uninterested in it as a tool, but wildly interested in the implications if the technology took off as many people thought it would. That subtle unease has since transformed: the more I read, listen, and speak about generative AI, the more I feel an obligation to actively resist its encroachment into every aspect of my life, particularly the classroom I share with my students.
This evolution from unease to resistance has been especially shaped by one book from my summer reading list: Brian Merchant’s Blood in the Machine (though special note also to Karen Hao’s Empire of AI, Emily M. Bender & Alex Hanna’s The AI Con and John Warner’s More Than Words, all of which were very influential for me). I found much hope and inspiration in the story of the Luddites, and as Merchant argues, their experiences give us a strong framework for approaching disruptive technological advances in the present day.
I stumbled upon Merchant’s book by recommendation after I proudly proclaimed myself a Luddite at a table of fellow educators, knowing at the time only the general derogatory meaning: someone broadly opposed to technological change. I had the book sent to my local bookseller immediately and was quickly engrossed by the story of the Luddites: working-class people opposed not to technological change on its face, but to technological change that “reaped profits for their owners at the expense of their neighbors” and worked “to the disadvantage of the human being.” The Luddites, though their legacy has been unfairly tainted by the pejorative coined in their name, took a critical look at their moment of technological progress and fought back against the specific technologies and people that disrupted their way of life. Today, no technological advancement (save for cell phones, which are now facing widespread and warranted bans) has shredded the fabric of our classrooms like AI has.

In 1991, in response to the growing influence of computers as we moved “Into the Electronic Millennium,” Sven Birkerts wrote that “[t]ransitions such as the one from print to electronic media do not take place without rippling—more likely, reweaving—the whole of the social and cultural web.” OpenAI’s Sam Altman seems to agree in his assertion that “the whole structure of society itself will be up for some degree of debate and reconfiguration” as AI makes its way into our lives. In schools, we’re seeing these ripples in myriad ways: the viral report that college students are cheating their way through school, the AI-driven school where “students spend a total of just two hours a day on subjects like reading and math,” and the AFT’s full embrace of AI in the new “National Academy for AI Instruction” created in collaboration with Microsoft, OpenAI, and Anthropic.
Like many teachers, I’ve felt these ripples in my interactions with students suspected of improperly using AI, interactions that have become increasingly painful and discouraging. Did I not do enough to make the content relevant or interesting for them? Should I have been using the browser-monitoring technology that my district insists is useful rather than demeaning and surveillant?
Before long, I fear these ripples may become a full reweaving that does not seek to benefit our students, nor create a world that values curiosity, community, or the inherent value of education and learning over the mere output of products.
I think also of an exchange I had this past year with one of my students who was excited to show me a graphic design project she had just finished, one of many hanging on the wall of her classroom. A freshman, she had created an amazing greeting card that featured two of her favorite Pokémon jumping across the cover; I asked her how she made it, knowing she’s a talented artist who’s dabbled in many different mediums. “I used [Adobe] Illustrator for all of it!” she responded with pride. But when I pointed to a few cards hanging near hers, her eyes fell to the ground as she said, “Yeah, a bunch of people just used AI for theirs.” In that moment, I felt a grief for her as an artist and as a student. What is there to say when something heralded as grand and inevitable technological progress stomps on something as human as the earnest art of a developing learner?
To manage this technodeterminist future, many educators have attempted to find a middle ground that invites AI into the classroom with alleged fidelity. This middle ground may look like new versions of the iconic (in name, not in practice) CRAAP test, or like categorizing assignments based on expected use of AI. Yet even with these ‘responsible use’ programs, opening up a trickle of AI so often results in a flood, as a California high school student described in “The Important Work” recently, and even ‘responsible’ use brings with it the undeniable issues of environmental harm, plagiarism, and labor exploitation.
I find myself instead more drawn to the experience of the Luddites. They saw what Charlotte Brontë described in Shirley, her novel set in the Luddite uprisings of the early 19th century: that “[t]he future sometimes seems to sob a low warning of the events it is bringing us.” In my school and many others, I worry for those teachers who do not hear or heed this warning, and about the implications their ambivalence toward, or outright embrace of, AI may have for students and for education writ large. I don’t mean to play into AI doomerism (I don’t think AI overlords will be turning us into meat batteries anytime soon), but the risks AI poses to our way of life and to our students’ learning are real, and we are facing them now.
Schools remain an enticing market for AI corporations, as seen in OpenAI’s new “study mode,” which purports to “[help] you work through problems step by step instead of just getting an answer,” though it has already shown itself to be quite fallible in that regard. To create a world with AI at its center, both culturally and economically, as AI corporations surely hope is in our future, entrenching their products in the lives of vulnerable young people is key. No educator should be unwittingly complicit in that project, and certainly not actively so.
The Luddites banded together to quite literally destroy the technology that threatened their livelihoods, taking hammers (a potential future tattoo idea for me) to the machinery itself while also vilifying the men responsible for these profit-seeking, job-destroying ventures. This year, I will not be using any generative AI in my practice, nor will I permit or encourage its use by my students. This isn’t to say that I will turn up my nose at the issue and ignore it; on the contrary, I plan to spend multiple months with my students reading, writing, and discussing AI as an important aspect of our culture to be deeply understood and critically scrutinized, but we will do so without entering a single query into an LLM. Keeping our learning environment free of AI use is a principled response to the techno-bro-driven future forcing its way through our classroom doors: it disrupts the ability of AI corporations to use the minds and writing of our students to train their models, build a world “where entire classes of jobs will go away,” damage vulnerable ecosystems, and further entrench exploitative labor practices in our society.

George Mellor, a Luddite executed by the state for fighting to protect his community’s way of life, wrote that “a soul is of more value than work or gold,” and surely some of the souls most worthy of our careful attention and protection are those of our students. Even without the (literal) Enoch’s hammer of the Luddites, I’ll gladly raise my (figurative) hammer high if it means protecting the humanity of the students entrusted to my care. I hope you’ll raise yours, too.