Shifting My Thinking about AI in the Classroom
Was I helping kids become critical thinkers or just assisting in the dumbing down of society?
The Important Work is a space for writing instructors at all levels—high school, college, and beyond—to share reflections about teaching writing in the era of generative AI.
This week’s post is by Michael Burns, who is in his 30th year of teaching high school English and Humanities in Atlanta, Ga. He holds a Bachelor of Arts in English from Furman University and a Master of Arts in Teaching from Agnes Scott College. Currently, he serves on the Teacher Advisory Board for the National Humanities Center. Connect with him on Substack and LinkedIn.
If you’re interested in sharing a reflection for The Important Work, you can find all the information here. —Jane Rosenzweig
When colleagues and friends ask me why I'm so interested in AI, my standard answer is to quote the adage "Keep your friends close and your enemies closer." That answer dates to my first experience with ChatGPT. A colleague texted me the link during our Christmas 2022 break, and we spent the night exchanging silly ChatGPT-generated stories. Even through the tears of laughter, we both knew this technology was scary good and would change everything. I jokingly texted, "Humanity is doomed."
Since then, both fear and fascination have driven me to learn all I can about AI. I'm fascinated by the power and potential of this technology; I'm also fearful that if it doesn't doom humanity, it could certainly change what it means to be human.
For the past two years, I've leaned into experimenting with AI in my high school classes. My school has been very supportive, giving teachers the freedom and resources to use or ignore AI. And as you might imagine, my students were excited to use it. I had them create Finstas (fake Instagram accounts) for Gatsby characters. We had success using Flint, where students chatted with bots in response to my close-reading questions. My students and I were even interviewed for a news segment on AI in the classroom. I was developing a reputation as an "AI Guy."
Each time I allowed students to use AI, I had them write reflections as part of the final submission: What worked? What, if anything, was better with AI? What did it miss or get wrong? Should we allow AI in our class? Feedback was usually mixed. Some students loved it, and others felt there was something off about using it in their writing. We were engaging with the technology in a careful and thoughtful way—progress for my students and a demonstration of my competence as a modern English teacher, right?
Even as I was having these positive classroom experiences with AI, I had doubts. Sure, the AI-related assignments in my classes looked good and felt cutting-edge, but what were students actually learning? Was AI actually helping them develop as thinkers or writers? Nicholas Carr writes in “The Myth of Automated Learning” that “Armed with Generative AI, a B student can produce A work while turning into a C student.” Is that what I’d been doing? Was leaning into AI undermining my students’ education?
Over the past ten years, I've seen how smartphones and social media have shortened attention spans and how more screen time means people are reading fewer books, myself included (Federal Data on Reading for Pleasure: All Signs Show a Slump). During Covid, I witnessed how online classes and more technology didn't translate into learning or social development. Last year, my school banned smartphones (a great success), so why employ this even more disruptive technology? Was I helping kids become critical thinkers or just assisting in the dumbing down of society?
I know my students are generally trying their best to do the right thing; they are good kids living busy lives. Yet too often, they are overscheduled, overstressed, and overly fixated on grades. So why wouldn’t they use AI if it means saving time, reducing stress, and possibly getting a better grade? Why should anyone bother reading or writing (or even thinking) if using AI means doing things more quickly with less personal and mental energy?
Things reached a tipping point for me in the late spring. AI-generated images and videos had become so good that it was hard for me to tell what was real. It was the same with writing. Neither my BS detector, seasoned by 29 years in the classroom, nor Turnitin.com's technology was good enough to keep up with the quality of generative AI. A clever student could now, with minimal effort, slip something by me and thus bypass my goal of getting him or her to think and write carefully and critically. A win for technology, but a loss for learning.
Colleagues suggested fighting technology with technology. So I used browser add-ons to review document histories and try to catch students cutting and pasting chunks of AI-generated writing. But honestly, I didn’t get into teaching to play Big Brother. Spending hours sleuthing didn’t feel right. Giving thoughtful feedback on writing that I wasn’t sure was original didn’t either. Why bother?
Of course, I would still catch students making poor decisions by the easy AI tells: the ubiquitous em dashes, overly sophisticated diction, or a paper dramatically better than the writer's recent assignments. There were also after-school meetings with students whose work Turnitin had flagged as AI-generated, only for my further investigation to prove that the work was, in fact, original. AI was making me doubt my students and myself.
I took these questions and doubts with me this summer to the AI and Digital Literacy Institute (AIDL) at the University of Kansas. The high school and college teachers there, from across the U.S. and Canada, challenged and inspired me. I expected to return with best practices and specific AI programs to try. Instead, I returned with a new mindset, best captured in Maha Bali's blog post "What I Mean When I Say Critical AI Literacy." Bali defines literacy as extending "beyond the basic skill of how to use something...into the capacity to know when, where, and why to use it for a purpose, and, importantly, when NOT to use it."
That emphasis on NOT using it reframed my thinking. I had been judicious about AI use, but this new mindset freed me to see things differently. Maybe helping students to be successful in an AI world means NOT using more AI in my classes. Also at AIDL, I encountered Marc Watkins' writing, and his "Adapting to AI Is Not Adopting AI" crystallized things for me: "At best we're left trying to adapt to generative AI and keep intact what we want to preserve about what makes human learning and labor dignified." Maybe preparing my students with skills and experiences other than AI-related ones is the best way to teach these days.
When school starts back, some may still view me as an “AI Guy,” which I guess I’m OK with. But my view of that role has shifted. Now I want to get my students—and my colleagues—to think more critically about when, where, and why to use or not use AI. I recognize that there is great power in the technology and potential for societal good, just as there are many ethical, economic, and environmental questions that need to be addressed.
While all the issues around AI feel overwhelming, the technology has injected new energy into my career, which is a rarity for someone starting his 30th year as a teacher. Part of that energy comes from trying to keep current on ever-changing AI developments. Part of it comes from fear of being replaced by some future teaching bot. And part of it comes from the inspirational community I've discovered among others thinking critically about AI. Most urgently, I feel energized to do what I can to preserve and nourish the humanity in my students and colleagues.
What that will look like exactly in my classes, I don’t know. I do know I can’t keep teaching the way I have been. For now, I won’t be using AI with my students. Yes, AI is sexy and fun and efficient, but I haven’t seen enough credible evidence (is there any?!) to show that it helps develop young readers, writers, and critical thinkers. I hope by unplugging from screens, conversing face-to-face, writing by hand in class, reading more books, and slowing down instead of speeding up, my classes can put more value on what is real and authentic.
I used to find myself asking, “How can I get more AI in my classes?” This year, I think the question to ask is “How can I get more human in my humanities classes?” I still plan on keeping AI close, just maybe a little closer to me and a lot farther from my students.