The Important Work is a space for writing instructors at all levels—high school, college, and beyond—to share reflections about teaching writing in the era of generative AI.
This week’s post is by Samantha Collins, a high school English teacher in the Boston area with a decade of classroom experience. She holds a Bachelor of Arts and a Master of Arts in Teaching from Brown University. Samantha is passionate about expanding access to diverse reading opportunities and designing curriculum that helps students navigate the complex literacy challenges of the 21st century, including the emerging demands of critical AI literacy. You can find Samantha on LinkedIn.
If you’re interested in sharing a reflection for The Important Work, you can find all the information here. —Jane Rosenzweig
This June, while flying to the AI & Digital Literacy (AIDL) institute at the University of Kansas, I opened an assigned article. Before I could read a word, Adobe popped up: “This document looks long. Would you like me to summarize it for you?” A moment later, I opened Gmail. Google asked, “Would you like me to help you write this?”
I thought of my high school students using our district’s GSuite. How could they get a word in before AI offered to read and write for them?

Like many educators, I have been wrestling with the role AI should play in the high school English classroom. I have taught through a short-lived district ChatGPT ban, asked students to disable Grammarly’s auto-enabled AI features, and returned to blue books and timed writing. Still, the public narrative insists: “join AI or fall behind.” AI promises to save time and serve as tutor, counselor, assistant, and friend. As Jane Rosenzweig has pointed out, it is the digital equivalent of a “chicken in every pot.”
But in education there are never enough chickens. Seeking balanced perspectives on AI, I found the AIDL institute, which centered on critical AI literacy. The term was new to me. Dr. Maha Bali, a professor at the American University in Cairo, defines critical AI literacy as knowing “when, where and why to use AI” and “when NOT to.” Educators, she argues, must weigh AI’s impact before allowing uncritical use.
With this framing in mind, I landed in Kansas with two questions:
What place does AI have in a high school English classroom, if any?
Now that generative AI is here, is there a pragmatic way to engage it in a humanities classroom?
I left the AIDL institute with the following reflections:
Reframing Generative AI as a Text, Not a Tool
The White House, OpenAI, Google, and others push for AI in schools, but key questions remain: what, how, why, and at what age? At AIDL, we discussed students using generative AI for brainstorming, editing, resume generation, or as a writing sounding board. Despite varying degrees of enthusiasm, we agreed that effective use of AI depends on domain expertise. AI isn’t always right, and even successful prompts require prior knowledge.
Using AI itself is easy because it is designed to be. The challenge is not in teaching students how to use the tool but in helping them to “read AI,” that is, to evaluate its outputs and understand when, where, and why to use it. At AIDL, Dr. Lauren Goodlad of Rutgers University urged us to see students not just as AI users, but as critical investigators of these tools.
Some of my colleagues at AIDL voiced the common concern that discussing generative AI takes time away from “more important work.” However, this concern rests on the assumption that students aren’t already using AI. They are, whether we acknowledge it or not. If we focus on the central questions of our courses, such as “What does it mean to be human?”, the specific texts become less important. What matters is how we engage with those texts, whether they are classic literature, contemporary media, or even AI itself. Generative AI, a tool trained to mimic human thinking, can become a text itself, inviting students to examine their own humanity in contrast. Teaching critical AI literacy reinforces the value of students developing their own skills rather than outsourcing to the first app that offers to do the work for them.
Differentiating Yourself from the Robot
While AI companies argue that classroom integration will foster workplace readiness, humanities educators can respond by helping students understand what sets them apart from AI tools, and why it matters. To outpace the systems that “might replace them,” students must understand their technological competition and develop the human qualities that AI cannot replicate.
These qualities are becoming easier to identify. MIT’s Media Lab recently published a study examining brain activity during writing with and without LLMs. While the study has drawn pushback, chiefly over its small sample size and lack of peer review, it still points toward an important, underexplored question: what might students lose cognitively when AI takes over the hard parts of thinking? The researchers found that although ChatGPT reduced “friction” while writing, participants who used AI consistently underperformed at the neural, linguistic, and behavioral levels.
Skeptics rightly argue that we need more evidence before drawing sweeping conclusions about the impact of generative AI, and I agree. But that caution should cut both ways. If we’re going to question this early research for its limitations, we must also scrutinize the uncritical embrace of AI in classrooms. Where is the peer-reviewed evidence demonstrating that AI improves student learning outcomes? What supports the claim that bots should be tutors, assistants, and co-authors for our youngest learners? As schools rush to adopt generative tools, we risk letting students become guinea pigs in an educational experiment already underway. The absence of evidence is not evidence of absence, and in the meantime we should be wary of assuming these tools are harmless simply because their long-term effects have yet to be measured.
The MIT study underscored another weakness of using generative AI when writing: homogeneity. Essays generated with ChatGPT tended to follow conventional structures and included similar, predictable content. This makes sense: LLMs are designed to produce the most probable response to a prompt. But when probability becomes pedagogy, creativity narrows. And when students begin to internalize that kind of “correct” expression, we’ve traded the development of voice for the illusion of fluency.
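A toy sketch may make this concrete. The miniature word-probability table below is invented for illustration (a real LLM learns billions of such probabilities from data), but it shows why decoding toward the most probable next word yields the same conventional phrasing on every run, while sampling only loosens, never removes, the pull toward the common answer:

```python
import random

# Toy next-word probabilities, standing in for an LLM's learned distribution.
# (These numbers are invented for illustration only.)
NEXT_WORD = {
    "the":    {"theme": 0.5, "author": 0.3, "symbol": 0.2},
    "theme":  {"is": 0.7, "suggests": 0.3},
    "author": {"argues": 0.6, "implies": 0.4},
    "symbol": {"represents": 0.8, "evokes": 0.2},
}

def greedy_next(word: str) -> str:
    """Pick the single most probable next word -- the 'safest' choice."""
    options = NEXT_WORD[word]
    return max(options, key=options.get)

def sampled_next(word: str) -> str:
    """Sample the next word in proportion to its probability."""
    words, probs = zip(*NEXT_WORD[word].items())
    return random.choices(words, weights=probs)[0]

# Greedy decoding: every run yields the identical, most probable phrase.
w = greedy_next("the")
print("greedy: the", w, greedy_next(w))   # always: "the theme is"

# Sampling varies the output, but common phrasings still dominate.
for _ in range(3):
    w = sampled_next("the")
    print("sampled: the", w, sampled_next(w))
```

Thirty students prompting the same model about the same novel are, in effect, drawing from the same probability table, which may help explain the homogeneity the MIT study observed.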
Some critiques of the MIT study note that cognitive disengagement may reflect outdated assignments more than flaws in AI itself. I agree that education in the age of AI requires a significant shift. If students are turning to AI out of boredom or confusion, our challenge as educators is to design assignments that can act as CAPTCHA tests: tasks that only humans can do. This could mean breaking assignments into smaller steps for assessment and valuing the process over the final product. Originality and creativity must also be weighted heavily in our assignments, since we now know these are things LLMs currently do not do well. We might even create scoring systems that measure how much a student’s response diverges from popular analyses of the assigned text, or how distinct their ideas and processes are from their peers’ (a rough sketch of one such scoring approach follows below).
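Here is a minimal sketch of what such a scoring system might look like, using TF-IDF cosine similarity from the scikit-learn library. The “popular analyses” and student essays are invented placeholders, and the resulting score is only a signal, not a verdict:

```python
# A rough sketch of a "uniqueness" score: how far does a student's essay
# sit from the most widely circulated readings of the assigned text?
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Popular analyses that a conventional (or AI-generated) essay tends to echo.
# Placeholder texts; real use would draw on full published analyses.
reference_analyses = [
    "The green light symbolizes Gatsby's unreachable dream and his hope.",
    "Fitzgerald critiques the American Dream through Gatsby's rise and fall.",
]

student_essays = {
    "Student A": "The green light represents Gatsby's hope and his dream.",
    "Student B": "Nick's unreliable narration made me distrust my own first read.",
}

# Fit one shared vocabulary over all texts so the vectors are comparable.
texts = reference_analyses + list(student_essays.values())
vectors = TfidfVectorizer(stop_words="english").fit_transform(texts)
ref_vectors = vectors[: len(reference_analyses)]

for i, name in enumerate(student_essays):
    essay_vector = vectors[len(reference_analyses) + i]
    # Uniqueness = 1 minus the essay's closest match to any popular analysis.
    uniqueness = 1 - cosine_similarity(essay_vector, ref_vectors).max()
    print(f"{name}: uniqueness = {uniqueness:.2f}")
```

A low score does not prove a student used AI, and a high one does not guarantee insight (an off-topic essay also scores as “unique”), so any measure like this belongs inside, not in place of, a teacher’s judgment.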
The humanities help students form belief systems, develop identity, and cultivate a voice. With the rise of AI, we now have a new context for asking our most essential questions: What does it mean to be human? How can we come to understand generative AI in order to differentiate ourselves from it?
For my part, I am centering my 9th grade English course on the question of what makes us human. After students brainstorm the values and capacities that define humanity, my plan is to conclude each unit by having them write about how the text we read shaped their understanding of those values. They’ll also reflect on how their final piece could only have been created by a human: What tension or difficulty did they experience during the process? What ideas emerged as they completed the assignment? Research like the MIT study will continue to help us create assignments that are true CAPTCHA tests: ones that require both critical AI literacy and metacognitive thinking.
Addressing the Computer in the Room
If we want students to differentiate themselves from chatbots and center their humanity, we also need to examine our schools’ technology practices. A 2021 EdWeek Research Center survey found that 90% of middle and high schools and 84% of elementary schools in the United States have 1:1 computer programs. As we try to understand the scope of student AI use, we cannot ignore the fact that nearly every school has equipped students with personal computers. And as generative AI integrates into school platforms, we must acknowledge that if we give a student a computer, they’re going to have an AI bot to go with it.
As we determine how to proceed with generative AI in schools, we also have an opportunity to decide how we want students to use computers in their education. We are finally removing phones from classrooms in response to students’ declining mental health, but school-issued devices can pose many of the same hazards. The platforms we require, like Google Docs, now include Gemini-powered features asking, “Would you like me to write this for you?” More AI integration will follow soon—and students know this.
While my reflections from the AIDL institute may offer insights into how we might think about AI in the humanities classroom, larger questions loom unanswered. At what ages are screen-based learning and critical AI instruction developmentally appropriate? How should we sequence AI instruction across grade levels? Especially after what we experienced during COVID, do we believe education should continue to revolve around devices? What guardrails can we put in place to protect students’ opportunities to read, write, and learn for themselves?
Shaping the next chapter of education will require educators and school leaders to act with intention. We need to empower students to get a word in first. Before they collaborate with a chatbot, they deserve a space to discover their own voices. Once they have that, we can teach them how to use generative AI wisely. For their sake, we cannot let it happen the other way around.
Wonderful insights, Sam! You give me the courage to really lean into investigating what it means to be human in my classes. I might actually ask my students what they would do if they were ever asked to “prove to me you aren’t a robot!”
Great piece. I concur with many of your observations and conclusions, but there will be many challenges ahead. Different teachers and schools will approach AI in diverse ways; it’s important to recognize that there is no “right” way to deal with AI right now, except to make sure we as teachers are aware of it. I do think reading, whether of AI outputs or in general, is going to be an even more important skill as more and more output is fully automated, so I like the framing of AI as a text, not a tool (though I do think it can also be a tool). Most of the academic conversation about AI revolves around writing, but with multi-modal capabilities and the introduction of agentic AI, there will be ever more complex ways it impacts schools and, by extension, teachers and students.