Wonderful insights, Sam! You give me the courage to really lean into investigating what it means to be human in my classes. I might actually ask my students what they would do if they were ever asked to “prove to me you aren’t a robot!”
I'm also thinking of doing that on the first day of my AI course this fall!
Great piece. I concur with many of your observations and conclusions, but there will be many challenges ahead. Different teachers and schools will approach AI in diverse ways - it's important to recognize that there is no "right" way to deal with AI right now, except to make sure we as teachers are aware of it. I do think reading - whether of AI outputs or in general - is going to be an even more important skill as more and more output is fully automated, so I like the framing of AI as a text, not a tool (though I do think it can also be a tool). Most of the academic conversation about AI revolves around writing, but with multimodal capabilities and the introduction of agentic AI, there will continue to be ever more complex ways it impacts schools and, by extension, teachers and students.
Thanks for reading, Stephen! I agree that strengthening reading skills, which were in decline even before the release of ChatGPT and similar tools, will be essential for learning in the age of AI. I’ve been expanding the definition of a “text” in my classroom for several years now, and it feels only natural that AI should be included. I think the same literacy skills—close reading, analysis, and evaluation—can apply when examining how generative AI platforms work, what they produce, and how they shape both learning and our humanity.
As automation becomes more widespread, understanding what we’re consuming and how it was made will become even more essential.
I’m grateful for the work you’re doing and sharing. Thank you!
May I suggest a good place to start when considering what it means to be human, and in what ways we differ from computational machines: the first chapter of Jerome Bruner's Acts of Meaning (1993). https://www.hup.harvard.edu/books/9780674003613.
While the Cartesian computational metaphor has come to dominate cognitive science, there is an entire wing of the field that argues that human cognition is an embodied, interactive, cultural, and social phenomenon. I suspect most teachers intuitively know this.
Another resource that makes some of this very accessible is the documentary Being in the World:
https://www.youtube.com/watch?v=fcCRmf_tHW8
They discuss jazz, cooking, carpentry, and more, but writing, doing math, and the like work the same way: they are all skillful activities that cannot be decomposed into computation. They are learned by being in the world. The documentary was made before the explosion of generative AI, but the issues are fundamentally the same. Next-token prediction, based on encoding statistical patterns in vectors and massive brute-force computational power, is just as dumb as the earlier generation of rule-based AI.
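To make "next-token prediction over statistical patterns" concrete, here is a deliberately tiny sketch in Python. It uses a bigram model rather than a transformer, and the corpus is made up, so treat it as an illustration of the generate-by-sampling loop, not of how production models actually work (those learn the statistics as vectors with neural networks):

import random
from collections import Counter, defaultdict

# Made-up toy corpus: the only "world" this model will ever see.
corpus = "the cat sat on the mat the dog sat on the rug".split()

# Record how often each word follows each other word (bigram statistics).
counts = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    counts[prev][nxt] += 1

def next_token(prev):
    # Sample the next word in proportion to how often it followed `prev`.
    options = counts[prev]
    return random.choices(list(options), weights=list(options.values()))[0]

# Generate by repeatedly predicting the next token: no meaning, no world
# model, just statistics over what tended to follow what in the corpus.
word = "the"
output = [word]
for _ in range(8):
    if not counts[word]:
        break  # dead end: this word never had a successor in the corpus
    word = next_token(word)
    output.append(word)
print(" ".join(output))

The generator never consults a model of the world; it only replays co-occurrence statistics, which is the point being made above.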
Thank you so much for reading and sharing these resources! Hopefully I will be able to share how this goes once the year is underway.
I was struck by your claim that Generative AI is "a tool trained to mimic human thinking," as this is not my understanding of how it works. Did you get this impression from the AIDL Institute or somewhere else?
I think it's important for students (or anyone using Generative AI) to understand the basic principles of how it works and how, unlike in humans, this mechanism is not tied to any actual model of the world. Or did you mean just that it creates the superficial impression of 'thinking'?
If we understand a bit about how it actually works, we can also better understand its uses and limitations.
This post might be a fun way to explore 'critical AI literacy' with high school students:
https://substack.com/history/post/168417141
Sam can of course answer for herself, but I will say that when I read this, I read "mimic" here as "imitate" or "create the impression that it is thinking" -- which it certainly does when you interact with it (including when ChatGPT gives you that constant stream of "thinking" even when it is not, and even though we all know that this is not what's happening on the back end). The AIDL Institute included a great discussion of how generative AI works, something that indeed everyone engaging with these tools should know.
Thanks! On my second read, it did seem clearer that she meant “create the impression of thinking” or “appear to be thinking,” not “designed to work according to mechanisms that underlie human cognition” (like some other AIs). However, I’m not sure we can assume that everyone knows what is and isn’t happening on the back end. I’ve heard a lot of people say that the “hallucinations” will be fixed eventually, not grasping that they are a feature, not a bug: the same process that produces the fluent, “thinking-like” language also produces the hallucinations. You can’t have one without the other. The question is how accurate it needs to be for whatever purpose it’s serving, and what role the “human in the loop” needs to play. And of course, as the authors of these essays explore, how best to educate humans in an age when this exists.
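Continuing the toy bigram sketch from the earlier comment (again, the sentences and probabilities are invented for illustration, not a claim about any real model), you can watch fluency and hallucination come out of the very same sampling step:

import random
from collections import Counter, defaultdict

# Train on two true sentences.
sentences = [
    "paris is the capital of france".split(),
    "rome is the capital of italy".split(),
]
counts = defaultdict(Counter)
for sent in sentences:
    for prev, nxt in zip(sent, sent[1:]):
        counts[prev][nxt] += 1

def generate(word, length=6):
    out = [word]
    for _ in range(length - 1):
        options = counts[word]
        if not options:
            break
        word = random.choices(list(options), weights=list(options.values()))[0]
        out.append(word)
    return " ".join(out)

for _ in range(4):
    print(generate("paris"))
# On average, half the outputs are "paris is the capital of italy":
# perfectly fluent, statistically well-formed, and false. The sampler
# has no notion of truth, only of what tended to follow what.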
Thanks for hosting this very interesting Substack!
(By the way, are you familiar with Daisy Christodoulou’s work? She has been writing great stuff about AI and writing, including the use of AI to help assess writing, on the No More Marking Substack.)
TBH, there are days when I feel like a robot. ;-)
Hey, at least you’re aware of it! Thank you for reading!