17 Comments
jwr

Thank you for sharing this! One thing that particularly stood out for me was your use of workshop days. This is something I've also done as a community college writing professor, starting a few years back now, and like you, I've found that it works really, really well. I've been thinking about *why* it works, and I thought I would share some reflections on that, in case it may be helpful to anyone who is thinking of trying something similar.

- As you say, part of it is simply the time. For my classes, it probably adds up to about 10-15 hours over the course of the term, and for students who have a lot of demands on their time, that makes a difference.

- Part of it is also the time over time. In other words, the way that the time gets spread out over the term supports a more effective writing process.

- Part of it is also the space and the social support of working in that space with others. This is big for many of my students, and while I'm happy to see them working intently, I'm also happy to see them having positive interactions with each other during workshop time.

- Part of it is what they get from their check-ins with me. I think there are several dimensions to this. It's about getting regular, timely, formative feedback. It's also about getting that feedback in person, which seems to work better for a lot of students. But most importantly, I think, it's about (a) building a relationship with me that (b) recognizes and welcomes them as individuals, (c) supports their growth as thinkers and writers, and (d) provides a consistent structure for them to think and talk about the work that they're doing and what they're learning from it.

- And part of it, frankly, is what I get from my check-ins with them. I get to know them a lot better. I get a better sense of how they work and how they think, what works for them and what doesn't. What I learn from these conversations has been immensely helpful for my teaching.

- For all of us, my sense is that workshop time has helped to make the work of writing feel more human. While the practical and logistical side is important, I suspect that this is ultimately why it has helped students do better work.

Cate Denial

Thank you for this wonderful comment. I agree with you about it being a multi-factor process, and so rewarding, not just for students but for us!

Marcus Luther

I really, really appreciate this line within the overall piece: "My position may change again in the future."

Anyone drawing lines right now feels at best naive, as there is so much movement in this arena and, if nothing else, humility feels like the most necessary of mindsets.

Cate Denial

I think humility serves all of us, including those who feel certainty around generative AI.

Justin

Indeed, the only lines being drawn should be for sketching!

Marc Watkins

While I certainly agree there are significant harms to student skills associated with the uncritical adoption of generative tools, I don't think bans are practical. I would caution against such sweeping statements: "There is nothing that I do in my professional capacity—a capacity I am modeling and passing along to students—with which generative AI can help." There is far more to generative tools than ChatGPT. Transcribing historical material by analog methods may be your preferred mechanism, one you find value in; however, it may not be so for all researchers and all students, especially those who have issues processing information. I would ask you to consider whether banning an entire technology for students may have unforeseen adverse consequences for learners who could benefit from using generative AI as a reading assistant, translator, transcription tool, note-taking aid, or in many other valid use cases. I would also ask whether you think banning such use cases outside of text generation is actually possible.

We are already seeing students with disability accommodations asking for specific generative tools. Part of my own pedagogical stance is to not create a policy where a student would ever have to go through the often painful and very expensive process of disclosing they have a disability in order to access a tool for an accommodation.

I will give you an example. I asked my students in a Digital Media Studies class to examine Google's NotebookLM as an artifact of inquiry (e.g. they can read about it, or use it). NotebookLM only works with the material you provide it. The AI can summarize and synthesize material, but it cannot generate new writing. Several students used the tool's AI to help them explore their own notes and find connections they struggled to make on their own. Google recently updated the system to allow users to generate a podcast of the material. One student reflected on how she used the AI to create a podcast of her notes, then listened to it while cooking dinner and folding laundry, and reported that she was able to process and retain information better than by reading her notes alone. I could never police that sort of usage, nor could I know if a student used such a tool in their studying habits, so why would I take a position about banning it in my course?

Students will have access to these tools and look to faculty for guidance about both potential harms and practical applications. I'm glad to hear that you are open to changing your position. I would hope you keep the door open to hearing how your students actually use this technology.

Cate Denial

I really appreciate your comment! And yes, thinking about the access needs of students should be of the highest priority for all of us. I teach small classes where I'm able to offer my students a great deal of one-on-one time; I also teach classes designed according to UDL+ principles because you're right, students should not need to disclose or go through testing etc. to get the support they need. I can see how what I've written here may not come across as UDL+ friendly - but the truth is that we have supported students with a variety of learning needs for a long time, long before generative A.I. was on the scene. So far, I have not seen any application that can do something that cannot be done in an analog fashion, but again, I am wholly open to that changing over time.

What I have trouble settling for myself is the degree of human harm - not just in the classroom but far beyond it - that is part and parcel of some of these technologies. Generative A.I. is not, to me, worth traumatizing child workers, for example. I don't think it's acceptable to most people - but the scales with which we balance harm and benefit may tilt different ways for all of us, for a variety of thoughtful reasons. I'd like to see, in general, a bigger conversation about these harms.

Marc Watkins

I agree the harms are legitimate and should be thoughtfully discussed. I think, though, I much prefer talking about the general harms of technology and what those have done to learning and our world, rather than isolating generative AI alone. I tell my students that the screens they watch reels on, like the smartphones in their pockets, all come with costs that are borne largely by the Global South. It is also becoming clear that the use of screens in learning has impacted our attention spans. I use AI as a starting point to ask students to begin critically analyzing their relationships with tech, and how complicated those are now and were before AI.

Stephen Fitzpatrick

You might want to rethink some aspects of your position (not necessarily the writing piece, though that is coming ...) given the recent release of Deep Research models by most of the major companies. I think the quality issue is a red herring - these models will get better and better, which comes with its own issues. But for the initial foray into at least some aspects of research, the future is clear. If these models gain access to paywalled or harder-to-access resources, it will be impossible to ignore how much AI will impact at least some aspects of the research process.

https://fitzyhistory.substack.com/p/the-20-page-research-paper-in-20

Cate Denial

As I made clear in the piece, I am open to my position changing again and again as more information becomes available. Transcription, however, is an act that is deeply meaningful to me. I know many find it tedious, but it's how I read each source deeply and make connections to other pieces of evidence. It's how I do the thinking work that A.I. will never be able to do.

Stephen Fitzpatrick

I think everyone needs to do what works for them. I helped a student analyze some old handwritten documents in Catalan, which he simply could not decipher, by translating them using AI. It did an excellent job and allowed him to work with the text more directly, saving hours of work. My point is simply that as current professors retire and the new generation takes over, AI will likely assume a significant role in virtually every aspect of the process. I think declaring at this point that "AI will never be able to do it" simply may not hold, at least insofar as something like making connections to other pieces of evidence. But I totally understand your point - if it's deeply meaningful to you, that's what matters.

Cate Denial

Equally, I think saying that AI *will* be able to do it is an unprovable assertion.

Delia Lloyd

Agree with Steve below vis-à-vis Deep Research models, which I learned about from him! But I loved the idea of creating "workshops" in class where students can write so that they don't feel so stressed. Thanks for this thoughtful post.

Kate Koppy

I've done similar class activities in my writing and composition classes.

In addition to these, I've also steered students toward other AI-based tools that are better suited to supporting their work--Elicit, Semantic Scholar, Connected Papers, NotebookLM, and Grammarly's spelling and grammar checker modes. In conversations, I have learned that students reach for ChatGPT because it's the one with all the hype. Once they understand how it works, where it fails, and what else is out there, they are equipped to make better choices.

Justin

Thank you for sharing your perspective and also showing openness. Here via Mike