More on AI in the Writing Classroom (Fall 2025 Experiment)

by Emily Pitts Donahoe on Substack

Back in July, I wrote about how I planned to handle AI in my classroom this fall, and in September, I shared some of my AI-related materials. If you want full details about what I’m doing, feel free to explore those posts, as well as the “Use of Generative AI” document I gave to students this semester. But here’s the short version:

At the beginning of the semester, I asked students to make a public commitment to one of two tracks in the course, AI-free or AI-friendly. (Language for the latter track was inspired by Noël Ingram.) Students on the AI-free track commit to avoiding intentional uses of generative AI for writing support. Students on the AI-friendly track are allowed to use AI in limited ways, which I specify, for most assignments. They are also required to disclose their use of AI—sharing a summary of how they used it, along with their chatlogs—and write a short reflection on what they think they gained or lost from the process.

I went into the semester reasonably confident that this would be a good system, but I won’t lie: the first two weeks shook this confidence considerably. More students than I expected (about 75%) chose the AI-friendly track. This didn’t seem to align with the orientations of my previous classes, in which many students were ambivalent about AI. Moreover, students expressed a lot of enthusiasm about AI’s potential to assist them in the learning process—enthusiasm I thought was unwarranted. The kinds of things they said about this technology in our conversations made me worry that they would have a hard time distinguishing between what helped them generate a polished paper and what helped them actually learn to write.

At this point, however, I’m happy to report that my fears were mostly unfounded. I’ve had to have a few uncomfortable conversations about AI misuse, but for the most part, I’m finding that students are still pretty ambivalent about AI when they feel well-supported in their learning. In fact, many of the students on the AI-friendly track appear not to be using AI at all, and most are not using it extensively. Here’s what I think is helping:¹

We work up to the longer papers. My class begins with a series of Rhetorical Analysis Exercises, which are more like worksheets: students answer questions about the things they read. I think tackling one question at a time, rather than facing a big, blank page, encouraged students to work on their own. They didn’t get in the habit of using AI from day one.

We do lots of in-class writing and have plenty of opportunities for revision. Hard to misuse AI when you have focused work time during which the teacher is looking over your shoulder. Throughout this time, I walk around and check in with each student individually to address any concerns. Students also know that if they don’t do so well on the first attempt, they can keep working up until they submit final portfolios at the end of the semester.

There are clear guidelines and suggestions for AI use. I’ve provided students with a list of ways I think it might be acceptable to use AI (getting feedback on your draft, brainstorming potential counterarguments to your position, etc.). Now that they’ve internalized this guidance, they do tend to use AI in these ways—lowering the chances that they’ll use it to generate whole essays or paragraphs.

Students know I can see their Google Doc version history. Ok, so this is the one “assessment security” measure I permit myself. Let me caveat this by saying that if you’re relying on version history to “catch” AI use, you’re going to make a lot of damaging false accusations, particularly of people whose writing process may differ from the norm. (More on that here, and in the conversation Sarah and I had on The Grading Podcast earlier this month.)

That said: if something seems off to me and a student hasn’t provided their chatlog or any information about using AI on the assignment, I do dive into their version history to see how the piece came together. This allows me to say things like, “I notice that you’ve copied and pasted some material that looks AI-generated to me. Can we review your chatlogs together to make sure AI isn’t inhibiting your learning?” Nine times out of ten, when pressed, students default to “I’ll just redo the assignment!” and we don’t even get to the chatlogs. I don’t really like doing this. But I think it helps that…

We talk about AI in non-punitive ways. Half the reason I want to see how students are using AI, truly, is because I am genuinely curious about how it might be affecting their writing process and their learning. I am just as concerned that AI will give them bad writing advice as I am that they will use it to cheat. So, that’s what I tell them. And we’ve discussed, as a class, a few occasions on which AI kind of led a student astray in their work. I think this really helps them feel more comfortable with being transparent about their use.

Students know I know their voices, and we have lots of face-to-face conversations. We had a brief in-class discussion about AI the other day, and one student observed in passing that I, as the teacher, could tell when they used AI because it didn’t sound like them. I try to convey that I’m paying close attention to this by crafting feedback that includes phrases like, “I’ve noticed that your writing tends to ___” or “One interesting thing about your writing style is ___.”

I also try to have individual conversations about AI mis- or overuse in person as much as possible—usually before or after class, during work time, or, when necessary, during office hours. Students know that I will notice when they use AI (or at least when they use it in lazy ways). They also know that they will likely have to talk to me about it face-to-face at some point. Under these conditions, I think it’s rare that a student will brazenly submit fully AI-generated work.

All that said, I have run into a few obstacles that are worth noting. Here’s what’s not working so well:

Students need a lot of reminders and reinforcement. In the first half of the semester, the handful of students who were using AI seemed to struggle to remember the rules and guidelines we have in place: you can only use it in specific ways and at specific times, you must share your chatlogs, you have to fill out the AI reflection section of your assignment, etc. Students weren’t, as far as I could tell, trying to circumvent the rules; they were genuinely confused, despite what I thought was clear guidance. I suspect this is because they were experiencing cognitive overload, trying to manage several different classes (each with its own AI policy) while also adjusting to college life. Things are better now, but getting to this point was a little rocky.

Students don’t use AI particularly well, even with guidance. I gave students specific copy-and-paste prompt language for the first assignment. After that, I provided only suggestions about ways students might use AI, not step-by-step instructions or particular prompts. This works fine for some students; others have seemed confused about what to do with these more general suggestions. Or it becomes clear to me, when looking at their chatlogs, that they don’t quite know how to get the most out of AI assistance. For example, I’ll occasionally see students entering prompts that read like search-engine queries, even when they aren’t using AI for search-like tasks. Others have prompted it in ways too vague to be very useful.

All this presents a problem for me because while I want them to be able to explore AI if they would like to, I am not particularly enthusiastic about AI assistance myself, and I don’t want to spend class time teaching it. I was hoping this would be a good opportunity for some self-directed learning, but I’m having only limited success in facilitating it.

Some students have been reluctant to share their chatlogs. In the past few weeks, some of my students forgot, or “forgot,” to log in to whatever AI platform they’re using, and therefore weren’t able to save and share their chatlogs with me. In these cases, I ask students to explain, in detail, how they used AI. Some have offered perfectly clear and satisfactory explanations, which have led to fruitful conversations. Others have been vague about their use. When that happens, I simply explain that I don’t have a good sense, from our conversation, of whether or not AI has impeded their learning, and ask them to redo the work. Almost everyone has reacted to this request with more gratitude than annoyance. But monitoring their AI use still takes up more time and brain space than I would like.

Student reflection on AI use is still not as robust as I would wish. Again, I was hoping to engineer this primarily through carefully crafted policy and reflection questions, but it’s clear to me now that some students need a lot more support to reflect effectively. They need sample reflections, modeling, and lots of class discussion in which they can hear and consider other perspectives and experiences. As I noted above, however, I’m reluctant to spend too much time on AI in class, since I don’t think it’s very important for developing writers and most students don’t want to use it extensively anyway.

All in all, I think this is working better than my previous system in that student use of AI is still low, it’s more transparent, and it’s somewhat more intentional. But there are still some things I’m not entirely happy with.

How are things going with AI in your classes? If you’re permitting AI use, how are you balancing the desire for students to develop critical AI literacy with the need to help them build the knowledge and skills essential to your discipline? I’d love to hear from you in the comments.

