Crystal Lee (she/她)

I'm struggling to teach and work with gen AI

N.B.: before anyone comes for me, all of my blog posts are 0% AI from beginning to end, if for no other reason than that I only write these blog posts when the occasion strikes me, and only for fun. It's just an unimportant blog post on my unimportant website, so I honestly have low editorial standards beyond it being interesting for me. Take that for what you will...back to the content.

Like many of my colleagues, I am grappling with the enormous struggle of figuring out how to use (or not use) gen AI when I teach. The intro seminar I teach is heavy on reading and writing, with an emphasis on learning how to communicate. In the big lecture class I teach, there are design reflections and a data storytelling assignment. In other words, reading and writing are important components of both styles of classes I teach, for undergrads and grad students. Every educator I know has plenty to say about whether or how to use chatbots, what this means for higher ed, and the broader social implications of data centers and intellectual property theft.

There is a particular flavor of resistance to the use of gen AI that is about principled refusal, particularly in teaching writing. I find these positions to be incredibly well-founded, as it is very important to consider the effects of linguistic homogenization, intellectual property theft, carceral approaches to pedagogy, environmental impacts, labor problems, and beyond that come with the use of AI models. Using gen AI -- however rarely -- necessarily incurs these human costs. Refusing these technologies until they can center things like citational justice, particularly when it comes to the racialized and gendered dimensions of who gets credit for academic work, is a necessity.

There are enormous benefits to waiting instead of participating in the AI rush. As my colleague Justin Reich writes, no one really knows what AI literacy is, and the research on the topic is incredibly nascent. Teachers should investigate AI by leading with uncertainty and humility. I highly recommend this guide from Justin's lab, as it compiles a lot of the issues and potential opportunities of using AI in the classroom. I particularly like this point he makes:

A guidebook of tying knots will show you exactly how to tie the knots the correct way. A guidebook on AI in schools in 2025 can’t possibly do that because we don’t even know what the knots are, let alone how to tie them. What we can show you is how people are taking this new kind of rope and bending it around in interesting ways, some of which might prove sturdy and some of which might prove faulty. And we won’t know which is which for a long time.

I'm excited to see where his research goes, especially since his lab does a great job of putting research insights into practice. I have no doubt that it will provide a critical, empirical approach to thinking about the role of AI in pedagogy, and I impatiently await his findings.

I can't speak to Justin's stance on this, and this may still be an overgeneralization, but one point that a lot of the AI refusers make is that AI is not inevitable. We have the capacity to shape more just futures; AI is not something that just "happens." We can and should critique the extractive nature of these systems and teach students about the range of social issues that come with AI adoption. Writing is a form of power, and we should treat it as such; indeed, English composition -- particularly writing instruction in the Anglophone world -- replicates the "linguistic ghost of the British Empire." (I highly recommend that read, and I would especially send it to any colleague who believes that AI detectors work and that they are not biased against non-native English writers.)

All this to say: yes. All of these points reflect my purported disciplinary values; they are also the logical extension of my teaching on critical approaches to technology.

And yet.

I struggle with enormous cognitive dissonance with this and other issues. I know about the inhumane conditions for Amazon workers, the unethical labor practices and environmental destruction of fast fashion, and I am devastated even by the comparatively tame portrayal of animal cruelty in the Wicked movies (CW: lots of description, but no pictures). But I will be honest here: I have an Amazon Prime membership and I am eating berries from Whole Foods while writing this. I buy clothes from GAP and Everlane. I order sushi on weekends and, on occasion, eat at Chick-Fil-A.2

I also know there are a lot of very good (and bad) responses to these kinds of dissonances. There is no ethical consumption under capitalism; corporations are amplifying the myth of consumer responsibility when it comes to things like recycling and "green" anything; we can't be everything all the time. Insert meme about a medieval peasant trying to improve society. We do what we can, ideological purity be damned.

So too with gen AI. When I first started using it, I was relatively befuddled by what I could actually use it for, since the results from my prompts were hilariously bad. ("You just don't know how to use it!" Fine, probably true.) I often felt like I spent more time checking the work than I would have spent if I just looked something up myself, even for relatively straightforward tasks that AI is theoretically good at, like summarization. Asking AI simple questions about fields where I have substantial expertise was...generally okay, even if it was vague and superficial. It would get minor details wrong. It performed like a slightly-above-average undergraduate student who had sort of done the reading. I don't buy Sam Altman's claim that GPT-5 can provide PhD-level expertise, I find many (most?) use cases of gen AI to be bullshit, and even Goldman Sachs is skeptical about its benefits.

Then I read Mike Masnick's interesting article about how he uses AI to help with his work at Techdirt. He's emphatic to the would-be haters that his use of AI is very much not about generating text for his articles, even though he regularly churns out multiple articles a day.1 He essentially describes AI as a sounding board and editor: he asks if certain sentences make sense, and he's created a scorecard that he uses with Lex (his AI of choice) to evaluate each article. It generates 10-15 headlines for him to tweak. It shows him where the strongest and weakest points of the essay are and runs checks for grammar and readability. I didn't know that I could go so far as to have an AI evaluate my laborious writing -- my skin is too thin to be criticized by a computer, I think -- but generating titles seemed like a good start, even if most were unusable.
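
For the technically curious: here's a rough sketch of what that kind of scorecard could look like if you wired it up yourself. To be clear, this is not Masnick's actual setup (he uses Lex), and the rubric, model name, and prompt below are all placeholders I made up; it simply assumes the OpenAI Python SDK and an API key in your environment.

```python
# A minimal sketch of an "editor scorecard," loosely inspired by the workflow
# described above. NOT Masnick's actual setup; the rubric, model, and prompt
# are invented placeholders. Assumes the OpenAI Python SDK (openai>=1.0).
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

RUBRIC = (
    "Score the draft 1-5 on each of: clarity of argument, strength of evidence, "
    "readability, and grammar. Point to the single strongest and single weakest "
    "paragraph. Do not rewrite any sentences."
)

def score_draft(draft_text: str) -> str:
    """Ask the model to critique a draft against the rubric and propose headlines."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[
            {"role": "system", "content": "You are a blunt but constructive copy editor."},
            {"role": "user", "content": f"{RUBRIC}\n\nThen propose 10 candidate headlines.\n\nDRAFT:\n{draft_text}"},
        ],
    )
    return response.choices[0].message.content
```

The point, for me, is that the model is scoring and flagging rather than writing -- which is exactly the distinction Masnick insists on.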

After a stretch of feeling relatively frustrated and confused about what the fuss was all about, something began to click. I realized pretty early on that I could never use it for precise, correct answers, but it gave me a decent shot at collecting some possible directions on a certain topic. It wasn't a great sounding board for me since I felt constrained by the frameworks it used, but it was very good at adjusting the tone of something I was writing. Little things started cropping up as possible use cases, even though I wanted to be careful about relying too much on it and having my brain atrophy.

I'm still wary of that. I want to write like myself and think for myself. But I started using it bit by bit: I was looking for speakers for a conference on X topic with scholars who used Y methods and centered Z theories. I asked Chat-GPT for a list of potential speakers and went from there. I had written a peer review that was more inflammatory than I had intended. I asked Chat-GPT to help me adjust the tone. I was worried that a letter of recommendation I wrote might include biased, easy-to-misinterpret language. Chat-GPT identified a couple sentences and suggested some tweaks. I'm enormously grateful for that last example, because that writing affected someone other than me in a very meaningful way and the last thing I want to do is contribute to gendered or racialized approaches to describing people's work (e.g., "caring" or "articulate"). It was, of course, the day before the deadline and I really did not think that it merited asking someone to look it over on such short notice. Chat it is.

There's a part of me that wants to be the "pick me" girl. I use AI, but I'm not like others. Having it edit work is fine, but they're using it to grade and create instructional material. I'm not like that! I'm better than that!

Despite my general agreement with the AI refusers, I do also think that "AI use" is a pretty wide spectrum. I don't want to sanitize my own use and condemn others, especially when so many people in academia are working in conditions of extreme precarity and a heavy teaching load. Writing 20 letters of recommendation, even with a few weeks' notice, is an impossible task on top of a 4/4 load, research, and a family. Is the appropriateness of AI use always dependent on personal context? Who gets to decide what that is? What about the stakes of the writing -- is it okay if it's a data management plan? The alt text for figures in an article? What about the first draft of an abstract, given the ACM Digital Library's new AI summary feature?

I don't think anyone has figured out what and where the line is, and anyone who says they have is lying. I don't know that gen AI is inevitable, or even if it's going to be around after (and if) the bubble pops -- whatever that means. What I do know is that students are used to it and often do use it. That doesn't mean you have to give in, they say. You have agency.

Maybe I am throwing up my hands and just giving in -- even if I don't believe that it's inevitable in the future, I do believe that it feels inevitable right now. My students use Chat-GPT. I use Chat-GPT. I think there is tremendous value in learning how to write and learn without it -- a skill that I also hope to cultivate in my students. But even if I stopped using it (which is actually pretty likely for reasons I won't go into here), I genuinely do not think that I will be able to shape a classroom environment right now where there is a blanket refusal to use gen AI. The Refusing Gen AI group I mentioned earlier is clear that refusal is a disciplinary position, not a "head-in-the-sand" approach that necessarily implies the implementation of an AI ban. Conscientious, well-informed refusal is important. But if I'm being completely honest, I don't know what refusal implies other than a ban. There is a spectrum to using gen AI; I don't know what a spectrum for not using AI looks like.

My good friend and collaborator Jonathan wrote a paper about refusal that helps me think about this, and I enormously respect the folks behind the Feminist Manifest-no. I understand refusal as an intellectual framework, and Jonathan has a great blog post that describes different forms of collective refusal: class-action lawsuits, browsing with Tor, digital homesteading, shutting down online communities to force change. I like the blog post because it gives concrete examples of what refusal looks like. I'm not sure I know what the equivalent looks like for refusing gen AI.

Maybe I'm taking it too literally. I do hope that the work my students do in my critical tech classes shows how science -- and language -- are never neutral, and that we don't need to let big AI companies determine how we talk about (and how we shape) the future. The environment matters. Intellectual labor matters. For some, I'm sure that all of that rings hollow given how I actually ignore all of those things when I decide to ask Chat-GPT to help me write carefully worded but trivial emails. I don't know what, if anything, to do about that. I guess I do believe in and contribute to the AI inevitability discourse after all. Cognitive dissonance abounds.

I recently went to a workshop on "AI-resilient teaching," where other faculty shared how they administer oral exams or assign class essays about incredibly obscure, non-OCR'ed source material. For my writing-intensive class, I will likely start with blue books and then adapt Masnick's editing workflow so that students learn how to revise their work. I have never used (and probably never will use) student surveillance technologies, particularly gen AI detectors. (I guess that's an example of blanket refusal.) But in the absence of a complete ban in the classroom, I'm not sure I know where to go. What is or isn't acceptable AI use? Sigh. I'll let you know when I figure it out.


  1. Between Masnick and Cory Doctorow, who both put out substantial pieces of writing daily, I can barely keep up with reading both of them, much less produce at such volume. 

  2. The feeling of guilt and transgression makes it taste extra good.