Kirkwood Adams
Lecturer, Undergraduate Writing Program
I teach first-year writing at Columbia, a class called University Writing that all of our undergraduates take. My research into generative A.I. began as a joke at the Undergraduate Writing Program’s holiday party in December 2022. Normally, my colleagues might have fired up our projector to screen a silly movie on the office wall, but that night we played around with ChatGPT instead: asking it to write Hallmark-style film scripts about Columbia’s Writing Center featuring cameos by Michel Foucault. And it did. We also asked it to write poetry and, of course, it did! But one noticeable feature of its poems was a total lack of enjambment. Always. No matter what we prompted, the system couldn’t break a line without an end-stop. Over that winter break, I couldn’t stop thinking about it.
I realize now that learning to observe, name, and describe such features of generative A.I. outputs has been my research agenda ever since that party, an agenda I replicate with my students. While I haven’t systematically asked A.I. to write poems recently, I imagine similar consistent inconsistencies persist in its poetic outputs, even if the latest and greatest models have been trained on data patterns that push their understanding of verse beyond Mother Goose.
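For colleagues who like to tinker, here is a minimal sketch, in Python, of how an observation like the enjambment one might be operationalized. The end-stop heuristic and the sample quatrain are my own illustration, not a tool from my course:

```python
# A minimal sketch of auditing one observable feature of a bot-poem:
# what fraction of its lines are enjambed (i.e., run on with no
# end-stop punctuation)? Heuristic and sample poem are illustrative.

END_STOPS = (".", ",", ";", ":", "!", "?")

def enjambment_ratio(poem: str) -> float:
    """Fraction of non-final lines lacking end-stop punctuation."""
    lines = [ln.rstrip() for ln in poem.strip().splitlines() if ln.strip()]
    if len(lines) < 2:
        return 0.0
    body = lines[:-1]  # the final line almost always closes with punctuation
    enjambed = sum(1 for ln in body if not ln.endswith(END_STOPS))
    return enjambed / len(body)

# A typical chatbot quatrain: every line tidily end-stopped.
bot_poem = """The Writing Center glows with light,
Where students come both day and night.
With Foucault's ghost we gently speak,
And find the words we always seek."""

print(f"Enjambment ratio: {enjambment_ratio(bot_poem):.2f}")  # prints 0.00
```

A ratio of zero across many prompts is exactly the kind of consistent inconsistency worth naming with students.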
Approach to teaching and learning in the age of AI
With apologies to all futurists and technochauvinists (see Meredith Broussard), I’ll offer a compulsory PSA: ChatGPT is a robot. We need only look back to the principles Lady Lovelace derived in the 19th century to understand the uses and limits of its machine intelligence.
Now, if you’ve also experimented with A.I. as I have, you may have noticed that these ‘bots offer homogeneous responses so banal they make you laugh out loud—until you remember just how much money the monopolizing tech firms that design chat robots are worth.
Worse still, the neural networks that create machine intelligence often learn reductively: boiling down complex concepts into whitewashed standards that reflect the priorities and preferences of the systems’ designers. This is not a new problem with algorithms, as Joy Buolamwini has long argued of computer vision systems, and as scores of other scholars have argued of the other digital products that keep becoming indispensable in our lives.
Teaching students to leverage AI and develop their AI literacy
So, as part of University Writing, for the past three semesters I’ve developed a novel approach to incorporating A.I. into a writing-intensive class—and it ain’t teaching prompt engineering. What I’ve learned from addressing generative A.I. in my first-year writing classroom is that our students, despite being young digital natives, weren’t born yesterday. They possess the shrewdness required to make critical observations of generative A.I.’s outputs and recognize the uses and limits of this ‘disruptive’ advancement. Can an LLM write a discussion post? Totally. Limericks? Boy howdy. An essay on a Homeric simile? You betcha! But when we read these outputs closely, it becomes clear that the bot-texts offer stock knowledge, often making observations, packaged as analysis, which any one of us might readily think of already. What these generative systems rarely generate is anything interesting. Unless we change our frame of reference. Asking students to study A.I. outputs has been a fruitful exercise in A.I. literacy because it pushes us to think about the special way A.I. ‘thinks.’
At the core of this approach to A.I. literacy is a dispositional shift: I teach students to study ChatGPT in and of itself, and as such to consider themselves ‘auditors’ of the system, not ‘users’ of it. Generative A.I. aspires to integrate frictionlessly into our writing, teaching, and studying lives, efficiently replacing any of our labor. But one size does not fit all. And auditing refuses a transactional, extrinsically motivated exchange between system and user.
Lessons learned from teaching and learning with AI
Developing the ability to judiciously use A.I. tools in the pursuit of scholarly work will require literacy about A.I. systems’ capabilities. All of us need to ask fundamental questions about how these systems work, why they work in such ways, and what the contributions of such a machine intelligence are good for. Throughout the U.W. courses I’ve taught, ChatGPT has proved a useful foil for the hardest skill to teach: metacognition. As they inquire into problems they care about through their coursework, students have seriously considered which tasks machine intelligence can replace or augment and which it can’t.
Advice for colleagues on leveraging AI for teaching and learning
I encourage my fellow faculty to resist both embracing and rejecting A.I. systems outright. Instead, I hope members of Columbia’s community will find ways to leverage their unique disciplinary expertise to study generative A.I. as a system in and of itself, in order to make informed decisions about the benefits machine intelligence may hold for our students. Designing experiential activities that dovetail with the learning goals of my class has helped me impart critical A.I. literacy while supporting the outcomes my class sought in the first place.
Currently, my colleague Maria Baker and I are pursuing several lines of research into generative A.I., including its image-making capabilities, its genre knowledge, its ability to give feedback, and much, much more. Come by 310 Philosophy Hall, our home base in the UWP, and chat with us. You’ll find us wondering, writing, re-writing, and re-wondering on this topic.
For fun, because technology *can* be cool, below is a GIF of me revising this very piece, visualizing how I was writing and thinking in real time: starting a fresh fifth draft from scratch above the half-baked fourth draft that still needed to develop into the attempt you now read. #writingisaniterative,recursive,&epigeneticprocess