In 2001, when I was in undergrad, I got my first laptop, a ten-pound Toshiba I inherited from my mother after she died. Sometimes I’d lug it around to late-night coffee shops to write papers.
One night I was sitting there, drinking chai and trying to look deep, typing trenchant observations about Beowulf or whatever, when an old guy at the next table eyed my laptop with disdain and asked if that thing helped me learn better. I blew him off, but I remember thinking: does it?
I’ve been reminded of that moment recently, as friends and algorithms alike have been recommending articles about ChatGPT, the free[1] AI chatbot launched late last year. I first heard about it months ago, because I teach writing for a living. It’s the panic du jour in my professional contexts.
Back in December, a week after ChatGPT’s debut, The Atlantic solemnly declared it the end of the college essay.[2] Ever since, journalists and academics—i.e., the kind of people who read The Atlantic—have been pumping out paranoid articles about it. Just this morning, checking my email, I saw another example, a piece from The Chronicle of Higher Education called “It’s Not Just Our Students—ChatGPT Is Coming for Faculty Writing.”[3]
Those articles are paywalled, but you don’t really need to read them. The irony here is that most of these takes are so predictable, they might as well have been written by a robot. Pretty much anytime you read about the humanities[4] in the popular media, it’s some article about their imminent death. (The New Yorker published another one the day I finished writing this; I haven’t read it yet.)
As an actual writing teacher, I’m not too worried about ChatGPT. Maybe that’s naive. I admit I don’t fully understand it, although that hasn’t stopped anyone else from writing about it. At one point in that Chronicle article, the author frets about the prospect of plagiarizing from AI; he seems to take it as a given that the chatbot has original ideas to plagiarize. If that’s the case, I think we have more pressing issues than how to cite them.
The aforementioned Atlantic article speculates about how quickly academia will adapt:
Going by my experience as a former Shakespeare professor, I figure it will take 10 years for academia to face this new reality: two years for the students to figure out the tech, three more years for the professors to recognize that students are using the tech, and then five years for university administrators to decide what, if anything, to do about it. Teachers are already some of the most overworked, underpaid people in the world. They are already dealing with a humanities in crisis. And now this. I feel for them.
That’s ridiculous. Less than three months after ChatGPT’s debut, students are already using it, and professors know; I talked to my class about it this week. My university issued a quick-response memo in January with best practices for ChatGPT. I’m guessing we’ll have policies on generative AI by the end of this academic year; a task force has been formed.[5]
Personally, I keep imagining two scenarios for how this all plays out. The optimistic one is that ChatGPT will soon fade out of the news cycle and quietly become institutionalized, so that in ten years generative AI is widely used in writing classes as a tool whose use is governed by policies and conventions, as has happened with lots of other disruptive technologies.[6] The other scenario is that ChatGPT is an early sign of an imminent AI paradigm shift that’s going to transform every aspect of our lives very soon, in which case the death of the college essay is the least of our worries.
I’ve been teaching college writing for nineteen years now, long enough that my career spans a series of major technological changes: social media,[7] YouTube, the explosion of online classes, smartphones, course management software, Zoom, and so on. Those all drastically changed how we read, write, and teach. This morning, I walked into class, turned on the projector, logged into the computer, and loaded the Canvas page for my class, a few YouTube videos for context, and the two true crime podcasts we were discussing. I clipped a wireless lapel mic on my shirt so I could post the recording for anyone who wasn’t there. Afterward, I held office hours on Zoom. Ten years ago, I didn’t use any of those technologies to teach.
Thinking about that fills me with dread, but I have to admit that my classes are better for it, on balance. And pretty much every class I teach reminds me that, with the (big) exception of the pandemic’s lingering mental health effects, the kids are alright. I’m guessing we’ll all figure out how to teach and learn just fine with generative AI. If anything, it seems likely to improve student writing.
That guy at the next table didn’t really want to know if technology made learning easier. He wanted to believe the way he learned was better. I get it—I worry about technology and its effects all the time, especially the very same algorithmic information ecosystem that keeps bombarding me with ChatGPT articles. But trying to resist it will only make me worse at my job.
Instead of tilting at windmills, I might try to incorporate ChatGPT into this class. One of the learning outcomes[8] is to “recognize the characteristics of the true crime genre, as well as its evolution in the context of historical and contemporary examples.” Maybe I’ll ask our new robot overlord to write a true crime story and we can discuss the result.
I haven’t been reading many books this month. I’ve spent a lot of it teaching In Cold Blood, a book I’ve written enough about already, and doing various other job-related reading. I also started Paul Auster’s new book on gun violence, Bloodbath Nation,[9] but that’ll have to wait for next time.
I’ve read a lot of good longform nonfiction, though. The algorithm has me pretty well figured out by now, so my phone shows me mostly news of minor Philly sports transactions and long, weird essays on apocalyptic topics. Usually I’ll skim the first few paragraphs, enough to say I read it, and move along, as one does. But occasionally I’ll be engrossed enough by the writing that I wind up reading the whole essay on my phone. When that happens, I’ll do the thing where you scroll back up to see the byline so you can remember the writer’s name.
I’ve done the byline check with a few articles this month. One was an Atlantic cover article by Megan Garber, “We’ve Lost the Plot,” which seems related to the ChatGPT/technology/paranoia conversation. Garber argues, essentially, that we already inhabit the metaverse—that our realities have merged with our entertainment. I was interested in what she writes about our evolving relationship to truth:
“Each invitation to be entertained reinforces an impulse: to seek diversion whenever possible, to avoid tedium at all costs, to privilege the dramatized version of events over the actual one.” (Emphasis mine.)
As someone who teaches two ostensibly truth-based subjects—creative nonfiction and true crime—I think that’s absolutely true. Creative nonfiction writers say things like “it’s my truth” with a straight face, and if you’re curious about lies in true crime, do I have the book for you.
Another was this Wired piece by Susana Ferreira about the effects digital nomads are having on parts of Portugal; another was this James Pogue piece from Vanity Fair on the “dissident fringe” in the rural West. I wasn’t familiar with Garber’s or Ferreira’s work. I’d read and liked Pogue’s account of attending a national conservatism conference last year. All of those seem like examples of a more essayistic kind of longform magazine writing that I like much better than the standard feature.
My favorite essay I read this month wasn’t recent. I wish I could remember how I stumbled on this nearly ten-year-old piece on class and writing by the poet and essayist Jaswinder Bolina. It’s so good I wanted to nail it to doors. I especially love this point …
One of the things that distinguishes the classes is that they speak and sound different from each other. The thousands of choices we make daily in our diction and syntax are almost entirely reflexive. We hardly notice them at all when we’re talking or writing to people who are like us. If we encounter only such people, if nobody comes along to challenge our language and its embedded frames of reference, the result is that ours becomes a private conversation continually reaffirming our existing perspectives.
… which I would argue is a bigger threat to academic writing than ChatGPT.
I also appreciated this recent New York Times piece, which seems to encapsulate a lot of what I’m writing about in this post. Having written a book about Truman Capote, one of the most successful American writers ever, I appreciated its point about the current generations of writers living in an aftermath:
The writing of our time is in constant, unrelenting transition. One mode of writing (print) is dying and another mode (digital) is being born. And in digital writing, whole schemes of meaning arise and then dissolve or rot or flame out … Each transition requires starting over, re-evaluating, submitting and, above all, failing. Just to survive, young writers today will have to live through multiple revisions of who they are and what they do. Within a few years, the modes of expression they’re learning now, the writerly identities they hunger to inhabit, won’t exist or won’t be recognizable.
I guess it’s appropriate that this month I’ve been listening to a lot of William Basinski, the ambient musician/composer best known for The Disintegration Loops, a document of decaying sounds recorded on a dead technology. Lately I’ve been into his latest record, from last year. It’s good writing music:
Speaking of societal decay, you may be wondering if I’m going to discuss the Super Bowl. No. No I’m not. It’s baseball season. So enjoy this supercut of awkward moments from baseball broadcasts, which Instagram recently suggested to me. All hail the algorithm.
[1] For now.
[2] As if that would be a bad thing?
[3] I hope it’s coming for their headline writer first.
[4] For way longer than I’d like to admit, well after I had a graduate degree in one of them, I didn’t really understand what “the humanities” meant. I’m not sure anybody does. But it’s basically the disciplines concerned with language and/or culture: art, literature, philosophy, music, theater, film, languages, etc.
[5] Stay tuned for my forthcoming novel, A Task Force Has Been Formed.
[6] This is more or less what’s happened with Wikipedia, another unprecedented aggregation of human knowledge, since I started teaching.
[7] Facebook arrived at my institution the same month I started teaching.
[8] And boy, do I have thoughts.