ChatGPT Is a Smart Computer’s Impression of a Know-It-All

Before I got distracted, I meant to write a little about ChatGPT. I have been playing with it since it launched last week, and it is downright impressive in many circumstances. But something felt wrong, and I could not quite put my finger on it until I read a piece by Ian Bogost in The Atlantic:

Even pretending to fool the reader by passing off an AI copy as one’s own, like I did above, has become a tired trope, an expected turn in a too-long Twitter thread about the future of generative AI rather than a startling revelation about its capacities. On the one hand, yes, ChatGPT is capable of producing prose that looks convincing. But on the other hand, what it means to be convincing depends on context. The kind of prose you might find engaging and even startling in the context of a generative encounter with an AI suddenly seems just terrible in the context of a professional essay published in a magazine such as The Atlantic. And, as Warner’s comments clarify, the writing you might find persuasive as a teacher (or marketing manager or lawyer or journalist or whatever else) might have been so by virtue of position rather than meaning: The essay was extant and competent; the report was in your inbox on time; the newspaper article communicated apparent facts that you were able to accept or reject.

It is a little late here, and after reading the first three paragraphs of this story — generated by ChatGPT, obviously — I was worried Bogost had somehow lost his writerly edge. Context matters and reveals so much.