When ChatGPT came out, many people deemed it a perfect plagiarism tool. “AI seems almost built for cheating,” Ethan Mollick, an A.I. commentator, wrote in his book “Co-Intelligence.” (He later predicted a “homework apocalypse.”) An agricultural-science professor at Texas A&M failed all his students when he became convinced that the whole class had used A.I. to write their assignments. (It turned out that his method of detection—asking ChatGPT whether it had generated the papers in question—was unreliable, so he changed the grades.) A columnist in The Spectator asked, “How can you send students home with essay assignments when, between puffs of quasi-legal weed, they can tell their laptop: ‘Hey, ChatGPT, write a good 1,000-word paper comparing the themes of Fleabag and Macbeth’—and two seconds later, voila?”
But “voilà” was not the word for whatever ChatGPT was doing for Chris. He was not outsourcing his exam to ChatGPT; he rarely made use of the new text or revisions that the chatbot provided. He also didn’t seem to be streamlining or speeding up his writing process. If I had been Chris’s professor, I would have wanted him to disclose his use of the tool, but I don’t think I would have considered it cheating. So what was it?
I asked Chris to explain what he thought he was doing. “You could say that while my own role is a sort of orchestrator and guide, ChatGPT’s role is mainly the execution and supporting of the structure of my work,” he told me. This made sense in theory, but it didn’t seem to capture the chaotic transcript I had read. Chris then rattled off several additional contributions that ChatGPT had made to his process. He was struggling to explain how the tool had actually helped him; he seemed to lack the language to describe this new style of collaboration.
For the writers Pigg studied and the students I interviewed for this article, ChatGPT was not so much a perfect plagiarism tool as a sounding board. The chatbot couldn’t produce large sections of usable text, but it could explore ideas, sharpen existing prose, or provide rough text for the student to polish. It allowed writers to play with their own words and ideas. In some cases, these interactions with ChatGPT seem almost parasocial. Chris told me that Chat—his nickname for ChatGPT—was a “good conversation partner.” Another student, who was profiled in a recent paper on the subject, nicknamed the chatbot Lisa and described “her” as “a partner and even a friend.” ChatGPT raises difficult practical issues about originality and plagiarism. But the binary question “Is it cheating?” hides the possibility that something new and inventive might be going on here.
Writing is hard. It requires us to use multiple parts of the brain in an improbable symphony of high-strain effort. Our hippocampus summons relevant facts; the prefrontal cortex tries to organize them. A brain region known as Broca’s area helps us to narrate in a familiar inner voice; our verbal working memory stores and manipulates the narration as we transfer it to the page. Meanwhile, our brain recruits our spatial working memory, which evolved to track our location in physical space, to orient our words within a whole. (In the lab, if you ask a person to trace a pattern with her free hand—a standard method to exhaust spatial working memory—her ability to structure an essay diminishes.)
writing programs such as Scrivener, which will organize research materials and partition projects into numerous pieces.
At first, I struggled to understand why anyone would want to write this way. My dialogue with ChatGPT was frustratingly meandering, as though I were excavating an essay instead of crafting one. But, when I thought about the psychological experience of writing, I began to see the value of the tool. ChatGPT was not generating professional prose all at once, but it was providing starting points: interesting research ideas to explore; mediocre paragraphs that might, with sufficient editing, become usable. For all its inefficiencies, this indirect approach did feel easier than staring at a blank page; “talking” to the chatbot about the article was more fun than toiling in quiet isolation. In the long run, I wasn’t saving time: I still needed to look up facts and write sentences in my own voice. But my exchanges seemed to reduce the maximum mental effort demanded of me.
Old-fashioned writing requires bursts of focus that remind me of the sharp spikes of an electrocardiogram. Working with ChatGPT mellowed the experience, rounding those spikes into the smooth curves of a sine wave.
It’s possible to explain my experience with ChatGPT in cognitive terms. If writing requires a person to store information using multiple types of working memory at the same time, then back-and-forth conversations with ChatGPT may provide moments of respite, by temporarily offloading some of this information. (A more complete break—for example, scrolling through one’s phone or checking one’s e-mail—would be much more disruptive.) So even seemingly unproductive interactions might provide the subtle benefit of increasing your over-all writing stamina. Collaborating with A.I. can also offer you a high-tech “shitty first draft,” allowing you to spend more time editing bad text and less time trying to craft good text from scratch. ChatGPT is not so much writing for you as generating a mental state that helps you produce better writing.
M. Knowles, a researcher who studies human-A.I. interactions, recently wrote in the journal Computers and Composition. “These are both meaningful categories that should not be discarded, but they are insufficient for discussing how writers use GenAI in practice.”
Knowles describes the collaboration between writers and A.I. as “rhetorical load sharing.” A.I. isn’t writing on our behalf, but neither is it merely supporting us while we write from scratch; it sits somewhere in between. In this way, it is both on the spectrum of writing hacks and rituals and also, in some sense, beyond it. This helps to explain our discomfort with the technology. We’re used to writers moving to a quiet location or using a special pen to help get their creative juices flowing. We’re not yet used to the idea that they might chat with a computer program to release cognitive strain, or ask the program for a rough draft to help generate mental momentum.
When I asked a few professors about A.I.-assisted writing, I was met with mixed feelings. One instructor told me that easy writing assignments, such as short response paragraphs that nudge students to complete their reading, are easily “GPT’d,” and would likely need to be eliminated. But, for longer and more complex assignments, such as Chris’s graduate-school paper on perspectivism, teachers seem to be discovering, to their relief, that “load sharing” with ChatGPT—though it is a new and unfamiliar technique—still requires students to think carefully and write clearly. It might make the process of completing an assignment feel less daunting, but it’s not a shortcut to receiving a higher grade.