Take Control of Your Words
In university circles, there is currently ubiquitous discussion about how our pedagogies need to adjust to the fact that students have ready access to various chatbots built on large language models. Some call them bullshit machines. Conferences in my field — rhetoric, writing, and discourse studies — are filled with panels on teaching toward, around, through, or against generative AI applications. I'm not at all convinced that every one of our writing courses now needs to focus in some way on teaching with or about generative AI tools. Refusal to do so can rest on many well-informed reasons, one of them being that what these tools produce is, at best, only mid. Use of generative AI tools in classrooms needs good reasons; in fact, we should only accept the best reasons.
Regardless of how generative AI tools show up in my class, my courses continue to foreground pragmatic and metalinguistic analysis of text. Such analysis is meant to help students build versatile practices of writing, revising, and rethinking, which, I hope, will speak to their various intellectual ambitions and many neurodivergent minds. Whatever range of tools students use in that process, I will not always know and can hardly control. I'm not a genAI cop. I also don't teach by admonishment. Rather, my somewhat selfish expectation is that at the end of the course students will submit text that is worth my time, energy, and thought to read. As every university instructor knows, reading final papers, really reading them, requires a substantial amount of end-of-term energy; we should do what we can to make that worthwhile. What does this kind of worthwhile text look like, and how can we create satisfying conditions for its production? That is the question.
On Monday, March 24, 2025, Jeffrey Goldberg of The Atlantic published a breaking story about a group chat that has since occupied much attention: "The Trump Administration Accidentally Texted Me Its War Plans." It’s a tensely unfolding piece of writing which, as I read it that day, I started to see as an illustration that reliance on chatbots will not do my students any argumentative favours. I did not need to bring chatbots into the classroom to demonstrate that. More on that in a moment.
Explicit instruction is a key tenet of writing pedagogy. As writing instructors, we try to be explicit about what kinds of genres we are teaching, what functions typical discourse elements fulfill, what syntactic ranges students can explore, and what action and agency are enabled by the writing students are doing. Course design often begins from questions like: what forms of writing do we ask students to explore, why and how, and which thoughts and intentions do we anticipate growing from it? As researchers, we also have internalized habits and thoughts (and externalized questions) that help us perceive what purposes someone else’s writing is pursuing. When we are skeptical of the honesty or intention of some work, we can pinpoint and discuss with students why it raises academic integrity concerns. The idea is to positively teach students how to develop habits of reading and writing so they can pursue their own work more confidently and better assess the work of others. The fact that more and more synthetic text is sloshing around us brings questions of trust and integrity in research writing even further into the foreground.
What do we as professional researchers take into consideration — what textual, paratextual, and contextual knowledge do we rely on — when we develop trust as we read someone else's publications? Conversely, what leads us to be suspicious about the quality of someone’s writing and the trustworthiness of their claims? In this era of genAI chatbots, students have an increased interest in knowing and demonstrating the textual and non-textual signals that indicate they are doing trustworthy intellectual work. As instructors, then, we should discuss what these signals are while guiding the legwork that stands behind their textual, visual, or aural emergence.
Back to the Jeffrey Goldberg piece.
When Goldberg published that first article which broke the group chat story, he was concerned about three key issues: (1) how to make readers believe the story and understand its weight, (2) how to convey which facts were certain and which were not entirely verified, and (3) how to protect himself and his magazine against legal challenges from those whom his story exposed. These areas of concern call for very nit-picky editing. In turn, they produce excellent material for equally detailed analysis. My students were prepared for that.
As the unexpected recipient of the group chat messages, the journalist himself is part of the narrative. Through his writing, Goldberg puts himself in the story’s middle — not only as the person who received these messages, but also as the person who was trying to make sense of them at the time, who checked his sense-making with colleagues, and who is expressing how certain or uncertain he is about what he is reporting. The piece has a lot of first-person pronouns, which is unusual for current affairs reporting: “I, however, knew two hours before,” “I received a connection request,” “I did not assume,” “I have met him in the past,” “I received notice,” “I consulted,” “I had very strong doubts,” “I could not believe,” and many more. The story follows a chronology not only of events unfolding but also of sense-making — how initial disbelief could be partially put aside in light of new evidence, how some suspicions were confirmed as world events happened, how uncertainties were resolved well enough for the story to become publishable, but not so fully that everything could be told.
As a result, this also becomes a story about a journalist’s thought processes. We hear about Goldberg’s questions, practices, and principles. For instance, he provides reasons for not quoting certain messages: “I will not quote from this update, or from other subsequent texts. The information contained in them, if they had been read by an adversary of the United States, could conceivably have been used to harm American military and intelligence personnel.” Only later, when members of the chat had publicly declared that no sensitive information had circulated — Tulsi Gabbard and John Ratcliffe said under oath “there was no classified material that was shared” and Pete Hegseth told reporters “nobody was texting war plans” — did The Atlantic take that as permission to make all the group chat messages available. Even a reader who doesn’t trust Goldberg’s reasoning cannot claim that signs of transparency are lacking.
We talk a lot about citation in my course — all aspects of it. When reading this piece, students noticed that Goldberg rarely uses paraphrase. He chooses direct quotation almost all the time. The exception is when he explains, as quoted above, his use of paraphrase for the sake of military security. It’s unusual to use verbatim quotes so exclusively. Quite a contrast to other journalistic pieces, and not at all similar to the patterns of citation students find in the research papers we analyze. Why is that? For one, the text messages are not difficult to quote in full, as they aren’t that long. For another, these high-ranking leaders’ word choices are jarring to a degree that paraphrase would not be able to convey. “Good Job Pete and your team!!”, exclaims Marco Rubio when bombs are falling on Sanaa.
Most importantly, full direct quotes are evidence: evidence of exactly what was being said, how it was said, and who precisely said it. In the process, readers can imagine themselves in Goldberg’s shoes, trying to discern what is going on from the textual clues available. Lastly, the many direct quotes help maintain the distinction between Goldberg, the skeptical recipient and careful sense-maker in the middle of this narrative, and the talkers of the “Houthi PC small group,” who unquestioningly take instruction from Stephen Miller and cheer disturbingly while, elsewhere in the world, about 53 people are being killed.
It was also obvious to students that when Goldberg names the people he quotes, he does so in particularly modified ways. Which modifiers are attached to the naming of a cited author, and why, is a research interest of mine. Goldberg’s choices of modifiers communicate the uncertainty that remained at the time, and they aim to protect the magazine from legal challenge. The added phrases have a dramatic effect, too. They become introductions of characters in a strange play. There is “a user identified as Michael Waltz” who may be “the actual Michael Waltz” but could also be someone “masquerading as Waltz.” Once Goldberg was joined to the chat, a message “from ‘Michael Waltz,’ read as follows.” Shortly after, “a person identified only as ‘MAR’” responds and we are reminded that “the secretary of state is Marco Antonio Rubio.” Next, “a Signal user identified as ‘JD Vance’” enters the scene, and then another “‘TG’ (presumably Tulsi Gabbard, the director of national intelligence, or someone masquerading as her).” A “user called ‘Pete Hegseth’” writes something, followed by “‘John Ratcliffe’” in quotation marks, along with “someone identified only as ‘S M’” which Goldberg “took to stand for Stephen Miller.”
I hope it became clear to my students how carefully we need to read when we turn to other people’s published writing with the aim of relying on their research, analysis, and evidence. Likewise, my students’ own research writing and its detailed choices are not inconsequential — I want them to take care and control of these choices through several rounds of feedback and revision. And finally, I want students to know that I read their work as attentively as I read Goldberg’s article. It pays to edit every little word. All the words matter when one tries to show the evidence on which one’s argument builds while also expressing the uncertainty that nevertheless remains.