Academia has failed in many respects. It has become too expensive, too ideological, and, frankly, it produces too much useless or misleading research. Perhaps most egregiously, academia has even failed at its most basic task: to teach students to write and communicate effectively. Higher education encourages students to favor jargon-heavy, needlessly complex prose over clear language. The consequence is a particular type of bad writing described by Steven Pinker as “highfalutin gobbledygook.”
Pinker and other prominent intellectuals have long griped about this bad writing problem in academia. Even Albert Einstein is often credited with championing clear writing over the needlessly complex: “If you cannot explain it simply, you do not understand it well enough.” Nevertheless, academia is so accustomed to highfalutin gobbledygook that it is doubtful anything short of a miracle could ever fix the problem.
Fortunately, we live in the era of miracles—technological miracles, at any rate. Unless there is some genuine value to needlessly complex writing in academia, it seems inevitable that ChatGPT and other artificial intelligence (AI) tools will eliminate incomprehensible academic papers. This will not only save students from pumping out incoherent slop in the name of higher learning; it may also help academia repair its terrible reputation with the public.
In the mid-2010s, I worked as an editor for FindLaw. The writers under my purview were all lawyers, which generally meant that they were accomplished communicators. But no matter their competence level, I insisted that they periodically pass their writing through the Hemingway Editor before submitting. This simple web-based application provides a readability score (e.g., “Grade 6”) and highlights in red any sentence that is overly complex. If a writer finds that his prose is over Grade 8 and has more than a few red-highlighted sentences, then generally it is too complex for lay readers and needs editing.
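The Hemingway Editor's exact scoring method is proprietary, but the kind of "Grade N" readability score it reports can be illustrated with the classic Flesch-Kincaid grade-level formula. The sketch below is a minimal, illustrative Python implementation (with a crude syllable estimator), not the app's actual algorithm:

```python
import re

def count_syllables(word: str) -> int:
    """Rough syllable estimate: count runs of vowels, minimum of one."""
    vowel_groups = re.findall(r"[aeiouy]+", word.lower())
    return max(1, len(vowel_groups))

def flesch_kincaid_grade(text: str) -> float:
    """Approximate U.S. grade level of a passage (Flesch-Kincaid formula)."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    # Standard Flesch-Kincaid grade-level coefficients.
    return (0.39 * len(words) / len(sentences)
            + 11.8 * syllables / len(words)
            - 15.59)
```

A short, plain sentence like "The cat sat on the mat." scores around an early-elementary grade, while a single sentence of stacked polysyllabic jargon scores far beyond Grade 12, which is the basic signal an editor acts on when flagging prose as too complex for lay readers.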
The Hemingway Editor was early proof to me, as an editor handling complex content, that technology could solve the problem of incomprehensible academic or technical writing. The only shortcoming with the Hemingway tool is that it merely shines a spotlight on bad writing; the writer still needs to possess the skill and the requisite understanding of the material to wrangle the prose into something that is both reader friendly and actually coherent.
ChatGPT largely does away with this shortcoming. If one lacks the skill to rewrite complex prose into something readable, the AI will do the rewriting for him. And if one lacks the understanding necessary to craft a coherent final product, again, ChatGPT can serve as a powerful assistant. Notably, even the old-school Hemingway Editor has recently stepped up its abilities: the latest version of the app includes a “Hemingway Plus” feature that can rewrite overly complex sentences.
The AI revolution promises to reshape the world. Many leaders in the field, including Brad Smith, the president of Microsoft, have compared AI to the invention of the printing press. According to Smith, “It’s fundamentally an invention that can help us all do research, learn more, communicate more, sift through data better, and its uses are almost ubiquitous.”
This is the correct way for writers and academics to think about AI tools like ChatGPT: They are as revolutionary as the printing press. Any communicator who fails to adopt this technology will be left behind. In this context, being “left behind” means, for instance, being the only student in class who hands in a paper with obnoxiously long sentences, an incoherent thesis, no transition between thoughts, and an overload of needless jargon.
Of course, there is pushback to the use of AI tools for writing. This is a tired story, however. To jump ahead: technology wins in the end. It is already happening. As The Wall Street Journal reported in November of 2023, “Yes, [teachers are] worried about students using ChatGPT to cheat or cut corners. But that is tempered by hopes that it can help them learn faster. And many teachers say that students should be encouraged to use bots because employers will expect them to have learned that skill.”
On another front, there are those who stand by the use of highfalutin gobbledygook. These apologists for needlessly complex writing contend that academics need to use an oblique style in order to show in-group affiliation with other academics and to exclude the uninitiated. According to writing consultant Deborah S. Bosley, the “intended audience [for academics] is always their peers. That’s who they have to impress to get tenure.” Bosley notes that this attitude is even more entrenched in France than in the United States: “I gave a presentation in France [on accessible language] and academics there flat out told me that academics shouldn’t write to express, they should write to impress.”
AI tools will not change these cultural attitudes overnight. But it seems inevitable that the culture will change. Now that anyone, regardless of their education level or academic credentials, can generate pompous, jargon-laden prose using a simple prompt, the allure of this type of writing will almost certainly lose its exclusionary appeal. Additionally, even the most obscenely oblique writing can now be translated effortlessly into clear language using ChatGPT. If a student or academic has nothing meaningful to say, he can no longer hide behind indecipherable prose.
This may not bode well for departments that subsist primarily on the art of playing obscure language games (such as the easily mocked “studies” departments—gender studies, queer studies, etc.). However, moving beyond needlessly complex writing would almost certainly help academia improve its reputation with the public.
According to Gallup, the public’s confidence in higher education has dropped off a cliff: In 2015, 57% of the public had “a great deal” of confidence in higher education; in 2023, that number was down to 36%. Trust in science is also declining. The trust problem is multifaceted. Universities are now seen as untrustworthy and overbearingly ideological by anyone whose viewpoints conflict with social justice orthodoxy. And it does not help that higher education has become outrageously overpriced. However, failures of communication are certainly part of the problem as well. Research professor Paul M. Sutter argues as much. In order for scientists to regain the trust of the public, he writes, “It’s crucial that scientists find ways to communicate more effectively and directly with the public, so the public can have access to the minds, and hearts, of scientists. In other words, they need to see scientists as people whom they can empathize with and learn to trust.”
The fact is that effective communication is hard. Clear writing is hard. For the typical undergraduate—and no doubt the typical academic scholar or scientist—it is easier to string together vague buzzwords in meandering sentences than it is to say something clearly and concisely. As Bosley summarizes, “It’s easy to be complex, it’s harder to be simple.”
This is why we develop technology-based tools in the first place: to make hard things easy. At this point in history, any writer who continues to churn out needlessly complex writing is doing so willfully. It should no longer be tolerated by teachers, journals, or the reading public. Academic disciplines that fail to adapt will become increasingly irrelevant. Such is the reality of technological advancement.
Peter Clarke is a lawyer and a freelance journalist in San Francisco. He can be found on X @HeyPeterClarke