Psychology Today: Why AI Must Not Do Our Writing for Us
Summary of Gunderman’s Argument
Dr. Richard Gunderman, writing in Psychology Today, argues that students must not allow AI to write their essays, regardless of the technology’s ability to produce technically superior work.

Gunderman observes that since AI writing tools emerged, his students’ papers have become mechanically flawless—free of the grammar, spelling, and punctuation errors he once routinely corrected. Yet he finds this troubling rather than beneficial. Students increasingly treat writing assignments as boxes to check rather than opportunities for intellectual exploration.
Drawing on the French essayist Michel de Montaigne, Gunderman contends that writing is fundamentally an act of “essaying”—trying out ideas, examining questions from multiple angles, and discovering meaning through the process itself. Writing resembles conversation: an unrehearsed adventure whose value lies not merely in the finished product but in the journey of exploration.

At the heart of his argument is a philosophical claim about human nature. We are “witnessing creatures”—matter that has become aware of itself, capable of beholding and marveling at existence. This capacity for reflection constitutes our essential responsibility. Just as we cannot outsource living, we cannot outsource the thinking, feeling, and discovering that writing demands.
Gunderman concludes that students must write their own essays even when machines could earn them better grades or spare them difficulty. Words, he argues, are sacred instruments of self-formation. Through the struggle to find our own words, we discover new perspectives, develop compassion, and uncover purpose. To surrender this process to machines is to surrender a fundamental dimension of being human.
Dr. Richard Gunderman’s recent Psychology Today essay argues that students “must write their own essays” because AI outsources the sacred acts of thinking, feeling, and discovering. It’s an elegant philosophical position—and a fundamentally flawed one. The wholesale abandonment of new technologies represents not principled resistance but intellectual retreat—a reactionary impulse that echoes across five centuries of innovation. When Gutenberg’s press democratized the written word, critics warned it would corrupt scholarship and cheapen knowledge.
Desiderius Erasmus, despite using printing presses extensively, warned that “quick commercialization of the industry had a tendency to produce texts of poor quality” by “stupid, ignorant, raving, irreligious and seditious” printers who contaminated the book market. He feared “classical literature falling to contemporary publications” as printing democratized publishing.
Giorgio Merula and Gerolamo Squarzafico (1481) claimed that many printers were mostly “illiterate” and expressed concern “that printing could have negative effects on classical scholarship.”
Abbot Johannes Trithemius (1492) wrote In Praise of Scribes, arguing that the ease of printing would make monks “intellectually lazy” and that printed books were “less durable and aesthetically inferior to manuscripts.” He claimed hand-copying holy texts was a spiritual exercise that printing would degrade.
Scribes who had devoted lifetimes to copying manuscripts saw their sacred craft profaned by mechanical reproduction. Yet the printing press didn’t destroy learning; it ignited the Renaissance, the Reformation, and the Scientific Revolution. Every transformative tool—from the typewriter to the word processor to the search engine—has provoked the same fearful response: that something essentially human would be lost. And every time, humanity has instead expanded its capabilities while preserving what truly matters. Gunderman’s argument, however beautifully articulated, places him in a long tradition of intelligent people standing athwart progress, warning of a spiritual catastrophe that never arrives.
Gunderman commits a category error that technology critics have repeated for centuries: conflating the tool with the task. When Socrates warned against writing itself (a point Gunderman acknowledges), he feared students would mistake the written word for understanding. Yet writing became civilization’s greatest amplifier of thought, not its replacement. The printing press, calculators, search engines—each spawned identical anxieties. Each was absorbed into learning rather than destroying it.
The professor observes that his students’ technical writing quality has improved, then laments this as a problem. This is backwards. Grammar, spelling, and punctuation are mechanical barriers to expression, not its essence. When AI eliminates these friction points, students can focus on what actually matters: constructing arguments, synthesizing sources, and developing original perspectives. A surgeon who uses precision robotics isn’t outsourcing surgery; she’s extending her capabilities.
Consider the research process itself. A student investigating climate policy might spend weeks hunting for statistics, historical context, expert opinions, and counterarguments. An AI research assistant can surface these materials in minutes—not to replace critical thinking, but to enable it. The student still must evaluate sources, identify bias, construct coherent narratives, and reach defensible conclusions. The cognitive heavy lifting remains human; only the drudgery is automated.
Gunderman invokes Montaigne’s “essays” as trials of thought, intellectual adventures requiring personal struggle. But Montaigne wrote in an era when access to information was desperately scarce. Today’s students drown in information. The challenge isn’t finding material—it’s making sense of it. AI tools that organize, summarize, and identify patterns don’t circumvent thinking; they make rigorous thinking possible at scale.
The article’s most revealing admission is its title’s imperative: AI “must not” do our writing. This moralizes a pedagogical question. The real question isn’t whether students should struggle, but whether they should struggle productively. Spending hours proofreading comma splices produces no intellectual growth. Spending those same hours engaging with ideas—aided by tools that remove mechanical obstacles—produces genuine learning.
Educational research consistently demonstrates that expertise develops through deliberate practice with appropriate scaffolding. AI provides exactly this: scaffolding that can be gradually removed as competence grows. Condemning the scaffold because it makes early attempts easier misunderstands how skills develop.
Gunderman compares AI writing to outsourcing love letters—a reductio ad absurdum that reveals his argument’s weakness. Writing a love letter is the point; researching citations for an academic paper is not. Learning operates at the level of synthesis and argumentation, not bibliography formatting.
The professor is correct that we cannot outsource living. But he’s wrong to assume that efficient research constitutes outsourced thought. When a student uses AI to gather information, compare perspectives, and identify gaps in reasoning, then writes original analysis engaging that material—she has lived the intellectual adventure Montaigne described. She has simply traveled further in less time.
That’s not cheating. That’s progress.
The Path Forward: AI Literacy as Educational Imperative
UC San Diego Today: The Future of AI in K-12 Education
With AI technology, teachers will need to change learning priorities. Mastering mechanical proficiency will not be as important as it has been. Students may need to place greater emphasis on learning how to analyze and evaluate content for accuracy, since current AI tools are not reliable for fact-checking. They will also need strong skills in editing and refining what they create.
Greater use of AI in the classroom could encourage deeper learning rather than memorization.
~ Dr. Amy Eguchi is a Teaching Professor in the Department of Education Studies at UC San Diego. She holds an M.A. in Child Development from Pacific Oaks College, an Ed.M. from the Harvard Graduate School of Education, and a Ph.D. in Education from the University of Cambridge. She brings a wealth of experience as a teacher and leader in technologically enhanced education promoting students’ STEM+C learning, with particular focus on educational robotics, computer science (CS) education, and AI in K-12.
The solution isn’t prohibition—it’s pedagogy. Rather than treating AI as an adversary to be defeated by plagiarism detectors and surveillance, universities should embrace it as a skill to be mastered. Imagine a required freshman seminar in AI-assisted research: students would learn to craft precise prompts, critically evaluate AI-generated content for accuracy and bias, distinguish between productive augmentation and intellectual surrender, and understand when human judgment must override algorithmic convenience. Just as we teach information literacy to help students navigate the internet’s chaos, we must teach AI literacy to help them harness these tools without being diminished by them. The students who thrive in tomorrow’s economy won’t be those who avoided AI during their education—they’ll be those who learned to wield it as an extension of their own intellect, knowing precisely where the tool ends and their thinking must begin. Banning calculators didn’t produce better mathematicians; banning AI won’t produce better thinkers. Teaching wisdom will.
This article demonstrates the very thesis it defends: it was written collaboratively between a human author and Claude, an AI assistant developed by Anthropic. The AI provided rapid research synthesis, historical context, counterarguments, and initial drafts—tasks that might have consumed weeks through traditional methods. Yet the original ideas, research parameters, editorial judgment, argumentative direction, and final prose remained entirely human decisions. Every claim was evaluated, reorganized, challenged, and refined by the author. The result isn’t outsourced thinking but augmented thinking—a partnership in which AI handled the mechanical labor of information gathering while the human supplied the intellectual vision that transformed raw material into a coherent argument. Far from proving Gunderman’s fears, this process vindicated the opposing view: the author engaged more deeply with ideas, not less, precisely because AI cleared away the underbrush of tedious research. The witnessing, beholding, and marveling that Gunderman rightly prizes all occurred—they simply occurred more efficiently.