Image: Proof that the robot uprising won’t be a bang, but a slow, magical brain-drain. This AI-created image shows a scholar in his dusty library having his original thoughts systematically harvested by a complicated, rune-covered algorithm machine, leaving him staring blankly into the digital void. Isn’t progress wonderful?
Every generation of transformative technology produces its cohort of alarmed researchers warning that the new tool is rewiring civilization toward catastrophe. The printing press would make scholars lazy. Calculators would destroy mathematical intuition. Google, Nicholas Carr warned in his widely cited 2008 Atlantic essay, was making us all shallow. The pattern is so reliable it has a name: the “technology panic cycle.” The Engadget piece published April 15, 2026, dutifully summarizing a preprint by Grace Liu et al. titled “AI Assistance Reduces Persistence and Hurts Independent Performance,” is the latest iteration. It deserves to be taken seriously, examined carefully, and ultimately challenged on multiple structural and interpretive grounds.
Let me be clear at the outset: the researchers themselves are more careful than the Engadget coverage suggests. But the article’s framing — stacking this study atop a pile of similar headlines to construct a narrative of cognitive doom — commits several compounding errors of evidence, analogy, and scope. Each deserves unpacking.
Engadget: There’s yet another study about how bad AI is for our brains
Researchers found that using the technology helps at first, but “it comes at a heavy cognitive cost.”
A group of researchers from across the US and the UK have conducted a study on what AI does to our brains and the results are, in a word, grim. These results were published in a paper called “AI assistance reduces persistence and hurts independent performance” which kind of tells you everything you need to know.
“We find that AI assistance improves immediate performance, but it comes at a heavy cognitive cost,” the study declares. Researchers went on to state that just ten minutes of using AI made people dependent on the technology, which led to worsening performance and burnout once the tools were removed.
The study followed people who use AI for “reasoning-intensive” cognitive labor. This refers to stuff like writing, coding and brainstorming new ideas, which are some of the most common use cases.
I. The Methodological Ceiling: What the Study Actually Proves
The study recruited 354 U.S.-based participants to solve fraction problems, randomly assigning half to an AI-assisted condition using GPT-5 and then abruptly removing AI access before a final three-problem test. The design is clean, and the researchers deserve credit for a randomized controlled trial in a space dominated by correlational work. But the conclusions the Engadget article draws from it dramatically exceed what the methodology can support.
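To make the two-arm structure concrete, here is a minimal simulation sketch in Python. Nothing in it is the authors’ data: only the sample size and the three-problem post-test come from the study’s description, and the per-problem accuracy figures are invented purely for illustration.

```python
# Hypothetical sketch of the study's two-arm design: simulate an
# AI-assisted group and a control group, then compare unassisted
# performance on the three-problem post-test given after withdrawal.
import random
import statistics

random.seed(0)

N_PER_ARM = 177          # 354 participants, split evenly (per the paper)
POST_TEST_PROBLEMS = 3   # unassisted problems after the AI is removed

def simulate_post_test(p_correct: float) -> int:
    """Return how many post-test problems a simulated participant solves."""
    return sum(random.random() < p_correct for _ in range(POST_TEST_PROBLEMS))

# Per-problem accuracies are assumptions for illustration, not reported values.
control_scores = [simulate_post_test(0.70) for _ in range(N_PER_ARM)]
ai_arm_scores = [simulate_post_test(0.55) for _ in range(N_PER_ARM)]

print("control mean:", round(statistics.mean(control_scores), 2))
print("AI-assisted mean:", round(statistics.mean(ai_arm_scores), 2))
```

The point of the sketch is scope: everything the design can measure lives in those three unassisted problems, administered immediately after withdrawal.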
Consider the task selection. Fraction arithmetic is not “reasoning-intensive cognitive labor” in any meaningful professional sense. It is a closed, procedural skill with objectively correct answers. Measuring “cognitive cost” by testing fraction performance after ten minutes of AI assistance is roughly equivalent to measuring whether GPS navigation degrades your ability to memorize street maps — and then concluding that GPS threatens human spatial cognition. The conclusion may be technically true in a narrow sense while being strategically irrelevant to how human beings actually deploy tools in productive life.
The study’s own abstract acknowledges that “these effects emerge after only brief interactions with AI (~10 minutes)”, a finding the Engadget article presents as alarming. But consider what that framing obscures: it means the effect is shallow, not deep. Deep cognitive restructuring, the kind associated with genuine deskilling, requires sustained practice over extended periods. Demonstrating that ten minutes of AI assistance reduces performance on the immediately subsequent, AI-free version of the same task reveals something about short-term task-switching costs, not long-term cognitive erosion.
II. The Abrupt Withdrawal Problem: Confusing Dependency With Adaptation
The study’s most fundamental design flaw is one the Engadget piece never interrogates: participants had their AI access revoked mid-task, with no notice. In the paper’s own words, the assistant was “then removed without warning, and participants were asked to solve 3 additional fraction problems.”
This is not a test of cognitive capacity. It is a test of adaptation to sudden, unexpected tool removal — a condition that would impair performance on almost any augmented cognitive task. Remove a surgeon’s laparoscopic monitor mid-procedure, take away a pilot’s instruments in cruise, revoke a programmer’s IDE mid-debugging session, and performance will decline precipitously. That decline does not mean the tools were “bad for the brain.” It means that humans are adaptive systems that calibrate effort and strategy to available resources — exactly as they should.
The researchers conflate two entirely distinct phenomena: dependency, a pathological reliance on a tool that persists even when superior alternatives are available; and calibration, a rational reallocation of cognitive resources in tool-rich environments. Evolution and learning theory both predict that intelligent agents will offload tasks to reliable external resources when doing so increases net output. This is not a bug. It is the core logic of civilization itself.
III. The Historical Analogy the Article Refuses to Make
The article likens the study’s findings to the “boiling frog” effect, in which “sustained AI use erodes the motivation and persistence that drive long-term learning.” The metaphor has rhetorical power but virtually no historical support.
When calculators became universally available in American classrooms in the 1970s and 1980s, educators raised precisely the same objections. The National Council of Teachers of Mathematics eventually endorsed calculator use, and subsequent research demonstrated that students with calculator access did not become worse mathematicians — they became better ones, because they could tackle more complex, conceptually rich problems without being bottlenecked by arithmetic execution. The NAEP long-term trend data show no generational collapse in mathematical reasoning that correlates with calculator adoption.
When spell-check became standard, the panic was that writers would lose spelling ability. They largely did — and it did not matter, because spelling ability was never what prose quality depended on. The cognitive bandwidth freed by spell-check went into argument construction, voice, and revision.
The question is never “does this tool erode the raw underlying skill?” Tools almost always do. The question is whether the underlying skill remains valuable enough to preserve independently, and whether the cognitive resources freed by automation flow toward higher-order capabilities. On both counts, the Grace Liu et al. study offers no evidence whatsoever, because it tests neither long-term effects nor higher-order skill development.
IV. The Engadget Article’s Cumulative Error: Evidence Stacking Without Synthesis
The article cites several additional studies: one finding that AI increases fatigue among full-time workers, one finding that AI users actually work harder and longer than non-users, and others linking AI use in schools to poor social and intellectual development.
The problem is that these studies, presented as a converging indictment, actually contradict one another in important ways — and the article never notices. If AI causes workers to work “harder and longer,” the concern is overuse, not underperformance or cognitive atrophy. If students who rely on chatbots perform worse on tests, the appropriate policy response is better pedagogical integration, not blanket alarm — particularly when we know from decades of educational research that passive tool use produces worse learning outcomes than active, scaffolded engagement. The Grace Liu et al. study itself found that “people who used AI tools for hints and clarification had a much easier time once the chatbot was removed when compared to those who used the bot to essentially prompt the answers” — a finding that points toward design and usage pedagogy as the variable, not AI itself.
This is the article’s most significant journalistic failure: presenting a design problem as an ontological one. AI as currently deployed may, in some contexts, reduce the effortful struggle that produces durable learning. That is a finding about the design of current systems and the norms around their use in institutional contexts — not a finding about what AI is or what it inevitably does.
V. The Deeper Problem: A Failure to Distinguish Augmentation From Replacement
The entire cognitive-cost discourse rests on a model of human intelligence as a fixed-capacity substrate that can be depleted or degraded. This model is wrong, or at a minimum severely incomplete. Human cognition is not a single reservoir. It is a dynamic, distributed system that continuously recruits both internal and external resources — what philosophers Andy Clark and David Chalmers influentially called the “extended mind.”
When a professional researcher uses AI to synthesize literature, draft outlines, generate hypotheses, or check reasoning, they are not replacing their cognition. They are extending it in the same way that writing itself extended it millennia ago. Socrates, famously, worried that writing would destroy memory and make people seem wise without being so. He was partially right about the mechanism and entirely wrong about the consequence. Writing did degrade certain forms of oral memory. It also enabled science, law, theology, philosophy, and every other systematic human enterprise that depends on the accumulation and transmission of complex knowledge across generations.
The researchers themselves write that their “findings need not be cause for pessimism. Rather, they point toward a clear design imperative: AI systems should optimize for long-term human capability and autonomy.” That is a reasonable, actionable conclusion. It is conspicuously not the conclusion the Engadget article’s framing invites readers to reach.
VI. What Responsible Coverage Would Look Like
None of this is to say the Grace Liu et al. study is without value. It is, in fact, a useful contribution on a specific and bounded question: does unstructured, answer-providing AI assistance produce worse subsequent unassisted performance on procedural tasks in short-term controlled conditions? The answer appears to be yes, and that has implications for how AI tutoring tools should be designed — emphasizing hints, Socratic prompts, and scaffolding over direct answers.
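What would that design imperative look like in practice? Here is a minimal sketch, assuming an OpenAI-style chat-message format; the prompt wording and the build_messages helper are illustrative assumptions, not the study’s actual materials.

```python
# Illustrative hint-first tutoring configuration (an assumption, not the
# study's materials): the system prompt withholds final answers and
# steers the model toward hints, questions, and scaffolding instead.
SOCRATIC_TUTOR_PROMPT = """\
You are a math tutor. Never state the final answer to a problem.
Instead:
1. Ask what the student has tried so far.
2. Offer one hint at a time, starting with the most general.
3. Confirm or gently question each intermediate step the student proposes.
4. If the student is stuck on a prerequisite skill (e.g., finding a
   common denominator), work a different example of that sub-skill,
   then return to their problem.
"""

def build_messages(problem: str, student_msg: str) -> list[dict]:
    """Assemble a chat payload in the common role/content message format."""
    return [
        {"role": "system", "content": SOCRATIC_TUTOR_PROMPT},
        {"role": "user", "content": f"Problem: {problem}\n\n{student_msg}"},
    ]

# Example usage: the tutor is steered toward a hint, not the answer.
print(build_messages("1/2 + 1/3 = ?", "I'm not sure where to start."))
```

The design choice mirrors the study’s own contrast: participants who used the bot for hints and clarification fared far better after withdrawal than those who prompted it for answers.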
That is a finding about pedagogical design. It is not, as the Engadget headline implies, evidence that AI is “bad for our brains” in any meaningful general sense.
Responsible technology journalism would have noted that the study has not yet been peer-reviewed, that its task domain is narrow and procedural, that its effect window is ten minutes, and that its own authors frame their findings as a design challenge rather than a condemnation of AI assistance. It would have asked whether the finding generalizes beyond fraction arithmetic to the writing, analysis, and synthesis tasks that characterize professional AI use. It would have sought comment from cognitive scientists whose work on extended cognition or distributed intelligence offers a counterweight. It would have applied the same critical lens to the “boiling frog” metaphor that it applies to AI vendor claims.
Instead, it stacks studies. It leads with “grim.” It reaches for cultural panic as a substitute for analytical precision.
Conclusion
The Grace Liu et al. study is a preprint, not a verdict. Its findings are real within their constraints and deserve continued investigation in peer review, longitudinal designs, and broader task domains. But the leap from “ten minutes of fraction-solving AI assistance reduces persistence on subsequent unassisted fraction problems” to “AI is bad for our brains” is not journalism. It is technophobia in a lab coat.
The history of cognitive technology is the history of humans offloading mental tasks to external systems — clay tablets, printed books, punch cards, calculators, search engines — and then discovering that the freed cognitive bandwidth flows upward toward more complex, more creative, more distinctly human endeavors. There is no reason to assume AI will be the singular exception to that pattern. There is every reason to design AI tools that facilitate exactly that upward flow, and to hold both technologists and journalists to the standard of rigor that honest reckoning with transformative technology demands.
A Note on Research Methods and Accuracy
In recent years, some have voiced concern that artificial intelligence may distort facts or introduce inaccuracies into serious research. That criticism deserves acknowledgment. But AI has also become a powerful research instrument available to any dedicated scholar, capable of analyzing vast datasets, cross-referencing published records, and surfacing overlooked connections across sources. This work represents a collaboration between the author’s investigative inquiry, verified primary documentation, and the analytic capabilities of AI research tools. Here, AI was not used as a ghostwriter or a shortcut for scholarship, but as a disciplined research partner held to standards of rigor, accuracy, and transparency.
Every factual claim in this work has been subjected to active verification. Where AI-generated content was used as a starting point, it was tested against primary sources, peer-reviewed scholarship, and official institutional documentation. Where discrepancies were found (and they were found), corrections were made. The author has made every reasonable effort to ensure that quotations are accurately attributed, details of the study and its coverage are precisely rendered, and the claims being critiqued fairly represent the positions of those who made them.
That said, no work of this scope is immune to error, and the author has no interest in perpetuating inaccuracies in the service of an argument. If you are a reader, whether sympathetic, skeptical, or hostile to the conclusions drawn here, and you identify a factual error, a misattributed source, a misrepresented finding, or a claim that cannot be substantiated, you are warmly and genuinely invited to say so. Reach out. The goal of this work is not to win a debate but to get the facts right. Corrections offered in good faith will be received in the same spirit, and verified corrections will be incorporated into future revisions without hesitation.
Truth, after all, has nothing to fear from scrutiny—and neither does this work.