I was at a business lunch a few weeks ago. Investors, hedge fund managers, executives and the like. Naturally, AI made its way into the conversation. But instead of productivity or AI pilot projects, the discussion centered on how AI has already eroded the desire to draft emails properly (why draft an email when an AI can do it better?) and, more importantly, the know-how to draft one. “I feel like I can no longer spell or write well,” one of the investors confessed.
Unsurprisingly, it was the younger folks at the table, those under 40, who led the discussion about the slow erosion of basic writing skills and the fading desire to write anything at all. Which aligns with the data: Millennials and Gen Z have adopted AI more quickly.
What is AI doing to our brains? And our memories?
A recent MIT study is shining a flashing red light on those answers. My social feeds have been flooded with posts and comments about it. People are scared, and rightfully so.
It shows how AI use can weaken our cognitive capacity. Researchers split students into three groups: one wrote essays entirely on their own, one used a search engine, and one used an AI language model to draft and refine. They tracked the students’ brain activity, read the essays, and sat them down for interviews.
The results weren’t subtle. Students who leaned on AI remembered less and felt less connected to their work. Some didn’t even want to take ownership of large sections of “their” essays. Many couldn’t quote what they had “written” days later. The more the machine did, the less the brain did. The researchers called it cognitive debt: the mind’s muscle shrinks when we stop asking it to carry weight. We are exchanging our cognitive capacity for time.
So what happens when the machine can handle baseline thinking and middle-of-the-road synthesis — the writing, summarizing, strategic proposals — and you don’t have to remember how? It will train a generation for mental atrophy by design, and we will watch our mind’s edge fade away.
If we don’t fight our way up the cognitive ladder, we won’t be needed on it at all.
For some tech leaders, this is the inevitable reality. The goal of Artificial General Intelligence (AGI), the trillion-dollar race every AI company hopes to win, is literally to create systems just as good as the average human at nearly every task. We are moving toward a future where “AI will handle everything” — or so we’re told — and we’d best “start preparing” for that.
A few might welcome that future, but I think most people (myself included!) wouldn’t be thrilled about the prospect of not needing to — or knowing how to — think. From a health perspective, “use it or lose it” is not just a catchy phrase. It is a physiological reality. A society without thinking skills could face faster cognitive decline and more unstable democracies. We already struggle to spot fake news or think deeply about the intent behind polarizing content, and it hasn’t turned out well for us.
So what do we do? And have we been here before? Sort of.
When writing began to replace oral tradition in ancient Greece, Socrates, through Plato, worried that the written word would sap the human mind. People, he warned, would stop remembering things for themselves. They would appear wise, but the wisdom would not really be theirs.
When the printing press arrived in Europe in the fifteenth century, the fear intensified. Critics panicked that the technology would drown the world in shallow ideas. Renaissance moralists worried that too many books would produce superficial thinkers, people who skimmed everything but mastered nothing. They were not entirely wrong. Rote memorization did wither. But that loss made room for something bigger. The press rewired how human knowledge moved.
Ideas that once sat locked away in monasteries and royal courts spilled into streets, coffee houses, and lecture halls. Reading and writing stopped being solitary acts and became a shared network, a feedback loop of correspondence, criticism, and replication. Scientists, philosophers, pamphleteers did not just consume words. They sorted, tested, verified, debated, and built on them.
These new network effects gave rise to something civilization had never managed at scale: cumulative knowledge and the creation of new knowledge. The scientific method, peer review, standardized textbooks, and the industry we now call journalism were all born in the turbulence that cheap print unleashed. The written abundance demanded complexity, created intellectual friction, and rewarded those who could handle it.
But generative AI does far more than store, copy, or calculate. It simulates reasoning. For routine tasks — the bland essay, the generic memo, the safe policy draft — it is already better than the median human. And it will only get better. So the question becomes: how do we create cognitive friction for the age of supercomputers?
Raising the bar cannot just mean sprinkling AI on top of old tasks. Nor can it mean unsubscribing from AI altogether, as some protest. AI is a general-purpose technology that will become as common as computers and the internet, and those who learn to leverage it while strengthening their thinking will break away from those who do not (more on that below!). Raising the bar means designing work and learning environments that keep the mind in the loop and test what the machine cannot supply.
This will demand a major redesign of education systems (which I have been sounding the alarm on for over two years), covering both traditional education and opportunities for lifelong learning. And that does not mean putting AI in every assignment. Imagine a school where your economics class requires you to debate the opportunity cost of climate policies proposed in your civics class, based on the science you discussed in chemistry.
The reality is that we have no way around this. Even if all AI progress stopped today, the systems we already have are more proficient than the average person at most basic writing, summarizing, and synthesis tasks. If we do not act, we will see a great cognitive divide unfold.
And the majority could end up on the wrong side, outsourcing struggle and cognitive capacity, and accumulating cognitive debt.
The other side, the self-directed learners and the highly motivated, will break away. They will use LLMs as sparring partners, pushing them into unknowns. How does this idea break? What is missing? What happens when we test this assumption in the real world? They will run experiments AI alone cannot run, using live data, unpredictable variables, moral trade-offs, and physical constraints.
(A few weeks ago, I wrote about the advantage that highly motivated, self-directed learners will have in the AI age, and the complex reasons why motivation is not always evenly distributed.)
The cognitive divide inevitably becomes an economic divide, which deepens the power divides we already face.
So who should be leading this charge? We have heard plenty about AI from political leaders, from national security to boosting GDP. But when it comes to what AI is doing to our brains, our jobs, and economic mobility, it has been mostly crickets. The irony is that a generation that forgets how to think is itself a national and economic security crisis. Designing cognitive resilience has to become a national strategy.
Academia, civil society, and the press are doing their part, raising the alarm and investigating the risks, even as many of these same institutions fight for their own existence in *some* places. The question is whether leaders will back them up before the gap grows too wide to close.
Tech companies have a major role to play, too. Supporting society through this transition will be expensive. For starters, we could be taxing capital more heavily than labor to capture some of the economic gains AI companies reap.