AI tools linked to reduced brain activity in new study
Scientists at MIT have found that students using large language models like ChatGPT show significantly less neural engagement than those working without AI assistance. The preliminary findings raise concerns about long-term cognitive effects as AI adoption grows.
Cover letters and classroom trends spark investigation
Nataliya Kosmyna, a research scientist at MIT's Media Lab, first noticed something amiss when reviewing internship applications. Many cover letters shared an unusual structure: lengthy, polished, and prone to abrupt, abstract connections to her work. She suspected AI tools were behind the similarities.
Around the same time, Kosmyna observed that students in her classes were struggling more than in previous years to retain information. The pattern led her to question whether reliance on AI might be reshaping cognitive processes.
Controlled experiment reveals stark differences
To test her hypothesis, Kosmyna and her team recruited 54 MIT students to write short essays on open-ended topics like loyalty and happiness. Participants were divided into three groups: one used ChatGPT, another relied on Google searches (with AI summaries disabled), and a third worked without any digital tools. Brainwave activity was monitored throughout the task.
The results, though not yet peer-reviewed, were striking. Students who wrote without AI showed widespread brain activation, which Kosmyna described as "on fire." The Google-only group displayed strong visual cortex activity, while the ChatGPT group exhibited up to 55% less neural engagement, particularly in areas tied to creativity and information processing.
"The brain didn't fall asleep, but there was much less activation in the areas corresponding to creativity and to processing information."
Nataliya Kosmyna, MIT Media Lab
Memory and ownership concerns emerge
After submitting their essays, students in the AI group struggled to recall their own work and reported feeling little ownership over it. Similar findings have emerged in other studies, suggesting that AI use may impair information retention. Separate research from the University of Pennsylvania identified a phenomenon called "cognitive surrender," where users accept AI outputs with minimal scrutiny, even overriding their own intuition.
Outside academia, real-world consequences are already apparent. A multinational study found that medical professionals who used AI to screen for colon cancer became less effective at detecting tumors without the tool after just three months.
Long-term risks and potential solutions
While short-term effects are becoming clearer, the long-term implications remain uncertain. In a follow-up to her initial study, Kosmyna asked students to write another essay four months later, this time without AI. Those who had previously used ChatGPT showed reduced neural connectivity compared to peers who switched from manual writing to AI, hinting at lasting cognitive impacts.
Computational neuroscientist Vivienne Ming, author of Robot Proof, warns that over-reliance on AI could accelerate cognitive decline. In her own research at the University of California, Berkeley, students who simply copied AI-generated predictions about oil prices showed minimal gamma wave activity, a marker of cognitive effort. Weak gamma waves have been linked to later-life cognitive decline in other studies.
"If that is a natural mode for people to interact with these systems-and these are smart kids-that's bad. Deep thinking is our superpower. If we don't use it, the long-term implications for cognitive health are pretty strong."
Vivienne Ming, Computational Neuroscientist
Strategies to mitigate cognitive risks
Despite the risks, experts argue that AI can be a valuable tool if used intentionally. Ming advocates for "hybrid intelligence," where humans and machines collaborate on complex tasks. She suggests learning subjects without AI first to build foundational knowledge, then using tools to refine understanding.
One practical technique is the "nemesis prompt," where users ask AI to critique their ideas, forcing them to defend and refine their reasoning. Another approach involves "productive friction": configuring AI to ask questions rather than provide answers, which Ming found increased engagement in her experiments.
Kosmyna emphasizes the importance of resisting cognitive shortcuts. "Our brains love shortcuts, but for long-term brain health, we need to keep challenging ourselves," she said. "Our minds, creativity, and cognitive health will benefit in the process."