A new study examining the effects of AI-assisted writing has found that reliance on large language models (LLMs) like ChatGPT may come at a cognitive cost, weakening neural engagement, memory recall, and a sense of ownership over one’s work.
The research, led by cognitive scientist Nataliya Kosmyna and posted as a preprint on arXiv, tracked 54 participants as they wrote essays using either an LLM, a search engine, or no tool at all. Over four months, electroencephalography (EEG) scans revealed striking differences in brain activity: those writing without any aids exhibited the strongest neural connectivity, while LLM users showed significantly weaker engagement.
“The more external support participants used, the less their brains activated,” Kosmyna noted. When LLM users were later asked to write without assistance, their brain activity remained subdued, suggesting a lingering under-engagement. Conversely, participants who switched from unaided writing to using an AI demonstrated heightened memory recall and visual processing—similar to search engine users—but still reported lower ownership of their work.
The findings raise concerns about the long-term educational impact of AI tools. Essays written with LLMs scored lower in both human and AI evaluations, and their authors struggled to recall what they had written. While AI offers efficiency, Kosmyna cautioned, “The convenience may mask a trade-off in deeper learning.”
The study has not yet been peer-reviewed, and the team acknowledges limitations, including its small sample size and focus on one AI model. Still, as classrooms and workplaces increasingly adopt LLMs, the research underscores the need to examine how these tools reshape cognition—and what might be lost when thinking is outsourced.
“This isn’t about calling AI ‘harmful,’” Kosmyna emphasized. “It’s about understanding how we interact with it—and ensuring we don’t unlearn the skills we need to thrive.”





