“Brain rot,” Oxford’s Word of the Year in 2024, refers to both the mental degradation from consuming too much unchallenging or trivial content online and to the content itself. Even though “brain rot” has been in current use only since the early 2000s, the earliest known example is in Thoreau’s 1854 book Walden: “While England endeavors to cure the potato-rot, will not any endeavor to cure the brain-rot, which prevails so much more widely and fatally?”
The MIT Media Lab conducted a study to assess the cognitive cost of using large language models (LLMs) like ChatGPT and search engines versus relying solely on one's own brain. Students were divided into three groups—one using ChatGPT, one using search engines, and one using no electronic aids—to draft essays, with brain activity and cognitive engagement as the key metrics.
The study revealed that the "brain-only" group exhibited the strongest and most widespread brain connectivity, along with superior memory recall and deeper re-engagement with the essay subject. The search engine group showed intermediate engagement, while the ChatGPT group demonstrated the weakest overall brain activity and struggled significantly to recall content from essays they themselves had just written.
MIT research scientist Nataliya Kosmyna emphasized that although our brains naturally seek shortcuts, "friction," or challenge, is indispensable for genuine learning. She warned against "cognitive offloading" to tools, suggesting that excessive reliance on AI can bypass the mental effort crucial for deep understanding and skill development.
Teachers and professors are increasingly concerned that students' use of LLMs may weaken essential academic skills such as research, writing, reading comprehension, and critical thinking. A survey also indicated that about half of students worry about feeling less connected to teachers and being exposed to extreme views due to LLM use, prompting educators to explore varied teaching strategies.
One strategy aims to prevent students from using AI tools at all, though such bans are challenging to enforce. Educators taking this route push students to develop their own voice and skills by emphasizing the writing process: drafting in class with traditional methods, demonstrating edits, and incorporating classroom performance into grades, often alongside banning electronic devices.
A second strategy acknowledges students' perceived need for LLMs and permits their use, but strictly as a starting point for assignments. It typically requires students to submit marked-up first drafts, often generated or assisted by AI, and to discuss with teachers one-on-one how their thinking evolved from the AI-assisted content to their own refined work.
A third strategy prepares students for a future where LLMs are ubiquitous by teaching proper and ethical AI usage. Educators in this camp aim to equip students with the skills to leverage LLMs effectively, while also using AI-detection software and scrutinizing submissions for generic language or superficial analysis to ensure genuine student engagement and critical thinking.
A recent poll found that the share of high school and college students using AI for homework rose sharply, from 48% to 62%, between May and December 2025. The surge occurred even though a majority of students (66%) believe that technology negatively impacts critical thinking. The trend is further fueled by AI startups targeting students directly through social media influencers who promote generative AI for academic tasks.