Long before any end-of-the-world scenario, Artificial Intelligence poses another grave threat to humanity: it risks boring us to death. This article examines how the proliferation of AI-generated content is degrading quality, making professional work more tedious, and potentially diminishing human cognitive abilities, with effects felt across sectors from law to academic research.
While global security agencies such as MI5 and leading AI firms such as Anthropic grapple with the catastrophic risks of rogue AI systems, including preventing the release of dangerous models like Claude Mythos, the article identifies an often-overlooked yet equally insidious threat: Artificial Intelligence's capacity to induce profound boredom. This threat is presented not as a distant, apocalyptic scenario but as an immediate and growing challenge to human engagement and intellectual vitality, one already reshaping daily professional and creative work.
The legal profession serves as a stark early example: the Financial Times reports that lawyers are being overwhelmed by an 'insane barrage' of AI-generated emails and letters from clients. This influx of machine-produced content not only fails to streamline legal processes but actively complicates them, forcing professionals to sift through vast quantities of low-quality information. Ironically, some law firms had already laid off human trainees and paralegals in anticipation of AI-driven efficiency gains, only to find their workloads swollen by AI-generated 'slop'; the author spares little sympathy for the 'overpaid lawyers' facing this new burden.
The author recounts their personal experience as a geopolitics analyst, where the advent of ChatGPT has turned their work from an engaging pursuit of knowledge into a tedious battle against verbose, unilluminating content. Papers on complex subjects such as migration routes or trade in illegal goods, once insightful reading, now often triple in length. These AI-generated documents are characterized by dense text that obscures rather than clarifies, peppered with 'fatuous and quite bossy bullet points' that ironically attempt to compensate for the underlying confusion. This marks a fundamental shift from genuine knowledge sharing to generic, machine-produced gobbledygook that wastes readers' time and stifles real understanding.
Beyond the practical frustrations, the article argues, AI-generated writing creates a profound sense of being cheated: the disproportionate effort required to read such content contrasts sharply with the minimal human effort invested in producing it. The piece explores the philosophical implications, suggesting that human appreciation for art and creation stems from an intuitive recognition of years of skill and dedication, echoing Picasso's sentiment that it takes 'a lifetime to paint like a child.' The author stresses that true music is more than notes in the right order, and that authentic writing transcends mere 'Words, words, words'; it embodies a depth of understanding and soul that AI cannot replicate.
The widespread adoption of AI is affecting not only the quality of content but also, the article warns, human cognition itself. The journal Nature has highlighted the alarming trend of studies using AI-generated peer reviews to lend false authenticity to reports, undermining academic integrity. More critically, research has begun to show the direct impact of AI use on the human brain, with one study reporting a 55% reduction in brain activity among ChatGPT users. This unsettling finding suggests a future in which AI, rather than serving as a tool for instant skill acquisition akin to 'The Matrix,' instead drives a widespread cognitive decline across society.
The author finds a final irony in attempts to understand the AI world itself, citing the 19,000-word essays of Anthropic's CEO, Dario Amodei, with their portentous titles. Despite containing crucial warnings about AI's potential for misuse, such as facilitating bioweapon development, the essays are themselves tedious, symptomatic of the very 'AI slop' problem they describe. The ultimate critique comes from Mrinank Sharma, Anthropic's head of AI safety, who quit his high-stakes role to 'read poetry,' implicitly rejecting the soulless verbosity of AI in favor of the brevity and wit that define human creative expression. His departure powerfully symbolizes a preference for genuine, concise human artistry over endless machine-generated content lacking true spirit.