AI-assisted warfare extends a logic with roots in the industrial warfare of the 20th century: a cold distance that turns humans into points in a dataset.
The United States has officially acknowledged deploying artificial intelligence systems in its ongoing conflict with Iran. These tools are used primarily for data analysis: sifting rapidly through immense volumes of information so that military leaders can make faster, better-informed decisions. Despite reassurances from figures like Brad Cooper, chief of America's Central Command, that humans retain ultimate decision-making authority, the practical application of AI suggests the lines are blurring. A recent report revealed that operations such as 'Operation Epic Fury' relied on systems like the National Geospatial-Intelligence Agency's Maven Smart System, an AI that integrates diverse surveillance and intelligence data into a dashboard that heavily shapes official decision-making. The military claims these tools do not 'explicitly create' targets but merely 'identify potential points of interest,' yet the bombing of the Shajarah Tayyebeh elementary school in Iran on February 28 suggests otherwise. If targeting for that strike was influenced by AI 'folding in' old military intelligence, then artificial intelligence actively shaped the ostensibly human 'final call.' That amounts to AI military decision-making, even if a human still physically pulls the trigger.
The contemporary integration of AI into military operations recalls the 1990–91 Gulf War, a conflict that transformed the public's experience of warfare. That war was consumed on television in real time, with CNN's round-the-clock coverage displaying combat footage that often resembled a video game. This new modality of warfare, enabled by emerging technologies in both military execution and telecommunications, created a palpable sense that the nature of conflict had changed irreversibly. The execution of war itself grew more distant and dehumanized: cruise missiles launched from hundreds of miles away removed the human element from direct engagement. The historian Eric Hobsbawm, in 'The Age of Extremes,' described how the war technologies and bureaucratic systems of the twentieth century fundamentally reshaped warfare, enabling a horrific form of total war previously unimaginable, above all through the power of 'distance.' Distance serves strategic and tactical purposes, such as cover and surprise, but Hobsbawm argued that its ultimate and most profound effect is 'separation.' Separation renders violence, even mass violence, impersonal and unreal, removing the actor from the immediate, visceral, corporeal consequences of their actions, as if 'playing a video game,' and thereby further abstracting the act of killing.
The current use of AI in warfare, focused on rapidly processing information for target acquisition, is only the immediate threat. Future military applications of AI could transcend our present comprehension, producing scenarios now dismissed as dystopian fantasy, like those depicted in 'The Terminator.' But whether AI remains a mere tool or evolves into an autonomous agent, the immediate danger lies in the human tendency to dehumanize ourselves through its use. AI is, fundamentally, a sophisticated instrument for killing more efficiently, one that also relieves the psychological burden that has accompanied violence throughout human history: the necessity of being physically close enough to witness a target's demise. It functions not just as an information-sorting mechanism but as a means of creating both literal and figurative distance between the military operator and those targeted for destruction. If the previous century introduced the ability to deploy bombs at the push of a button, this century promises to let computers determine *where* those bombs should be dropped. This escalating detachment from destruction is horrifying, yet it is not entirely novel; it is the logical progression of a trend toward fully digitized, dehumanized warfare identified decades ago. Hobsbawm described this transformation as a 'new impersonality,' in which killing became a 'remote consequence of pushing a button or moving a lever,' rendering victims 'invisible as people.' AI does not alter the fundamental logic of impersonal warfare; it reinforces its underpinnings and amplifies its devastating effects.
The repercussions of this additional layer of removal will be devastatingly grim, perhaps more so than we can currently conceive. Hobsbawm's analysis of World War II reminds us that aerial bombing transformed human beings into abstract 'targets,' making it psychologically easier to inflict mass casualties; 'mild young men' could more readily drop bombs on cities, or a nuclear weapon on Nagasaki, because the victims were distant and dehumanized. That precedent is particularly alarming given recent findings that commercial AI chatbots, when placed in simulated 'crisis situations,' chose nuclear war in nearly 95% of runs. Such headlines tend to provoke fears of autonomous robot overlords, but the more immediate and pressing threat remains human agency: how we choose to use these machines, and the moral responsibilities we abdicate by doing so. The distance AI introduces between the human intellect and the decision to destroy is arguably its most terrifying aspect, suggesting a potential for horrors without limit. Human history is, in part, a chronicle of technological advancement applied to mutual destruction. We have become masters of this dark art, achieving not only ruthless efficiency in physical annihilation but also perfecting methods that make such destruction easier to initiate, justify, and psychologically reconcile with before, during, and after the fact. Hobsbawm warned that the 'greatest cruelties of our century' were the 'impersonal cruelties of remote decision, of system and routine,' especially when rationalized as 'regrettable operational necessities.'
The critical question now is how this industrial slaughter, already so inherently cruel, will be intensified by the unprecedented new distance of AI-mediated decision-making in the conflicts to come.