AI may be reshaping the roots of political violence. A new framework links AI-driven grievance, accountability gaps, and emerging threats.
This section argues that the pace of technological change introduced by AI is exceeding the rate at which institutions can adapt. This mismatch creates a 'tempo problem': the gap widens between disruption and the availability of legitimate channels for addressing grievances, so those channels come to be perceived, often correctly, as closed. Classical political violence theory identifies such institutional closure as a consistent precondition for grievance escalating into violence, and the framework treats it as a force multiplier across all three domains of grievance. The mechanism linking tempo to violence risk is therefore not direct; it is mediated through the perception that legitimate channels of redress are inaccessible.
The article proposes a framework in which AI can generate entirely new grievances, intensify existing ones by making them more acute, visible, or targetable, or serve as a focal point around which diffuse grievances coalesce. A central concept threading through all three domains is the 'accountability gap': consequential harm from AI systems is distributed across complex technical and institutional chains and so cannot be clearly attributed to any specific human agent. The accountability gap is compounded by the tempo problem, so grievances accumulate faster than institutional mechanisms can absorb them.
The first domain explores the economic repercussions of AI, focusing on the uneven distribution of costs and benefits. Displacement caused by AI is not uniform: just as earlier automation shocks disproportionately affected blue-collar and mid-skill workers in specific regions, the current AI revolution is extending displacement upward, with entry-level white-collar positions in sectors such as technology, finance, law, and consulting now among the most exposed. What distinguishes this period from past technological shifts is the unprecedented speed of AI innovation and adoption, coupled with the fact that the technology's own creators predict these significant societal consequences.
The second domain concerns the erosion of legitimacy that stems from the perception that states and institutions are either failing to govern AI effectively or are actively weaponizing it. The regulatory response to AI's proliferation has been criticized as slow, fragmented, and often captured by vested interests. This governance breakdown can foster a distinctive actor type, the 'demonstrative attacker.' Unlike traditional terrorists, their primary goal is not to halt AI development or retaliate for harm, but to force improved governance by exposing how vulnerable and inadequately protected AI infrastructure is. This marks a shift in potential targets: acts of violence may aim to make a political statement about AI regulation itself.
The third domain delves into the direct effects of AI on individual lives and immediate social structures, rather than broader institutions or the economy. It identifies two sub-domains that share a common underlying logic. The first involves social atomization and identity disruption, where AI systems impact personal relationships and a sense of self or community. The second concerns direct personal harm, where an AI system is the proximate cause of an attributable injury to an individual or their loved ones. This emphasizes the intimate and potentially deeply personal nature of grievances arising from AI, moving beyond abstract concepts to tangible impacts on daily existence.
Drawing on historical precedents, this section suggests that the targeting patterns emerging from AI-driven structural conditions are remarkably consistent across ideological lines. Target selection is dictated less by ideology than by the structure of the grievance itself. As high-value targets harden their security, violence may be redirected toward more accessible, 'softer' targets that still align with the underlying grievance. If national figures become unreachable behind enhanced security, for instance, local policymakers who approved data centers might become proxies, absorbing the structural anger over AI deployment and its consequences. This points to diffuse and adaptable targeting strategies among aggrieved actors.
The article concludes that the 'accountability gap' is not merely a legal or ethical problem but a critical counterterrorism variable. Measures that close or narrow it should therefore be understood not only as governance reforms but also as counterterrorism strategy: by providing avenues for accountability, they shrink the pool of grievances that might otherwise drive individuals toward extra-institutional means of redress, including violence. The author stresses that effective governance of AI accountability is among the most under-appreciated levers in the broader counterterrorism response to this evolving threat landscape, and that the window for substantive engagement and policy intervention is now, before trajectories of political violence become entrenched and irreversible.