This commentary examines the critical impacts of artificial intelligence on children and vulnerable communities, critiquing popular narratives that overlook current AI-driven harms in warfare, surveillance, and economic injustice. It highlights ongoing community efforts in California to demand accountability from tech companies developing these powerful systems.
The AI Documentary's Limited Perspective
The commentary opens by discussing a documentary, 'The AI Doc: Or How I Became an Apocaloptimist,' which explores the anxieties of having children in the age of AI. It then critiques the film's focus on hypothetical future risks, arguing that this framing overlooks significant harms AI is already causing in real-world conflicts.
AI in Global Military Operations
The author highlights the immediate, devastating impact of AI in military contexts, specifically citing the US-Israeli war on Iran, in which AI systems such as Palantir's Maven Smart System and Anthropic's Claude were allegedly used for target identification, resulting in civilian and child casualties. The author also points to substantial Pentagon contracts awarded to leading AI firms.
Domestic AI Harms in California
The commentary then turns to the negative consequences of AI within California, detailing how systems like Palantir's are used by ICE to carry out deportations affecting immigrant communities. It also addresses the environmental strain new data centers place on California's water supplies and energy grids, and the role of AI tenant-screening algorithms in driving up rental costs.
Growing Resistance and Calls for Accountability
The commentary showcases successful movements across California organizing against harmful AI: the Stop LAPD Spying Coalition's fight against predictive policing, the Writers Guild of America securing protections against AI in creative work, the No Tech for Apartheid campaign challenging tech companies' military contracts, and local efforts in Monterey Park to ban AI data centers.
Demanding Accountability for Present Harms
The piece concludes with a call for accountability, asserting that the critical question is not what future harms AI might pose to 'our' children, but whose children are already being harmed by AI and who will be held responsible for these ongoing atrocities and societal damages.