The article explores the critical tension between advanced military artificial intelligence and human decision-making, using a hypothetical US-Iran conflict under Donald Trump as its central scenario. It argues that even sophisticated AI, assisting with targeting and cyberoperations, can be undermined by human political folly and traditional adversary tactics, such as disrupting oil flow. The author critiques the notion of fully autonomous 'kill webs' and the lack of ethical oversight in AI development, suggesting that human stupidity poses a greater threat than AI itself, and advocates for ethical education alongside technological progress.
Taking Humans out of the Loop
This section introduces the 'intelligent kill web,' in which AI systems calculate targets and execute kill chains with a speed and lethality far surpassing human capabilities; it notes a U.S. Air Force exercise in which AI was over 100 times faster than human operators. The article immediately contrasts this efficiency with a tragic U.S. bombing of a primary school in Iran, arguing that human failures in data management and system design, rather than the AI (Maven) itself, were responsible. It raises concerns about removing humans from the 'kill web,' given the potential for accelerated conflict, but also highlights the danger of incompetent human decision-makers. The author pessimistically suggests that current human leadership in such operations is morally indistinguishable from killer robots.
Cyberoperations
This section illustrates the pervasive nature of cyberoperations in modern conflict. It describes how Israeli cyber-operatives allegedly used hacked Tehran traffic cameras, coupled with cellular service disruption, to track and assassinate Ayatollah Khamenei. The article recalls the US-Israel Stuxnet worm attack on Iran's nuclear facilities, which marked the beginning of a global digital arms race. While Iran conducts its own cyberattacks, such as hacking an FBI director's email, it remains at a disadvantage against the far greater investment of the US and Israel. The piece also details widespread Russian cyberoperations in Europe, including conventional sabotage, sophisticated cyber-espionage against police forces and high-tech companies, and destabilizing social media campaigns designed to incite secession, drawing parallels to the origins of the Ukrainian conflict.
The Genie and the Bottle
This section explores the challenge of controlling scientific advancements, likening AI to a 'nuclear genie.' It discusses the debate between techno-optimists and pessimists, as well as attempts to implement ethical 'constitutions' for AI, such as Anthropic's for Claude, which aims to prevent the creation of cyberweapons or harm to humanity. The article criticizes the Trump administration for dismantling such regulatory efforts, which it views as obstacles to power. It posits that AI merely reflects human nature and that genuine progress requires addressing fundamental human 'waste' through ethical education rather than controlling the AI alone. The author closes with a cynical observation that AI, despite its dangers, might be an improvement over the 'stupidity' of current political leadership, sardonically suggesting Claude for president.