From intelligence analysis to research and grant applications, artificial intelligence is playing a bigger role in government and military operations.
The Pentagon is phasing out Anthropic's AI systems, sparking competition among major AI firms to integrate their technology into America's military defense. An internal memo indicated Anthropic's AI was used in critical national security areas like nuclear weapons, ballistic missile defense, and cyber warfare. Sources suggest AI programs, including Anthropic's, are deployed in U.S. operations against Iran. Retired Navy Admiral Mark Montgomery notes that AI is now processing thousands of potential targets daily and significantly reducing strike turnaround time, a scale unmatched by previous campaigns, though humans remain in the decision-making loop.
The Pentagon utilizes AI much like consumers do, to summarize and distill vast amounts of information. Former officials indicate AI analyzes documents, videos, and images from battlefields to assist in war-game scenarios, aiming to minimize casualties and optimize weapon effectiveness. The digital revolution has flooded the battlefield with data, making AI essential for contextualizing this information rapidly, far beyond human analysis capabilities. For example, in missile defense, AI algorithms instantly sift through data to build targeting packages and assess damage, enabling real-time decisions during complex attacks. Anthropic's large language model, Claude, has been the only large-scale AI system operational on the Defense Department's classified networks. Beyond combat, AI also supports administrative functions such as research, policy development, and procurement, enhancing efficiency for government agencies.
While AI does not directly control physical weapons like flying planes or firing missiles, it plays a crucial role in the analytical phase that precedes human action. This integration has compressed operation planning times from days to mere hours, enabling rapid execution of war efforts, Montgomery notes. Anthropic's Claude, for instance, excels at sifting through extensive intelligence reports, synthesizing patterns, and surfacing relevant information more quickly than human analysts can. The targeting process, however, remains human-driven: Anthropic's U.S. Government Usage Policy explicitly requires human decisions on military targets, even when AI is used for foreign intelligence analysis. And despite the operational boost AI provides, traditional defense contractors still supply the vast majority of weapons. War could theoretically be fought without AI, though it would be 'less desirable' given AI's growing role in military campaigns.
The Pentagon's $200 million contract with Anthropic to integrate Claude into military systems was canceled following a dispute over control of AI usage restrictions, leading Anthropic to sue the federal government for alleged retaliation. The fallout has opened lucrative opportunities for other major artificial intelligence firms. Google has announced the rollout of AI agents for non-classified military uses, while OpenAI CEO Sam Altman has expressed interest in deploying ChatGPT's models on the Pentagon's classified network. OpenAI, however, emphasizes its 'three red lines': no autonomous lethal weapons, no mass surveillance of Americans, and no high-stakes automated decisions. The Pentagon has a six-month window to remove Anthropic's products but continues to use them in operations against Iran, underscoring both the depth of AI's integration into military applications and the challenges of unwinding it.