As artificial intelligence increasingly integrates into military planning and operational design, practitioners are grappling with a persistent unease regarding its impact on traditional operational art. Despite advancements in tools and data access, challenges in integration, coherence, and shared direction persist, raising questions about how different forms of operational thinking are privileged or obscured by AI's influence.
This section discusses the prevailing understanding of operational art as a coherent professional language, historically used to link strategic aims with tactical actions through concepts like centers of gravity and decisive points. The authors note that within this tradition, operational breakdowns are often attributed to insufficient rigor or weak conceptual understanding. However, they propose an alternative interpretation: that difficulties arise when this singular understanding is applied to increasingly complex and adaptive environments, leading to a 'checklist effect.' This can cause planners to focus on completing procedural steps rather than deeply exploring the problem. As a result, planning products might appear sound and consistent, but the underlying operational understanding remains shallow, creating a disconnect between process and genuine insight. This tension suggests that the current unease isn't just about a decline in operational art but a fundamental mismatch between a dominant framework and the evolving nature of warfare. The implicit preference for one mode of operational thinking over others needs to be critically examined, especially as AI integration intensifies.
The article argues that operational art is not a monolithic body of knowledge but rather comprises several distinct traditions, each shaped by different historical problems and assumptions about warfare. These traditions influence how coherence is achieved and how action relates to understanding. Key traditions include the Anglo-American center of gravity approach, which emphasizes analytical decomposition to find critical vulnerabilities; German military thought, focusing on *Schwerpunkt* (decisive focus) and *Auftragstaktik* (mission command), in which judgment and adaptation in context are paramount; and Soviet operational art, which approaches warfare as a system to be orchestrated in depth through simultaneity and echeloned forces. Additionally, Chinese operational thought, with concepts like systems destruction warfare and *shi* (situational potential), emphasizes disrupting an adversary’s entire system and shaping the overall situation. Problems arise when one of these traditions is implicitly treated as universally applicable: the analytical framework no longer matches the adversary's operational logic, and planning becomes ineffective despite being procedurally correct. The authors stress that while these traditions can overlap, friction occurs when their distinct underlying assumptions about what operational thinking should achieve go unrecognized.
AI's impact on military planning extends beyond mere acceleration of processes; it fundamentally reshapes how operational problems are perceived, structured, and made intelligible. AI is not a neutral instrument; it operates within existing planning traditions, amplifying certain ways of understanding situations while relegating others to the background. This amplification is subtle, occurring as planners learn to trust and adapt to the information AI systems present as meaningful. Unsurprisingly, current AI applications predominantly align with the analytic center of gravity tradition, excelling at pattern recognition, network mapping, and predictive modeling. This alignment promises greater clarity and tempo, making AI appear well-suited to address current operational shortcomings. However, it also exposes a limitation: forms of operational understanding that rely on situational judgment, improvisation, and dynamic orchestration, central to the other traditions, are difficult to translate into machine-readable representations. As AI-supported processes increasingly guide attention, these human-centric forms of understanding risk becoming less visible and less practiced. The critical risk is not that AI produces incorrect answers, but that it subtly reconfigures the relationship between human judgment and machine-generated coherence. Planners may unconsciously orient their understanding toward what the system can represent and optimize, achieving 'coherence' through alignment with system outputs rather than through comprehensive professional judgment. The challenge AI poses is thus a professional one: to remain cognizant of which traditions of operational thinking are being amplified or marginalized, and of how this dynamic affects operational judgment and human agency.
While doctrinal rigor and conceptual clarity remain crucial for effective operational planning, especially in complex environments requiring shared frameworks for coordinated action, rigor alone cannot resolve the tensions arising from the interplay of different operational thinking traditions. When analytic, judgment-based, and system-oriented modes of reasoning are implicitly conflated, procedural competence may not translate into genuine operational understanding. The article advocates 'professional awareness': the capacity to discern which ways of thinking are shaping a plan and which are being overlooked. This requires officers not only to be familiar with the various operational traditions but also to be capable of moving deliberately between them. The critical professional skill involves engaging with the problem more deeply than standard procedures might suggest, questioning the appropriateness of templates, and exploring diverse operational logics before committing to a course of action. This ability to reframe operational problems becomes even more vital as AI integration progresses. If AI systems are designed around a single dominant planning logic, they risk fostering procedural convergence at the expense of exploring alternative understandings. Operational art should therefore be viewed as a dynamic professional capacity to navigate different modes of reasoning: discerning when analytical decomposition is useful, when judgment in action is required, and when coherence depends on endurance and adaptation. Cultivating this awareness is essential for maintaining meaningful human agency, ensuring AI outputs are treated as inputs to judgment rather than as definitive conclusions. The military profession must critically scrutinize its tools and traditions, developing planning cultures and education programs that deliberately practice engaging with diverse operational traditions in order to remain effective and adaptable.