Despite intense technological competition, China and the U.S. have made constructive progress in AI cooperation. Key milestones include the May 2024 intergovernmental dialogue in Geneva on AI risks and global governance, which marked AI's formal inclusion in bilateral talks. Later, in November 2024, Presidents Xi Jinping and Joe Biden agreed on the principle of human control over nuclear weapons, establishing a crucial red line against AI militarization in this domain. This consensus has been affirmed even after Donald Trump's return to office, signaling a shared recognition in Washington of the need to maintain engagement with China on AI-related issues. However, deeper cooperation in sensitive military areas, especially the AI-nuclear nexus, faces practical constraints: challenges in mutual transparency and verification, and a persistent strategic trust deficit.
China and the U.S. can expand their existing consensus on human control over nuclear weapons in two main directions. First, they can encourage other nuclear powers, such as the U.K., France, and Russia, to adopt the same principle, transforming a bilateral understanding into a multilateral one. This could involve promoting statements within the UN framework that reaffirm human control over nuclear weapons and advocate the responsible deployment of military AI. Second, the principle could be extended beyond nuclear weapons to other strategic systems with high deterrence potential. The precise definition and scope of such an extension would require further discussion, which could begin in Track-2 dialogues among non-governmental experts.
To deepen cooperation, both countries should develop more detailed risk assessment and management mechanisms. This involves systematically breaking down nuclear weapon systems and conflict scenarios into specific deployment and decision-making steps in order to identify particular AI-related risks. Because a complete ban on AI in Nuclear Command, Control, and Communications (NC3) systems is widely considered unrealistic, and because both sides recognize potential benefits of AI-nuclear integration, a critical step is to identify and agree upon mutually acceptable 'red lines.' In this context, Track-2 dialogues involving think tanks, scholars, and retired military officers serve as an important supplementary channel, facilitating discussions on risk assessment, ethical norms, and crisis decision-making to build expert consensus that can support official talks.
In non-military domains, China and the U.S. have significant opportunities to collaborate on advancing global AI governance. First, they could focus on cross-border AI risks by promoting universally acceptable frameworks for risk tiering, classification, and assessment, and jointly exploring potential responses. These governance challenges are shared: technical risks such as loss of control, limited interpretability, and 'hallucinations' in large models, and application-level risks such as AI-enabled biosecurity threats, the potential for AI to empower terrorist groups, and the erosion of social trust through deepfakes. These issues transcend national boundaries and demand collective action. Adopting a risk-based approach could reframe China and the U.S. from 'technological competitors' into 'co-risk bearers,' fostering practical cooperation in AI risk assessment and management despite strategic competition.
The two nations should engage in dialogue concerning AI's profound impact on economic and social structures. This includes examining how AI reshapes labor–capital relations and traditional production methods, particularly the growing trends towards unmanned operations and high automation that risk structural unemployment. A shared challenge is ensuring AI improves livelihoods without exacerbating disparities in social resource distribution, necessitating policies like education reform, skills transformation, and inclusive digital infrastructure access. Furthermore, China and the U.S. could explore AI's potential as a global public good, applying it to climate governance (disaster forecasting, extreme weather modeling), international peace and security (conflict mediation, peacekeeping, early warning), and global development (open-source technologies to narrow the Global North-South tech gap). Pilot projects in climate monitoring, public health, smart agriculture, and education could highlight AI's positive aspects and improve bilateral relations. Finally, joint efforts are needed to address AI's ethical and legal challenges, such as defining human identity, preserving agency, maintaining human control in critical decisions, establishing ethical principles, and clarifying accountability for harm. Such dialogue would build trust and consolidate international normative consensus.
It is crucial to emphasize that cooperation between China and the U.S. on global AI governance is not aimed at establishing a 'G2' model; rather, it reflects the responsibilities inherent in their significant capabilities in technological innovation, industrial capacity, and international influence. Any effective framework for global technology governance requires the active participation of both China and the United States. Because AI is a paradigmatic disruptive technology that will profoundly reshape economic, social, and security landscapes and help define the future international order, early cooperative steps by China and the U.S. in areas such as risk perception, ethical principles, and public-interest applications would carry immense demonstrative value. These efforts would reflect their responsibility as major powers to lead by example in the responsible development and governance of artificial intelligence globally.