China is taking a wait-and-see approach to AI regulation, prioritizing adoption and innovation, as exemplified by the rise of AI agents like OpenClaw.
OpenClaw, an AI agent capable of independently navigating a user's device to perform tasks like sending emails and monitoring prices, has rapidly gained popularity in China, becoming a cultural phenomenon. Tech analyst Rui Ma described user obsession bordering on romantic infatuation. This fervent adoption has drawn a dual response from the Chinese government: local authorities actively promote AI through subsidies and grants, while national regulators issue risk advisories warning of privacy breaches and financial losses. The tension reflects the government's struggle to keep pace with rapid AI development, as it seeks regulatory flexibility that balances innovation with security.
China employs a framework that grants local governments significant autonomy to experiment with AI policies. While the central government sets the strategic direction, including legal, ethical, and political boundaries, local entities are encouraged to innovate and drive adoption. This 'local experimentalism' is a long-standing feature of Chinese policymaking, allowing localities to serve as testbeds for new policies before national implementation. Lu Xu, a Chinese law expert, explains that local governments act as 'middlemen,' for instance by organizing bidding processes for computational power. The approach also fosters inter-provincial competition: Zhejiang's capital, Hangzhou, has emerged as a leader in humanoid robotics through companies like Unitree, pushing neighboring provinces such as Jiangsu, and cities like Suzhou, to accelerate their own tech innovation efforts.
The central government, particularly regulators like the Cyberspace Administration of China (CAC), primarily focuses on mitigating the risks of emerging technologies, but policy experimentation is evident at this level too. One example is the draft regulations on 'humanized AI interactive services,' which include a contentious Article 13 that would bar AI services from simulating relatives for elderly users, even though the draft offers no clear justification and never defines 'elderly.' This reflects a pragmatic approach: rules are issued first and amended if they cause significant negative consequences or controversy, a preference for adaptability over extensive upfront debate. Wary of stifling innovation with premature regulation, the central government employs a 'bottom-up' policymaking process in which policies gradually solidify at higher levels of legal authority: from ministerial-level guidelines (such as the generative AI regulations issued after ChatGPT's debut, which emphasize 'core socialist values' alongside industry growth), to potential State Council regulations, and eventually a comprehensive AI law passed by the National People's Congress.
Qiheng Chen of the Asia Society notes a growing emphasis on innovation from the central government, visible in policy documents and speeches over the past three years. Chinese regulators also enforce selectively, scrutinizing larger AI firms more strictly while going easier on startups to reduce compliance burdens and encourage their growth. Despite ongoing discussions, Chen does not expect a national AI law in China within the next few years. Beijing first wants greater clarity on specific regulatory needs and on mechanisms for compensating intellectual-property holders whose data is used to train large language models. China is also watching the landscape of tech competition with the U.S. and aims to avoid rigid frameworks like the EU AI Act. The prevailing philosophy is to give innovation and adoption ample room, letting regulation evolve and follow once the technology matures and settles into its next phase.