European plans to weaken AI users’ rights are unlikely to help achieve convergence in performance between the EU and US tech markets
The European Union is preparing to adopt a more lenient approach to artificial intelligence regulation, closer to that of the US, through a proposal issued in November 2025 and broadly approved by EU member states. The move is supported by Big Tech companies, and EU policymakers hope it will close the performance gap between the EU and US tech markets. The underlying assumption is that current EU regulation is holding back the growth of Europe's tech sector.
This deregulatory plan entails a significant weakening of tech users' rights. It would allow AI companies to use sensitive data more easily when training their algorithms, potentially increasing Europeans' exposure to discrimination based on personal characteristics such as sexual orientation or religious belief. The plan would also remove certain transparency requirements, letting developers self-assess their AI systems as low-risk without registering them publicly, and would broaden the legality of fully automated decisions, even in critical areas such as the dismissal of workers by machines.
The article argues that there is no substantial evidence linking the EU's safeguarding of fundamental rights to the underperformance of its AI markets. Instead, it posits that the current state of EU tech stems largely from past industrial choices. For instance, Europe's share of global high-tech R&D expenditure declined significantly between 2003 and 2013, as investment shifted towards mid-tech sectors such as car manufacturing, long before data privacy laws or the AI Act came into force.
China serves as a counter-example to the idea that strict regulation stifles tech growth. Despite enforcing rigorous and complex tech regulations, including content-moderation requirements and severe limits on cross-border data flows, China remains a formidable rival to the US in AI development: its top foundation models are estimated to trail their US counterparts by only about two months. This suggests that a 'light-touch' regulatory environment is not a prerequisite for a thriving AI market.
Beyond regulation, other crucial factors significantly impact tech performance. High energy costs are cited as a major concern for many EU businesses, with industrial electricity prices in the EU more than double those in China in 2024. Furthermore, access to finance plays a vital role; from 2013 to 2024, private AI investment in the US totaled $471 billion, compared to $119 billion in China and only around $50 billion in EU countries. These economic disparities appear to be more influential than regulatory frameworks.
Given the marginal impact of regulation on overall tech performance, the article questions what economic benefit the EU would actually gain from reducing regulatory protection. While privacy protection may redirect innovation towards more privacy-friendly applications, it is unlikely to yield aggregate productivity gains comparable to those from essential resources, infrastructure, and financing. Deregulation could instead erode confidence in the European digital economy and dampen demand for its tech services, even as demand for AI continues to grow overall.
The EU regulatory framework for AI certainly has room for improvement, such as addressing market distortions that disproportionately burden small companies, or adapting to unforeseen risks like 'nudification' algorithms or 'agentic AI'. Public authorities may also need better tools to enforce the law effectively. However, the author concludes that while refinement is necessary, weakening the existing AI regulatory framework on the misguided belief that doing so will boost the European economy is ill-advised and potentially harmful.