The year 2026 marks a major regulatory turning point for European companies using or considering the use of artificial intelligence in their human resources (HR) processes. Many AI tools deployed for HR purposes are likely to be classified as "high risk" under the AI Act, triggering strict obligations for employers, including mandatory human oversight and transparency requirements toward employees and their representatives. While full application of these obligations was initially scheduled for August 2026, the European Commission's Digital Omnibus package proposes to make the application conditional on the availability of harmonized technical standards, potentially extending deadlines to December 2027 or August 2028. Regardless of any postponement, employers are already required by Article 26(7) of the AI Act and national legislation to inform and consult employee representative bodies before deploying high-risk AI systems.
1. The Regulatory Framework: The AI Act and Its Risk-Based Approach
The AI Act, which entered into force on August 1, 2024, establishes a harmonized EU framework for the use of AI systems, following a risk-based approach that sorts AI systems into four risk levels.
1.2 The Four Risk Levels: Unacceptable risk
AI systems are prohibited if they pose a serious threat to EU fundamental values, including social scoring systems, emotion recognition in workplaces/education, and exploitation of vulnerable groups.
1.2 The Four Risk Levels: High risk
AI systems are deemed high-risk when used in sensitive areas that significantly impact individual rights, such as education, public safety, or recruitment. HR applications, including automated candidate selection, performance evaluation, workplace monitoring, employee turnover prediction, and decisions on promotion or termination, are explicitly identified as high-risk.
1.2 The Four Risk Levels: Limited risk
AI systems classified as limited risk are subject only to specific transparency obligations: users must be informed that they are interacting with an AI system, and AI-generated content must be clearly labeled as such. Examples include self-service portals with AI algorithms, HR chatbots, and virtual assistants for employees.
1.2 The Four Risk Levels: Minimal risk
This category encompasses all other AI systems not falling into higher-risk classifications, such as spam filters. The majority of AI systems in the EU are currently in this category, with no particular regulatory requirements imposed by the AI Act, though other contractual and legal obligations still apply.
1.3 Focus on High-Risk Systems in the HR Sector
Many AI tools specifically designed for human resources are expected to be classified as high risk. The full application of obligations for these systems, initially anticipated for August 2026, is currently under discussion as part of the Digital Omnibus package, potentially delaying the effective date.
1.4 Obligations of Employers Deploying High-Risk AI Systems: Mandatory human oversight
The AI Act mandates that high-risk AI systems be designed and used in a way that allows for effective human oversight. The individuals responsible for that oversight must be qualified, receive appropriate and ongoing training, and have the effective authority to intervene in, override, or reverse the system's decisions. This obligation complements the right under Article 22 GDPR not to be subject to decisions based solely on automated processing.
1.4 Obligations of Employers Deploying High-Risk AI Systems: Transparency and information obligations
Before deploying a high-risk AI system, Article 26(7) of the AI Act requires employers to inform employee representatives (e.g., works councils, trade union delegates) and directly affected employees in a clear and comprehensive manner. Compliance with national provisions regarding consultation of representative bodies is also essential.
2. The Impact of the "Digital Omnibus" Package
The European Commission's "Digital Omnibus" package, introduced on November 19, 2025, aims to revise and harmonize key EU digital legislation. It seeks to close regulatory gaps, eliminate overlaps, and enhance legal certainty for companies, particularly SMEs. The package clarifies the interaction between the AI Act and the GDPR and proposes to make the application of high-risk AI system obligations conditional on the availability of harmonized technical standards, potentially postponing deadlines by up to 16 months (to December 2027 or August 2028). However, the package is still a proposal subject to approval by the EU Council and the European Parliament, so companies should prepare for possible enforcement in August 2026 while monitoring legislative developments.
3. Social Dialogue: An Enduring Imperative
Despite potential postponements of AI Act application deadlines, involving employee representatives remains a top priority in 2026. AI is viewed not only as a tool to facilitate work but also as a potential threat to job security and working conditions, necessitating proactive dialogue.
3.1 Mandatory Consultation
In most EU Member States, deploying new AI systems, especially in HR, requires prior consultation with employee representative bodies, as stipulated by both Article 26(7) of the AI Act and national legislation (e.g., Belgium's Collective Bargaining Agreement No. 39). This consultation should ideally take place before a system is acquired, not after significant investments have already been made. Employers are strongly advised to adopt a proactive approach: engage in constructive dialogue and develop a company-wide AI policy that defines usage rules and reassures stakeholders about the ethical and responsible use of AI.
4. Practical Recommendations
In this evolving regulatory landscape, companies should:

- Map and classify all AI systems in use according to the AI Act's risk categories, and identify those that are high risk.
- Train HR and IT teams on AI Act and GDPR requirements.
- Conduct impact assessments before adopting new AI tools.
- Establish regular dialogue with employee representatives to build trust.
- Actively monitor the legislative negotiations surrounding the Digital Omnibus package at the EU level.