The OECD Council's Recommendation on Artificial Intelligence, with 49 Adherents as of April 2026, aims to foster innovation and trust in AI while promoting responsible stewardship through its AI Principles and five Recommendations.
This section outlines the Recommendation and highlights recent policy papers published in 2024 and 2025 concerning future AI risks, benefits, and policy imperatives; common AI incident reporting frameworks; and AI adoption in firms. It also notes the 2026 Due Diligence Guidance for Responsible AI and the integrated partnership with the Global Partnership on Artificial Intelligence (GPAI), formed in July 2024 to advance human-centric, safe, secure, and trustworthy AI.
The 49 Adherents, comprising OECD members, non-members, and the European Union, have committed to promoting, implementing, and adhering to the OECD's AI Recommendation. The Principles within this Recommendation serve as a foundation for other significant AI initiatives, such as the G7's Hiroshima AI Process Comprehensive Policy Framework.
While OECD recommendations are generally not legally binding, they represent a strong political commitment. Other relevant OECD guidance that may indirectly influence AI development and use includes the Council's Recommendation concerning Guidelines Governing the Protection of Privacy and Transborder Flows of Personal Data, the OECD Guidelines for Multinational Enterprises, the Recommendation of the Council on Consumer Protection in E-commerce, and the OECD Due Diligence Guidance for Responsible AI.
The OECD's definition of 'AI system' was updated on November 8, 2023, to reflect technological advancements, including generative AI. Key terms defined include: 'AI actors' (those who play an active role in the AI system lifecycle); 'AI knowledge' (the skills and resources needed across the AI lifecycle); 'AI system' (a machine-based system that, for explicit or implicit objectives, infers from the input it receives how to generate outputs, such as predictions, content, recommendations, or decisions, that can influence physical or virtual environments); and the 'AI system lifecycle' (design; data and models; verification and validation; deployment; and operation and monitoring).
The Recommendation's territorial scope extends to its 49 Adherents, which include OECD member and non-member countries, as well as the European Union. These Adherents are expected to promote and implement the Recommendation. However, the term 'AI actors' within the Recommendation is not defined by reference to territory, leaving specific obligations to be determined by individual Adherents.
The OECD's Recommendation on AI is not sector-specific; its principles and recommendations are intended to apply broadly across industries. Because the Recommendation does not define 'AI actors' by sector, the specific obligations placed on them will depend on how each Adherent implements the Recommendation in its national policies.
Adherents are expected to comply with the OECD Recommendation, although the Recommendation itself does not explicitly outline governance or regulatory oversight mechanisms. Certain Principles, such as those related to human-centered values, fairness, transparency, and accountability, are relevant to AI actors. The degree to which AI actors must adhere to these Principles is contingent on each Adherent state's specific implementation approach.
The OECD's Recommendation on AI primarily aims to establish a stable international policy environment. This framework seeks to promote a human-centric approach to trustworthy AI, foster research and development, and maintain economic incentives for innovation across all stakeholders in the AI ecosystem.
The Recommendation does not categorize AI systems based on their risk levels. However, the OECD has indicated its intention to further analyze the criteria necessary for a comprehensive AI risk assessment. This analysis will explore how best to aggregate these criteria, recognizing their potential interdependencies, to inform future policy development regarding AI risks.
Adherents are tasked with promoting and implementing five core AI Principles: ensuring AI supports inclusive growth, sustainable development, and well-being; incorporating human-centered values and fairness; maintaining transparency and explainability; ensuring AI systems are robust, secure, and safe; and establishing clear accountability for AI actors. Additionally, five Recommendations guide Adherents to invest in AI R&D, foster a digital ecosystem, shape an enabling policy environment, build human capacity for labor market transformation, and engage in international cooperation for trustworthy AI.
The OECD itself does not act as a direct regulator for the implementation of its Recommendation on AI. Instead, it monitors and analyzes global AI initiatives through its AI Policy Observatory. This observatory tracks AI strategies, policies, metrics, and good practices. The Recommendation leaves the specifics of how Adherents should regulate and enforce the Principles within their own jurisdictions to their discretion.
Since the OECD Recommendation on AI is not a legally binding instrument, it does not confer any direct enforcement powers or impose penalties for non-compliance. The OECD relies on its Adherents to actively incorporate and implement the Recommendation's Principles into their national laws and policies, and to establish their own enforcement mechanisms within their respective jurisdictions.