Recommendation of the Council on Artificial Intelligence

05/02/2024 | OECD

Background Information

The Recommendation on Artificial Intelligence (AI) (hereafter the “Recommendation”) – the first intergovernmental standard on AI – was adopted by the OECD Council meeting at Ministerial level on 22 May 2019 on the proposal of the Digital Policy Committee (DPC, formerly the Committee on Digital Economy Policy, CDEP). The Recommendation aims to foster innovation and trust in AI by promoting the responsible stewardship of trustworthy AI while ensuring respect for human rights and democratic values. In June 2019, at the Osaka Summit, G20 Leaders welcomed the G20 AI Principles, drawn from the Recommendation.

The Recommendation was revised by the OECD Council on 8 November 2023 to update its definition of an “AI System”, in order to ensure the Recommendation continues to be technically accurate and reflect technological developments, including with respect to generative AI. On the basis of the 2024 Report to Council on its implementation, dissemination and continued relevance, the Recommendation was revised by the OECD Council meeting at Ministerial level on 3 May 2024 to reflect technological and policy developments, including with respect to generative AI, and to further facilitate its implementation.

The OECD’s work on Artificial Intelligence

Artificial Intelligence (AI) is a general-purpose technology that has the potential to: improve the welfare and well-being of people, contribute to positive sustainable global economic activity, increase innovation and productivity, and help respond to key global challenges. It is deployed in many sectors ranging from production, education, finance and transport to healthcare and security.

Alongside benefits, AI also raises challenges for our societies and economies, notably regarding economic shifts and inequalities, competition, transitions in the labour market, and implications for democracy and human rights.

The OECD has undertaken empirical and policy activities on AI in support of the policy debate since 2016, starting with a Technology Foresight Forum on AI that year, followed by an international conference on AI: Intelligent Machines, Smart Policies in 2017. The Organisation also conducted analytical and measurement work that provides an overview of the AI technical landscape, maps economic and social impacts of AI technologies and their applications, identifies major policy considerations, and describes AI initiatives from governments and other stakeholders at national and international levels.

This work has demonstrated the need to shape a stable policy environment at the international level to foster trust in and adoption of AI in society. Against this background, the OECD Council adopted, on the proposal of the DPC, a Recommendation to promote a human-centred approach to trustworthy AI that fosters research, preserves economic incentives to innovate, and applies to all stakeholders.

An inclusive and participatory process for developing the Recommendation

The development of the Recommendation was participatory in nature, incorporating input from a broad range of sources throughout the process. In May 2018, the DPC agreed to form an expert group to scope principles to foster trust in and adoption of AI, with a view to developing a draft Recommendation in the course of 2019. The informal AI Group of experts at the OECD was subsequently established, comprising over 50 experts from different disciplines and different sectors (government, industry, civil society, trade unions, the technical community and academia). Between September 2018 and February 2019 the group held four meetings. The work benefited from the diligence, engagement and substantive contributions of the experts participating in the group, as well as from their multi-stakeholder and multidisciplinary backgrounds.

Drawing on the final output document of the informal group, a draft Recommendation was developed by the DPC, in consultation with other relevant OECD bodies, and approved at a special meeting on 14-15 March 2019. The OECD Council adopted the Recommendation at its meeting at Ministerial level on 22-23 May 2019.

Scope of the Recommendation

Complementing existing OECD standards already relevant to AI – such as those on privacy and data protection, digital security risk management, and responsible business conduct – the Recommendation focuses on policy issues that are specific to AI and strives to set a standard that is implementable and flexible enough to stand the test of time in a rapidly evolving field. The Recommendation contains five high-level values-based principles and five recommendations for national policies and international co-operation. It also proposes a common understanding of key terms, such as “AI system”, “AI system lifecycle”, and “AI actors”, for the purposes of the Recommendation.

More specifically, the Recommendation includes two substantive sections:

  1. Principles for responsible stewardship of trustworthy AI: the first section sets out five complementary principles relevant to all stakeholders: i) inclusive growth, sustainable development and well-being; ii) respect for the rule of law, human rights and democratic values, including fairness and privacy; iii) transparency and explainability; iv) robustness, security and safety; and v) accountability. This section further calls on AI actors to promote and implement these principles according to their roles.
  2. National policies and international co-operation for trustworthy AI: consistent with the five aforementioned principles, the second section provides five recommendations to Members and non-Members having adhered to the Recommendation (hereafter the “Adherents”) to implement in their national policies and international co-operation: i) investing in AI research and development; ii) fostering an inclusive AI-enabling ecosystem; iii) shaping an enabling interoperable governance and policy environment for AI; iv) building human capacity and preparing for labour market transformation; and v) international co-operation for trustworthy AI.

2023 and 2024 Revisions of the Recommendation

In 2023, a window of opportunity was identified to maintain the relevance of the Recommendation by updating its definition of an “AI System”, and the DPC approved a draft revised definition in a joint session of the Committee and its Working Party on AI Governance (AIGO) on 16 October 2023. The OECD Council adopted the revised definition of “AI System” at its meeting on 8 November 2023. The update of the definition included edits aimed at:

  • clarifying the objectives of an AI system (which may be explicit or implicit);
  • underscoring the role of input which may be provided by humans or machines;
  • clarifying that the Recommendation applies to generative AI systems, which produce “content”;
  • substituting the word “real” with “physical” for clarity and alignment with other international processes;
  • reflecting the fact that some AI systems can continue to evolve after their design and deployment.

In line with the conclusions of the 2024 Report to Council, the Recommendation was further revised at the 2024 Meeting of the Council at Ministerial level to maintain its relevance and facilitate its implementation five years after its adoption. Specific updates aimed at:

  • reflecting the growing importance of addressing misinformation and disinformation, and safeguarding information integrity in the context of generative AI;
  • addressing uses outside of intended purpose, intentional misuse, or unintentional misuse;
  • clarifying the information AI actors should provide regarding AI systems to ensure transparency and responsible disclosure;
  • addressing safety concerns, so that if AI systems risk causing undue harm or exhibit undesired behaviour, they can be overridden, repaired, and/or decommissioned safely by human interaction;
  • emphasising responsible business conduct throughout the AI system lifecycle, involving co-operation with suppliers of AI knowledge and AI resources, AI system users, and other stakeholders;
  • underscoring the need for jurisdictions to work together to promote interoperable governance and policy environments for AI, given the increase in AI policy initiatives worldwide; and
  • introducing an explicit reference to environmental sustainability, whose importance has grown considerably since the adoption of the Recommendation in 2019.

Furthermore, some of the headings of the principles and recommendations were expanded for clarity, and the text on traceability and risk management was elaborated and moved to the “Accountability” principle, the most appropriate home for these concepts.

Implementation

The Recommendation instructs the DPC to report to the Council on its implementation, dissemination and continued relevance five years after its adoption and regularly thereafter.

2024 Report to Council

The DPC, through AIGO, developed a report to the Council on the implementation, dissemination and continued relevance of the Recommendation five years after its adoption, and proposed draft revisions drawing on its conclusions.

The 2024 Report concluded that the Recommendation provides a significant and useful international reference in national AI policymaking. The Recommendation is being implemented by its Adherents, is widely disseminated, and remains fully relevant, including as a solid framework to analyse technology evolutions such as those related to generative AI.

However, the 2024 Report found that updates were needed to clarify the substance of some of the Recommendation’s provisions, facilitate implementation, increase relevance, and ensure the Recommendation reflects important technological developments, including with respect to generative AI.

Further work to support the implementation of the Recommendation

In addition to reporting to the Council on the implementation of the Recommendation, the DPC is also instructed to continue its work on AI, building on this Recommendation, and taking into account work in other international fora, such as UNESCO, the European Union, the Council of Europe and the initiative to build an International Panel on AI.

In order to support implementation of the Recommendation, the Council instructed the DPC to develop practical guidance for implementation, to provide a forum for exchanging information on AI policy and activities, and to foster multi-stakeholder and interdisciplinary dialogue.

To provide an inclusive forum for exchanging information on AI policy and activities, and to foster multi-stakeholder and interdisciplinary dialogue, the OECD launched i) the AI Policy Observatory (OECD.AI) as well as ii) the informal OECD Network of Experts on AI (ONE AI) in February 2020.

OECD.AI is an inclusive hub for public policy on AI that aims to help countries encourage, nurture and monitor the responsible development of trustworthy artificial intelligence systems for the benefit of society. It combines resources from across the OECD with those of partners from all stakeholder groups to provide multidisciplinary, evidence-based policy analysis on AI. The Observatory includes a live database of AI strategies, policies and initiatives that countries and other stakeholders can share and update, enabling the comparison of their key elements in an interactive manner. It is continuously updated with AI metrics, measurements, policies and good practices that lead to further updates in the practical guidance for implementation.

OECD-LEGAL-0449-en

 
