Global AI Governance: Five Key Frameworks Explained

The AI Journal

With artificial intelligence (AI) technologies, and generative AI in particular, entering nearly every aspect of human life, it has become ever more urgent for organizations to develop AI systems that are trustworthy and subject to good governance. To that end, various international organizations and technical bodies have established standards for responsible AI development and deployment. Broadly speaking, these standards seek to mitigate potential AI-related risks while ensuring that intended benefits are widely distributed. Many of these standards are necessarily abstract due to their broad applicability, and their overlapping nature makes it difficult to differentiate them or determine their specific uses.

To make sense of this rapidly evolving landscape of AI governance, this article summarizes five of the most influential AI-related standards and frameworks from different organizations. We begin with the OECD’s foundational AI principles, which established international consensus on AI values, as well as UNESCO’s recommendation on AI ethics, which addresses the broad societal implications of AI development. Following those are three more technical standards that translate high-level commitments into actionable practices: the U.S. National Institute of Standards and Technology (NIST) AI Risk Management Framework, the ISO/IEC 42001 international standard for AI management systems, and the IEEE 7000-2021 standard for ethical system design. Taken together, these five standards should give organizations a solid foundation on which to build responsible and ethical AI systems.

OECD Recommendation on Artificial Intelligence

In 2019, the Organisation for Economic Co-operation and Development (OECD), an intergovernmental group of developed nations, established five core principles that form a global consensus on the responsible and trustworthy governance of AI: (1) inclusive growth, sustainable development and well-being, (2) respect for the rule of law, human rights, and democratic values, including fairness and privacy, (3) transparency and explainability, (4) robustness, security, and safety, and (5) accountability. These non-binding but influential principles emphasize a rights-based approach, guiding the development and deployment of AI systems in a way that promotes human rights and democratic values.

Governments around the world use the OECD recommendations and related tools to design policies and develop AI risk management frameworks, laying the groundwork for global interoperability across regulatory jurisdictions. OECD member countries are expected to actively support these principles and make their best efforts to implement them.

To keep pace with technology, the OECD recommendations were updated in 2023 and 2024. These revisions clarified the definition of AI systems to address systems that continue to evolve after deployment and those that incorporate generative AI to produce content. Today, many governments adopt the OECD’s definitions and classification of AI systems for harmonized and interoperable governance. The OECD framework has been adopted by the G20 and has significantly influenced landmark regulatory efforts such as the European Union’s AI Act and the NIST AI Risk Management Framework, discussed below.

UNESCO Recommendation on the Ethics of Artificial Intelligence

The General Conference of the United Nations Educational, Scientific and Cultural Organization (UNESCO) adopted its Recommendation on the Ethics of Artificial Intelligence in 2021 to address the broad societal implications of AI development. Endorsed by all 194 member states, the recommendations promote human rights and fundamental freedoms, centering on the protection of human rights through principles of “Do No Harm,” safety and security, fairness and nondiscrimination, privacy, sustainability, transparency, human oversight, and accountability. The recommendations call for concrete policy action on ethical governance and stewardship, robust data governance and protection, and comprehensive AI impact assessments that identify risks and benefits, provide ongoing monitoring, and mitigate concerns.

NIST AI Risk Management Framework 1.0

In January 2023, NIST released its AI Risk Management Framework (“AI RMF”), a voluntary set of guidelines addressed to individuals and organizations that want to act responsibly in developing products and services containing AI. Like other such standards, the AI RMF does not provide specific technical instructions but calls on organizations to establish a solid process for addressing AI-related risks. It emphasizes that AI systems must be trustworthy in ways that matter to everyone affected by them. The framework is designed to be flexible and to apply to any organization, regardless of size or sector.

The AI RMF breaks down AI management into four core functions: (1) “Govern” – implementing policies to encourage a culture of risk awareness and management with respect to AI systems, (2) “Map” – ensuring that people within the organization thoroughly understand the risks and benefits of the AI system in question, (3) “Measure” – continuously testing and monitoring the AI system to ensure its trustworthiness, and (4) “Manage” – making sure that enough resources are allocated to deal with the mapped and measured risks. The AI RMF also describes seven key characteristics of trustworthy AI: validity and reliability, safety, security and resilience, accountability and transparency, explainability and interpretability, privacy enhancement, and fairness with the management of harmful bias. This framework encourages organizations to consider the perspectives of diverse stakeholders, that is, anyone who may be affected by their AI systems.
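
To make this structure concrete, here is a minimal, purely illustrative sketch of how an organization might record risks in an internal register tagged with the RMF core function and trustworthiness characteristic each entry relates to. The class, field names, and example entry are hypothetical and are not part of the AI RMF itself.

```python
from dataclasses import dataclass, field

# The four AI RMF core functions and seven trustworthiness characteristics.
RMF_FUNCTIONS = {"Govern", "Map", "Measure", "Manage"}
CHARACTERISTICS = {
    "valid and reliable", "safe", "secure and resilient",
    "accountable and transparent", "explainable and interpretable",
    "privacy-enhanced", "fair with harmful bias managed",
}

@dataclass
class RiskEntry:
    """One hypothetical row in an internal AI risk register."""
    description: str
    rmf_function: str           # which core function the activity belongs to
    characteristic: str         # which trustworthiness characteristic is at stake
    owner: str                  # team responsible for mitigation
    mitigations: list[str] = field(default_factory=list)

    def __post_init__(self):
        # Guard against typos when tagging entries.
        if self.rmf_function not in RMF_FUNCTIONS:
            raise ValueError(f"Unknown RMF function: {self.rmf_function}")
        if self.characteristic not in CHARACTERISTICS:
            raise ValueError(f"Unknown characteristic: {self.characteristic}")

# Example (hypothetical) entry.
entry = RiskEntry(
    description="Chatbot may expose personal data in generated answers",
    rmf_function="Measure",
    characteristic="privacy-enhanced",
    owner="ML Platform Team",
    mitigations=["Output PII filtering", "Quarterly red-team review"],
)
```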

Unlike the ISO standard described below, the AI RMF is meant to serve as guidance rather than a formal set of requirements, and it is not subject to formal certification schemes. It is well suited for organizations that seek responsible AI development but are not yet ready to engage in a formal certification process.

ISO/IEC 42001:2023 Artificial Intelligence Management System

ISO/IEC 42001 (“ISO 42001”) is an international standard promulgated in December 2023 by the International Organization for Standardization (ISO) and the International Electrotechnical Commission (IEC). It focuses on the management system surrounding AI, as opposed to the AI systems themselves. Indeed, it is billed as “the world’s first AI management system standard.” Although compliance with this standard is voluntary, ISO 42001 sets out a more formal set of guidelines that organizations can use to create and manage a well-functioning AI management system (or “AIMS”) while balancing governance with innovation.

As with the AI RMF discussed above, ISO 42001 is intended to ensure that AI systems are developed and used in ways that are ethical and trustworthy and that comport with the organization’s objectives and stakeholder expectations. This standard likewise has a broad reach and may apply to any organization that uses AI in its products or services. Importantly, ISO 42001 is designed not to replace, but to complement, other management system standards. Unlike the AI RMF, ISO 42001 is designed for compliance certification.

ISO 42001 follows the widely used “Plan-Do-Check-Act” methodology through 10 structured clauses, seven of which consist of “mandatory” requirements: context, leadership, planning, support, operation, performance evaluation, and improvement. Within these clauses, the “Plan” step involves defining the scope of the AIMS, identifying risks and opportunities, and setting objectives. The “Do” step implements AI governance policies and controls, addressing concerns such as fairness and transparency, and conducts regular risk assessments. “Check” monitors, measures, and evaluates AI system performance, while “Act” continuously improves the AIMS through corrective actions. A simplified sketch of this cycle appears below.
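
The following is a toy sketch of the Plan-Do-Check-Act loop, assuming hypothetical function names and data. It illustrates only the iterative shape of the cycle and does not correspond to specific ISO 42001 clauses or controls.

```python
def plan():
    """Define AIMS scope, identify risks and opportunities, set objectives."""
    return {"scope": "customer-facing AI features", "objectives": ["fairness", "transparency"]}

def do(plan_output):
    """Implement governance policies and controls; run risk assessments."""
    return {"controls_deployed": True, "assessment_findings": ["bias in ranking model"]}

def check(do_output):
    """Monitor, measure, and evaluate AI system performance against objectives."""
    return {"nonconformities": do_output["assessment_findings"]}

def act(check_output):
    """Apply corrective actions and feed lessons back into the next cycle."""
    return [f"corrective action: {issue}" for issue in check_output["nonconformities"]]

# One pass through the cycle; in practice the loop repeats continuously.
actions = act(check(do(plan())))
```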

IEEE 7000-2021 Standard Model Process for Addressing Ethical Concerns during System Design

IEEE 7000 was published in 2021 by the Institute of Electrical and Electronics Engineers, before generative AI exploded into public consciousness with the introduction of ChatGPT by OpenAI. This standard is addressed primarily to engineers and technical workers developing software-based products and services (or “systems”). It strives to ensure that ethical principles—such as transparency, sustainability, privacy, fairness, and accountability—are integrated into system design from the very beginning, regardless of whether the system uses AI. To be clear, the standard itself does not prescribe any specific ethical values; it simply provides a process by which such values may be elicited from management and other stakeholders and integrated into the system. 

The IEEE 7000 standard consists of five main processes: (1) defining the system’s stakeholders and its expected operation and context of use, (2) eliciting ethical values from various stakeholders, (3) formulating specific ethical value requirements for the system, (4) ensuring that these ethical requirements are implemented into the design of the system, and (5) maintaining transparency throughout the process, including sharing how ethical concerns have been addressed during system design. Part of this whole process involves creating a “Value Register” that documents applicable ethical values and traces them through to concrete system requirements and design features. 
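
The Value Register is essentially a traceability record. As a rough sketch (the field names and example content below are assumptions for illustration, not the standard’s normative schema), each elicited value can be linked to the value requirements and design features derived from it:

```python
from dataclasses import dataclass

@dataclass
class ValueRegisterEntry:
    """Hypothetical Value Register row tracing a value to design decisions."""
    ethical_value: str             # value elicited from stakeholders
    value_requirements: list[str]  # ethical value requirements derived from it
    design_features: list[str]     # system features that implement those requirements
    stakeholders: list[str]        # who raised or is affected by the value

register = [
    ValueRegisterEntry(
        ethical_value="Privacy",
        value_requirements=["The system shall minimize collection of personal data"],
        design_features=["On-device preprocessing", "30-day retention limit"],
        stakeholders=["End users", "Data protection officer"],
    ),
]

# Traceability check: every value should map to at least one design feature.
for entry in register:
    assert entry.design_features, f"No design feature traces to value '{entry.ethical_value}'"
```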

Like the AI RMF, the IEEE standard is voluntary and is not designed for formal certification. Rather, it is intended to provide organizations with a practical and auditable process to demonstrate that they have considered the ethical implications and risks of their systems and embedded stakeholder values into their design decisions.

Impact of Responsible AI Governance Frameworks

Although these AI frameworks share common foundational elements, each has its own focus area and nuances. These governance frameworks are generally applicable to both developers and deployers and tend to be industry-agnostic. Although adoption remains voluntary in most sectors and jurisdictions, recognized AI governance frameworks are increasingly being incorporated by reference into laws and regulatory guidance.

To illustrate, the EU AI Act follows the OECD’s definition of AI systems. Colorado’s AI Act requires deployers of high-risk AI systems to maintain a risk management program that is reasonable in light of established frameworks such as the AI RMF, ISO 42001, or other nationally or internationally recognized frameworks that are substantially equivalent. These frameworks appear as reference points in subregulatory guidance, industry codes of conduct, and standards of practice that reflect prevailing industry norms.

As organizations formalize their AI governance policies, they need to reconcile overlapping expectations. Efforts are underway to map concepts, align guidelines, and develop crosswalks to support harmonized implementation. For example, NIST has made alignment with international standards a priority and has published crosswalks from its AI Risk Management Framework to the OECD Recommendation on AI and to ISO 42001. MITRE has led initiatives to standardize and differentiate core governance concepts across multiple frameworks, and author James Kavanaugh has compiled and distilled hundreds of overlapping AI controls into a unified tool.
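
Conceptually, a crosswalk is simply a mapping from items in one framework to related items in another. The sketch below shows that general shape using placeholder identifiers; the pairings are illustrative and are not taken from NIST’s published crosswalks.

```python
# Hypothetical crosswalk structure: framework A item -> related framework B items.
# The identifiers and pairings below are placeholders for illustration only.
crosswalk: dict[str, list[str]] = {
    "AI RMF / Govern (example item)": ["ISO 42001 clause on leadership (example)"],
    "AI RMF / Measure (example item)": ["ISO 42001 clause on performance evaluation (example)"],
}

def related_items(rmf_item: str) -> list[str]:
    """Look up which items in the other framework address a given RMF item."""
    return crosswalk.get(rmf_item, [])

print(related_items("AI RMF / Measure (example item)"))
```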

Many organizations striving to adhere to these principles must adapt them to particular use cases. Highly regulated sectors such as healthcare, financial services, and government contracting have public-private partnerships and industry associations developing sector-specific guidelines that address their unique risks, compliance obligations, and ethical considerations. For example, the Coalition for Health AI (CHAI) and the Health AI Partnership have each developed consensus-based frameworks grounded in real-world clinical applications that provide actionable, context-specific guidance.

Even without formal legal mandates, following such frameworks can demonstrate “reasonable care” in developing and deploying AI systems or serve as documentation for regulatory compliance. For instance, CHAI recently partnered with The Joint Commission (TJC) to create an evidence-based certification process aligned with Medicare accreditation standards. This marks a shift from aspirational principles toward enforceable governance norms for AI in healthcare.

Conclusion

The five standards described above—OECD AI principles, UNESCO Recommendation on AI ethics, AI RMF, ISO 42001, and IEEE 7000—are complementary rather than competing; all encourage ethical and responsible AI technology development but serve different purposes. OECD and UNESCO establish broad policy frameworks that can serve as a foundation for any organization developing or deploying AI systems. 

The AI RMF provides a flexible, non-certifiable structure for AI risk assessment, while ISO 42001 provides specific practices and controls for building and running an AI governance system that is certifiable. An organization might use the AI RMF for initial risk assessment and governance planning, and then implement ISO 42001 for formal certification and a more systematic management of its AI systems. And IEEE 7000 provides a standards-based design process applicable to any system type, ensuring that all stakeholders’ values are considered and implemented throughout system development. 

Organizations can layer these approaches to translate high-level ethical principles and AI risk management structures into concrete AI management controls and design standards. This approach can align AI assurance programs with binding laws and best practices. 

Republished with permission. This article, "Global AI Governance: Five Key Frameworks Explained," was originally published by The AI Journal on August 4, 2025.