The European Union’s Artificial Intelligence Act operationalizes a risk-based governance logic that is now permeating international AI frameworks, albeit unevenly. While the Act’s tiered risk taxonomy remains largely unique to the EU, its underlying logic is subtly reshaping global regulatory rhetoric, stakeholder expectations, and institutional designs.
Risk-based Regulatory Frameworks for AI
At its core, a risk-based regulatory framework calibrates oversight and intervention in proportion to the perceived severity and likelihood of harm posed by a regulated activity. Risk-based models enable regulators to differentiate obligations by assessed risk level, imposing more stringent requirements on high-risk entities and lighter oversight on those deemed lower-risk[1]. Such models have been prominent across a variety of domains, from finance and public health to environmental protection, and now Artificial Intelligence (AI), led by the EU AI Act.
Yet applying this risk-based model to AI introduces novel challenges. Unlike prior domains, AI systems are dynamic, non-deterministic, and context-sensitive. AI tools have advanced rapidly and interact with their environments in unpredictable ways, generating emergent, unanticipated outputs that vary with context, input data, or integration. In short, AI risks evolve over time, and static, one-size-fits-all regulation is insufficient to address them.
The EU AI Act
The EU AI Act is an unprecedented regulation: the first legally binding attempt to operationalize a four-tier risk-based framework in AI governance. While other frameworks have established guiding principles, the EU AI Act, proposed in 2021 and adopted in 2024, established a tiered system of risk classification tied to the potential impact of AI systems on safety, health, and fundamental rights (Article 1)[2]. It is the first framework of its kind to carry legally binding market obligations.
Under the tiered approach, AI systems posing the highest, “unacceptable” risk are deemed a clear threat to safety and are prohibited outright; the Act explicitly cites examples such as social scoring and real-time biometric identification for law enforcement purposes[2]. A step lower are “high-risk” systems, which are permitted but must comply with strict legal requirements. AI in critical infrastructure, education, employment, migration, and justice falls under this category[2]. Because high-risk systems have significant potential to affect health, safety, or fundamental rights, they must implement extensive risk mitigation measures before deployment. The third tier, “limited risk,” carries lighter obligations but requires transparency measures, such as notifying users when they are interacting with a chatbot or viewing AI-generated content[2]. Last are “minimal or no-risk” systems, the vast majority of applications, such as AI filters or AI in video games[2]. These measures aim to preserve human autonomy and trust by ensuring users know when AI is involved in their digital interactions. Functionally, the EU AI Act implements proportionality: regulatory burdens scale with the level of risk, and only the highest-risk uses trigger prescriptive obligations[3].
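The four-tier structure described above can be made concrete with a toy sketch. The tier names and the example use cases follow the text; the category assignments and function names here are illustrative simplifications, not a legal classification tool.

```python
from enum import Enum

class RiskTier(Enum):
    """The AI Act's four tiers, paired with the obligation each triggers (per the text above)."""
    UNACCEPTABLE = "prohibited outright"
    HIGH = "permitted only with strict ex-ante requirements"
    LIMITED = "transparency obligations only"
    MINIMAL = "no additional obligations"

# Simplified example assignments drawn from the article's examples,
# not an authoritative reading of the Act or its annexes.
EXAMPLE_CLASSIFICATIONS = {
    "social scoring": RiskTier.UNACCEPTABLE,
    "real-time biometric ID for law enforcement": RiskTier.UNACCEPTABLE,
    "CV screening for employment": RiskTier.HIGH,
    "customer-service chatbot": RiskTier.LIMITED,
    "spam filter": RiskTier.MINIMAL,
}

def obligations(use_case: str) -> str:
    """Look up a use case's tier and report the obligation it carries."""
    tier = EXAMPLE_CLASSIFICATIONS[use_case]
    return f"{use_case}: {tier.name} -> {tier.value}"
```

The point of the sketch is proportionality: the data structure itself encodes that obligations attach to the tier, not to the individual application.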
High-risk AI systems are subject to a detailed set of ex-ante compliance obligations, elaborated collectively in Articles 9 through 15 of the AI Act[4]. These include risk-management systems (Article 9); data quality and governance (Article 10); and accuracy, robustness, and cybersecurity (Article 15)[5]. To place high-risk AI systems on the EU market, providers must implement risk controls, technical safeguards, and oversight mechanisms by design[6]. Beyond its tiered risk taxonomy, the EU AI Act should be understood as a governance architecture: a multilayered system of procedures and standards designed to embed the risk-based logic across the entire AI life cycle. Yet the EU does not regulate in a vacuum. AI, like the global digital infrastructure it runs on, spans economies, governments, and languages.
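The ex-ante character of these obligations can be sketched as a pre-deployment checklist: nothing ships until every listed requirement is satisfied. The article-to-requirement pairings follow the text above; the checklist structure and function names are this sketch’s invention, not the Act’s.

```python
# Hypothetical pre-deployment gate for a high-risk AI system.
# Articles 9, 10, and 15 are the ones named in the text; the Act's
# Articles 9-15 contain further obligations not modeled here.
HIGH_RISK_CHECKLIST = [
    ("Art. 9", "risk-management system established and maintained"),
    ("Art. 10", "training data meets quality and governance criteria"),
    ("Art. 15", "accuracy, robustness, and cybersecurity validated"),
]

def ready_for_deployment(completed: set) -> bool:
    """A provider may deploy only once every listed obligation is satisfied."""
    return all(article in completed for article, _ in HIGH_RISK_CHECKLIST)

def outstanding(completed: set) -> list:
    """List the obligations still open, for a compliance report."""
    return [req for article, req in HIGH_RISK_CHECKLIST if article not in completed]
```

This "gate before market entry" shape is what distinguishes ex-ante regimes from ex-post liability models, which react only after harm occurs.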
While the AI Act has set a precedent domestically, the extent of its diffusion has varied. Some jurisdictions have embraced risk-based rhetoric and incorporated similar elements of policy design; others have taken more fragmented approaches. It is also vital to note that “AI” means different things in different jurisdictions, and regulation grows increasingly complex as we zoom out globally: it varies in form and conceptual approach and at times overlaps. Even so, the EU AI Act has undeniably begun to shape AI regulatory frameworks toward a risk-based approach in several key jurisdictions[7].
Other AI Regulatory Approaches
The EU AI Act has permeated internationally to varying degrees. AI regulation as a whole has become commonplace as the technology has garnered exponential attention. Some nations have adopted the language of risk without enforcement mechanisms, others have adapted EU-like structures, and some have pursued entirely different models aligned with national priorities. Notably, regulatory rhetoric is beginning to shift toward a risk-based emphasis, layered on top of the established precedent of principle-based frameworks.
OECD AI Principles
Preceding the EU AI Act’s proposal by two years, the Organization for Economic Co-operation and Development (OECD) laid the groundwork for global AI governance through the OECD AI Principles (2019). These were the first intergovernmental standards on AI, endorsed by 48 countries[8], and they outlined the preliminary framework for the principle-based standards that many nations, including the EU, went on to develop[9]. This principle-based framework articulated five broad values: (1) inclusive growth and sustainable development; (2) human-centered values and fairness; (3) transparency and explainability; (4) robustness and safety; and (5) accountability[9].
While not binding, the OECD Principles have significantly influenced subsequent soft laws and voluntary frameworks internationally, as well as more enforceable frameworks like the EU’s. They were an early articulation of AI risks that helped legitimize risk-based governance as a norm. Even though this initial framework lacked the rigidity and enforceability of the EU AI Act, it was vital to the formation of such succeeding regulations. Its influence remains significant in multilateral spaces, particularly as a bridge between regulation-heavy and less regulated jurisdictions. Current OECD policy papers, such as “Towards a common reporting framework for AI incidents,” not only promote AI risk frameworks but also analyze the criteria such frameworks should include and how to aggregate them[7]. Still, the Principles serve as global coordination scaffolding more than a regulatory blueprint.
AI Regulatory Approach in the United States
While the United States has endorsed the OECD AI Principles, its domestic approach to AI regulation has been more fragmented, currently resting on sector-specific rules and voluntary guidance. The Biden administration issued Executive Order 14110 in 2023, which established a framework for federal agencies to develop and adopt risk-based AI governance practices, including requiring agencies to assess algorithmic impacts and ensure protections for civil rights, privacy, and public safety[10]. However, the second Trump administration rescinded this EO with Executive Order 14179 in 2025, replacing it with a directive focused on accelerating domestic AI innovation and minimizing regulatory burdens[11].
Similarly, Congress has considered numerous AI bills, yet none has come to fruition, and most focus on maintaining competitiveness rather than risk management. While the US does have some federal laws with limited AI applications[7], these are narrow in scope and do not address risk as a framework. And although early drafts of the Trump administration’s “Big Beautiful Bill” proposed prohibiting states from enacting their own AI regulations, the final legislation excluded this provision following bipartisan pushback. As it stands, only two key frameworks, aside from those implemented internally at AI companies, directly target AI under a similar risk-based approach: the National Institute of Standards and Technology (NIST) AI Risk Management Framework (RMF)[12] and the Biden administration’s Blueprint for an AI Bill of Rights[13]. Given the second Trump administration’s differing approach to AI governance, the Blueprint is unlikely to have much regulatory impact in the near future.
The Blueprint for an AI Bill of Rights centered on five principles directly in line with the OECD Principles and aligned with the EU AI Act’s concern for safety, health, and fundamental rights, though it is less risk-focused[13]. The NIST AI RMF, on the other hand, embodies a risk-based logic emphasizing documentation, transparency, and proportionality to harm[12]. Both frameworks emphasize the need for ex-ante assessment of AI systems. Yet both fall short: they lack legal enforceability and binding compliance mechanisms, remaining voluntary statements of best practice.
AI Regulatory Approach in the United Kingdom
Rather than implementing a single statutory act like the EU, the UK has opted for a decentralized, principle-based model, which took shape in the AI Regulation White Paper of March 2023[14]. The framework emphasizes five cross-sectoral principles: safety, transparency, fairness, accountability, and contestability, aligning with the OECD’s approach and the EU AI Act’s core principles. The UK has diverged from the EU in leaving these principles to be interpreted and applied by existing regulators in a preliminary, non-binding fashion[7] more aligned with the OECD approach. Recently, however, this vision has begun to shift toward a more binding framework.
This decentralized model initially eschewed rigid risk classification, favoring contextual assessments of AI applications for the sake of adaptability and rapid response to technological advancement, but it has recently come under criticism for potential inconsistency and under-enforcement[7]. The ensuing debate led to the introduction of the Artificial Intelligence (Regulation) Bill, which aims to create a new regulatory body that addresses AI through a risk-based lens, though not in a tiered format[15]. The bill seeks to establish an AI Authority that blends the UK’s current sector-specific approach, where existing regulators oversee AI within their domains, with a more centralized oversight model to ensure consistency across sectors. This centralization echoes aspects of the EU AI Act, such as coordinated risk assessment and unified compliance standards. The Authority’s tasks would include monitoring economic risks from AI, conducting horizon scanning, and accrediting AI auditors[15], but the bill has yet to be codified.
AI Regulatory Approach in Canada
Canada has proposed the Artificial Intelligence and Data Act (AIDA) as part of Bill C-27, the Digital Charter Implementation Act, 2022[16]. The approach draws directly on the EU AI Act, defining itself as risk-based and setting measures for evaluating whether AI systems are “high-impact.” Such systems are subject to more stringent governance requirements and include cases paralleling the AI Act’s: employment, public services, biometric systems, and decision-making models[16]. However, the law’s current iteration does not exhaustively define what high-impact AI systems look like, leaving critics concerned about the lack of clarity and about whether it will ever come into effect[7]. That said, AIDA’s focus on transparency, accountability, and human oversight reflects convergence with the EU’s risk-based principles, even if its legal mechanisms are still emerging. Its eventual success, or failure, hinges on whether enforcement authority, audit mechanisms, and definitional clarity are robustly developed in subsequent regulations.
AI Regulatory Approach in China
China has emerged as one of the most proactive regulators of AI, but its approach diverges significantly from European and North American models. China’s AI governance is embedded in a state-centric regulatory paradigm prioritizing social stability, national security, and ideological control. The Interim Measures for the Management of Generative Artificial Intelligence Services (2023) set the framework of regulatory provisions for public-facing generative AI systems, requiring providers to undergo security assessments and ensure content compliance while assigning clear responsibilities within a legally binding framework[17][18].
Other regulations address transparency and synthetic content: the Algorithmic Recommendation Regulation (2022)[19] and the Internet Information Service Deep Synthesis Management Provisions (2023), respectively[20]. Together these frameworks represent an enforceable, top-down regulatory architecture in which AI developers and platforms are subject to pre-deployment filing requirements and restrictions. While their clarity and rapid implementation are notable, these regulations show a clear lack of human rights-based considerations and a tendency toward overregulation[7]. China’s model may arguably lead the EU’s in speed, scope, and enforcement capability, but its security-first framing and lack of transparency pose challenges for international interoperability[7].
Implications
The diffusion of risk-based AI governance highlights a global contest over how to align technological innovation with public values. While the EU AI Act represents the most legally robust model to date, alternative frameworks, whether voluntary, sectoral, or authoritarian, underscore the diverse regulatory logics shaping future AI oversight. The Act has set a powerful precedent, but it is not all-encompassing: its rigidity may hinder adaptability in rapidly evolving AI environments, and its stringent compliance measures may burden innovation. Still, the Act’s risk-based logic has begun to permeate soft law, voluntary frameworks, and regulatory rhetoric, though adoption remains uneven at best.
Models like the NIST AI RMF and the OECD AI Principles reflect the global appeal of proportionality, even when they lack legal teeth. In truth, no current framework offers a perfect solution. In the balancing act between innovation, rights protection, competitiveness, and national priorities, it is easy to doubt any framework will ever be “perfect.” Risk-based governance offers an instructive lens, but not one without its own risks: ambiguity, under-regulation, and over-reliance on self-assessment mechanisms all raise concern, especially for systems evolving at AI’s unprecedented rate. As technology evolves, so too must our frameworks, and they are, but one is clearly outpacing the other. And while the EU AI Act might lead today, the NIST RMF’s clarity, flexibility, and technical granularity have much to recommend them. Risk governance, after all, is as much about methodology as it is about legislation.
Uncoordinated regulation breeds compliance confusion and leaves the public unevenly protected. The goal should not be to shackle innovation but to adopt agile, risk-based guardrails that safeguard security, civil rights, and the foundations of trust, without which responsible AI progress cannot endure.
Sources
[1] Robert Baldwin, Martin Cave, and Martin Lodge, Understanding Regulation: Theory, Strategy, and Practice (Oxford: Oxford University Press, 2012).
[2] European Union, “AI Act,” Shaping Europe’s Digital Future, accessed May 31, 2025, https://digital-strategy.ec.europa.eu/en/policies/regulatory-framework-ai.
[3] Martin Ebers, “Truly Risk-Based Regulation of Artificial Intelligence – How to Implement the EU’s AI Act,” SSRN, June 26, 2024, https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4870387.
[4] European Union, “Regulation (EU) 2024/1689,” Official Journal of the European Union, accessed May 31, 2025, https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=CELEX%3A32024R1689.
[5] Olivier Proust and Victoria Hordern, “Top 10 Operational Impacts of the EU AI Act – Obligations on Providers of High-Risk AI Systems,” IAPP, accessed May 31, 2025, https://iapp.org/resources/article/top-impacts-eu-ai-act-high-risk-ai-providers/.
[6] Aída Ponce Del Castillo, Beryl ter Haar, and Aria Huys, “The EU’s AI Act: Governing through Uncertainty and Complexity, Identifying Opportunities for Action,” Global Workplace Law & Policy, June 20, 2024, https://global-workplace-law-and-policy.kluwerlawonline.com/2024/06/20/the-eus-ai-act-governing-through-uncertainty-and-complexity-identifying-opportunities-for-action/.
[7] White & Case, “Ai Watch: Global Regulatory Tracker,” White & Case LLP, accessed May 31, 2025, https://www.whitecase.com/insight-our-thinking/ai-watch-global-regulatory-tracker#home.
[8] “Recommendation of the Council on Artificial Intelligence,” OECD Legal Instruments, accessed June 1, 2025, https://legalinstruments.oecd.org/en/instruments/OECD-LEGAL-0449#adherents.
[9] OECD, “AI Principles | OECD,” OECD.org, accessed June 1, 2025, https://www.oecd.org/en/topics/sub-issues/ai-principles.html.
[10] Laurie Harris and Chris Jaikaran, “Highlights of the 2023 Executive Order on Artificial Intelligence for Congress,” Congress.gov, Library of Congress, accessed June 1, 2025, https://www.congress.gov/crs-product/R47843.
[11] “Removing Barriers to American Leadership in Artificial Intelligence,” The White House, January 23, 2025, https://www.whitehouse.gov/presidential-actions/2025/01/removing-barriers-to-american-leadership-in-artificial-intelligence/.
[12] NIST, “Ai Risk Management Framework,” NIST, May 5, 2025, https://www.nist.gov/itl/ai-risk-management-framework.
[13] “Blueprint for an AI Bill of Rights | OSTP | The White House,” National Archives and Records Administration, accessed May 31, 2025, https://bidenwhitehouse.archives.gov/ostp/ai-bill-of-rights/.
[14] UK Government, “A Pro-Innovation Approach to AI Regulation,” GOV.UK, accessed May 31, 2025, https://www.gov.uk/government/publications/ai-regulation-a-pro-innovation-approach/white-paper.
[15] James Tobin, “Artificial Intelligence (Regulation) Bill [HL],” UK Parliament, accessed June 1, 2025, https://researchbriefings.files.parliament.uk/documents/LLN-2024-0016/LLN-2024-0016.pdf.
[16] Innovation, Science and Economic Development Canada, “Artificial Intelligence and Data Act (AIDA) – Companion Document,” Government of Canada, January 31, 2025, https://ised-isde.canada.ca/site/innovation-better-canada/en/artificial-intelligence-and-data-act-aida-companion-document#s6.
[17]“China’s Interim Measures to Regulate Generative AI Services: Key Points,” China Briefing News, July 27, 2023, https://www.china-briefing.com/news/how-to-interpret-chinas-first-effort-to-regulate-generative-ai-measures/.
[18] “Interim Measures for the Management of Generative Artificial Intelligence Services” (生成式人工智能服务管理暂行办法), Cyberspace Administration of China, accessed June 1, 2025, https://www.cac.gov.cn/2023-07/13/c_1690898327029107.htm.
[19] “Translation: Internet Information Service Algorithmic Recommendation Management Provisions – Effective March 1, 2022,” DigiChina, February 28, 2022, https://digichina.stanford.edu/work/translation-internet-information-service-algorithmic-recommendation-management-provisions-effective-march-1-2022/.
[20] “Translation: Internet Information Service Deep Synthesis Management Provisions (Draft for Comment) – Jan. 2022,” DigiChina, April 12, 2023, https://digichina.stanford.edu/work/translation-internet-information-service-deep-synthesis-management-provisions-draft-for-comment-jan-2022/.