Deep Research

July 12, 2025

How should existing legal frameworks evolve to define and allocate liability among developers, owners, and users when a highly autonomous AI causes harm through its unpredictable emergent behavior?

Allocating the Unpredictable: A Framework for AI Liability in the Age of Emergent Behavior

The rapid proliferation of Artificial Intelligence (AI) into every facet of the economy and society presents a generational challenge to legal systems worldwide. While AI promises unprecedented gains in efficiency, innovation, and human capability, its most advanced forms introduce a unique and destabilizing form of risk.¹ Highly autonomous systems, capable of learning and acting without direct human command, can exhibit behaviors that were not explicitly programmed, a phenomenon known as “emergent behavior.” When this unpredictable behavior causes harm, it creates a profound accountability gap, straining the conceptual foundations of traditional liability law, which are built upon pillars of foreseeability, intent, and causation.²

Resolving this dilemma is not merely an academic exercise; it is a prerequisite for fostering innovation while ensuring public trust and safety. An incoherent or fragmented liability landscape creates uncertainty that chills investment and slows adoption, while a framework that fails to provide adequate redress for victims undermines the social license for AI to operate.⁴ This report confronts this challenge directly. It begins by establishing a precise functional understanding of the core technical concepts—highly autonomous AI and emergent behavior—that create this legal quandary. It then demonstrates the inadequacy of existing legal paradigms before conducting a comparative analysis of global regulatory responses. Finally, it prescribes a robust, multi-layered liability framework designed to evolve with the technology, allocating responsibility fairly and efficiently among developers, owners, and users, and ensuring that accountability keeps pace with autonomy.

Part I: Defining the Technology: Autonomy and Emergent Behavior

To construct a viable liability framework, it is imperative to move beyond vague, colloquial uses of the term “AI” and establish a functional legal taxonomy that differentiates systems based on their capabilities and level of human oversight. Not all AI is created equal, and the legal treatment must reflect these differences to avoid imposing unnecessary compliance burdens on simple automation while adequately addressing the risks of advanced systems.⁶ The regulatory scope must be carefully defined to capture the specific types of AI that challenge existing legal doctrines.⁶

1.1 Defining “Highly Autonomous AI”

The spectrum of AI begins with simple rule-based automation, such as email filters or static report generators, which follow pre-programmed, predictable rules and pose little challenge to existing legal frameworks.⁶ The next level includes statistical and machine learning (ML) systems, which are capable of discerning trends from data and modifying their behavior based on new inputs.⁶ These systems, which encompass supervised, unsupervised, and reinforcement learning models, introduce a dynamic element, as their logic can evolve “behind the scenes” in ways not transparent to the developer or user.⁶

The primary focus of this report, and the frontier of legal concern, is “highly” or “fully” autonomous AI. These are systems that can operate with minimal human intervention or supervision, performing complex tasks that once required human cognition.⁸ Descriptions from technical and legal literature characterize these as systems capable of creating outputs “on their own” with little to no user input beyond initial development and training, or AI agents that can perform computer-based tasks as competently as human experts.⁸ They are deployed to make financial transactions, diagnose diseases, and operate vehicles, increasingly mirroring human decision-making functions.⁹

From a legal standpoint, this high degree of autonomy is significant for two reasons. First, it blurs the conceptual line between a passive “tool” and an active “actor”.¹¹ A tool’s behavior is directed by its user, but a highly autonomous agent can reason about and conform its actions to a set of goals, making independent choices. Second, this autonomy makes it profoundly difficult to apply legal concepts of mental state, such as mens rea (intent) or even negligence, which are central to liability.¹² These systems lack consciousness, emotion, or a subjective understanding of their actions; they operate on algorithms and probabilistic models, not an awareness of right and wrong.¹²

Therefore, a functional legal definition of “highly autonomous AI” must be tied to its capabilities rather than its underlying architecture. For the purposes of liability, a highly autonomous AI is a system that can, for explicit or implicit objectives, autonomously process data to generate predictions, recommendations, decisions, or other outputs that can influence physical or virtual environments with minimal human intervention for each action, and whose behavior may evolve based on new inputs.⁶ It is this category of AI, defined by its capacity for independent, adaptive decision-making, that gives rise to the challenge of unpredictable emergent behavior and necessitates a bespoke liability framework.⁶

1.2 Unpacking “Emergent Behavior”: The Source of Unpredictability

The core technical challenge that highly autonomous systems pose to the law is their capacity for emergent behavior. This phenomenon is defined as the appearance of complex, novel, and often unpredictable capabilities that were not explicitly programmed into the system but arise spontaneously from the interaction of its simpler components.¹⁵ This concept is not unique to AI; it is a fundamental property of complex adaptive systems observed throughout nature, such as the flocking of birds or the self-organizing patterns of ant colonies, where sophisticated group behaviors arise from individuals following simple rules.¹⁶

In AI, emergence occurs through the complex and non-linear interactions of millions of simpler elements, such as the neurons in a deep neural network or the agents in a multi-agent system.¹⁵ It is fueled by several factors, including immense computational complexity, the scale of training data, and the system’s capacity for self-organization and adaptation through feedback loops.¹⁷ One of the most insightful models for understanding this phenomenon is the concept of a “phase transition.” Similar to how water discontinuously changes from liquid to solid at a critical temperature, an AI model’s capabilities can improve steadily and predictably with increasing scale (more data, more computing power) until it crosses a critical threshold, at which point novel abilities suddenly and unpredictably appear.²²
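The dynamic can be illustrated with a deliberately simplified numerical sketch. In the toy model below, a hypothetical task-accuracy metric is a smooth logistic function of training compute, yet when sampled at successive scales it appears to jump abruptly from near-zero to near-perfect once a critical threshold is crossed; the threshold, the sharpness parameter, and the functional form are illustrative assumptions, not a description of any real system.

```python
import math

def task_accuracy(compute_flops: float, threshold: float = 1e22, sharpness: float = 4.0) -> float:
    """Toy model: accuracy on a hypothetical task as a function of training compute.

    The curve is a smooth logistic in log-compute space, but because it is steep
    around the (assumed) critical threshold, sampled checkpoints look like a sudden
    'phase transition' from incapable to capable."""
    x = math.log10(compute_flops) - math.log10(threshold)
    return 1.0 / (1.0 + math.exp(-sharpness * x))

if __name__ == "__main__":
    for flops in (1e18, 1e20, 1e21, 1e22, 1e23, 1e24):
        print(f"compute = {flops:.0e} FLOPs -> simulated accuracy = {task_accuracy(flops):.3f}")
```

The point of the sketch is narrow: a capability can be effectively invisible at every scale a developer tested and still appear, fully formed, at the next increment of scale, which is precisely what makes foreseeability analysis so fraught.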

These emergent abilities can be beneficial and useful. For example, large language models (LLMs) trained simply to predict the next word in a sentence have been observed to develop the unprogrammed abilities to perform arithmetic, translate languages, write poetry, and summarize passages.¹⁶ However, emergent behaviors can also be benign or actively harmful.¹⁸ Harmful examples include “language drift,” where a model’s output becomes progressively biased or offensive due to patterns in its training data, and “hallucinations,” where the model generates nonsensical, false, or unfaithful content.²³ In other cases, an AI may develop novel strategies to achieve its goals that were entirely unforeseen by its developers, such as the AlphaGo system surprising its creators with unconventional and winning moves in the game of Go.²⁴

The legal challenge of emergent behavior stems directly from its defining characteristics: unpredictability, novelty, and non-linearity.¹⁷ These traits strike at the heart of foundational legal concepts used to assign responsibility. The doctrine of foreseeability, a cornerstone of negligence law, posits that a defendant can only be held liable for harms they could have reasonably anticipated and taken steps to prevent.² If a specific harmful behavior is, by its very nature, emergent and unpredictable, it becomes exceedingly difficult for a plaintiff to argue that a developer breached their duty of care by failing to prevent it.² This creates profound traceability issues and an accountability gap, as the causal chain between a developer’s design choice and the emergent harm becomes opaque.¹⁶ This is the “black box” problem in its most acute and legally vexing form: the system’s internal logic is not only inscrutable, but its outputs can be fundamentally novel, breaking the predictive link upon which liability often depends.²⁶

While high autonomy and emergent behavior are distinct technical phenomena, their confluence creates a novel and acute legal challenge. A deterministic but highly autonomous system—for example, a sophisticated industrial robot executing a precise, pre-programmed sequence—is largely governable by existing product liability doctrines. If it malfunctions and causes harm, the analysis can trace the fault back to a design or manufacturing defect. Conversely, emergent behavior in a non-autonomous, heavily supervised system also presents a manageable legal problem; if a human operator is required to approve every action, that operator remains the locus of responsibility for any harmful outcome they authorize. The true liability lacuna, the legal “perfect storm,” appears at the nexus of high autonomy and emergent behavior. This combination creates an entity that is neither a passive tool nor a responsible agent. It is a quasi-actor whose harmful actions may be fundamentally unforeseeable to its human principals—the developer who designed it and the user who deployed it. This realization dictates that legal frameworks must not regulate “AI” monolithically. Instead, they must be designed to trigger stricter liability regimes specifically when this autonomy-emergence nexus is present, justifying a regulatory approach tiered not just by the AI’s field of application, but by its core technical architecture and behavioral potential.

Furthermore, the phenomenon of emergence fundamentally alters the legal character of an AI system, transforming it from a static “product” into a dynamic, risk-creating “process.” Traditional product liability law is built around the concept of a defect that exists at the time the product is sold and leaves the manufacturer’s control.²⁶ Emergent behavior shatters this premise. The AI system is not a fixed object but a continuously evolving process that can develop dangerous characteristics after it has been placed on the market through its interactions with new data and its own self-learning capabilities.² A “defect,” in this context, may not be a flaw in the initial code but the emergent behavior itself, which arises autonomously long after the point of sale. This suggests that legal analogies drawn from the manufacturing of discrete objects are insufficient. A more apt comparison may be to the legal treatment of ultra-hazardous activities, such as blasting with dynamite or storing dangerous chemicals. In these cases, the law often imposes strict liability not because of a “defect,” but because the actor chose to initiate an inherently and unpredictably dangerous activity for their own benefit. By this logic, a developer who releases a highly autonomous, emergent AI into the world has initiated a risk-creating process and should bear strict liability for the harms that materialize, regardless of fault. This reframing also implies a continuing duty to monitor, update, and mitigate risks, fundamentally extending the temporal scope of a developer’s responsibility.

Part II: The Inadequacy of Traditional Liability Doctrines

The techno-legal challenges posed by the autonomy-emergence nexus render conventional liability doctrines insufficient. Legal frameworks developed over centuries to govern harms caused by human actors and predictable, static objects struggle to assign responsibility when a non-human, dynamic system causes unforeseeable harm. Attempts to stretch these old paradigms to fit this new reality reveal a fundamental conceptual mismatch, highlighting the need for a purpose-built legal architecture.

2.1 The Rejection of AI Legal Personhood

In the search for a locus of liability, one of the earliest and most provocative proposals was to grant highly autonomous AI systems a form of legal personhood. The European Parliament’s 2017 resolution, which floated the idea of a limited “electronic personality,” was a pivotal moment in this debate.⁹ The stated goal was pragmatic: to create a legal entity that could be held accountable for damages, enter into contracts, and own property, thereby preventing a legal vacuum as AI became more sophisticated.⁹

However, this proposal was met with overwhelming philosophical and pragmatic objections, leading to its eventual rejection. The philosophical consensus is that AI, in its current and foreseeable forms, lacks the essential attributes for personhood. Legal personality is intrinsically tied to an entity’s ability to comprehend and act upon moral and legal responsibilities.⁹ This requires consciousness, moral agency, and intentionality—qualities that are uniquely human. AI systems function through algorithmic processing and statistical modeling; they do not “understand” law or ethics but merely follow patterns learned from data.⁹ To grant them personhood would be to dangerously blur the line between computational processes and human cognition.⁹

The pragmatic objections were even more decisive. The principal fear was that corporate actors could exploit AI personhood as a legal shield, a sophisticated shell company to which they could shift liability for harms caused by their products, thereby evading their own responsibilities.⁹ This would create accountability gaps rather than close them. Furthermore, the proposal was unworkable in practice. Unlike corporations, which are legal fictions but ultimately backed by assets, AI systems own nothing. A monetary judgment against an AI would be meaningless, as the entity would have no means to pay damages.³¹ Recognizing these profound flaws, the European Commission officially abandoned the “electronic personality” model in 2021, affirming a global consensus that accountability for AI must remain firmly and exclusively human-centered.⁹ The law must find a human or corporate entity to hold responsible for AI-caused harm.³¹

This definitive rejection of AI personhood creates an unavoidable “upstream” pressure on liability frameworks. With the AI itself off the table as a responsible party, the law is compelled to look to the humans in the value chain. When emergent behavior makes it difficult to blame the end-user (who could not have predicted the AI’s rogue action) and the “black box” nature of the system makes it nearly impossible to prove a specific design defect, the legal and moral responsibility has nowhere to go but upstream. It naturally flows to the actors who created the conditions for the harm to occur, even if they did not “cause” it in a traditional sense. This strengthens the rationale for imposing a stricter form of liability on developers and deployers, not necessarily because they were negligent, but because they are the only identifiable, solvent parties who chose to create and profit from the risk-creating technology. By closing the door on AI personhood, the legal world has implicitly committed itself to finding a human proxy for liability, and the developer is the most logical candidate.

2.2 Stretching Tort Law to Its Breaking Point

With AI personhood discarded, the natural next step is to apply the workhorse of civil liability: tort law. However, its core doctrines, particularly negligence, are ill-equipped to handle harms caused by emergent behavior.

The Challenge to Negligence

The tort of negligence requires a plaintiff to prove four elements: that the defendant owed them a duty of care, that the defendant breached that duty, that this breach caused the harm, and that the plaintiff suffered damages.³³ Each of these elements faces immense strain when applied to emergent AI harm.

The concepts of duty and breach are inextricably linked to foreseeability. As established, the very definition of emergent behavior is that it is unpredictable and novel.¹⁷ This makes it extraordinarily difficult for a plaintiff to prove that a specific harmful outcome was a foreseeable consequence of a developer’s action or inaction.² If a harm is not reasonably foreseeable, the standard of “reasonable care” becomes almost impossible to define. A developer cannot be expected to design safeguards against a hazard that is, by its nature, unknown and unknowable at the time of design. This undermines the plaintiff’s ability to demonstrate a breach of duty.

The element of causation is equally problematic due to the “black box” nature of complex AI. The opacity of a system with billions of parameters makes tracing a direct causal link from a specific design choice, a piece of training data, or an architectural flaw to a single harmful output a near-insurmountable evidentiary challenge.³ The plaintiff is left in the position of knowing they were harmed by the AI’s decision but being unable to prove why the AI made that decision, let alone connecting it to a specific failure by the developer.

The Limits of Strict Liability

Given the challenges of proving fault, strict liability—liability without fault—appears to be a more promising avenue, especially for high-risk AI applications.¹² This doctrine holds a defendant liable for harm caused by their product or activity simply because it occurred, shifting the focus from the defendant’s conduct to the harm itself.

However, traditional strict liability is most commonly applied in the context of defective products. This framework encounters difficulty when a harm results from emergent behavior in an AI system that was, by all metrics, non-defective and functioning exactly as designed at the time of sale.² The “defect” in such cases is the emergent behavior itself, a property that did not exist when the product was placed on the market. The system was not defectively designed or manufactured; rather, it autonomously developed a harmful characteristic. While some legal evolution is possible, applying the “defective product” label to a harm caused by a correctly functioning but unpredictable system stretches the doctrine beyond its traditional boundaries.

2.3 Product Liability’s Identity Crisis

The challenges facing tort law are magnified within the specific domain of product liability. This area of law is confronting an identity crisis as it grapples with whether and how to classify AI.

The “Product vs. Service” Distinction

A threshold legal battle in many AI liability cases will be over the classification of the AI system itself. Is it a “product” subject to the stringent rules of strict product liability, or is it a “service” governed by the more lenient standard of negligence?²⁵ The answer is far from clear and often depends on the jurisdiction and the specific facts. Under the Uniform Commercial Code, for example, mass-produced, off-the-shelf software is often treated as a “good” (a product), whereas software specifically designed for a single customer is typically seen as a service.²⁵ Courts are actively wrestling with this question. A recent federal court decision, for instance, allowed a product liability claim to proceed against an AI chatbot app, finding that it could be characterized as a “product” for legal purposes, particularly concerning its design features.³⁸ This indicates a judicial willingness to adapt the definition, but it remains a contentious and unsettled area of law.

The Problem of the Evolving “Product”

Even if an AI system is deemed a “product,” it behaves unlike any product the law has previously encountered. Traditional product liability doctrine is predicated on a static object with fixed characteristics. A defect, whether in design or manufacturing, is assumed to be present when the product leaves the manufacturer’s control.²⁹

Highly autonomous AI systems defy this assumption. They are designed to learn, adapt, and evolve after deployment.² A harmful “defect” might not arise from the initial code but from the AI’s continuous learning process, a software update pushed by the developer, its interaction with new data in the user’s environment, or an unforeseen cybersecurity vulnerability.²⁶ This dynamic nature fundamentally challenges the core tenet of product liability: that the defect existed at the point of sale.²⁹

Recognizing this challenge, the European Union has taken a significant step to modernize its framework. The new EU Product Liability Directive (PLD) explicitly includes software and AI systems within its definition of “product”.²⁶ More importantly, it extends liability to harms caused by post-sale issues like insufficient software updates and cybersecurity weaknesses, and it considers the system’s ability to self-learn when assessing defectiveness.³⁰ This represents a crucial evolution, acknowledging that for AI, liability must extend across the product’s entire lifecycle. However, even this advanced framework still grapples with the fundamental difficulty of defining a “defect” in a system whose very purpose is to be dynamic and to change in unpredictable ways.

2.4 The False Analogy of Vicarious Liability

A final traditional doctrine sometimes proposed as a solution is vicarious liability, which holds a “principal” (like an employer) responsible for the wrongful acts of their “agent” (like an employee). The analogy suggests treating the AI as an agent for which its human principal—the developer or owner—should be held liable.¹¹

This analogy, however, quickly breaks down under scrutiny. Vicarious liability is built on legal relationships, like employment or agency, that are legal fictions when applied to AI.⁴¹ An AI system is not an employee; it lacks the legal status, rights, and obligations that define such a relationship. The core model of respondeat superior, which holds an employer liable for a tort committed by an employee, fails when the “agent” itself cannot be a tortfeasor because it lacks moral agency and legal standing.⁴¹

Furthermore, the doctrine of vicarious liability is often justified by the principal’s right to control the agent’s actions. The principal is held liable because they have the power to direct and supervise the agent’s work. With highly autonomous AI exhibiting emergent behavior, the principal’s control is, by definition, attenuated and incomplete.¹² The developer sets the initial conditions, but they cannot control the specific, emergent actions the AI takes thereafter. This lack of direct control fundamentally weakens the theoretical justification for imposing vicarious liability.⁴²

The consistent failure of each of these traditional doctrines when applied to emergent AI harm is not a sign of a simple “gap” in the law that can be patched. It is a symptom of a deeper conceptual mismatch. These legal frameworks were constructed for a world of human actors, whose actions are guided by intention and bounded by foreseeability, and for a world of static, physical objects with fixed properties. Highly autonomous AI introduces a new ontological category: a non-human, non-static, quasi-actor. It is not a person, but it is more than an object. It is a tool that can, without warning, become its own actor in unpredictable ways. Therefore, simply “stretching” or “adapting” these old laws is an inherently flawed strategy. A successful legal framework cannot be a mere adaptation; it must be a new synthesis. It must intelligently import and combine elements from different doctrines—the risk-based allocation of strict liability, the duty of care from negligence, the value-chain perspective of commercial law—to create a hybrid model tailored to this new category of risk-creating entity.

Part III: A Comparative Analysis of Global Regulatory Approaches

As nations grapple with the liability dilemma posed by emergent AI harm, divergent regulatory philosophies have crystallized into distinct national and regional approaches. The European Union, the United States, China, and the United Kingdom are each forging a path that reflects their unique legal traditions, economic priorities, and societal values. This comparative analysis reveals a global “regulatory tripolarization,” anchored by the EU, US, and Chinese models (with the UK positioning itself between them), that will define the legal landscape for AI and force multinational organizations to navigate a complex and often contradictory patchwork of rules.

3.1 The European Union: A Comprehensive, Rights-Centric, Risk-Based Model

The European Union has positioned itself as a global standard-setter with a comprehensive, human-centric, and rights-focused approach to AI governance.⁴³ Its framework is built on two main legislative pillars: the AI Act and the revised Product Liability Directive.

The EU AI Act

The cornerstone of the EU’s strategy is the AI Act, a horizontal regulation that applies across all sectors.³⁹ The Act establishes a risk-based pyramid, categorizing AI systems to apply proportionate regulatory burdens:⁴⁷

  • Unacceptable Risk: Certain AI practices are banned outright because they are deemed a threat to fundamental rights. This includes systems for government-led social scoring, real-time biometric surveillance in public spaces (with narrow exceptions), and manipulative AI that exploits vulnerabilities.⁴⁷

  • High-Risk: This is the most heavily regulated category and the most relevant for liability. It includes AI systems used in critical infrastructure, medical devices, employment, law enforcement, and administration of justice.³⁴ Providers of high-risk systems are subject to stringent
    ex ante obligations, including rigorous conformity assessments, comprehensive risk management systems, high standards for data quality and governance, detailed technical documentation, transparency and explainability measures, robust human oversight, and high levels of accuracy and cybersecurity.³⁴

  • Limited Risk: Systems that pose transparency risks, such as chatbots or deepfakes, are subject to disclosure obligations. Users must be informed that they are interacting with an AI or that content is artificially generated.⁴⁸

  • Minimal Risk: For all other AI systems, the Act encourages the voluntary adoption of codes of conduct.⁴⁷

The Revised Product Liability Directive (PLD)

The EU’s primary mechanism for assigning liability for AI-caused harm is the modernized Product Liability Directive (PLD).⁴⁶ This directive is crucial because it adapts a well-established strict liability regime to the digital age. Key innovations include:

  • Expanded Definition of “Product”: The revised PLD explicitly defines software, including standalone AI systems, as “products,” closing a significant legal gap and bringing them squarely within the scope of strict liability.²⁶

  • Modernized Concept of “Defect”: Recognizing that AI systems evolve post-deployment, the directive expands the definition of a defect. A product can be deemed defective not only due to its initial design but also because of its subsequent behavior. This includes failures to provide necessary software updates, inadequate cybersecurity that leads to harm, or dangerous behaviors that emerge from the AI’s self-learning capabilities.²⁶ Manufacturers are expected to design systems that prevent hazardous emergent behavior.³⁹

  • Eased Burden of Proof: To address the “black box” problem, the PLD introduces measures to help victims. In complex cases, such as those involving AI, it establishes a rebuttable presumption of defectiveness and/or causality. If a claimant can demonstrate that the product likely contributed to the damage, the burden shifts to the manufacturer to prove that the product was not defective or did not cause the harm.³⁰ Courts are also empowered to order the disclosure of evidence from the manufacturer to help the claimant build their case.³⁰

While the EU had initially proposed a separate AI Liability Directive (AILD) focused on fault-based claims, it was withdrawn in early 2025.⁴ This decision signals a strategic preference for embedding AI liability within the robust, existing PLD framework rather than creating a new, potentially duplicative set of rules. The core principles of the AILD, such as easing the burden of proof, were ultimately integrated into the modernized PLD, reflecting the EU’s consistent focus on ensuring effective victim compensation.³⁹

The EU’s philosophy is clear: to create a harmonized internal market for AI that is grounded in trust, safety, and fundamental rights. By setting high standards ex ante, the EU aims to shape the global development of AI in its own image, a phenomenon often referred to as the “Brussels Effect”.⁵³

3.2 The United States: A Fragmented, Market-Driven, Sector-Specific Approach

In stark contrast to the EU’s comprehensive model, the United States has adopted a fragmented, market-driven approach that prioritizes innovation and relies heavily on existing legal structures and corporate self-regulation.⁴⁴

Federal and State Dynamics

At the federal level, there is no single, overarching AI law.⁴⁵ The approach has been characterized by “soft law” initiatives, such as the voluntary NIST AI Risk Management Framework, which provides guidance but carries no legal force.⁵⁴ Federal agencies like the Federal Trade Commission (FTC), the Equal Employment Opportunity Commission (EEOC), and the Department of Justice (DOJ) have asserted that their existing statutory authority covers harms caused by AI in areas like consumer protection and anti-discrimination, but this creates a reactive, enforcement-led system rather than a proactive regulatory one.⁵⁶

In the absence of federal preemption, a chaotic patchwork of state-level legislation has emerged, creating significant compliance challenges for businesses operating nationwide.⁵⁷ Key examples include:

  • The Colorado AI Act: Effective in 2026, this pioneering law requires developers and deployers of “high-risk” AI systems to use “reasonable care” to protect consumers from known or reasonably foreseeable risks of algorithmic discrimination. It creates a rebuttable presumption that a developer or deployer did use reasonable care if they comply with extensive documentation, risk assessment, and disclosure requirements.⁵⁸ This is an impact-based standard.

  • The Texas Responsible AI Governance Act (TRAIGA): This law takes a different path, establishing an intent-based liability framework. It prohibits the intentional development or deployment of AI to discriminate or cause harm.⁶⁰ This sets a much higher bar for plaintiffs, who must prove the defendant’s state of mind, in contrast to Colorado’s focus on the discriminatory outcome regardless of intent.⁵⁸

  • California: As a major technology hub, California has been a battleground for AI regulation, considering a range of bills from transparency mandates (AB 2013) to highly contentious safety proposals for powerful models (SB 1047), which would have required features like an emergency “kill switch”.⁵⁸

Reliance on Tort Law

The primary avenue for redress for AI-caused harm in the U.S. remains the state-based common law tort system.⁶¹ As detailed in Part II, this approach is fraught with challenges. The outcome of a product liability or negligence claim can vary dramatically from state to state, depending on different legal tests (e.g., risk-utility vs. consumer expectations) and judicial interpretations.⁶² This reliance on ex post litigation to set standards creates significant legal uncertainty and high transaction costs.

The American philosophy is fundamentally one that favors permissionless innovation and market freedom, reflecting a deep-seated skepticism of broad, top-down, ex ante regulation.⁴⁴

3.3 China: A State-Centric, Stability-Focused Governance Model

China’s approach to AI regulation is distinct from both the EU and the US, driven by the goals of maintaining state control, ensuring social stability, and harnessing AI as a tool for national economic and strategic development.⁴³

Instead of a single horizontal law, China has implemented a series of vertical, technology-specific regulations targeting different aspects of AI.⁶⁵ The most prominent of these are the “Interim Measures for the Management of Generative AI Services,” the “Deep Synthesis Provisions,” and the “Algorithm Recommendation Provisions”.⁵¹

These regulations place comprehensive and stringent obligations squarely on the service provider. The provider is held directly responsible for the content generated by its AI systems and must ensure that all outputs comply with Chinese law and adhere to “core socialist values”.⁶⁸ Key obligations include:

  • Content Moderation: Providers have a “positive duty” to filter training data and prevent the generation and dissemination of illegal or undesirable content.⁶⁸

  • State Registration and Security Assessment: AI services with “public opinion attributes or social mobilization capabilities” must undergo a security assessment and file their algorithms with the Cyberspace Administration of China (CAC).⁶⁹

  • Transparency and Labeling: AI-generated content must be clearly labeled as such to prevent misinformation.⁶⁹

Liability in China is therefore primarily an administrative and state-enforced matter, with the provider acting as the gatekeeper responsible for the AI’s behavior. This contrasts sharply with the EU’s model of distributing liability across a value chain and the US model of private litigation. China’s legal framework is less concerned with individual rights in the Western sense and more focused on collective order, information control, and the authority of the state.⁶⁴

3.4 The United Kingdom: A Principles-Based, Pro-Innovation Alternative

Seeking a “third way” between the EU’s comprehensive regulation and the US’s market-led approach, the United Kingdom has adopted a flexible, “pro-innovation” framework.⁷¹ This approach is non-statutory and is built upon five high-level principles intended to guide existing regulators:

  1. Safety, security & robustness

  2. Appropriate transparency & explainability

  3. Fairness

  4. Accountability & governance

  5. Contestability & redress.⁷¹

Instead of creating a new, overarching AI law or a new AI regulator, the UK model empowers existing, sector-specific regulators (like the Information Commissioner’s Office for data privacy or the Competition and Markets Authority for market issues) to interpret and apply these principles within their domains.⁷¹ This is intended to create an agile and context-specific regulatory environment that can adapt quickly to technological change. A central government function provides support and monitors for regulatory gaps.⁷¹

From a liability perspective, the UK framework is currently light on specific new rules, relying on the evolution of common law and existing product safety regulations. The “Accountability and governance” principle encourages regulators to provide clarity on how existing laws apply to AI and to identify which actors in the supply chain they can hold legally responsible, but this work remains ongoing.⁷¹

3.5 Synthesis and Key Divergences

The divergent paths taken by these major jurisdictions reveal fundamentally different answers to the question of how to balance innovation with safety. The EU has prioritized legal certainty and rights protection through a detailed, ex ante rulebook. The US has prioritized innovation speed, preferring to address harms ex post through a decentralized and unpredictable litigation system. China has prioritized state control, using regulation as a tool for social management. The UK is experimenting with a flexible, regulator-led model that seeks to balance these competing priorities.

This global divergence is not merely a matter of legal detail; it reflects a deeper split in the definition of “harm” itself. The EU framework explicitly recognizes a broad range of compensable harms, including violations of fundamental rights, data loss, and even medically recognized psychological harm.³⁹ This expansive definition creates a wide surface area for liability. In contrast, the US tort system has traditionally been more restrictive, primarily focusing on physical injury and property damage, with recovery for pure economic or emotional harm being more challenging to secure.⁶¹ China’s framework is concerned with a different set of harms altogether, focusing on threats to state security, social stability, and infringements on personality rights.⁶⁸ This definitional variance has significant economic consequences, as it directly impacts the risk calculus for AI developers and could lead to “innovation arbitrage,” where riskier AI development is concentrated in jurisdictions with narrower definitions of compensable harm.

For multinational companies, this regulatory fragmentation creates a strategic trilemma. They can (a) adopt the strictest standard globally (the “Brussels Effect”), ensuring market access everywhere at a high compliance cost; (b) geofence their products, developing different AI models for different regulatory blocs, which fragments innovation; or (c) lobby for international harmonization through bodies like the OECD, a long-term and uncertain endeavor. The choice will have profound implications for product architecture, R&D investment, and global competitiveness. The following table provides a systematic comparison of these divergent approaches.

| Feature | European Union | United States | People’s Republic of China | United Kingdom |
| --- | --- | --- | --- | --- |
| Core Philosophy | Human-Centric & Rights-Focused | Market-Driven & Innovation-First | State-Centric & Stability-Focused | Principles-Based & Pro-Innovation |
| Key Legislation | AI Act; Revised Product Liability Directive (PLD) | Existing Tort Law; State-level Acts (e.g., CO, TX); Agency Enforcement | Specific Regulations (Generative AI, Deep Synthesis, Algorithm Recs) | Non-statutory principles-based framework for existing regulators |
| Primary Liability Model | Ex-ante risk-based strict liability (PLD) & fault-based claims with eased proof | Ex-post common law tort litigation (negligence, product liability) | Provider-centric, state-enforced administrative liability | Case-by-case application of common law, adapted by regulator guidance |
| Locus of Responsibility | Distributed across value chain (provider, deployer, importer, distributor) | Primarily the defendant in tort litigation (can be developer, owner, or user) | Primarily the service provider | Determined by existing law on a case-by-case basis |
| Enforcement Mechanism | National market surveillance authorities; European AI Office | Civil litigation by private parties; Federal/State agency enforcement (e.g., FTC, AGs) | Cyberspace Administration of China (CAC) and other state bodies | Sector-specific regulators (e.g., ICO, CMA, FCA) |

Table 1: Comparative Overview of Global AI Liability Frameworks. This table synthesizes the distinct regulatory and liability models for AI in four major jurisdictions, highlighting key differences in their underlying philosophy, legal instruments, and enforcement mechanisms.³⁹

Part IV: Architecting a Future-Proof Liability Framework

Drawing upon the analysis of traditional doctrines and global regulatory trends, it is possible to architect a coherent, future-proof liability framework. Such a framework must be a hybrid model, intelligently combining the most effective elements of different legal approaches to create a system that is both robust and adaptable. It must allocate responsibility fairly across the complex AI value chain, provide clear paths to redress for victims, and create powerful incentives for the development of safe, trustworthy AI without stifling innovation. This section outlines the key components of such a framework: a multi-layered liability model, a tiered approach based on risk and autonomy, clear rules for contractual allocation, and alternative compensation mechanisms for catastrophic or truly unforeseeable harms.

4.1 A Multi-Layered Liability Model for the AI Value Chain

A single AI-driven harm is rarely the result of a single actor’s failure. The modern AI ecosystem is a complex value chain—the “problem of many hands”—involving numerous participants who contribute to the final product and the resulting risk.⁷⁴ This chain can include upstream data providers, developers of foundation models, downstream actors who fine-tune or customize those models, integrators who embed the AI into a consumer-facing product (like a vehicle or a medical device), and the final deployers or users of that product.⁷⁴ Ascribing liability to only one of these actors is often inefficient and unjust, as it fails to hold all responsible parties accountable and may leave the victim without a viable defendant.

A modern framework must therefore be capable of allocating liability across this entire chain.⁴ The most effective way to achieve this is through a multi-layered liability model that combines clear victim redress with internal accountability:

  1. Joint and Several Liability for Victim Compensation: To ensure that victims have a clear and effective path to compensation, the framework should establish a baseline of joint and several liability for all commercial actors in the value chain of a high-risk AI system that causes harm.⁷⁴ This legal principle allows an injured party to sue any one of the responsible parties (e.g., the deployer, who is often the most visible and accessible defendant) and recover the full amount of damages from that single party. This removes the burden from the victim of having to identify and sue every actor in a complex, opaque supply chain.

  2. Internal Recourse and Contribution: After the victim has been compensated, the framework must allow the party held liable to seek contribution from other actors in the chain based on their relative degree of fault or causal contribution to the harm. This internal allocation of responsibility would be heavily influenced by the contractual agreements between the parties, creating a strong incentive for them to clearly define their roles, responsibilities, and risk exposure ex ante.

This multi-layered approach achieves two critical goals simultaneously: it prioritizes the victim’s right to redress by providing a simple and direct path to compensation, while also ensuring that liability is ultimately distributed fairly among the responsible commercial parties, incentivizing safety at every stage of the value chain.
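The mechanics of this two-step model can be made concrete with a short sketch. The code below assumes a hypothetical three-actor value chain and illustrative fault shares; it is not a statement of any enacted rule, only an illustration of how full victim recovery from a single actor can coexist with a pro-rata internal redistribution of the burden.

```python
from dataclasses import dataclass

@dataclass
class ValueChainActor:
    name: str
    fault_share: float  # relative responsibility, as adjudicated or fixed by contract

def settle_claim(damages: float, paid_by: str, chain: list[ValueChainActor]) -> dict[str, dict[str, float]]:
    """Step 1 (joint and several liability): the victim recovers full damages from one actor.
    Step 2 (internal recourse): that actor recovers contributions from the others,
    pro rata to their fault shares, so each actor's ultimate burden tracks its responsibility."""
    total_fault = sum(actor.fault_share for actor in chain)
    outcome = {}
    for actor in chain:
        ultimate_burden = damages * actor.fault_share / total_fault
        paid_to_victim = damages if actor.name == paid_by else 0.0
        outcome[actor.name] = {
            "paid_to_victim": round(paid_to_victim, 2),
            "net_contribution_owed": round(ultimate_burden - paid_to_victim, 2),  # negative = reimbursed
            "ultimate_burden": round(ultimate_burden, 2),
        }
    return outcome

# Hypothetical example: the deployer is sued first and pays the victim in full.
chain = [
    ValueChainActor("foundation_model_developer", 0.5),
    ValueChainActor("system_integrator", 0.3),
    ValueChainActor("deployer", 0.2),
]
print(settle_claim(1_000_000, paid_by="deployer", chain=chain))
```

In this hypothetical, the victim deals with a single defendant, while the deployer ultimately bears only twenty percent of the damages after contribution claims against its upstream partners.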

4.2 Tiered Liability: A Hybrid Approach Based on Risk and Autonomy

A one-size-fits-all liability rule is inappropriate for the diverse world of AI. The liability regime must be tiered, applying different standards based on the level of risk posed by the AI system and its degree of autonomy.³⁴ This approach, inspired by the EU AI Act’s risk-based structure, ensures that the most stringent liability rules are reserved for the most dangerous applications, while allowing for more flexibility with lower-risk systems.

  • Tier 1: High-Risk Systems (Strict Liability): For AI systems classified as high-risk—those used in critical domains like healthcare, transportation, justice, and essential infrastructure—a strict liability standard should apply.³⁴ Under this standard, the developer and/or operator of the system would be held liable for harm caused by the AI, regardless of whether they were negligent or at fault.⁷⁶ This approach is justified because the actors who introduce high-risk systems into society for their own profit are best positioned to manage and insure against those risks. It shifts the legal focus from proving culpability, which is often impossible in “black box” cases, to simply proving causation, thereby streamlining the compensation process for victims.

  • Tier 2: Lower-Risk Systems (Presumed Fault): For all other AI systems that cause harm, a fault-based liability regime with a rebuttable presumption of fault should be the standard.⁵⁰ Under this model, if a plaintiff proves that the AI system caused them harm, the operator or developer is presumed to have been negligent. The burden of proof then shifts to the defendant to rebut this presumption by demonstrating that they exercised a high degree of care throughout the AI’s lifecycle. Evidence of such care could include rigorous adherence to established best practices and standards, such as the NIST AI Risk Management Framework, or compliance with a certified algorithmic audit.⁶⁰ This approach maintains the nuance of a fault-based system but alleviates the prohibitive evidentiary burden that would otherwise face plaintiffs.

Within this risk-based framework, the AI’s degree of autonomy should further modulate the allocation of liability. For systems that require significant human oversight (e.g., driver-assist technologies), liability would naturally fall more heavily on the human operator who fails to supervise the system properly. Conversely, for fully autonomous systems that exhibit unpredictable emergent behavior, the liability should shift decisively upstream to the developer and owner who released the system into the world.⁸⁰
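Before turning to the incentive effects of this design, the decision logic can be summarized in a compact sketch. The tier labels and the mapping from risk and autonomy to a liability standard follow the proposal above; the enumeration values and any thresholds behind them are hypothetical simplifications of what a statute would have to spell out in far greater detail.

```python
from enum import Enum

class RiskTier(Enum):
    HIGH = "high-risk"      # e.g., healthcare, transportation, justice, critical infrastructure
    LOWER = "lower-risk"    # all other applications

class AutonomyLevel(Enum):
    SUPERVISED = "human-supervised"   # meaningful human oversight of each consequential action
    FULL = "fully-autonomous"         # minimal human intervention; emergent behavior possible

def liability_standard(tier: RiskTier, autonomy: AutonomyLevel) -> dict[str, str]:
    """Sketch of the tiered, hybrid regime proposed in the text."""
    standard = (
        "strict liability (causation suffices; no proof of fault required)"
        if tier is RiskTier.HIGH
        else "fault-based liability with a rebuttable presumption of negligence"
    )
    primary_locus = (
        "developer and owner (upstream actors who released the system)"
        if autonomy is AutonomyLevel.FULL
        else "human operator/deployer, measured against their oversight duties"
    )
    return {"standard": standard, "primary_locus": primary_locus}

print(liability_standard(RiskTier.HIGH, AutonomyLevel.FULL))
```

Even in this toy form, the mapping makes the steering effect visible: moving a system out of the high-risk tier, or adding genuine human oversight, changes both the applicable standard and the presumptive defendant.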

This tiered, hybrid system creates a powerful “market for safety.” By imposing higher liability risks and associated insurance costs on high-risk, highly autonomous systems, it creates a direct economic incentive for developers to “design for a lower tier.” Businesses will be motivated to build in more robust human oversight, reduce their systems’ autonomy where possible, and avoid high-risk applications in order to lower their liability exposure. In this way, the liability framework itself becomes a market-based steering mechanism, guiding innovation toward safer and more beneficial applications without imposing outright bans.

4.3 The Critical Role of Contractual Allocation

While legislation sets the overarching liability framework, contracts between commercial parties in the AI value chain are a critical tool for the day-to-day allocation of risk and responsibility.²⁵ A clear contractual architecture is essential for the functioning of the internal recourse mechanism described above.

Sophisticated commercial actors, such as a foundation model developer licensing its technology to a large enterprise, should use contracts to clearly define their respective duties ex ante. Key contractual levers include:

  • Warranties: Providing explicit guarantees about the AI’s performance, reliability, and compliance with applicable regulations. Currently, many AI vendor contracts disclaim all performance warranties, a practice that is unsustainable for high-stakes applications.²⁶

  • Indemnification Clauses: Specifying which party will cover the legal costs and damages arising from third-party claims. These clauses are the primary mechanism for allocating financial risk for lawsuits.⁸³

  • Limitations of Liability: Capping the financial exposure of one or both parties. While standard in software agreements, these caps must be reasonable and should not be used to absolve a party of responsibility for foreseeable harms.²⁵

  • Data Quality and Use Obligations: Given that an AI’s output is highly dependent on the data it receives, contracts must specify the quality of data the deployer will use and the intended use cases for the model. This is critical for disentangling whether a harm was caused by a flaw in the core model or by the deployer’s misuse or provision of poor data.

However, freedom of contract should not be absolute. Legislation must impose clear guardrails to prevent powerful upstream developers from using their market position to contractually disclaim all liability, especially for harms caused to end-users and consumers who are not party to the commercial agreement.⁵⁶ The ability to allocate risk via contract should primarily govern the internal relationship between commercial actors, not extinguish the rights of injured third parties.

4.4 Alternative Compensation Mechanisms

Even the most robust liability framework may be insufficient in two specific scenarios: when the responsible party is insolvent, or when the harm is so novel and unforeseeable that it is unfair to hold any single actor responsible. To address these gaps, the framework should be supplemented with alternative compensation mechanisms.

  • Mandatory Liability Insurance: For all high-risk AI systems, a mandatory insurance scheme should be required as a condition of market access.³⁴ Similar to requirements for vehicle or professional liability insurance, this ensures that a solvent fund is available to compensate victims, even if the developer is a startup or an undercapitalized entity.⁸⁵ The insurance market is already beginning to adapt to AI risk, with carriers introducing both AI-specific exclusions and new, affirmative AI coverage policies.⁸⁵ A mandatory requirement would accelerate the development of this market and ensure that the costs of AI risk are properly priced and borne by the entities that create them.

  • No-Fault Compensation Funds: For truly catastrophic or unforeseeable harms—so-called “algorithmic black swans” where a system causes widespread damage in a way that no one could have reasonably predicted—it may be unjust to hold any single actor liable, even under a strict liability regime.⁵⁵ In these rare cases, an
    industry-funded, no-fault compensation fund could be established.⁷⁸ Modeled on existing schemes like the National Vaccine Injury Compensation Program or workers’ compensation funds, this mechanism would ensure that victims receive necessary compensation without the need to prove fault in a lengthy and complex legal battle.⁷⁹ Such a fund would treat these extreme, unforeseeable harms as a systemic cost of technological progress, spreading the risk across the entire industry that benefits from that progress.

Part V: Procedural and Evidentiary Imperatives for the Algorithmic Age

A substantive liability framework, no matter how well-designed, is meaningless without procedural and evidentiary rules that allow it to function in practice. The “black box” nature of highly autonomous AI presents profound challenges to the legal process, which relies on transparency, evidence, and reasoned argumentation. To make the proposed liability model workable, courts and legislatures must adopt new procedural tools to pierce the algorithmic veil and ensure that legal proceedings are both fair and effective.

5.1 Explainability as a Legal Prerequisite

Explainability, or the ability to understand and articulate the reasoning behind an AI’s decision, cannot remain a mere ethical guideline or a technical “nice-to-have.” In the context of liability, it is a legal prerequisite.⁹⁰ Without a meaningful way to scrutinize an AI’s decision-making process, fundamental legal questions of causation, breach of duty, and defect become unanswerable, rendering the entire liability system moot.⁹¹

The demand for Explainable AI (XAI) will be driven by the needs of the legal system. In court proceedings, XAI will serve several critical functions:

  • Establishing Causation: XAI techniques can provide crucial evidence about the factors that led to a specific AI output, allowing a plaintiff to establish—or a defendant to rebut—a causal link between the system’s operation and the resulting harm.⁹¹

  • Assessing Fault: In fault-based claims, explanations of an AI’s internal logic can help judges and juries determine whether the system’s reasoning was flawed, biased, or otherwise unreasonable, providing a basis for finding negligence.⁹²

  • Upholding Due Process: When an AI-driven decision adversely affects an individual’s rights or interests (e.g., in a credit denial or a criminal sentencing recommendation), the constitutional and common law principles of due process demand a “right to an explanation”.⁹⁴ The affected party must be able to understand why the decision was made in order to meaningfully challenge it.

This judicial demand will, in turn, shape the technical development of XAI. Courts will need to engage with and set standards for the legal sufficiency of various XAI methods, such as LIME (Local Interpretable Model-Agnostic Explanations) and SHAP (SHapley Additive eXplanations), which are designed to provide insights into model predictions.⁹⁶ The legal system will not merely be a consumer of XAI; it will be a primary driver of its evolution, demanding explanations that are not only technically accurate but also comprehensible and useful to legal actors.⁹²
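To make the evidentiary role of these tools concrete, the sketch below shows how a post-hoc attribution method could be used to generate per-feature explanations for a single contested decision. It assumes a tabular credit-denial model built with scikit-learn and uses the open-source shap package’s tree explainer; the dataset, feature names, and the credit-scoring framing are hypothetical, and a real forensic analysis would involve far more validation than this illustration suggests.

```python
# Minimal sketch: attributing a single contested automated decision to its input features.
import numpy as np
import pandas as pd
import shap  # open-source SHAP library
from sklearn.ensemble import GradientBoostingClassifier

# Hypothetical, synthetic training data for a credit-denial model.
rng = np.random.default_rng(seed=0)
X = pd.DataFrame({
    "income": rng.normal(50_000, 15_000, 500),
    "debt_ratio": rng.uniform(0.0, 1.0, 500),
    "years_employed": rng.integers(0, 30, 500),
})
y = (X["debt_ratio"] > 0.6).astype(int)  # toy target: 1 = application denied

model = GradientBoostingClassifier().fit(X, y)

# Explain the one decision that is being litigated.
explainer = shap.TreeExplainer(model)
contested_case = X.iloc[[0]]
shap_values = explainer.shap_values(contested_case)

# Signed contribution of each feature toward the model's output for this applicant.
for feature, contribution in zip(X.columns, np.ravel(shap_values)):
    print(f"{feature}: {contribution:+.3f}")
```

Output of this kind does not by itself answer the legal question, but it gives the parties and the court something concrete to probe, which is the premise of the contestable-explanation standard discussed next.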

However, a simple, static explanation provided by the AI’s developer—an actor with a vested interest in presenting a self-serving narrative—is insufficient for genuine due process. A meaningful right to explanation must evolve into a right to a contestable explanation. This implies that a plaintiff and their experts must have the ability to interact with the model (or a high-fidelity simulation of it), to probe its logic with different inputs, and to independently test its reasoning to uncover flaws or biases. This will necessitate the development of a new form of adversarial legal discovery focused on “model interrogation.” Courts will be forced to create new protocols to govern this process, balancing the plaintiff’s need for access with the developer’s legitimate trade secret interests. This could involve the use of court-appointed neutral technical experts, secure data “clean rooms,” or other innovative procedural safeguards. The legal standard will inevitably shift from “Did the defendant provide an explanation?” to “Did the defendant provide a sufficiently transparent and interactive model for its explanation to be independently verified and contested?”

5.2 Algorithmic Accountability and Due Process

Beyond in-court explainability, a functioning liability regime depends on a broader ecosystem of algorithmic accountability that ensures transparency and oversight throughout the AI lifecycle.⁹⁸

  • Mandatory Transparency and Documentation: Meaningful accountability begins with transparency.⁹⁹ For high-risk AI systems, regulations should mandate the creation and maintenance of comprehensive documentation. This includes so-called “model cards” or “factsheets” that detail the AI’s intended purpose, its architecture, the sources and characteristics of its training data, its known limitations, and the results of its testing and validation.¹⁰¹ This documentation provides a crucial baseline for regulators, auditors, and litigants; an illustrative, machine-readable sketch of such a record appears after this list.

  • Independent Audits and Risk Assessments: Self-assessment is not enough. For high-risk systems, mandatory pre-deployment and regular post-deployment audits by qualified, independent third parties should be required.³⁴ These audits would assess the system for bias, fairness, security, and robustness. Adherence to established risk management frameworks, such as the one developed by the U.S. National Institute of Standards and Technology (NIST), could serve as powerful evidence of due care in litigation and could even form the basis for a “safe harbor” from certain forms of liability.⁵⁴

  • Upholding Due Process Rights: When public or private entities use AI to make consequential decisions affecting individuals, fundamental principles of due process must be upheld. This requires, at a minimum, that the affected individual is (1) notified that an AI system was used in the decision, (2) provided with a clear and understandable explanation of the decision’s basis, and (3) given a meaningful opportunity to appeal and contest the outcome before a human reviewer.⁹⁹
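As flagged in the transparency item above, the documentation obligation lends itself to a machine-readable form. The sketch below uses a plain Python dataclass to show the kind of fields such a record might carry; the schema, field names, and the clinical-triage example are illustrative assumptions loosely modeled on published model-card proposals, not a mandated format.

```python
from dataclasses import dataclass, field, asdict
import json

@dataclass
class ModelCard:
    """Illustrative documentation record for a high-risk AI system (schema is an assumption)."""
    model_name: str
    version: str
    intended_purpose: str
    architecture_summary: str
    training_data_sources: list[str]
    known_limitations: list[str]
    evaluation_results: dict[str, float]
    risk_tier: str
    human_oversight_measures: list[str] = field(default_factory=list)

card = ModelCard(
    model_name="triage-assist",  # hypothetical system
    version="2.3.1",
    intended_purpose="Clinical triage decision support; not for unsupervised diagnosis.",
    architecture_summary="Fine-tuned transformer classifier over structured intake data.",
    training_data_sources=["De-identified hospital intake records, 2018-2023"],
    known_limitations=["Not validated for pediatric patients", "Degraded performance on rare conditions"],
    evaluation_results={"auroc": 0.91, "false_negative_rate": 0.04},
    risk_tier="high-risk",
    human_oversight_measures=["Clinician sign-off required before any triage action"],
)

# Serialized artifact that regulators, auditors, and litigants can inspect and diff across versions.
print(json.dumps(asdict(card), indent=2))
```

Because each release produces a new, comparable record, documentation of this kind also supports the continuing, post-deployment duties emphasized throughout this report.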

5.3 Evolving Evidentiary Standards for AI-Generated Evidence

The proliferation of generative AI creates a parallel evidentiary crisis. The ability of AI to create highly realistic but entirely fabricated images, videos, audio, and documents—“deepfakes”—poses a profound challenge to the courts’ ability to determine the truth.¹⁰³

  • The Authentication Challenge: Under traditional evidence rules, such as Federal Rule of Evidence 901 in the U.S., a party introducing a piece of evidence must first authenticate it—that is, provide sufficient proof that the item is what the party claims it is.¹⁰⁴ This is relatively straightforward for a traditional photograph or document. For AI-generated content, however, the “author” is an opaque algorithm, making it exceptionally difficult to establish provenance and authenticity.¹⁰³

  • The “Liar’s Dividend”: This challenge is compounded by its inverse: the “liar’s dividend.” This occurs when a party is confronted with authentic digital evidence and seeks to undermine it by falsely claiming it is a deepfake, exploiting a general and growing public distrust of all digital media.¹⁰³

  • The Hearsay Bypass: An additional complexity is that AI-generated output generally falls outside the hearsay rule altogether. Hearsay rules are designed to exclude unreliable out-of-court statements made by human declarants. Because an AI is not a human declarant, its output is not a “statement” for hearsay purposes and is likely admissible for its truth, provided it can be authenticated.¹⁰⁴ This bypass of the hearsay rule places even greater weight on the authentication and reliability analysis.

The extreme reliability concerns posed by AI-generated evidence are too complex and high-stakes to be left for a jury to resolve during trial. This will inevitably force a procedural shift toward mandatory, pre-trial “AI Reliability Hearings.” Similar to Daubert hearings used to vet novel scientific evidence, these proceedings would require judges to act as rigorous gatekeepers.¹⁰⁶ Before any crucial piece of AI-generated evidence is shown to a jury, the judge would conduct a dedicated hearing to assess its reliability. This would involve expert testimony on the AI system that generated it, its training data, its architecture, its known error rates, and the chain of custody of the evidence itself. Only if the evidence is found to be sufficiently reliable would it be deemed admissible. This procedural innovation will make litigation more front-loaded and technically demanding, creating a strong incentive for legal professionals to develop expertise in this area and for AI developers to build systems whose outputs can withstand this level of judicial scrutiny.

Part VI: Societal and Economic Implications

The design of a liability framework for AI is not a purely legal or technical matter; it has profound societal and economic consequences. The rules governing liability will shape the trajectory of AI innovation, influence public perception and trust, and ultimately determine whether this transformative technology is integrated into society in a way that is both productive and safe. A well-designed liability regime should not be seen as a barrier to progress but as an essential piece of social and economic infrastructure.

6.1 The Economic Calculus: Balancing Innovation and Accountability

A frequent objection to robust AI regulation is that imposing stringent liability rules will stifle innovation by burdening developers with excessive costs and legal risks, causing them to become overly cautious.⁴ However, a deeper economic analysis suggests that the opposite is more likely true: a clear, predictable, and harmonized liability framework is a catalyst for sustainable innovation, not an impediment to it.

The greatest chill on investment and adoption comes not from regulation itself, but from legal uncertainty.⁴ A fragmented, ambiguous, or non-existent liability regime creates an unquantifiable risk for businesses, their investors, and their insurers. In such an environment, only the largest companies with vast legal resources can afford to navigate the uncertainty, while smaller innovators and startups are deterred from entering the market. A well-defined liability framework, even a strict one, replaces this uncertainty with calculable risk. It allows companies to understand their potential exposure, insurers to price policies accurately, and investors to make informed decisions, thereby fostering a more stable and competitive market.⁴

Furthermore, the choice of a liability regime is a powerful instrument of industrial and social policy. The rules a jurisdiction adopts will directly incentivize certain types of research, development, and business models over others. A framework like the EU’s, which imposes strict liability on high-risk systems, encourages companies to invest heavily in safety, transparency, and auditable AI. This steers the domestic AI industry toward developing trustworthy systems, potentially creating a competitive advantage in markets where safety and reliability are paramount. In contrast, a more laissez-faire approach might encourage faster, riskier innovation cycles, prioritizing speed-to-market over robustness. The global competition over AI regulation is therefore not just a legal debate; it is a competition between different models of techno-social development, and the chosen liability framework will be instrumental in determining which nations’ industries are best positioned to lead the next phase of the AI economy.

The economic potential of AI is immense, with projections of trillions of dollars in additional global GDP driven by productivity gains and the creation of new products and services.¹ However, realizing this potential is entirely dependent on widespread adoption and integration of the technology.¹⁰⁷ This, in turn, depends on public trust.

Public trust is the bedrock upon which the AI-driven economy must be built. Currently, that trust is fragile. Global surveys reveal a public that is deeply ambivalent: people recognize the potential benefits of AI but are profoundly concerned about its risks, including bias, misinformation, privacy violations, and a lack of governance.¹⁰⁸ A significant majority of the public believes that current laws are inadequate to make AI use safe and that stronger regulation is necessary.¹⁰⁸

Trust is not built on technological performance alone; it is built on accountability.¹¹⁰ The public is far more willing to accept and use AI systems when they know that robust governance structures, transparent oversight, and effective mechanisms for redress are in place.¹⁰⁸ A clear and fair liability framework that guarantees victims will be compensated when harm occurs is a non-negotiable component of this accountability ecosystem. It provides the fundamental assurance that someone is responsible, which is essential for overcoming public skepticism. Without this trust, adoption will stall, and the promised economic and social benefits of AI will fail to materialize.¹⁰⁷ The unchecked proliferation of harmful AI outputs, such as deepfakes and misinformation, not only damages faith in AI but also erodes public trust in digital content, media, and democratic institutions more broadly.¹¹²

From this perspective, public trust should not be viewed as a “soft” sociological concept but as a tangible economic asset. The research explicitly links a trust deficit to curtailed adoption, which directly translates into massive economic losses.¹⁰⁷ Therefore, the resources that companies spend on compliance with a robust liability regime, and that governments spend on enforcement, are not merely deadweight costs. They are strategic investments in the creation and maintenance of this essential economic asset. This reframes the entire economic debate. The cost of regulation should not be weighed against an abstract ideal of “unfettered innovation,” but against the very real and substantial economic damage that will result from a trust deficit in an unregulated or poorly regulated market. When viewed through this lens, a strong liability framework offers a clear positive return on investment by enabling the broad societal acceptance necessary to unlock the full potential of artificial intelligence.

Conclusion and Strategic Recommendations

The emergence of highly autonomous AI systems capable of unpredictable behavior represents a fundamental challenge to our legal traditions. The core concepts of foreseeability, intent, and static product identity, upon which centuries of liability law have been built, are rendered inadequate by the dynamic and opaque nature of this new technology. Simply stretching old doctrines to fit this new reality is a failing strategy that results in conceptual mismatches and accountability gaps. A new legal architecture is required—one that is purpose-built for the algorithmic age.

This report has argued that the most viable path forward is a hybrid, tiered, and multi-layered liability framework. This framework must be grounded in a clear-eyed understanding of the technology, rejecting the dead end of AI personhood and instead focusing on allocating responsibility among the human actors in the AI value chain. It must be flexible enough to adapt to a rapidly evolving technology and robust enough to ensure public trust and provide meaningful redress for victims.

Based on the comprehensive analysis conducted, the following strategic recommendations are proposed for policymakers, legal practitioners, and technology companies:

For Policymakers and Legislatures:

  1. Adopt a Tiered, Hybrid Liability Framework: Legislate a dual-channel liability system.

    • For high-risk AI systems, impose a regime of strict liability on developers and deployers. This ensures victim compensation without requiring proof of fault, which is often impossible to establish.

    • For lower-risk AI systems, establish a regime of fault-based liability with a rebuttable presumption of fault. This shifts the burden of proof to the defendant, who must demonstrate that it met a heightened standard of care, alleviating the evidentiary challenges plaintiffs would otherwise face.

  2. Mandate a Multi-Layered Accountability Structure:

    • Establish joint and several liability for all commercial actors in the high-risk AI value chain to provide victims with a clear path to compensation.

    • Explicitly permit contractual allocation of risk and rights of contribution among commercial parties to allow for the internal, market-based distribution of ultimate financial responsibility.

    • Place statutory limits on the ability of powerful upstream actors to use contracts to disclaim all liability for harm to third-party consumers.

  3. Establish Supporting Compensation Mechanisms:

    • Require mandatory liability insurance for the deployment of all high-risk AI systems as a condition of market access.

    • Explore the creation of an industry-funded, no-fault compensation fund to address catastrophic or truly unforeseeable “algorithmic black swan” events, ensuring that victims of systemic risks are not left without recourse.

  4. Modernize Procedural and Evidentiary Rules:

    • Codify a “right to a contestable explanation” for individuals adversely affected by high-risk AI decisions, ensuring access not just to a static explanation but to the means of interrogating and challenging the AI’s logic.

    • Empower courts to mandate disclosure of evidence related to high-risk AI systems, including technical documentation and logs, to facilitate fair adjudication.

    • Develop new evidentiary standards for authenticating AI-generated evidence, potentially requiring pre-trial reliability hearings akin to Daubert hearings for novel scientific evidence.

For the Judiciary and Legal Practitioners:

  1. Develop Specialized Expertise: Courts and law firms must invest in developing the technical literacy necessary to adjudicate complex AI-related disputes. This includes understanding the fundamentals of machine learning, emergent behavior, and XAI techniques.

  2. Drive the Common Law of XAI: Judges, through their case-by-case rulings, will play a pivotal role in defining what constitutes a legally sufficient explanation. By demanding explanations that are comprehensible and useful in specific legal contexts, the judiciary will incentivize the development of more robust and transparent AI systems.

  3. Innovate in Case Management: Courts should pioneer new case management techniques for AI litigation, such as appointing neutral technical experts, using special masters to oversee discovery of algorithmic systems, and developing protocols for secure “clean room” model interrogation.

For Technology Companies (Developers, Deployers, and Owners):

  1. Embrace Accountability by Design: Integrate legal and ethical risk management into the entire AI lifecycle, from initial design to post-deployment monitoring. Compliance should not be an afterthought.

  2. Invest in Transparency and Explainability: Proactively develop and implement robust XAI tools and maintain comprehensive documentation (e.g., model cards) for all AI systems. This is no longer just good practice; it is a critical risk mitigation strategy.

  3. Conduct Rigorous Audits and Assessments: Implement regular, independent third-party audits for bias, security, and performance of high-risk systems. Adherence to frameworks like the NIST AI RMF should be standard operating procedure.

  4. Utilize Contracts Strategically: Employ clear and detailed contractual agreements to allocate responsibilities, warranties, and indemnities within the value chain. Do not rely on boilerplate liability disclaimers, especially for high-stakes applications.

Ultimately, creating this new legal framework is not about stifling progress. It is about building the guardrails that make progress possible. By ensuring that accountability keeps pace with autonomy, we can foster an environment of trust where the immense benefits of artificial intelligence can be realized safely, equitably, and for the good of all society. The path forward requires a deliberate and collaborative effort from all stakeholders to architect a legal system as intelligent and adaptive as the technology it seeks to govern.

Cited works

  1. Economic impacts of artificial intelligence (AI) - European Parliament, https://www.europarl.europa.eu/RegData/etudes/BRIE/2019/637967/EPRS_BRI(2019)637967_EN.pdf

  2. Mitigating Product Liability Risks for Companies Providing AI-Enabled Products, https://www.mmmlaw.com/news-resources/mitigating-product-liability-risks-for-companies-providing-ai-enabled-products/

  3. Full article: Locating fault for AI harms: a systems theory of foreseeability, reasonable care and causal responsibility in the AI value chain, https://www.tandfonline.com/doi/full/10.1080/17579961.2025.2469345

  4. An AI Liability Regulation would complete the EU’s AI strategy - CEPS, https://www.ceps.eu/an-ai-liability-regulation-would-complete-the-eus-ai-strategy/

  5. Liability for Artificial Intelligence and the Internet of … - Nomos eLibrary, https://www.nomos-elibrary.de/10.5771/9783845294797.pdf?download_full_pdf=1&page=1

  6. Legal Definitions of AI: Considerations and Common Threads - Sourcing Speak, https://www.sourcingspeak.com/legal-definitions-ai/

  7. Artificial Intelligence and Civil Liability - European Parliament, https://www.europarl.europa.eu/RegData/etudes/STUD/2020/621926/IPOL_STU(2020)621926_EN.pdf

  8. Understanding authorship in Artificial Intelligence-assisted works | Journal of Intellectual Property Law & Practice | Oxford Academic, https://academic.oup.com/jiplp/advance-article/doi/10.1093/jiplp/jpae119/7965768

  9. (PDF) Legal Personhood of Autonomous Systems: A Jurisprudential …, https://www.researchgate.net/publication/389459394_Legal_Personhood_of_Autonomous_Systems_A_Jurisprudential_Analysis

  10. Agentic AI Unleashed: Who Takes the Blame When Mistakes Are Made? | Smarsh, https://www.smarsh.com/blog/thought-leadership/agent-ai-unleashed-who-takes-blame-when-mistakes-are-made

  11. Law-Following AI: designing AI agents to obey human laws …, https://law-ai.org/law-following-ai/

  12. AI Decision-Making: Legal and Ethical Boundaries and the Mens Rea Dilemma, https://www.americanbar.org/groups/gpsolo/resources/magazine/2024-november-december/ai-decision-making-legal-ethical-boundaries-mens-rea-dilemma/

  13. The Law of AI is the Law of Risky Agents Without Intentions, https://lawreview.uchicago.edu/online-archive/law-ai-law-risky-agents-without-intentions

  14. Full article: Property ownership and the legal personhood of artificial intelligence, https://www.tandfonline.com/doi/full/10.1080/13600834.2020.1861714

  15. What is emergent behavior in AI? | TEDAI San Francisco, https://tedai-sanfrancisco.ted.com/glossary/emergent-behavior/

  16. Understanding AI Emergent Behavior - OneTask, https://onetask.me/blog/ai-emergent-behavior

  17. Emergent Properties in Artificial Intelligence - GeeksforGeeks, https://www.geeksforgeeks.org/artificial-intelligence/emergent-properties-in-artificial-intelligence/

  18. Emergent Behavior | Deepgram, https://deepgram.com/ai-glossary/emergent-behavior

  19. AI In the Law - International Association of Defense Counsel, https://www.iadclaw.org/assets/1/6/1.1_-_AI_in_the_Law.pdf

  20. Unexpected capabilities in AI - Telnyx, https://telnyx.com/learn-ai/emergent-behavior-ai

  21. Emergent Behavior — Theory - Daposto - Medium, https://daposto.medium.com/emergent-behavior-theory-a58ef44c0cf0

  22. Emergent Abilities of Large Language Models - AssemblyAI, https://www.assemblyai.com/blog/emergent-abilities-of-large-language-models

  23. Emergent behaviors - EasyAI, https://easyai.uni-mainz.de/html/emergent-behaviors.html

  24. What is emergent behavior? - AI Glossary - DocsBot AI, https://docsbot.ai/ai-terms-glossary/term/emergent-behavior

  25. Product Liability for AI.pdf - Jones Day, https://www.jonesday.com/-/media/files/publications/2018/03/mitigating-product-liability-for-artificial-intell/files/product-liability-for-aipdf/fileattachment/product-liability-for-ai.pdf?rev=2705b4f8ed614b46a198556c2d28c18d&sc_lang=en

  26. AI liability – who is accountable when artificial intelligence malfunctions? - Taylor Wessing, https://www.taylorwessing.com/en/insights-and-events/insights/2025/01/ai-liability-who-is-accountable-when-artificial-intelligence-malfunctions

  27. Who is responsible when AI acts autonomously & things go wrong? - Global Legal Insights, https://www.globallegalinsights.com/practice-areas/ai-machine-learning-and-big-data-laws-and-regulations/autonomous-ai-who-is-responsible-when-ai-acts-autonomously-and-things-go-wrong/

  28. Artificial Intelligence: The ‘Black Box’ of Product Liability, https://www.productlawperspective.com/2025/04/artificial-intelligence-the-black-box-of-product-liability/

  29. Product Liability Considerations For AI-Enabled Medtech | Insights | Sidley Austin LLP, https://www.sidley.com/en/insights/publications/2024/01/product-liability-considerations-for-ai-enabled-medtech

  30. New product liability risks for AI products - Taylor Wessing, https://www.taylorwessing.com/en/synapse/2025/ai-in-life-sciences/new-product-liability-risks-for-ai-products

  31. Vicarious Liability for AI - Digital Repository @ Maurer Law, https://www.repository.law.indiana.edu/cgi/viewcontent.cgi?article=11519&context=ilj

  32. “Vicarious Liability for AI” by Mihailis E. Diamantis, https://www.repository.law.indiana.edu/ilj/vol99/iss1/7/

  33. Negligence Liability for AI Developers - Lawfare, https://www.lawfaremedia.org/article/negligence-liability-for-ai-developers

  34. (PDF) Regulating Autonomous AI: Legal Perspectives on Accountability and Liability, https://www.researchgate.net/publication/392700339_Regulating_Autonomous_AI_Legal_Perspectives_on_Accountability_and_Liability

  35. Legal liability concerns in artificial intelligence: What you need to know - Daily Journal, https://www.dailyjournal.com/mcle/1525-legal-liability-concerns-in-artificial-intelligence-what-you-need-to-know

  36. Artificial Intelligence & Product Liability | McCarter & English, LLP, https://www.mccarter.com/insights/artificial-intelligence-product-liability/

  37. Products Liability for Artificial Intelligence | Lawfare, https://www.lawfaremedia.org/article/products-liability-for-artificial-intelligence

  38. Is an AI Chatbot a “Product”? - Winston & Strawn, https://www.winston.com/en/blogs-and-podcasts/product-liability-and-mass-torts-digest/is-an-ai-chatbot-a-product

  39. Artificial intelligence and liability: Key takeaways from recent EU legislative initiatives, https://www.nortonrosefulbright.com/en/knowledge/publications/7052eff6/artificial-intelligence-and-liability

  40. liability for damage caused by artifical intelligence - templars, https://www.templars-law.com/app/uploads/2021/05/LIABILITY-FOR-DAMAGE-CAUSED-BY-ARTIFICAL-INTELLIGENCE.pdf

  41. Resolving the Liability Dilemma in AI Caused Harms - RGNUL Student Research Review, https://www.rsrr.in/post/resolving-the-liability-dilemma-in-ai-caused-harms

  42. AI Systems – who is liable? - Reliability Oxford, http://www.reliabilityoxford.co.uk/ai-systems-who-is-liable/

  43. Comparative Analysis of AI Development Strategies: A Study of China’s Ambitions and the EU’s Regulatory Framework - EuroHub4Sino, https://storage.eh4s.eu/vitrin/files%2FComparative-Analysis-of-AI-Development-Strategies.pdf

  44. (PDF) Navigating AI Regulation: A Comparative Analysis of EU and US Legal Frameworks, https://www.researchgate.net/publication/385087114_Navigating_AI_Regulation_A_Comparative_Analysis_of_EU_and_US_Legal_Frameworks

  45. Global AI Law Comparison: EU, China & USA Regulatory Analysis - Compliance Hub Wiki, https://www.compliancehub.wiki/global-ai-law-snapshot-a-comparative-overview-of-ai-regulations-in-the-eu-china-and-the-usa/

  46. Impact of EU Regulations on AI Adoption in Smart City Solutions: A …, https://www.mdpi.com/2078-2489/16/7/568

  47. Key Issue 3: Risk-Based Approach - EU AI Act, https://www.euaiact.com/key-issue/3

  48. Risk-Based AI Regulation: A Primer on the Artificial Intelligence Act of the European Union, https://www.rand.org/pubs/research_reports/RRA3243-3.html

  49. EU AI Act: Navigating a Brave New World - Latham & Watkins LLP, https://www.lw.com/admin/upload/SiteAttachments/EU-AI-Act-Navigating-a-Brave-New-World.pdf

  50. Artificial intelligence liability directive - European Parliament, https://www.europarl.europa.eu/RegData/etudes/BRIE/2023/739342/EPRS_BRI(2023)739342_EN.pdf

  51. arxiv.org, https://arxiv.org/html/2505.13673v1

  52. The Artificial Intelligence Liability Directive, https://www.ai-liability-directive.com/

  53. DING Xiaodong | Legislation on Artificial Intelligence in China from a Global Comparative Perspective, http://www.socio-legal.sjtu.edu.cn/en/wxzy/info.aspx?itemid=4868

  54. Artificial Intelligence Risk Management Framework: Generative Artificial Intelligence Profile - NIST Technical Series Publications, https://nvlpubs.nist.gov/nistpubs/ai/NIST.AI.600-1.pdf

  55. Algorithmic Black Swans - Washington University Law Review, https://wustllawreview.org/2024/04/18/algorithmic-black-swans/

  56. Liability Rules and Standards | National Telecommunications and Information Administration, https://www.ntia.gov/issues/artificial-intelligence/ai-accountability-policy-report/using-accountability-inputs/liability-rules-and-standards

  57. US States Can (And Will) Continue To Regulate Artificial Intelligence … for Now - Taft Law, https://www.taftlaw.com/news-events/law-bulletins/u-s-states-can-and-will-continue-to-regulate-artificial-intelligence-for-now/

  58. Federal AI Moratorium Out, State AI Regulation Gold Rush In | Insights & Resources, https://www.goodwinlaw.com/en/insights/publications/2025/07/insights-technology-aiml-federal-ai-moratorium-out

  59. US state-by-state AI legislation snapshot | BCLP - Bryan Cave Leighton Paisner, https://www.bclplaw.com/en-US/events-insights-news/us-state-by-state-artificial-intelligence-legislation-snapshot.html

  60. Texas Enacts Responsible AI Governance Act: What Companies Need to Know | JD Supra, https://www.jdsupra.com/legalnews/texas-enacts-responsible-ai-governance-1432018/

  61. Liability for Harms from AI Systems: The Application of U.S. Tort Law …, https://www.rand.org/pubs/research_reports/RRA3243-4.html

  62. Emerging AI Models Challenge Liability Law With Little Precedent - Armilla, https://www.armilla.ai/resources/emerging-ai-models-challenge-liability-law-with-little-precedent

  63. Third-party liability and product liability for AI systems - IAPP, https://iapp.org/news/a/third-party-liability-and-product-liability-for-ai-systems

  64. A Comparative Analysis of Artificial Intelligence Regulatory Law in Asia, Europe, and America - SHS Web of Conferences, https://www.shs-conferences.org/articles/shsconf/pdf/2024/24/shsconf_diges-grace2024_07006.pdf

  65. Preparing for compliance: Key differences between EU, Chinese AI regulations - IAPP, https://iapp.org/news/a/preparing-for-compliance-key-differences-between-eu-chinese-ai-regulations

  66. China’s New AI Regulations - Latham & Watkins LLP, https://www.lw.com/admin/upload/SiteAttachments/Chinas-New-AI-Regulations.pdf

  67. AI Watch: Global regulatory tracker - China | White & Case LLP, https://www.whitecase.com/insight-our-thinking/ai-watch-global-regulatory-tracker-china

  68. China’s AI regulation - Legal issues around AI - Eversheds Sutherland, https://ezine.eversheds-sutherland.com/legal-issues-around-ai/chinas-ai-regulation?overlay=Issues%20to%20consider

  69. Shape of China’s AI regulations and prospects | Law.asia, https://law.asia/china-ai-regulations-legislation-compliance-future-prospects/

  70. AI regulation in the EU, the US and China: An NLP quantitative and qualitative lexical analysis of the official documents - Journal of Ethics and Legal Technologies, https://jelt.padovauniversitypress.it/system/files/papers/JELT-2024-2-7.pdf

  71. Implementing the UK’s AI Regulatory Principles - GOV.UK, https://assets.publishing.service.gov.uk/media/65c0b6bd63a23d0013c821a0/implementing_the_uk_ai_regulatory_principles_guidance_for_regulators.pdf

  72. www.deloitte.com, https://www.deloitte.com/uk/en/Industries/financial-services/blogs/the-uks-framework-for-ai-regulation.html#:~:text=The%20UK%20Government%20has%20adopted,governance%2C%20and%20contestability%20and%20redress.

  73. Closing the gap: Fair victim compensation in the EU AI liability regime - EST, https://esthinktank.com/2025/05/05/closing-the-gap-fair-victim-compensation-in-the-eu-ai-liability-regime/

  74. AI Liability Along the Value Chain - The Mozilla Blog, https://blog.mozilla.org/netpolicy/files/2025/03/AI-Liability-Along-the-Value-Chain_Beatriz-Arcila.pdf

  75. AI Liability Along the Value Chain - AI Governance Library, https://www.aigl.blog/ai-liability-along-the-value-chain/

  76. Liability Issues in the Context of Artificial Intelligence: Legal Challenges and Solutions for AI-Supported Decisions - ResearchGate, https://www.researchgate.net/publication/387015813_Liability_Issues_in_the_Context_of_Artificial_Intelligence_Legal_Challenges_and_Solutions_for_AI-Supported_Decisions

  77. Outsmarting Smart Devices: Preparing for AI Liability Risks and Regulations - University of San Diego, https://digital.sandiego.edu/cgi/viewcontent.cgi?article=1350&context=ilj

  78. Accountability Frameworks for Autonomous AI Agents: Who’s …, https://www.arionresearch.com/blog/owisez8t7c80zpzv5ov95uc54d11kd

  79. In support of “no-fault” civil liability rules for artificial intelligence - PMC - PubMed Central, https://pmc.ncbi.nlm.nih.gov/articles/PMC7851658/

  80. Nature, Nurture, or Neither?: Liability For Automated and Autonomous Artificial Intelligence Torts Based on Human Design and Influences, https://lawreview.gmu.edu/print__issues/nature-nurture-or-neither-liability-for-automated-and-autonomous-artificial-intelligence-torts-based-on-human-design-and-influences/

  81. Autonomous Vehicles: Legal Governance of Civil Liability Risks - Advances in Engineering Innovation, https://www.ewadirect.com/proceedings/lnep/article/view/24130/pdf

  82. Draft Report of the Joint California Policy Working Group on AI Frontier Models—liability and insurance comments - Institute for Law & AI, https://law-ai.org/draft-report-of-the-joint-california-policy-working-group/

  83. Navigating AI Vendor Contracts and the Future of Law: A Guide for Legal Tech Innovators, https://law.stanford.edu/2025/03/21/navigating-ai-vendor-contracts-and-the-future-of-law-a-guide-for-legal-tech-innovators/

  84. Legal Liability for AI-Driven Decisions – When AI Gets It Wrong, Who Can You Turn To?, https://www.hfw.com/insights/legal-liability-for-ai-driven-decisions-when-ai-gets-it-wrong-who-can-you-turn-to/

  85. How Insurance Policies Are Adapting To AI Risk, Law360 - Hunton Andrews Kurth LLP, https://www.hunton.com/insights/publications/how-insurance-policies-are-adapting-to-ai-risk

  86. Insurance Strategies to Mitigate AI and Cyber Risks - American Bar Association, https://www.americanbar.org/groups/business_law/resources/business-law-today/2024-december/insurance-strategies-mitigate-ai-cyber-risks/

  87. Artificial Intelligence, Real Risks: Insurance Coverage That Can Respond to AI-Related Loss and Liability - Brown Rudnick Insights, https://briefings.brownrudnick.com/post/102jz45/artificial-intelligence-real-risks-insurance-coverage-that-can-respond-to-ai-re

  88. Understanding Artificial Intelligence (AI) Risks and Insurance: Insights from A.F. v. Character Technologies - Hunton Andrews Kurth LLP, https://www.hunton.com/hunton-insurance-recovery-blog/understanding-artificial-intelligence-ai-risks-and-insurance-insights-from-a-f-v-character-technologies

  89. Artificial Intelligence - The Implications of Machine Learning in Workers’ Compensation | Laughlin, Falbo, Levy & Moresi LLP - JDSupra, https://www.jdsupra.com/legalnews/artificial-intelligence-the-3773439/

  90. What is Explainable AI (XAI)? - IBM, https://www.ibm.com/think/topics/explainable-ai

  91. (PDF) Explainable AI and Law: An Evidential Survey - ResearchGate, https://www.researchgate.net/publication/376661358_Explainable_AI_and_Law_An_Evidential_Survey

  92. THE JUDICIAL DEMAND FOR EXPLAINABLE ARTIFICIAL INTELLIGENCE, https://columbialawreview.org/content/the-judicial-demand-for-explainable-artificial-intelligence/

  93. Explainable Artificial Intelligence (xAI): Reflections on Judicial System - Kutafin Law Review, https://kulawr.msal.ru/jour/article/download/230/230

  94. What legal consideration explainable AI raises - Nupur Jalan, https://nupurjalan.com/what-legal-consideration-explainable-ai-raises/

  95. A Legal Framework for eXplainable Artificial Intelligence - Research Collection, https://www.research-collection.ethz.ch/bitstream/handle/20.500.11850/699762/CLE_WP_2024_09.pdf

  96. A Systematic Review on Explainable AI in Legal Domain - IJRASET, https://www.ijraset.com/research-paper/explainable-ai-in-legal-domain

  97. Use of Explainable Artificial Intelligence for Analyzing and Explaining Intrusion Detection Systems - MDPI, https://www.mdpi.com/2073-431X/14/5/160

  98. Algorithmic Accountability Frameworks → Term - Pollution → Sustainability Directory, https://pollution.sustainability-directory.com/term/algorithmic-accountability-frameworks/

  99. Why does algorithmic transparency matter and what can we do about it? | OpenGlobalRights, https://www.openglobalrights.org/why-does-algorithmic-transparency-matter-and-what-can-we-do-about-it/

  100. AI’s Complex Role in Criminal Law: Data, Discretion, and Due Process, https://www.americanbar.org/groups/gpsolo/resources/magazine/2025-mar-apr/ai-complex-role-criminal-law-data-discretion-due-process/

  101. 7 actions that enforce responsible AI practices - Huron Consulting, https://www.huronconsultinggroup.com/insights/seven-actions-enforce-ai-practices

  102. Who is accountable for responsible AI? The answer might surprise you - IBM, https://www.ibm.com/think/insights/who-is-accountable-for-responsible-ai

  103. Deepfakes on trial: How judges are navigating AI evidence …, https://www.thomsonreuters.com/en-us/posts/ai-in-courts/deepfakes-evidence-authentication/

  104. AI-Generated Evidence: Challenges and Evolving Standards Under the Federal Rules of Evidence - Today’s Managing Partner, https://todaysmanagingpartner.com/ai-generated-evidence-challenges-and-evolving-standards-under-the-federal-rules-of-evidence/

  105. AI-Generated Evidence Authentication Standards - Attorney Aaron Hall, https://aaronhall.com/ai-generated-evidence-authentication-standards/

  106. Augmenting Forensic Science Through AI: The Next Leap in Multidisciplinary Approaches, https://www.preprints.org/manuscript/202501.1951/v1

  107. Promoting Accountability for AI Misinformation: Intermediary Digital Liability - Global Voices, https://www.globalvoices.org.au/post/promoting-accountability-for-ai-misinformation-intermediary-digital-liability

  108. Trust in artificial intelligence: global insights 2025 - KPMG International, https://kpmg.com/au/en/home/insights/2025/04/trust-in-ai-global-insights-2025.html

  109. PUBLIC TRUST IN AI: IMPLICATIONS FOR POLICY AND REGULATION - Ipsos, https://www.ipsos.com/sites/default/files/ct/news/documents/2024-09/Ipsos%20Public%20Trust%20in%20AI.pdf

  110. Transparency and accountability in AI systems: safeguarding wellbeing in the age of algorithmic decision-making - Frontiers, https://www.frontiersin.org/journals/human-dynamics/articles/10.3389/fhumd.2024.1421273/full

  111. Artificial Intelligence (AI) Trust Framework and Maturity Model: Applying an Entropy Lens to Improve Security, Privacy, and Ethical AI - PMC, https://pmc.ncbi.nlm.nih.gov/articles/PMC10606888/

  112. (PDF) AI’S IMPACT ON PUBLIC PERCEPTION AND TRUST IN DIGITAL CONTENT, https://www.researchgate.net/publication/387089520_AI’S_IMPACT_ON_PUBLIC_PERCEPTION_AND_TRUST_IN_DIGITAL_CONTENT
