“Henna Virkkunen, the EU Commission’s Executive Vice-President for Tech Sovereignty… argu[ed] that the [AI Liability Directive] would have led to fragmented rules across EU member states.”
Last week, reports surfaced that European Commission spokespeople had confirmed the official withdrawal of draft legislative proposals that would have increased the European Union's (EU) regulatory oversight of both standard-essential patent (SEP) licensing and civil liability for artificial intelligence (AI) products and services. While the decision to abandon these proposals was first made public this February, the EU Commission's official withdrawal underscores ongoing tensions between the tech lobby and consumer advocates in the AI sector.
No Foreseeable Agreement Leads to Withdrawal of SEP, AI Liability Frameworks
In September 2022, the EU Commission published a draft proposal for an AI Liability Directive that would have adapted non-contractual civil liability rules to claims against AI providers. The adoption of a single civil liability framework for AI was originally intended to prevent the fragmentation of liability rules adopted by individual EU member states to address harmful acts and other wrongs committed by AI. Months later, in April 2023, the EU Commission published a regulatory proposal designed to facilitate licensing for patents covering inventions incorporated into technological standards. This regulation would have required SEP owners to register these patents with the European Union Intellectual Property Office (EUIPO), which would also have conducted essentiality checks and set criteria for fair, reasonable and non-discriminatory (FRAND) licensing obligations.
The EU Commission’s decision to withdraw the proposed SEP regulation was immediately hailed by several industry insiders concerned about the regulation’s impact on the telecommunications market. A spokesperson for the Council for Innovation Promotion (C4IP) said at the time that the SEP regulation “would have enabled large companies within industries to collectively determine royalty rates.” IPWatchdog CEO & President Gene Quinn remarked that the SEP oversight framework would “render meaningless FRAND… licensing promises in favor of authoritarian decrees.” The EU Commission cited a lack of foreseeable agreement among member states as its main reason for shelving the proposed SEP regulation.
Difficulties in reaching common ground also led to the EU Commission’s withdrawal of the AI Liability Directive. Drawing upon findings from the EU Commission’s February 2020 white paper on AI, this directive would have provided for court-ordered evidentiary disclosures from operators of high-risk AI, a presumption of a causal link between non-compliance and damage caused by AI outputs or failures, and a monitoring program advising the EU Commission as to whether certain AI incidents implicate strict liability regimes.
Since its introduction in late 2022, the AI Liability Directive had made no significant progress toward adoption by EU member states. The directive was originally part of a broader plan for regulating the AI industry alongside the AI Act, which became effective in the EU last August. It was intended to complement the AI Act’s framework for assessing the risk levels of specific AI deployments by providing EU consumers with a cause of action for civil liability, an important enforcement mechanism against non-compliance with the AI Act.
EU’s Virkkunen: AI Act Needs Full Implementation Before Rewriting Liability Rules
Many EU lawmakers initially resisted plans to withdraw the AI Liability Directive, with members of the European Parliament’s Internal Market and Consumer Protection Committee voting to continue working on AI liability rules in the days following the EU Commission’s announcement of the AI directive’s withdrawal. This April, several organizations representing the interests of civil society, including the European Consumer Organisation, the European Center for Not-for-Profit Law and Mozilla Foundation, sent an open letter addressed to Henna Virkkunen, the EU Commission’s Executive Vice-President for Tech Sovereignty, Security and Democracy, urging the EU Commission to immediately begin drafting new civil liability rules for AI providers.
Virkkunen publicly defended the EU Commission’s withdrawal of the AI Liability Directive before the EU Parliament’s Committee on Legal Affairs this August, arguing that the directive would have led to fragmented rules across EU member states. Virkkunen noted that new AI liability rules likely wouldn’t be drafted until the AI Act is fully implemented across the EU, although she reiterated her commitment to drafting liability rules supporting a true single market for AI across Europe.
While the application of civil liability in the AI context naturally drew opponents from among the Big Tech lobby, some critics of the AI directive also argued that the proposed framework left certain liability gaps unaddressed. A 2023 article published in npj Digital Medicine noted that it would be difficult under the proposed framework for civil liability to attach to black-box medical AI systems, which provide diagnoses and recommendations based on opaque decision-making processes, in cases where physicians cannot independently assess AI outputs or where decisions made by AI are not subject to independent physician review.
Confirmation of the AI Liability Directive’s withdrawal came days before certain oversight provisions of the AI Act took effect, requiring EU member states to monitor domestic businesses’ compliance with the act. Foreign companies operating in the EU’s AI market will also be governed by the AI Code of Practice established under the AI Act, which mandates certain transparency and compliance standards for general-purpose AI models. Last week, both Google and X announced that they would sign the EU’s general-purpose AI rules, with X’s commitment limited to the code’s chapter on safety and security.
Image Source: Deposit Photos
Author: sashk0
Image ID: 35711349