Advantages and Disadvantages of the EU AI Act (as of September 2023)

In our detailed analysis, we have curated a thorough list of both the advantages and disadvantages associated with the AI Act as of September 2023.

Drawing from a vast array of references and expert opinions, we’ve endeavored to present a holistic viewpoint. 

This synthesis is designed to provide readers with a centralized overview, encapsulating the breadth of discussions and arguments surrounding the AI Act. 

By navigating through our report, stakeholders can gain a deep understanding of the Act’s potential benefits and limitations, facilitating informed decision-making and discussions.

 

Background:

In April 2021, the European Commission introduced the EU’s first regulatory framework for artificial intelligence (AI). The draft AI Act represents the EU’s pioneering effort to establish comprehensive rules governing the development and use of AI within its jurisdiction.

The Council adopted the general approach of EU Member States in December 2022. On June 14, 2023, Members of the European Parliament (MEPs) endorsed Parliament’s negotiating position on the AI Act. Trilogue negotiations have now commenced among EU lawmakers to finalize the legislation.

These negotiations encompass substantial revisions to the Commission’s initial proposal (changes to the definition of AI systems, an expanded list of prohibited AI systems, and the imposition of obligations on general-purpose AI and generative AI systems such as ChatGPT).

As the regulation of AI is currently the subject of intense deliberation, this article aims to provide a clear overview of both the advantages and disadvantages of the EU AI Act as of September 2023. 

Each position, whether pro or con, is supported by quotations from publications in the field of AI regulation.

ADVANTAGES: 

The EU AI Act has a number of advantages, including:

1. It provides an internationally aligned definition of an AI system (the OECD definition). Parliament established a technology-neutral, uniform definition of AI that could be applied to future AI systems.

Quote: 

“The OECD defines an Artificial Intelligence (AI) System as a machine-based system that can, for a given set of human-defined objectives, make predictions, recommendations, or decisions influencing real or virtual environments.”

The definition is expected to change further with the removal of “machine-based,” according to Euractiv sources. The similarity to the OECD definition is acknowledged with an addition to the Act’s preamble “to ensure legal certainty, harmonisation and wide acceptance,” notes Euractiv.

The narrower definition of AI is more in line with conservative political groupings, as left-of-center politicians had been pushing for a broader, more encompassing understanding of the technology.

 

Source: https://www.biometricupdate.com/202303/eu-ai-act-definition-of-ai-aligns-with-oecd-definition-biometric-risk-updated#:~:text=OECD%20definition%3A,influencing%20real%20or%20virtual%20environments.%E2%80%9D 

Author: Frank Hersey

 

2. It excludes certain unfair contractual terms from AI contracts with small and medium-sized enterprises (SMEs) or startups

Quote: 

“The EP’s Position introduces a new provision restricting the ability of a contracting party to unilaterally impose certain unfair contractual terms related to the supply of tools, services, components or processes that are used or integrated in a high-risk AI system, or the remedies for the breach or the termination of obligations related to these systems in contracts with SMEs or startups. Examples of prohibited provisions include contractual terms that: (i) exclude or limit the liability of the party that unilaterally imposed the term for intentional acts or gross negligence; (ii) exclude the remedies available to the party upon whom the term has been unilaterally imposed in the case of non-performance of contractual obligations or the liability of the party that unilaterally imposed the term in the case of a breach of those obligations; and (iii) give the party that unilaterally imposed the term the exclusive right to determine whether the technical documentation and information supplied are in conformity with the contract or to interpret any term of the contract.”

Source: https://www.huntonprivacyblog.com/2023/06/15/european-parliament-agrees-on-position-on-the-ai-act/

Author: Hunton Andrews Kurth

 

3. It protects human rights and fundamental freedoms, which is essential to ensure that AI systems are employed in a manner that serves the best interests of society.

Quote: 

“The inclusion of protection for ‘fundamental rights’ in the AI Act was itself a form of institutional innovation, given that product safety legislation usually focuses on ‘health and safety’.”

Source: https://www.adalovelaceinstitute.org/event/eu-ai-standards-civil-society-participation/

Author: Ada Lovelace Institute

 

4. It contains measures to support innovation

Quote: 

“Title V of the AI Act, which contains measures in support of innovation (including AI regulatory sandboxes), is expanded and clarified by the EP’s Position. One of the new provisions requires EU Member States to promote research and development of AI solutions which support socially and environmentally beneficial outcomes, such as (i) solutions to increase accessibility for persons with disabilities; (ii) tackle socio-economic inequalities, and (iii) meet sustainability and environmental targets.”

Source: https://www.huntonprivacyblog.com/2023/06/15/european-parliament-agrees-on-position-on-the-ai-act/

Author: Hunton Andrews Kurth

 

5. It provides that general purpose AI systems are not considered inherently high risk

Quote: 

“Still, OpenAI’s lobbying effort appears to have been a success: the final draft of the Act approved by E.U. lawmakers did not contain wording present in earlier drafts suggesting that general purpose AI systems should be considered inherently high risk. Instead, the agreed law called for providers of so-called “foundation models,” or powerful AI systems trained on large quantities of data, to comply with a smaller handful of requirements including preventing the generation of illegal content, disclosing whether a system was trained on copyrighted material, and carrying out risk assessments. OpenAI supported the late introduction of “foundation models” as a separate category in the Act, a company spokesperson told TIME.”

Source: https://time.com/6288245/openai-eu-lobbying-ai-act/

Author: Billy Perrigo

 

6. It introduces a GDPR-like right to lodge a complaint with a supervisory authority

Quote: 

“MEPs want to boost citizens’ right to file complaints about AI systems and receive explanations of decisions based on high-risk AI systems that significantly impact their rights. MEPs also reformed the role of the EU AI Office, which would be tasked with monitoring how the AI rulebook is implemented.”

 

Source: https://www.europarl.europa.eu/news/en/press-room/20230505IPR84904/ai-act-a-step-closer-to-the-first-rules-on-artificial-intelligence

Author: European Parliament

 

7. Parliament requires one national surveillance authority (NSA) in each member state

Quote: 

“In all three AI Act proposals, there are several areas where existing agencies would be anointed as MSAs – this includes AI in financial services, AI in consumer products, and AI in law enforcement. In the Council and Commission proposals, this approach could be expanded. It allows for a member state to, for example, make its existing agency in charge of hiring and workplace issues the MSA for high-risk AI in those areas, or alternatively name the education ministry the MSA for AI in education. However, the Parliament proposal does not allow for this – aside from a few selected MSAs (e.g., finance and law enforcement), member states must create a single NSA for enforcing the AI Act. In the Parliament version, the NSA even gets some authority over consumer product regulators and can override those regulators on issues specific to the AI Act.

Between these two approaches, there are a few important trade-offs to consider. The Parliament approach through a single NSA is more likely able to hire talent, build internal expertise, and effectively enforce the AI Act, as compared to a wide range of distributed MSAs. Further, the centralization in each member state NSA means that coordination between the member states is easier—there is generally just one agency per member state to work with, and they all have a voting seat on the board that manages the AI Office, a proposed advisory and coordination body. This is clearly easier than creating a range of coordination councils between many sector-specific MSAs.

However, this centralization comes at a cost, which is that this NSA will be separated from any existing regulators in member states. This leads to the unenviable position that algorithms used for hiring, workplace management, and education will be governed by different authorities than human actions in the same exact areas. It’s also likely that the interpretation and implementation of the AI Act will suffer in some areas, since AI experts and subject matter experts will be in separate agencies. Looking at early examples of application-specific AI regulations demonstrates how complex they can be (see for instance, the complexity of a proposed U.S. rule on transparency and certification of algorithms in health IT systems or the Equal Employment Opportunity Commission’s guidance for AI hiring under the Americans with Disabilities Act).

This is a difficult decision with unavoidable trade-offs, but because the approach to government oversight affects every other aspect of the AI Act, it should be prioritized, not postponed, in trilogue discussions.”

Source: https://www.brookings.edu/articles/key-enforcement-issues-of-the-ai-act-should-lead-eu-trilogue-debate/

Author: Alex Engler

 

8. Independent review by notified bodies is never strictly required

Quote: 

“Government market surveillance is only the first of two or three (the Parliament version adds individual redress) mechanisms for enforcing the AI Act. The second mechanism is a set of processes to approve organizations that would review and certify high-risk AI systems. These organizations are called ‘notified bodies’ when they receive a notification of approval from a government agency selected for this task, which itself is called a ‘notifying authority.’ This terminology can be quite confusing, but the general idea is that EU member states will approve organizations, including non-profits and companies, to act as independent reviewers of high-risk AI systems, giving them the power to approve those systems as meeting AI Act requirements.

It is the aspiration of the AI Act that this will foster a European ecosystem of independent AI assessment, resulting in more transparent, effective, fair, and risk-managed high-risk AI applications. Certain organizations already exist in this space, such as the algorithmic auditing company Eticas AI, AI services and compliance provider AppliedAI, the digital legal consultancy AWO, and the non-profit Algorithmic Audit. This is a goal that other governments, such as the UK and U.S., have encouraged through voluntary policies. However, it is not clear that current AI Act proposals will significantly support such an ecosystem.

For most types of high-risk AI, this independent review is not the only path for providers to sell or deploy high-risk AI systems. Alternatively, providers can develop AI systems to meet a forthcoming set of standards, which will be a more detailed description of the rules set forth in the AI Act, and simply self-attest that they have done so, along with some reporting and registration requirements. The independent review is intended to be based on required documentation of the technical performance of the high-risk AI system, as well as documentation of the management systems. This means the review can only really start once this documentation is completed, which is otherwise when an AI developer could self-attest as meeting the AI Act requirements. Therefore, the self-attestation process is sure to be faster and more certain (as an independent assessment could come back negatively) than paying for an independent review of the AI system.

When will companies choose independent review by a notified body? A few types of biometric AI systems, such as biometric identification (specifically of more than one person, but less than mass public surveillance) and biometric analysis of personality characteristics (not including sensitive characteristics such as gender, race, citizenship, and others, for which biometric AI is banned) are specially encouraged to undergo independent review by a notified body. However, even this is not required. Similarly, the new rules proposed by Parliament on foundation models require extensive testing, for which a company may, but does not need to, employ independent evaluators.”

Source: https://www.brookings.edu/articles/key-enforcement-issues-of-the-ai-act-should-lead-eu-trilogue-debate/

Author: Alex Engler

 

9. The Council and the EP added a “significant risk of harm” criterion, intended to capture only those AI systems in the Annex III areas and use cases that are truly high risk.

Quote: “The scope of ANNEX III remains too broad; as currently conceived, the number of AI systems subject to high-risk classification is likely to be significantly higher than intended by the legislator. A blanket classification of AI systems in certain areas such as education and vocational training or employment, workers management and access to self-employment as ‘high-risk’ is excessive and misses the point: Rather than discouraging innovative, promising use cases a priori, legislators should focus on promoting accountability and due diligence requirements for providers and deployers of AI systems in these areas.”

 

Source: https://ki-verband.de/wp-content/uploads/2023/07/Position-Paper_AI-Act-Trilogue_GermanAIAssociation.pdf

Author: German AI Association

 

10. It sets the obligation to design and apply technical standards for the development and use of AI

Quote: 

“Efforts to regulate artificial intelligence (AI) must aim to balance protecting the health, safety, and fundamental rights of individuals while reaping the benefits of innovation. These regulations will protect people from physical harms (like AI cars crashing), less visible harms (like systematized bias), harms from misuse (like deepfakes), and others. Regulators around the world are looking to the European Union’s AI Act (AIA), the first and largest of these efforts, as an example of how this balance can be achieved. It is the bar against which all future regulation will be measured. Notably, the act itself is intended only to outline the high-level picture of this balance. Starting in early 2023, accompanying technical standards will be developed in parallel to the act, and they ultimately will be responsible for establishing many of the trade-offs; early signs suggest that developing effective standards will be incredibly difficult.”

Source: https://www.lawfaremedia.org/article/eus-ai-act-barreling-toward-ai-standards-do-not-exist

Author: Hadrien Pouget

11. Providers of certain AI systems may rebut the presumption that the system should be considered a high-risk AI system

Quote: 

“Under the rules proposed by the EP, providers of certain AI systems may rebut the presumption that the system should be considered a high-risk AI system. This would require that a notification be submitted to a supervisory authority or the AI Office (the latter if the AI System is intended to be used in more than one Member State), which shall review and reply, within three months, to clarify whether they deem the AI system to be high risk.”

Source: https://www.huntonprivacyblog.com/2023/06/15/european-parliament-agrees-on-position-on-the-ai-act/

Author: Hunton Andrews Kurth

12. Alongside the AI Act, the EU is creating liability rules to make it easier to sue AI companies for harm

Quote: 

“The new liability bill would give people and companies the right to sue for damages after being harmed by an AI system. The goal is to hold developers, producers, and users of the technologies accountable, and require them to explain how their AI systems were built and trained. Tech companies that fail to follow the rules risk EU-wide class actions.”

Source: https://www.technologyreview.com/2022/10/01/1060539/eu-tech-policy-harmful-ai-liability/

Author: Melissa Heikkilä

 

DISADVANTAGES: 

 

The EU AI Act also has a number of disadvantages, including:

1. The EU AI Act encompasses an extensive range of issues, seeking to address AI’s influence on areas such as national security, employment, misinformation, bias, and democratic principles. However, this all-encompassing approach may be overly ambitious, as attempting to resolve all of these issues within a single piece of legislation is exceptionally challenging.

Quote: 

“In this paper, we argue that regulation, and EU regulation in particular, is not only ill-prepared for the advent of this new generation of AI models, but also sets the wrong focus by quarreling mainly about direct regulation in the AI Act at the expense of the, arguably, more pressing content moderation concerns under the Digital Services Act (DSA). Significantly, the EU is spearheading efforts to effectively regulate AI systems, with specific instruments (AI Act, AI Liability Directive), software regulation (Product Liability Directive) and acts addressed toward platforms, yet covering AI (DSA; Digital Markets Act). Besides, technology-neutral laws, such as non-discrimination law, and also data protection law, continue to apply to AI systems. As we shall see, it may be precisely their technology-agnostic features that make them better prepared to handle the risks of LGAIMs than the technology-specific AI regulation that has been enacted or is in preparation.  ”

Source: https://arxiv.org/ftp/arxiv/papers/2302/2302.02337.pdf

Author: Philipp Hacker

2. The AI Act does not cover all AI products, so existing EU law, such as consumer protection, data protection, or product safety legislation, must still be enforced effectively

Quote: 

“Effective enforcement of other horizontal EU legislation for the protection of consumers and citizens will remain a necessity even after the AI Act enters into application, as unfortunately a significant number of AI products (which are not considered high-risk or generative AI) will not be regulated specifically under its framework.” 

Source: https://www.beuc.eu/letters/eu-us-ai-voluntary-code-conduct-and-ai-pact-europe

Author: BEUC

 

3. Regulation should focus on concrete high-risk applications, not on technology

Quote: 

 “Regulation should focus on concrete high-risk applications, and not the pre-trained model itself, and include (i) obligations regarding transparency and (ii) risk management”.

Source: https://www.linkedin.com/posts/philipp-hacker-078940257_regulating-chatgpt-and-other-large-generative-activity-7029727555591000064-wG7Z/?originalSubdomain=re

Author: Philipp Hacker

 

4. The AI Act would jeopardize Europe’s competitiveness and technological sovereignty without effectively addressing the challenges we face now and in the future

Quote: 

“The letter, sent on Friday to the European Commission, the parliament and member states, says: ‘In our assessment, the draft legislation would jeopardise Europe’s competitiveness and technological sovereignty without effectively tackling the challenges we are and will be facing.’”

Source: https://techcrunch.com/2023/06/30/european-vcs-tech-firms-sign-open-letter-warning-against-over-regulation-of-ai-in-draft-eu-laws/?guccounter=1&guce_referrer=aHR0cHM6Ly93d3cuZ29vZ2xlLmNvbS8&guce_referrer_sig=AQAAAGXUXDJrWx3ALqU3EHYJMUNHHrv5ZPSN1fESU28_5_1oyeWdqNZ5BFuLpnHqo83fsIY5JC_kYAHWpBfwcXKgaNYgHBg6mJKtHVFBvX0jH6Jqp4Qo6Gk_wZtB7ftAooGjQ0MUugY_xhCN7AAiWupmROmOy3CCbyPVJovTwbkPKq2R

Author: Mike Butcher

 

5. The fines imposed by the AI Act are too high

Quote:

“The EP’s Position substantially amends the fines that can be imposed under the AI Act. The EP proposes that:

- Non-compliance with the rules on prohibited AI practices shall be subject to administrative fines of up to 40,000,000 EUR or, if the offender is a company, up to 7% of its total worldwide annual turnover for the preceding financial year, whichever is higher.

- Non-compliance with the rules under Article 10 (data and data governance) and Article 13 (transparency and provision of information to users) shall be subject to administrative fines of up to 20,000,000 EUR or, if the offender is a company, up to 4% of its total worldwide annual turnover for the preceding financial year, whichever is higher.

- Non-compliance with other requirements and obligations under the AI Act shall be subject to administrative fines of up to 10,000,000 EUR or, if the offender is a company, up to 2% of its total worldwide annual turnover for the preceding financial year, whichever is higher.

- The supply of incorrect, incomplete or misleading information to notified bodies and national competent authorities in reply to a request shall be subject to administrative fines of up to 5,000,000 EUR or, if the offender is a company, up to 1% of its total worldwide annual turnover for the preceding financial year, whichever is higher.

It is also important to note that the EP’s Position proposes that the penalties (including fines) under the AI Act, as well as the associated litigation costs and indemnification claims, may not be subject to contractual clauses or other forms of burden-sharing agreements between providers and distributors, importers, deployers, or any other third parties.”

Source: https://www.huntonprivacyblog.com/2023/06/15/european-parliament-agrees-on-position-on-the-ai-act/

Author: Hunton Andrews Kurth
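
To make the tiered caps above concrete: each tier is simply a “higher of” rule. Below is a minimal, hypothetical Python sketch of that rule; the amounts and percentages come from the quote, while the tier keys and function name are purely illustrative.

```python
# Illustrative sketch of the EP's proposed fine caps ("whichever is higher").
# Amounts are taken from the quote above; all names are hypothetical.

FINE_TIERS = {
    "prohibited_practices": (40_000_000, 0.07),   # prohibited AI practices
    "data_and_transparency": (20_000_000, 0.04),  # Articles 10 and 13
    "other_obligations": (10_000_000, 0.02),      # other AI Act requirements
    "misleading_information": (5_000_000, 0.01),  # incorrect info to authorities
}

def max_fine(tier: str, worldwide_annual_turnover_eur: float) -> float:
    """Cap for a company: the fixed amount or the turnover share, whichever is higher."""
    fixed_cap, turnover_share = FINE_TIERS[tier]
    return max(fixed_cap, turnover_share * worldwide_annual_turnover_eur)

# Example: a company with EUR 2 billion turnover breaching the prohibited-practices
# rules faces a cap of max(40 MEUR, 7% of 2,000 MEUR) = 140 MEUR.
print(max_fine("prohibited_practices", 2_000_000_000))  # 140000000.0
```

For small companies the fixed amount dominates; for large ones the turnover share does, which is precisely why critics view the caps as steep.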

 

6. Technical standards for AI technologies have not yet been developed

Quote: 

“But while the AI Act could become law as soon as 2023, to become fully operational, the rules contained in the Act would need to be supported by adequate standards, a process led by two crucial – yet mostly unknown – actors. These are the European Committee for Standardization (CEN) and the European Committee for Electrotechnical Standardization (CENELEC). Both are private international nonprofits, with 34 member countries and close connections with European stakeholders (‘European’ in the broad sense, by including the UK for instance).”

Source: https://www.ceps.eu/with-the-ai-act-we-need-to-mind-the-standards-gap/

Author: Clément Perarnaud

 

7. It is complex and may be difficult to implement, because the Act contains too many rules on generative AI

Quote: 

“‘The European Parliament’s position seems excessive at a time when we have a pressing obligation to develop generative AI models in Europe over the coming months, in order to be autonomous and not have to depend on non-European models in the years and decades to come,’ Barrot said, arguing that chatbots such as ChatGPT would have to abide by some of the same rules as high-risk systems in areas such as health and transportation.”

Source: https://www.politico.eu/article/france-warns-eu-parliament-against-killing-a-european-chatgpt/

Author: Laura Kayali

 

8. The AI Act also lacks a comprehensive process for rights-based assessment of an AI system’s impacts and risks, applied throughout the AI lifecycle rather than just at market entry

Quote: 

“The alleged ‘risk-based’ nature of the Act is illusory and arbitrary. Impacts on groups and on society as a whole need to be considered, as well as risks to individuals and their rights, and risks should be considered throughout the AI lifecycle not just at market entry. The AI Act does not lay down criteria for when AI poses unacceptable risks to society and individuals. It merely designates set lists of what categories of AI systems are deemed ‘unacceptable risk’ and thus banned from the EU (a small number of systems, notably including some public-space, real-time, law-enforcement biometric systems).”

Source: https://www.adalovelaceinstitute.org/wp-content/uploads/2022/03/Expert-opinion-Lilian-Edwards-Regulating-AI-in-Europe.pdf

Author: Lilian Edwards

 

9. High-risk classification should rest on clear criteria that AI providers can self-assess against, set out as binding rules rather than ‘soft’ guidance

Quote: 

“The AI Act mandates the developers of systems at high risk of causing harm to people’s safety and fundamental rights to comply with a stricter regime concerning risk management, data governance and technical documentation.

How systems should fall into this category was subject to hefty amendments. Initially, the draft law automatically classified high-risk AI applications that fell into a list of use cases in Annex III. Both co-legislators removed this automatism and introduced an ‘extra layer’.

For the Council, this layer concerns the significance of the output of the AI system in the decision-making process, with purely accessory outputs kept out of the scope.

MEPs introduced a system whereby AI developers should self-assess whether the application covered by Annex III was high risk based on guidance provided by the EU Commission. If the companies consider their system is not high-risk, they would have to inform the relevant authority, which should reply within three months if they consider there was a misclassification.

Again, the options involve maintaining the Council’s general approach or moving toward the Parliament, but several midway solutions are also envisaged in this case.

One option is to adopt the MEPs’ version but without the notification of the competent authorities. Alternatively, this version could be further refined, introducing clear criteria for AI providers to self-assess as binding rules rather than ‘soft’ guidance.

The final proposal is the Parliament’s system without notification and with binding criteria, plus exploring “further options to provide additional guidance for providers, for example, using a repository of examples of AI systems covered by Annex III that should not be considered high-risk”.”

Source: https://www.euractiv.com/section/artificial-intelligence/news/ai-act-spanish-presidency-sets-out-options-on-key-topics-of-negotiation/

Author: Luca Bertuzzi

10. Government licensing can kill competition between AI companies

Quote: 

“When Big Tech asks to be regulated, we must ask if those regulations might effectively cement Big Tech’s own power. For example, we’ve seen multiple proposals that would allow regulators to review and license AI models, programs, and services. Government licensing is the kind of burden that big players can easily meet; smaller competitors and nonprofits, not so much. Indeed, it could be prohibitive for independent open-source developers. We should not assume that the people who built us this world can fix the problems they helped create; if we want AI models that don’t replicate existing social and political biases, we need to make enough space for new players to build them.” 

Source: https://www.eff.org/deeplinks/2023/07/generative-ai-policy-must-be-precise-careful-and-practical-how-cut-through-hype

Author: Corynne McSherry

11. It contains a proposal to “watermark” AI-generated works

Quote: 

“…there have been several proposals to require generative AI users and developers to “watermark” the works they produce. Assuming this is technically possible (it might be harder to do this for music, say, than images), history suggests it won’t be very effective against the uses we might worry about the most. “Advisory” watermarking by default, such as DALL-E’s automatic insertion of a few colored squares in the corner of an image, can help indicate it was AI-generated, so the person who shares it doesn’t unintentionally deceive. But those watermarks can easily be cropped out by the more sophisticated fraudsters we might really want to deter. And “adversarial” watermarking, whereby the AI model generates a watermark that is so deeply embedded in the output that it cannot be removed, has almost always been defeated in practice. In short, watermarking can have some benefits but it’s inevitably a cat and mouse game. If we’re aiming at serious harms by motivated people, we need strategies that work. ”

Source: https://www.eff.org/deeplinks/2023/07/generative-ai-policy-must-be-precise-careful-and-practical-how-cut-through-hype

Author: Corynne McSherry
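
The weakness of “advisory” watermarks described in the quote is easy to demonstrate. The sketch below is purely illustrative (it assumes the Pillow imaging library is installed; all function names are invented): it stamps corner squares in the style described above and shows that a single crop removes them.

```python
# Hypothetical illustration of an "advisory" visible watermark and how
# trivially it can be cropped away. Requires Pillow (pip install Pillow).
from PIL import Image, ImageDraw

def add_corner_watermark(image: Image.Image, size: int = 16) -> Image.Image:
    """Stamp a row of colored squares in the bottom-right corner."""
    marked = image.copy()
    draw = ImageDraw.Draw(marked)
    colors = ["yellow", "cyan", "green", "red", "blue"]
    x0 = marked.width - size * len(colors)
    y0 = marked.height - size
    for i, color in enumerate(colors):
        left = x0 + i * size
        draw.rectangle([left, y0, left + size - 1, marked.height - 1], fill=color)
    return marked

def crop_out_watermark(image: Image.Image, margin: int = 16) -> Image.Image:
    """Defeat the advisory watermark by cropping off the bottom strip."""
    return image.crop((0, 0, image.width, image.height - margin))

# Any image loses its advisory mark after one crop.
generated = Image.new("RGB", (256, 256), "white")
marked = add_corner_watermark(generated)
clean = crop_out_watermark(marked)
assert clean.size == (256, 240)
```

“Adversarial” watermarks embed the signal in the output itself rather than in a visible corner, but, as the source notes, those too have almost always been defeated in practice.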

 

12. We should worry about giving any central authority the power to control access to generative AI tools

Quote: 

“… there’s a lot of rhetoric around the risks of allowing open-source AI development compared to closed systems where a central authority can control what can and cannot be done by users. We’ve seen this movie before — open systems are often attacked with this claim, especially by those who benefit from a closed world. Even taking the concern at face value, though, it’s hard to see how government can regulate use and development without restricting freedom of expression and the right to access new information and art. In the U.S., courts have long recognized that code is speech, and policing its development may run afoul of the First Amendment. More generally, just as we would not tolerate a law allowing the government to control access to and use of printing presses, we should worry about giving any central authority the power to control access to generative AI tools and, presumably, decide in advance what kinds of expression those tools can be allowed to generate. Moreover, placing controls on open-source development in some countries may just ensure that developers in other countries have better opportunities to learn and innovate.”

Source: https://www.eff.org/deeplinks/2023/07/generative-ai-policy-must-be-precise-careful-and-practical-how-cut-through-hype

Author: Corynne McSherry

 

13. The AI Act contains unclear rules on civil liability for AI harms

Quote: 

“The processes for complaints, redress, and civil liability by individuals harmed by AI systems has changed significantly across the various versions of the AI Act. The proposed Commission version of the AI Act from April 2021 did not include a path for complaint or redress for individuals. Under the Council proposal, any individual or organization may submit complaints about an AI system to the pertinent market surveillance authority. The Parliament has proposed a new requirement to inform individuals if they are subject to a high-risk AI system, as well as an explicit right to an explanation if they are adversely affected by a high-risk AI system (with none of the ambiguity of GDPR). Further, individuals can complain to their NSA and have a right to judicial remedy if complaints to that NSA go unresolved, which adds an additional path to enforcement.

While liability is not explicitly covered in the AI Act, a new proposed AI Liability Directive intends to clarify the role of civil liability for damage caused by AI systems in the absence of a contract. Several aspects of AI development challenge pre-existing liability rules, including difficulty ascribing responsibility to specific individuals or organizations as well as the opacity of decision-making by some “black box” AI systems. The AI Liability Directive seeks to reduce this uncertainty by first clarifying rules on the disclosure of evidence. These rules state that judges may order disclosure of evidence by providers and users of relevant AI systems when supported by evidence of plausible damage. Second, the directive clarifies that fault of a defendant can be proven by demonstrating (1) non-compliance with AI Act (or other EU) rules, (2) that this non-compliance was likely to have influenced the AI system’s output, and (3) that this output (or lack thereof) gave rise to the claimant’s damages.

Even if Parliament’s version of the AI Act and the AI Liability Directive are passed into law, it is unclear what the effect of these individual redress mechanisms will be. For instance, the right to an explanation might further incentivize companies to use simpler models for high-risk AI systems, such as choosing tree-based models over more “black box” models such as neural networks, as is the common result of the same requirement in the U.S. consumer finance market.”

Source: https://www.brookings.edu/articles/key-enforcement-issues-of-the-ai-act-should-lead-eu-trilogue-debate/

Author: Alex Engler
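
The directive’s fault test, as summarized in the quote, is a three-part conjunction. A minimal sketch, with hypothetical names, makes that structure explicit:

```python
# Sketch of the AI Liability Directive's fault test as summarized in the quote:
# fault can be demonstrated only when all three prongs hold. Names are illustrative.

def fault_demonstrated(
    non_compliant_with_eu_rules: bool,      # (1) breach of AI Act or other EU rules
    breach_likely_influenced_output: bool,  # (2) breach likely influenced the output
    output_gave_rise_to_damages: bool,      # (3) that output caused the claimed damage
) -> bool:
    return (
        non_compliant_with_eu_rules
        and breach_likely_influenced_output
        and output_gave_rise_to_damages
    )

# A compliant provider (prong 1 fails) cannot be found at fault under this test.
assert fault_demonstrated(False, True, True) is False
```

The uncertainty the article highlights lies not in this structure but in how each prong would be evidenced in practice, especially for opaque “black box” systems.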

14. New definitions of Foundation Models (FM) and General Purpose AI Systems (GPAIS) are too broad and lack perspective

Quote: 

“The added definition for foundation models in the EP proposal does not provide a clear definition of this technology. Instead, it is too broad and lacks perspective, which will be detrimental to the European AI ecosystem. The question is therefore whether the AI Act needs to define GPAIS at all. The German AI Association recommends the removal of the definition of GPAIS and a focus on defining foundation models.”

Source: https://ki-verband.de/wp-content/uploads/2023/07/Position-Paper_AI-Act-Trilogue_GermanAIAssociation.pdf

Author: German AI Association

 

15. The AI Act is not the right instrument for monitoring and calculating energy consumption and environmental impact

Quote: 

“The EP text in art. 12 also requires AI providers to include monitoring and calculation of energy consumption and impact on the environment. The AI Act is not the right instrument for these purposes. By regulating the energy consumption of computer servers through Commission Regulation (EU) 617/2013, this issue is already dealt with.”

Source: https://www.amchameu.eu/position-papers/artificial-intelligence-act-priorities-trilogues

Author: American Chamber of Commerce to the European Union

 

Conclusions

The EU’s legislative initiative to comprehensively regulate the ever-evolving realm of complex AI technology exhibits both notable advantages and noteworthy shortcomings.

On one hand, by establishing guardrails for deployers of AI technologies, the AI Act increases trust in AI and protects the interests of individuals from potential infringements.

On the other hand, it introduces impediments to the advancement of innovations, primarily through the implementation of bureaucratic market access mechanisms for AI companies. Consequently, excessive regulation of AI technologies has the potential to stifle AI development within the EU.

Furthermore, it is evident that the diversity and intricacy of AI technologies cannot be adequately addressed by a single legislative act.