Analysis: Draft EU AI Act v. China’s Interim Measures for the Management of Generative Artificial Intelligence Services (summer 2023)

The field of artificial intelligence (AI) has been experiencing rapid growth and innovation, leading to new challenges and opportunities for regulators worldwide.

In 2023, the European Union (EU) and China each introduced landmark regulations aimed at governing the use of generative artificial intelligence services.

The EU AI Act and China’s Interim Measures for the Management of Generative Artificial Intelligence Services (hereafter – 2023 China’s Measures for Generative AI) represent significant efforts to strike a balance between fostering technological advancements and ensuring responsible AI usage.

We analyze key provisions of both regulations and highlight their respective approaches in Table 1 below.

 

Table 1. Key provisions of the draft EU AI Act and 2023 China’s Measures for Generative AI

Generative AI systems

EU AI Act (the EP’s approach): A generative AI system is an application (such as ChatGPT) built on top of a foundation model (such as GPT-3.5). Generative AI technologies such as ChatGPT must adhere to certain transparency rules, which include:
1. Clearly indicating when content has been generated by AI;
2. Ensuring the model is engineered in a way that it cannot produce illegal or harmful content;
3. Making available summaries of the copyrighted data used during the model’s training.

2023 China’s Measures for Generative AI: The rules apply only to generative AI services that are available to the general public, not to those being developed in research institutions. Providers should ensure that both the training data and the generated content are “true and accurate”, must conduct security assessments of their products, and must keep user information secure. Generative AI services in China must also adhere to the “core values of socialism”. Generative AI services will need to obtain a license to operate. If a provider finds “illegal” content, it should take measures to stop generating that content, improve the algorithm and then report the material to the relevant authority.

 

A risk-based approach

EU AI Act (the EP’s approach): The Act adopts a risk-oriented strategy and sets forth responsibilities for both providers and users based on the potential risks associated with the AI’s capabilities (a compact sketch of the tiers follows below):
1. Unacceptable risk: AI systems considered a threat to people; these will be banned (cognitive behavioral manipulation, social scoring, and real-time remote biometric identification systems).
2. High risk: AI systems that negatively affect health, safety, fundamental rights or the environment.
3. Limited risk: AI systems that must comply with minimal transparency requirements, allowing users to make informed decisions.
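As an illustrative aid only, the tiered logic above can be expressed as a small lookup from risk tier to a simplified obligation. This is a minimal Python sketch: the tier names follow the draft’s categories as summarised above, while the class, dictionary and function names, and the obligation wording, are our own paraphrases rather than the Act’s text.

```python
from enum import Enum

class RiskTier(Enum):
    """Risk tiers in the EP's draft, as summarised above (illustrative only)."""
    UNACCEPTABLE = "unacceptable"  # e.g. social scoring, cognitive behavioral manipulation
    HIGH = "high"                  # e.g. systems affecting health, safety or fundamental rights
    LIMITED = "limited"            # e.g. systems subject only to transparency duties

# Simplified paraphrases of the obligations attached to each tier (not legal text).
OBLIGATIONS = {
    RiskTier.UNACCEPTABLE: "Banned: the system may not be placed on the EU market.",
    RiskTier.HIGH: "Allowed subject to strict requirements, e.g. data governance and registration.",
    RiskTier.LIMITED: "Allowed with minimal transparency requirements so users can make informed decisions.",
}

def obligation_for(tier: RiskTier) -> str:
    """Return the simplified obligation associated with a risk tier."""
    return OBLIGATIONS[tier]

if __name__ == "__main__":
    print(obligation_for(RiskTier.HIGH))
```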

Measures in support of innovation  

Research activities and the development of free and open-source AI components would be largely exempted from compliance with the AI Act rules. Tittle V of the AI Act contains measures in support of innovation (including AI regulatory sandboxes). Regulatory sandboxes – real-life environments, established by public authorities to test AI before it is deployed. Act requires EU Member States to promote research and development of AI solutions which support socially and environmentally beneficial outcomes. 

 

Rules aim to encourage innovative applications of generative AI and support the development of related infrastructure like semiconductors.
Subject of regulation

EU AI Act (the EP’s approach): Imposes requirements on the final actions taken by an AI system. An “AI system” is a machine-based system that is designed to operate with varying levels of autonomy and that can, for explicit or implicit objectives, generate outputs such as predictions, recommendations, or decisions that influence physical or virtual environments (Art. 3(1)).

2023 China’s Measures for Generative AI: Focused on algorithms. China’s regulation on recommendation algorithms does not even contain the term “artificial intelligence” in its text, despite covering many AI applications.

Type of regulations

EU AI Act (the EP’s approach): Horizontal regulation: a comprehensive umbrella law attempting to cover all applications of AI technology.

2023 China’s Measures for Generative AI: Vertical regulation, targeting a specific application or manifestation of AI technology. In addition to being vertical, China’s regulations are iterative: when the government identifies shortcomings or inadequacies in a regulation it has introduced, it issues a revised version that addresses the gaps or broadens its coverage.

 

Fines

EU AI Act (the EP’s approach):
– Non-compliance with the rules on prohibited AI practices: administrative fines of up to EUR 40,000,000 or, if the offender is a company, up to 7% of its total worldwide annual turnover for the preceding financial year, whichever is higher.
– Non-compliance with the rules under Article 10 (data and data governance) and Article 13 (transparency and provision of information to users): administrative fines of up to EUR 20,000,000 or, if the offender is a company, up to 4% of its total worldwide annual turnover for the preceding financial year, whichever is higher.
– Non-compliance with other requirements and obligations under the AI Act: administrative fines of up to EUR 10,000,000 or, if the offender is a company, up to 2% of its total worldwide annual turnover for the preceding financial year, whichever is higher.
– Supply of incorrect, incomplete or misleading information to notified bodies and national competent authorities in reply to a request: administrative fines of up to EUR 5,000,000 or, if the offender is a company, up to 1% of its total worldwide annual turnover for the preceding financial year, whichever is higher.
A sketch of how these “whichever is higher” ceilings work in practice is given below.

2023 China’s Measures for Generative AI: N/A
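To make the “whichever is higher” ceilings concrete, here is a minimal Python sketch. The tier keys, variable names and the function max_fine_eur are our own illustrative inventions; only the euro caps and turnover percentages come from the draft text quoted above.

```python
from typing import Optional

# Illustrative sketch of the EP draft's fine ceilings quoted above.
# Keys and names are our own; the figures come from the draft text.
FINE_CEILINGS_EUR = {
    "prohibited_practices": (40_000_000, 0.07),
    "data_governance_or_transparency": (20_000_000, 0.04),  # Art. 10 / Art. 13
    "other_obligations": (10_000_000, 0.02),
    "misleading_information_to_authorities": (5_000_000, 0.01),
}

def max_fine_eur(tier: str, worldwide_annual_turnover_eur: Optional[float] = None) -> float:
    """Return the maximum administrative fine for a violation tier.

    For companies, the ceiling is the higher of the fixed euro cap and the
    percentage of total worldwide annual turnover for the preceding financial
    year; for other offenders only the fixed cap applies.
    """
    fixed_cap, turnover_share = FINE_CEILINGS_EUR[tier]
    if worldwide_annual_turnover_eur is None:
        return fixed_cap
    return max(fixed_cap, turnover_share * worldwide_annual_turnover_eur)

# Example: a company with EUR 2 billion in worldwide annual turnover that breaches
# the prohibited-practices rules faces a ceiling of
# max(40_000_000, 0.07 * 2_000_000_000) = EUR 140 million.
print(max_fine_eur("prohibited_practices", 2_000_000_000))
```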
Right to complain

EU AI Act (the EP’s approach): The remedy available to affected persons faced with potential breaches of the AI Act’s rules is a GDPR-like right to lodge a complaint with a supervisory authority.

 

The obligation to design and apply technical standards

EU AI Act (the EP’s approach): A list of ten proposed European standardisation deliverables is to be developed by 31 January 2025, including standards covering risk management, transparency, and conformity assessment.

2023 China’s Measures for Generative AI: Providers should “participate in the formulation of international rules and standards” related to generative AI.
Registration

EU AI Act (the EP’s approach): AI systems falling into eight specific areas will have to be registered in an EU database:
– Biometric identification and categorisation of natural persons
– Management and operation of critical infrastructure
– Education and vocational training
– Employment, worker management and access to self-employment
– Access to and enjoyment of essential private services and public services and benefits
– Law enforcement
– Migration, asylum and border control management
– Assistance in legal interpretation and application of the law

2023 China’s Measures for Generative AI: Generative AI service providers must conduct security reviews and register their algorithms with the government if their services are capable of influencing public opinion or can “mobilize” the public.

A new body

EU AI Act (the EP’s approach): An AI Office, a new EU body to support the harmonised application of the AI Act, will provide guidance and coordinate joint cross-border investigations.

2023 China’s Measures for Generative AI: Two new bodies were created in March 2023: the CCP Central Science and Technology Commission (CSTC) and the National Data Administration (NDA). The CSTC will serve as the CCP’s top science and technology policymaking body, while the NDA will focus on data infrastructure and on using data to support economic and social policies.

 

 

Conclusion

Both the EU AI Act and 2023 China’s Measures for Generative AI share the common goal of ensuring human-centric and ethical development of AI. However, they differ in their methods of achieving this objective. The EU’s risk-based approach establishes obligations for providers and users depending on the level of risk the AI can generate, while China’s iterative approach allows for more adaptive regulation to cope with technological advancements.

The EU AI Act’s emphasis on risk categorization and comprehensive requirements may offer greater certainty for businesses seeking to comply with the regulations. On the other hand, 2023 China’s Measures for Generative AI take an iterative and pragmatic approach. Chinese regulators acknowledge the fast-changing nature of AI technology and recognize the need for flexibility in the regulatory process: when the government identifies shortcomings or inadequacies in a regulation it has introduced, it issues a revised version that addresses the gaps or broadens its coverage.

The Chinese approach may offer more room for innovation and adaptation, but businesses may face challenges in keeping up with evolving regulatory measures.

Best regards,

The Team at WiseRegulation.org Think Tank