Generative AI
Touchapon Kraisingkorn
March 26, 2024

14 highlights of the EU AI Act, the world's first AI law

Recently, the European Union (EU) Parliament passed the EU AI Act. This article summarizes the legislation in 14 key points.

What is the EU AI Act?

The EU AI Act, or the EU Artificial Intelligence Act, is legislation governing artificial intelligence (AI) technology within the European Union. Approved by the European Parliament, it provides a regulatory framework aimed at fostering responsible development and use of AI technology. The law categorizes the risk levels of AI, ranging from "unacceptable" to "minimal risk," with corresponding stringent regulations.

Why is the EU AI Act important?

The EU AI Act is significant as it provides a comprehensive legal framework covering AI technology to ensure responsible development and usage while protecting fundamental rights. It aims to strike a balance between innovation and safeguarding consumers and businesses by establishing international standards for AI regulation.

When will the EU AI Act come into effect?

The EU AI Act is expected to enter into force in May 2024, after which its requirements will be phased into the laws of EU member states. Given the diversity of economies, legal systems, and governance structures involved, full application may not come until 2025 or later. While enforcement of the EU AI Act is limited to the European Union, its legal implications are likely to have global ramifications, driving responsible AI regulation internationally and encouraging other countries to adopt similar measures.

Summary of the 14 key points of the EU AI Act

EU AI Act categorizes AI based on risk levels (Credit: Telefonica).
  1. Categorization of AI by risk levels into four tiers: unacceptable risk, high risk, limited risk, and minimal risk.
  2. Unacceptable risk AI includes systems like social scoring and AI that manipulates human behavior covertly.
  3. High-risk AI covers sectors such as biometrics, public services, education, employment, infrastructure, law, migration, and justice processes.
  4. Developers of high-risk AI must undergo conformity assessments and comply with regulations, while users have specific obligations.
  5. Conformity assessment evaluates AI systems' compliance with EU AI Act requirements, conducted by certified assessors.
  6. Limited risk AI, like chatbots, requires user awareness through disclaimers or watermarks.
  7. Minimal-risk AI, which accounts for most AI systems sold in the EU market, is exempt from EU AI Act regulation.
  8. General Purpose AI (GPAI) refers to versatile AI models that can be used for a wide range of tasks, some of which may pose high risks.
  9. Special emphasis and additional requirements apply to GPAI models trained with more than 10^25 FLOPS of compute.
  10. GPAI developers must document technical specifications, copyright policies, disclose training data summaries, and adhere to codes of practice.
  11. Systemic risk GPAI developers must conduct additional testing, prepare risk management strategies, report incidents to the AI Office, and maintain cybersecurity.
  12. Open-source GPAI models not classified as systemic risk may receive regulatory leniency if they comply with copyright law and disclose training-data summaries.
  13. The AI Office, established under the European Commission, oversees GPAI developers' compliance and conducts direct assessments.
  14. Different requirements of the EU AI Act will take effect over varying timeframes, ranging from six months for prohibited AI to two to three years for high-risk AI.
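To give a sense of scale for the 10^25 FLOPS threshold in point 9, the sketch below estimates a model's training compute using the common rule of thumb that training FLOPs ≈ 6 × parameters × training tokens. Note that this approximation, and the example model sizes, are illustrative assumptions and not part of the Act itself.

```python
# The 6ND rule of thumb (FLOPs ~ 6 x params x tokens) is a common
# approximation for dense-transformer training compute; it is an
# assumption here, not a method prescribed by the EU AI Act.

EU_GPAI_SYSTEMIC_RISK_THRESHOLD = 1e25  # FLOPS threshold named in the Act


def estimate_training_flops(params: float, tokens: float) -> float:
    """Rough training-compute estimate via the 6ND approximation."""
    return 6 * params * tokens


def exceeds_threshold(params: float, tokens: float) -> bool:
    """Does the estimated compute reach the Act's 10^25 FLOPS threshold?"""
    return estimate_training_flops(params, tokens) >= EU_GPAI_SYSTEMIC_RISK_THRESHOLD


# Hypothetical model: 175B parameters trained on 300B tokens
print(estimate_training_flops(175e9, 300e9))  # ~3.15e+23
print(exceeds_threshold(175e9, 300e9))        # False: well below 1e25
```

Under this approximation, a model would need roughly 30 times more training compute than the hypothetical example above before the additional GPAI obligations in points 10 and 11 would apply.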

In conclusion, the EU AI Act serves as a model for countries drafting their own AI legislation. Thailand, for example, will need to study the Act and adapt it to its own context, paying particular attention to the risk-based categorization of AI and the treatment of GPAI systems such as ChatGPT and Bard.