The European Parliament has approved the world’s first Artificial Intelligence Act (AIA), a comprehensive framework to address the risks posed by artificial intelligence (AI). After two and a half years of political debate and negotiation, this decision positions Europe as the global standard-setter for AI regulation. So, what does this mean for us and the broader AI community in 2024?


What Does the EU AI Act Mean?

The EU recognized the fundamental need to ensure the safe and secure development of AI systems. The EU AI Act was introduced to mitigate harm in areas where AI poses significant risks to fundamental rights, such as healthcare, education, public services, and border surveillance. The Act also sets rules for general-purpose AI (GPAI) models, emphasizing transparency, traceability, non-discrimination, and environmental friendliness.

Additionally, the legislation requires tech companies that develop AI technologies to produce a summary of the data used to train their models, report on that training data, and run regular risk assessments to mitigate risk and comply with the EU AI Act’s requirements. These layers keep humans in the oversight loop rather than leaving automated processes to produce bias, profiling, or otherwise dangerous and harmful outcomes.
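To make that documentation duty concrete, here is a minimal sketch of what a training-data summary and risk-assessment record might look like in practice. The schema and field names are hypothetical: the Act prescribes the duty, not the format.

```python
from dataclasses import dataclass
from datetime import date

# Hypothetical record shapes for the Act's transparency duties: a summary of
# the data used for training, plus a recurring risk assessment with a named
# human reviewer. Nothing in the regulation mandates this exact schema.
@dataclass
class TrainingDataSummary:
    model_name: str
    data_sources: list[str]        # e.g. ["licensed news corpus", "public web crawl"]
    contains_personal_data: bool
    copyright_cleared: bool        # checked against EU copyright law
    last_updated: date

@dataclass
class RiskAssessment:
    model_name: str
    assessed_on: date
    risks_identified: list[str]    # e.g. ["bias in hiring prompts"]
    mitigations: list[str]
    human_reviewer: str            # human oversight rather than automated sign-off
```

Keeping these records in a structured form makes it straightforward to regenerate the required summaries and reports each time a model is retrained.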

Also, the EU Parliament proposed a technology-neutral definition of AI that can be applied to future AI systems.

Why the European Union (EU) AI Act is Important

As the EU is the first global powerhouse to pass AI legislation, its rules could become the template for worldwide regulatory standards. It could also have a snowball effect, with individual industries introducing sector-specific regulations in the coming years. Interestingly, the bill was introduced before ChatGPT’s launch in November 2022 and the explosion of generative AI tools in 2023.

The EU AI Act will not be enforced until 2025, giving companies time to adjust and prepare. Until then, companies are urged to follow the rules voluntarily in 2024, with no penalties if they don’t. Once the Act takes effect, however, companies that violate its rules could face fines of up to 35 million euros or between 1.5% and 7% of their global annual turnover in the preceding financial year (see below for further details).


What Does the EU AI Act Restrict?

The European Union’s artificial intelligence law places bans and restrictions on several uses of AI. Here are some practices the new legislation prohibits:

  • Indiscriminate, untargeted bulk scraping of biometric data, such as facial images from social media or camera footage, to create or expand facial recognition databases is prohibited. The ban also covers facial and emotion recognition systems in public settings like workplaces, border control, law enforcement, and education. Certain safety exceptions exist, such as using AI to detect when a driver is falling asleep, and law enforcement may use facial recognition only for narrow purposes such as identifying victims of terrorism, kidnapping, and human trafficking.
  • Public authorities are barred from using social scoring systems to evaluate citizens’ compliance, since such systems can lead to discriminatory outcomes, injustice, and the exclusion of specific groups. AI systems that manipulate people’s behavior to influence or steer their decisions are also banned; for example, targeted manipulation of content on social media or other platforms to pursue political or commercial goals is prohibited. Any operator of a system that creates manipulated media must disclose this to users.
  • Additionally, AI systems that assess and profile natural persons or groups to predict delinquency are banned. This prohibits tools that predict the occurrence or recurrence of a misdemeanor or crime based on profiling a person using traits and data such as location or past criminal behavior.
  • Foundation model providers, meanwhile, are obligated to submit detailed summaries of the data used to train their AI models.

These bans and many other rules are categorized into different risk levels in the EU AI Act, depending on severity. Let’s take a look at those risk levels:

Rules for Different Risk Levels in the AI Act

The EU AI Act follows a risk-based approach to regulation, categorizing AI applications into four levels: the higher the risk, the stricter the governance. (A short sketch of this tiering follows the list below.)

  • Unacceptable Risk: An AI system categorized as an “unacceptable risk” poses a clear threat to people. Cognitive manipulation of behavior, social scoring, and some uses of biometric systems fall under this class. The only exception is for law enforcement, and even that is capped to specific uses.
  • High Risk: AI systems that affect human safety or fundamental rights are considered high risk. This includes credit scoring systems and automated insurance claims. All high-risk systems will be strictly vetted through conformity assessments before they’re put on the market and continuously monitored throughout their lifecycle. Companies must also register the product in an EU database.
  • Limited Risk: AI tools like chatbots, deepfakes, and features like personalization are considered “limited risk.” Companies that provide such services must ensure that they are transparent with their customers about what their AI models are being used for and the type of data involved.
  • Minimal Risk: For tools and processes that fall under “minimal risk,” the draft EU AI Act encourages companies to have a code of conduct ensuring AI is being used ethically.
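One way to internalize this tiering is as a simple lookup from risk level to the governance steps the Act attaches to it. The sketch below is illustrative only: the tier names come from the Act, but the obligation strings are paraphrases, not official text.

```python
from enum import Enum

class RiskLevel(Enum):
    UNACCEPTABLE = "unacceptable"  # banned outright (e.g. social scoring)
    HIGH = "high"                  # vetted before market entry, monitored after
    LIMITED = "limited"            # transparency duties (e.g. chatbots, deepfakes)
    MINIMAL = "minimal"            # voluntary code of conduct

# Illustrative mapping from risk tier to compliance steps (paraphrased).
OBLIGATIONS = {
    RiskLevel.UNACCEPTABLE: ["prohibit deployment (narrow law-enforcement carve-outs)"],
    RiskLevel.HIGH: [
        "pass a pre-market conformity assessment",
        "register the product in the EU database",
        "monitor continuously throughout the lifecycle",
    ],
    RiskLevel.LIMITED: ["disclose AI use and the data involved to users"],
    RiskLevel.MINIMAL: ["adopt a voluntary code of conduct"],
}

def required_steps(level: RiskLevel) -> list[str]:
    """Return the (paraphrased) compliance steps for a given risk tier."""
    return OBLIGATIONS[level]
```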

General Purpose AI Model Protections

After extended deliberation on the regulation of “foundation models,” the EU AI Act settled on a compromise in the form of a tiered approach. The compromise centers on the terminology of general-purpose AI (“GPAI”) models and systems, and splits obligations into two tiers:

  • Tier 1: a set of uniform obligations for all GPAI models.
  • Tier 2: an additional set of obligations for GPAI models with systemic risks.

Tier 1 Obligations: All GPAI model providers must meet transparency requirements, which include producing technical documentation with detailed summaries of the content used for training. These organizations must also comply with EU copyright law to ensure ethical use of content.

Tier 2 Obligations: GPAI models that carry systemic risk fall into the second tier. Second-tier GPAI models are subject to more stringent obligations, including conducting model evaluations, assessing and mitigating systemic risk, ensuring adequate cybersecurity, reporting serious incidents, and reporting on energy efficiency. Until EU standards are formalized and published, providers of GPAI models with systemic risk are advised to adhere to codes of practice in order to comply with the EU AI Act.
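The Act presumes a GPAI model carries systemic risk once its cumulative training compute crosses roughly 10^25 floating-point operations, so tier assignment can be sketched as a simple threshold check. This is a deliberate simplification: the Commission can also designate models as systemic-risk on other criteria.

```python
# Presumed systemic-risk threshold from the Act: cumulative training compute
# above 10**25 FLOPs. Real tiering can also follow a Commission designation.
SYSTEMIC_RISK_FLOP_THRESHOLD = 1e25

def gpai_tier(training_compute_flops: float) -> int:
    """Return 2 if the model is presumed to carry systemic risk, else 1."""
    return 2 if training_compute_flops >= SYSTEMIC_RISK_FLOP_THRESHOLD else 1

# A frontier-scale model trained with ~5e25 FLOPs lands in Tier 2, adding model
# evaluations, incident reporting, cybersecurity, and energy-efficiency
# reporting on top of the Tier 1 transparency duties.
assert gpai_tier(5e25) == 2
assert gpai_tier(1e24) == 1
```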

Framework for Enforcement & Penalties

According to the European Council, the EU AI Act will be enforced through each member state’s national competent market surveillance authorities. In addition, a new EU AI Office within the EU Commission will set the standards and enforcement mechanisms for the new rules on GPAI models.

There will be financial penalties for violations of the EU AI Act, but several factors, such as the type of AI system, company size, and the extent of the violation, will determine fines. Each fine is the higher of a fixed amount and a percentage of global annual turnover (see the worked example after this list). The penalties will range from:

  • 7.5 million euros or 1.5% of a company’s global annual turnover (whichever is higher) for supplying incorrect information.
  • 15 million euros or 3% of a company’s global annual turnover (whichever is higher) for violating the EU AI Act’s obligations.
  • 35 million euros or 7% of a company’s global annual turnover (whichever is higher) for violations involving banned AI applications.
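Because each fine is the higher of the fixed amount and the percentage of turnover, the effective penalty scales with company size. A quick worked example, using hypothetical turnover figures:

```python
def eu_ai_act_fine(fixed_eur: float, pct: float, global_turnover_eur: float) -> float:
    """Fines take the higher of a fixed amount and a share of worldwide
    annual turnover from the preceding financial year."""
    return max(fixed_eur, pct * global_turnover_eur)

# Banned-application violation at a company with EUR 2B global turnover:
# 7% of turnover (EUR 140M) exceeds the EUR 35M floor, so the percentage wins.
print(eu_ai_act_fine(35_000_000, 0.07, 2_000_000_000))   # 140000000.0

# Incorrect-information violation at a company with EUR 100M turnover:
# 1.5% of turnover (EUR 1.5M) is below the EUR 7.5M floor, so the floor wins.
print(eu_ai_act_fine(7_500_000, 0.015, 100_000_000))     # 7500000.0
```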

Additionally, based on the negotiations, smaller companies and start-ups may catch a break: the EU AI Act will cap their fines at levels proportionate to their size.


EU AI Act: Hampering Innovation or Fostering Data Security?

There’s no denying the EU AI Act is necessary to ensure data security, but there’s a thin line between achieving that goal and hampering innovation. French President Emmanuel Macron echoed this concern, arguing that the landmark rules could leave European tech companies lagging behind rivals in the United States, United Kingdom, and China.

However, going by the draft EU AI Act, certain built-in safeguards are designed to protect and maintain inventive AI strides. The intent is to balance risk management with the promotion of general-purpose AI innovation.

In the coming months, we expect the EU to clarify legalities on how governments can use AI in biometric surveillance and for national security. EU national governments, led by France, have already won certain exemptions for some AI uses in military or defense.

Since there is still quite a bit of time until enforcement, we may see details fine-tuned between now and then. Hence, 2024 is poised to be a significant decision-making year.

How BigID Helps Secure AI Development & Reduce AI Risk

The EU AI Act is a new legal framework for developing AI the public can trust. It reflects the EU’s commitment to driving innovation while safeguarding AI development, public safety, and the fundamental rights of people and businesses.

As organizations leverage AI and Large Language Models (LLMs) like ChatGPT (a Tier 2 GPAI model), those models rely heavily on unstructured data for training. BigID provides a complete solution to govern and secure data throughout the AI development lifecycle, with the ability to identify personal and sensitive information across structured and unstructured data sources.

Schedule a demo with our experts to see how BigID can help your organization reduce risk and comply with requirements within the EU AI Act.