February 19, 2024, in technology


Regulating AI: The EU Takes the First Shot

The EU has released its draft legislation governing the use of AI technologies. What does it cover? What impact will it have, and what will businesses need to do to comply?


Traditionally, regulation lags behind technological advancements — and this delay has often been problematic. This time perhaps we’ve finally learned our lesson as the EU tries to head off AI-related issues before they have a chance to take hold.

After months of negotiation between EU officials and tech organizations, draft legislation governing AI technologies is here. However, not everyone is convinced the legislation goes far enough, while others think it goes too far.

The legislation clears the path for the EU to prohibit certain uses of AI, while demanding transparency from providers. In this article, we’ll explore the potential impacts of the legislation, reactions, and whether it's likely to be adopted in its current form.

The EU AI Act - a quick look

The EU AI Act started taking shape in 2021 and is only now reaching draft form. Years of negotiation have produced legislation that some warn does not go far enough and will take too long to come into effect. Meanwhile, some member states were concerned that the regulations could make the EU inhospitable to new startups.

The process came to a head in early December after years of broad discussion concluded with “3-day ‘marathon’ talks,” according to the EU. This draft of the EU AI Act “aims to ensure that AI systems placed on the European market and used in the EU are safe and respect fundamental rights and EU values. This landmark proposal also aims to stimulate investment and innovation on AI in Europe.”

While the legislation is still subject to approval, it prohibits:

  • Social credit scoring
  • Using emotion recognition at work and school
  • Behavioral manipulation
  • Untargeted use of facial images through facial recognition
  • Biometric categorization systems for sensitive characteristics
  • Some predictive policing programs
  • Real-time biometric identification by law enforcement except with pre-authorization

Focusing mainly on what it calls “high impact” applications of AI — such as in vehicles, elections, or HR and banking — that were already in use in 2021, the EU says the “provisional agreement also provides for increased transparency regarding the use of high-risk AI systems.” For instance, as The Verge’s Jess Weatherbed reports, “The rules also include a list of safeguards and exemptions for law enforcement use of biometric systems, either in real-time or to search for evidence in recordings.”

AI acceleration - a fast-moving target

To complicate matters further, the introduction and widespread adoption of general-purpose AI during 2023 meant that the commission had to adapt to the rapidly evolving technology. “The proposed rules were ill-equipped to handle general-purpose systems broadly dubbed foundation models, like the tech underlying OpenAI’s explosively popular ChatGPT, which launched in November 2022,” reports The Verge’s Emilia David. In other words, changes to these considerations may still be on the way.

The penalties for infringement of the AI Act

The next logical question is about enforcement. Does the legislation have teeth? Violations may result in fines that are a percentage of the “company’s global annual turnover in the previous financial year or a predetermined amount, whichever is higher. This would be €35 million or 7% for violations of the banned AI applications, €15 million or 3% for violations of the AI act’s obligations and €7,5 million or 1,5% for the supply of incorrect information.”
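The penalty structure quoted above is a “whichever is higher” rule: a fixed euro amount versus a percentage of global annual turnover. The sketch below illustrates that calculation with the three tiers named in the draft; the tier labels and function are our own shorthand for illustration, not terminology from the Act itself.

```python
# Illustrative only: the three penalty tiers quoted above, each defined as
# "the higher of a fixed amount or a percentage of global annual turnover".
# Tier names are our own shorthand, not the Act's wording.

TIERS = {
    "banned_ai_practices": (35_000_000, 0.07),    # €35M or 7%
    "act_obligations": (15_000_000, 0.03),        # €15M or 3%
    "incorrect_information": (7_500_000, 0.015),  # €7.5M or 1.5%
}

def max_fine(tier: str, global_turnover_eur: float) -> float:
    """Return the ceiling fine for a tier: whichever figure is higher."""
    fixed, pct = TIERS[tier]
    return max(fixed, pct * global_turnover_eur)
```

For a company with €2 billion in global turnover, a banned-practice violation would be capped by the percentage (7% of €2B is €140M, which exceeds €35M), while a small firm supplying incorrect information would face the €7.5M floor.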

EU AI Act: Expected impact

Despite potentially heavy fines, not everyone is convinced the proposal goes far enough — while still others worry it could curtail innovation.

“In the very short run, the compromise on the EU AI Act won’t have much direct effect on established AI designers based in the US, because, by its terms, it probably won’t take effect until 2025,” Paul Barrett, deputy director of the NYU Stern Center for Business and Human Rights, told The Verge. In other words, Meta, Google, OpenAI, Amazon, and others are free to continue iterating and implementing their technology for another year or more.

Still, politicians have their reservations. French President Emmanuel Macron said the act “risks hampering European tech companies compared to rivals in the US, UK and China, setting the stage for a new battle over regulation of the emerging technology,” according to the Financial Times (FT). With that in mind, it’s worth noting that potential fines do not apply to open-source developers, researchers, and smaller companies.

Business people have taken a more nuanced view than politicians. Navrina Singh, founder of Credo AI, told The Verge, “The focus for regulators on both sides of the Atlantic should be on assisting organizations of all sizes in the safe design, development, and deployment of AI that are both transparent and accountable.”

The AI Act will likely see more changes before it’s finally adopted, but that has not stopped experts from speculating on how it will impact people in real life.

The AI Act in practice

The discussion around the EU’s AI Act began back in 2021 when many AI applications were already in regular use in businesses. That means companies need to start thinking about the AI-fueled tools they are using and whether or not they are likely to run afoul of the new regulations. The good news is you have plenty of time to figure it out.

Those in a position of power at companies — especially in the financial services, employment, infrastructure, medical devices, and healthcare sectors — operating in the EU must ask a few questions, according to AI Business:

  • Which departments are using AI tools?
  • Are the AI tools processing proprietary information or sensitive personal data?
  • Do these use cases fall under the unacceptable, high, or low-to-no-risk category under the AI Act?
  • Is the company an AI Provider that develops the AI tool with potential use in the EU market?
  • Is the company an AI Deployer that uses AI tools with the potential to produce outputs used in the EU market?
  • What do our vendor agreements with the Providers of these AI tools say about data protection, use restrictions, and compliance obligations?
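Working through a checklist like the one above amounts to triaging each AI tool in use against the Act’s risk tiers. A minimal sketch of that first pass is below; the risk categories come from the Act, but the example use cases and their tier assignments are our own simplification for illustration, not legal guidance.

```python
# Hypothetical first-pass triage of an internal AI-tool inventory against
# the Act's risk tiers. The three categories come from the Act; the mapping
# of specific use cases to tiers is a simplified assumption, not legal advice.

PROHIBITED = {"social credit scoring", "emotion recognition at work"}
HIGH_RISK = {"recruitment screening", "credit scoring", "medical devices"}

def triage(use_case: str) -> str:
    """Assign a use case to one of the Act's three broad risk categories."""
    if use_case in PROHIBITED:
        return "unacceptable"
    if use_case in HIGH_RISK:
        return "high"
    return "low-to-no-risk"

# Example: survey each department's tools and flag anything above low risk.
inventory = ["recruitment screening", "marketing copywriting"]
report = {tool: triage(tool) for tool in inventory}
```

In practice this classification would be done by legal and compliance teams against the Act’s actual annexes, but even a rough inventory like this helps surface which tools need vendor-agreement and data-protection review first.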

AI Act: the need for vigilance

Companies must keep a close watch on the news for changes that may take place before the AI Act is fully adopted, and for the regulations likely to follow in the U.K., the U.S., and other markets expected to follow suit. As the FT’s Alan Beattie reminds us, though changes are bound to come, “Someone needs to be thinking and legislating methodically about how [AI’s] power can be channeled for good.”

For now, that job is left up to the EU.


Find out more about Eidosmedia products and technology.