The European Union has officially begun enforcing its landmark AI Act, ushering in tough new restrictions and potential multimillion-euro fines for violations.
While the full regulatory framework won’t be in place for some time, the first batch of rules took effect on Sunday, requiring companies to comply or face penalties.
The AI Act, which formally became law in August 2024, bans AI applications deemed to pose an “unacceptable risk” to citizens. These include social scoring systems, real-time facial recognition, and biometric categorization based on sensitive attributes such as race or sexual orientation. It also prohibits AI tools designed to manipulate human behavior.
Companies that violate these rules could face fines of up to €35 million (US$35.8 million) or 7% of global annual revenue, whichever is higher. These penalties are even stricter than those under the General Data Protection Regulation (GDPR), which caps fines at €20 million or 4% of global annual turnover, whichever is higher.
Despite the milestone, the AI Act’s full enforcement is still a work in progress. What compliance actually requires will be spelled out in forthcoming guidelines, secondary legislation, and technical standards, according to Tasos Stampelos, head of EU public policy and government relations at Mozilla.
The newly formed EU AI Office has already published a second-draft code of practice for general-purpose AI (GPAI) models, such as OpenAI’s GPT. The latest version outlines exemptions for certain open-source AI models while requiring developers of “systemic” GPAI models to carry out rigorous risk assessments.
However, some industry leaders remain skeptical. Prince Constantijn of the Netherlands warned in June 2024 that Europe’s focus on AI regulation could stifle innovation.
“Our ambition seems to be limited to being good regulators,” he said, adding that it’s challenging to set clear rules in such a rapidly evolving space.
Others argue that strong AI regulations could give Europe a competitive advantage. Diyan Bogdanov, director of engineering intelligence at Payhawk, believes the AI Act’s emphasis on bias detection, risk assessments, and human oversight will set a new standard for trustworthy AI rather than hinder innovation.
“While the US and China compete to build the biggest AI models, Europe is showing leadership in building the most trustworthy ones,” he said.