The European Union’s Artificial Intelligence Act was approved by the European Parliament on Wednesday, marking the most extensive governmental regulation of AI technology to date.
The EU AI Act regulates different types of AI systems based on their risk level, outright prohibiting some uses of AI while establishing stricter requirements for those who deploy AI systems classified as “high-risk.”
Additionally, the act sets rules for the use of “general-purpose” AI models, as well as generative AI tools capable of producing deepfakes.
“With any new powerful technology, we need to have the right limitations and guardrails to operate securely,” Jadee Hanson, chief information security officer at Vanta, told SC Media. “The EU AI Act is a welcomed introduction of what those limitations should be and what companies should be thinking about as they apply this technology in their products and services.”
The AI Act is expected to officially become law before the end of the EU legislature’s term in June, with the ban on AI systems posing “unacceptable risks” becoming enforceable six months later.
Requirements for general-purpose AI systems will come into effect 12 months after the act becomes law, while the compliance deadlines for most “high-risk” systems will come two years after the law is published.
EU AI Act summary
The roughly 300-page act is divided into 13 chapters comprising 113 articles. The first chapter deals with general provisions and definitions, while chapters 2-5 respectively cover prohibited AI practices, high-risk AI systems, transparency requirements for certain lower-risk tools, and general-purpose AI models.
The AI Act prohibits eight specific uses of AI, including systems that use “subliminal techniques” to manipulate people into harmful behaviors, untargeted scraping of facial images from the internet or CCTV footage to build facial recognition databases, and use of real-time remote biometric identification by law enforcement in public spaces, except in specific circumstances such as searches for missing persons and the prevention of an imminent terrorist attack.
“These restrictions are essential safeguards, and I expect broad consensus in supporting this law to uphold these critical limitations,” said Hanson.
High-risk AI systems, as defined under the act, include safety components in critical infrastructure, systems used in education and employment processes, tools that affect a person’s access to essential services such as emergency services, tools used by law enforcement, systems used in migration and border control, and systems used in the administration of justice and democratic processes.
Systems in this category must be registered in an EU database established by the European Commission and must undergo conformity assessments prior to deployment. Providers of these systems are required to meet higher standards for training data quality and for resilience to errors, interruptions and cyberattacks.
High-risk AI system providers must also continuously monitor and log system performance, provide transparent information about the system’s capabilities and limitations, and build human oversight into the system’s operation.
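The act does not dictate how that monitoring should be implemented. As a purely illustrative sketch, a provider might record each inference as a structured, timestamped event that supports later audits and human review; the logger name, field names and schema below are assumptions for this example, not requirements of the act:

    import json
    import logging
    import time

    # Hypothetical audit logger; the act's logging duty is technology-neutral,
    # so every field below is an illustrative choice rather than a mandate.
    logging.basicConfig(level=logging.INFO)
    logger = logging.getLogger("high_risk_ai_audit")

    def log_inference_event(model_version: str, input_ref: str,
                            prediction: str, confidence: float,
                            human_reviewed: bool) -> None:
        """Record one traceable inference event for post-market monitoring."""
        event = {
            "timestamp": time.time(),
            "model_version": model_version,
            "input_ref": input_ref,            # a reference, not raw personal data
            "prediction": prediction,
            "confidence": confidence,
            "human_reviewed": human_reviewed,  # supports the human-oversight duty
        }
        logger.info(json.dumps(event))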
“It’s well known that the quality of any AI model depends on the dataset, and requirements of transparency around data sets used for training, completeness of data and accuracy of data can hopefully lead to better outcomes,” Graham Rance, EMEA head of sales engineering at CyCognito, told SC Media.
The obligations for providers of lower-risk and general-purpose AI models that can perform a wide range of tasks, a category likely to encompass mainstream chatbots like ChatGPT, largely focus on transparency, governance and risk management. Notably, the act requires that users be made aware when they are interacting with an AI system, and that AI-generated media such as deepfakes be identified through methods like embedded metadata or cryptographic verification.
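For illustration only, the sketch below shows one way such labeling could be combined with a cryptographic check. The act prescribes no implementation; the shared-key HMAC scheme, signing key and model identifier here are assumptions for this example (production provenance standards such as C2PA instead rely on public-key certificates):

    import hashlib
    import hmac
    import json

    # Hypothetical provider-held signing key (an assumption for this sketch).
    SIGNING_KEY = b"provider-secret-key"

    def label_ai_output(content: bytes) -> dict:
        """Attach machine-readable provenance metadata to AI-generated media."""
        metadata = {
            "ai_generated": True,
            "generator": "example-model-v1",  # assumed model identifier
            "content_sha256": hashlib.sha256(content).hexdigest(),
        }
        payload = json.dumps(metadata, sort_keys=True).encode()
        metadata["signature"] = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
        return metadata

    def verify_label(content: bytes, metadata: dict) -> bool:
        """Check that the label is untampered and matches the content."""
        unsigned = {k: v for k, v in metadata.items() if k != "signature"}
        payload = json.dumps(unsigned, sort_keys=True).encode()
        expected = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
        return (hmac.compare_digest(metadata.get("signature", ""), expected)
                and unsigned.get("content_sha256") == hashlib.sha256(content).hexdigest())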
Providers of a subcategory of general-purpose AI models deemed to pose “systemic risk” due to their large size, large user base or access to sensitive information must also establish codes of practice outlining measures to assess and mitigate those risks.
The sixth chapter of the act mandates the establishment of a national AI regulatory sandbox by each of the EU’s 27 member states, where AI providers can test their systems’ performance, robustness, security and compliance with the regulations. The sandbox environment must be accessible to all providers, including small- to medium-sized enterprises (SMEs) and startups.
Articles 62 and 63 specifically outline additional provisions to support innovation at SMEs and startups.
“The ‘cost of compliance’ is always likely to fall on smaller companies harder, but the flip side is ideally reduced risk for those operating in line with the framework. This will be critical for government bodies as they weigh the pros and cons of this regulation,” Rance said.
The remaining chapters of the act deal with government oversight of regulatory compliance and enforcement, the establishment of the database for high-risk AI systems, post-market monitoring of high-risk systems, voluntary guidelines for low-risk systems and penalties for non-compliance.
Penalties include fines of up to €35 million (about $38 million) or 7% of global annual turnover, whichever is higher, for use of prohibited AI systems, and up to €15 million (about $16.3 million) or 3% of turnover for non-compliance with requirements for high-risk systems.
“Legislation always creates a heated debate. One camp currently feels that AI regulation is overblown and that, if implemented hastily, it can hinder innovation. On the other side, there are many that feel innovation is important but not at the cost of safety and data privacy,” Rick Song, CEO of Persona, told SC Media. “While finding the right balance is a tightrope walk, it’s possible to surgically set guardrails while still harnessing the power of AI.”
What does the EU AI Act mean for cybersecurity?
Cybersecurity companies that use AI technology are unlikely to fall under the AI Act’s “high-risk” category; the act specifically notes that AI components designed solely for cybersecurity are not considered high-risk safety components when tied to critical infrastructure.
However, the regulations put emphasis on cybersecurity and data protection, requiring deployers of high-risk systems to have suitable cybersecurity measures in place and follow certain guidelines in the collection and storage of personal data.
Article 15 specifically states that providers of high-risk AI systems should use technical solutions to detect, respond to and resolve AI-specific threats, including data poisoning, model poisoning, model evasion (“jailbreaking”), confidentiality breaches and security vulnerabilities in models, where applicable.
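The act names the threats but not the countermeasures. As one hedged illustration of the kind of control Article 15 contemplates, a provider might screen training data for potential poisoning by flagging statistical outliers in feature space; the distance metric and threshold below are arbitrary choices for this sketch, not anything the act specifies:

    import numpy as np

    def flag_poisoning_outliers(embeddings: np.ndarray,
                                z_threshold: float = 3.0) -> np.ndarray:
        """Flag training samples whose embeddings sit far from the centroid.

        A crude screen for data poisoning: poisoned samples often show up as
        outliers in feature space. Returns the indices of suspicious rows.
        """
        centroid = embeddings.mean(axis=0)
        distances = np.linalg.norm(embeddings - centroid, axis=1)
        z_scores = (distances - distances.mean()) / (distances.std() + 1e-12)
        return np.where(z_scores > z_threshold)[0]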
What influence will the AI Act have outside the EU?
Companies outside of the EU that provide AI systems will be required to follow the EU’s regulations “to the extent the output produced by those systems is intended to be used in the Union,” the AI Act states.
“Given that the regulation is EU-wide, it will have a significant impact on U.S. companies that do any business in Europe — especially the large tech giants,” Rance said. “Some aspects of the regulation are likely to become ‘de-facto’ practices.”
The act could have a similar widespread effect to the EU’s General Data Protection Regulation (GDPR), which not only placed requirements on other countries doing business in the EU, but also influenced the adoption of similar provisions in other countries, like the California Consumer Privacy Act, Rance and Song both noted.
“It will likely guide decisions in the U.S. allowing individual states or the government to cherry pick the best aspects of this ‘global first’ regulation,” Rance said.
“Although this is likely not a perfect solution and will be iterated in time, the pace of development and change in the ‘AI industry’ or ‘AI-enabled’ industry means that done is better than doing. Further efforts can iterate off this good start,” Rance added.