What the US can learn from the UK and EU about regulating AI

There are ways to protect the public from the potential dangers of AI without stifling innovation – and the Europeans have already shown us how.

COMMENTARY: California Gov. Gavin Newsom vetoed a bill last month that would have enacted the most significant AI legislation to date in the United States.

The measure was seen by legislators as offering a potential blueprint for federal regulation, focused on making tech companies legally liable for the harm caused by their AI models. It would have forced the industry to conduct safety tests on powerful AI models and mandated that tech companies enable a “kill switch” for AI technology to stop potential misuse.

Newsom argued that while the AI safety bill's intentions were valid, it took a broad-brush approach, applying uniform regulation to all large models without distinguishing between high-risk AI applications and more benign ones.

The governor pointed out that the bill focused on large-scale, expensive AI models, which would potentially give the public a false sense of security by targeting only high-cost systems. Smaller, more specialized AI models, which arguably pose equal or even greater risks, were not sufficiently addressed. Additionally, the bill applied strict safety protocols to all large models, regardless of their actual deployment in high-risk environments or their involvement with sensitive data. As a result, Newsom feared that the bill could create an overly restrictive environment that might hamper innovation.

[SC Media Perspectives columns are written by a trusted community of SC Media cybersecurity subject matter experts. Read more Perspectives here.]

The bill – and Newsom’s decision to veto it – has sparked widespread debate about the best approach to regulating AI, specifically when it comes to reducing risk without stifling innovation. It’s a debate that other regions, such as the UK and the European Union (EU), are also navigating using a variety of approaches.

So, what are some of the important considerations that go into developing regulation for AI? And what can the U.S. learn from the UK and EU, which are already doing it effectively today?

Let’s take a closer look:

Compared to the U.S., both the UK and EU are further along in their regulatory efforts. And unlike the proposed AI bill in California, both regions emphasize regulation that distinguishes high-risk applications from benign ones, whether they rely on large models or smaller, specialized ones.

For example, under Prime Minister Keir Starmer’s government, the UK promotes a safety-focused AI regulatory framework that seeks to prevent misuse by enhancing transparency, human oversight, and data quality standards. It’s particularly focused on high-risk sectors like healthcare and criminal justice, areas in which AI is most likely to be misused or abused.

This approach aligns closely with the EU's AI Act, which also imposes compliance requirements on high-risk AI applications, such as those in healthcare, finance, and public services. The stringent act bans AI systems that pose an "unacceptable level of risk," including social scoring algorithms. Both the UK and EU recognize the importance of public trust in AI, especially in critical sectors, and their regulatory frameworks aim to ensure that AI systems are explainable, reliable, and fair.

But while both the UK and EU regulations aim to mitigate risks, there are still concerns that this strict approach might stifle innovation, particularly for smaller companies. For example, the compliance costs associated with these regulations could become prohibitive for startups—potentially limiting the development of cutting-edge AI technologies.

Lessons for the United States

The U.S. – which today lacks comprehensive federal AI regulation – could learn several lessons from the UK and EU. First, the European regulations are based on the actual risk an AI system poses. Both the UK and EU focus on strictly regulating high-risk AI systems, while allowing more flexibility for low-risk applications. This targeted approach could help avoid stifling innovation through over-regulation, one of the main concerns Newsom highlighted in his veto.

Additionally, the emphasis on transparency, human oversight, and accountability in both models offers a roadmap for how the U.S. could structure its own AI governance. Ensuring that AI systems are explainable and accountable is crucial for public trust, particularly as these technologies become more integrated into everyday life.

Another strategy that the UK has adopted, which the U.S. could potentially benefit from, is the use of regulatory sandboxes. Sandboxing lets tech companies experiment with AI technology in a controlled environment, fostering innovation while ensuring that AI applications are subject to rigorous safety testing before being deployed at scale.

Finally, as the U.S. considers its own AI regulations, it should also focus on international competitiveness. The EU's AI Act has already set a global standard, and many U.S. companies will need to comply with these rules when operating in Europe. Aligning U.S. regulations with global standards could help streamline compliance and ensure that American companies remain competitive on an international stage.

In short, Gavin Newsom’s veto of California’s AI safety bill highlights the challenges of balancing innovation with safety in a rapidly evolving landscape. While his concerns are valid, the experiences of both the UK and the EU show that it’s possible to create a regulatory framework that protects public safety without unduly restricting technological development.

Adopting targeted, risk-based regulations, fostering transparency and accountability, and supporting innovation through regulatory sandboxes are just a few of the strategies that the U.S. may consider as it continues to develop complex legislation around AI—legislation that's essential for maintaining public trust and driving responsible AI development.

Mike Britton, chief information security officer, Abnormal Security

SC Media Perspectives columns are written by a trusted community of SC Media cybersecurity subject matter experts. Each contribution has a goal of bringing a unique voice to important cybersecurity topics. Content strives to be of the highest quality, objective and non-commercial.

Mike Britton

Mike Britton, chief information security officer at Abnormal Security, leads the company’s information security and privacy programs. He builds and maintains Abnormal Security’s customer trust program, performs vendor risk analysis, and protects the workforce with proactive monitoring of the multi-cloud infrastructure. Mike brings 25 years of information security, privacy, compliance, and IT experience from multiple Fortune 500 global companies.

LinkedIn: https://www.linkedin.com/in/mrbritton/

X: https://twitter.com/AbnormalSec
