
Google’s Secure AI Framework: A good start, but a lot of work ahead


Artificial intelligence has been exploding in popularity in the past several months — so much so that some of the major tech players believe it’s high time for a common set of standards for building and deploying these new technologies.

On Friday, Google introduced its Secure AI Framework (SAIF), a conceptual framework for securing AI systems.

SAIF aims to foster an ecosystem of safeguards that keeps pace with advances in AI, extend detection and response to cover AI threats, integrate automation into AI defenses, and subject AI models to red-team exercises.

Google plans to work closely with government agencies and standards bodies, helping to develop the NIST AI Risk Management Framework and the ISO/IEC 42001 AI Management System standard, the industry's first AI certification standard.

In terms of concrete actions, Google said it plans to expand its bug bounty programs to incentivize industry research around AI safety and security. The company also plans to publish several open-source tools to help put SAIF's elements into practice.

“The research community has an important role to play in the AI ecosystem, and we’re proud to say we already have that relationship with security researchers,” said Phil Venables, chief information security officer at Google Cloud. “Last year, we paid more than $12 million dollars in bounties to security researchers who tested our products for vulnerabilities. Our AI systems are in-scope for these programs, and we’ve been engaging with this community to ensure we can find vulnerabilities. We also have a research unit, Google DeepMind, that works on these problems.”

While most security pros saw it as a positive that a major player such as Google is taking a strong step in promoting SAIF, some believe there is much work ahead in a field practitioners are still learning as they go.

“We are only just getting started thinking about this stuff and we’re drawing analogies on existing cybersecurity disciplines,” said John Bambenek, principal threat hunter at Netenrich.

Bambenek pointed out that bug bounty programs make sense for software applications, but in AI the industry does not yet really know what penetration testing looks like.

“The fact is, we are making it up on the fly, and we’re just going to have to revise and figure things out,” said Bambenek. “In that sense, putting some of the stuff out there is a good first step because at least it gives the industry a starting point to figure out what works and what does not.”

SAIF offers a strong start, anchored in several tenets found in the NIST and ISO frameworks, said Sounil Yu, chief information security officer at JupiterOne. Now the industry needs a bridge between its current security controls and those needed specifically for AI systems.

“The primary difference with AI systems that makes the SAIF particularly compelling and necessary is that with AI systems, we won't have many opportunities to make mistakes,” said Yu. “AI safety is an extremely important principle to consider at the earliest stages of designing and developing AI systems because of potential catastrophic and irreversible outcomes. As AI systems grow more competent, they may perform actions not aligned with human values. Incorporating safety principles early on can help ensure that AI systems are better aligned with human values and prevent potential misuse of these technologies.”

Piyush Pandey, chief executive officer at Pathlock, pointed out that just as the Sarbanes-Oxley (SOX) legislation created a need for separation-of-duties (SOD) controls over financial processes, similar controls are clearly necessary for AI systems.

Pandey said SOX requirements were quickly applied to the business applications executing those processes; as a result, controls testing has become its own industry, with software vendors and audit and consulting firms helping customers prove the efficacy and compliance of their controls.

“For SAIF to become relevant, controls will need to be defined to give organizations a starting point to help them better secure their AI systems and processes,” said Pandey.
