AI benefits/risks, Generative AI

Google launches AI bug bounty program as organizations plan to study risks

Share
A close-up view of OpenAI logo on its website.

There’s been any number of news releases around artificial intelligence (AI) this week, as the industry and government look to chart a path forward with these new technologies.

From a hands-on industry perspective, Google announced its new bug bounty program in which it aims to take a fresh look at how bugs are categorized and reported.

The United Nations and OpenAI also announced that they plan to study AI in the coming months, with OpenAI focused on what they called “catastrophic risk.” All of this comes on top of the Biden administration expected to roll out an executive order (EO) around AI sometime this coming week.

In a blog post Oct. 26, Google pointed out that generative AI raises new and different concerns than traditional digital security, such as the potential for unfair bias, model manipulation, or misinterpretations of data (hallucinations).

“As we continue to integrate generative AI into more products and features, our Trust and Safety teams are leveraging decades of experience and taking a comprehensive approach to better anticipate and test for these potential risks,” said Google’s Laurie Richardson and Royal Hansen in the blog. “But we understand that outside security researchers can help us find, and address, novel vulnerabilities that will in turn make our generative AI products even safer and more secure.”

Google plans to expand its vulnerability rewards program (VRP) to include attack scenarios around prompt injections, leakage of sensitive data from training datasets, model manipulation, adversarial perturbation attacks that trigger misclassification, and model theft.

Alex Rice, co-founder and CTO of HackerOne, said Google’s expansion of its bug bounty program is a signal for where all bug bounty programs are headed. Rice said the ethical hacker community is a great resource to explore emerging technology because they’re often at the forefront of researching how these kinds of technologies can be exploited.

“I foresee GenAI becoming a significant target for hackers and a growing category of bounty programs,” said Rice.

Rice pointed out that research from HackerOne validates this: 55% of the hacker community on the HackerOne platform say GenAI tools will become a major target for them in the coming years, and 61% say they plan to use and develop hacking tools that employ GenAI to find more vulnerabilities. Another 62% of hackers also said they plan to specialize in the OWASP Top 10 for large language models (LLMs). 

Casey Ellis, founder and CTO at Bugcrowd, added that Bugcrowd has been active in AI and ML testing as far back as 2018, involved in the AI Village and Generative AI Red Teaming events, and working with a number of the leading AI players that popped into the public collective consciousness in 2022-2023. Ellis said AI has captured the imagination of hackers, with more than 90% reporting that they use AI in their hacking toolchains as per a recent Bugcrowd survey.

“AI testing mostly augments, rather than replaces, traditional vulnerability research and bug hunting for those who are already experienced in the latter,” said Ellis. “The part that's exciting is that the barrier to entry for AI testing is much lower for a very large number of people, since the only language a prospective hacker needs to know in order to get started is the one they're probably already using.”

Multiple industry and government artificial intelligence initiatives

Security pros said that while the announcement by the United Nations of the formation of a 39-member panel to report on governance of artificial intelligence could wind up being a “toothless” exercise, it still promises to educate the public on the potential benefits and risks of AI.

“There could still be some real value to casual and even professional observers,” said Shawn Surber, senior director of technical accounts at Tanium. “Like all political entities, there's also the potential for the UN’s mission to be subverted to the individual agendas of the panelists.”

On the OpenAI initiative, Surber said having a team from OpenAI that’s at the heart of AI looking into all of the nightmare scenarios was a positive development.  

“Knowledge always trumps assumption, so even if there's a razor thin chance of an AI-led extinction event, that's still a non-zero chance and we should be prepared to prevent it,” said Surber. “And while OpenAI's Preparedness Team may simply be a publicity stunt to keep them at the forefront of the news, the reality is that AI is rapidly being inserted into almost every aspect of technology. And if there's one thing that we've learned, it's that a rush to market in technology is almost always accompanied by insufficient testing and preparation.

John Bambenek, principal threat hunter at Netenrich, said unfortunately, several decades of science fiction about machines becoming self-aware and killing off humanity has poisoned the minds of regulators as to what the true risks really are.

“Sure, you could study AI-ML systems and the risks they pose if implemented into nuclear weapon systems, but that would never actually be done in the real world because of the strict human controls involved with those weapons systems,” said Bambenek. “Before diving headlong into imaginative exercises on what ML-AI could do wrong, we need to start where people are implementing these technologies and then see what risks exist. For instance, facial recognition works pretty good on iPhones and social media, but when applied to policing, we’ve seen human rights violations. Those risks are here today.”

Kevin Surace, chairman and CTO at Token, said as for the UN and the U.S. government and other bodies looking at regulating AI, it’s unclear what the purpose would be and how we keep rogue states such as North Korea from ignoring the rules.

“Today, Gen AI is a fabulous language tool and content-creator,” said Surace. “The major providers have spent more than a year placing guardrails to keep people from generating say, instructions to build a nuclear bomb. It’s in their interest to do so and self-regulate today. But as open source models begin to proliferate, rogue actors won't place guardrails on anything and we will have to live in that world, like it or not.”

Related Terms

Algorithm

Get daily email updates

SC Media's daily must-read of the most current and pressing daily news

By clicking the Subscribe button below, you agree to SC Media Terms of Use and Privacy Policy.