
RSAC 2024: Google on the promise of large language models and cybersecurity

Google’s Elie Bursztein speaks during a presentation at the RSA Conference in San Francisco. (Tom Spring / SC Media)

SAN FRANCISCO — In a Google deep-dive analysis of large language models, Elie Bursztein showed how close, and how far, we are from handing over the cybersecurity reins to GenAI for tasks such as zero-shot content moderation, identifying and fixing flawed open-source code in repositories, and detecting and fixing software vulnerabilities across an attack surface.

“I don't think it's going to be as fast as people would imagine, which is in a matter of months,” said Bursztein, who is AI cybersecurity technical and research lead at Google and DeepMind.


He told a packed session here at RSAC titled “How Large Language Models are Reshaping the Cybersecurity Landscape” that “AI is eventually going to give us back the advantage [over AI-empowered adversaries] because the upside of using it is really, really large.”

Humans are keeping score

Bursztein kicked off his talk by handicapping the use of adversarial GenAI in the wild. Most worrisome is adversaries' ability to spread misinformation, followed by the use of GenAI to craft convincing phishing emails. Emerging threats that shouldn't yet keep security professionals up at night include attackers using the technology to create novel malware or to build nuclear, chemical or biological weapons.

On the flip side, defensive uses of large language models (LLMs) are taking shape with a promising future, Bursztein said.

“I think you could try to think about where to add AI as a new use case as an additional, in-depth layer to improve your [existing] security,” he said.

That layered approach, he said, is the best way for security professionals to prepare a workforce against current and next-generation AI attacks.

GenAI’s most promising opportunities

Bursztein spent much of his talk discussing the promise of leveraging large language models for:

Training language models whose generalization capabilities approximate human reasoning, allowing the technology to classify user-generated content without manual review. The examples shared included parsing phishing emails and flagging spikes in social media misinformation events. (A minimal sketch of this zero-shot approach follows this list.)

Multimodal understanding of images, text, video or code, where a generative engine analyzes content to determine whether it is malicious.

Code understanding: where AI could scan a repository such as GitHub, identify and flag malicious code, and potentially offer safe code alternatives.

Using the generative capabilities of AI to speed up incident response. He said AI's potential to improve security by automating tasks, shrinking response windows and increasing incident response speed will be a game changer for security teams.
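
To make the zero-shot moderation idea from the first item concrete, here is a minimal sketch in Python. It assumes an OpenAI-style chat completions client; the model name, prompt, and classify_content helper are illustrative assumptions, not anything Bursztein demonstrated.

```python
# Hypothetical sketch: zero-shot content moderation with an LLM.
# Assumes the OpenAI Python client; model name and prompt are illustrative.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def classify_content(text: str) -> str:
    """Ask the model to label user-generated content with no task-specific training."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[
            {"role": "system",
             "content": "You are a content moderator. Reply with exactly one label: "
                        "PHISHING, MISINFORMATION, or BENIGN."},
            {"role": "user", "content": text},
        ],
        temperature=0,  # deterministic labels suit moderation pipelines
    )
    return response.choices[0].message.content.strip()

print(classify_content("Your account is locked. Verify your password at hxxp://example.test"))
```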

“During incident response, time is of the essence and the faster we respond to [an] incident, the better we are and the better we can mitigate the attacks,” Bursztein said.

The hope is that someday GenAI will be able to model an incident or generate a near-real-time incident report, drastically speeding up incident response.

“Hopefully, incident response assisted by AI will be so much faster and will make the life of [the] attacker so much more difficult,” he said.
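
As a rough illustration of that idea, and not Google's tooling, one could feed raw alert lines to a model and ask for a structured first-draft report; the prompt, section names, and sample logs below are assumptions for the sketch.

```python
# Hypothetical sketch: drafting a near-real-time incident summary from raw logs.
# Assumes an OpenAI-style client; the prompt structure is an assumption.
from openai import OpenAI

client = OpenAI()

def draft_incident_summary(log_lines: list[str]) -> str:
    """Turn raw alert/log lines into a first-draft incident report for human review."""
    prompt = (
        "Summarize the following security events as an incident report with "
        "sections: Timeline, Affected Systems, Suspected Technique, Next Steps.\n\n"
        + "\n".join(log_lines)
    )
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[{"role": "user", "content": prompt}],
        temperature=0,
    )
    return response.choices[0].message.content

logs = [  # fabricated sample events for illustration only
    "03:12 UTC auth-svc: 40 failed logins for admin from 203.0.113.7",
    "03:14 UTC auth-svc: successful login for admin from 203.0.113.7",
    "03:16 UTC edr: powershell.exe spawned by winword.exe on HOST-22",
]
print(draft_incident_summary(logs))
```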

Challenges ahead

Where the cybersecurity industry stands today, however, is far from this GenAI-enhanced future.

Using AI to detect and fix software vulnerabilities is producing mixed results, and the technology's current ability to improve code security by identifying and mitigating vulnerabilities remains equally challenged.

Vulnerability detection faces challenges, he said, including noisy datasets and difficulty identifying vulnerable code in batches. Experiments with machine learning models on Google's internal code base show mixed results, with some bugs fixed and others left unfixed. Because of those accuracy concerns, the near-term success rate of AI-generated patches remains questionable.
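
Given that unreliability, one common coping pattern, sketched below and not taken from the talk, is to treat model-generated patches as candidates and keep them only if the project's test suite still passes. Here propose_patch is a hypothetical stand-in for the model call.

```python
# Hypothetical sketch: gate an AI-generated patch behind the existing test suite.
# propose_patch() is a placeholder for whatever model produces the candidate fix;
# nothing here reflects Google's internal tooling.
import subprocess
from pathlib import Path

def propose_patch(source: str, finding: str) -> str:
    """Placeholder: ask a model to rewrite `source` to address `finding`."""
    raise NotImplementedError("model call goes here")

def tests_pass() -> bool:
    """Run the project's test suite; a clean exit code is the acceptance gate."""
    return subprocess.run(["pytest", "-q"], capture_output=True).returncode == 0

def try_fix(path: str, finding: str) -> bool:
    """Apply a candidate patch and keep it only if the tests still pass."""
    file = Path(path)
    original = file.read_text()
    file.write_text(propose_patch(original, finding))  # apply candidate fix
    if tests_pass():
        return True                # keep the patch
    file.write_text(original)      # roll back on any test failure
    return False
```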

However, there are processes at which GenAI excels. Bursztein said using AI in incident response can cut the time it takes to write incident summaries in half. “There is quite a bit of more research and more innovation to be done before it is as reliable and as powerful as we need AI to be to reach its full potential,” Bursztein said. “Hopefully, [the RSA Conference] will get you excited to get into this domain, if you haven't jumped on yet, and start to think about how you're going to use it.”

Tom Spring, Editorial Director

Tom Spring is Editorial Director for SC Media and is based in Boston, MA. For two decades he has worked at national publications in leadership roles including publisher at Threatpost, executive news editor at PCWorld/Macworld and technical editor at CRN. He is a seasoned cybersecurity reporter, editor and storyteller who always aims for truth and clarity.

