
AI as an existential threat: The story so far


This is part of a series on AI threats and opportunities, which will be covered at length during InfoSecWorld 2023, Sept. 25-28. Visit the InfoSecWorld website to see more on the AI challenge in the ISW 2023 Trends Report.

Even as AI gives companies a path to upgrading their cybersecurity, it also provides adversaries more ammunition to launch devastating attacks that evade conventional defenses. 

Malicious versions of ChatGPT and generative AI have already sprung up in the dark web, lowering the barrier for would-be attackers who have neither the planning nor technical chops required during previous generations. 

What are the consequences of AI in the wrong hands? Here are just a few ways AI can be misused or weaponized to harm organizations.

AI uses impersonation and social engineering to exploit user trust

We’ve come a long way since the T-1000 began impersonating its victims in the film Terminator 2: Judgment Day. Of all the instances observed of malicious AI in the wild, impersonation in social engineering attacks might be the most common. The same tech wizardry that enables users to get ChatGPT’s help with writing cover letters and how-to guides also gives adversaries a more refined method of crafting emails and other correspondence that presents them as someone they are not.

Organizations have spent millions of dollars educating employees on how to spot the classic signs of a phishing attack, like typos, broken English, and suspicious email signatures, but those easy tells may quickly become a thing of the past.

“The days of training nurses and doctors to recognize a phishing email using poor grammar or bad spelling are over,” says Don Kelly, Senior Virtual Information Security Officer at Fortified Health Security. “Now, anyone can create a spear phishing email using proper syntax, style and desired tone.”

The impersonation threat is bigger than just a linguistic problem. AI deepfake technology creates stunningly lifelike video depictions of famous individuals, disassembling and reassembling facial tics and audio samples to produce novel utterances without the individual’s knowledge or consent.

“It is now so easy to spoof the voice of a colleague or a customer that even the sharpest minds might not be able to detect the difference, compromising a channel of communication that for nearly 150 years was thought to be materially more secure than text-based communication,” writes Jim Whitehead, Chief Scientist for AI Research Development and Experiments at Sauce Labs.

AI has no shortage of material to draw from when you consider how much information organizations post on their websites, in press releases and on social media. It’s easy to see how AI can mimic an organization’s tone and style well enough to be indistinguishable from the real thing.

AI can write malware purposely engineered to probe and exploit an organization’s soft spots

While its output may require a tweak here and there, ChatGPT’s ability to produce functioning code on command has made it a valuable DIY kit for developers and engineers to test-drive potential scripts. Of course, it didn’t take long for adversaries to harness that technology for their own purposes. FraudGPT and WormGPT are large language models circulating on the dark web that enable black hats to request malicious code without having to write it from scratch.

“AI knows the vulnerabilities in our networks, and knows how to code. It also knows what cybersecurity solutions our organizations are using and how to disable them,” says Adrien Gendre, Chief Tech and Product Officer at Vade.

This capacity to develop malicious code on command sharply lowers the barrier to entry for criminals. You no longer need to be an advanced hacker; all you need is a motive and a target.

“If before hackers needed some knowledge and experience, now all that is necessary is executing a hosted malicious LLM version or — for the more advanced ones — building their own trained version of ChatGPT and either keeping it for themselves or releasing it on GitHub,” says Michael Assraf, CEO of Vicarius.

This means LLMs like FraudGPT and WormGPT can effectively function as AI-powered marketplaces for criminals to exchange tools and trade tactics. 

“It has the potential to become a one-stop shop for cyberattacks, providing not only the tools and techniques to carry out an attack, but also assistance with crafting persuasive phishing messages or writing malicious code to make an attack successful,” says Okey Obudulu, Chief Information Security Officer at Skillsoft.

Malicious AI can poison datasets used for training good AI

A drop of poison contaminates a whole well. 

The use of AI introduces major questions about trust, consumer privacy and ethics regarding its applications. For white hats, developing and securing trustworthy AI is essential to doing business and maintaining consumer confidence. For black hats? Not so much. 

In fact, adversaries are already using AI-based tools to sow distrust and disinformation targeting companies, electoral and democratic processes, social media platforms, and the stability of critical infrastructure.

By manipulating data or implanting false information, bad guys can effectively poison the datasets that good guys use to train their AI. 

“These systems heavily rely on large volumes of data to learn and make decisions. If an attacker can manipulate the data, they may inject malicious patterns or false information, leading to compromised AI models. This can result in the AI system making incorrect or harmful predictions or decisions,” says Fred Rica, a partner at consulting firm BPM. 
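To make the mechanism Rica describes concrete, here is a minimal, hypothetical sketch of the simplest form of training-data poisoning, label flipping, written in Python with scikit-learn. The synthetic dataset, logistic regression model, and 40% flip rate are illustrative assumptions chosen for the demo, not details drawn from any incident described in this article.

```python
# Minimal sketch of training-data poisoning via label flipping.
# Dataset, model, and flip rate are illustrative assumptions only.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score, recall_score
from sklearn.model_selection import train_test_split

# Synthetic binary data: label 1 stands in for "malicious", 0 for "benign".
X, y = make_classification(n_samples=4000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.5, random_state=0
)

def fit_and_report(name, labels):
    # Train on the supplied (clean or poisoned) labels, score on clean test data.
    model = LogisticRegression(max_iter=1000).fit(X_train, labels)
    preds = model.predict(X_test)
    print(f"{name:>8}  accuracy={accuracy_score(y_test, preds):.3f}  "
          f"malicious-recall={recall_score(y_test, preds):.3f}")

# Clean baseline: train on the labels as-is.
fit_and_report("clean", y_train)

# Poisoned run: the attacker relabels 40% of the "malicious" training rows
# as "benign", nudging the model toward missing real threats.
rng = np.random.default_rng(0)
poisoned = y_train.copy()
malicious_idx = np.flatnonzero(poisoned == 1)
flip = rng.choice(malicious_idx, size=int(0.4 * len(malicious_idx)), replace=False)
poisoned[flip] = 0
fit_and_report("poisoned", poisoned)
```

Run as-is, the poisoned model’s recall on the “malicious” class generally drops relative to the clean baseline. The exact figures vary with the data, but the direction of the effect is the point the experts are making: a model is only as trustworthy as the data it was trained on.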

“Right now, [criminals] are compromising training models by flooding them with inaccurate data at a rapid speed. This raises many concerns, but for now, the technology has its limitations. As AI gets smarter, so will the attack methods of the bad guys who use it. It’s important that as a community, we put in the proper guardrails needed to keep our communities safe from weaponized AI,” says SentinelOne’s Morgan Wright.

Daniel Thomas

Daniel Thomas is a technology writer, researcher, and content producer for CyberRisk Alliance. He has over a decade of experience writing on the most critical topics of interest for the cybersecurity community, including cloud computing, artificial intelligence and machine learning, data analytics, threat hunting, automation, IAM, and digital security policies. He previously served as a senior editor for Defense News and as the director of research for GovExec News in Washington, D.C.
