How ChatGPT is changing the way cybersecurity practitioners look at the potential of AI

A new AI chatbot is demonstrating surprising capabilities in both offensive and defensive cybersecurity, while also changing minds about the long-term potential for machine-learning and AI-based systems. (Image credit: imaginima via Getty)

In certain cybersecurity circles, it has become something of a running joke over the years to mock the way that artificial intelligence and its capabilities are hyped by vendors or LinkedIn thought leaders.

The punchline is that while there are valuable tools and use cases for the technology — in cybersecurity as well as other fields — many solutions turn out to be overhyped by marketing teams and far less sophisticated or practical than advertised.

That’s partly why the reaction from information security professionals to ChatGPT over the past week has been so fascinating. A community already primed to be skeptical of modern AI has become fixated on the very real cybersecurity applications of a machine-learning chatbot.

“It’s frankly influenced the way that I’ve been thinking about the role of machine learning and AI in innovations,” said Casey John Ellis, chief technology officer, founder and chairman of Bugcrowd, in an interview.

Ellis’ experience mirrors that of scores of other cybersecurity researchers who, like much of the tech world, have spent the past week poking, prodding and testing ChatGPT for its depth, sophistication and capabilities. What they found may put more weight behind claims that artificial intelligence, or at least advanced machine-learning programs, can be the sort of disruptive and game-changing technology that has long been promised.

In a short period of time, security researchers have been able to use the program to perform a number of offensive and defensive cybersecurity tasks, such as generating convincing or polished phishing emails, developing usable YARA rules, spotting buffer overflows in code, producing evasion code that could help attackers bypass threat detection, and even writing malware.
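The code-review use case is easy to picture. Below is a minimal sketch, not the researchers' actual test harness: ask_llm() is a hypothetical stand-in for the ChatGPT interface, and the embedded C function is a textbook strcpy() overflow written for illustration.

```python
# A minimal sketch of the "spot the buffer overflow" experiment. ask_llm() is
# a hypothetical placeholder, and the C snippet below was written for this
# example rather than taken from the researchers' tests.

VULNERABLE_C = """
#include <string.h>

void greet(const char *name) {
    char buf[16];
    strcpy(buf, name);  /* no bounds check: input over 15 bytes overflows buf */
}
"""

def ask_llm(prompt: str) -> str:
    """Hypothetical placeholder for a ChatGPT-style completion call."""
    raise NotImplementedError("wire this up to an actual model endpoint")

prompt = "Does this C function contain a buffer overflow? Explain:\n" + VULNERABLE_C
# answer = ask_llm(prompt)  # researchers report the model flags the unbounded strcpy()
```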

While ChatGPT’s parameters prevent it from doing straightforwardly malicious things, like detailing how to build a bomb or writing malicious code, multiple researchers have found ways to bypass those protections.

Dr. Suleyman Ozarslan, a security researcher and co-founder of Picus Security, said he was able to get the program to perform a number of offensive and defensive cybersecurity tasks, including writing a World Cup-themed phishing email in “perfect English,” generating Sigma detection rules to spot cybersecurity anomalies, and producing evasion code that can bypass those same detection rules.
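For context, Sigma rules are vendor-neutral YAML descriptions of log patterns that converters translate into queries for specific SIEM products. The sketch below shows the general shape of such a rule; it is a hand-written illustration rather than Ozarslan's actual output, and it assumes PyYAML is available for parsing.

```python
# Illustrative only: the general shape of a Sigma detection rule, written for
# this article rather than taken from Ozarslan's output. Sigma rules are plain
# YAML, so PyYAML's safe_load() turns one into a dict that a backend converter
# could translate into a SIEM-specific query.

import yaml  # pip install pyyaml

SIGMA_RULE = """
title: Burst of Office Document Modifications
status: experimental
description: Rapid rewrites of Office files, a common ransomware symptom
logsource:
  product: macos
  category: file_event
detection:
  selection:
    TargetFilename|endswith:
      - '.docx'
      - '.xlsx'
      - '.pptx'
  condition: selection
level: high
"""

rule = yaml.safe_load(SIGMA_RULE)
print(f"{rule['title']} (severity: {rule['level']})")
```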

Most notably, Ozarslan was able to trick the program into writing ransomware for Mac operating systems, despite specific terms of use that prohibit the practice.

“Because ChatGPT won’t directly write ransomware code, I described the tactics, techniques, and procedures of ransomware without describing it as such. It’s like a 3D printer that will not ‘print a gun’, but will happily print a barrel, magazine, grip and trigger together if you ask it to,” Ozarslan said in an email alongside his research. “I told the AI that I wanted to write a software in Swift, I wanted it to find all Microsoft Office files from my MacBook and send these files over HTTPS to my webserver. I also wanted it to encrypt all Microsoft Office files on my MacBook and send me the private key to be used for decryption.”

The prompt resulted in the program generating sample code without triggering a violation or displaying a warning message to the user.

Screenshot of a prompt tricking ChatGPT to write ransomware code, something prohibited by the terms of service. (Image credit: Dr. Suleyman Ozarslan and Picus Security)

Researchers have been able to leverage the program to unlock capabilities that could make life easier for both cybersecurity defenders and malicious hackers. The dual-use nature of the program has spurred comparisons to programs like Cobalt Strike and Metasploit, which function as legitimate penetration testing and adversary simulation software while also serving as some of the most popular tools for real cybercriminals and malicious hacking groups to break into victim systems.

While ChatGPT may end up presenting similar concerns, some argue this is a reality of most new innovations, and that while creators should do their best to close off avenues of abuse, it is impossible to fully prevent bad actors from using new technologies for harmful purposes.

“Technology disrupts things, that’s its job. I think unintended consequences are a part of that disruption,” said Ellis, who expects to see tools like ChatGPT used by bug bounty hunters and the threat actors they research over the next five to 10 years. “Ultimately it’s the role of the purveyor to minimize those things but also you have to be diligent.”

Real potential — and real limitations

While the program has impressed, and many of those interviewed by SC Media expressed the belief that it will at the very least lower the barrier to entry for a number of fundamental offensive and defensive hacking tasks, real limitations to its output remain.

As mentioned, ChatGPT refuses to do straightforwardly unethical things like writing malicious code, teaching you how to build a bomb or opining on the inherent superiority of different races or genders.

However, those parameters can often be bypassed by tricking or socially engineering the program, for example by asking it to treat the question as a hypothetical or to answer a prompt from the perspective of a fictional malicious party. Even then, some of the responses tend to be skin-deep, merely mimicking on a surface level what a convincing answer might sound like to an uninformed party.

“One of the disclaimers that OpenAI mentions and ChatGPT mentions as well: you have to be really careful about the problem of coherent nonsense,” said Jeff Pollard, a vice president and principal analyst at Forrester who has been researching the program and its capabilities in the cybersecurity space. “It can sound like it makes sense but it’s not right, it’s not factually correct, you can’t actually follow it and do anything with it … it’s not like you can suddenly use this to write software if you don’t write software … you [still] have to know what you’re actually doing in order to leverage it.”

When the program does generate code or malware, the output tends to be relatively simplistic or riddled with bugs.

Ellis called the emergence of ChatGPT an “oh shit” moment for the adversarial AI/machine-learning knowledge domain. The ability of multiple security researchers to find loopholes that allow them to sidestep parameters put in place by the program’s handlers to prevent abuse underscores how the capabilities and vulnerabilities around emerging technologies can often outrun our ability to secure them, at least in the early stages.  

“We saw that with mobile [vulnerabilities] back in 2015, with IoT around 2017 when Mirai happened. You’ve got this sort of rapid deployment of technology because it solves a problem but at some point in the future people realize that they made some security assumptions that are not wise,” said Ellis.

ChatGPT relies on what’s known as reinforcement learning from human feedback. That means the more it interacts with humans and user-generated prompts, the more it learns. It also means that the program, whether through user feedback or changes by its handlers, could eventually learn to catch on to some of the tactics researchers have used to bypass its ethical filters. But it’s also clear that the cybersecurity community in particular will continue to do what it does best: test the guardrails that systems have put in place and find the weak points.
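To make that feedback loop concrete, here is a deliberately toy sketch of the preference-learning idea behind reinforcement learning from human feedback. A real system trains a neural reward model on human rankings of responses and then optimizes the language model against it; this stand-in simply nudges word weights toward whichever of two responses a rater preferred.

```python
# A toy illustration of learning from human preferences, the idea behind
# reinforcement learning from human feedback (RLHF). This is a deliberate
# simplification: the "reward model" here is just a word-weight table that
# shifts toward whichever of two responses a human rater prefers.

def score(response: str, weights: dict) -> float:
    """Toy reward model: sum of learned weights for each word in the response."""
    return sum(weights.get(word, 0.0) for word in response.lower().split())

def record_preference(preferred: str, rejected: str, weights: dict, lr: float = 0.1) -> None:
    """Shift weights so the preferred response scores higher next time."""
    for word in preferred.lower().split():
        weights[word] = weights.get(word, 0.0) + lr
    for word in rejected.lower().split():
        weights[word] = weights.get(word, 0.0) - lr

weights: dict = {}
# A human rater prefers the refusal over the harmful completion.
record_preference("sorry, i cannot help with that request",
                  "here is the malicious code you asked for", weights)

print(score("sorry, i cannot help with that request", weights))    # positive
print(score("here is the malicious code you asked for", weights))  # negative
```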

“I think the hyper-interesting point … there is this aspect of when you turn things like this on and release them out into the world … you rapidly learn how terrible many human beings are,” Pollard said. “What I mean by that is you also start to realize … that [cybersecurity professionals] find ways to circumvent controls because our job deals with people who find ways to circumvent controls.”

Some in the industry have been less surprised by the applicability of engines like ChatGPT to cybersecurity. Toby Lewis, head of threat analysis at Darktrace, a cyber defense company that bases its primary tools around a proprietary AI engine, told SC Media that he has been impressed with some of the cyber capabilities programs like ChatGPT have demonstrated.

“Absolutely, there are some examples of code being generated by ChatGPT … the first response to that is this is interesting, this is clearly lowering the barrier, at the very least, for someone starting off [in] this space,” Lewis said.

The ultimate potential of ChatGPT is being studied in real time. For each example of a user finding a novel or interesting use case, there is another of someone digging beneath the surface to discover the nascent engine’s lack of depth or its inability to intuit the “right” answer the way a human mind would.

Even if those efforts eventually bump up against the program’s limitations, its long-term impact is already being felt in the cybersecurity industry. Pollard said the emergence of ChatGPT has already helped crystallize for him and other analysts how similar programs might be practically leveraged by companies for defensive cybersecurity work in the future.

“I do think there is an aspect of looking at what it’s doing now and it’s not that hard to see a future where you could take a SOC analyst that maybe has less experience, hasn’t seen as much and they’ve got something like this sitting alongside them that helps them communicate the information, maybe helps them understand or contextualize it, maybe it offers insights about what to do next,” he said.
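A minimal sketch of the workflow Pollard describes might look like the following, assuming a hypothetical ask_llm() helper in place of a real model endpoint and an alert record invented for illustration.

```python
# A sketch of an LLM assistant summarizing an alert for a junior SOC analyst
# and suggesting next steps. ask_llm() is a hypothetical stand-in for whatever
# completion API a team might use; the alert record below is invented.

import json

def ask_llm(prompt: str) -> str:
    """Hypothetical placeholder for a large-language-model completion call."""
    raise NotImplementedError("wire this up to an actual model endpoint")

alert = {
    "rule": "Possible credential dumping",
    "detail": "lsass.exe memory accessed by procdump64.exe",
    "host": "WS-0142",
    "severity": "high",
}

prompt = (
    "You are assisting a junior SOC analyst. Explain the following alert in "
    "plain language, note what could make it a false positive, and suggest "
    "two next investigation steps:\n" + json.dumps(alert, indent=2)
)
# triage_note = ask_llm(prompt)  # e.g., a plain-English summary plus suggested actions
```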

Lewis said that even in the short amount of time the program has been available to the public, he’s already noticed a softening in the cynicism some of his cybersecurity colleagues have traditionally brought to discussions around AI in cybersecurity.

While the emergence of ChatGPT may eventually lead to the development of technologies or businesses that compete with Darktrace, Lewis said it has also made it much easier to explain the value those technologies bring to the cybersecurity space.

“AI is often treated as one of those industry buzzwords that get sprinkled on a new, fancy product. ‘It’s got AI, therefore it’s amazing’ and I think that it’s always hard to push against that,” said Lewis. “By making the use of AI more accessible, more entertaining maybe, in a way where [security practitioners] can just play with it, they will learn about it and it means that I can now talk to a security researcher…and suddenly a few say ‘you know, I see where this can be really valuable.’”  

Derek B. Johnson

Derek is a senior editor and reporter at SC Media, where he has spent the past three years providing award-winning coverage of cybersecurity news across the public and private sectors. Prior to that, he was a senior reporter covering cybersecurity policy at Federal Computer Week. Derek has a bachelor’s degree in print journalism from Hofstra University in New York and a master’s degree in public policy from George Mason University in Virginia.
