New AI phishing tool FraudGPT tied to same group behind WormGPT

A new AI bot called FraudGPT, built exclusively for offensive purposes such as crafting spear-phishing emails, creating cracking tools, and carding, has been discovered for sale on various dark web marketplaces and Telegram channels.

John Bambenek, principal threat hunter at Netenrich, which discovered FraudGPT, said his team believes the threat actor behind FraudGPT is likely the same group that runs WormGPT, another AI phishing tool that SlashNext reported on in a July 13 blog post.

“We believe it’s likely the same actor behind WormGPT and FraudGPT, and the reports indicate a criminal actor developing multiple tools for different audiences,” said Bambenek. “In a similar way to startups, it looks like this entity is trying to find their market with their techniques.”

Bambenek pointed out that, to date, Netenrich knows of no active attacks using FraudGPT. He added that FraudGPT focuses on short-duration, high-volume attacks such as phishing, while WormGPT targets longer-term attacks involving malware and ransomware.

In a blog post July 25, Netenrich researchers said they found evidence that FraudGPT has been circulating on Telegram since July 22. The researchers said a threat actor can draft an email with a “high level of confidence” of enticing victims to click on malicious links.

This kind of tool can help buyers execute business email compromise (BEC) phishing campaigns against organizations, the researchers said. A FraudGPT subscription starts at $200 per month and runs up to $1,700 per year.

“As time goes on, criminals will find further ways to enhance their criminal capabilities using the tools we invent,” said the Netenrich researchers in the blog. “While organizations can create ChatGPT and other tools with ethical safeguards, it isn’t a difficult feat to reimplement the same technology without those safeguards.”

Next-gen product lowers the barrier for cybercrime

ChatGPT works for even the most low-end cybercriminals, and the new FraudGPT adds convenience, no ethical guardrails, and hand-holding throughout the phishing campaign creation process, explained Pyry Avist, co-founder and CTO at Hoxhunt. Avist said this lowers the barrier to entry for cybercrime and further democratizes sophisticated phishing attacks.

“It’s the cybercrime economy’s version of next-gen product development for the phishing kit model,” said Avist.

Avist said that despite telltale signs of poor grammar and graphics, the email texts and malicious site templates found in phishing kits are cheap and effective.

"Instead of phishing templates, FraudGPT lets criminals craft tailored attacks as per targeted specifications," added Avist. "That's certainly concerning, but it’s something that ChatGPT will also do, and probably do better.

Melissa Bischoping, director of endpoint security research at Tanium, added that the “samples” FraudGPT’s sellers cite in screenshots, such as phishing lures, HTML for web pages, and other features, aren’t much different from what regular ChatGPT could generate with some minor workarounds to bypass its anti-abuse mechanisms.

“I would even challenge if you’re doing them ‘better, faster,’ because we all know GPT-generated code is error-prone and there’s not yet a ton of conclusive, well-designed research on whether GPT-generated phishing lures are more effective than human-generated ones,” said Bischoping. “In all honesty, this seems like a lot of hot air to scam script kiddies out of cash and capitalize on the surge in interest around LLM-based attacker tools.”
