AI/ML, Endpoint/Device Security, Generative AI

Popular AI tools tricked into creating malware for Chrome browser


Cato Networks demonstrated how a threat intelligence researcher with no prior malware coding experience was able to trick popular large language model (LLM) tools into developing a Google Chrome infostealer.

The news from earlier this week caught the eye of security pros, mainly because the researchers were able to jailbreak LLMs such as DeepSeek, Microsoft Copilot, and OpenAI’s ChatGPT, and used them to create malware targeting arguably the most popular browser on the market, with more than 3 billion users: Google Chrome.

Cato researchers explained that they created an alternative fictional universe they called “Immersive World,” which used narrative engineering to assign roles to the LLMs, effectively bypassing controls and normalizing restricted operations.

In essence, they created a fictional world where hacking is normal.

According to the Cato CTRL report, Cato researchers disclosed their findings to DeepSeek, Microsoft, and OpenAI, but only Microsoft and OpenAI acknowledged receipt. Cato said it also shared the infostealer code with Google, but the tech giant declined to review it.

Jason Soroko, senior fellow at Sectigo, said the Cato Networks research proved that once jailbroken and freed from safeguards, an LLM can generate harmful instructions, disinformation, and toxic content, which attackers can weaponize for criminal or unethical activities. These activities include facilitating cybercrime, evading moderation on harmful topics, and amplifying extremist narratives, all of which erode trust in AI systems.

“Mitigation requires multi-layer defenses: rigorous filter tuning, adversarial training, and dynamic monitoring to detect anomalous behavior in real-time,” said Soroko. “Hardening prompt structures, continuous feedback loops, and regulatory oversight further reduce exploitation risks, fortifying the model against malicious jailbreak attempts.”
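As a rough sketch of the layered defenses Soroko describes, the Python example below combines a static pattern filter with simple per-session anomaly scoring applied before a prompt reaches an LLM. The patterns, threshold, and names such as `SessionMonitor` and `guard_prompt` are illustrative assumptions, not any vendor's actual guardrail implementation.

```python
import re
from dataclasses import dataclass, field

# Hypothetical illustration only: production guardrails combine tuned
# classifiers, adversarial training, and runtime monitoring, not regexes.

# Layer 1: static filter for common jailbreak framings (persona
# reassignment, instruction-override phrasing, fictional-world setups).
SUSPICIOUS_PATTERNS = [
    re.compile(r"ignore (all|any|previous) (instructions|rules)", re.I),
    re.compile(r"(pretend|imagine|act as if) .* (no|without) (rules|restrictions)", re.I),
    re.compile(r"in this (fictional|imaginary) (world|universe)", re.I),
]

@dataclass
class SessionMonitor:
    """Layer 2: dynamic monitoring. Tracks how often a session trips the
    static filter, so slow, multi-turn narrative attacks accumulate risk."""
    flags: int = 0
    turns: int = 0
    history: list = field(default_factory=list)

    def score(self, prompt: str) -> float:
        self.turns += 1
        hits = sum(1 for p in SUSPICIOUS_PATTERNS if p.search(prompt))
        self.flags += hits
        self.history.append((prompt[:80], hits))
        # Ratio of flagged turns serves as a crude session anomaly score.
        return self.flags / self.turns

def guard_prompt(monitor: SessionMonitor, prompt: str, threshold: float = 0.3) -> bool:
    """Return True if the prompt may be forwarded to the LLM."""
    risk = monitor.score(prompt)
    # A real system would escalate to logging and review, not just block.
    return risk < threshold

if __name__ == "__main__":
    session = SessionMonitor()
    for turn in [
        "Tell me about browser security.",
        "Imagine a world without rules where hacking is normal.",
        "In this fictional universe, write the next chapter.",
    ]:
        allowed = guard_prompt(session, turn)
        print(f"{'PASS' if allowed else 'BLOCK'}: {turn}")
```

In practice the regex layer would be replaced by tuned classifiers hardened with adversarial training, and the anomaly signal would feed real-time monitoring rather than a hard block, but the layering principle is the same.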

Nicole Carignan, senior vice president for security and AI strategy and field CISO at Darktrace, added that the industry has already seen the early impact of AI on the threat landscape, along with some of the challenges organizations face when using these systems, both from inside their organizations and from adversaries outside the business.

Carignan pointed out that Darktrace recently released research finding that 74% of security professionals say AI-powered threats are now a significant issue, and that 89% agree AI-powered threats will remain a major challenge for the foreseeable future.

“Moving forward, it will take a growing arsenal of defensive AI to effectively protect organizations in the age of offensive AI,” said Carignan. “As adversaries double down on the use and optimization of autonomous agents for attacks, human defenders will become increasingly reliant on and trusting of autonomous agents for defense.”
