Widely used generative artificial intelligence models, including DeepSeek, Microsoft Copilot, and ChatGPT, have been jailbroken with the new "Immersive World" technique to create information-stealing malware targeting Google Chrome, Cybernews reports.
The Immersive World method involved prompts describing a fictional virtual environment in which advanced programming and malware development were framed as essential skills; the generative AI models were then goaded with encouraging phrases into producing the infostealing payload, according to Cato Networks researchers.
Only Microsoft and OpenAI have acknowledged the findings, while Google declined to review the infostealer crafted by the generative AI models.
"The investigation emphasizes that even unskilled threat actors can leverage LLMs to create malicious code, which highlights the urgent need for improved AI safety measures," the researchers said.
The development comes amid growing adoption of generative AI and mounting privacy concerns surrounding the technology, particularly China-based DeepSeek, which may be banned from U.S. government-issued devices.