More threat actors have been looking to exploit artificial intelligence in cyberattacks, with malicious AI tool mentions and AI jailbreak discussions on the dark web surging by 200% and 52%, respectively, during the past 12 months, SiliconAngle reports.

Intrusions have also increasingly involved jailbroken AI tools, such as FraudGPT and WormGPT, which have been leveraged to automate phishing attacks and malware development, according to findings from KELA's 2025 AI Threat Report. Attackers have also harnessed generative AI to mount more sophisticated phishing campaigns with highly convincing social engineering lures, including deepfakes, said KELA researchers. Such findings indicate a significant shift in the cyber threat landscape, noted KELA AI Product and Research Lead Yael Kishon. "Cybercriminals are not just using AI; they are building entire sections in the underground ecosystem dedicated to AI-powered cybercrime. Organizations must adopt AI-driven defenses to combat this growing threat," Kishon added.