Criminal abuse of artificial intelligence (AI) is moving beyond polished phishing emails and is on the cusp of driving a wave of automated, multistage cyberattacks, researchers predict.
Attack data collected between May and July underscore a trend in which cybercriminals increasingly use social engineering techniques to deliver multistage payloads. According to researchers at Darktrace, malicious emails that encourage potential victims to follow a series of steps before delivering a payload or attempting to harvest sensitive information rose 59% over that period.
"Nearly 50,000 more of these attacks were detected by Darktrace in July than May, indicating potential use of automation, and the speed of these types of attacks will likely rise as greater automation and AI are adopted and applied by attackers," according to the Darktrace Cyber AI Research Centre.
The rise in multistage payloads was accompanied by a surge in quishing, phishing that hides a malicious link inside a QR code, which researchers said pointed to the use of automation in these attacks.
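To illustrate why QR-code lures slip past text-only email filters, consider a minimal defender-side sketch: the malicious URL is encoded in image pixels rather than in the message body, so a scanner that only reads text never sees it. The example below assumes the third-party Python packages Pillow and pyzbar, and the file name attachment.png is hypothetical.

```python
# Minimal sketch: decode QR codes found in an email image attachment
# and surface any embedded URLs for the same checks a text link gets.
# Assumes the third-party packages Pillow and pyzbar are installed.
from PIL import Image
from pyzbar.pyzbar import decode


def extract_qr_urls(image_path: str) -> list[str]:
    """Return URL-like payloads from every QR code in the image."""
    results = decode(Image.open(image_path))  # one entry per barcode found
    payloads = [r.data.decode("utf-8", errors="replace") for r in results]
    return [p for p in payloads if p.lower().startswith(("http://", "https://"))]


if __name__ == "__main__":
    # "attachment.png" stands in for a saved email attachment.
    for url in extract_qr_urls("attachment.png"):
        print("QR-embedded link:", url)  # hand off to URL reputation checks
```

The gap the sketch exposes is the point: a filter that never decodes the image never sees the link at all, which is part of what makes automated QR-code generation attractive to attackers.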
The Darktrace research is based on analysis of its own customer base.
The key word here is "potential" use of AI by cybercriminals. Darktrace did not conclusively assert that AI was being used in these attacks, only that the technology could easily be leveraged to streamline them.
The trend would dovetail with earlier abuse of generative AI by criminals. In April, Darktrace reported a 135% uptick in “novel social engineering attacks,” which it believes reflects abuse of platforms such as ChatGPT.
The common denominator in multistage attacks and one-and-done phishing attacks is persuasive, believable text-based communication. However, as targets become increasingly aware of plain-vanilla spear-phishing and whaling attacks, cybercriminals are attempting to flip the script.
Darktrace’s data show phishing emails impersonating senior executives are down 11%, while emails impersonating company IT teams are up 19%, suggesting attackers switched targets as employees caught on to the VIP impersonation ruse.
“While it’s common for attackers to pivot and adjust their techniques as efficacy declines, generative AI — particularly deepfakes — has the potential to disrupt this pattern in favor of attackers,” wrote Jack Stockdale, Darktrace’s chief technology officer. “Factors like increasing linguistic sophistication and highly realistic voice deep fakes could more easily be deployed to deceive employees.”
Cybersecurity pros said generative AI is a game-changer that lets cybercriminals develop and modify attacks quickly, but SlashNext CEO Patrick Harr said the technology has also improved security at organizations.
“With the increase in sophistication and volume of threats attacking organizations on all devices, generative AI-based security provides organizations with a fighting chance at stopping these breaches,” said Harr.
Because AI and automation have enabled attackers to operate at greater speed and scale, Nicole Carignan, Darktrace vice president of strategic cyber AI, said organizations must likewise entrust AI to interrupt sophisticated attacks in progress.
“Adoption will need to increase in the future as novel threats become the new normal,” said Carignan.
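The anomaly-based approach Carignan describes can be sketched in a few lines: rather than matching known-bad signatures, a model learns what normal activity looks like for an organization and flags departures from it. The following is a toy illustration using scikit-learn's IsolationForest, not Darktrace's actual method; the email features and numbers are invented for the example.

```python
# Toy sketch of anomaly-based email screening: learn a baseline of
# "normal" metadata, then flag outliers. Not any vendor's real method;
# features and values are invented for illustration.
import numpy as np
from sklearn.ensemble import IsolationForest

# Hypothetical per-email features:
# [hour sent, links in body, recipient count, sender-domain age in days]
baseline = np.array([
    [9, 1, 1, 3650],
    [11, 0, 2, 3650],
    [14, 2, 1, 2900],
    [16, 1, 3, 4100],
    [10, 1, 1, 3650],
])

model = IsolationForest(contamination=0.1, random_state=0).fit(baseline)

# A 3 a.m. blast with many links from a week-old domain sits far from
# the learned baseline; predict() returns -1 for outliers.
suspect = np.array([[3, 8, 40, 7]])
print("anomalous" if model.predict(suspect)[0] == -1 else "normal")
```

The design choice is the one Carignan points to: because the model scores deviation from a learned baseline rather than recognizing a specific lure, it can in principle react to a novel, AI-generated attack it has never seen before.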