Advancements in artificial intelligence and social engineering are going to make phishing attacks harder than ever to spot and eliminate, according to KnowBe4.
The security training specialist said in its annual Phishing Trends Report that advances in AI are leading to phishing attacks that are more prolific and convincing to end users.
In particular, the researchers noted an increase in the use of "polymorphic" attacks, which use AI platforms to create slightly altered phishing messages. These small changes let the messages slip past filters, and KnowBe4 reported a 47% increase in phishing messages missed by security tools over the course of 2024.
The result is phishing messages that are not only more likely to evade detection by security filters, but are also more likely to trick users into following malicious links thanks to the use of more convincing AI creation tools.
“Polymorphic phishing campaigns consist of a series of almost identical emails which only differ by a small detail. These slightly altered attacks can be difficult to detect by systems that look out for ‘known bad’ (blocklisting of known fraudulent addresses and payloads), such as Microsoft’s native security and secure email gateways,” KnowBe4 said in its report.
“Similarly, they can also be hard to remediate from inboxes across an organization using traditional email security technologies.”
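To illustrate why "known bad" filtering struggles here, the sketch below shows how an exact-match fingerprint blocklist (a common blocklisting approach, simplified; the message text, URL, and function names are hypothetical) fails against a polymorphic variant that differs by a single character:

```python
import hashlib

def message_fingerprint(body: str) -> str:
    """Exact-match fingerprint of the kind a 'known bad' blocklist might store."""
    return hashlib.sha256(body.encode("utf-8")).hexdigest()

# A phishing message already on the blocklist...
original = "Your invoice #4821 is overdue. Pay now at http://example.test/pay"
# ...and a polymorphic variant that changes only one small detail.
variant = "Your invoice #4822 is overdue. Pay now at http://example.test/pay"

blocklist = {message_fingerprint(original)}

# The one-character change yields a completely different hash,
# so the variant sails past the exact-match filter untouched.
assert message_fingerprint(original) in blocklist
assert message_fingerprint(variant) not in blocklist
```

Real gateways use richer signals than a bare hash, but the same weakness applies to any filter keyed on exact matches of known-bad addresses or payloads.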
External phishing campaigns are not the only threat facing organizations, however. The report also notes that there is a growing trend towards internal attacks thanks to efforts by North Korean threat actors to infiltrate companies as new hires.
KnowBe4, of all companies, knows well the dangers posed by insider attacks from IT workers. Last July, the company disclosed a network breach stemming from a supposed new hire who was actually a North Korean intelligence agent.
“Our new employee ‘Kyle’ attempted to install malware the minute he switched on his new laptop. We isolated his account, and he never succeeded with the installation, let alone gained access to any data,” the company said in its recounting of the incident.
“Kyle had applied with a fake CV, an AI-manipulated headshot, and a stolen Social Security number. He was not a real person but part of North Korea’s fake employee scheme, which places insiders in organizations for financial gain and espionage.”
Part of the problem, according to KnowBe4, is the growing reliance on remote workers. When applicants never meet the hiring company on-site or over direct video, organizations are prone to social-engineering attacks by threat actors seeking direct access to networks and databases.
“Software engineering roles are frequently targeted by cybercriminals due to their job mobility and privileged access — you may never meet someone face-to-face who has a high level of access to systems and data,” KnowBe4 said.
“Additionally, coding challenges within the hiring process provide a unique avenue for attack.”