COMMENTARY: Most people accept email as the standard for digital business communication. Unfortunately, it’s also notoriously difficult to secure.
Despite decades of efforts to strengthen protection protocols and educate users on spotting potential threats, email remains the leading attack vector. Worse, the proliferation of generative AI has fueled a wave of malicious AI tools, making it even easier for attackers to circumvent traditional email security.

Unsurprisingly, advanced attacks have escalated. Between 2023 and 2024, business email compromise (BEC) increased by more than 54%, and, in any given week in 2024, organizations had a 70% chance of receiving at least one vendor email compromise (VEC) attack.
How AI enables advanced attacks
Over the past two years, we’ve seen an explosion in generative AI products and, with them, a rise in malicious AI. Cybercriminals now have access to a wealth of tools unbound by the ethical and safety restrictions of traditional AI, such as uncensored chatbots and weaponized large language models (LLMs).
Jailbroken AI tools have lowered the barrier to entry for cybercriminals, allowing even novice threat actors to launch sophisticated campaigns. Weaponized AI now fuels difficult-to-detect phishing attacks, which account for more than 76% of all advanced attacks. By gaining an initial foothold through these AI-generated threats, attackers can escalate to more complex tactics.
Though most email users still associate phishing attacks with poorly written and error-riddled messages, attackers can use malicious AI to quickly generate polished and professional emails that, when paired with spoofed email addresses, can easily bypass employee scrutiny and traditional security solutions.
These tools also facilitate more convincing impersonations, allowing threat actors to mimic a colleague or vendor’s voice and messaging style with alarming accuracy — making BEC and VEC attacks nearly impossible to detect.
Other trends supporting the escalation in advanced email attacks
The democratization of AI has created plenty of headaches for security teams, but it’s not the only trend working in cybercriminals’ favor. There are a few other factors undermining email security:
Cybersecurity has historically been more reactive than proactive. However, threat actors are evolving their techniques faster than traditional defenses can adapt.
Even with frequent protocol updates and ongoing employee education, security teams face an ever-increasing influx of sophisticated attacks. As malicious AI becomes more pervasive and accessible, cybercriminals will likely discover new ways to launch highly personalized and virtually undetectable attacks at scale.
While weaponized AI creates more challenges, embracing AI-powered threat detection can mitigate growing risks and help organizations avoid costly attacks in the year ahead.
Mike Britton, chief information officer, Abnormal Security
SC Media Perspectives columns are written by a trusted community of SC Media cybersecurity subject matter experts. Each contribution has a goal of bringing a unique voice to important cybersecurity topics. Content strives to be of the highest quality, objective and non-commercial.