
How malicious AI powers the latest surge of advanced email threats


COMMENTARY: Most people accept email as the standard for digital business communication. Unfortunately, it’s also notoriously difficult to secure.

Despite decades of efforts to strengthen protection protocols and educate users on spotting potential threats, email remains the leading attack vector. Worse, the proliferation of generative AI has fueled a wave of malicious AI tools, making it even easier for attackers to circumvent traditional email security.

Unsurprisingly, advanced attacks have escalated. Between 2023 and 2024, business email compromise (BEC) increased more than 54%, and, on any given week in 2024, organizations had a 70% chance of receiving at least one vendor email compromise (VEC) attack.

How AI enables advanced attacks

Over the past two years, we’ve seen an explosion in generative AI products and, with them, a rise in malicious AI. Cybercriminals now have access to a wealth of tools unbound by the ethical and safety restrictions of traditional AI, such as uncensored chatbots and weaponized large language models (LLMs).


Jailbroken AI tools have lowered the barrier to entry for cybercriminals, allowing even novice threat actors to launch sophisticated campaigns. Weaponized AI now fuels difficult-to-detect phishing attacks, which account for more than 76% of all advanced attacks. By gaining an initial foothold through these AI-generated threats, attackers can escalate to more complex tactics.

Though most email users still associate phishing attacks with poorly written and error-riddled messages, attackers can use malicious AI to quickly generate polished and professional emails that, when paired with spoofed email addresses, can easily bypass employee scrutiny and traditional security solutions.

These tools also facilitate more convincing impersonations, allowing threat actors to mimic a colleague’s or vendor’s voice and messaging style with alarming accuracy, producing BEC and VEC attacks that are nearly impossible to detect.
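Why do spoofed sender addresses pass traditional checks? Exact-domain spoofing is usually caught by authentication standards like SPF, DKIM, and DMARC, but a lookalike domain the attacker registers himself authenticates cleanly for its own name. The sketch below, which assumes the third-party dnspython package and uses hypothetical domain names, illustrates the lookup side of that check.

```python
# A minimal sketch (assuming the third-party dnspython package) of a DMARC
# record lookup. An exact spoof of vendor.com fails this kind of check, but a
# registered lookalike such as vendor-invoices.com (both hypothetical) can
# publish valid records for its own name and authenticate cleanly.
import dns.resolver


def dmarc_policy(domain: str) -> str | None:
    """Return the domain's published DMARC record, or None if it has none."""
    try:
        answers = dns.resolver.resolve(f"_dmarc.{domain}", "TXT")
    except (dns.resolver.NXDOMAIN, dns.resolver.NoAnswer):
        return None
    for rdata in answers:
        record = b"".join(rdata.strings).decode()
        if record.lower().startswith("v=dmarc1"):
            return record
    return None


# The uncomfortable outcome: the lookalike can return a perfectly valid
# policy, so SPF/DKIM/DMARC all pass and the message lands in the inbox
# looking fully authenticated.
for domain in ("vendor.com", "vendor-invoices.com"):
    print(domain, "->", dmarc_policy(domain))
```

The catch is that authentication only proves a message came from the domain it claims; it says nothing about whether that domain is the one the recipient believes it is.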

Other trends driving the escalation in advanced email attacks

The democratization of AI has created plenty of headaches for security teams, but it's not the only trend working in cybercriminals’ favor. There are a few other factors undermining email security:

  • Availability of personal data online: By spending time perusing LinkedIn and company websites, threat actors can easily learn everything they’d need to know to impersonate someone and create a hyper-personalized attack. Of course, even if they can’t gather enough intel through those resources, dark web forums offer a gold mine of data, advice, and tools.
  • Multichannel phishing tactics: The prevalence of email makes it an easy vector for engaging with a target. However, to reduce the chance of being detected, attackers are increasingly employing a multichannel approach. By initiating contact via email and then directing victims to continue conversations on unsecured channels, such as text messages or third-party platforms like Telegram, they can conduct their social engineering outside a company's purview.
  • Over-dependence on outdated security tools: Traditional security products, like secure email gateways (SEGs), simply aren’t equipped to detect and block modern attacks. As a result, it’s dangerous to put too much faith in these systems, especially as cybercriminals continually evolve their tactics to circumvent legacy monitoring tools. Even without using AI, threat actors have plenty of methods to bypass SEGs: as long as a malicious email contains no traditional indicators of compromise, it’s a safe bet it will evade detection (see the sketch after this list).
  • Overreliance on security awareness training: While it’s vital to keep employees trained on security best practices, it’s shortsighted to expect workforces to identify every malicious email they receive, especially when never-before-seen tactics emerge almost daily. Threat actors are crafting messages that are nearly impossible to detect, even for the most security-aware professionals. Depending on employees as the first line of defense against ever-evolving threats is a one-way ticket to costly consequences.
  • Impaired employee judgment: Given the challenging job market, economic uncertainty, and a near-constant fear of layoffs, employees may feel pressured to correct perceived mistakes that could reflect poorly on their performance. For example, if they receive what appears to be a legitimate reminder from a vendor about an overdue invoice, they might rush to pay it without first verifying the request. Similarly, burnout and overwork can cloud an employee’s judgment, making them easy targets for manipulation by cybercriminals.
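To make the gateway limitation concrete, here is a minimal sketch of indicator-based filtering. The blocklist, phrases, and verdicts are illustrative assumptions, not any vendor’s actual detection logic; the point is that a polished BEC message with no links, no attachments, and no known-bad infrastructure matches nothing and sails through.

```python
# A minimal sketch of indicator-of-compromise (IOC) matching in the style of a
# legacy gateway filter. The blocklist, phrases, and verdicts are illustrative
# assumptions, not any vendor's actual rules.
KNOWN_BAD_DOMAINS = {"malware-host.example", "phish-kit.example"}  # hypothetical
SUSPICIOUS_PHRASES = ("click here immediately", "verify your password")


def legacy_filter_verdict(sender_domain: str, body: str, has_attachment: bool) -> str:
    """Flag mail only when it matches a known indicator."""
    if sender_domain in KNOWN_BAD_DOMAINS:
        return "block"
    if any(phrase in body.lower() for phrase in SUSPICIOUS_PHRASES):
        return "quarantine"
    if has_attachment:
        return "scan attachment"
    return "deliver"


# A polished, AI-written BEC message: no link, no attachment, no known-bad
# domain, no telltale phrasing -- so indicator matching waves it through.
bec_body = (
    "Hi Dana, quick favor before my 2pm: our vendor updated their remittance "
    "details this quarter. Can you reroute today's invoice payment? I'll "
    "approve it as soon as I'm out of this meeting. Thanks, Chris"
)
print(legacy_filter_verdict("vendor-invoices.com", bec_body, has_attachment=False))
# -> deliver
```

Behavioral context, namely who is asking, for what, and whether that request is normal, is exactly what this verdict logic never sees.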
Cybersecurity has historically been more reactive than proactive. However, threat actors are evolving their techniques faster than traditional defenses can keep up.

Even with frequent protocol updates and ongoing employee education, security teams face an ever-increasing influx of sophisticated attacks. As malicious AI becomes more pervasive and accessible, cybercriminals will likely discover new ways to launch highly personalized and virtually undetectable attacks at scale.

While weaponized AI creates new challenges, embracing AI-powered threat detection can mitigate the growing risks and help organizations avoid costly attacks in the year ahead.
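At its core, the idea behind AI-powered detection is behavioral: learn a baseline of normal communication per sender and score deviations from it. The following is a minimal sketch of that idea, not any product’s actual model; the features, addresses, and scoring are illustrative assumptions.

```python
# A minimal sketch of the behavioral-baseline idea behind AI-based email
# defense: learn what normal looks like per sender, then score deviations.
# The features, addresses, and scoring are illustrative assumptions, not any
# product's actual model.
from collections import defaultdict

history = defaultdict(lambda: {"recipients": set(), "hours": set(), "payment_asks": 0})


def learn(sender: str, recipient: str, hour: int, mentions_payment: bool) -> None:
    """Fold one observed message into the sender's baseline."""
    baseline = history[sender]
    baseline["recipients"].add(recipient)
    baseline["hours"].add(hour)
    baseline["payment_asks"] += int(mentions_payment)


def anomaly_score(sender: str, recipient: str, hour: int, mentions_payment: bool) -> int:
    """Count deviations from the learned baseline (higher = more suspicious)."""
    baseline = history[sender]
    score = 0
    score += recipient not in baseline["recipients"]              # first contact
    score += hour not in baseline["hours"]                        # unusual send time
    score += mentions_payment and baseline["payment_asks"] == 0   # never asked for money
    return score


# Train on routine traffic, then score a first-ever payment request sent at an
# odd hour: it stands out even though the message text itself is flawless.
for h in (9, 10, 14):
    learn("cfo@vendor.com", "ap@company.com", hour=h, mentions_payment=False)
print(anomaly_score("cfo@vendor.com", "ap@company.com", hour=23, mentions_payment=True))
# -> 2
```

Real systems model far richer signals, such as relationship graphs, writing style, and financial context, but even this toy baseline flags the classic BEC pattern: a first-ever payment request sent at an unusual time.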

Mike Britton, chief information officer, Abnormal Security

SC Media Perspectives columns are written by a trusted community of SC Media cybersecurity subject matter experts. Each contribution has a goal of bringing a unique voice to important cybersecurity topics. Content strives to be of the highest quality, objective and non-commercial.

Mike Britton

Mike Britton, chief information officer at Abnormal Security, leads the company’s information security and privacy programs. He builds and maintains Abnormal Security’s customer trust program, performs vendor risk analysis, and protects the workforce with proactive monitoring of the multi-cloud infrastructure. Mike brings 25 years of information security, privacy, compliance, and IT experience from multiple Fortune 500 global companies.

LinkedIn: https://www.linkedin.com/in/mrbritton/

X: https://twitter.com/AbnormalSec
