
How BEC attacks are evolving in the AI era


Email scams targeting businesses have existed since the early days of email. Many of us are familiar with the “Nigerian Prince” scams that characterized phishing attacks in the 1990s, which duped thousands of people despite their absurdity. But as these scams became more common and cost more victims significant amounts of money, awareness of them grew until threat actors were forced to pivot to other, more effective tactics.

That gave way to business email compromise (BEC) attacks — an evolution of traditional phishing scams that grew in popularity over the last decade. The hallmark of a BEC attack is impersonation, where criminals pretend they are trusted identities (usually colleagues or company executives) through spoofed email addresses or compromised accounts, and trick their targets into divulging sensitive information or making unauthorized financial transactions.

CEO gift card scams are one of the hallmark BEC types we’ve seen in recent years. While these kinds of attacks were initially highly successful — especially because of their exploitation of human trust — most organizations have effectively trained (or are actively training) their employees to spot these attacks before it’s too late.

This puts threat actors right back at the beginning of their innovation cycle. What will they do next to refine their BEC tactics or create new ones in an effort to outwit their targets? As the latest FBI Internet Crime Report shows, BEC remains a significant threat to modern enterprises, exposing them to billions of dollars in losses each year. As a result, it’s critical for CISOs to keep up with these shifting tactics.

Here are a few emerging BEC methods that security leaders should watch for.

  • Vendor email compromise: Vendor email compromise (VEC) puts a spin on the traditional BEC attack. Rather than impersonating someone within the target’s organization, these attacks impersonate a trusted vendor (or use a compromised vendor account) to execute an invoice scam or other financial fraud. These attacks are highly successful because they exploit, through social engineering, the trust and existing relationships between vendors and customers. VEC attacks often ask the recipient to pay an outstanding invoice or update their billing account details (to a fraudulent bank account) for their next payment. And because vendor conversations routinely involve invoices and payments, these attacks rarely raise red flags, unlike the CEO gift card requests that have become nearly synonymous with BEC. Because VECs leverage known identities, whether through a compromised vendor account or a spoofed legitimate domain, they are often incredibly difficult to detect. They can fool even the most cybersecurity-savvy employees, which can quickly lead to lost revenue.
  • AI-generated BEC attacks: Previously, many cybercriminals relied on templates to launch their BEC campaigns. Because of this, a large percentage of attacks share common indicators of compromise that both the human eye and traditional security software can detect. However, generative AI tools like ChatGPT let scammers craft unique, perfectly written, and highly targeted content instantly, making detection exponentially more difficult. Although OpenAI has restricted the use of ChatGPT to create malicious content, cybercriminals have found creative ways around these controls by “jailbreaking” ChatGPT or even creating their own malicious platforms like FraudGPT and WormGPT. Over the past year, we’ve seen numerous attacks that were likely generated by AI. While AI-generated content on its own does not directly indicate an email attack, it is another signal that security teams can evaluate, alongside other patterns in email behavior, to detect an attack.
  • Email thread hijacking: Attackers increasingly employ email thread hijacking to insert themselves into an existing, legitimate email conversation. By impersonating one of the parties with a lookalike domain or even fabricating a completely new identity, the attacker hijacks the thread to launch further phishing exploits, monitor emails, learn the organizational chain of command, and target those who authorize financial transactions. Thread hijacking attacks typically start with account compromise, giving attackers access to an inbox where they can search for ongoing conversations about payments or other sensitive information. They then hijack those threads by pasting the conversation into a new email (usually sent from a lookalike or typosquatted domain) and carry on the conversation with the original recipients. Because the other recipients are familiar with the conversation and the threat actor has replaced the victim, the message is often overlooked as a continuation of the thread, which can lead to devastating results. By simply reading and understanding the conversation history (and even automating this process with generative AI), attackers can seamlessly blend into the conversation. These attacks are especially dangerous and difficult to detect because there is often no way for the average employee to realize they are no longer communicating with their known colleague or vendor. We’ve seen recent instances where sophisticated attackers incorporate additional thread-hijacking tactics, such as copying additional “colleagues” into the conversation; those “colleagues” are actually adversarial counterparts using lookalike domains to increase legitimacy. A minimal sketch of the kind of lookalike-domain check that can help flag these messages follows this list.
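
Both vendor impersonation and thread hijacking frequently rely on lookalike or typosquatted domains, so one simple signal is how closely a sender’s domain resembles a domain the organization already trusts without matching it exactly. The Python snippet below is a minimal, illustrative sketch of that idea only; the vendor domains and similarity threshold are hypothetical, and a real detection system would add far richer logic (homoglyph handling, domain registration age, authentication results, and so on).

```python
from difflib import SequenceMatcher

# Hypothetical set of domains the organization already trusts (vendors, partners).
TRUSTED_DOMAINS = {"acme-supplies.com", "examplevendor.com"}

def is_lookalike(sender_domain: str, threshold: float = 0.85) -> bool:
    """Flag domains that closely resemble a trusted domain without matching it exactly."""
    sender_domain = sender_domain.lower().strip()
    if sender_domain in TRUSTED_DOMAINS:
        return False  # an exact match is a known sender, not a lookalike
    for trusted in TRUSTED_DOMAINS:
        # SequenceMatcher ratio: 1.0 means identical strings, 0.0 means nothing in common
        if SequenceMatcher(None, sender_domain, trusted).ratio() >= threshold:
            return True  # close but not identical, e.g. a one-character typosquat
    return False

print(is_lookalike("acme-suppliies.com"))   # True: one extra character
print(is_lookalike("acme-supplies.com"))    # False: the real vendor domain
print(is_lookalike("totally-unrelated.io")) # False: not similar to any trusted domain
```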

BEC will likely remain attackers’ first choice and continue to be a leading category of financial losses. Why? Because these attacks work. Humans remain the biggest weakness in today’s organizations because they put immense amounts of trust in their digital communications. Cybercriminals know this, and we can count on them continuing to employ novel techniques to exploit that trust, using social engineering tactics to log in rather than hack in.

Traditional threat detection products, particularly those that rely on detecting known signatures like malware attachments and suspicious links, can only go so far in preventing this threat. Human behavior isn’t a static attack signal, and organizations will need dynamic products that can learn and adapt to user behaviors in their email environment. Basing detection on user behavior lets teams spot anomalies indicative of attacks, no matter where or how they originate: a spoofed vendor domain, a compromised executive account, an AI-generated email attack, or whatever technique hackers use next to launch BEC attacks.
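
As a rough illustration of what behavior-based scoring can look like, the sketch below combines several hypothetical signals, including an AI-generated-text likelihood and the lookalike-domain check from the earlier snippet, into a single risk score. The signal names, weights, and example values are invented for illustration; a real product would learn baselines from each organization’s own email behavior rather than hard-code them.

```python
from dataclasses import dataclass

@dataclass
class EmailSignals:
    """Hypothetical per-message signals a behavior-aware system might compute."""
    first_time_sender: bool         # no prior conversation history with this address
    lookalike_domain: bool          # resembles a known partner domain (see earlier sketch)
    requests_payment_change: bool   # asks to update banking or invoice details
    ai_generated_likelihood: float  # 0.0-1.0 output of a text classifier
    sent_outside_usual_hours: bool  # deviates from this sender's normal sending pattern

def risk_score(s: EmailSignals) -> float:
    """Weighted sum of anomaly signals; higher means more likely BEC/VEC. Weights are illustrative."""
    score = 0.0
    score += 0.25 if s.first_time_sender else 0.0
    score += 0.30 if s.lookalike_domain else 0.0
    score += 0.25 if s.requests_payment_change else 0.0
    score += 0.10 * s.ai_generated_likelihood
    score += 0.10 if s.sent_outside_usual_hours else 0.0
    return score

suspicious = EmailSignals(
    first_time_sender=True,
    lookalike_domain=True,
    requests_payment_change=True,
    ai_generated_likelihood=0.9,
    sent_outside_usual_hours=False,
)
print(f"risk score: {risk_score(suspicious):.2f}")  # 0.89: high enough to quarantine or review
```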

Mike Britton, chief information security officer, Abnormal Security

Mike Britton

Mike Britton, chief information security officer at Abnormal Security, leads the company’s information security and privacy programs. He builds and maintains Abnormal Security’s customer trust program, performs vendor risk analysis, and protects the workforce with proactive monitoring of its multi-cloud infrastructure. Mike brings 25 years of information security, privacy, compliance, and IT experience from multiple Fortune 500 global companies.

LinkedIn: https://www.linkedin.com/in/mrbritton/

X: https://twitter.com/AbnormalSec
