COMMENTARY: The cybersecurity industry has always highlighted a variety of different “gaps” in visibility, defense technologies, and policies. As technology continues to advance, we now see significant security gaps because of artificial intelligence (AI) and machine learning (ML).
Threat actors leverage and develop new AI-powered threat tactics with a focus on gaining access to organizations via advanced phishing attacks. This is evident in the sharp rise of malicious emails bypassing secure email gateways (SEGs). In the past year, our research found a staggering 104% increase in malicious emails reaching end-user mailboxes. The gaps we now face clearly highlight the need for a much stronger and more multifaceted approach to email security.
[SC Media Perspectives columns are written by a trusted community of SC Media cybersecurity subject matter experts. Read more Perspectives here.]
Earlier this year, the FBI issued a warning about the increasing threat of cybercriminals using AI to craft such deceptive campaigns. Leveraging AI, threat actors rapidly create large volumes of highly convincing phishing campaigns that bypass even the most popular SEGs, like Microsoft and Proofpoint. AI-powered phishing attacks are becoming harder to detect because of a variety of factors, including:
- Hyper-personalization: AI analyzes vast amounts of data from social media and other sources to mimic the writing style and language of friends, colleagues, and organizations. This lets threat actors create highly personalized emails that are more likely to bypass email security controls and deceive recipients.
- Optimal timing: AI algorithms analyze behavioral patterns to determine the best time to send phishing emails. By understanding when individuals are most likely to be distracted or tired, threat actors increase the chances of a successful attack.
- Scalability and adaptability: Automation technologies powered by AI let threat actors generate large volumes of phishing emails in a short amount of time. They adapt and evolve based on feedback from previous attacks, making them more effective at bypassing detection.
- Enhanced phishing email quality: AI improves the cosmetic quality of phishing emails, making them indistinguishable from legitimate correspondence. This makes it difficult to detect attacks based on traditional indicators like spelling, grammar, or language errors.
The ongoing cat-and-mouse game between defenders and attackers remains a constant challenge in cybersecurity. While AI-based defenses offer advancements, they often struggle to keep pace with the rapid evolution of threat tactics. This is especially true for email security, where AI-powered attackers can innovate at a much faster rate than defensive AI models can adapt.
Model-based SEGs, which use learning algorithms to identify patterns and block malicious emails, offer a more advanced approach to email security compared to traditional rules-based SEGs, which rely on predefined rules and signatures. Model-based SEGs can learn from new threats and adapt their defenses accordingly.
However, ML models carry an inherent limitation: they require a constant supply of supervised training data to identify newly seen threats, and no model can learn what it hasn't encountered. Meanwhile, threat actors, leveraging AI, can innovate at a much faster pace, leaving defensive AI SEGs perpetually playing catch-up.
The AI email security gap is exacerbated by the inherent limitations of AI training models. These models are playing catch-up, struggling to identify the latest threats because of their reliance on past categorized data and periodic retraining. The rapid pace of innovation by threat actors outstrips the ability of many AI models to detect new threats in real time. This disparity creates a significant challenge for organizations seeking to protect themselves against increasingly advanced email-borne threats.
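The supervised-learning blind spot described above can be illustrated with a toy sketch. This is not how any commercial SEG works internally; it is a minimal, hypothetical word-count classifier showing why a model trained only on previously categorized emails scores an AI-crafted lure with entirely novel wording as unremarkable.

```python
from collections import Counter

def train(samples):
    """samples: list of (text, label) pairs. Returns per-label word counts."""
    counts = {"phish": Counter(), "ham": Counter()}
    for text, label in samples:
        counts[label].update(text.lower().split())
    return counts

def score(counts, text):
    """Crude suspicion score: +1 per word seen more often in known phish,
    -1 per word seen more often in known legitimate mail."""
    s = 0
    for w in text.lower().split():
        s += (counts["phish"][w] > counts["ham"][w]) - (counts["ham"][w] > counts["phish"][w])
    return s

# Hypothetical labeled history the model was trained on
training = [
    ("verify your password now", "phish"),
    ("urgent account suspended click", "phish"),
    ("meeting notes attached", "ham"),
    ("lunch on friday", "ham"),
]
model = train(training)

# A lure reusing known phishing vocabulary scores as suspicious:
print(score(model, "urgent verify your account"))              # 4
# A novel lure whose wording the model has never seen scores neutral:
print(score(model, "quarterly compliance attestation required"))  # 0
```

Real detection models use far richer features, but the failure mode is the same in kind: the score depends entirely on vocabulary the model has already encountered in labeled data.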
The role of employees in email security
To effectively defend against AI-enhanced phishing campaigns, organizations must adopt a multifaceted approach that combines defensive AI with human intelligence. By training AI/ML models with human-reported attacks, organizations can equip their defensive AI with valuable insights from firsthand witnesses of threats.
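The feedback loop described above can be sketched in miniature. This is a hypothetical illustration, not a real product's pipeline: an employee-reported lure that originally evaded a simple vocabulary-based filter is folded back into the model, so similar wording is flagged afterward.

```python
from collections import Counter

def retrain(phish_counts, reported):
    """Fold employee-reported phishing emails back into the model's vocabulary."""
    for text in reported:
        phish_counts.update(text.lower().split())
    return phish_counts

def is_flagged(phish_counts, text, threshold=2):
    """Flag an email when enough of its words match known-phishing vocabulary."""
    hits = sum(1 for w in text.lower().split() if phish_counts[w] > 0)
    return hits >= threshold

# Vocabulary learned from past labeled phishing (hypothetical)
phish_vocab = Counter("verify your password now urgent account suspended".split())

novel_lure = "complete the attached payroll attestation today"
print(is_flagged(phish_vocab, novel_lure))   # False: novel wording evades the filter

# An employee reports the lure; retraining absorbs its vocabulary,
# so a follow-on variant with overlapping wording is now caught:
retrain(phish_vocab, [novel_lure])
print(is_flagged(phish_vocab, "payroll attestation required today"))  # True
```

The design point is the loop itself: each human report becomes labeled training data, shrinking the window in which a new lure family goes undetected.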
Harnessing the power of employees through security awareness training (SAT) promises to close the gaps left by model-based SEG approaches. Organizations need to shift their perspective and see employees as valuable assets, not liabilities. By investing in SAT, companies empower their teams to become active participants in the organization’s cybersecurity defense. Effective SAT goes beyond basic awareness and should specifically train employees to identify and report on emerging threats, the kind that even AI-based defensive tools miss.
This makes employees a vital first line of defense against sophisticated phishing attacks. By bridging this AI gap with a strong human intelligence layer, organizations can significantly strengthen their overall cybersecurity posture against AI-enhanced threats.
The benefits of employee-driven phishing intelligence extend beyond a single organization. Crowd-sourcing employee-reported phishing threats creates a scalable and diverse set of intelligence that fosters a defensive network effect. The more organizations that share threat intelligence, the better all of our defenses become.
Josh Bartolomie, vice president, global threat services, Cofense
SC Media Perspectives columns are written by a trusted community of SC Media cybersecurity subject matter experts. Each contribution has a goal of bringing a unique voice to important cybersecurity topics. Content strives to be of the highest quality, objective and non-commercial.