
AI-fueled phishing, shadow AI, jailbreaks kept security pros busy in 2024


Two years after the launch of OpenAI’s ChatGPT kicked off the generative AI boom, it is clear that AI is here to stay, and the technology continued to have a major impact on cybersecurity in 2024.

From deepfake fraud and the risks of shadow AI, to new AI safety laws and the potential of AI-powered vulnerability research, here’s a look back at some of the hottest AI cybersecurity topics of the past year.

Deepfake attacks and AI-powered phishing on the rise

The most prevalent AI security threat so far has not been AI-generated malware or rogue AI agents. Instead, AI-powered fraud through the use of AI-generated phishing lures and deepfakes has had the greatest real-world impact.

Video deepfakes that use AI to create lifelike imitations of individuals’ faces have been leveraged to manipulate not only unsuspecting human victims but also biometric identity verification systems, making them an attractive and dangerous tool for fraudsters.

In one jaw-dropping case at the start of the year, a finance worker in Hong Kong transferred the equivalent of about $25.6 million to scammers after being fooled by a video conference that included multiple deepfakes of their company’s chief financial officer and other colleagues. The case demonstrates not only the potential financial impact of such attacks, but also the increasingly sophisticated techniques attackers are using to make their deepfake schemes more convincing.

Meanwhile, “face swap” attacks on biometrics-based remote identity verification systems, which increased a whopping 704% in 2023 according to iProov, continue to be leveraged in fraud campaigns primarily targeting bank accounts.

Group-IB researchers reported on one such campaign in February, which used an iOS trojan called GoldPickaxe.iOS to collect facial recognition data. This facial data would then be used to generate deepfakes to bypass facial recognition systems meant to protect users’ bank accounts.

Overall, Gartner predicts that the rise in deepfake attacks will cause 30% of companies to lose confidence in facial biometric authentication methods by 2026.

Video deepfakes are not the only AI replicas that pose a threat to security in 2024 — voice deepfakes are also a popular tool, with one attack even targeting password management company LastPass in April of this year. Fortunately, the phishing attempt, which imitated LastPass CEO Karim Toubba’s voice, was recognized by the targeted employee and quickly thwarted.

AI-powered email phishing is also going strong in 2024, with email security firm VIPRE Security Group estimating that AI-generated emails now make up about 40% of business email compromise (BEC) lures.

Threat actors are increasingly recognizing the potential benefits of leveraging AI in their phishing campaigns, with an estimated 75% of phishing kits offered for sale on the dark web advertising some AI capability, according to a report published by Egress in October. Deepfake creation features were also advertised for 82% of phishing kits, Egress found.

‘Shadow AI’ and data disclosure a major concern for workplace AI use

Another AI risk facing organizations across industries is “shadow AI”: the use of unapproved or unmonitored AI applications by employees to perform work tasks. Unsanctioned use of AI tools, particularly large language model (LLM) chatbots, is a growing concern because, as several studies have shown, employees tend to upload sensitive information to these applications.

For example, Cyberhaven reported in May that 27.4% of the data employees submitted to LLM chatbots was sensitive, a 156% increase in that rate from the previous year. This sensitive data mostly consisted of customer support information, source code, and research and development data, according to Cyberhaven’s report. The report also showed that the overall volume of data, sensitive or otherwise, sent to AI tools increased by 485% between March 2023 and March 2024, demonstrating the growing popularity of these tools in the workplace.

Another report, published in September by the National Cybersecurity Alliance (NCA) and CybSafe, found that 27% of surveyed employees said they use AI tools at work, and more than a third (38%) of those employees admitted to sending sensitive work-related data to those tools. Despite the risk, less than half of respondents said they had received any training on AI use at work.

With ChatGPT adding its file upload feature earlier this year, and business-focused AI copilots becoming more commonplace, organizations are racing to implement effective AI security policies and training programs to prevent sensitive data disclosure and other insecure uses of AI, while still enabling employees to leverage the productivity boosts afforded by these tools.
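
For illustration, the sketch below shows the kind of pre-submission check such a policy might enforce in practice: a simple gateway that scans outgoing prompts for obviously sensitive patterns before they ever reach an external chatbot. The patterns and the send_to_llm call are hypothetical placeholders rather than any vendor’s actual implementation; a production deployment would rely on a dedicated DLP engine.

```python
import re

# Illustrative patterns only: a real deployment would use a dedicated DLP
# engine and rules tuned to the organization's own data (API keys, customer
# record IDs, internal project names, and so on).
SENSITIVE_PATTERNS = {
    "email address": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "payment card number": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "private key header": re.compile(r"-----BEGIN (?:RSA |EC )?PRIVATE KEY-----"),
}

def check_prompt(prompt: str) -> list:
    """Return the names of any sensitive-data patterns found in the prompt."""
    return [name for name, pattern in SENSITIVE_PATTERNS.items()
            if pattern.search(prompt)]

def submit_prompt(prompt: str) -> str:
    findings = check_prompt(prompt)
    if findings:
        # Block (or redact, warn and log) before anything leaves the network.
        raise ValueError("Prompt blocked: possible " + ", ".join(findings))
    return send_to_llm(prompt)  # hypothetical call to an approved LLM gateway
```

Even a coarse filter like this, wired into an approved LLM gateway, gives security teams a place to log, redact or block risky submissions rather than banning the tools outright.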

However, some organizations may choose to ban specific AI tools outright due to the risk. The U.S. House of Representatives, for example, decided on March 29 to forbid its staffers from using Microsoft Copilot, citing the risk of leaking House data to non-House-approved cloud services.

LLMs targeted with jailbreaks, used by known threat groups

LLM jailbreaks — methods that attempt to bypass a model’s guardrails and filters to generate harmful content or reveal sensitive information — are nothing new, but continued to evolve in 2024, with several new techniques being discovered or developed as proofs-of-concept by security researchers.

Multi-step jailbreak methods were popular this year, exploiting LLMs’ limited “attention span” to manipulate them across multiple interactions rather than through a single prompt injection. For example, Palo Alto Networks Unit 42 developed a multi-step jailbreak method called “Deceptive Delight,” revealed in October, that achieved a 65% success rate in just three interactions. The method works by convincing the model to draw connections between harmful and benign topics and then elaborate on the harmful topic in the final step.

Pillar Security’s State of Attacks on GenAI report, published in October, painted an alarming picture of the effectiveness of jailbreak attacks on LLMs: on average, a successful jailbreak took fewer than five interactions and less than a minute. When successful, these attacks leaked sensitive data 90% of the time, according to the Pillar report.

Meanwhile, LLM jailbreaks, especially targeting ChatGPT, proliferated on hacker forums, with entire forum sections often dedicated to misusing AI, Abnormal Security CISO Mike Britton told SC Media in April. And while ChatGPT is seemingly the most popular intended target of such jailbreaks, direct and indirect prompt injection vulnerabilities in several other LLMs including Copilot and Gemini for Workspace have been uncovered throughout the year.

The confirmed use of LLM tools by known advanced persistent threat (APT) groups was another significant development this year, with Microsoft and OpenAI first revealing the extent of ChatGPT use by state-sponsored threat groups in February 2024. These threat actors from Russia, North Korea, Iran and China used the service for a range of tasks, including scripting help, vulnerability research, target reconnaissance, language translation and social-engineering content generation.

OpenAI quickly shut down the offending accounts upon discovery, and although the threat actors’ use of the LLM was mostly exploratory, the activity prompted Microsoft to propose nine new LLM-specific tactics, techniques, and procedures (TTPs) for inclusion in the MITRE ATT&CK framework. OpenAI continued to disrupt threat actors’ use of its platform throughout the year, exposing influence campaigns and malware developers attempting to take advantage of ChatGPT’s capabilities.

EU approves landmark AI Act, California tries its hand at AI regulation

Government regulators around the world faced the challenge of keeping pace with the rapid development and adoption of new AI technologies, aiming to establish policies that ensure the technology is used safely and securely. The European Union (EU) led the way in AI governance this year with the approval of its landmark AI Act in March. This legislation classifies AI systems by risk level, banning the most dangerous uses and imposing risk-based regulatory requirements on the rest.

The United States has yet to establish major AI regulation at the national level, although it did lead the first global resolution on AI, which was unanimously adopted by the United Nations General Assembly in March. The U.S. Cybersecurity and Infrastructure Security Agency (CISA) released AI safety and security guidelines for critical infrastructure in April, building on the National Institute of Standards and Technology (NIST) AI Risk Management Framework and President Joe Biden’s 2023 executive order on AI. However, legally binding federal AI safety regulation has yet to materialize.

Some U.S. states drafted and passed their own AI safety regulations over the past year, with California’s efforts earning significant attention due to the controversial California Senate Bill 1047, which was passed by state legislators but ultimately vetoed by Gov. Gavin Newsom. Many argued the bill would place an undue burden on AI developers, especially startups and open-source developers, and stifle innovation by creating requirements aimed at preventing hypothetical mass casualty events and catastrophic cyberattacks rather than addressing evidence-based risks.

Despite the failure of SB 1047, California passed 17 other AI privacy and safety regulations, including measures related to deepfakes, AI watermarking and AI-generated misinformation, showing that state-level AI laws have gained traction in 2024.

Advancements in AI for cyber defenders, researchers

In addition to evolving AI risks, threats and regulations, AI use by cybersecurity professionals and researchers also progressed in 2024, with new technologies and solutions emerging to detect and analyze threats, streamline and automate tasks, and uncover vulnerabilities.

The pattern recognition capabilities of AI and machine-learning systems can speed up threat detection and reduce false positives, while LLM-based systems can help analysts understand potential malware samples, as Google Threat Intelligence Strategist Vicente Diaz described in a presentation at the RSA Conference 2024 in May.

VirusTotal Code Insight, which uses an LLM to produce natural language summaries of how code samples function, can yield insights beyond a simple “malicious” or “not malicious” and catch otherwise-overlooked CVE exploits, Diaz explained. This is just one example of the AI benefits explored at RSAC 2024, with conference keynotes also addressing the technology’s pros and cons in relation to critical infrastructure and national defense.
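
For readers who want to pull that kind of report programmatically, the hedged sketch below fetches a file object from VirusTotal’s documented v3 files API using Python’s requests library. The exact attribute under which the Code Insight summary appears is an assumption here (labeled in the comments), so treat it as a placeholder and confirm against the current VirusTotal API documentation.

```python
import os
import requests

VT_FILE_REPORT = "https://www.virustotal.com/api/v3/files/{}"

def fetch_file_report(sha256: str) -> dict:
    """Fetch a VirusTotal file report (requires an API key in VT_API_KEY)."""
    resp = requests.get(
        VT_FILE_REPORT.format(sha256),
        headers={"x-apikey": os.environ["VT_API_KEY"]},
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()["data"]["attributes"]

if __name__ == "__main__":
    attrs = fetch_file_report("<sha256 of a sample>")
    # Classic engine verdict counts are a documented part of the file object.
    print(attrs["last_analysis_stats"])
    # Assumption: the Code Insight summary surfaces under an AI-results
    # attribute such as "crowdsourced_ai_results"; check the current
    # VirusTotal API documentation for the exact field name.
    print(attrs.get("crowdsourced_ai_results", "no Code Insight summary returned"))
```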

Headway was also made in AI-driven vulnerability research: Google announced in November that its LLM agent “Big Sleep” discovered a previously unknown vulnerability in a development version of the popular open-source database engine SQLite. Big Sleep builds on an earlier project called Naptime, a framework that gives LLMs the tools to autonomously perform basic vulnerability research in a human-like workflow.
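
Neither Big Sleep nor Naptime is publicly available, but the general pattern they describe, an LLM driving a small set of analysis tools in a loop, can be sketched in a few lines. The code below is a hypothetical illustration of that loop only; every name in it, including the ask_model callback, is invented for the example and does not come from Google’s projects.

```python
# A deliberately tiny, hypothetical sketch of the pattern described above: a
# model is handed a small set of tools and iterates, tool call by tool call,
# until it reports a finding or runs out of budget.
from dataclasses import dataclass
from typing import Callable

@dataclass
class Tool:
    name: str
    description: str
    run: Callable[[str], str]

def read_source(path: str) -> str:
    """Return a truncated view of a source file from the target project."""
    with open(path, encoding="utf-8", errors="replace") as f:
        return f.read()[:4000]  # keep it small enough for a model context window

def run_test_case(test_input: str) -> str:
    """Stub: a real harness would execute the target under a sanitizer."""
    return "no crash"

TOOLS = [
    Tool("read_source", "Read a source file from the target project", read_source),
    Tool("run_test_case", "Run the target on an input and report any crash", run_test_case),
]

def research_loop(ask_model: Callable[[str], dict], task: str, max_steps: int = 10) -> str:
    """Drive the model/tool loop: the model picks a tool, we run it, repeat.

    ask_model is a placeholder for whatever LLM API is in use; it is expected
    to return either {"tool": name, "arg": value} or {"report": text}.
    """
    transcript = f"Task: {task}\nTools: {[t.name for t in TOOLS]}"
    for _ in range(max_steps):
        decision = ask_model(transcript)
        if "report" in decision:
            return decision["report"]  # e.g., a suspected memory-safety bug
        tool = next(t for t in TOOLS if t.name == decision["tool"])
        result = tool.run(decision["arg"])
        transcript += f"\n{tool.name}({decision['arg']!r}) -> {result}"
    return "No finding within the step budget."
```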

Google also revealed in November that the AI-enhanced version of its OSS-Fuzz fuzzing tool has discovered 26 new vulnerabilities in open-source projects since the company first added LLM capabilities to the tool in late 2023. Google plans to eventually improve the AI-enhanced OSS-Fuzz to require less human review and to automatically report the flaws it discovers to project maintainers. Advancements such as these could lead to more overlooked threats and vulnerabilities being detected, and they may boost the productivity of human analysts by removing time-consuming manual processes from their workload.
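
For context, OSS-Fuzz works by repeatedly running small harnesses, or fuzz targets, against a project’s entry points, and the LLM enhancement is aimed at writing such harnesses automatically for code that lacks coverage. The example below is a minimal hand-written Python harness using the real atheris engine, which OSS-Fuzz supports for Python projects; the json target is purely illustrative.

```python
# A minimal example of the kind of fuzz harness OSS-Fuzz runs, written with
# atheris, the Python fuzzing engine OSS-Fuzz supports for Python projects
# (many projects instead use C/C++ targets with libFuzzer). The json module
# stands in for the library under test.
import sys
import atheris

with atheris.instrument_imports():
    import json  # stand-in for the library under test

def TestOneInput(data: bytes) -> None:
    fdp = atheris.FuzzedDataProvider(data)
    text = fdp.ConsumeUnicodeNoSurrogates(4096)
    try:
        json.loads(text)
    except (ValueError, RecursionError):
        pass  # expected parse errors; any other exception or crash is a finding

if __name__ == "__main__":
    atheris.Setup(sys.argv, TestOneInput)
    atheris.Fuzz()
```

Harnesses like this are tedious to write by hand at scale, which is exactly the gap Google says its LLM-generated fuzz targets are meant to fill.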
