Artificial intelligence will super-charge familiar 2024 threats in 2025, putting new wrinkles on old security challenges such as phishing, insider threats and ransomware.
Meanwhile, artificial intelligence (AI) itself will increasingly be thrust into hackers' crosshairs, as AI models come under constant and innovative new threats such as malicious prompt injection and large language model (LLM) data tampering by adversaries.
Red-flag warnings also include a growing number of cyberattacks with real-world implications, as hacks continue to impact not just the virtual world but the physical one. Experts told SC Media that 2025 will also be shaped by the rise of quantum-computing attack techniques by adversaries aimed at exploiting existing and emerging encryption technology.
What follows is SC Media's annual roundup of security expert forecasts and predictions for the year ahead. If and when any of these forecasts become reality, SC Media will be here to help sort out the who, what and why, including why it matters and how best to mitigate those attacks.
The promises and dangers of artificial intelligence in cybersecurity
Generative AI will upend traditional security methods — and vastly increase the number of zero-days to the detriment of many, says Sanjeev Verma, co-founder of PreVeil:
GenAI accelerates general understanding of people, processes, and technologies — and that will spur elaborate attacks including sophisticated phishing emails, deepfakes, vishing, and more. Not only that, but GenAI has robust search-and-analysis capabilities that can and will be used to surface unknown zero-days and CVEs that haven't been patched.
Threat actors will exploit AI by manipulating private data, says Daniel Rapp, Proofpoint chief AI and data officer:
We are witnessing a fascinating convergence in the AI realm, as models become increasingly capable and semi-autonomous AI agents integrate into automated workflows. This evolution opens intriguing possibilities for threat actors to serve their own interests, specifically in terms of how they might manipulate private data used by large language models (LLMs). As AI agents depend increasingly on private data in emails, software-as-a-service (SaaS) document repositories, and similar sources for context, securing these threat vectors will become even more critical.
In 2025, we will start to see initial attempts by threat actors to manipulate private data sources. For example, we may see threat actors purposely trick AI by contaminating private data used by LLMs — such as deliberately manipulating emails or documents with false or misleading information — to confuse AI or make it do something harmful. This development will require heightened vigilance and advanced security measures to ensure that AI isn’t fooled by bad information.
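The data-poisoning scenario Rapp describes is why many teams now screen documents before they ever reach an LLM's context window. As a minimal sketch (the patterns and function names below are illustrative assumptions, not a vetted product or Proofpoint's method), a retrieval pipeline might quarantine documents containing injection-style instructions:

```python
import re

# Hypothetical heuristic screen for documents headed into an LLM's context
# window. The pattern list is illustrative only; real deployments layer
# classifiers and provenance checks on top of simple rules like these.
INJECTION_PATTERNS = [
    r"ignore (all )?(previous|prior) instructions",
    r"disregard the system prompt",
    r"you are now .{0,40}(unrestricted|jailbroken)",
]

def screen_document(text: str) -> dict:
    """Return which injection-style patterns a document matches."""
    lowered = text.lower()
    hits = [p for p in INJECTION_PATTERNS if re.search(p, lowered)]
    return {"suspicious": bool(hits), "matched": hits}

def build_context(docs: list[str]) -> list[str]:
    """Keep only documents that pass the screen; quarantine the rest."""
    return [d for d in docs if not screen_document(d)["suspicious"]]
```

A simple regex screen will miss subtle poisoning (false or misleading facts contain no trigger phrases), which is why the prediction above emphasizes vigilance rather than any single control.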
Protect against AI-assisted threats; plan for AI-powered threats, says Troy Bettencourt, global partner and head of IBM X-Force:
There is a distinction between AI-powered and AI-assisted threats, including how organizations should think about their proactive security posture. AI-powered attacks, like deepfake video scams, have been limited to-date; rather, today’s threats remain primarily AI-assisted – meaning AI can help threat actors create variants of existing malware or a better phishing email lure. To address current AI-assisted threats, organizations should prioritize implementing end-to-end security for their own AI solutions, including protecting user interfaces, APIs, language models and machine learning operations, while remaining mindful of strategies to defend against future AI-powered attacks.
Cybersecurity will see a 'trust and verify' approach to coding with AI, says Andrea Malagodi, Sonar CIO:
We should embrace AI innovation to benefit the future trajectory of software development. AI-generated code and testing tools can amplify developers' productivity, enabling them to focus more on projects that align with broader business goals. However, AI is a complement to, not a replacement for, developers' skills, and business leaders must recognize this important distinction. Conceiving, designing, and architecting a system or a feature is not merely a coding detail; it is a craft and should not be ignored.
Humans must remain integral to the testing and verification process, whether the code is AI-generated or written by developers. The demand and rising use of AI in the coding process means developers are writing more code, all of which must be tested for security and quality. At a minimum, all code should undergo rigorous testing, with multiple control checks established by developers to trust and verify code at each stage of development.
While AI will continue to boost developer productivity in the coming years, if underlying issues in the code development process aren't addressed, more AI-generated code will only lead to more code to fix. Software teams need to utilize trusted, automated code testing tools and apply a human lens and critical thinking to ensure the delivery of high-quality code they can be confident in.
Malicious use of multimodal AI will create entire attack chains, says Corey Nachreiner, CISO at WatchGuard:
By 2025, malicious use of multimodal AI will be used to craft an entire attack chain. As multimodal AI systems gain the ability to integrate text, images, voice, and sophisticated coding, threat actors will leverage them to streamline and automate the entire pipeline of a cyberattack. This includes profiling targets on social media, crafting and delivering realistic phishing content, including voice phishing (vishing), sometimes finding zero-day exploits, generating malware that can bypass endpoint detection and deploying the infrastructure to support it, automating lateral movements within compromised networks, and exfiltrating stolen data. This hands-off, entirely seamless approach will democratize cyber threats even more radically than malware-as-a-service offerings have in recent years, enabling less skilled threat actors to launch advanced attacks with minimal human intervention. Therefore, organizations and security teams, regardless of size, will face an increase in highly tailored cyber threats that will be difficult to detect and combat.
Bad actors will develop synthetic online personalities for financial gain, says Tyler Swinehart, Ironscales director of global IT and security:
In 2025, I imagine we'll see a significant uptick in the presence of fabricated experts and audiences for sale. The phenomenon is already taking place, albeit at smaller, more hand-tailored scales. However, with the emergence of generative AI, deepfakes, and other forms of synthetic content, people will be able to create rather believable internet personalities with significant online presences, which will be able to gain sizable audiences by doing things like creating tutorials, producing articles, writing reviews, blogging, and even creating podcasts and video series. I also expect there will be a widespread effort to automate these personalities and content in order to establish substantial online circles that can then be offered up for sale. Alternatively, they can be used to promote, sell, or criticize whatever the highest bidder chooses. I think in most cases we’ll see people initiate the process with some real content early on, in order to plant the seeds of trust and establish some credibility before they’re used for more nefarious purposes. This will allow them to propagate messaging and influence more effectively, circumventing the screening technologies we have in place today.
System prompt vulnerabilities are the Achilles' heel of LLM security, says Elad Schulman, Skyhigh Security CEO:
A new addition to OWASP’s latest list for LLM security, system prompts often act as both behavior guides and inadvertent repositories for sensitive information. When these prompts leak, the risks extend far beyond the disclosure of their content, exposing underlying system weaknesses and improper security architectures. System prompts are essential for steering LLM behavior, defining how an application responds, filtering content, and implementing rules. But when they include sensitive data (API keys, internal user roles, or operational limits), they create a hidden liability. Worse, even without explicit disclosure, attackers can reverse-engineer prompts by observing model behavior and responses during interactions. Companies should adopt best practices to avoid potential sophisticated exploits via system prompts such as separating sensitive data, red teaming LLMs and implementing layered guardrails.
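One of the best practices Schulman mentions — keeping sensitive data out of system prompts — can be enforced mechanically before deployment. Below is a minimal sketch of such a pre-deployment audit; the pattern set and function name are assumptions for illustration, not an exhaustive secret scanner:

```python
import re

# Illustrative check that a system prompt does not embed secrets such as
# API keys or credentials. Real secret scanners use far larger rule sets
# plus entropy analysis; these three patterns are examples only.
SECRET_PATTERNS = {
    "api_key": re.compile(r"(?i)(api[_-]?key|secret|token)\s*[:=]\s*\S+"),
    "aws_key": re.compile(r"AKIA[0-9A-Z]{16}"),
    "password": re.compile(r"(?i)password\s*[:=]\s*\S+"),
}

def audit_system_prompt(prompt: str) -> list[str]:
    """Return the names of secret patterns found in a system prompt."""
    return [name for name, rx in SECRET_PATTERNS.items() if rx.search(prompt)]
```

Note that a clean audit does not address the other risk in the quote: attackers inferring prompt contents from model behavior, which requires layered guardrails and red teaming rather than static scanning.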
In 2025, artificial intelligence will fundamentally reshape the cybersecurity landscape, creating a double-edged sword of unprecedented complexity, says Simon Tiku, OpenText Cybersecurity VP of engineering and product management:
Threat actors will leverage AI to accelerate vulnerability discovery, craft hyper-personalized phishing attacks, and develop sophisticated evasion techniques for malware. Simultaneously, cybersecurity defenders will employ AI-driven threat detection systems that can analyze massive datasets, identify anomalies in real-time, and provide predictive threat intelligence, creating an escalating technological arms race between attackers and defenders.
Automation will be your first responder, says Darren Wolner, GTT VP of product management - managed and professional services:
As cyber threats multiply and the network attack surface continues to expand due to growing reliance on hybrid workforces, IoT, cloud services and more, data-driven and AI-infused automation will serve as the primary frontline defense. Such systems will act instantly and autonomously, analyzing data patterns to combat threats without requiring human intervention. They will also allow networks to be more adaptable as automated defenses learn from every new incident to evolve in real time. This will allow organizations to be faster, smarter, and more precise in their responses to emerging threats.
AI-powered ransomware attacks will rise, says Art Ukshini, Permiso associate threat researcher:
In 2025, threat actors will benefit from AI's continuous enhancement, using it to craft highly successful ransomware campaigns. New models capable of analyzing massive amounts of public and stolen data can and will be used to create "tailor-made" ransomware matched to the "customer's" situation and to request the perfect ransom amount. AI-driven ransomware will automate attack steps and even dynamic decision-making during the attack, identifying the most critical systems to target and adjusting encryption speed or scope in real time, optimizing the attack for maximum success rate.
Threat actors turn to AI-driven cloud threats, says Marina Segal, Tamnoon CEO:
Threat actors already use automated scanners to identify critical misconfigurations across cloud environments, from overly permissive SNS policies in AWS to unencrypted Glue catalog data. We've also seen a rise in supply chain attacks, where AI maps interdependencies between cloud services and SaaS applications. Don't expect this trend to slow down anytime soon, either.
Cloud security teams already struggle with an endless onslaught of alerts and false positives. As AI matures, we expect to see zero-day exploits become even more common. This will put even more pressure on overworked security teams — as they look to leverage AI for threat detection and automated remediation.
What’s clear: a vicious battle will be fought in the cloud between threat actors and cloud security professionals, and AI will play a key role in enabling both sides. This also means vendors and analysts will jump at the opportunity to corner this market.
Agentic AI will continue to redefine security strategies, says Kurtis Shelton, NetSPI principal AI researcher and AI/ML penetration testing (AML) service lead:
In the coming year, agentic AI is poised to significantly transform security strategies by enhancing both proactive and reactive measures. Autonomous agents will likely be used to monitor networks for threats, identify vulnerabilities before exploitation, and respond to incidents in real-time with minimal human intervention. They may dynamically adjust security rules based on evolving threat patterns or autonomously quarantine compromised systems, greatly reducing response times.
However, the rise of these autonomous agents will also introduce new risks, as they themselves can become targets for attacks. If compromised, they could inflict considerable damage to an organization due to their limited oversight. Future security strategies will need to focus on robust defenses against adversarial AI, emphasizing the importance of explainability, continuous monitoring of decision-making processes, and adherence to strong security principles to ensure that these systems remain secure and trustworthy in a rapidly evolving threat landscape.
Zero trust architecture will increasingly be adopted, says Thyaga Vasudevan, Skyhigh Security EVP of product:
In 2025, we will no doubt see increased adoption of zero trust architecture by organizations across many different sectors, as the chain reaction following the implementation of zero trust at the FBI and the U.S. Air Force, among other federal departments, reverberates throughout the nation and globally. With this adoption will come a seismic shift in the culture of these organizations. Because zero trust requires close collaboration between IT, security teams, and business units, security will finally become a priority for all employees and will be integrated into every aspect of the business. We can expect to see improved security posture, enhanced user and device management, and overall more secure data in organizations thanks to zero trust architecture.
AI will also have a profound impact on zero trust adoption. AI will enhance the zero trust architecture by providing intelligent automation, adaptive security, and real-time risk analysis. Additionally, zero trust frameworks will secure AI systems themselves, ensuring that AI applications and data are protected against emerging threats. Together, they will create a more resilient, scalable, and proactive approach to cybersecurity.
Shifting to AI-driven remediation for stronger cloud resilience, says Gilad Elyashar, Aqua Security chief product officer:
In the cloud-native space, we anticipate a shift from prioritizing vulnerability detection to focusing on streamlined remediation, driven by faster, automated responses to security issues. With rising threat volumes, organizations will increasingly rely on AI-guided remediation, automated workflows, and contextual analysis to expedite fixes and reduce manual workload. Advanced tools will assign responsibility, provide targeted guidance, and adapt in real time, enhancing both accuracy and speed. This transition will strengthen cloud resilience, as organizations move from merely identifying risks to actively and efficiently closing vulnerabilities across their dynamic infrastructures.
AI models themselves are the next focus of AI-centered attacks, says Brad Jones, Snowflake CISO and VP of information security:
Last year, there was a lot of talk about cybersecurity attacks at the container layer — the less-secured developer playgrounds. Now, attackers are moving up a layer to the machine learning infrastructure. I predict that we’ll start seeing patterns like attackers injecting themselves into different parts of the pipeline so that AI models provide incorrect answers, or even worse, reveal the information and data from which it was trained. There are real concerns in cybersecurity around threat actors poisoning large language models with vulnerabilities that can later be exploited.
Although AI will bring new attack vectors and defensive techniques, the cybersecurity field will rise to the occasion, as it always does. Organizations must establish a rigorous, formal approach to how advanced AI is operationalized. The tech may be new, but the basic concerns — data loss, reputational risk and legal liability — are well understood and the risks will be addressed.
AI agents will become targets of compromise leading to data breaches, says Shimon Modi, Dataminr VP of product management:
As generative AI tools become more commonplace and advanced, we will see a new vector for data breaches emerge. Malicious actors will create innovative prompt engineering techniques to target AI agents empowered to take actions on behalf of end users. The aim of these attacks will be to trick the agents into disclosing information or taking an action, like a password reset, that will enable the attacker to compromise the network or achieve other objectives.
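A common mitigation for the agent-abuse scenario Modi describes is an action gate between the LLM agent and real systems, so that sensitive actions such as a password reset can never be triggered by a prompt alone. The sketch below is a hypothetical illustration (the action names and approval rule are assumptions, not any vendor's design):

```python
from dataclasses import dataclass

# Minimal sketch of a policy gate between an LLM agent and the systems it
# can act on. Safe actions pass; sensitive ones require explicit human
# sign-off; anything unrecognized is denied by default.
SAFE_ACTIONS = {"search_docs", "summarize_thread"}
SENSITIVE_ACTIONS = {"password_reset", "send_email", "delete_record"}

@dataclass
class Decision:
    allowed: bool
    needs_human: bool
    reason: str

def gate(action: str, human_approved: bool = False) -> Decision:
    """Decide whether an agent-requested action may proceed."""
    if action in SAFE_ACTIONS:
        return Decision(True, False, "on safe allowlist")
    if action in SENSITIVE_ACTIONS:
        if human_approved:
            return Decision(True, True, "sensitive, human approved")
        return Decision(False, True, "sensitive, awaiting human approval")
    return Decision(False, False, "unknown action denied")
```

The design choice here is deny-by-default: even a perfectly crafted injection can only request actions the policy layer is willing to execute.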
Big Tech bets on GenAI — will it be worth the reward, asks Ratan Tipirneni, Tigera president and CEO:
Recent earnings reports from major players like Meta, Google, Amazon, and Microsoft revealed a spike in quarterly capital expenses — capital being invested in land, data centers, networking, and GPUs. The payback on that capital is not clear, but the reports indicate that the payback period could be as long as 15 years.
This is a staggering amount of capital and an extraordinarily risky bet. What’s more, this investment is not coming from the venture capital community; it’s a balance sheet item for these companies, and the cash is coming from their reserves.
Why is Big Tech making such risky investments? Simple: because they cannot afford not to. If they don't make the investment, they will be shut out of the race. We are witnessing a market transition: If you look at the last 30 to 40 years in the tech industry, we have never seen capital investments at this scale. GenAI is going to become the next platform and to play in that, companies must make these kinds of capital investments or risk becoming irrelevant.
The AI revolution is bringing about better versions of existing solutions, says Seth Spergel, Merlin Ventures managing partner:
In many instances, the AI-fueled technology we’re seeing is not coming up with entirely novel ideas, but instead addressing the limitations of previous solutions that were on the right path but were limited by the available technology. The software industry is plagued by solutions that sound good on paper, but have turned out to be too difficult to get configured in a complex environment or deploy at the scale large enterprises need. That usability gap is so often the difference between a wildly successful technology and a failure. Just as the iPhone moved smart phones into the mainstream, we are seeing companies leverage AI to make products that “just work.” These products are not tackling completely new problems, but are instead building solutions that work with minimal human intervention and can self-configure based on the environment they are operating in. Leveraging AI around usability will help enterprises reduce the amount of “shelfware,” allowing for far faster growth for the startups that get this right than their harder-to-deploy predecessors were ever capable of.
AI will not be a significant threat to business critical applications in 2025, says Paul Laudanski, Onapsis director of security research:
I’m over the machine learning (ML) and artificial intelligence (AI) hype — it was overblown in 2024. While there are real concerns, such as mental health and misuse cases like deepfakes, AI will not impact business-critical applications. When it comes to fraudulent activities or ill-intentioned use of AI, as long as companies are able to rapidly implement patches, there isn’t an increased risk to SAP security due to AI advancements.
AI has not been a significant factor in adversaries’ operations this year, even among very focused actors who know what they’re after, like nation states. If it had been, we’d already be seeing concrete results. Even for opportunistic attackers, like script kiddies, there’s nothing out there for them to package up and detonate on someone’s environment. Take the recent CISA report on the top routinely exploited vulnerabilities, for example. In 2022, SAP and Oracle were prominent on that list, but have since decreased. Although the threats are still active, this reduction reflects progress in addressing known risks, not increased activity because of AI.
What is most concerning are the SAP installations with vulnerabilities that remain unpatched, in parallel with not prioritizing the security of business-critical applications. Attackers who are interested in these apps will continue to get into your environment in other ways, not by utilizing AI – and it’s unlikely we’ll see that change in 2025.
Accelerated automation to outpace security threats, says James Fisher, SecureCyber director of security operations:
With AI tools enabling expedited attack timelines, automated security solutions are essential. Emerging automations within the security stack will allow teams to respond efficiently to streamlined attacks. AI will drive the implementation of creative responses to new threats, offering enhanced ways to safeguard against evolving risks. As teams update their security tools with new features and functionality, they’ll be able to automate these capabilities to increase resilience.
SaaS applications will continue to face increasingly sophisticated threats as adversaries exploit advancements in technology — especially AI, says Justin Blackburn, AppOmni senior cloud threat detection engineer:
AI will enable threat actors to more easily uncover SaaS vulnerabilities and misconfigurations, bypass traditional security measures, and craft more convincing phishing campaigns. As AI becomes more capable and accessible, the barrier to entry for less skilled attackers will become lower, while also accelerating the speed at which attacks can be carried out. Additionally, the emergence of AI-powered bots will enable threat actors to execute large-scale attacks with minimal effort. Armed with these AI-powered tools, even less capable adversaries may be able to gain unauthorized access to sensitive data and disrupt services on a scale previously only seen by more sophisticated, well-funded attackers.
How politics and world events affect the threat environment
Cyber mercenaries and proxy actors are the hidden hands of cyberwarfare, says Nadir Izrael, Armis CTO:
A new breed of actors is emerging on the cyber battlefield: cyber mercenaries and proxy groups. These private contractors operate in the shadows, conducting operations on behalf of nation-states, often with plausible deniability. The rise of these actors complicates attribution, making it harder to identify the true culprits behind a cyberattack and escalating international tensions. In 2025, we will see increased involvement of these proxy actors, particularly in regions of political conflict, where nation-states seek to wage cyber campaigns without direct accountability. This will lead to heightened uncertainty and confusion, as attacks can no longer be easily attributed to state actors, further muddying the waters of cyberwarfare.
Cyber espionage and the race for emerging technologies: Intellectual property theft and cyber espionage are likely to intensify as nation-states seek to gain competitive advantages in emerging technologies, including AI, biotechnology, and quantum computing. The strategic importance of these technologies cannot be overstated, as they are central to the future of economic and military power. In 2025, we expect to see more targeted attacks on research institutions, tech companies, and critical infrastructure linked to these innovations.
Geopolitics will shape cyber espionage and the rise of regional cyber powers, says Joshua Miller, Proofpoint staff threat researcher:
2024 has demonstrated that state-aligned cyber espionage operations are deeply intertwined with geopolitical dynamics. In 2025, APT operations will continue mirroring global and regional conflicts. The cyber espionage campaigns preceding these conflicts will not be limited to large nations historically seen as mature cyber actors but will proliferate to a variety of actors focused on regional conflicts seeking the asymmetric advantage cyber provides.
Additionally, state-aligned adversaries will use cyber operations to support other national goals, like spreading propaganda or generating income. Targeted threat actors will likely leverage the continued balkanization of the internet to attempt to deliver their malicious payloads.
Nation-state actors will increasingly exploit AI-generated identities to infiltrate organizations, says George Gerchow, IANS Research faculty and MongoDB interim CISO/head of trust:
An emerging insider threat gaining traction over the past six months, these sophisticated operatives bypass traditional background checks using stolen U.S. credentials and fake LinkedIn profiles to secure multiple roles within targeted companies. Once inside, they deploy covert software and reroute hardware to siphon sensitive data directly to hostile nations. The FBI confirmed that 300 companies unknowingly hired these imposters for over 60 positions, exposing critical flaws in hiring practices. Traditional background checks can’t catch this level of deception, and HR teams lack the tools to identify these threats. This escalating risk demands stronger identity verification and fraud detection—ignoring it leaves organizations vulnerable to catastrophic breaches. This isn’t just an attack trend; it’s a wake-up call.
Cybersecurity teams will become augmented operators rather than mere responders, says Will Ledesma, Adlumin senior director of MDR cybersecurity operations:
2025 will be the year we will start to fully realize the power of AI augmentation in cybersecurity efforts. We’ve seen augmented reality, like Microsoft’s mixed-reality headsets, enhance physical battlefield awareness by enabling U.S. soldiers to see through smoke, around corners, and view 3D terrain maps in their field of vision. While we’ve already seen examples of AI empowering cybersecurity teams, with the leaps in 2024 of LLM and forward-thinking concepts, AI is set to similarly enhance security teams’ abilities on the cyber battlefield. AI will become an additional weapon set in an organization’s arsenal for combatting increasingly sophisticated threats. There aren’t too many people who realistically think technology will fully replace soldiers on the physical and cyber battlefield in the near term, but we all agree that it will help them do their jobs better. As AI continues to be infused into all cyber operations, it will similarly enhance human efforts by automating routine frontline tasks, providing real-time threat insights, and potentially identifying zero-day vulnerabilities autonomously. Google’s recent claim that an AI agent discovered a previously unknown vulnerability in real-world code indicates that we’re closer than we might think to this becoming a widespread reality.
As AI is more deeply embedded in cybersecurity operations, cybersecurity services will adapt. For example, traditional managed detection and response (MDR), which relies on human-led detections and responses, will give way to cybernetic detection and response, where AI acts as a powerful force multiplier for security teams.
Nation-states will form cyber alliances, says Jim Coyle, Lookout U.S. public sector CTO:
There will be an uptick in nation-on-nation cyber espionage in the coming year. Nation-state threat actors will attempt to hack other foreign governments in search of possible responses to armed conflict, with the goal of understanding how far they can push the envelope before sparking a kinetic response.
With alliances building among hostile nation-states, as seen with the presence of North Korean troops in Russia, we will likely see an alignment in the cyber efforts between key global threat actors. Iran and North Korea have historically been more destructive on the cyber front, while Russia and China have been more passive. As such, we can anticipate North Korea engaging in more cyberattacks against Ukraine and its allies, while Iran will branch out to target allies of Israel.
China, meanwhile, will increase the frequency of its critical infrastructure attacks against the U.S., focusing on industrial-level compromises. They will utilize increasingly sophisticated social engineering, phishing and AI-powered attacks to target manufacturing processes, chemical compounds, farming techniques and genetically modified crop capabilities.
Expect escalating threats to network devices, says Jake Williams, IANS Research faculty and Hunter Strategy VP of research and development:
Advanced threat actors, primarily nation-state threat actors, are likely to focus more on targeting network devices, specifically routers and firewalls. While threat actors continue to struggle to stay ahead of endpoint detection and response (EDR) software on endpoints, similar monitoring software can’t be installed on network devices. We’ve already seen multiple threat actors targeting networking devices to gain access to networks. While this isn’t exactly unprecedented, we can expect the scope and scale of these efforts to increase as threat actors encounter more difficulty maintaining operations with EDR software. It’s also worth noting that the number of compromised network devices is almost certainly underreported today. The vast majority of organizations lack a dedicated threat hunting program for compromised network devices. Very few have the telemetry needed to perform such threat hunts, and even fewer know what to look for. All of this creates a perfect storm for threat actors targeting network devices. Finally, threat actors may target network devices for their lawful intercept capabilities or to disrupt operations in a destructive cyberattack. Some evidence of such prepositioning was seen with Salt Typhoon in 2024, doubtless a sign of more to come.
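Because EDR-style agents can't run on routers and firewalls, the threat hunting Williams calls for often reduces to comparing device state against known-good baselines. A minimal sketch of that idea, assuming configs can be exported as text (device names and config snippets below are made up for illustration):

```python
import hashlib

# Hypothetical threat-hunt helper: flag routers/firewalls whose exported
# running config no longer matches a known-good baseline hash. Real hunts
# would also diff the configs and check firmware images, not just hashes.
def config_hash(config_text: str) -> str:
    """SHA-256 fingerprint of a device's exported configuration."""
    return hashlib.sha256(config_text.encode()).hexdigest()

def hunt_drift(baselines: dict[str, str], snapshots: dict[str, str]) -> list[str]:
    """Return devices whose current config hash deviates from baseline,
    including devices that failed to return a snapshot at all."""
    drifted = []
    for device, baseline_hash in baselines.items():
        current = snapshots.get(device)
        if current is None or config_hash(current) != baseline_hash:
            drifted.append(device)
    return sorted(drifted)
```

Even this crude check supplies some of the missing telemetry the quote describes: any unexplained drift (a new static route, an added tunnel, a changed intercept setting) becomes a hunt lead rather than a silent persistence mechanism.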
Nation-state actors will blend in with criminals, using off-the-shelf tools to evade detection, says Andrew Costis, AttackIQ engineering manager of the adversary research team:
Nation-state actors, including Russia’s Sandworm and China’s APT 41, will dominate global cybersecurity concerns in 2025, with tactics evolving in complexity and stealth. These groups are now turning to widely available off-the-shelf tools, blurring the line between nation-state and financially motivated cybercriminals. But the real danger? The proliferation of zero-day exploits and sophisticated backdoors designed to evade detection for months or even years. This means that organizations, especially in critical infrastructure sectors, must adopt real-time threat detection to stay ahead of this mounting threat.
2025 is the year of the geopolitical AI arms race, says Sabeen Malik, Rapid7 VP of global government affairs and public policy:
As AI drives the next wave of cyber strategy, the stakes have never been higher. Welcome to a new age of geopolitical tension, where AI will drive both attack and defense strategies in 2025, ultimately redefining how we approach incident response. AI systems will become increasingly essential for detecting potential breaches, identifying anomalies, and automating cybersecurity measures to address threats before they can cause significant damage. On the flip side, AI is poised to revolutionize attack strategies for cybercriminals, making it easier for them to execute large-scale operations with minimal effort. The net-net? AI itself isn’t the issue – it’s about whose hands it’s in.
The more things change ...
The "how" of the threat actor landscape is evolving faster than the "what," says Daniel Blackford, Proofpoint head of threat research:
The end game for cybercriminals hasn’t evolved much over the past several years; their attacks remain financially motivated, with Business Email Compromise (BEC) designed to drive fraudulent wire transfers or gift card purchases. Ransomware and data extortion attacks still follow an initial compromise by malware or a legitimate remote management tool.
So, while the ultimate goal of making money hasn’t changed, how attacks are conducted to get that money is evolving at a rapid pace. The steps and methods cybercriminals employ to entice a victim to download malware or issue a payment to a bogus “supplier” now involve more advanced and complex tactics and techniques in their attack chain.
Over the past year, financially motivated threat actors have socially engineered email threads with responses from multiple compromised or spoofed accounts, used “ClickFix” techniques to run live PowerShell, and abused legitimate services like Cloudflare to add complexity and variety to their attack chains.
We predict that the path from the initial click (or response to the first stage payload) will continue to become increasingly targeted and convoluted this year to throw defenders, and especially automated solutions, off their scent.
This is the year we see a catastrophic cyber event related to insider threats, says Jeff Krull, Baker Tilly principal and practice leader:
We certainly are not rooting for this, but 2025 could be the year we witness a major cyber event driven by insider threats. A coordinated attack involving insiders from multiple companies or industries could cause widespread disruption. For example, imagine a situation where a foreign government recruits thousands of insiders to execute a synchronized attack across different organizations. The potential damage from such an event would be catastrophic, and attribution would be exceedingly difficult, especially when foreign actors infiltrate organizations from within.
Insider threats are becoming more common, and organizations are struggling to detect them despite the advancements in AI and analytics. As a result, companies are increasingly adopting zero-trust security models, which assume that no user or device is inherently trustworthy. This model is essential for minimizing risks, but it doesn’t entirely eliminate insider threats.
Hackers have proven they can infiltrate companies by posing as legitimate employees, gaining insider access and patiently collecting sensitive information before launching their attacks. In some cases, hackers have even impersonated on-shore IT specialists, when in fact they were working on behalf of foreign governments. As insider threats continue to rise, organizations must adopt stronger vetting processes and improve monitoring to protect themselves from these hidden dangers.
Insider threat risks will force organizations to evolve zero trust strategies, says Marcus Fowler, CEO of Darktrace Federal:
In 2025, an increasingly volatile geopolitical situation and the intensity of the AI race will make insider threats an even bigger risk for businesses, forcing organizations to expand zero trust strategies.
The traditional zero-trust model protects against external threats to an organization’s network by requiring continuous verification of the devices and users attempting to access critical business systems, services, and information from multiple sources. However, as the Edward Snowden and more recent Jack Teixeira cases have shown, malicious actors can still do significant damage to an organization from within their approved and authenticated boundary.
To circumvent the remaining security gaps in a zero-trust architecture and mitigate increasing risk of insider threats, organizations will need to integrate a behavioral understanding dimension to their zero trust approaches. The zero trust best practice of "Never Trust, Always Verify" will evolve to become "Never Trust, Always Verify, Continuously Monitor."
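The "Continuously Monitor" piece can start as simply as scoring each user's activity against that user's own historical baseline. The Python sketch below is a toy illustration of that idea, not any vendor's implementation; the class name, the daily file-access metric, and the z-score threshold are all assumptions for the example:

```python
from collections import defaultdict
from statistics import mean, stdev

class BehaviorMonitor:
    """Toy continuous-monitoring layer for a zero-trust pipeline:
    flags users whose activity deviates sharply from their own baseline."""

    def __init__(self, threshold=3.0):
        self.history = defaultdict(list)  # user -> past daily file-access counts
        self.threshold = threshold        # z-score cutoff for raising an alert

    def record(self, user, daily_count):
        """Append one day's observed activity to the user's baseline."""
        self.history[user].append(daily_count)

    def is_anomalous(self, user, todays_count):
        """True if today's activity is a large outlier vs. the baseline."""
        baseline = self.history[user]
        if len(baseline) < 5:             # too little data to judge
            return False
        mu, sigma = mean(baseline), stdev(baseline)
        if sigma == 0:
            return todays_count != mu
        return (todays_count - mu) / sigma > self.threshold
```

A real deployment would track many signals (logins, data volumes, access times) rather than one counter, but the principle is the same: the verified, authenticated insider is still measured against their own normal behavior.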
In 2025 ransomware “mafias” will emerge, says Kevin O’Connor, Adlumin director of threat research and incident response:
In the early 2000s, the PayPal Mafia surfaced after budding entrepreneurs, including Elon Musk, Peter Thiel and Reid Hoffman, parlayed their success with PayPal into the next generation of disruptive Silicon Valley startups. Calling them a “mafia” was certainly tongue-in-cheek back then, but the similarities we’re seeing with ransomware groups today are a much truer use of the word. Over the past several years, the most notable ransomware groups have scaled immensely, establishing mature operations just like any big business. And similar to the business world, ambitious individuals who have been moving up the ranks in these criminal enterprises are now branching out to establish their own operations using what they’ve learned. We’ll see a lot of connective tissue between these Ransomware Mafias in the year ahead, and probably for many years to come.
Stopping cloud breaches will require a hybrid approach, says Elia Zaitsev, CrowdStrike CTO:
With a 75% increase in cloud intrusions over the past year, securing the cloud is more critical than ever. But today, tools protecting the cloud alone are not enough. Attackers are increasingly moving laterally between cloud platforms and on-prem environments to evade detection and achieve their objectives, taking advantage of the complexity of hybrid environments and protection gaps created by disconnected point products.
To regain control in 2025, businesses must have full visibility across public and private clouds, on-prem networks and APIs, from the same unified console and workflow. A holistic security platform that integrates runtime, posture management, identity and data security across hybrid environments will be essential to protect against these sophisticated threats.
Consumers will be testing ground for scamming operations, says Selena Larson, Proofpoint staff threat researcher:
In the early stages of fraud in the cyber or digital arena, individual consumers were the target; now, after two decades of evolution of the cybercrime ecosystem, we see ransomware operators "big game hunting" enterprise businesses for tens of millions of dollars.
Over time, layered defenses and security awareness have hardened organizations against many of the everyday threats. As a result, we have seen an uptick in actors once again leaning on individual consumers for their paydays. Pig butchering and sophisticated job scams are two examples that focus on social engineering outside of a corporate environment.
We will see a resurgence in the number of less sophisticated threat actors leveraging alternative communication channels, such as social media and encrypted messaging apps, to focus on fleecing individuals outside of enterprise visibility.
Ransomware threat from personal devices and unauthorized apps will increase, says Jeff Shiner, 1Password co-CEO:
In 2025, ransomware will continue to rise from the increased use of unmanaged applications and devices. Employees today are reaching for personal devices and their preferred apps — including new AI solutions and services that are introducing an entirely new layer to shadow IT — to be more productive. As a result, CISOs and their security teams will have to re-evaluate their security strategies in the new year and beyond to ensure corporate and customer data remains protected while allowing employees to use the tools they prefer to get their jobs done.
Data exfiltration and extortion will eclipse ransomware as the primary threat, says Jim Broome, DirectDefense CTO and president:
In 2025, ransomware will increasingly be used as a precursor to larger attacks, where the real threat is data exfiltration and extortion. Attackers will leverage stolen data as a bargaining tool, especially in highly regulated industries like healthcare, where companies are forced to disclose breaches. As a result, we’ll see more sophisticated ransom demands based on exfiltrated data.
Threat actors will threaten victims with physical harm to extract payments, says Sean Deuby, Semperis principal technologist of North America:
If you assume the threat actors’ goal is to make as much money as quickly as possible, we will start to see the inclusion of physical coercion of the victim organization — in other words, threats or intimidation of the victim company’s management. How do you shorten the time to payment while also reducing the likelihood that the ransom gets negotiated down? You threaten the other party. Ransomware payments ran into the billions in 2024, and a record number of companies paid ransoms this year, yet there will be no end to attempts by threat actors to extract even more money in the year ahead.
MMS-based cyberattacks will flourish in 2025, says Stuart Jones, Proofpoint director, Cloudmark Division:
MMS (Multimedia Messaging Service)-based abuse, consisting of messages that use images and/or graphics to trick mobile device users into providing confidential information or falling for scams, is a burgeoning attack vector that will expand rapidly in 2025. Built on the same foundation as SMS, MMS enables the sending of images, videos, and audio, making it a powerful tool for attackers to craft more engaging and convincing scams. Cybercriminals will embed malicious links within messages containing images or video content to impersonate legitimate businesses or services, luring users into divulging sensitive data. Mobile users are often unaware that they are using MMS, as it blends seamlessly with traditional SMS, creating a perfect storm for exploitation.
RMM tools will increasingly be abused, says Melissa Bischoping, Tanium senior director of security and product design research:
This year, we’ve seen an increase in the abuse of Remote Monitoring and Management (RMM) tools, which allow threat actors to hide under the cover of legitimate IT traffic. Shadow IT and uncontrolled asset inventory often make it difficult to detect these threats. As legitimate business apps, they also don’t have malware signatures and have usually been signed by trusted code signers. Next year, organizations will address this problem through automated workflows and AI-enabled detection of deviations from baseline installations. Because this is best done at scale, organizations will start to scale their visibility into application control as well. This will allow them to make informed policy decisions and better detect threats.
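Detecting deviations from baseline installations can be sketched in a few lines. The allowlist, the RMM tool names, and the `audit_host` function below are illustrative assumptions for the example, not any product's actual logic:

```python
# Hypothetical software-inventory audit: flag anything installed on a host
# that deviates from the approved baseline, with special attention to
# known Remote Monitoring and Management (RMM) tools.
APPROVED_BASELINE = {"chrome", "office", "slack", "teamviewer"}  # example allowlist
KNOWN_RMM_TOOLS = {"anydesk", "atera", "screenconnect", "splashtop", "teamviewer"}

def audit_host(installed: set[str]) -> dict:
    """Compare a host's installed software against the approved baseline."""
    unapproved = installed - APPROVED_BASELINE
    return {
        "unapproved": sorted(unapproved),                 # all baseline deviations
        "rogue_rmm": sorted(unapproved & KNOWN_RMM_TOOLS),  # unsanctioned RMM tools
    }
```

The point of the sketch is the workflow: because sanctioned RMM software is signed and signature-clean, detection hinges on knowing what *should* be on each host and alerting on everything else.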
How mobile phishing will be opening corporate doors for hackers in 2025, says David Richardson, Lookout VP of endpoint:
In 2024, we saw a massive rise in credential theft through mobile phishing tactics, techniques and procedures, and in 2025, I expect we will continue to see a rise in these phishing attacks that lead to the compromise of corporate cloud environments.
While credential theft alone can be worrisome for modern enterprises, it’s what hackers can do with those credentials to move further within an organization’s infrastructure that should be of the highest concern. We’ve all received suspicious text messages that try to trick us into clicking a link for a package we never ordered, or phone calls about insurance policies from companies we don’t have coverage with. While these scams often target individuals, hackers also go after employees, assuming they have access to sensitive corporate information.
All it takes is one hacker to trick an employee, through a mobile phishing attempt, into entering their credentials on a fake access portal, such as a spoofed Okta login page, built specifically to target their organization. With those credentials, attackers can access corporate data to sell on the dark web or hold for ransom. In fact, Lookout researchers observed a massive increase in sites that include Okta in the domain name, up over 1,000% in the last two years. Each of these sites is built for a specific attack against a specific company, and I expect these hackers to continue this trend and innovate on it even further in 2025 with the expansion of generative AI adoption.
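Defenders can hunt for this pattern by flagging newly observed domains that embed a brand name without belonging to the vendor. A minimal Python sketch of that check follows; the allowlist entries and the function are assumptions for illustration (real detection would also score typosquats and lookalike characters):

```python
# Illustrative allowlist of domains legitimately operated by the brand owner.
LEGITIMATE = {"okta.com", "oktacdn.com", "okta-emea.com"}

def is_suspicious_okta_domain(domain: str) -> bool:
    """Flag domains that embed 'okta' but aren't on the vendor allowlist."""
    domain = domain.lower().rstrip(".")
    # Exact matches and legitimate subdomains (e.g. login.okta.com) pass.
    if any(domain == legit or domain.endswith("." + legit) for legit in LEGITIMATE):
        return False
    # Anything else that embeds the brand name deserves a closer look.
    return "okta" in domain
```

Fed from certificate-transparency logs or newly registered domain feeds, even this crude rule surfaces per-company phishing portals of the kind Lookout describes.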
Vulnerabilities in connected vehicle systems will be exploited by cybercriminals, leading to possible attacks on fleets and personal vehicles, says Josh Smith, Nuspire cyber threat analyst:
We may see an attack against a fleet management system or an automotive manufacturer’s portal that gives threat actors the ability to deploy a form of ransomware that effectively locks down all connected vehicles.
Ransomware will continue to evolve and create havoc, says Arvind Nithrakashyap, Rubrik CTO:
If 2024 taught us anything, it’s that ransomware isn’t going anywhere — and will continue to be a favorite of bad actors. With the evolution of AI and more data moving to cloud and SaaS-based platforms, attackers can automate and refine their attack strategies, making ransomware even more effective in 2025.
But it gets worse. We expect Ransomware-as-a-Service (RaaS) to expand beyond malware, offering initial access brokering, data exfiltration, and negotiation services. RaaS platforms will also continue to lower the technical threshold for launching ransomware attacks, which means more individuals or less technically skilled groups can engage in ransomware activities, increasing the volume of attacks. Organizations will need to develop new strategies to contend with this reality.
Abuse of inbox rules will escalate, says Andi Ahmeti, Permiso associate threat researcher:
As attackers continue to refine their tactics, the abuse of inbox rules in compromised email accounts is likely to escalate during 2025. Using inbox rules, attackers can conceal important security alerts, delete or hide incoming messages, or otherwise alter email flows in the victim mailboxes they compromise.
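Auditing for this abuse can begin with a simple pass over every mailbox's rules: flag any rule that matches security-related keywords and then deletes, hides, or forwards the mail. The Python sketch below is illustrative only; the keyword set, action names, and rule shape are assumptions rather than any mail platform's actual API:

```python
# Hypothetical inbox-rule audit: surface rules that quietly suppress
# security-related mail in a (possibly compromised) mailbox.
SECURITY_KEYWORDS = {"password", "mfa", "security alert", "sign-in", "suspicious"}
RISKY_ACTIONS = {"delete", "move_to_rss", "forward_external", "mark_read"}

def flag_inbox_rules(rules):
    """Return names of rules that hide security mail via risky actions.

    Each rule is modeled as: {"name": str, "keywords": set[str], "action": str}.
    """
    flagged = []
    for rule in rules:
        hides_security_mail = bool(rule["keywords"] & SECURITY_KEYWORDS)
        if hides_security_mail and rule["action"] in RISKY_ACTIONS:
            flagged.append(rule["name"])
    return flagged
```

In practice the same logic would run against rules pulled from the mail provider's admin API, with any hit treated as a possible account-compromise indicator rather than a user preference.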
Threat actors will turn their eye to cloud technologies in 2025, exploiting their increasing complexity and potential vulnerabilities, says Paul Reid, AttackIQ VP of adversary research:
As organizations migrate workloads to the cloud for cost-efficiency and faster delivery, cloud assets are becoming more attractive targets for attackers. Consolidating critical functions like identity and authentication in the cloud, while potentially enhancing security, also creates a larger, more valuable target: the centralized identity and authentication provider becomes a single point of compromise, letting an attacker who breaches one access point reach multiple systems. In 2025, we will see cybercriminals seek to compromise a single, centralized cloud access point for an opportunity to breach all of a company’s most important assets.
Physical and cybersecurity challenges, says Greg Parker, Johnson Controls, global vice president, security and fire, life cycle management:
As cyber and physical security increasingly intersect, zero-trust architectures will be essential to safeguard access and mitigate vulnerabilities. Organizations must ensure all users, devices and systems are verified continuously with robust access controls to prevent unauthorized intrusions into physical security systems. I anticipate zero trust becoming the industry standard, especially for facilities leveraging IoT and cloud-based solutions, where the stakes for security and operational continuity are higher than ever.
Digital fatigue increases will amplify talent shortages, says Damian Archer, Trustwave VP for consulting and professional services, Americas:
Given the current political and social environment globally, we will start to see a more noticeable amount of "digital fatigue." This fatigue will lead to more people disconnecting completely when outside of work, and technology vendors will start to market digital fatigue protection as part of their value propositions. This change will impact security, particularly in managed services, where organizations will find an amplified talent shortage driven by employees’ need to disconnect. The constant on-call mentality of workers will start to trend the other way, and, as such, organizations will look to complement their workforces.
Quantum computing is coming for your encrypted data
Quantum-resistant algorithms will come to domain name management and DNS, says Ihab Shraim, CSC CTO:
As quantum computing keeps evolving, the traditional encryption methods used in DNS will become vulnerable and risky. To address this, there will be a shift toward quantum-resistant algorithms to secure DNS queries and head off future exploits enabled by quantum computing capabilities. This move toward post-quantum cryptography (PQC) to future-proof critical online infrastructure, including global domain name portfolios, will begin to shape daily domain management operations and the securing of DNS queries.
Escalating “steal-now, decrypt-later” threats will drive broad integration of post-quantum encryption, says Karl Holmqvist, Lastwall founder and CEO:
In 2025, the intensifying threat of “Steal-Now, Decrypt-Later” attacks will force organizations to accelerate the adoption of post-quantum cryptography (PQC). With quantum computing advancements making traditional encryption methods increasingly vulnerable, adversaries are actively stockpiling encrypted data today to decrypt it with future quantum capabilities. The recent standardization of FIPS-203 in August 2024 enables organizations to legally deploy proven PQC algorithms like ML-KEM, pushing CISOs to establish comprehensive cryptographic asset registers and proactively overhaul encryption strategies. Without immediate action to secure high-value assets, organizations face a growing risk of quantum-enabled breaches, threatening not just data but national security and global stability.
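A cryptographic asset register can begin as a simple classification of each algorithm in use against its quantum exposure, prioritizing long-lived data, since that is the prime steal-now, decrypt-later target. The Python sketch below is a toy illustration; the category labels, the ten-year lifetime threshold, and the algorithm lists are assumptions for the example:

```python
# Minimal cryptographic asset register: classify each recorded use of
# cryptography by its exposure to a future quantum adversary.
QUANTUM_VULNERABLE = {"RSA-2048", "ECDSA-P256", "DH-2048", "X25519"}  # Shor-breakable
PQC_READY = {"ML-KEM-768", "ML-DSA-65", "SLH-DSA"}  # FIPS 203/204/205 families

def classify_asset(algorithm: str, data_lifetime_years: int) -> str:
    """Assign a migration priority to one cryptographic asset."""
    if algorithm in PQC_READY:
        return "pqc-ready"
    if algorithm in QUANTUM_VULNERABLE:
        # Data that must stay secret for a decade or more is the prime
        # "steal now, decrypt later" target, so it migrates first.
        return "migrate-first" if data_lifetime_years >= 10 else "migrate"
    return "review"
```

An actual register would also record where each key lives, what data it protects, and who owns the migration, but even this skeleton gives a CISO a sortable worklist.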
Quantum computing begins to threaten current encryption standards, says Julian Brownlow Davies, Bugcrowd VP of advanced services:
Advances in quantum computing will reach a point where they start to pose a legitimate threat to traditional encryption methods. While not yet powerful enough to break all encryption, these developments will accelerate efforts to adopt quantum-resistant cryptographic algorithms. Governments and large enterprises will begin transitioning to new encryption standards to future-proof their data security.
Quantum computing will begin to crack strong encryption systems, says Jesse Emerson, Trustwave VP of managed security services, Americas:
Quantum computing will begin to crack strong encryption systems, prompting significant investments in quantum-resistant models for securing information and communications. While this may not have a material impact in 2025, it will become a key focus for nation-states and organizations with sensitive data and low risk tolerances.
As these trends unfold, the cybersecurity landscape will evolve, presenting both challenges and opportunities. Staying ahead of these developments will be crucial for organizations aiming to protect their data and systems in an increasingly complex digital world.
With "Q Day" approaching, it’s time for organizations to start prepping, says Maurice Uenuma, Blancco VP and general manager, Americas, and security strategist:
With the August release of NIST standards for Post-Quantum Cryptography, it’s “go time” for organizations that haven’t yet started working on implementing the new standard. Full deployment will take time, and with some estimates of "Q-Day” (quantum computers’ ability to break current encryption standards) arriving within the next decade, organizations will need to lean in to avoid getting caught off-guard. Furthermore, enterprises and individuals will need to anticipate the data compromises looming from Q-Day based on the “harvest now, decrypt later” strategies of some adversaries and hostile nation-states. We do not yet know the full impact of this scenario, but it could lead to a spike in ransomware, extortion, spear phishing and other attacks. Just because sensitive information from a previous incident was not publicly released does not mean it could not happen in the future. Preparing for Q-Day in 2025 should be a top priority for CISOs for this very reason.