How ‘Agentic AI’ will drive the future of malware

COMMENTARY: AI adoption has grown explosively over the past two years, and the technology is now embedded across business units, business functions, and the applications we use online.

Until now, most of us have encountered only ChatGPT-like systems that generate results when we ask them a question. Now, security teams need to start worrying about fully autonomous AI systems (agentic AI) that can operate independently of human oversight.

Agentic AI isn’t just about answering queries: it’s about building things. It doesn’t merely come up with an idea; it actually creates something.

Think of it as a personal assistant that can perceive (gather data), reason (analyze the data to understand what’s going on), act (take action based on that understanding), and learn (adapt based on feedback and experience), instead of a human having to perform those tasks.
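To make that loop concrete, here is a minimal, hypothetical sketch of the perceive-reason-act-learn cycle in Python. Nothing below comes from a real agent framework; every class and method name is an illustrative placeholder, and a production agent would replace the trivial reasoning rule with calls to a planning model.

```python
# Minimal, hypothetical sketch of the perceive-reason-act-learn loop.
# All names are illustrative placeholders, not a real agent framework.

class ToyEnvironment:
    """Stand-in for whatever the agent observes and acts on."""
    def __init__(self, signals):
        self.signals = list(signals)

    def observe(self):
        # Return the next piece of data, or None when nothing is left.
        return self.signals.pop(0) if self.signals else None


class ToyAgent:
    def __init__(self):
        self.memory = []  # experience the agent adapts from over time

    def perceive(self, env):
        return env.observe()  # gather data

    def reason(self, observation):
        # Analyze the observation; a real agent would call an LLM or
        # planner here instead of this trivial rule.
        return f"handle:{observation}"

    def act(self, plan):
        print(f"acting on plan: {plan}")  # take action on the environment
        return plan

    def learn(self, outcome):
        self.memory.append(outcome)  # adapt based on feedback and experience

    def run(self, env):
        while (obs := self.perceive(env)) is not None:
            self.learn(self.act(self.reason(obs)))


ToyAgent().run(ToyEnvironment(["new-host-found", "login-anomaly"]))
```

The point of the sketch is the closed loop: once the four steps feed one another, the system no longer needs a human to drive each cycle.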

An agentic AI system may consist of multiple independent agents, each specialized for a particular task and working cooperatively toward a common goal. In the hands of threat actors, that same architecture could yield self-driven, AI-enabled malware.

What an adversarial agentic AI system looks like

Like all technology, agentic AI is dual-use: people can apply it for both good and bad purposes. Adversaries will soon begin harnessing this powerful technology to improve the speed, scale, efficiency, and targeting of their cyberattacks.

Agentic AI will scan the internet, identify targets, and then figure out how to compromise them. It will design social engineering attacks, complete with fake audio, video, and messages that impersonate a trusted entity. It will adapt its tactics automatically: if a target does not respond to a message, it will place a phone call. It will scan environments to identify vulnerable points of entry. It will launch multi-stage attacks, dynamically deciding its next course of action based on the outcome of the previous stage. And it will take the shortest path to reach its target.

AI systems have already demonstrated self-perception, situational awareness, and problem-solving abilities, and have even displayed the capacity to use self-replication for survival and population growth.

It follows that we can expect to see future malware consisting of multiple collaborative autonomous AI agents, each geared to perform a specialized task.

For example, a target-searching agent identifies a target that meets specific criteria; an intelligence agent gathers and analyzes OSINT about the target; a vulnerability exploitation agent discovers vulnerabilities and writes exploit code; a social engineering agent designs social engineering attacks; a credential agent validates stolen credentials or performs infiltration; and a smash-and-grab agent exfiltrates or destroys data. The possibilities are endless.

How to mitigate the risk of adversarial AI

Here are some recommendations and best practices that can help organizations defend against the rise of adversarial agentic AI:

  • Train employees to detect AI-powered attacks: Educate staff on the growing risk that bad actors will use agentic AI maliciously. Use social engineering and phishing simulation exercises, security awareness tests, and red-teaming to teach employees what an AI-powered attack (a deepfake, an AI-drafted phishing email) can look like and why they must report it immediately to IT or the security team.
  • Fight AI with AI: Agentic AI is not exclusive to attackers. Defenders can also leverage agentic AI to improve their detection and response, threat intelligence, and defense mechanisms. They can create an army of AI agents that find and fix bugs and misconfigurations, perform proactive patching, run continuous simulation testing on employees, identify weaknesses in security policies and controls, search for and destroy malicious programs, and monitor networks and traffic for anomalies (see the sketch after this list).
  • Deploy strong security controls and authentication: Implement phishing-resistant MFA on critical systems and user accounts to prevent unauthorized access. Use a layered security system that can detect and block adversaries (whether they are automated agents or not) from performing lateral movement. Leverage robust monitoring tools to flag unusual activity. Track user interactions to identify compromised accounts.
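As a thought experiment on the "fight AI with AI" idea, here is a minimal, hypothetical sketch of a defensive army of agents: several specialized agents sharing a common context and building on one another's output, mirroring the cooperative multi-agent pattern described earlier. The agent roles, class names, and findings are placeholders invented for illustration, not a real security product.

```python
# Hypothetical sketch of cooperating defensive agents, each specialized
# for one task, sharing findings through a common context. All names
# and findings here are illustrative placeholders.

from dataclasses import dataclass, field


@dataclass
class SharedContext:
    """Findings each agent contributes for the others to build on."""
    findings: dict = field(default_factory=dict)


class MisconfigScannerAgent:
    name = "misconfig-scanner"

    def run(self, ctx):
        # A real agent would scan configurations; we record a toy finding.
        ctx.findings[self.name] = ["world-readable storage bucket on dev server"]


class PatchPlannerAgent:
    name = "patch-planner"

    def run(self, ctx):
        # Builds on the scanner's output to propose proactive fixes.
        issues = ctx.findings.get("misconfig-scanner", [])
        ctx.findings[self.name] = [f"remediation ticket: {i}" for i in issues]


class AnomalyMonitorAgent:
    name = "anomaly-monitor"

    def run(self, ctx):
        # Would watch network traffic for anomalies in a real deployment.
        ctx.findings[self.name] = ["no anomalous traffic in last window"]


def orchestrate(agents, ctx):
    """Run each specialized agent in turn over the shared context."""
    for agent in agents:
        agent.run(ctx)
    return ctx.findings


results = orchestrate(
    [MisconfigScannerAgent(), PatchPlannerAgent(), AnomalyMonitorAgent()],
    SharedContext(),
)
for agent_name, findings in results.items():
    print(agent_name, "->", findings)
```

The orchestration skeleton is deliberately generic: the same coordination pattern, pointed at malicious tasks instead of defensive ones, is what makes the adversarial scenario described above plausible.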

It's highly likely that bad actors have already begun weaponizing agentic AI. The sooner organizations build up defenses, train employees, deploy their own AI agents, and invest in stronger security controls, the better equipped they will be to outpace AI-powered adversaries.

Stu Sjouwerman, founder and CEO, KnowBe4

SC Media Perspectives columns are written by a trusted community of SC Media cybersecurity subject matter experts. Each contribution has a goal of bringing a unique voice to important cybersecurity topics. Content strives to be of the highest quality, objective and non-commercial.
