COMMENTARY: AI adoption has grown explosively over the past two years, and the technology has been integrated into nearly every business unit, business function, and online application we use.
Until now, most of us have only encountered ChatGPT-like systems that generate answers when we ask them questions. Now, security teams need to start worrying about fully autonomous AI systems (agentic AI) that can operate independently of human oversight.
Agentic AI isn’t just about answering queries; it’s about building things. It doesn’t just come up with an idea: it actually creates something.
It’s a personal assistant that can perceive (gather data), reason (analyze that data to understand what’s going on), act (take action based on its understanding), and learn (adapt based on feedback and experience), instead of humans having to do those tasks.
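To make that loop concrete, here is a minimal, hypothetical Python sketch of the perceive-reason-act-learn cycle. Every name and data structure below is illustrative rather than the API of any real agent framework; actual agentic systems typically wrap a large language model inside a loop of this shape.

```python
# Illustrative sketch of an agentic perceive-reason-act-learn loop.
# All names here are hypothetical stubs; real agent frameworks differ in detail.

class Agent:
    def __init__(self):
        self.memory = []  # accumulated observations and outcomes

    def perceive(self):
        """Gather data from the environment (logs, APIs, user input)."""
        return {"event": "new_email", "sender": "unknown"}  # stubbed observation

    def reason(self, observation):
        """Analyze the observation in light of past experience."""
        seen_before = any(m["observation"] == observation for m in self.memory)
        return "ignore" if seen_before else "investigate"

    def act(self, decision):
        """Take an action based on the decision."""
        print(f"Action taken: {decision}")
        return {"decision": decision, "result": "ok"}

    def learn(self, observation, outcome):
        """Store feedback so future reasoning can adapt."""
        self.memory.append({"observation": observation, "outcome": outcome})

    def run_once(self):
        observation = self.perceive()
        decision = self.reason(observation)
        outcome = self.act(decision)
        self.learn(observation, outcome)

Agent().run_once()
```

The key difference from a chatbot is the closed loop: the system's own actions feed back into what it perceives and learns next, so it can pursue a goal across many steps without a human prompting each one.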
Agentic AI may consist of multiple independent agents, each specialized to handle a particular task and working cooperatively toward a common goal. In the hands of threat actors, that architecture could lead to self-directed, AI-enabled malware.
What an adversarial agentic AI system looks like
Like all forms of technology, agentic AI is dual use: people can apply it to both good and bad purposes. Adversaries will soon begin harnessing it to improve the speed, scale, efficiency and targeting of their cyberattacks.
Agentic AI will scan the internet, identify targets, and figure out how to compromise them. It will design social engineering attacks, complete with fake audio, video, and messages that impersonate a trusted entity. It will adapt its tactics automatically: if the target does not respond to a message, it will place a phone call. It will scan environments to identify vulnerable points of entry. It will launch multi-stage attacks, dynamically deciding its next move based on the outcome of the previous one. And it will always look for the shortest path to its target.
AI systems have already demonstrated self-perception, situational awareness, and problem-solving abilities, even displaying the capacity to self-replicate for survival and population growth.
It follows that we can expect future malware to consist of multiple collaborative, autonomous AI agents, each geared to perform a specialized task.
For example, a target-searching agent identifies targets that meet specific criteria; an intelligence agent gathers and analyzes OSINT about the target; a vulnerability exploitation agent discovers vulnerabilities and writes exploit code; a social engineering agent designs social engineering attacks; a credential agent validates stolen credentials or performs infiltration; a smash-and-grab agent steals or destroys data. The possibilities are endless.
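The coordination pattern behind such a swarm is simple to sketch in the abstract. The skeleton below is a hypothetical, deliberately inert illustration (every class is a stub that only logs): a coordinator hands a shared task state through a pipeline of specialized agents. The same pattern applies equally to the defensive agents organizations may deploy in response.

```python
# Abstract sketch of a multi-agent pipeline: specialized agents sharing state.
# All classes are hypothetical stubs that only log; no real capability here.

class PipelineAgent:
    name = "base"

    def run(self, state: dict) -> dict:
        """Perform this agent's specialized task and update shared state."""
        print(f"[{self.name}] processing state: {state}")
        state[self.name] = "done"  # stubbed result
        return state

class ReconAgent(PipelineAgent):
    name = "recon"

class AnalysisAgent(PipelineAgent):
    name = "analysis"

class ActionAgent(PipelineAgent):
    name = "action"

def run_pipeline(agents, initial_state):
    """Coordinator: each agent acts on the state left by the previous one."""
    state = initial_state
    for agent in agents:
        state = agent.run(state)
    return state

final = run_pipeline([ReconAgent(), AnalysisAgent(), ActionAgent()], {"goal": "demo"})
print(final)
```

What makes the pattern dangerous in adversarial hands is not any single agent but the hand-off: each stage's output becomes the next stage's input, so the pipeline as a whole can pursue a campaign end to end without human direction.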
How to mitigate the risk of adversarial AI
It's highly likely that bad actors have already begun weaponizing agentic AI. Organizations can defend against it with a handful of best practices: build up defenses, train employees, deploy defensive AI agents of their own, and invest in stronger security controls. The sooner they take these steps, the better equipped they will be to outpace AI-powered adversaries.
Stu Sjouwerman, founder and CEO, KnowBe4
SC Media Perspectives columns are written by a trusted community of SC Media cybersecurity subject matter experts. Each contribution has a goal of bringing a unique voice to important cybersecurity topics. Content strives to be of the highest quality, objective and non-commercial.