
Three ways to prepare for Agentic AI


COMMENTARY: The emergence of Agentic AI has been one of the most talked-about advancements in the technology industry. In the last month alone, we’ve seen a wave of announcements about new agentic systems for use cases ranging from automotive to customer service to supply chains.

But what exactly do we mean by Agentic AI? And why has it received so much hype?

The definition of AI agents varies widely across the academic community and technology industry. Broadly, we can define them as advanced AI systems that autonomously perceive, decide, and act to achieve specific goals without constant human intervention. These systems exhibit characteristics such as planning, adaptability, and self-directed problem-solving, which let them operate in dynamic environments. They use a combination of AI techniques, including machine learning, deep learning, and reinforcement learning, to execute tasks with little human oversight.

[SC Media Perspectives columns are written by a trusted community of SC Media cybersecurity subject matter experts. Read more Perspectives here.]

While businesses have not yet widely implemented these systems, Gartner estimates that developers will embed Agentic AI in 33% of enterprise software applications by 2028, letting industry make roughly 15% of day-to-day work decisions autonomously.

As Agentic AI becomes more viable and widespread, there are three building blocks that IT security teams need to have in place for a smooth implementation. By building on a strong foundation of baseline data and cybersecurity best practices, organizations can pre-empt some of the emerging risks associated with the technology, such as novel attack vectors, vulnerabilities, data poisoning, and manipulation. Here’s some insight on what companies will need to embrace Agentic AI:

Data security and governance

We need to start with strong data security practices. This includes data security posture management (DSPM) and data classification, so that businesses have visibility into where sensitive data gets stored, which users have access to it, how it has been accessed and used, and the sensitivity of the various data types the business holds. Visibility into the organization’s data is essential for data loss prevention, and it’s especially important for businesses introducing AI tools and workflows.
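As a rough illustration of what data classification means in practice, here is a minimal Python sketch that maps pattern matches to sensitivity labels. The rules and label names are hypothetical placeholders, not any particular DSPM product’s API; real tooling uses far richer detectors and data catalogs.

```python
import re

# Hypothetical classification rules; real DSPM tooling uses far richer
# detectors, but the principle is the same: map patterns to sensitivity labels.
RULES = [
    (re.compile(r"\b\d{3}-\d{2}-\d{4}\b"), "RESTRICTED"),    # US-SSN-like number
    (re.compile(r"\b\d{13,16}\b"), "CONFIDENTIAL"),          # card-number-like digits
    (re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"), "INTERNAL"),   # email address
]

# Labels ordered from least to most sensitive.
SENSITIVITY = ["PUBLIC", "INTERNAL", "CONFIDENTIAL", "RESTRICTED"]

def classify(text: str) -> str:
    """Return the highest sensitivity label triggered by the text."""
    label = "PUBLIC"
    for pattern, rule_label in RULES:
        if pattern.search(text) and SENSITIVITY.index(rule_label) > SENSITIVITY.index(label):
            label = rule_label
    return label

print(classify("Contact jane.doe@example.com about the invoice"))  # INTERNAL
print(classify("SSN on file: 123-45-6789"))                        # RESTRICTED
```

Even a toy version like this shows why classification matters for AI adoption: once data carries a label, downstream policies can decide what an AI workflow or agent is allowed to see.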

Data management and governance have historically been a challenge for security teams, as they require organizations to create enforceable policies wrapped within technology capable of monitoring those standards and ensuring they are maintained. Smaller organizations often under-prioritize this because of competing needs and budget, while larger organizations struggle with the project's scope.

When applying AI tools, workloads, and eventually agentic systems, the value of good data security and governance increases tenfold. We need quality data for AI outputs, whether building first-party AI or adopting third-party AI. Governance and control of that data, from a sourcing and quality assurance standpoint, are essential to the adoption and operation of AI technology. Particularly in a multi-technology environment, where the AI models will have different creators, ethics, weightings, and data requirements, a firm grip on the organization’s data types, usage, policies, guidelines, and restrictions remains important.

Baseline network and identity security

Having robust baseline security across the infrastructure, specifically network and identity security, is also critical for the responsible use and management of agentic systems. Whether it’s compliance monitoring, data access and usage, or zero-trust provisions and responsibilities, the adoption of Agentic AI will require high-level protections against both external manipulation and internal governance failures. The potential for agents to take out-of-scope actions requires clear monitoring and response capabilities, and the possibility of misuse through user interaction demands a robust network and identity security strategy.
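To make out-of-scope detection concrete, here is a simplified Python sketch of an allow-list check that both denies and logs any action outside an agent’s declared scope. The agent names and action scopes are invented for illustration; a production system would derive them from policy and feed the alerts into existing monitoring and response tooling.

```python
import logging

logging.basicConfig(level=logging.INFO, format="%(levelname)s %(message)s")

# Hypothetical per-agent action scopes; in practice these would come from policy.
AGENT_SCOPES = {
    "support-agent": {"read_ticket", "update_ticket", "send_reply"},
    "supply-chain-agent": {"read_inventory", "create_purchase_order"},
}

def authorize_action(agent_id: str, action: str) -> bool:
    """Allow only in-scope actions, and log everything so out-of-scope
    attempts are visible to responders rather than silently dropped."""
    allowed = action in AGENT_SCOPES.get(agent_id, set())
    if allowed:
        logging.info("ALLOW %s -> %s", agent_id, action)
    else:
        # An out-of-scope attempt is a signal worth alerting on, not just denying.
        logging.warning("DENY (out of scope) %s -> %s", agent_id, action)
    return allowed

authorize_action("support-agent", "send_reply")       # allowed
authorize_action("support-agent", "delete_database")  # denied and flagged
```

The design point is that denial alone isn’t enough: an agent repeatedly attempting actions outside its scope is exactly the behavioral signal security teams need surfaced.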

Organizations struggling with visibility or zero-trust adherence will find it challenging to implement AI workflows and Agentic AI effectively. Without these safeguards, it’s difficult to guarantee security and identify misuse until after damage has occurred.

Zero-trust policies, monitoring, and enforcement

A zero-trust security framework requires that all users, whether inside or outside the organization, are authenticated, authorized, and continuously validated before being granted access to applications and data. While most organizations claim to have some form of zero-trust measures or policies in place, such as multi-factor authentication or SASE technology, the enforcement of these policies varies greatly, impacting the efficacy of controls on AI workflows and agents.

Identity security and zero-trust policies and controls will have significant overlap. However, the systems integrated into security stacks, and the degree to which organizations have invested in them, vary notably. Although identity security technology supports zero-trust and SASE strategies, the effectiveness of these measures depends on organizational policies, and on the scope and enforcement of those policies as functional limiters on access to critical data and systems. For Agentic AI, these measures are exceptionally important because they serve as the guardrails that control agentic access: strict communication boundaries and explicit controls over permissions, data access, and action boundaries within roles.
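As one way to picture those role-scoped guardrails, here is a deny-by-default Python sketch that gates an agent’s action on both the role’s permitted actions and the sensitivity label of the data involved. The roles, labels, and clearance levels are hypothetical; a real deployment would source them from the identity provider and the data classification system rather than hard-coding them.

```python
from dataclasses import dataclass

# Labels ordered from least to most sensitive (hypothetical).
SENSITIVITY = ["PUBLIC", "INTERNAL", "CONFIDENTIAL", "RESTRICTED"]

@dataclass(frozen=True)
class Role:
    """A hypothetical role: bounds both the actions an agent may take
    and the most sensitive data it may touch while acting in that role."""
    max_sensitivity: int               # index into SENSITIVITY
    permitted_actions: frozenset

ROLES = {
    "analyst": Role(max_sensitivity=2, permitted_actions=frozenset({"read", "summarize"})),
    "auditor": Role(max_sensitivity=3, permitted_actions=frozenset({"read"})),
}

def agent_may(role_name: str, action: str, data_label: str) -> bool:
    """Deny by default: act only when both the action and the data's
    sensitivity fall inside the role's explicit boundaries."""
    role = ROLES.get(role_name)
    if role is None:
        return False
    return (action in role.permitted_actions
            and SENSITIVITY.index(data_label) <= role.max_sensitivity)

print(agent_may("analyst", "summarize", "CONFIDENTIAL"))  # True
print(agent_may("analyst", "summarize", "RESTRICTED"))    # False: above clearance
print(agent_may("auditor", "summarize", "PUBLIC"))        # False: action not permitted
```

The deny-by-default posture mirrors the zero-trust principle described above: an agent with no matching role, no matching action, or data above its clearance simply doesn’t act.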

In recent years, it has become clear that innovation necessitates a Secure by Design approach, and establishing robust baseline security has become equally crucial for the successful implementation of innovative technologies. Organizations that want to leverage Agentic AI and multi-agent systems must ensure their existing security practices are highly functional and effective, including using AI technologies such as machine learning to implement additional AI workflows responsibly, with clear visibility and controls.

Preparing security teams for agentic systems will require enhanced visibility, monitoring, and response actions across the data, network, identity, and zero-trust domains. It’s a challenge, but with advanced security systems that offer behavioral monitoring and response actions more sophisticated than decision-tree automation, companies will get it done.

Hanah Darley, director, security and AI strategy and Field CISO, Darktrace

SC Media Perspectives columns are written by a trusted community of SC Media cybersecurity subject matter experts. Each contribution has a goal of bringing a unique voice to important cybersecurity topics. Content strives to be of the highest quality, objective and non-commercial.

