
Five privacy concerns around agentic AI


COMMENTARY: Traditional AI systems have relied on human input and oversight for decision-making. In contrast, agentic AI systems operate with autonomy, adapt to new situations, and take actions or pursue specific goals independently.

Examples of agentic AI include autonomous vehicles and personalized AI assistants that not only respond to commands, but also anticipate needs and manage tasks automatically.


Gartner predicts that by 2028, about one-third of software applications will include some form of agentic AI, and autonomous AI agents will manage around 15% of daily work decisions.

Privacy and security concerns surrounding agentic AI

The very characteristics that make agentic AI so effective, its ability to collect and analyze massive amounts of data and to act on it autonomously, also pose a major threat to the privacy and security of individuals. Consider five major privacy concerns:

  • Surveillance and profiling: Say an AI agent gets tasked with planning a trip to another city. It may require access to a user’s travel schedule, preferences, credit card details, and identifying information to book hotels, transportation, and other services. Similarly, if an AI agent has been tasked with responding to certain types of non-sensitive emails, it could gain access to sensitive information embedded within the user’s email communications, such as contacts, trade secrets, or confidential correspondence. The more data an AI agent collects to perform its tasks, the greater its potential for overreach, turning these systems into tools that not only assist users, but also monitor and profile them.
  • Consent: Agentic AI also complicates informed consent. These systems collect and analyze enormous amounts of personal data, often without users understanding what’s being recorded or how it’s used. Even if people accept certain data practices while interacting with a virtual assistant, the complexity and scope of data collection can obscure the actual extent of what they have agreed to. This opacity raises the risk of unintended privacy violations, as users may inadvertently surrender control of sensitive details that AI systems use for tasks such as decision-making or predicting behavior. Without clear transparency, the line between convenience and exploitation becomes perilously blurred.
  • Compliance: Data protection regulations such as the California Consumer Privacy Act (CCPA) and the European Union’s GDPR require that firms disclose what type of personal data gets collected and for what reasons. So, for example, if an AI system gets used to book trips, it must disclose which travel platforms are accessed and give consumers the choice to opt out of sharing their personal information. However, such disclosure is not easily achievable with agentic AI. Its inherent black-box nature and its ability to operate independently make it harder to track and manage data flows in real time, further increasing the risk of non-compliance.
  • Data security: Agentic LLMs have excessive agency, meaning they have deep access to data, functionality, and permissions. This makes them a potential target for cyberattacks, and makes least-privilege controls on what an agent may do essential (see the first sketch after this list). Any breach of an agentic AI system can compromise sensitive personal data, resulting in identity theft, financial fraud, or other serious consequences. For example, if hackers infiltrate an autonomous vehicle’s AI, they could gain control of the vehicle, its speed, and its brakes, endangering the driver’s safety. Similarly, if an AI virtual assistant is hacked, bad actors could obtain private information contained within an organization’s email traffic or online chats. The data retrieved from these breaches can be further exploited for highly targeted social engineering and phishing attacks.
  • Anonymity: Although privacy regulations do not yet mandate full anonymity, they do emphasize methods like data anonymization and pseudonymization to protect identities. However, in today’s AI era, even if individual data points are anonymized, sophisticated AI systems can often re-identify individuals by combining data from different sources. For example, location data from a smartphone, combined with purchase history from a retail website, could be used to identify a specific person (see the second sketch after this list). This erosion of anonymity has major implications for privacy: people can no longer assume their actions will remain private, even if they take steps to protect their identities, with serious consequences for freedom of expression and other fundamental rights.
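
To make the excessive-agency concern concrete, here is a minimal sketch of least-privilege tool access for AI agents. The agent roles, tool names, and the `invoke_tool` dispatcher are hypothetical stand-ins for illustration, not any real framework’s API:

```python
# Minimal, hypothetical sketch of least-privilege tool access for AI agents.
# Role names, tool names, and the dispatcher are illustrative only.

ALLOWED_TOOLS = {
    "travel_agent": {"search_flights", "book_hotel"},
    "email_agent": {"read_inbox", "draft_reply"},
}

def invoke_tool(agent_role: str, tool: str, payload: dict) -> dict:
    """Refuse any tool call outside the agent's explicit allowlist."""
    permitted = ALLOWED_TOOLS.get(agent_role, set())
    if tool not in permitted:
        # Deny by default: a compromised or over-prompted agent cannot
        # escalate into capabilities it was never granted.
        raise PermissionError(f"{agent_role!r} may not call {tool!r}")
    # A real system would dispatch to the tool implementation here.
    return {"tool": tool, "status": "dispatched", "payload": payload}

# The email agent can draft a reply but cannot book travel, even if a
# malicious prompt instructs it to.
print(invoke_tool("email_agent", "draft_reply", {"to": "alice@example.com"}))
try:
    invoke_tool("email_agent", "book_hotel", {"city": "Paris"})
except PermissionError as err:
    print(f"Blocked: {err}")
```

Scoping each agent to the narrowest set of tools it needs directly limits the blast radius of the breaches described above.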
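The anonymity concern can likewise be shown in a few lines. The sketch below joins two fabricated, pseudonymized datasets on shared quasi-identifiers (location and time of day); every identifier and value is made up for demonstration:

```python
# Fabricated example: two "anonymized" datasets that share quasi-identifiers
# (ZIP code and hour of day) can be joined to link their pseudonyms.
import pandas as pd

location_pings = pd.DataFrame({
    "device_id": ["d1", "d2", "d3"],      # pseudonymous smartphone IDs
    "zip_code": ["10001", "10001", "94105"],
    "hour": [9, 17, 9],
})

purchases = pd.DataFrame({
    "loyalty_id": ["c7", "c8"],           # a different pseudonym scheme
    "zip_code": ["10001", "94105"],
    "hour": [17, 9],
    "item": ["insulin", "coffee"],
})

# Joining on the shared quasi-identifiers links the two pseudonyms: a
# unique (zip_code, hour) pair ties device d2 to loyalty card c7, and
# with it to a sensitive purchase, though both IDs are "anonymous".
linked = location_pings.merge(purchases, on=["zip_code", "hour"])
print(linked[["device_id", "loyalty_id", "item"]])
```

With richer real-world data (more columns, finer timestamps), such joins single out individuals far more reliably, which is exactly the re-identification risk described above.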
Three ways to address the privacy challenges of agentic AI

While agentic AI holds immense potential to revolutionize industries, it also raises significant privacy concerns. Here are three strategies for addressing them:

Foster a privacy-centric culture: Make employees aware of the privacy and security risks of agentic AI. Encourage them to follow security best practices, such as limiting data sharing and opting out of data collection when using AI agents. Also foster a safe environment where employees can report privacy concerns without fear of reprisal.

Set clear policies and procedures: Security policies should mandate regular audits of AI systems to identify and mitigate potential privacy risks. Training employees on these policies and fostering a culture of accountability are equally critical to ensuring compliance and maintaining trust in AI-driven processes.

Implement robust security measures: Restrict access to AI systems and sensitive data based on employees’ roles and responsibilities. Deploy log harvesting, parsing, and alerting tools such as security information and event management (SIEM), data leakage prevention, encryption, and other security measures to prevent breaches and unauthorized access. Continuously monitor user activity and network traffic to detect and respond to anomalies in real time; a simplified sketch of the idea follows.
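
As a rough illustration of the access-restriction and alerting advice above, here is a simplified sketch. The roles, actions, and denial threshold are assumptions for demonstration; a production deployment would enforce these controls in an IAM layer and a SIEM rather than in application code:

```python
# Simplified sketch: deny-by-default role checks plus a naive alert on
# repeated denials. All roles, actions, and thresholds are illustrative.
from collections import Counter

ROLE_PERMISSIONS = {
    "analyst": {"read_reports"},
    "admin": {"read_reports", "manage_agents", "export_data"},
}

DENIAL_ALERT_THRESHOLD = 3  # assumed threshold for demonstration
denials = Counter()

def authorize(role: str, action: str) -> bool:
    """Grant only the actions explicitly listed for a role."""
    return action in ROLE_PERMISSIONS.get(role, set())

def handle_request(user: str, role: str, action: str) -> None:
    if authorize(role, action):
        print(f"{user}: {action} allowed")
        return
    denials[user] += 1
    print(f"{user}: {action} denied")
    # In practice, this signal would go to a SIEM for correlation.
    if denials[user] >= DENIAL_ALERT_THRESHOLD:
        print(f"ALERT: {user} has {denials[user]} denied requests")

# Repeated out-of-role requests trigger the alert on the third denial.
for _ in range(3):
    handle_request("eve", "analyst", "export_data")
```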

As agentic AI systems become more integrated into our daily lives, their privacy and security implications demand careful scrutiny. By fostering a privacy-centric culture, establishing clear policies and procedures, and implementing user education and security best practices, organizations can harness the benefits of agentic AI while safeguarding themselves and their employees from serious privacy and security threats.

Erich Kron, security awareness advocate, KnowBe4

SC Media Perspectives columns are written by a trusted community of SC Media cybersecurity subject matter experts. Each contribution has a goal of bringing a unique voice to important cybersecurity topics. Content strives to be of the highest quality, objective and non-commercial.
