COMMENTARY: It’s no secret that organizations are grappling with whether artificial intelligence (AI) will help or hinder their work. Opinions range widely – from concerns that AI could replace jobs to optimism about enhanced human-machine collaboration or even full automation as the future path.
Regardless of stance, one thread is clear: however enticing today's AI products are, organizations still need to build trust in AI systems.
AI has enormous potential to transform cybersecurity, but its promise comes with limitations that make human collaboration essential. Trust in AI for cybersecurity hinges not only on its technical capabilities, but also on its ability to reliably analyze data in real-world environments. Today, it's not realistic to hand over full control of security decisions to AI. Instead, AI's strength lies in collaboration with human security experts who can guide, supervise, and evaluate its outputs.
Understand AI’s limits
To understand the role of human collaboration in AI, we must start with where AI stands today. Broadly, we can categorize AI into two types: augmented and autonomous.
Augmented AI assists humans in decision-making. It supports experts by processing vast amounts of data and identifying patterns, making security work faster and more efficient. However, decisions still rest with humans. Autonomous AI, by contrast, can operate independently, making decisions on its own. While this level of AI may be the goal, the cybersecurity industry remains in the augmented phase—meaning we are far from a reality where AI systems can make security decisions without human involvement.
Even with all the progress in the past two years, there's still a lack of confidence in the accuracy of the data that powers AI. AI algorithms rely on vast datasets to identify trends and anomalies, yet these systems are still evolving. To make reliable decisions, AI systems need consistently and nearly flawlessly accurate data. Until we achieve this level of precision, humans are necessary as a final check on AI's outputs. For example, take today's security operations centers (SOCs), where AI can flag threats, but it's the analysts who make the final call.
While examples like these show the value of AI, the technology also presents a double-edged sword. Just as security teams use AI to bolster defenses, attackers also adopt AI to amplify their criminal activities. AI can help cybercriminals write sophisticated phishing emails in multiple languages, aid them in navigating networks and identifying sensitive data to exfiltrate, and even mimic someone’s likeness to gain access to sensitive information. This evolution of AI attacks creates a need for an even stronger human presence as a countermeasure.
As both defenders and attackers leverage AI, the human element remains the final line of defense. Security teams must bring human insight, judgment, and intuition to the table – qualities that can outwit even the most advanced AI-driven attacks. The partnership of human and machine is not simply about achieving better security: it's about ensuring resilience in an environment where both good and bad actors wield AI.
What human collaboration with AI looks like
Most security organizations recognize the need for human oversight, and they can take these four practical steps to foster a working relationship between human beings and AI:
- Prioritize data quality and training: Building trust in AI starts with a strong foundation of high-quality, relevant data. AI algorithms depend on data accuracy to identify security threats and respond appropriately. Without regularly updated, reliable data, AI systems risk producing flawed outputs or overlooking new threats. An AI system monitoring identities might initially flag only high-frequency login attempts as suspicious. But as attack methods evolve, attackers may shift to low-frequency, high-impact intrusions that slip past traditional detection. A common example is a "sleeping attack," where an attacker gains access to an account and lies dormant, using it sparingly to avoid detection while gradually accessing sensitive data or escalating privileges over weeks or months. Continuous data updates are essential to equip AI with the latest threat intelligence, letting it spot these stealthy tactics in real time. (A minimal code sketch of this kind of dormancy check appears after this list.)
- Target specific workflows: Organizations should focus on workflows where AI can create tangible value. These may include reducing repetitive tasks, identifying potential threats faster, and enhancing team efficiency. By confining AI to targeted functions, organizations can gradually build confidence in its outputs while maintaining a strong human presence. It isn't necessarily large-scale or nation-state attacks bringing businesses to a halt. Instead, bad actors are targeting the weakest link across organizations – people. They're stealing credentials to enter networks, then moving laterally to access sensitive data and information. In fact, according to Forrester Research, more than 80% of security breaches involve privileged credentials. Given the prevalence of these attacks, teams should make preventing them a priority before adopting any AI technology. In the beginning, this can look like using AI to automate the identification of risky user behavior, access patterns, and unusual activity. Start small, monitor the outcomes, and adapt based on human feedback to avoid compromising business efficiency.
- Create safeguards to balance AI's potential and risks: As AI continues to enhance cybersecurity, set clear safeguards that manage AI's power responsibly. These safeguards should prevent AI from operating unchecked, while still leveraging its strengths to support security objectives. Start with layered decision-making. Use AI to identify potential threats and anomalies, but require human intervention to validate critical actions. For example, AI might flag unusual access patterns, but it's the security team that reviews and confirms whether these patterns pose a real risk. This tiered approach minimizes the chances of AI missteps and ensures final decisions are made with human insight. (A sketch of this kind of tiered hand-off also appears after this list.)
- Protect identities at all costs: Organizations also must think about how they could be breached, recognizing that protecting identity has become a top priority. Part of this includes the democratization of data and access – basically using plain language to interact with AI tools. This ensures anybody across an organization can understand how to best protect identities, regardless of their technical know-how. Of course, we don't expect employees to knowingly compromise data or expose their credentials. But by making identity security more accessible across an organization, the guardrails are in place to protect sensitive information.
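To make the "sleeping attack" point concrete, here's a minimal sketch of the kind of dormancy heuristic an AI-assisted monitoring pipeline might surface to analysts. The event fields, thresholds, and follow-up checks are illustrative assumptions, not any specific vendor's detection logic.

```python
from datetime import timedelta

# A minimal sketch of a "sleeping attack" heuristic. The event fields,
# thresholds, and follow-up checks are illustrative assumptions only.

DORMANCY_DAYS = 45        # assumed: this long with no activity counts as dormant
SPARSE_LOGIN_LIMIT = 3    # assumed: very few logins after dormancy looks deliberate

def flag_possible_sleeping_attacks(events):
    """events: list of dicts for one account, e.g.
    {"ts": <datetime>, "action": "login" | "privilege_change" | "data_access"}."""
    events = sorted(events, key=lambda e: e["ts"])
    flags = []
    for prev, curr in zip(events, events[1:]):
        if curr["ts"] - prev["ts"] >= timedelta(days=DORMANCY_DAYS):
            # Activity resuming after a long quiet period deserves a second look,
            # especially if it soon touches privileges or sensitive data.
            later = [e for e in events if e["ts"] >= curr["ts"]]
            sensitive = any(e["action"] in ("privilege_change", "data_access") for e in later)
            if len(later) <= SPARSE_LOGIN_LIMIT or sensitive:
                flags.append({"resumed_at": curr["ts"], "sensitive_follow_up": sensitive})
    return flags  # surfaced to an analyst for review, never acted on automatically
```

Even a simple rule like this only flags activity; the value comes from keeping the underlying data fresh so the thresholds reflect how attackers actually behave, and from an analyst making the final call on what gets flagged.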
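Similarly, to illustrate the layered decision-making described above, here's a hedged sketch of how a team might wire an AI-generated risk score into a human-in-the-loop workflow. The thresholds, priorities, and notification hooks are hypothetical placeholders for whatever SIEM or SOAR tooling a team actually runs.

```python
# A hedged sketch of layered decision-making: the AI scores, humans confirm.
# Thresholds and the notify/log hooks are hypothetical placeholders for a
# team's real alerting integrations.

LOW_RISK = 0.3
HIGH_RISK = 0.8

def route_alert(risk_score, context, notify_analyst, auto_log):
    """Route an AI-generated risk score (0.0-1.0) through tiered review."""
    if risk_score < LOW_RISK:
        auto_log(context)                              # low risk: record only
        return "logged"
    if risk_score < HIGH_RISK:
        notify_analyst(context, priority="routine")    # medium: queue for review
        return "queued_for_review"
    # High risk: even here the AI does not act alone; an analyst must confirm
    # before any disruptive step, such as disabling an account.
    notify_analyst(context, priority="urgent")
    return "awaiting_human_confirmation"
```

The design choice that matters is that no branch takes a disruptive action on its own: the AI narrows the field, and a person decides.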
Human-AI collaboration in cybersecurity will mature. As AI's reliability and data confidence improve, we may reach a point where we can delegate more security responsibilities to it. Over time, human roles could shift from day-to-day oversight to strategic direction and high-level problem solving.
Imagine a future in which AI autonomously handles a significant portion of threat detection and response, making real-time decisions to thwart attacks without waiting for human input. We can reach this vision of cybersecurity, but it hinges on achieving trust in AI’s abilities – a trust we can only build through careful collaboration, intentional data stewardship, and a commitment to balancing AI’s power with caution.
Phil Calvin, chief product officer, Delinea
SC Media Perspectives columns are written by a trusted community of SC Media cybersecurity subject matter experts. Each contribution has a goal of bringing a unique voice to important cybersecurity topics. Content strives to be of the highest quality, objective and non-commercial.