Automation of the Security Operations Center (SOC) has failed. Almost 20 years after the rise of the SIEM, and 10 years after SOAR platforms first hit the market, SOCs are still struggling. Analysts are drowning in an “everywhere data” environment, straining to interpret, prioritize, and respond to a seemingly never-ending stream of indicators at something close to the speed of threat. Many companies run more than 100 different security tools, forcing analysts to bounce between screens and portals, each with its own query language, while trying to piece together a cohesive investigative narrative. SOC leaders face mounting pressure to deliver on metrics and prove ROI on growing security budgets.
With the introduction of generative artificial intelligence (AI) into the SOC, we're on the cusp of something truly revolutionary. For the first time, we have systems that can reason over unstructured data and draw semantic meaning without explicit programming. These systems can make connections between words and concepts in ways that feel almost human. It's a dramatic shift from the technology underpinning the SIEM and SOAR, which required everything to be neatly structured and categorized. In a recent episode of CSO Perspectives by N2K CyberWire, I explored these previous attempts to solve the core problems facing the SOC in detail.
AI, if done right, can amplify and augment SOC analysts, ushering in a golden era of SOC automation. Instead of "if-this-then-that" automation, we're moving toward true human-AI collaboration and expert reasoning systems. Technology can help all analysts achieve better outcomes faster, regardless of their experience level.
As we ride the wave of AI innovation, we need to be thoughtful about how we implement it in diverse SOC environments. No black-box AI making decisions we can't understand. No automated remediation that exceeds an organization's risk tolerance. These platforms need to adapt to the specific needs and constraints of different industries and organizations. The goal shouldn't be to automate everything; it should be to automate the right things in the right way, always keeping the human analyst in the decision-making loop.
I believe any AI-powered platform that genuinely supports human analysts in the SOC needs to be guided by a four-part framework to be truly transformative and broadly accepted by security teams.
First, human-AI collaboration must be at the center of workflows. Any automation or AI should enhance the decision-making process, not add another burden or strip human agency from security workflows. The AI should both teach and learn from analysts at all skill levels in a symbiotic relationship.
Second, there needs to be a "Safe AI" architecture. Security teams must have absolute confidence that sensitive data stays within predefined boundaries and isn't used to train external AI models. Safe AI includes “evidentiary AI”: every step the AI takes is auditable and can be compiled into comprehensive reports on investigations and outcomes, giving the CISO confidence that due diligence was done. No black boxes allowed.
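To make "evidentiary AI" concrete, here is a minimal sketch of what an auditable trail of AI-assisted steps could look like. All names (AuditTrail, record, the sample case and alert IDs) are hypothetical illustrations, not any vendor's actual API; a real implementation would add tamper-evident storage and access controls.

```python
import hashlib
import json
from dataclasses import dataclass, field
from datetime import datetime, timezone


@dataclass
class AuditTrail:
    """Append-only record of every step an AI assistant takes in a case."""
    case_id: str
    steps: list = field(default_factory=list)

    def record(self, action: str, inputs: dict, output: str) -> None:
        # Hash the raw inputs so the report can prove what the model saw
        # without copying sensitive data into the report itself.
        digest = hashlib.sha256(
            json.dumps(inputs, sort_keys=True).encode()
        ).hexdigest()
        self.steps.append({
            "timestamp": datetime.now(timezone.utc).isoformat(),
            "action": action,
            "input_sha256": digest,
            "output": output,
        })

    def report(self) -> str:
        """Compile the trail into a human-readable summary for review."""
        lines = [f"Investigation {self.case_id}: "
                 f"{len(self.steps)} AI-assisted steps"]
        for i, step in enumerate(self.steps, 1):
            lines.append(f"{i}. [{step['timestamp']}] "
                         f"{step['action']} -> {step['output']}")
        return "\n".join(lines)


# Hypothetical investigation: each AI action lands in the trail as it happens.
trail = AuditTrail(case_id="IR-2041")
trail.record("triage_alert", {"alert_id": "A-77"},
             "classified as credential stuffing")
trail.record("enrich_ip", {"ip": "203.0.113.9"},
             "known botnet infrastructure")
print(trail.report())
```

The point of the sketch is the shape of the guarantee: every action is timestamped, its inputs are fingerprinted, and the whole sequence can be compiled into a report after the fact rather than reconstructed from memory.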
Third, a platform needs to be modular and integrate seamlessly with existing environments. We need to get security teams out of the "rip and replace" cycle that's become all too common. The solution should optimize existing ecosystems without requiring expensive log aggregation strategies.
Fourth, a platform should enable federated data analysis: working with data where it sits and extending analytical workflows across disparate data islands, without complex data pipelines or forced centralization.
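A toy sketch of the federated idea, under heavy simplification: the same analytic runs against each data island in place, and only the small result sets come back for correlation, rather than shipping all raw logs to a central store. The island names, event shapes, and helper functions here are invented for illustration.

```python
from typing import Callable, Iterable


def failed_logins(events: Iterable[dict]) -> list[dict]:
    """Example analytic: pull out unsuccessful authentication events."""
    return [e for e in events
            if e.get("type") == "auth" and not e.get("success")]


def federated_query(islands: dict, analytic: Callable) -> dict:
    """Run the same analytic where each dataset lives; merge only results.

    In a real deployment each island would be a remote query engine and
    the analytic would be pushed down to it; here the islands are just
    in-memory lists so the pattern is visible end to end.
    """
    return {name: analytic(events) for name, events in islands.items()}


# Two hypothetical data islands that never leave their home systems.
islands = {
    "cloud_idp": [
        {"type": "auth", "success": False, "user": "kim"},
    ],
    "on_prem_ad": [
        {"type": "auth", "success": True, "user": "lee"},
        {"type": "auth", "success": False, "user": "lee"},
    ],
}

results = federated_query(islands, failed_logins)
```

The design choice worth noticing is that the analytic travels to the data, so the correlation layer only ever handles the filtered results, not the full log volume.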
So, what does this framework look like in practice? Think of it like incorporating autopilot into an aircraft cockpit. Just as a pilot wouldn't pre-program their autopilot to “take off, engage in combat, avoid gunfire, and return home with half a tank of gas,” we shouldn't expect SOC automation to run end-to-end without human oversight. Instead, imagine a system where AI augments the analyst's capabilities, providing enhanced situational awareness across multiple data islands, just as modern avionics give pilots a comprehensive view of their battlespace.
Analysts constantly need more context to make good decisions, but that context is scattered across data islands, buried in PDFs and wikis, and spread throughout the organization. No human can possibly reason over all of it at scale. That's where AI comes in: gathering and presenting the context humans need to make better decisions.
Our industry’s guiding principle should be simple: every piece of technology we bring into the SOC must be wrapped around the human analyst, not the other way around. When you strip away all the tech and tools, it's still the human analyst who stands between attackers and their targets. The human needs to be in charge, but significantly upgraded. That's the promise of AI in cybersecurity operations.