
Put the ‘Eye’ in AI: The future of cybersecurity requires visibility


We’re living in what’s become the year of artificial intelligence (AI), as seemingly every major tech vendor rushes to integrate and promote their AI solutions. But many organizations have already been using AI and machine learning (ML) for more than a decade. The recent excitement around AI solutions makes sense, but tech professionals who have been working with AI know that these new products are not without limitations.

For security teams to get the most out of the next generation of AI solutions being developed today, organizations would be far better served by focusing on fundamentals such as visibility, starting with discovering and classifying every device on the network. AI is only as effective as the data it ingests.

It’s almost impossible to predict how powerful AI products will become over the next five years, but as of today there are still quite a few cracks under their shiny new veneer. ChatGPT can generate some very impressive and convincing content, even scripts and code, but it has also been criticized for being “confidently incorrect” and for producing numerous “hallucinated” errors. Likewise, AI-enabled image generators have struggled to get a grasp on human hands.

Security teams can start by understanding that AI outcomes are predicated entirely on the data the technology ingests. The data these models consume has arguably become more important than the algorithms themselves — and the quality of this data offers far more value than the quantity of data. For example, Microsoft released an AI chatbot in 2016 that was shut down in less than 24 hours because Twitter users flooded it with so much hate speech that its responses became obscene.

From signature-based detection to behavioral analysis

AI products show a lot of promise in cybersecurity, in part because the traditional method of signature-based detection has been rapidly outdated and outsmarted by threat actors. Signature-based detection only catches threats that have previously been detected and analyzed by the vendor. It’s burdensome because it requires cybersecurity providers to employ large numbers of security analysts and to maintain a critical mass of customer deployments to sustain their “network effect.” Most critically, it requires customers to download and roll out signature updates constantly, just to stay behind the curve.

Signature-based detection works on “known” threats. While it’s not redundant technology, its purpose has certainly changed over time. Think of it as a high-speed, low-computational-intensity noise reduction system. However, threat actors have learned the limitations of signature-based detection: metamorphic and polymorphic malware updates and recompiles itself constantly to stay ahead of static detection, and advanced threat actors will leverage zero-day attacks that have never been seen before. Signature-based detection rates vary by vendor, but to steal a march on the threat actors, the cybersecurity market has shifted over the past decade to meet the demand for alternative products.
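
At its core, this kind of noise reduction is just a lookup against fingerprints the vendor has already seen. The sketch below is a minimal, hypothetical illustration in Python (the blocklist contents and function names are assumptions, not any vendor’s implementation), and its comments spell out why recompiled malware evades it:

```python
import hashlib
from pathlib import Path

# Hypothetical blocklist: SHA-256 hashes of samples the vendor has already
# analyzed -- the "known bad" that signature matching covers. The value here
# is a placeholder, not a real malware hash.
KNOWN_BAD_SHA256 = {
    "0000000000000000000000000000000000000000000000000000000000000000",
}

def sha256_of(path: Path) -> str:
    """Hash a file in chunks so large binaries don't exhaust memory."""
    digest = hashlib.sha256()
    with path.open("rb") as handle:
        for chunk in iter(lambda: handle.read(65536), b""):
            digest.update(chunk)
    return digest.hexdigest()

def is_known_bad(path: Path) -> bool:
    # Fast and cheap, but it only flags samples seen before: a recompiled
    # (polymorphic) variant produces a new hash and slips straight through.
    return sha256_of(path) in KNOWN_BAD_SHA256
```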

Behavioral analysis has become one prevalent application of AI-based products. The shift from signature-based detection to behavioral analysis is essentially the shift from the detection of known bad to the detection of deviations from known good.

The low detection rates of signature-based technology have now been traded for higher rates of false positives. For example, if an AI product spends only two weeks training, it may not learn about monthly or quarterly events and could flag them as anomalies even though they are not malicious. AI products also suffer from false positives because there’s not enough clean, contextualized, and relevant data to train their learning models.
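
To see why a short training window produces those false alarms, consider a deliberately simple, hypothetical detector that baselines a metric over two weeks and flags anything several standard deviations away from it (the numbers and threshold below are illustrative assumptions, not any product’s model):

```python
from statistics import mean, stdev

def is_anomalous(baseline: list[float], observation: float, threshold: float = 3.0) -> bool:
    """Flag observations more than `threshold` standard deviations from the baseline mean."""
    mu, sigma = mean(baseline), stdev(baseline)
    if sigma == 0:
        return observation != mu
    return abs(observation - mu) / sigma > threshold

# Two weeks of ordinary nightly transfer volumes (in GB) used as the baseline.
two_week_baseline = [4.8, 5.1, 5.0, 4.9, 5.2, 5.0, 4.7,
                     5.1, 4.9, 5.0, 5.3, 4.8, 5.0, 5.1]

# A legitimate month-end backup moves far more data than the baseline ever
# saw, so the detector flags it -- a false positive, not a threat.
print(is_anomalous(two_week_baseline, 42.0))  # True
```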

Visibility: an investment in the future that pays dividends today

The limited availability of labeled or contextualized data has become further complicated by the lack of data standardization across systems and solutions: quality matters. Data storage limitations also mean that prioritizing quality data over sheer quantity results in lower costs.

What organizations really need is Ai: actual insight. Think of the lowercase “i” as a magnifying glass to peer more deeply, or a keyhole in a door to look into the network. Network monitoring tools are the key to unlocking the intelligence of artificial intelligence by offering contextualized data from a variety of sources, such as Active Directory, switches and routers, and NetFlow data.
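
As a rough illustration of what that contextualization might look like (a hypothetical sketch with made-up source records and field names, not any particular product’s pipeline), the value lies in joining a bare IP address from a flow record against directory and switch data to establish who and where the asset is:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class EnrichedAsset:
    """A flow endpoint enriched with directory and switch context."""
    ip: str
    hostname: Optional[str]      # from Active Directory computer records
    owner: Optional[str]         # from Active Directory
    switch_port: Optional[str]   # from switch MAC/ARP tables

# Hypothetical context pulled from AD and the switch fabric.
AD_RECORDS = {"10.0.4.17": {"hostname": "hr-laptop-042", "owner": "jdoe"}}
SWITCH_PORTS = {"10.0.4.17": "sw3-gi0/12"}

def enrich(flow_src_ip: str) -> EnrichedAsset:
    """Join a NetFlow source IP against directory and switch context."""
    ad = AD_RECORDS.get(flow_src_ip, {})
    return EnrichedAsset(
        ip=flow_src_ip,
        hostname=ad.get("hostname"),
        owner=ad.get("owner"),
        switch_port=SWITCH_PORTS.get(flow_src_ip),
    )

print(enrich("10.0.4.17"))   # full context: who and where
print(enrich("10.0.9.99"))   # no context -- a candidate unmanaged device
```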

Furthermore, products that deliver visibility into all cyber assets that connect to the network or enterprise offer an immediate ROI by discovering and classifying hitherto unseen, unmanaged devices. Since many organizations are already using AI in some capacity, enhanced visibility can increase accuracy, accelerate threat detection, and improve incident response today. Combining visibility with AI can even enhance prevention capabilities, such as by automatically mitigating threats with network access control (NAC) or extended detection and response (XDR) solutions.
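
To make that ROI concrete, discovery at its simplest is the delta between what is observed on the wire and what the organization thinks it manages. The sketch below uses invented MAC addresses and a print statement in place of a real NAC or XDR integration:

```python
# Hypothetical inventories: what the organization manages vs. what is
# actually observed communicating on the network (e.g., via DHCP or NetFlow).
managed_inventory = {"00:1a:2b:3c:4d:5e", "00:1a:2b:3c:4d:5f"}
observed_on_network = {"00:1a:2b:3c:4d:5e", "00:1a:2b:3c:4d:5f",
                       "ac:de:48:00:11:22"}

# The immediate payoff of visibility: the difference is the unmanaged estate.
unmanaged = observed_on_network - managed_inventory

for mac in unmanaged:
    # In a real deployment, a NAC or XDR integration would profile or
    # quarantine the device here; print() stands in for that hook.
    print(f"Unmanaged device discovered: {mac} -> queue for classification")
```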

We need to accept that big technology initiatives take time, often a decade or more, so we are still in the early stages of AI-powered cybersecurity. When security teams consider how they will optimize AI over the next few years, it’s important to think of data as the fuel for their engines. By investing in full visibility today, companies can ensure that they will deploy the most high-octane AI solutions possible.

Barry Mainz, chief executive officer, Forescout
