COMMENTARY: As a cybersecurity technology vendor, practically every customer asks us if our products are using artificial intelligence (AI).
Naturally, potential customers ask this question as a litmus test for whether our company has been innovating. Unfortunately, it’s remarkably easy to mislead customers with a simple “yes,” even when AI does not meaningfully contribute to cybersecurity processes or drive unsupervised automation.
Many of the latest detection and response products rely on AI technologies. We’re talking about all the "DRs," including endpoint detection and response (EDR), network detection and response (NDR), extended detection and response (XDR), and managed detection and response (MDR). These products can use machine learning (ML) and deep learning to spot anomalous behaviors that indicate a potential threat or attack within an environment.
GenAI needs more time to mature
Since Generative AI (GenAI) in the form of tools like ChatGPT has been publicly available for less than two years, the technology hasn’t had the same time to develop and evolve as ML and deep learning. While other AI technologies have largely focused on learning from large data sets, GenAI focuses on creating written, visual, and audio content from prompts or data. Without the benefit of time to separate the valid use cases from the invalid ones, and given the broad public exposure, this technology has been in a hype phase. We can anticipate an accelerated path into the “trough of disillusionment” followed by proven use cases in short order, but we’ll have to see what happens next.
As for GenAI’s use in cybersecurity, it’s still early days, and the industry has been exploring how the technology can improve threat detection and remediation. Some natural security paths for IT teams to explore with GenAI include:
- Email: There’s potentially an answer here to the age-old problem of blocking a phishing attempt before it lands in the inbox of its intended recipient. Today’s products rely heavily on employees to find and report phishing messages. GenAI-based products trained to spot anomalies in written language and email addresses could dramatically reduce the impact of spam and phishing. Unfortunately, hackers are also using GenAI to improve the quality of their messages, so we’ll lose some easy ways to identify a phish, too.
- Identity: Cybercriminals now have tools to help them mimic other people, including imitating their voice, image, and writing style. The GenAI tools often don’t get it exactly right, introducing hallucinations and telltale artifacts. So security products can themselves use GenAI to illuminate the details that don’t match the actual person. This lets security platforms separate and block GenAI-based attacks, and it can contribute new factors to help authenticate users.
- Reporting: GenAI can now create customized reports efficiently. Imagine that with a few prompts, the user gets a draft of a custom report showing security protocol compliance and effectiveness, and can respond to requests from CSOs and MSP clients in minutes. With today’s GenAI capabilities, the first draft isn’t yet good enough to fully automate, and it takes a human to review and revise, but it can speed up that work for IT and security staff.
- Enhanced security analyst assistants: These are some of the earliest use cases, where GenAI tools help summarize an incident or security finding, converting the tech speak into more accessible descriptions and recommending actions. Teams could adapt these assistants in the other direction as well, letting IT pros enter prompts that ask for security policy suggestions to upgrade security platform configurations and strengthen their overall security posture. This can enable a fast response to an emerging threat or outbreak when it makes headlines in the news.
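As a concrete illustration of the analyst-assistant idea above, the sketch below shows how a raw security finding might be assembled into a plain-language summary request for a GenAI model. This is a hypothetical example, not any vendor’s actual implementation: the incident fields, the `summarize_incident` helper, and the `llm` callable are all assumptions, and the model call itself is stubbed out because the real API depends on the product.

```python
# Hypothetical sketch of a GenAI security-analyst assistant.
# It turns a technical finding into a prompt asking for an accessible
# summary; the GenAI call is stubbed out since the API is vendor-specific.

def build_incident_prompt(incident: dict) -> str:
    """Assemble a prompt asking a GenAI model to translate a technical
    finding into plain language with recommended next steps."""
    return (
        "You are a security analyst assistant. Summarize the incident below "
        "in plain language for a non-technical audience, then recommend "
        "two or three concrete next steps.\n\n"
        f"Host: {incident['host']}\n"
        f"Detection: {incident['detection']}\n"
        f"Severity: {incident['severity']}\n"
    )

def summarize_incident(incident: dict, llm=None) -> str:
    """Send the prompt to a GenAI model. `llm` is a placeholder for a real
    vendor API client; without one, return the draft prompt for a human."""
    prompt = build_incident_prompt(incident)
    if llm is None:
        return prompt  # no model wired up: hand the prompt to a human
    return llm(prompt)  # hypothetical callable wrapping the vendor's API

# Example finding an EDR product might surface (made-up data).
example = {
    "host": "finance-ws-042",
    "detection": "PowerShell spawned by Excel, outbound beacon pattern",
    "severity": "high",
}
print(summarize_incident(example))
```

As the column notes, today’s first drafts still need human review, so a design like this would keep an analyst in the loop rather than acting on the model’s output automatically.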
There’s a bright future ahead for GenAI, and many in the security industry are exploring ways to leverage it to deliver stronger protection. Like most companies, we are piloting GenAI use cases and finding pockets of promise now and potential for future expansion. The immediate uses are tied more to internal efficiency gains in coding, customer support, and sales/marketing content creation than to in-product integrations. Cybersecurity requires a high level of predictability to support customer needs, and GenAI needs a little more time before it will meet those standards.
While GenAI isn’t going to reinvent cybersecurity products overnight, don’t overlook the substantive positive impacts coming from the more established AI and ML technologies that are accelerating critical modern cyber defenses. As threat actors use similar technologies to speed up and strengthen their attacks, AI-powered threat detection and response capabilities are a must.
Tracy Hillstrom, vice president, brand and content marketing, WatchGuard Technologies
SC Media Perspectives columns are written by a trusted community of SC Media cybersecurity subject matter experts. Each contribution has a goal of bringing a unique voice to important cybersecurity topics. Content strives to be of the highest quality, objective and non-commercial.