
CISOs on AI: 7 key takeaways from security leaders at Elastic and Drata

Advancements in generative AI have created unprecedented challenges for CISOs, increasing the complexity of enterprise environments and helping malicious actors create more sophisticated attacks.

Security leaders now face the daunting task of mastering this evolving technology and incorporating AI into their defense strategies, all while ensuring the security and privacy of AI tools.

Data confirms that these challenges are top of mind for CISOs. In Tines’ survey on AI adoption, 94% reported feeling concerned that AI will increase pressure on their teams.

But how are leading CISOs approaching AI today? And do they feel satisfied or underwhelmed by AI’s impact so far?

I recently posed these questions to Mandy Andress, CISO of Elastic, and Matt Hillary, VP of Security and CISO of Drata, during a webinar titled "How to make AI an accelerator, not a blocker."

I came away from that conversation with seven key takeaways, which provide great food for thought for forward-thinking security leaders.

1. Security teams are already feeling the benefits of Gen AI

Both Andress and Hillary told me that AI is helping their teams reduce repetitive and manual tasks, like responding to large volumes of security alerts.

Elastic’s Mandy Andress said, “We could automate bringing in asset data, owner data, application criticality to the business, IoCs, etc. using today's tools. But what we couldn't always do was tie that into what's happening in the threat environment around us, because that's always changing. Having some of that accessible via an LLM allows you to apply better context in a world that's changing quickly.”

2. Ensuring the security and privacy of AI tools is a top priority

During the webinar, we talked a lot about the risks of limited visibility into the “black box” of AI.

But as CISO of the leading platform for search-powered solutions, Andress is encouraged to see teams prioritizing the security and privacy of AI tools. “I see a desire for more transparency in the AI space,” she says. “From a product perspective, it's about being explicit and letting customers use what works best and what's approved by them and helps their environment. It’s not us dictating what needs to be there.”

3. A cross-functional AI committee can help organizations proactively address risk

To help govern AI usage, both Andress and Hillary suggest forming cross-functional AI committees.

Andress explains, “It's representation from technology, from security, from legal, compliance, business, and bringing all of those perspectives together. I think some companies will put accountability on a Chief AI Officer, but they'll still bring together these same groups to understand what we need to watch out for, and our ideas for utilizing AI in the business.”

4. AI helps bad actors with phishing (but not much else)

While Andress and Hillary have concerns about AI, they haven’t seen it significantly affect cybercriminal tactics.

Hillary explained that, while malicious actors use AI, human creativity remains the biggest threat. “There are still humans behind these [phishing emails and deep fakes], creating content, creating misinformation. I think they will have a much greater impact on what might hurt us in the long term.”

5. The pressure to leverage AI is real

Security leaders are facing pressure to adopt AI, often coupled with a limited understanding of how complex that process may be. Executives hope AI will "supercharge" their organizations, while employees are eager to use AI in their roles.

As CISO of security and compliance automation platform Drata, Hillary is conscious of the impact of this added pressure. “AI has added a whole new domain to the already extensive list of things that CISOs have to worry about today,” he says. “There’s lots of additional domain-level knowledge that we'll need to increase on our teams and individually.”

6. CISOs are hopeful about AI’s potential in security 

When asked about the future of AI, both security leaders hoped AI could become the connective tissue between the tools (76 on average) used to protect their environments.

Hillary says, “One thing I haven't seen yet, but I'm excited for as a CISO, is asking an LLM, ‘How is my posture? What are the things that are exposed today that weren't yesterday? Give me a dossier of my own perimeter that a hacker might use to come at me.’”

Products like Tines Workbench, an AI chat interface that allows practitioners to access proprietary data and take action in real time, are already helping teams achieve this level of efficiency.

7. Maintaining human oversight is critical to a security team’s success

Both CISOs are eager to find the right balance between leveraging AI and keeping humans in the loop.

“I've always been biased towards the automation of problems,” Hillary says. “But you still have humans to herd the bots, right? The creativity, the inspiration, the thinking out of the box, all the things that we bring as humans, I don't think they’re going to be materially replaceable. But AI is going to increase our capability and capacity on the automation side, more than I think we've seen before.”

To learn more about how CISOs are approaching AI, read the full results of Tines’ survey.

Written by Thomas Kinsella

