
AI presents many challenges – but we have to face them


COMMENTARY: These days it goes without saying that AI brings both pros and cons. Chances are most of us have already read articles, attended trade show sessions, or tuned into webinars about the benefits and risks surrounding AI, especially when it falls into the wrong hands. But adversaries using AI for a leg up on security teams is only part of the equation.

AI has become a cornerstone of business processes and innovation, which amplifies the pressure on businesses to produce and scale while complicating the associated security risks. Though these risks are rooted in security, they ripple up and down the business and can cause serious damage, which is why they demand attention from decision makers across the organization. Business leaders are moving aggressively to adopt AI, and no security or AI governance team wants to stand in the way of these projects’ ROI – but in today’s AI era, success goes hand-in-hand with security.

Here are some of the important challenges today’s business leaders need to understand in confronting the rise of AI:

Lack of visibility

With dozens or even hundreds of new AI applications introduced daily, many organizations find it difficult even to determine how many exist; estimates vary from source to source. There are simply too many new AI apps to keep track of, and security teams cannot feasibly research every new one and craft appropriate policies in the time available.

Even if all AI apps are accounted for, security and IT teams don’t have enough visibility into the prompts and responses in AI conversations to adequately monitor how users share data within the apps. Understanding which interactions are risky takes conversation-level visibility into each interaction, combined with context at the network and device level. That’s not an easy task, given the myriad new ways AI-driven tools interact and communicate with users, data, and one another.
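To make the idea of combining conversation-level visibility with network and device context concrete, here is a minimal, hypothetical sketch in Python. The Interaction fields, the SENSITIVE_MARKERS list, and the risk_score function are illustrative stand-ins, not any vendor’s actual API:

```python
from dataclasses import dataclass

# Hypothetical structure: in practice, fields like these would be populated
# from proxy logs, endpoint agents, and the AI app's prompt/response stream.
@dataclass
class Interaction:
    app_name: str          # which AI app handled the conversation
    prompt: str            # what the user sent
    response: str          # what the model returned
    device_managed: bool   # device-level context
    user_department: str   # organizational context

SENSITIVE_MARKERS = ("ssn", "api_key", "customer list", "source code")

def risk_score(event: Interaction) -> int:
    """Toy scoring: conversation content alone isn't enough; the same prompt
    is riskier from an unmanaged device or a department handling regulated data."""
    score = 0
    text = (event.prompt + " " + event.response).lower()
    if any(marker in text for marker in SENSITIVE_MARKERS):
        score += 50   # conversation-level signal
    if not event.device_managed:
        score += 30   # device-level context
    if event.user_department in {"finance", "legal"}:
        score += 20   # organizational context
    return score

# The same prompt scores differently depending on context.
risky = Interaction("UnknownChatApp", "summarize this customer list", "...",
                    device_managed=False, user_department="finance")
print(risk_score(risky))  # 100 -> flag the interaction or coach the user
```

The point of the sketch is simply that no single signal is decisive; it is the combination of what was said in the conversation and where it happened that separates routine use from a risky interaction.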

As such, over a quarter of organizations have resorted to blocking AI access in an attempt to eliminate the risk. There are a couple of reasons this isn’t effective. For one, blocking AI access is really only an attempt at blocking it: executives underestimate the actual extent of employees’ use of third-party, and often unsanctioned, AI apps by as much as 300%. One in three employees report paying out of pocket for at least one AI app, which shows how prevalent these tools are, sanctioned or not, and creates the hidden risk known as “shadow AI.”

On the other hand, even if an organization could successfully block all AI access, it would put itself at an innovation disadvantage behind counterparts exploring new possibilities. Experts from the World Economic Forum to high-profile business figures like Mark Cuban urge today’s businesses to take the AI revolution seriously or risk falling behind. Just as disconnecting from the internet wasn’t a feasible way to manage the security threats that emerged in the dot-com era, attempting to block all AI activity isn’t feasible today. Instead, organizations need ways to safely allow more AI activity.

Tools that don’t offer enough protection

In a sense, the massive scale of AI adoption has ushered in a shift in which even current security tools have become legacy ones, because, new or old, none were designed to meet the realities of the AI era. Existing tools cannot offer effective and accurate security for AI application use because they weren’t created to handle the unique data flows, usage patterns, and risks that come with AI apps.

Existing tools typically rely on predefined rules and data classification methods, which can’t keep up with the dynamic ways AI tools interact with users, data, and other AI apps. The challenge grows every day as new AI apps emerge and existing ones evolve. These apps give users strong incentives to interact with them, and that ease of use lets new threats, such as malware or malicious code embedded in responses, make their way into networks. And as organizations look toward integrating AI agents into business operations, a whole new frontier of threats and potential data exfiltration channels opens up. The ways AI tools interact grow more complex by the day, and organizations need AI-native approaches that deliver visibility and control while also factoring in nuanced context to keep users, the organization, and its data protected.

Overloaded operations teams

Security operations teams have been overloaded for years, and AI brings a lot of new and uncertain risks and threats. These teams face growing pressure to demonstrate safe and secure AI adoption, but the visibility and security problems are getting in their way.

Our security teams cannot afford to spend hours upon hours researching, inventorying, and crafting policies for every new AI app, especially given that these apps are constantly evolving. Features change, terms and conditions change, data retention policies change – it’s just not feasible. As soon as one app is finished, five more appear and the first has already been updated. At the same time, we also know that without visibility into AI activity in the business, the team flies blind to potential risks. The visibility problem has become a seemingly impossible game of whack-a-mole, and operations teams aren’t equipped to win.

Even if security teams can see the activity in an AI application, existing security tools can only apply all-or-nothing access policies. This all-or-nothing “hammer” approach to security policy doesn’t let users leverage AI tools to boost their productivity, and it slows people, teams, and businesses down. Plus, crafty users will still find ways to use AI tools, often by seeking out lesser-known and even riskier new AI apps.

In other words, security teams can entirely block AI applications, but the hammer approach to AI security policy will not work in the long-term. AI is here to stay and it demands more nuanced policies.
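To illustrate the difference between the hammer approach and more nuanced policy, here is a simplified, hypothetical comparison. The POLICIES table, app names, and activity categories are invented for illustration and are not drawn from any specific product:

```python
# Hypothetical policy tables: "legacy" decides per app (all-or-nothing),
# while "granular" decides per app *and* per activity.
POLICIES = {
    "legacy": {"UnknownChatApp": "block", "ApprovedAssistant": "allow"},
    "granular": {
        ("ApprovedAssistant", "prompt"): "allow",
        ("ApprovedAssistant", "upload_file"): "warn_and_log",
        ("UnknownChatApp", "prompt"): "allow_redacted",
        ("UnknownChatApp", "upload_file"): "block",
    },
}

def decide(mode: str, app: str, activity: str) -> str:
    """Return the action for a given app and activity under each policy style."""
    if mode == "legacy":
        return POLICIES["legacy"].get(app, "block")
    return POLICIES["granular"].get((app, activity), "block")

print(decide("legacy", "UnknownChatApp", "prompt"))    # block -> lost productivity
print(decide("granular", "UnknownChatApp", "prompt"))  # allow_redacted -> safe use
```

The granular table is the kind of nuance the column argues for: the same app can be allowed for low-risk activity while higher-risk actions are redacted, logged, or blocked.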

The road ahead

So are security operations teams out of luck, destined for a never-ending game of catch up? Do businesses need to just accept security and efficiency as mutually exclusive tradeoffs? I don’t think so.

During my years at Netskope, Palo Alto Networks, and Zscaler, I met some of the brightest minds in the industry, and we have been working together extensively to crack these problems. We determined that it boils down to this: organizations need breadth and depth of visibility, plus highly effective threat prevention and data security, to control AI activity.

By “slowing down” to embrace this mindset, businesses can safely accelerate into this new AI-powered future rather than grinding to an AI-generated halt.

Moinul Khan, co-founder and CEO, Aurascape

SC Media Perspectives columns are written by a trusted community of SC Media cybersecurity subject matter experts. Each contribution has a goal of bringing a unique voice to important cybersecurity topics. Content strives to be of the highest quality, objective and non-commercial.
