
RSAC advice: Five questions to ask vendors about AI claims in their products

The business world has been swept up in AI fever — and we can expect it on full display this week at RSAC. In nearly every boardroom and C-suite across corporate America, there’s discussion of how to best use these new AI tools. And in almost every company, employees are downloading AI apps to figure out how the technology might help with their most mundane tasks, while in-house developers scour the web for the best new data libraries to build around.

Companies are responding to a market frenzy. From big players to scrappy niche vendors, business software makers race to market with new AI-based tools and features. IDC expects enterprise spending on AI to jump 27% this year to $154 billion.

Organizations should proceed with caution. As companies race to adopt AI technology in its various forms, they risk bypassing critical security steps that sooner or later could expose them to devastating hacks. That’s because many of the new AI tools are built on open-source infrastructure or data repositories, which may require an entirely different defensive strategy than the proprietary tools organizations have relied on for the past few decades.

It’s more crucial than ever for CIOs, CISOs and other in-house tech leaders to put a process in place so security professionals can validate the libraries or platforms many of these AI programs are based on. 

Open isn't always good    

With a simple Google search, it’s possible to find an AI-backed product to help with virtually every common business task. Some are free to try out; with a swipe of a credit card, employees are up and running with the full software.

It’s not a new problem. So-called “shadow IT” — software and devices that workers use without their employer’s knowledge — has been a thorn in the side of CISOs and CIOs for years. But AI can potentially make the problem much worse. Unlike most of the big-box software in use by organizations today, modern AI tools are increasingly built on open source architecture.

As part of this open-source wave, there are already legions of data libraries available online. And that number only grows as companies like OpenAI release their own data sets for developers to build on.

Open source is a powerful tool. But there are risks. As we’ve seen, some bad actors are targeting open platforms. And the SolarWinds hack a few years back, in which thousands of networks were compromised, shows the damage IT supply chain breaches can cause. That’s why, from a security standpoint, we’re so concerned about the AI gold rush: the more open AI platforms an enterprise adopts, the more it exposes itself to a potentially catastrophic IT supply chain breach.

Fortunately, there are steps that security leaders can take to help continuously vet open source tools for any potential vulnerabilities. 

Shine a light in the shadows

It’s more important than ever for businesses to do their homework and thoroughly research any vendor that could ultimately provide IT services for the enterprise. But it’s also important that CISOs and their teams are constantly made aware of the tools that employees are seeking to use.

Starting now, security teams should work closely with their colleagues in development to qualify the vendors under consideration and ascertain the security protocols being used to protect open-source libraries.

Once the in-house IT team knows whether the repositories are secure, they can begin to craft access guidelines that would let employees download the apps of their choice or begin using certain libraries to help power machine learning algorithms.    
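
As a purely illustrative sketch of what such access guidelines could look like in practice, the snippet below encodes an approved-library policy and checks a requested dependency against it. The package names, versions, and policy structure are hypothetical, not a recommendation of specific tools:

```python
# Hypothetical allow-list of open-source libraries the security team has vetted,
# with the minimum version that passed review.
APPROVED_LIBRARIES = {
    "transformers": "4.30.0",
    "scikit-learn": "1.3.0",
}

def parse_version(version: str) -> tuple:
    """Naive numeric version parsing for illustration; real checks should use a proper version parser."""
    return tuple(int(part) for part in version.split("."))

def is_request_approved(package: str, version: str) -> bool:
    """Return True if a requested library and version fall inside the policy."""
    minimum = APPROVED_LIBRARIES.get(package)
    if minimum is None:
        return False  # Unknown library: route it to the vetting process instead.
    return parse_version(version) >= parse_version(minimum)

if __name__ == "__main__":
    print(is_request_approved("transformers", "4.31.0"))     # True
    print(is_request_approved("unvetted-model-lib", "0.1"))  # False
```

In practice, logic like this would sit inside the dependency-request workflow, so anything outside the policy gets routed back to the vetting process rather than silently adopted.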

Score the vendors  

But that doesn’t mean workers should then rush to try out every available tool. It’s still important for employees and security professionals alike to weigh the value that the software might bring against the potential threat it could pose.

Vendor scorecards can play a powerful role in assessing potential threats. Taking the time to benchmark IT providers against one another can arm enterprises with the information they need to decide which providers to hire. Such benchmarking has already become standard practice for any responsible enterprise IT team. But AI has created an entirely new open-source ecosystem of potential vendors and partners that security teams must manage. Businesses should document answers to the following five questions, scored against a simple rubric like the sketch after the list:

  • What development methodology did this vendor use?
  • Did the vendor conduct sufficient code analysis?
  • Does the vendor have dynamic scanning enabled, which would help detect abnormalities?
  • What process does the vendor have to remediate any vulnerabilities that are found?
  • Does the vendor have systems in place to understand the impact on its products in the event of a supply chain hack?
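
To make that benchmarking concrete, here is a minimal scorecard sketch that records answers to the five questions above as 0-to-5 ratings and rolls them into a weighted score. The weights and the example vendor are hypothetical and would need tuning to each organization’s own risk model:

```python
from dataclasses import dataclass

@dataclass
class VendorScorecard:
    """Answers to the five questions above, rated 0 (poor) to 5 (strong)."""
    name: str
    development_methodology: int
    code_analysis: int
    dynamic_scanning: int
    vulnerability_remediation: int
    supply_chain_impact_visibility: int

    # Hypothetical weights: remediation and supply chain visibility count double.
    WEIGHTS = (1, 1, 1, 2, 2)

    def weighted_score(self) -> float:
        ratings = (
            self.development_methodology,
            self.code_analysis,
            self.dynamic_scanning,
            self.vulnerability_remediation,
            self.supply_chain_impact_visibility,
        )
        return sum(w * r for w, r in zip(self.WEIGHTS, ratings)) / sum(self.WEIGHTS)

if __name__ == "__main__":
    vendor = VendorScorecard("ExampleAIVendor", 4, 3, 5, 2, 3)
    print(f"{vendor.name}: {vendor.weighted_score():.1f} / 5")  # 3.1 / 5
```

Weighting remediation and supply chain visibility more heavily is one possible judgment call; the point is simply to make vendor comparisons repeatable and documented.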

Once that’s finished, in-house IT teams can then decide whether to greenlight the vendor as a trusted entity. But the job doesn’t stop there. As more open source tools get deployed, it’s imperative that security teams are constantly monitoring their applications for unknown code or potential security breaches. 
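
As one illustrative approach to that ongoing monitoring (the directory path and baseline manifest below are hypothetical), a team might periodically hash deployed open-source components and compare them against a manifest captured when the vendor was approved, flagging any file that has changed or appeared since:

```python
import hashlib
import json
from pathlib import Path

def hash_file(path: Path) -> str:
    """SHA-256 digest of a file's contents."""
    return hashlib.sha256(path.read_bytes()).hexdigest()

def find_unknown_code(component_dir: str, baseline_manifest: str) -> list:
    """Return files whose hashes differ from, or are missing in, the baseline."""
    baseline = json.loads(Path(baseline_manifest).read_text())
    findings = []
    for path in sorted(Path(component_dir).rglob("*.py")):
        if baseline.get(str(path)) != hash_file(path):
            findings.append(str(path))
    return findings

if __name__ == "__main__":
    # Hypothetical locations: a vendored-library directory and a manifest of
    # approved hashes captured during the vendor-approval step.
    for finding in find_unknown_code("vendor_libs", "approved_hashes.json"):
        print("Unknown or modified code:", finding)
```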

Luckily, AI can give security teams some assistance with this work, since much of the daily monitoring can now be automated. That lets analysts spend more time protecting next-generation AI software.

While we understand the intense excitement around AI, we have to match it with heightened scrutiny. Businesses must tune out the hype to understand both the value the software can truly provide and the risks associated with adopting it. Otherwise, instead of benefiting from the AI gold rush, they could find themselves scrambling to secure their systems against a new wave of AI-opportunistic hackers. For those going to RSAC: stay skeptical, and keep these five questions and the points we’ve discussed in mind while meeting with vendors this week.

Indu Peddibhotla, vice president for products, Commvault
