Generative AI

CIOs and CISOs need a common strategy around AI copilots


COMMENTARY: More AI copilots, more problems?

While the massive uptick in AI copilots integrated into Software-as-a-Service (SaaS) platforms has been a game-changer for productivity and the user experience, this sea change has not been without its challenges.

It’s now been a few years since the generative AI (GenAI) explosion, and everyone can attest to the technology’s value. The floodgates have opened, and there’s no going back: Across the enterprise, everyone from the board, to the C-level, to the general workforce has become accustomed to the efficiency boost that AI offers.


Unsurprisingly, organizations are eager to leverage the AI copilots built into their existing SaaS platforms and to adopt new AI-powered tools. While AI adoption has been important for staying competitive in today’s business landscape, the ubiquity of AI copilots has created a tug-of-war between CIOs and CISOs.

CIOs want to empower employees with AI-powered tools to help them work more efficiently, while CISOs are painfully aware of the security gaps these products create and have an obligation to protect the organization’s data. As companies increasingly accumulate tools with AI copilots, this clash of interests will continue to play out, unless organizations address it proactively.

Let’s take a closer look at the security implications of AI copilots, and how companies can bridge the gap between CIOs and CISOs to reap the benefits of AI safely.

The data access problem

AI copilots require a level of data access that simply wasn’t needed before. For example, an HR software AI copilot needs access to an organization’s employee and benefits information, payroll data, and performance records. An AI code completion copilot needs access to an organization’s existing code, project-specific code, external code libraries, and API documentation. Although necessary, this level of data access creates two potential security risks.

The first: an organization's employees could inadvertently gain access to sensitive data when using an AI copilot. A well-known example of this was when Microsoft’s Copilot gave employees access to information that should’ve been restricted, including C-level executives’ emails and classified HR documents. AI copilots’ ultra-powerful search capabilities are a double-edged sword: They save employees a huge amount of time by quickly providing information and answering questions, but without proper restrictions in place, employees can uncover information they shouldn’t be able to access.

The second risk: users may share sensitive information with AI copilots that could then leak outside of the application. This happened in 2023 when Samsung employees accidentally leaked sensitive code via ChatGPT, causing the company to ban the use of GenAI tools altogether to avoid similar breaches. Since that incident, GenAI has become vital for companies to stay competitive, and banning the technology is no longer a viable strategy. Now more than ever, companies need ways to use AI copilots safely so as not to hinder productivity and innovation.

Using AI copilots safely

CISOs are under immense pressure to figure out how to implement AI copilots without creating new security gaps—fast. Employees want to use them, there’s urgency to be more efficient coming from the top down, and companies want to use the software they’re already paying for to its fullest extent, which includes turning on AI copilots.

So, what’s the solution? Using AI copilots safely hinges on collaboration between CIOs, CISOs, and data governance teams. Security must become a shared responsibility across these roles, and they will also need to pull in other teams from finance, HR, engineering, and other areas of the business as needed.

Governing access to data is these teams’ most important function. Many organizations are dealing with overprivileged access because, until now, there has never been a need to restrict data this tightly. Pulling in top stakeholders from other parts of the business can help CIOs and CISOs determine where sensitive data resides and, importantly, who should and should not have access to it.

Additionally, this data must be clean and high-quality so copilots produce accurate answers. For instance, it’s not helpful for an AI copilot to summarize a 10-year-old HR document for a new employee. Managing data governance and data hygiene isn’t easy, especially considering that today’s organizations are contending with and generating more data than at any other time in history. Businesses need intelligent, automated solutions for data governance and hygiene so they can start taking advantage of the perks of AI copilots.

There’s no such thing as 100% productivity or 100% security when it comes to AI copilots. It’s all about striking the right balance between risk and innovation. By bringing together CIOs, CISOs, and data governance teams—and empowering them with intelligent, automated tools to make their jobs easier—organizations can safely deploy AI copilots and enjoy all the benefits they promise.

Jack Berkowitz, chief data officer, Securiti

SC Media Perspectives columns are written by a trusted community of SC Media cybersecurity subject matter experts. Each contribution has a goal of bringing a unique voice to important cybersecurity topics. Content strives to be of the highest quality, objective and non-commercial.
