
‘Shadow AI’ on the rise; sensitive data input by workers up 156%


AI use in the workplace is growing rapidly, and workers are inputting sensitive data into chatbots like ChatGPT and Gemini more than twice as often as they did last year, a new report by Cyberhaven revealed.

"The AI Adoption and Risk Report" published Tuesday also noted growth in “shadow AI” — workplace use of AI tools on personal accounts that may not have the same safeguards as corporate accounts. Without visibility and control over the use of shadow AI by employees, organizations may be unaware of and unable to stop the exposure of confidential employee, customer and business information.

As employee AI use booms, sensitive data input more than doubles

The Cyberhaven report is based on an analysis of AI usage patterns from more than 3 million workers. Overall, the amount of corporate data workers input into AI tools increased by 485% between March 2023 and March 2024, according to Cyberhaven. The vast majority of this use — 96% — involved tools from OpenAI, Google and Microsoft.

Employees at technology companies were the heaviest users of AI tools, sending data to chatbots more than 2 million times per 100,000 employees and copying AI-generated content more than 1.6 million times per 100,000 employees.

Of the data employees submitted to chatbots, 27.4% was sensitive, up from 10.7% a year earlier, a 156% relative increase in the rate (the arithmetic is sketched after the breakdown below). The most common type of sensitive data submitted was customer support data, which made up 16.3% of sensitive inputs.

In addition to customer support data, sensitive data fed into chatbots also included source code (12.7%), research and development data (10.8%) and more.
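
The 156% figure is a relative increase in the rate of sensitive inputs, not a percentage-point change. A minimal sketch of the arithmetic, using the report’s two data points:

```python
# Share of chatbot inputs classified as sensitive, per the Cyberhaven report
rate_2023 = 10.7  # percent of inputs, March 2023
rate_2024 = 27.4  # percent of inputs, March 2024

# Relative increase in the sensitive-data rate, as a percentage
relative_increase = (rate_2024 - rate_2023) / rate_2023 * 100
print(f"{relative_increase:.0f}% increase")  # prints "156% increase"
```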

Shadow AI poses confidentiality risks

The “big three” AI tool providers all offer enterprise AI solutions with stronger security and privacy protections, such as not using customer inputs for further model training. However, Cyberhaven’s analysis found that the vast majority of workplace AI use occurs on personal accounts without these guardrails, constituting shadow AI within organizations.

For ChatGPT, 73.8% of employee use was on personal accounts; for Google tools, personal-account usage was even higher. Prior to its rebranding as Gemini in February 2024, Google Bard was used in the workplace on personal accounts 95.9% of the time. After Gemini’s release, personal-account use remained nearly as high, at 94.4%.
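
These shares come from Cyberhaven’s analysis of employee usage patterns. As a rough, hypothetical illustration of how a personal-account share might be tallied, the sketch below classifies accounts by email domain; the event format and the `CORPORATE_DOMAINS` allow-list are assumptions for illustration, not Cyberhaven’s actual methodology:

```python
from collections import Counter

# Hypothetical usage events as (tool, signed-in account email) pairs.
# This format is an assumption for illustration only.
events = [
    ("ChatGPT", "alice@acme-corp.com"),
    ("ChatGPT", "alice@gmail.com"),
    ("Gemini", "bob@yahoo.com"),
    ("Gemini", "carol@acme-corp.com"),
]

CORPORATE_DOMAINS = {"acme-corp.com"}  # assumed set of managed domains

def is_personal(email: str) -> bool:
    """Treat any account outside the managed corporate domains as personal."""
    return email.rsplit("@", 1)[-1] not in CORPORATE_DOMAINS

personal, total = Counter(), Counter()
for tool, email in events:
    total[tool] += 1
    personal[tool] += is_personal(email)

for tool in total:
    pct = 100 * personal[tool] / total[tool]
    print(f"{tool}: {pct:.0f}% of use on personal accounts")
```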

Among the riskiest uses of shadow AI was the submission of legal documents: although these made up only 2.4% of the sensitive data inputs tracked, 82.8% of them went to personal accounts, increasing the risk of public exposure. Additionally, about half of source code uploads went to personal accounts, along with 55.3% of research and development materials and 49% of employee and human resources records.

Why employee AI use matters

Sending sensitive company information to AI tools not only risks feeding that information to the models, or potentially to other third parties via plugins, but also risks exposure through AI tool vulnerabilities and breaches.

For example, about 225,000 sets of OpenAI credentials were obtained by threat actors using infostealers and sold on the dark web last year, Group-IB found. If a worker has sent confidential information to ChatGPT, that information is up for grabs if their account credentials are compromised.

Additionally, researchers at Salt Security discovered vulnerabilities in ChatGPT and several third-party plugins last year that could give threat actors access to users’ conversations and GitHub repositories.

Unsafe use of AI-generated content in the workplace is also a concern. The Cyberhaven study found that 3.4% of R&D materials produced in March 2024 were AI-generated, along with 3.2% of new source code insertions. Using generative AI in these areas, especially tools not specifically designed as coding copilots, raises the risk of introducing vulnerabilities or incorporating patent-protected material.
