It’s been more than a year since 22-year-old airman Jack Teixeira, who served in the 102nd Intelligence Wing of the Massachusetts Air National Guard, was arrested by the FBI and charged with unauthorized retention and transmission of national defense information. He now faces up to 16 years in prison for sharing classified materials with his online friends on a Discord chat server popular with gamers.
In a similar case, just this past March, a former engineer at Google was arrested for uploading AI-related trade secrets from his laptop to the cloud — one of several recent instances in which a Chinese national has been charged with stealing valuable intellectual property, allegedly for the benefit of the Chinese government.
While the circumstances of these two cases are vastly different — a young enlisted airman and a senior software engineer — they both highlight a critical vulnerability in today's information age: insider threats.
Data losses incurred by trusted insiders represent one of today’s most critical security risks, with the average insider data loss event estimated at an astounding $15 million per incident.
Our recently released 2024 Data Exposure Report found that more than half of data loss events (55%) are the result of malicious action — whether it’s a disgruntled employee who intentionally leaks sensitive information to harm their company or a malicious insider who sells trade secrets to a competitor.
So, what exactly is the scope of the insider threat, and how are current workplace and economic environment pressures amplifying its impact?
Quantifying the insider threat
This year's report paints a bleak picture: insider threats are not only surging, but they’re evolving in unexpected ways, with respondents reporting a 28% increase in insider-driven data leaks since 2021. Even more alarming is the paradox revealed by the data: while nearly all surveyed organizations (99%) claim to have data leakage prevention solutions in place, a staggering 78% admit they’ve lost valuable data.
This disparity stems from a few interwoven factors. Most glaringly, the nature of insider threats has evolved, growing more complex with the broad adoption of new technologies and the shift toward more flexible work environments. In other words, the very tools and policies designed to enhance productivity and accommodate remote work can also increase the risk of data exposure.
Take, for instance, generative AI tools like ChatGPT. Are employees or third-party contractors inadvertently training these large language models using proprietary information and data? Would we even know if they were? While most enterprise organizations that let workers use tools like ChatGPT have usage guidelines and policies in place, 87% of security leaders surveyed expressed concern that their employees weren’t adhering to them.
Further, the findings suggest that while practically every company has invested in some form of data protection, the effectiveness of these solutions is undermined by a variety of shortcomings. These include the inability to adapt to the shifting tactics of malicious insiders, the underestimation of accidental data exposure by well-meaning employees, and — perhaps most concerning of all — the persistent shortage of skilled cybersecurity staff in the workforce. On this last point, nearly four in five (79%) cybersecurity leaders said the lack of skilled workers has made it more difficult to detect and respond to these evolving data exposure risks.
Four principles for mitigating insider threats
As with defenses against external attacks, managing insider threats requires a combination of purpose-built technologies and human-centric strategies. This dual approach acknowledges that technology alone cannot foresee or counteract every possible scenario of data misuse or leakage, or the employee motivations behind these incidents.
If the Pentagon, one of the most secure institutions in the world, can get hit by a rogue insider disseminating sensitive information, it’s fair to believe that such risks are inescapable. However, that doesn’t mean teams can’t reduce the threat opportunity. The following four principles are fundamental to building a holistic insider threat management strategy:
- It’s not possible to manage what the team can’t see: Understanding and managing insider threats begins with visibility. Despite the proliferation of data protection tools within organizations, there remains a significant need for these systems to deliver more centralized visibility into data movements and user behaviors. Eighty-eight percent of security professionals believe that their organization requires enhanced visibility, particularly into how source code gets shared and stored in repositories. Achieving this level of visibility becomes essential for identifying and mitigating potential insider threats before they escalate into full-blown security incidents.
- Security training isn’t just checking a box: Humans are often considered the weakest link in the security chain, making continuous education critical on what constitutes proprietary information and the basic principles of data protection. However, it’s not enough for employees to check a yearly training box — we don’t learn that way, and traditional security training methodologies are no longer sufficient. Innovative approaches, such as real-time interventions like pop-up warnings when an employee is about to perform a risky action, can encourage actual learning. This approach educates employees in the moment and builds a more security-conscious culture over time.
- Teams will use AI mainly to close the security skills gap: The cybersecurity field faces a pronounced skills gap, with organizations struggling to hire and retain skilled professionals. This challenge is compounded by the significant amount of time teams spend investigating potential insider threats — an average of three hours per day, according to research. To address it, an overwhelming 82% of respondents are looking to AI and automation. But this new adoption can either make or break organizations. As mentioned above, AI tools are only as good as the policies teams put in place to guide their use. With clear standards set to protect valuable IP, AI tools can pay dividends.
- Not all data is created equal: In an era where data gets generated at an unprecedented rate, understanding and prioritizing the protection of the most valuable data types has become vital. Organizations must recognize that not all data warrants the same level of security measures. According to survey respondents, the three most valuable types of data are accounting and financial information (47%), research data (45%), and source code (44%). These priorities highlight the need for targeted protection strategies that focus on safeguarding the most critical assets, ensuring that additional controls are in place to prevent their unauthorized access or exfiltration.
While security leaders are paying attention to the insider threat, most organizations still have a way to go toward fully protecting their data. Looking ahead, it’s clear that addressing insider threats requires action that’s informed, strategic, and adaptive to the evolving nature of these dynamic risks.
Joe Payne, president and CEO, Code42