The focus of enterprise information security teams continues to shift as organizations become more aware of the value of their information assets and of the potential impact on the business should those assets be compromised.
That impact can be characterized and quantified in situations such as a compliance violation, loss of intellectual property to a competitor, or a tarnished brand.
This recognition of the value of corporate information assets, combined with a steady stream of headlines about data breaches, has brought information security to the forefront of enterprise concerns.
To address these challenges, numerous data loss prevention (DLP) solutions have surfaced, each providing techniques for protecting data in motion (data traversing protected points in the network), data at rest (data in servers, repositories, and workstations), and data in use (data users interact with on workstations).
These solutions each do an outstanding job of protecting information assets that obviously require protection – assets that have an easily defined structure, such as credit card numbers, social security numbers, and personally identifiable information (PII).
Many organizations that have implemented DLP as a means of protecting information start with an immediate “pain-point,” generally compliance or acceptable use, that involves protection of simple, easily classified and detected information.
Other organizations have begun using DLP to protect more complex information – high-business-impact (HBI) information such as intellectual property – that is amorphous and far harder to define than a Social Security or credit card number. Organizations that began with compliance or acceptable-use guidelines are increasingly moving into broader coverage of HBI, and almost all of them are finding that one of the biggest hurdles with DLP is knowing what needs protection.
Learning applications that add a layer of multi-dimensional intelligence to DLP have emerged to address this key challenge: identifying what HBI data is, who is using it, whom it should or should not be going to, and how it should get there.
These applications rely on basic components – information capture, indexing, data mining, and analytics – and enable users to "let the data tell its story": to see who has been using which data, where it has gone, how it was used, and how it was constructed.
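To make this concrete, the sketch below shows what such a query might look like in simplified form: a small set of captured events is filtered by a keyword, and the results are summarized to show who sent matching content and where it went. The event structure and field names are illustrative assumptions, not the schema of any particular DLP product.

```python
from collections import Counter

# Hypothetical captured-event records; a real capture store would be
# indexed and queried at scale, not held in a Python list.
events = [
    {"sender": "alice@corp.com", "recipients": ["vendor@ext.com"],
     "channel": "email", "content": "Project Falcon design spec v3"},
    {"sender": "bob@corp.com", "recipients": ["carol@corp.com"],
     "channel": "fileshare", "content": "Q3 sales forecast"},
    {"sender": "alice@corp.com", "recipients": ["dave@corp.com"],
     "channel": "email", "content": "Falcon schematics draft"},
]

def query(events, keyword):
    """Return every captured event whose content mentions the keyword."""
    return [e for e in events if keyword.lower() in e["content"].lower()]

matches = query(events, "falcon")
senders = Counter(e["sender"] for e in matches)          # who used it
destinations = Counter(r for e in matches for r in e["recipients"])  # where it went

print("Who used it:", senders)
print("Where it went:", destinations)
```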
This level of visibility gives DLP users a head start in determining what needs protection, how it should be protected, and what the underlying business process is – and it greatly reduces the time spent planning, implementing, and tuning the system. Time is precious: the sooner security measures are in place, the shorter the window during which data can be exposed or leaked.
A customer using a DLP solution that included learning capabilities could, for example, query the historical data to find out who worked with HBI related to a confidential project or new product, who received the information, and what its construction was – even though the system had no rules configured for these specific items.
Such information could take the user 95 percent of the way to accurately defining what information is sensitive and what the business process is – all by starting only with an idea of what he or she was looking to protect. The customer could quickly add structure to a concept definition and apply it to existing or new rules in the system for protection, notification, or monitoring.
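The idea of "adding structure to a concept definition" can be pictured roughly as follows: a concept is expressed as keywords plus a pattern learned from the historical query, and is then attached to a rule that specifies an action. The Concept and Rule classes here are hypothetical, for illustration only, and do not reflect any specific vendor's API.

```python
import re
from dataclasses import dataclass, field

@dataclass
class Concept:
    """A hypothetical concept: keywords plus an optional regex pattern."""
    name: str
    keywords: list = field(default_factory=list)
    pattern: str = ""

    def matches(self, text: str) -> bool:
        text_lower = text.lower()
        if any(k.lower() in text_lower for k in self.keywords):
            return True
        return bool(self.pattern and re.search(self.pattern, text))

@dataclass
class Rule:
    """A hypothetical rule: a concept plus an action (monitor, notify, block)."""
    concept: Concept
    action: str = "monitor"

# Structure learned from the historical query: the project code name plus
# the part-number format observed in the matching events.
falcon = Concept("Project Falcon", keywords=["falcon"], pattern=r"\bFAL-\d{4}\b")
rule = Rule(falcon, action="notify")

print(rule.concept.matches("Draft spec for FAL-1027 enclosure"))  # True
```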
With learning applications, not only is it simpler to identify sensitive content and the business process that applies to it, but the user can also go a step further and validate rules against historical information to ensure accuracy – before a rule is ever used in production (a step sketched after the list below). The business benefit? The user quickly realizes that learning applications in DLP help in a number of areas:
· Quickly identifying sensitive content, who is using it, where it is going, and how
· Understanding the business process applied to sensitive content
· Creating new concepts and rules, and tuning existing rules, while ensuring the highest levels of accuracy, proven against historical data
· Minimizing the time to accurately detect and protect data, thus sharply reducing the window of exposure or data leakage
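Validating a rule against historical data, as noted above, can be pictured as replaying the candidate rule over labeled past events and counting correct matches, false alarms, and misses before the rule ever runs in production. The labels and counting logic below are illustrative assumptions rather than an actual product workflow.

```python
import re

# Hypothetical labeled history: (event text, was it actually sensitive?)
history = [
    ("FAL-1027 enclosure drawings attached", True),
    ("Falcon launch date slipped to Q4", True),
    ("Lunch order for the falcons game", False),
    ("Weekly status report, nothing new", False),
]

def rule_matches(text: str) -> bool:
    """Stand-in for the candidate rule: code-name keyword or part-number pattern."""
    return "falcon" in text.lower() or bool(re.search(r"\bFAL-\d{4}\b", text))

hits = [(text, label) for text, label in history if rule_matches(text)]
true_positives = sum(1 for _, label in hits if label)      # correct detections
false_positives = len(hits) - true_positives               # false alarms to tune out
missed = sum(1 for text, label in history if label and not rule_matches(text))

print(f"true positives={true_positives}, false positives={false_positives}, missed={missed}")
```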
Learning applications – rule tuning, data mining, and analytics, all rooted in information capture – are reshaping the DLP market. By giving customers a faster way to identify sensitive material and to deploy accurate detection and protection, next-generation DLP systems that leverage learning applications are quickly making legacy DLP systems obsolete.