Total Information Awareness – Another Name for Domestic Spying?

Coinciding with the news about the formation of the U.S. Department of Homeland Security comes a breaking story about a Defense Advanced Research Projects Agency (DARPA) 'research project' exploring the viability of sifting through huge amounts of transaction data in search of possible terrorist activity.

The project, titled 'Total Information Awareness' (TIA) and run out of DARPA's Information Awareness Office, is headed by retired Admiral John Poindexter of Iran-Contra fame. Poindexter, who, along with several other familiar Washington names from that period, is suddenly prominent again, claims the initiative is simply a proof-of-concept prototype to see whether it is possible to sift through large amounts of information in disparate databases looking for clues.

The project, however good its intentions, is more than a pure research activity. It is also another example of how government can use technology to invade the privacy rights of individuals. Unfortunately, TIA is doomed to failure. Its most intriguing aspect, Evidence Extraction and Link Discovery (EELD), is an extremely difficult problem to solve, as experience with next-generation information security threat management solutions shows. Although TIA is not chartered to explore anti-cyberterrorism uses, many of the same problems exist in both worlds, and each could benefit from the other's research.

TIA will fail first by drowning in a sea of false positives, as experience with intrusion detection solutions has shown. A false positive occurs when a potential intrusion is flagged but turns out to be nothing of the kind. Traditional intrusion detection systems typically generate false-positive rates of 90-95 percent, and vendors are working furiously to bring that rate down. The problem false positives create is exactly the one TIA is trying to avoid: requiring manual labor to sort through mountains of data to weed out the non-events. Even if TIA could automate some of the data collection processes to relieve that bottleneck, and find a way to integrate disparate databases, it would still face the huge challenge of reducing the rate of false positives without increasing the rate of false negatives. A false negative occurs when a real intrusion goes through undetected. Striking the proper balance between false positives and false negatives is a notoriously difficult task.
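
A back-of-the-envelope calculation shows how quickly false positives swamp the handful of real events. The figures in the sketch below are illustrative assumptions, not measurements from any deployed system:

```python
# Illustrative back-of-the-envelope figures (assumptions, not measurements).
events_per_day = 5_000_000   # auditable events per day on a busy network (assumed)
real_intrusions = 10         # events that are actually malicious (assumed)
detection_rate = 0.95        # fraction of real intrusions the system flags (assumed)
false_alarm_rate = 0.001     # fraction of benign events it flags anyway (assumed)

true_alerts = real_intrusions * detection_rate
false_alerts = (events_per_day - real_intrusions) * false_alarm_rate
total_alerts = true_alerts + false_alerts

print(f"Alerts per day: {total_alerts:,.0f}")                              # ~5,009
print(f"Share of alerts that are real: {true_alerts / total_alerts:.2%}")  # ~0.19%
```

With these assumed numbers, analysts would face roughly 5,000 alerts a day, and well under 1 percent of them would point at anything real.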

Here's a scenario: an electronic intruder breaks into a corporate computing environment by 'guessing' a username/password combination or exploiting a weakness in a badly maintained system. The intruder then obtains administrator or 'root' status, elevating privileges either by guessing a system password with a dictionary or brute-force attack, or by exploiting another unpatched system with a buffer overflow. Once elevated to 'superuser,' the intruder can run amok, turning off intrusion detection alarms and otherwise hiding their tracks. They can plant logic bombs or Trojans in preparation for return visits, espionage or outright theft.
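
The dictionary attack mentioned above is conceptually trivial, which is why weak passwords fall so quickly. Here is a minimal sketch, framed as a password audit; the usernames, hashes and wordlist are invented stand-ins for a real password file:

```python
import hashlib

# A password audit in miniature: try every word in a small dictionary against
# stored password hashes. Usernames, hashes and wordlist are all invented.
wordlist = ["password", "letmein", "admin123", "hunter2"]
shadow = {  # username -> SHA-256 of the password (a simplification of real password storage)
    "root":  hashlib.sha256(b"admin123").hexdigest(),
    "alice": hashlib.sha256(b"correct-horse-battery").hexdigest(),
}

def dictionary_attack(entries, words):
    """Return the accounts whose password appears in the dictionary."""
    cracked = {}
    for user, digest in entries.items():
        for word in words:
            if hashlib.sha256(word.encode()).hexdigest() == digest:
                cracked[user] = word
                break
    return cracked

print(dictionary_attack(shadow, wordlist))  # {'root': 'admin123'}
```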

An intrusion detection system can be set to log every failed password attempt, every administrator login and every change to the operating system kernel. Unfortunately, these events occur almost continuously in a normal computing environment, and system administrators and security experts would spend all of their time tracking down the false alarms.
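
A naive rule set of the kind just described might look like the sketch below; the event format and sample events are assumptions chosen for illustration, and every alert it raises here is benign:

```python
# Naive per-event rules of the kind described above. The event format and
# sample events are assumptions; note that every alert raised is a non-event.
SUSPICIOUS = {"failed_login", "admin_login", "kernel_change"}

def alerts(event_stream):
    """Flag every matching event in isolation -- no context, no correlation."""
    for event in event_stream:
        if event["type"] in SUSPICIOUS:
            yield f"ALERT: {event['type']} on {event['host']} by {event['user']}"

one_quiet_morning = [
    {"type": "failed_login",  "host": "mail01", "user": "bob"},   # a mistyped password
    {"type": "admin_login",   "host": "web02",  "user": "root"},  # routine maintenance
    {"type": "kernel_change", "host": "db01",   "user": "root"},  # a scheduled patch
]

for alert in alerts(one_quiet_morning):
    print(alert)  # three alerts, all of them false positives
```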

Another challenge is coping with countermeasures designed specifically to avoid detection. TCP/IP packets can be broken into fragments, slipped past the intrusion detection mechanism one piece at a time, and then reassembled on the target network. Even flooding the network with normal traffic can provide cover for illicit activity, as the signal gets lost in the everyday noise.

To counter these measures, some intrusion detection systems perform packet reassembly so they can detect malicious code before it infects a system. Performance improvements and flow mirroring help them cope with the volume of traffic.
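
The fragmentation evasion and the reassembly countermeasure can be illustrated with a toy example; the signature, payload and split point below are invented, and real IP fragmentation and IDS signatures are considerably more involved:

```python
# Toy illustration of fragmentation evasion and reassembly.
SIGNATURE = b"/bin/sh"                        # byte pattern the IDS looks for (assumed)
payload = b"GET /cgi-bin/x?cmd=/bin/sh HTTP/1.0"
fragments = [payload[:20], payload[20:]]      # the attacker splits the payload mid-signature

def per_fragment_match(frags, signature):
    """Naive IDS: inspect each fragment in isolation."""
    return any(signature in frag for frag in frags)

def reassemble_then_match(frags, signature):
    """Reassembling IDS: join the fragments in order, then inspect the whole stream."""
    return signature in b"".join(frags)

print(per_fragment_match(fragments, SIGNATURE))      # False -- the signature straddles the split
print(reassemble_then_match(fragments, SIGNATURE))   # True  -- caught once reassembled
```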

Another scenario involves a more coordinated attack combining password guessing, insider assistance and a diversion to draw attention away from the perpetrators. Unlike the traditional 'outsider' attack, this one is much more difficult to detect and prevent, since it relies on exploiting trusted sources and 'normal' activity that may not exhibit any warning signs until it is too late.

This is the type of scenario TIA is trying to address. The idea is to piece together information held in separate databases, looking for ways to 'connect the dots' among seemingly disparate pieces of information. Researchers in next-generation intrusion detection and prevention technology are trying to teach their systems to become smarter in two ways. First, by ignoring much of the sensor data that leads to false positives, next-generation systems can reduce the labor required to monitor their output. Second, by incorporating additional information found in separate but related databases, such as firewall logs and identity management systems, they can piece together clues of an impending break-in or compromise from seemingly unrelated data.
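
The correlation idea itself is simple to sketch, even if building an engine that works at scale is not. In the toy example below the event sources, field names and scoring rule are all assumptions chosen for illustration: a host flagged by several independent sources outranks one flagged by a single noisy sensor.

```python
from collections import defaultdict

# Toy correlation across sources. Sources, field names and the scoring rule
# are assumptions chosen for illustration.
ids_alerts      = [{"host": "db01", "signal": "failed_login_burst"},
                   {"host": "web02", "signal": "port_scan"}]
firewall_log    = [{"host": "db01", "signal": "outbound_to_unknown_ip"}]
identity_events = [{"host": "db01", "signal": "after_hours_admin_login"}]

def correlate(*sources):
    """Group signals by host; independent corroboration raises the score."""
    signals_by_host = defaultdict(set)
    for source in sources:
        for event in source:
            signals_by_host[event["host"]].add(event["signal"])
    return sorted(signals_by_host.items(), key=lambda kv: len(kv[1]), reverse=True)

for host, signals in correlate(ids_alerts, firewall_log, identity_events):
    print(host, len(signals), sorted(signals))
# db01 3 [...]  -- three independent indicators; worth an analyst's attention
# web02 1 [...] -- a lone port scan; almost certainly noise
```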

Unfortunately, correlation engines that can do this type of work effectively are at least several years away from commercial viability. Right now the engines are too easily fooled and the databases too disparate to integrate effectively. Joint research between the Information Awareness Office and the corporate and academic teams working on intrusion detection event correlation could yield positive long-term benefits.

DARPA would do better to stick to the kind of long-term research that led to the internet and similar innovations rather than working on a 'quick fix' for terrorism. No matter how laudable TIA's goals are, the program is doomed to failure, drowning in a sea of false positives and political attacks. That is a shame, because the information security industry would benefit from advanced data-mining research that could spawn a new era of startups to combat both physical and cyberterrorism.

Robert Lonadier is the president of RCL & Associates, a Boston-based analyst and consulting firm. He can be reached at [email protected].
 
