
Blocking attacks on applications

Web services may be critical for business but leave you wide open to attack. Abhishek Chauhan looks at protection techniques

You've decided to grab the application security bull by the horns. After all, your web servers - and there are lots of them - run web applications that process financial transactions, credit card numbers, confidential records, user identities and more. They are critical to your business.

Yet these applications are so large and complex that you don't even want to think about the vulnerabilities lurking within them. The most troubling part is that they are completely exposed to attacks that flow undetected through ports 80 (primarily for HTTP) and 443 (for SSL) on your network firewall.

The vulnerability assessment report the security auditors left you is as thick as a brick. Some of the fixes are easy, but what about the bugs in the application logic? Some are in third-party modules, others in legacy code. A couple of the serious ones are impractical, if not impossible, to fix. Let's not even mention zero-day exploits - unpublished hacking tricks that the scanners didn't flag but that still need to be defended against.

Enter the application firewall. Such products promise comprehensive protection against web vulnerabilities. But do they deliver? Detecting trouble in application traffic in real time is not easy. While lower layer attacks are broadly targeted and characterized by generic signatures, application layer attacks are often surgically precise and difficult to identify. They are almost impossible to detect using low level packet analysis techniques.

A different and more sophisticated approach is required to defend against the stealth nature of these attacks. Application firewalls need to dig deeper. Here's a brief overview of eight techniques application firewalls can use to detect and block application attacks.

1. Deep packet processing

Deep packet processing, sometimes referred to as deep packet inspection or semantic inspection, involves correlating multiple packets into a stream, maintaining state across that stream, and looking for anomalous behavior that constitutes an attack. It requires that application traffic be parsed, inspected and reassembled at lightning-fast speeds to avoid adding latency. Each of the following techniques represents a degree of deep packet processing.
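To make this concrete, here is a minimal Python sketch of per-connection stream reassembly, the bookkeeping at the heart of deep packet processing. It is illustrative only - the names are invented, and a real engine would also handle retransmissions and overlapping segments:

# Minimal sketch of per-connection stream reassembly. Segments may arrive
# out of order; buffer them keyed by sequence number and release bytes to
# the inspection engine only once they are contiguous.
class StreamReassembler:
    def __init__(self, initial_seq):
        self.next_seq = initial_seq   # next byte offset we expect
        self.pending = {}             # seq -> payload, held out of order

    def add_segment(self, seq, payload):
        """Accept one TCP segment; return any newly contiguous bytes."""
        self.pending[seq] = payload
        out = bytearray()
        while self.next_seq in self.pending:
            chunk = self.pending.pop(self.next_seq)
            out += chunk
            self.next_seq += len(chunk)
        return bytes(out)

# Out-of-order arrival: the second segment is held until the gap is filled.
r = StreamReassembler(initial_seq=1000)
assert r.add_segment(1005, b"world") == b""
assert r.add_segment(1000, b"hello") == b"helloworld"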

2. TCP/IP termination

Application-level attacks span multiple packets, and often multiple requests or separate data streams. To be effective, a traffic analysis system must be able to inspect packets and requests for attack behavior throughout the entire session in which a user interacts with the application. At a minimum, this requires the ability to terminate the transport-level protocol and look for malicious patterns across an entire stream, rather than just in individual packets.
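A toy example shows why termination matters: an attack string split across a packet boundary slips past per-packet matching but is caught once the stream is reassembled. The pattern and packets here are invented:

# A malicious pattern split across two packets evades naive per-packet
# matching, but not a check over the reassembled, terminated stream.
ATTACK = b"/etc/passwd"
packets = [b"GET /view?file=/etc/pa", b"sswd HTTP/1.0\r\n\r\n"]

per_packet_hit = any(ATTACK in p for p in packets)   # False: missed
stream_hit = ATTACK in b"".join(packets)             # True: caught
print(per_packet_hit, stream_hit)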

3. SSL termination

Virtually all secure applications today use HTTPS to ensure the privacy of their communications. However, SSL streams are encrypted end-to-end and thus opaque to passive observers such as intrusion detection systems (IDS) products. An application firewall must terminate SSL and decode the stream to see the traffic in the clear in order to stop malicious traffic. This is a minimum requirement for protecting application traffic.

If your security policy does not allow sensitive information to traverse your network without encryption, you will require a solution capable of re-encrypting the traffic before it is sent to the web server.
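Using Python's standard ssl module, a terminate-and-re-encrypt step looks roughly like this. The certificate paths, backend host name and inspect() hook are placeholders, not part of any real product:

import socket
import ssl

def handle(client_sock, inspect):
    # Terminate TLS from the client using the firewall's own certificate.
    server_ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
    server_ctx.load_cert_chain("firewall-cert.pem", "firewall-key.pem")
    tls_client = server_ctx.wrap_socket(client_sock, server_side=True)

    request = tls_client.recv(65536)   # plaintext, visible to the firewall
    inspect(request)                   # block or alert on malicious traffic

    # Re-encrypt before forwarding, so cleartext never crosses the network.
    backend_ctx = ssl.create_default_context()
    with socket.create_connection(("webserver.internal", 443)) as raw:
        with backend_ctx.wrap_socket(raw, server_hostname="webserver.internal") as backend:
            backend.sendall(request)
            tls_client.sendall(backend.recv(65536))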

4. URL filtering

Once application traffic is in the clear, the URL portion of the HTTP request must be inspected for signs of malicious intent, such as suspicious Unicode encodings. A signature-based approach to URL filtering - matching regularly updated signatures to filter out URLs known to be attacks, such as Code Red or Nimda - is insufficient on its own.

What is required is an approach that looks not just at the URL but at the rest of the request as well. In fact, taking the application's responses into account can dramatically increase the accuracy of attack detection. URL filtering is an important operation that will block common script-kiddie attacks, but it is ineffective against the majority of application layer vulnerabilities.
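A bare-bones signature filter illustrates both the technique and its limits. The signatures below are illustrative, not a real rule set:

import re
from urllib.parse import unquote

SIGNATURES = [
    re.compile(r"cmd\.exe"),        # Nimda-style command execution
    re.compile(r"default\.ida\?"),  # Code Red probe
    re.compile(r"\.\./"),           # directory traversal
]

def url_is_malicious(raw_url):
    # Decode percent-encoding first, so %2e%2e%2f matches ../ and
    # suspicious encodings cannot hide a known signature.
    decoded = unquote(raw_url)
    return any(sig.search(decoded) for sig in SIGNATURES)

print(url_is_malicious("/scripts/..%2f../winnt/system32/cmd.exe?/c+dir"))  # True

Note that nothing here knows anything about your application; an attack that abuses legitimate-looking parameters sails straight through.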

5. Request analysis

Full request analysis is a more effective technique than URL filtering alone, and can prevent cross-site scripting and web-server-level vulnerabilities. It takes URL filtering one step further by ensuring that a request is well formed and conforms to standard HTTP specifications, and that individual request components fall within sensible size limits. This is a very effective technique for preventing buffer overflow attacks.
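In code, such checks are simple; the limits and allowed methods below are invented and would be tuned per site:

# Sketch of stateless request analysis: well-formedness and size limits.
MAX_URL = 2048
MAX_HEADER = 4096
ALLOWED_METHODS = {"GET", "POST", "HEAD"}

def request_ok(request_line, headers):
    parts = request_line.split()
    if len(parts) != 3:
        return False   # malformed request line
    method, url, version = parts
    if method not in ALLOWED_METHODS or not version.startswith("HTTP/"):
        return False
    if len(url) > MAX_URL:   # oversized fields are the classic
        return False         # delivery vehicle for buffer overflows
    return all(len(v) <= MAX_HEADER for v in headers.values())

print(request_ok("GET /index.html HTTP/1.1", {"Host": "example.com"}))  # True
print(request_ok("GET /" + "A" * 5000 + " HTTP/1.1", {}))               # False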

However, request analysis is still a stateless technique; it examines only the current request. As we will see, remembering prior actions enables much more meaningful analysis and deeper protection.

6. User session tracking

Next in sophistication is user session tracking. This is the most basic of the stateful application traffic inspection techniques: it keeps track of user sessions and correlates the actions of each individual user, typically by means of a session cookie or, where cookies are unavailable, URL rewriting. Simply by tracking the requests of individual users, much stronger checks can be performed on cookies, enabling strong protection against session-hijacking and cookie-poisoning exploits.

Effective session tracking will not only track the cookie created by the application firewall, but also digitally sign cookies that the application generates in order to protect those cookies from tampering. This requires the ability to track the response to every request and extract cookie information from it.
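Signing a cookie can be as simple as appending a keyed hash. A minimal sketch, assuming an HMAC-SHA256 signature and a placeholder key:

import hashlib
import hmac

KEY = b"firewall-secret-key"  # placeholder; a real device manages keys securely

def sign_cookie(value):
    mac = hmac.new(KEY, value.encode(), hashlib.sha256).hexdigest()
    return value + "." + mac

def verify_cookie(signed):
    value, _, mac = signed.rpartition(".")
    expected = hmac.new(KEY, value.encode(), hashlib.sha256).hexdigest()
    return value if hmac.compare_digest(mac, expected) else None

signed = sign_cookie("session=abc123")               # outbound response
print(verify_cookie(signed))                         # inbound: session=abc123
print(verify_cookie(signed.replace("abc", "evil")))  # tampered: None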

User session tracking is a prerequisite for the techniques that follow.

7. Response pattern matching

Response pattern matching provides more comprehensive application protection by looking not only at the requests submitted to the web server but also at the responses it generates. It is most effective at preventing web site defacement - or, more precisely, at preventing a defaced web site from being seen. Matching patterns in the response is the counterpart of URL filtering on the request side. There are three degrees of response pattern matching, illustrated together in the sketch below.

Anti-defacement works by having the application firewall digitally sign the static content on your site (and, optionally, keep a copy of that content on the firewall). If modifications are detected as content leaves the web server, the firewall can substitute the original content for the defaced page.

Sensitive information leak detection has the application firewall monitor responses for patterns that might signify a server problem, such as a Java exception stack trace. When such patterns are detected, the firewall can strip them from the response or block the response entirely.

The 'stop and go' words approach looks for generalized, predefined patterns that must, or must not, be present in the responses the application generates. For example, a copyright notice could be required on every page the application serves.
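Here is a minimal sketch of all three degrees together; the digests, patterns and pages are invented for illustration:

import hashlib
import re

# Digest of known-good static content, for anti-defacement checks.
KNOWN_GOOD = {"/index.html": hashlib.sha256(b"<html>welcome (c) 2004</html>").hexdigest()}
# 'Stop' patterns that must never appear, and 'go' patterns that must.
STOP = [re.compile(rb"java\.lang\.\w+Exception"), re.compile(rb"ODBC Error")]
GO = [re.compile(rb"\(c\) \d{4}")]

def response_allowed(path, body):
    digest = KNOWN_GOOD.get(path)
    if digest and hashlib.sha256(body).hexdigest() != digest:
        return False   # static page was defaced: serve the saved copy instead
    if any(p.search(body) for p in STOP):
        return False   # leaking server internals: block or scrub
    return all(p.search(body) for p in GO)   # required notice present

print(response_allowed("/index.html", b"<html>welcome (c) 2004</html>"))  # True
print(response_allowed("/index.html", b"<html>0wned</html>"))             # False
print(response_allowed("/status", b"java.lang.NullPointerException"))     # False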

8. Behavior modeling

Sometimes called a positive security model or whitelist security, behavior modeling is the only protection against the most elusive of application vulnerabilities: zero-day exploits, attacks that are undocumented or not yet publicly known. The only defense against this type of attack is to allow only behaviors that are known to be good and disallow all others.

This technique requires modeling of the application's behavior, which in turn requires a full parsing of every response to every request that goes to the application, with the goal of identifying behavioral elements on the page, such as form fields, buttons and hyperlinks.

This level of analysis can identify malicious form submissions and hidden-field manipulation exploits, as well as enforce much stricter oversight of which URLs a user is allowed to access. Behavior modeling is the only technique effective against all 16 classes of application vulnerabilities.
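A minimal sketch of the idea, modeling only form fields (a real model would also cover hyperlinks, buttons and the allowed URL space):

from html.parser import HTMLParser

class FormFieldCollector(HTMLParser):
    """Collect the input field names a page actually offers."""
    def __init__(self):
        super().__init__()
        self.fields = set()

    def handle_starttag(self, tag, attrs):
        if tag == "input":
            name = dict(attrs).get("name")
            if name:
                self.fields.add(name)

allowed = {}  # URL -> field names the application's own pages exposed

def learn_response(url, html):
    collector = FormFieldCollector()
    collector.feed(html)
    allowed[url] = collector.fields

def request_allowed(url, submitted_params):
    # Reject anything the application never offered, e.g. an injected field.
    return set(submitted_params) <= allowed.get(url, set())

learn_response("/login", '<form><input name="user"><input name="pass"></form>')
print(request_allowed("/login", {"user": "bob", "pass": "x"}))   # True
print(request_allowed("/login", {"user": "bob", "admin": "1"}))  # False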

Behavior modeling is a great concept, but its effectiveness is often limited by its strictness. Certain situations, like extensive use of JavaScript, or deliberate deviations of the application from its behavior model, can trip it up and trigger false positives that deny legitimate users access to the application. For behavior modeling to be effective, it requires some human intervention to refine the accuracy of the security model.

Automatic behavior prediction, also known as automatic rule generation or application learning, is strictly speaking not a traffic inspection technique. It is a meta-inspection technique that analyzes traffic, establishes a behavior model and, through various correlation techniques, generates a set of rules that make the model razor-precise. This is what makes behavior modeling practical: the ability to auto-configure after a brief learning period with the application.
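Conceptually, learning mode just counts what legitimate users do and promotes common patterns into rules, leaving rare ones for a human to review. The threshold in this sketch is invented:

from collections import Counter

observed = Counter()

def observe(url):
    observed[url] += 1

def generate_rules(min_hits=10):
    auto = {u for u, n in observed.items() if n >= min_hits}
    review = {u for u, n in observed.items() if n < min_hits}
    return auto, review

for _ in range(50):
    observe("/products")
observe("/admin/debug")   # seen once: flagged for human review
print(generate_rules())   # ({'/products'}, {'/admin/debug'})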

Protecting port 80 is one of the biggest and most important challenges facing security professionals. Fortunately, innovative approaches to the problem are available today and continue to evolve. By integrating an application firewall that can block the 16 classes of application vulnerabilities into a layered security infrastructure, you can defeat the application security bull.

Abhishek Chauhan is co-founder and CTO of Teros (www.teros.com).
