Last year marked a new chapter in the history of information security. Faced with increasingly effective anti-virus protection, the hacker community changed its strategy and began targeting the security holes in commonly used software products.
What resulted was "the year of the worm," in the words of Toby Weiss, senior vice-president of eTrust Security Solutions for Computer Associates (CA). Some people, such as Ubizen's head of technology solutions Carlo Schüpp, date the turning point more precisely to August 2003. That was when the Blaster worm caused major damage by exploiting a vulnerability deep in the heart of the Windows operating system. "Blaster suddenly made this a very visible and concrete problem," he says.
For the hackers, the vulnerabilities represented a sitting target. Although patches existed for most of the known vulnerabilities, many companies had not yet found the time to apply them. Hackers were therefore able to bypass the usual security defenses and inflict mounting chaos on the internet-connected community.
And hackers had plenty of new vulnerabilities to choose from. Microsoft issued 51 security warnings during 2003, 20 of them described as critical. In 2004 the problem has continued to grow: by mid-March, Microsoft had already issued its fourth critical warning of the year.
So why don't IT departments update the company software whenever new patches are produced? By doing so, they could avoid most of the pain caused by the new breed of computer worms.
The answer is that there are too many patches; some contain errors and inflict more damage than they fix; some cause problems with other software; and, in some cases, users are simply unaware that a patch exists.
What's more, swift deployment of patches might be opposed by certain functions in the organization, such as change and configuration managers who demand that people follow disciplined procedures.
Finally, some companies do not even have basic asset management in place to tell them what systems they are running.
So it is no surprise that, by the middle of 2003, patch management was fast becoming the biggest source of pain for information security professionals. User departments were crumpling under the weight of all the new vulnerabilities and patches. Microsoft tried to help by bundling new patches into monthly updates (rather than drip-feeding them to the market), but the situation continued to get worse.
User groups also began to question why software vendors needed to churn out so many corrections to their products. Some even hinted that the vendors should bear some of the financial cost of patching.
It is obvious that something has to change. The industry cannot carry on lurching from one disaster to the next. We either have to reduce the number of patches (and therefore the number of vulnerabilities) or find new and better ways of managing the patch process. Moves are afoot on both sides of the equation.
For CA's Weiss, it is a question of improving software quality. "In the aftermath of MyDoom, we need to better manage our assets. Users are demanding more from suppliers to get to the root cause of the problem. They view it as a quality problem," he says.
"Vendors need to have a plan [for improving quality]. They need to deploy the software first internally and build in security out of the box. Companies are tired of being used as a quality assurance department [for the vendors]. They are going to start voting with their dollars. They will want to know what quality measures are in place."
In the meantime, we need to find ways of minimizing the cost and effort involved in applying patches, and most people agree that it should be treated as a management task like any other.
According to the META Group, most firms still struggle with even the most basic task of knowing what software they are running and what patches have been applied. This results in staff gathering patches by hand and then physically moving from server to server to load up the new patches. "These limiting factors often result in a patch management schedule that runs on a quarterly basis at best," said META's Kai Sander in a May 2003 report on the subject.

First find your patch

Assuming that organizations have an accurate inventory of their computer assets, they need to know where to go to find the right patches.
According to Adrian Rayner, European managing director for software company Marimba, that is no easy task. "The first part of the problem is where to go for all the fixes that are out there. The SUS database [at Microsoft] only provides operating system fixes. But there are other databases you need to go to for Office, SQL Server and other systems," he says. "Then you need to go to all the other software suppliers for their patches."
You then need to do a gap analysis to see what patches have been applied already and which still need to be deployed. Again, this has to be done at a detailed level, taking into account the different machine configurations being operated in the organization.
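In outline, such a gap analysis is little more than set arithmetic. The following sketch is a hypothetical illustration in Python, with invented machine names, a hand-built inventory (in practice gathered from a tool such as MBSA, "wmic qfe list" output or an asset database) and patch IDs used for illustration only. It compares each machine's installed hotfixes against the baseline required for its operating system and reports what is still missing.

    # Hypothetical gap-analysis sketch: compare installed hotfixes against a
    # per-OS baseline and report what still needs to be deployed.

    # Baseline of required patches per operating system (illustrative IDs).
    required_patches = {
        "WindowsXP": {"KB823980", "KB824146", "KB828750"},
        "WindowsNT4": {"KB823980", "KB824105"},
    }

    # Inventory gathered beforehand, e.g. by parsing 'wmic qfe list' output
    # or an asset-management database (machine -> OS and installed hotfixes).
    inventory = {
        "web01":  {"os": "WindowsXP",  "installed": {"KB823980"}},
        "file02": {"os": "WindowsNT4", "installed": {"KB823980", "KB824105"}},
    }

    def gap_report(inventory, required_patches):
        """Return a dict of machine -> set of patches still to be applied."""
        gaps = {}
        for machine, info in inventory.items():
            needed = required_patches.get(info["os"], set())
            missing = needed - info["installed"]
            if missing:
                gaps[machine] = missing
        return gaps

    if __name__ == "__main__":
        for machine, missing in gap_report(inventory, required_patches).items():
            print(f"{machine}: missing {', '.join(sorted(missing))}")

The hard part, of course, is not the comparison but keeping the inventory and the baseline accurate for every configuration in use.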
Rayner cites the recent case of a retailer that applied patches to all its Windows-based systems, both XP and NT. "It blue-screened all the NT boxes," he says.
Applying patches in a test environment first is obviously the right process to follow but, again, Sander describes this as "a step most organizations skip."
This also underscores the need to make backups before applying patches. If a patch causes problems, for whatever reason, you need to be able to roll back to the original unpatched configuration.
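None of this requires exotic technology. The fragment below is a hypothetical sketch (Python, with an invented installer command and file path) of the basic discipline: copy the files a patch is expected to touch, run the installer, and restore the copies if it fails.

    # Hypothetical sketch: back up affected files before patching and roll
    # back automatically if the patch installer exits with an error.
    import shutil
    import subprocess
    from pathlib import Path

    def apply_patch(installer_cmd, files_to_protect, backup_dir="patch_backup"):
        backup = Path(backup_dir)
        backup.mkdir(exist_ok=True)

        # Take copies of everything the patch is expected to modify.
        for f in files_to_protect:
            shutil.copy2(f, backup / Path(f).name)

        # Run the vendor's installer (command and switches are illustrative).
        result = subprocess.run(installer_cmd)

        if result.returncode != 0:
            # Roll back: restore the saved copies.
            for f in files_to_protect:
                shutil.copy2(backup / Path(f).name, f)
            raise RuntimeError("Patch failed; original files restored")

    # Example call (hypothetical installer and path):
    # apply_patch(["patch_kb823980.exe", "/quiet", "/norestart"],
    #             ["C:/Windows/System32/rpcss.dll"])

A full system image or snapshot is a safer rollback mechanism, but even a file-level backup like this is better than having nothing to fall back on.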
Now the good news
The good news is that a growing number of tools on the market will automate at least part of the task. None of them will do the whole job for you, but they can assist with the basic steps: discovering the state of your IT assets; gathering patches; determining which systems actually need patching; highlighting which jobs are urgent and which can wait a while; and deploying the patches.
Patch management will never be easy or straightforward. Companies need to take intelligent decisions about which patches to apply and when, balancing the risk of delay against the cost of carrying out the process.
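One way to make that balancing act explicit is to score each outstanding patch. The sketch below is illustrative only, with invented figures: it weighs the vendor's severity rating and the number of exposed systems against a rough estimate of deployment effort, so that the most urgent work rises to the top of the queue.

    # Illustrative prioritisation sketch: rank outstanding patches by
    # (severity x exposure) relative to the estimated deployment effort.

    SEVERITY_WEIGHT = {"critical": 4, "important": 3, "moderate": 2, "low": 1}

    def priority(patch):
        """Higher score means deploy sooner. All fields are assumed inputs."""
        risk = SEVERITY_WEIGHT[patch["severity"]] * patch["exposed_systems"]
        return risk / max(patch["effort_hours"], 1)

    outstanding = [
        {"id": "KB823980", "severity": "critical", "exposed_systems": 120, "effort_hours": 8},
        {"id": "KB824146", "severity": "important", "exposed_systems": 40, "effort_hours": 4},
        {"id": "KB828750", "severity": "moderate", "exposed_systems": 200, "effort_hours": 16},
    ]

    for patch in sorted(outstanding, key=priority, reverse=True):
        print(f"{patch['id']}: priority score {priority(patch):.1f}")

However the score is calculated, the point is to make the trade-off visible rather than leaving it to whoever shouts loudest on the day.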
Ideally, patches should be tested before being applied and systems should be capable of being rolled back if the patch creates a problem. In the real world, however, the storm caused by a new worm might cause companies to panic and forget all their good practices. It really is up to you.
So what's the answer?
Ron Condon asks people at the sharp end how they have tried to tackle their patch management problems
Companies take a variety of approaches to patch management, depending on their size and their level of in-house skills. To prepare for this feature, we sought input from visitors to the PatchManagement.org website and observed wide variations in approaches.
Some companies relied heavily on well-known tools to provide them with the service they needed to keep software up to date and secure. Others were wary of automated tools and emphasized the need to monitor the whole process carefully.
Some relied on standard Microsoft tools, some built their own, and others used a range of commercially-available products to manage different aspects of the task.
Steven Blum of Sabic Americas caught the mood of caution: "We use HFNetChkPro 4.1 for patch management because we are a small shop, with between ten and 15 servers, and around 60 desktops, and everything runs Windows. Given that Shavlik wrote the code for the MBSA, and that it doesn't require an agent, it seemed like the best choice after evaluating others like Ecora and Service Pack Manager. Even so, I still don't trust it enough not to regularly test machines that it says it has patched by manually going to the Windows update site."
However, in larger installations, there is just no realistic prospect of doing patch management by hand.
As Mark Anthony Beadles, chief architect at Endforce Inc., says: "Enterprises simply have no choice but to trust the process to automated tools." He uses a combination of tools, including HFNetChk, QFECheck, QFEChain, and Software Update Server, to manage the process.
"This is critically true for larger enterprises, who might have thousands of laptops and desktops both on their internal LAN as well as remote roaming users," he says.
The problem is compounded by the number of end points in the organization, the number of unique hardware configurations, and the range of different operating systems.
"If a large enterprise wants to have any chance at all of keeping all of its users at a known state of patching, the only way to manage this is through a tool or tool suite which provides automation," says Beadles.
"Automation can be useful in helping to define the patch set which is required across different OSs and hardware, at assessing the patch state of the thousands of roaming machines, and in remediation of any improperly patched or out-of-date machines.
"Is an automated solution perfect? Probably not – but at large volumes, manual patch management becomes far too costly and allows human error to creep in to the process," he continues. "In the end, you end up with no net gain over automation... and quite probably a loss. This is less a matter of trust than realism. You need to look at the pros and cons and the costs versus benefits of patch automation. To be sure, you have to do your diligence and select patch management tools that are well-architected, securely built, well-tested, proven, and so on; you can't just give up trust altogether."
For very large organizations, in-house skills are more readily available – but it also helps if you have a standard system configuration, as Pamela Fusco, CSO at hosting company Digex, explains. "We do not deploy a specific vendor and/or software technology to implement patches or manage the ongoing auditing of such security fixes," she says.
"However, we are proactive in the implementation and deployment of our security fixes and this work is conducted via custom, home-grown scripts."
"Since we have standard builds and services, we are able to audit our servers and workstations on a daily basis to determine what is on the network (down to the OS version, applications, patch levels, service accounts, and so on).
"Having a standard build by which to launch the audits streamlines our ability to identify possible security threats and risks and further provides details on what we have in-house.
"Since we support clients on a global scale, we do work 24 hours a day, seven days a week, all year round, and it is essential for us to ensure we have a comprehensive, dependable patch and security auditing mechanism in place. When it comes to deploying patches, again our standard builds grant us the advantage of knowing the who, what, where and how – basically, we know the number of servers that are vulnerable to a particular exploit, and what versions of service packs and services the systems are operating with – this narrows our scope so that we do not have to examine each and every asset.
"We launch our scripts, capture the data, then develop security advisories that are deployed to customers and internal staff, detailing what we're doing to implement protective measures and secure their assets and resources. "Identifying the systems that require patches is only the first step. Patch testing is conducted against lab systems which are built with our standards. Upon successful completion of the patch testing, we develop our deployment notification to customers and then launch the patches via scalable automated processes.
"After the patches are deployed, we put the data into our daily auditing script and audit all systems every night to make sure the patches, security fixes, and secure configurations are intact.
"This is our proactive defense to ensure systems have not been altered. If they have been altered, we investigate the situation immediately."