In the beginning, there were network vulnerability scanners. These early security tools scanned the network for active hosts, then scanned those hosts for listening services, and finally checked those services for known vulnerabilities.
These days, the vulnerability management category has spawned a wide range of sub-categories in application security (SAST, IAST, DAST, SCA, etc.), cloud (container scanning, CSPM), and vulnerability analysis (attack path mapping, risk-based vulnerability management).
Vulnerability management is too large to tackle all at once, so this round of reviews will focus entirely on commercial and open-source network vulnerability scanners.
An independent resource operated by the cybersecurity professionals at Security Weekly and built on the foundation of SC Media’s SC Labs, SW Labs is a clearinghouse for useful and relevant product and services information that enables vendors and buyers to meet on common ground.
The aim of this report is to share what we’ve learned about the network vulnerability scanner space, to clearly define it as a category, and to provide useful context for the individual product reviews that accompany this report.
Looking for the methodology we used to test the products in this category? Click here.
Reviews
Following is the list of vendors and products we reviewed, in alphabetical order. We recommend reading through this overview before digging into individual reviews, as some thoughts about the space as a whole will be expressed here, to avoid needing to repeat these insights in each individual product review.
Commercial Products
- Digital Defense Frontline RNA
- F-Secure Elements Vulnerability Management
- Qualys Cloud Platform VMDR
- Rapid7 InsightVM
- SecureWorks Taegis VDR
- Tenable Nessus Essentials
Open-Source Products
- Flan Scan
- OpenVAS - Open Vulnerability Assessment Scanner
- Vuls.io
Looking back
A little over six years ago, I wrote that "the future is vulnerability management as a feature, not a product." If you have a 451 Research subscription(1), you can still read this two-part essay on the history and future of vulnerability management. I made some bold statements about where I felt the market should go and how vulnerability management products should evolve.
(1) 451 Research is not in any way affiliated with CyberRisk Alliance; 451 is simply the author's former employer.
I am grateful for the opportunity to examine this market now, six years later.
Certainly, for larger organizations, the data produced by vulnerability management products has become just another input that feeds into a larger risk equation. For those outside the Fortune 1000 or Global 2000, however, things might not look that different. Reviewing some of this market's past problems may provide some context for current and future market trends.
Quality was a problem. Vulnerability scanning vendors once competed on how many vulnerabilities each product could discover. For every vulnerability found in a commercial product, software engineers at each of the scanning vendors would create a piece of code that could detect if a particular system was vulnerable. These checks quickly grew into the tens of thousands. These engineers were given quotas and had to produce a minimum number of these 'checks' every month.
These vulnerability checks were often rushed out to meet quotas or just to get them into customers' hands as quickly as possible. The quality of these checks varied wildly and often resulted in false positives. The result was a hugely time-consuming process: analysts had to manually verify each vulnerability to weed out these false positives before reporting them to asset owners.
Quantity was also an issue. This software, designed to let organizations know if their systems were vulnerable to attack, quickly began to overwhelm its customers. It wasn't uncommon for a scan of an environment with 2000 computers to report critical issues in the hundreds of thousands. What happens when over 100,000 issues carry the highest priority, all screaming for attention? Giving up starts to seem like a reasonable option.
Network scans took too long to run. As the library of vulnerability checks grew, network scans took longer to run. In environments with tens of thousands or hundreds of thousands of systems, it was necessary to deploy dozens or hundreds of scanners to make scanning the entire organization feasible. Even with a distributed architecture, some percentage of hosts would be offline during scans or wouldn't get scanned for other reasons.
Most vulnerabilities labeled as critical weren't. The ubiquitous Common Vulnerability Scoring System (CVSS) either didn't or couldn't take several significant factors into account. Was the vulnerability exploitable? Did an exploit exist for it? Was that exploit publicly available? Was it currently being used in the wild? Did the exploit simply crash the vulnerable software, or did it allow attackers to run commands? Could they do this remotely, or only if they already had access to the system? And was the vulnerable system exposed to the public Internet, or tucked away, well protected, deep within a corporate network?
Only the security team collected and managed vulnerability data. The typical vulnerability management model has the security team managing the scanning products, running the scans, and analyzing the results. Vulnerabilities would be broken out by asset owner, often in spreadsheets. Weekly meetings would be scheduled to check on the status of the most critical vulnerabilities. Why weren't they getting fixed? Some are false positives? We'll have to check on that. The clear issue here was that email and Microsoft Excel were the real workhorses of this process, not the vulnerability management product.
Compliance and regulations often made things worse. Compliance wasn't all bad - in the early days, it was sometimes the only driver behind having a security program at all. However, compliance often encouraged inefficient vulnerability management processes and forced teams to ignore other security tasks in favor of correcting hundreds of thousands of vulnerabilities that represented little to no actual risk. Many regulations, for example, require fixing any vulnerability above a certain baseline CVSS score. As previously mentioned, CVSS scores weren't reliable indicators of risk; in fact, studies have shown that an unmodified CVSS score is about as reliable as choosing a score at random.
A decade ago, these scanners were amazing at finding problems, but not very good at helping folks prioritize or fix them.
The current market: 10 left standing
Before discussing how some of these past issues have been addressed (or not), it's worth taking a quick look at which vendors are still around and which have exited the market.
Over the last decade, many vulnerability management vendors have migrated to a SaaS platform strategy. Network vulnerability scanners are no longer the flagship product at "the big three": Qualys, Rapid7, and Tenable (often abbreviated as 'QRT', as we discovered while interviewing buyers for this review).
They're simply one of several tools that feed data into a larger risk analysis platform. In addition to network scanners, vulnerability data now comes from cloud connectors, patching systems, change management databases, attack surface management platforms, passive scanners, and endpoint agents.
Tenable remains the most focused on vulnerability management, while Rapid7 expanded into incident response, UEBA, and orchestration. Qualys expanded into patch management, asset management, and even introduced a WAF to address larger pieces of remediation and mitigation workflows. Outside QRT, few legacy scanners have survived. BeyondTrust recently shut down the old eEye Retina product. ISS Internet Scanner is long gone. Foundstone's scanner was shut down by McAfee years ago.
The rest of the market comprises Digital Defense, Outpost24, Tripwire (fka nCircle Suite360), SAINT, and Greenbone (commercial support for the open-source OpenVAS), all of which continue to maintain and sell network vulnerability scanners. Only two new scanners have emerged on the scene in the past decade: SecureWorks Taegis VDR (fka Delve Labs Warden) and F-Secure Elements Vulnerability Management (fka F-Secure RADAR and nSense Karhu).
The state of network vulnerability scanning
Compliance-driven product development inevitably seeks to satisfy the auditor. That doesn't mean that results-driven development won't satisfy the auditor, however. Once heavily compliance-driven markets like vulnerability management realized this, we started to see some of the larger issues get addressed.
While the quantity of findings coming out of the average scan is still a challenge, vulnerability management products have added some key features to help tame the numbers. Most of the following features were first introduced by risk-based vulnerability management vendors (e.g., Kenna Security, NopSec, RiskSense), but many have trickled down in some form to basic network scanners. In a future group test, it will be interesting to compare the performance of these two categories.
- Flexible search and filtering functions help analysts answer questions quickly.
- Exploit and threat intelligence correlation separates theoretical risk from real-world risk. It also removes reliance on CVSS as the only quantifiable factor for prioritizing findings.
- Asset criticality and contextual data (is the vulnerable host exposed to the public Internet?) also help with prioritization.
- Confidence scores also help prioritize. Some vulnerability checks can be 100% certain, while others have to guess. Knowing the difference is important. The sketch after this list shows one way these factors can be combined into a single priority score.
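To make that combination concrete, here is a minimal sketch of a priority-scoring function. The field names and weights are illustrative assumptions for this example; it does not reflect any particular vendor's algorithm.

```python
from dataclasses import dataclass

@dataclass
class Finding:
    """Illustrative fields only; real scanners expose far richer schemas."""
    cvss_base: float          # 0.0-10.0 base score
    exploit_public: bool      # is a working exploit publicly available?
    exploited_in_wild: bool   # threat intel reports active exploitation
    internet_facing: bool     # asset context: exposed to the public Internet?
    asset_criticality: float  # 0.0 (lab box) to 1.0 (crown jewels)
    check_confidence: float   # 0.0 (remote guess) to 1.0 (authenticated/confirmed)

def priority_score(f: Finding) -> float:
    """Blend CVSS with threat and asset context into a 0-100 priority.

    The weights here are arbitrary; the point is that CVSS is only one
    input among several.
    """
    score = f.cvss_base * 10                  # start from CVSS, scaled to 0-100
    if f.exploit_public:
        score *= 1.3                          # real-world exploitability raises priority
    if f.exploited_in_wild:
        score *= 1.5                          # active exploitation raises it further
    if f.internet_facing:
        score *= 1.2                          # public exposure adds urgency
    score *= 0.5 + 0.5 * f.asset_criticality  # damp findings on low-value assets
    score *= f.check_confidence               # discount unverified guesses
    return min(score, 100.0)

# Example: a CVSS 7.5 finding on an exposed, important host with a public exploit
print(priority_score(Finding(7.5, True, False, True, 0.8, 0.9)))
```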
The quality of vulnerability checks seems to have improved, but asset identification remains a significant issue. For example, Linux is broadly used in network and IoT devices today. Most scanners fail to see the difference, and it's difficult to update a vulnerable IoT device when the scanner identifies it only as "Linux 2.6 kernel".
Admittedly, cataloging the vast sea of IoT and network devices is challenging, but that's the job network scanners have committed to.
One aspect of vulnerability management that has drastically improved is workflow. These products were originally designed purely as security-team tools: vulnerabilities were exported to spreadsheets and emailed to asset owners. Today, modern vulnerability management tools are designed for non-security folks as well. Role-based access control means that an asset owner can log in and see remediation recommendations for their assets and no one else's. Reporting is also more flexible and useful.
It has long been said that the spreadsheet is the primary competition in this market. When customers stop exporting findings to CSV, vendors will know they're making positive progress with UI, UX, and workflow.
Scanner architecture: getting the data
Qualys is well-known for having adopted a SaaS architecture long before anyone was using the acronym or knew it was pronounced "sass". All the commercial products we tested employ both on-premises components and cloud-based, SaaS-delivered components. Typically, these components include:
- A SaaS-based console, managed by the vendor
- Network scanning engines that either install as software packages or are available as complete virtual appliances compatible with most hypervisors. These network scanning engines send their results back to the SaaS-based console.
- Cloud scanning engines that can be used to perform external vulnerability scans (scanning from the 'outside', Internet-facing perspective)
Optional agents can typically be installed on Windows, Mac, and a variety of Linux and Unix operating systems. Agents alleviate the need to run active, point-in-time network scans by collecting data and sending it back to either a local scan engine or the SaaS console on a regular basis. Agents are typically preferred in very large environments where active scanning is difficult or impossible. They are also ideal for monitoring the state of remote systems on home networks or in branch offices too small to justify a dedicated network scanning engine.
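For illustration, here is a minimal sketch of the kind of check-in loop an agent might run. The console URL, JSON payload shape, and use of dpkg for inventory are assumptions made for the example, not any vendor's actual agent or API.

```python
import json
import platform
import subprocess
import time
import urllib.request

CONSOLE_URL = "https://console.example.com/api/agent-checkin"  # hypothetical endpoint
CHECKIN_INTERVAL = 3600  # seconds between check-ins

def collect_inventory() -> dict:
    """Gather a minimal software inventory (Debian/Ubuntu example using dpkg)."""
    packages = subprocess.run(
        ["dpkg-query", "-W", "-f=${Package} ${Version}\n"],
        capture_output=True, text=True, check=True,
    ).stdout.splitlines()
    return {"hostname": platform.node(), "os": platform.platform(), "packages": packages}

def send_checkin(inventory: dict) -> None:
    """POST the inventory to the (hypothetical) console, which does the vulnerability matching."""
    req = urllib.request.Request(
        CONSOLE_URL,
        data=json.dumps(inventory).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        resp.read()

if __name__ == "__main__":
    while True:
        send_checkin(collect_inventory())
        time.sleep(CHECKIN_INTERVAL)
```

The point this sketch illustrates is that the heavy lifting (matching collected inventory against vulnerability checks) happens server-side, which is why agents suit laptops and small offices that a scan engine can't easily reach.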
Use cases and strategies
Not everyone uses vulnerability management tools in the same way. For some, the scanner output is the main source of a day's work. For others, the tools are seldom used, and only when necessary.
The vulnerability-driven organization might feel like it is running on a hamster wheel - always moving, but never getting anywhere. The output of the scanner drives analysis activities, kicks off patching processes, and generates endless meetings to check on remediation status.
Sometimes this workflow is necessary, but it is rarely productive long-term.
The goal-driven organization uses vulnerability scan results to inform task selection. The goal-driven approach asks questions like "why are these systems missing so many patches in the first place?" and "how do these misconfigurations keep getting propagated, even after we've corrected them?" By going after specific goals and the root causes that let vulnerabilities get out of control, permanent progress can be made. This approach is ideal for organizations that have a lot of catching up to do on patching and hardening.
The automated remediation approach is often adopted by more mature or cloud-first organizations. There are organizations that don't test or discuss the impact of security patches. Instead, they just automatically push them out as soon as they're available and ensure they're prepared to deal with any potential fallout if it comes.
In a typical DevOps shop, vulnerability scanning and patching are just parts of an automated build process. If tests fail, the process stops, and the details of the failure are investigated.
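As a rough sketch of such a gate, the example below fails the build when exported scan results contain critical or high findings. The results file name, JSON shape, and severity threshold are assumptions for illustration rather than any specific scanner's output format.

```python
import json
import sys

RESULTS_FILE = "scan-results.json"       # hypothetical export written by an earlier pipeline step
FAIL_SEVERITIES = {"critical", "high"}   # severities that should break the build

def main() -> int:
    with open(RESULTS_FILE) as fh:
        findings = json.load(fh)  # assumed shape: [{"id": ..., "severity": ..., "title": ...}, ...]

    blockers = [f for f in findings if f.get("severity", "").lower() in FAIL_SEVERITIES]
    for f in blockers:
        print(f"BLOCKER {f.get('id', '?')}: {f.get('title', 'unknown finding')}")

    if blockers:
        print(f"{len(blockers)} blocking finding(s) - failing the build.")
        return 1  # non-zero exit stops the pipeline
    print("No blocking findings - build may proceed.")
    return 0

if __name__ == "__main__":
    sys.exit(main())
```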
The second opinion use case is common in mature and/or cloud-first organizations. With mature, mostly automated patching programs in place, vulnerability scans are typically used to ensure nothing is missed by those programs and that patches do, in fact, address the vulnerabilities they claim to fix.
Reporting and metrics are a common use case, often in addition to one of the others mentioned. When building formal metrics to share with upper management and the board, vulnerability scan results often feed the risk calculations and trends that produce these metrics. Most compliance needs also fit into this use case.
Conclusion
Vulnerability scanners aren't quite as essential and central as they once were, but they're still necessary. Thanks to the explosion of IoT devices, the need for them is unlikely to change any time soon. However, they have a long way to go before they can provide useful results when it comes to IoT devices.
According to the Verizon DBIR team, vulnerabilities are exploited in less than 5% of all reported cyber incidents. Don't take this statistic the wrong way - when attackers discover an exploitable vulnerability, they will take advantage of it. This news puts defenders in an awkward position - vulnerability management is still an essential practice (and rightfully sits at #7 of the 18 CIS Controls), but this traditionally labor-intensive process shouldn't distract from more important security work.
But then, in the world of cybersecurity, it seems appropriate to conclude on a paradox.