A growing number of companies are adopting risk-based vulnerability management programs to handle the endless wave of new vulnerabilities disclosed every day – more than 2,800 in the first three months of 2021 alone. Yet too often these programs make one critical error: they focus too much on the risk score and not enough on the scoring system behind it.
That narrow focus obscures vital context, and without that context, organizations can fail to prioritize and patch the vulnerabilities that are truly high-risk to them.
Take, for example, the Common Vulnerability Scoring System (CVSS). When a hot new vulnerability garners mass attention, analysts often quote the vulnerability’s CVSS score.
Those comments fail to recognize that the distribution of CVSS scores isn’t a tidy bell curve with vulnerabilities evenly straddling a median score of 5. CVSS scores cluster at the high end of the scale: more than 16% of vulnerabilities fall into the “critical” band with a rating of 9 or 10, roughly 30% score a 7 or higher – the range considered “high risk” – and almost none score below 4.
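To see that clustering concretely, here’s a minimal sketch that buckets CVSS v3 base scores into their severity bands. The sample scores are hypothetical; in practice the base scores would come from a source such as the NVD data feeds.

```python
from collections import Counter

def severity_band(score):
    """Map a CVSS v3 base score to its qualitative severity band."""
    if score >= 9.0:
        return "critical"
    if score >= 7.0:
        return "high"
    if score >= 4.0:
        return "medium"
    if score > 0.0:
        return "low"
    return "none"

# Hypothetical sample; real base scores would come from a feed such as the NVD.
scores = [9.8, 9.1, 8.8, 8.8, 7.5, 7.2, 6.5, 5.3, 4.3, 3.1]

bands = Counter(severity_band(s) for s in scores)
total = len(scores)
for band, count in bands.most_common():
    print(f"{band:>8}: {count / total:6.1%}")
```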
For our counterparts not in cybersecurity, reading that a vulnerability has a “critical” score makes it sound like it’s rare, a white whale to spot and handle immediately. That’s simply not true.
I’m only using CVSS as an example because it’s a widely recognized, publicly available resource in cybersecurity. But the lesson applies to nearly any scoring system: sometimes, numbers are just numbers. If people don’t understand the context behind a number – how scores are distributed across the system – it’s largely meaningless.
When it comes to vulnerability management, this analysis has a couple of ramifications. In many instances, risk scores offered by private companies – like mine, Kenna Security – seem drastically lower than the CVSS scores for the same vulnerability. But because each system’s scores are distributed differently, a relatively average-seeming score in one system may place a vulnerability in a higher percentile of risk than it would occupy in a severity scoring system like CVSS.
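As a rough illustration of that point, here’s a sketch comparing where a score falls within its own system’s distribution. Both score populations below are made up for illustration; the takeaway is that rank depends on a score’s percentile within its system, not on the raw number.

```python
def percentile_rank(score, population):
    """Fraction of the population scoring at or below the given score."""
    return sum(1 for s in population if s <= score) / len(population)

# Hypothetical score populations for two different systems.
cvss_scores = [9.8, 9.8, 9.1, 8.8, 8.8, 8.1, 7.5, 7.2, 6.5, 5.3]  # bunched at the high end
risk_scores = [5, 8, 11, 14, 18, 22, 27, 33, 41, 68]               # 0-100 scale, mostly low

# An alarming-sounding CVSS 8.8 sits at only the 70th percentile of this sample ...
print(f"CVSS 8.8 percentile: {percentile_rank(8.8, cvss_scores):.0%}")
# ... while a modest-looking risk score of 41 sits at the 90th percentile of its system.
print(f"Risk 41 percentile:  {percentile_rank(41, risk_scores):.0%}")
```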
For cybersecurity pros, the distribution of CVSS scores makes clear that there’s some cause for concern with nearly every vulnerability. But it isn’t feasible to mitigate them all. Companies usually have the capacity to patch just one out of every 10 vulnerabilities on their networks, so with roughly 16% of vulnerabilities rated critical and 30% rated high or above, the average organization can’t patch everything in the “critical” category, much less everything in the broader “high” category.
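A quick back-of-the-envelope calculation using those rough figures shows the gap; the open vulnerability count here is hypothetical.

```python
# Back-of-the-envelope math using the rough figures above; the open
# vulnerability count is a hypothetical illustration.
open_vulns = 100_000            # hypothetical number of open vulnerabilities
remediation_capacity = 0.10     # roughly one out of every ten gets patched
critical_share = 0.16           # ~16% rated critical (9.0+)
high_or_above_share = 0.30      # ~30% rated 7.0 or higher

print(f"Patching capacity:   {int(open_vulns * remediation_capacity):>7,}")
print(f"Critical vulns:      {int(open_vulns * critical_share):>7,}")        # already over capacity
print(f"High-or-above vulns: {int(open_vulns * high_or_above_share):>7,}")
```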
Scoring systems have value because they can guide decision-making – especially the “next-best action.” In the case of CVSS, the clustering of scores at the high end means the system isn’t really making granular distinctions among “critical” vulnerabilities.
So whether you’re working in vulnerability management, user-identity management, or some other cybersecurity discipline amenable to risk-based scoring, the true value of a scoring system lies in its ability to make granular distinctions between competing actions.
When headlines break around a dangerous new vulnerability and colleagues ask how exposed the company is, should the team reach for the CVSS score, or ask how much risk the vulnerability actually poses? There’s a difference, and the answer lies in making that distinction. Security pros need to ask whether there are other vulnerabilities, or other security actions, that are more important or would address more risk than this one. If we stop to patch it, are we actually closing off the avenue attackers are most likely to take? If not, that hot new vulnerability will have to wait its turn. That’s the path forward to meaningful improvements in vulnerability management.
Ed Bellis, co-founder and CTO, Kenna Security