
For some time now, the conversation around what poses risk in software vulnerabilities has been evolving. It has been gratifying to hear other voices amplifying what I, and Red Hat more broadly, have been saying for years: not all vulnerabilities in software matter, and not all vulnerabilities in software are created equal. A number of industry leaders in the security space have been saying this, and those voices are becoming louder and harder to ignore. More importantly, as I talk to customers, the message is beginning to resonate. And that’s for one simple reason:

It’s no longer possible to scale traditional vulnerability management given the number of vulnerabilities discovered each year.

I talked about this last year when I asked the question: do all vulnerabilities really matter? Looking at the data and evidence in the 2021 Product Security Risk Report, it was clear that not all vulnerabilities get exploited, not even close! If we look at the recently released 2022 Product Security Risk Report, we see a similar trend. According to Kenna Security’s “By the numbers: 2022 CVE Data Review,” CVE volume grew 25% year over year from 2021 to 2022, yet just over 4% of published vulnerabilities represent a real risk to organizations. That data mirrors our own. According to the 2022 Product Security Risk Report, only 0.4% of vulnerabilities found in 2022, across all severities, had known active exploitation, down considerably from 2021. In fact, out of 1,086 Moderate severity vulnerabilities, only two had known active exploitation.

It’s also worth noting that, while the conversation is ongoing and getting more attention, this view of risk in vulnerabilities is not new. Nearly 20 years ago, before I had joined Red Hat, a few Linux distributions issued a joint statement about GNU/Linux security with similar themes.

CVE, or Common Vulnerabilities and Exposures, is a naming convention for vulnerabilities. It’s an excellent system that allows users, vendors and researchers to refer to the same flaw by the same name. And that’s all it is – a mechanism to name and track vulnerabilities. A CVE, in and of itself, is not an indicator of risk.
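To make that concrete, a CVE identifier is just a structured name: the literal string “CVE”, the year the ID was assigned, and a sequence number of four or more digits. Here is a minimal Python sketch that checks this documented syntax – note that nothing in the ID itself says anything about risk:

```python
import re

# CVE IDs follow the documented pattern CVE-<year>-<sequence>,
# where the sequence number is at least four digits long.
CVE_ID = re.compile(r"^CVE-\d{4}-\d{4,}$")

def is_cve_id(name: str) -> bool:
    """Return True if the string is a syntactically valid CVE ID."""
    return bool(CVE_ID.match(name))

print(is_cve_id("CVE-2021-44228"))  # True  (a name, not a risk rating)
print(is_cve_id("not-a-cve"))       # False
```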

At the same time, CVSS, or the Common Vulnerability Scoring System, is a means to prioritize remediation of vulnerabilities through a common assessment approach. CVSS is built on three metric groups: Base, Temporal and Environmental. And like any good three-legged stool, it needs all three legs to be useful. Most vendors only provide the Base metrics, which are an objective view of the immutable characteristics of a vulnerability. When we say immutable, we refer to how the affected software was built, which is not always the same from vendor to vendor. I’ll touch on that more in a moment. The Temporal metrics can be provided with a good threat intelligence program or feed, as they describe the current exploitation of a vulnerability, whereas the Environmental metrics can really only be provided by the deployer of the software. After all, a vendor cannot know if you’re using their software in production or in development and testing environments, and the risks in these environments can differ greatly. Additionally, the use of software components in products, whether digital or tangible, may differ as well. A component included in one product may expose different functionality than in another. It is for this reason that some vendors, including Red Hat, will score differently on a per-product basis, taking into account the different use in each product.
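To illustrate how the Temporal leg adjusts a vendor’s Base score, here is a small Python sketch of the CVSS v3.1 temporal calculation. The multipliers and the Roundup function come from the CVSS v3.1 specification; the 9.8 Base score is purely illustrative:

```python
import math

# Temporal multipliers from the CVSS v3.1 specification.
EXPLOIT_CODE_MATURITY = {"X": 1.0, "H": 1.0, "F": 0.97, "P": 0.94, "U": 0.91}
REMEDIATION_LEVEL     = {"X": 1.0, "U": 1.0, "W": 0.97, "T": 0.96, "O": 0.95}
REPORT_CONFIDENCE     = {"X": 1.0, "C": 1.0, "R": 0.96, "U": 0.92}

def roundup(value: float) -> float:
    """CVSS v3.1 Roundup: the smallest one-decimal number >= value."""
    scaled = round(value * 100000)
    if scaled % 10000 == 0:
        return scaled / 100000.0
    return (math.floor(scaled / 10000) + 1) / 10.0

def temporal_score(base: float, e: str, rl: str, rc: str) -> float:
    """TemporalScore = Roundup(Base x E x RL x RC)."""
    return roundup(base
                   * EXPLOIT_CODE_MATURITY[e]
                   * REMEDIATION_LEVEL[rl]
                   * REPORT_CONFIDENCE[rc])

# A 9.8 Critical with only proof-of-concept exploit code (E:P) and an
# official fix available (RL:O) drops to 8.8 once threat intelligence
# is taken into account.
print(temporal_score(9.8, e="P", rl="O", rc="C"))  # 8.8
```

The Environmental metrics work the same way: only the deployer knows them, so only the deployer can finish the calculation.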

Going back to the immutable characteristics of a vulnerability, we have to start with the recognition that different vendors build software in different ways – particularly in the open source space. As one example, one vendor might use a C compiler such as gcc with stack protection or PIE (Position Independent Executables) enabled, and another may not. If you have a buffer overflow vulnerability in software built with stack protection, what could be a vulnerability that allows arbitrary code execution may be reduced to a denial of service. It’s still a vulnerability, and it might even be a particularly bad one depending on the function of the software in question. But its impact is limited to whether the service is available, not whether an attacker can elevate privileges, steal information or intrude further into the network. In other words, through the simple difference of enabling a compiler feature, the characteristics of a vulnerability may change.

Practically speaking, this also means that any CVSS Base metrics provided by a vendor should be taken as authoritative over those of a CVE aggregator. NVD, the National Vulnerability Database, is run by NIST, a US government agency. The NVD is a great database, but its utility is limited because it only considers a CVE in the context of broad applicability and the worst case scenario. This means that, in the above case of compiling with stack protection, NVD will represent the CVE as though stack protection was not enabled, which may not be the case for the software you are running. If the vendor of the software in question doesn’t provide CVSS Base metrics, using what NVD provides is certainly a reasonable choice. But preferring the NVD CVSS scores over those of a vendor that provides its own is no more reasonable than preferring the tax advice of a math teacher over that of an accountant when you’re filing your taxes. Incidentally, Daniel Stenberg, author of the popular curl open source utility, recently proffered a scathing indictment of NVD's use of CVSS.
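As a practical sketch of “prefer the vendor’s score when one exists,” the snippet below asks Red Hat’s public security data API first and falls back to NVD. The endpoint URLs and JSON field names are assumptions based on the public documentation of both services and may change; treat this as an outline rather than production code:

```python
import json
import urllib.request

def fetch_json(url: str) -> dict:
    with urllib.request.urlopen(url, timeout=10) as resp:
        return json.load(resp)

def base_score(cve_id: str) -> tuple[float, str]:
    """Return (CVSS base score, source), preferring the vendor."""
    # Vendor first: Red Hat's security data API (field names assumed).
    try:
        data = fetch_json(
            f"https://access.redhat.com/hydra/rest/securitydata/cve/{cve_id}.json")
        return float(data["cvss3"]["cvss3_base_score"]), "vendor (Red Hat)"
    except Exception:
        pass  # Not a Red Hat-shipped component, or no vendor score published.
    # Fall back to the aggregator: NVD's 2.0 API (structure assumed).
    data = fetch_json(
        f"https://services.nvd.nist.gov/rest/json/cves/2.0?cveId={cve_id}")
    metric = data["vulnerabilities"][0]["cve"]["metrics"]["cvssMetricV31"][0]
    return float(metric["cvssData"]["baseScore"]), "NVD"

score, source = base_score("CVE-2021-44228")
print(f"{score} according to {source}")
```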

It is for precisely this reason that, for years, Red Hat has been publishing CVSS Base metrics on our CVE pages. We recognize that NVD has the unenviable job of providing data for software used on many different operating systems, built many different ways, and used in many different scenarios. On the other hand, as the builders of our own software, Red Hat knows how the software was built, how it’s designed to be used, and in some cases precisely how it is likely to be used. Through our understanding of the software, we can provide a more accurate set of metrics to be used when prioritizing remediation. It remains, of course, the responsibility of the end user to supply the Temporal and Environmental metrics to arrive at a true score – the way CVSS was designed to be used.

But all of this is just the basics. It tells you that you have a vulnerability, what the impact rating of that vulnerability is, and the CVSS metrics that end-user security teams can augment to get a true view of priority for use in honest risk calculations. The bigger question comes down to risk, and more specifically: what risk do you run by not fixing something, given that it’s not possible to fix everything? While many efforts are underway across a variety of groups and organizations to reduce risk in the industry, we are years away from realizing those benefits in a practical way. This is partially due to product lifecycles – software released today can, in many cases, be expected to be in use 5 to 10 years into the future, particularly software designed for the enterprise. New software created tomorrow may not even be deployed for years.

What strategy can enterprise users employ today to reduce risk? The first is to understand software vulnerabilities: knowing what CVE is and isn’t, how CVSS should and shouldn’t be used, and the limitations of CVE aggregators.

The second is an honest look at who your vendor is and how much you trust them. From a purely pragmatic standpoint, if you trust a vendor enough to supply the software that operates your business, you should also trust their assessment of vulnerabilities – and line that up with their track record. For example, if a vendor continually tells you they are ‘secure’ yet exploits keep appearing in their software with no remedy available, that vendor may not be entirely trustworthy. On the other hand, if a vendor rates their software honestly, adjusts as new information becomes available, proactively addresses vulnerabilities with a high probability of exploitation, and quickly addresses those with a lower probability that are nonetheless being actively exploited, then perhaps that vendor has earned trust.

As an aside, there was a vendor a few years back who had their mail system exploited. They were quick to point out the problem clearly and transparently, and quick to resolve it. I remember a conversation with a co-worker who wanted to get rid of them as a vendor because the security issue was bad (and it was), despite the fact that they had been transparent and fixed it quickly. My response was no, this is precisely what we want to see in a vendor! A transparent and rapid response to an incident is exactly what engenders trust. Perhaps I’m the security pessimist, but I’m not sure I trust a vendor that claims to have had zero security incidents or zero vulnerabilities in their software. For me, at least, how a vendor responds to a security incident is a key indicator of how trustworthy they are.

Taking all of these pieces together allows security teams to build programs to address risk, but this is just the start. A recently published article, All CVEs Are Not Created Equal, highlighted that the same vulnerability can create different risks in different industry sectors, and these differences should be accounted for by the security teams tasked with assessing risk. Vulnerabilities in cloud services differ from those on-premises in the same way a vulnerability on a desktop differs from one in a vehicle. The environment absolutely has a say when considering risk.

Further complicating matters is the fact that, while open source may be included as any number of dependencies in applications, a vulnerability present in a dependency may not be exposed by the application, as highlighted in Contrast Security’s 2021 Application Security Observability report. Vulnerabilities may be present yet entirely unreachable by legitimate users or malicious attackers, making the risk they present virtually nonexistent – and any effort spent patching them well and truly wasted. This is also why, when vulnerabilities are being assessed, we take into account how a particular component is used in any given application that includes it.
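A contrived Python sketch of the idea – all names here are hypothetical. The dependency below ships a genuinely dangerous function, but because the application never calls it, no input can reach the flaw:

```python
import json
import pickle

# --- a hypothetical third-party dependency --------------------------------
def parse_json(blob: bytes) -> dict:
    """The only entry point the application actually uses."""
    return json.loads(blob)

def parse_untrusted_pickle(blob: bytes) -> object:
    """Flawed: unpickling untrusted input allows arbitrary code execution
    (CWE-502). The vulnerability is real, but only matters if reachable."""
    return pickle.loads(blob)

# --- the application -------------------------------------------------------
def handle_request(body: bytes) -> dict:
    # Only the safe function is ever called; no input, legitimate or
    # malicious, can steer execution into parse_untrusted_pickle().
    return parse_json(body)

print(handle_request(b'{"status": "ok"}'))
```

A scanner that only matches package versions will still flag the dependency; reachability is what separates a vulnerability that is merely present from one that poses actual risk.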

When it comes down to it, risk is a People, Process and Technology equation, and yet most tend to focus solely on the Technology aspect with little regard for the People and Process. The choice to use end-of-life products is not a technology problem, it’s a process problem. The choice to apply patches only every quarter is a process problem. The desire for 100% of known vulnerabilities to be fixed, when only 4% represent a risk, is, frankly, a very expensive people problem. Incidentally, this flawed desire disproportionately impacts open source, where there is no hiding vulnerabilities behind the curtains of proprietary software that offers no access to source code. Yet the transparent nature of open source, which allows you to mitigate vulnerabilities you know about but don’t yet have a fix for, goes entirely unappreciated. This is one of the best, and least highlighted, benefits of open source when it comes to vulnerability management.

Ultimately, the risk conversation and subsequent decisions have to be evaluated by each individual consumer. There is no one-size-fits-all approach, and with the volume of vulnerabilities present, fixing everything isn’t scalable – nor should it be desirable. It is a very expensive way to eliminate risk, a cost borne by both vendors and consumers. As humans, we take measured and appropriate risks all the time: the decision to eat at a new restaurant, to buy milk at the grocery store, even the decision to drive to that grocery store.

The benefits afforded by these simple actions, each of which involves multiple risk calculations, outweigh the safety of remaining home (or eating bland!). We take these risks without thought, daily. Managing vulnerabilities in software is no different. Each vulnerability has to be considered, and often the least risky thing to do is to leave it well enough alone – and while we didn’t touch on it here, it’s worth noting that risk is present every time you update a package or change software.

Is the risk of unintended effects worth the benefit of patching? What about the risk of introducing new, as-yet-unknown vulnerabilities by upgrading to a later version? While Red Hat aims to minimize this risk as much as possible through backporting, it isn’t entirely eliminated. How much risk is taken on by patching a thousand Moderate, unexploited vulnerabilities? That risk calculation is rarely considered.

At the end of the day, we’re all here to create value while balancing risk, because it can never be fully eliminated.



About the author

Vincent Danen lives in Canada and is the Vice President of Product Security at Red Hat. He joined Red Hat in 2009 and has been working in the security field, specifically around Linux, operating system security and vulnerability management, for over 20 years.

