Dynamic Futures: Intro to Modern Vulnerability Management

There are a few ways that vulnerability management (VM) programs can be set up. The older approach essentially involves performing the following actions (sketched in code after the list):

  • Determining scope
  • Scanning assets
  • Scoring findings by CVSS or another indicator
  • Reviewing the scan report and remediating based on scores
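
In code, the classic flow reduces to little more than a sort. Here's a minimal sketch; the Finding shape and the sample data are hypothetical, not tied to any particular scanner:

```python
# Minimal sketch of classic, score-only triage. The Finding shape and
# the sample data are hypothetical, not from any real scanner.
from dataclasses import dataclass

@dataclass
class Finding:
    asset: str
    title: str
    cvss: float  # CVSS base score, 0.0-10.0

def classic_triage(findings: list[Finding]) -> list[Finding]:
    """Order the remediation queue by CVSS base score alone."""
    return sorted(findings, key=lambda f: f.cvss, reverse=True)

scan_report = [
    Finding("db-17", "Weak TLS configuration", 5.3),
    Finding("wkstn-042", "Outdated PDF reader (local code execution)", 9.8),
    Finding("hr-share", "SMB signing not required", 6.1),
]

for f in classic_triage(scan_report):
    print(f"{f.cvss:>4}  {f.asset:<10} {f.title}")
```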

This method works, and it is certainly better than letting assets sit in the wild and praying your patch management solution (you have one of those, right?) is working properly. But this method has two underlying problems: it is static, and it gauges risk in an overly simplistic way. Let's break these issues down.

Static Information

As stated, having some information is better than having none. But we're not here to do the bare minimum; we're here to create a thriving program that can adapt as the technology in our environment changes. The static method severely limits our success because it only sees devices at a single instant in time. Since the information is not updated until the next scan, you could end up doing work on something that has already been patched or otherwise changed since scan time. Likewise, because static scans run only at a fixed point in time, they are all but guaranteed to miss assets that are offline or improperly inventoried when the scan runs. That means tedious work for your team and time that could have been better spent elsewhere.

Generic Risk Measurement

Classic vulnerability management relies primarily on CVSS scoring to show the organization what are allegedly the top issues to address. Again, this is certainly better than nothing, but it lacks nuance: there is no organizational context around which to scope a path forward. For example, you could have an outdated version of Adobe Reader installed on machines on your internal network. It could even be an easily exploitable vulnerability that leads to local arbitrary code execution (ACE) with a CVSS score of 10. Out of context, this could seem like a high priority for an organization to tackle, because no one wants code execution available as an attack vector in their environment, and depending on other factors there may be times when it is.

However, this fails to take any other factors into consideration, such as the fact that in this scenario, the machine hosting the affected program sits on an internal VLAN that is only reachable after other systems have already been compromised (phishing attempts notwithstanding). Maybe the situation is even worse: you have telnet enabled on an external-facing system, or your SNMP community string is set to public on an external device. Those issues can also score CVSS 10, but because the old standard for handling vulnerabilities involves working through a list of this kind without applying contextual data, you end up going through it point by point based solely on score, as if it were a generic to-do list, when in reality the risk from vulnerabilities with the same CVSS score can vary wildly.
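
To make that concrete, here is a toy version of the scenario above (hosts and details hypothetical). All three findings carry a CVSS base score of 10, so a score-only sort has nothing to distinguish them, even though their real-world exposure differs enormously:

```python
# Three CVSS 10.0 findings from the scenarios above. Hosts are hypothetical.
findings = [
    {"host": "wkstn-042",   "issue": "Outdated Adobe Reader (local ACE)", "cvss": 10.0, "exposure": "internal VLAN"},
    {"host": "edge-rtr-01", "issue": "Telnet enabled",                    "cvss": 10.0, "exposure": "internet-facing"},
    {"host": "edge-sw-02",  "issue": "SNMP community string 'public'",    "cvss": 10.0, "exposure": "internet-facing"},
]

# sorted() is stable, so with identical scores the "priority" order is just
# whatever order the scanner emitted -- exposure plays no part at all.
for f in sorted(findings, key=lambda f: f["cvss"], reverse=True):
    print(f"{f['cvss']}  {f['host']:<12} {f['issue']}  [{f['exposure']}]")
```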

What can an organization do, then, to bring their vulnerability management program forward? Enter dynamic risk-based vulnerability management.

Dynamic, Risk-Based Analysis

In contrast to the classic model, dynamic risk modeling attempts to qualify risk using criteria beyond the CVSS score. This isn't to say we throw CVSS out the door, not at all. Rather, it becomes one metric among several that can be combined to assess the overall risk of a specific vulnerability. Other factors taken into consideration include the number of instances (affected devices), the location of those devices (external-facing versus internal-only), and the ease of exploitation. And these criteria only scratch the surface.
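
One way to operationalize this is a composite score that blends CVSS with contextual multipliers. A minimal sketch follows; the specific factors and weights are illustrative assumptions, not a standard, and would need tuning for your organization:

```python
# Minimal sketch of a composite risk score. The factors and weights are
# illustrative assumptions, not a standard -- tune them to your org.
from dataclasses import dataclass

@dataclass
class Vuln:
    cvss: float              # base score, 0.0-10.0
    instances: int           # number of affected devices
    external_facing: bool    # reachable from the internet?
    exploit_available: bool  # public exploit or known exploitation?

def risk_score(v: Vuln) -> float:
    """Blend CVSS with simple contextual multipliers."""
    score = v.cvss
    score *= 1.5 if v.external_facing else 0.75   # exposure
    score *= 1.4 if v.exploit_available else 1.0  # ease of exploit
    score *= min(1 + v.instances / 100, 2.0)      # spread, capped at 2x
    return round(score, 1)

reader = Vuln(cvss=10.0, instances=12, external_facing=False, exploit_available=True)
telnet = Vuln(cvss=10.0, instances=1, external_facing=True, exploit_available=True)
print(risk_score(reader), risk_score(telnet))  # 11.8 vs. 21.2
```

With the same CVSS base score, the external telnet service comes out roughly twice as "risky" as the internal Reader install, which matches intuition far better than a flat score of 10 for both.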

The other, and possibly more important, aspect is the nature of the device carrying the vulnerability. Going back to the earlier example, an internal user machine with a single outdated program isn't great, but there is obviously a major difference between that vulnerability and, say, an externally facing Apache server with a known RCE vulnerability. The only way to make that determination is to use metrics beyond a simple score.

For this reason, a better methodology involves the following features (a small sketch of the triage bookkeeping follows the list):

  • Thorough Inventory Management
  • Determining scope & scanning comprehensively for vulnerabilities
  • Prioritizing & handling vulnerabilities accordingly
    • Remediating
    • Accepting Risk
    • Recognizing False Positives
  • Reporting & Determining Threat Level
  • Reprioritizing Remediations as Needed for the Organization
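
As one rough way to model the handling and reprioritization steps (the states and fields here are assumptions, just one possible schema):

```python
# Rough sketch of tracking finding dispositions through the lifecycle.
# The states and fields are illustrative, not a standard schema.
from dataclasses import dataclass, field
from enum import Enum, auto

class Disposition(Enum):
    OPEN = auto()
    REMEDIATED = auto()
    RISK_ACCEPTED = auto()
    FALSE_POSITIVE = auto()

@dataclass
class TrackedFinding:
    title: str
    risk: float  # e.g., the composite score from the earlier sketch
    disposition: Disposition = Disposition.OPEN
    notes: list[str] = field(default_factory=list)

def reprioritize(queue: list[TrackedFinding]) -> list[TrackedFinding]:
    """Keep only open findings and re-sort as risk data changes."""
    open_items = [f for f in queue if f.disposition is Disposition.OPEN]
    return sorted(open_items, key=lambda f: f.risk, reverse=True)
```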

The above approach is more holistic: not only does it incorporate more data from the organization's context, it also allows for change as needs arise, with the ability to pivot and reprioritize based on new or incoming information. With this approach, more information is a very good thing, because it helps the organization make decisions that are more likely to be beneficial on the whole.

Conclusion

This may all seem like much ado about nothing, since, again, doing something is normally better than doing nothing in terms of security. But my goal is to be the best at what I do and bring my organization up to the best level it can be, and it's with that in mind that I share these thoughts. Of course, it goes without saying that it will take a team-oriented approach, from the top down, to really get this working well. It takes the support of the C-suite and the breadth of organizational knowledge the engineers bring to the table to create a robust vulnerability management program, and that means you can only accomplish this, like so many other things in life, through good teamwork and a strong desire to make things better for yourself and those who come after.
