Traditional vulnerability management takes a singular “patch all things” approach that is both ineffective and costly. It’s time we shifted to risk-based approaches that consider more than just patching security issues. Organizations are increasingly called upon to navigate a complex cybersecurity landscape that encompasses more than potential threats and attacks; it also includes the complexity and sheer volume of software itself. Meeting that challenge requires going beyond “security by compliance” to comprehensive risk assessment, appropriate prioritization of resources, and a risk-first mindset. Otherwise, the number of breaches will continue to grow, and we won’t be able to claim that it happened despite our best efforts.

This is the first post in a series where I’ll look at the changing demands surrounding the many aspects of IT security, how the role of patch management needs to evolve in the face of these dynamics, and how we got here.

Advances in security over the years

Our views on cybersecurity in the computing industry are still adolescent. Connected systems, while ubiquitous today, have not always been around, and the evolution of those connections has progressed rapidly. Today’s technology, in many respects, is about 60 years old. ARPANET was created in 1969, and the internet has largely been regarded as publicly accessible since 1983. As of this writing, then, the internet has been available for 40 years. In that time we went from mainframes to bare metal computers to virtual machines to containers and the cloud, commoditizing computing, expanding access, and lowering expenses.

Computer security has likewise evolved and, in some respects, that evolution has been quite good and profound. In other respects, such as patch management, we still rely on the things we did 40 years ago, disregarding advances in the other technology domains that security touches.

Here are a few examples. It used to be sheer folly to have a desktop system connected to the internet without running anti-malware or antivirus software. We created perimeter defenses around our networks, at home or at the office, in the form of firewalls. In the early days of desktop computing, most thought that all you needed was a firewall and antivirus software and you were golden. Antivirus was rather passive: if you didn’t update signatures, your risk of infection went up. And it was a catch-up game: when a new virus appeared, it took time to detect it, create new signatures, and push them out to users.

Firewalls were more responsive in that they could block a targeted attack on an end user, unlike a scanner that only detected a problem once it was already present. But as networks became more sophisticated, and applications required bidirectional communication channels, new capabilities had to be created to offer better protection. Our firewalls got smarter and much more sophisticated. Even then, it was an all-or-nothing approach: if an attacker didn’t get through the firewall, they got nothing; if they did make it through, they potentially had access to everything. In those days, if you were behind the firewall, you had implicit trust on the network.

So our thinking adjusted again. We went from implicit trust inside the network to a focus on authorization, identification, and authentication that ultimately led to zero trust thinking. We created Demilitarized Zones (DMZs) and disconnected environments: more tools that required more sophistication, added more complexity, and often created their own implicit trust relationships.

Another evolution occurred in the methods for authentication and authorization. In the early days you had a username and password for every system, and often these were identical. If I could obtain your username and password on one system, chances are I had them for everything else. In many cases, such as with the once infamous rsh/rlogin services, it wasn’t even necessary to type a password again; systems were configured to trust each other based purely on the host where the login originated. This created opportunities for attackers to spoof origins and gain host-based access, to target weak systems and obtain credentials for use on other, more secure systems (that used the same credentials), and to otherwise exploit weak authentication and authorization systems. As a result, we decided that best practice was to have different passwords for different systems, which is a good thing, and to set password aging policies that determined how often they needed to be changed (because rotating passwords was largely considered to be a good security practice).
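For illustration, that host-based trust was typically granted by an entry like the one below in the target account’s ~/.rhosts file (or the system-wide /etc/hosts.equiv). The hostname and username here are hypothetical; the line says that rlogin/rsh connections arriving as “alice” from that host are let into the account with no password at all:

    trusted-host.example.com  alice

Because the decision hinged on the connection’s apparent origin rather than on any secret, spoofing that origin was often all an attacker needed.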

However, we learned quickly that, humans being humans, if I had to rotate my password every 30 days, I would go from “password123” to “password456” to “password789”. We worked to solve the problem of per-system user accounts by introducing centralized identity and access management systems like NIS/YP (Network Information Service, or Yellow Pages), and other systems based on things like LDAP (Lightweight Directory Access Protocol) and Kerberos, such as AD (Active Directory) and some less successful offerings from now-defunct companies that many of us would rather forget. All of these further expanded trust to users on internal networks, and often still required password rotation.

What we didn’t consider is that, absent a breach, your password doesn’t need to be changed. In fact, changing it could hand a resident attacker monitoring client or server systems your new password as you changed it, defeating the purpose of rotating it in the first place. So password aging fell out of favor around 2019, when both Microsoft and NIST conceded that it was a lot of effort for very little benefit; the FTC had said the same back in 2016. For reference, as far back as 1993, password rotation was official NIST guidance per NISTIR 5153, Minimum Security Requirements for Multi-User Operating Systems (see section 3.1.2.1).

We have a number of better solutions now. We have more sophisticated software to monitor for account and password compromises. We can keep complex, unique passwords in managers like 1Password or KeePass that make random, unmemorable passwords easy to use. We have SSO (Single Sign-On), SAML (Security Assertion Markup Language), and OAuth (Open Authorization), as well as FIDO (Fast IDentity Online) and U2F (Universal 2nd Factor), to assist with multi-factor authentication. We use certificate- and key-based authentication rather than shared secrets. Forced password expiration is a relic that, by common consensus, doesn’t actually solve any meaningful problems.
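To make the contrast with shared secrets concrete, here is a minimal sketch of key-based, challenge-response authentication, written in Python and assuming the third-party cryptography package; the names and flow are illustrative rather than any particular product’s protocol. The server stores only the public key and verifies a signature over a fresh challenge, so there is no reusable secret to steal or rotate.

    import os
    from cryptography.exceptions import InvalidSignature
    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

    # Enrollment: the client generates a key pair and shares only the public half.
    client_key = Ed25519PrivateKey.generate()
    registered_public_key = client_key.public_key()  # what the server stores

    # Login attempt: the server issues a fresh random challenge (a nonce).
    challenge = os.urandom(32)

    # The client proves possession of the private key by signing the challenge.
    signature = client_key.sign(challenge)

    # The server verifies with the stored public key; no shared secret crosses the wire.
    try:
        registered_public_key.verify(signature, challenge)
        print("authenticated")
    except InvalidSignature:
        print("rejected")

This is the same basic idea behind SSH keys, FIDO/U2F authenticators, and certificate-based logins: possession of a private key is demonstrated without ever transmitting it.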

Now, we know where we’ve been and what “hygienic” security habits we’ve pulled through from the preceding decades. In my next post, however, I’ll lay out what’s forcing change to these habits and why some tried and true methods are starting to make less sense.


About the author

Vincent Danen lives in Canada and is the Vice President of Product Security at Red Hat. He joined Red Hat in 2009 and has been working in the security field, specifically around Linux, operational security and vulnerability management, for over 20 years.
