To shed some light on this question, we simulated the tech stack of a fictional company and used our patch data set to analyze how often they would need to apply a security patch or an update to the software they use. Here is what our fictional company uses:
Clients:
- Windows 10 workstations with Microsoft Office
Servers:
- Windows Server 2019
- Windows Server 2012
- Ubuntu Server
- CentOS Server
Virtualization:
- VMware
Development and Operations:
- Docker
- Python
- Jenkins
A real-life company would probably use several additional technologies, but we want to keep it simple and cover some of the main components of a typical tech stack.
Four patches a day
For our analysis we took data points from our patch data set for the four-week period between May 14 and June 11. The data shows that, all in all, the system administrators at our example company would need to apply 83 patches in four weeks, which translates to roughly 4 patches per working day. This already gives an indication of the effort involved in keeping everything up to date and secure. Let's break the data down a little further.
Microsoft software is probably at the top of the list when it comes to patching in many companies. During the analyzed period, Microsoft's Patch Tuesday fell on June 9th and comprised 129 vulnerabilities, 11 of which were rated critical. Since our example company uses only four Microsoft products, most of these patches did not apply, and the sysadmins could happily ignore them and only apply the updates to Windows 10, Windows Server 2019, Windows Server 2012 and Microsoft Office. We counted these updates as single items although each of them fixed many vulnerabilities: 81 for Windows Server 2019, 36 for Windows Server 2012, 51 for Windows 10 and 5 for Microsoft Office.
The two Linux servers in our setup received 65 patches: 47 for Ubuntu and 18 for CentOS. It is important to note that we did not make further assumptions about the packages installed on either Linux distribution; since some of the patches only apply to specific packages, the real patching needs could be lower.
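To gauge how many of those patches actually apply to a given machine, a sysadmin could simulate the upgrade against the installed package set. Here is a minimal sketch for our two distributions; it assumes local root-equivalent access and the standard apt/yum tooling, and the output parsing is deliberately rough:

```python
import subprocess

def pending_updates_apt():
    """Packages with pending updates on Ubuntu (simulation only, changes nothing)."""
    result = subprocess.run(["apt-get", "-s", "upgrade"],
                            capture_output=True, text=True, check=True)
    # Simulated package installs show up as lines starting with "Inst".
    return [line.split()[1] for line in result.stdout.splitlines()
            if line.startswith("Inst")]

def pending_updates_yum():
    """Packages with pending updates on CentOS."""
    # 'yum check-update' exits with code 100 when updates are available,
    # so we don't pass check=True here.
    result = subprocess.run(["yum", "-q", "check-update"],
                            capture_output=True, text=True)
    return [line.split()[0] for line in result.stdout.splitlines() if line.strip()]

if __name__ == "__main__":
    print(f"{len(pending_updates_apt())} packages with pending updates")
```

Only the patches that show up in such a simulation need to go through the full patch process on that machine.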
But the server team at our example company was not only kept busy by its Linux systems: VMware released 6 patches in the four-week timeframe, among them the fix for a serious bug that allowed command injection in VMware Cloud Director.
The development and operations department had a quieter month with only 8 patches. Among them were 3 new versions of Docker, 1 new Python release and 4 releases for Jenkins.
Patching takes up a lot of sysadmin time
Our analysis shows that every piece of technology used by our example company needed an update during our analysis timeframe. Estimating the effort needed to apply all these patches is difficult, since the mechanics of patching and updating differ widely between technologies. Microsoft updates are (or at least should be) automated in most companies; however, in many cases this only applies to clients and client software (e.g. Windows, Office). For Microsoft server operating systems, many teams use an at least partially manual process, for example a testing environment and a gradual rollout, to ensure that patches don't bring down critical servers. The same process applies to Linux servers. Virtualization software often needs special caution while updating because it forms the backbone of a company's IT architecture. Updating a programming language like Python needs to be planned carefully as well, since it can break application features and make code rewrites necessary. Tools like Docker or Jenkins, on the other hand, can often be upgraded quickly.
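For the server systems, the testing-and-gradual-rollout process mentioned above could look roughly like the following sketch. The host names, the SSH-based remote execution and the health check endpoint are assumptions for illustration; in practice this is usually handled by configuration management tooling:

```python
import subprocess
import time
import urllib.request

# Hypothetical host groups: patch the canary/test group first, then production.
CANARY_HOSTS = ["test-web-01.example.internal"]
PRODUCTION_HOSTS = ["web-01.example.internal", "web-02.example.internal"]
HEALTH_URL = "http://{host}/healthz"  # assumed health-check endpoint

def patch_host(host):
    # Assumes key-based SSH access and passwordless sudo for the patch user.
    subprocess.run(
        ["ssh", host, "sudo apt-get update && sudo apt-get -y upgrade"],
        check=True,
    )

def healthy(host, retries=5, delay=30):
    """Poll the host's health endpoint until it responds or we give up."""
    for _ in range(retries):
        try:
            with urllib.request.urlopen(HEALTH_URL.format(host=host), timeout=10) as r:
                if r.status == 200:
                    return True
        except OSError:
            pass
        time.sleep(delay)
    return False

# Stage 1: testing environment.
for host in CANARY_HOSTS:
    patch_host(host)
    if not healthy(host):
        raise SystemExit(f"Canary {host} unhealthy after patching - aborting rollout")

# Stage 2: gradual production rollout, one host at a time.
for host in PRODUCTION_HOSTS:
    patch_host(host)
    if not healthy(host):
        raise SystemExit(f"{host} unhealthy - stopping rollout for investigation")
```

The key idea is the same regardless of tooling: patch a small test group first, verify it still works, and only then roll the patch out to production, stopping at the first sign of trouble.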
If we assume a rather optimistic scenario in which it takes a system administrator on average 45 minutes to research, install and test a patch, our example company would have spent around 62 man-hours on patching and updating in the four-week timeframe.
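The back-of-the-envelope math behind these numbers, assuming 20 working days in the four-week period:

```python
# Patch counts from our four-week data set.
microsoft = 4   # Windows 10, Windows Server 2019, Windows Server 2012, Office
linux = 65      # 47 Ubuntu + 18 CentOS
vmware = 6
dev_ops = 8     # 3 Docker + 1 Python + 4 Jenkins

total_patches = microsoft + linux + vmware + dev_ops  # 83

working_days = 20          # four weeks of five working days each
minutes_per_patch = 45     # optimistic average: research, install, test

print(total_patches / working_days)            # ~4.2 patches per working day
print(total_patches * minutes_per_patch / 60)  # 62.25 man-hours
```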
Looking at these numbers, it is not surprising to see so many companies falling behind on patching their tech stack. Patching takes up around a third of a system administrator's time, even in our optimistic scenario, which assumes a fair degree of automation and patch management. This back-of-the-envelope calculation already shows how expensive it is to keep all systems patched. However, all this money is still a very good investment considering how quickly unpatched systems are compromised by attackers. A new report by FireEye shows that 12 percent of vulnerabilities were exploited within one week after a patch was released and 15 percent within a month (most of the remaining ones were zero days that were exploited even before a patch was available). And a recent advisory by the Cybersecurity and Infrastructure Security Agency (CISA) of the Department of Homeland Security warns that cyber criminals are still actively exploiting several bugs in Pulse Secure VPN servers, nearly a year after patches for these bugs were released.
Missing patches is dangerous
This also demonstrates the importance of not missing critical patches for your tech stack: you can't patch what you don't know about. It's a safe guess that many of the companies that are currently being compromised via the Pulse Secure vulnerabilities missed this patch altogether. It is easy to blame them, but our analysis shows that keeping up with patch publications is a lot of work. While some companies, most notably Microsoft, have well-known dates for releasing patch information, most security patches are released at arbitrary times throughout the month. During the four-week timeframe there were only 6 days on which no new patches were released.
A robust patch alerting system should therefore be part of every patch management program.
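As a minimal illustration, such an alerting system could periodically poll a public vulnerability feed, such as NIST's NVD, and flag new entries that mention products in your stack. The sketch below uses the NVD API 2.0 endpoint and query parameters as we understand them; check the current API documentation before relying on the details, and note that the naive keyword matching is just a placeholder for proper product matching:

```python
import json
import urllib.parse
import urllib.request
from datetime import datetime, timedelta, timezone

# Products from our example stack to watch for (simple keyword matching).
WATCHLIST = ["windows server", "ubuntu", "centos", "vmware",
             "docker", "jenkins", "python"]

NVD_API = "https://services.nvd.nist.gov/rest/json/cves/2.0"

def fetch_recent_cves(hours=24):
    """Fetch CVEs published in the last `hours` hours from the NVD."""
    now = datetime.now(timezone.utc)
    params = urllib.parse.urlencode({
        "pubStartDate": (now - timedelta(hours=hours)).strftime("%Y-%m-%dT%H:%M:%S.000"),
        "pubEndDate": now.strftime("%Y-%m-%dT%H:%M:%S.000"),
    })
    with urllib.request.urlopen(f"{NVD_API}?{params}", timeout=30) as response:
        return json.load(response)["vulnerabilities"]

for item in fetch_recent_cves():
    cve = item["cve"]
    description = cve["descriptions"][0]["value"].lower()
    if any(product in description for product in WATCHLIST):
        print(cve["id"], "-", description[:120])
```

A production setup would add rate limiting, an API key, deduplication and a real notification channel, but the basic polling loop stays the same.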
Conclusion
Patching your systems is one of the most important things you can do to protect against attacks, but with today's diverse tech stacks it has become a heavy burden on IT teams. Patch management processes that are based on automation, testing and event-driven workflows can help lower this burden.