Opinion / Commentary

Patch Tuesday Is Not a Patching Programme

Every second Tuesday, the industry runs a collective sprint to triage, test, and deploy hundreds of Microsoft patches before the next cycle begins. We call this a patching programme. It isn't. It's a treadmill – and the real security question is whether we're measuring the right thing.

CipherWatch Editorial · Security Intelligence Platform

April's Patch Tuesday landed with 107 CVEs addressed across the Microsoft product portfolio – a number that prompted the usual cycle of security team briefings, SIEM tuning adjustments, and quiet negotiations with infrastructure leads about change windows. By the time the next cycle arrives, a meaningful portion of those patches will still not be deployed in most enterprise environments. That is not a failure of operational discipline. It is the predictable result of mistaking an event for a programme.

The Treadmill Problem

Patch Tuesday exists because vendors need a structured mechanism for releasing security fixes and organisations need to predict when patching effort will be required. Both of those objectives are served by a monthly release cadence. The security problem it was designed to solve – keeping systems up to date – is not.

At 107 CVEs per month – a conservative figure, and for the Microsoft estate alone – the annual maintenance burden for a large enterprise runs to over 1,200 vulnerabilities requiring assessment, prioritisation, testing, and deployment. Before adding Linux, network devices, third-party applications, or the irregular out-of-band releases that accompany actively exploited zero-days, the number is already beyond the capacity of most vulnerability management teams to process with genuine rigour.

The response to this has been prioritisation frameworks – CVSS score cutoffs, CISA KEV integration, exploit probability scoring. These are improvements. But they measure the wrong outcome. The question is not "did we patch everything above our CVSS threshold?" The question is "are the vulnerabilities that represent meaningful risk to our specific environment addressed within a timeframe that matters?"

Those are different questions, and the answer to the second one requires more context than a score provides.
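The difference between the two questions can be made concrete with a toy sketch in Python. The data model and scoring weights below are invented for illustration; a real programme would draw them from scan output, the CISA KEV catalogue, and exposure mapping.

```python
from dataclasses import dataclass

@dataclass
class Vuln:
    cve_id: str
    cvss: float
    on_kev: bool            # listed in CISA's Known Exploited Vulnerabilities
    internet_exposed: bool  # reachable from outside the perimeter
    compensating_controls: bool

def cvss_cutoff(vulns, threshold=7.0):
    """The first question: everything above a score threshold."""
    return [v for v in vulns if v.cvss >= threshold]

def contextual_triage(vulns):
    """The second question: rank by exploitation evidence and exposure."""
    def urgency(v):
        return (3 * v.on_kev             # active exploitation outweighs raw severity
                + 2 * v.internet_exposed
                + (0 if v.compensating_controls else 1))
    return sorted(vulns, key=urgency, reverse=True)

vulns = [
    Vuln("CVE-XXXX-0001", 9.8, on_kev=False, internet_exposed=False,
         compensating_controls=True),   # critical score, isolated system
    Vuln("CVE-XXXX-0002", 6.5, on_kev=True, internet_exposed=True,
         compensating_controls=False),  # medium score, exploited in the wild
]

# The cutoff surfaces only the 9.8; the contextual view ranks the
# exploited, internet-exposed 6.5 first.
print([v.cve_id for v in cvss_cutoff(vulns)])
print([v.cve_id for v in contextual_triage(vulns)])
```

The specific weights are arbitrary; the point is that the ranking function takes exploitation and exposure as inputs at all, which a bare CVSS threshold cannot.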

What Gets Measured

Patch compliance metrics are ubiquitous in security reporting: percentage of systems patched within the SLA, mean time to remediate, critical CVE open rates. These metrics satisfy audit requirements and produce trendlines for board dashboards. They are reasonable proxies for operational effectiveness in environments where the threat model is generic.

In most enterprise environments, the threat model is not generic. A critical CVSS score on a vulnerability in a service that is not internet-exposed, has compensating controls in the network layer, and runs on a system with a six-week change freeze is a different risk than the same score on a VPN gateway exposed to the internet. The patching metric treats them identically. The risk profile does not.

This matters because patching resources are finite. The team that achieves 95% patch compliance by treating every critical CVE as equal priority is likely leaving genuinely high-risk systems exposed while deploying patches that deliver marginal risk reduction in managed, segmented environments. The metric looks good. The risk posture may not have improved.
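The divergence between the dashboard number and the risk posture is easy to demonstrate. In this deliberately simplified example (field names and exposure weights are invented), a fleet hits 95% compliance while most of its risk-weighted exposure remains open:

```python
def patch_compliance(systems):
    """Fraction of systems patched: the dashboard number."""
    return sum(1 for s in systems if s["patched"]) / len(systems)

def residual_exposure(systems):
    """Sum of exposure weights still unpatched: the risk view."""
    return sum(s["exposure_weight"] for s in systems if not s["patched"])

# Nineteen segmented internal servers patched; one internet-facing
# VPN gateway still waiting on a change window.
fleet = [{"patched": True, "exposure_weight": 1} for _ in range(19)]
fleet.append({"patched": False, "exposure_weight": 10})

total_weight = sum(s["exposure_weight"] for s in fleet)
print(f"compliance: {patch_compliance(fleet):.0%}")                 # 95%
print(f"residual exposure: {residual_exposure(fleet)}/{total_weight}")
```

The compliance metric reports 95%; the exposure metric reports that more than a third of the fleet's weighted risk is still unpatched, all of it concentrated on the one system an attacker can actually reach.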

The Compliance Trap

Security frameworks have institutionalised the patch compliance metric in ways that are difficult to challenge from inside an organisation. PCI DSS requires critical vulnerabilities to be addressed within defined timeframes. ISO 27001 requires documented patch management procedures. NIST CSF maps patching cadence to its "Protect" function. These are reasonable baseline requirements.

What they have collectively produced is an incentive structure that rewards the metric over the outcome. Security teams focus on clearing the queue. Infrastructure teams focus on the SLA. Neither is focused primarily on the question that matters: which vulnerabilities, if exploited in our environment, would have a significant impact on our operations or data?

A patching programme built around that question looks different from one built around Patch Tuesday compliance. It requires threat modelling of specific systems. It requires asset classification that reflects actual criticality, not just what the CMDB says. It requires patching decisions made by people who understand both the vulnerability and the environment – not automated rules applied uniformly.

What a Real Programme Looks Like

The organisations that handle patching well share a characteristic that has nothing to do with tooling: they have built a continuous risk assessment capability rather than a monthly response capability.

This means vulnerability scanning results feed into a contextualised risk register updated continuously, not just when Patch Tuesday fires. It means systems are classified not just by criticality but by exposure – what can reach them, what they can reach, and what the blast radius of a compromise would be. It means patching decisions are made with knowledge of current exploitation activity, not just CVSS scores.
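The exposure classification described above – what can reach a system, what it can reach, and the blast radius of a compromise – is, at its core, reachability over a network graph. A minimal sketch, assuming a hand-maintained adjacency map (all host names hypothetical):

```python
from collections import deque

def blast_radius(reachability, start):
    """Breadth-first search: every system reachable from `start`
    if it were compromised."""
    seen = {start}
    queue = deque([start])
    while queue:
        node = queue.popleft()
        for neighbour in reachability.get(node, ()):
            if neighbour not in seen:
                seen.add(neighbour)
                queue.append(neighbour)
    return seen - {start}

# Hypothetical segment map: which hosts each host can reach.
reach = {
    "vpn_gateway": ["app_server"],
    "app_server": ["db_server"],
    "db_server": [],
    "hr_portal": [],   # properly segmented: a compromise goes nowhere
}

# The gateway's compromise propagates to app_server and db_server;
# the segmented portal's propagates to nothing.
print(blast_radius(reach, "vpn_gateway"))
print(blast_radius(reach, "hr_portal"))
```

In practice the adjacency map would come from firewall rules, flow logs, or a network mapping tool rather than being written by hand, but the computation is this simple: the hard part is keeping the map current, not traversing it.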

None of this is exotic. The technology to do it – asset inventory, network mapping, threat intelligence integration, risk scoring – exists in most enterprise security stacks. The gap is not tooling. The gap is that organisations have structured their patching function around the event calendar rather than around the risk.

Patch Tuesday will keep arriving every month. The question is whether we are running to keep up with it or running a programme that uses it as one input among many. The first is a treadmill. The second is security.