Opinion / Commentary

Vendor Security Ratings Are a Confidence Trick — And We Keep Buying Them

The third-party security ratings industry has built a billion-dollar business on a simple premise: that an outside-in scan of your suppliers' infrastructure tells you something meaningful about their security posture. It doesn't. And the gap between what these tools imply and what they deliver is creating a false sense of supply chain security in boardrooms everywhere.

CipherWatch Editorial · Security Intelligence Platform

My organisation subscribes to two vendor security rating platforms. Between them, they score approximately 4,000 of our suppliers on a scale that purports to summarise each one's security posture. Leadership looks at the scores. The board gets a dashboard. When a supplier drops below the threshold, we send them a letter.

I have been running third-party risk for six years. I want to tell you what those scores actually measure.

They measure the attack surface visible from the internet. Open ports. Expired certificates. Domains associated with the supplier’s name that appear in breach databases. Web application headers that suggest outdated software versions. DNS configuration quality. These are real data points. They are also almost entirely disconnected from the questions that matter when a supplier handles your data or sits inside your network.
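To make the outside-in nature of these signals concrete, here is a minimal sketch of one such probe, a TLS certificate expiry check, using only the Python standard library. The hostname and everything else here is illustrative; real rating platforms aggregate hundreds of probes like this into a composite score.

```python
import socket
import ssl
from datetime import datetime, timezone

def cert_days_remaining(hostname: str, port: int = 443) -> int:
    """Days until the host's TLS certificate expires (negative if expired)."""
    ctx = ssl.create_default_context()
    with socket.create_connection((hostname, port), timeout=5) as sock:
        with ctx.wrap_socket(sock, server_hostname=hostname) as tls:
            # Parsed certificate dict, e.g. {'notAfter': 'Jun  1 12:00:00 2025 GMT', ...}
            cert = tls.getpeercert()
    expires = datetime.strptime(cert["notAfter"], "%b %d %H:%M:%S %Y %Z")
    return (expires.replace(tzinfo=timezone.utc) - datetime.now(timezone.utc)).days

if __name__ == "__main__":
    # An expired or soon-to-expire certificate is exactly the kind of
    # external signal that drags down a vendor's rating.
    print(cert_days_remaining("example.com"))
```

Note what this probe can and cannot see: it observes the public-facing endpoint, and nothing about the systems, people, or processes behind it.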

What the Score Doesn’t Know

The security rating does not know whether your supplier patches their internal systems. It does not know whether they have MFA enforced on their email. It does not know whether their backup architecture would survive a ransomware attack. It does not know whether the contractor who accesses your environment through their helpdesk tool has been background-checked. It does not know whether their incident response plan has ever been tested.

These are the failure modes that have produced the major supply chain compromises of the last five years. In almost every case, the supplier’s external attack surface was unremarkable. Their internal security controls — invisible to an outside-in scan — were the problem.

The SolarWinds compromise did not show up in security ratings. Neither did the Kaseya attack. The MOVEit exploitation chain targeted a file transfer application that had a reasonable security score right up until the zero-day dropped. What ratings measure is not what attackers exploit when they go after supply chains.

Why We Keep Buying Them

Security ratings platforms are successful for three reasons that have nothing to do with their efficacy as risk measurement tools.

First, they produce a number. Boards and audit committees are comfortable with numbers. A supplier scoring 720 out of 900 feels like a governance artefact — something that can be reviewed, trended, and held up as evidence of a risk management programme. The alternative — accepting that third-party risk is genuinely hard to quantify from outside — is uncomfortable in governance conversations.

Second, they create an audit trail. When a supplier is compromised and regulators ask what due diligence you performed, the answer “we monitored their security rating and it was within acceptable parameters” is better than “we didn’t check.” Whether the check was meaningful is a question most investigations don’t get to.

Third, they scale. Assessing 4,000 suppliers through questionnaires and onsite visits is not feasible. Automated scoring of 4,000 suppliers is. The rating platforms solved the scale problem. They did not solve the accuracy problem. Those are different problems, and conflating them has cost the industry dearly.

What Rigorous Third-Party Risk Actually Looks Like

I’ve been part of programmes that did this well and programmes that didn’t. The distinguishing feature is almost always tiering — spending meaningful assessment resources on the suppliers where the risk actually is.

Not all 4,000 suppliers are equal. The SaaS tool that your marketing team uses for email newsletters is not in the same risk category as the managed service provider with administrative access to your Active Directory. A security rating treats them similarly because it’s measuring the same external signals. A mature third-party risk programme treats them differently because it’s thinking about impact and access, not just scores.
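As an illustration of that distinction, here is a hypothetical tiering rule in Python. The field names and the rule itself are invented for this sketch; the point is that tier assignment is driven by access and impact, with the external score relegated to a supporting input.

```python
from dataclasses import dataclass

@dataclass
class Supplier:
    name: str
    privileged_access: bool        # e.g. admin access to AD or remote management tooling
    sensitive_data_at_scale: bool  # processes sensitive personal data in bulk
    critical_operations: bool      # provides critical operational technology
    external_score: int            # the rating platform's number, e.g. 0-900

def assign_tier(s: Supplier) -> int:
    """Tier 1 gets evidence-based assessment; tier 3 gets automated screening only."""
    if s.privileged_access or s.critical_operations:
        return 1
    if s.sensitive_data_at_scale:
        return 2
    return 3
```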

For your highest-tier suppliers — those with privileged access to your environment, those who process sensitive personal data at scale, those who provide critical operational technology — the right assessment methodology is a structured questionnaire validated by evidence, supplemented by a conversation with their security team, and reviewed annually. The security rating is one input among many, not the headline metric.

For the long tail of lower-risk suppliers, a security rating is a reasonable screening tool: use it to flag outliers for further scrutiny, not as a verdict in itself.
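Continuing the sketch above, screening then becomes a filter over the long tail rather than a judgment on any individual supplier. The threshold here is, again, purely illustrative.

```python
def screening_queue(suppliers: list[Supplier], score_floor: int = 600) -> list[Supplier]:
    """Flag low-scoring tier-3 suppliers for human follow-up.
    A flag is a prompt for investigation, not a verdict."""
    return [s for s in suppliers
            if assign_tier(s) == 3 and s.external_score < score_floor]
```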

The problem isn’t that security ratings exist. The problem is that they’ve been sold as a substitute for a risk management programme rather than a component of one. And because they’re cheaper and more scalable than real due diligence, organisations have let them crowd out the harder work.

The Uncomfortable Recommendation

Stop presenting security scores to your board as evidence that third-party risk is managed. Present them as screening tools that flag suppliers worth investigating further. The distinction matters because it changes what questions the board asks — and board questions are the primary driver of how resource-intensive third-party risk programmes get.

If the board believes the dashboard means the risk is managed, they will not fund anything more rigorous. If the board understands that the dashboard is a starting point, they will ask what happens next. That question is the beginning of a real programme.

The score is not the due diligence. It never was.