Three npm supply chain attacks in a single week. Axios (100 million weekly downloads) poisoned by North Korea for three hours. The @bitwarden/cli npm package served malicious code from a compromised Checkmarx CI/CD credential. CanisterSprawl, a self-propagating worm that uses stolen publish tokens to infect every other package a developer owns. Each incident produced the same ecosystem response: remove the malicious version, issue a security advisory, and remind developers to pin their dependencies with lockfiles.
This advice is not wrong. It is also not sufficient, and the gap between “not wrong” and “sufficient” is where these attacks live.
The Lockfile Assumption
A lockfile — package-lock.json, yarn.lock, pnpm-lock.yaml — records the exact version of every dependency resolved at the time the lockfile was written. npm ci respects that record, refusing to install a different version. This provides meaningful protection against a specific class of attack: a new malicious version appearing after your lockfile was committed.
The Axios attack exploited a different class: a malicious version appearing before many organisations’ lockfiles were written. An organisation that ran npm install axios on March 31 between 00:21 and 03:15 UTC received a backdoored package. Their lockfile then recorded that version as the trusted baseline. The lockfile worked correctly — it recorded what was actually installed — and that is precisely the problem.
The Axios attack window was three hours. CanisterSprawl used stolen publish tokens to inject itself into every package a developer owned, creating new malicious versions continuously. In both cases, the attacker published through the official npm registry, with the correct package name, using a legitimate publisher account, and the package was cryptographically signed. Every layer of the system validated that what arrived was exactly what it claimed to be. The system worked. The trust model failed.
The Trust Boundary We Are Not Defending
The npm ecosystem’s security model rests on a single implicit assumption: that individual package maintainers are not compromised. This is not a reasonable assumption for enterprise security purposes.
The Axios maintainer is an individual. The @bitwarden/cli maintainer credentials were stored in a Checkmarx CI/CD pipeline. The CanisterSprawl victims are individual developers whose npm tokens were harvested from their home directories. None of them are enterprise security teams. None of them are required to use hardware security keys for npm publish operations. None of them face the audit and compliance requirements that govern, say, a certificate authority issuing code-signing certificates.
Enterprises routinely spend significant effort securing their own code-signing infrastructure — HSMs, dual-control procedures, audit trails for every signed artefact. Then they allow their build pipelines to install packages signed by whoever currently controls an npm account protected by a password and perhaps a TOTP code.
This is not a criticism of open-source developers. It is a description of a structural trust gap that the industry has consistently refused to name clearly because doing so would require inconvenient conclusions.
Why “Pin Your Dependencies” Is Insufficient
The standard response to each supply chain attack follows a predictable pattern: identify the malicious version, advise pinning to the previous safe version, and remind the community about lockfiles. This response treats the problem as a patch management failure — specifically, as the equivalent of a user not updating to fix a CVE.
But a CVE in a library is a coding error by the maintainer. A compromised maintainer account is an attacker with the same trust level as the maintainer. These are categorically different threats. No amount of dependency pinning protects you from an attacker who can publish a new version of a dependency you trust. The moment you update your lockfile — which you must eventually do, because dependencies receive security patches — you are exposed again.
CanisterSprawl made this explicit with its self-propagation mechanism. The worm doesn’t wait for a victim to update their lockfile; it creates new malicious versions and propagates continuously until all the original publisher’s packages are infected. Lockfile pinning buys time. It does not address the trust model.
What Would Actually Help
The technical fixes exist. npm already supports package provenance attestation — cryptographic proof that a published package was built from a specific Git commit in a specific CI/CD environment. Sigstore, which underpins this, is production-ready. What is missing is a mandate: npm does not require provenance attestation for any package, regardless of download volume.
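For a maintainer, opting in is a small change. A publishing workflow might look roughly like the following sketch for GitHub Actions (the repository layout and secret name are assumptions; the `--provenance` flag and the OIDC `id-token: write` permission are the real mechanism):

```yaml
# Hypothetical publish workflow. `--provenance` asks npm to attach a
# Sigstore attestation binding the tarball to this exact commit and CI run.
name: publish
on:
  release:
    types: [published]
permissions:
  id-token: write   # required for OIDC-based provenance
  contents: read
jobs:
  publish:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-node@v4
        with:
          node-version: 20
          registry-url: https://registry.npmjs.org
      - run: npm ci
      - run: npm publish --provenance --access public
        env:
          NODE_AUTH_TOKEN: ${{ secrets.NPM_TOKEN }}
```

Consumers can then run `npm audit signatures` in a project to verify registry signatures and provenance attestations for installed packages — but only for packages whose maintainers chose to publish with provenance, which is the gap the mandate would close.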
The analogous precedent exists in certificate infrastructure. Certificate authorities operating public trust anchors face mandatory audits, hardware requirements for root key operations, and revocation infrastructure. They reached this state after a series of high-profile failures — DigiNotar, Comodo, Symantec — that made the status quo untenable. npm supply chain attacks are producing the same failures at higher frequency and broader impact, but the ecosystem has not yet converged on mandatory controls.
At the enterprise level, the missing control is treating the package registry as a trust boundary equivalent to third-party code access. Private registries with allowlisting, integrity pinning by hash rather than version number, and mandatory review for new transitive dependencies all exist today, and all are deployed — but only at a minority of mature software engineering organisations. They are not the default, and they are not industry standard practice.
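One of those controls, mandatory review of new transitive dependencies, can be enforced mechanically: fail CI whenever the lockfile contains a package that has not been reviewed. A minimal sketch against the npm v2/v3 lockfile format, where every installed package appears under the `packages` map keyed by its path inside `node_modules` (the allowlist source and function name are assumptions):

```javascript
// Sketch of a CI gate for transitive dependencies: diff the packages
// recorded in package-lock.json against a reviewed allowlist and return
// anything unreviewed, so the build can fail on it.
function unreviewedPackages(lockfile, allowlist) {
  const allowed = new Set(allowlist);
  const marker = 'node_modules/';
  return Object.keys(lockfile.packages ?? {})
    // Skip the root project entry (keyed by ""); keep installed packages.
    .filter((path) => path.startsWith(marker))
    // Reduce nested paths like "node_modules/a/node_modules/b" to "b";
    // scoped names like "@scope/pkg" survive intact.
    .map((path) => path.slice(path.lastIndexOf(marker) + marker.length))
    .filter((name) => !allowed.has(name));
}
```

In practice the allowlist would live in the repository and change only through reviewed pull requests, which is what turns a silently added transitive dependency into a visible review event.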
The Wrong Conversation
After each npm compromise, security teams brief developers about dependency hygiene. After the Axios attack, CISA advised organisations to audit their build logs. Both are correct and necessary. Neither addresses the underlying condition.
The underlying condition is that enterprises have outsourced one of their most sensitive trust decisions — what code runs in their build systems and, by extension, their production environments — to the individual maintainers of packages they have never audited, with no contractual relationship, no security requirements, and no incident response obligations. The industry discusses this as “open-source risk” and treats it as a reason to be careful, not as a reason to change the model.
Three attacks in a week, targeting packages with 100 million weekly downloads, operated by a nation-state and a sophisticated organised actor, should be sufficient data to decide the model needs to change. The question is whether the ecosystem will move proactively or wait for the failure that finally makes the cost of the current approach undeniable.