Running AI coding setups in containers (or even just VMs) seems like a solid default, and I’d love to see tooling move in that direction by default—less as a hard security perimeter, more as a safety net for people trying to move fast.
Re: the article’s conclusion—I get the skepticism. For what it’s worth, the product came after years of trying to solve the problem of package security and maintainer funding in the open. At some point, it felt like the best way to make a dent was to build something dedicated to it.
Hi — I’m the security firm CEO mentioned, though I wear a few other hats too: I’ve been maintaining open source projects for over a decade (some with 100s of millions of npm downloads), and I taught Stanford’s web security course (https://cs253.stanford.edu).
Totally understand the skepticism. It’s easy to assume commercial motives are always front and center. But in this case, the company actually came after the problem. I’ve been deep in this space for a long time, and eventually it felt like the best way to make progress was to build something focused on it full-time.
– The JavaScript ecosystem moves faster — way more packages, more frequent updates, and more transitive dependencies (avg 79 per package).
– npm has lower barriers to publishing, so it’s easier for malicious actors to get in.
– Java developers often use internal mirrors and have stricter review processes, while npm devs tend to install straight from the registry.
– But to be clear, supply chain attacks do happen in other ecosystems — they’re just underreported. We’ve seen similar issues in PyPI, RubyGems, and even Maven.
JavaScript just happens to be the canary in the coal mine.
Exactly. Linus’s Law — “given enough eyeballs, all bugs are shallow” — falls apart when everyone assumes someone else is doing the watching. In reality, most packages (and especially their transitive dependencies) get zero meaningful review. Attackers know this and exploit it. Community vigilance just doesn’t scale — we need better tools to actually inspect what code is doing.
I agree with you. To be fair though, the concept likely seemed more reasonable in 1999. Hardware, browsers, and websites (and their front- and back-end services) were all less complex back then. Also less bloat. Not that things were more secure, but a popular tool may have had more meaningful review.
At times, complexity is worth the trade-offs. Modern C++ compilers are more complex than ones in the 80s and 90s, but the assembly code they generate runs much faster. Rust is complex but provides massive security benefits while maintaining great performance.
At times though, stuff is just bloated or poorly designed.
But it's not always clear how to design a project intelligently. The more features you pile into a single large project, the more unwieldy it becomes to maintain and the harder it is to audit as a critical piece of infrastructure. Yet if you don't add enough features, people will reach for packages from random devs, risking their own security and hurting the maintainability of their own projects.
I don't know how we solve that problem. Alternatively, you could ask devs to reinvent the wheel and write a lot more code themselves (which they probably won't, either because they don't want to or because their employer demands a solution on too short a timeline), but that could also jeopardize security. Many if not most web devs have to deal with authentication and encryption, both of which the overwhelming majority very much should not implement on their own. Good luck asking a junior dev to correctly implement AES-256 encryption (or something equivalent or better) without existing libraries.
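To be concrete, the sane version of that is a few lines calling a vetted implementation rather than hand-rolling primitives. Rough sketch using Node's built-in crypto module (illustrative only; key management, storage, and error handling are deliberately left out):

    const crypto = require('crypto');

    // AES-256-GCM via Node's built-in crypto.
    // key must be 32 random bytes; the IV must be unique per message.
    function encrypt(plaintext, key) {
      const iv = crypto.randomBytes(12);
      const cipher = crypto.createCipheriv('aes-256-gcm', key, iv);
      const ciphertext = Buffer.concat([cipher.update(plaintext, 'utf8'), cipher.final()]);
      return { iv, ciphertext, tag: cipher.getAuthTag() };
    }

    function decrypt({ iv, ciphertext, tag }, key) {
      const decipher = crypto.createDecipheriv('aes-256-gcm', key, iv);
      decipher.setAuthTag(tag); // reject tampered ciphertexts
      return Buffer.concat([decipher.update(ciphertext), decipher.final()]).toString('utf8');
    }

    const key = crypto.randomBytes(32); // in practice: derive and store this properly
    console.log(decrypt(encrypt('hello', key), key));

Even this "easy" path has footguns (nonce reuse, key storage), which is exactly why the answer can't just be "write it yourself."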
The answer is almost certainly some kind of mix, but it's not clear what exactly that should look like.
Totally agree. Most companies using mirrors or proxies like Artifactory aren’t getting much real protection.
- They cache packages but don’t analyze what’s inside.
- They scan or review the first version, then auto-approve every update after that.
- They skip transitive deps — and in npm, that’s 79 on average per package.
- They rely on scanners that claim to detect supply chain attacks but just check for known CVEs. The CVE system doesn’t track malware or supply chain attacks (except rarely), so it misses 99%+ of real threats.
Almost everything on the market today gives a false sense of security.
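To make the gap concrete: a CVE scanner won't flag a new version of a transitive dep that quietly adds a postinstall script. You can see how much of that surface exists in your own tree with a few lines of Node (rough sketch, built-ins only; it walks node_modules and prints every package that runs code at install time):

    const fs = require('fs');
    const path = require('path');

    // Recursively walk node_modules (including scoped and nested packages)
    // and report every package that declares an install-time script.
    function findInstallScripts(dir, results = []) {
      if (!fs.existsSync(dir)) return results;
      for (const entry of fs.readdirSync(dir)) {
        const pkgDir = path.join(dir, entry);
        if (entry.startsWith('@')) {            // scoped packages: @scope/name
          findInstallScripts(pkgDir, results);
          continue;
        }
        const manifest = path.join(pkgDir, 'package.json');
        if (fs.existsSync(manifest)) {
          const pkg = JSON.parse(fs.readFileSync(manifest, 'utf8'));
          const scripts = pkg.scripts || {};
          for (const hook of ['preinstall', 'install', 'postinstall']) {
            if (scripts[hook]) results.push(`${pkg.name}@${pkg.version} ${hook}: ${scripts[hook]}`);
          }
          findInstallScripts(path.join(pkgDir, 'node_modules'), results); // nested deps
        }
      }
      return results;
    }

    console.log(findInstallScripts('node_modules').join('\n'));

Most teams are surprised by how long that list is, and a mirror or CVE scanner says nothing about any of it.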
One exception is Socket — we analyze the actual package behavior to detect risks in real time, even in transitive deps. https://socket.dev (Disclosure: I’m the founder.)
Totally agree — we’re going to look back and wonder how we ever shipped code without knowing what was in our dependencies. Socket is working on exactly this: we analyze the actual code of open source packages to detect supply chain risks, not just known CVEs. We support npm, PyPI, Maven, .NET, RubyGems, and Go. Would love to hear which ecosystems you care about most.
We built “safe npm”, a CLI tool that transparently wraps the npm command and protects developers from malware, typosquats, install scripts, protestware, telemetry, and more.
You can set a custom security policy to block or warn on file system, network, shell, or environment variable access.
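In practice it looks roughly like this (exact commands may differ by CLI version, so check the docs):

    npm install -g socket          # install the Socket CLI
    socket npm install <package>   # same as `npm install`, but the packages are checked first
                                   # and you get prompted before anything risky runs
    socket wrapper --enable        # optional: alias `npm` to `socket npm` in your shell

The policy (block vs. warn on filesystem, network, shell, or env var access) is configured per org/project rather than baked into the commands.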