
The fact that HTTP fetches and fs reads don't prompt the user is still the craziest part of `npx` and `package.json`'s `postinstall`.

Does anyone have a solution to wrap binary execution (or npm execution) and require explicit user authorization for network or fs calls?




I believe the Deno permission system[0] does what you're asking, and more.

(Deno is a JavaScript runtime co-created by Ryan Dahl, who created Node.js - see his talk "10 Things I Regret About Node.js"[1] for more of his motivations in designing it.)

[0] https://docs.deno.com/runtime/fundamentals/security/

[1] https://www.youtube.com/watch?v=M3BM9TB-8yA
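
For example (a minimal sketch; the file name and URLs here are made up), a Deno script has to be granted each capability explicitly, otherwise the runtime prompts for or denies the access at the point it happens:

  // fetch_config.ts
  // Run with explicit grants, e.g.:
  //   deno run --allow-read=./config.json --allow-net=api.example.com fetch_config.ts
  // Without those flags, Deno prompts for (or denies) each access at runtime.
  const config = JSON.parse(await Deno.readTextFile("./config.json")); // needs --allow-read
  const res = await fetch("https://api.example.com/status");           // needs --allow-net
  console.log(config, res.status);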


Yes, explicitly asking whether you want to run the install script is the first warning (which pnpm can do too).

Then it would halt at the file access or network permission checks.

It could still get you if you lazily allow everything everywhere, though, which is why you shouldn’t do that.


Yes, and you can run almost any npm package:

  deno run npm:@angular/cli --help


pnpm skips all `postinstall` runs by default now. You can explicitly allow-list specific ones.

If you use that, I'd highly recommend configuring it to throw an error instead of just silently skipping the postinstall though: https://github.com/karlhorky/pnpm-tricks#fail-pnpm-install-o...


Bun does the same.


Sure, but switching from node to bun is a much more invasive change than switching from npm to pnpm. And not always possible.


It's quite easy, actually. We did this at work, recently.

Bun is two things. Most commonly, it's known for its Node-competitor runtime, which is of course a very invasive change. But it can also be used purely as a package manager, with Node as your runtime: https://bun.sh/docs/cli/install

As a package manager, it's much more efficient, and I would recommend switching over. Haven't used pnpm, though--we came from yarn (v2, and I've used v1 in the past).

We still use Node for our runtime, but package install time has dropped significantly.

This is especially felt when switching branches on dev machines where one package in the workspace has changed, causing yarn to retry all packages (even though yarn.lock exists), whereas bun only downloads the new package.


Yes. I was more pointing out that blocking postinstall scripts is becoming a trend across multiple projects. Possibly a portent for the ecosystem as a whole. I could have communicated that more clearly.


I will say, for all the (sometimes valid) complaints about npm and its ecosystem, I don’t hear the same about Go.

Go encourages package authors to simply link to their git repository. It is quite literally cloning source files onto your computer without much thought.


No code execution during dependency fetching. And the dependency tree is very shallow for most projects, making it easier to audit.


Still no guard rails, just raw source code. It would be easy for anything to be hiding within. Given observed behavior, I doubt most people are auditing the source either.

It’s ripe for an exploit


That's a different issue, which most libraries will have - when you run their code, they may do extra things.

This is talking about the same thing, but at install time as well.


Two differences:

Best practice now is to not run postinstall scripts by default. Yarn and pnpm allow you to do this (and pnpm at least won’t run them by default) and I believe npm now does too, and is looking at a future where it won’t run them by default.

The other difference is Go had several chances to do better, and they didn’t take any steps to do so.

The maintainers of npm (the registry and the tool), I’m sure, would love to make a lot of changes to the ecosystem, but they can’t make some of them without breaking too much. At the scale npm operates, it’s always going to be playing catch-up, with workarounds and such for previous choices, so that they don’t, say, break hundreds of thousands of CI runs simultaneously.

Go iterated on its package ecosystem several times and ultimately did very little with it. They didn’t make it vastly more secure by default in any way, they were actually going to get rid of vendoring at one point, and a whole host of other SNAFUs.

Go’s packaging and distribution model, while simple, is extremely primitive, and they have yet to really adopt anything in this area that would benefit security.


Just package node_modules subdirectories as tar files.

I stopped using npm a while back and push and pull tar files instead.

Naturally I get js modules from npm in the first place, but I never run code with it after initial install and testing of a library for my own use.


This is a valid choice, but you must accept some serious trade-offs. For one thing, anyone wanting to trust you must now scrutinize all of your dependencies for modification. Anyone wanting to contribute must learn whatever ad hoc method you used to fetch and package deps, and never be sure of fully reproducing your build.

The de facto compromise is to use package.json for deps, but your distributable blob is a docker image, which serializes a concrete node_modules. Something similar (and perhaps more elegant) is Java's "fat jar" approach where all dependencies are put into a single jar file (and a jar file is just a renamed zip so it's much like a tarball).


It may not be a well-known feature, but npm can unpack tarballs as part of the install process; that’s how packages are served from the CDN.

If you vendor and tar your dependencies correctly, you could build a system around trust layers, for instance by inspecting hashes before allowing unpacking.

It’s a thought exercise, certainly, but there might be legs to this idea.
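
A rough sketch of one such trust layer (the tarball path and pinned digest are placeholders; npm lockfiles pin the same kind of sha512/base64 integrity value):

  // verify-vendored.ts: check a vendored tarball's hash before letting npm unpack it.
  import { createHash } from "node:crypto";
  import { readFileSync } from "node:fs";
  import { execFileSync } from "node:child_process";

  const tarball = "./vendor/left-pad-1.3.0.tgz";        // hypothetical vendored dep
  const expected = "sha512-REPLACE_WITH_PINNED_DIGEST"; // pin this in your repo

  const digest = "sha512-" +
    createHash("sha512").update(readFileSync(tarball)).digest("base64");

  if (digest !== expected) throw new Error(`hash mismatch for ${tarball}`);

  // Only hand the tarball to npm once the hash matches; skip lifecycle scripts too.
  execFileSync("npm", ["install", "--ignore-scripts", tarball], { stdio: "inherit" });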


I think Yarn zero install is now the default, and does the same thing you're advocating? I'm not really a JS person, but it looks like it's done reasonably competently (validating checksums etc).


You're already including arbitrary code in your application. Presumably you're intending to run that application at some point.


What is the answer to that? Learn x86 and bootstrap?


Capability-based security within Node. The main module gets limited access to the system (restricted by the command-line, with secure defaults), all dependencies have to be explicitly provided with capabilities they need (e.g. instead of a module being able to import "fs", it receives only an open directory handle to the directory that the library consumer dictates). Deno already does the first half of this.


I kinda wish there were programming languages where, instead of saying `import module`, you said "I must be run in a context where I have access to a function with this prototype". Effectively functions instead of modules, alongside duck-typed OO if you use OO.

The problem is, as soon as it becomes remotely popular, every module is going to end up saying "I must be run in a context where I have access to all the functions version 13.2 of the filesystem module wrapped up in a structure that claims to be version 13.2 of the filesystem module and which has been signed by the private key that corresponds to the filesystem module author's public key" - even though they only need a random access file handle for use as a temporary file - because otherwise developers will be anxious about leaked implementation details preventing them from making version 1.4.16 (they'll just have to make version 2.0 - who cares? their implementation detail is my security).


As an alternative (and this is what capability-based design is all about), instead of replacing dependencies, only give the main function of the app access to system calls. It has to pass those on to any dependencies it wants, and so forth. The system calls are a small, mostly unchanging set of primitive items, and any dependency can wrap them up in whatever API suits them.

Example: in order to open a file for writing, you need a capability object corresponding to write access to the file's parent directory. Now you can be sure that a dependency doesn't write any files unless you actually pass it one of these capability objects.
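
A minimal sketch of that shape in today's Node (every name here is invented for illustration; nothing actually stops a dependency from importing "fs" on its own today, so real enforcement would need runtime support):

  // cap-sketch.ts: hand a dependency a narrow "write into this directory" capability
  // instead of letting it import "node:fs" itself.
  import { mkdir, open } from "node:fs/promises";
  import { join } from "node:path";

  type DirWriteCap = { writeFile(name: string, data: string): Promise<void> };

  // What a dependency would export: it never touches "fs" directly.
  async function writeCacheManifest(dir: DirWriteCap) {
    await dir.writeFile("manifest.json", JSON.stringify({ version: 1 }));
  }

  // The app's entry point decides which directory the dependency may write into.
  function dirWriteCapFor(path: string): DirWriteCap {
    return {
      async writeFile(name, data) {
        const handle = await open(join(path, name), "w"); // no ".." escape handling in this sketch
        try { await handle.writeFile(data); } finally { await handle.close(); }
      },
    };
  }

  await mkdir("./cache", { recursive: true });
  await writeCacheManifest(dirWriteCapFor("./cache"));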


I think the WASM/WASI environment may be closest to this. But it's an interesting idea.


Same as Python (setup.py). It's even worse in Go, as they encourage just linking GitHub repos at whatever the latest version currently is.

Only Java, .NET, and R just download files at a declared (reproducible) version.


The lavamoat npm package does something similar. It's maintained by the security team at MetaMask (the crypto wallet extension and app). It's used in the extension runtime and also wraps the build process.


We built “safe npm”, a CLI tool that transparently wraps the npm command and protects developers from malware, typosquats, install scripts, protestware, telemetry, and more.

You can set a custom security policy to block or warn on file system, network, shell, or environment variable access.

https://socket.dev/blog/introducing-safe-npm


npm should run in Docker containers by default, at least to restrict access to the project being built.

The result of the build will run on the machine anyway, but once again, it should be in Docker.


It blows my mind that developers will install things like npm, random libraries, and so on on their machines, sometimes their personal ones with the keys to the kingdom, so to speak. But then again, people are now installing MCP servers the same way and letting LLMs run the show. Incredible, really.


So, what do you do for Windows and macOS users, in corporate environments, who don’t have access to virtualization on local machines? This describes most of the places I’ve worked as a consultant.

Container technology is awesome, and it’s a huge step forward for the industry, but there are places where it’s not feasible to use, at least for now.


Docker is not a security boundary.


Sure it is. It isn't airtight but then what is?

Even KVM escapes have been demonstrated. KVM is not a security boundary ... except that in practice it is (a quite effective one at that).

Taken to the extreme you end up with something like "network connected physical machines aren't a security boundary" which is just silly.


> Taken to the extreme you end up with something like "network connected physical machines aren't a security boundary" which is just silly.

1. This is why some places with secret enough info keep things airgapped.

2. OTOH, from what I recall hearing, the machines successfully targeted by Stuxnet were airgapped.


Yeah, you have to move it off-planet to achieve an actual security boundary.

In our threat model the upper bound on the useful lifetime of the system is limited by the light-distance time from the nearest adversary.


Ah yes, the "maximally aggressive grey goo" threat model.


No software is perfect, but there is a massive difference between these two boundaries. If there is an escape in KVM, it's newsworthy, unlike in Docker. I don't feel like pulling up CVEs, but anybody following the space should know this.


There's an even bigger difference between using Docker and not using any sort of protection; it's always going to be a security vs. convenience tradeoff. Telling people who want to improve their security posture (currently non-existent) that "Docker is not a security boundary" isn't very pragmatic.

What percentage of malware is programmed to exploit Docker CVEs vs. just scanning $HOME for something juicy? Swiss cheese model comes to mind.


It is better in the same way a rope is better than no seat belt at all. Recommending Docker as a sandbox gives a false sense of security.


Use Rust



They are going to mess you up, but at least it was memory safe.

Much better.


According to the comment below, it should be “Use Java”.


My comment was made in jest


Definitely stop using jest


I bet someone has already entertained the idea of adding a cryptominer to Jest; nobody would notice a slight increase in test running times on CI. Maybe it could even start funding open source maintainers enough to finally make ES6 modules non-experimental.



