
This is insanity and hubris.

"It's so easy!" Yes, if you have the language and tools du jour installed and up to date. I want none of that.

It was node and npm.

Then go.

Now Rust and cargo.

Oh, I forgot ruby.

And all this needs to be up to date or things break. (And if you do update them then things you are actively using will break.)

I don't need more Tamagotchis; in fact, the fewer I have, the better.

What happened to .deb and .rpm files? Especially since these days you can have GitHub Actions or a GitLab pipeline do the packaging for you. I couldn't care less what language you are using; don't try to force it down my throat.



Many of the popular rust cli tools like ripgrep, exa, delta, etc -do- have package manager install options.

How dare people writing cli tools not package them conveniently for my distro. The horror of using cargo instead of cloning the source and praying make/meson/etc works.

Feel free to package and maintain these tools yourself for your distro if you want.


I don't know about you, but in my experience, getting Cargo to work has been a much bigger pain than make/meson et al.


I've never had any issues with cargo. I use rustup to manage my rust toolchains and cargo for the most part.


> What happened to .deb and .rpm files?

The problem with those is they require global consistency. If one package needs libfoo-1.1 (or at least claims to), but something else needs libfoo-1.2+, we can't install both packages. It doesn't take long (e.g. 6 months to a year) before distro updates break one-off packages.
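The conflict is easy to see in a pair of hypothetical Debian control stanzas (package names made up for illustration). If both tools depend on a single package named libfoo, apt can only have one version of it installed at a time, so the two tools cannot coexist:

```
Package: tool-a
Depends: libfoo (= 1.1)

Package: tool-b
Depends: libfoo (>= 1.2)
```

In practice distros work around this by encoding the soname into the package name (libfoo1, libfoo2, ...), but that only helps across major versions that the distro itself chooses to ship.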

I think some people try hacking around this by installing multiple operating systems in a pile of containers, but that sounds awful.

My preferred solution these days is Nix, which I think of as a glorified alternative/wrapper for Make: it doesn't care about language, "packages" can usually be defined using normal bash commands, and it doesn't require global consistency (different versions of things can exist side by side, seen only by packages which depend on them).
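As a rough sketch of what "packages defined using normal bash commands" means, here is a minimal Nix derivation (pname, version, and the build commands are placeholders; `stdenv.mkDerivation` with bash build phases is the actual mechanism):

```nix
# default.nix -- hypothetical one-off package, built with plain bash
{ pkgs ? import <nixpkgs> {} }:

pkgs.stdenv.mkDerivation {
  pname = "mytool";      # placeholder name
  version = "0.1.0";
  src = ./.;
  # buildPhase/installPhase are ordinary shell, much like Make recipes
  buildPhase = ''
    cc -O2 -o mytool main.c
  '';
  installPhase = ''
    mkdir -p $out/bin
    cp mytool $out/bin/
  '';
}
```

Each derivation gets its own immutable store path, which is how different versions coexist side by side without global consistency.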


I'm the parent that you replied to. In my eyes there is nothing wrong with .deb and .rpm files. In fact, many of these tools are available for download in these formats and some others (Docker, snap, etc). And it is good that they do, but it comes with extra work to set up the pipelines/builds.

The concept of a language-specific package manager distributing not only libraries but also executables isn't new. Go get, ruby bundler, python pip, cargo and npm all have this feature.

I was originally answering a question about why we suddenly see all these "written in Rust" tools pop up. I think that is partly because Cargo provides an easier way to distribute native code to users on various platforms, without jumping through additional hoops like building a .deb and setting up an apt repository.

Sometimes you just want to get some code out there into the world, and if the language ecosystem you are in provides easy publishing tools, why not use them for the first releases? And if later your tool evolves and becomes popular, the additional packaging for wider distribution can be added.
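As a sketch of what "additional packaging can be added later" might look like: a GitHub Actions workflow that builds a .deb on tagged releases using the third-party `cargo-deb` subcommand (workflow layout and action versions are illustrative and may need updating):

```yaml
# .github/workflows/release.yml -- illustrative sketch
name: release
on:
  push:
    tags: ['v*']
jobs:
  deb:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - run: cargo install cargo-deb
      - run: cargo deb   # emits target/debian/<name>_<version>_<arch>.deb
      - uses: softprops/action-gh-release@v2
        with:
          files: target/debian/*.deb
```

This gives users a .deb download without the maintainer having to run an apt repository at all.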


Ease of use and familiarity are different things. Tooling around rust really is easy, when the alternatives (for equivalent languages) are CMake, autotools, and the like.

As it stands, I can brew install ripgrep and it just works. I don't need to know it's written in Rust. If, for some reason, Homebrew (or whatever other package manager) is lagging behind and I need a new release now, cargo install is a much easier alternative compared to, again, other tools built in equivalent languages.


Indeed. Thank you for stating this so clearly.

The "ease of use" and "familiarity" distinction reminds me of talks by people such as Rich Hickey who distinguish "simple" and "easy":

https://www.infoq.com/presentations/Simple-Made-Easy/

> Rich Hickey emphasizes simplicity’s virtues over easiness’, showing that while many choose easiness they may end up with complexity, and the better way is to choose easiness along the simplicity path.


The problem with .deb and .rpm is your dependencies: some things aren't packaged, and you end up having to build separate packages for each major Debian and Red Hat release to link against the correct dependency versions.

I'd love that to all be "one-command automated", but I haven't seen such a thing, unlike cargo, which I do find I can be productive with after a one page tutorial.


100% agree. I find it very funny, but in a sarcastic and totally wrong way, when a project's README has an Install section that reads:

  Run "cargo install myProject"
I know Rust, so Cargo is not alien to me. But come on, you know that your install instructions are a bit shitty.

Please, choose a target distro, then test your instructions in a clean Docker container. THEN you can sit down knowing you wrote proper guidance for users.

EDIT because this comment is being misunderstood: I meant that you should make sure your instructions work as-is from a clean installation of your intended distro(s), regardless of how you prefer to do so; using a Docker container is just my preferred method, but you can also do a clean VM or whatever else, as long as you don't assume anything beyond a default installed system.


Hold on, do you not see the insane contradiction of not wanting to rely on having cargo installed but requiring something is deployable and tested in a docker container? What?!


No, you misunderstood. I meant that if you're going to document a block of command-line instructions, you should first make sure those commands work as-is in a clean system.

A very easy way to do this (for me anyways) is using a Docker container. I use this method to test all of my documented commands. But there are other ways, of course, like using a clean VM. Regardless, just test the commands without assuming the customized state of your own workstation!

The point is that if I follow the hypothetical instructions of running "cargo install something", the result will probably be "cargo: command not found". When you test this in a clean system and hit that error, the burden of depending on Cargo lands on you, so the least you should do is make sure "cargo" will work for the user who is reading your docs. At a minimum, link to somewhere that explains how to install Cargo.

tldr: you should make sure your instructions work as-is from a clean installation of your intended distro(s), regardless of how you prefer to do so.
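One way to mechanize this check is a throwaway Dockerfile whose build fails if the README's commands don't work on a bare image ("myProject" is the placeholder name from above; the curl one-liner is the official rustup install command):

```dockerfile
# Hypothetical smoke test for README install instructions
FROM debian:stable-slim
RUN apt-get update && apt-get install -y curl build-essential
# The instructions must mention this step, or the next one fails:
RUN curl https://sh.rustup.rs -sSf | sh -s -- -y
# Paste the README's commands verbatim; a build failure means broken docs
RUN . "$HOME/.cargo/env" && cargo install myProject
```

If `docker build` succeeds from scratch, the documented commands genuinely work without any pre-existing workstation state.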


You're telling me that people who want to replace a command-line utility are the same people who can't install a toolchain (or just download a binary and put it in their path)?


As a single-sample statistic I can share with you: I like to think I'm a well-seasoned C/C++ developer, with experience in all sorts of relatively low-level technical stuff and a good grasp of the internals of how things (like e.g. the kernel) work.

Yet I got confused the first time ever some README told me to run "npm install blah". WTF is NPM? I didn't care, really, I just wanted to use blah. Conversely, later I worked with Node devs who would not know where to even start if I asked them to install a C++ toolchain.

The point is don't assume too much about the background of the people reading your instructions. They don't have in their heads the same stuff you take for granted.


There was a time that I didn't know what npm is (I'm not even remotely a web developer). So I used my computer to do some basic research.


Don't focus on the specifics, consider the NPM thing an analogy for any other piece of software.

I've found instances where some documentation said to run Maven, and the commands worked on the author's machine because Maven is highly dependent on customizations and a local cache. But they failed on other machines that didn't have a given parameter configured, or that package version cached locally. And trust me, Maven can be _very_ obtuse and hard to troubleshoot; too much implicit magic happening.

Testing in a clean container or VM would have raised those issues before the readme was written and published. Hence my point stands: testing commands in a clean system is akin to testing a web page in a private tab, to prevent any previous local state from polluting the test.


Testing in a clean container tests deploying in a clean container. For me, I run a computer :) Maven sounds like a nightmare, tbh, so I can understand that that specific piece of software has warts. That said, a good piece of package management software will be relatively agnostic to where it's run and have a dependable set of behaviours. I much prefer that to a world where every bit of software is run on any conceivable combination of OS and hardware. What an absolute drain on brain space and creative effort!


As someone who authors another shell (coincidentally similar to nushell), I can tell you that you'd be surprised at some of the bug reports you get.

Frankly, I prefer the "lucky 10,000" approach suggested by XKCD: https://xkcd.com/1053/


If it's deployable and tested in a Docker container, it's much easier to generate user images; it takes the onus away from the user, and the developer can just put it on the AUR or publish a .deb.


You happen to have cmake or autotools installed, others happen to have cargo installed.

Once cargo/cmake/autotools/make/npm/mvn/setup.py/whatever runs, the process of taking the output and packaging it for your preferred distro is the same.

There's more work involved if you want a distro to actually pick it up and include it in their repos (e.g. avoiding static linking), but if you're just after a .deb/.rpm built by GitHub Actions, that's not needed.


Why don't you download the native binaries then?

Rust isn't an interpreted language, you only need the rust toolchain if you want to build from source.


Binary releases seem uncommon from my perspective. Every time I go to install a piece of software written in Rust from homebrew, it invariably starts installing some massive Rust toolchain as a dependency, at which point I give up and move on. Maybe it's a case of the packagers taking a lazy route or something, or maybe there is a reason for depending on cargo. I have no idea.


Isn't homebrew specifically supposed to build from source? e.g. the example on the homepage of a recipe is running ./configure && make on wget.

What you seem to really be complaining about is that you installed the Xcode CLI tools (which that wget example needs) when you first set up Homebrew, because Homebrew itself requires them, and that you only get Cargo the first time you install a Rust dependency.


Homebrew tries to install binaries by default (they call them "bottles"). Building from source happens if a suitable bottle isn't available, or when `--build-from-source` is specified with the install command.

I know cargo is installed only once, but I don't want cargo. I don't build Rust software myself, so I don't want to have it hanging out on my system taking up space purely just so I can have one or two useful programs that were written in Rust and depend on it. I'll just go with some other alternative.


Do you have some specific examples?

E.g. ripgrep is packaged on most operating systems I have used, along with exa, and a few other Rust utils I use.

I certainly do not use Cargo to install them.


Perhaps the packagers on your platform went that extra mile to build binary packages. Taking a quick look, the Homebrew formula[0] for ripgrep on macOS just lists a dependency on Cargo (rust) and then seems to invoke the cargo command for installation. I'm not well versed in Ruby though, so my interpretation could be wrong.

I don't want to come off as entitled, either. I know the Homebrew folks are doing a ton of brilliant, ongoing work to make it work as well as it does, so I can't really blame them for potentially taking a shortcut here.

[0] https://github.com/Homebrew/homebrew-core/blob/HEAD/Formula/...


If it installs a bottle, then does it still require installing Rust? If so, then maybe that's a shortcoming of Homebrew.

Either way, it kinda seems like you're complaining about Homebrew here. Not Rust.

If having Cargo/Rust on your system is really a Hard No (...why?), then I guess find a package manager that only delivers whatever is necessary, or, if available, use the project's binary releases: https://github.com/BurntSushi/ripgrep/releases/tag/13.0.0

And actually, in the case of ripgrep, it provides a Homebrew tap that specifically uses the GitHub release binary: https://github.com/BurntSushi/ripgrep/blob/master/pkg/brew/r...


Ruby requires an interpreter at runtime. JavaScript too. Rust produces standalone binaries. So no, "things don't break" and you only compile things once.

// I couldn't care less about .deb or .rpm files, so don't try to force that down my throat.


There's no win/win scenario when comparing shared libraries to static binaries. On the one hand, static binaries are more user friendly. On the other, they move the responsibility for keeping your OS secure away from the OS/distro maintainers.

For example, if a vulnerability is found in a crate, you then have to hope that every maintainer who manages a Rust project importing said crate diligently pushes out newer binaries quickly. You then have multiple applications that need to be updated rather than one library.

This may well be a future problem we'll end up reading more about as Rust, Go and others become more embedded in our base Linux / macOS / etc install.


It is Gentoo all over again.


I agree that it's not ideal, but unfortunately bad decisions by Linux distributions and package maintainers have trained me as a user to avoid the package managers if I want up to date software with predictable and documented defaults.



