
> rust insists on a very specific version of various dependencies

It only insists on semver-compatible versions (if a Rust/Cargo package asks for libfoo 5.1, it will accept libfoo 5.9). Cargo keeps one copy per major version, which is not that different from Debian packaging "libfoo5" and "libfoo7" when both are needed.

The difference is that Cargo unifies versions by updating them to the newest compatible release, while Debian unifies them by downgrading to old unsupported versions, ignoring compatibility, reintroducing bugs, and disabling required functionality.
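As a sketch of how that looks in a Cargo.toml (libfoo is a made-up crate):

    [dependencies]
    # Cargo reads "5.1" as "^5.1": any semver-compatible 5.x >= 5.1.0,
    # so this unifies with another crate's libfoo = "5.9" requirement
    # by resolving both to the newest available 5.x.
    libfoo = "5.1"
    # A requirement on a different major version elsewhere in the tree
    # (e.g. libfoo = "7.2") gets its own separate copy, much like Debian
    # shipping libfoo5 and libfoo7 side by side.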


1. Because Windows/macOS/iOS/Android don't have a built-in package manager at the granularity of individual libraries, but modern programming languages still want first-class support for all these OSes, not just to smugly tell users their OS is inferior.

2. Because most Linux distros can only handle very primitive updates based on simple file overwrites, and keep calling everything impossible to secure if it can't be split and patched within the limitations of a C-oriented dynamic linker.

3. Because Linux distros support a very wide spread of library versions, and often make arbitrary choices about which versions and which features are allowed. That's a burden for programmers, who can't simply pick a library and use it, but must deal with an extra compatibility matrix of outdated, buggy versions and disabled features.

From a developer's perspective, with language-specific package managers:

• You use 1 or 2 languages per project and only need to deal with a couple of package repositories, which give you the exact deps you want, and it works the same on every OS, including cross-compilation to mobile.

From a developer's perspective, using OS package managers:

• Packages have different names on each Linux distro and are installed with different commands. There's no way to specify deps in a universal way. Each distro has a range of LTS/stable/testing flavors, each with a different version of a library. Debian has versions so old they're worse than not having them at all, plus bugs reintroduced by the removal of vendored patches.

• macOS users may not have any package manager, or may have an obscure one, and even if they have the popular Homebrew, there's no guarantee they have the libs you want installed and kept up to date. pkg-config will give you paths pinned to a precise library version, and unless you work around that, your binary will break when the lib is updated.

• Windows users are screwed. There are several fragmented package managers, which almost nobody has installed. They have few packages, and there's a lot of fiddly work required to make anything build and install properly.

• Supporting mobile platforms means cross-compilation, and you can't use your OS's package manager.

OS-level packaging suuuuuuuucks. When people say that dependency management in C and C++ is a nightmare, they mean the OS-level package managers are a nightmare.


It's still an improvement, because you can find what creates that MysteryObject, and you know how to pass it to a function that takes a &MysteryObject or Rc<MysteryObject> or another variation.

In C, it's all MysteryObject*, and you have no clue whether that's a Box, Rc/Arc, &mut, Cow, MutexGuard<MysteryObject>, or maybe a slice, &[MysteryObject].

And if you're unlucky you get functions taking or returning void*.
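For illustration, a minimal Rust sketch (MysteryObject and the function names are invented) of how each signature spells out what you're holding, where in C every one of these would be the same MysteryObject*:

    use std::rc::Rc;

    struct MysteryObject {}

    fn borrow_it(_obj: &MysteryObject) {}      // borrowed; the caller keeps ownership
    fn share_it(_obj: Rc<MysteryObject>) {}    // shared ownership, refcounted
    fn consume_it(_obj: Box<MysteryObject>) {} // heap-allocated; the callee frees it
    fn scan_them(_objs: &[MysteryObject]) {}   // a borrowed slice of many

    fn main() {
        let obj = MysteryObject {};
        borrow_it(&obj);
        scan_them(std::slice::from_ref(&obj));
        consume_it(Box::new(obj));
        share_it(Rc::new(MysteryObject {}));
    }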


Well, sometimes it's not actually a MysteryObject. It's some special type like SpecificCaseMysteryObject that's been unified into MysteryObject, but only MysteryObject appears in the signature.


Radiance Cascades is a technique for rendering real-time soft shadows/global illumination.

This paper uses it in the context of astrophysics, but it has a good up-to-date explanation of the algorithm in general, not limited to astrophysics.


Seriously, talk about poor naming. Even the start of this paper reads the same as the other Radiance Cascades GI paper.


> Packaging a dozen version of the same library?

Cargo uses semver to unify minor dependency versions, so you don't have dozens of versions; it's one per major version. That's not even that different from what Debian does when it packages "libfoo5" and "libfoo7". Cargo can have them coexist automatically without renaming, but apt needs this done manually, which creates lots of useless busywork for maintainers, who need to map Cargo's requirements onto apt's patched forks of Cargo packages.

The real difference is that Cargo unifies versions by upgrading packages to the latest compatible version, while Debian unifies by downgrading to outdated, incompatible, unsupported junk.

> What is the alternative to relaxing deps?

In this case, Debian un-forked bindgen, which removed an important patch that the project required:

https://github.com/koverstreet/bcachefs-tools/issues/202#iss...

This doesn't need any sophisticated technology. It just requires not futzing with upstream's dependency requirements.


Things are called AI only until they can be done well by a computer, and then they become just an algorithm.

There was a time when winning at Chess was proof of humans' superior intellect, and then it became just an algorithm. Then Go.


That's why Firefox needs a userbase too large to ignore.

If the overwhelming majority of users submits to Google, then Google has the power to erode privacy for everyone.


It works okay with Rust, but it's not really needed, except when integrating C code with Rust.

For verifying unsafe code, Rust has the Miri interpreter, which catches UB more precisely: it knows Rust's aliasing rules and precise object boundaries, and it has access to lifetimes (which don't survive compilation).
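For example, a minimal sketch of aliasing UB that compiles cleanly and may even appear to run fine natively, but which `cargo miri run` flags:

    fn main() {
        let mut x = 0u8;
        let p = &mut x as *mut u8;
        unsafe {
            let a = &mut *p;
            let b = &mut *p; // creating `b` invalidates `a` under Rust's aliasing rules
            *b += 1;
            *a += 1; // UB: `a` is no longer a valid exclusive reference; Miri reports this
        }
    }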

Non-deliberate leaking of memory is not possible for the majority of Rust types. In safe Rust, it requires a specific combination: a refcounted type, with interior mutability, containing a type that makes the refcounted smart pointer recursive. Types that meet all three conditions at once are niche.
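A minimal sketch of that niche combination (the Node type is invented): refcounting (Rc) plus interior mutability (RefCell) plus a recursive field allow building a cycle that is never freed, entirely in safe Rust:

    use std::cell::RefCell;
    use std::rc::Rc;

    struct Node {
        // recursive through the refcounted smart pointer
        next: RefCell<Option<Rc<Node>>>,
    }

    fn main() {
        let a = Rc::new(Node { next: RefCell::new(None) });
        let b = Rc::new(Node { next: RefCell::new(Some(Rc::clone(&a))) });
        // Close the cycle a -> b -> a: both refcounts stay above zero,
        // so neither destructor ever runs and the allocation leaks.
        *a.next.borrow_mut() = Some(b);
    }

Remove any one of the three ingredients and the cycle can't be built.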

The only annoyance/incompatibility is that Valgrind complains that global variables are leaked. Rust does that intentionally: static destructors have the same problem as SIOF[1] in reverse, and tricky interactions with atexit mean there's no reliable way to destruct arbitrary globals.

[1]: https://en.cppreference.com/w/cpp/language/siof


Hard disagree that it's not needed. MIRI's ability to verify non-trivial examples of code is quite limited, so I generally just discount it. Valgrind goes beyond asan+msan with heap and CPU profiling, which can be useful if you're trying to extract every last bit of performance.


Poe's law.


Rust feels impossible to use until you "get" it. It eventually changes from fighting the borrow checker to disbelief at how you used to write programs without the assurances it gives.

And once you get past fighting the borrow checker, it's a very productive language: with the standard containers and iterators, you can get a lot done with high-level code that looks more like Python than C.
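For instance, a made-up snippet in that style, with no manual loops, index variables, or size calculations:

    fn main() {
        let words = ["rust", "feels", "impossible", "until", "it", "clicks"];
        // filter, transform, and collect in one declarative chain
        let shouty: Vec<String> = words
            .iter()
            .filter(|w| w.len() > 4)
            .map(|w| w.to_uppercase())
            .collect();
        println!("{shouty:?}"); // ["FEELS", "IMPOSSIBLE", "UNTIL", "CLICKS"]
    }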


I agree, but that's no different from C with a decent library of data structures. And even when you become more borrow-checker aware and able to anticipate most of the issues, there are still cases where the solution is either non-obvious or requires doing things in indirect ways compared to C or C++.


The quality difference between generics and proc macros vs. the hoops C jumps through instead is pretty significant. The way you solve this in C is also non-obvious, but it doesn't seem that way when you have a lot of C experience.

I've been programming in C for 20 years, and didn't realize how much of using it productively wasn't a skilful craft, but busywork that doesn't need to exist.

This may sound harsh, but sensitivity to definition order, and the fragility of headers combined with a global namespace, are just a waste of time. These aren't problems worth caring about.

Every function having its own idea of error handling is also nuts. Having to be diligent about error checking and cleanup is not a point of pride, but a compiler deficiency.
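Compare with Rust's single Result convention, where the ? operator propagates errors and destructors handle cleanup on every exit path. A sketch with an invented config-reading helper:

    use std::fs;

    // Every fallible step returns a Result, and `?` forwards the error
    // upward, converting it via From into the function's error type.
    fn read_port(path: &str) -> Result<u16, Box<dyn std::error::Error>> {
        let text = fs::read_to_string(path)?; // io::Error, converted automatically
        let port: u16 = text.trim().parse()?; // ParseIntError, same mechanism
        Ok(port)
    }

    fn main() {
        match read_port("port.txt") {
            Ok(port) => println!("listening on {port}"),
            Err(e) => eprintln!("config error: {e}"),
        }
    }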

Maintenance of build scripts is not only unnecessary effort, it makes everything downstream of them worse. I can literally have no build scripts at all, and still work on projects bigger than ever. I can open a large project with an outrageous number of dependencies and have it build on the first try, integrate with IDEs, generate API docs, and run unit tests out of the box. It usually works on Windows too, because the POSIX-vs-Windows schism can be fixed with a good standard library and cross-platform dependency management.

Multi-threading can be the standard default for every function (automatically verified through the entire call graph, including 3rd-party code), not an adventurous novelty.
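A sketch of what that verification looks like: thread::spawn requires its closure to be Send, and the compiler proves this transitively for everything the closure touches:

    use std::sync::Arc;
    use std::thread;

    fn main() {
        // Substituting a non-thread-safe Rc<i32> here would be a compile
        // error, caught through the entire call graph; Arc is Send, so
        // handing it to another thread is verified safe.
        let shared = Arc::new(1);
        let handle = thread::spawn(move || println!("{shared}"));
        handle.join().unwrap();
    }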

