
The most interesting criticism / idea in the article was that the parts that are intended for Rust-ification should actually be removed from core apt.

> it would be better to remove the code that is used to parse the .deb, .ar, and .tar formats [...] from APT entirely. It is only needed for two tools, apt-ftparchive and apt-extracttemplates [...]
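
(For context on those formats: a .deb is an ar archive whose members are typically debian-binary, control.tar.* and data.tar.*. The ar container itself is simple; the sketch below, in Rust, lists the members of such an archive. It is illustrative only, not apt's code, and it glosses over GNU ar name quirks and error handling.)

    // Minimal sketch: list member names of an ar archive such as a .deb.
    // Not apt's implementation; real code must validate name/size fields,
    // handle GNU long names, and then parse the nested tar members.
    use std::env;
    use std::fs::File;
    use std::io::{Read, Seek, SeekFrom};

    fn main() -> std::io::Result<()> {
        let path = env::args().nth(1).expect("usage: list-deb <file.deb>");
        let mut f = File::open(path)?;

        let mut magic = [0u8; 8];
        f.read_exact(&mut magic)?;
        assert_eq!(&magic, b"!<arch>\n", "not an ar archive");

        loop {
            let mut header = [0u8; 60]; // fixed-size ASCII member header
            if f.read_exact(&mut header).is_err() {
                break; // end of archive
            }
            let name = String::from_utf8_lossy(&header[0..16]).trim_end().to_string();
            let size: u64 = String::from_utf8_lossy(&header[48..58])
                .trim()
                .parse()
                .expect("bad size field");
            println!("{name} ({size} bytes)");
            // Member data is padded to a 2-byte boundary.
            f.seek(SeekFrom::Current((size + size % 2) as i64))?;
        }
        Ok(())
    }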

Another interesting, although perhaps tangential, criticism was that the "new solver" currently lacks a testsuite (unit tests; it has integration tests). I'm actually kind of surprised that writing a dependency solver is a greenfield project instead of using an existing one. Or is this just a dig at something that pulls in a well-tested external module for solving?

Posted in curiosity, not knowing much about apt.





It seems silly to say that it has no tests. If I had to pick between unit and integration tests, I'd pick integration tests every time.

It has integration tests.

Dependency solvers are actually an area that can benefit from updating IMO.

Given that Cargo is written in Rust, you would think there would be at least one battle-tested solver that could be used. Perhaps it was harder to extract and make generic than to write a new one?

Cargo's solver incorporates concepts that .debs don't have, like Cargo features, and I'm sure that .debs have features that Cargo packages don't have either.

Historically apt hasn't had much of a "solver". It basically takes the user's upgrade/install action; if there's some conflict or versioned requirement, it goes to the candidate version (≈newest, barring pinfile shenanigans) of the involved packages, and if there's still a conflict, it bails.
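Roughly like this (a toy sketch, not apt's actual code; the names and data structures are made up):

    use std::collections::HashMap;

    struct Package {
        candidate: u32,              // candidate version (newest, barring pins)
        depends: Vec<(String, u32)>, // (dependency name, minimum version)
    }

    // Classic apt-ish behaviour: walk the dependencies of the requested
    // package, consider only each dependency's candidate version, and bail
    // on the first unsatisfied constraint instead of searching alternatives.
    fn try_install(index: &HashMap<String, Package>, name: &str) -> Result<Vec<String>, String> {
        let mut plan = vec![name.to_string()];
        let mut i = 0;
        while i < plan.len() {
            let pkg = index
                .get(&plan[i])
                .ok_or_else(|| format!("{} is not in the index", plan[i]))?;
            for (dep, min) in &pkg.depends {
                let cand = index
                    .get(dep)
                    .ok_or_else(|| format!("{dep} is not in the index"))?;
                if cand.candidate < *min {
                    // No backtracking, no older versions considered: give up.
                    return Err(format!("candidate of {dep} is too old, bailing"));
                }
                if !plan.contains(dep) {
                    plan.push(dep.clone());
                }
            }
            i += 1;
        }
        Ok(plan)
    }

    fn main() {
        let mut index = HashMap::new();
        index.insert("app".into(), Package { candidate: 2, depends: vec![("lib".into(), 3)] });
        index.insert("lib".into(), Package { candidate: 2, depends: vec![] });
        println!("{:?}", try_install(&index, "app")); // Err: candidate of lib is too old
    }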

It was always second-tier utilities like Aptitude that tried to search for a "solution" to conflicting packaging constraints, but this has always been outside of the core functionality, and if you accepted one of Aptitude's proposed paths, you would do so knowing that the next apt dist-upgrade was almost certainly going to hose everything again.

I think the idea in Apt-world is that it's the responsibility of the archive maintainer to at all times present a consistent index for which the newest versions of everything can coexist happily together. But this obviously breaks down when multiple archives are active on the same apt conf.


Cargo isn't satisfied with its own solver either. Solvers are a hard and messy problem.

The problem is theoretically NP-complete (it's essentially SAT), but in practice even harder than that: users also care about picking solutions that optimize for multiple criteria like minimal changes, more recent versions, and minimal duplication (if multiple versions can coexist), all while having easy-to-understand errors when dependencies can't be satisfied, and with better-than-NP performance. It ends up being complex and full of compromises.
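A toy illustration with hypothetical packages: "app" needs lib >= 2 and "plugin" needs lib < 3. The satisfiability part is finding any version that meets both constraints; the optimization part is preferring the newest such version; the error-explanation part is saying something useful when the filter comes back empty.

    // Toy model only: brute force over one package's versions.
    // Real solvers face thousands of interdependent packages, where
    // enumerating every combination is exactly the NP-hard part.
    fn main() {
        let lib_versions = [1u32, 2, 3];
        let satisfying: Vec<u32> = lib_versions
            .iter()
            .copied()
            .filter(|&v| v >= 2 && v < 3) // both constraints must hold
            .collect();
        match satisfying.iter().max() {   // prefer the newest survivor
            Some(v) => println!("install lib {v}"),
            None => println!("unsatisfiable: now explain which constraints clash"),
        }
    }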


Go’s solver has been my favorite so far. But it relies on semver actually being meaningful.

uv uses PubGrub, and maybe APT could too.

Could the Rust code be transpiled to readable C?

> readable

No, because some things that are UB in C are not UB in Rust, and vice versa, so any codegen has to account for that, and the result is extra verbosity that you wouldn't see in "native" C.
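A small example of the kind of mismatch involved (illustrative only, not from any actual transpiler): signed integer overflow is undefined behaviour in C but defined in Rust, so generated C can't simply emit a plain addition for this:

    // In Rust, overflow of `a + b` is defined (panic in debug builds, wrap
    // in release builds), and wrapping_add is defined for every input. In C,
    // signed overflow of `a + b` is UB, so a faithful Rust-to-C translation
    // has to emit explicit checks or route through unsigned arithmetic,
    // which is part of why the generated C reads as verbose.
    fn add(a: i32, b: i32) -> i32 {
        a.wrapping_add(b)
    }

    fn main() {
        println!("{}", add(i32::MAX, 1)); // -2147483648, never UB in Rust
    }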


Thank you for the explanation

> the "new solver" currently lacks a testsuite

To borrow a phrase I recently coined:

If it's not tested then it's not Engineered.

You'd think that core tools would have proper Software Engineering behind them. Alas, it's surprising how many do not.


Integration tests are still tests. There are definitely cases for tools where you can largely get by without unit tests in favor of integration tests. I've written a lot of code generation tools this way for instance.

Unit tests are for testing branchiness— what happens in condition X, what about condition Y? Does the internal state remain sane?

Integration tests are for overall sanity— do a few happy paths basically work? what about when we make changes to the packaging metadata or roll dependencies forward?

Going unit-test free makes total sense in the case of code that doesn't have much in the way of branching, or where the exceptional cases can just be an uncontrolled exit. Or if you feel confident that your type system's unions are forcing you to cover your bases. Either way, you don't need to test individual functions or modules if running the whole thing end to end gives you reasonable confidence in those.
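As a small illustration of the split (a hypothetical helper, not anything from apt):

    // Unit test: exercises the branches of one small function directly.
    // An integration test would instead live in tests/, drive the whole
    // tool against a scripted package index, and check a happy path
    // end to end.
    fn satisfies(installed: u32, minimum: u32) -> bool {
        installed >= minimum
    }

    #[cfg(test)]
    mod tests {
        use super::*;

        #[test]
        fn covers_both_branches() {
            assert!(satisfies(2, 1));  // condition X: new enough
            assert!(!satisfies(1, 2)); // condition Y: too old
        }
    }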


> Integration tests are still tests.

I didn't say they're not. Integration tests definitely help towards "being tested".

> There are definitely cases for tools where you can largely get by without unit tests in favor of integration tests.

Very strong disagree. I think there are no cases where a strong integration test regime can allow a software project to forego unit tests.

Now, that said, we're probably talking about the same thing with different words. I think unit tests with mocks are practically useless, but mocks are the definition of most people's unit tests. Not to me; to me, unit tests use real code and real objects. To me, a unit test is what a lot of people call an integration test, and what I call an integration test is often what people call system tests or end-to-end tests.


> I think unit tests with mocks are practically useless

IMO that's on the extreme side too. I've seen a fair share of JUnit monstrosities with 10+ mocks injected "because the project has been written this way so we must continue this madness", but mocking can be done right; it's just overused so much that, well, maybe you're right: it's easier to preach against it than to teach how to do it right.


Unit tests do not make Software Engineering. They're simply part of the development phase, which should be the smallest of all the phases involved in real Software Engineering, which is rarely even done these days outside of DO-178 (et al.) monotony. The entire private-to-public industry has even polluted upper management in defense software engineering into accepting SCRUM as somehow more desirable than the ability to effectively plan your requirements and execute without deviation. Yes, it's possible, and yes, it's even plausible. SWE laziness turns Engineers into developers. Running an auto-documentation script or producing a generic, non-official block diagram is not the same as a Civil PE creating blueprints for a house, let alone a mile-long bridge or a skyscraper.

As far as I understand it, the idea behind scrum is not that you don't plan; it's that you significantly shorten the planning-implementation-review cycle.

Perhaps that was the ideal when it was laid out, but the reality of the common implementation is that planning is dispensed with. It gives some managers a great excuse to look no further than the next Jira ticket, if that.

The ideal implementation of a methodology is only relevant for the small number of managers who would do well with almost any methodology, because they will take the initiative to improve whatever they are doing. The best methodology for wide adoption is the one that works okay for the large number of managers who struggle to take responsibility or initiative.

That is to say, the methodology that still requires management to take responsibility in its "lowest energy state" is the best one for most people, because they will migrate to the lowest energy state. If the "lowest energy state" allows management to do almost nothing, then they will. If the structure allows being clueless, a lot of managers will migrate to pointy-haired Dilbert-manager cluelessness.

With that said, I do agree with getting products to clients quickly, getting feedback quickly, and being "agile" in adapting to requirements; but having a good plan based on actual knowledge of the requirements is important. Strict adherence to any extreme methodology is probably going to fail in edge cases, so knowing when to apply which methodology is a characteristic of good management. You've got to know your domain, know your team, and use the right tool for the job.


I've got a bridge to sell. It's made from watered-down concrete and comes with blueprints written on site. It was very important to get the implementation started asap to shorten the review cycle.

Nonsense. I know and talk to multiple engineers all the time, and they all envy our position of being able to keep fixing issues in a project.

Mechanical engineers have to work around component failures all the time, because their lead times are gigantic and, no matter how much planning they do, failures still pop up.

The idea that Software Engineering has more bugs is absurd. Electronic, mechanical, and electrical engineers all face issues similar to ours, and they normally don't have the capacity to deploy fixes as fast as we do because of real-world constraints.


Not nonsense. Don't be reductive.

I think you are being reductive in your original comment. The idea of cycling between planning and implementation is nothing new, and it is widely used in other disciplines. Saying that agile is the problem is misguided, and pointing to other engineering disciplines as if "they do it better" is usually a sign that you don't talk to those engineers.

Of course we can plan things better, but implementation informs planning and vice versa, and denying that is denying reality.


I don't think this is productive, since you're so adamant [1] that "big C memory safe programs don't exist." I know for a fact they do. Most of that software you won't ever see. What do you think powers the most critical systems in, say, a fifth-gen fighter, or the software the NSA relies on in its routers?

I'll give you a hint: it's neither Rust- nor scrum-based. I'd rather change careers or retire than work another day doing scrum standups.

[1] https://news.ycombinator.com/item?id=45353150



