pornel's comments

Pardon my ignorance, but isn't aircon an easy case for the grid? The same sun that causes the heat can power solar panels, so it doesn't even need storage or long-range transmission.


Heat comes from the sun, but lags behind it. Generally speaking, the hottest time of the day (late afternoon) isn't when the sun is brightest (noon). Then people need to sleep, which means they want to be cooler during the evening.

One way to beat the heat is to adjust the workday. Start work at noon. Let people sleep during the cool of morning. But that is a bridge too far for most cultures.


Start at noon? That’s the exact opposite of the approach people take when it’s hot around here (I live in southern Australia where it regularly breaks 40C in summer). Start early - before the sun’s up if you can - and be ready to knock off soon after lunch.


You are assuming that you don't need an aircon at night. Something that's very much not the case during a multiday Australian heatwave.


Spent some time in Oz in the 90s: trying to sleep at night in my 2nd-floor apartment without air conditioning, as the heat rolling off the desert met the Brisbane coast, remains one of my most unpleasant memories to this day.

Despite a fan pointed right at me, I'd wake every morning with a sweat-soaked mattress…


Brisbane is quite pleasant compared to the NQ. Good times.

Source: In Brisbane now; lived in the north.


Yep, visited Townsville and it was even hotter and more humid. I imagine places like Cairns take it up even further, but they don't have quite as much hot desert air sweeping in as Brisbane does, since it's more of a peninsula.


Do you have any insulation? I'm in the UK, and my previous rental had walls that leaked any heat within half an hour, but my current one heats up to 23 in the day due to the large windows facing the sun, and barely drops to 18 at night when it's 5-10 degrees outside.


>Do you have any insulation?

No, this is Australia. Houses are more like tents than actual buildings.


> No, this is Australia. Houses are more like tents than actual buildings.

Before AC was ubiquitous (or even invented), the way for people to cool off in summer was airflow. Insulation was more of a cold-climate thing, as keeping the heat in / the cold out was more intuitive when burning wood or coal: thermal boundaries were more 'needed' when there was snow outside.

Having thermal boundaries in your structure didn't make sense when it was as hot inside (due to the lack of AC) as it was outside, and it has only been in recent years that people have clued into that fact. But unfortunately there's a large stock of housing around that was built pre-clue-in.

Even now there are people who insist that houses need to "breathe", i.e., have holes / be drafty. They do not:

* https://www.energyvanguard.com/blog/Myth-A-House-Needs-to-Br...

* https://www.greenbuildingadvisor.com/article/most-houses-tha...

* https://www.greenbuildingadvisor.com/article/buildings-dont-...

Buildings need to be as air tight as possible, and be ventilated through mechanical means so that stale air is vented out and fresh air is brought in after being filtered and tempered (via an ERV).


>> Buildings need to be as air tight as possible

Perhaps in California, Australia or other desert areas. The pacific northwest had major issues when OSB met the concept of "air tight" housing. No airflow in/through walls leads to condensation, mold and collapse of many "engineered" wood products held together with glue.


> The pacific northwest had major issues when OSB met the concept of "air tight" housing. No airflow in/through walls leads to condensation, mold and collapse of many "engineered" wood products held together with glue.

These are not indicators that air tightness is bad, but rather that people don't understand building science. Washington state (for one) has mandated air tightness levels (§R402.4.1.3) and testing (§R402.4.1.2):

* https://sbcc.wa.gov/sites/default/files/2023-04/2021_WSEC_R_...

Air tight passive houses are built without issues in the US PNW:

* https://phnw.org

* https://hammerandhand.com/high-performance/passive-house-bui...

* https://asiri-designs.com/f/how-to-design-an-energy-efficien...

The province of BC is mandating near-passive house levels of air tightness:

* https://www.energyadvisor.pro/airtight-energy-advisor-contac...

* https://energystepcode.ca/how-it-works/

* PDF: https://www.bchousing.org/publications/Illustrated-Guide-Ach...

Build your structure according to how physics works and it will work just as well from the tropics (IECC Zone 1) to the arctic (IECC Zone 8):

* https://buildingscience.com/documents/insights/bsi-001-the-p...

* https://basc.pnnl.gov/images/iecc-climate-zone-map

Air tightness is coming to building codes everywhere, and if you can't make it work then you shouldn't be building because you are incompetent:

* https://www.iccsafe.org/building-safety-journal/bsj-technica...


You massively overestimate how deserted Australia is. Most of the population of Australia lives in the greener areas near the coast, which has a much more moderate climate.

...also I doubt anyone from e.g. Brisbane would need to be told about mold and condensation.


> Pre-AC being ubiquitous (or being invented), the way for people to cool off in summer was to have airflow.

Modern A/C was invented in 1901, and A/C penetration in Australia is not that far behind the US.

But much as in Texas where houses are also made of paper and cardboard, cheap energy has made forcing temps down via AC much simpler (for property developers).


Tents are sturdier.


I have a "normal" Melbourne brick veneer home, with decent roof insulation and insulated curtains. It handles the heat well for one day, but in a 2- or 3-day heatwave of 38-degree-plus temperatures the bricks heat up in the sun and it gets quite uncomfortable even at night.


An outside sun shield, like German houses have, is the way to go. Curtains inside don't work very well.


How did people in Australia get by before air conditioning?

I'm not saying that to be snarky, I would generally love to know. I live in a moderately cold climate (England) and I often wonder how people here survived through the winter before the comforts of modern technology. And England's not that cold - how the hell did indigenous Canadians do it, for example? Did people just shiver by a fire for months at a time?

EDIT: I didn't think I was being snarky in my original final sentence, but to avoid misunderstandings I've removed it.


The first sentence is a good question, the last sentence is unnecessary snark.

I think there are two main things at play: buildings used to be built differently, and yes, people prefer not to suffer these days.

Old Australian buildings (e.g. "Queenslanders") were often built on poles to let the air circulate underneath, they'd have the roof extend far over windows to stop sunlight (and heat) getting in. Those are not the most enjoyable places to live in, these days people prefer a bit of sunlight during the day, and a constant draft is not pleasant in winter.

And yes, I imagine people just didn't sleep all that well and spent heatwaves sitting around in the coldest spot in the house waiting for it to be over. These days people want to sleep well throughout the year and want to be productive even during heatwaves.


People were also a lot harder back then because life was harder through the entirety of their existence. Obesity was rare, the average level of fitness was much, much higher, and teens and adults probably showed a good deal of survivorship bias for hardiness, because the weakest died during childhood pretty routinely. I suspect environmental conditions that would kill the average diabetic, obese 45-year-old today were just a normal summer day to people back then, and were mostly well tolerated.


I live in Minnesota and have spent a reasonable amount of time winter camping. You use these cool canvas tents and a small, packable wood stove. This has brought me into contact with some folks that live lifestyles somewhat closer to those old days than the new ones - so far as expectations on infrastructure go. For one, they wear warm clothing almost all the time. It’s significantly more efficient to heat yourself (through clothing, food and movement) than heating an entire room/building.

Also: wood stoves large and small can pump out a lot of heat. You can kick up the small tin metal camping stove and make your canvas tent 75 degrees, dry out any wet clothes. Likewise, a large iron stove can make a room boiling. After all, saunas - in one form or another - are a common feature of northern cultures.


Go to the rural Philippines and you'll find plenty of homes without air conditioning in 40+ heat.

You'll also find westerners accustomed to air-conditioning who live there and have "adjusted".

It takes about 2 weeks and your body makes some sort of adjustment.

It's an interesting phenomenon that maybe someone here can comment on more fully. By that I mean what exactly is happening to us when we adjust? Is it purely psychological or are there actual biological changes/effects at play? Whatever it is you really do "adjust" and it's not that bad after a few weeks. You even learn to sleep well at night in sweltering heat...again after you adjust.

Of course there are limits but we're built to handle the heat.


Fans and light clothes, mostly.

A lot more people now live in cheaply built homes with small, barren yards and dark roofs that are literally designed around air conditioning. Older homes often have better airflow.

An example of a newer suburb in Sydney is Marsden Park: https://www.youtube.com/watch?v=WzV3H_K73To

I don't use/have aircon in my home in Brisbane and I get by fine :) Although it is nice to have aircon in the summer when I'm working on the computer and need to concentrate (my office at work is air conditioned, like most).


Where I live (French Alps) that's true, because there is a big circadian delta, meaning I can have 30+℃ (86℉) during the day, but at 7pm the temperature is already 25℃ (77℉), at 9pm 20℃ (68℉), at midnight 17℃ (63℉) etc.

But where I was born (northern Italy) the circadian delta is MUCH lower: if it's 35℃ during the day, the nightly minimum is at least 30℃. So you need aircon 24/7, not just during the day.



It’s amazing how different the customs can be.

I’ve only received a check once in my life, in 2004 from a US company. I could not cash it with my regular (online) bank in Poland. I had to open a foreign currency account with the national bank, just for this check, and go to their special customer service department in person, at their HQ. It was like bringing them an alien artifact.


Hah. Once I had to get a wire transfer from a foreign company for some consulting work. I had to open up a new bank account just for that. It's like a mirror image.


Are you sure about this? I thought client certs were only for plug'n'charge (which identifies individual cars for the purpose of payment), and charging paid out of band could be negotiated using dumber protocols.


You don't need to wait until the program exits. Valgrind has a GDB server that can be used to check for leaks during the program's runtime:

https://valgrind.org/docs/manual/mc-manual.html#mc-manual.mo...
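For reference, a sketch of that workflow as described in the Valgrind manual (the binary name `./myprog` is a placeholder):

```
# Terminal 1: run under Memcheck with the embedded gdbserver enabled
valgrind --vgdb=yes --vgdb-error=0 ./myprog

# Terminal 2: attach gdb and request a leak report while the program runs
gdb ./myprog
(gdb) target remote | vgdb
(gdb) monitor leak_check full reachable any
```

`monitor leak_check` can be issued repeatedly at different points in the run, which is handy for narrowing down where a leak appears.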


Thank you! I could have been more explicit that I was aware of this but I was ideally trying to avoid manual code changes and/or manual gdb intervention if possible.


For now the SHA-1 collisions are easily detectable, but it could get worse.

In case of MD5, there is now a collision I wouldn't expect was possible: in readable ASCII.

https://mastodon.social/@Ange/112124123552605003


> "For now the SHA-1 collisions are easily detectable, but it could get worse."

Your opinion: prove it! And again, if you actually read the posts in THIS BRANCH instead of trolling, the question is: should SHA-1 in Git be substituted?


egui's immediate mode works great with Rust's ownership and borrowing, unlike traditional stateful UI frameworks that can have arbitrary references between widgets and events.

It works very well in games, where it allows creation of complex completely custom UIs, with GPU rendering.

However, egui is an odd choice for desktop applications. It's completely non-native. Even "windows" in egui are custom widgets confined to its canvas, not real OS windows.


> However, egui is an odd choice for desktop applications. It's completely non-native.

I happened to immediately (pun intended) stumble on an example of this. Just by chance, I went straight to test the ComboBox (https://www.egui.rs/), and I have the tendency of not just clicking to make the drop-down appear; instead, I click and hold, and only release after selecting the desired choice.

This doesn't work at all on egui: the ComboBox doesn't show its choices until the mouse button is released, not upon the initial mousedown press.

I guess there must be a thousand cuts like this, which is understandable, given that all functionality is implemented from scratch trying to imitate the native widgets we all know and use since forever.


It's also difficult because what is the native behavior here? On Windows the different official Microsoft UI toolkits behave differently. For example the latest Windows 11 UI stuff works like egui in that it opens the dropdown on release, while older GDI stuff opens the dropdown on press, but won't let you make a choice until you release.


After reading this I tried click-and-hold on a folder in the bookmark bar in Chrome. Lo and behold, even Chrome doesn't do what you want. I completely agree about a thousand cuts, and I'm often the kind of wild-eyed idealist that wishes that everything would conform to some universal set of gui standards.


Yep, now imagine you are visually or hearing impaired and need magnification or a screen reader.

Accessibility always ends up in people's "I'll think about that later" bucket, and it's terrible.


You can have real separate windows in egui. Either running on the same thread or another. See here: https://github.com/emilk/egui/blob/master/examples/multiple_...


I’m very sorry, but “native looking ui components” is a thing of the past.

I might put some effort in if developing on Mac (but even Apple just does whatever), but on Windows and Linux, the components are a free for all. If even Microsoft won’t respect their own UI, why should I?

I’m not advocating for breaking usability in your apps, just saying that we are well past not using this for desktop.


Which is a shame, because those widgets work infinitely better than the roughshod ones thrown together by these all-in-one libraries, for the user.

If I have no choice, I'll use one of these apps. But if there's an alternative with a native UI, I'll switch in a heartbeat. Basically, the same as I feel about most electron apps.


> for the user

For which user? I prefer non-native apps. It makes me feel more immersed in the tool, and gives the developer complete control over their design, whether by choosing a GUI toolkit they like, or rolling their own from scratch to perfectly fit the domain.


I don't like it when developers break from the conventions and theming of my operating system because they feel entitled to. Sure, very sophisticated applications that have niche needs like Blender, Godot. Photoshop, etc may have a justification to do so. But most apps are not those and I expect them to be consistent with the rest of the software on my computer. It's very difficult to retrain nontechnical people to use apps when they break from the conventions and styles of the software they are familiar with.


The only justification needed is simply that that's what they wanted to make. You can incentivize them to make something you want with money, but if they don't choose to, they're not unjustified.


> they feel entitled to

They are entitled to. Just as you're entitled to use different software if you don't like it.


"native" widget sets have been dead for a while, killed off by the far worse Electron / ad-hoc web-designed apps.


I’d be inclined to agree, but I would tend to critique what GP actually said, which is that the gui is totally ‘non-native’. I think you were in part correct to read the ‘native looking ui’ portion into the comment, though; it seems to be a common refrain that guis need to conform to the OS’s chosen aesthetic and visual formatting. However, a very real complaint echoed in a sibling comment is the failure of non-native guis to respect or implement the OS’s behavioral conventions. I have no issue with visual differences between applications, but I want clicking and keyboard behavior to remain constant across my apps.

As the sibling implies, it’s a thousand small differences that build up, creating friction that fosters the push toward native GUIs being the preferred, if not nearly mandatory, solution.


I don’t mind GC per se, but languages based around a GC tend to lack other useful properties that Rust has, like deterministic destruction for resource management, and exclusive ownership that prevents surprising mutation of objects without making them immutable.
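To make the first point concrete, here's a minimal, self-contained sketch of deterministic destruction: `Drop` runs at a statically known point (end of scope, in reverse declaration order), rather than whenever a collector gets around to it. The `Resource` type and the `RefCell` logging are illustrative only, not any particular library's API.

```rust
use std::cell::RefCell;

// A stand-in for a file handle, lock guard, socket, etc.
struct Resource<'a> {
    name: &'static str,
    log: &'a RefCell<Vec<&'static str>>,
}

impl Drop for Resource<'_> {
    fn drop(&mut self) {
        // Record the moment of release, to show it is deterministic.
        self.log.borrow_mut().push(self.name);
    }
}

fn drop_order() -> Vec<&'static str> {
    let log = RefCell::new(Vec::new());
    {
        let _a = Resource { name: "a", log: &log };
        {
            let _b = Resource { name: "b", log: &log };
        } // _b is dropped right here, at the end of its scope
        let _c = Resource { name: "c", log: &log };
    } // _c, then _a: locals drop in reverse declaration order
    log.into_inner()
}

fn main() {
    // Release points are part of the program's structure, not a runtime schedule.
    assert_eq!(drop_order(), ["b", "c", "a"]);
    println!("drop order: b, c, a");
}
```

This is what makes RAII-style resource management reliable in Rust: you can reason about exactly when a file is closed or a lock is released just by reading the scopes.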


In straightforward terms, author of the most popular YAML parser (https://lib.rs/crates/serde_yaml) has suddenly given up on it, and marked it as deprecated and unmaintained, without warning or designating another maintainer.

It's not exactly a left-pad situation, because the package remains fully functional (and crates.io doesn't allow deletions), but the package is used in 4000 other crates. Audit and auto-update tools will be complaining about use of an unmaintained crate.


This isn't accurate. OP's article says they depend on yaml-rust (https://github.com/chyh1990/yaml-rust), which is now unmaintained. Simultaneously, serde-yaml was marked as unmaintained to prevent people from migrating there en masse.


Sorry, indeed it's been about the other library. However, these issues are connected: dtolnay is a co-owner of yaml-rust too, so either way without him YAML in Rust has a https://xkcd.com/2347 problem.


> so either way without him YAML in Rust has a https://xkcd.com/2347

You seem to be misunderstanding the comic you link.

With him that was the situation.

On crates.io, serde_yaml has 56m lifetime downloads and 8.5m recent downloads, and by his own announcement David hasn't used YAML in any of his projects in a while, yet he kept plugging away at it until now.

And yaml-rust has 53.5m lifetime downloads and 5m recent downloads, and has been unmaintained for 4 years.


> Audit and auto-update tools will be complaining about use of an unmaintained crate.

Not the case here but sometimes a library (with no dependencies) is really feature complete with no real need for changes other than security fixes.

I wish audit tools and corporate policies were smart enough to recognize this instead of nagging about "unmaintained" versions of projects with no recent commits.


Been doing this software thing for a bit now… while what you say is true in that it is possible in some rare cases, I think it’s a bit of a straw man exception. Code is, in the general sense, unfortunately never complete, regardless of how complete it seems in its context.

The issue is that the context is always changing. It is almost like suggesting there is an evolutionarily perfect organism. At best, one suited perfectly well to the current context can exist, but that perfect suitability falls off the second the environment changes. And the environment always changes.

I totally get the sentiment, but I think the idea that any code is ever complete is unfortunately just not a realistic one to apply outside of select and narrow contexts.


If I wrote a library to open jpegs, it correctly loads all standard jpegs, and it's free of security issues - is that not complete?

It's not like it's desirable for standard formats to one day start getting interpreted in a different way.


It's free of security issues in the current context, that's the point.

You depend on calling a standard library function that was found deficient (maybe it's impossible to make it secure) and deprecated. Now there is a new function you should call. Your software doesn't work anymore when the deprecated function is removed.

Sure, you can say your software is feature complete but you have to specify the external environment it's supposed to run on. And THAT is always changing.

You're both right but looking at different timelines.

Relevant: https://www.oreilly.com/library/view/software-engineering-at...


> and it's free of security issues

That's a biiiiig if


Just to add another thing that changes to the other ones which were already mentioned by sibling comments: “standard jpegs” are not necessarily set in stone.


Another Clojure programmer just got her parentheses-encrusted wings.

There are Clojure packages that go untouched for years... not because they're abandoned, but because they are stable and good enough.


> I wish audit tools and corporate policies are smart

The ball is in the court of the enterprise purchaser mandating the auditing company, and they don’t give a ** if your dev team has to hot-swap a YAML parsing library.


They should. They are paying devs a ton of money to futz around with code that works perfectly well and is just as secure today as it was yesterday.

So you have security theater and TPS reports consuming the time of some of your most highly compensated human resources.


> just as secure today as it was yesterday.

Agreed but the gotcha is when a new issue comes out and there is no security fix available. It would be enough to say 'we are comfortable patching this if a CVE is found', but at that point they might as well start to maintain it.


Isn't that author kind of prolific within the rust library world? He didn't do that to any other libraries. Is there something with more info?


He does this all the time to libraries. These things are correlated; the more tools you make, the less time you have to maintain any single one of them. You learn to ignore it because they're also extremely well made libraries where the unmaintained status doesn't mean anything.


I found an explanation. Reading between the lines I think he found something less abysmal than YAML.

https://github.com/dtolnay/serde-yaml/releases/tag/0.9.34

> As of this release, I am not planning to publish further versions of serde_yaml as none of my projects have been using YAML for a long time, so I have archived the GitHub repo and marked the crate deprecated in the version number.


Hm... might still have been better to first ask around if anyone is willing to maintain it instead of marking it as deprecated?


From what I can tell this is a second-order effect. No advisory was made against serde-yaml; it was made against yaml-rust. The maintainer of serde-yaml, I believe, just took the opportunity to mark it as deprecated to ensure people don't migrate there.


I'd think of it as a smart move considering how entitled, aggressive and demanding open source software users have become over maintainers.


Yeah, I'm really disappointed reading the comments here. This is his project, he can do whatever he wants with it, including delete and forget about it. The author doesn't owe you anything. Just use a fork or another implementation. This isn't the end of the world.


> The author doesn't owe you anything.

I don't think anyone is claiming he is legally required to do it. Just that it would be nice. But in this case I think he did try to look for someone anyway.


He'd been looking for a serde-yaml maintainer for a while, and none ever popped up.


Not the author, but I assume he is not against you taking over. Code is still there if you are willing. (It is not too late)


I'm not a particularly big fan of YAML either I'm afraid, so I would probably choose some other way-to-learn-rust...


Yes.

The release notes for the last release explain: https://github.com/dtolnay/serde-yaml/releases/tag/0.9.34


That is incredibly obnoxious and irresponsible and he should turn over the repo to someone else to manage.


What's so irresponsible about it? He clearly communicated the state of affairs.

I think he was generous enough with his time for creating the library in the first place and posting this update. It's not like he's owing anything to anybody. I don't understand how you can demand him to do further UNPAID work to recruit maintainers who would also do further unpaid work.


Anyone else is free to take it! You see that the repository is hosted under his own user. Anyone else can create an organization or host it under their own name.

Are we collectively having Subversion phantom pains or something? Just take it.


I’m not sure how much open source work you’ve done or what packages you’ve maintained, but in my experience, the “someone else” rarely ever comes. Heck I have even been the someone else and then done a bad job of it!

This is an unrealistic and, frankly, a bit entitled criticism.

This is before we even get to stuff like “it’s not like yaml is a fast moving target, what continued maintenance is really even needed” parts of the equation, or the comment above that said he’s tried already.


The “someone else” also needs to be vetted to ensure that their first update won’t include crypto mining malware.

We should remember that an unmaintained dependency isn’t the worst thing that can happen to a supply chain. There are far worse nightmare scenarios involving exfiltrated keys, ransomwared files and such.

I’ll bet that if someone with a track record of contributing to the Rust community steps up, he’ll happily transfer control of the crate. But he’s not just going to assign it to some random internet user and put the whole ecosystem at risk.


No, it's incredibly irresponsible to hand over the repo to someone else to manage, even someone who is an active contributor. See https://boehs.org/node/everything-i-know-about-the-xz-backdo... for example.


With the greatest respect, I think it is your attitude that needs adjustment, not their behaviour.


I believe he offered for someone to take over and no one ever stepped up to the plate. This happens a lot.


> he should turn over the repo to someone else to manage

Why can't this "someone else" just fork the repo themselves?


The obnoxious and irresponsible part is the hubris of thinking you can have a package repository system that's not going to have problems like this and left-pad.

You can't. Ever. This is unironically why languages too old for package repositories like C and C++ lend themselves to stabler, higher-quality software.

On top of that is also the entitlement to feel like people providing you software for free MUST ALSO be responsible for polishing it forever. But, hey, I guess that attitude comes naturally with the package repository mindset. ;D People need to do all my work for me!


> C and C++

> stabler, higher-quality software.

Lol I’ll have some too buddy


AFAIK he'd been looking for a maintainer for this for a while, didn't find one, and so just marked it unmaintained. He's not doing it, nobody else wanted to, so it'd be irresponsible to leave it listed as maintained.


Thank you for describing the situation without the analogy.


It's an incorrect description, though.


Unfortunately the finance analogy is sufficiently outside my wheelhouse as to be indecipherable to me. (Analogies are helpful to add intuition but they shouldn’t replace the description of the actual thing).

Maybe this description is wrong but I can’t tell.


The problem is not just the analogy, but that the names were changed to protect the guilty parties, adding further obfuscation. The links are correct, though, and you can follow the bouncing ball a little more easily by replacing the fake names in the article with the real names from the links:

stuff -> insta

learned-rust-this-way -> yaml-rust

to default/defaulted -> serde-yaml

As I read it: insta was depending on yaml-rust. RUSTSEC issued an advisory against yaml-rust. Some shade is thrown on yaml-rust for being unmaintained and basically a student project of someone learning the language for the first time, adding to the irony/brittleness of its unmaintained (though still working) state and now the RUSTSEC callout. In insta trying to switch to some alternative to yaml-rust the next most common library serde-yaml preemptively announced its unmaintained status. So insta just vendored yaml-rust and called it a day, which currently works to remove the RUSTSEC complaints but doesn't fix the long term maintenance problem.


Sometimes you just say fuck it.


This is exactly what we all do every time we

  cargo update
without reviewing the code changes in each dep (and their deps) wrt how they impact our compiled artifact. Ultimately we're each individually responsible for what we ship, and there's no amount of bureaucracy that will absolve us of the responsibility.


I think it is possible to build infrastructure that will allow entities to spread responsibility. They do it in other fields I think?

For example I can’t imagine Ford checks literally every single individual component that gets put in their cars, right? I assume they have contracts and standards with manufacturers that make the parts, probably including random spot checks, but at some point to handle complexity it is necessary to trust/offload responsibility on some suppliers.

I bet you could get those sorts of guarantees from, like, Intel or IBM if people were generally willing to pay them enough (can you now? Not sure; probably depends on the specific package). But nobody would make that sort of promise for free. The transparency of the open source ecosystem is an alternative to that sort of thing: no promises, but you can check everything.

Just to be explicit (since I’m sure this looks like needless pedantry), I think it is important to remember that the open source and free software ecosystems are an active rejection of that sort of liability management framework, because that’s the only way to get contributions from a community of volunteers.


There's something very different with software I think. For example, if a mechanism calls for a class 10.9 bolt with a certain diameter and thread pitch (let's say M6x1 for the sake of argument) then there are many different manufacturers who can supply equally suitable parts. They may each be dimensionally different, have different yield strengths, but they're all conforming to class 10.9 and dimensionally within tolerance. As the designer of the mechanical assembly I can trust the manufacturer of the bolts has a quality control procedure that'll make sure their bolts are good.

As a software assembly designer, I don't really have any way to say "this subcomponent remains intact" when I integrate it into my system. The compiler will chew things up and rearrange them at link time. This is a good thing! It makes the resulting binary go brr real good. But it's ultimately my responsibility to make sure the thing I ship--the compiled artifact--is correct. It's not really a valid move to blame one of my dependencies. I was supposed to review those dependencies and make sure they work in the context of the thing I shipped. Wouldn't it be a bit like the mechanical assembly designer blaming the bolt manufacturer for a failure of a bolt they modified while building the assembly (EDIT: or blaming the supplier when the assembler used the bolt in a way it wasn't designed for)? What guarantees can the bolt manufacturer reasonably give in that case?

EDIT: another thing is that source code isn't really analogous to a mechanical component. It's more like the design specification or blueprint for... something that converts electrical potential into heat and also does other stuff.


Technical standards, classes, testing procedures did not always exist for bolts either. For some classes of software I think one could create equivalent levels of standardization, if anyone would care enough. Having a standardized test suite would be a key component of that. I think that a YAML parsing library would be among the better candidates for this.


The chances of `cargo update` pulling in some updated dependency which is now compromised with malware are low. The chances of a compromised dependency getting past `cargo-audit` are low. The chances of compromised code causing measurable harm are low. The repercussions for me publishing compromised code are low. The effort I would have to expend to manually check the code is high.

So yes, I `cargo update`.


I do too, but I wonder if there's a way we can make it tractable to shoulder the responsibility of maintaining our dependency graphs? More: [1].

[1] https://news.ycombinator.com/item?id=39832559


> Sometimes you just say fork it.

FTFY. And vendoring a dependency is really just a hidden fork.


That raises the question: what is the current etiquette around deciding whether to pester a dependency's author to include your fixes or just fork it, especially if it looks like you will be the biggest user of that dependency? I'm dealing with that question for a dependency that I want to introduce in one of my crates, but ideally not until at least one of my own fixes is merged.


My thought process is:

1. You find a bug: open an issue.

2. If the bug is a minor issue, wait. If I want a fix really badly, offer to fix it.

3. If I still want the fix badly and the author did not respond, or does not want a fix, fork the project for my own consumption. Using an alternative dependency is also an option.


Yup, and forks don't have to be permanent.

Imo be ready to fork all the things if need be. Get comfortable with it before you're forced to.


In my opinion, you should always ask the author first. Having your changes accessible from the primary repository is much easier and better for everyone, and there is a good chance the author is happy to accept your PRs. On the other hand, they might not agree with your proposal or might want to make significant changes to the design, at which point it becomes a personal tradeoff for you.


Obviously fork it, implement your feature, battle-test it, fix and make it more generic, battle-test it again, then ask the maintainer if they would be interested in a pull request.

I mean, implementing your fixes in the first place means you are already inherently forking it.


> FTFY. And vendoring a dependency is really just a hidden fork.

It's not a fork until you start diverging from upstream with local modifications.


> (...) without warning or designating another maintainer.

Why would he?


The big one not mentioned is cancellation. It's very easy to cancel any future. OTOH cancellation with threads is a messy whack-a-mole problem, and a forced thread abort can't be reliable due to a risk of leaving locks locked.

In Rust's async model, it's possible to add timeouts to all futures externally. You don't need every leaf I/O function to support a timeout option, and you don't need to pass that timeout through the entire call stack.

Combined with use of Drop guards (which is Rust's best practice for managing in-progress state), it makes cancellation of even large and complex operations easy and reliable.


It's not easy to cancel any future. It's easy to *pretend* to cancel any future. E.g. if you cancel (drop) anything that uses spawn_blocking, it will just continue to run in the background without you being aware of it. If you cancel any async fs operation that is implemented in terms of a threadpool, it will also continue to run.

This all can lead to very hard to understand bugs - e.g. "why does my service fail because a file is still in use, while I'm sure nothing uses the file anymore"


Yes, if you have a blocking thread running then you have to use the classic threaded methods for cancelling it, like periodically checking a boolean. This can compose nicely with Futures if they flip the boolean on Drop.

I’ve also used custom executors that can tolerate long-blocking code in async, and then an occasional yield.await can cancel compute-bound code.


If you implemented async futures, you could have also instead implemented cancelable threads. The problem is fairly isomorphic. System calls are hard, but if you make an identical system call in a thread or an async future, then you have exactly the same cancellation problem.


I don't get your distinction. Async/await is just syntax sugar on top of standard syscalls and design patterns, so of course it's possible to reimplement it without the syntax sugar.

But when you have a standard futures API and a run-time, you don't have to reinvent it yourself, plus you get a standard interface for composing tasks, instead of each project and library handling completion, cancellation, and timeouts in its own way.


I don't follow how threads are hard to cancel.

Set some state (like a flag) that all threads have access to.

In their work loop, they check this flag. If it's false, they return instead and the thread is joined. Done.


So you make an HTTP request to some server and it takes 60 seconds to respond.

Externally you set that flag 1 second into the HTTP request. Your program has to wait for 59 seconds before it finally has a chance at cancelling, even though you added a bunch of boilerplate to supposedly make cancellation possible.


If the server takes 60 seconds to respond, and you need responses on the order of 1 second, I'd say that is the problem - not threads.


Cancellation is not worth worrying over in my experience. If an op is no longer useful, then it is good enough if that information eventually becomes visible to whatever function is invoked on behalf of the op, but don't bother checking for it unless you are about to do something very expensive, like start an RPC.


It's been incredibly important for me in both high-traffic network back-ends, as well as in GUI apps.

When writing complex servers it's very problematic to have requests piling up waiting on something unresponsive (some other API, microservice, borked DNS, database having a bad time, etc.). Sometimes clients can be stuck waiting forever, eventually causing the server to run out of file descriptors or RAM. Everything needs timeouts and circuit breakers.

Poorly implemented cancellation that leaves some work running can create pathological situations that eat all CPU and RAM. If some data takes too long to retrieve, and you time out the request without stopping processing, the client will retry, asking for that huge slow thing again, piling up another and another and another huge task that doesn't get cancelled, making the problem worse with each retry.

Threading is often mixed with callbacks for returning results. Un-cancelled callbacks that fire after the rest of the application has aborted an operation can cause race conditions, by messing up some state or being misattributed to another operation.


Right, this is compatible with what I said and meant. Timeouts that fire while the op is asleep, waiting on something: good, practical to implement. Cancellations that try to stop an op that's running on the CPU: hard, not useful.

