
> Why are you assuming that a human would be more efficient and better for the environment than an electrically powered robot?

Because bicycles use about one-fifth the energy per mile of electric scooters, which are a reasonable analogue for slow electric delivery robots [0].

> It is very inefficient (approx 25%) to use food as an energy source,

By comparison, fossil fuel conversions are about 30-45%, depending on the energy source [1].

> and humans are always burning energy. They can't turn off at night or when they are idle. I think it is very likely that the robot would be better for the environment than the person.

That's a really, really weird baseline to use. Turning off a robot when not performing a task is standard procedure. Turning off a human when not performing a task is not standard procedure, and is frowned upon in polite society.

[0] https://www.statista.com/chart/28710/energy-efficiency-of-mo...

[1] https://www.eia.gov/electricity/annual/html/epa_08_01.html (Smaller numbers are better. To find efficiency, divide 3412 (1 kilowatt-hour expressed in Btu) by the value in the column [2]. A worked example follows the footnotes.)

[2] https://www.eia.gov/tools/faqs/faq.php?id=107&t=3
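To make footnote [1] concrete, the conversion works out like this (Rust sketch; the 7,700 Btu/kWh heat rate is just an illustrative round number for a combined-cycle gas plant, not a figure taken from the linked table):

    fn main() {
        // Efficiency = (Btu in 1 kWh of electricity out) / (Btu of fuel burned per kWh).
        let btu_per_kwh = 3412.0; // 1 kWh expressed in Btu [2]
        let heat_rate = 7_700.0;  // illustrative heat rate, Btu per kWh
        let efficiency = btu_per_kwh / heat_rate;
        println!("{:.0}% efficient", efficiency * 100.0); // prints "44% efficient"
    }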


I live in a place with excellent bicycle infrastructure. All the delivery people ride electric bicycles. A robot would be that, minus the human. So probably better in terms of energy expenditure, cost, etc.

This was something that I paid close attention to when designing a QR code to be hand-carved into a set of coasters. To minimize the amount of detail carving required, I wanted to use the smallest QR code at 21x21 (version 1) tiles.

With byte (ASCII) encoding, this would have limited me to 17 characters, but alphanumeric encoding allows up to 25. Since DNS is case-insensitive, this let me carve a slightly longer URL. The only downside was that it required making a custom redirect on my own website, since I couldn't find any URL shorteners that would produce an all-caps link.
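For what it's worth, the capacity check is easy to reproduce. Here's a small Rust sketch, assuming the version-1 capacities above (17 characters in byte mode, 25 in alphanumeric mode, which line up with error-correction level L); the URLs are made up:

    // The 45-character alphanumeric charset defined by the QR spec.
    const ALNUM: &str = "0123456789ABCDEFGHIJKLMNOPQRSTUVWXYZ $%*+-./:";

    /// Returns true if `url` fits in a 21x21 (version 1) QR code at level L.
    fn fits_version_1(url: &str) -> bool {
        let upper = url.to_uppercase();
        if upper.chars().all(|c| ALNUM.contains(c)) {
            upper.chars().count() <= 25 // alphanumeric mode
        } else {
            url.len() <= 17 // byte mode
        }
    }

    fn main() {
        assert!(fits_version_1("https://example.org/rick"));      // 24 chars once uppercased
        assert!(!fits_version_1("https://example.org/rick?v=1")); // '?' and '=' force byte mode
    }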

To this day, it is the most effort I’ve put into rick-rolling somebody.


The cool bit is, due to the redirect, you can change the final destination without any more carving.


Being able to redirect anywhere is super flexible, but it's also the unfortunate business model of so many "free QR code generator" sites, which end up taking your destination link hostage...! (This just reminds me of that; obv the parent post isn't doing that.)

My friend's partner once printed a QR code like that and then had to pay a monthly fee to keep it working. Pure predatory behavior.


My sister has run into this before as well. You have to be very careful, because 99% of QR generators out there do something like that. I've found some that don't for her to use, but I really should just vibe-code a website for her that I know won't do a bait-and-switch (obviously they can't change old "pure" QR codes, but they could start doing redirects on new ones at any time).


A European gym chain hosts a very simple endpoint for their entry QR codes — just a GET with the data in a query param — so I tend to use that one, just because it somewhat amuses me every time.


You're better than me, but you should try ssh funky.nondeterministic.computer sometime.


Full-disk encryption, as useful as it is, also makes this a royal pain. Updates can't be performed unattended, because each restart during an update requires entering the password before it can continue.


Why don't you let Windows manage the password for you? It will be safely stored in the Cloud. /s


And actively ignoring state law in others, then violating cease-and-desist orders when told to remove the cameras.

[0] https://news.ycombinator.com/item?id=45382434 (discussion from 2025-09-26)


Even their so-called conservative assumption is insufficient.

> if a machine word's integer value, when considered as a pointer, falls within a GCed block of memory, then that block itself is considered reachable (and is transitively scanned). Since a conservative GC cannot know if a word is really a pointer, or is a random sequence of bits that happens to be the same as a valid pointer, this over-approximates the live set

Suppose I allocate two blocks of memory, convert their pointers to integers, then store the values `x` and `x^y`. At this point, no machine word points to the second allocation, and so the GC would consider the second allocation to be unreachable. However, the value `y` could be computed as `x ^ (x^y)`, converted back to a pointer, and accessed. Therefore, their reachability analysis would under-approximate the live set.

If pointers and integers can be freely converted to each other, then the GC would need to consider not just the integers that currently exist, but also every integer that could be produced from the integers that currently exist.
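In code, the scenario looks something like this (a minimal Rust sketch; the int-to-pointer cast at the end is exactly the operation whose soundness is in question):

    fn main() {
        // Two heap allocations whose addresses survive only as the integers
        // `x` and `x ^ y`; no word in memory holds `y` itself.
        let a: *mut u32 = Box::into_raw(Box::new(1));
        let b: *mut u32 = Box::into_raw(Box::new(2));
        let x = a as usize;
        let x_xor_y = x ^ (b as usize);

        // A conservative scan sees `x` (keeping the first block alive) and
        // `x ^ y` (which almost certainly doesn't land inside any allocation),
        // so the second block looks unreachable...

        // ...yet it can still be reached by recomputing its address:
        let y = x ^ x_xor_y;
        let b_again = y as *mut u32;
        unsafe {
            *b_again = 3; // a use-after-free if the collector had reclaimed it
            drop(Box::from_raw(a));
            drop(Box::from_raw(b_again));
        }
    }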


> If pointers and integers can be freely converted to each other

In Rust, you can only freely convert integers to pointers with "exposed provenance", which is currently unstable.

https://doc.rust-lang.org/std/ptr/index.html#exposed-provena...

I find the idea of provenance a bit abstract, so it's a lot easier to think about a concrete pointer system that has "real" provenance: CHERI. In CHERI, all pointers are capabilities with a "valid" tag bit (it's stored out-of-band, so you can't just set it to 1 arbitrarily). As soon as you start doing raw bit manipulation of the address, the tag is cleared, and the value can no longer be used as a pointer. So this problem doesn't exist on CHERI.

Also, the problem of mistaking integers for pointers when scanning doesn't exist - you can instead just search for memory where the tag bit is set.


What you're describing is not just a problem with GC, but pointers in general. Optimizers would choke on exactly the same scheme.

What compiler writers realized is that pointers are actually not integers, even though we optimize them down to be integers. There's extra information in them we're forgetting to materialize in code, so-called "pointer provenance", that optimizers are implicitly using when they make certain obvious pointer optimizations. This would include the original block of memory or local variable you got the pointer from as well as the size of that data.

For normal pointer operations, including casting them to integers, this has no bearing on the meaning of the program. Pointers can lower to integers. But that doesn't mean constructing a new pointer from an integer alone is a sound operation. That is to say, in your example, recovering the integer portion of y and casting it to a pointer shouldn't be allowed.

There are two ways in which the casting of integers to pointers can be made a sound operation. The first would be to have the programmer provide a suitably valid pointer with the same or greater provenance as the one that provided the address. The other, which C/C++ went with for legacy reasons, is to say that pointers that are cast to integers become 'exposed' in such a way that casting the same integer back to a pointer successfully recovers the provenance.

If you're wondering, Rust supports both methods of sound int-to-pointer casts. The former is uninteresting for your example[0], but the latter would work. The way that 'exposed provenance' would lower to a GC system would be to have the GC keep a list of permanently rooted objects that have had their pointers cast to integers, and thus can never be collected by the system. Obviously, making pointer-to-integer casts leak every allocation they touch is a Very Bad Idea, but so is XORing pointers.

Ironically, if Alloy had done what other Rust GCs do - i.e. have a dedicated Collect trait - you could store x and x^y in a single newtype that transparently recovers y and tells the GC to traverse it. This is the sort of contrived scenario where insisting on API changes to provide a precise collector actually gets what a conservative collector would miss.

[0] If you're wondering what situations in which "cast from pointer and int to another pointer" would be necessary, consider how NaN-boxing or tagged pointers in JavaScript interpreters might be made sound.


Not an official list, but there’s a few I’m aware of, each interesting in a different way.

https://en.wikipedia.org/wiki/Emperor_Norton
https://en.wikipedia.org/wiki/Timothy_Dexter
https://en.wikipedia.org/wiki/Eddie_Chapman


> Do you want to delete this? This action is IRREVERSIBLE

Every so often, I'll check this GitHub issue[0] from 2017, which requests that the various Docker prune commands (e.g. "docker image prune") get a dry-run flag to display what would actually be deleted. These commands show a warning that data may be deleted and require user confirmation to continue, but they don't tell you what will actually be removed until after the deletion has already been performed.

[0] https://github.com/moby/moby/issues/30623


It's Newspeak. The concept is often misapplied to refer to the use of new words for new or nuanced concepts, but that isn't accurate to how it is described in 1984. Instead, Newspeak is a stripping away of words and phrases, such that only the acceptable responses can even be expressed.

Every time a dialogue box has “Sure”/“Ask me later”, they are preventing you from expressing “No”.


Phrasing things like a buddy is not an example of what you're describing. They're separate issues.


So, if I'm understanding your argument correctly, failure to stop a bad actor from taking a hostile action absolves the bad actor of all responsibility for that hostile action? Because that seems to be what you're saying.


Hostile to whom and on what basis?

If two companies decide to merge, is that by default a hostile action? I don't think so. And what if an anti-monopoly agency later decides that it was an infraction and blocks the merger? Then I think it was.

Is informing voters about something a hostile action by default? I don't think so. But if it were later decided that a law had been broken, then it would be a hostile action.

All people, and even megacorps, are innocent until proven guilty. Maybe even until accused of being guilty by some authority, though that is pushing it. But until someone at least does something to show their displeasure and point fingers, then yeah, no hostile action has happened; it was fair play.

So to answer your question: vote bribing is 100% a failure of the organizer of said vote and of the law enforcement system (in bigger cases) IF that organizer or government never complained about it or did jack shit in general. But if they at least complained, or better, did something about the problem, then yes, the vote bribing would be a shared responsibility of both the bad actor doing it and the people who hadn't done any reasonable prevention in advance.

Since it is obvious that assigning equal weights to the countries can and did lead to vote manipulation, I can say that the ISO committee had NOT done reasonable prevention of vote manipulation and so is also guilty, just like the bad actor, MS.


> Corollary: hats off to Red Hat for supporting their distro releases for such a lengthy period of time.

This has been my bane at various open source projects, because at some point somebody will say that all currently supported Linux distributions should be supported by a project. This works as a rule of thumb, except for RHEL, which has some truly ancient GCC versions provided in the "extended support" OS versions.

* The oldest supported version in "production" is RHEL 8; the oldest in "extended support" is RHEL 7.
* RHEL 8 (released 2019) provides GCC 8 (released May 2018). RHEL 7 (released 2014) provides GCC 4.8 (released March 2013).
* GCC 8 supports C++17 but not C++20. GCC 4.8 supports most of C++11 (some standard-library features weren't added until later) but doesn't support C++14.

So the well-meaning cutoff of "support the compiler provided by supported major OS versions" becomes a royal pain, since it would mean avoiding useful functionality in C++17 until mid-2024 (when RHEL 7 went from "production" to "extended support") or mid-2028 (when RHEL 7 "extended support" will end). It's not as bad at the moment, since C++20 and C++23 were relatively minor changes, but C++26 is shaping up to be a pretty useful change, and that wouldn't be usable until around 2035 when RHEL 10 leaves "production".

I wouldn't mind it as much if RHEL named the support something sensible. By the end of a "production" window, the OS is still absolutely suitable as a deployment platform for existing software. Unlike other "production" OS versions, though, it is no longer reasonable as a target for new development at that point.


RHEL has gcc-toolset-N (previously devtoolset-N-gcc) for that. It's perfectly fine to only support building a project with, say, the penultimate gcc-toolset. Or ask for a payment for support, which is the norm in this (LTS) space.


Oh, absolutely, and I usually push for having users install a more recent compiler. The problem comes when the compatibility policy is defined in terms of the default compiler provided, because then it requires a larger discussion around that entire policy.


> at some point somebody will say that all currently supported Linux distributions should be supported by a project

Ask for payment for extended support as well.


GCC 12 is available for RHEL 7.

