Hacker News | pornel's comments

It’s a poor name all around, because the spec already precisely defines the boundaries of what is UB: “if this and this happens, the behavior is undefined”.

These are precisely defined situations that, in modern compilers’ interpretation, are forbidden from ever happening in any program under any circumstances.

UB is only a name for nebulous consequences of violating these very specific prohibitions.


The spec explicitly does not define the boundaries of all UB. Look at the note on the definition in 1.3.24 and the context of the surrounding definitions:

    Undefined behavior may be expected when this International Standard omits any explicit definition of behavior...
Rather than having specific causes, UB is simply the label given to "stuff outside the standard with no requirements whatsoever". Some of this stuff is important and much more of it isn't. A small subset of that infinite universe of UB is the set of enumerated undefined behaviors you're talking about, where the standard declines to specify any requirements in certain situations. These are what the annex is trying to organize.


Oof, that looks like a confused mess of a format.

bz2 is obsolete. It’s very slow, and not that good at compressing: zstd and lzma beat it on both compression ratio and speed.

QOI’s only selling point is simplicity of implementation that doesn’t require a complex decompressor. The addition of bz2 completely defeats that. QOI’s poorly compressed data inside another compressor may even make overall compression worse. It could have been a raw bitmap, or a PNG with gzip replaced by zstd.



I think it is fascinating how big the differences can be depending on which metric you look at. Just goes to show that there is no one-size-fits-all solution, and that you really can squeeze extra perf if you know your application and target hardware. I imagine that if you have some specific fixed-length inputs, you could do even better.


What's problematic about this table is that some of the "quality problems" are just statistical noise. For example it lists quality problems for sha1-160/sha2-224/sha2-256, which should not have any weaknesses that can be detected by a statistical test. Others seem to be implementation limitations, not flaws in the algorithm itself (e.g. blake3_c's "no 32bit portability").


I wonder if there would be a difference in wall-clock time if you ran multiple instances of the hash across multiple threads: would the vectorized ones scale differently from the scalar ones? With dynamic clocking and weird execution unit arrangements, everything is variable.


If you enable SMT then you might end up using almost all of the core's aesenc throughput on one of the two threads (for e.g. ahash or meowhash), so you would observe significantly less than 2x the throughput from running 2 threads on the core. If you don't enable SMT then you should get the same results that SMHasher gets. Workloads that are not mostly made of hashing probably won't suffer from this issue even with SMT enabled.


This is really good!! Thanks! The security discussion is particularly interesting; I had thought it was the randomization of the initial state of the hash functions that mitigated the hash table collision attacks in the popular programming languages.


Before Rust, most features had to be built on top of nginx, and had to carefully balance performance overhead of Lua (openresty) vs risks and maintenance costs of C modules or patching nginx itself.

The switch to Rust has been very positive overall, because it allowed Cloudflare to tackle much more ambitious projects, and own its entire stack.


Vulnerabilities happen where programmers thought the bounds checks were redundant, and they weren't. This overprotectiveness by default turns out to be useful:

https://github.com/rust-fuzz/trophy-case

Look how many of the crashes are panics and unwraps that could have been buffer overflows or wild pointer dereferences otherwise. And there are plenty of arithmetic overflows that are much less dangerous when they can't cause out-of-bounds access.

The code in this particular codec seems to be a direct translation of C code. Idiomatic Rust code would use iterators more, which work better for optimizing out redundant checks. It's easily fixable.


Cloudflare uses QUIC on the browser<->cdn side, but Pingora sits on the cdn<->server side.

That side of the connection usually isn't going over slow and lossy mobile networks, so QUIC isn't that useful there.


People have been trained to look for apps in the app stores. PWAs aren't there, and have a different installation method, for technical reasons that shouldn't matter in most cases.

If you ask differently: do you want apps that install almost instantly, and take almost no space on your phone? A lot of people would be interested.

Renting a scooter or charging an EV tends to require native apps for no good reason. These apps can be 100MB+ in size, and it's infuriating to install them when you're paying for roaming, or somewhere in the middle of nowhere where it takes ages to download. The QR-code-linked page that merely redirects to an app store could have been the app itself!


Well, Apple's proposed solution is App Clips, which is what Spin (and perhaps others?) use: https://developer.apple.com/app-clips/

Of course, an efficiently-coded webpage (e.g. no JS-heavy libs) could also work, with arguably less hassle all-around. But native apps do tend to "feel" nicer.


App Clips are so much this: https://xkcd.com/1367/

I’ve seen an App Clip in the wild only once, and it was a barely functional stopgap app that asked me to download the full app to finish the registration :/

I assume that the proposition of making a second app, Apple-only, with even more restrictions and technical challenges than regular apps, is just not economical enough to catch on.

JS is often bloated, but there’s a lot of tooling for diagnosing and fixing the problem. There are libraries that care about size. Minified JS is pretty dense anyway.

Swift and its frameworks can easily be as bloated, but it’s harder to inspect bloat in compiled code. Languages with generics aren’t easier to keep in check than JS. You can easily accidentally multiply your code when it’s monomorphized, or prevent dead code removal when it’s not. It’s easy to find iOS apps that are 10x larger than a bloated website.


You're making the mistake of assuming that answers that are obviously true and moral to you are neutral and objective.

"Who's evil?" or "What should be illegal?" are inherently judgemental. For some answers the overwhelming majority will agree, but that still doesn't make them neutral or place them outside of morality; it only means they're aligned with the prevailing ideology. Subtler questions, like free speech vs. limits on lies or hate speech, don't have universal agreement, especially outside of the US.

Training of models unfortunately isn't as simple as feeding them just a table of the true_objective_neutral_facts_that_everyone_agrees_on.csv. "Alignment" of models is trying to match what the majority of people think is right. Even feeding them all "uncensored" data is making an assumption that this dataset is a fair representation of the prevailing ideology.


You're describing the Citroën Ami.


Only two seats though?

Edit: Not that I think that this is a big problem.


In Germany there's good coverage of Ionity and Fastned, which have 300kW chargers. In a Hyundai/Kia that supports these speeds, recharging takes 20 minutes.

I've road tripped across Germany and France twice now, and it was easy.

