Hacker News | pclmulqdq's comments

This whole product description with the use of the words "intentional" next to "AI" seems like trolling. There are a lot of very trendy words put next to each other and there are no artifacts.

Linux has a BDFL and a quasi-corporate structure that keeps all the incentives aligned. Rust has neither of those.

Approximate nearest neighbor searches don't cost precision. Just recall.
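Concretely: an ANN index returns k real points with exact distances, so nothing it returns is wrong; it may just miss some of the true top-k, which is what recall@k measures. A minimal sketch with made-up ids:

```rust
// recall@k: what fraction of the true k nearest neighbors did the
// approximate search actually find?
fn recall_at_k(truth: &[u64], found: &[u64]) -> f64 {
    let hits = found.iter().filter(|&id| truth.contains(id)).count();
    hits as f64 / truth.len() as f64
}

fn main() {
    // hypothetical ids: exact top-4 vs. what an ANN index returned
    let truth = [1u64, 2, 3, 4];
    let found = [1u64, 2, 4, 9];
    assert_eq!(recall_at_k(&truth, &found), 0.75);
}
```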

Interesting. No mention of kernel bypass, which Cloudflare was also discussing in 2023-2024.

Outside of HPC/HFT most people will never need kernel bypass. If you just got off Nginx you probably have years of optimizations left to do. (Username checks out though.)

There should be a political party for people who use opcode mnemonics as their nicknames or domain names.

And another party for people who sign their emails with 3-letter usernames? :)

I'm ready to form a coalition.

I'm not a fan of New York bagels. They're generally too doughy and "white bread" tasting for me. Plenty of places have excellent bagels that are pre-boiled with lye. The lye boiling process is not special. What is unique is the particular taste and texture, and it's just one kind of bagel that you can prefer or not prefer.

Your whole comment below about "discernment" and seeking New York bagels out sounds like a personal preference (bred by familiarity), not actually finding the creme de la creme of bagels.

The same goes for Chicago/New York pizza. It's not special. It's just the pizza you metaphorically grew up with.


> The lye boiling process is not special

It’s one element. The result, however, is highly perishable. You can make it last a full day on the counter, but that fucks with the texture.

> it's just one kind of bagel that you can prefer or not prefer

Sure. Same with various cheeses. Or beef.

Kobe beef is predominantly consumed in Japan. A bit makes it out. But you can generally serve someone who hasn’t spent a lot of time in Japan other wagyu and they’ll be happy. You won’t get away with that with a Kobe aficionado, and there are simply more of those in Japan for self-reïnforcing reasons. (I personally like a range of beef, and while Kobe is great, it’s not something I seek out.)


Almost every city has several bakeries that make lye-boiled bagels and plenty of other things that are baked and stocked daily. Most bakers I know will donate their stock of all breads to a homeless shelter at the end of the day and start fresh on new bread in the morning. You don't need extremely high volume for that.

> that are baked and stocked daily

But not multiple times a day. A New York bagel noticeably stales after a couple hours.

Baguettes are the same, by the way. The little handies? If made plainly, correctly, they change immeasurably once they cool.

When perishability is measured in tens of minutes’ intervals, your economics require a large city of aficionados. (Not applicable to cheese, obviously.)


Most good bakeries everywhere stock multiple times a day as stock gets low. Even the ones selling American baked goods and things like cupcakes because all of these things have shelf lives of hours. Do you believe that New York is the only place in the US where you can get a baguette or a loaf of French bread? Do you think it's the only place you can get a cake?

Having high foot traffic and understanding supply and demand are not unique to New York. The specific type of bagel is, though, because it's a preference rather than a sign of quality. You have fewer bakeries per square mile outside New York, but you have fewer of everything per square mile outside New York. Many cities around the US are plenty dense to support people who make high-quality baked goods.


> Most good bakeries everywhere stock multiple times a day as stock gets low

The stuff that sells. In most bakeries, that doesn’t cover bagels.

> Do you believe that New York is the only place in the US where you can get a baguette or a loaf of French bread?

Nobody claimed this.

> high foot traffic and understanding supply and demand are not unique to New York

They absolutely are. New York has entire American cities’ worth of people in single city blocks. That drives niche culinary diversity in a way that’s impossible to sustain anywhere else in America.

> Many cities around the US are plenty dense to support people who make high-quality baked goods

Again, never contested. But not as wide a variety. You can’t profitably make every sort of baked good fresh every few hours in a town smaller than a few hundred thousand. You can find that within walking distance for bagels, cubanos, naan and dumplings in a lot of Manhattan.


> bred by familiarity

Bread by familiarity, surely? Sorry for the awful bun. I mean pun.


-O3 -march=haswell

The second one is your problem. Haswell is over a decade old now (it launched in 2013), and almost nobody owns a CPU that old. -O3 makes a lot of architecture-dependent decisions, and tying yourself to an antique architecture gives you very bad results.


The author gives a godbolt link. It takes 5 minutes to add two compilers with -march=raptorlake and see that they give the same result.

https://godbolt.org/z/oof145zjb

So no, Haswell is not the problem. LLVM just doesn't know about the dependency thing.


Also, if you’re building binaries for redistribution, requiring Haswell or newer is kind of an aggressive choice.

I have a box that old that can run image diffusion models (I upgraded the GPU during covid.)

My point being that compiler authors do worry about “obsolete” targets because they’re widely used for compatibility reasons.


"Haswell or newer" is a bit misleading here, because there were low-end Celerons and Pentiums shipped for years after the release of Haswell that lacked AVX2 and wouldn't be able to run software compiled for -march=haswell.

Oh, yeah. Looking at the code the comparison function is bad in a way that will hit issues with cmovs. The "else if" there is a nearly 100% predictable branch which follows a branch that's probably completely unpredictable. One optimization you get with -O3 is that branches not marked likely or unlikely can be turned into dataflow. The proper ordering for that code is to pull the equality check forward into an if statement and mark with "unlikely" if you're using -O3, and then return the ordering of the floating point comparison.
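A sketch of that reordering (the `Neighbor` field names are assumed from the snippet in this thread; the `unlikely` hint is left out, since on stable Rust it needs `#[cold]` tricks or nightly intrinsics):

```rust
use std::cmp::Ordering;

// Field names assumed from the thread's snippet.
struct Neighbor { dist: f32, id: u64 }

// Equality hoisted to the front: that branch is almost never taken, so
// the predictor handles it, and the hot path is a single float compare
// that -O3 can lower to branchless code. Assumes dist is never NaN.
fn cmp_neighbors(a: &Neighbor, b: &Neighbor) -> Ordering {
    if a.dist == b.dist {
        return a.id.cmp(&b.id); // rare tie-break path
    }
    if a.dist < b.dist { Ordering::Less } else { Ordering::Greater }
}

fn main() {
    let near = Neighbor { dist: 1.0, id: 1 };
    let far = Neighbor { dist: 2.0, id: 2 };
    assert_eq!(cmp_neighbors(&near, &far), Ordering::Less);
}
```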

Since we're using Rust there's some decent syntactic sugar for this function you can use instead:

    let cmp = |other: &Neighbor| -> Ordering {
        // note: unwrap() will panic here if either dist is NaN
        other.dist.partial_cmp(&neighbor.dist).unwrap()
            .then_with(|| other.id.cmp(&neighbor.id))
    };
You probably won't get any CMOVs in -O3 with this function because there are NaN issues with the original code.
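If NaN really can show up in `dist`, `f32::total_cmp` (stable since Rust 1.62) avoids the panicking `unwrap` by imposing IEEE 754's total order; a sketch with the same assumed `Neighbor` shape:

```rust
use std::cmp::Ordering;

struct Neighbor { dist: f32, id: u64 }

fn main() {
    let a = Neighbor { dist: f32::NAN, id: 1 };
    let b = Neighbor { dist: 1.0, id: 2 };
    // total_cmp never panics: NaN simply sorts consistently to one end
    // of the order (by sign bit) instead of poisoning the comparison.
    let ord = a.dist.total_cmp(&b.dist).then_with(|| a.id.cmp(&b.id));
    assert_ne!(ord, Ordering::Equal);
    println!("compared NaN without panicking: {:?}", ord);
}
```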

My desktop computer is a Sandy Bridge from 2011. I still haven't seen any compelling reason to upgrade.

What factors would be compelling to upgrade for?

Just curious, since perf alone doesn’t seem to be the factor.

https://browser.geekbench.com/processors/intel-core-i7-2600k

https://browser.geekbench.com/processors/intel-core-i9-14900...


Because number bigger doesn’t translate to higher perceived performance…

The only compelling reason that I want to upgrade my Sandy Bridge chip is AVX2.

So it's the instruction set, not perf. Sure, there will be improved performance, but most of the things that are actually performance issues are already handed off to the GPU.

On that note, probably ReBAR and PCIe 4.0, but those aren’t dramatic differences; if the CPU is really a problem (renders/compilation) then the work gets offloaded to different hardware.


> Because number bigger doesn’t translate to higher perceived performance…

When the numbers are that far apart, there is definitely room to perceive a performance improvement.

2011-era hardware is dramatically slower than what’s available in 2025. I go back and use a machine that is less than 10 years old occasionally and it’s surprising how much less responsive it feels, even with a modern high-speed SSD installed.

Some people just aren’t sensitive to slow systems. Honestly a great place to be because it’s much cheaper that way. However, there is definitely a speed difference between a 2011 system and a 2025 system.


Choice of things like desktop environments matters a lot. I’m using xfce or lxde or something (I can’t tell without checking top), and responsiveness for most stuff is identical between 2010 intel and a ryzen 9.

The big exceptions are things like “apt-get upgrade”, but both boxes bottleneck on Starlink for that. Modern games and compilation are the other obvious things.


> The big exceptions are things like…

> Modern games and compilation are the other obvious things.

I mean, if we exempt all of the CPU-intensive things, then the speed of your CPU doesn’t matter.

I don’t have a fast CPU for the low overhead things, though. I buy one because that speed up when I run commands or compile my code adds up when I’m doing 100 or 1000 little CPU intensive tasks during the day. A few seconds or minutes saved here and there adds up multiplied by 100 or 1000 times per day. Multiply that by 200 working days per year and the value of upgrading a CPU (after over a decade) is very high on the list of ways you can buy more time. I don’t care so much about something rendering in 1 frame instead of 2 frames, but when I type a command and have to wait idly for it to complete, that’s just lost time.


Believe it or not, "good enough" often is good enough. Regardless of how big the numbers are.

The comment claimed there wasn’t a perceivable difference

That’s different than acknowledging that newer hardware is faster but deciding current hardware is fast enough


Especially on a single core, everything is painfully slow. I tried to install Linux on a PPC iMac G5 five years ago and I had to admit that it was never going to be a good experience, even for basic usage.

Agreed that if you’re not using NVMe (as example), that non-CPU upgrade will translate into the biggest perceived benefit.

Then again, not many Sandy Bridge mobos supported NVMe.


I did get a PCI Express to M.2 adapter and installed an NVMe drive.

That was indeed the biggest upgrade ever.


I went from a Sandy Bridge (i5 2500k) to a Ryzen 9 3950x, and the perceived performance improvement was insane. You also have to take into account RAM and PCIe generation bumps, NVMe, etc.

Not OP, but I'm on a 10 year old laptop.

Only thing I'd want is a higher resolution display that's usable in daylight, and longer battery life.


For what it's worth, you may be pleasantly surprised by the performance if you upgrade. I went from an Ivy Bridge processor to a Zen 3 processor, and I found that there were a lot of real world scenarios which got noticeably faster. For example, AI turns in a late game Civ 6 map went from 30s to 15s. I can't promise you'll see good results, but it's worth considering.

Not as old but I am still typing this on a MacBook Pro Early 2015 with a Broadwell CPU. It is doing pretty well with Chrome and Firefox, not so much with Safari.

I do. I had to replace the plastics of the laptop, and the screen's resolution is unacceptable, but with Linux and an SSD it's still fine for basic computer usage. Not my daily driver anymore, but I kept it as a daily driver for 10 years.

The default is an even older instruction set. Maybe you meant to suggest -march=native?

This era of CPUs has held up surprisingly well. I built an Ivy Bridge desktop in 2012 that still sees daily productivity use (albeit with an NVMe and RAM upgrade).

GPG works great if you use it to encrypt and decrypt emails manually as the authors intended. The PGP/GPG algorithms were never intended for use in APIs or web interfaces.

Ironically, it was the urge not to roll your own cryptography that got people caught in GPG-related security vulnerabilities.


I tried this, but not with the "AI magic" angle. It turns out nobody cares because CSPRNGs are random enough and really fast.

The awk script is probably the fastest way to do this still, and it's faster if you use gawk or something similar rather than default awk. Most people also don't need ordering, so you can get away with only the awk part and you don't need the sort.
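For reference, the one-liner presumably being referenced is the classic associative-array dedup (the filename here is a placeholder):

```shell
# seen[$0]++ evaluates to 0 (falsy) the first time a line appears, so
# !seen[$0]++ is true exactly once per distinct line; input order is
# preserved and no sort pass is needed.
awk '!seen[$0]++' input.txt
```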

Sometimes, "weird" patterns are correct. The borrow checker doesn't care about nuance.

It’s also true that people overestimate how often the “weird” patterns are needed. 9 times out of 10 it’s the programmer who is missing something, not the borrow checker.

That has not been my experience with it, but I understand if it is yours. I have often seen people use convoluted or slow patterns to satisfy the borrow checker when something slightly un-kosher would have been simpler, faster, and easier.

unsafe exists for that very reason.

There's no rabbi out there to mandate that your code be kosher; using unsafe is OK.

Also, the situations where you really need it are rare in practice (in 9 years of full-time Rust, I've yet to encounter one).


Using "unsafe" for things that really need it structurally is incredibly unwieldy and makes all your code a total mess. A single instance of "unsafe" is clean and fine, but if you want or need to use patterns that do not follow the religious "single ownership known at compile time" dogma, you end up spewing "unsafe" in a lot of places and having terribly unclean code as a result. And no, "Arc" is not a perfect solution to ths because its performance is terrible.

I encourage you to write a doubly linked list in Rust if you want to understand what I mean about "un-kosher" code. This is a basic academic example, but Rust itself is the rabbi that makes your life suck if you stray off the narrow path.

I write a decent amount of system-level software and this kind of pattern is unavoidable if you actually need to interact with hardware or if you need very high performance. I have probably written the unsafe keyword several hundred times despite only having to use Rust professionally for a year.
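For the curious, here's roughly what the doubly-linked case looks like with raw pointers (a toy two-node sketch, not production code): every link write and traversal has to live inside unsafe.

```rust
use std::ptr;

struct Node {
    val: i32,
    prev: *mut Node,
    next: *mut Node,
}

// Leak a heap node into a raw pointer; the caller owns freeing it.
fn new_node(val: i32) -> *mut Node {
    Box::into_raw(Box::new(Node { val, prev: ptr::null_mut(), next: ptr::null_mut() }))
}

fn main() {
    let a = new_node(1);
    let b = new_node(2);
    unsafe {
        // every link mutation and traversal needs unsafe
        (*a).next = b;
        (*b).prev = a;
        assert_eq!((*(*a).next).val, 2);
        assert_eq!((*(*b).prev).val, 1);
        // retake ownership so both nodes are freed
        drop(Box::from_raw(a));
        drop(Box::from_raw(b));
    }
}
```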


> Using "unsafe" for things that really need it structurally is incredibly unwieldy and makes all your code a total mess. A single instance of "unsafe" is clean and fine, but if you want or need to use patterns that do not follow the religious "single ownership known at compile time" dogma, you end up spewing "unsafe" in a lot of places and having terribly unclean code as a result.

It's only “unclean” because you see it that way. In reality it's no more unclean than writing C (or Zig, for that matter).

> And no, "Arc" is not a perfect solution to ths because its performance is terrible.

There's no “perfect solution”, but Arc is fine in many cases, just not all of them.

> I encourage you to write a doubly linked list in Rust if you want to understand what I mean about "un-kosher" code

I've followed the “Learning Rust With Entirely Too Many Linked Lists” tutorial ages ago, and I've then tutored beginners through it, so I'm familiar with the topic, but the thing is you never need to write a doubly linked list IRL since it's readily available in std.

> this kind of pattern is unavoidable if you actually need to interact with hardware

You need unsafe to interact with hardware (or FFI) but that's absolutely not the same thing.

> I have probably written the unsafe keyword several hundred times despite only having to use Rust professionally for a year.

If it's not hyperbole, that's a very big smell IMHO; you probably should spend a little more time digging into how to write idiomatic Rust (I've seen coworkers coming from C writing way too much unsafe for no reason because they just kept cramming their C patterns into their Rust code). Sometimes unsafe is genuinely needed and should be used as a tool, but sometimes it's being abused due to lack of familiarity with the language. “Several hundred times” really sounds like the latter.

