Hacker News | hedgehog's comments

Durable consensus means this is waiting for a confirmed write to disk on a majority of nodes, so it will always be much slower than the time it takes a NIC to put bits on the wire. That's the price of durability until someone figures out a more efficient way.
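A rough back-of-envelope budget, with illustrative numbers rather than measurements from any particular system:

    serialize a 1 KB entry at 25 Gbps      ~0.3 us
    same-datacenter network round trip     ~50-200 us
    NVMe flush on a follower               ~20 us
    quorum commit ~= round trip + follower flush + leader flush

Even granting a fast flush, the commit latency is dominated by the round trip to a quorum rather than the time the bits spend on the wire.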

An NVMe disk write is 20 microseconds.

Oh no, across the lineup Toyota still tries to sell you a subscription service that's required for things like remote start.

So? Why would people buying the cheapest possible model care?

Remote start is a luxury feature. Just ignore the subscription offer like a luxury trim option.


Remote start isn't a luxury feature unless you also classify remote unlock as a luxury feature. Toyota (and many manufacturers) used to use the key fob to also start the car, but they disabled that so they could charge for it as part of the subscription.

It depends. What else is included in that subscription? Mirror and seat heating by any chance?

Can you buy a Toyota that isn't always online? Because if remote start is available, subscription or not, it sounds like you can't.


My point was you can buy a car that is primarily safety+capability and not luxury+subscription. My RAV4 has a touchscreen and power windows but that’s literally it as far as convenience/luxury. ~28k a couple years ago, no subscription.

Anything heated, remote, touch-to-start, etc. is in the luxury category the GP was asking about avoiding.


Depending on the weather, heated mirrors may be a safety feature. I'd very much like that for rainy/foggy days, for example.

This comment section went from “I don’t understand why car makers don’t sell a cheap stripped down car without luxuries” to “Heated mirrors and seats are very important” very quickly.

HN comments discovering in real time why the stripped-down base model vehicles don’t actually sell. People like those luxury features and they choose to pay extra for them.


Same procedure as every year, James.

But which ones are luxury? I don’t want a fucking infotainment system that has the fucking a/c controls on the fucking touch screen for example.

And I’m not willing to pay 30% extra for electric and then wonder if it’s safe to rent a cabin in the woods for New Year’s.


> It depends. What else is included in that subscription? Mirror and seat heating by any chance?

Are you saying it is? Or is this a rhetorical question?

Either way, those are again luxury features. If someone is in the market for those features they’re not really looking for the base model any more.


> Are you saying it is? Or is this a rhetorical question?

I don't know. I'm not in the market for a new car atm. And considering how enshittified new cars are, I shouldn't be at all.


Once you start looking at the world through the lens of the frequency domain, a lot of neat tricks become simple. I have some demo code that uses a Fourier transform on webcam video to read a heart rate off a person's face, basically looking for the frequency that holds the peak energy.
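A minimal sketch of the approach, assuming OpenCV and NumPy; the face region here is just a fixed crop rather than an actual face detector, and the camera is assumed to deliver a steady 30 fps:

    import cv2
    import numpy as np

    FPS = 30          # assumed camera frame rate
    SECONDS = 20      # length of the capture window

    cap = cv2.VideoCapture(0)
    samples = []
    for _ in range(FPS * SECONDS):
        ok, frame = cap.read()
        if not ok:
            break
        h, w = frame.shape[:2]
        roi = frame[h // 4 : h // 2, w // 3 : 2 * w // 3]  # crude "forehead" crop
        samples.append(roi[:, :, 1].mean())                # mean of the green channel
    cap.release()

    x = np.asarray(samples) - np.mean(samples)
    spectrum = np.abs(np.fft.rfft(x))
    freqs = np.fft.rfftfreq(len(x), d=1.0 / FPS)

    # Only consider the plausible heart-rate band, ~0.7-3 Hz (42-180 BPM)
    band = (freqs > 0.7) & (freqs < 3.0)
    peak_hz = freqs[band][np.argmax(spectrum[band])]
    print("Estimated heart rate: %.0f BPM" % (peak_hz * 60))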

It's effectively the underpinning of all modern lossy compression algorithms. The DCT, which underlies codecs like JPEG, H.264, and MP3, is really just a modified FFT.
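A tiny illustration of the lossy part, using SciPy's DCT on an 8-sample block; dropping the top half of the coefficients stands in for real quantization here, it isn't any codec's actual table:

    import numpy as np
    from scipy.fft import dct, idct

    block = np.array([52, 55, 61, 66, 70, 61, 64, 73], dtype=float)  # 8 pixel values

    coeffs = dct(block, norm='ortho')    # to the frequency domain
    coeffs[4:] = 0                       # drop the high-frequency half
    approx = idct(coeffs, norm='ortho')  # back to pixel values

    print(np.round(approx))              # close to the original, with half the data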

Inter/intra-prediction is more important than the DCT. H.264 and later use simpler, degenerate forms of the DCT because that's good enough and they can define it with bitwise accuracy.

>Once you start looking at the world through the lens of frequency domain a lot of neat tricks become simple.

Not the first time I've heard this on HN. I remember a user commenting once that it was one of the few perspective shifts in his life that completely turned things upside down professionally.


There is also a loose analogy with finance: act (trade) when prices cross a certain threshold, not after a specific time.

I don't think pulsing skin (due to blood flow) is visible from a webcam though.

Plenty of sources suggest it is:

https://github.com/giladoved/webcam-heart-rate-monitor

https://medium.com/dev-genius/remote-heart-rate-detection-us...

The Reddit comments on that second one have examples of people doing it with low quality webcams: https://www.reddit.com/r/programming/comments/llnv93/remote_...

It's honestly amazing that this is doable.


My dumb ass sat there for a good bit looking at the example in the first link thinking "How does a 30-60 Hz webcam have enough samples per cycle to know it's 77 BPM?". Then it finally clicked in my head that beats per minute are indeed not to be conflated with beats per second... :).
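Working the numbers out: 77 BPM is 77/60 ≈ 1.3 Hz, and a 30 fps camera has a Nyquist limit of 15 Hz, so each beat spans roughly 30/1.3 ≈ 23 frames.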

Non-paywalled version of the second link https://archive.is/NeBzJ


MIT was able to reconstruct voice by filming a bag of chips with a 60 fps camera. I would hesitate to say how much information can leak through.

https://news.mit.edu/2014/algorithm-recovers-speech-from-vib...


I befriended the guy in high school who built a Tesla coil. For his next trick he was building a laser to read sound off of plate glass. The decoder was basically an AM radio. Which high school me found slightly disappointing.

I basically asked my math and physics teachers in high school what the Fourier transform was, but none of them knew how to answer my questions (which were about digital signal processing -- modems were important things to us back in the early '90s). If I had to do it over again, I would have audited the local university's electrical engineering and math courses in evenings. The first time MIT ran 6002x online back in 2012, the course finally answered a lot of those questions when touching upon filters and bandwidth.

Yeah I wish I had known about or had access to that stuff when I was a kid. To really learn and internalize ideas like negative frequency early would have been quite fun.

It is, I've done it live on a laptop and via the front camera of a phone. I actually wrote this thing twice, once in Swift a few years back, and then again in Python more recently because I wanted to remember the details of how to do it. Since a few people seem surprised this is feasible maybe it's worth posting the code somewhere.

You will be surprised by The Unreasonable Effectiveness of opencv.calcOpticalFlowPyrLK.
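A minimal Lucas-Kanade point-tracking sketch with OpenCV; the parameters are just common defaults, not tuned for anything in particular:

    import cv2

    cap = cv2.VideoCapture(0)
    ok, old_frame = cap.read()
    old_gray = cv2.cvtColor(old_frame, cv2.COLOR_BGR2GRAY)

    # Pick some corners worth tracking
    p0 = cv2.goodFeaturesToTrack(old_gray, maxCorners=100, qualityLevel=0.3, minDistance=7)

    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

        # Track last frame's points into this frame
        p1, status, err = cv2.calcOpticalFlowPyrLK(old_gray, gray, p0, None,
                                                   winSize=(15, 15), maxLevel=2)
        good = p1[status.flatten() == 1]
        for x, y in good.reshape(-1, 2):
            cv2.circle(frame, (int(x), int(y)), 3, (0, 255, 0), -1)

        cv2.imshow("flow", frame)
        if cv2.waitKey(1) == 27:   # Esc to quit
            break
        old_gray, p0 = gray, good.reshape(-1, 1, 2)

    cap.release()
    cv2.destroyAllWindows()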

Which is a special case of mathematics.

It is, but there's a lot of noise on top of it (in fact, the noise is kind of necessary to avoid it being 'flattened out' and disappearing). The fact that it covers a lot of pixels and is relatively low bandwidth is what allows for this kind of magic trick.

The frequency resolution must be pretty bad though. You need 1 minute of samples for a resolution of 1/60 Hz. Hopefully the heart rate is staying constant during that minute.
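Roughly: with a window of T seconds the raw FFT bins are spaced 1/T Hz apart, or 60/T BPM, so a 10-second window only resolves to about 6 BPM and you need the full 60 seconds for 1 BPM. Interpolating around the spectral peak or using overlapping windows softens that somewhat in practice.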

It totally is. Look for motion-magnification in the literature for the start of the field, and then remote PPG for more recent work.

Sure it is. Smart watches even do it using the simplest possible “camera” (an LED).

You can do it with infrared, and webcams see some infrared, but I'm not sure if they're sensitive enough for that.

I have seen apps that use the principle for HRV, with a finger pressed on the phone camera.

I've followed the research a little bit. The general sense I get is that for vehicle control at the edge of traction specifically, software in the lab has far outperformed normal humans for over a decade. The problem is that delivering the "boring" point A to B reliably in all conditions is still unsolved. Relative safety is also a moving target, because all the advances in the first bucket are directly applicable to human-driven cars as driver aids.

Yeah, my non-autonomous Toyota can already see the lane lines better than me in the rain. However, that's not too beneficial when no other driver can see the lanes and everyone is just driving to not crash into each other.

It gets real hard when the entire road surface is covered with snow for weeks at a time (like after small 1" snowfalls that might not get cleared immediately the way a heavy snow would). Or when snow buildup on the road edges changes the road edge location and cars parked on the side project well into the "nominal" GPS-derived lanes. Lanes which human drivers won't be using. They'll be using emergent lanes defined by flocking behavior. I haven't seen any evidence that autonomous vehicles can detect or navigate such emergent lanes, marked by nothing except tire tracks and human gut feeling.

Being "better" than human in these situations will cause crashes. The real goal is to drive like a human.


Mixed and fractional scaling both mostly don't work (not complaining, but those are common for people with laptops and external displays).

They pitch their company as finding bugs "with AI". It's not hard to point one of the coding agents at a repo URL and have it find bugs even in code that's been in the wild for a long time; looking at their list, that looks likely to be what they're doing.

The list is pretty short though for 8 months. OSS-Fuzz has found a lot more, even with the fuzzers often not covering much of the code base.

Paying people to write fuzzers by hand would yield a lot more and be less expensive than data centers and burning money, but who wants to pay people in 2026?


Bugs are not equivalently findable and different techniques surface different bugs. The direct comparison you're trying to draw here doesn't hold.

It does not matter what purported categories buffer overflows are in when manual fuzzing finds 100 and "AI" finds 5.

If Google gave open source projects $100,000 per year for a competent QA person, it would cost less than this "AI" money straw fire and produce better results. Maybe the QA person would also find the 5 "AI" detected bugs.


This would make sense if every memory corruption vulnerability was equivalently exploitable, which is of course not true. I think you'll find Google does in fact fuzz ffmpeg, though.

Google gives a pittance even for full OSS-Fuzz integration. Which is why many projects just have the bare minimum fuzz tests. My original point was that even with these bare-minimum tests OSS-Fuzz has found way more than "AI" has.

Another weird assumption you've got here is that fuzzing outcomes scale linearly with funding, which, no. Further, the field of factory-scale fuzzing and triage is one Google security engineers basically invented, so it's especially odd to hold Google out as a bad actor here.

At any rate, Google didn't employ "AI" to find this vulnerability, and Google fuzzing probably wouldn't have outcompeted these researchers for this particular bug (totally different methods of bugfinding), so it's really hard to find a coherent point you'd be making about "fuzzers", "AI", and "Google" here.


My guess is the main "AI" contribution here is to automate some of the work around the actual fuzzing. Setting up the test environment and harness, reading the code + commit history + published vulns for similar projects, identifying likely trouble spots, gathering seed data, writing scripts to generate more seed data reaching the identified trouble spots, adding instrumentation to the target to detect conditions ASan etc don't, writing PoC code, writing draft patches... That's a lot of labor and the coding agents can do a mediocre job of all of it for the cost of compute.

If it's finding exploitable bugs prior factory-scale fuzzing of ffmpeg hasn't, seems like a pretty big win to me.

For sure, and I think it expands the scope of what factory scale efforts can find. The big question of course being how to handle remediation because more bugs without more maintainer capacity is a recipe for tears.

[flagged]


I am a professional software developer and have been since the 1990s.

I can't speak to what exactly this team is doing, but I haven't seen any evidence that with-robot finds fewer bugs than without-robot. I do have some experience in this area.

FWIW I have some background in this area and got curious how Meshtastic works, so I read some of the docs and code. It seems like they are unaware of existing work from even 20+ years ago. A specific suggestion is to study the state of single-radio CSMA meshes in, say, 2005, make a list of subjects to read up on, then do that. There's a lot of stuff that happened later, but in the early 2000s many people tried to make meshes out of 802.11b IBSS and a lot got written about those efforts.


Have you tried Qwen3 Next 80B? It may run a lot faster, though I don't know how well it does coding tasks.


I did, it works well... although it is not good enough for agentic coding


Smaller open-weights models are also improving noticeably (like Qwen3 Coder 30B), the improvements are happening at all sizes.


Devstral Small 24b looks promising as something I want to try fine tuning on DSLs, etc. and then embedding in tooling.


I haven't tried it yet, but yes: Qwen3 Next 80B works decently in my testing, and it's fast. I had mixed results with the new Nemotron, but it and the new Qwen models are both very fast to run.


Same experience: on my old M2 Mac with just 32 GB of memory, both Qwen 3 30B and the new Nemotron models are very useful for coding if I prepare a one-shot prompt with directions and relevant code. I don’t like them for agentic coding tools. I have mentioned this elsewhere: it is deeply satisfying to mix local model use with commercial APIs and services.


There are challenges with really big monolithic caches. IBM does something sort of like your idea in their Power and Telum chips, with different approaches. Power has a non-uniform cache within each die; Telum has a way to stitch together cache even across sockets (!).

https://chipsandcheese.com/p/telum-ii-at-hot-chips-2024-main...

https://www.eecg.utoronto.ca/~moshovos/ACA07/projectsuggesti...

(if you do ML things you might recognize Doug Burger's name on the authors line of the second one)

