
By that logic, anything that could potentially ever help catch a criminal is worth any potential cost to anyone who isn't you.


I think I see where you’re coming from here: you’re presuming that the police are incapable of performing their job without killing innocent people. We see a lot of that on TV these days, so I understand your perspective. But if we presume that the police are incapable of performing their jobs, then you’re saying it’s never really worth involving them at all. And that’s an even bigger problem.


>you’re saying it’s never really worth involving them at all.

This is the reality for many millions of Americans. I have people in my own family for whom police interaction of virtually any kind will make their situation worse every time. There are neighborhoods and communities where the idea of calling the police and actually expecting to receive help will get you laughed out of the room, because everyone in the room knows from actual experience it isn't true. A third of their local taxes go to a thing that will, at best, never help them.

It isn't by accident that nations have travel advisories warning their citizens about American police.


So how do we fix the actual problem? A functional police service is a useful thing to have and a moral imperative.

"American police are suboptimal, so we have to stop working with them and get rid of them" is a non sequitur.


The idea that policing (or at least anything we would recognize as policing today) is necessary in society is far from a universal opinion, and it certainly has not been the reality for most of human civilization.

"Abolish the Police" is not rhetorical. https://archive.ph/6E7mY


Maybe not having police worked when the population density was like half a person per square kilometer, but with people packed into tight spaces as they are now, I think you’re living in a dream.


In that case, I can’t think of a better advertisement for Ring. A doorbell camera with police on speed dial. Criminals know not to mess around because if Ring sees them doing anything funny an internationally recognized death squad will come knocking.


You missed my entire point. The ring owner in this case is not exempt from being death squad'ed.


Obviously B?


No black holes have ever been produced on Earth



What about using content-defined chunking? Then inserting a few lines shouldn't cause all the subsequent chunks to shift.
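
A minimal sketch of the idea in Python (toy rolling hash, made-up parameters; real chunkers use Rabin fingerprints or Buzhash, but the resynchronization behavior is the same):

    import hashlib, os

    MASK = (1 << 12) - 1   # boundary when low 12 bits are set -> ~4 KiB average chunks
    MIN_CHUNK = 64         # avoid degenerate tiny chunks

    def chunks(data):
        # Crude rolling hash: shifting left and masking to 32 bits means
        # bytes more than 32 positions back fall out, so each boundary
        # decision depends only on a small trailing window of content,
        # never on absolute file offsets.
        h, start = 0, 0
        for i, b in enumerate(data):
            h = ((h << 1) + b) & 0xFFFFFFFF
            if i - start >= MIN_CHUNK and (h & MASK) == MASK:
                yield data[start:i + 1]
                start = i + 1
        if start < len(data):
            yield data[start:]

    original = os.urandom(1 << 20)
    edited = original[:5000] + b"a few inserted lines\n" + original[5000:]
    a = {hashlib.sha256(c).digest() for c in chunks(original)}
    b = {hashlib.sha256(c).digest() for c in chunks(edited)}
    print(len(a & b), "of", len(b), "chunks shared after the insertion")

Boundaries re-sync within a few dozen bytes of the insertion point, so only the chunk(s) touching the edit get new hashes.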


That's not quite right. The contents of the file would still shift. You'd still be able to deduplicate most chunks between multiple files... but only in your copies, because the filesystem still uses constant-size chunks.


But I think they’re talking about doing content-defined chunking of a pack or archive file: inserting data should only affect the chunks spanning the insertion, plus one. And since it’s content-defined, those chunks are necessarily not constant size.

I.e., this is how backup tools like Arq or Duplicacy work.


Backup tools don't necessarily model mutation, since their purpose is to encode immutable snapshots of data. The modern ones that use deduped content storage also generally like to make snapshots independent of one another, so you can prune snapshots without worrying about invalidating another dependent snapshot. As a result, introducing a new version of a file likely has the same O(n) cost as backing up a new file, as long as they both contain content chunks already found in the storage area.
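
As a sketch of that storage model (hypothetical names, not any particular tool's API; backup() takes an iterable of content-defined chunks):

    import hashlib

    store = {}      # chunk id -> chunk bytes (content-addressed, deduped)
    snapshots = {}  # snapshot name -> ordered list of chunk ids

    def backup(name, data_chunks):
        ids = []
        for chunk in data_chunks:
            cid = hashlib.sha256(chunk).digest()
            store.setdefault(cid, chunk)  # an already-seen chunk stores nothing new
            ids.append(cid)
        snapshots[name] = ids  # every snapshot lists its own chunks in full

    def prune(name):
        # Dropping a snapshot never invalidates another one; just
        # garbage-collect chunks no remaining snapshot references.
        del snapshots[name]
        live = {cid for ids in snapshots.values() for cid in ids}
        for cid in list(store):
            if cid not in live:
                del store[cid]

Note that backup() still walks every chunk of the new file version even when nearly all of them dedupe, which is the O(n) cost above.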

These backup tools also don't generally have to optimize random access to parts of a file. They may just store a linear sequence of chunk IDs to represent one file version. To bring this back to an active system that supports random access by a program with patching, I think you'd really need to adapt a copy-on-write filesystem to use content-defined chunking instead of fixed offset chunks. Then, your insertion is likely an operation on some tree structure that represents the list of chunk IDs as the leaves of the tree. But, this tree would now have to encode more byte offset info in the interior tree nodes, since it would vary at the leaves instead of each leaf representing a fixed size chunk.
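
A sketch of what those interior nodes might carry (hypothetical structure, not any real filesystem's): with fixed-size chunks, mapping a byte offset to a chunk is just division, but with variable-size leaves each node has to cache subtree byte lengths:

    from dataclasses import dataclass

    @dataclass
    class Leaf:
        chunk_id: bytes
        length: int        # variable, since chunks are content-defined

    @dataclass
    class Node:
        children: list     # Leafs and/or Nodes
        length: int        # cached sum of child lengths

    def locate(node, offset):
        # Descend to the leaf covering byte `offset` in O(tree depth).
        # An insertion rewrites one leaf's chunks, then updates the cached
        # lengths along this same root-to-leaf path.
        while isinstance(node, Node):
            for child in node.children:
                if offset < child.length:
                    node = child
                    break
                offset -= child.length
        return node.chunk_id, offset   # chunk id + offset within that chunk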


Sounds more like a "just so" story than a convincing explanation.


It's a reasonably testable hypothesis. It's trivial to translate into a theoretical framework.


Nothing about high-level qualitative behavior in an LLM is trivial to translate into a theoretical framework.


Sounds like you're just too bad at engineering to get stuff done on such a schedule. Why don't you get your brain checked out? Most normal people can easily get productive work done 4 days a week, working 4 productive hours a day. If you think this is entitled, that must be because you are unable to achieve the same thing. In that case, I implore you to look for early signs of cognitive issues, since if you address that, you might find yourself able to enjoy things, get stuff done quickly, and not force yourself to burn out for someone else's paycheck.


Bro, why don't you just say what you're thinking in a more direct way? I think you're passive-aggressively saying I'm a weak engineer but good engineers like yourself deserve the entitlement. Right?


You just ranted and called a bunch of engineers entitled for wanting a higher salary in addition to other things. What kind of responses did you expect to get?


I mean, are they wrong? ( ͡° ͜ʖ ͡°)


what happened to ballsemi?


Looks like they closed down. I haven't been able to find any news, though.


No, we have a generation of men who love shooting up schools and each other.


Not exactly a “generation”, but you are onto something here - men (broadly speaking) do appear to be more susceptible to radicalization.


Determinism is largely impossible due to the arbitrary ordering of GPU threads and the non-associativity of floating-point operations.
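
A two-line illustration of the non-associativity half of that, in Python (the same holds for float32 on a GPU, where thread scheduling can reorder a parallel reduction):

    a, b, c = 1e16, -1e16, 1.0
    print((a + b) + c)  # 1.0
    print(a + (b + c))  # 0.0: 1.0 is below the rounding granularity at 1e16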


Is this true for LLMs but not for Stable Diffusion, at least? Stable Diffusion is largely deterministic, with issues arising mainly when switching between torch versions, GPU architectures, CUDA/cuDNN versions, etc.

Or perhaps I'm wrong about Stable Diffusion too?


I thought so too, but I run a Stable Diffusion service, and we see small differences between generations with the same seed and same hardware class on different machines with the same CUDA drivers running in parallel. It’s really close, but there will be subtle differences (that a downstream upscaler sometimes magnifies), and I haven’t had the time to debug/understand this.


Ah okay, that makes sense. In my experience I've only noticed differences when the entire composition changes, so I'm guessing it's near pixel level or something?

I assume they're most noticeable with the ancestral samplers like Euler a and DPM2 a (and variants)?

