
I'm not in the Apple ecosystem, but I have an Apple TV. It really "just works", and it's been the least annoying of the various devices I've used over the years (Roku, Fire Stick, etc.). My only nit is the stupidly easy-to-lose remote, but I use a Harmony universal remote to avoid that problem.

They should be, but with the speed and resources available on machines these days, people don't spend as much time optimising every little thing, and even make trade-offs; e.g. the GNOME 3 desktop ships the SpiderMonkey JavaScript engine, and an increasing number of its components are written in JavaScript.

No, he's still dealing with a flood of crap, even in the last few weeks, from more modern models.

It's primarily from people just throwing source code at an LLM, asking it to find a vulnerability, and reporting the output verbatim, without any actual understanding of whether it is or isn't a vulnerability.

The difference in this particular case is that it's someone who: 1) uses tools specifically designed for security audits and investigations, and 2) takes the time to read and understand the reported vulnerability, and verifies that it actually is one before reporting it.

Point 2 is the most significant bar, one that people are woefully failing to meet while wasting a terrific amount of his time. The report that got shared a couple of weeks ago, https://hackerone.com/reports/3340109, didn't even call curl. It was straight-up hallucination.


Failure to comply will also increasingly prejudice legal cases and judgments down the road.

As someone who's never really read that much on compression stuff, I have absolutely zero clue what this visualisation is actually showing me.

That's compounded by the lack of a legend. What do the different shades of blue and purple tell me? What does orange mean?

e.g. on a given bit of text in an orange block it puts something like x4<-135. The x4 seems to indicate that the first 4 binary values in the block are significant, but I can't figure out what that 135 is referencing (I assume it's some pointer to a value?)


It's a backreference, the main way the LZ77 algorithm deals with full or partial repetitions. It literally means: copy 4 characters starting from 135 positions back. Note that the copied region can overlap the characters being produced, so x10<-1 equally means: repeat the last character 10 times.
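
If it helps to see that concretely, here's a toy Python sketch of the copy loop (illustrative only, not zlib's actual decoder):

    # Resolve an LZ77-style backreference, including the case where
    # the source region overlaps the bytes being written.
    def copy_backref(out: bytearray, length: int, distance: int) -> None:
        start = len(out) - distance
        for i in range(length):
            out.append(out[start + i])  # byte-at-a-time, so overlap works

    buf = bytearray(b"it was the best of times, i")
    copy_backref(buf, 10, 26)  # x10<-26: re-emits "t was the "
    print(buf.decode())        # it was the best of times, it was the

    buf = bytearray(b"a")
    copy_backref(buf, 10, 1)   # x10<-1: repeats the last character 10 times
    print(buf.decode())        # aaaaaaaaaaa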

Using this example paragraph, at compression level 1 or higher (copied with the quotation symbols):

“It was the best of times, it was the worst of times, it was the age of wisdom, it was the age of foolishness, it was the epoch of belief, it was the epoch of incredulity, it was the season of light, it was the season of darkness, it was the spring of hope, it was the winter of despair.”
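
(If you want to poke at the stream yourself, Python's built-in zlib module produces this kind of output; the exact bytes depend on the zlib version, but the structure described below is the same:)

    import zlib

    text = ("\u201cIt was the best of times, it was the worst of times, "
            "it was the age of wisdom, it was the age of foolishness, "
            "it was the epoch of belief, it was the epoch of incredulity, "
            "it was the season of light, it was the season of darkness, "
            "it was the spring of hope, it was the winter of despair.\u201d")

    compressed = zlib.compress(text.encode("utf-8"), level=1)
    print(len(text.encode("utf-8")), "->", len(compressed), "bytes")
    assert zlib.decompress(compressed).decode("utf-8") == text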

The red bit at the beginning is the zlib header: information and parameters that tell the decoder the format of the data coming up, the window size it needs to decode it, etc.
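
The header is just two bytes, and you can decode its fields yourself straight from RFC 1950; a quick Python sketch:

    import zlib

    stream = zlib.compress(b"hello hello hello", 1)
    cmf, flg = stream[0], stream[1]
    print("method:", cmf & 0x0F)             # 8 = deflate
    print("window:", 1 << ((cmf >> 4) + 8))  # usually 32768
    print("check ok:", (cmf * 256 + flg) % 31 == 0)
    print("level hint:", flg >> 6)           # FLEVEL field: 0..3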

The grey section that follows holds the Huffman coding tables: more common characters in the input are encoded in fewer bits. This is what later tells the decoder that 000 means 'e' and 1110110 means 'I'.
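
The construction itself is simple enough to sketch in a few lines of Python; note that deflate actually uses canonical, length-limited codes plus a compact encoding of the tables themselves, which this toy version skips:

    import heapq
    from collections import Counter

    def huffman_code_lengths(text: str) -> dict[str, int]:
        # Repeatedly merge the two least-frequent subtrees; a symbol's
        # code length is its final depth in the tree.
        heap = [(n, i, {ch: 0}) for i, (ch, n) in enumerate(Counter(text).items())]
        heapq.heapify(heap)
        tiebreak = len(heap)
        while len(heap) > 1:
            na, _, a = heapq.heappop(heap)
            nb, _, b = heapq.heappop(heap)
            merged = {ch: depth + 1 for ch, depth in (a | b).items()}
            heapq.heappush(heap, (na + nb, tiebreak, merged))
            tiebreak += 1
        return heap[0][2]

    lengths = huffman_code_lengths("it was the best of times, it was the worst of times")
    for ch, bits in sorted(lengths.items(), key=lambda kv: kv[1]):
        print(repr(ch), bits, "bits")  # frequent characters get short codes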

Getting into the content now: this is where the decoder can start emitting the uncompressed text. The first 3 purple characters are the UTF-8 bytes of the fancy opening quote; because they're rare in this text, each is encoded as 6 or 7 bits. Because they take a lot of bits, the website shows them in a purple colour, as well as physically wider. The nearby 't' is encoded in 4 bits, 0110, and is shown in a bluer colour.

The orange bits you've mentioned are backreferences: "x10 <- 26" here means "go back 26 characters in what you've decoded, and copy 10 characters from there". In this way we can represent "t was the " in only 12 bits, because we've seen it previously.

The grey at the end is a special "end of stream" marker, followed by a red checksum which allows decoders to make sure there wasn't any corruption in the input.
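
Python's zlib module exposes the same Adler-32 checksum, so you can verify the trailer yourself (per RFC 1950 it's the checksum of the uncompressed data, stored big-endian in the last 4 bytes):

    import zlib

    text = b"it was the best of times, it was the worst of times"
    stream = zlib.compress(text, 1)

    trailer = int.from_bytes(stream[-4:], "big")
    print(hex(trailer), hex(zlib.adler32(text)))  # these should match
    assert trailer == zlib.adler32(text)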

I think that's everything. Further reading: https://en.wikipedia.org/wiki/Zlib https://en.wikipedia.org/wiki/Deflate https://en.wikipedia.org/wiki/Huffman_coding


Thank you! I appreciate the explanation

Happy to help :) I think compression algorithms are super cool, and zlib is a nice example of how just two simple techniques (Huffman coding and dictionary compression) can combine to usefully compress nearly any real-world data.

Newer compression algorithms like zstd, brotli and lz4 basically just use these same methods in different ways. (There are also somewhat newer alternatives to Huffman coding, like arithmetic coding and Asymmetric Numeral Systems, but fundamentally they're the same concept.)
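
An easy way to see the "nearly any real-world data" part: real data has the repetition and skewed symbol frequencies the two techniques exploit, while random bytes have neither (a quick Python check; exact sizes will vary):

    import os
    import zlib

    english = (b"it was the best of times, it was the worst of times, "
               b"it was the age of wisdom, it was the age of foolishness. ") * 20
    noise = os.urandom(len(english))

    print("english:", len(english), "->", len(zlib.compress(english)))
    print("random: ", len(noise), "->", len(zlib.compress(noise)))  # barely shrinks, may even grow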


I've seen a few libraries that attempt to put strong unit types into languages, to use the type system to ensure correctness.

In Rust I'm familiar with uom (https://docs.rs/uom/latest/uom/), and in TypeScript I've seen safe-units (https://jscheiny.github.io/safe-units/) used.
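
For anyone who hasn't seen the technique, here's a toy Python sketch of the underlying idea; the real libraries enforce this at compile time through the type system, and nothing below is either library's actual API:

    from dataclasses import dataclass

    @dataclass(frozen=True)
    class Quantity:
        value: float
        dims: tuple[int, int]  # (length exponent, time exponent)

        def __add__(self, other: "Quantity") -> "Quantity":
            if self.dims != other.dims:
                raise TypeError(f"dimension mismatch: {self.dims} vs {other.dims}")
            return Quantity(self.value + other.value, self.dims)

        def __truediv__(self, other: "Quantity") -> "Quantity":
            dims = (self.dims[0] - other.dims[0], self.dims[1] - other.dims[1])
            return Quantity(self.value / other.value, dims)

    def meters(x: float) -> Quantity: return Quantity(x, (1, 0))
    def seconds(x: float) -> Quantity: return Quantity(x, (0, 1))

    speed = meters(100) / seconds(9.58)  # dims (1, -1), i.e. m/s
    print(speed.value, speed.dims)

    try:
        meters(3) + seconds(4)
    except TypeError as e:
        print("caught:", e)  # uom/safe-units reject this before the code even runs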


Gemini does the sycophantic thing too, so I'm not sure that holds water. I keep having to remind it to stop with the praise whenever my previous instruction slips out of the context window.

I've been finding it leaps and bounds above other models, but I'm only using it via AI Studio. I haven't tried any IDE integration or similar, so I can't speak to that. I do still have to tell it to stop with the effusive praise (I guess that also helps keep the context window smaller).

It's pretty common. GNOME Terminal does, as does Konsole for KDE, and so on. I've got GNOME Terminal bumped up to 100,000 lines.

There are definitely some that don't, or don't make it configurable, which for me is a pretty strong incentive not to use them.


Amazon biases towards a service-oriented architecture approach that sits in the middle ground between monolith and microservices.

It biases away from lots of small services in favour of larger ones that each handle more of the work, so that as much as possible you avoid the costs and latency of preparing, transmitting, receiving and processing requests.

I know S3 has changed since I was there nearly a decade ago, so this is outdated, but off the top of my head it was about a dozen main services at that time. A request to put an object would only touch a couple of services en route to disk, and similarly on retrieval. There were a few services that handled fixity and data-durability operations, the software on the storage servers themselves, and then the stuff that maintained the mapping between objects and storage.


Amusingly, I suspect that a "dozen main services" is still quite a few more than most smaller companies would consider for their stacks.

Probably. Conway's law comes into effect, naturally.
