Hacker News

Yes, and we all know (ask teachers) how reliable those summaries are. They are randomly lossy, which makes them unsuitable for any serious work.

I'm not arguing that LLMs don't compress data; I'm arguing that they are compression tools in a technical sense but not in the colloquial sense, and the overlap they have with colloquial compression tools is almost zero.



LLMs are already being used for a great deal of serious work across the globe, so perhaps you need to adjust your line of thinking. There's nothing inherently better or more trustworthy about having a person compile some knowledge rather than, say, a computer algorithm. I'd bet on the latter producing the better output.


But lossy compression algorithms for e.g. movies and music are also non-deterministic.

I'm not making an argument about whether the compression is good or useful, just as I don't find 144p bitrate-starved videos particularly useful. But it doesn't seem so unlike other types of compression to me.


> They are randomly lossy, which makes them unsuitable for any serious work.

Ask ten people and they'll give ten different summaries. Are humans unsuitable too?


Yes, which is why we write things down, and when those archives become too big we use lossless compression on them: we cannot tolerate a compression tool that drops a customer's street address or, even worse, hallucinates a slightly different one.
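The guarantee being leaned on here can be shown in a few lines. This is a minimal Python sketch using the standard-library zlib module; the customer record is made up for illustration:

```python
import zlib

# Lossless compression guarantees bit-exact recovery: every byte of the
# original comes back, unlike an LLM summary or a lossy audio codec.
record = b"Customer: Jane Doe, 742 Evergreen Terrace"  # hypothetical record
compressed = zlib.compress(record)
restored = zlib.decompress(compressed)

assert restored == record  # round-trip is exact, by design
```

The assert holds for any input bytes, which is precisely the property a lossy summarizer cannot offer.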



