I think you could do it hierarchically, and with redundancy.

You'd figure out a replication strategy based on observed reliability (Lindy effect + uptime %).

It would be less "5 million flaky randoms" and more "5,000 very reliable volunteers".
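
Roughly what I have in mind, as a toy sketch (the node records, the Lindy-style weighting, and the durability target are all invented for illustration, not a real scheme):

    # Hypothetical volunteer records: (node_id, years_online, uptime_fraction)
    NODES = [("alpha", 6.0, 0.99), ("beta", 0.5, 0.80), ("gamma", 3.0, 0.95)]

    def node_reliability(years_online, uptime):
        # Lindy-flavored prior: the longer a node has already been around,
        # the more we trust it to stay around; cap the bonus so young
        # nodes still get some credit.
        longevity = min(years_online / 5.0, 1.0)
        return uptime * (0.5 + 0.5 * longevity)

    def replicas_needed(reliabilities, target=0.995):
        # Add copies (most reliable hosts first) until the chance that
        # every copy disappears drops below 1 - target.
        p_all_lost, count = 1.0, 0
        for r in sorted(reliabilities, reverse=True):
            p_all_lost *= (1.0 - r)
            count += 1
            if p_all_lost <= 1.0 - target:
                return count
        return len(reliabilities)  # even every node together falls short

    rels = [node_reliability(y, u) for _, y, u in NODES]
    print(replicas_needed(rels))  # -> 2 with these toy numbers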

Though for the crawling layer you can, and absolutely should, utilize 5 million flaky randoms. That's actually the holy grail of crawling: one request per random consumer device.
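
The coordinator for that can stay almost trivially simple. A sketch, assuming volunteer devices poll for work and report back (the frontier, device ids, and failure handling here are placeholders, not a real protocol):

    from collections import deque

    # Hypothetical state: a URL frontier plus a pool of volunteer device ids.
    frontier = deque(["https://example.com/a", "https://example.com/b",
                      "https://example.com/c"])
    idle_devices = {"phone-1", "laptop-7", "tv-42"}
    in_flight = {}  # device_id -> the single URL assigned to it

    def assign_work():
        # Hand each idle device at most one URL, so from the target site's
        # point of view every request arrives from a different consumer IP.
        while frontier and idle_devices:
            device = idle_devices.pop()
            in_flight[device] = frontier.popleft()

    def report_result(device, url, ok):
        # The device reports its one result; re-queue the URL on failure
        # and return the device to the idle pool.
        in_flight.pop(device, None)
        if not ok:
            frontier.append(url)
        idle_devices.add(device)

    assign_work()
    print(in_flight)  # three URLs, each assigned to a different device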

I think the actual issue wouldn't be the technical side but the selection: how do you decide what's worth keeping?

You could just do it on a volunteer basis. One volunteer really likes Lizard Facts and offers to host that. Or you could dynamically generate the "desired semantic subspace" based on the search traffic...
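
A toy version of that second option, assuming you can tag each served query with a topic (the log format, topic labels, and capacity limit are made up):

    from collections import Counter

    # Hypothetical query log: one (query, topic) pair per search served.
    QUERY_LOG = [("how long do geckos live", "lizard-facts"),
                 ("gecko diet", "lizard-facts"),
                 ("python list comprehension", "python-docs"),
                 ("css grid vs flexbox", "css-guides")]

    def desired_subspace(log, capacity_topics=2):
        # Rank topics by observed search demand and keep only as many as
        # the volunteer pool can actually host.
        demand = Counter(topic for _, topic in log)
        return [topic for topic, _ in demand.most_common(capacity_topics)]

    print(desired_subspace(QUERY_LOG))  # -> ['lizard-facts', 'python-docs']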

Let me illustrate this with a more poetic example.

In 2015, I was working at a startup incubator hosted inside of an art academy.

I took a nap on the couch. I was the only person in the building, so my full attention was devoted to the strange sounds produced by the computers.

There were dozens of computers there. They were all on. They were all wasting hundreds of watts. They were all doing essentially nothing. Nothing useful.

I could feel the power there. I could feel, suddenly, all the computers in a thousand-mile radius. All sitting there, all wasting time and energy.
