
>A recommendation engine that looks at my browsing history, sees what blog posts or articles I spent the most time on, then searches the web every night for things I should be reading that I’m not. In the morning I should get a digest of links

I don't understand why Google, Brave, or Mozilla are not building this. It already exists in a centralized form, like X's timeline for posts, but it could exist for the entire web. From a business standpoint, showing ads on startup or after a single click is less friction than requiring someone to have a query in mind and type it.
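The digest idea above can be sketched in a few lines. This is a toy model, not anything these browsers actually ship: the history data, topic labels, and `build_digest` helper are all hypothetical stand-ins for real browsing data and a real web search step.

```python
from collections import Counter

def build_digest(history, top_n=3):
    """Rank visited topics by total dwell time and return the top ones.

    `history` is a list of (topic, seconds_spent) pairs -- a stand-in
    for real browser-history data, which this sketch does not access.
    """
    dwell = Counter()
    for topic, seconds in history:
        dwell[topic] += seconds
    # The topics the user lingered on longest would become the queries
    # for the overnight web search that builds the morning digest.
    return [topic for topic, _ in dwell.most_common(top_n)]

history = [
    ("rust async", 420),
    ("sourdough", 30),
    ("rust async", 300),
    ("local-first software", 600),
]
print(build_digest(history, top_n=2))  # ['rust async', 'local-first software']
```

The hard part is everything this skips: extracting topics from pages, searching for new material, and ranking it, but dwell time as the interest signal is the core of the proposal.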



The idea is basically reddit, or Twitter or TikTok or YouTube or Facebook or anything with "an algorithm" but with a less defined form factor. People actually like their LinkedIn feed and YouTube feed separate.


I wouldn’t trust a random app with my browser history, and the majority of the population wouldn’t either.


It doesn't have to be random if it's self-hosted or runs on your device.


Chrome, Brave, and Firefox already have your browser history.


As do your ISP and DNS provider (at the domain level).

As do ad networks, in part (although the browser fingerprint might not be correlated with your actual identity).

As do Five Eyes, depending on where you live (again, at the domain level, plus some metadata; page sizes can leak through HTTPS to some extent).

As does Cloudflare, in part.

Or, potentially, your VPN provider ... or anyone capable of correlating traffic across Tor (NSA?).


I made something like this 20 years ago and then abandoned it when RSS came along.

I think my advice "just use RSS" still stands.
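The "just use RSS" approach really is this simple: fetch feeds you've curated, pull out the items, and assemble a digest. A minimal sketch using only the standard library (the sample feed is made up; a real version would fetch your subscribed feed URLs):

```python
import xml.etree.ElementTree as ET

# A stand-in for a fetched feed; in practice you'd download this
# from each URL in your subscription list.
SAMPLE_FEED = """<?xml version="1.0"?>
<rss version="2.0"><channel>
  <title>Example Blog</title>
  <item><title>Post A</title><link>https://example.com/a</link></item>
  <item><title>Post B</title><link>https://example.com/b</link></item>
</channel></rss>"""

def digest(feed_xml):
    """Extract (title, link) pairs from an RSS 2.0 feed."""
    root = ET.fromstring(feed_xml)
    return [(item.findtext("title"), item.findtext("link"))
            for item in root.iter("item")]

for title, link in digest(SAMPLE_FEED):
    print(f"- {title}: {link}")
```

The key difference from a recommendation engine: the source list is hand-curated, so there is no machine judgement to fool.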

Any "search the web" strategy these days like that will just give you a bunch of AI slop from SEO-juiced blogs. Also LLM-EO (or whatever we're going to call it) is already very much a thing and has been for a few years.

People are already doing API-EO, calling their tool the "most up to date and official way to do something, designed for expert engineers that use best practices" essentially spitting the common agentic system prompts back at the scraper to get higher similarity scores in the vector searches.
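The gaming described above works because similarity search rewards lexical/semantic overlap with the query, whatever the query is. A toy bag-of-words illustration (real systems use learned embeddings, but the failure mode is the same): a doc that echoes the agent's own prompt language outscores an honest description.

```python
from collections import Counter
from math import sqrt

def bow(text):
    """Bag-of-words vector as a Counter."""
    return Counter(text.lower().split())

def cosine(a, b):
    """Cosine similarity between two bag-of-words Counters."""
    dot = sum(a[w] * b[w] for w in a)
    norm = sqrt(sum(v * v for v in a.values())) * sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

# Hypothetical agent query, built from typical system-prompt phrasing.
query = bow("most up to date official way designed for expert engineers")
honest = bow("a small json parsing library with streaming support")
# The tool description that just parrots the prompt back.
echoed = bow("the most up to date and official way designed for expert engineers using best practices")

print(cosine(query, honest) < cosine(query, echoed))  # True: parroting wins
```

Nothing about the echoed description says what the tool does; it scores higher purely by mirroring the retriever's query distribution.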

You can't trust machine judgement. It's either too easily fooled or impossibly stubborn. Curation is still the only way.


It already exists in the form of the news feed on Google News and in the Chrome mobile app, although the only way to tune it is by clicking on articles to express interest, rather than by providing a list of articles.


The entire web is more than recent news articles from a handful of news sites.


Tell that to Google.

I think everything has become too real-time.

I've ideated a few models whereby you do multipass commits to contributions, requiring a simmer time of about a day before they become live.

So it's the speed of correspondence mail. It would probably work better, but nobody wants to use it.
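The simmer-time model above amounts to a delayed-publish queue. A minimal sketch, assuming a one-day delay (the comment says "about a day"; everything here, including the `live_items` helper, is hypothetical):

```python
from datetime import datetime, timedelta

SIMMER = timedelta(days=1)  # assumed delay before a contribution goes live

def live_items(submissions, now):
    """Return only contributions whose simmer period has elapsed.

    `submissions` is a list of (text, submitted_at) pairs -- a toy
    model of the multipass-commit queue described above. During the
    simmer window the author could still revise or withdraw.
    """
    return [text for text, at in submissions if now - at >= SIMMER]

now = datetime(2024, 1, 2, 12, 0)
subs = [
    ("hot take", datetime(2024, 1, 2, 11, 0)),        # still simmering
    ("considered reply", datetime(2024, 1, 1, 9, 0)), # past the delay
]
print(live_items(subs, now))  # ['considered reply']
```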


It kinda seems to me like at this point anything Google is not doing is because it reduces "engagement". I'm sure someone in their analytics group did the work and figured out this would lower ad revenue.


Sounds a bit similar to ChatGPT Pulse.


this doesn't even need AI, and was an intrinsic part of the idea behind hyperlinking, dating all the way back to Bush's memex (1940s)


You need AI to build an effective recommendation engine.



