I can recommend this. I'm sure it's a bug in the YouTube interface that they recommend literally nothing. My home screen has been completely empty for over a year, just a message saying "Your watch history is off". I have a couple of subscriptions, which means a new video every few days; those appear in the sidebar, still two or three clicks away, and that's perfect.
It's not a bug, it's extremely passive-aggressive. They couple it with rewriting their browser, pushing working-group recommendations, and legal lobbying to make shoving ads down your throat their basic "human" right. When I saw they did it to me, my response was, "great, game on".
I work every day with people addicted to YouTube. A WorkMode client shared this approach; we tested it with a small group. Anecdotal but consistent - it works surprisingly well, cuts usage sharply, and seems to hold up long-term.
I did this with both reddit and YouTube on my phone. That's how I beat the doom scrolling loop.
On Android, for YT I've installed the NewPipe app, which can be configured to show no suggestions. I just have a list of app-level subscribed channels; no YouTube account required.
Instead of using the YouTube app, I use the website on both my phone and computer. I have a userscript set up to hide all shorts and community posts. It can be done easily with a CSS selector that sets them to display: none.
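A minimal userscript sketch of that idea. The selector names are assumptions: YouTube's custom element tags change often, so verify them against the current markup before relying on this.

```javascript
// Hypothetical selectors -- check these against YouTube's current DOM.
const HIDE_SELECTORS = [
  'ytd-rich-section-renderer',           // shorts shelves on the home page
  'ytd-reel-shelf-renderer',             // shorts shelves in search/related
  'ytd-backstage-post-thread-renderer',  // community posts
];

// Build one CSS rule that hides every matching element.
const css = `${HIDE_SELECTORS.join(', ')} { display: none !important; }`;

// In a userscript manager (Tampermonkey, Violentmonkey), inject the rule.
if (typeof document !== 'undefined') {
  const style = document.createElement('style');
  style.textContent = css;
  document.head.appendChild(style);
}
```

Injecting a style element this way survives YouTube's client-side navigation better than deleting nodes one by one, since the rule keeps applying as new elements are rendered.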
Highly recommend that as well. Disabling history greatly reduced my time spent in the app. The UnTrap extension for Safari helps too.
Can't say that I feel like I spend a reasonable amount of time on the platform yet, but now it's mostly me having an urge to distract myself, opening the website, finding only videos I've already seen, and closing the tab.
I also do that; open the site and look for content. I have enough subscriptions to always be able to find something. Yet somehow not falling into the Shorts attention trap is what saves my time.
And even if you pull the issue out by the root, you have to fill the hole with something worthwhile, nurturing, and sustainable for you. Otherwise it will get filled by the next best time-wasting activity.
Interesting that the title of this post was changed. This is the second time I've seen this happen. It seems Hacker News does not favor AI-negative narratives.
Has happened to me before. It seems they change anything that has a negative connotation to try to take something more positive out of it. I don't love that they do that without asking or confirming with the author. But this title is also fine with me. I actually thought about naming it "Curing your AI 10x Imposter Syndrome", but it felt like a stretch that someone would understand what the content would be about.
JetBrains ships ~100 MB per-language models for its IDEs that can autocomplete single lines. It's good, but I think we can do better for local code autocomplete. I hope Apple succeeds in its on-device AI attempts.
Are any of the JetBrains offerings even competitive? I jumped ship from PyCharm and have tried their AI offerings a few times since release, but was always amazed at how far behind the competition they were.
You can run Qwen3 locally today if you want to. It can write whole files if you like, although not with the sub-second latency of a sub-1 GB model, which is what you want for interactive in-editor completions.
LLMs have limits. They are super powerful, but they can't make the kind of leap humans can. For example, I asked both Claude and Gemini the problem below.
"I want to run webserver on Android but it does not allow binding on ports lower than 1000. What are my options?"
Both responded with the solutions below:
1. Use reverse proxy
2. Root the phone
3. Run on higher port
Even after being asked to rethink, they couldn't come up with the solution I was expecting: HTTPS RR records[1]. Both models knew about HTTPS RR but couldn't suggest it as a solution. Only after I included it in their context did both agree it was a possible one.
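For reference, an HTTPS RR can advertise a non-default port, so a server listening on, say, 8080 can be reached via a bare https:// URL. A hedged sketch in zone-file presentation format (the hostnames and TTL are made up, and client support for the port parameter still varies between browsers):

```
; Hypothetical zone entry (RFC 9460 presentation format).
; Priority 1 = ServiceMode; port=8080 points clients at the high
; port that an unprivileged Android process can actually bind.
www.example.com.  3600  IN  HTTPS  1  phone.example.com.  port=8080
```

Clients that resolve the HTTPS record before connecting use the advertised target and port instead of the default 443, which is what makes this a workaround for the low-port restriction.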
I'm not sure I would measure LLMs on recommending a fairly obscure and rather new spec that isn't even fully supported by e.g. Chrome. That's a leap that I, a human, wouldn't have made either.
On the other hand, I find that "someone who has read all the blogs and papers and so can suggest something you might have missed" is my favorite use case for LLMs, so seeing one miss a useful (and novel to me) idea is annoying.
Which is easier for you to remember: facts you’ve seen a hundred times in your lifetime, or facts you’ve seen once or twice?
For an LLM, I’d expect it to be similar. It can recall the stuff it’s seen thousands of times, but has a hard time recalling the niche/underdocumented stuff that it’s seen just a dozen times.
> Which is easier for you to remember: facts you’ve seen a hundred times in your lifetime, or facts you’ve seen once or twice?
The human brain isn't a statistical aggregator. If you see a psychologically shocking thing once in your lifetime, you might remember it even after dementia hits when you're old.
On the other hand, you pass by hundreds of shops every day and receive the data signal of their signs over and over and over, yet you remember nothing.
You remember stuff you pay attention to (for whatever reason).
If it's still 2020, then yes. In 2025, post-training like RLHF means these models do not just predict the next token; the reward function is a lot more involved than that.
Instruct models like ChatGPT are still token predictors. Instruction following is an emergent behavior from fine-tuning and reward modeling layered on top of the same core mechanism: autoregressive next-token prediction.
> So is the expectation for it to suggest obvious solutions that the majority of people already know?
Certainly a majority of people don't know this. What we're really asking is whether an LLM is expected to know more than (or as much as) the average domain expert.
I'm adding this tidbit of knowledge to my context as well... :-P
Only recently have I started interacting with LLM's more (I tried out a previous "use it as a book club partner" suggestion, and it's pretty great!).
When coding with them (via cursor), there was an interaction where I nudged it: "hey, you forgot xyz when you wrote that code the first time" (ie: updating an associated data structure or cache or whatever), and I find myself INTENTIONALLY giving the machine at least the shadow of a benefit of the doubt that: "Yeah, I might have made that mistake too if I were writing that code" or "Yeah, I might have written the base case first and _then_ gotten around to updating the cache, or decrementing the overall number of found items or whatever".
In the "book club" and "movie club" case, I asked it to discuss two movies and there were a few flubs: was the main character "justly imprisoned", or "unjustly imprisoned" ... a human might have made that same typo? Correct it, don't dwell on it, go with the flow... even in a 100% human discussion on books and movies, people (and hallucinating AI/LLM's) can not remember with 100% pinpoint accuracy every little detail, and I find giving a bit of benefit of the doubt to the conversation partner lowers my stress level quite a bit.
I guess: even when it's an AI, try to keep your interactions positive.
TIL. I knew about SRV records (which almost nobody uses, I think?), but this was news to me.
I guess it's also actually supported, unlike SRV records, which are supported only by some applications? Matrix migrated from SRV to .well-known files for providing the data. (Or maybe it supports both.)
Matrix supports both; when you're trying to get Matrix deployed on someone's domain it's a crapshoot on whether they have permission to write to .well-known on the webroot (and if they do, the chances of it getting vaped by a CMS update are high)... or whether they have permission to set DNS records.
Huh, that's a neat trick. Your comment is the first I'm learning of HTTPS RR records... so I won't pass judgement on whether an AI should've known enough to suggest it.
To be fair, the question implies the port number of the local service is the problem, when it's more about making sure users can access it without needing to specify a port number in the URL.
Yes, an experienced person might be able to suss out what the real problem was, but it's not really the LLM's fault for answering the specific question it was asked. Maybe you just wanted to run a server for testing and didn't realize that you can add a non-standard port to the URL.
It's not really an LLM being "faulty" in that we kinda know they have these limitations. I think they're pointing out that these models have a hard time "thinking outside the box" which is generally a lauded skill, especially for the problem-solving/planning that agents are expected to do.
Off-topic, but reading your article about hosting a website on your phone inspired me a lot. Is that possible on a non-jail-broken phone? And what webserver would you suggest?
Yes, no root required. I asked Claude to write a Flutter app that would serve a static file from assets. There are plenty of webservers available on the Play Store too.
If you want to block YouTube Shorts, just pause your watch history and clear your entire history. That turns off all recommendations, including Shorts. You can use the subscriptions tab without any issue.
> We need a fundamental re-evaluation of what our phones should be for, whether these platforms can ever return to their original purpose of actually bringing us together instead of keeping us scrolling
Unpopular opinion, but I think we need to stop building social networks if we want to bring people together. Let people meet each other in real life. Let relationships flourish organically. No amount of tech will ever build the trust that face-to-face interaction can. When people are in each other's presence, they are not just exchanging ideas: there is so much non-verbal exchange through body language, tone of voice, and facial expressions, and I think all of this helps build trust. Social media, on the other hand, does just the opposite, unless the user is very conscious of its effects.
This is an idealized version of real life. If you're autistic, you know the pain of having to put on an act every time so you don't stand out for your lack of expressiveness, or for not finding what others say very interesting. On top of that, most of the things you're interested in live in some small forum on the internet, the only small space where you find your peace.
I agree with some things, especially about how we spend so much time on unreal things, but let's not idealize real life, where if you don't talk about something, you're boring. And that something is, most of the time, criticizing others. We truly prefer being angry or very sad rather than alone; that's basically why the algorithm works. It exploits our solitude. But it's been building exponentially; it's just the natural step after books, radio, and TV, each medium more summarized, quick, polarizing, and monolithic than the previous one.
> Unpopular opinion but I think we need to stop building social networks if we want to bring people together.
Agreed. Social networks not only didn't bring us together, they've actually done the opposite and made us more tribal. An excellent book on the topic is Superbloom: How Technologies of Connection Tear Us Apart by Nicholas Carr.