I collect bookmarks of things I want to read but will inevitably never have time to read. Sometimes I go back and look, and the domain has expired or the page 404s. But I still meticulously folder and tune my bookmarks.

I’ve never admitted this to anyone, but it feels good to get that off my chest



I think that needs to be coupled to something implementing Douglas Adams' idea of an Electric Monk:

"The Electric Monk was a labour-saving device, like a dishwasher or a video recorder. Dishwashers washed tedious dishes for you, thus saving you the bother of washing them yourself, video recorders watched tedious television for you, thus saving you the bother of looking at it yourself; Electric Monks believed things for you, thus saving you what was becoming an increasingly onerous task, that of believing all the things the world expected you to believe."


I did the same for a while, but it was a mess (700+ unsorted bookmarks on my main computer, 100s more on others).

I tried shaarli, but soon after decided to build something myself, and I created share-links.

It's an open-source Django app that you can self-host, and it lets you store (and share!) links, titles, descriptions, and tags. Then it displays them in a nice way (for me: not much CSS, a simple page with no JS).

It took a dozen or so hours to get to the point where it's really usable, and it still has problems now (comments are not moderated, and I just realized that you can't add a description to links or tags, but I will fix this soon™).

Here's the link to the repo: https://gitlab.com/sodimel/share-links/

One cool feature is setting your browser homepage to the URL that loads a random page: each day I get a cool article to read or a concept to discover! (Here's my own instance's random link URL: https://links.l3m.in/en/random/)
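For the curious, the core of it is pretty small. Here's a minimal sketch of what the model and the random view could look like (the names Link, Tag and random_link are made up for the example, not the actual share-links code):

    # Minimal sketch, not the real share-links models/views.
    from django.db import models
    from django.http import HttpResponseRedirect

    class Tag(models.Model):
        name = models.CharField(max_length=100)

    class Link(models.Model):
        url = models.URLField()
        title = models.CharField(max_length=200)
        description = models.TextField(blank=True)
        tags = models.ManyToManyField(Tag, blank=True)
        added = models.DateTimeField(auto_now_add=True)

    def random_link(request):
        # Pick one stored link at random and redirect to it;
        # this is what a /random/ homepage URL could be wired to.
        link = Link.objects.order_by("?").first()
        return HttpResponseRedirect(link.url)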

That's my useless side project (because shaarli already exists and is way more mature).


Nice! Doesn't solve the core problem though: we almost never go back to read these hundreds of bookmarked websites.


The "almost" is the interesting part here, since it's not "never" anymore :P

Another goal is to be able to retrieve links with ease. I haven't added tags to all my links yet since I lack time (1800+ right now and counting!), but I should be able to retrieve that cool article about Python decorators, or the article from cheapskatesguide.org announcing that bluedwarf.top has launched :P

I really hope that one day there will be multiple share-links instances that let people find interesting links, and then, just when they think they have found all the cool content, they discover there are more websites containing hundreds of fresh or old links leading to even more interesting content!


If websites were downloadable like a (versioned) document, you could create your own offline bookmark web (archive), which would be neat. I guess scraping could help with getting the contents to disk, but the result would probably be kinda broken.

If you could then direct your search engine to discover content from this archive, I would perhaps more often (re)stumble upon that neat discontinued blog that's stuck 5 levels deep in my folder structure.
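Even a naive version gets you surprisingly far. A rough sketch, assuming the bookmarks are just a list of URLs in a bookmarks.txt file; it snapshots each page's raw HTML with a date stamp, so anything dynamic will indeed come out broken:

    # Rough sketch: snapshot each bookmarked URL's raw HTML to disk.
    import datetime
    import pathlib
    import urllib.request

    archive = pathlib.Path("bookmark-archive")
    archive.mkdir(exist_ok=True)

    for url in pathlib.Path("bookmarks.txt").read_text().splitlines():
        stamp = datetime.date.today().isoformat()
        name = url.replace("://", "_").replace("/", "_")
        html = urllib.request.urlopen(url).read()
        # One file per URL per day gives you a crude versioned archive.
        (archive / f"{stamp}-{name}.html").write_bytes(html)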


There's shiori [0], which has been linked below, and ArchiveBox [1], which seems to do exactly that.

Share-links, on the other hand, can convert the page content into a PDF file using weasyprint [2] (rough example below the links).

[0]: https://github.com/go-shiori/shiori

[1]: https://archivebox.io/

[2]: https://weasyprint.org/
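The weasyprint call behind that kind of conversion is roughly this (the URL and output filename are just placeholders):

    # Render a page to PDF with weasyprint; paths/URLs are placeholders.
    from weasyprint import HTML

    HTML(url="https://example.com/some-article").write_pdf("some-article.pdf")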


That's pretty cool. Will try!


Just today I set this up for exactly this problem: https://github.com/go-shiori/shiori

Now I can peacefully forget these bookmarks!


What does it use for archiving the pages, and how capable is it? What does it do, e.g., for a dynamic web map?


I feel seen. I have bookmark folders named “To Read $NUMBER” from 10 years ago. Why do they end with a number? Because I create a new folder every time the previous one feels overwhelming.


From time to time (every ~6 months) I review all of them. I quickly delete most, make an effort to process the rest, and save the 'few' that I won't take immediate action on to a topical markdown note.

E.g., let's say I find a book interesting. I go to '~/fun/read/read.md', paste the URL, delete the bookmark, and forget about it.


https://news.ycombinator.com/item?id=31848210 was on the frontpage just the other day with lots of solutions to this problem.


> I still meticulously folder and tune my bookmarks

I tag mine instead... and I read them first (before bookmarking)



