Self-Host and Tech Independence: The Joy of Building Your Own (ssp.sh)
275 points by articsputnik 18 hours ago | 125 comments





Warning: shameless plug ahead

Self-hosting doesn't mean you have to buy hardware. After a few years, low-end machines are borderline unusable with Windows, but they are still plenty strong for a Linux server. It's quite likely you or a friend has an old laptop lying around that can be repurposed. I've done this with an i3 from 2011 [1] for two users, and in 2025 I see no signs that I need an upgrade.

Laptops are also quite power efficient at idle, so in the long run they make more sense than a desktop. If you are just starting, they are a great first server.

(And no, laptops don't have an inbuilt UPS; I recommend everyone remove the battery before running one plugged in 24x7.)

1: https://www.kassner.com.br/en/2023/05/16/reusing-old-hardwar...


Speaking of laptop batteries as a UPS source, some laptops come with battery management features that keep the battery healthy even when plugged in full time, usually exposed as a setting in the BIOS/UEFI. I've found that business/enterprise laptops like ThinkPads and ProBooks have this as standard; ThinkPads from 2010 already had it, assuming you're lucky enough to find one with a usable battery, of course.
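On Linux the same thing is usually also exposed through sysfs; a rough example for a ThinkPad with the thinkpad_acpi driver on a reasonably recent kernel (battery name and exact path may differ):

    # stop charging at 80% so the battery isn't held at 100% forever
    echo 80 | sudo tee /sys/class/power_supply/BAT0/charge_control_end_threshold
    # only start charging again once it drops below 60%
    echo 60 | sudo tee /sys/class/power_supply/BAT0/charge_control_start_threshold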

I'm posting right now from a 13 year old Acer laptop running Linux Mint XFCE. I always feel bad about throwing away old tech so when the time came to buy a new laptop I hooked this one up to my living room TV via HDMI, bought a $25 Logitech K400+ wireless keyboard/trackpad combo, and it's still trucking along just fine. Surfs the web, handles YouTube, Netflix with no problems, I occasionally pop open VS Code or Thunderbird to check into something work-related. Even runs a couple indie games on Steam with gamepad support.

I bet Framework laptops would take this dynamic into overdrive, sadly I live in a country that they don't ship to.


I've got an old Mac Mini 2012 lying around. It was a gift. I never wanted to switch to macOS on this solid but not very powerful machine. Over Xmas last year I booted the thing, and it was unbearably slow, even with the original version of the OS on it. After a macOS update it was unusable. I put an SSD in (thanks YouTube for the guidance), booted it with Debian, and on top of that installed CasaOS (a web-based home server OS/UI). Now I can access my music (thanks Navidrome) from on the road (thanks WireGuard). Docker is still a mystery to me, but I've already learned a lot (mapping paths).

I have a 2009 MacBook Pro (Core 2 Duo) which I wanted to give a similar fate, but unfortunately it idles at 18W on Debian.

I hope Asahi for Mac Mini M4 becomes a thing. That machine will be an amazing little server 10 years from now.


old comment: https://news.ycombinator.com/item?id=41150483

Where I live (a 250-apartment complex in Sweden) people throw old computers in the electronics trash room. I scavenge the room multiple times a day when I take my dog out for a walk, like some character out of Mad Max. I mix and match components from various computers, drop Debian on them, then run Docker containers for various purposes. I've given my parents, cousins and friends Frankenstein servers like this. You'd be amazed at what people throw away; it's not uncommon to find working laptops with no passwords that log straight into Windows, filled with all kinds of family photos. Sometimes unlocked iPhones from 5 years ago. It's a sick world we live in. We deserve everything that's coming for us.


Yes, but arguably anything below the equivalent of RAID6/RAIDZ2 puts you at a not inconsiderable risk of data loss. Most laptops cannot do parity of any sort because of a lack of SATA/M.2 ports, so you will need new hardware if you want the resilience offered by RAID. Ideally you want that twice, on different machines, if you go by the "backups in at least 2 different physical locations" rule.

To be honest I never understood the purpose of RAID for personal use cases. RAID is not a backup, so you need frequent, incremental backups anyway. It only makes sense for things where you need that 99.99% uptime. OK, maybe if you're hosting a service that many people depend on then I could see it (although I suspect downtime would still be dominated by other causes) but then I go over to r/DataHoarder and I see people using RAID for their media vaults which just blows my mind.

RAID is not backup, but in some circumstances it's better than a backup. If you don't have RAID and your disk dies you need to replace it ASAP and you've lost all changes since your last backup. If you have RAID you just replace the disk and suffer 0 data loss.

That being said, the reason why I'm afraid of not using RAID is data integrity. What happens when the single HDD/SSD in your system is near its end of life? Can it be trusted to fail cleanly, or might it return corrupted data (which then propagates to your backup)? I don't know, and I'd be happy to be convinced that it's never an issue nowadays. But I do know that with a btrfs or zfs RAID and the checksumming done by these filesystems, you don't have to trust the specific consumer-grade disk in some random laptop; instead you can rely on data integrity being ensured by the FS.
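(For reference, that integrity checking is exposed through the scrub tools; rough examples, with pool/mount names as placeholders:)

    # ZFS: verify all checksums in the pool and report any errors
    zpool scrub tank && zpool status -v tank
    # btrfs: same idea, run in the foreground and then show the result
    btrfs scrub start -B /mnt/data && btrfs scrub status /mnt/data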


Absolutely!

> if you want the resilience offered by RAID

IMHO, at that stage, you are knowledgeable enough to not listen to me anymore :P

My argument is more along the lines of using an old laptop as a gateway drug to the self-hosting world. Given enough time, everyone will have a 42U rack in their basement.


> Most laptops cannot do parity of any sort because of a lack of SATA/M.2 ports

RAID is NOT media- or connection-dependent and will happily do parity over mixed media and even remote block devices.
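For example, with Linux md you can build an array out of whatever block devices you have, local or remote; a rough sketch with placeholder device names:

    # RAID5 across an internal SATA partition, a USB disk and a network block device
    sudo mdadm --create /dev/md0 --level=5 --raid-devices=3 /dev/sda2 /dev/sdb1 /dev/nbd0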


I can also recommend Lenovo ThinkCentre mini PCs or similar brands. Those can often be found cheap when companies upgrade their hardware. These machines are also power efficient when idling, use even less space than a laptop, and the case fan is very quiet (which can be annoying with laptops under load).

I'm currently running Syncthing, Forgejo, Pihole, Grafana, a DB, Jellyfin, etc... on a M910 with an i5 (6th or 7th Gen) without problems.


Yeah I would recommend this too. I've only used Dell Optiplex Micro series, no issues so far. They use external PSU similar to those in laptops, which helps with power efficiency.

Something with an 8th gen i5 can be had for about 100-150 USD from eBay, and that's more than powerful enough for nearly all self-hosting needs. Supports 32-64GB of RAM and two SSDs.


I second this, I have a 4 node Proxmox cluster running on MFF Optiplexes and it's been great. 32gb of RAM in each and a second USB NIC (bonded with the built-in NIC) makes for a powerful little machine with low power draw in a convenient package.

If you are not afraid of shopping the used market, I'm currently building a Proxmox node with a 3rd gen Threadripper (32 cores/64 threads), 256GB RAM, 2x10G, 2x2.5G and a dedicated 1G IPMI management interface, 64 PCIe gen 4 lanes, all for less than 2k Euro.

Why do you recommend removing the battery? Risk of fire?

I would have thought any reasonably recent laptop would be fine to leave plugged in indefinitely. Not to mention many won't have an easily removable battery anyway


Also interested in the answer to this.

Glad I am not alone in this. Old laptops are much better than Raspberry Pis and often free and power efficient.

And: they have a crash cart (keyboard, mouse and display) and battery backup built-in. An old laptop is perfect for starting a homelab. The only major downside I can think of, and as another commenter already mentioned, is the limited storage (RAID) options.

A lot of older 17" laptops had dual HDD slots.

Or DVD drives in which you could add a disk caddy.

Ah yes, optical drives were very common for a while.

> free and power efficient

Free yes. Power efficient no. Unless you switch your laptops every two years, it's unlikely to be more efficient.


My laptop from 2011 idles at 8W, with two SATA SSDs. I have an Intel 10th-gen mini PC that idles at 5W with one SSD. 3W is not groundbreaking, but for a computer you paid $0, it would take many years to offset the $180 paid on a mini PC.

Say power costs 25¢/kWh. That's $2 per year per watt of standby power. Adjust to your local prices.

So that'd take about 30 years to pay back (3 W x $2/W-year = ~$6/year; $180 / $6 = ~30 years). Or, with discounted cash flow applied... probably never.


> My laptop from 2011 idles at 8W, with two SATA SSDs.

some benchmarks show the Raspberry Pi 4 idling below 3W and consuming a tad over 6W under sustained high load.

Power consumption is not an argument that's in favor of old laptops.


> tad over 6W

That is the key. The RPi works for idling, but anything else gets throttled pretty bad. I used to self host on the RPi, but it was just not enough[1]. Laptops/mini-PCs have a much better burst-to-idle power ratio (35/8W for the laptop vs 6/3W for the Pi).

1: https://www.kassner.com.br/en/2022/03/16/update-to-my-zfs-ba...


> That is the key. The RPi works for idling, but anything else gets throttled pretty bad.

I don't have a dog in this race, but I recall that RPi's throttling issues when subjected to high loads were actually thermal throttling. Meaning, you picked up a naked board and started blasting benchmarks until it overheated.

You cannot make sweeping statements about RPi's throttling while leaving out the root cause.


amd64 processors have lots of hardware acceleration built in. I couldn't get past 20MB/s over SSH on the Pi 4, vs 80MB/s on my i3. So while they can show similar Geekbench results, the experience of using the Pi is a bit more frustrating than it looks on paper.
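You can see it quickly with a crude cipher-to-cipher throughput test (host name is a placeholder; dd prints the rate at the end):

    # compare SSH throughput with different ciphers
    dd if=/dev/zero bs=1M count=500 | ssh -c chacha20-poly1305@openssh.com pi4 'cat > /dev/null'
    dd if=/dev/zero bs=1M count=500 | ssh -c aes128-ctr pi4 'cat > /dev/null'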

I get why you want to self-host, although I also get why you wouldn't want to.

Self-hosting is a pain in the ass: it needs updating docker, things break sometimes, and sometimes it's only you and no one else, so you're left alone searching for the solution; even when it works it's often a bit clunky.

I have an extremely limited list of self-hosted tools that just work and save me time (first on that list would be Firefly), but god knows I wasted quite a bit of time setting up stuff that eventually broke and that I just abandoned.

Today I'm very happy with paying for stuff if the company is respecting privacy and has decent pricing.


> docker

There's your problem. Docker adds indirection on storage, networking, etc., and also makes upgrades difficult as you have to either rebuild the container, or rely on others to do so to get security and other updates.

If you stick to things that can be deployed as an upstream OS vendor package, or as a single binary (go-based projects frequently do this), you'll likely have a better time in the long run.
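A single binary under systemd is usually just a handful of lines; a sketch with made-up names and paths:

    # /etc/systemd/system/myapp.service
    [Unit]
    Description=My self-hosted app
    After=network-online.target

    [Service]
    User=myapp
    ExecStart=/usr/local/bin/myapp --config /etc/myapp/config.toml
    Restart=on-failure

    [Install]
    WantedBy=multi-user.target

Then enable it with "systemctl enable --now myapp.service" and let unattended-upgrades (or your distro's equivalent) handle the rest of the system.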


I would agree with that.

Docker has a lot of use cases but self hosting is not one of them.

When self-hosting you want to think long term, and account for the fact that you will lose interest in the fiddling after a while. So sticking with software packaged in a good distribution is probably the way to go. This is the forgotten added value of a Linux or BSD distribution: a coherent system with maintenance and an easy upgrade path.

The exception is things like Umbrel, which I would say use Docker as their package manager and maintain everything, so that is OK.


I feel the exact opposite. Docker has made self-hosting so much easier and painless.

Backing up relevant configuration and data is a breeze with Docker. Upgrading is typically a breeze as well. No need to suffer with a 5-year-old, out-of-date version from your distro: run the version you want to and upgrade when you want to. And if shit hits the fan, it's trivial to roll back.

Sure, OS tools should be updated by the distro. But for the things you actually use the OS for, Docker all the way in my view.


What are you talking about?

Docker is THE solution for self hosting stuff since one often has one server and runs a ton of stuff on it, with different PHP, Python versions, for example.

Docker makes it incredibly easy to run a multitude of services on one machine, however different they may be.

And if you ever need to move to a new server, all you need to do is move the volumes (if even necessary) and run the containers on the new machine.

So YES, self hosting stuff is a huge use case for docker.


Maybe. There are pros and cons. Docker means you can run two+ different things on the same machine and update them separately. This is sometimes important when one project releases a feature you really want, while a different one just did a major update that broke something you care about. Running on the OS often means you have to update both.

Single binary sometimes works, but means you need more memory and disk space. (granted much less a concern today than it was back in 1996 when I first started self hosting, but it still can be an issue)


How can running a single binary under systemd need more memory/disk space than having that identical binary with supporting docker container layers under it on the same system, plus the overhead of all of docker?

Conflicting versions, I'll give you that, but how frequently does that happen, especially if you mostly source from upstream OS vendor repos?

The most frequent conflict is if everything wants port 80/443, and for most self-hosted services you can have them listen on internal ports and be fronted by a single instance of a webserver (take your pick of apache/nginx/caddy).
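e.g. a minimal Caddyfile sketch (hostnames and upstream ports are made up):

    music.example.com {
        reverse_proxy 127.0.0.1:4533
    }
    books.example.com {
        reverse_proxy 127.0.0.1:8083
    }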


I didn't mean the two paragraphs to imply that they are somehow opposites (though in hindsight I obviously did). There are tradeoffs. A single binary sits between Docker and a binary that uses shared libraries. What is right depends on your situation. I use all three in my self-hosted environment; you probably should too.

If you are using docker, do you save anything by using shared libraries? I thought docker copies everything. So every container has its own shared libraries and the OS running all those containers has its own as well.

Not necessarily. You are still running within the same kernel.

If your images use the same base container then the libraries exist only once and you get the same benefits of a non-docker setup.

This depends on the storage driver though. It is true at least for the default and most common overlayfs driver [1]

[1] https://docs.docker.com/engine/storage/drivers/overlayfs-dri...
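You can see how much is actually shared with:

    # per-image disk usage, including the size shared with other images
    docker system df -v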


The difference between a native package manager provided by the OS vendor and Docker is that a native package manager allows you to upgrade parts of the system underneath the applications.

Let's say some Heartbleed (which affected OpenSSL, primarily) happens again. With native packages, you update the package, restart a few things that depend on it with shared libraries, and you're patched. OS vendors are highly motivated to do this update, and often get pre-announcement info around security issues so it tends to go quickly.

With docker, someone has to rebuild every container that contains a copy of the library. This will necessarily lag and be delivered in a piecemeal fashion - if you have 5 containers, all of them need their own updates, which, if you don't self-build and self-update, can take a while and is substantially more work than `apt-get update && apt-get upgrade && reboot`.
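(A rough way to see what you're exposed to, assuming the images ship an openssl binary:)

    # print the OpenSSL version baked into each running container
    for c in $(docker ps -q); do
      printf '%s: ' "$c"
      docker exec "$c" openssl version 2>/dev/null || echo "no openssl binary"
    done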

Incidentally, the same applies for most languages that prefer/require static linking.

As mentioned elsewhere in the thread, it's a tradeoff, and people should be aware of the tradeoffs around update and data lifecycle before making deployment decisions.


> With docker, someone has to rebuild every container that contains a copy of the library.

I think you're grossly overblowing how much work it takes to refresh your containers.

In my case, I have personal projects with nightly builds that pull the latest version of the base image, and services are just redeployed right under your nose. All it took to do this was adding a cron trigger to the same CI/CD pipeline.


There are more options than docker for that. FreeBSD jails for example.

I run Debian on my machine, so packages are not really up to date, and I would be stuck, unable to update my self-hosted software because some dependencies were too old.

And then some software would require the older version and break when you update the dependencies for another package.

Docker is a godsend when you are hosting multiple tools.

For the limited stuff I host (Navidrome, Firefly, nginx, ...), I have yet to see a single-binary package. It doesn't seem very common in my experience.


Oh my god no, docker is so damn useful I will never return to package managers/manual installation.

>>Oh my god no, docker is so damn useful I will never return to package managers/manual installation.

This. These anti-containerisation comments read like something someone oblivious to containers would say if they were desperately grabbing onto tech from 30 years ago and refused to even spend 5 minutes exploring anything else.


Or they have explored other options and find Docker lacking. I've used Docker and k8s plenty professionally, and they're both vastly more work to maintain and debug than NixOS and systemd units (which can optionally be wrapped into containers easily on NixOS, but there you're using containers for their isolation features, not for the ability to 'docker pull').

Containers as practiced by many are basically static linking and "declarative" configuration done poorly because people aren't familiar with dynamic linking or declarative OS config done well.


> There's your problem. Docker adds indirection on storage, networking, etc., and also makes upgrades difficult as you have to either rebuild the container, or rely on others to do so to get security and other updates.

None of your points make any sense. Docker works beautifully well as an abstraction layer. It makes it trivially simple to upgrade anything and everything running on it, to the point that you do not even consider it a concern. Your assertions are so far off that you managed to get all your points entirely backwards.

To top things off, you get clustering for free with Docker swarm mode.

> If you stick to things that can be deployed as an upstream OS vendor package, or as a single binary (go-based projects frequently do this), you'll likely have a better time in the long run.

I have news for you. In fact, you should be surprised to learn that nowadays you can even get full-blown Kubernetes distributions up and running on a Linux distribution after a quick snap package install.


Absolutely everything they said makes sense.

Everything you're saying is complete overkill, even in most Enterprise environments. We're talking about a home server here for hosting eBooks and paperless documents, and you're implying Kubernetes clusters are easy enough to run and so are a good solution here. Madness.

> I have news for you.

I have news for _you_: using Docker to run anything that doesn't need it (i.e. it's the only officially supported deployment mechanism) is like putting your groceries into the boot of your car, then driving your car onto the tray of a truck, then driving the truck home because "it abstracts the manual transmission of the car with the automatic transmission of the truck". Good job, you're really showing us who's boss there.

Operating systems are easy. You've just fallen for the Kool Aid.


I completely disagree.

> Docker adds indirection on storage, networking, etc.,

What do you mean by "indirection"? It adds OS level isolation. It's not an overhead or a bad thing.

> makes upgrades difficult as you have to either rebuild the container, or rely on others to do so to get security and other updates.

Literally the entire self-hosted stack can be updated and redeployed with just:

      docker compose pull
      docker compose build --pull
      docker compose up -d
Self hosting with something like docker compose means that your server is entirely describable in 1 docker-compose.yml file (or a set of files if you like to break things apart) + storage.

You have clean separation between your applications/services and their versions/configurations (docker-compose.yml), and your state/storage (usually a NAS share or a drive mount somewhere).

Not only are you no longer dependent on a particular OS vendor (wanna move your setup to a cheap instance on a random VPS provider but they only have CentOS for some reason?), but the clean separation of all the parts also allows you to very easily scale individual components as needed.

There is one place where everything goes. With the OS vendor package, every time you need to check: is it in a systemd unit? Is it a config file in /etc/? Wth?

Then next time you're trying to move the host, you forget the random /etc/foo.d/conf change you made. With docker-compose, that change has to be stored somewhere for the docker-compose to mount or rebuild, so moving is trivial.

It's not NixOS, sure, but it's much, much better than a list of apt or dnf or yum packages and scripts to copy files around.
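For a concrete idea, a single service entry is only a few lines; a sketch with assumed image name and paths (Navidrome shown as an example):

    services:
      navidrome:
        image: deluan/navidrome:latest
        ports:
          - "4533:4533"
        volumes:
          - /srv/navidrome/data:/data
          - /srv/music:/music:ro
        restart: unless-stopped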


Tools like Ansible exist and can do everything you mention on the deploy side and more, and are also cross platform to a wider range of platforms than Linux-only Docker.

Isolation technologies are also available outside of docker, through systemd, jails, and other similar tools.
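For example, a fair amount of what people want from containers is available as plain unit directives; a partial sketch:

    # hardening options in a unit's [Service] section
    [Service]
    DynamicUser=yes
    ProtectSystem=strict
    ProtectHome=yes
    PrivateTmp=yes
    NoNewPrivileges=yes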


> Tools like Ansible exist and can do everything you mention on the deploy side and more (...)

Your comment is technically correct, but misleading. What you are leaving out is the fact that, in order to do what Docker provides out of the box, you need to come up with a huge custom Ansible playbook just to implement the happy path.

So, is your goal to self host your own services, or to endlessly toy with the likes of Ansible?


Why do you need to update docker? I kept my box running for more than 1 year without upgrading docker. I upgrade my images, but that hardly takes me 15 minutes in, let's say, a month.

> if the company is respecting privacy

It's very rare to see companies doing it, and moreover it is hard to trust them to even maintain a consistent stance as years pass by.


It doesn't matter if you upgrade Docker or not. All tech, self hosted or not, fails for three reasons:

1) You did something to it (changed a setting, upgraded software, etc.)

2) You didn't do something to it (change a setting, upgrade a software, etc.)

3) Just because.

When it does you get the wonderful "work-like" experience, frantically trying to troubleshoot while the things around your house are failing and your family is giving you looks for it.

Self host but be aware that there's a tradeoff. The work that used to be done by someone else, somewhere else, before issues hit you is now done by you alone.


And if you're security conscious like me and want to do things the "right way" just because you can (or should be able to), you now have to think about firewall rules, certificate authorities, DNS names, notifications, backup strategies, automating it in Ansible, managing configs with git, using that newfangled IPv6, ... the complexity piles up quickly.

Coincidentally, I just decided to tackle this issue again on my Sunday afternoon: https://github.com/geerlingguy/ansible-role-firewall/pull/11...

Sometimes it's not fun anymore.


> > if the company is respecting privacy

> It's very rare to see companies doing it, and moreover it is hard to trust them to even maintain a consistent stance as years pass by.

Indeed, no one can predict the future, but there are companies with bigger and stronger reputations than others. I pay, for instance, for iCloud because it's e2e encrypted in my country and the pricing is fair; it's been like that for years, so I don't have to set up a Baikal server for calendars, something for file archiving, something else for photos, and so on.

I'd be surprised if Apple willingly did something damaging to user privacy, for the simple reason that they have spent so much on privacy ads that they would instantly lose a lot of credibility.

And even for stuff you self-host: yes, you can let it be and not update it for a year, but I wouldn't do that because of security issues. Something like Navidrome (a music player) is accessible from the web; no one wants to launch a VPN each time they listen to music, so it has to be updated or you may get hacked. And no one can say that the Navidrome maintainer will still be there in the coming years; they could stop the project, get sick, die… there's no guarantee that others will take over the project and provide security updates.


> Why do you need to update docker?

For starters, addressing security vulnerabilities.

https://docs.docker.com/security/security-announcements/

> I kept my box running for more than 1 year without upgrading docker.

You inadvertently raised the primary point against self-hosting: security vulnerabilities. Apparently you might have been running software with known CVEs for over a year.


> if the company is respecting privacy and has decent pricing.

Also an extremely limited list.


What project did you run into issues with? I've found that any project that has gotten to the point of offering a Docker Compose file seems to just work.

Plus I've found nearly every company will betray your trust in them at some point so why even give them the chance? I self host Home Assistant, but they seem to be the only company that actively enacts legal barriers for themselves so if Paulus gets hit by a bus tomorrow the project can't suddenly start going against the users.


I self-host most of what I need but I recently faced the ultimate test when my Internet went down intermittently.

It raised some interesting questions:

- How long can I be productive without the Internet?

- What am I missing?

The answer for me was that I should archive more documentation, and NixOS is unusable offline if you do not host a cache (so that is pretty bad).
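(One mitigation for the NixOS part is a local file:// binary cache; a rough sketch, with option names from memory:)

    # copy the current system closure into a local binary cache
    nix copy --to "file:///srv/nix-cache" "$(readlink -f /run/current-system)"
    # then add it as a substituter, e.g. in configuration.nix:
    #   nix.settings.substituters = [ "file:///srv/nix-cache" ];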

Ultimately I also found out that self-hosting most of what I need and being offline really improves my productivity.


I find that self-hosting "devdocs" [1] and having Zeal (on Linux) [2] solves a lot of these problems with offline docs.

[1] https://github.com/freeCodeCamp/devdocs

[2] https://zealdocs.org/


For offline documentation, I use these in order of preference:

• Info¹ documentation, which I read directly in Emacs. (If you have ever used the terminal-based standalone “info” program, please try to forget all about it. Use Emacs to read Info documentation, and preferably use a graphical Emacs instead of a terminal-based one; Info documentation occasionally has images.)

• Gnome Devhelp².

• Zeal³

• RFC archive⁴ dumps provided by the Debian “doc-rfc“ package⁵.

1. https://www.gnu.org/software/emacs/manual/html_node/info/

2. https://wiki.gnome.org/Apps/Devhelp

3. https://zealdocs.org/

4. https://www.rfc-editor.org/

5. https://tracker.debian.org/pkg/doc-rfc


Each downtime is an opportunity to learn the weaknesses of your own system.

There are certain scenarios you have no control over (upstream problems), but others have contingencies. I enjoy working out these contingencies and determining whether the costs are worth the likelihoods - and even if they're not, that doesn't necessarily mean I won't cater for it.


When my rental was damaged by a neighbouring house fire, we were kicked out of the house the next day. This was a contingency I hadn't planned well for.

I have long thought that my homelab/tools need hard cases, low power draw, and some modularity to them. Now I am certain of it. Not that I need first-world technology hosting in emergency situations, but I am now staying with family for at least a few weeks, maybe months, and it would be amazing to just plonk a few hardcases down and be back in business.


I've taken this as far as I can. I love being disconnected from the internet for extended periods - they're my most productive times

I have a bash alias to use wget to recursively save full websites

yt-dlp will download videos you want to watch

Kiwix will give you a full offline copy of Wikipedia

My email is saved locally. I can queue up drafts offline

SingleFile extension will allow you to save single pages really effectively

Zeal is a great open source documentation browser


Could you share the bash alias? I would love this too.

https://srcb.in/nPU2jIU5Ca

Unfortunately it doesn't work well on single page apps. Let me know if anyone has a good way of saving those
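For reference, the core of it is roughly this (flags from memory, so treat it as a sketch):

    # mirror a site with assets, rewriting links for offline browsing
    alias savesite='wget --mirror --convert-links --adjust-extension --page-requisites --no-parent'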


The only way I know of is preprocessing with a web browser and piping it to something like monolith [0]

So you end up with something like this [1]:

> chromium --headless --window-size=1920,1080 --run-all-compositor-stages-before-draw --virtual-time-budget=9000 --incognito --dump-dom https://github.com | monolith - -I -b https://github.com -o github.html

- [0] https://github.com/Y2Z/monolith

- [1] https://github.com/Y2Z/monolith?tab=readme-ov-file#dynamic-c...


> and NixOS is unusable offline if you do not host a cache (so that is pretty bad).

I think a cache or other repository backup system is important for any software using package managers.

Relying on hundreds if not thousands of individuals to keep their part of the dependency tree available and working is one of the wildest parts of modern software development to me. For end-use software I much prefer a discrete package, all dependencies bundled. That's what sits on the hard drive in practice either way.


https://kiwix.org/en/ and some jellyfin setups are a great offline resource.

But yeah, things like NixOS and Gentoo get very unhappy when they don't have Internet access. And mirroring all the packages ain't usually an option.


I'm not too familiar with NixOS, but I've been running Gentoo for ages and don't know why you'd need constant internet. Would you mind elaborating?

You can reverse resolve Nix back down to just the source code links though, which should be enough to build everything if those URLs are available on your local network.

Having a .zip of the world also helps, even if it's a lossy one. I mean: always have one of the latest models around, ready to spin up. We can easily argue LLMs are killing the IT sphere, but they are also a reasonable insurance against doomsday.

If by doomsday you mean “power out for a few hours”, sure.

Or a few days. But I can also imagine being power-independent with your own robotry to sustain even longer power outages. But you'll also need to be very well hidden, as society likely collapses in a matter of days if this ever happens.

> I always say to buy a domain first.

You can only rent a domain. The landlord is merciless if you miss a payment, you are out.

There are risks everywhere, and it depresses me how fragile our online identity is.


"You can only rent a domain."

If ICANN-approved root.zone and ICANN-approved registries are the only options.

As an experiment I created my own registry, not shared with anyone. For many years I have run my own root server, i.e., I serve my own custom root.zone to all computers I own. I have a search experiment that uses a custom TLD that embeds a well-known classification system. The TLD portion of the domain name can categorise any product or service on Earth.

ICANN TLDs are vague, ambiguous, sometimes even deceptive.


You should write something about this…

It's something of a technical limitation though: there's no reason all my devices - the consumers of my domain name - couldn't just accept that anything signed with some key is actually XorNot.com or whatever...but good luck keeping that configuration together.

You very reasonably could replace the whole system with just "lists of trusted keys to names" if the concept has enough popular technical support.


Tooling for self-hosting is quite powerful nowadays. You can start with hosted components and swap various things in for a self-hosted bit. For instance, my blog is self-hosted on a home-server.

It has Cloudflare Tunnel in front of it, but I have previously used nginx+letsencrypt+public_ip. It stores data on Cloudflare R2, but I've stored on S3 or I could store on a local NAS (since I access R2 through FUSE it wouldn't matter that much).
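The tunnel side is only a few commands; roughly (names are placeholders, flags from memory):

    cloudflared tunnel login
    cloudflared tunnel create homelab
    cloudflared tunnel route dns homelab blog.example.com
    cloudflared tunnel run --url http://localhost:8080 homelab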

You have to rent:

* your domain name - and it is right that this is not a permanent purchase

* your internet access

But almost all other things now have tools that you can optionally use. If you turn them off the experience gets worse but everything still works. It's a much easier time than ever before. Back in the '90s and early 2000s, there was nothing like this. It is a glorious time. The one big difference is that email anti-spam is much stricter but I've handled mail myself as recently as 8 years ago without any trouble (though I now use G Suite).


I propose a slightly different boundary: not ”to self-host” but ”ability to self-host”. It simply means that you can if you want to, but you can let someone else host it. This is a lot more inclusive, both to those who are less technical and those who are willing to pay for it.

People who don’t care, ”I’ll just pay”, are especially affected, and the ones who should care the most. Why? Because today, businesses are more predatory, preying on future technical dependence of their victims. Even if you don’t care about FOSS, it’s incredibly important to be able to migrate providers. If you are locked in they will exploit that. Some do it so systematically they are not interested in any other kind of business.


This sounds like the "credible exit" idea Bluesky talk about.

Also shout-out to Zulip for being open source, self hostable, with a cloud hosted service and transfer between these setups.


Can definitely become a trend given so many devs out there and so much that AI can produce at home which can be of arbitrary code quality…

> The premise is that by learning some of the fundamentals, in this case Linux, you can host most things yourself. Not because you need to, but because you want to, and the feeling of using your own services just gives you pleasure. And you learn from it.

Not only that, but it helps to eliminate the very real risk that you get kicked off of a platform that you depend on without recourse. Imagine if you lost your Gmail account. I'd bet that most normies would be in deep shit, since that's basically their identity online, and they need it to reset passwords and maybe even to log into things. I bet there are a non-zero number of HN commenters who would be fucked if they so much as lost their Gmail account. You've got to at least own your own E-mail identity! Rinse and repeat for every other online service you depend on. What if your web host suddenly deleted you? Or AWS? Or Spotify or Netflix? Or some other cloud service? What's your backup? If your answer is "a new cloud host" you're just trading identical problems.


My singular issue with self-hosting, specifically with email, is not setting it up; there is lots of documentation on setting up an email server.

But running it is a different issue. Notably, I have no idea, and have not seen a resource talking about troubleshooting and problem solving for a self-hosted service, particularly with regard to interoperability with other providers.

As a contrived example, if Google blackballs your server, who do you talk to about it? How do you know? Do they have email addresses, or procedures for resolution, in the error messages you get when talking with them?

Or these other global, IP ban sites.

I’d like to see a troubleshooting guide for email. Not so much for the protocols like DKIM, or setting DNS up properly, but in dealing with these other actors that can impact your service even if it’s, technically, according to Hoyle, set up and configured properly.


> But running it is a different issue. Notably, I have no idea, and have not seen a resource talking about troubleshooting and problem solving for a self-hosted service, particularly with regard to interoperability with other providers.

It's nearly impossible to get 100% email deliverability if you self host and don't use a SMTP relay. It might work if all your contacts are with a major provider like google, but otherwise you'll get 97% deliverability but then that one person using sbcglobal/att won't ever get your email for a 4 week period or that company using barracuda puts your email in a black hole. You put in effort to get your email server whitelisted but many email providers don't respond or only give you a temporary fix.

However, you can still self host most of the email stack, including most importantly storage of your email, by using an SMTP relay, like AWS, postmark, or mailgun. It's quick and easy to switch SMTP relays if the one you're using doesn't work out. In postfix you can choose to use a relay only for certain domains.
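Roughly like this, via transport_maps (the relay host is a placeholder, and an authenticated relay also needs the usual smtp_sasl_* settings):

    # /etc/postfix/transport
    gmail.com      relay:[smtp.relay.example.com]:587
    outlook.com    relay:[smtp.relay.example.com]:587

    # main.cf
    transport_maps = hash:/etc/postfix/transport

    # rebuild the map and reload
    postmap /etc/postfix/transport && systemctl reload postfix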


IME the communities around packaged open-source solutions like Mail-in-a-Box, mailcow, and Mailu tend to help each other out with stuff like this, and the shared bases help. Maybe camp in a few chatrooms and forums and see if any fits your vibe.

Most services, including email providers, spam databases, and "IP ban sites", have clear documentation on how to get on their good side, if needed, and it is often surprisingly straightforward to do so. Often it's as simple as filling out a relatively simple form.

Have you ever tried to use it? Because I fought for about 2 months with both Google and Microsoft, trying to self-host my mail server, with no success. The only answer was along the lines of 'your server has not enough reputation', even though it was perfectly configured: DKIM, DMARC, etc. Now imagine a business not being able to send a message to anyone hosted on Gmail or Outlook, probably 80-90 percent of the companies out there.

I feel you. I had my email on OVH for a while, but they handle abuse so badly that Apple just blanket-banned the /17 my IP was in. And I was lucky that Apple actually answered my emails and explained why I was banned. I doubt Microsoft and Google would give you any useful information.

They claim that, but every small operator I know who self-hosted email has discovered that the forms don't do anything. I switched to Fastmail 15 years ago and my email got a lot better, because they are big enough that nobody dares ignore them. (Maybe the forms work better today than 15 years ago, but enough people keep complaining about this issue that I doubt it.)

Own your own domain, point it to the email hosting provider of your choice, and if something went horribly wrong, switch providers.

Domains are cheap; never use an email address that's email-provider-specific. That's orthogonal to whether you host your own email or use a professional service to do it for you.


This is my plan.

I will lose some email history, but at least I don’t lose my email future.

However, you can't own a domain, you are just borrowing it. There is still a risk that it gets shut down too, but I don't think it is super common.


As for the domain risks, my suggestion is to stick with .com/.net/.org or something common in your country and avoid novelty ones such as .app, .dev, etc., even if you can't get the shortest and simplest name. And if you have some money to spare, just renew it for 10 years.

Even if you renew for 10 years, set a calendar reminder annually to check in and make sure your renewal info is still good.

> I will lose some email history, but at least I don’t lose my email future.

I back up all my email every day, independent of my hosting provider. I have an automatic nightly sync to my laptop, which happens right before my nightly laptop backups.
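The general shape, if you use something like isync/mbsync, is just a cron entry:

    # nightly IMAP pull at 03:00, assuming mbsync is already configured
    0 3 * * * mbsync -a >> ~/.cache/mbsync.log 2>&1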


Why should you lose some email history? Just move the mails to a different folder.

I self-host my mail but still use a freemail address as the contact address for my providers. No chicken-and-egg problem for me.


If doing so, I'd also recommend not using the same email or domain for the registrar and for your email host… If you are locked out of one, you'd want to be able to access the other to change things.

Agreed. I’ve had the same email address for a decade now but cycled through the registrar’s email, Gmail, and M365 in that time. Makes it easy to switch.

The risk may be real, but is it likely to happen to many people?

The reason why I bring this up is because many early adopters of Gmail switched to it or grew to rely upon it because the alternatives were much worse. The account through your ISP: gone as soon as you switched to another ISP, a switch that may have been necessary if you moved to a place the ISP did not service. University email address: gone soon after graduation. Employer's email address: gone as soon as you switched employers (and risky to use for personal purposes anyhow). Through another dedicated provider: I suspect most of those dedicated providers are now gone.

Yep, self-hosting can sort of resolve the problem. The key words being "sort of". Controlling your identity doesn't mean terribly much if you don't have the knowledge to set up and maintain a secure email server. If you know how to do it, and no one is targeting you in particular, you'll probably be fine. Otherwise, all bets are off. And you don't have total control anyhow; you still have the domain name to deal with, after all. You should be okay if you do your homework and stay on top of renewals, almost certainly better off than you would be with Google, but again it is only as reliable as you are.

There are reasons why people go with Gmail and a handful of other providers. In the end, virtually all of those people will be better off in both the short and mid-term.


Self hosting at home - what is higher risk? Your HDD dying or losing Gmail account?

Oh, now you don't only self-host, now you have to have space to keep gear, plan backups, install updates; oh, and it would be good to test updates so some bug doesn't mess up your system.

Oh, you know, it would be bad to have a power outage while installing updates or running backups, so now you need a UPS.

Oh, you know what, my UPS turned out to be faulty and it f-ed up the HDD in my NAS.

No I don’t have time to deal with any of it anymore I have other things to do with my life ;)


Different strokes for different folks. The motivation for me has been a combination of independence and mistrust. Every single one of the larger tech companies has shown that it prioritizes growth above making good products and services and not being directly user hostile. Google search is worse now than it was 10 years ago. Netflix has ads with a paid subscription; so does YouTube. Windows is an absolute joke; more and more we see user-hostile software. Incentives aren't aligned at all. As people who work in software, I get not wanting to do this stuff at home as well. But honestly I'm hoping for a future where a lot of these services can legitimately be self-hosted by technical people for their local communities. Mastodon is doing this really well IMO. Self-hosted software is also getting a lot easier to manage, so I'm quite optimistic that things will keep heading this way.

Note, I've got all the things you mentioned, down to the UPSes, set up in my garage, as well as multiple levels of backups. It's not perfect, but it works for me without much time input versus the utility it provides. Each to their own.


Well, I hope we don't keep on discussing Google vs self-hosting hardware at home.

There are alternatives that should be promoted.


If your trust is violated, typically the worst that happens is you are fed a couple more relevant ads or your data is used for some commercial purpose that has little to no effect on your life.

Is it really worth going through so much effort to mitigate that risk?


Again, it's a value judgement, so the answer is largely personal. For me, yes. The social license we give these larger companies after all the violated trust doesn't make sense. If the local shop owner/operator that you talked to every day had the same attitude towards you when you went shopping and exchanged pleasantries most weeks, people would confront them about their actions, and that shop wouldn't last long. We have created the disconnect for convenience, and tried to ignore the level of control these companies have over our day-to-day lives if they are so inclined or instructed to change their systems.

Cloud is just someone else's computer. These systems aren't special. Yes, they are impressively engineered to deal with the scale they deal with, but when systems are smaller, they can get a lot simpler. I think as an industry we have conflated distributed systems with really hard engineering problems, when what really matters for downstream complexity is at what level of abstraction the distribution happens.


The cloud is someone else’s computer and an apartment is just someone else’s property.

How far do we take this philosophy?


It introduces some pretty important risks of its own though. If you accidentally delete/forget a local private key or lose your primary email domain there is no recourse. It's significantly easier to set up 2FA and account recovery on a third party service

Note that I'm not saying you shouldn't self-host email or anything else. But it's probably more risky for 99% of people compared to just making sure they can recover their accounts.


I have seen many more stories about people losing access to their Gmail because of a comment flagged somewhere else (e.g. YouTube) than about people losing access to their domains (it is hard to miss all these reminders about renewal, and you shouldn't wait until then anyway, so that's something under your control).

And good luck getting anyone from Google to solve your problem assuming you get to a human.


> losing access to their Gmail because

Google will never comment on the reasons they disable an account, so all you've read are the unilateral claims of people who may or may not be admitting what they actually did to lose their accounts.


Ever since Arch got an installer, I'm not sure I'd consider it hard anymore. It still dumps you into a command line, sure, but it's a long way from the days of trying to figure out arcane partition block math.

RIP "I use arch btw"

Hello, I'm "I use gentoo btw"

I run a 4x Pi Kubernetes cluster and an Intel N150 mini PC, both managed with Portainer, in my homelab. The following open source ops tools have been a game changer. All the tools below run in containers (rough quick-start commands for a couple of them after the list).

- kubetail: Kubernetes log viewer for the entire cluster. Deployments, pods, statefulsets. Installed via Helm chart. Really awesome.

- Dozzle: Docker container log viewing for the N150 mini pc which just runs docker not Kubernetes. Portainer manual install.

- UptimeKuma: Monitor and alerting for all servers, http/https endpoints, and even PostgreSQL. Portainer manual install.

- Beszel: Monitoring of server cpu, memory, disk, network and docker containers. Can be installed into Kubernetes via helm chart. Also installed manually via Portainer on the N150 mini pc.

- Semaphore UI: UI for running ansible playbooks. Support for scheduling as well. Portainer manual install.
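Most of the non-Kubernetes ones are one docker run away; for example (images and ports from memory, double-check against each project's docs):

    # Dozzle: log viewer over a read-only Docker socket
    docker run -d --name dozzle -p 8888:8080 \
      -v /var/run/docker.sock:/var/run/docker.sock:ro amir20/dozzle

    # Uptime Kuma: monitoring/alerting UI with a persistent data volume
    docker run -d --name uptime-kuma -p 3001:3001 \
      -v uptime-kuma:/app/data louislam/uptime-kuma:1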


Nice article!

It's heartening in the new millennium to see some younger people show awareness of the crippling dependency on big tech.

Way back in the stone ages, before instagram and tic toc, when the internet was new, anyone having a presence on the net was rolling their own.

It's actually only gotten easier, but the corporate candy has gotten exponentially more candyfied, and most people think it's the most straightforward solution to getting a little corner on the net.

Like the fluffy fluffy "cloud", it's just another shrink-wrap of vendor lockin. Hook 'em and gouge 'em, as we used to say.

There are many ways to stake your own little piece of virtual ground. Email is another whole category. It's linked to in the article, but still uses an external service to access port 25. I've found it not too expensive to have a "business" ISP account that allows connections on port 25 (and others).

Email is much more critical than having a place to blag on, and port 25 access is only the beginning of the "journey". The modern email "reputation" system is a big tech blockade between people and the net, but it can, and should, be overcome by all individuals with the interest in doing so.
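A quick way to check whether your connection allows outbound port 25 at all (the MX host here is just an example):

    # if this times out or is refused, the ISP is probably blocking outbound SMTP
    nc -vz gmail-smtp-in.l.google.com 25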


Just for reference, take a look at this email system using FreeBSD:

https://www.purplehat.org/?page_id=1450

P.S. That was another place where the article could mention a broader scope; there are always the BSDs, not just Linux...



I'm going with Pangolin on a small hosted VPS on Hetzner to front my homelab. It takes away many of the complications of serving securely directly from the home LAN.

I spent quite some years with Linux systems, but I am using LLMs for configuring systems a lot these days. Last week I set up a server for a group of interns. They needed a Docker/Kubernetes setup with some other tooling. I would have spent at least a day or two setting it up normally. Now it took maybe an hour. All the configurations, commands and some issues were solved with the help of ChatGPT. You still need to know your stuff, but it's like having a super tool at hand. Nice.

Similarly, I was reconfiguring my home server and having Claude generate systemd units and timers was very handy. As you said you do need to know the material to fix the few mistakes and know what to ask for. But it can do the busywork of turning "I need this backup job to run once a week" into the .service and .timer file syntax for you to tweak instead of writing it from scratch.
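The kind of unit pair I mean, with made-up names and paths:

    # /etc/systemd/system/backup.service
    [Unit]
    Description=Weekly backup job

    [Service]
    Type=oneshot
    ExecStart=/usr/local/bin/run-backup.sh

    # /etc/systemd/system/backup.timer
    [Unit]
    Description=Run the backup weekly

    [Timer]
    OnCalendar=weekly
    Persistent=true

    [Install]
    WantedBy=timers.target

Enable with "systemctl enable --now backup.timer".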

Isn't depending on Claude to administer your systems rather divergent from the theme of "Self-Host and Tech Independence?"

I think it's just a turbo mode for figuring things out. Like posting to a forum and getting an answer immediately, without all those idiots asking you why you even want to do this, how software X is better than what you are using etc.

Obviously you should have enough technical knowledge to do a rough sanity check on the reply, as there's still a chance you get stupid shit out of it, but mostly it's really efficient for getting started with some tooling or programming language you're not familiar with. You can perfectly well do without; it just takes longer. Plus you're not dependent on it to keep your stuff running once it's set up.


Not in this case. It's a learning accelerator, like having an experienced engineer sitting next to you.

I would describe it as the opposite- like having an inexperienced but very fast engineer next to you.

And using a hosted email service is like having hundreds of experienced engineers managing your account around the clock!

No. I've been a sysadmin before and know how to write the files from scratch. But Claude is like having a very fast intern I can tell to do the boring part for me and review the work, so it takes 30 seconds instead of 5 minutes.

But if I didn't know how to do it myself, it'd be useless- the subtle bugs Claude occasionally includes would be showstopper issues instead of a quick fix.


Claude and the others are still in the adoption phase, so the services are good and not user-hostile, as they will be in the extraction phase. Hopefully by then, setting up RAG systems over actual human-constructed documentation will be way more accessible and give good results with much smaller self-hosted models. IMO, this is where I think/hope the LLMs' value to the average person will land long-term: search, but better at understanding the query. Sadly, they will also be used for a lot of user-hostile nonsense as well.


