Plainweb looks really promising.
Keen to get in touch and see how I can help support the vision of Plainweb with Sidekick - then you could reference it in the docs.
No, Sidekick doesn't use Firecracker. I know fly.io is built around it, yes. They do that so they can put your app to sleep - basically shutting it down - and then spin it back up quickly when a request comes in. There's no place for that in Sidekick's vision.
1) Kamal is more geared towards having one VPS per project - it's really made for big projects. They even show in the demo that the db is hosted on its own VPS. Which is great! But not for me or Sidekick's target audience. Kamal v2 will support multiple projects on a single VPS, afaik.
2) yes yes yes! I really like Litestream. Backups are also one of those critical-but-annoying things that Sidekick is meant to take care of for you. I'll look into Bearman. My vision is to have one command for the most popular db types, and it would use stubs to configure everything the right way. Need to sort out docker-compose support first though...
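To make that concrete: for the SQLite case the stub could be little more than a templated `litestream.yml`. This is just a sketch of the idea, not something Sidekick generates today, and the bucket/endpoint values are placeholders:

```yaml
# Hypothetical stub Sidekick could template for a SQLite app (litestream.yml).
# Bucket, endpoint and paths are placeholders; credentials would come from
# LITESTREAM_ACCESS_KEY_ID / LITESTREAM_SECRET_ACCESS_KEY in the environment.
dbs:
  - path: /data/app.db
    replicas:
      - type: s3
        bucket: myapp-backups
        path: app.db
        endpoint: https://s3.example.com
```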
Locally here means on your laptop, not on your VPS. Contrary to popular opinion, I believe your source code shouldn't be on your prod machine - a docker image is all you need. Lots of other projects push your code to the VPS, build the image there, and then use it. I see no point in doing that...
hahah seems like we went down the same rabbit hole. I also considered `docker-rollout` but decided to write my own script. Heavily inspired by the docker-rollout source code btw.
Just curious, why did you decide to go with docker plugins?
Thanks man! I'm working on the docker-compose support. I got it working locally, but the ergonomics are really hard to get right because compose files are so flexible. I was even considering using the `sidekick.yaml` file as the main config and turning that into a docker compose file - similar to what fly.io does with `fly.toml`. But I wanna keep this docker-centric... so yeah, I'm still doing more thinking around this.
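Just to illustrate the direction - none of this is real Sidekick config today, every key here is made up:

```yaml
# Hypothetical sidekick.yaml - purely illustrative
app:
  name: myapp
  image: ghcr.io/me/myapp:latest
  port: 3000
  env_file: .env.production

# ...which Sidekick would then expand into an ordinary compose service, roughly:
# services:
#   myapp:
#     image: ghcr.io/me/myapp:latest
#     env_file: .env.production
#     expose:
#       - "3000"
```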
Sounds like you have a great setup. My vision is to make a setup like yours more accessible, really, without having to play with low-level config like ansible.
I think you should try to replace nginx with Traefik - it handles certs out of the box!
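A rough sketch of what that can look like with compose and Let's Encrypt - domain, email and image are placeholders:

```yaml
# Minimal Traefik + Let's Encrypt sketch (placeholders throughout).
services:
  traefik:
    image: traefik:v3.0
    command:
      - --providers.docker=true
      - --providers.docker.exposedbydefault=false
      - --entrypoints.web.address=:80
      - --entrypoints.websecure.address=:443
      - --certificatesresolvers.le.acme.email=you@example.com
      - --certificatesresolvers.le.acme.storage=/letsencrypt/acme.json
      - --certificatesresolvers.le.acme.httpchallenge=true
      - --certificatesresolvers.le.acme.httpchallenge.entrypoint=web
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - ./letsencrypt:/letsencrypt
      - /var/run/docker.sock:/var/run/docker.sock:ro

  app:
    image: myapp:latest
    labels:
      - traefik.enable=true
      - traefik.http.routers.app.rule=Host(`app.example.com`)
      - traefik.http.routers.app.entrypoints=websecure
      - traefik.http.routers.app.tls.certresolver=le
```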
Mine is dead simple. I just have a repo with all my ansible in it, and have a nested module called "service". It takes in an app name, domain name, backup schedule, and a true/false on whether it should get a public nginx setup.
Then it finds the compose file based on the app name. It templates the domain name in wherever it's needed in the compose file, and if it's meant to be public it'll set up an nginx config (which runs on the host, not in docker). If the folder with the compose file has a backup.sh and restore.sh, it also copies those over and sets up a cron for the backup schedule. It's less than 70 lines of yaml, plus some more for restart handlers.
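To give a feel for it, the call site is roughly this (variable names paraphrased, not the actual module):

```yaml
# Rough shape of an invocation - names are paraphrased.
- hosts: vps
  roles:
    - role: service
      vars:
        app_name: myapp
        domain_name: myapp.example.com
        backup_schedule: "0 3 * * *"
        public: true
```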
The only bit that irks me is the initial tls/ssl setup. Certbot changes the nginx config to insert the various certificates, which then makes my original nginx config out of date. I really like nginx and have used it for a long time so feel comfortable with it, but I've been considering traefik and caddy for a while just to get around this.
Although another option for me is to use a cloudflare tunnel instead and ignore certificate management altogether. This is really attractive because it also means I can close some ports. I'll have to find some time to play around with traefik and caddy first though!
I have not automated much of my setup... but for the nginx and certbot portion, I think there is an option to have certbot NOT alter your nginx config (basically, leave it as-is). Because the change certbot applies (if I recall correctly) is really only inserting the location(s) of the cert files under /etc, you could put the cert locations in your nginx config up front, have certbot do its thing, and tell it not to change your config.

If nginx complains about the cert locations being in the config while the certs technically don't exist yet (since certbot hasn't run at that stage), there's always the not-so-sophisticated method: start with an nginx config without those cert locations, let certbot alter your config, and then have one of your automated steps re-replace the config with one that has everything you need plus the expected certbot cert paths inserted. Like I said, not sophisticated, but it would work. I'm sure there are several other ways to do this beyond what I noted. ;-)
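Since you're on ansible anyway, something along these lines might do it - untested, just a sketch using certbot's webroot plugin, and it assumes nginx already serves /.well-known/acme-challenge/ from the webroot path:

```yaml
# Sketch: issue the cert up front so certbot never edits the nginx config.
# Assumes nginx serves ACME challenges from /var/www/letsencrypt.
- name: Obtain certificate without touching nginx config
  ansible.builtin.command: >
    certbot certonly --webroot -w /var/www/letsencrypt
    -d {{ domain_name }} --non-interactive --agree-tos -m admin@{{ domain_name }}
  args:
    creates: /etc/letsencrypt/live/{{ domain_name }}/fullchain.pem
  notify: reload nginx
```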
As you say, nginx does complain about the cert files not existing, so that's pretty close to what I do. I just start with the non-ssl version, let certbot do its thing, and then copy the result back after it's deployed (if I remember). It's mildly annoying, but it takes about 2 minutes in total, so it's been like that for 2 years now.
I'm sure there's something smarter I can do, like reading back the result afterwards and altering my local file. But honestly, once nginx is configured for an application, I almost never touch the config again anyway.
I suspect I'm more likely to move everything over to cloudflare tunnels and ditch dealing with ssl locally altogether at this point.
Ah ok, gotcha. Yeah, I'm the same - once my nginx config is done, I rarely touch it again. I've also been playing with Caddy just to avoid the hassle of cert management, and to see if it can handle traffic like nginx can. So far my tests are showing Caddy as a pretty good replacement. (I should note that my scaling needs are quite low since, you know, I'm not a startup nor a massively large web service :-) )
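For anyone curious, a minimal Caddy-in-docker setup is roughly this - domain and upstream are placeholders, and Caddy handles the certs automatically:

```yaml
# Minimal Caddy sketch (placeholders). The mounted Caddyfile is just:
#   app.example.com {
#       reverse_proxy app:3000
#   }
services:
  caddy:
    image: caddy:2
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - ./Caddyfile:/etc/caddy/Caddyfile:ro
      - caddy_data:/data

  app:
    image: myapp:latest

volumes:
  caddy_data:
```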
I'll reach out on twitter