Multiple processes, multiple threads per process, and/or greenlets (monkey-patch the network calls; like async, but with no keywords involved). Scale out horizontally when there's a problem. It can get expensive.
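For a concrete sketch of the greenlet option (using gevent, with an illustrative URL-fetching workload):

  # A minimal gevent sketch. patch_all() must run before other
  # imports so that sockets, DNS resolution, etc. become cooperative.
  from gevent import monkey
  monkey.patch_all()

  import gevent
  import urllib.request

  def fetch(url):
      # Reads like ordinary blocking code, but yields to other
      # greenlets while waiting on the network.
      return urllib.request.urlopen(url).read()

  jobs = [gevent.spawn(fetch, "https://example.com") for _ in range(10)]
  gevent.joinall(jobs)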

I can second the other commenter: you are having a different discussion than the rest of the comments and OP.


Er... is it software? Hardware?


Has both. Hardware is mostly in the prototype phase.


If you continue not to find what you need and are willing to be a subject matter expert on what Pivotal actually was (I never saw it myself), I would be interested in building this. A lot of people share your sentiment, so it could be successful, but it's hard to clone something unless you know the thing.


Please, not yet another failed clone attempt. If you never used it, you have absolutely zero chance of replicating it; it's something you need to have experienced.


Also, disadvantaged groups might consume less healthcare and be less aware of air quality and similar factors, and so may be more likely to have bad health outcomes for the same environmental inputs.


I've been very happy doing this:

  DOCKER_HOST="ssh://user@remotehost" docker-compose up -d

It works with plain docker, too. Another user is getting at the same idea when they mention docker contexts, which is just a different way to set the variable.
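For example, a context bakes the same SSH endpoint into a named configuration (hostname illustrative):

  docker context create remote --docker "host=ssh://user@remotehost"
  docker --context remote ps
  docker context use remote   # or make it the default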

Did you know about this approach? In the snippet above, the image will be built on the remote machine and then run. The build context (your files) is sent over the wire as needed. Subsequent runs will use the remote machine's docker cache. It's slightly different from your approach of building locally, but much simpler.


This approach is akin to the prod server pulling an image from a registry. The OP's method is push-based.


No, in my example the docker-compose.yml lives alongside your application's source code, and you can use the `build` directive https://docs.docker.com/reference/compose-file/services/#bui... to instruct the remote host (a Hetzner VPS, or whatever else) to build the image. That image never goes to an external registry; it is used internally on that remote host.
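A minimal sketch of such a compose file (service name and port are illustrative):

  # docker-compose.yml, assuming a Dockerfile sits next to it
  services:
    app:
      build: .   # built on whatever host DOCKER_HOST points at
      ports:
        - "8000:8000"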

For 3rd party images like `postgres`, etc., then yes it will pull those from DockerHub or the registry you configure.

But in this method you push the source code, not a finished docker image, to the server.


Seems like it makes more sense to build on the build machine, and then just copy images out to PROD servers. Having source code on PROD servers is generally considered bad practice.


The source code never lands on the prod server's filesystem. It is streamed to the Docker daemon while it builds the image; once the build finishes, only the image remains on the prod server.

I am now convinced this is a hidden Docker feature that too few people know about or understand.


Yeah, I definitely didn't understand that! Thanks for explaining. I've bookmarked this thread, because there are several commands here that look more powerful and cleaner than what I'm currently doing, which is to "docker save" to a TAR, copy the TAR up to prod, and then "docker load".
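(For reference, that manual round trip looks something like this; image and host names are made up:)

  docker save myapp:latest -o myapp.tar
  scp myapp.tar user@prod:/tmp/
  ssh user@prod "docker load -i /tmp/myapp.tar"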


This is for running code on your own servers; he doesn't deal with running others' code.


Did you compile the Python yourself? If so, you may need to add optimization flags https://devguide.python.org/getting-started/setup-building/i...
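Something along these lines, per the CPython dev guide (the PGO build takes noticeably longer):

  ./configure --enable-optimizations --with-lto
  make -j"$(nproc)"
  sudo make altinstall   # altinstall avoids clobbering the system python3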


I assume they are saying that in practice, if wealth gives one influence (if one lives in capitalism), one will use that influence to make one's market less free to one's benefit.


You can also skip the `docker compose pull` step if you configure it to always pull, either in the compose file or on the up command.
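For example (a minimal sketch; `pull_policy` is set per service, and the `--pull` flag on `up` needs a recent Compose version):

  # in docker-compose.yml
  services:
    db:
      image: postgres:16
      pull_policy: always

  # or at run time
  docker compose up -d --pull always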

