WebAssembly is fundamentally a very well isolated piece of executable code, and Cloudflare focused on JS by creating their own stripped-down version of the V8 runtime, so you don't have to ship one. They focused on a few efficient solutions rather than a versatile "just give us a docker image" approach.
Other cloud providers just let you push code, then charge you extortionate amounts of money to build the several-hundred-MB docker image that includes Node.js and node_modules, for the bandwidth of transferring those images between datacentres, for storing the build cache in a multi-regional bucket (why???); then they charge you to store the image in their registry and finally give you a 10+ second cold start.
The Cloudflare Worker JS is a mess of incompatibilities and design idiosyncrasies that make it really impractical for true edge application design. What I mean by cloud native & serverless is specifically containerized platforms where you ship your container and they run it - where a golang container can be 30MB with everything included vs a full-fat nodejs container in the 100-500MB range. Scala is also huge when bundled and shipped as a fat jar. Hell, even Windows binaries, if you really wanted to push something to the end client, are trivial in golang and invoke none of the .NET client library, C++ redistributable, or "shipping Java with your app" headaches of other solutions. Only Delphi and C linked against winapi can offer this same level of portability on Windows.
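For illustration, a multi-stage build along these lines is how golang images end up in the tens of MB (the module path and binary name here are placeholders, not from any particular project):

```dockerfile
# Build stage: full Go toolchain image (big, but discarded after the build)
FROM golang:1.22 AS build
WORKDIR /src
COPY go.mod go.sum ./
RUN go mod download
COPY . .
# CGO_ENABLED=0 gives a static binary that needs no libc in the final image
RUN CGO_ENABLED=0 GOOS=linux go build -o /app ./cmd/server

# Final stage: just the binary on a minimal base image
FROM gcr.io/distroless/static-debian12
COPY --from=build /app /app
ENTRYPOINT ["/app"]
```

The final image is the distroless base plus your binary; the Node equivalent has to ship the runtime and node_modules no matter what you do.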
AWS's lambda golang is first-class as well as GCP's golang "Cloud Functions".
Again, can't speak highly enough of using golang on cloud.
My only grievance with Go in the cloud is that it's tedious to have to set up a CI job or similar to build your binaries to deploy in your functions. I really wish I could just ship the source code and have it compile for me (or failing that, if CloudFormation or AWS SAM or Terraform or whatever could transparently manage the compilation).
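For what it's worth, the manual version is only a few commands. This sketch targets Lambda's `provided.al2` custom runtime, which expects the executable in the zip to be named `bootstrap`; the function name is a placeholder:

```shell
# Cross-compile a static Linux binary; the provided.al2 runtime
# looks for an executable named "bootstrap" at the zip root.
GOOS=linux GOARCH=arm64 CGO_ENABLED=0 go build -o bootstrap main.go
zip function.zip bootstrap

# Upload the new code (substitute your own function name).
aws lambda update-function-code \
  --function-name my-go-fn \
  --zip-file fileb://function.zip
```

It's still an extra step compared to zip-and-upload interpreted languages, which I think is the grievance here.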
For us it's as simple as docker build right from the repo -> push to registry for test deployment then tag with prod for rollout from there.
Probably not ideal for 20+ dev teams or high-complexity deployments, but this is the simplest CI and it's a much quicker build than Node, so devs can do it locally in all cases.
Very interesting. Can you give some specific examples of where CI/CD tools that work fine for large organizations with other programming languages don't work to your satisfaction with Go?
I think you’ve misunderstood the thread. I was arguing that you can get away with “dev builds image locally and pushes straight to prod” in an early startup, but in a mature organization you need CI/CD.
> My only grievance with Go in the cloud is that it's tedious to have to set up a CI job or similar to build your binaries to deploy in your functions.
So I thought your CI/CD works to your satisfaction for other programming languages, but you found Go tedious, and I wanted to understand some specific cases.
As far as serverless Go is concerned, my experience has been only with Heroku, as I prefer hosting on my own instances and Heroku seems reasonable for my use case. I hear fly.io is better in terms of performance, and they do offer distributed PostgreSQL.
I guess those who are already on AWS would probably choose Lambda for serverless Go. If the experience is the same as Node.js or Python on Lambda, then I don't think there would be much to complain about other than the cost; but of course it cannot match the speed or cost of a CF Worker for the reasons you've pointed out.
Fly.io is amazing, I'm very excited about it! They spin up a long-lived instance on the edge closest to the user and create a Wireguard tunnel in between. They also have a free tier and reasonable pricing. Heroku is just far too expensive for a hobby project.
Usually the database (not the compute) is where things get expensive (Heroku's 10K rows is not practical for most tasks unless you're willing to do unholy things to put more data into a row).
I've written many lambdas in Python but have switched entirely to Go for all my new lambda work. Go is just better in every way: it uses a fraction of the memory, and executions take a fraction of the time Python takes.
I have a feeling AWS Lambda's hypervisor has a lot of overhead, and simply making your workload smaller/lighter yields a very noticeable reduction in execution time (cold start is <10ms on most of our Go lambdas).
re: full serverless:
I mean, try for yourselves, but unless you're running an echo statement, you'll probably be hard pressed to get lower than this. Some early experiments with Rust showed 5x that, and that is as close to Go as I would dare to compare.
At my workplace we're using Go with Lambda and Fargate for several mid-size and one large application. The costs (image storage, transfer ...) you're referring to are actually almost negligible. Aurora is by far the largest in our stack.
On the other hand: Creating WASM is a big pain. I understand that Cloudflare and others promote it for their own business reasons, but from a developer/customer point of view docker or lambda are much more robust and simple.