I'll give you a candid scenario that we went through:
A Node.js app built with Express, receiving JSON events from 10M clients around the world on a very regular basis, pushing them to a queue, and checking that the queue actually received them.
Extremely simple app: receive and parse JSON, do some very simple sanity checks on the schema, convert to BSON with a predefined mapping, and push to a queue (which happened to be Azure Event Hub). To handle ~5BN events per month, peaking around 4000 events/sec, it was using up to 20 Node instances at ~200-300MB of memory each, with the scale-out trigger set to 75% CPU... the 95th percentile was 20 cores and 12GB of RAM in a serverless environment, just for that one service. Add the base container overhead and it peaked at 16GB of memory. That's not nothing in a serverless world. If it were a VM, sure, not too bad, but we're talking elastic containers, and that service was built with 500% headroom over the high watermark. Not about to provision two 48GB VMs in two AZs and worry about all the plumbing "just in case". That is the point of going serverless.
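(For context on those numbers: assuming a 30-day month, 5BN events averages out to 5×10⁹ / (30 × 86,400s) ≈ 1,930 events/sec, so the 4000/sec peak is roughly 2× the mean.)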
We moved it to Go and it handled 2000 req/s on 1 core with 60MB of memory. It has never gone over 3 cores in the 2 years since the move.
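For what it's worth, the whole Go service is basically one handler. Here's a minimal sketch of the same pipeline, assuming the azeventhubs SDK and the mongo-driver bson package; the event shape, route, and env var names are made up for illustration, not the real schema:

```go
package main

import (
	"context"
	"encoding/json"
	"log"
	"net/http"
	"os"

	"github.com/Azure/azure-sdk-for-go/sdk/messaging/azeventhubs"
	"go.mongodb.org/mongo-driver/bson"
)

// event is a hypothetical payload shape; the real schema and BSON
// mapping weren't part of the story above.
type event struct {
	DeviceID string         `json:"deviceId" bson:"deviceId"`
	Kind     string         `json:"kind" bson:"kind"`
	Payload  map[string]any `json:"payload" bson:"payload"`
}

func main() {
	// Connection details from the environment; variable names are made up.
	producer, err := azeventhubs.NewProducerClientFromConnectionString(
		os.Getenv("EVENTHUB_CONNECTION_STRING"), os.Getenv("EVENTHUB_NAME"), nil)
	if err != nil {
		log.Fatal(err)
	}
	defer producer.Close(context.Background())

	http.HandleFunc("/events", func(w http.ResponseWriter, r *http.Request) {
		// Receive and parse the JSON.
		var ev event
		if err := json.NewDecoder(r.Body).Decode(&ev); err != nil {
			http.Error(w, "invalid json", http.StatusBadRequest)
			return
		}
		// Very simple sanity checks, standing in for the schema validation.
		if ev.DeviceID == "" || ev.Kind == "" {
			http.Error(w, "missing required fields", http.StatusBadRequest)
			return
		}
		// Convert to BSON, using the struct tags as the predefined mapping.
		doc, err := bson.Marshal(ev)
		if err != nil {
			http.Error(w, "encoding failed", http.StatusInternalServerError)
			return
		}
		// Push to Event Hub. A one-event batch keeps the sketch simple;
		// a production handler would batch across requests.
		batch, err := producer.NewEventDataBatch(r.Context(), nil)
		if err == nil {
			err = batch.AddEventData(&azeventhubs.EventData{Body: doc}, nil)
		}
		if err == nil {
			// Send blocks until the service acknowledges, which covers the
			// "check that the queue received it" step from the description.
			err = producer.SendEventDataBatch(r.Context(), batch, nil)
		}
		if err != nil {
			http.Error(w, "queue send failed", http.StatusBadGateway)
			return
		}
		w.WriteHeader(http.StatusAccepted)
	})

	log.Fatal(http.ListenAndServe(":8080", nil))
}
```

Nothing clever going on: the standard library's net/http plus goroutine-per-connection does the heavy lifting, which is a big part of why the footprint stayed so small.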