> "There is a limit to optimization, and that ends with optimized machine code."
There's a vast range of concepts when it comes to "optimization", and we are very far from the limit. There's still tons of research just on improving compiler understanding of high-level code intent, let alone on runtime performance.
> "Go being a compiled language"
All code is eventually machine code. That's the only way a CPU can execute it. Virtual machines are an implementation detail and the latest versions with dynamic tiered JITers, POGO, vectorization, and other techniques can match or exceed static compilation because they optimize based on actual runtime behavior instead of assumptions; balancing memory safety, fast startup, amortized optimization, and steady state throughput.
> "As far as cloud costs are concerned, a well developed program should cost least with go."
This is meaningless without context. I've built 3 ad exchanges running billions of requests per day with C#/.NET which rivaled other stacks, including C++. I also know several fintech companies that have Java in their performance critical sections. It seems you're working with outdated and limited context of languages and scenarios.
> There's a vast amount of concepts when it comes to "optimization" and we are very far from the limit. There's still tons of research just to improve compiler intelligence for high-level code intention, let alone in runtime performance.
True, I agree. But take a program (add two numbers, print the result) and implement it in C / Go / Python / C#. You will find that the most optimized program in each language generates different output, machine-code-wise.
While C & Go will generate binaries that have X processor instructions, Python & C# will generate more than X.
And there you have the crux of the issue: Python & C# require runtimes, and those runtimes have an overhead.
> Virtual machines are an implementation detail
Sorry, I think you have the wrong idea. A VM is an implementation detail that gets carried into execution. A VM is a runtime that runs on top of the original runtime(tm), the OS. A Go / C program runs directly on the OS. Adding a layer of runtime means reduced performance.
> It seems you're working with outdated and limited context of languages and scenarios.
That just reeks of arrogance.
My point, and the point made by the original comment author, is not that C# / Java / Python cannot run billions of requests; it's that when you compare cloud charges for running those billions of requests, costs will be lower for programs produced by Go / C (and C++ too).
> "Python & C# require runtimes. And those runtimes will have an overhead."
Golang also has a runtime providing memory safety, garbage collection, and threading. It's just linked into the native executable and runs alongside your code; there's no VM, but the runtime is still present.
> "That just reeks of arrogance... costs will be less for programs produced by Go / C (and C++ too)"
You claimed that costs would be the least with Go, without any other context. This is neither related to my point (that all stacks get faster over time) nor accurate, since other languages are used in similar environments delivering the same or better performance and costs. Again, this comes down to "optimization" being far broader than just the language or compilation process.
> Sorry, I think you have the wrong idea. A VM, is an implementation detail that gets carried to the execution. A VM is the runtime that runs on top of the original runtime(tm) that is the OS. A Go / C program will run directly on the OS. Adding a layer of runtime means reduced performance.
After the bytecode has been JIT compiled, it runs as close to the metal as a Go program. If you want fast startup you can even compile the code ahead of time, and there is absolutely no difference, apart from .NET not insisting on its own calling convention, which means it has far less overhead than Go when interacting with external libraries.
This whole thread is childish and pointless. Literally 14 year old me would have recognised these arguments and relished getting stuck into them.
I've seen a stack built almost entirely in TCL which handled 10,000 transactions per second for a FTSE 100 company. If that's possible then language is just one choice in the infinite possibilities of production stacks.
> C program will run directly on the OS. Adding a layer of runtime means reduced performance.
A C program that isn't compiled in an embedded context has a runtime layer on top of it too. The OS doesn't call your main; it runs the init function of the platform's C runtime to initialize all the bloat and indirection that comes with C. Likewise, your compiled C program doesn't just execute the native sqrt instruction; it runs a wrapper that sets up all the state C expects, like the globally visible errno value no one ever checks but some long-dead 1980s UNIX guru insisted on. C is portable because, just like Python and C#, it too is specified on top of an abstract machine with properties not actually present in hardware. If you really set out to optimize a C program at the low level, you immediately run into all the nonsense the C standard is up to.
JIT compilers run machine code without any overhead when it comes to actual execution, and the overhead of the compilation itself can be pushed to another thread, cached, etc.
> Virtual machines are an implementation detail and the latest versions with dynamic tiered JITers, POGO, vectorization, and other techniques can match or exceed static compilation because they optimize based on actual runtime behavior instead of assumptions.
In theory, practice is like theory. In practice, it isn't.
Maybe in 20 years JITs will provide AOT-like throughput without significant memory / power consumption. Not with current tech, except on cherry-picked benchmarks.
Note that Akka is currently second for single core, and fastest with multicore. .NET has a nice result, but let's use up-to-date results, especially when it still fits your argument that a JIT can compete in benchmarks given suitable warmup, etc.
In addition, Go is garbage collected, so you have latency there as well. However, in most Go apps you don't use enough memory for the garbage collector to become an issue.
Rust does not have a GC, but it's a much more complex language, which causes its own problems. IMHO, Go has the upper hand in simplicity. However, Go also limits you a little, because you can't write memory-unsafe code by default; you can't do pointer magic like you do in C. But things also get complicated in Go when you start to use concurrency. Again IMHO, event-driven concurrency with callback functions (as in nodejs) is easier to wrap your head around than channels.
I don't think this comment deserves being downvoted much. It states an important issue: in simplicity Go might win, but practically you might be better off learning Rust and having the compiler warn you about issues with your concurrency usage for any non-trivial problem.
I don't agree with the "callback functions are easier" part at all though. It takes a bit of wrapping one's head around channels and so on, but once understood, I think they are easier to handle than all those callbacks.
> But things also get complicated in Go when you start to use concurrency.
Things always get complicated when concurrency is involved. And Go handles this much better than most other languages, with language primitives (go, select, chan) dedicated to the task, and CSP being much easier to handle than memory sharing.
> event driven concurrency with callback functions (as in nodejs) is easier to wrap your head around then channels.
How so?
The code becomes non-linear, what bit of code is executed when and how it's synchronized is suddenly up to the event loop, and entire libraries suddenly have to be written around a concept (cooperative multitasking) which OS design abandoned in the early '90s for good reason.
And this isn't even taking into account that the whole show is now much harder to scale horizontally, or that it is only suitable for io intensive tasks.
From a self-taught programmer's perspective, you mostly write concurrent code to speed up I/O. Writing code with callbacks gives me the illusion of linearity. I know it's prone to errors, but for simple tasks it's good enough. That's the reason for nodejs's popularity, I think.
CSP is a brilliant concept, but it was a bit hard for me to wrap my head around without diving into graduate level computer science topics. This is not Go's fault, concurrency itself is complicated.
Whenever I try to write concurrent code with Go, I resort back to using locks, waitgroups, etc. I haven't managed to make the switch from thinking linearly to thinking concurrently yet.
I might be downvoted again :), but actor based languages seem much more suitable for concurrency for me.
Again, I've been out of programming for a while; the last time I coded, it was a pain to do network requests concurrently in Rust due to needing intrusive libraries. Python was hard because the whole syntax changed a couple of times and you had to find the right documentation for the right version, etc. Go was surprisingly easy, but as I usually plan the program as I write it, trying to write non-trivial concurrent code always results in me getting unexpected behaviour or panics.
The usual material you find on concurrency and parallelism teaches you locks/semaphores using Java; these are easier to wrap your head around. The last time I checked, material on CSP was limited to graduate-level computer science books and articles.
I originally got into Go because I thought it was memory safe, the code I write would be bulletproof, etc. Then I understood that the Achilles heel of Go (or any procedural-at-heart programming language) is dynamic memory allocation paired with concurrency.
This is my view as a self-taught programmer having no formal training in computer science. YMMV
Millions-per-second of what? That's a serious load for any stack and depends on far more than just the language.
And it can also be done by other stacks as clearly seen in the TechEmpower benchmarks. Here's .NET and Java doing 7 million HTTP requests/second and maxing out the network card throughput: https://www.techempower.com/benchmarks/#section=data-r20&hw=...
> Optimized native like C++/Rust could handle millions per second without blinking an eye
Keyword: optimized. Almost anything we have now would handle as much as "native optimized".
And no, neither C++ nor Rust would handle millions (multiple) without blinking because the underlying OS wouldn't let you without some heavy, heavy optimizations at the OS layer.