
Yeah no.

There is a limit to optimization, and that ends with optimized machine code.

Go being a compiled language, that is cross-platform but ships separate binaries, should be very close to C in terms of performance.

Languages like Python, Java, C#, are run on virtual machines, meaning you can optimize them only so far.

As far as cloud costs are concerned, a well-developed program should cost the least with Go.

Go is best suited for running intensive compute loads in serverless environments.



> "There is a limit to optimization, and that ends with optimized machine code."

There's a vast amount of concepts when it comes to "optimization" and we are very far from the limit. There's still tons of research just to improve compiler intelligence for high-level code intention, let alone in runtime performance.

> "Go being a compiled language"

All code is eventually machine code. That's the only way a CPU can execute it. Virtual machines are an implementation detail and the latest versions with dynamic tiered JITers, POGO, vectorization, and other techniques can match or exceed static compilation because they optimize based on actual runtime behavior instead of assumptions; balancing memory safety, fast startup, amortized optimization, and steady state throughput.

> "As far as cloud costs are concerned, a well-developed program should cost the least with Go."

This is meaningless without context. I've built 3 ad exchanges running billions of requests per day with C#/.NET which rivaled other stacks, including C++. I also know several fintech companies that have Java in their performance critical sections. It seems you're working with outdated and limited context of languages and scenarios.


> There's a vast amount of concepts when it comes to "optimization" and we are very far from the limit. There's still tons of research just to improve compiler intelligence for high-level code intention, let alone in runtime performance.

True. I agree. But take a program (add two numbers and print the result) and implement it in C / Go / Python / C#. You will find that the most optimized program in each language will generate different machine code.

While C & Go will generate binaries that have X processor instructions, Python & C# will generate more than X.

And there, you have the crux of the issue. Python & C# require runtimes. And those runtimes will have an overhead.

> Virtual machines are an implementation detail

Sorry, I think you have the wrong idea. A VM is an implementation detail that gets carried into execution. A VM is a runtime that runs on top of the original runtime(tm), the OS. A Go / C program will run directly on the OS. Adding a layer of runtime means reduced performance.

> It seems you're working with outdated and limited context of languages and scenarios.

That just reeks of arrogance.

My point, and the point made by the original comment author, is not that C# / Java / Python cannot run billions of requests; it's that when you compare cloud charges for running those billions of requests, costs will be less for programs produced by Go / C (and C++ too).


> "Python & C# require runtimes. And those runtimes will have an overhead."

Golang also has a runtime with memory safety, garbage collection, and threading. It's just linked into the native executable and runs alongside your code: no VM, but still present.
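To make "still present" concrete, here is a minimal sketch (the `gcCycles` helper is my own illustration) that asks the runtime embedded in every plain Go binary about its goroutines and garbage-collection activity:

```go
package main

import (
	"fmt"
	"runtime"
)

// gcCycles pokes at the runtime linked into the executable:
// allocate some heap, force a collection, and read the collector's stats.
func gcCycles() uint64 {
	data := make([][]byte, 100)
	for i := range data {
		data[i] = make([]byte, 1<<16) // 64 KiB each
	}
	runtime.KeepAlive(data)
	runtime.GC() // the garbage collector shipped inside the binary
	var m runtime.MemStats
	runtime.ReadMemStats(&m)
	return uint64(m.NumGC) // completed GC cycles so far
}

func main() {
	fmt.Println("goroutines:", runtime.NumGoroutine())
	fmt.Println("GC cycles:", gcCycles())
}
```

No VM process is involved, yet a scheduler and a collector are clearly running alongside `main`.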

> "That just reeks of arrogance... costs will be less for programs produced by Go / C (and C++ too)"

You claimed that costs would be the least with Go without any other context. This is neither related to my point (that all stacks get faster over time) nor accurate since other languages are used in similar environments delivering the same or better performance and costs. Again this comes down to "optimization" being far broader than just the language or compilation process.


> Sorry, I think you have the wrong idea. A VM, is an implementation detail that gets carried to the execution. A VM is the runtime that runs on top of the original runtime(tm) that is the OS. A Go / C program will run directly on the OS. Adding a layer of runtime means reduced performance.

After the bytecode has been JIT compiled it runs as close to the metal as a Go program. If you want fast startup you can even compile the code ahead of time and there is absolutely no difference. Apart from .NET not insisting on its own calling convention, which means it has far less overhead when interacting with external libraries than Go.


This whole thread is childish and pointless. Literally 14-year-old me would have recognised these arguments and relished getting stuck into them.

I've seen a stack built almost entirely in TCL which handled 10,000 transactions per second for a FTSE 100 company. If that's possible then language is just one choice in the infinite possibilities of production stacks.


> C program will run directly on the OS. Adding a layer of runtime means reduced performance.

A C program that isn't compiled in an embedded context has a runtime layer on top of it. The OS doesn't call your main; it runs the init function of the platform's C runtime to initialize all the bloat and indirection that comes with C. Likewise, your compiled C program doesn't just execute the native sqrt instruction but runs a wrapper that sets all the state C expects, like the globally visible errno value no one ever checks but some long-dead 1980s UNIX guru insisted on. C is portable because, just like Python and C#, it too is specified on top of an abstract machine with properties not actually present in hardware. If you really set out to optimize a C program at the low level, you immediately run into all the nonsense the C standard is up to.


JIT compilers run machine code without any overhead, when it comes to actual execution. And the overhead of the compilation can be pushed to another thread, cached, etc.


> A Go / C program will run directly on the OS

https://go.dev/doc/faq#runtime


> Virtual machines are an implementation detail and the latest versions with dynamic tiered JITers, POGO, vectorization, and other techniques can match or exceed static compilation because they optimize based on actual runtime behavior instead of assumptions.

In theory, practice is like theory. In practice, it isn't.

Maybe in 20 years, JITs can provide AOT-like throughput without significant memory / power consumption. Not with current tech, except on cherry picked benchmarks.


Like this one, where .NET is able to outperform Go and C++ at gRPC parsing?

https://devblogs.microsoft.com/dotnet/grpc-performance-impro...

Or this one where JIT compilers beat Swift?

https://github.com/ixy-languages/ixy-languages


You keep posting a two-year-old msft blog post which is out of date. The benchmark project page it links to points here for the most recent results: https://www.nexthink.com/blog/comparing-grpc-performance/

Note that Akka is currently second for single core, and fastest with multicore. .NET has a nice result, but let's use up-to-date results. Especially when they still fit your argument that JIT can compete in benchmarks given suitable warmup, etc.


Maybe because I wasn't aware of other ones?

Still, it confirms the point that Go isn't up to the competition, which was my point, not that .NET is the super-duper best thing in the world.


Your article shows that Java, Scala, and .NET are all significantly faster than Go in latency and throughput, and can beat C++ as well.

This only affirms our point that JIT can keep up with, or beat, AOT already.


It can, after a warm-up period and by using 10x the memory and likely non-idiomatic code. Sometimes one can live with that, sometimes not.


Look at the Benchmarks Game's Java programs: they are very idiomatic compared to most other contenders, yet exceedingly fast.


Only when users don't know how to configure JIT caches.

As for non-idiomatic code, that is even worse for Go code, as it lacks many of the performance-tuning knobs.


I literally said this in my comment.


In addition Go is garbage collected. So you have a latency there as well. However, in most Go apps, you don't use that much memory for the garbage collector to become an issue.

Rust does not have a GC, but it's a much more complex language, which causes its own problems. IMHO, Go has the upper hand in simplicity. However, Go also limits you a little, because you can't write memory-unsafe code by default. You can't do pointer magic like you do in C. But things also get complicated in Go when you start to use concurrency. Again IMHO, event-driven concurrency with callback functions (as in nodejs) is easier to wrap your head around than channels.


I don't think this comment deserves to be downvoted this much. It states an important issue: in simplicity Go might win, but practically you might be better off learning Rust and having the compiler warn you about issues with your concurrency usage for any non-trivial problem.

I don't agree with the "callback functions are easier" part at all though. It takes a bit of wrapping one's head around channels and so on, but once understood, I think they are easier to handle than all those callbacks.


> But things also get complicated in Go when you start to use concurrency.

Things always get complicated when concurrency is involved. And Go handles this much better than most other languages, with language primitives (go, select, chan) dedicated to the task, and CSP being much easier to handle than memory sharing.
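A minimal sketch of those primitives working together (the `merge` fan-in below is my own illustration, not something from the thread):

```go
package main

import (
	"fmt"
	"time"
)

// merge fans two channels into one, built from exactly the primitives
// mentioned above: a goroutine (go), channels (chan), and select.
func merge(a, b <-chan string) <-chan string {
	out := make(chan string)
	go func() {
		defer close(out)
		for a != nil || b != nil {
			select {
			case v, ok := <-a:
				if !ok {
					a = nil // stop selecting on the closed channel
					continue
				}
				out <- v
			case v, ok := <-b:
				if !ok {
					b = nil
					continue
				}
				out <- v
			}
		}
	}()
	return out
}

func main() {
	a := make(chan string)
	b := make(chan string)
	go func() { a <- "from a"; close(a) }()
	go func() { time.Sleep(10 * time.Millisecond); b <- "from b"; close(b) }()
	for msg := range merge(a, b) {
		fmt.Println(msg)
	}
}
```

The synchronization is entirely in the channel operations; there is no shared mutable state to lock.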

> event-driven concurrency with callback functions (as in nodejs) is easier to wrap your head around than channels.

How so?

The code becomes non-linear; what bit of code is executed when, and how it's synchronized, is suddenly up to the event loop, and entire libraries suddenly have to be written around a concept (cooperative multitasking) which OS design abandoned in the early 90s for good reason.

And this isn't even taking into account that the whole show is now much harder to scale horizontally, or that it is only suitable for io intensive tasks.


From a self-taught programmer's perspective, you mostly write concurrent code to speed up IO. Writing code with callbacks gives me the illusion of linearity. I know it's prone to errors, but for simple tasks it's good enough. That's the reason for nodejs's popularity, I think.

CSP is a brilliant concept, but it was a bit hard for me to wrap my head around without diving into graduate level computer science topics. This is not Go's fault, concurrency itself is complicated.

Whenever I try to write concurrent code with Go, I resort to using locks, waitgroups, etc. I couldn't make the switch from thinking linearly to concurrently yet.
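For reference, the lock/waitgroup style described above can look like this (the `sumSquares` helper is a made-up example, not code from the thread):

```go
package main

import (
	"fmt"
	"sync"
)

// sumSquares computes the sum of squares concurrently, using the
// lock/WaitGroup style rather than channels: a mutex guards the shared
// accumulator, and the WaitGroup waits for all goroutines to finish.
func sumSquares(nums []int) int {
	var (
		wg  sync.WaitGroup
		mu  sync.Mutex
		sum int
	)
	for _, n := range nums {
		wg.Add(1)
		go func(n int) {
			defer wg.Done()
			mu.Lock()
			sum += n * n
			mu.Unlock()
		}(n)
	}
	wg.Wait()
	return sum
}

func main() {
	fmt.Println(sumSquares([]int{1, 2, 3})) // 14
}
```

This reads almost linearly, which is probably why it feels more natural than restructuring the code around channels.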

I might be downvoted again :), but actor based languages seem much more suitable for concurrency for me.

Again, I've been out of programming for a while; the last time I coded, it was a pain to do network requests concurrently in Rust due to needing intrusive libraries. Python was hard because the whole syntax changed a couple of times; you had to find the right documentation for the right version, etc. Go was surprisingly easy, but as I usually plan the program as I write it, trying to write non-trivial concurrent code always results in me getting unexpected behaviour or panics.

The usual material you find on concurrency and parallelism teaches you locks/semaphores using Java, these are easier to wrap your head around. The last I checked material on CSP was limited to graduate level computer science books and articles.

I originally got into Go because I thought it is memory safe, the code I write will be bulletproof, etc. Then I understood that the Achilles' heel of Go (or any procedural-at-heart programming language) was dynamic memory allocation paired with concurrency.

This is my view as a self-taught programmer having no formal training in computer science. YMMV


> billions of requests per day with C#/.NET which rivaled other stacks, including C++

Ok, let's say 2 billion per day.

Meaning 2'000'000'000 / 24 / 3600 ≈ 23K req/s

That's nothing, you can probably even do that with python.

On any modern machine, optimized native code like C++/Rust could handle millions per second without blinking an eye, if done properly.


Millions-per-second of what? That's a serious load for any stack and depends on far more than just the language.

And it can also be done by other stacks as clearly seen in the TechEmpower benchmarks. Here's .NET and Java doing 7 million HTTP requests/second and maxing out the network card throughput: https://www.techempower.com/benchmarks/#section=data-r20&hw=...


> Optimized native like C++/Rust could handle millions per second without blinking an eye

Keyword: optimized. Almost anything we have now would handle as much as "native optimized".

And no, neither C++ nor Rust would handle millions (multiple) without blinking because the underlying OS wouldn't let you without some heavy, heavy optimizations at the OS layer.


Bing runs on .NET, as does most of Azure, while Amazon runs on Java; I think they can handle the load.


> Languages like Python, Java, C#, are run on virtual machines, meaning you can optimize them only so far.

A virtual machine is just an intermediate language specification. C, C++, and Rust also "run on a virtual machine" (LLVM). But I guess you mean the VM runtime, which includes two main components: a GC, that the Go runtime also includes, and a JIT compiler. The reason Java gives you better performance than Go (after warmup) is that its GC is more advanced than Go's GC, and its JIT compiler is more advanced than Go's compiler. The fact that there's an intermediate compilation into a virtual machine neither improves nor harms performance in any way. The use of a JIT rather than an AOT compiler does imply a need for warmup, but that's not required by a VM (e.g. LLVM normally uses AOT compilation, and only rarely a JIT).


And even then, people should do themselves a favour and learn about JIT caches, but I guess they keep using Java 8....


> Languages like Python, Java, C#, are run on virtual machines, meaning you can optimize them only so far.

Nope, Java and C# have JIT and AOT compilers available, while Python still has to come to terms with what PyPy is capable of.

https://devblogs.microsoft.com/dotnet/grpc-performance-impro...


AOT vs JIT compilation is not really that important of a difference to steady state performance. It does impose other performance costs though.

For example, C# and, even more so, Java, are poorly suited to AWS Lambda-style or CLI style commands, since you pay the cost of JIT at every program start-up.

On the other hand, if you have a long-running service, Java and .NET will usually blow Go out of the water performance wise, since Go is ultimately a memory managed language with a very basic GC, a pretty heavy runtime, and a compiler that puts a lot more emphasis on compilation speed than advanced optimizations.

Go's lack of generic data structures, and particularly its lack of covariant (readonly) lists, can lead to a lot of unnecessary copying if not specifically writing for performance (if you have a function that takes a []InterfaceA, but you have a []TypeThatImplementsInterfaceA, you have to copy your entire slice to call that function, since slices are - correctly - not covariant, and there is no readonly-slice alternative that could be covariant).
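A small sketch of the copy that parenthetical describes, using hypothetical `Shape`/`Square` types in place of InterfaceA and its implementer:

```go
package main

import "fmt"

type Shape interface{ Area() float64 }

type Square struct{ Side float64 }

func (s Square) Area() float64 { return s.Side * s.Side }

// totalArea takes a slice of the interface type.
func totalArea(shapes []Shape) float64 {
	var sum float64
	for _, s := range shapes {
		sum += s.Area()
	}
	return sum
}

// asShapes copies a []Square into a []Shape. The O(n) copy is
// unavoidable: Go slices are (correctly) invariant, and there is no
// readonly, covariant slice type that could avoid it.
func asShapes(squares []Square) []Shape {
	shapes := make([]Shape, len(squares))
	for i, sq := range squares {
		shapes[i] = sq
	}
	return shapes
}

func main() {
	squares := []Square{{Side: 2}, {Side: 3}}
	// totalArea(squares) would not compile: []Square is not []Shape.
	fmt.Println(totalArea(asShapes(squares))) // 13
}
```

Every call site either pays this copy or is rewritten generically for performance.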


So Go's runtime is pretty heavy in comparison to... .NET and the JVM? Or pretty heavy in general? This is the first I would be hearing either of these two opinions in either case, so interested in the rationale.


It's pretty heavy compared to C and C++, not .NET or JVM.

The point is that Go, even if AOT compiled, is much nearer to .NET and JVM (and often slower than those two) than to C or C++.

To make that last point clearer: it's generally slower than C or C++ because the compiler is not doing very advanced optimizations, the runtime is significantly heavier than theirs, and GC use has certain unavoidable costs for certain workloads.

Go is generally slower than .NET or Java because its AOT compiler is not as advanced as their JIT compilers, its GC is not tunable and is a much more primitive design, and certain common Go patterns are known to sacrifice speed for ease of use (particularly channels).


> There is a limit to optimization, and that ends with optimized machine code.

There might be a low limit, but there’s no high limit: you can generate completely unoptimised machine code, and that’s much closer to what Go’s compiler does.

> Go being a compiled language, that is cross-platform but ships separate binaries, should be very close to C in terms of performance.

That’s assuming that Go’s optimisation pipeline is anywhere near what mainstream C compilers provide. Against all evidence.


Just check the .NET Framework 4 to .NET Core transition. You can dig through MSDN blogs. While Go should be lauded for its performance improvements, it should not be done by diminishing other languages'/stacks' achievements.


While that’s what they’d teach you in CS it’s missing enough nuance to be incorrect.

> There is a limit to optimization, and that ends with optimized machine code.

While that's true, you overlook the cost of language abstractions. I don't mean runtime decisions like any use of byte code, but rather language design decisions like support for garbage collection, macros, use of generics and/or reflection, green threads vs POSIX threads, and even basic features like bounds checking, how strings are modelled, and arrays vs slices.

These will all directly impact just how far you can optimise the machine code because they are abstractions that have a direct impact to the machine code that is generated.

> Go being a compiled language, that is cross paltform but separate binaries, should be very close to C in terms of performance.

No it shouldn’t. C doesn’t have GC, bounds checking, green threads, strings, slices, nor any of the other nice things that makes life a little easier as a modern developer (I say this not as a criticism of C, I do like that language a lot but it suits a different domain to Go).

> Languages like Python, Java, C#, are run on virtual machines, meaning you can optimize them only so far.

Aside from CPython's runtime, their instructions still compile to machine code.

These days no popular language, not even scripting ones, interprets byte code upon execution. [edit: I posted that before drinking coffee and now realise it was a gross exaggeration and ultimately untrue.]

You’ll find a lot of these languages compile their instructions to machine code. Some might use an intermediate virtual machine with byte code but in a lot of cases that’s just part of the compiler pass. Some might be just in time (JIT) compiled, but in many cases they’re still compiled to machine code.

> As far as cloud costs are concerned, a well-developed program should cost the least with Go.

> Go is best suited for running intensive compute loads in serverless environments.

That’s a hugely generalised statement and the reality is that it just depends on your workloads.

Want machine learning? Well then Go isn't the best language. Have lots of small queries and need a low start-up time because execution time is always going to be short? Well, Node will often prove superior.

Don’t get me wrong, Go has its place too. But hugely generalised statements seldom work generally in IT.


> These days no popular language, not even scripting ones, interprets byte code upon execution.

Python does. At least, the CPython implementation does, and that’s what 99% of Python code runs on.


Yeah, now I think about it, plenty of runtimes do. I have no idea why I posted that when I know it’s not true. Must have been related to the time of the morning and the lack of coffee.


> Languages like Python, Java, C#, are run on virtual machines, meaning you can optimize them only so far.

A JIT compiler can optimize _more_ than an AOT one can. It has way more profile-guided data and can make assumptions based on how the live application is actually being used.



