The thing people don't seem to appreciate or understand about Go is the effect of using it in serverless and cloud-native environments.
Every high-throughput service I've moved to Go has used a tiny amount of memory and CPU compared to the implementation it replaced, typically Node.js, Python, Scala, or .NET.
In a cloud world where billing usually boils down to paying per MB of in-process memory and per CPU cycle, that can mean 4-5 figures a month in reduced cloud cost for the same traffic.
Tiny output binaries are also very quick to transfer and spin up in serverless, often reducing costs further.
The benefit over time of knowing things will essentially get faster with each new version is really encouraging as well. Imagine your cloud bill going down over time simply as a result of rebuilding your service on a newer Go release and redeploying.
Bored by Java, and after getting burned by the performance limitations of Python applications at scale, I switched to Go and never looked back. I still use Python for ML and DS, and I don't think that will change anytime soon given the tool/library ecosystem.
> Tiny output binaries are also very quick to transfer and spin up in serverless, often reducing costs further.
What's your serverless stack for Go? Cloudflare claims Go is not the right language for their workers due to sandboxing constraints[1].
> Go fundamentally is not designed to support secure sandboxing of tenants without running each tenant in their own process...
WebAssembly is fundamentally a very well isolated piece of executable code, and Cloudflare focused on JS by creating their own stripped-down version of the V8 runtime, so you don't have to ship one. They focused on a few efficient solutions rather than a versatile "just give us a Docker image" approach.
Other cloud providers just let you push code, then charge you extortionate amounts of money to build the several-hundred-MB Docker image that includes Node.js and node_modules, for the bandwidth of transferring those images between datacentres, and for storing the build cache in a multi-regional bucket (why???). Then they charge you to store the image in their registry and finally give you a 10+ second cold start.
The Cloudflare Workers JS environment is a mess of incompatibilities and design idiosyncrasies that make it really impractical for true edge application design. What I mean by cloud native and serverless is specifically containerized platforms where you ship your container and they run it, where a golang container can be 30MB with everything included vs a full-fat Node.js container in the 100-500MB range. Scala is also huge when bundled and shipped as a fat jar. Hell, even Windows binaries, if you really wanted to push something to the end client, are trivial in golang and involve none of the .NET client library, C++ redistributable, or "shipping Java with your app" headaches of other solutions. Only Delphi and C linked against winapi can offer the same level of portability on Windows.
AWS Lambda's Go support is first-class, as is Go in GCP's Cloud Functions.
Again, I can't speak highly enough of using golang in the cloud.
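For anyone who hasn't tried it, a Go Lambda is little more than a handler function handed to the runtime library. A minimal sketch, assuming an API Gateway proxy integration (the handler body here is purely illustrative):

    package main

    import (
        "context"
        "net/http"

        "github.com/aws/aws-lambda-go/events"
        "github.com/aws/aws-lambda-go/lambda"
    )

    // handleRequest just echoes a small JSON body back; a real handler
    // would do its work here.
    func handleRequest(ctx context.Context, req events.APIGatewayProxyRequest) (events.APIGatewayProxyResponse, error) {
        return events.APIGatewayProxyResponse{
            StatusCode: http.StatusOK,
            Headers:    map[string]string{"Content-Type": "application/json"},
            Body:       `{"ok":true}`,
        }, nil
    }

    func main() {
        // lambda.Start blocks and invokes the handler once per event.
        lambda.Start(handleRequest)
    }

Cross-compile it for Linux, zip the single binary, and that's essentially the whole deployment artifact.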
My only grievance with Go in the cloud is that it's tedious to have to set up a CI job or similar to build your binaries to deploy in your functions. I really wish I could just ship the source code and have it compile for me (or failing that, if CloudFormation or AWS SAM or Terraform or whatever could transparently manage the compilation).
For us it's as simple as docker build right from the repo -> push to registry for test deployment then tag with prod for rollout from there.
Probably not ideal for 20+ dev teams or high-complexity deployments, but this is the simplest CI, and it's a much quicker build than Node, so devs can do it locally in all cases.
Very interesting. Can you give some specific examples of where CI/CD tools that work fine for other programming languages in large organizations don't work to your satisfaction with Go?
I think you’ve misunderstood the thread. I was arguing that you can get away with “dev builds image locally and pushes straight to prod” in an early startup, but in a mature organization you need CI/CD.
> My only grievance with Go in the cloud is that it's tedious to have to set up a CI job or similar to build your binaries to deploy in your functions.
So I thought your CI/CD worked to your satisfaction for other programming languages, but you found Go tedious, and I wanted to understand some specific cases.
As far as serverless Go is concerned, my experience has been only with Heroku, as I prefer hosting on my own instances and Heroku seems reasonable for my use case. I hear Fly.io is better in terms of performance, and they do offer distributed PostgreSQL.
I guess those who are already on AWS would probably choose Lambda for serverless Go. If the experience is the same as Node.js or Python on Lambda, then I don't think there would be much to complain about other than the cost; but of course it cannot match the speed or cost of a CF Worker for the reasons you've pointed out.
Fly.io is amazing, and I'm very excited about it! They spin up a long-lived instance on the edge closest to the user and create a WireGuard tunnel in between. They also have a free tier and reasonable pricing. Heroku is just far too expensive for a hobby project.
Usually the database (not the compute) is where things get expensive (Heroku's 10K-row limit is not practical for most tasks unless you're willing to do unholy things to pack more data into a row).
I've written many Lambdas in Python but have switched entirely to Go for all my new Lambda work. Go is just better in every way: it uses a fraction of the memory, and executions take a fraction of the time Python takes.
I have a feeling AWS Lambda's hypervisor has a lot of overhead, and simply making your workload smaller/lighter yields a very noticeable reduction in execution time (cold start is <10ms on most of our Go Lambdas).
re: full serverless:
I mean, try it for yourselves, but unless you're running an echo statement you'll probably be hard pressed to get lower than this. Some early experiments with Rust showed 5x that, and that is as close to Go as I would dare to compare.
At my workplace we're using Go with Lambda and Fargate for several mid-size applications and one large one. The costs (image storage, transfer, ...) you're referring to are actually almost negligible. Aurora is by far the largest item in our stack.
On the other hand: Creating WASM is a big pain. I understand that Cloudflare and others promote it for their own business reasons, but from a developer/customer point of view docker or lambda are much more robust and simple.
>> Cloudflare claims Go is not the right language for their workers due to sandboxing constraints[1].
That seems to be only relevant for functions / workers at the edge due to resource limitations. For traditional functions (I'm using AWS Lambda) Go is perfectly suited.
> After getting burned due to performance limitations by Python applications at scale I switched to Go and never looked back
Same here. Not just performance, but also tooling.
We used to use pipenv for reproducible dependency management, but literally any change to the lockfile would take 30+ minutes to rebuild--this isn't including the time it takes to pull the dependencies in the first place, which was only a few minutes anyway. This meant that our CI jobs would take ~45 minutes and we would spend a ton of time trying different caching tricks and adding various complexity to our CI process to bring these times down, but they were still unsavory. With Go, we don't even need caching and everything builds and tests run in just a few minutes (end to end).
Not only that, but Go's static typing was a boon. Years ago we would try keeping our types up-to-date with Sphinx (and code review checks), but inevitably they would fall out of date and the only thing worse than no type documentation is incorrect type documentation. Further, this didn't stop people from writing stupid code nearly as well as a type checker does (think "code whose return type varies based on the value of some input parameter" and various other such things). We eventually tried mypy but it was immature (couldn't express recursive types), cumbersome (e.g., expressing a callback that takes kwargs was tedious), confusing (I don't think I ever figured out how to publish a pypi package with type annotations that would be picked up automatically by mypy), slow, etc. I'm sure these things will improve with time, and there are also various other Python type checkers (although with everything in Python, each tool seems to have its own hidden pitfalls). On the plus side, Python's Unions are better than Go's various patterns for emulating algebraic data types aka enums aka sum types.
Similarly, with Python we would try to deploy AWS Lambdas, but it didn't take long and these would bust the (at the time) 250MB limit (compressed). I'm guessing Lambda has since raised this limit, but even still. Go binaries with as many (direct and transitive) dependencies would yield an artifact that was two orders of magnitude smaller and it would include the standard library and runtime. This also meant that our Docker images could be a lot smaller which has a whole bunch of other benefits as discussed here: https://news.ycombinator.com/item?id=30209023.
With respect to performance, beyond the obvious problems, it also made tests far slower, and slow tests (and slow test suites) get run less frequently and later in the software development lifecycle which has a lot of knock-on effects (especially in a language that relies so heavily on tests rather than static analysis to catch bugs). We would also have to go through and purge tests or move them out of band of CI because we wanted to be able to do Continuous Deployment (and even if you don't want to be able to continuously deploy to production, being able to rapidly iterate in lower/ephemeral environments is really nice--getting high-fidelity local dev environments is either futile or very close thereto).
It's really surprising how many advantages Go confers (at least relative to Python) beyond those which are obvious from the marketing.
Your experience with Python perfectly reflects mine from when I used it to build large-scale applications.
Except for a few niche areas such as ML, DS, algorithm prototyping, or archaic fintech systems that expect Python code as input, I think Go is a perfect replacement for Python.
Whenever this topic comes up, people bring up the beginner-friendly web frameworks in Python, which I admit has some merit, but Go as a language is beginner friendly 'enough', and not having to rely on large monolithic third-party frameworks is one of its key advantages.
> Tiny output binaries are also very quick to transfer and spin up in serverless, often reducing costs further.
Your point is specifically about the cloud, so maybe I shouldn't use Go on the edge? In fact, I'm struggling with binary size right now (and have only just started with Go after decades of C, for reasons beyond my control).
With Go 1.17 the resulting executable (running on ARM) is 11MB (stripped, ldflags -w -s). I can run upx --brute and reduce it to 3MB. But that is still heavy when another Lua-based application (a complete Node-RED clone, with massive complexity) with 100x as many lines of code (before importing external packages) is still just short of half a MB. (Apples & oranges, yes...)
Maybe I shouldn't use Go on edge devices with such poor hardware?
I heard about TinyGo, but looking at their website I could only find instructions on how to cross-compile for specific boards, not for specific HW (like ARMv5 softfloat).
That said, I totally love the free stuff I get out of the box with Go, while it also feels very familiar coming from C. The only thing I dislike is that it's not C :P
If you're coming from C, I understand that 11MB seems like a lot to you. I've done quite a bit of serverless Node.js; a relatively slim Docker image is always a few hundred MB and can easily stretch to a GB. However, I don't consider using C a viable alternative for that purpose.
What's cool about Cloudflare Workers, for example, is that they provide the V8 runtime and you just provide the code, which, as with Lua, will have a very small footprint.
I'm curious, as a C developer, would you say that you prefer Go over Rust? I've used both, and in my opinion Rust feels closer to C/C++ in terms of the control and flexibility the language gives you. To me (from a web service point of view, not systems programming), Go is just a faster (and better designed) pre-compiled version of JS/Node: it may look like C in its simplicity (you end up having to write a lot of repetitive code), but it still packs a garbage collector and an (albeit very fast!) runtime under the hood.
I've written a lot of C code and to me Go is like a high level C with some sharp corners removed and a GC. Maybe I don't see the limitations in Go as a problem because of this. Have done quite a bit of C++ as well. I prefer simpler languages that I can keep in my head and like C and Go for that reason.
Do I prefer it to Rust? I've not bothered to learn Rust yet. I think I would like it better than C++, but it's a bit large for my taste. I like to see the language level as an abstraction level that I want to be able to keep entirely in my head. I fear that Rust (just like C++) may have too many features for that.
> I'm curious, as a C developer, would you say that you prefer Go over Rust?
Not the guy you asked, but coming from a similar background: Yes, I absolutely would.
Reason: Rust is an amazing language: Amazingly safe, amazingly fast, amazingly complicated. And the last isn't an advantage but a huge showstopper. What do I get from Rust over Go? A bit more performance. Okay, but I also get that from C, so the real question is, what do I get from Rust over C? Memory safety guaranteed by the compiler. Cool. But in exchange, I have to learn a language easily as complex as C++, but completely different from C.
No thank you.
Bottom Line: For me, the advantages of Rust over C do not justify the added complexity, not even close.
> If you're coming from C, I understand that 11MB seems like a lot to you.
My issue is that the environment this thing is supposed to run on only has 30MB disk space.
> would you say that you prefer Go over Rust?
My answer is similar to the sibling comments'. What stops me from using Rust the way it's intended is 100% the learning curve. Unless I get paid for the time I invest (on the job), I would have to spend my evenings or weekends gaining experience in yet another system/language or whatever. I pretty much did that during the first 20 years in this profession. I get that type safety is a massive advantage, so hopefully I'll get thrown onto a Rust project eventually and then I'll muddle through :).
I'm not a fan of C++ especially for embedded. Nobody forces one to use exceptions but reality is, if I start an embedded project in C++, then the minute I don't pay attention somebody will end up refactoring and go overboard with templates or exceptions and then my flamegraphs look like the fires of hell.
2 things that I would love to get good at are Zig and Nim (in addition to Go where I'm far from an expert yet).
> If you're coming from C, I understand that 11MB seems like a lot to you. I've done quite a bit of serverless Node.js; a relatively slim Docker image is always a few hundred MB and can easily stretch to a GB. However, I don't consider using C a viable alternative for that purpose.
As someone who's worked in a number of stacks with higher abstraction levels, I can more or less confirm this. When you add containers into the mix, it gets much, much worse file-size wise, even if I think the increased reproducibility is definitely worth it in many types of deployments - mostly because file sizes have never been a primary concern for me, but misconfiguration and inconsistent environments have.
Nowadays I manage to keep all of my apps under 500 MB when shipping full-sized containers, and typically they're in the 200-300 MB range: apps in Java (which need a JDK), apps in Ruby (typically with the default interpreter), some .NET ones (at least Core, or .NET 6), some with Python (CPython), Node.js, etc. Of course, using Debian or Alpine for container base images is also a pretty good choice with regard to file sizes!
Of course, the beauty of all of it is that you can take advantage of layers - if you base everything on the same old container image between any two releases (or maybe have an intermediate layer for updates which may change) and only your ~50 MB .jar app changes for the most part, you'll be able to just ship those layers instead of shipping the base image all over, at least in most Docker registry implementations.
I guess that just goes to show how much variety and difference in opinions/concerns there is out there! I think GraalVM and other technologies (project Jigsaw in Java comes to mind IIRC) have the promise of even smaller deployments with stripped down runtimes even for these higher abstraction level stacks, but that's hard to do due to how dynamic they can be (e.g. class loading, reflection etc.).
From where I stand, Go is amazing in that regard, as are lower abstraction level languages.
> But that is still heavy when another Lua-based application [...] with 100x as many lines of code (before importing external packages) is still just short of half a MB. (Apples & oranges, yes...)
Apples and oranges indeed; a Go binary is a fully self-contained, external-runtime-less application. That said, I can picture (in my head, don't expect me to build it) an edge/lambda environment that only takes your codebase and will run/feel like a script more so than a complete binary.
I'm willing to bet money that if you add the libraries + runtime to the Lua application, you'd end up with more than 11MB.
Anyway, I'm trying to keep neutral; if your binary size is that important then staying with C may be the way to go. It's a compromise between binary size, language features, 'safety' (e.g. memory management is easier in Go thanks to auto zeroed variables and garbage collection), concurrency (not as much an issue in lambdas), etc.
A compiled Lua interpreter and runtime is pretty slim, but we're glossing over the reality that Lua is a drastically less capable language than golang, much in the way Lua is less capable than C.
I think to each their own; it is possible to have a Lua app compile down, including the interpreter, to sub-megabyte size, but a 10k LOC Lua app could probably be a 200 LOC golang app with some libraries pulled in.
I shudder thinking about the development cost of building and maintaining a Lua codebase, but if you had 100 million client devices running your code (say a Wi-Fi lightbulb) then it might make sense to save on the client device and put that money into developing something very bespoke to optimize it.
To me Go feels like a natural progression of C, and I can often see the C in it when I write it.
That said, my experience with Go on ARM with low memory and CPU power hasn't been the same as with my cloud services. I had to resort to ugly GC hacks and even preemptively killing the application at a memory threshold.
But that's just my experience; I wouldn't be surprised if there are highly optimized Go apps running successfully on low-memory edge devices. I don't think Go can replace C for embedded programming; perhaps newer systems programming languages like Rust and Zig are the way to go.
I used to have Netdata on my edge devices; it would trip on its own Go plugin's memory usage while the plugins written in Node.js and Python ran fine, likely because of unabated use of goroutines.
Go's strength is its network stack; it's been years since I've touched Apache/nginx, as deploying a Go binary and setting up a systemd unit is all that's required. For non-network applications on the edge, especially on low-memory devices, I think Go is not ready yet.
It's almost never worth using binary packers - every saving in on-disk size is lost on extra memory usage and extra cpu usage on starting up; it also interferes with memory management features (a binary typically takes up no writeable swappable memory - a packed one does).
You can make notable compression gains by using a simple 7z self-extracting PE file on your already-static golang binary. In some cases this can yield another 10-50% savings in binary size. For environments where fresh startup time is important, I would skip that, as it does add some unnecessary overhead before your app comes to life. If you need something smaller than that, you're looking at writing C and building for your target arch.
If you're thinking about an edge where you have less than 3MB of storage space, and likely very limited memory, you'd probably still be better off going with C. These kinds of environments are becoming a very small percentage of use cases over time. Usually such resource-constrained devices and systems have their own build system or crutch libraries built in to help reduce binary sizes to this scale. It is against golang's design principle to make a language that requires external frameworks/libraries to run. As it is, you can build your golang app, send it over an ssh tunnel, and have it running on the remote system without much thought at all.
Comparing the size of executables is basically useless. To be honest in a comparison between applications, you have to compare memory footprint and execution speed, and the applications should address problems of roughly the same complexity.
The serverless model as implemented in AWS Lambda is not faster or cheaper. A single Node.js process with Fastify or hapi can handle hundreds if not thousands of requests per second. In Lambda, to handle 1000 requests per second, assuming a single request takes 100ms, you need 100 instances of the Lambda running. Serverless is only cheaper if your application sees very little traffic and somehow fits into Lambda's very limited feature set. Otherwise, you still need to pay extra for provisioned concurrency and spend engineering resources working around Lambda's limitations.
What often happens is that people build their own framework on top of serverless instead of just using Rails, Django, Phoenix, etc.
All language stacks get faster over time and redeploying with a new version can improve performance and save costs. Nothing exclusive to Golang about that.
Edit - I'm curious as to what is so controversial about this comment? What stack has gotten slower over time?
Python 3 was slower than the preceding Python 2.x versions, I believe; it's getting more difficult to find sources now (it's been a while), but there are plenty of resources that showed Python 3 had performance regressions.
Python 3 is the most striking example but it's unfortunately common in Python. Generators were originally slower than list comps, so after their introduction some modules got slower as they switched internally. super was slower than explicit parent calls, and some modules switched even if they didn't immediately need the fancier MRO. True and False were slightly faster when they were alternate names for 1/0 rather than a different type due to various fast-paths for int handling. I think before 2.3 old-style classes were still faster than new-style classes in some cases, but my memory is pretty fuzzy.
It's not so much about it getting faster over time, it's about it NEVER getting slower, and never breaking.
That is something you want to have in your stack when you want to stay on latest and not get stuck on a rewrite 6 years down the road because it's TCO-slow and incompatible with the latest-and-greatest version that is finally fast.
Look no further than python and node to find languages that have notable performance regressions on new releases over time.
No popular language is interpreted these days outside of some specific domains where you might have an embedded scripting environment. But for all the major runtimes, even the JIT languages are now compiled.
This has been true for as long as Go has existed too.
Er. Python is one of, if not the most popular languages on the planet, and its official/primary implementation is only just now moving towards JIT, having been interpreted for the previous 3 decades.
It’s still compiled to byte code. It’s not been interpreted in decades.
I’ll accept that the definition of “interpreter” is a little more blurred these days now that pretty much all JIT languages compile to byte code. But I’d argue that the byte code is typically closer to machine code than it is source and thus the only languages that are common place and truly still fully interpreted are shell scripts (like Bash).
>>> Go will generally be vastly faster than interpreted languages regardless of versions.
Python is clearly in the latter group.
As for,
> byte code is typically closer to machine code than it is source
I would very much like to see some evidence; last I'd seen, it was basically tokenized source code and that's it, still nowhere near anything resembling binary executable code.
It’s compiled to a stack machine. Exactly like how a lot of AOT compiled languages originally used byte code too. Oddly enough they were never called “interpreted” languages. Which means the difference here isn’t the tech stack / compilation process but rather at what stage the compiler is invoked.
> But it seems people have warped “interpreted” to mean JIT to compensate for the advancements in scripting runtimes. That is a bastardisation of the term in my opinion.
Python's not JIT, either. It reads bytecode - which AFAIK is just the source code but tokenized - and it runs it, one operation at a time. It doesn't compile anything to native CPU instructions.
That’s the 2nd time you’ve posted that and it wasn’t right the first time you said it either.
CPython’s virtual machine is stack based thus the byte code is more than just tokenised source code.
In fact there’d be very little point just tokenising the source code and interpreting that because you get no performance benefit over that vs running straight off the source. Whereas compiling to a stack machine does allow you to make stronger assertions about the runtime.
One could argue that the byte code in the VM is interpreted but one could also argue that instructions in a Windows PE or Linux ELF are interpreted too. However normal people don’t say that. Normal people define “interpreted” as languages that execute from source. CPython doesn’t do this, it compiles to byte code that is executed on a stack machine.
Hence why I keep saying the term “interpreted” is misused these days.
Or to put it another way, Visual Basic and Java behaved similarly in the 90s. They compiled to P-Code/byte code that would execute inside a virtual machine and at that time the pseudo code (as some called it - not to be confused with human readable pseudo code but technically “pseudo” is what the “P” stands for in “P-code”) was interpreted instruction by instruction inside a stack machine.
Those languages were not classed as “interpreted”.
The only distinction between them and CPython is that they were AOT and CPython is JIT. And now we are back to my point about how you’re conflating “interpretation” with “JIT”.
>The only distinction between them and CPython is that they were AOT and CPython is JIT. And now we are back to my point about how you’re conflating “interpretation” with “JIT”.
Talking about conflating though, AOT and JIT mean different things in a programming context...
I’m not conflating the terms AOT and JIT. I’m using examples from how AOT compilers work to illustrate how modern JIT compilers might have passes that are described as an interpreter but that doesn’t make the language an interpreted language.
Ie many languages are still called “interpreted” despite the fact that their compiler more or less functions exactly the same as many “compiled languages” except rather than being invoked by the developer and the byte code shipped, it’s invoked by the user with the source shipped. But the underlying compiler tech is roughly the same (ie the language is compiled and not interpreted).
Thus the reason people call (for example) Python and JavaScript “interpreted” is outdated habit rather than technical accuracy.
Edit:
Let’s phrase this a different way. The context is “what is an interpreted language?”
Python compiles to byte code that runs on a stack machine. That stack machine might be a VM that offers an abstraction between the host and Python but none the less it’s still a new form of code. Much like you could compile C to machine code and you no longer have C. Or Nim to C. Or C to WASM. In every instance you’re compiling from one language to another (using the term “language in a looser sense here”).
Now you could argue that the byte code is an interpreted language, and in the case of CPython that is definitely true. But that doesn’t extend the definition backwards to Python.
The reason I cite that the definition cannot be extended backwards is that we already have a precedent of that not happening with languages like Java (at least with regards to its 90s implementation; I understand things have evolved since, but I haven't poked at the JVM internals for a while).
So what is the difference between Java and Python to make this weird double standard?
The difference is (or rather was) just that JIT languages like Python used to be fully interpreted and are thus lazily still referred to that way, whereas AOT languages like Java were often lumped in the same category as C (I'm not saying their compilation process is equivalent, because clearly it's not, but colloquially people too often lump them in the same group due to both being AOT).
Hence why I make comparisons to some AOT languages when demonstrating how JIT compilers are similar. And hence why I make the statement that aside from shell scripting, no popular language is interpreted these days. It’s just too damn slow and compilers are fast so it makes logical sense to compile to byte code (even machine code in some instances) and have that assembled language interpreted instead.
Personally (as someone who writes compilers for fun) I think this distinction is pretty obvious and very important to make. But it seems to have thrown a lot of people.
So to summarise: Python isn't an interpreted language these days. Though its runtime does have an interpretation stage, it's not interpreting Python source. However, this is also true for some languages we don't colloquially call "interpreted".
That's a property of the JIT compiler though, not a lack of compilation. You want to keep compile times low, so you only analyse functions on demand (and cache the byte code).
If CPython behaved identically to Java's compiler, people would moan about start-up times.
Some AOT languages can mimic this behaviour too with hot loading code. Though a lot of them might still perform some syntax analysis first given that’s an expectation. (For what it’s worth, some “scripting languages” can do a complete check on the source inc unused functions. Eg there’s an optional flag to do this in Perl 5).
I will concede that things are a lot more nuanced than I perhaps gave credit for, though.
You are being confidently incorrect and making sweeping generalizations and assumptions. The vast majority of anything Javascript is still being interpreted (or as you mentioned going through a JIT interpreter), which represents all browsers and a lot of apps.
You'd have more luck if you'd used Bash scripting as your example :P
If you want to be pedantic, then these days even the languages that were considered "interpreted" have had their interpreters rewritten to produce byte code that is generally closer to machine code than it is to the original source. So they've grown beyond the original definition of "interpreted" and evolved into something much closer to compiled languages.
So I think it's disingenuous to still call them "interpreted". And in the case of JavaScript (your example), the largest runtime in use is most definitely a compiler, considering it spits out machine code. So that's definitely not interpreted as you've claimed.
It contains a byte code compiler. The point is that that is one of the stages of the compiler, rather than the byte code being interpreted at runtime.
In the link I posted:
> V8 first generates an abstract syntax tree with its own parser.[12] Then, Ignition generates bytecode from this syntax tree using the internal V8 bytecode format.[13] TurboFan compiles this bytecode into machine code. In other words, V8 compiles ECMAScript directly to native machine code using just-in-time compilation before executing it.[14] The compiled code is additionally optimized (and re-optimized) dynamically at runtime, based on heuristics of the code's execution profile. Optimization techniques used include inlining, elision of expensive runtime properties, and inline caching. The garbage collector is a generational incremental collector.[15]
Emphasis mine.
So no, it is not an interpreter. It is definitely a compiler.
It seems people today are confusing “interpreter” with “just in time”…
You're right, ignition does generate bytecode from the AST; it also interprets it.
> With Ignition, V8 compiles JavaScript functions to a concise bytecode, which is between 50% to 25% the size of the equivalent baseline machine code. This bytecode is then executed by a high-performance interpreter which yields execution speeds on real-world websites close to those of code generated by V8’s existing baseline compiler.
You can also find this information on the Wikipedia article you linked to
> In 2016, the Ignition interpreter was added to V8 with the design goal of reducing the memory usage on small memory Android phones in comparison with TurboFan and Crankshaft. Ignition is a Register based machine and shares a similar (albeit not the exact same) design to the Templating Interpreter utilized by HotSpot.
> In 2017, V8 shipped a brand-new compiler pipeline, consisting of Ignition (the interpreter) and TurboFan (the optimizing compiler).
Here lies the problem. Just because a component of the v8 compiler is called an “interpreter” it doesn’t mean that JavaScript (via v8) is an interpreted language.
Which is the point I’m making. Back in the 80s and 90s scripting languages often had no compilation stage aside maybe building an AST and were pretty much 100% interpreted. Anything that ran from byte code was considered compiled (like BASIC vs Java, VB6). Interpreters often ran line by line too.
These days most scripting languages that were traditionally interpreted languages run more like Java except compiled JIT rather than AOT.
But it seems people have warped “interpreted” to mean JIT to compensate for the advancements in scripting runtimes. That is a bastardisation of the term in my opinion. But when you look through this thread you’ll see the same error repeated over and over.
Go isn’t run through an optimising runtime either. Plenty of compiled languages aren’t. However Python code is still transpiled to stack based byte code that runs on a virtual machine.
If you want to get pedantic, then that byte code is interpreted, but frankly where do you draw the line? Are retro console emulators interpreters, since they translate instructions from one architecture to another in much the same way? Technically yes, but we don't like to describe them that way.
This is why I keep saying term “interpreted language” used to mean something quite specific in the 80s and 90s but these days it’s pretty much just slang for “JIT”.
> "There is a limit to optimization, and that ends with optimized machine code."
There's a vast amount of concepts when it comes to "optimization" and we are very far from the limit. There's still tons of research just to improve compiler intelligence for high-level code intention, let alone in runtime performance.
> "Go being a compiled language"
All code is eventually machine code. That's the only way a CPU can execute it. Virtual machines are an implementation detail and the latest versions with dynamic tiered JITers, POGO, vectorization, and other techniques can match or exceed static compilation because they optimize based on actual runtime behavior instead of assumptions; balancing memory safety, fast startup, amortized optimization, and steady state throughput.
> "As far as cloud costs are concerned, a well developed program should cost least with go."
This is meaningless without context. I've built 3 ad exchanges running billions of requests per day with C#/.NET which rivaled other stacks, including C++. I also know several fintech companies that have Java in their performance critical sections. It seems you're working with outdated and limited context of languages and scenarios.
> There's a vast amount of concepts when it comes to "optimization" and we are very far from the limit. There's still tons of research just to improve compiler intelligence for high-level code intention, let alone in runtime performance.
True. I agree. But take a program (add two numbers and print the result) and implement it in C / Go / Python / C#. You will find that the most optimized program in each language generates different output, machine-code wise.
While C and Go will generate binaries with X processor instructions, Python and C# will generate more than X.
And there, you have the crux of the issue. Python & C# require runtimes. And those runtimes will have an overhead.
> Virtual machines are an implementation detail
Sorry, I think you have the wrong idea. A VM is an implementation detail that gets carried to the execution. A VM is the runtime that runs on top of the original runtime(tm) that is the OS. A Go / C program will run directly on the OS. Adding a layer of runtime means reduced performance.
> It seems you're working with outdated and limited context of languages and scenarios.
That just reeks of arrogance.
My point, and the point made by the original comment author, is not that C# / Java / Python cannot run billions of requests; it's that when you compare cloud charges for running those billions of requests, costs will be lower for programs produced by Go / C (and C++ too).
> "Python & C# require runtimes. And those runtimes will have an overhead."
Golang also has a runtime with memory safety, garbage collection, and threading. It's just linked into the native executable and runs alongside your code without a VM, but it's still present.
> "That just reeks of arrogance... costs will be less for programs produced by Go / C (and C++ too)"
You claimed that costs would be the least with Go without any other context. This is neither related to my point (that all stacks get faster over time) nor accurate since other languages are used in similar environments delivering the same or better performance and costs. Again this comes down to "optimization" being far broader than just the language or compilation process.
> Sorry, I think you have the wrong idea. A VM is an implementation detail that gets carried to the execution. A VM is the runtime that runs on top of the original runtime(tm) that is the OS. A Go / C program will run directly on the OS. Adding a layer of runtime means reduced performance.
After the bytecode has been JIT compiled it runs as close to the metal as a Go program. If you want fast startup you can even compile the code ahead of time and there is absolutely no difference. Apart from .NET not insisting on its own calling convention, which means it has far less overhead when interacting with external libraries than Go.
This whole thread is childish and pointless. Literally 14 year old me would have recognised these arguments and relished getting stuck into them.
I've seen a stack built almost entirely in TCL which handled 10,000 transactions per second for a FTSE 100 company. If that's possible then language is just one choice in the infinite possibilities of production stacks.
> C program will run directly on the OS. Adding a layer of runtime means reduced performance.
A C program that isn't compiled in an embedded context has a runtime layer on top of it. The OS doesn't call your main; it runs the init function of the platform's C runtime to initialize all the bloat and indirection that comes with C. Likewise, your compiled C program doesn't just execute the native sqrt instruction but runs a wrapper that sets all the state C expects, like the globally visible errno value no one ever checks but some long-dead 1980s UNIX guru insisted on. C is portable because, just like Python and C#, it too is specified on top of an abstract machine with properties not actually present in hardware. If you really set out to optimize a C program at the low level, you immediately run into all the nonsense the C standard is up to.
JIT compilers run machine code without any overhead, when it comes to actual execution. And the overhead of the compilation can be pushed to another thread, cached, etc.
> Virtual machines are an implementation detail and the latest versions with dynamic tiered JITers, POGO, vectorization, and other techniques can match or exceed static compilation because they optimize based on actual runtime behavior instead of assumptions.
In theory, practice is like theory. In practice, it isn't.
Maybe in 20 years, JITs can provide AOT-like throughput without significant memory / power consumption. Not with current tech, except on cherry picked benchmarks.
Note that Akka is currently second for single core, and fastest with multicore. .NET has a nice result, but let's use up-to-date results, especially when they still fit your argument that JIT can compete in benchmarks given suitable warmup, etc.
In addition, Go is garbage collected, so you have latency there as well. However, most Go apps don't use enough memory for the garbage collector to become an issue.
Rust does not have a GC, but it's a much more complex language, which causes its own problems. IMHO, Go has the upper hand in simplicity. However, Go also limits you a little, because you can't write memory-unsafe code by default. You can't do pointer magic like you do in C. But things also get complicated in Go when you start to use concurrency. Again, IMHO, event-driven concurrency with callback functions (as in Node.js) is easier to wrap your head around than channels.
I don't think this comment deserves being downvoted much. It states an important issue: in simplicity Go might win, but practically you might be better off learning Rust and having the compiler warn you about issues with your concurrency usage for any non-trivial problem.
I don't agree with the "callback functions are easier" part at all though. It takes a bit of wrapping one's head around channels and so on, but once understood, I think they are easier to handle than all those callbacks.
> But things also get complicated in Go when you start to use concurrency.
Things always get complicated when concurrency is involved. And Go handles this much better than most other languages, with language primitives (go, select, chan) dedicated to the task, and CSP being much easier to handle than memory sharing.
> event driven concurrency with callback functions (as in nodejs) is easier to wrap your head around then channels.
How so?
The code becomes non-linear, what bit of code is executed when and how it's synchronized is suddenly up to the event loop, and entire libraries suddenly have to be written around a concept (cooperative multitasking) which OS design abandoned in the early 90s for good reason.
And this isn't even taking into account that the whole show is now much harder to scale horizontally, or that it is only suitable for io intensive tasks.
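For what it's worth, here's a minimal sketch of the kind of IO fan-out being discussed, written with the go/chan/select primitives mentioned above; the URLs and the timeout are just placeholders:

    package main

    import (
        "fmt"
        "net/http"
        "time"
    )

    type result struct {
        url    string
        status string
    }

    func main() {
        // Placeholder URLs; swap in whatever IO you actually need to fan out.
        urls := []string{"https://example.com", "https://example.org"}

        results := make(chan result)
        for _, u := range urls {
            go func(u string) { // one goroutine per request
                resp, err := http.Get(u)
                if err != nil {
                    results <- result{u, err.Error()}
                    return
                }
                resp.Body.Close()
                results <- result{u, resp.Status}
            }(u)
        }

        // Collect each result as it arrives, or give up after a deadline.
        deadline := time.After(5 * time.Second)
        for range urls {
            select {
            case r := <-results:
                fmt.Println(r.url, r.status)
            case <-deadline:
                fmt.Println("timed out waiting for remaining responses")
                return
            }
        }
    }

Each request reads linearly inside its own goroutine, and the select at the bottom is the only place synchronization happens, which is the part callbacks tend to smear across the whole codebase.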
From a self-taught programmer's perspective, you mostly write concurrent code to speed up IO. Writing code with callbacks gives me the illusion of linearity. I know it's prone to errors, but for simple tasks it's good enough. That's the reason for Node.js's popularity, I think.
CSP is a brilliant concept, but it was a bit hard for me to wrap my head around without diving into graduate level computer science topics. This is not Go's fault, concurrency itself is complicated.
Whenever I try to write concurrent code with Go, I fall back to using locks, waitgroups, etc. I haven't been able to make the switch from thinking linearly to thinking concurrently yet.
I might be downvoted again :), but actor based languages seem much more suitable for concurrency for me.
Again, I've been out of programming for a while; the last time I coded, it was a pain to do network requests concurrently in Rust due to needing intrusive libraries. Python was hard because the whole syntax changed a couple of times and you had to find the right documentation for the right version, etc. Go was surprisingly easy, but as I usually plan the program as I write it, trying to write non-trivial concurrent code always results in unexpected behaviour or panics.
The usual material you find on concurrency and parallelism teaches you locks/semaphores using Java; these are easier to wrap your head around. Last I checked, material on CSP was limited to graduate-level computer science books and articles.
I originally got into Go because I thought it was memory safe, the code I write would be bulletproof, etc. Then I understood that the Achilles heel of Go (or any procedural-at-heart programming language) was dynamic memory allocation paired with concurrency.
This is my view as a self-taught programmer having no formal training in computer science. YMMV
Millions-per-second of what? That's a serious load for any stack and depends on far more than just the language.
And it can also be done by other stacks as clearly seen in the TechEmpower benchmarks. Here's .NET and Java doing 7 million HTTP requests/second and maxing out the network card throughput: https://www.techempower.com/benchmarks/#section=data-r20&hw=...
> Optimized native like C++/Rust could handle millions per second without blinking an eye
Keyword: optimized. Almost anything we have now would handle as much as "native optimized".
And no, neither C++ nor Rust would handle millions (multiple) without blinking because the underlying OS wouldn't let you without some heavy, heavy optimizations at the OS layer.
> Languages like Python, Java, C#, are run on virtual machines, meaning you can optimize them only so far.
A virtual machine is just an intermediate language specification. C, C++, and Rust also "run on a virtual machine" (LLVM). But I guess you mean the VM runtime, which includes two main components: a GC, that the Go runtime also includes, and a JIT compiler. The reason Java gives you better performance than Go (after warmup) is that its GC is more advanced than Go's GC, and its JIT compiler is more advanced than Go's compiler. The fact that there's an intermediate compilation into a virtual machine neither improves nor harms performance in any way. The use of a JIT rather than an AOT compiler does imply a need for warmup, but that's not required by a VM (e.g. LLVM normally uses AOT compilation, and only rarely a JIT).
AOT vs JIT compilation is not really that important of a difference to steady state performance. It does impose other performance costs though.
For example, C# and, even more so, Java, are poorly suited to AWS Lambda-style or CLI style commands, since you pay the cost of JIT at every program start-up.
On the other hand, if you have a long-running service, Java and .NET will usually blow Go out of the water performance wise, since Go is ultimately a memory managed language with a very basic GC, a pretty heavy runtime, and a compiler that puts a lot more emphasis on compilation speed than advanced optimizations.
Go's lack of generic data structures, and particularly its lack of covariant (read-only) lists, can lead to a lot of unnecessary copying if you're not specifically writing for performance: if you have a function that takes a []InterfaceA, but you have a []TypeThatImplementsInterfaceA, you have to copy your entire slice to call that function, since slices are (correctly) not covariant, and there is no read-only slice alternative that could be covariant.
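To make the copying point concrete, here's a minimal sketch; the Animal/Dog names are made up for illustration:

    package main

    import "fmt"

    type Animal interface{ Name() string }

    type Dog struct{ name string }

    func (d Dog) Name() string { return d.name }

    // printAll only reads from the slice, but Go has no read-only,
    // covariant slice type that would let a []Dog be passed in directly.
    func printAll(animals []Animal) {
        for _, a := range animals {
            fmt.Println(a.Name())
        }
    }

    func main() {
        dogs := []Dog{{"rex"}, {"fido"}}

        // The element-by-element copy is what the parent comment describes.
        animals := make([]Animal, len(dogs))
        for i, d := range dogs {
            animals[i] = d
        }
        printAll(animals)
    }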
So Go's runtime is pretty heavy in comparison to... .NET and the JVM? Or pretty heavy in general? This is the first I would be hearing either of these two opinions in either case, so interested in the rationale.
It's pretty heavy compared to C and C++, not .NET or JVM.
The point is that Go, even if AOT compiled, is much nearer to .NET and JVM (and often slower than those two) than to C or C++.
To make that last point clearer: it's generally slower than C or C++ because the compiler does not do very advanced optimizations, the runtime is significantly heavier than theirs, and GC use has certain unavoidable costs for certain workloads.
Go is generally slower than .NET or Java because its AOT compiler is not as advanced as their JIT compilers, its GC is not tunable and is a much more primitive design, and certain common Go patterns are known to sacrifice speed for ease of use (particularly channels).
> There is a limit to optimization, and that ends with optimized machine code.
There might be a low limit, but there’s no high limit: you can generate completely unoptimised machine code, and that’s much closer to what Go’s compiler does.
> Go being a compiled language, that is cross platform but separate binaries, should be very close to C in terms of performance.
That’s assuming that Go’s optimisation pipeline is anywhere near what mainstream C compilers provide. Against all evidence.
Just look at the .NET Framework 4 to .NET Core transition; you can dig through the MSDN blogs. While Go should be lauded for its performance improvements, that should not be done by diminishing other languages'/stacks' achievements.
While that’s what they’d teach you in CS it’s missing enough nuance to be incorrect.
> There is a limit to optimization, and that ends with optimized machine code.
While that's true, you overlook the cost of language abstractions. I don't mean runtime decisions like the use of byte code, but rather language design decisions like support for garbage collection, macros, use of generics and/or reflection, green threads vs POSIX threads, and even basic features like bounds checking, how strings are modelled, and arrays vs slices.
These will all directly impact just how far you can optimise the machine code because they are abstractions that have a direct impact to the machine code that is generated.
> Go being a compiled language, that is cross platform but separate binaries, should be very close to C in terms of performance.
No it shouldn't. C doesn't have GC, bounds checking, green threads, strings, slices, or any of the other nice things that make life a little easier as a modern developer (I say this not as a criticism of C; I do like that language a lot, but it suits a different domain to Go).
> Languages like Python, Java, C#, are run on virtual machines, meaning you can optimize them only so far.
Aside from CPython's runtime, their instructions still compile to machine code.
These days no popular language, not even scripting ones, interprets byte code upon execution. [edit: I posted that before drinking coffee and now realise it was a gross exaggeration and ultimately untrue.]
You’ll find a lot of these languages compile their instructions to machine code. Some might use an intermediate virtual machine with byte code but in a lot of cases that’s just part of the compiler pass. Some might be just in time (JIT) compiled, but in many cases they’re still compiled to machine code.
> As far as cloud costs are concerned, a well developed program should cost least with go.
> Go is best suited for running intensive compute loads in serverless environments.
That’s a hugely generalised statement and the reality is that it just depends on your workloads.
Want machine learning? Well then Go isn't the best language. Have lots of small queries and need to rely on a lower start-up time because execution time is always going to be short? Well, Node will often prove superior.
Don’t get me wrong, Go has its place too. But hugely generalised statements seldom work generally in IT.
Yeah, now I think about it, plenty of runtimes do. I have no idea why I posted that when I know it’s not true. Must have been related to the time of the morning and the lack of coffee.
> Languages like Python, Java, C#, are run on virtual machines, meaning you can optimize them only so far.
A JIT compiler can optimize _more_ than an AOT one can. It has way more profile guided data and can make assumptions based on how the live application is being used.
> Tiny output binaries are also very quick to transfer and spin up in serverless
This is a big deal when using servers too.
If you have a single server deploy that could be the difference of 1 second of downtime with Go vs 5-15 seconds for an app in a different language to start up.
For multi-server deploys this could also drastically impact how long it takes to do a rolling update. Having a 10 second delay across N servers adds up.
For me it hasn't been enough to seriously consider switching away from Python and Ruby for building web apps, because I like the web frameworks these languages have available for productivity, but it has for sure made me wish I could live in a world where any app takes 1 second to start up. I know Docker helps with being able to ship around essentially a single binary, but it can't change how long it takes a process to start.
For environments where build is included in deployment (Not best practice at all, but sadly common) it can be very pronounced.
Don't forget that CI systems increasingly bill based on memory/cycles/time, and having a nice lightweight build/artifact is very much appreciated there.
Yes, and speaking of CI there's also linting and formatting code. In large code bases with certain languages it can take a substantial amount of time to do this step (tens of seconds).
Being able to zip through linting, formatting, building, running, testing and pushing a binary / Docker image in 1 minute vs 10 minutes is a massive win -- especially when amplified over a number of developers.
I think it depends on personal mental models. I found rust easier to get to grips with because it was more straightforward for the way I thought, whereas with Go I struggled with a lot of things that felt funky to me.
I fully agree that Go is a simpler language up front though at the syntax level.
I found C++ template meta-programming easier to learn than Rust. The only thing new to learn in Go was goroutines. Follow that with a scan of Effective Go and a single page of "gotchas", and one can immediately churn out software. I've never found a language/stdlib so easy to start coding in.
Could you share which things those were? I mean error handling is a common issue people first raise an eyebrow at, but other than that I've found it a really straightforward language to use (coming from Java and JS myself, a bit of obj-c/swift, some scala in between).
Error handling is a big one for me. It just seems illogical that it's so easily skipped.
There's a lot of magic in the APIs. This blog post covered it but I hit quite a few of them. I get why they're appealing but I mentally feel the API is wrong which causes friction when I use them.
https://fasterthanli.me/articles/i-want-off-mr-golangs-wild-...
I find I don't agree with the package and versioning story either. Publicness based on casing, the way dependencies are declared in the source code.
It's lots of little things that, while I can see the appeal, they create friction with how I like to work.
My background is somewhat similar: Python, C++, ObjC, Swift, Java/Kotlin, JS, C#, GLSL, etc...
In particular, I've used some APIs in the past that were very similar to Go and been bitten by them. E.g., defining dependencies in the source quickly became an exercise in frustration when dealing with multiple packages needing updates for a shared runtime. It's not an issue people would hit when writing microservices or CLI tools, but it hit hard when working on graphics tools where everything runs in the same runtime. I know Go has ways around it, but I'm still not a fan.
I wonder if the point you're making about dependency conflicts is the diamond dependency problem. rsc wrote a very detailed post about the rationale here, and how Go versioning solves diamond dependencies, which you might find interesting: https://research.swtch.com/vgo-principles
I think you probably didn't spend enough time with Go; it is objectively much easier to learn because there is no borrow checker, no generics, no macros, etc.
I've already addressed that the language is simpler, but that doesn't always translate to easier to learn.
If the level of "learned a language" is just being able to write code then Go is easy. But writing an app and dealing with all of Go's oddities made it much more cumbersome for me.
Why compare a low-level language to a high-level one? Of course one will be more complex; the GC takes many of the details out of the picture, reducing the essential complexity of the problem domain.
I'll give you a candid scenario that we went through:
A nodejs app built with Express, receiving JSON events from 10M clients around the world on a very regular basis, pushing them to a queue and checking that the queue received them.
Extremely simple app: receive and parse JSON, do some very simple sanity checks on the schema, convert to BSON with a predefined mapping, and push to a queue (which happened to be Azure Event Hubs). To handle ~5BN events per month, peaking around 4,000 events/sec, it was using up to 20 Node instances at ~200-300MB of memory each, with the scale-out trigger set to 75% CPU... the 95th percentile was 20 cores and 12GB of RAM in a serverless environment, just for that one service. Add the base container overhead and it peaked at 16GB of memory. That's not nothing in a serverless world. If it were a VM, sure, not too bad, but we're talking elastic containers, and that service was built with 500% tolerance above the high watermark. We weren't about to provision two 48GB VMs in two AZs and worry about all the plumbing "just in case"; that is the point of going serverless.
Moved it to golang and it was handling 2,000 req/s on 1 core with 60MB of memory. It has never gone over 3 cores in the 2 years since it was moved.
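For a sense of what that kind of service looks like in Go, here's a rough sketch; the field names, route, and the publish stub standing in for the Event Hubs producer are all illustrative, not the real code:

    package main

    import (
        "encoding/json"
        "log"
        "net/http"
    )

    type event struct {
        DeviceID string  `json:"device_id"`
        Metric   string  `json:"metric"`
        Value    float64 `json:"value"`
    }

    // publish stands in for the real queue client (an Event Hubs producer
    // in the service described above) and its delivery confirmation.
    func publish(payload []byte) error {
        return nil
    }

    func handleEvent(w http.ResponseWriter, r *http.Request) {
        var ev event
        if err := json.NewDecoder(r.Body).Decode(&ev); err != nil {
            http.Error(w, "bad json", http.StatusBadRequest)
            return
        }
        // very simple sanity checks on the schema
        if ev.DeviceID == "" || ev.Metric == "" {
            http.Error(w, "missing fields", http.StatusBadRequest)
            return
        }
        // re-encode with a predefined mapping (BSON in the real service)
        out, err := json.Marshal(map[string]interface{}{"d": ev.DeviceID, "m": ev.Metric, "v": ev.Value})
        if err != nil {
            http.Error(w, "encode failed", http.StatusInternalServerError)
            return
        }
        if err := publish(out); err != nil {
            http.Error(w, "queue unavailable", http.StatusServiceUnavailable)
            return
        }
        w.WriteHeader(http.StatusAccepted)
    }

    func main() {
        // net/http serves each request on its own goroutine, which is most of
        // the reason one small instance can absorb thousands of requests/sec.
        http.HandleFunc("/events", handleEvent)
        log.Fatal(http.ListenAndServe(":8080", nil))
    }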
> Tiny output binaries are also very quick to transfer and spin up in serverless as well - often reducing costs further.
I think people underestimate the benefits of services that can start super quickly (IMHO, "cost" is nowhere near the most significant benefit!).
Fast restarts mean your service can respond very quickly to a scale-up event (which can make the difference between "things get a little slow for a moment" and "everything comes crashing down"). They also mean developers iterating in a cloud environment get faster feedback, which itself seems underrated. Further, when you push a change that takes down production, the rollback is much faster.
Of course, all of these things have various workarounds (e.g., canary deploys so you're less likely to take down prod in the first place), but if you can have 5MB images instead of 5+GB images you can go a lot further without needing to implement them. Small images are necessary but not sufficient for fast startup; in practice, though, the overwhelming majority of Go programs also start up quickly (although in a cloud context this can be wasted if they're deployed on bulky distro base images).
.NET 5 (and .NET Core before it) was pretty fast. The issue is that in serverless there's a fairly long cold start, since the runtime still has to be brought up, which is like a mini virtual machine. If you're sticking to Azure, they have a highly customized environment for it that, I'm pretty sure, saves the interpreted output rather than the source code.
I would highly recommend golang to old-school C# devs who are very comfortable with threading, thread safety, and a mix of high- and low-level code in the same codebase.
edit: The best Go devs I know have seasoned C or C# backgrounds.
Why? C# fits the bill for everything I want to do. It's fast enough, it has great IDEs, it runs almost everywhere and there are more UI frameworks than there are JS frameworks /jk ;).
I have almost 15 years of experience in it; why would I slow down my next projects by using a language I'm not familiar with (as long as C# fits the bill, of course)? Learning a new language isn't a hobby for me, it has to be useful to me (I learned Flutter because MS UI frameworks aren't a panacea).
Maybe I'm not following the thread (I'm not sure what "In what concerns .NET is wasn't really better" means), but what's the issue here? C#'s fastest framework beats Go's fastest by a marginal amount (60.1% vs 59.1%). That's probably within the error margin. Anyway, drawing sweeping performance conclusions from microbenchmarks is perilous--a high ceiling for an HTTP framework doesn't indicate anything about real-world performance for web applications (which are almost never bottlenecked on the HTTP framework).
The issue here is how many in the Go community like to boast about how fast Go is, only because it is the very first AOT-compiled language they have ever used; they usually come from scripting-language backgrounds and are influenced by the traditional Java/.NET hate, without ever having used them in anger, including their AOT tooling.
So then we get these kinds of assertions: Go compiles faster than any other AOT language, it doesn't have a runtime, its performance beats Java/.NET, and so on.
I’m sure if you searched hard you could find a few people in the Go community who say silly things like that, but (1) you could find these kinds of people in any community and (2) in the Go community these people are quickly corrected and (3) no one in this thread is making these sorts of arguments (as far as I can tell) so your comment seems wildly off-topic.
Note the difference between “Go is better for certain cloud applications than Python, NodeJS, etc” and “Go is better than all other languages, full stop”.
This says a lot more about the quality of C# vs. Go's protobuf code generator (which was already poor and now even worse - you're going to the trouble of codegen but then implement it using reflection?) than either language itself.
There are plenty of other examples one can reach for.
Go would lose in performance benchmarks that make heavy use of generics or SIMD code in .NET, because:
1) We all know the current state of generics in Go (a quick sketch follows this list)
2) Currently, most of the SIMD support in Go is basically "write your own assembly code in hex", because many of the opcodes aren't even supported by the Go assembler
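A rough illustration of point 1 (Max is just a toy helper): without generics, a reusable function either takes interface{} and pushes type assertions onto the caller, as here, or gets duplicated per type.

    package main

    import "fmt"

    // Max picks the largest element using a caller-supplied comparison;
    // the compiler can't check that the elements are actually ints.
    func Max(xs []interface{}, less func(a, b interface{}) bool) interface{} {
        best := xs[0]
        for _, x := range xs[1:] {
            if less(best, x) {
                best = x
            }
        }
        return best
    }

    func main() {
        xs := []interface{}{3, 1, 4, 1, 5}
        m := Max(xs, func(a, b interface{}) bool { return a.(int) < b.(int) })
        fmt.Println(m.(int)) // another assertion needed at the call site
    }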
Then there are other factors: since .NET 6 there is the possibility of using a JIT cache with PGO, and there is only so much a single AOT compilation can bring to the picture, especially since Go tooling isn't that big on making use of PGO data.
Or the fact that .NET was designed to support languages like C++, so there are enough knobs to turn if one really wants to go full speed.
Naturally, because hating Microsoft and their technologies is fashionable, few bother to actually learn, in detail, the extent of what ecosystems like .NET offer.
> Naturally, because hating Microsoft and their technologies is fashionable, few bother to actually learn, in detail, the extent of what ecosystems like .NET offer.
True. Those savings are significant only when volumes are large enough.
Also, developing from the ground up to target a serverless runtime means you can reach market faster with Python and other languages, then reduce cost by moving to Go or C over time.
I kind of have to agree with him there: it would be kind of trash if everyone just inlined assembly instructions. Manual SIMD already feels a bit like cheating. Ideally there should be two solution categories, idiomatic solutions and optimised solutions, and I would prefer the idiomatic solutions to match what the average programmer would write.
I think we worry about performance far too much when choosing a language. However, when performance is important, it's important to recognise the different classes of languages, say {Go, Rust, C} vs {Python, JS, Ruby} in terms of speed and memory footprint vs productivity.
No inlining of assembly is required to vastly improve Go's performance in many such benchmarks. Pool or arena allocation are commonly used techniques, and whether or not they are "idiomatic" isn't even up for discussion any more, since the stdlib includes the former (sync.Pool).
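A minimal sync.Pool sketch (the render helper is just an example): reuse buffers across calls instead of allocating a fresh one each time, which takes pressure off the GC in hot paths.

    package main

    import (
        "bytes"
        "fmt"
        "sync"
    )

    // bufPool hands out reusable *bytes.Buffer values.
    var bufPool = sync.Pool{
        New: func() interface{} { return new(bytes.Buffer) },
    }

    func render(name string) string {
        buf := bufPool.Get().(*bytes.Buffer)
        defer func() {
            buf.Reset()      // leave it empty for the next user
            bufPool.Put(buf) // hand the buffer back to the pool
        }()
        fmt.Fprintf(buf, "hello, %s", name)
        return buf.String()
    }

    func main() {
        fmt.Println(render("world"))
    }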