Go performance from version 1.2 to 1.18 (benhoyt.com)
340 points by todsacerdoti on Feb 4, 2022 | 282 comments



> A recent improvement was in version 1.17, which passes function arguments and results in registers instead of on the stack. This was a significant improvement for GoAWK’s old tree-walking interpreter, where I saw a 38% increase in speed on one microbenchmark, and a 17% average speedup across all microbenchmarks.

I find it surprising that Go did not use registers for its (internal) calling convention until 1.17. And I also find it surprising that they only expected a 5% boost from it.


Since its initial release, Go has used a stack-based calling convention based on the Plan 9 ABI, in which arguments and result values are passed via memory on the stack. This has significant simplicity benefits: the rules of the calling convention are simple and build on existing struct layout rules; all platforms can use essentially the same conventions, leading to shared, portable compiler and runtime code; and call frames have an obvious first-class representation, which simplifies the implementation of the go and defer statements and reflection calls. Furthermore, the current Go ABI has no callee-save registers, meaning that no register contents live across a function call (any live state in a function must be flushed to the stack before a call). This simplifies stack tracing for garbage collection and stack growth and stack unwinding during panic recovery.

https://go.googlesource.com/proposal/+/refs/changes/78/24817...


I didn't consider the effects of goroutines, but it's pretty common that JITs that have to emit precise stack maps simply don't use callee-saved registers, so it doesn't seem any harder to make goroutine stacks first-class. But the devil is in the details, always.


This is basically an argument from implementation simplicity. I am of the opinion that implementation simplicity is overrated and should be essentially ignored, especially for projects as widely used as Go.


It's not. It's in a document proposing a more complex approach.

The point of that discussion, which is clear in the original text, is to identify the benefits of the existing approach and retain them if possible through the transition to a register based ABI.


The cost/benefit of simplicity vs. register calling conventions is probably a part of the reason they switched to register calling conventions.


> I find it surprising that Go did not use registers for its (internal) calling convention until 1.17. And I also find it surprising that they only expected a 5% boost from it.

It's because there's no difference, for a normal non-leaf function, between a stack calling convention and a register calling convention. If 'f' is a non-leaf function, either the caller of 'f' will have to put f's arguments onto the stack (stack calling convention) or 'f' will have to save its argument registers onto the stack before making its own calls (register calling convention).

Register calling conventions end up being a big win if you have lots of non-leaf functions that can forward their arguments (rare) or lots of leaf functions that can't be inlined (also rare). Some programs will get a big boost out of it but most will only get a small one.
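To make the leaf/non-leaf distinction concrete, here's a minimal Go sketch (the function names are mine, and the //go:noinline directives are only there to stop the compiler from inlining the example away):

    package main

    // leaf makes no further calls, so under a register-based ABI its
    // argument can stay in a register for its whole lifetime.
    //
    //go:noinline
    func leaf(x int) int { return x*x + 1 }

    // nonLeaf must keep x alive across the call to leaf, so x ends up on
    // the stack under either convention: the caller writes it there
    // (stack ABI), or nonLeaf spills it before calling (register ABI
    // with no callee-saved registers).
    //
    //go:noinline
    func nonLeaf(x int) int { return leaf(x) + x }

    func main() { println(nonLeaf(7)) }

Comparing the generated assembly (e.g. via go build -gcflags=-S) under 1.16 and 1.17 shows where the spills land.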


We published a 5% win, but it was really closer to 10% (for amd64). We were conservative in our marketing. The arm64 register ABI (which will be in 1.18 [March]) is showing performance improvements of ≥ 15%. (Rob Pike saw 20% on some of his stuff.)


Are you on the Go team?


>> I’m looking forward to someone improving the performance of Go’s regexp package, which is quite slow.

Hah, this reminds me how, several years ago, I rewrote one of my homework assignments from Go to Python, since the Go version was so terribly slow because of regex. I really hoped it was better in 2021.


Interesting, though not too surprising. Do you remember what the regexes were that your assignment used?

Go's regexp package does have a couple of advantages over Python, Perl, and so on: 1) it's guaranteed linear time in the length of the input, regardless of the regex, see https://swtch.com/~rsc/regexp/regexp1.html, and 2) it's a relatively simple implementation.


I have done some optimisations in Go regex recently; I have a talk coming up on Saturday:

https://fosdem.org/2022/schedule/event/go_finite_automata/

This repo collects all the changes so you can try them out: https://github.com/grafana/regexp/tree/speedup#readme


That's excellent! Those all look like pretty nice small code changes that all add up. I especially like the very small "avoid copy" change (https://go-review.googlesource.com/c/go/+/355789) that adds up to a 30% speedup on many benchmarks. I hope they get included in Go 1.19. Good work!


Simplicity of implementation isn't what users need, though; they need performance. For example, you can make GCC into a much simpler C compiler by compiling at -O0, but in practice nobody does that.


Totally agreed: almost all users (me/GoAWK included) want performance and don't care nearly as much about simplicity under the hood. Simplicity of implementation is of value for educational purposes, but we could easily have a small, simple 3rd party package for that. Go's regexp package is kinda too complex for a simple educational demonstration and too simple to be fast. :-)

I actually tried BurntSushi's https://github.com/BurntSushi/rure-go (bindings to Rust's regex engine) with GoAWK and it made regex handling 4-5x as fast for many regexes, despite the CGo overhead. However, rure-go (and CGo in general) is a bit painful to build, so I'm not going to use that. Maybe I'll create a branch for speed freaks who want it.

I've also thought of using https://gitlab.com/cznic/ccgo to convert Mawk's fast regex engine to Go source and see how that performs. Maybe on the next rainy day...


Have you considered writing your own string matcher for the simple cases like fixed patterns?

I got some pretty solid wins just by guarding some regex executions with simple strings.Index calls.


Yeah, that's a good idea, I did consider it, but haven't tried it yet. Do you hook in and look at the regex string before it's compiled, or do you hook in at the parsed regex AST level (e.g. regexp/syntax in Go)?


For something like awk, I think you'd look before compiling, then create your own matcher. With an abstract Matcher interface that regexp implements.

It's C, but openbsd grep does something like this because libc regex is super slow. Look for fastcomp on https://github.com/openbsd/src/blob/master/usr.bin/grep/util... It's not super sophisticated, but enough to beat the full regex engine.

In the Go code where I did this, it was a little different, with a static pattern. Something like "(\w+) apple" to find all apple adjectives or whatever, but the regexp wasted so much time matching words that weren't followed by "apple". A quick scan for "apple" to eliminate impossible matches made it faster. This depends more on knowing the regex and corpus, so it's probably less relevant for awk.
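A hedged sketch of that guard in Go (the helper name and test string are mine, not the actual code):

    package main

    import (
        "fmt"
        "regexp"
        "strings"
    )

    var adjRe = regexp.MustCompile(`(\w+) apple`)

    // findAppleAdjectives pre-filters with a cheap substring scan so the
    // regex engine only runs on lines that can possibly match.
    func findAppleAdjectives(line string) []string {
        if !strings.Contains(line, "apple") {
            return nil
        }
        var adjs []string
        for _, m := range adjRe.FindAllStringSubmatch(line, -1) {
            adjs = append(adjs, m[1])
        }
        return adjs
    }

    func main() {
        fmt.Println(findAppleAdjectives("a ripe apple and a sour apple"))
    }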


Go's regexp package even exposes a routine for this: https://pkg.go.dev/regexp#Regexp.LiteralPrefix

It's been a while since I've looked at the source code, but it is almost certainly already doing basic prefix literal optimizations.

The more advanced literal optimizations come into play when every match must start with one of a few possible characters or strings. Then you get into things like Aho-Corasick or fancy SIMD algorithms (Teddy).
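A quick illustration of that routine (a minimal sketch; the pattern is arbitrary):

    package main

    import (
        "fmt"
        "regexp"
    )

    func main() {
        re := regexp.MustCompile(`apple \w+`)
        // Every match must start with the literal "apple "; complete is
        // false because the pattern matches more than the literal alone.
        prefix, complete := re.LiteralPrefix()
        fmt.Printf("prefix=%q complete=%v\n", prefix, complete)
    }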


Oh, regarding rure-go: the bugs note about using int is inaccurate. The Go spec says the max length of any object can be stored in an int. You can't build a too-big haystack.


Ah that's right! Nice catch, thanks.


I think GNU grep does something similar. When it has a fixed pattern it uses Boyer-Moore [1].

[1]: https://lists.freebsd.org/pipermail/freebsd-current/2010-Aug...


> almost all users (me/GoAWK included) want performance and don't care nearly as much about simplicity under the hood.

That works up until you're at scale, where the complexity of the implementation causes hard-to-understand edge conditions and outright bugs, because the complexity is beyond what the developers can handle.


Exactly. At some point the code is such a mess that further optimizations and features are harder to get merged. And the smaller the number of contributors, the bigger the problem gets, until the main maintainer calls it quits and a completely new library is needed.


Is that also true when even the essential complexity of the project is quite demanding? Runtimes, especially those with JIT compilers, are not easy to begin with.


It is even more important then.


I don't exactly agree. Sure, end users don't care about the implementation directly, but the simplicity of the implementation does affect them indirectly. Go is already over 10 years old, with maybe many more years ahead. All code bases rot. I think the simpler the implementation, the easier it is to cure rot and code smells, which hopefully means Go has a long life as the implementation stays easy to work on over time. So while users may not care, it does impact them.


You can make the same argument about CPUs. Modern CPUs are horrendously complex. But nobody is asking to remove, say, out-of-order execution on simplicity grounds, because that would hurt users and cost millions of dollars at scale for no reason other than engineering aesthetics.

It's only in a few areas, like programming languages and operating systems, that we're obsessed with simplicity, and it makes little sense to me.


Branch prediction is a CPU "complexity" that got us into some amount of trouble.

I don't see simplicity as a "virtue", as such: it's all about the ability to reason about a system. The more complex it is, the harder this becomes. This makes it harder for implementers to work on it, and harder for users to figure out what is going wrong.

On the other hand, complexity often offers useful enhancements such as features or performance. There is some amount of tension here, which is hardly unique to software: you see this in, say, tax systems, regulation/law, design of cars and all sort of things, etc.


I'd say it is probably because we are all worse at writing code than we'd like to imagine. So writing unnecessarily complex code, especially in a FOSS compiler or system, makes little sense, since some day someone else is going to have to step in and learn it.


I agree, but simplicity of implementation is a net positive in a vacuum. When balanced against things like performance, it's definitely worth some trade-offs for better performance... but simplicity of implementation definitely has lots of upsides that users indirectly benefit from. Therefore, I think it's important to at least have a balance.


Simplicity of implementation also contributes to Go’s fast compile times, which is a different sort of performance. Trying to find a sweet spot between “slow” interpreted languages and “fast” compiled languages with long compile times (e.g. C++ template hell) is a worthy goal.


I think multi-profile compilation is much better: have a really fast debug build that hardly does any optimizations, and a release one that can take whatever amount of time but will be really optimized.


> Simplicity of implementation isn't what users need, though; they need performance.

It's a tradeoff, in the end. I mean sure, users don't really need to know how things work under the hood, but the people building and maintaining the language do; Go's philosophies on maintainability extend to the language's internals as well.

This is one reason why generics took over ten years to land; they needed to find the middle ground between functionality and sanity within the compiler. Java's generics implementation takes up a huge chunk of the spec and tooling. I don't even want to know about Scala's. It added so much more complexity to the language that I'm not surprised Java stagnated for as long as it did.


> Java's generics implementation takes up a huge chunk of the spec and tooling.

Does it? It is a pretty straightforward generics implementation, imo.


Regexp is one of the things you see attacks on all the time (mostly DoS). Users care a lot about security, and simplicity of implementation correlates with it. It's not something users need, but they do benefit from it.


Performance is relative, so I'm not really sure of the point being made here. Sure, if your program has regex as the constrained resource this matters, but again, it's all relative.


Simplicity is what users indirectly need.

Go is not about providing the fastest implementations out of the box, it's about having a broad toolset in the standard library to solve the problems Go was built for.

Faster (and often more complex) implementations are a maintenance burden for Go contributors. It's far better for a high performance regex library to be a third party package for those that need it.

Those for whom regex is a limiting factor in performance will soon find out why. But for most people, fast regex is nothing compared to the overhead of a simple HTTP request.


Given that Go's original raison d'etre was Internet-facing services, the choice for guaranteed linear time execution makes sense as a default.


That ... isn't really true at all. Go's original raison d'etre was logs processing.


I'd be interested in reading more about that. Do you have a reference?


I think they're referring to Rob Pike's Sawzall language. However, I wouldn't call Go a descendant of it.


As Knuth opined in an often-quoted passage, it is criminally negligent to forgo a 10% performance improvement just to keep implementation simplicity.


Way to never deliver. Absolute numbers like this are pointless.


> 1) it's guaranteed linear time in the length of the input

What’s the multiplicative factor? Does it dominate for “typical” regexes?


For most regexes, backtracking and the Thompson NFA have the same asymptotic complexity, which is why most languages adopted backtracking. The implementors of such languages knew what they were doing, especially when you consider that adopting the Thompson NFA means you give up backreferences. The differences only arise with pathological regexes.

I used to think that backtracking was superior to the Thompson NFA in practice on typical regexes, but modern implementations of the Thompson NFA have shown that I was wrong and that the Thompson NFA can match backtracking's performance. Still, the situation isn't nearly as simple as Russ's article makes it out to be by focusing only on /a?a?a?aaa/, a regex which nobody would write. (This is not to deny that people do write pathological regexes from time to time. But what's the size of the userbase that writes pathological regexes compared to the size of the userbase that uses backreferences?)
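To see the pathology concretely, here's a sketch using that pattern family scaled up to n=30 (Go's regexp finishes quickly; a pure backtracker's running time grows exponentially with n):

    package main

    import (
        "fmt"
        "regexp"
        "strings"
        "time"
    )

    func main() {
        // a?a?...a? followed by aaa...a, matched against "aaa...a".
        n := 30
        re := regexp.MustCompile(strings.Repeat("a?", n) + strings.Repeat("a", n))
        start := time.Now()
        fmt.Println(re.MatchString(strings.Repeat("a", n)), time.Since(start))
    }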


FWIW, I would say that it's difficult for a Thompson NFA on its own to beat backtracking in non-pathological cases. So I actually think your prior is still mostly correct.

Now a hybrid NFA/DFA (or a "lazy DFA") that does subset construction at search time using a Thompson NFA can definitely beat out a backtracker. A lazy DFA will generally match the speed of a fully compiled DFA, and is about an order of magnitude faster than a Thompson NFA simulation:

    [andrew@frink regex-automata]$ regex-cli find nfa thompson pikevm "@$medium" '(?m)^\w{30}$'
    build pike vm time:  6.584596ms
     create cache time:  278.231µs
           search time:  7.798138892s
                counts:  [3]
    [andrew@frink regex-automata]$ regex-cli find hybrid dfa "@$medium" '(?m)^\w{30}$'
               parse time:  24.256µs
           translate time:  20.202µs
         compile nfa time:  5.966991ms
               nfa memory:  486196
    build hybrid dfa time:  3.137µs
        hybrid dfa memory:  486196
        create cache time:  21.793µs
              search time:  406.746917ms
        cache clear count:  0
                   counts:  [3]
    [andrew@frink regex-automata]$ regex-cli find dfa dense "@$medium" '(?m)^\w{30}$'
                parse time:  22.824µs
            translate time:  15.513µs
          compile nfa time:  6.000195ms
                nfa memory:  486196
    compile dense dfa time:  106.35009ms
          dense dfa memory:  4501024
     dense alphabet length:  117
              dense stride:  128
               search time:  448.568888ms
                    counts:  [3]
    $ ls -lh $medium
    -rw-rw-r-- 1 andrew users 280M Jul 14  2021 /home/andrew/data/benchsuite/subtitles/2018/OpenSubtitles2018.raw.sample.medium.en
(This is using a yet-unreleased tool that's part of ongoing regex development.)


People will gleefully write that regex if it causes a denial of your service. People would similarly blow up FTP servers with "ls ****" globs.


Arguably the best implementation would be a fast backtracking-based one, with some sort of strong API for controlling the amount of effort the implementation puts into matching, because the major concern is the truly pathological cases. For the most part I'm not all that concerned about cases where one or the other implementation is slightly faster on this or that input; I'm worried about the DoS attack. However, determining the correct amount of "effort" to put into a match could be nontrivial. There's the obvious "time out after this amount of clock time", which has its uses for security, but if you want something more sophisticated (including measuring actual effort applied rather than just clock time, which only loosely correlates with how much work you've actually done), it's difficult to even express what you want, let alone implement it sensibly in the current programming environment.

If such a thing exists, I haven't seen it, but that's not to say it doesn't exist. In general, though, programming languages and libraries have poor-to-nonexistent support for bounding the amount of resources spent on a given internal computation. That's a rather niche need that's hard to justify much effort on, especially since "bound the amount of resources for a given program", when well implemented, tends to cover most of the use cases.
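As a crude approximation of the clock-time bound, one can wrap the match in a goroutine (a sketch; note Go's regexp has no cancellation hook, so on timeout the match goroutine still runs to completion and this only bounds the caller's wait):

    package main

    import (
        "fmt"
        "regexp"
        "time"
    )

    func matchWithTimeout(re *regexp.Regexp, s string, d time.Duration) (bool, error) {
        done := make(chan bool, 1)
        go func() { done <- re.MatchString(s) }()
        select {
        case matched := <-done:
            return matched, nil
        case <-time.After(d):
            return false, fmt.Errorf("regex match exceeded %v", d)
        }
    }

    func main() {
        re := regexp.MustCompile(`\w+`)
        fmt.Println(matchWithTimeout(re, "hello world", 50*time.Millisecond))
    }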


It was 7 years ago, so I don't exactly remember what regex I used. I remember it was an information retrieval course, and I was supposed to write a crawler and index the webpages. So I think part of it was definitely finding all the links within a webpage. I was working on extra credit, so there might have been some funky stuff.


According to my totally unscientific benchmarks, if the performance of Rust's regular expression module were 1x, Go's was 8x, Ruby's was 32x, and Java's was 42x. Using Google's Java regex module improved the speed quite a bit but still was at ~18x. I was very impressed to see Rust doing so well. And it was sad to see Java so underperforming in such a typical workload. I know we're measuring libraries, not languages, but I think regexes are so prevalent that not optimizing for them would hinder a language's real-life performance.


For the JVM benchmark, unless you used Pattern.compile() and let the VM warm up (or used the JMH benchmark framework), your numbers are likely wildly off.


It depends on the goal of the measurement. For example, if you wanted to write a grep in Java, would the total runtime of the program be faster or slower if you "waited for the VM to warm up?"


By performance you mean elapsed time, right? So by 8x you mean 8x slower?


Yeah that's right. I compiled a pattern and matched it against a huge wall of text, measuring elapsed time.


Interesting. Looking at this repo, they have

Rust -> Ruby -> Java -> Golang

https://github.com/mariomka/regex-benchmark

Though it appears the numbers are two years old or so, and only for 3 specific regexes.


Would it be convenient to show your benchmarks?


Not OP, but Benchmarks Game has a performance test based on regex: https://benchmarksgame-team.pages.debian.net/benchmarksgame/...

The top times for the mentioned languages are:

Rust: 0.78s

Ruby: 12.33s

Java: 5.34s

Go: 3.80s

Python: 1.34s


The Java and golang numbers are not apples to apples. The Java solution is using the standard library code, while golang's is calling out to native code (wrapper C library) as far as I can tell. Same with Python.


Other programs are shown:

    Python #2  1.34s PCRE2
    Go #5      3.80s PCRE
    Java #3    5.34s
    Java #6    5.39s
    Java #1    8.49s
    Python #1  9.25s "standard library"
    Go #4     15.60s PCRE
    Go #3     26.86s "standard library"
    Go #1     27.01s "standard library"


To be fair to non-Javas, the java version is using Java code because Java sucks at calling out to faster languages.


Oh yes, that Python code looks so idiomatic:

https://benchmarksgame-team.pages.debian.net/benchmarksgame/...


Can't comment on that, but why are these people putting spaces after commas (correct) and not to the left and right of the assignment sign (=)? It's so weird to read a=b


    regex=PCRE2.pcre2_compile_8(pattern, c_size_t(len(pattern)), c_uint32(0),
                                byref(c_int()), byref(c_size_t()), None)
As far as FFI goes, I would say this looks amazingly simple.


That’s a very straightforward translation of C code using PCRE to Python using ctypes. It doesn’t look great because the ugliness isn’t wrapped in a library. But Python ctypes definitely isn’t hard to use. It is unsafe as hell though.



Seems to be OK in pidigits — Java #3 program

https://benchmarksgame-team.pages.debian.net/benchmarksgame/...

    C gcc #6 0.56s
    Java  #3 0.79s


JNI exists now. And Project Panama will be a big improvement.


It's interesting that Ruby is as slow as it is, given that its regex engine (Onigmo) is written in C. I wonder where the bottleneck is compared to the other languages.


Seems to be mainly a test of compiling regexes, rather than executing them. Which is a bit pointless because few applications would not pre-compile regexes ... and then it would actually be penalising languages that put more effort into optimisation and hence perform better in the real world at execution.


Why would you even say this? It is certainly not a test of compiling regexes. Go and profile any one of those programs. You'll see that the majority of time is spent executing the search, not the compilation.

Look at the runtimes being mentioned here. We're talking on the order of seconds. The benchmark itself calls for 15 pretty simple regexes to be compiled. Regex compilation of 1ms or longer would be considered slow I think. So that's 15ms. That's nearly nothing compared to the full runtime of the benchmark. Even if you assumed each regex took 10ms on average to compile, you're still only looking at 150ms total.

I say this as the author of Rust's regex engine and as someone who has spent a non-trivial time looking at and thinking about this particular benchmark.


Apologies, you're correct. I mistook the use of Pattern.compile chained with replaceAll to mean that it was compiling the regex every time it used it.


Unfortunately, the test code belongs to the company I work for, so I cannot share it. It was done to determine what language we should use for an internal tool. I hope to conduct another benchmark one day, publicly this time.


There's a lot of room for improvement on the compiler and library end. RE2 and Hyperscan demonstrate the ceiling here.

The NFA simulator is heavily optimized for readability, which means lots of recursion, and fewer special cases outside of small regexps. The compiler also doesn't perform certain optimizations like vectorization and emitting jump tables, which might be useful here.

There isn't a metaprogramming facility to generate equivalent Go code like in re2go: https://re2c.org/manual/manual_go.html. The best we can do is pre-declare the regexps globally so as to initialize them once, but we still have to run the interpreter.

Moreover, thus far, a DFA matcher is out of the picture, as discussed here: https://github.com/golang/go/issues/11646.
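The "pre-declare globally" pattern looks like this (a minimal sketch; the patterns are made up):

    package main

    import (
        "fmt"
        "regexp"
    )

    // Compiled once at package init time; matching still runs through
    // the NFA interpreter on every call.
    var (
        emailRe = regexp.MustCompile(`[\w.+-]+@[\w.-]+`)
        urlRe   = regexp.MustCompile(`https?://\S+`)
    )

    func main() {
        fmt.Println(emailRe.FindString("contact: dev@example.com"))
        fmt.Println(urlRe.FindString("see https://go.dev for docs"))
    }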


Version numbering bothers me.

Mathematically 1.2 is greater than 1.18, yet in versions it is lesser.

I find this annoying and counterintuitive.


If it helps, a version is simply not a decimal number and the dot is not a decimal point. It’s just a separator. The dot is often used for purposes other than a decimal point (a few times in this comment!).


Version numbers are effectively base infinity, not base 10. Periods are digit separators, and each digit is written in decimal. The usual representation for IP addresses is basically the same (four base-256 digits separated by periods).
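In code terms, a toy sketch of that comparison rule (ignoring non-numeric components like "rc1"):

    package main

    import (
        "fmt"
        "strconv"
        "strings"
    )

    // compareVersions treats each dot-separated component as one "digit"
    // and compares component-by-component; missing components count as 0.
    func compareVersions(a, b string) int {
        as, bs := strings.Split(a, "."), strings.Split(b, ".")
        for i := 0; i < len(as) || i < len(bs); i++ {
            var ai, bi int
            if i < len(as) {
                ai, _ = strconv.Atoi(as[i])
            }
            if i < len(bs) {
                bi, _ = strconv.Atoi(bs[i])
            }
            if ai < bi {
                return -1
            }
            if ai > bi {
                return 1
            }
        }
        return 0
    }

    func main() {
        fmt.Println(compareVersions("1.2", "1.18")) // -1: 1.2 is the earlier release
    }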


I find it not quite as unintuitive when there are more than two fields: 1.2.0 is almost impossible to be misread as greater than 1.18.0.


Interesting how somebody can be tripped up by 1.2 vs 1.18 yet not tripped up by 1/2 vs 1/18. It could just take time, but I think there’s also a factor of feeling that dates are worth learning while version numbers don’t deserve direct attention and instead should be a subset of a nearby concept that does.


Yeah I always get tripped up. I feel like we should write it as 1.02, that way it can get up to 1.99 without this issue :)


You can think of them like 1.2.0 and 1.18.0 if that makes your life easier.

That way your brain can't think of them as numbers any more =)


The worst thing is when laymen abbreviate the version number 1.20 to 1.2 - quite confusing.

Versions aren't numbers.


Me too.

For Virgil, because I am a fan of the Roman poet, I am using Roman numerals for the major versions and append a monotonically-increasing-regardless-of-major-version "00XXX" build number to that. This ensures they always sort lexicographically.

...until I get to version IX, i.e. 9, which would sort incorrectly. So I'll just skip 9, 19, 29...90...heh. :)


If your algorithm is “choose the next number for which its Roman numeral sorts lexicographically higher than the current number”, then if you start at 1 you will only get 35 elements, since 38 (XXXVIII) is the lexicographically-highest Roman numeral.

Using this precise rule, the starting point that gets you furthest is 250, which grants you 153 elements before stopping at 1038 (MXXXVIII).

It’s possible to devise a longer sequence by skipping some numbers (e.g. after 1000 you’d jump to 1100), but I’m too lazy to calculate how you’d do all that.

All this also assumes subtraction rules, which were a late addition to Roman numerals; without subtraction rules, four is written IIII instead of IV, and nine VIIII instead of IX, and so all of 1–49 would be lexicographically sorted.

  from docutils.utils.roman import toRoman

  def sequence(start):
      last = ''
      out = []
      for i in range(start, 5000):
          this = toRoman(i)
          if this > last:
              last = this
              out.append(i)
      return out

  # Brute force way because I’m lazy; takes around ten seconds on my machine.
  start, s = max(((start, sequence(start)) for start in range(1, 5000)), key=lambda x: len(x[1]))
  print(f'{start} gives you {len(s)} items, {s}')


OMG I didn't even realise, I was looking at the graphs getting so confused.


Me too. I have a project suffering pretty badly from the performance hit taken when a regexp doesn't start with a fixed string. Matching against `[ab]cd` is up to 9x slower than against `ab[cd]`.


One of our ACLs at work relies heavily on Go regexp. The performance of evaluation is actually not too bad. What is quite terrible is the performance of *compiling* regexps.


I find this a bit surprising -- do you have numbers? Though even if compiling them is relatively slow it doesn't matter too much, because usually you're compiling once and evaluating many many times (e.g., in the case of a typical AWK script, you compile once and evaluate for each line in the input).


https://gist.github.com/jncornett/4a908250d701aec52a11d61a89...

This is not super surprising, and like you said, in a typical AWK script you would only need to compile once.

In short:

    $ go test -bench .
    ...
    BenchmarkRegexpCompile-8    513030    2307 ns/op
    BenchmarkRegexpEval1-8     2795020   425.5 ns/op
    BenchmarkRegexpEval2-8     3218155   370.2 ns/op
    ...

Edit: formatting
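For reference, the general shape of such benchmarks (a sketch with a made-up pattern; the actual code is in the gist above):

    package regexbench

    import (
        "regexp"
        "testing"
    )

    const pattern = `(\w+)@(\w+)\.com` // hypothetical pattern

    func BenchmarkRegexpCompile(b *testing.B) {
        for i := 0; i < b.N; i++ {
            regexp.MustCompile(pattern)
        }
    }

    func BenchmarkRegexpEval(b *testing.B) {
        re := regexp.MustCompile(pattern)
        b.ResetTimer()
        for i := 0; i < b.N; i++ {
            re.MatchString("someone@example.com")
        }
    }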


Yeah, if you can call straight into a C module (e.g., regex or csv, etc.) and not do anything of consequence in Python, that’s the sweet spot from a performance perspective.


I've been using Go services for my backends for years and I've totally fallen in love. I'm always amazed whenever I write something for like an hour, then compile it and it just works. That usually never happens.


The thing people don't seem to appreciate/understand with Go is the effect of using it in serverless and cloud-native environments.

Every high-throughput service I've had moved to Go has used a tiny amount of memory and CPU compared to the implementation it replaced, typically Node.js, Python, Scala, or .NET.

In a cloud world where cost usually boils down to pay-per-MB of in-process memory and pay-per-cycle... that can be 4-5 figures a month in reduced cloud cost for the same traffic.

Tiny output binaries are also very quick to transfer and spin up in serverless as well - often reducing costs further.

The benefit over time of essentially knowing things will get faster with each new version is really encouraging as well. Imagine your cloud bill going down over time as a result of simply rebuilding your service on a newer Go release and redeploying.


I absolutely agree.

Bored by Java, and after getting burned by performance limitations of Python applications at scale, I switched to Go and never looked back. I still use Python for ML and DS, and I don't think that'll change anytime soon due to the tools/library ecosystem.

> Tiny output binaries are also very quick to transfer and spin up in serverless as well - often reducing costs further.

What's your serverless stack for Go? Cloudflare claims Go is not the right language for their workers due to sandboxing constraints[1].

> Go fundamentally is not designed to support secure sandboxing of tenants without running each tenant in their own process...

[1] https://community.cloudflare.com/t/native-golang-support-for...


WebAssembly is fundamentally a very well isolated piece of executable code, and Cloudflare focused on JS by creating their own stripped-down version of the V8 runtime, so you don't have to ship one. They focused on some efficient solutions, rather than a versatile "just give us a Docker image" approach.

Other cloud providers just let you push code, charge you extortionate amounts of money to build the several-hundred-MB Docker image that includes Node.js and node_modules, the bandwidth of transferring those images between datacentres, storing the build cache in a multi-regional bucket (why???), then they charge you to store the image in their registry and then finally give you a 10+ second cold start time.


The Cloudflare Workers JS is a mess of incompatibilities and design idiosyncrasies that make it really impractical for true edge application design. What I mean by cloud native & serverless is specifically containerized platforms where you ship your container and they run it - where a Go container can be 30MB with everything included vs a full-fat Node.js container in the 100-500MB range. Scala is also huge when bundled and shipped as a fat jar. Hell, even Windows binaries, if you really wanted to push something to the end client, are trivial in Go and invoke none of the .NET client library, C++ redistributable, or "shipping Java with your app" headaches of other solutions. Only Delphi and C linked against the Win32 API can offer this same level of portability on Windows.

AWS Lambda's Go support is first-class, as is Go in GCP's Cloud Functions.

Again, I can't speak highly enough of using Go in the cloud.


My only grievance with Go in the cloud is that it's tedious to have to set up a CI job or similar to build your binaries to deploy in your functions. I really wish I could just ship the source code and have it compile for me (or failing that, if CloudFormation or AWS SAM or Terraform or whatever could transparently manage the compilation).


For us it's as simple as docker build right from the repo -> push to registry for a test deployment, then tag with prod for rollout from there.

Probably not ideal for 20+ dev teams or high-complexity deployments, but this is the simplest CI, and it's a much quicker build than Node, so devs can do it locally in all cases.


Yeah, this is fine for early stage startups, but eventually you don't want developers to deploy directly to production.


Very interesting. Can you give some specific examples of where CI/CD tools that work fine for other programming languages in large organizations don't work to your satisfaction with Go?


I think you’ve misunderstood the thread. I was arguing that you can get away with “dev builds image locally and pushes straight to prod” in an early startup, but in a mature organization you need CI/CD.


I think I got that part right,

> My only grievance with Go in the cloud is that it's tedious to have to set up a CI job or similar to build your binaries to deploy in your functions.

So I thought your CI/CD worked to your satisfaction for other programming languages, but you found Go tedious, so I wanted to understand some specific cases.


As far as serverless Go is concerned my experience has been only with Heroku as I prefer hosting on my own instances and Heroku seems to be reasonable for my use case. I hear fly.io is better in terms of performance and they do offer distributed PostgreSQL.

I guess those who are already on AWS would probably choose Lambda for serverless Go. If the experience is the same as Node.js or Python on Lambda, then I don't think there would be much to complain about other than the cost; but of course it cannot match the speed or cost of a CF Worker, for the reasons you've pointed out.


Fly.io is amazing, I'm very excited for it! They spin up a long-lived instance on the edge closest to the user and create a WireGuard tunnel in between. They also have a free tier and reasonable pricing. Heroku is just far too expensive for a hobby project.


Usually the database (not the compute) is where things get expensive (Heroku's 10K-row limit is not practical for most tasks unless you're willing to do unholy things to put more data into a row).


I've written many lambdas in Python but have switched entirely to Go for all my new lambda work. Go is just better in every way - it uses a fraction of the memory, and executions take a tiny fraction of the time Python takes.


I have a feeling AWS Lambda's hypervisor has a lot of overhead, and simply making your workload smaller/lighter has a very noticeable reduction in execution time (cold start is <10ms on most of our Go lambdas).

re: full serverless:

I mean, try for yourselves, but unless you're running an echo statement, you'll probably be hard pressed to get lower than this. Some early experiments with Rust showed 5x that, and that is as close to Go as I would dare to compare.


A while back I saw perf tests on Lambda cold starts per language, and somewhat surprisingly Python was the fastest.

It's probably not the same test, but Python is still at the top [1]

[1] https://www.simform.com/blog/aws-lambda-performance/#Startup...


I have very different experiences:

At my workplace we're using Go with Lambda and Fargate for several mid-size applications and one large one. The costs (image storage, transfer ...) you're referring to are actually almost negligible. Aurora is by far the largest item in our stack.

On the other hand: creating WASM is a big pain. I understand that Cloudflare and others promote it for their own business reasons, but from a developer/customer point of view, Docker or Lambda are much more robust and simple.


>> Cloudflare claims Go is not the right language for their workers due to sandboxing constraints[1].

That seems to be only relevant for functions / workers at the edge due to resource limitations. For traditional functions (I'm using AWS Lambda) Go is perfectly suited.


> After getting burned due to performance limitations by Python applications at scale I switched to Go and never looked back

Same here. Not just performance, but also tooling.

We used to use pipenv for reproducible dependency management, but literally any change to the lockfile would take 30+ minutes to rebuild--this isn't including the time it takes to pull the dependencies in the first place, which was only a few minutes anyway. This meant that our CI jobs would take ~45 minutes and we would spend a ton of time trying different caching tricks and adding various complexity to our CI process to bring these times down, but they were still unsavory. With Go, we don't even need caching and everything builds and tests run in just a few minutes (end to end).

Not only that, but Go's static typing was a boon. Years ago we would try keeping our types up-to-date with Sphinx (and code review checks), but inevitably they would fall out of date and the only thing worse than no type documentation is incorrect type documentation. Further, this didn't stop people from writing stupid code nearly as well as a type checker does (think "code whose return type varies based on the value of some input parameter" and various other such things). We eventually tried mypy but it was immature (couldn't express recursive types), cumbersome (e.g., expressing a callback that takes kwargs was tedious), confusing (I don't think I ever figured out how to publish a pypi package with type annotations that would be picked up automatically by mypy), slow, etc. I'm sure these things will improve with time, and there are also various other Python type checkers (although with everything in Python, each tool seems to have its own hidden pitfalls). On the plus side, Python's Unions are better than Go's various patterns for emulating algebraic data types aka enums aka sum types.

Similarly, with Python we would try to deploy AWS Lambdas, but it didn't take long and these would bust the (at the time) 250MB limit (compressed). I'm guessing Lambda has since raised this limit, but even still. Go binaries with as many (direct and transitive) dependencies would yield an artifact that was two orders of magnitude smaller and it would include the standard library and runtime. This also meant that our Docker images could be a lot smaller which has a whole bunch of other benefits as discussed here: https://news.ycombinator.com/item?id=30209023.

With respect to performance, beyond the obvious problems, it also made tests far slower, and slow tests (and slow test suites) get run less frequently and later in the software development lifecycle which has a lot of knock-on effects (especially in a language that relies so heavily on tests rather than static analysis to catch bugs). We would also have to go through and purge tests or move them out of band of CI because we wanted to be able to do Continuous Deployment (and even if you don't want to be able to continuously deploy to production, being able to rapidly iterate in lower/ephemeral environments is really nice--getting high-fidelity local dev environments is either futile or very close thereto).

It's really surprising how many advantages Go confers (at least relative to Python) beyond those which are obvious from the marketing.


Your experience with Python perfectly reflects mine when I used it for building large-scale applications.

Except for a few niche areas such as ML, DS, algorithm prototyping (or archaic fintech systems which expect Python code as input), I think Go is a perfect replacement for Python.

Whenever this topic comes up, people bring up the beginner-friendly web frameworks in Python, which I admit has some merit, but Go as a language is beginner friendly 'enough', and not having to rely on large monolithic 3rd-party frameworks is one of its key advantages.


> Tiny output binaries are also very quick to transfer and spin up in serverless as well - often reducing costs further.

Your point is specifically about cloud, so maybe I shouldn't use Go on the edge? In fact I'm struggling with binary size right now (and have only just started with Go after decades of C - because of reasons beyond my control).

With Go 1.17 the resulting executable (running on ARM) is 11MB (stripped, ldflags -w -s). I can run upx --brute and reduce it to 3MB. But that is still heavy when another Lua-based application (a complete Node-RED clone - with massive complexity) with 100x as many lines of code (before importing external packages) is still just short of half a MB. (apples & oranges, yes ...)

Maybe I shouldn't use Go on edge devices with such poor hardware?

I heard about TinyGo, but looking at their website I could only find instructions on how to cross-compile for specific boards, not for specific HW (like ARMv5 softfloat).

Though I totally love the free stuff I get out of the box with Go, while it also feels very familiar coming from C. The only thing I dislike is that it's not C :P


If you're coming from C, I understand that 11MB seems like a lot to you. I've done quite a bit of serverless Node.js; a relatively slim Docker image is always a few hundred MB and can easily stretch to a GB. However, I don't consider using C a viable alternative for that purpose.

What's cool about Cloudflare Workers, for example, is that they provide the V8 runtime, you just provide the code, which like with Lua will be a very small footprint.

I'm curious, as a C developer, would you say that you prefer Go over Rust? I've used both, and in my opinion Rust feels closer to C/C++ in terms of the control and flexibility the language gives you. To me (from a web service point of view, not systems programming), Go is just a faster (and better designed) pre-compiled version of JS/Node. It may look like C in its simplicity (you end up having to write a lot of repetitive code), but it still packs a garbage collector and an (albeit very fast!) runtime under the hood.


I've written a lot of C code and to me Go is like a high level C with some sharp corners removed and a GC. Maybe I don't see the limitations in Go as a problem because of this. Have done quite a bit of C++ as well. I prefer simpler languages that I can keep in my head and like C and Go for that reason.

Do I prefer it to Rust? I've not bothered to learn Rust yet. I think I would like it better than C++, but it's a bit large for my taste. I like to see the language level as an abstraction level that I want to be able to keep entirely in my head. I fear that Rust (just as C++) may have too many features for that.


> I'm curious, as a C developer, would you say that you prefer Go over Rust?

Not the guy you asked, but coming from a similar background: Yes, I absolutely would.

Reason: Rust is an amazing language: amazingly safe, amazingly fast, amazingly complicated. And the last isn't an advantage but a huge showstopper. What do I get from Rust over Go? A bit more performance. Okay, but I also get that from C, so the real question is: what do I get from Rust over C? Memory safety guaranteed by the compiler. Cool. But in exchange, I have to learn a language easily as complex as C++, but completely different from C.

No thank you.

Bottom Line: For me, the advantages of Rust over C do not justify the added complexity, not even close.


> If you're coming from C, I understand that 11MB seems like a lot to you.

My issue is that the environment this thing is supposed to run on only has 30MB disk space.

> would you say that you prefer Go over Rust?

My answer is similar to my sibling's: what stops me from using Rust the way it's intended is 100% the learning curve. Unless I get paid for the time I invest (on the job), I would have to spend my evenings or weekends gaining experience in yet another system/language or whatever. I pretty much did that during the first 20 years in this profession. I get that type safety is a massive advantage, so hopefully I'll get thrown on a Rust project eventually and then I'll muddle through :).

I'm not a fan of C++, especially for embedded. Nobody forces one to use exceptions, but the reality is, if I start an embedded project in C++, then the minute I don't pay attention somebody will end up refactoring and go overboard with templates or exceptions, and then my flamegraphs look like the fires of hell.

Two things that I would love to get good at are Zig and Nim (in addition to Go, where I'm far from an expert yet).


> If you're coming from C, I understand that 11MB seems like a lot to you. I've done quite a bit of serverless Node.js, a relatively slim docker image is always a few hundred MB and can easily stretch to a GB. However, I don't consider using C a viable alternative for that purpose.

As someone who's worked in a number of stacks with higher abstraction levels, I can more or less confirm this. When you add containers into the mix, it gets much, much worse file-size-wise, even if I think that the increased reproducibility is definitely worth it in many types of deployments - mostly because file sizes have never been a primary concern for me, but misconfiguration and inconsistent environments have.

Nowadays I manage to keep all of my apps under 500 MB when shipping the full-sized containers; typically they're in the 200-300 MB range: apps in Java (which need a JDK), apps in Ruby (typically with the default interpreter), some .NET ones (at least Core, or .NET 6), some with Python (CPython) and Node.js, etc. Of course, using Debian or Alpine for container base images is also a pretty good choice in regards to file sizes!

Of course, the beauty of all of it is that you can take advantage of layers - if you base everything on the same old container image between any two releases (or maybe have an intermediate layer for updates which may change) and only your ~50 MB .jar app changes for the most part, you'll be able to just ship those layers instead of shipping the base image all over, at least in most Docker registry implementations.

I guess that just goes to show how much variety and difference in opinions/concerns there is out there! I think GraalVM and other technologies (Project Jigsaw in Java comes to mind, IIRC) hold the promise of even smaller deployments with stripped-down runtimes even for these higher-abstraction stacks, but that's hard to do due to how dynamic they can be (e.g. class loading, reflection, etc.).

From where I stand, Go is amazing in that regard, as are lower-abstraction-level languages.


> But that is still heavy when another lua based application [...] and 100x as many lines of code (before importing external packages) is still just short of half a MB. (apples & oranges yes ...)

Apples and oranges indeed; a Go binary is a fully self-contained, external-runtime-less application. That said, I can picture (in my head, don't expect me to build it) an edge / lambda environment that only takes your codebase and will run / feel like a script more so than a complete binary.

I'm willing to bet that if you add the libraries + runtime to the Lua application, you'd end up with more than 11MB.

Anyway, I'm trying to keep neutral; if your binary size is that important then staying with C may be the way to go. It's a compromise between binary size, language features, 'safety' (e.g. memory management is easier in Go thanks to auto zeroed variables and garbage collection), concurrency (not as much an issue in lambdas), etc.


A compiled Lua interpreter and runtime is pretty slim - but we're glossing over the reality that Lua is a drastically less functional language than Go - much in the way Lua is less functional than C.

I think to each their own - it is possible to have a Lua app compile down, including the interpreter, to sub-megabyte size, but a 10k LOC Lua app could probably be a 200 LOC Go app with some libraries pulled in.

I shudder thinking about the development cost of building and maintaining a Lua codebase, but if you had 100 million client devices running your code (say, a wifi lightbulb), then it might make sense to save on client device cost and put that money into developing something very bespoke to optimize it.


To me Go feels like a natural progression of C, and I can often see the C in it when I write it.

That said, my experience with Go on ARM with low memory and CPU power hasn't been the same as with my cloud services. I had to resort to ugly GC hacks and even preemptively killing the application at a memory threshold.
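The kind of knob involved is along these lines (a sketch, not my exact hack):

    package main

    import "runtime/debug"

    func main() {
        // Trade CPU for memory: collect once the heap grows 20% over the
        // live set, instead of the default 100% (equivalent to GOGC=20).
        debug.SetGCPercent(20)
        // ... application code ...
    }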

But that's just my experience; I wouldn't be surprised if there are highly optimized Go apps running successfully on low-memory edge devices. I don't think Go can replace C for embedded programming; perhaps a newer systems programming language like Rust or Zig is the way to go.

I used to have Netdata on my edge devices. It would trip on its own Go plugin's memory usage while the plugins written in NodeJS and Python ran fine, likely because of unabated use of goroutines.

Go's strength is its network stack. It's been years since I've touched Apache/nginx, as deploying a Go binary and setting up a systemd rule is all that's required. For non-network applications on the edge, especially on low-memory devices, I think Go is not ready yet.


It's almost never worth using binary packers - every saving in on-disk size is lost to extra memory usage and extra CPU usage at startup; it also interferes with memory management features (a binary typically takes up no writeable, swappable memory - a packed one does).


You can make notable compression gains by using a simple 7z self-extracting PE file on your already-static Go binary. In some cases this can yield another 10-50% savings in binary size. For environments where fresh startup time is important, I would skip that, as it does add some unnecessary overhead before your app comes to life. If you need something smaller than that, you're looking at writing C and building for your target arch.


Is your Lua code standalone? i.e is the interpreter part of the application size?


Even if it isn't, LuaJIT is half a meg, and regular Lua is about half that.


If you're thinking about an edge where you have less than 3MB of storage space, and likely very limited memory, you'd probably still be better off going with C. These kinds of environments are becoming a very small % of use cases over time. Usually such resource-constrained devices & systems have their own build system or crutch libraries built in to help reduce binary sizes to this scale. It is against Go's design principles to make a language that requires external frameworks/libraries to run. As it is, you can build your Go app, send it over an ssh tunnel, and have it running on the remote system without much thought at all.


Comparing the size of executables is basically useless. You have to compare memory footprint and execution speed to make an honest comparison between applications, and the applications should address roughly the same complexity of problem.


Why is the size a concern, start-up or limited disk space? If the latter, this seems quite resource constrained and Go might not be a good fit.

If you know C, you could try a minimal C++: C + vector, string, unique_ptr. And depending on your tastes, classes and shared_ptr.


In addition to that, look at the compiler flags to see how to enable bounds checking even in release builds.


The serverless model as implemented in AWS Lambda is not faster or cheaper. A single Node.js process with fastify.js or hapi.js can handle hundreds if not thousands of requests per second. In Lambda, to handle 1000 requests per second, assuming a single request takes 100ms, you need 100 instances of the lambda running. Serverless is only cheaper if your application sees very little traffic and somehow fits in Lambda's very limited feature set. Otherwise, you still need to pay extra for provisioned concurrency and spend engineering resources working around Lambda's limitations.

What often happens is people build their own framework on top of serverless instead of just using Rails, Django, Phoenix, etc.


The poster was comparing Go implementations versus Node.js....not serverless vs dedicated server...


All language stacks get faster over time and redeploying with a new version can improve performance and save costs. Nothing exclusive to Golang about that.

Edit - I'm curious as to what is so controversial about this comment? What stack has gotten slower over time?


> What stack has gotten slower over time?

Python 3 was slower than the preceding Python 2.x version, I believe; it's getting more difficult to find sources now (it's been a while) but there's plenty of resources that showed Python 3 had performance regressions.


Python 3 is the most striking example but it's unfortunately common in Python. Generators were originally slower than list comps, so after their introduction some modules got slower as they switched internally. super was slower than explicit parent calls, and some modules switched even if they didn't immediately need the fancier MRO. True and False were slightly faster when they were alternate names for 1/0 rather than a different type due to various fast-paths for int handling. I think before 2.3 old-style classes were still faster than new-style classes in some cases, but my memory is pretty fuzzy.


It's not so much about it getting faster over time, it's about it NEVER getting slower, and never breaking.

That is something you want to have in your stack when you want to stay on latest and not get stuck on a rewrite 6 years down the road because it's TCO-slow and incompatible with the latest-and-greatest version that is finally fast.

Look no further than python and node to find languages that have notable performance regressions on new releases over time.


All languages get faster over time (generally), but Go will generally be vastly faster than interpreted languages regardless of versions.


No popular language is interpreted these days outside of some specific domains where you might have an embedded scripting environment. But for all the major runtimes, even the JIT languages are now compiled.

This has been true for as long as Go has existed too.


Er. Python is one of, if not the most popular languages on the planet, and its official/primary implementation is only just now moving towards JIT, having been interpreted for the previous 3 decades.


It’s still compiled to byte code. It’s not been interpreted in decades.

I’ll accept that the definition of “interpreter” is a little more blurred these days now that pretty much all JIT languages compile to byte code. But I’d argue that the byte code is typically closer to machine code than it is to source, and thus the only languages that are commonplace and truly still fully interpreted are shell scripts (like Bash).


In the context of my statement

>>> Go will generally be vastly faster than interpreted languages regardless of versions.

Python is clearly in the latter group.

As for,

> byte code is typically closer to machine code than it is source

I would very much like to see some evidence; last I'd seen, it was basically tokenized source code and that's it, still nowhere near anything resembling binary executable code.


It’s compiled to a stack machine. Exactly like how a lot of AOT compiled languages originally used byte code too. Oddly enough they were never called “interpreted” languages. Which means the difference here isn’t the tech stack / compilation process but rather at what stage the compiler is invoked.

I’ve written more on to your other post https://news.ycombinator.com/item?id=30215323


>It’s still compiled to byte code. It’s not been interpreted in decades.

Its "byte code" is a not that different than the interpreter reading the lines directly.


As an old engineer who’s written a number of compilers in their time, I very much disagree with your analysis there.

I posted more about my point below (https://news.ycombinator.com/item?id=30208039) little point repeating myself here too


> But it seems people have warped “interpreted” to mean JIT to compensate for the advancements in scripting runtimes. That is a bastardisation of the term in my opinion.

Python's not JIT, either. It reads bytecode - which AFAIK is just the source code but tokenized - and it runs it, one operation at a time. It doesn't compile anything to native CPU instructions.


That’s the 2nd time you’ve posted that and it wasn’t right the first time you said it either.

CPython’s virtual machine is stack-based, thus the byte code is more than just tokenised source code.

In fact there’d be very little point in just tokenising the source code and interpreting that, because you’d get no performance benefit over running straight off the source. Whereas compiling to a stack machine does allow you to make stronger assertions about the runtime.

One could argue that the byte code in the VM is interpreted, but one could also argue that instructions in a Windows PE or Linux ELF are interpreted too. However, normal people don’t say that. Normal people define “interpreted” as languages that execute from source. CPython doesn’t do this; it compiles to byte code that is executed on a stack machine.

Hence why I keep saying the term “interpreted” is misused these days.

Or to put it another way, Visual Basic and Java behaved similarly in the 90s. They compiled to P-code/byte code that would execute inside a virtual machine, and at that time the pseudo code (as some called it; not to be confused with human-readable pseudo code, though “pseudo” is technically what the “P” stands for in “P-code”) was interpreted instruction by instruction inside a stack machine.

Those languages were not classed as “interpreted”.

The only distinction between them and CPython is that they were AOT and CPython is JIT. And now we are back to my point about how you’re conflating “interpretation” with “JIT”.


>The only distinction between them and CPython is that they were AOT and CPython is JIT. And now we are back to my point about how you’re conflating “interpretation” with “JIT”.

Talking about conflating though, AOT and JIT mean different things in a programming context...


I’m not conflating the terms AOT and JIT. I’m using examples from how AOT compilers work to illustrate how modern JIT compilers might have passes that are described as an interpreter but that doesn’t make the language an interpreted language.

I.e. many languages are still called “interpreted” despite the fact that their compiler functions more or less exactly the same as that of many “compiled” languages, except that rather than being invoked by the developer and the byte code shipped, it’s invoked by the user with the source shipped. But the underlying compiler tech is roughly the same (i.e. the language is compiled and not interpreted).

Thus the reason people call (for example) Python and JavaScript “interpreted” is outdated habit rather than technical accuracy.

Edit:

Let’s phrase this a different way. The context is “what is an interpreted language?”

Python compiles to byte code that runs on a stack machine. That stack machine might be a VM that offers an abstraction between the host and Python, but nonetheless it’s still a new form of code. Much like you could compile C to machine code and you no longer have C. Or Nim to C. Or C to WASM. In every instance you’re compiling from one language to another (using the term “language” in a looser sense here).

Now you could argue that the byte code is an interpreted language, and in the case of CPython that is definitely true. But that doesn’t extend the definition backwards to Python.

The reason I cite that the definition cannot be extended backwards is because we already have precedent of that not happening with languages like Java (at least with regards to its 90s implementation; I understand things have evolved since, but I've not poked at the JVM internals for a while).

So what is the difference between Java and Python to make this weird double standard?

The difference is (or rather was) just that JIT languages like Python used to be fully interpreted and are thus lazily still referred to that way. Whereas AOT languages like Java were often lumped in the same category as C (I’m not saying their compilation process is equivalent, because clearly it’s not, but colloquially people often lump them in the same group due to them both being AOT).

Hence why I make comparisons to some AOT languages when demonstrating how JIT compilers are similar. And hence why I make the statement that aside from shell scripting, no popular language is interpreted these days. It’s just too damn slow and compilers are fast so it makes logical sense to compile to byte code (even machine code in some instances) and have that assembled language interpreted instead.

Personally (as someone who writes compilers for fun) I think this distinction is pretty obvious and very important to make. But it seems to have thrown a lot of people.

So to summarise: Python isn’t an interpreted language these days. Though its runtime does have an interpretation stage, it’s not interpreting Python source. However, this is also true for some languages we don’t colloquially call “interpreted”.


>So what is the difference between Java and Python to make this weird double standard?

For one, a Python syntax issue in a function that's not called will never emerge even if the source file is read and run.

With Java you wouldn't even get that source to bytecode format to begin with.


That’s a property of the JIT compiler though, not a lack of compilation. You want to keep compile times low, so you only analyse functions on demand (and cache the byte code).

If CPython behaved identically to Java’s compiler, people would moan about start-up times.

Some AOT languages can mimic this behaviour too with hot-loading code, though a lot of them might still perform some syntax analysis first given that’s an expectation. (For what it’s worth, some “scripting languages” can do a complete check on the source, including unused functions; e.g. there’s an optional flag to do this in Perl 5.)

I will concede things are a lot more nuanced than I’ve perhaps given credit for though.


You are being confidently incorrect and making sweeping generalizations and assumptions. The vast majority of anything Javascript is still being interpreted (or as you mentioned going through a JIT interpreter), which represents all browsers and a lot of apps.


The V8 engine in Chrome and Node is a JIT compiler, not an interpreter. Moreover, it even compiles to machine code:

https://en.wikipedia.org/wiki/V8_(JavaScript_engine)

You’d have more luck if you exampled Bash scripting :P

If you want to be pedantic then these days even those languages that were considered “interpreted” have had their interpreters rewritten to produce byte code that is generally closer to machine code than it is to the original source. So they’ve grown beyond the original definition of “interpreted” and evolved into something much closer to a compiled language.

So I think it’s disingenuous to still call them “interpreted”. And in the case of JavaScript (your example), the largest runtime in use is most definitely a compiler, considering it spits out machine code. So that’s definitely not interpreted like you’ve claimed.


V8 does actually include a bytecode interpreter called Ignition to reduce memory usage on stuff that doesn't really need to be heavily optimized.


It contains a byte code compiler. The point is that it’s one of the stages of the compiler, rather than the byte code being interpreted at runtime.

In the link I posted:

> V8 first generates an abstract syntax tree with its own parser.[12] Then, Ignition generates bytecode from this syntax tree using the internal V8 bytecode format.[13] TurboFan compiles this bytecode into machine code. In other words, V8 compiles ECMAScript directly to native machine code using just-in-time compilation before executing it.[14] The compiled code is additionally optimized (and re-optimized) dynamically at runtime, based on heuristics of the code's execution profile. Optimization techniques used include inlining, elision of expensive runtime properties, and inline caching. The garbage collector is a generational incremental collector.[15]

Emphasis mine.

So no, it is not an interpreter. It is definitely a compiler.

It seems people today are confusing “interpreter” with “just in time”…


You're right, ignition does generate bytecode from the AST; it also interprets it.

> With Ignition, V8 compiles JavaScript functions to a concise bytecode, which is between 50% to 25% the size of the equivalent baseline machine code. This bytecode is then executed by a high-performance interpreter which yields execution speeds on real-world websites close to those of code generated by V8’s existing baseline compiler.

Emphasis mine.

https://v8.dev/blog/ignition-interpreter

You can also find this information on the Wikipedia article you linked to

> In 2016, the Ignition interpreter was added to V8 with the design goal of reducing the memory usage on small memory Android phones in comparison with TurboFan and Crankshaft. Ignition is a Register based machine and shares a similar (albeit not the exact same) design to the Templating Interpreter utilized by HotSpot.

> In 2017, V8 shipped a brand-new compiler pipeline, consisting of Ignition (the interpreter) and TurboFan (the optimizing compiler).


Here lies the problem. Just because a component of the v8 compiler is called an “interpreter” it doesn’t mean that JavaScript (via v8) is an interpreted language.

Which is the point I’m making. Back in the 80s and 90s scripting languages often had no compilation stage aside maybe building an AST and were pretty much 100% interpreted. Anything that ran from byte code was considered compiled (like BASIC vs Java, VB6). Interpreters often ran line by line too.

These days most scripting languages that were traditionally interpreted languages run more like Java except compiled JIT rather than AOT.

But it seems people have warped “interpreted” to mean JIT to compensate for the advancements in scripting runtimes. That is a bastardisation of the term in my opinion. But when you look through this thread you’ll see the same error repeated over and over.


No, that isn't true. Java and JavaScript are typically run through a good, optimizing JIT runtime; Python rarely is. Hence the general classification.


Go isn’t run through an optimising runtime either. Plenty of compiled languages aren’t. However Python code is still transpiled to stack based byte code that runs on a virtual machine.

If you want to get pedantic then that byte code is interpreted, but frankly where do you draw the line? Are retro console emulators interpreters, since they translate instructions from one architecture to another in much the same way? Technically yes, but we don’t like to describe them in that way.

This is why I keep saying the term “interpreted language” used to mean something quite specific in the 80s and 90s, but these days it’s pretty much just slang for “JIT”.


>All language stacks get faster over time

You'd be very surprised.


Yeah no.

There is a limit to optimization, and that ends with optimized machine code.

Go being a compiled language, that is cross platform but with separate binaries, should be very close to C in terms of performance.

Languages like Python, Java, C#, are run on virtual machines, meaning you can optimize them only so far.

As far as cloud costs are concerned, a well developed program should cost least with go.

Go is best suited for running intensive compute loads in serverless environments.


> "There is a limit to optimization, and that ends with optimized machine code."

There's a vast amount of concepts when it comes to "optimization" and we are very far from the limit. There's still tons of research just to improve compiler intelligence for high-level code intention, let alone in runtime performance.

> "Go being a compiled language"

All code is eventually machine code. That's the only way a CPU can execute it. Virtual machines are an implementation detail and the latest versions with dynamic tiered JITers, POGO, vectorization, and other techniques can match or exceed static compilation because they optimize based on actual runtime behavior instead of assumptions; balancing memory safety, fast startup, amortized optimization, and steady state throughput.

> "As far as cloud costs are concerned, a well developed program should cost least with go."

This is meaningless without context. I've built 3 ad exchanges running billions of requests per day with C#/.NET which rivaled other stacks, including C++. I also know several fintech companies that have Java in their performance critical sections. It seems you're working with outdated and limited context of languages and scenarios.


> There's a vast amount of concepts when it comes to "optimization" and we are very far from the limit. There's still tons of research just to improve compiler intelligence for high-level code intention, let alone in runtime performance.

True. I agree. But take a program (add two numbers and print the result) and implement it in C / Go / Python / C#. You will find that the most optimized program in each language will generate different output, machine code wise.

While C & Go will generate binaries that have X processor instructions, Python & C# will generate more than X.

And there, you have the crux of the issue. Python & C# require runtimes. And those runtimes will have an overhead.

> Virtual machines are an implementation detail

Sorry, I think you have the wrong idea. A VM is an implementation detail that gets carried into execution. A VM is the runtime that runs on top of the original runtime(tm), that is, the OS. A Go / C program will run directly on the OS. Adding a layer of runtime means reduced performance.

> It seems you're working with outdated and limited context of languages and scenarios.

That just reeks of arrogance.

My point, and the point made by the original comment author, is not that C# / Java / Python cannot run billions of requests; it's that when you compare cloud charges for running those billions of requests, costs will be less for programs produced by Go / C (and C++ too).


> "Python & C# require runtimes. And those runtimes will have an overhead."

Golang also has a runtime with memory safety, garbage collection, and threading. It's just linked into the native executable and run alongside without a VM but still present.

> "That just reeks of arrogance... costs will be less for programs produced by Go / C (and C++ too)"

You claimed that costs would be the least with Go without any other context. This is neither related to my point (that all stacks get faster over time) nor accurate since other languages are used in similar environments delivering the same or better performance and costs. Again this comes down to "optimization" being far broader than just the language or compilation process.


> Sorry, I think you have the wrong idea. A VM is an implementation detail that gets carried into execution. A VM is the runtime that runs on top of the original runtime(tm), that is, the OS. A Go / C program will run directly on the OS. Adding a layer of runtime means reduced performance.

After the bytecode has been JIT compiled it runs as close to the metal as a Go program. If you want fast startup you can even compile the code ahead of time and there is absolutely no difference. Apart from .NET not insisting on its own calling convention, which means it has far less overhead when interacting with external libraries than Go.


This whole thread is childish and pointless. Literally 14 year old me would have recognised these arguments and relished getting stuck into them.

I've seen a stack built almost entirely in TCL which handled 10,000 transactions per second for a FTSE 100 company. If that's possible then language is just one choice in the infinite possibilities of production stacks.


> C program will run directly on the OS. Adding a layer of runtime means reduced performance.

A C program that isn't compiled in an embedded context has a runtime layer on top of it. The OS doesn't call your main; it runs the init function of the platform's C runtime to initialize all the bloat and indirection that comes with C. Just like your compiled C program doesn't just execute the native sqrt instruction but runs a wrapper that sets all the state C expects, like the globally visible errno value no one ever checks but some long-dead 1980s UNIX guru insisted on. C is portable because, just like Python and C#, it too is specified on top of an abstract machine with properties not actually present in hardware. If you really set out to optimize a C program at the low level you immediately run into all the nonsense the C standard is up to.


JIT compilers run machine code without any overhead, when it comes to actual execution. And the overhead of the compilation can be pushed to another thread, cached, etc.


> A Go / C program will run directly on the OS

https://go.dev/doc/faq#runtime


> Virtual machines are an implementation detail and the latest versions with dynamic tiered JITers, POGO, vectorization, and other techniques can match or exceed static compilation because they optimize based on actual runtime behavior instead of assumptions.

In theory, practice is like theory. In practice, it isn't.

Maybe in 20 years, JITs can provide AOT-like throughput without significant memory / power consumption. Not with current tech, except on cherry picked benchmarks.


Like this one, where .NET is capable of outperforming Go and C++ for gRPC parsing?

https://devblogs.microsoft.com/dotnet/grpc-performance-impro...

Or this one where JIT compilers beat Swift?

https://github.com/ixy-languages/ixy-languages


You keep posting a 2-year-old msft blog post which is out of date. The benchmark project page they link to points to this for the most recent results: https://www.nexthink.com/blog/comparing-grpc-performance/

Note that Akka is currently second for single core, and fastest with multicore. .NET has a nice result, but let's use up-to-date results, especially when it still fits your argument that JIT can compete in benchmarks with suitable warmup, etc.


Maybe because I wasn't aware of other ones?

Still, it confirms the point that Go isn't up to the competition, which was my point, not that .NET is the super duper best thing in the world.


Your article shows that Java, Scala, and .NET are all significantly faster than Go in latency and throughput, and can beat C++ as well.

This only affirms our point that JIT can keep up with, or beat, AOT already.


It can, after a warm-up period and by using 10x the memory and likely non-idiomatic code. Sometimes one can live with that, sometimes not.


Look at the benchmarks game’s Java programs - they are very idiomatic compared to most other contenders, yet exceedingly fast.


Only when the users are too clueless to configure JIT caches.

As for non idiomatic code, that is even worse for Go code, as it lacks many of the performance tuning knobs.


I literally said this in my comment.


In addition, Go is garbage collected, so you have latency there as well. However, most Go apps don't use enough memory for the garbage collector to become an issue.

Rust does not have a GC, but it's a much more complex language, which causes its own problems. IMHO, Go has the upper hand in simplicity. However, Go also limits you a little, because you can't write memory-unsafe code by default. You can't do pointer magic like you do in C. But things also get complicated in Go when you start to use concurrency. Again IMHO, event driven concurrency with callback functions (as in nodejs) is easier to wrap your head around than channels.


I don't think this comment deserves being downvoted much. It states an important issue: in simplicity Go might win, but practically you might be better off learning Rust and having the compiler warn you about issues with your concurrency usage for any non-trivial problem.

I don't agree with the "callback functions is easier" part at all though. It takes a bit of wrapping ones head around channels and so on, but when understood, I think they are easier to handle than all those callbacks.


> But things also get complicated in Go when you start to use concurrency.

Things always get complicated when concurrency is involved. And Go handles this much better than most other languages, with language primitives (go, select, chan) dedicated to the task, and CSP being much easier to handle than memory sharing.
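
As a minimal sketch of that style (hypothetical squaring workers; only the go/chan/select primitives mentioned above):

    package main

    import "fmt"

    func main() {
        jobs := make(chan int)
        results := make(chan int)
        done := make(chan struct{})

        // The worker communicates over channels instead of sharing memory.
        go func() {
            for j := range jobs {
                results <- j * j
            }
            close(done)
        }()

        // A producer feeds the worker, then closes the jobs channel.
        go func() {
            for i := 1; i <= 3; i++ {
                jobs <- i
            }
            close(jobs)
        }()

        for {
            select {
            case r := <-results:
                fmt.Println(r)
            case <-done:
                return
            }
        }
    }

No locks are touched: the only synchronization is communication over the channels.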

> event driven concurrency with callback functions (as in nodejs) is easier to wrap your head around than channels.

How so?

The code becomes non-linear; what bit of code is executed when, and how it's synchronized, is suddenly up to the event loop, and entire libraries suddenly have to be written around a concept (cooperative multitasking) which OS design abandoned in the early 90s for good reason.

And this isn't even taking into account that the whole show is now much harder to scale horizontally, or that it is only suitable for io intensive tasks.


From a self-taught programmer's perspective, you mostly write concurrent code to speed up IO. Writing code with callbacks gives me the illusion of linearity. I know it's prone to errors but for simple tasks it's good enough. That's the reason for nodejs's popularity, I think.

CSP is a brilliant concept, but it was a bit hard for me to wrap my head around without diving into graduate level computer science topics. This is not Go's fault, concurrency itself is complicated.

Whenever I try to write concurrent code with go, I resort back to using locks, waitgroups etc. I couldn't make the switch from thinking linearly to concurrently yet.

I might be downvoted again :), but actor based languages seem much more suitable for concurrency for me.

Again, I've been out of programming for a while; the last time I coded, it was a pain to do network requests concurrently in Rust due to needing intrusive libraries. Python was hard because the whole syntax changed a couple of times, you had to find the right documentation with the right version, etc. Go was surprisingly easy, but as I usually plan the program as I write it, trying to write non-trivial concurrent code always results in me getting unexpected behaviour or panics.

The usual material you find on concurrency and parallelism teaches you locks/semaphores using Java; these are easier to wrap your head around. Last I checked, material on CSP was limited to graduate-level computer science books and articles.

I originally got into Go because I thought it was memory safe, the code I write would be bulletproof, etc. Then I understood that the Achilles heel of Go (or any procedural-at-heart programming language) was dynamic memory allocation paired with concurrency.

This is my view as a self-taught programmer having no formal training in computer science. YMMV


> billions of requests per day with C#/.NET which rivaled other stacks, including C++

Ok let's say 2 billions per day.

Meaning 2'000'000'000 / 24 / 3600 ~= 23K req/s

That's nothing, you can probably even do that with python.

On any modern machine, Optimized native code like C++/Rust could handle millions per second without blinking an eye if done properly.


Millions-per-second of what? That's a serious load for any stack and depends on far more than just the language.

And it can also be done by other stacks as clearly seen in the TechEmpower benchmarks. Here's .NET and Java doing 7 million HTTP requests/second and maxing out the network card throughput: https://www.techempower.com/benchmarks/#section=data-r20&hw=...


> Optimized native like C++/Rust could handle millions per second without blinking an eye

Keyword: optimized. Almost anything we have now would handle as much as "native optimized".

And no, neither C++ nor Rust would handle millions (multiple) without blinking because the underlying OS wouldn't let you without some heavy, heavy optimizations at the OS layer.


Bing runs on .NET, as does most of Azure, while Amazon runs on Java; I think they can handle the load.


> Languages like Python, Java, C#, are run on virtual machines, meaning you can optimize them only so far.

A virtual machine is just an intermediate language specification. C, C++, and Rust also "run on a virtual machine" (LLVM). But I guess you mean the VM runtime, which includes two main components: a GC, that the Go runtime also includes, and a JIT compiler. The reason Java gives you better performance than Go (after warmup) is that its GC is more advanced than Go's GC, and its JIT compiler is more advanced than Go's compiler. The fact that there's an intermediate compilation into a virtual machine neither improves nor harms performance in any way. The use of a JIT rather than an AOT compiler does imply a need for warmup, but that's not required by a VM (e.g. LLVM normally uses AOT compilation, and only rarely a JIT).


And even then, people should do themselves a favour and learn about JIT caches, but I guess they keep using Java 8....


> Languages like Python, Java, C#, are run on virtual machines, meaning you can optimize them only so far.

Nope, Java and C# have JIT and AOT compilers available, while Python still has to come up to terms with what PyPy is capable of.

https://devblogs.microsoft.com/dotnet/grpc-performance-impro...


AOT vs JIT compilation is not really that important of a difference to steady state performance. It does impose other performance costs though.

For example, C# and, even more so, Java, are poorly suited to AWS Lambda-style or CLI style commands, since you pay the cost of JIT at every program start-up.

On the other hand, if you have a long-running service, Java and .NET will usually blow Go out of the water performance wise, since Go is ultimately a memory managed language with a very basic GC, a pretty heavy runtime, and a compiler that puts a lot more emphasis on compilation speed than advanced optimizations.

Go's lack of generic data structures, and particularly its lack of covariant (readonly) lists, can lead to a lot of unnecessary copying if not specifically writing for performance (if you have a function that takes a []InterfaceA, but you have a []TypeThatImplementsInterfaceA, you have to copy your entire slice to call that function, since slices are - correctly - not covariant, and there is no readonly-slice alternative that could be covariant).
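
A small sketch of the copying described above, using hypothetical types:

    package main

    import "fmt"

    type Animal interface{ Name() string }

    type Dog struct{}

    func (Dog) Name() string { return "dog" }

    func printAll(xs []Animal) {
        for _, x := range xs {
            fmt.Println(x.Name())
        }
    }

    func main() {
        dogs := []Dog{{}, {}, {}}
        // []Dog does not convert to []Animal: a new slice must be
        // allocated and each element copied (and boxed) individually.
        animals := make([]Animal, len(dogs))
        for i, d := range dogs {
            animals[i] = d
        }
        printAll(animals)
    }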


So Go's runtime is pretty heavy in comparison to... .NET and the JVM? Or pretty heavy in general? This is the first I would be hearing either of these two opinions in either case, so interested in the rationale.


It's pretty heavy compared to C and C++, not .NET or JVM.

The point is that Go, even if AOT compiled, is much nearer to .NET and JVM (and often slower than those two) than to C or C++.

To make that last point clearer: it's generally slower than C or C++ because the compiler is not doing very advanced optimizations, the runtime is significantly heavier than theirs, and GC use has certain unavoidable costs for certain workloads.

Go is generally slower than .NET or Java because its AOT compiler is not as advanced as their JIT compilers, its GC is not tunable and is a much more primitive design, and certain common Go patterns are known to sacrifice speed for ease of use (particularly channels).


> There is a limit to optimization, and that ends with optimized machine code.

There might be a low limit, but there’s no high limit: you can generate completely unoptimised machine code, and that’s much closer to what Go’s compiler does.

> Go being a compiled language, that is cross platform but with separate binaries, should be very close to C in terms of performance.

That’s assuming that Go’s optimisation pipeline is anywhere near what mainstream C compilers provide. Against all evidence.


Just check .NET Framework 4 to .NET Core transition. You can dig MSDN blogs. While Go should be lauded for their performance improvements, it should not be done by reducing other languages/stacks’ achievements.


While that’s what they’d teach you in CS it’s missing enough nuance to be incorrect.

> There is a limit to optimization, and that ends with optimized machine code.

While that’s true, you overlook the cost of language abstractions. I don’t mean runtime decisions like the use of byte code, but rather language design decisions like support for garbage collection, macros, use of generics and/or reflection, green threads vs POSIX threads, and even basic features like bounds checking, how strings are modelled, and arrays vs slices.

These will all directly impact just how far you can optimise the machine code because they are abstractions that have a direct impact to the machine code that is generated.

> Go being a compiled language, that is cross platform but with separate binaries, should be very close to C in terms of performance.

No it shouldn’t. C doesn’t have GC, bounds checking, green threads, strings, slices, nor any of the other nice things that makes life a little easier as a modern developer (I say this not as a criticism of C, I do like that language a lot but it suits a different domain to Go).

> Languages like Python, Java, C#, are run on virtual machines, meaning you can optimize them only so far.

Aside from CPython’s runtime, their instructions still compile to machine code.

These days no popular language, not even scripting ones, interprets byte code upon execution. [edit: I posted that before drinking coffee and now realise it was a gross exaggeration and ultimately untrue.]

You’ll find a lot of these languages compile their instructions to machine code. Some might use an intermediate virtual machine with byte code but in a lot of cases that’s just part of the compiler pass. Some might be just in time (JIT) compiled, but in many cases they’re still compiled to machine code.

> As far as cloud costs are concerned, a well developed program should cost least with go.

> Go is best suited for running intensive compute loads in serverless environments.

That’s a hugely generalised statement and the reality is that it just depends on your workloads.

Want machine learning? Well then Go isn’t the best language. Have lots of small queries and need to rely on a lower start-up time because execution time is always going to be short? Well, Node will often prove superior.

Don’t get me wrong, Go has its place too. But hugely generalised statements seldom work generally in IT.


> These days no popular language, not even scripting ones, interprets byte code upon execution.

Python does. At least, the CPython implementation does, and that’s what 99% of Python code runs on.


Yeah, now I think about it, plenty of runtimes do. I have no idea why I posted that when I know it’s not true. Must have been related to the time of the morning and the lack of coffee.


> Languages like Python, Java, C#, are run on virtual machines, meaning you can optimize them only so far.

A JIT compiler can optimize _more_ than an AOT one can. It has way more profile guided data and can make assumptions based on how the live application is being used.


> Tiny output binaries are also very quick to transfer and spin up in serverless as well

This is a big deal when using servers too.

If you have a single server deploy that could be the difference of 1 second of downtime with Go vs 5-15 seconds for an app in a different language to start up.

For multi-server deploys this could also drastically impact how long it takes to do a rolling update. Having a 10 second delay across N servers adds up: at 10 seconds each, a rolling deploy over 60 servers spends an extra 10 minutes just waiting on start-ups.

For me it hasn't been enough to seriously consider switching away from Python and Ruby to build web apps, because I like the web frameworks that these languages have available for productivity, but it has for sure made me wish I could live in a world where it takes any app 1 second to start up. I know Docker helps with being able to ship around essentially a single binary, but it can't change how long it takes a process to start.


For environments where build is included in deployment (Not best practice at all, but sadly common) it can be very pronounced.

Don't forget that CI systems are increasingly memory/cycle/time based for billing, and having a nice lightweight build/artifact is very appreciated here.


Yes, and speaking of CI there's also linting and formatting code. In large code bases with certain languages it can take a substantial amount of time to do this step (tens of seconds).

Being able to zip through linting, formatting, building, running, testing and pushing a binary / Docker image in 1 minute vs 10 minutes is a massive win -- especially when amplified over a number of developers.


Not trying to nitpick, but knowing that SIM cards run a version of Java, you cannot get much smaller than that, as far as virtual machines go.

Also, Forth and Lisp have really tiny versions with large capabilities.

So, if size matters, I guess lots of languages have you covered on that, not just Go.


+1 for this. There is some upfront cost though around learning Golang. Writing idiomatic Golang code takes a bit of getting used to.


Honestly Golang is not hard to learn, compared to other typed languages like Rust.


I think it depends on personal mental models. I found rust easier to get to grips with because it was more straightforward for the way I thought, whereas with Go I struggled with a lot of things that felt funky to me.

I fully agree that Go is a simpler language up front though at the syntax level.


I found C++ template meta-programming easier to learn than Rust. The only thing new to learn in Go was Go-routines. Followed by a scan of Effective Go, a single page of "gotchas" and one can immediately churn out software. Never found a language/stdlib so easy to start coding in.


Could you share which things those were? I mean error handling is a common issue people first raise an eyebrow at, but other than that I've found it a really straightforward language to use (coming from Java and JS myself, a bit of obj-c/swift, some scala in between).


Error handling is a big one for me. It just seems illogical that it's so easily skipped.

There's a lot of magic in the APIs. This blog post covered it but I hit quite a few of them. I get why they're appealing but I mentally feel the API is wrong which causes friction when I use them. https://fasterthanli.me/articles/i-want-off-mr-golangs-wild-...

I find I don't agree with the package and versioning story either. Publicness based on casing, the way dependencies are declared in the source code.

It's lots of little things that, while I can see the appeal, they create friction with how I like to work.

My background is somewhat similar, Python, C++, ObjC, Swift, Java/Kotlin, JS, C#,GLSL etc...

In particular, I've used some APIs in the past that were very similar to Go's and been bitten by them. E.g. defining dependencies in the source quickly became an exercise in frustration when dealing with multiple packages needing updates for a shared runtime. It's not an issue people would hit when writing microservices or CLI tools, but it hit hard when working on graphics tools where everything runs in the same runtime. I know Go has ways around it, though I'm still not a fan.


I wonder if the point you're making about dependency conflicts is the diamond dependency problem. rsc wrote a very detailed post about the rationale here, and how Go versioning solves diamond dependencies, which you might find interesting: https://research.swtch.com/vgo-principles


Thanks, I'll give that a read.


I think you probably didn’t spend enough time with Go; it is objectively much easier to learn because there is no borrow checker, no generics, no macros, etc.


I've already addressed that the language is simpler, but that doesn't always translate to easier to learn.

If the level of "learned a language" is just being able to write code then Go is easy. But writing an app and dealing with all of Go's oddities made it much more cumbersome for me.


Took me maybe two days to feel comfortable rewriting one of my bigger projects from Python to Go. Absolutely wonderful language.


Why compare a low level language to a high level one? Of course one will be more complex, because the GC takes many of the details out of the picture, reducing the essential complexity of the problem domain.


I have no clue which scenario you're talking about.

The normal web stuff is idling along at plenty of companies, while sometimes having performance issues due mostly to bad code and bad SQL.

If you talk about batch processing, most cases I'm aware of are IO issues.

And memory is very cheap. I really don't care, and it doesn't make any difference in my cases, if the tool is 5 or 500MB.

Not that I wouldn't prefer small, efficient languages and binaries! But I will take a language I'm confident in over a faster language most of the time.


I'll give you a candid scenario that we went through:

nodejs app built with express, receiving json events from 10M clients around the world on a very regular basis, pushing to a queue and checking that it was received by the queue.

Extremely simple app: receive and parse json, do some very simple sanity checks on schema, convert to bson with some predefined mapping, and push to a queue (which happened to be Azure Event Hubs). To handle ~5BN events per month, peaking around 4000 events/sec, it was using up to 20 instances of node at ~200-300MB per instance in memory, with the scale-out trigger set to 75% cpu... The 95th percentile was 20 cores and 12GB of ram in a serverless environment just for that one service. Add to that base container overhead and it was peaking at 16GB memory. That's not nothing in a serverless world. If it was a VM, sure, not too bad, but we're talking elastic containers, and that service was built for 500% tolerance at the high watermark. Not about to provision two 48GB VMs in two AZs and worry about all the plumbing "just in case". That is the point of going serverless.

Moved it to golang and it was handling 2000 req/s on 1 core with 60MB in memory. It has never gone over 3 cores in the 2 years since it was moved.
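
For flavour, the Go side of such a service is roughly this shape. This is a stripped-down, hypothetical sketch: the real service validated a schema and published to Event Hubs, whereas here the queue is just a buffered channel drained by a background goroutine.

    package main

    import (
        "encoding/json"
        "log"
        "net/http"
    )

    type event struct {
        Device string  `json:"device"`
        Value  float64 `json:"value"`
    }

    // Stand-in for the real queue client; sends block if the buffer fills,
    // which applies natural backpressure on the handler.
    var queue = make(chan event, 4096)

    func ingest(w http.ResponseWriter, r *http.Request) {
        var e event
        // Parse and sanity-check the incoming JSON event.
        if err := json.NewDecoder(r.Body).Decode(&e); err != nil || e.Device == "" {
            http.Error(w, "bad event", http.StatusBadRequest)
            return
        }
        queue <- e
        w.WriteHeader(http.StatusAccepted)
    }

    func main() {
        go func() { // publisher: drain the queue in the background
            for e := range queue {
                _ = e // convert and push to the real queue here
            }
        }()
        http.HandleFunc("/events", ingest)
        log.Fatal(http.ListenAndServe(":8080", nil))
    }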


> Tiny output binaries are also very quick to transfer and spin up in serverless as well - often reducing costs further.

I think people underestimate the benefits of services that can start super quickly (IMHO, "cost" is nowhere near the most significant benefit!).

Fast restarts means your service can respond very quickly to a scale up event (which can make the difference between "things get a little slow for a moment" and "everything comes crashing down"). It also means developers iterating in a cloud environment can get faster feedback, which itself seems underrated. Further, when you push a change that takes down production, the rollback is much faster.

Of course, all of these things have various workarounds (e.g., canary deploys so you're less likely to take down prod in the first place) but if you can have 5MB images instead of 5+GB images you can go a lot farther without needing to implement them. Of course, small images are necessary but not sufficient for fast startups; however, in practice the overwhelming majority of Go programs also start up quickly (although in a cloud context, this can be wasted if they're deployed on bulky distro base images).


how about using rust in cloud native env?

I write in go for a living (my company uses it company wide) but I use rust for my personal project

never had the chance to compare both performance


In what concerns .NET, it wasn't really better:

https://devblogs.microsoft.com/dotnet/grpc-performance-impro...


The difference is meaningless, it's just gRPC and it's a MS blog... so nothing to call home about.

And I'm saying this as a C# dev that never wrote anything in Go.


.net 5 (and formerly .net core) was pretty fast. The issue was that in serverless there is a pretty long cold start, since the interpreter (which is like a mini virtual machine) is still invoked. If you're sticking to Azure, they have a highly customized environment for it that I'm pretty sure saves interpreted output vs source code.

I would highly recommend golang to old school C# devs who are very comfortable with threading, thread safety, and a mix of high- and low-level code in the same codebase.

edit: Best go devs I know are seasoned C or C# backgrounds.


Ok, here is another one, then.

https://www.techempower.com/benchmarks/

You should learn to write some Go code then.


> You should learn to write some Go code then.

Why? C# fits the bill for everything I want to do. It's fast enough, it has great IDEs, it runs almost everywhere and there are more UI frameworks than there are JS frameworks /jk ;).

I have almost 15 years experience in it, why would I slow down my next projects by using a language I'm not familiar with (as long as C# fits the bill ofc). Learning a new language isn't a hobby for me, it has to be useful to me (I learned Flutter because MS Ui frameworks aren't a panacea).


Maybe I'm not following the thread (I'm not sure what "In what concerns .NET is wasn't really better" means), but what's the issue here? C#'s fastest framework beats Go's fastest by a marginal amount (60.1% vs 59.1%). That's probably within the error margin. Anyway, drawing sweeping performance conclusions from microbenchmarks is perilous--a high ceiling for an HTTP framework doesn't indicate anything about real-world performance for web applications (which are almost never bottlenecked on the HTTP framework).


The issue here is how many in the Go community like to boast about how fast Go is, only because it is the very first AOT-compiled language that they ever used; they usually come from scripting language backgrounds and are influenced by the traditional Java/.NET hate, without ever having used them in anger, including their AOT tooling.

So then we come to these kinds of assertions, like Go compiles faster than any other AOT language, it doesn't have a runtime, its performance beats Java/.NET, and so on.


I’m sure if you searched hard you could find a few people in the Go community who say silly things like that, but (1) you could find these kinds of people in any community and (2) in the Go community these people are quickly corrected and (3) no one in this thread is making these sorts of arguments (as far as I can tell) so your comment seems wildly off-topic.

Note the difference between “Go is better for certain cloud applications than Python, NodeJS, etc” and “Go is better than all other languages, full stop”.


This says a lot more about the quality of C# vs. Go's protobuf code generator (which was already poor and now even worse - you're going to the trouble of codegen but then implement it using reflection?) than either language itself.


There are plenty of other examples one can reach for.

Go would lose in performace benchmarks that make heavy use of generics or SIMD code in .NET, because:

1) We all know the current state of generics in Go

2) Currently most of the SIMD support in Go is basically "write your own Assembly code in hex", because many of the opcodes aren't even supported by the Go assembler

Then there are other factors, like since .NET 6 there is the possibility to use a JIT cache with PGO, and there is only so much a single AOT compilation can bring into the picture, especially since Go tooling isn't that big on making use of PGO data.

Or the fact that .NET was designed to support languages like C++, so there are enough knobs to turn if one really wants to go full speed.

Naturally, because hating Microsoft and their technologies is fashionable, few bother to actually learn the extent of what ecosystems like .NET actually offer in detail.


> Naturally, because hating Microsoft and their technologies is fashionable, few bother to actually learn the extent of what ecosystems like .NET actually offer in detail.

You're definitely projecting something here.


[flagged]


I didn't say anything bad at all about .NET.


True. But savings are significant only when volumes are large enough.

Also, developing from the ground up to target a serverless runtime means you can reach market faster with python and other langs, but reduce cost by moving to Go or C over time.


Rust gives many times more performance than go but is harder to learn: https://benchmarksgame-team.pages.debian.net/benchmarksgame/...



I kind of have to agree with the guy there; it would be kind of trash if everyone just inlined assembly instructions. Manual SIMD already feels a bit like cheating. Ideally, there should be two solution categories, idiomatic solutions and optimised solutions. I would prefer it if the idiomatic solutions matched what the average programmer would write.

I think we worry about performance far too much when choosing a language. However, when performance is important, it's important to recognise the different classes of languages, say {Go, Rust, C} vs {Python, JS, Ruby} in terms of speed and memory footprint vs productivity.


No inlining of assembly is required to vastly improve the performance of Go in many such benchmarks. Pool or arena allocation are commonly used techniques, and whether or not they are "idiomatic" isn't even up for discussion any more, since the stdlib includes the former.
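
For example, a minimal sync.Pool sketch (the render function is hypothetical), reusing buffers to cut allocations and GC pressure:

    package main

    import (
        "bytes"
        "fmt"
        "sync"
    )

    // Reuse buffers across calls instead of allocating a fresh one each
    // time; Get falls back to New when the pool is empty.
    var bufPool = sync.Pool{
        New: func() interface{} { return new(bytes.Buffer) },
    }

    func render(name string) string {
        buf := bufPool.Get().(*bytes.Buffer)
        defer bufPool.Put(buf) // return the buffer for the next caller
        buf.Reset()
        fmt.Fprintf(buf, "hello, %s", name)
        return buf.String()
    }

    func main() { fmt.Println(render("world")) }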


> … stdlib includes the former.

sync.Pool — binary-trees Go #6 program

https://benchmarksgame-team.pages.debian.net/benchmarksgame/...

----

Go does provide GC

https://news.ycombinator.com/item?id=29323468


Yeah, about such comments:

Go does provide GC

https://news.ycombinator.com/item?id=29323468


Modern .NET is more CPU- and memory-performant than Go.

https://www.techempower.com/benchmarks/


I love Go.

It's the new Java, but with fewer batteries included, and very easy to learn. I spent a day going over https://gobyexample.com/ and became productive the next day. I tried to do the same with Rust, but I failed. Nobody was able to succinctly explain all the things you have to learn to become productive with Rust, let alone understand anyone else's code, since there are too many different ways of doing the same shit.

I fear generics will make Go go in a worse direction, but I'm not losing sleep over it.

I would even go as far as to say Go is the fastest way to land a $100K/yr job with zero knowledge about programming and still be able to live with yourself (unlike with the JavaScript/npm oligopoly).

Solid Go'ers eschew huge dependencies and prefer classic Unix KISS philosophies. It's a breath of fresh air. And like the performance figures show, your apps magically get slightly faster over the years!

Life is short, use Go.


Don't mean to advocate, I've had the opposite experience. I love listening to the guys over at the Go Time podcast, but for simple things I reach for Node (that I desperately want to escape!), and for a better language I reach for Rust, it just feels right, although I completely agree about library code being complex to understand and using a lot of magic to squeeze out every last drop of performance.

As a corollary, I still haven't been able to understand Go modules and the way code is supposed to be laid out! In fact, Go 1.16 broke one of my npm packages as it deprecated `go get` with no real alternative for my use case, short of manually creating a go.mod file for the cloned repository (that I don't own) in question. I guess this creates a natural... distaste for said language.

I think if I had started with Go, I would have loved it from day 1, but now it provides too much friction to get stuff done. Yeah, it's fast (but most new languages are), and it fools C programmers into thinking it's like C due to its syntax, but people coming from the functional world, which they believe to be the holy grail, will run away in fear at the number of mutations (no Array.map or iterators) and pointers flying around. I still haven't managed to get SQLite working with Go on Windows (stop! a lot of developers do use Windows!); the Rust crate just worked.
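
(For what it's worth, a trivial hypothetical sketch of what "no Array.map" means in practice: transforming a slice in pre-generics Go is an explicit loop building a fresh slice.)

    package main

    import "fmt"

    func main() {
        nums := []int{1, 2, 3}
        // No map/filter chaining: allocate the output and loop.
        doubled := make([]int, 0, len(nums))
        for _, n := range nums {
            doubled = append(doubled, n*2)
        }
        fmt.Println(doubled) // [2 4 6]
    }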

The only language that I've actually had fun trying out recently is Elixir, which completely surprised me. The AoC challenges were fun to write in it, and honestly still quite fast!


> for simple things I reach for Node

That's fair enough; it's a language you're more confident with. I've spent two years doing Go regularly now, and at the moment I can whip up valid Go code without tooling or running it in between, but it took me a good amount of time and practice.

> As a corollary, I still haven't been able to understand Go modules and the way code is supposed to be laid out!

I also believe these are the biggest challenges in the Go ecosystem at the moment, and it's probably the most frequently asked question in the various communities. Code / codebase layout is still a very opinionated thing, and while there's a handful of good ideas, there's no standard. (on that note, ignore the "golang-standards" github account, those are NOT standards).


> Code / codebase layout is still a very opinionated thing

And will remain so, because there simply is no silver bullet. Does it sometimes make sense to have a "cmd" directory? Absolutely. Does it sometimes make sense to have every subcomponent live in its own package? Yes.

Does it always make sense? No.


Similar experience, but different outcome:

I used Node (with Typescript) as well for small and simple stuff. Typescript is probably still my favorite language.

Now, I've been using Go for about 2 years and to my shame I haven't really understood Go modules properly. I still need to look up simple stuff in the standard lib regularly. I can relate that Go is quite different from what many of us coming from other languages are used to. To me, Rust syntax felt much simpler to learn.

However, Go is now my standard tool. Rust takes just too much brain power just for memory management. The Go std lib is a huge time and headache saver. Go gets overall so much right that I learned to live with some of its quirks.


TypeScript is actually pretty amazing. A lot of people strongly dislike it, but I think it gives a reasonable amount of safety to such a dynamic language; I would use it even if only for the autocompletion. It encourages a better (imo) style of programming than traditional JS, which did all kinds of dark magic, like modifying the prototype chain.

Sometimes I wonder what a world with strong ESM support, a slim runtime such as just [1], strong typing (like ReScript is trying to do), a unified format/lint toolchain and a solid standard library would look like; look how far Node.js has gotten despite its numerous flaws.

[1]: https://github.com/just-js/just


What's your issue with SQLite+Go on Windows? I might be able to help


I just tried it now (a year later), same compiler error (using msys2):

    # github.com/mattn/go-sqlite3
    /usr/lib/gcc/x86_64-pc-msys/10.2.0/../../../../x86_64-pc-msys/bin/ld: cannot find -lmingwex
    /usr/lib/gcc/x86_64-pc-msys/10.2.0/../../../../x86_64-pc-msys/bin/ld: cannot find -lmingw32
    collect2: error: ld returned 1 exit status

The go-sqlite3 GitHub repo has tons of Windows-related issues, one just opened yesterday in fact. I'm not an experienced C programmer, I'm not well versed in compiler/linker toolchains, I know it's super tricky (especially on Windows) but I don't expect my clients to be experts either. I actually spent a few hours trying to solve this last time, time I could have spent coding.

It's not a problem with Go, per se, but it does create non-trivial toolchain setup requirements on Windows, which ruled out Go for me. The whole toolchain just feels a bit... flakey, setting up GOPATH and such (although that seems to no longer be the case?). Rust's crates, just as an example, are amazing, and I've never had a single problem with them (more than I can say for npm..). I know it's just one example, but sqlite3 is brilliant. This immediately ruled out Go for me for that project.


> It's not a problem with Go, per se, but it does create non-trivial toolchain setup requirements on Windows, which ruled out Go for me. The whole toolchain just feels a bit... flakey, setting up GOPATH and such (although that seems to no longer be the case?). Rust's crates, just as an example, are amazing, and I've never had a single problem with them (more than I can say for npm..). I know it's just one example, but sqlite3 is brilliant. This immediately ruled out Go for me for that project.

GOPATH is gone, thankfully, with the introduction of modules. As with many things, crappy tutorials littered around the internet make things difficult. The toolchain itself is pretty damn easy to set up; it "just works" unless you want to use cgo (or one of your modules does), and in that case you're just left hanging. It's pretty clear that Go, despite _running_ cross-platform, doesn't particularly care about Windows, e.g. the path package basically doesn't support Windows natively, the cgo mess, etc.

> Rust's crates, just as an example, are amazing,

Rust has its own set of problems that you're glossing over here. As an example, many popular crates required nightly last time I looked. Rust's compile times are incredibly painful too, and one of the worst offenders is the #1 json library for rust (serde).


The only popular crate that I can think of that still requires nightly is Rocket[0]; the 0.5 release does not, but the lack of maintenance means that it has been three years since 0.4 and months since the last update. In that time Warp, Axum, Tide, Actix and many more frameworks that are all on stable have eaten its lunch. As for long compile times, I agree; it has gotten heaps better but is still far from Go (nor will it ever get that good), but for most of my projects incremental debug builds are in the ballpark of ~2 seconds, which is good enough for me, and `cargo check` is close to instant. Release builds, yeah, they are slow.

[0]: https://github.com/SergioBenitez/Rocket


> Rust's compile times are incredibly painful too, and one of the worst offenders is the #1 json library for rust (serde).

Go and Rust sort of took opposite paths in terms of the compiler.

Rust spends the compilation time (for release compiles) to generate the fastest binaries the compiler can emit. The article talks about Go adding things like "A recent improvement was in version 1.17, which passes function arguments and results in registers"; that's stuff that compilers like LLVM have been doing pretty much since inception (20 years).

Go has optimized for compilation speed with the understanding that "Well, it's probably not CPU time that is REALLY making your software slow".

Go compiles faster, and Rust compiles faster results.


This is why I think Go could benefit from an LLVM target.


They are painful... Running the executable at the end with such a tiny footprint and outstanding predictable performance is very gratifying, however. It feels like the language and the compiler love you, they just have to work a little harder to deal with the complexity of macros and generics!


Yeah, if you use msys2 then you need mingw and such installed. But that's only an issue if you want to compile from a Windows host. You can compile from a POSIX host and target Windows just fine.

There are also pure Go implementations of sqlite3, should you wish to compile from a Windows host and do away with the msys2 dependency.
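
For example, a sketch assuming the modernc.org/sqlite driver (a cgo-free translation of SQLite that registers itself with database/sql under the name "sqlite"):

    package main

    import (
        "database/sql"
        "fmt"
        "log"

        _ "modernc.org/sqlite" // pure-Go driver; no C toolchain needed
    )

    func main() {
        db, err := sql.Open("sqlite", "test.db")
        if err != nil {
            log.Fatal(err)
        }
        defer db.Close()

        if _, err := db.Exec(`CREATE TABLE IF NOT EXISTS t (x INT)`); err != nil {
            log.Fatal(err)
        }
        var n int
        if err := db.QueryRow(`SELECT count(*) FROM t`).Scan(&n); err != nil {
            log.Fatal(err)
        }
        fmt.Println("rows:", n)
    }

With something like that, no msys2 or C compiler is needed on the Windows host at all.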

I'm curious what your clients are if they need to compile the code themselves (which suggests they're using your code as a package?) but don't otherwise use Go (which suggests you could just share a compiled executable with them, and thus they wouldn't need to resolve this problem themselves). In either case, Go-native code would fix that (albeit you don't get that for free -- you'd see a drop in query performance).

The GOPATH issue I never saw as flaky personally. It's one environment variable; it was dead easy to set up, and once it was set it worked fine. But my personal opinions aside, it was unpopular, so yes, it no longer needs to be set. Ironically though, I personally think that's made things more complicated, not less (or rather it's shifted the complexity elsewhere in the toolchain).

Rust crates are good but you'll find compiling against POSIX C code on Windows to be problematic regardless of the language. Rust just hides that problem because its developers are dogmatic enough that any C code is rewritten in Rust anyway. And Rust is performant enough that such rewrites are usually comparable, often even favorable, in speed to their C counterpart. That rarely seems the case with Go.

So it sounds like there isn't much point trying to solve your issue because you've already rewritten it in another language. However I assure you it is solvable in a number of different ways depending on what your underlying requirements were.


Thank you for your explanation and fair comparison. I'm sure that a seasoned C/Go programmer would skirt around those difficulties or be using POSIX in the first place. I prefer languages that do not impose any constraints on your choice of OS; even if I do prefer Linux for development, it's not good enough to replace my daily driver.

I don't think there will be a Rust implementation of sqlite3 in the near future, it would be a monumental task. Having the C bindings just work with static compilation in Rust was a deal-maker for that particular personal project. Even if it's just because some crate maintainer went to the extra effort to make a robust cross-platform build script, and it has nothing to do with the language itself, it shows that they put in some effort and pride. But I also get tired of the hype and the Rewrite in Rust catchphrase.

I'm a university student, and as part of my student job I had to develop a backend application. Someone has to be able to pick it up after me, hence simplicity and ease of use. In the end, I chose Node.js after strongly considering Go, Rust and Elixir, which have more cohesive tooling (formatter, linter, better module system!); it was the easiest to justify. I couldn't trust myself not to hit issues/complications with Rust or Go, and I just can't afford whoever comes after me running into these issues and being told "oh, you need to set up a C compiler toolchain on Windows".


> I'm sure that a seasoned C/Go programmer would skirt around those difficulties or be using POSIX in the first place. I prefer languages that do not impose any constraints on your choice of OS; I do prefer Linux for development, but it's not good enough to replace my daily driver.

It's not Go that imposed that constraint; it was the C API used for sqlite3 that did. There's nothing stopping you using C in Go on Windows without requiring MSYS2.

> I don't think there will be a Rust implementation of sqlite3 in the near future; it would be a monumental task.

You say that, but there's already a pure Go version, and the Rust ecosystem is famed for rewriting C/C++ stuff. So I wouldn't be so sure.

> I'm a university student, and as part of my student job I had to develop a backend application. Someone has to be able to pick it up after me, hence simplicity and ease of use. In the end, I chose Node.js after strongly considering Go, Rust and Elixir, which have more cohesive tooling (formatter, linter, better module system!); it was the easiest to justify. I couldn't trust myself not to hit issues/complications with Rust or Go, and I just can't afford whoever comes after me running into these issues and being told "oh, you need to set up a C compiler toolchain on Windows".

Sounds like you made a really smart choice there. Mature judgements like these are a skill even a great many senior engineers lack, so to have that kind of foresight while you're still at university is impressive.


You certainly have to set one up for Rust. I'm not sure about Go. Unless I need speed I generally reach for Python, but even it sometimes needs a C compiler. I'm surprised that Node.js never needs one? Usually these "higher level" languages end up (on large projects) ultimately calling out to C/C++ libraries of some sort that aren't included in the "native libraries". Is that not true for Node.js?


Yes, Rust guides you through setting up the MSVC toolchain, then everything Just Works™. I'm not sure which toolchain the Rust sqlite3 bindings crate uses, but it also just works.

Actually, a lot of Node.js is written in plain JS, even large parts of the runtime itself; I've never needed a C compiler. This was mostly to make it easier for contributors who don't want to learn C++.

It's extremely rare for libraries to require any system dependencies, such as a C toolchain. Sharp (libvips bindings) will try to download precompiled binaries from a bucket somewhere; this is all done in ad hoc postinstall scripts, for better or for worse... Other libraries compile their C to JS with asm.js; opinions aside, it actually works pretty well.


Why don't you try a pure-Go implementation? It should have enough features implemented for basic use:

https://github.com/cvilsmeier/sqinn-go


Go is probably the most productive language I'd use for CLI tools and quick server stuff. There are libraries for most things you need, often better documented than their Python counterparts. Single binary and good tooling.

Now, one may hate something about the language, but at the end of the day we bite the bullet and get things done.

That said, Go doesn't have a great ecosystem for website backend dev. You can do it, but it's often less convenient.


When you say website backend dev, what can be improved?


Compared to say Rails/Django, there’s very little magic in the Go ecosystem. You end up rewriting a lot of boilerplate. People would often recommend that you start with bare bones net/http, database/sql, etc., not even a minimal framework like Gin/Echo. This is okay for building performant microservices, but it does get annoying when you’re just trying to spin up your twentieth boring CRUD backend quickly.
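
To make that concrete, here's a rough sketch of what even a trivial stdlib-only endpoint looks like (the /users route and User type are made up for illustration; a framework would at least give you method routing and JSON helpers):

    package main

    import (
        "encoding/json"
        "log"
        "net/http"
    )

    type User struct {
        ID   int    `json:"id"`
        Name string `json:"name"`
    }

    func usersHandler(w http.ResponseWriter, r *http.Request) {
        // Method routing is done by hand with stdlib net/http.
        switch r.Method {
        case http.MethodGet:
            json.NewEncoder(w).Encode([]User{{ID: 1, Name: "alice"}})
        case http.MethodPost:
            var u User
            if err := json.NewDecoder(r.Body).Decode(&u); err != nil {
                http.Error(w, err.Error(), http.StatusBadRequest)
                return
            }
            w.WriteHeader(http.StatusCreated)
            json.NewEncoder(w).Encode(u)
        default:
            http.Error(w, "method not allowed", http.StatusMethodNotAllowed)
        }
    }

    func main() {
        http.HandleFunc("/users", usersHandler)
        log.Fatal(http.ListenAndServe(":8080", nil))
    }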


I wrote a REST API / boilerplate-reducing codegen tool for the Go Fiber framework for this exact reason. Maybe it might help.

https://github.com/tompston/gomakeme


As someone not familiar with Go, I've always wondered why Go people like to recommend the net/http package instead of those frameworks. Are most Go backend services actually written from scratch? I find it quite hard to understand, since in the Java world it's usually Spring Boot everywhere.


I would love to see some "plug and play" user account management -- all the stuff that is good to have when you need it, but that you don't have time to work on when you're focused on business logic (a sketch of just one of these follows below). For example:

- registration handling (with confirmation emails) and updating/deleting account data
- built-in handling of increasing password complexity requirements over time (or forcing a change when some leak happens)
- protection from login attacks, plus login security features (like an email on each login, or blocking login and emailing the user an unblock/allow-access link when someone tries to log in from a different country)
- account levels (with simple addition of new levels over time) and feature toggles
- built-in multitenancy, so I don't need to code it later when the site grows and needs it (something like [1])
- ULIDs (or something similar) instead of simple incremental numbers for IDs
- built-in login history
- higher-level features like account anonymisation instead of deletion, for reporting

[1]: https://blog.checklyhq.com/building-a-multi-tenant-saas-data...
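
To give a taste of the boilerplate involved, here is a naive sketch of just the login-throttling item (in-memory and illustrative only; a real version would need eviction and state shared across replicas):

    package main

    import (
        "log"
        "net"
        "net/http"
        "sync"
        "time"
    )

    type throttle struct {
        mu       sync.Mutex
        attempts map[string][]time.Time
    }

    // allow records an attempt and reports whether ip is still under
    // max attempts within the sliding window.
    func (t *throttle) allow(ip string, max int, window time.Duration) bool {
        t.mu.Lock()
        defer t.mu.Unlock()
        now := time.Now()
        recent := t.attempts[ip][:0]
        for _, ts := range t.attempts[ip] {
            if now.Sub(ts) < window {
                recent = append(recent, ts)
            }
        }
        t.attempts[ip] = append(recent, now)
        return len(recent) < max
    }

    func (t *throttle) wrap(next http.Handler) http.Handler {
        return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
            ip, _, _ := net.SplitHostPort(r.RemoteAddr)
            if !t.allow(ip, 5, time.Minute) { // 5 attempts per minute per IP
                http.Error(w, "too many attempts", http.StatusTooManyRequests)
                return
            }
            next.ServeHTTP(w, r)
        })
    }

    func main() {
        t := &throttle{attempts: map[string][]time.Time{}}
        login := http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
            w.Write([]byte("login attempt accepted\n"))
        })
        http.Handle("/login", t.wrap(login))
        log.Fatal(http.ListenAndServe(":8080", nil))
    }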


Typesafe SQL, typesafe templates (or just useful templates, not the ones in the stdlib, which suck tremendously), model metaprogramming, etc...


Do you have any examples of other languages offering typesafe SQL integrations? I'm not sure what to think of it, given how you'd end up having to map SQL datatypes (across multiple database engines) to Go's own (not very powerful) types, or to a custom type validation layer (which already moves the problem to runtime instead of compile time).

I did see a post the other day criticizing Go's standard template language and approach. Maybe someday someone will stand up and build an alternative, more advanced template engine - it can just be a library, after all.


In Rust: https://github.com/launchbadge/sqlx
In Haskell: https://hackage.haskell.org/package/esqueleto

Either it analyzes the given SQL to determine the in/out types of each query, or it calls the database's describe feature at compile time.


FWIW for SQL, sqlc[1] is probably the nicest SQL layer I've used in any language.

[1] https://github.com/kyleconroy/sqlc
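
To give a flavour: sqlc takes plain SQL annotated with comments like `-- name: GetUser :one` and generates typed Go methods from it. A hand-written approximation of the generated shape (illustrative, not sqlc's exact output):

    package db

    import (
        "context"
        "database/sql"
    )

    type User struct {
        ID   int64
        Name string
    }

    type Queries struct {
        db *sql.DB
    }

    const getUser = `SELECT id, name FROM users WHERE id = ?`

    // GetUser is fully typed: callers never touch raw rows or interface{}.
    func (q *Queries) GetUser(ctx context.Context, id int64) (User, error) {
        var u User
        err := q.db.QueryRowContext(ctx, getUser, id).Scan(&u.ID, &u.Name)
        return u, err
    }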


Really, really wanted to use it a few days ago... but, not sure if it's a MySQL 8 issue, it seems most of my "very simple" table definitions couldn't be parsed :/

Looking at the GitHub issues, there seem to be a lot of "parsing problems" for the schema files :(


I do like Go, but I have to say I rewrote a small (500 line) Go file I'd written into Rust and was constantly thinking "this is better in Rust". And I suspect that Rust performance would have started pretty much where Go 1.18 is now.

But I definitely agree that reading other people's Go code is a lot easier than Rust, especially async code. I don't understand why so many people use async outside of web servers, which AFAIK is the only place you really need it.


Not even necessarily in web servers. If one writes a frontend proxy server which needs to handle > 10k concurrent connections, it's likely the way to go. But for an application server which sees much less concurrency, it might not matter that much.


Web servers and web browsers, which is a lot of today's dev work.


Why would a web browser need async? It can use threads.


Well, nothing needs async; you can use threads anywhere. But it is the predominant model in JS.


It's the model in JS because JS does not really support threading (web workers are more like separate processes), so it's the only way to do it.

The actual browser doesn't need async.

When I say "need" I mean that it provides a decent performance increase.


It really is becoming the path of least resistance. I'm totally not in love with the language itself, though the whole ecosystem -- runtime platform/performance, community, simple multi-arch compilation -- is awesome!


I like how the Tour of Go's generics tutorial is simple to grasp on a first reading; it wasn't as difficult as those arguing on the golang-nuts mailing list before generics landed made it out to be.


Well, if pay is the only thing you want to optimize for and you don't enjoy spending time learning about concepts like Rust's borrow checking and lifetimes... sure, go ahead. But then don't complain later when other people have accumulated way more knowledge than you did with the easy-going / not-learning-much approach.

You need to consider that there are languages which make use of more advanced or generally less known concepts, and that those concepts take more than a day to really understand and make use of. Great knowledge and skill are not acquired within a day; it takes time. The "learn in one day" idea sounds more like you only syntax-translated previously acquired knowledge into Go. That enables you to make use of a new ecosystem, which may be great, but it does not enable you to do things in a more intelligent way. You merely switched ecosystems.

Life is short, learn interesting things.


> Life is short, learn interesting things.

The interesting thing about software engineering, to me, is not the features of the languages; it's the programs I get to write in them.

I don't learn programming languages because I want to know the latest concepts in language design, I learn them so I can write interesting software.


I agree with you in general that you should be learning interesting things. But I also agree with ammmir about the productivity of Go. I think there are (or should be) two worlds of programming in everyone's life. There is the business side, where you aim for quick (and quality) output and productivity, and, for the business's sake, a high development trajectory so that others can work to improve your code.

I believe this is why PHP was/is so successful, and also why Go is so successful. The documentation is solid and contains real-world examples, and the environment is easy to get started in (php example.php, go run main.go). The syntax is also very straightforward and easy to make sense of.

And then there's the other world: the things you do outside of business to educate yourself, so that if the time comes you are prepared. Learning more complex things like Rust, how ASM works, and the inner workings of the Linux kernel. This is great to learn on your own, and nice to have in your back pocket in the business world if it's required. But for the sake of business growth, I think tinkering with these more-complex-to-learn things should mostly be left at home.

I guess it really depends on what type of business you want to run (small team with great talent or large team of decent devs), but at the end of the day I want the code I read at work to be so simple that it's almost boring to look at.

TL;DR: If you do complex things at work, you'll likely be the one getting calls at midnight about it crashing.


> so if the time comes you are prepared.

The thing is, how many times did "the time come"?

15 years ago, I was told I needed to learn all about design patterns, and for 10 years after that, I heard about a new OOP design philosophy almost every month.

Then it turned out wrapping everything in design patterns makes the code harder to read, and we are right back to procedural programming.

7 years ago, friends told me I needed to learn pure functional programming NOW, because it's the future of programming.

Well, long story short, I still don't know what a Monad is, and the code is still procedural.


Ideally the business side would listen to experienced developers on how best to solve a problem -- developers who have learned about a lot of things and widened their horizons. Of course, those will only exist if we encourage learning about lots of concepts. Someone has to do the learning, or else we get stuck at the status quo. Lots of concepts are found in many programming languages. This is why it makes sense to learn quite a few languages at least to some degree, and better yet, to learn languages which significantly differ from other language families.

People will always think of learning something they do not have to learn right now as an "academic detail/exercise", "non-business" or "not needed" -- until they get into a situation where they lack that knowledge/experience. Like you said, it's good to have it in your pocket when you need it. It will be "too late" to learn about something when one already needs it, unless the hypothetical business allows us time to learn about it then and does not expect us to know it already.

It may be the current situation with developers, but I don't buy into the philosophy of being held back by other developers on the team not educating themselves about stuff. That will be the downfall of many businesses. Of course you should not simply jump on every hype train passing by; tech choices should have an advantage over the tech not chosen. I wish businesses were wise enough to recognize that allowing someone to learn for a few weeks might save them many weeks of debugging and fixing errors later, but the reality is that you will have the best chances if you already know your stuff.

With Go and Rust, the practical difference is that Rust enables you to write safer programs, so that you do not get that call at midnight about something crashing. It is not just something you learn about "outside of the business world"; it is very much part of doing a good job, by using appropriate tools. If the business world does not allow developers to choose good tools for their job, then the business world should not call at midnight.

One goal of learning is to avoid making the mistakes others have made many times before you. While Go makes concurrency easily accessible, the Rust compiler watches over your shoulder and points out many mistakes to you. How will you practically match that level of safety in concurrent scenarios using Go? Unless you are a PLT/type-systems expert and a concurrency expert with lots of time to write some kind of code that gives similar guarantees, I would say it is not practical to try to get close to that level of safety.
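
A contrived sketch of the kind of mistake in question: this compiles without complaint in Go and only shows up at runtime (and reliably only under `go run -race`), whereas Rust's borrow checker rejects the equivalent unsynchronized sharing at compile time:

    package main

    import "fmt"

    func main() {
        counter := 0
        done := make(chan struct{})
        for i := 0; i < 100; i++ {
            go func() {
                counter++ // unsynchronized write from many goroutines: data race
                done <- struct{}{}
            }()
        }
        for i := 0; i < 100; i++ {
            <-done
        }
        fmt.Println(counter) // frequently prints less than 100
    }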

Using a safer or more concept-heavy language does not necessarily mean that the resulting program or its code will be more complex. The opposite can be true, as the system does more for you already. You might have to write a lot of code in Go to get the same safety guarantees, or you will need to think really hard when changing anything, because the type system does not protect you like in Rust and any change could introduce a concurrency bug. You will need to write more tests if you want something as safe, and those might be concurrency tests, which are notoriously hard to write. All that code disappears when the Rust type system and compiler take care of it for you. Of course Rust will not be perfect either and will not find every single mistake; it is about avoiding as many mistakes as one can with reasonable effort.

Let's get away from the idea that learning Rust is very hard. It is a programming language with quite a few very common ideas; many things we already know from other languages we learned earlier. There are new ideas as well, obviously, but I would expect every developer to be able to cope with that. It's not like it is an impossible task.


In a world where Scala and Kotlin exist, "It's the new Java" is not exactly a compliment.


Would using a library like Hyperscan improve Go's regex performance?

Reference: https://www.hyperscan.io/


In my understanding, Hyperscan is interesting if you want to check thousands or millions of regexes against a single string. Normal regular expression libraries would force you to create a state machine for each regex and run them all one by one against a string/byte sequence, whereas Hyperscan builds a single state machine from all the regexes, which it then runs against the string/byte sequence. As most states/regexes will usually be inactive, that can dramatically speed up the matching process.

Hyperscan is often used in situations where you have many patterns that you want to test against the same string/byte sequence, e.g. in malware signature detection.

I thought about writing a Hyperscan-like regex library in Golang; I'm just not sure how well that would perform with the dynamic memory allocation. I think if you could ensure allocations happen only once, during initialization, it should be pretty fast. I once built parser generators in Golang, and I remember it was a bit painful optimizing them due to dynamic memory allocation; I think for such low-level stuff C or C++ is still better (though I was not very well versed in Golang at the time, so maybe that was on me).
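
A crude way to approximate the single-automaton idea with the stdlib (Go's regexp is RE2-based, so the combined pattern still matches in linear time) is to join the patterns into one alternation of capture groups. Unlike Hyperscan this reports only the leftmost match per scan, and the patterns and input here are made up:

    package main

    import (
        "fmt"
        "regexp"
        "strings"
    )

    func main() {
        patterns := []string{`foo\d+`, `bar[a-z]{3}`, `baz!`}

        // Wrap each pattern in a capture group so we can tell which one hit.
        groups := make([]string, len(patterns))
        for i, p := range patterns {
            groups[i] = "(" + p + ")"
        }
        combined := regexp.MustCompile(strings.Join(groups, "|"))

        if match := combined.FindStringSubmatch("xxbarabc foo123"); match != nil {
            for i, g := range match[1:] {
                if g != "" {
                    fmt.Printf("pattern %d matched: %q\n", i, g)
                }
            }
        }
    }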


[flagged]


No, you are here to try to be funny. But you don't understand your audience, so you failed.



