Go Fuzzing (golang.org)
269 points by 0xedb on Jan 1, 2022 | hide | past | favorite | 71 comments


Like many others, this was the wakeup call I needed to download go1.18beta1 and start writing some fuzz tests. I unleashed it upon my JSON log viewer, which has the goal of never suppressing a line even in the face of egregious parse errors. (The last thing you need when software goes wrong is your UI hiding stuff from you, after all.)

As expected, within seconds it found that the input `{"": 0}` causes a panic. (This is a weird special case that I handled for other classes of this problem, but not this exact case!) I also got to see what happens when you have more bytes on a line than bufio.MaxScanTokenSize and realized that that is pretty low for an interactive program, especially when all you can do is exit.

The ergonomics are excellent. When the fuzzer finds a failing input, it creates a file that contains the test case. You check this in and future invocations of plain "go test ./..." will use this input. "go test -fuzz=FuzzWhatever what/ever" will cause it to search for more failing inputs. Really well-designed.


This is a great example of fuzz testing finding real bugs in the wild. I think people who are against or resistant to fuzz testing haven’t properly grokked where they should be used and what class of bugs they find.


Fuzzing is awesome. I just discovered an accidental O(2^n) code path in my project with fuzzing and fixed it: https://github.com/elves/elvish/commit/9cda3f643efafce2df567...

Edit: shortly after I wrote this comment, fuzzing discovered another pathological input - and that was fixed in https://github.com/elves/elvish/commit/04173ee8ab3c7fc4a9e79...

(In case people are curious, the project is a Unix shell, Elvish: https://elv.sh)


> I just discovered an accidental O(2^n) code path in my project with fuzzing and fixed it

I've used fuzzing to find crash/panic conditions in Go, but never to find slow paths. How does that work?


It timed out. Since this case is O(2^n) the fuzzer managed to build a relatively short input that caused the function to not terminate for a few minutes.


Oh wow, I misread your original post as O(n^2) and thought “no way a fuzz test would notice mere quadratic time complexity”, but exponential is another beast entirely :-D


Mere quadratic time complexity is something I would expect to time out and get caught at fuzzing time, as long as we encourage non-tiny inputs, say 10K and beyond.


Oh, fair. I wonder if there is a practical, general way to test for expected time complexity using fuzz tests…


Great to see fuzzing becoming more mainstream. Ultimately we have absurdly large program state spaces, with even a trivial program's number of possible states vastly exceeding the number of particles in the universe. We need to start finding order-of-magnitude-better approaches for testing.

I almost always write generated tests at this point, with unit tests as a fallback for slow code or niche cases. What I don't generally write, though, is fuzz tests, which would really be a 'next step'. In Rust it's not very hard to do, but it hasn't quite hit the "trivial" mark yet for me, whereas quickcheck is virtually the same amount of work to use as to not use.

Languages like Go adopting and mainstreaming these practices will be a benefit to everyone.

I'm curious if there's documentation on:

a) The coverage approach taken

b) The mutation approach taken

Can you configure these? Plugin different fuzzing backends?


> Great to see fuzzing becoming more mainstream.

Agreed. I wrote a fuzzer at my last job and it found a bunch of bugs right before a release. Nobody knew what fuzzing was so I was attacked by the program owner for trying to break the software and given an insulting performance review for it. Then I had all the fuzzing results and coredumps deleted out of their directories by the program owner so the release looked immaculate. Defense software ftw


Yikes, sounds pretty toxic on their part, but good on you for taking a strong approach to software stability.

Also, writing fuzzers is super fun.


In such projects there is often a promised release date and a contract stipulating e.g. a total test pass rate of at least, say, 95% or something like that.

Now if you use a fuzzer to generate a lot of test cases that fail (if it only saves the failing ones) it will impair the ability to release.

So sadly there are often incentives to not fuzz close to a release.

The management euphemism for selectively removing test cases to make the numbers qualify for a release is "taking a risk-based testing approach".


> I almost always write generated tests at this point, with unit tests as a fallback for slow code or niche cases. What I don't generally write, though, is fuzz tests, which would really be a 'next step'. In Rust it's not very hard to do, but it hasn't quite hit the "trivial" mark yet for me, whereas quickcheck is virtually the same amount of work to use as to not use.

Did you mean generative tests? You're talking about quickcheck and that's what it does.

"Generated tests" would usually be interpreted as codegen'd test which you commit.


Tests with generated input. Call it what you like.


What kind of generated tests are you writing?

Is it more similar to 'golden files'? Generate expected output and assert versus current implementation output?


Just taking what would normally be a unit test and having the input values be generated. Some examples:

1. I have a test for encryption/decryption functions. The data that's provided for the plaintext, additional data, and key, is generated. The assertions are:

assert_ne!(plaintext, encrypted_data);

assert_eq!(plaintext, decrypted_data);

assert_eq!(aad, decrypted_aad);

etc

2. I have some generated integration tests. For example, in our product, there are certain properties that should always hold for a given database entry. I generate a new entry on every test and have the fields for that entry provided by quickcheck, then I perform the operation, query the database, and assert that properties on those values hold.

So to answer your question, yes. Sometimes you want to check a concrete output (ie: "this base64 encoded string should always equal this other value) for sanity, but in general property tests give me more confidence.

I find it works particularly well with a 'given, when, then' approach, personally.

edit: I'll also note that for the base64 case I'd suggest:

a) A hardcoded suite of values.

b) Generate property tests.

assert_eq!(value, base64decode(base64encode(value)));

As well as things like "contains only these characters" and "ends with [=a-zA-Z]" etc.

c) Oracle tests against a "known good" implementation.


Sounds like a sensible mix. There is really no single silver bullet.

We at https://symflower.com/ are working on a product to generate unit tests. Unlike quickcheck/proptest, we promise to find errors, even if they are unlikely (for example [this input](https://github.com/AltSysrq/proptest/blob/master/proptest/RE...) would be trivial for Symflower). Also, unlike fuzzing, our technology is deterministic.

Here's one of our blog posts that explains the approach: https://symflower.com/en/company/blog/2021/symflower-finds-m...


> we promise to find errors

What does this actually mean, because I assume you didn't disprove Rice's theorem?


I don't think disproving Rice's theorem (or, more relevantly but in my opinion still unlikely, proving P=NP) matters here. In fact, a somewhat loose interpretation of Rice's theorem implies that you cannot "prove your way" out of having bugs of arbitrary classes, which seems entirely compatible with "our testing framework will find bugs".


Rice's theorem is applicable; the paradox is the usual trivial case,

    if symflowerSaysItsBroken() {
        DoTheRightThing()
    } else {
        DoTheBrokeThing()
    }
I have no doubt that symbolic analysis can find a large class of problems, but if you write stuff like "we promise to find errors" you will a) dupe a lot of junior devs who will genuinely believe your tool is doing impossible magic, b) alienate experienced devs who know that's 100% marketing fluff and find nothing more specific on your site.

I'm pretty much the ideal customer for a product like this - staff+ with final say about toolchain decisions on a product with a lot of data-driven behavior. But what I want to see isn't a vague "promise" (really? is it in a contract, that you're liable for bugs your tool misses? of course not) but some actual comparison to quickcheck/fuzzing. I want to see your approach finds a superset or at least mostly-disjoint set of what we already have invested in, or finds the same set more effectively. Like, survey through the bugs fuzzing found in go stdlib and show me your tool finds them all faster.

I'm also skeptical of some "purely" coverage-driven approach from the start - coverage-driven fuzzing already has a tendency to miss interesting cases that don't come from branchiness - a classic example is subnormal value handling. Fuzzing usually still finds these eventually just by virtue of exhaustiveness; a tool driven only by symbolic methods better come armed with a _lot_ of encoded knowledge about the language semantics.


From looking at their stuff, I think their value comes from the way they promise to write tests that exercise more code paths than human-generated testing can. This is different from the Go fuzz testing approach, which requires you to write a specialized kind of test by hand. I can even imagine a future evolution of Symflower's product that writes fuzz tests for you.


Go's stdlib has property testing built in. It's not as powerful as some QuickCheck-style frameworks, but it's built right in. I wrote an article on it.

https://earthly.dev/blog/property-based-testing/


If state space is your concern, I would try proofs. Here’s a live demo from Martin Kleppmann about proving something about a stateful distributed system: https://youtu.be/7w4KC6i9Yac.

The tools have come a very long way, Isabelle is quite usable after learning a few concepts.

I believe this is the order-of-magnitude better approach you’re thinking of. We can apply finite effort to a proof about an infinite state space, with no runtime cost.

And, Isabelle has great automation. As seen in the video I shared, proofs can often be found with a little nudging in the right direction. You don’t have to write the whole thing out.


Anyone seen good articles on converting go-fuzz tests to native fuzzing? Specifics on the new corpus format and a converter from go-fuzz would be really useful.

It’s great to hear that the fuzzer is built on go-fuzz so hopefully the conversion process won’t be too bad: https://github.com/dvyukov/go-fuzz/issues/329


I've pre-emptively migrated a couple projects and found that loading the old corpus files wherever you already had them and then `Add`ing them as whatever new appropriate type was the easiest way. The inclusion of types necessitates at least a minor migration. I did not find any official documentation on the format, though it's trivial to read, e.g.:

    go test fuzz v1
    string("\xff0")
Overall while the API (and of course tooling) is a huge step forward, corpus management feels like a small step backwards compared to go-fuzz - I didn't find a way to pull non-crashers into an in-repo corpus other than manually copying them out of my cache directory. And one-file-per-case still blows up a lot of repo management tools.



Past related thread:

Go: Fuzzing Is Beta Ready - https://news.ycombinator.com/item?id=27391048 - June 2021 (53 comments)


What would you say are the main differences between a fuzzer and QuickCheck? The authors of quickcheck don't call it a fuzzer so I assume there is some difference but both seem to randomize inputs?


I think that fuzz testing is best described in the context of two other types of tests that go beyond "traditional" unit/integration tests: property and mutation testing. The three complement each other so well in ways that are hard to describe without trying them yourself.

Quickcheck is a property tester, which essentially can be thought of as a "bytecode fuzzer that also happens to check for correct behavior". It uses an algorithm to generate arbitrary inputs that satisfy a set of constraints, and then ensures that the outputs demonstrate a property of the function being fulfilled. Think about mathematical properties of functions or the lack thereof: commutativity, associativity, and even properties of linear relations you might remember from Linear Algebra. Values that violate the expected properties are saved; you can build unit tests that check these values later or cache them for future runs.

Fuzzing is a form of black-box testing that repeatedly runs software differently and tries to crash it. Causes and behavior of crashes can later be inspected.

Mutation testing alters every statement in your code one at a time to change its behavior. Each alteration is a "mutant". If an alteration does not cause one of your tests to fail, the mutant has not been "killed". Your goal is to have a good kill ratio. This is a far better metric for test effectiveness and dead code elimination than statement/branch coverage.

Property testing is kind of like a bytecode fuzzer and mutation testing is very much like a fuzzer for your tests. But property testing is a white-box form of testing with good knowledge of your code; I'd consider mutation testing to be "grey-box" testing of your test code. "True" fuzz testing is as black-box as you can get: just let an algorithm run your program for a long time (anywhere from hours to months) to find subtle bugs (crashes and other black-box failures) at scales humans struggle with.

In conclusion: Yo dawg, I heard you like tests. So I fuzzed the mutation tests on your property tests to see if arbitrary input on arbitrary code produced arbitrary output without arbitrary exits.


You're comparing fuzzing and property based testing.

Fuzzing is a way of generating a load of random inputs for a system under test. For example if you had a function add(a, b) -> int you'd pass in numbers, strings, bytes, whatever you want and it'll let you know if it breaks.

Property based testing is similar but you check if properties hold across these values and the tool will then shrink this to the simplest failing case. For example you'd say that add(a, b) takes two integers and regardless of the numbers it should always be commutative: add(a, b) == add(b, a).

So in fuzzing you put in junk and see what happens; in pbt you define properties and check they hold across sane inputs.


Isn't fuzzing more than just random though? I was under the impression fuzzing could introspect branches in the code and work its way backwards to generate inputs to reach that coverage?


Yeah it can get that advanced!

But the distinction is still generating junk until you get errors while pbt is about checking business logic (properties) hold under random generators. Saying this I’m sure the cutting edge of both overlap!


IMO, it's much the same and pretty much only a question of API. Property-based testing (like QuickCheck) generates randomness using a PRNG, creates a structure from that, and checks that some property holds for that structure. A fuzzer generates some arbitrary input and usually checks that the program returns successfully (i.e. doesn't crash).

The reason I think they are the same is that we can rather easily turn one into the other and vice versa:

- A PBT test can be turned into a fuzzer-test by sourcing the randomness from the input instead of a PRNG and failing the program if the property does not hold.

- A fuzz-test can be turned into a PBT by sourcing the input from a PRNG.

The real difference is that fuzzers tend to be coverage-guided, i.e. a fuzzer will use what part of your code is executed to guide its mutations and thus steer it towards more "interesting" inputs.


I guess I'm echoing others here, but fuzzing is magical. I've given introductions to fuzz-testing to more than a few developers, and found interesting bugs in my own code using them.

Until now I've used "go-fuzz", but I'm very much looking forward to having real integrated fuzzing in the standard compiler/toolchain. That has to make things easier to explain and add to more projects.

Even with "100% test coverage" finding bugs due to fuzzing shows how hard testing can be.


I will never understand why this has been included in the standard library instead of as a standalone library available for download. Now it's locked to the Go release cycle and has the potential to languish because of backward-compatibility concerns.

The decision to include it is perplexing when other language ecosystems have chosen to keep this kind of functionality out of the standard lib, e.g. requests in python[1]. To quote Kenneth Reitz: "...the standard library is where a library goes to die."

[1] https://github.com/psf/requests/issues/2424


FWIW older design documents have (some) reasoning for integrating fuzzing natively:

- https://docs.google.com/document/d/1N-12_6YBPpF9o4_Zys_E_ZQn...

- https://go.googlesource.com/proposal/+/master/design/draft-f...

One of the original proposals (https://docs.google.com/document/u/1/d/1zXR-TFL3BfnceEAWytV8...) further explains why, apparently written by the go-fuzz maintainers (per issue 329[0], none of them seems in any way broken-hearted about the idea of eventually deprecating go-fuzz):

> go-fuzz suffers from several problems:

> - It breaks multiple times per Go release because it's tied to the way go build works, std lib package structure and dependencies, etc. It broke due to internal packages (multiple times), vendoring (multiple times), changed dependencies in std lib, etc.

> - It tries to do compiler work regarding coverage instrumentation without compiler help. This leads to build breakages on corner case code; poor performance; suboptimal quality of coverage instrumentation (missed edges).

> - Considerable difficulty in integrating it into other build systems and non-standard contexts as it uses source pre-processing.

> Goal of this proposal is to make fuzzing as easy to use as unit testing.

[0] https://github.com/dvyukov/go-fuzz/issues/329


Seems fairly standard for Go.

You mentioned requests - the Go net/http library is widely used, even though it's in the standard library. It didn't languish, it didn't die. The interfaces are also used in most 3rd party libraries and work well.

Moreover, Go's quality standard library is often cited as one of its main strengths.

Thus, the inclusion of fuzzing in the stdlib isn't surprising to me. Not saying the other way around would be bad. It's just not surprising, and I don't think it's a bad choice, looking at Go historically.


You mean the quality of using strings as errors?

https://go.dev/blog/go1.13-errors


Maybe, but I wouldn’t support that argument by holding up Python and its HTTP situation as exemplary. The standard HTTP libraries are a nightmare, requests proves that Python packages can and will languish even outside of the standard library, and writing even a simple HTTP script in Python means you now need to choose between the standard HTTP libraries or tackle dependency management and multi-file deployment issues.

By contrast Go ships with a high quality standard HTTP library that has lasted a decade and no “requests” equivalent has risen up to challenge it.

Note also that Go’s testing situation in general is much nicer than many other languages precisely because things are baked into the standard library—no need to quibble over which test framework to use or to memorize each framework’s equivalent for “run tests that match this pattern” and so on.


The fact that requests has "languished" (not sure how tbh) doesn't really change the fact that the Python stdlib is a bit of a hilarious disaster from afar. There are tons of cases of "that shouldn't be in std" where libraries have quirky, locked-in behaviors, or an entire major breaking release, with a decade of migration work, has to be made to clean up the mistakes.

Python should be a case study in the many ways not to build a language.

> By contrast Go ships with a high quality standard HTTP library that has lasted a decade and no “requests” equivalent has risen up to challenge it.

Yeah this also means that you need to update your compiler when there's a vulnerability instead of just a single point release in a library. This happens with some frequency.

The parent poster is right, in my opinion.


> Yeah this also means that you need to update your compiler when there's a vulnerability instead of just a single point release in a library.

Has updating the Go compiler actually been an issue for you in the past? To me, with Go's stability, it's never been more disruptive than updating a library in practice, so I don't see much of a difference.


> Has updating the Go compiler actually been an issue for you in the past? To me, with Go's stability, it's never been more disruptive than updating a library in practice

I've run into issues with several go version updates.

Off the top of my head, all of the following caused breakages:

1. go 1.4 making directories named 'internal' special and un-importable. Cross-package imports that used to work would no longer compile.

2. go 1.9 adding monotonic clock readings in a breaking way, i.e. this program changed output from 1.8 to 1.9: https://go.dev/play/p/Mi6cGCPd0rS (I know it looks contrived, but I'm not digging up the actual code that broke)

3. The change of the http.Server default to serving http2 instead of http/1.1 broke stuff. Of course it did. How can that possibly _not_ break stuff?

4. The changes in 'GO111MODULE' defaults broke many imports which had either malformed or incorrect go.mod files. This one was quite painful for the whole ecosystem.

5. go1.17 switched to silently truncating a lot of query strings. Of course that broke stuff, how could it not? https://go.dev/play/p/azODBvkb-zK

Those are all intentional breaking changes which were not fixed upstream (i.e. are "working as intended"). The unintentional breaking changes are vastly more common: changed error messages that break string-based error detection (so many stdlib errors aren't exported that you have to do string matching), and just plain dumb bugs in the stdlib. Those usually do get fixed in point releases. Take a gander at those release notes; many of the issues highlighted in those changelogs come from pain people hit during upgrades.

I think the majority of go version upgrades have had some amount of pain, and most of them have been far more disruptive than updating a well-built library.

I would much rather update just my fuzz-testing library in a commit, and be confident that it's only used in tests so CI is good enough to validate it, than have to update that and my http package and my tls package and my os package all at once and have to look for bugs _everywhere_.


I admit I wasn't bit by these changes and had a much better experience overall. Thank you for the long write-up.

However, I think you only mentioned changes in major releases, whereas in this scenario (vulnerability fix) a minor release would suffice (the parent mentioned updating to a point release of a library). Did you also have issues with minor releases?


It's true that minor releases have been much less rocky, but I think the overall point that upgrading a lot of things at once is more annoying than updating a smaller number of things at once still holds.

It's unlikely a fuzzing library has a security issue anyway since it's for test code, so the more pragmatic concern is that new fuzzing features may be adopted slowly because, say, the feature requires go 1.20, but go 1.20 broke some part of net/http for the 10th time.


> a minor release would suffice

Does the Go compiler have LTS releases? Like if I'm on 1.0, but 1.5 is out, are they going to release a 1.0.1 for a vuln that impacts 1.0+ ?

It seems unlikely but I'd be curious to know.

Libraries release patches more frequently, and it's also generally easier to apply a patch yourself if you need to.

Otherwise a point release may still imply a major release.


The most recent 2 major releases get the fix in case of security issues[0]. This means you can be up to 6 months behind the newest release to never be forced to do a major version update under time pressure.

[0]:https://github.com/golang/go/wiki/MinorReleases


Thanks that's pretty solid.


Your complaints #3 and #5 are library changes, not compiler changes, yet you were complaining about them in the context of compiler updates. They would have bitten you exactly as much if net/http was not in the stdlib.


It's true that those are library changes, but I think when the parent post asked "Has updating the Go compiler actually been an issue for you in the past", they meant "updating the Go distribution", not just compiler, since those two things have been conflated.

And in a sense, yes updating the go compiler was an issue with those because updating the go compiler forcibly updates net/http, with no option to not tie those exactly.


I don't know that I'd hate it if I were a go dev, it would just be a bit annoying for a number of reasons.

For one thing I update libraries all the time so it's a very fast, simple, well worn operation. Updating the compiler is a bit more of a chore and I'm going to worry a bit more about the impact (since it's global to all code vs local to one package).

For another, I would want to make sure I had tooling that could tell me "is this library in use by service X". I don't know Go's story there, but I would hope it's trivial to do so for a library but I suspect if it's part of the standard library that may be trickier. If not, nbd.

It's a bad smell to me, but if I were a Go developer it wouldn't break me.

Perhaps ironically, until this native fuzzing package, upgrading the compiler if you had fuzz tests would be one case where things would likely break.


> Updating the compiler is a bit more of a chore and I'm going to worry a bit more about the impact (since it's global to all code vs local to one package).

This is indeed a chore in other languages. In Go, the compiler is trivially installed. Typically this just means bumping the version in your Dockerfile and `gvm use $newVersion --default`.

> For another, I would want to make sure I had tooling that could tell me "is this library in use by service X". I don't know Go's story there, but I would hope it's trivial to do so for a library but I suspect if it's part of the standard library that may be trickier. If not, nbd.

This is supported out of the box by Go’s tooling. `go mod graph` is what you’re looking for.


> This is indeed a chore in other languages. In Go, the compiler is trivially installed. Typically this just means bumping the version in your Dockerfile and `gvm use $newVersion --default`.

The issue isn't with installing the new compiler, that's trivial in our use case as well (for Rust at least, Python's a disaster, but I accept that). The issue is ensuring compatibility, ensuring no new bugs are introduced, etc. It's just a much heavier change to your produced binary vs changing a package.

> This is supported out of the box by Go’s tooling. `go mod graph` is what you’re looking for.

Cool, thanks.


It is for me; I can't just `go build` in the new version, so I'm keeping the software on the old compiler.


> The fact that requests has "languished" (not sure how tbh) doesn't really change the fact that the Python stdlib is a bit of a hilarious disaster from afar.

Agreed that the Python stdlib is a disaster, but my point was that the OP contradicts himself by arguing that stability guarantees hold the standard library back while pointing to requests which itself hasn’t made many/any intrepid breaking changes or even sensible non-breaking changes a la async support. Note that “stability is bad” is the OP’s point of view and not mine.

> Tons of cases of "that shouldn't be in std" where libraries have quirky, locked in behaviors, or an entire major breaking release with decades of work to migrate has to be made to clean up the mistakes.

But the parent pointed to the requests library which is not in the stdlib. Note also that Go has been around for a decade and has needed no such major migration initiative.

> Yeah this also means that you need to update your compiler when there's a vulnerability instead of just a single point release in a library. This happens with some frequency.

The frequency is very low and updating the compiler is minimally risky due to Go’s strong compatibility guarantees (precisely the kind of stability the parent opposes). This is a much lesser problem than dependency management in Python (I have 15 years of experience in Python and 10 in Go).


Note that Python was already almost two decades old when v3 came out, and is now three decades old.


Python 2 was not 2 decades old, and anyway the writing was on the wall many years before Python 3 was released (these things don’t happen over night after all).


Just look at the implementation of namedtuple. It is an utter abomination from an implementation perspective. I’d take a junior developer who sent a code review with that out back and politely beat some sense into them.


> I wouldn’t support support that argument by holding up Python and its HTTP situation as exemplary.

I wasn't holding it up as exemplary. I was using it as an example to show how other language ecosystems have reasoned about what gets included in the stdlib. I can't think of another top20 language that ships fuzzing in the stdlib.


The Go fuzzing tool takes advantage of compiler instrumentation. It can also work with built-in Go types, in comparison to traditional fuzzing tools that just work with bytes. Additionally, integrating it into the testing tool allows it to be as easy to write as a unit test. This can help provide a batteries-included fuzzing experience.


> The Go fuzzing tool takes advantage of compiler instrumentation.

This is the main benefit. I've been using go-fuzz for years and compiler upgrades (especially any changes related to modules/GOROOT/GOPATH) was a pain because it always behaved slightly differently.

> It can also work with built-in Go types, in comparison to traditional fuzzing tools that just work with bytes.

This could have been done just as efficiently without upstream integration.


This is just a work around for:

a) A lack of strong typing

b) A custom compiler toolchain

llvm already has instrumentation/coverage support and generics make it easy to work with higher level constructs than bytes, although you generally do want to just work with bytes when fuzzing imo.

The language is weak, therefore the language has to add more and more batteries-included because extending it is purposefully difficult.


> This is just a work around for:

> [...]

> llvm already has instrumentation/ coverage support

I mean, that supports the idea of having fuzzing support in whatever core you have.


My point is that between Go's custom compiler backend and inexpressive typing there's much more need to build things like this in directly vs what other languages can do by just using llvm/gcc. Like if Go developers want sanitizers equivalent to what llvm packages they'll have to build that themselves, although that won't have the same issue with inexpressive types.


> Like if Go developers want sanitizers equivalent to what llvm packages they'll have to build that themselves

Go has used LLVM’s ThreadSanitizer since 1.1.


I don't think that really addresses my point, it just shows that they did the work for one sanitizer already.


> llvm already has instrumentation

The Go toolchain actually supports emitting instrumentation for LLVM's libFuzzer with -gcflags=all=-d=libfuzzer.

> therefor the language has to add more

Okay, and so? If this ends up making fuzzing more popular and easy-to-use, I frankly don't care if it was added as a library or deeply integrated into the toolchain.


> The Go toolchain actually supports emitting instrumentation for LLVM's libFuzzer with -gcflags=all=-d=libfuzzer.

Sweet, that's a smart approach.

> Okay, and so? If this ends up making fuzzing more popular and easy-to-use, I frankly don't care if it was added as a library or deeply integrated into the toolchain.

I don't care either because I don't write Go, so just the fact that it's supported is nice for me since it encourages this in languages I do care about.

But if I were a go developer I might care a lot about how my language evolves, what's built in, what's a library, what the capabilities are, what tools I can integrate with, etc.

It sounds like they've done a pretty good job with regards to this implementation though, happy to see it.


Python had an 18 month release cycle for most of its life (now 12 month), while Go has a 6 month release cycle.

Many Python devs use the OS packaged Python versions, while Go devs tend to use the latest release.

Integrating this in the stdlib means that more people will use basic fuzzing functionality. There's nothing preventing third party fuzzers from continuing to develop.


Just because the standard lib of language A is bad doesn't mean every standard lib must be bad.

A great standard lib is one of the best things that can happen to a language. The advantages are huge when it comes to maintaining a project. It's just that it's really hard to create and maintain a standard lib properly.

The Go team once again does a remarkably good job here. Other languages might not have the resources or expertise to extend and maintain the language and standard lib as well as they do.

Some people argue that there's something inherently bad about having a great standard lib, which is of course not true. These arguments seem mainly an excuse for the thin standard lib of their favorite language ...


I think it is great in general. OTOH, nobody prevents anyone who wants to from using a third-party library. Third-party libraries also die, like https://github.com/go-check/check


Counterpoint, the standard library is available everywhere the language toolchain exists, external dependencies are hit and miss.



