
Interesting, though not too surprising. Do you remember what the regexes were that your assignment used?

Go's regexp package does have a couple of advantages over Python, Perl, and so on: 1) it's guaranteed linear time in the length of the input, regardless of the regex, see https://swtch.com/~rsc/regexp/regexp1.html, and 2) it's a relatively simple implementation.




I have done some optimisations in Go regex recently; I have a talk coming up on Saturday:

https://fosdem.org/2022/schedule/event/go_finite_automata/

This repo collects all the changes so you can try them out: https://github.com/grafana/regexp/tree/speedup#readme


That's excellent! Those all look like pretty nice small code changes that all add up. I especially like the very small "avoid copy" change (https://go-review.googlesource.com/c/go/+/355789) that adds up to a 30% speedup on many benchmarks. I hope they get included in Go 1.19. Good work!


Simplicity of implementation isn't what users need, though; they need performance. For example, you can make GCC into a much simpler C compiler by compiling at -O0, but in practice nobody does that.


Totally agreed: almost all users (me/GoAWK included) want performance and don't care nearly as much about simplicity under the hood. Simplicity of implementation is of value for educational purposes, but we could easily have a small, simple 3rd party package for that. Go's regexp package is kinda too complex for a simple educational demonstration and too simple to be fast. :-)

I actually tried BurntSushi's https://github.com/BurntSushi/rure-go (bindings to Rust's regex engine) with GoAWK and it made regex handling 4-5x as fast for many regexes, despite the CGo overhead. However, rure-go (and CGo in general) is a bit painful to build, so I'm not going to use that. Maybe I'll create a branch for speed freaks who want it.

I've also thought of using https://gitlab.com/cznic/ccgo to convert Mawk's fast regex engine to Go source and see how that performs. Maybe on the next rainy day...


Have you considered writing your own string matcher for the simple cases like fixed patterns?

I got some pretty solid wins just by guarding some regex executions with simple strings.indexof calls.


Yeah, that's a good idea. I did consider it, but haven't tried it yet. Do you hook in and look at the regex string before it's compiled, or do you hook in at the parsed regex AST level (e.g. regexp/syntax in Go)?


For something like awk, I think you'd look before compiling, then create your own matcher. With an abstract Matcher interface that regexp implements.

It's C, but openbsd grep does something like this because libc regex is super slow. Look for fastcomp on https://github.com/openbsd/src/blob/master/usr.bin/grep/util... It's not super sophisticated, but enough to beat the full regex engine.

In the Go code where I did this, it was a little different, with a static pattern. Something like "(\w+) apple" to find all apple adjectives or whatever, but the regexp wasted so much time matching words that weren't followed by "apple". A quick scan for "apple" to eliminate impossible matches made it faster. This depends more on knowing the regex and the corpus, so it's probably less relevant for awk.


Go's regexp package even exposes a routine for this: https://pkg.go.dev/regexp#Regexp.LiteralPrefix

It's been a while since I've looked at the source code, but it is almost certainly already doing basic prefix literal optimizations.

The more advanced literal optimizations come into play when every match must start with one of a few possible characters or strings. Then you get into things like Aho-Corasick or fancy SIMD algorithms (Teddy).


Oh, regarding rure-go: the bugs note about using int is inaccurate. The Go spec says the max length of any object can be stored in an int, so you can't build a too-big haystack.


Ah that's right! Nice catch, thanks.


I think GNU grep does something similar. When it has a fixed pattern it uses Boyer-Moore [1].

[1]: https://lists.freebsd.org/pipermail/freebsd-current/2010-Aug...


> almost all users (me/GoAWK included) want performance and don't care nearly as much about simplicity under the hood.

That works up until you're at scale, where the complexity of the implementation causes hard-to-understand edge conditions, and bugs arise simply because the complexity is beyond what the developers can handle.


Exactly. At some point the code is such a mess that further optimizations and features are harder to get merged. And the smaller the number of contributors, the bigger the problem gets, until the main maintainer calls it quits and a completely new library is needed.


Is that also true when even the essential complexity of the project is quite demanding? Runtimes, especially those with JIT compilers, are not easy to begin with.


It is even more important then.


I don't exactly agree. Sure, end users don't care about the implementation directly, but simplicity of implementation does affect them indirectly. Go is already over 10 years old, with maybe many more years ahead. All code bases rot. I think the simpler the implementation, the easier it is to cure rot and code smells, which hopefully means Go has a long life because the implementation stays easy to work on over time. So while users maybe don't care, it does impact them.


You can make the same argument about CPUs. Modern CPUs are horrendously complex. But nobody is asking to remove, say, out-of-order execution on simplicity grounds, because that would hurt users and cost millions of dollars at scale for no reason other than engineering aesthetics.

It's only in a few areas, like programming languages and operating systems, that we're obsessed with simplicity, and it makes little sense to me.


Branch prediction is a CPU "complexity" that got us into some amount of trouble.

I don't see simplicity as a "virtue", as such: it's all about the ability to reason about a system. The more complex it is, the harder this becomes. This makes it harder for implementers to work on it, and harder for users to figure out what is going wrong.

On the other hand, complexity often offers useful enhancements such as features or performance. There is some amount of tension here, which is hardly unique to software: you see this in, say, tax systems, regulation/law, design of cars and all sort of things, etc.


I'd say it's probably because we're all worse at writing code than we'd like to imagine. So writing needlessly complex code, especially in a FOSS compiler or system, makes little sense, since some day someone else is going to have to step in and learn it.


I agree, but simplicity of implementation is a net positive in a vacuum. When balanced against things like performance, it's definitely worth some trade-offs for better performance... but simplicity of implementation definitely has lots of upsides that users indirectly benefit from. Therefore, I think it's important to at least have a balance.


Simplicity of implementation also contributes to Go’s fast compile times, which is a different sort of performance. Trying to find a sweet spot between “slow” interpreted languages and “fast” compiled languages with long compile times (e.g. C++ template hell) is a worthy goal.


I think multi-profile compilation is much better: have a really fast debug build that hardly does any optimization, and a release build that can take whatever amount of time but is really optimized.


> Simplicity of implementation isn't what users need, though; they need performance.

It's a tradeoff, in the end. I mean sure, users don't really need to know how things work under the hood, but the people building and maintaining the language do; Go's philosophies on maintainability extend to the language's internals as well.

This is one reason why generics took over ten years to land; they needed to find the middle ground between functionality and sanity within the compiler. Java's generics implementation takes up a huge chunk of the spec and tooling. I don't even want to know about Scala's. It added so much more complexity to the language that I'm not surprised Java stagnated for as long as it did.


> Java's generics implementation takes up a huge chunk of the spec and tooling.

Does it? It is a pretty straightforward generic implementation, imo.


Regexes are one of the things you see attacks on all the time (mostly DoS). Users care a lot about security, and simplicity of implementation correlates with it. It's not something users need, but they do benefit from it.


Performance is relative, so I'm not really sure of the point being made here. Sure, if your program has regex as the constrained resource this matters, but again, it's all relative.


Simplicity is what users indirectly need.

Go is not about providing the fastest implementations out of the box, it's about having a broad toolset in the standard library to solve the problems Go was built for.

Faster (and often more complex) implementations are a maintenance burden for Go contributors. It's far better for a high performance regex library to be a third party package for those that need it.

Those for whom regex is a limiting factor in performance will soon find out why. But for most people, fast regex is nothing compared to the overhead of a simple HTTP request.


Given that Go's original raison d'etre was Internet-facing services, the choice for guaranteed linear time execution makes sense as a default.


That ... isn't really true at all. Go's original raison d'être was log processing.


I'd be interested in reading more about that. Do you have a reference?


I think they're referring to Rob Pike's Sawzall language. However, I wouldn't call Go a descendant of it.


As Knuth opined in an often-quoted passage, it is criminally negligent to forgo a 10% performance improvement just to keep the implementation simple.


Way to never deliver. Absolute numbers like this are pointless.


> 1) it's guaranteed linear time in the length of the input

What’s the multiplicative factor? Does it dominate for “typical” regexes?


For most regexes, backtracking and the Thompson NFA have the same asymptotic complexity, which is why most languages adopted backtracking. The implementors of such languages knew what they were doing, especially when you consider that adopting the Thompson NFA means you give up backreferences. The differences only arise with pathological regexes.

I used to think that backtracking was superior to the Thompson NFA in practice on typical regexes, but modern implementations of the Thompson NFA have shown that I was wrong in that and the Thompson NFA can match backtracking's performance. Still, the situation isn't nearly as simple as Russ's article makes it out to be by only focusing on /a?a?a?aaa/, a regex which nobody would write. (This is not to deny that people do write pathological regexes from time to time. But what's the size of the userbase that writes pathological regexes compared to the size of the userbase that uses backreferences?)


FWIW, I would say that it's difficult for a Thompson NFA on its own to beat backtracking in non-pathological cases. So I actually think your prior is still mostly correct.

Now a hybrid NFA/DFA (or a "lazy DFA") that does subset construction at search time using a Thompson NFA can definitely beat out a backtracker. A lazy DFA will generally match the speed of a fully compiled DFA, and is about an order of magnitude faster than a Thompson NFA simulation:

    [andrew@frink regex-automata]$ regex-cli find nfa thompson pikevm "@$medium" '(?m)^\w{30}$'
    build pike vm time:  6.584596ms
     create cache time:  278.231µs
           search time:  7.798138892s
                counts:  [3]
    [andrew@frink regex-automata]$ regex-cli find hybrid dfa "@$medium" '(?m)^\w{30}$'
               parse time:  24.256µs
           translate time:  20.202µs
         compile nfa time:  5.966991ms
               nfa memory:  486196
    build hybrid dfa time:  3.137µs
        hybrid dfa memory:  486196
        create cache time:  21.793µs
              search time:  406.746917ms
        cache clear count:  0
                   counts:  [3]
    [andrew@frink regex-automata]$ regex-cli find dfa dense "@$medium" '(?m)^\w{30}$'
                parse time:  22.824µs
            translate time:  15.513µs
          compile nfa time:  6.000195ms
                nfa memory:  486196
    compile dense dfa time:  106.35009ms
          dense dfa memory:  4501024
     dense alphabet length:  117
              dense stride:  128
               search time:  448.568888ms
                    counts:  [3]
    $ ls -lh $medium
    -rw-rw-r-- 1 andrew users 280M Jul 14  2021 /home/andrew/data/benchsuite/subtitles/2018/OpenSubtitles2018.raw.sample.medium.en
(This is using a yet-unreleased tool that is part of ongoing regex development.)


People will gleefully write that regex if it causes a denial of your service. People would similarly blow up ftp servers with "ls starstarstarstar" globs.


Arguably the best implementation would be a backtracking-based one that goes fast, with a strong API for controlling the amount of effort the implementation puts into matching, because the major concern is the truly pathological cases. For the most part I'm not all that concerned about cases where one or the other implementation is slightly faster on this or that input; I'm worried about the DoS attack. However, determining the correct amount of "effort" to put into a match could be nontrivial. There's the obvious "time out after this amount of clock time", which has its uses for security, but if you want something more sophisticated (including measuring actual effort applied rather than just clock time, which only loosely correlates with how long the match has actually run), it's difficult to even express what you want, let alone implement it sensibly in the current programming environment.

If such a thing exists, I haven't seen it, but that's not to say it doesn't exist. In general, though, programming languages and libraries have poor-to-nonexistent support for bounding the resources spent on a given internal computation. It's a rather niche use that's hard to justify much effort on, especially since bounding the resources for a whole program, when well implemented, tends to cover most of the use cases.


It was 7 years ago, so I don't exactly remember what regex I used. I remember it was an information retrieval course, and I was supposed to write a crawler and index the webpages. So I think part of it was definitely to find all the links within a webpage. I was working on extra credit, so there might have been some funky stuff.




