
Each op must actually be around 1000 iterations, because that's roughly how far off the ns/op numbers seem to be.


Yep! 1024. Good eye! Some tests, like the atomic ones, use an even larger constant (which explains the crazy 355971 nsec/op result for an atomic increment):

    BenchMarkSize     = 1 << 10            // 1024
    BenchMarkSizeLong = BenchMarkSize << 5 // 32768
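
For context, a minimal sketch of the benchmark shape being discussed, reusing those constants; the package, function name, and body here are a hypothetical reconstruction, not the actual code:

    package bench_test

    import (
        "sync/atomic"
        "testing"
    )

    const (
        BenchMarkSize     = 1 << 10            // 1024
        BenchMarkSizeLong = BenchMarkSize << 5 // 32768
    )

    // Each b.N iteration performs BenchMarkSizeLong atomic increments, so the
    // reported ns/op is roughly 32768 times the cost of a single
    // atomic.AddInt64, which is how a handful of nanoseconds per increment
    // turns into a number like 355971 nsec/op.
    func BenchmarkAtomicAddLoop(b *testing.B) {
        var x int64
        for i := 0; i < b.N; i++ {
            for j := 0; j < BenchMarkSizeLong; j++ {
                atomic.AddInt64(&x, 1)
            }
        }
    }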


Yes, that one really caught my eye, because if an atomic int increment took ~356 microseconds, I have some code that would have exploded a long time ago.

I'm not sure what the idea is there; the benchmark system can accommodate small operations. I've used it before on things like single increments or single interface calls and gotten reasonable answers.

Also, benchmarking an atomic int increment without any contention is not necessarily "useless", but it's certainly not the full picture if one is investigating using it in a contended data structure. (My usage of it is mostly just counters, where I expect thousands upon thousands of instructions between increments of any given one, so there's probably no contention to speak of even on production multi-core servers, and I have no idea how they perform under serious contention. But AFAIK it's just the hardware-supported atomic instructions, so what I don't know about is the hardware, not really Go per se.)
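For what it's worth, here's a rough sketch of how the contended case could be measured with the standard testing harness; this is illustrative, not taken from the repo under discussion:

    // b.RunParallel spreads b.N iterations across GOMAXPROCS goroutines,
    // so the counter is genuinely contended across cores, unlike a
    // single-goroutine loop.
    func BenchmarkAtomicAddContended(b *testing.B) {
        var x int64
        b.RunParallel(func(pb *testing.PB) {
            for pb.Next() {
                atomic.AddInt64(&x, 1)
            }
        })
    }

Comparing its ns/op against an uncontended version gives a rough sense of what contention costs on a given machine.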


> I'm not sure what the idea is there; the benchmark system can accommodate small operations. I've used it before on things like single increments or single interface calls and gotten reasonable answers.

Yeah, I've been wondering if I'm missing something, because adding a second, redundant inner loop inside the b.N loop is common in Go repos, but it seems pointless and needlessly obscuring.
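
For comparison, the plain form the framework expects, with no inner batch loop, looks something like this (again a sketch, assuming an atomic counter like the one above):

    // The harness grows b.N until the timing is stable, so ns/op here
    // reports the cost of one increment rather than a 1024-increment batch.
    func BenchmarkAtomicAdd(b *testing.B) {
        var x int64
        for i := 0; i < b.N; i++ {
            atomic.AddInt64(&x, 1)
        }
    }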



