
I think the parent's argument is that, since they only evaluated their algorithms on sorting arrays of 32-bit or 64-bit integers, it is fair game to compare against integer-sorting algorithms for which we have better complexities than O(n log n).



This seems like a valid reading of iopq's clarifying point, in the context of what might still exist to be discovered in the set of theoretically possible new algorithms, though it doesn't change the fact that dheera set the bar much, much higher than most people would agree with.

Thank you.


But the number of elements in the array is also a 32- or 64-bit integer, so you still cannot do better than O(n log n), even with fancy radix sort or whatever.


Not with radix sort, but you can do better when your elements are integers, even if there are a lot of them, i.e. even when n ~ 2^32.
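For context on why fixed-width keys change the picture at all: a plain LSD radix sort already runs in O(n) passes over 32-bit keys (four 8-bit passes). This is a minimal illustrative sketch of that baseline, not the more sophisticated algorithm cited below:

```c
#include <stdint.h>
#include <stdlib.h>
#include <string.h>

/* LSD radix sort on 32-bit unsigned keys, 8 bits per pass.
   Four counting passes => O(n) total for fixed-width keys. */
static void radix_sort_u32(uint32_t *a, size_t n) {
    uint32_t *tmp = malloc(n * sizeof *tmp);
    if (!tmp) return;
    for (int shift = 0; shift < 32; shift += 8) {
        size_t count[257] = {0};
        /* Histogram of the current byte, offset by one slot. */
        for (size_t i = 0; i < n; i++)
            count[((a[i] >> shift) & 0xFF) + 1]++;
        /* Prefix sums turn counts into starting offsets. */
        for (int b = 0; b < 256; b++)
            count[b + 1] += count[b];
        /* Stable scatter into tmp, then copy back. */
        for (size_t i = 0; i < n; i++)
            tmp[count[(a[i] >> shift) & 0xFF]++] = a[i];
        memcpy(a, tmp, n * sizeof *a);
    }
    free(tmp);
}
```

The stability of each counting pass is what lets the per-byte passes compose into a correct full-key sort.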


Interesting; got an example or link? What exact asymptotic form do you mean by "better" here?


There are many such algorithms. For example, [1] runs in expected linear time, deterministic O(n log log n) time, and linear space.

Note that the proposed ML (micro-)algorithms rely heavily on the fact that they are sorting 32-bit or 64-bit integers because they are using conditional moves. A conditional move avoids a branch by converting the instruction to a no-op when the condition doesn't hold. This is fine when moving 4 or 8 bytes because it can be done in a single clock cycle, but you can't apply the same technique when sorting larger objects.
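The conditional-move trick described above can be sketched in C: the ternaries below typically compile to cmov instructions on x86-64 rather than branches. This is a sketch of the general branchless compare-exchange technique, not the exact ML-generated code the thread discusses:

```c
#include <stdint.h>

/* Branchless compare-and-swap of two 32-bit values. Each ternary
   typically lowers to a conditional move, so there is no branch
   for the predictor to miss. */
static void sort2(uint32_t *x, uint32_t *y) {
    uint32_t lo = (*x < *y) ? *x : *y;  /* pick the smaller value */
    uint32_t hi = (*x < *y) ? *y : *x;  /* pick the larger value  */
    *x = lo;
    *y = hi;
}

/* A 3-element sorting network built from compare-exchanges. Each
   conditional move shifts only 4 bytes, which is why the trick works
   for machine-word keys but not for arbitrarily large objects. */
static void sort3(uint32_t *a) {
    sort2(&a[0], &a[1]);
    sort2(&a[1], &a[2]);
    sort2(&a[0], &a[1]);
}
```

For objects larger than a register, the "move" becomes a multi-word copy, so the single-instruction cmov no longer applies and a branch (or a pointer swap) is needed instead.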

[1] https://dl.acm.org/doi/pdf/10.1145/225058.225173



