I think the parent's argument is that, since they only evaluated their algorithms on sorting arrays of 32-bit or 64-bit integers, it is fair game to compare against integer sorting algorithms for which we have better complexities than O(n lg n).
This seems like a fair rejoinder to iopq's clarifying point about what might still be discovered among the theoretically possible new algorithms, though it doesn't change the fact that dheera set the bar much, much higher than most people would agree with.
But the number of elements in the array is also a 32- or 64-bit integer, so n ≤ 2^w where w is the key width, which means log n ≤ w. A radix sort on w-bit keys does O(nw) work, so you still cannot do better than O(n log n) even with fancy radix sort or whatever.
There are many such algorithms. For example, [1] is expected linear time, deterministic O(n lg lg n) time, and linear space.
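For intuition, here is the classic LSD radix sort on 32-bit keys. To be clear, this is not the algorithm from [1], and the name radix_sort_u32 is just mine; it's only the simplest illustration of how fixed-width integer keys let you beat comparison sorting:

```c
#include <stdint.h>
#include <stdlib.h>
#include <string.h>

/* Classic LSD radix sort on 32-bit keys: four stable
 * counting-sort passes over 8-bit digits, O(n) time for a
 * fixed key width, O(n) extra space. */
void radix_sort_u32(uint32_t *a, size_t n) {
    uint32_t *buf = malloc(n * sizeof *buf);
    if (!buf) return;
    for (int shift = 0; shift < 32; shift += 8) {
        size_t count[257] = {0};
        /* Histogram the current 8-bit digit, offset by one
         * so the prefix sum below yields start positions. */
        for (size_t i = 0; i < n; i++)
            count[((a[i] >> shift) & 0xFF) + 1]++;
        for (int d = 0; d < 256; d++)
            count[d + 1] += count[d];
        /* Stable scatter into the buffer, then copy back. */
        for (size_t i = 0; i < n; i++)
            buf[count[(a[i] >> shift) & 0xFF]++] = a[i];
        memcpy(a, buf, n * sizeof *a);
    }
    free(buf);
}
```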
Note that the proposed ML (micro-)algorithms rely heavily on the fact that they are sorting 32-bit or 64-bit integers, because they use conditional moves. A conditional move avoids a branch by making the move a no-op when the condition doesn't hold. This is fine when moving 4 or 8 bytes, since the value fits in a register and the move completes in a single clock cycle, but you can't apply the same technique when sorting larger objects. A sketch of the trick follows.
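Here's a minimal illustration of the idea, assuming x86-64 (this is my own sketch, not the generated code from the paper): a branchless compare-and-swap on 64-bit integers that compilers typically lower to cmov instructions, so there is no branch to mispredict:

```c
#include <stdint.h>
#include <stdio.h>

/* Branchless compare-and-swap: compilers usually compile the
 * ternaries below to conditional moves rather than jumps.
 * This only works because a 64-bit value moves in a single
 * register; a larger record would need multiple moves. */
static void cswap(int64_t *a, int64_t *b) {
    int64_t lo = *a < *b ? *a : *b;  /* cmov, not a branch */
    int64_t hi = *a < *b ? *b : *a;
    *a = lo;
    *b = hi;
}

int main(void) {
    int64_t x = 42, y = 7;
    cswap(&x, &y);
    printf("%lld %lld\n", (long long)x, (long long)y); /* prints: 7 42 */
    return 0;
}
```

The same trick doesn't scale to, say, a 64-byte record: moving it takes multiple instructions, and at that point you're back to either branching or unconditionally copying everything.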