Quite unexpected. Does anyone have a summary somewhere? I don't fully understand the bucketing approach and my 2023 attention span can't handle reading the whole paper.
It looks like they replaced the usual sorting step in Dijkstra's algorithm (used for selecting the next node) with the first step of bucket sort, in which the values are distributed across a fixed number of buckets. This can be done in linear time but requires integer values. Then they just skip the second step, sorting within each bucket, and pick an arbitrary element from the lowest-cost bucket. I think one of the proofs shows that this incomplete sorting is still correct for this use case.
I didn't get the formal details either, though, so correct me if I'm wrong.
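In code, my mental model is roughly the sketch below. This is just the plain integer-weight bucket-queue idea (essentially Dial's algorithm), not necessarily what the paper actually does, and the graph format and function name are made up for illustration:

    # Rough sketch of the bucket-queue idea (essentially Dial's algorithm);
    # just the "distribute into buckets, never sort within a bucket" part.
    from collections import defaultdict

    def bucket_dijkstra(graph, source):
        # graph: {u: [(v, w), ...]} adjacency dict with non-negative integer
        # weights; every node appears as a key (hypothetical format).
        INF = float("inf")
        dist = {u: INF for u in graph}
        dist[source] = 0
        buckets = defaultdict(list)   # tentative distance -> nodes at that distance
        buckets[0].append(source)
        d, max_bucket = 0, 0
        while d <= max_bucket:
            while buckets[d]:
                u = buckets[d].pop()  # arbitrary element, no sorting inside the bucket
                if dist[u] != d:      # stale entry, a shorter path was found later
                    continue
                for v, w in graph[u]:
                    nd = d + w
                    if nd < dist[v]:
                        dist[v] = nd
                        buckets[nd].append(v)  # lazy re-insert; stale copies are skipped above
                        max_bucket = max(max_bucket, nd)
            d += 1
        return dist

The point being that "extract-min" never compares keys: it just takes anything out of the lowest non-empty bucket, which is also why integer (or at least bounded) weights matter.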
You'd pretty much have to make sure that no two nodes from the same bucket can improve on one another, and I can't really understand how you'd pick the bucketing so you don't need O(MaxLength) buckets.
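For what it's worth, the classic answer to the bucket-count problem (in Dial's algorithm, which the paper presumably improves on) is that with integer weights in [1, C] every tentative distance still in the queue lies within C of the current minimum, so a circular array of C + 1 buckets indexed modulo C + 1 is enough. A sketch of that indexing, same made-up graph format as above:

    def dial_dijkstra(graph, source, C):
        # C = maximum edge weight; weights assumed to be integers >= 1
        INF = float("inf")
        dist = {u: INF for u in graph}
        dist[source] = 0
        buckets = [[] for _ in range(C + 1)]  # circular array, index = distance mod (C + 1)
        buckets[0].append(source)
        remaining = 1                         # queue entries not yet popped
        d = 0                                 # current minimum tentative distance
        while remaining:
            while not buckets[d % (C + 1)]:
                d += 1                        # advance to the next non-empty bucket
            u = buckets[d % (C + 1)].pop()
            remaining -= 1
            if dist[u] != d:                  # stale entry from an earlier, longer path
                continue
            for v, w in graph[u]:
                nd = d + w
                if nd < dist[v]:
                    dist[v] = nd
                    buckets[nd % (C + 1)].append(v)
                    remaining += 1
        return dist

With bucket width 1 and weights >= 1, a node popped at distance d can only relax others down to d + 1 or more, which is why nothing in a bucket can improve anything else in the same bucket. But the scan over buckets is still proportional to the largest distance, so the paper must be doing something smarter than this.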