
Yes, and there are two related but different operations there:

- Look up an arbitrary element by its value, and then change that element's priority. This is often needed in real life, but is fundamentally incompatible with std::priority_queue's highly restricted design. There is no public API at all for dealing with "arbitrary elements" of a std::priority_queue; you interact only with the .top() element.

- Change the top element's priority, i.e. handle it and then throw it back down to be dealt with again sometime later. This operation is used in e.g. the Sieve of Eratosthenes.

I'm not sure which operation you're thinking of w.r.t. Dijkstra's algorithm; I'd wildly guess it's the first operation, not the second.

Changing the top element's priority is easy to graft onto the STL priority_queue's API. I've done it myself here: https://quuxplusone.github.io/blog/2018/04/27/pq-replace-top... The proper name of this operation is `pq.replace_top(value)`, and for the perfect-forwarding version, `pq.reemplace_top(args...)`.
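For reference, here's a rough sketch of how that grafting can be done (my own code, not the exact code from that post): std::priority_queue keeps its underlying container `c` and comparator `comp` as protected members, so a derived class can overwrite the root and sift it back down in O(log n).

    #include <cstddef>
    #include <functional>
    #include <queue>
    #include <utility>
    #include <vector>

    // Sketch: replace_top overwrites the root, then sifts it down to restore
    // the heap property. Names other than replace_top are mine.
    template <class T, class Container = std::vector<T>,
              class Compare = std::less<typename Container::value_type>>
    class replaceable_priority_queue
        : public std::priority_queue<T, Container, Compare> {
    public:
        // Precondition: !empty(), same as top()/pop().
        void replace_top(const T& value) {
            auto& heap = this->c;        // protected container member
            auto& before = this->comp;   // before(a, b): a sits below b in the heap
            heap[0] = value;
            std::size_t i = 0, n = heap.size();
            while (true) {
                std::size_t l = 2 * i + 1, r = l + 1, best = i;
                if (l < n && before(heap[best], heap[l])) best = l;
                if (r < n && before(heap[best], heap[r])) best = r;
                if (best == i) break;
                std::swap(heap[i], heap[best]);
                i = best;
            }
        }
    };

Compared with `pq.pop(); pq.push(value);`, this does a single sift-down instead of a sift-up plus a sift-down.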

Search for `reemplace_top` in this Sieve of Eratosthenes code: https://godbolt.org/z/bvY4Mr1GE

You don't actually need an "adjust priority of element" operation to implement Dijkstra or A-star. The standard description of the algorithm always includes it, but it is not actually necessary: instead of adjusting an element's priority, you just push duplicate vertices with new priorities onto the queue, and when you pop the queue, you check whether you've already seen that vertex before. If so, discard it and pop the next one. The algorithm still works: the first time you pop a vertex it's via the shortest path, and every later copy can be ignored. Simple to implement and plenty fast. There's no difference in time complexity: you have to consider the "duplicate" case at some point anyway; you're just deferring it until the duplicate is popped from the queue.
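As a concrete illustration, here's a minimal sketch of that approach (the graph representation and names are mine, not from any particular codebase): instead of a decrease-key, push another (distance, vertex) pair and skip stale entries when they're popped.

    #include <cstdint>
    #include <functional>
    #include <limits>
    #include <queue>
    #include <utility>
    #include <vector>

    using Weight = std::uint64_t;
    using Edge = std::pair<int, Weight>;              // (neighbor, edge weight)
    using Graph = std::vector<std::vector<Edge>>;     // adjacency list

    std::vector<Weight> dijkstra(const Graph& g, int source) {
        const Weight INF = std::numeric_limits<Weight>::max();
        std::vector<Weight> dist(g.size(), INF);
        using Item = std::pair<Weight, int>;          // (tentative distance, vertex)
        std::priority_queue<Item, std::vector<Item>, std::greater<Item>> pq;  // min-heap
        dist[source] = 0;
        pq.emplace(0, source);
        while (!pq.empty()) {
            auto [d, u] = pq.top();
            pq.pop();
            if (d > dist[u]) continue;                // stale duplicate: discard and move on
            for (auto [v, w] : g[u]) {
                if (dist[u] + w < dist[v]) {
                    dist[v] = dist[u] + w;
                    pq.emplace(dist[v], v);           // push a duplicate instead of decrease-key
                }
            }
        }
        return dist;
    }

The `if (d > dist[u]) continue;` line is where the stale duplicates get thrown away.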

You might argue that pushing these duplicates wastes space, but your other options are either to graft this functionality onto a normal priority queue (in which case you're using that space anyway), or to use a much more complex and usually slower kind of priority queue that supports this operation natively (e.g. Fibonacci heaps). The space wasted is quite small in practice, since it only happens when multiple nodes on the frontier point to the same element, and most nodes (in practice) have a small in-degree. The benefit of being able to use standard (and very fast!) priority queues without this weird operation is well worth it.

In my experience of implementing Dijkstra and A-star a couple of dozen times (I like Advent of Code problems!), this has always been the better way to do it. I mean, I haven't put Dijkstra/A-star into production or anything (I don't work for Google Maps or whatever), but this has been the simplest and fastest way in practice to implement these algorithms.


> You might argue that pushing these duplicates wastes space, but your other options are either to graft this functionality onto a normal priority queue (in which case you're using that space anyway), or to use a much more complex and usually slower kind of priority queue that supports this operation natively (e.g. Fibonacci heaps).

FWIW, my favorite solution for Dijkstra is the winner tree (a binary heap, except that all the values live in the leaves, so the interior nodes are duplicates). Simple to implement, O(log n) insert/delete/update, can be made branch-free in important cases, and only constant extra overhead over a binary heap. I've found it to be _much_ faster in practice than stashing duplicates into a regular max-heap (I'm not sure I agree with your notion that there are few of them).
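For the curious, a rough sketch of the idea (my own, assuming a min-queue keyed by doubles with a fixed capacity, not the parent commenter's code): keys live only in the leaves, each interior node records which leaf below it is currently winning, and updating one leaf's key replays the matches on the leaf-to-root path in O(log n).

    #include <cstddef>
    #include <limits>
    #include <vector>

    // Min-winner-tree: interior node i stores the index of the smaller
    // ("winning") leaf in its subtree; unused leaves hold infinity.
    class WinnerTree {
    public:
        explicit WinnerTree(std::size_t capacity) {
            n_ = 2;
            while (n_ < capacity) n_ *= 2;                      // round up to a power of two
            key_.assign(n_, std::numeric_limits<double>::infinity());
            win_.assign(n_, 0);
            for (std::size_t i = n_ - 1; i >= 1; --i) replay(i);
        }

        std::size_t top() const { return win_[1]; }             // leaf index of the minimum
        double top_key() const { return key_[win_[1]]; }        // infinity when "empty"

        // Insert, decrease-key, or delete (by setting the key back to infinity).
        void update(std::size_t leaf, double key) {
            key_[leaf] = key;
            for (std::size_t i = (leaf + n_) / 2; i >= 1; i /= 2) replay(i);
        }

    private:
        void replay(std::size_t i) {                            // recompute the winner at node i
            std::size_t l = 2 * i, r = 2 * i + 1;
            std::size_t lw = (l < n_) ? win_[l] : l - n_;       // winning leaf of left child
            std::size_t rw = (r < n_) ? win_[r] : r - n_;       // winning leaf of right child
            win_[i] = (key_[lw] <= key_[rw]) ? lw : rw;
        }

        std::size_t n_;                   // number of leaves (a power of two, >= capacity)
        std::vector<double> key_;         // key_[j] = priority of leaf j
        std::vector<std::size_t> win_;    // win_[i] = winning leaf under interior node i
    };

For Dijkstra you'd index the leaves by vertex id, so decrease-key is update(v, new_dist) and a pop is read top(), then update(top(), infinity).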
