
> Maybe I'm totally misunderstanding, but figuring out the "best current path" means re-running A* every time you break a wall, as removing arbitrary walls can give you a totally new path to the goal; to wit, it might be a path not even originally visited by A*. And you have to do that every time you try out a wall candidate, so to me this appears to be quadratic(ish) complexity.

My algorithm should obviously work using Dijkstra's algorithm instead of A*. You just have to make sure ALL nodes are explored. You don't have to run searches per node.
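A minimal sketch of that idea, as I read it (the grid encoding with `'.'`/`'#'` cells, the function names, and the two-search formulation are my own assumptions; on a unit-cost grid, Dijkstra with all nodes explored degenerates to plain BFS):

```python
from collections import deque

def all_dists(grid, src):
    # BFS from src over open cells ('.'); on a unit-cost grid this is
    # exactly Dijkstra with ALL reachable nodes explored.
    R, C = len(grid), len(grid[0])
    dist = {src: 0}
    q = deque([src])
    while q:
        r, c = q.popleft()
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if (0 <= nr < R and 0 <= nc < C and grid[nr][nc] == '.'
                    and (nr, nc) not in dist):
                dist[(nr, nc)] = dist[(r, c)] + 1
                q.append((nr, nc))
    return dist

def best_wall_to_break(grid, start, goal):
    # One search from each end; then every wall candidate is scored in O(1),
    # so no per-wall re-running of the search is needed.
    ds, dg = all_dists(grid, start), all_dists(grid, goal)
    best_cost, best_wall = ds.get(goal, float('inf')), None  # baseline: break nothing
    R, C = len(grid), len(grid[0])
    for r in range(R):
        for c in range(C):
            if grid[r][c] != '#':
                continue
            nbrs = [(r + dr, c + dc) for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1))]
            a = min((ds[n] for n in nbrs if n in ds), default=None)
            b = min((dg[n] for n in nbrs if n in dg), default=None)
            # a + 2 + b = reach one side, step into the wall, step out the other side
            if a is not None and b is not None and a + 2 + b < best_cost:
                best_cost, best_wall = a + 2 + b, (r, c)
    return best_cost, best_wall
```

That is why it avoids the quadratic blow-up: the two searches record a cost for both sides of every wall up front, and each candidate wall is then a constant-time lookup.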

Why it works with A* too is MUCH more subtle. In fact it only works if your A* implementation is fair to all likely shortest paths; most implementations do not guarantee fairness. You can enforce fairness by changing your heuristic to be only 0.9999 * Manhattan distance. Fairness ensures that any path that will be the best path after deleting a wall will have a cost recorded for both sides of the wall.
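Concretely, the tie-breaking tweak could look like this (a sketch, names my own; scaling strictly below 1 keeps the heuristic admissible while making f = g + h strictly less than the optimal cost for every node on a candidate shortest path, so A* must expand all of them before it can finish):

```python
def fair_heuristic(cell, goal):
    # Manhattan distance scaled strictly below 1. Any node on a candidate
    # shortest path then has f = g + h below the optimal path cost, so A*
    # is forced to expand it -- the "fairness" property described above.
    (r, c), (gr, gc) = cell, goal
    return 0.9999 * (abs(r - gr) + abs(c - gc))
```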

> (But maybe this is exactly what the SO answer does "under the hood," to be honest, I haven't done a deep complexity analysis of it.)

If the original maze is 2D with coordinates `(x,y)`, the SO algorithm is essentially searching a 3D maze with coordinates `(x,y, number of times crossed a wall)` and directed edges from `(x,y,n)` to `(x+dx,y+dy,n+1)` if there is a wall there.
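A sketch of that 3D search (the encoding is my own: `'#'` marks walls, `k` is the wall-crossing budget, and the third state coordinate counts walls crossed so far):

```python
from collections import deque

def shortest_with_breaks(grid, start, goal, k=1):
    # BFS over states (r, c, b) where b = walls crossed so far.
    # Stepping into a wall cell increments b; states with b > k are pruned.
    R, C = len(grid), len(grid[0])
    s = (start[0], start[1], 0)
    dist = {s: 0}
    q = deque([s])
    while q:
        r, c, b = q.popleft()
        if (r, c) == goal:
            return dist[(r, c, b)]  # BFS pops states in distance order
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if not (0 <= nr < R and 0 <= nc < C):
                continue
            nb = b + (grid[nr][nc] == '#')  # crossing a wall bumps the layer
            if nb <= k and (nr, nc, nb) not in dist:
                dist[(nr, nc, nb)] = dist[(r, c, b)] + 1
                q.append((nr, nc, nb))
    return None  # goal unreachable within the budget
```

Since every edge has unit cost, plain BFS over the layered states already finds the optimum, and the state space is only k+1 copies of the grid, so the whole thing stays linear(ish) rather than quadratic.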



> My algorithm should obviously work using Dijkstra's algorithm instead of A*. You just have to make sure ALL nodes are explored.

Gotcha, yeah, that's what I was thinking. You lose basically all of A*'s optimization because you do need all nodes explored, turning it into plain Dijkstra. Makes total sense.

> If the original maze is 2D with coordinates (x,y), the SO algorithm is essentially searching in a 3D maze with coordinates

That's a neat way of looking at that answer, cool insight!



