Fatal errors tend to blow up in production rather than in test.
One of the simplest solutions for detecting cyclic graphs, instead of collecting a lookup table or doing something non-concurrent like marking the nodes, is to count nodes and panic if the encountered set is more than an order of magnitude larger than you expected.
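A minimal sketch of that heuristic in Go, under my own assumptions (the node type, names, and 10x threshold here are illustrative, not from any project described in the thread): the walk keeps no visited set and does no marking, it just counts visits and panics once the count blows past ten times the expected graph size.

    package main

    import "fmt"

    // node is a hypothetical graph node type used only for illustration.
    type node struct {
        children []*node
    }

    // walk traverses the graph with no visited set and no node marking.
    // It counts visits and panics once the count exceeds an order of
    // magnitude over the expected size, treating that as evidence of a
    // cycle (or a badly wrong size estimate).
    func walk(root *node, expectedSize int) {
        limit := expectedSize * 10
        visited := 0
        var visit func(*node)
        visit = func(n *node) {
            visited++
            if visited > limit {
                panic(fmt.Sprintf("visited %d nodes, expected ~%d: probable cycle", visited, expectedSize))
            }
            for _, c := range n.children {
                visit(c)
            }
        }
        visit(root)
    }

    func main() {
        // Small acyclic example: a -> b, a -> c.
        b, c := &node{}, &node{}
        a := &node{children: []*node{b, c}}
        walk(a, 3) // completes quietly

        // Introduce a cycle: c -> a. The walk now panics once it
        // exceeds 10x the expected size.
        c.children = append(c.children, a)
        walk(a, 3)
    }

The appeal is that it needs no extra memory or coordination; the catch, as the reply below describes, is that the "expected" size has to stay honest over time.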
I came onto a project that had done exactly that, and it blew up during my tenure. The worst-case graph size was several times the expected case, and long-term customers were growing their data sets vertically rather than horizontally (e.g., ever notice how much friction there is to making new web pages versus cramming more data into the existing ones?), so instead of 10x never happening, it was happening every Tuesday.
I was watching the same thing play out on another project recently but it got cancelled before we hit that threshold for anything other than incorrect queries.
Just wanted to say you're one of my favorite posters. Can't put an exact reason on why, but at some point over the last 15 years I learned to recognize your name simply from consistent high quality contributions. Cheers.