I am always quite astonished at how badly the default layouters for graphs perform. When I was still doing compiler optimization in the early 2000s, we did not struggle even with quite big graphs, thanks to cool graph visualizers such as vcg [1]. Two weeks ago I was tempted to try it again after nearly 20 years, because I had been frustrated trying to visualise even a relatively small graph in Python (Cytoscape seemed to be the only working software in the end, but it was quite a pain to get it to just render what I wanted).
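In case it helps, here is a minimal sketch of the kind of thing I was trying, assuming networkx plus its Graphviz bindings (pygraphviz); the example graph is made up and just stands in for the real data:

    import networkx as nx
    import matplotlib.pyplot as plt

    # Placeholder graph standing in for the one I actually wanted to render.
    G = nx.gnm_random_graph(60, 120, seed=1)

    # Try Graphviz's hierarchical 'dot' layout via pygraphviz (needs Graphviz
    # installed); fall back to networkx's built-in spring layout otherwise.
    try:
        pos = nx.nx_agraph.graphviz_layout(G, prog="dot")
    except ImportError:
        pos = nx.spring_layout(G, seed=1)

    nx.draw(G, pos, node_size=30, width=0.5)
    plt.savefig("graph.png", dpi=150)

Even for a graph this small, the results from the default layouts can be surprisingly hard to read, which was exactly my frustration.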
Laying out larger graphs is tricky, often because the sheer size stands in the way of generating anything useful to the viewer. To add to that, most layout algorithms optimize for certain criteria, while the other properties of the visualization merely emerge from that; if the human viewer picks a layout algorithm because of one of those emergent properties, they are often surprised that the result doesn't look like what they envisioned (because, hey, the algorithm optimized something entirely different). We see this disconnect fairly often in our own customer support, but I haven't really found a good way of putting an explanation in writing.
Then there's the general problem of larger graphs, which tend to devolve into tangled messes and hairballs simply because they are often well-connected. If there's no way of pruning them beforehand, or of grouping, aggregating or clustering them (in a way that makes sense to the viewer, not necessarily only structurally), then it can be hard to get a good result.
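As a rough sketch of what I mean by aggregating before layout, assuming networkx; the community detection step and the example graph here are just illustrative, not a recommendation of any particular clustering:

    import networkx as nx
    from networkx.algorithms import community

    # Placeholder graph; the real input would be the big hairball.
    G = nx.les_miserables_graph()

    # Cluster structurally, then collapse each community into one "super node"
    # before laying anything out.
    communities = community.greedy_modularity_communities(G)

    H = nx.Graph()
    membership = {}
    for i, nodes in enumerate(communities):
        H.add_node(i, size=len(nodes))
        for n in nodes:
            membership[n] = i

    # One aggregated edge per pair of communities, weighted by how many
    # original edges run between them.
    for u, v in G.edges():
        cu, cv = membership[u], membership[v]
        if cu != cv:
            w = H.get_edge_data(cu, cv, {"weight": 0})["weight"]
            H.add_edge(cu, cv, weight=w + 1)

    # The condensed graph is usually small enough for any layout to cope with.
    pos = nx.spring_layout(H, weight="weight", seed=42)

Ideally the grouping would come from domain knowledge rather than pure structure, but even a structural condensation like this often turns a hairball into something a layout algorithm can handle.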
[1] https://www.rw.cdl.uni-saarland.de/people/sander/private/htm...