So obscure that in a field as important as optimization we still think in terms of "escaping from local minima". Also (as a total outsider) the progress in general optimization algorithms/implementations appears to be very slow (I was shocked at how old Ipopt is). I was wondering whether all the low-hanging inductive biases (for real-world problems) have already been exploited, or whether we just have no good way of expressing them. Maybe learning them from data in a fuzzy way might work?
Unless you come up with some new take on the P ?= NP problem, there isn't much we can improve in generic optimization.
There are all kinds of possibilities for specific problems, but if you want something generic, you have to traverse the possibility space and use its topology to find an optimum. If the topology is chaotic you are out of luck, and if it's completely random there's no hope at all.
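To make that concrete, here's a toy sketch of my own (Python, nothing from a real solver): greedy local search exploits the topology on a smooth landscape, because neighboring points carry information about where the optimum lies, and it degenerates into blind guessing on a random one.

    import random

    def hill_climb(f, x0, step=0.1, iters=1000):
        # Greedy local search: accept a random neighbor only if it improves f.
        x = x0
        for _ in range(iters):
            candidate = x + random.choice([-step, step])
            if f(candidate) < f(x):
                x = candidate
        return x

    # Smooth landscape: neighbors are informative, so the search converges.
    smooth = lambda x: (x - 3.0) ** 2
    print(hill_climb(smooth, x0=0.0))  # ends up near 3.0

    # Random landscape: values carry no neighborhood structure (memoized
    # noise), so the same search is no better than guessing.
    noise = {}
    def random_landscape(x):
        return noise.setdefault(round(x, 1), random.random())
    print(hill_climb(random_landscape, x0=0.0))  # essentially arbitrary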
Couldn't there be something between chaotic and completely random, let's call it correlated, where e.g. (conditional) independence structures are similar across real-world problems?
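To sketch what I mean (a toy example I made up, in Python): if the objective happens to factorize along an independence structure, a solver that knows this can optimize each piece separately instead of searching the joint space.

    from itertools import product

    # Made-up separable objective f(x) = f0(x0) + f1(x1) + f2(x2):
    # the variables are independent, which is the structure to exploit.
    tables = [
        {0: 3.0, 1: 1.0, 2: 2.5},  # f0
        {0: 0.5, 1: 4.0, 2: 1.0},  # f1
        {0: 2.0, 1: 0.1, 2: 3.0},  # f2
    ]

    def f(assignment):
        return sum(t[v] for t, v in zip(tables, assignment))

    # Generic search: enumerate the joint space, k**n evaluations.
    brute = min(product(range(3), repeat=3), key=f)

    # Structure-aware search: optimize each variable on its own,
    # n*k evaluations, exponentially less work for the same answer.
    smart = tuple(min(t, key=t.get) for t in tables)

    assert brute == smart
    print(smart, f(smart))

Real problems are rarely fully separable, of course; the hope would be that partial (conditional) independence could be exploited in the same spirit, the way graphical-model solvers do.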
You mean something that is well behaved in practical situations but intractable in general?
There is plenty of stuff like that; things don't even need to be chaotic for that. Anyway, chaotic and random are just two specific categories; there are many others. Nature happens to like those two (or rather, not random exactly, but it surely does like things that look like it), which is why I pointed them out.