Ask and ye shall receive... just a couple of hours ago I asked on Twitter whether such a thing had been done, and I've been bombarded with prior attempts.
Interesting how many different ways people have come to this idea. It's a powerful one: if architectures are where we can make a difference, and architecture design is a trial-and-error process at the moment, that makes it a perfect candidate for automation.
Yes, but most "architecture engineering" is already being phased out in deep learning research, with the exception of networks tuned for specific goals (e.g. computational efficiency) or those with special layer types (e.g. separable convolutions, deconvolutions, etc.).
I dig hyper-parameter optimization and architecture discovery, but I'm always reminded of [this slide](https://image.slidesharecdn.com/k2jeffdean-160609173832/95/l...) and how hp-tuning and architecture discovery will always sit one tier above whatever your base optimization task is, so their cost multiplies it. The base tasks have to get faster for it to be practical for anyone without a server farm.
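To make the tiering concrete, here's a minimal random-search sketch. Everything in it is illustrative: `train_and_evaluate` is a hypothetical placeholder for a full base-level training run, and `SEARCH_SPACE` is a made-up grid. The point is simply that the outer loop pays the full cost of the base task on every trial.

```python
import random

def train_and_evaluate(config):
    # Placeholder for a full base-level training run; in practice this is
    # the expensive step (hours to days of GPU time). Here it just returns
    # a fake validation score so the sketch runs end to end.
    return random.random()

# Illustrative search space, not from the original discussion.
SEARCH_SPACE = {
    "lr": [1e-2, 1e-3, 1e-4],
    "num_layers": [2, 4, 8],
    "width": [128, 256, 512],
}

def random_search(n_trials):
    # Total cost is roughly n_trials * (one full training run): the search
    # tier multiplies whatever the base optimization task costs.
    best_score, best_config = float("-inf"), None
    for _ in range(n_trials):
        config = {k: random.choice(v) for k, v in SEARCH_SPACE.items()}
        score = train_and_evaluate(config)
        if score > best_score:
            best_score, best_config = score, config
    return best_config, best_score

if __name__ == "__main__":
    config, score = random_search(n_trials=20)
    print(config, score)
```

With 20 trials and a base task that takes a day per run, that's ~20 GPU-days for one pass of the outer loop, which is why this stays impractical without a server farm unless the base task gets faster.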
I definitely think this is the future of ANNs, nets optimizing nets optimizing nets. Turtles all the way down.