Seems we could just layer any of these other techniques then. The big thing is layers, and neural networks just get traction because people think we're discovering something important about the brain and mind. So it's just PR at the end of the day. Not a true breakthrough.
First, yes, that's pretty much what deep neural networks are: layers of shallow models stacked on top of each other, with each layer learning to recognize patterns at a different scale; and we train all layers together end-to-end.
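To make "stacked layers trained end-to-end" concrete, here's a minimal sketch in plain NumPy: two layers, with gradients flowing back through both so they are updated together. All names and hyperparameters are my own choices for illustration, not anything standard. The task is XOR, which a single shallow linear layer cannot fit, but two stacked layers can.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: XOR. No single linear layer can separate these classes.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# Two stacked layers. Both sets of weights will be updated together.
W1 = rng.normal(0.0, 1.0, (2, 8))
b1 = np.zeros(8)
W2 = rng.normal(0.0, 1.0, (8, 1))
b2 = np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 0.5
for step in range(5000):
    # Forward pass: each layer transforms the previous layer's output.
    h = np.tanh(X @ W1 + b1)          # layer 1: learned hidden features
    p = sigmoid(h @ W2 + b2)          # layer 2: prediction from those features
    p = np.clip(p, 1e-9, 1 - 1e-9)    # numerical safety for the log below
    loss = -np.mean(y * np.log(p) + (1 - y) * np.log(1 - p))

    # Backward pass: one gradient signal flows through BOTH layers at once.
    # This joint update is what "end-to-end training" means.
    dlogits = (p - y) / len(X)        # gradient of cross-entropy w.r.t. logits
    dW2 = h.T @ dlogits
    db2 = dlogits.sum(axis=0)
    dh = (dlogits @ W2.T) * (1 - h ** 2)   # backprop through tanh
    dW1 = X.T @ dh
    db1 = dh.sum(axis=0)

    W2 -= lr * dW2; b2 -= lr * db2
    W1 -= lr * dW1; b1 -= lr * db1

print(f"final loss: {loss:.4f}")
print("predictions:", np.round(p.ravel(), 2))
```

The point of the sketch is the backward pass: the error signal from the output layer is pushed back into the first layer's weights, so the hidden features are shaped by the final objective rather than hand-designed.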
Second, it's not PR! This stacking of layers, when done right, can overcome the "curse of dimensionality." Shallow models like kNN, SVM, GP, etc. cannot overcome it; they perform poorly as the number of input features increases. For example, k-nearest-neighbors will not work with images that have millions of pixels each, because in such high-dimensional spaces all points end up roughly equidistant, so "nearest" stops being meaningful.
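You can see this distance-concentration effect in a few lines of NumPy (a toy illustration I'm adding, with made-up function names, not anything from a library): as the dimension grows, the gap between a query point's nearest and farthest neighbor shrinks relative to the distances themselves, which is exactly what breaks kNN.

```python
import numpy as np

rng = np.random.default_rng(0)

def contrast(dim, n_points=1000):
    """Relative gap between the nearest and farthest neighbor of a query.

    A large value means 'nearest' is genuinely closer than 'farthest';
    a value near 0 means all points look about equally far away.
    """
    points = rng.random((n_points, dim))   # uniform points in the unit cube
    query = rng.random(dim)
    d = np.linalg.norm(points - query, axis=1)
    return (d.max() - d.min()) / d.min()

for dim in [2, 10, 100, 10_000]:
    print(f"dim={dim:>6}: contrast={contrast(dim):.3f}")
```

In 2 dimensions the contrast is large (some points really are much closer than others); by 10,000 dimensions it collapses toward zero, so a nearest-neighbor vote carries almost no signal.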
Third, I'm only scratching the surface here. There's a LOT more to deep learning than just stacking shallow models.