I wish I'd had a similar experience. I bought Halmos's "Linear Algebra Problem Book" based on the accolades, but the lack of motivation/inspiration caught me off guard and left me sorely disappointed before the end of chapter 1. If you can appreciate maths without pictures, you have to be either truly gifted or totally deluded.
Recently, learning to do Principal Components Analysis to solve a handwriting recognition problem is what finally shed an enchanted light on linear algebra. The way a simple matrix of data samples is transformed into an ordered set of principal components (eigenvectors, ordered by eigenvalue) is... "unreasonably effective" (as they say). The principal components are your signal, and the rest (with eigenvalues ~0) are the noise. The handwriting recognition, btw, works fantastically for my simple application; no need for non-linear kernels and whatnot.
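If anyone wants to see the mechanics, here's a minimal sketch of that eigendecomposition view of PCA (numpy; the data shapes and names are made up for illustration, not from my actual handwriting app):

    import numpy as np

    def pca(samples, k):
        # samples: (n_samples, n_features) data matrix (hypothetical shape).
        # Returns the top-k principal components (eigenvectors, ordered by
        # eigenvalue, largest first) and their eigenvalues.
        centered = samples - samples.mean(axis=0)   # center each feature
        cov = np.cov(centered, rowvar=False)        # feature covariance
        eigvals, eigvecs = np.linalg.eigh(cov)      # eigh: symmetric matrices
        order = np.argsort(eigvals)[::-1]           # sort descending
        return eigvecs[:, order[:k]], eigvals[order[:k]]

    # Usage: project onto the "signal" subspace; directions with
    # eigenvalues ~0 are the noise you discard.
    X = np.random.rand(100, 64)   # e.g. 100 flattened 8x8 glyph images
    components, variances = pca(X, k=10)
    X_reduced = (X - X.mean(axis=0)) @ components

In practice you'd often use an SVD of the centered matrix instead of forming the covariance explicitly, but the eigenvector picture above is the one that made it click for me.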
Did you see that on Jeremy Kun's great blog? His primers are how I recently got into building my own entropy-trained decision tree class, and also how I got an intuition for PCA as a reduced basis. I had used truncated SVD and Fourier bases many times before, but seeing it with images (eigenfaces!) really sold the intuition.
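For the curious, the "entropy-trained" part boils down to picking splits by information gain. A minimal sketch (function names are mine, not from Kun's primers):

    import numpy as np

    def entropy(labels):
        # Shannon entropy (in bits) of a 1-D label array.
        if len(labels) == 0:
            return 0.0
        _, counts = np.unique(labels, return_counts=True)
        p = counts / counts.sum()
        return -np.sum(p * np.log2(p))

    def information_gain(labels, left_mask):
        # Entropy reduction from splitting labels by a boolean mask;
        # the tree greedily takes the split that maximizes this.
        left, right = labels[left_mask], labels[~left_mask]
        weighted = (len(left) * entropy(left)
                    + len(right) * entropy(right)) / len(labels)
        return entropy(labels) - weighted

    y = np.array([0, 0, 1, 1, 1])
    mask = np.array([True, True, False, False, False])
    print(information_gain(y, mask))  # perfect split: gain == entropy(y)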
Even better, in this hacking life after pure math in college and grad school, is that I can build intuition not just through proofs and exercises, but also through efficient, coded implementations. That gives a different feel for the tools and concepts.