IMHO there should be two versions of linear algebra. One for computer science majors and one for mathematicians. I regularly run into stuff at work where I say to myself, "self, this is a linear algebra problem," and I have next to no idea how to transform the problem I have into matrices or whatever.
But I can write a really stinkin' fast matrix multiplication algorithm. So there's that I guess.
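For the curious, the core trick behind a fast matmul is cache blocking. Here's a minimal sketch in C of just the blocking idea (the block size of 64 is a placeholder I picked for illustration; real kernels tune it to the cache hierarchy and bolt on SIMD, register tiling, and threading):

    /* C += A * B, all n x n, row-major. Caller zero-initializes C. */
    #define BLOCK 64  /* placeholder; tune to your L1/L2 cache */

    void matmul_blocked(int n, const double *A, const double *B, double *C)
    {
        for (int ii = 0; ii < n; ii += BLOCK)
            for (int kk = 0; kk < n; kk += BLOCK)
                for (int jj = 0; jj < n; jj += BLOCK)
                    /* Work on one tile at a time so the operands stay
                       hot in cache instead of streaming from RAM. */
                    for (int i = ii; i < ii + BLOCK && i < n; i++)
                        for (int k = kk; k < kk + BLOCK && k < n; k++) {
                            double a = A[i * n + k];
                            for (int j = jj; j < jj + BLOCK && j < n; j++)
                                C[i * n + j] += a * B[k * n + j];
                        }
    }

Same O(n^3) flops as the naive triple loop, but the tiles fit in cache, which is most of the difference between textbook matmul and usable matmul.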
Modern CPUs with big-ass SIMD registers are incredibly fast at slogging through a linear algebra problem. Lots of incredibly intelligent people (i.e., not me) spend an incredible amount of effort squeezing every last FLOP out of their BLAS of choice. For several years the only question Intel asked itself when designing the next CPU was, "How much faster can we make the SPECfp benchmark?" and it shows. Any time you can take a problem you were solving with some ad-hoc algorithm and recast it as a linear algebra problem, you can get absurd speedups. But most programmers don't know how to do that, because most of their linear algebra class was spent proving that the only invertible idempotent n×n matrix is the identity matrix or whatever.
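To make the "recast it as linear algebra" point concrete, here's a sketch in C, assuming a CBLAS installation (OpenBLAS, say, linked with -lopenblas). Computing all pairwise dot products of m vectors looks like an obvious double loop, but it's also just the matrix product G = X * X^T, which one dgemm call chews through with all the SIMD, blocking, and threading you'd never write by hand:

    #include <cblas.h>

    /* Ad-hoc version: G[i][j] = dot(X[i], X[j]). X is m x d, row-major. */
    void gram_naive(int m, int d, const double *X, double *G)
    {
        for (int i = 0; i < m; i++)
            for (int j = 0; j < m; j++) {
                double s = 0.0;
                for (int k = 0; k < d; k++)
                    s += X[i * d + k] * X[j * d + k];
                G[i * m + j] = s;
            }
    }

    /* Same result as one BLAS call: G = 1.0 * X * X^T + 0.0 * G. */
    void gram_blas(int m, int d, const double *X, double *G)
    {
        cblas_dgemm(CblasRowMajor, CblasNoTrans, CblasTrans,
                    m, m, d, 1.0, X, d, X, d, 0.0, G, m);
    }

Same arithmetic either way; the second one is the version that actually saturates those SIMD registers.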
Discrete math has the same problem. When I took discrete math in college, the blurb in the course catalog promised applications to computer science. It turned out the course was just a slog through literally dozens of trivial proofs of trivial statements in basic number and set theory, and then they taught us how to add two binary numbers together. The chapters on graphs, trees, recursion, big-O notation and algorithm analysis, finite automata? Skipped 'em.