You know something? I know a little abstract algebra: groups, subgroups, quotient groups, and the relevant theorems behind them. It's been disappointingly useless to me, though. Maybe someday I will take the quotient of one matrix group by another... I'm not sure, though.
I certainly haven't applied any of those examples either. ^_^ Abstract algebra, topology, etc. are all studies of problems that already have mental frameworks. People already did an incredible amount of legwork building the apparatus for understanding these fields (no pun intended). The value of learning these frameworks, if you're not going to work in those fields directly, is to understand how to build your own framework.
What kind of tools are in your toolbox for breaking problems down? Where is my problem different from others, and where is my problem fundamentally the same? How can we isolate these parts and handle them on their own terms? This is fundamentally mathematics, however it's ultimately expressed.
Here's a small selection of those ideas I've picked up from mathematics that have absolutely paid dividends in my day-to-day:
* The idea of a "homomorphism", a structure-perserving map between two different domains of discourse. The more I learn about category theory, the more I realize that homomorphisms are conceptually everywhere in software. The more I learn about domain-driven design, the more I realize the role functors (a particular kind of homomorphism) really play in software design.
* The idea of a "fixed point", for limiting behavior of processes. Fixed points are especially pleasant in domains where processes have some sense in which they "grow monotonically". When I can model a system as a series of operations that "add knowledge" and don't invalidate prior results, I know I have a wealth of analytical tools at my disposal.
* The idea of products (pairing) and sums (choice) in type theory, for modeling interactions between components. I feel like I'm in a straitjacket when using a language without sum types; I have to encode what I really mean using tools that don't let me get there directly. (Sketched below as well.)
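To make the first bullet concrete, here's a tiny Python sketch (the example and names are mine, purely illustrative): `len` is a structure-preserving map from lists under concatenation to integers under addition, and mapping a function over a list behaves functorially.

```python
# A minimal sketch of the "homomorphism" idea: len sends list
# concatenation to integer addition and the empty list to zero.
def check_len_is_homomorphism(xs, ys):
    assert len(xs + ys) == len(xs) + len(ys)   # preserves the operation
    assert len([]) == 0                        # preserves the identity

# A functor is the same idea one level up: mapping over a container
# preserves identity and composition of functions.
def check_map_is_functorial(xs, f, g):
    assert list(map(lambda x: x, xs)) == xs
    assert list(map(lambda x: g(f(x)), xs)) == list(map(g, map(f, xs)))

check_len_is_homomorphism([1, 2], [3, 4, 5])
check_map_is_functorial([1, 2, 3], lambda x: x + 1, lambda x: x * 2)
```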
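For the second bullet, here's a minimal sketch (graph reachability is my stand-in example) of a process that only ever adds knowledge, so it grows monotonically and must stop at a fixed point.

```python
# Reachability computed by repeatedly adding newly discovered nodes.
# Each pass only grows the set, so the loop terminates at a fixed
# point: a set that another pass no longer changes.
def reachable(graph, start):
    known = {start}
    while True:
        grown = known | {w for v in known for w in graph.get(v, ())}
        if grown == known:          # nothing new learned: fixed point
            return known
        known = grown

graph = {"a": ["b"], "b": ["c"], "c": ["a"], "d": ["e"]}
print(reachable(graph, "a"))        # {'a', 'b', 'c'}
```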
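And for the third bullet, one way the encoding looks when the language doesn't hand you sum types directly; the payment example is made up, and the sketch leans on Python 3.10+ dataclasses and pattern matching.

```python
from dataclasses import dataclass

# Each alternative is its own small type...
@dataclass
class Card:
    number: str

@dataclass
class Cash:
    pass

@dataclass
class Invoice:
    due_days: int

# ...and a "sum" value is exactly one of them.
Payment = Card | Cash | Invoice

def describe(p: Payment) -> str:
    match p:
        case Card(number=n):
            return f"card ending in {n[-4:]}"
        case Cash():
            return "cash"
        case Invoice(due_days=d):
            return f"invoice due in {d} days"

print(describe(Card("4242424242424242")))   # card ending in 4242
```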
What I've come to think recently about the value of knowing things whose use isn't immediately obvious is that when you expand your knowledge, the 'range' of your world changes.
So yes, almost by definition, you won't use the things you don't know much about, but you will be hemmed in by your own ignorance. By expanding your knowledge, on the other hand, you also expand your range of experience (your world), and thus find more of it useful.
I studied vector mathematics in high school: matrix operations, dot product, cross product, etc. All through those lessons I thought, "what a stupid thing to learn, who would ever use this?". Then after school I became a CAD/CAM developer and spent most of my time working with vector mathematics. It was with the help of OpenGL, so I technically didn't need to understand how these operations worked under the hood, but yep... what a stupid thing to learn indeed.
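For anyone who never got (or promptly forgot) those lessons, the two products amount to a couple of lines each; this snippet is purely illustrative and has nothing OpenGL-specific in it.

```python
# 3D vectors as (x, y, z) tuples.
def dot(u, v):
    # Scalar measure of how aligned two vectors are.
    return u[0] * v[0] + u[1] * v[1] + u[2] * v[2]

def cross(u, v):
    # Vector perpendicular to both u and v (e.g., a surface normal).
    return (u[1] * v[2] - u[2] * v[1],
            u[2] * v[0] - u[0] * v[2],
            u[0] * v[1] - u[1] * v[0])

print(dot((1, 0, 0), (0, 1, 0)))    # 0: perpendicular
print(cross((1, 0, 0), (0, 1, 0)))  # (0, 0, 1): the z axis
```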
Most people start abstract algebra with groups, which is not surprising since the underlying definition is very simple and the basic examples are easy to understand. But abstract algebra only really comes into its own once you learn about rings and modules, which ultimately turn out to be important in proving most of the significant theorems in group theory as well.
One example I've been toying with recently is the link between the complex and split-complex numbers, together with the fact that the latter are isomorphic to a direct product of two copies of R. Putting these analogies together leads to a slight improvement of Karatsuba's complex-number multiplication algorithm:
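Roughly, the kind of code I mean looks like this (a sketch of the idea rather than the exact version I've been playing with, so take the representation as illustrative): carry each complex number around with its component sum precomputed, and a product then costs three real multiplications.

```python
# Represent a + bi as the triple (a, b, a + b). The redundant third
# component is the extra storage; in exchange, multiplication needs
# only three real multiplications (the Karatsuba/Gauss trick), and
# the operand sums it needs come for free.
def pack(a, b):
    return (a, b, a + b)

def multiply(z, w):
    a, b, ab = z              # ab = a + b, precomputed
    c, d, cd = w              # cd = c + d, precomputed
    t1 = a * c
    t2 = b * d
    t3 = ab * cd              # (a + b) * (c + d)
    re = t1 - t2              # real part: ac - bd
    im = t3 - t1 - t2         # imaginary part: ad + bc
    return (re, im, t3 - 2 * t2)   # re + im, kept up to date

z = pack(1, 2)                # 1 + 2i
w = pack(3, 4)                # 3 + 4i
print(multiply(z, w))         # (-5, 10, 5), i.e. (1+2i)(3+4i) = -5+10i
```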
The extra storage use does call into question whether this representation can be helpful in practice, but the fact that these abstractions can be unrolled into code is pretty cool.