Fantastic endeavour.
I'm someone who has needed to force myself through mathematics for my career. I needed extra help at school with maths, but had no real trouble in any other class.
I thought I was just too dumb for maths. I came to realise that my problem was not with the mathematical concepts, which it turns out are fairly easy, but with the language of maths itself and how it is taught (with some exceptions, such as Strang, who is just a wonderful educator). Programming in particular really helped me to learn. That's not to say that certain areas aren't still difficult, but reformulating away from the traditional notation and teaching style has helped me.
There is so little mathematical notation to learn, though. A typical high school course has like a couple of symbols and a few keywords to learn; that can't really be the main reason you had a hard time learning math, when other courses have hundreds to thousands of words you have to learn?
There are some big problems with mathematical notation.
The first is that it is terse--often excessively so. The clearest way this manifests is the sheer predilection for single-letter variable names. Every variable in every equation invariably gets reduced down to a single letter, and sometimes you have to go so far as distinguishing by font face, boldness, or whatever random diacritic you can throw on it, just to keep the name a single letter. Even when it doesn't reach that extreme, it makes skimming difficult, because you now have to go back through the prose to figure out what 'H' means, and spotting the definition of a single-letter variable in prose is really easy, right? (No, no it is not).
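To make that concrete (a made-up illustration, not from any particular text; using H for entropy here is just an example), compare a faithful transcription of the single-letter notation with the same computation using descriptive names:

    from math import log2

    p = [0.5, 0.25, 0.25]   # some probability distribution

    # faithful transcription of the textbook notation
    H = -sum(p_i * log2(p_i) for p_i in p)

    # exactly the same computation, with names you can actually search for
    entropy = -sum(prob * log2(prob) for prob in p)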
Another factor that can make it infuriating to read mathematical notation is the sheer overloading. Subscript notation is a good one for this--if you see a single letter subscripted by something else, it could mean getting a particular entry in a sequence, it could be an index into a vector or a matrix or a tensor (and are you getting a row or a column or what? all of the above!), or it might be a derivative. Or maybe it's an elaboration of which of the possible terms that get abbreviated to that letter is actually being referred to (e.g., E_a for activation energy).
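Concretely, the same surface syntax covers all of these (illustrative examples only):

    a_n    -- the n-th term of a sequence
    v_i    -- the i-th component of a vector
    A_ij   -- the entry in row i, column j of a matrix
    f_x    -- the partial derivative of f with respect to x
    E_a    -- activation energy, where the subscript is just a label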
Of course, the flip side of the notation problem is the fact that some concepts have multiple different notations. Multiplication is a common example: to multiply a and b, you can use a × b, a · b, or screw any symbol altogether and just say ab (thus a(b) is also a way to indicate multiplication, which totally has no potential confusion with applying a function [1]). Differentiation is the worst; virtually every introduction to derivatives starts with "oh there's multiple notations for this"--is it really necessary that so many need to exist?
[1] There was one paper I once read where I literally had to spend several minutes staring at a key formula trying to figure out whether I was looking at function application or multiplication.
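For reference, the derivative complaint above is easy to substantiate--all of these notations for the derivative of the same function are in everyday use:

    dy/dx, df/dx   -- Leibniz
    f'(x)          -- Lagrange
    ẏ              -- Newton (mostly for derivatives with respect to time)
    D_x f, Df      -- Euler / operator notation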
> is it really necessary that so many need to exist?
Yes it is necessary - often different contexts require different notation, sometimes as an abbreviation, sometimes not.
I do not believe that mathematical notation can be improved (a poor analogy would be trying to improve, say, English language itself); what does evolve, though, is the understanding of mathematical objects and mathematical frameworks - which can often lead to simplifying things and, sometimes, notation, too.
> I do not believe that mathematical notation can be improved (a poor analogy would be trying to improve, say, English language itself)
I strongly hope you mean that heavy-handed top-down approaches aren't likely to work. Because this reads as though you're saying we've somehow reached the optimal point on every axis for both mathematics and general-purpose communication.
The (original) purpose of mathematical notation is to facilitate (to some degree, automate) reasoning and calculations (mathematicians tend to conflate them) on paper. For the existing frameworks it’s already as good (efficient) as it gets, given the 2-dimensional nature of said paper. Would adding new characters beyond the existing set (which is already quite large), introducing new alphabets in addition to the Latin and Greek (OK, we have the aleph), adding more font styles and sizes - would any of that constitute an improvement worthy of note? We have already exhausted the freedom given to us by the 2-dimensional paper-space as a computational medium - consider, for example, machinery that heavily relies on graphs (diagram chasing; Dynkin diagrams, etc.) or tables (matrices, tensors, character tables of groups, etc.).
And that is why perhaps so many of us like programming: there is exactly one correct way to interpret code. And stylistic differences (different notation for the same thing) are usually discouraged with style guides.
> there is exactly one correct way to interpret code
No, there are different compilers, languages, or even language versions or compiler settings. Programming notation is way harder and more confusing to learn than the math equivalent, just because there are so many more styles, languages, versions, etc. And we all know that learning all of those symbols and keywords isn't the main difficulty with learning programming or new programming languages.
Of course different languages have different syntax, but it is a pretty hard requirement that within a (specific version of a) language the syntax rules are both explicit and consistent.
Those are issues with sloppily written math papers, not with high school math. This repo is just high school and intro college course math, there isn't much overloading or confusion there.
I think a lot of the difficulty with math notation arises because most people (myself included) read a lot of math but don't actually write much of it to express their own thoughts. I'm decent at engineering math, but grinding through problems, reading papers, and writing are all different skills.
I'm very similar. Show me a big mathematical formula and I'll have to dig into it for a while to understand it. Show me the code that implements the formula and it makes sense a lot quicker (for me anyway).
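As a toy example of what I mean (my own, not from the thread): the textbook formula for sample variance, s^2 = (1 / (n - 1)) * sum((x_i - xbar)^2), versus a direct implementation:

    def sample_variance(values):
        # s^2 = (1 / (n - 1)) * sum of squared deviations from the mean
        n = len(values)
        mean = sum(values) / n
        return sum((x - mean) ** 2 for x in values) / (n - 1)

    print(sample_variance([2.0, 4.0, 4.0, 4.0, 5.0, 5.0, 7.0, 9.0]))  # ~4.571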
I am the exact opposite. I look at code written by some analyst or whatever at work, and it's usually borderline unreadable if you ask me. Pretty-print it as LaTeX and I can visualize what it actually does almost immediately.
Besides, I don't really care about "the code", i.e. the mechanics of computation, but rather the properties of that expression that exist regardless of how it is communicated, i.e. its shape, or how it behaves asymptotically, etc.
I have had this same experience of finally understanding a mathematical idea by seeing it implemented in a programming language. You can always eventually understand a program, because there can't be any ambiguity or the compiler couldn't decide how to compile it. Math is not supposed to have ambiguity either, but it arises because there are too many conventions and assumptions baked into the notation (granted, sometimes it is saying something broader than can be expressed in code). But higher mathematics as a field seems to me to be like a programmer who uses single-letter variables, never writes comments, and really likes clever bitwise operators.
Every math paper I've read or written has written out definitions for all the notation it uses, often even in a section called "notations". Don't you read math papers? Or do you mean applied math papers? Applied math is more like engineering though, so people get sloppier with notation; I guess the same thing happens when computer scientists do maths.
But that isn't really a problem with math so much as a problem with sloppy people. Or you just didn't read the notation sections explaining everything.
As a programmer and occasional electronics hobbyist most math I encounter is in the context of whitepapers, or physics/EE texts or similar. I don’t really read papers in pure mathematics. Perhaps the difficulty is due to sloppiness by non professional mathematicians.
I found myself in a similar boat (I'm a software engineer with no college degree).
I did well in school with geometry, algebra, and pre-calculus, but I did so by memorizing, not by understanding.
A decade later I ended up going through a lot of Khan Academy videos to refresh myself and then diving into discrete math and linear algebra textbooks. It really helped me to finally understand the core mathematical concepts that we use in programming algorithms.