I like the approach, even though in practice I would not necessarily follow the suggested presentation.
From my experience, most math courses start with the "dead" book definitions.
That is not the way mathematics is built: people start with intuitive but incorrect ideas, an idea is proven to be useful, and only then is it made correct. The definitions end up coming from refinements made to avoid contradictions arising from edge cases.
This was a common reproach to the Bourbaki group who wanted to formalize mathematics into an almost computer-digestible form; I think it was Grothendieck who said they were "embalming mathematics".
This is absolutely not a good way to learn mathematics, although it is the lowest-effort presentation to come up with.
I generally think it is much better to go through what we want to achieve, how it came to be, and so on.
You can generally find that in presentations of the Seven Bridges of Königsberg problem, some game mathematics (Rubik's cube, etc.), or "You could have invented..." type articles (the famous one being for "spectral sequences").
Although there should be different approaches for different learning types. But this is another problem, which is more linked to the (bad) educational structure.
It's worth understanding the context Bourbaki arose in.
An entire generation of French mathematicians was turned to bits of blood and gristle in the trenches of World War I, and so French mathematicians in the 1920s and early 1930s faced an acute shortage of teachers who were current with modern mathematics.
The premise of the Bourbaki effort was to write everything down in enough detail that a sufficiently motivated reader could learn it without having to learn it master-apprentice style -- because too many potential masters were dead.
In that case, that means they have entirely failed.
The very few people I've met who actually bothered with the books only made fun of how horrible they were to read and understand. I never even opened one myself!
I always admired the principle though, because you can reduce everything to pure logic (and even in some cases you can "brute-force" formalism to obtain new results).
Which makes me think category theory kind of fills this role in a better way?
Even without the context, there's something to be said for a formalised approach. When I was in undergrad there was a lecture course given by a notoriously aloof and formal lecturer. One of the other - more popular - lecturers decided to give an "understandable" alternative to the course at the same time of the day. Myself and a few others were in the 5% that continued with the official schedule. Those notes were hard as hell to work through, but once you understood something, you REALLY understood it.
One of the exam questions was conveniently targeted at one of the lectures from the more difficult course. I think it was proving that A5 is simple by considering the rotations of a dodecahedron.
I disagree. An overly dry and formal style (definition, definition, lemma, theorem, corollary, definition, lemma, lemma, theorem, ...) does not make students “really understand” the material. It just focuses students on low-level details of formal definitions and symbolic manipulation and gives a lot of practice regurgitating/performing those, often at the expense of knowing the purpose or meaning of the subject. Low-level details are certainly essential, but the only way to really understand is to figure out what the formalism is for (what problems does it solve), grapple with the possibility space of definitions and theorems (if we picked this alternate definition, would that also get us where we want?), figure out how topics and structures relate to each other, spend some time doing personal explorations, and build up mental models of what the definitions mean, not just their formal content.
A too-dry mathematics course/book is like a screenwriting course where you focus on snappy dialog and details of the setting but never talk about the plot or themes of the story.
I don't disagree. What we found was that, starting from the lecture notes and a few examples, getting to the point where we could complete the exercises meant that we had to do most of the above figuring-out for ourselves. And because we did it ourselves rather than have it laid out for us, the learning was more established.
You seem to assume that a dry approach necessarily leads to not understanding the context.
TL;DR of the below: people differ in the way they learn, and anybody who disregards the dry approach simply because it doesn't work for the majority is doing a disservice to someone it does work well for.
I've personally prepared for my high school and university math tests and exams (those consisting mostly of math "problems") by focusing only on the "dry theory" from mostly "dry" textbooks. I understood the context perfectly well, and on multiple tests and written exams focused on traditional applied math problems I came up with "novel" approaches simply by putting my dry knowledge to use (as in, I approached a problem from theoretical definitions and theorems, but completely not in the way they taught or expected). I wasn't as fast as I would have been if I had learned all the tricks of math problem solving rather than working from the dry theory (in other words, I'd get B grades from very little preparation or even class attendance, not for getting anything wrong but for not having the time to figure everything out).
But most people, teachers and professors included, seem to disregard people like me who are great at applying and seeing context from abstract theory. I never enjoyed practicing math problems just to be fast at math problems, but I very much enjoyed abstract theory building and application: my motivation was never competitive, at least not after 6th grade. For someone like me, it's not just "symbolic manipulation", but actually abstract concept manipulation.
In a way, I was seriously underserved by mathematics classes focused on different types of students than me. And this goes from primary school all the way to university.
So, if the goal is to find students like me, who are likely to excel at pure mathematics and won't have trouble applying it, we actually need a drier approach. Obviously, that's not the goal of primary education, but the same focus leads to less stellar outcomes even at higher levels of education.
What this article proposes is an even larger move in this direction, and people are arguing for it at all levels of education, without ever recognizing that there are people for whom the dry approach might work just as well, even if they are a minority.
> assume that a dry approach necessarily leads to not understanding the context
If you never hear what the purpose and context of something are, you certainly won’t magically infer them. You might be able to guess some parts in broad strokes, but that speculation is just as likely to be completely wrong. For instance, someone learning about set theory for the first time, even if they are some kind of genius, isn’t going to immediately guess that it was established after a crisis in analysis sparked by Fourier theory, which was itself invented to solve tricky partial differential equations arising in physics. (In case anyone wants to learn about this, see https://www.youtube.com/watch?v=hBcWRZMP6xs)
Which is not to say that the presentation of mathematical topics should be primarily historical, but only that the context in mathematics is fractally deep everywhere you look.
> I understood the context perfectly well,
This is not plausible for anyone’s first course. Even life-long experts don’t understand the context “perfectly well”. We are talking about a subject with infinite depth and interrelation, and centuries of history.
Sure, but with mathematics, nobody is ever starting from scratch. And even the "driest" of educational books has some context in it. Or maybe I was lucky, and I haven't seen really dry books.
For initial topics on numbers and geometry, students already have some intuition, usually brought from home (eg. kids can count, add smaller numbers, understand the difference between straight lines and circles...).
Similarly, coming into more advanced courses, I had a bunch of context at my disposal.
Like everything, it can go to an extreme in either direction, but I am mostly saying that I am comfortable with books that tend toward the "dry" end of the spectrum, and I can sometimes figure out many of those "dry theory" applications myself, which brings motivation and joy (but certainly not all; lots of math has, as you point out, taken centuries for brilliant people to find solutions and tricks that work, and I don't kid myself that I can figure all of that out in a few months or years, let alone a few hours — not without learning specifically of the tricks and solutions that do work). And it's definitely not the most efficient way to advance the science of mathematics — but we are discussing teaching and learning mathematics here: keeping students motivated is at the core of any successful learning experience.
Historical context is wonderful in mathematics because it allows one to really see what the original motivation for building up an abstract system was, and that's the best context.
How did you turn «doesn’t try to include context» into «completely useless»?
Bourbaki has no history, no diagrams, limited motivating discussion. It strives to be entirely axiomatic/formal, and to be organized in a strictly “logical” fashion. Bourbaki claims that intuition and analogy are dangerous/faulty and should be avoided.
That doesn’t mean you can’t learn anything from it.
I looked up Bourbaki; my first go was at Algebra (chapters 1 to 3). It seems pretty decent, and a bunch of things are obvious (it has an introductory section which describes some of the motivation and historical context).
It then starts by defining a law of composition using a function from E x E -> E: all pretty obvious. It even uses the common operators + and . (or no sign) to indicate addition and multiplication, all of which are intuitively clear and easy to parallel with what is already familiar. It even explicitly brings up a law of composition not everywhere defined on E, for anyone who has not caught that a composition needs to be a function on E x E -> E, or rather that it must work for all values from E x E. For instance, subtraction on natural numbers is not a law of composition according to this definition.
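To make the closure requirement concrete, here is a small sketch (my own illustration, not Bourbaki's notation) that spot-checks whether an operation keeps its results inside the naturals:

```python
def closed_under(op, sample):
    """A law of composition on E must send every pair (a, b) from E x E
    back into E. Here E is the naturals, so membership reduces to being
    a non-negative integer; we spot-check a finite sample."""
    return all(op(a, b) >= 0 for a in sample for b in sample)

sample = range(10)
print(closed_under(lambda a, b: a + b, sample))  # True: a + b is again a natural
print(closed_under(lambda a, b: a - b, sample))  # False: e.g. 0 - 1 = -1 escapes N
```

Addition passes; subtraction fails for exactly the reason stated above: some pairs from E x E land outside E.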
And straight up on the first page, there are examples of what compositions are available on the set of natural numbers and on subsets of sets.
I am sure it gets hairier as you go along, but this is roughly the type of book I used decades ago while studying mathematics, and roughly the type of book I enjoyed when properly interested and motivated.
It is, of course, obvious that one needs to go through the Set Theory volume first, as the later books rely on the terminology introduced there for precise handling of whatever comes next (I still remember most of the terminology, but I don't trust my memory to get all the specifics right).
It is not completely devoid of context and historical perspective, though it presents it in a slightly backwards way (the introduction is clear in highlighting that the parts between asterisks are not necessary for a purely logical reading of the text).
Again, and I've said this before, formal mathematics is hard mostly because you have to memorize so much of the new language, and you can't really grasp the context without having grasped the context for what you are building on (eg. if you don't have an understanding of functions/mappings, tough luck getting to grips with algebraic structures).
Switching to Theory of Sets, this is the type of writing that brings me joy. And it certainly concerns itself with context in the pretty longish introduction, attempts to recognize the limits of the language, covers metamathematics and use of simple arithmetic before the foundation for it has been formally laid...
I enjoy that it starts off by defining the symbols of a theory, and then assemblies, which is the first time I have heard the term, but I can already feel what assemblies will amount to before the example of an assembly given in Theory of Sets — even if the text warns that "the meaning of this expression will become clear as the chapter progresses."
And that's exactly my point. I can enjoy learning from a text like this. I know I am in the minority (I was in the minority in Mathematical Gymnasium and university studies who enjoyed it; I wasn't even remotely the best at solving math problems, because I did not enjoy them), but I am calling for people not to discount this approach for everybody.
> a sufficiently motivated reader could learn it without having to learn it master-apprentice style
if that's the case, I would say they failed.
however, what they accomplished would certainly help jog the memory of somebody who knew the material once upon a time.
maybe it's a bit like looking at a zip file directly and uncompressing the contents on the fly in your head? (something about 'understanding' as a compression scheme)
Yeah, I must say I prefer the math teaching approach of:
1. Explain some problem that is difficult to solve using other techniques. Ideally the problem should be interesting, even if potentially somewhat abstract.
2. Propose a technique to solve it, without requiring full rigor, but that allows for intuitively feeling that the technique is probably valid.
3. Show that the technique also works on other different problems.
4. Show some flaws (or limits of generality) of the technique. Propose fixes for the flaws, and/or better outline where the technique is viable.
5. Now either prove the technique's validity more rigorously (need not be a perfect proof, especially if a full proof requires far more complicated mathematics) or extend this technique to be able to solve more (but eventually coming back to proving the validity).
This sort of approach has been used in educational videos like some of 3Blue1Brown's on YouTube, and I've also seen it used as a faux historical development of algebra and high school calculus in "Algebra: the easy way" and "Calculus: the easy way", which did a great job of motivating the development of most of the ideas in those courses. I'm not familiar with similar texts for higher level mathematics (which would have a different tone, since they would be targeting adults, not teens/children), but surely some must exist, right?
This is infinitely better than the all too common higher level math textbook approach of:
1. Here is some unfamiliar theorem that you might not even really understand, and whose relevance you certainly have no clue about.
2. Now here is how to prove the theorem. (Which you might be able to follow, but you probably don't really care about right now).
3. Now we finally explain what the theorem means, and hint at (but may fail to show) why it might be useful.
Unfortunately what we typically end up with when learning math is:
1. Explain some problem that is difficult to solve using other techniques. The problem is painfully abstract and completely absurd.
2. Propose a technique to solve it. Never mind how we got to this technique or why it makes sense, just apply this equation and you'll get the answer. Memorize the numbers and symbols, that's all that matters.
3. Show that the technique also works on different (but really the same) problems.
4. Exam time! Here's a problem that looks nothing like the ones you've seen so far, but use the magic technique you memorized! You did memorize it right? Even though you didn't understand it at all?
5. This class is over so you will never use that technique again--even though it's generally useful, but since you never learned how to actually apply it or why it's useful, you'll forget it the day the exam is finished.
I'm also puzzled why so few math courses seem to involve the actual history of an idea. Often, an idea is presented in a vacuum or - even worse - a formal proof is confused with an explanation of purpose.
In my opinion, that's about as effective for teaching mathematical concepts as teaching vi by giving you a copy of the source code.
I think explaining the historical context of an idea and the problems the authors back then were trying to solve with it would be much more valuable for understanding than just the bare proof.
I found a math book aimed at math grads going into official sub-college teaching positions, and was shocked by the historical context provided. Every idea had 2 or 3 variants that differed in origin. To me it would have made math 2x more fun for a lot of pupils. But none of it made it into our classrooms (lack of time? a dusty pedagogical approach maybe?). It's probably vital for most people who don't have their neurons naturally aligned to a subject to have various pieces of information and points of view about the problems and goals.
It's a French book: "Mathematiques au concours de professeur des ecoles" (Hachette editions, 2006).
For instance, the numbering system chapter mentions non-Roman numeral symbols: Babylonian, Egyptian, up to Indian of course. Then various ways to multiply numbers. As a kid everything was pruned and we only saw one thing.
If you're interested in ring theory, for example, Kummer's 1844 manuscript on "ideal complex numbers" clarified a bunch of undergraduate algebra for me. Overviews:
IME, you will get that information if you get a math degree. You are less likely to get it from K-12 or non-major math classes unless you read the textbook (that is, you won't get it from the teacher), because that's not their focus (for better or worse). But often the motivating cases for the math are present in high school and non-major math course textbooks. They're just not part of the class, because that's not what you're going to be evaluated on.
Just want to say that https://BetterExplained.com has long been one of my favorite sites; IMHO the more math teachers and students know about it the better. The author's ability to make potentially daunting subjects comprehensible, even intuitive, is a rare and powerful gift.
I think the emphasis in this title is on 'Learning', as the principles described don't only apply to maths and very much shouldn't be the end point for understanding a concept.
Reducing a concept to a 'cartoonish' essence is really the secret to getting an intuition for a new concept, to allow the learner to 'get' it for the first time. The reason is that when one sees something for the first time, its value or meaning isn't apparent. It's just shapes on a page. In other subjects it might be words on a page or a list of events without any throughline or narrative.
Reducing something to a 'cartoon' allows you to focus on the orthogonality - the thing that that concept does that can't be done elsewhere and the reason why you should pay attention and find somewhere in your mind palace to put this new thing.
But I would emphasise that while the ability to understand something well enough to reduce it to its bare essence for a given audience requires a high level of understanding and mastery, the thing the article skims over somewhat is that for the audience this is only the first step, and it should be reinforced and expanded upon almost immediately.
BetterExplained as an outlet mostly focuses on the pop-explanations sector, which is underserved in mathematics (with 3Blue1Brown and Numberphile doing a lot of the heavy lifting), but as a general philosophy of learning, this article needs a bit more development.
Mathematics aside, am I the only one who finds this style of cartoon caricature unappealing? I find that these kinds of caricatures often bear little resemblance to the actual person, as well as being simply unpleasant to look at. In this instance particularly:
> Technically, his head is an oval, like yours. But somehow, making his jaw wider than the rest of his head is perfect.
I honestly don't understand this point. Looking at the original image, his head and jaw seem perfectly average to me. I understand that caricaturists love to exaggerate features, but why praise them for amplifying a feature that does not really exist?
> We agree that multiplication makes things bigger, right?
Huh!? I have luckily never heard anyone try to explain multiplication that way before. It's a horrible mental model for multiplication!
> Imaginary numbers let us rotate numbers.
Kind of ... it's useful to visualize multiplication by imaginary numbers as analogous to rotation, but this makes it sound like imaginary numbers are useful because they let us "rotate numbers", like that's something we always wanted to do. This is neither the essence of nor the impetus behind imaginary numbers.
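For what it's worth, the rotation picture itself is easy to check numerically with Python's built-in complex type (a throwaway sketch, not anything from the article):

```python
# Multiplying by i rotates a point of the complex plane by 90 degrees
# counterclockwise; doing it four times is a full turn.
z = 3 + 4j
print(z * 1j)        # (-4+3j): the point (3, 4) moved to (-4, 3)
print(z * 1j * 1j)   # (-3-4j): rotated 180 degrees, i.e. negated
print(z * 1j ** 4)   # (3+4j): four quarter-turns bring z back
```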
> The number e is a little machine that grows as fast as it can
No no no! We happen to use e as an exponential base a lot because it's convenient, but it's exponentiation, not e, that grows quickly! f(x) = 4^x grows faster than e^x does; e^-x shrinks; e^0 stays where it is ... this is a horrible intuition to have!
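The counterexamples in that paragraph take one line each to verify; a quick sketch:

```python
import math

# The growth comes from the exponent, not from the base e:
print(4 ** 10 > math.e ** 10)   # True: 4^x outgrows e^x
print(math.e ** -10 < 1)        # True: e^(-x) shrinks toward 0
print(math.e ** 0 == 1)         # True: e^0 goes nowhere at all
```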
> The Pythagorean Theorem explains how all shapes behave (not just triangles)
It works on other shapes because you draw triangles in those shapes ...
These are awful, awful intuitions about math. We should not take advice about how to learn math from someone who so deeply struggles with basic concepts about it!
Edit: this article is so bad that I just have to add another example:
> Euler’s Formula makes a circular path.
But you just said that e is a little machine that grows as fast as it can! How can it just spin in a circle!? His own intuitions are completely inconsistent! Besides, Euler's Formula has no free variables ...
You seem to be specifically looking to find a frame in which the author is incorrect, which is basically a sign of the problem he's getting at in the article - the inability to allow ideas to be reduced to 'cartoonish' principles in order to facilitate a different (more engaging?) method of learning math, at the expense of being "technically correct".
All your points seem to be making the same type of argument, so I'll just address the first one from my perspective, with the author's points in mind.
> Huh!? I have luckily never heard anyone try to explain multiplication that way before. It's a horrible mental model for multiplication!
The model definitely worked for me when I first encountered it and it worked for many of my early friends (a couple of whom are now accredited mathematicians). The other cases surrounding multiplication revealed themselves naturally, as I encountered multiplication by 0, multiplication by negatives, etc. (exceptions/cases noted by the author). The basic model gave me a way to think about how I could put the tool (multiplication, in this case) to use without being overwhelming and my self-driven discovery of the unexpected parts made me much more interested in what else I might be missing than if someone had given me a full, technically correct definition from the get-go. The key is that this is specifically aimed at those who are learning, not those who are 'learned'.
> You seem to be specifically looking to find a frame in which the author is incorrect
Actually I read the article because I love the idea, and agreed with everything he said up until halfway through, when it completely veered off into nonsense-land with awful examples. I have no idea why you've decided you know my motivation for criticizing the article. I'm criticizing it because it is bad!
> The model definitely worked for me when I first encountered it and it worked for many of my early friends.
Unfortunately it doesn't sound like either of us really have data on this, your anecdotes notwithstanding. But I have to admit I really wonder whether you're (a) remembering your elementary-school mental state correctly, and (b) correct about your friends' mental states as well. After all, you claimed to know my motivation for writing my comment, so maybe you're not as much of a mind-reader as you think. And it's really hard to remember the way you thought about something N decades ago!
Maybe you're thinking of multiplication as being "repeated addition"? Great! That's a good cartoon of multiplication. That's a foothold into the concept that doesn't lead you astray.
The reason it's a good cartoon is that it is correct -- technically correct -- for a sensible subset of numbers, and it extends naturally to other concepts. Multiplying 7 by 0.5 really is analogous to "adding 0.5 sevens together"; multiplying 7 by -2 is analogous to "adding -2 sevens together". And it's useful if you've been needing to do lots of addition and you're tired of hitting the + button on your calculator over and over. It's immediately useful because addition is useful, and it makes repeated addition easier!
But "it makes things bigger" gives you none of this power -- none whatsoever. It just gives you a weird, bad mental image of it inflating a number into a bigger number ... somehow. It's not correct even for the Whole Numbers -- multiplying by 1 does not make anything bigger! You don't even need 0 for it to be wrong! And if you multiply by 0.5 or 17lbs or i, there's absolutely no way of stretching "making it bigger" into an analogy that's at all useful. It's just bad, bad, bad, bad. Completely different from "multiplication is repeated addition"!
The rule "it makes things bigger, except when it doesn't" is exactly what people hate about math. A weird, opaque non-definition that isn't even true most of the time.
For a time, I too shared the idea that multiplication makes things bigger. I don't remember if it was taught this way, or that was just what I inferred on my own. In any case, it was an absolutely terrible way to think about it. Sure, it was fine until fractions. But unwinding that poor mental model was a jarring, painful event. I remember to this day banging my head against the figurative wall, crying literal tears trying to understand how multiplication by a fraction made something smaller.
I think the problem I have with maths is that at the point where I start to realise that the simplifications are wrong, I lose trust in every area of my understanding and want to give up. I can't be comfortable feeling that there's a missing hole in my mental model.
OTOH, it doesn't work at all to give exhaustive mathematical proof because it doesn't construct a model in my head.
So I don't know what the solution is really.
To take complex numbers for example: the idea that -1 has a square root seemed like utter bullshit as it contradicts other models one has had drummed into one. If you look at it as a means to an end, however, it's a useful little bit of machinery that can help to make other problems easier to solve (e.g. converting a differential equation to a quadratic). That makes sense and explains why we're doing it whereas talking about SQRT(-1) starts one off with the feeling that everything is nonsense.
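As a sketch of that "means to an end" use: y'' + y = 0 has the characteristic equation r^2 + 1 = 0, whose roots are the "nonsense" numbers ±i, yet Euler's formula turns them into perfectly real oscillations (standard-library Python, my own toy example):

```python
import cmath
import math

# The characteristic equation r^2 + 1 = 0 has no real root...
r = cmath.sqrt(-1)
print(r)  # 1j

# ...but e^{rx} = cos x + i sin x (Euler), so the imaginary root
# encodes the real oscillating solutions cos x and sin x of y'' + y = 0.
x = 0.5
print(cmath.exp(r * x))
print(complex(math.cos(x), math.sin(x)))  # numerically the same point
```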
I have aphantasia. For reference, until 2019 or so I thought things like mental imagery and mind palaces were just metaphors and my memory for such things was poor (even after seeing Sherlock put his mind palace to screen).
As you can tell from my username, I am mathematically inclined (4th in the state in MathCounts, 2nd place team, a 5 on the AP Calc AB exam in 10th grade, and 5 on the BC exam the next year despite not having a math class that year[1]). I'm actually decent at the folding nets problems ("which cube matches this unfolded pattern?") as well, but I suspect my methods just don't match how those that can visualize and manipulate those visualizations do it.
As for the article at hand, there's no real difference to me in terms of what I get out of it (the cartoon gives me a slight smile, but good caricatures usually do) as I only remember the "gist" of such images anyways. I can easily recognize it as the Married with Children actor, but I couldn't begin to verbally describe it in any useful way for even a cartoonist to make without looking at it at the same time (barring "it's a well-known actor": "it's a male face…surprised or 'deer in the headlights', some stubble if any facial hair, short messy hair").
I feel like on some level the aphantasia is helpful. Namely when expanding some concept such as "equality" to "equivalence" to allow such things as "the sum of all natural numbers is -1/12" or modular equivalence relations, because I can just replace "is" with "can be treated as" without much mental work. Someone who has a visual representation of such things may have trouble "uprooting" such understandings to plant them in richer soil (a metaphor I have no trouble using despite not having a picture of such a thing handy).
Do you have a link to the Ross post?
[1] Having finished Honors Calc in 10th, I instead tripled up on science classes my junior year and went to a local college for math my senior year.
This is interesting, and I'm curious if I could pick your brain a bit. I don't have the math accolades you do, but got through all of grade school math (up thru Calculus) with ease, as just "one of those kids who are good at math", and got my B.Sc. in math without much fuss. I consider myself on the opposite end of the spectrum from someone who suffers (is blessed?) from aphantasia; I can muster up not only a mental image of just about anything, but really recreate any sensory experience (although it has a dullness to it) in my head.
This was obviously a huge part of math for me growing up, and as a professional SWE for over a decade now, it's something I still lean on heavily. What I'm wondering is: how different, really, are our methods for reasoning about mathematical things, especially the shape rotation/unfolding you mentioned? Do I get to the solution via the mental visualization of the process, or do you and I perhaps perform a similar non-imaged subconscious process of unfolding, and I just "check my work" in a way?
If that were the case, I'd argue aphantasia is an advantage, as you're not bothering your brain with unnecessary work. This is all handwavey speculation of course, but I've always found this stuff super interesting.
> I consider myself on the opposite end of the spectrum from someone who suffers (is blessed?) from aphantasia
One distinct downside is that I'm queasy around other people's blood or gore. I can't watch visceral movie scenes for too long or visit some science exhibits (that I'd otherwise enjoy) because of it (at least this is my suspicion). My problem is that because I can't visualize a discussion about, say, amputation, my mind grafts the description onto me and things go downhill from there. However, I was able to watch some small procedures on myself just fine because I could see that it was OK (though I still cannot watch my own blood being drawn).
> I can muster up not only a mental image of just about anything, but really recreate any sensory experience (although it has a dullness to it) in my head.
Yeah, I don't have anything to compare that to. I recall "gists" of things through feelings and notions rather than anything I'd call "sensory". I cannot recall scents or tastes all that well, and even sounds get muddled up very easily (I have a very hard time finding a rhythm, melody, or other patterns in music, though I can usually pick out instruments while listening; "tap out this song" detectors never worked well for me because I would tap nearly every note and lyric syllable, not some "core" pattern from it).
Some interesting side effects of this include a blank stare while I think about a list of items when someone asks something like "what is your favorite food or drink?". The answer sometimes changes depending on whether I remember "everything" too. Even "what do you think about your travels to X?" brings it up because I have to first conjure up the feelings associated with that travel, remember the big events in that timeline, and then put it into words for the answer (hoping I didn't miss anything).
> What I'm wondering is, how different really are our methods for reasoning about mathematical things, especially the shape rotation/unfolding you mentioned.
The first thing I notice is that I do not "unfold" the possibilities at all; I turn it into a logic puzzle. I focus on one face that is in "most" of the images and look for contradictions with the net by focusing on one or two vertices of that face. Some rules come out of this, like "two steps along one axis is an opposing face": if you see those two symbols on one possibility, it's obviously not a candidate. It also works fine for the harder versions where the symbols on the face are right but rotated incorrectly relative to the facing edge. This usually gets you down to one candidate, and you can move on to the next question while your hand is filling in the bubble. I suspect (though I have no actual experience with it, as most such tests use cubes) this scales up well even to octahedra and dodecahedra, but it starts to fall apart when trying to keep the order/orientation of the 5 items around a vertex of an icosahedron, or when the possibilities do not share many symbols in their views, which greatly slows down the method. "Adversarial" nets, where the answer depends on two distant legs of the net that I need to mentally fold across multiple edges, would also likely stymie my strategy, because I have to think "star circle square" as I walk the edges and cannot just "snap" a picture of the sequence as a gestalt.
I took part in a research experiment recently where they showed some 3D shapes made of cubes rotated around a vertical axis and asked "are these two objects the same or mirror images of each other?". I obviously don't have the answer key, but I felt that I did really well except when mirror images were rotated 180° from each other. I don't know if I have an explanation for that (usually I realized my mistake right after answering).
> If that were the case, I'd argue aphantasia is an advantage, as you're not bothering your brain with unnecessary work.
I think a major difference is in working memory. What you might be able to "freeze frame" as a collection, I have to extract out the distinct parts and remember them more individually. Sometimes I can "find" a pattern and gestalt that way which tends to stick quite well (like I do for my wife's phone number or a friend's phone number I saw a pattern in once he got it), but most things are just rote memorization.
For example, "star square circle" could just be "decreasing side counts" and just using recognition to determine membership later versus "star circle square" which is more likely to be 3 working items. If they are also colored, that doubles the work for me right away as they otherwise don't have any color association to me.
* I also have an aversion to blood, and I suspect for similar reasons.
* Music is for me what the visual mind's-eye probably is for most people. I feel like I can basically 'hear' the music, in a similar way that people say they can 'see' their mother's face when they close their eyes. I wonder at over-developing one sense when another is compromised, like the blind person who learned bat-like echolocation: https://www.youtube.com/watch?v=a05kgcI9D2Q
* Your star square circle example on working memory is spot-on. I enjoy memorization like a hobby, possibly another over-developed thing. If I'm on a long walk, or have to wait a long time for something, I love to practice memorizing epic poems and pi.
Thanks for the in-depth writeup, this is all super interesting, and lo and behold it sounds like the answer to what I was poking at was "it depends" and "there is nuance", which is honestly super neat. Just more fuel for the fire of "the human brain is fantastically complex and we know so little about it": the fact that the two of us could have such different experiences and mechanisms that ultimately serve similar purposes, moving a human being through their life while keeping them alive and "well".
FWIW, I am self-diagnosed with aphantasia (I can't imagine any feature of my kids' or wife's faces, for instance: I can only remember those I verbalised), and I pretty much excelled in maths and got a BSc in maths and computer science.
However, I want to point out that your repulsion to blood or gore likely has nothing to do with aphantasia: I frequently have my blood drawn for medical check-ups, and I have never had a problem with it, going all the way back to childhood.
When I read his original post, first learned about aphantasia, and realized that most people have a functioning 'mind's eye', it was one of the weirdest days of my life.
I don't know if I could classify it as "incredibly valuable in almost all contexts". I have no idea what it's like to have such imagery for comparison. I dream, but cannot remember them beyond tiny details. I've had those nights where I wake up really wanting to remember a dream, but it's like trying to hold water in my palm. However, I also know when my eyes are closed because it's just pitch black (though I can tell if there's a bright light and some direction), so I know when dreaming is over quite easily.
I wonder if some of my love of word play comes from being able to just "reorient" around a concept easily. I can also read upside down or in a mirror (but not if both transforms are applied) at reasonable speed. The former was quite handy in groupwork in school because I could just sit on the other side of the table or desk and follow along instead of crowding around.
> When I read his original post and first learned about aphantasia, realized that most people have a functioning 'mind's eye', it was one of the weirdest days of my life.
The strangest thing I found is that, with hindsight, the clues are everywhere. Functioning "mind palaces", eyewitness testimony, the ability to even slightly help a sketch artist. I remember being in a yoga class and they told us to imagine "three circles around yourself". I was thinking of the circles orbiting like moons; apparently everyone else had them as concentric circles centered on themselves. I could easily flip between them, but others thought it weird how I "saw" it at first.
Everyone here, including the original poster, has, I believe, made the mistake of thinking the article meant to literally visualize a problem like a cartoonist. The cartoon caricature is meant only as an analogy. Possibly a blind spot for those with aphantasia, but the thesis of the article isn't centered on literally visualizing something; it's about capturing some salient points, perhaps exaggerated and not technically correct, as a way of starting to explain a mathematical concept. It's perhaps unfortunate that the chosen analogy was difficult to understand, or that it sidetracked people who have trouble with the process of making a cartoon caricature.
I am self-diagnosed with aphantasia :), but I understood the article's desire to simplify by exaggerating the most prominent features relevant to a given context, similar to what cartoons do (though I implicitly assumed they meant a "caricature" rather than any "cartoon"; I haven't checked whether my understanding matches the actual definitions of those words). So definitely not a blind spot for those with aphantasia.
Still, the article does an extremely poor job of showing how that is done well, thus fails in getting the point across.
Even the cartoon of a person example sticks out to me as not amplifying a feature, because I don't see the jaw line as a feature in the original face at all. If anything, I see non-existence of that jaw line in the original photo. As such, it's a great model for the article: bad caricature, just like all the bad math examples below.
I draw on paper when I'm trying to visualize something, but the area I work in is quite abstract and very often the images are huge oversimplifications that give some intuition, but on which one shouldn't rely too much.
I agree that the suggested method doesn't sound like a good approach at all to me.
I think the bit about the definition of multiplication could be applied to code as well: a source file necessarily contains all possible logic flows, whether a flow represents the main operation, an obscure special case, or parts that are completely unused or practically unreachable. This can easily hide the "gist" of a function among heaps of clutter.
I think it would be interesting to have some tool that took code traces or coverage information and generated a view of a source file where each line's font size depends on how often that line was executed. Ideally, "important" lines would end up with a large font, while special cases, error checks, etc. would become "fine print".
I'd be curious whether this would help make codebases easier to understand.
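As a rough sketch of the idea (the function, the HTML output format, and the hit counts here are all illustrative assumptions; real per-line counts could come from something like coverage.py), the mapping from execution counts to font sizes might look like:

```python
# Hypothetical sketch: render a source file as HTML where each line's
# font size scales with how often it was executed. `hits` maps
# 1-based line numbers to execution counts (an assumed input format).
import html
import math

def render_heat_html(source_lines, hits, min_pt=7, max_pt=16):
    """Map hit counts to font sizes; unexecuted lines become fine print."""
    peak = max(hits.values(), default=1)
    out = ["<pre>"]
    for n, line in enumerate(source_lines, start=1):
        count = hits.get(n, 0)
        # log scale so one hot loop doesn't dwarf every other line
        frac = math.log1p(count) / math.log1p(peak) if peak else 0
        size = min_pt + (max_pt - min_pt) * frac
        out.append(f'<span style="font-size:{size:.1f}pt">{html.escape(line)}</span>')
    out.append("</pre>")
    return "\n".join(out)

src = ["def f(x):", "    if x < 0:", "        raise ValueError", "    return x * 2"]
print(render_heat_html(src, {1: 1, 2: 1000, 4: 999}))
```

The log scale is a deliberate choice: with linear scaling, a tight loop executed millions of times would push every other line down to the minimum size.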
That is exactly what profilers do. Xcode gives you a heatmap of what percentage of the execution time was spent in a particular function (at least as of ~2013, when I last used that feature).
I'm tired of these kinds of titles. They read like news headlines: nothing but a method that maybe worked for someone. As the adage states: "there is no royal road to learning."
When we talk about 'learning mathematics' it's important to recognize a huge difference between learning how to use mathematical discoveries/inventions, and learning how to develop mathematics from scratch, via the whole theorem -> proof -> new theorem route that defines 'pure mathematics'.
Most people are simply not going to get a lot out of learning the latter method and will indeed be turned off by it, much to the disappointment of the professional mathematicians (i.e. most college professors in maths). I'd guess > 95% of people taking higher maths courses are not going to ever develop new proofs - but they will use what they've learned in other areas, such as physics, biostatistics, finance, etc. Essentially we just take it on faith that the mathematicians got their proofs right, and we gratefully use the fruits of their labor. (They're all quite mad, those mathematicians, if you ask me)
Now, when you first learn how to apply maths to things like physical problems, this is where the cartoons, or 'simple approximations neglecting complex factors', become really important to learning. You don't want to try to include friction when first examining falling weights and springs and pendulums through a physical viewpoint, for example. Later on, when you get that job with SpaceX, understanding friction in depth will be critically important, but if you don't start with the simple cartoon approximations, it'll be way too much to comprehend.
However, this probably wouldn't work for the real mathematicians. They've got their axioms, then from the axioms they develop proofs, then from those proofs they develop more proofs - there's no approximation or simplification involved in all that, is there?
The only problem with this dualism is that it really isn't so.
The way mathematics is developed is the same way it should be learned and applied. Your geometry class today does not start with Euclid's axioms, but instead by developing intuition for concepts in the simplest way possible (2D planes, 1D lines), playing around with them, then formalizing them (which means defining them rigorously to the very last detail, to ensure e.g. your concept of a straight line is always a straight line), and generalizing what you learned about them (stating it as theorems). The same goes for numbers: I've yet to hear of people starting their arithmetic training with Peano's axioms, yet it's common to teach numbers by pointing at one object and moving another one next to it, and claiming that's now a "two"; adding another one to that gives you a "three" (successor() function, right?).
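That successor idea can even be made literal in a few lines. A toy sketch (all names illustrative) that builds naturals from nothing but a zero object and a wrapper, then defines addition and multiplication recursively, Peano-style:

```python
# Peano-style naturals built only from a zero object and a successor
# wrapper -- no built-in arithmetic on the numbers themselves.
ZERO = ()

def succ(n):
    # "move another object next to it": wrap the current number once more
    return (n,)

def add(a, b):
    # a + 0 = a;  a + succ(b) = succ(a + b)
    return a if b == ZERO else succ(add(a, b[0]))

def mul(a, b):
    # a * 0 = 0;  a * succ(b) = a * b + a  (multiplication as repeated addition)
    return ZERO if b == ZERO else add(mul(a, b[0]), a)

def to_int(n):
    # unwrap back to an ordinary int, just for display
    return 0 if n == ZERO else 1 + to_int(n[0])

two = succ(succ(ZERO))
three = succ(two)
print(to_int(mul(two, three)))  # 6
```

Which is, of course, exactly the formalization nobody should start their arithmetic training with; the point is only that the classroom "add one more object" game and the formal definition are the same move.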
Sure, most primary education skips this playing around with concepts and fails to demonstrate the failures that come from not being precise enough, but those failures are all trivial to demonstrate.
But all of those are exactly what is used in formal/pure math development, and exactly the same processes are used when applying mathematical insights anywhere else. There is no difference at all!
The amount of rote memorization and combinatorial explosion rises as you go into more advanced math with more abstract concepts to deal with (which is what I think people really struggle with, since it's a different language; intelligence then comes into play as well, as you start dealing with greater complexity). But I like to say that math is the simplest form of expression of the human brain, but no simpler. It has well-defined boundaries of our comprehension too (we need to assume some things are true without proving them, aka axioms and base terms).
The biggest problem of all math instruction today is that it's mostly performed by those not understanding the simplicity of it, and thus the only thing they can do is transfer their misunderstanding on to their students. This includes parents who teach their kids basic concepts like basic geometric shapes or numbers.
Another big problem is that people simply fail to realize how much rote memorization is required for understanding mathematics. It being a different language, you have to learn all the "new" words and grammar to be able to apply it effectively; people usually discount that as unimportant, assuming that only understanding mathematics is hard, because of its abstract nature.
I don't see what's wrong with "Multiplication copies things".
The fact that "copying" can be a continuous scale, from 10 copies to 1 copy down to half a copy and zero copies, captures most of the behavior that a non-math person working as a programmer is interested in (programming being one of the main uses for math, right next to using it to learn other math and becoming a math-heavy coder).
The main actual uses of multiplication before you get to advanced stuff are things like volume controls, total price of N units, area of a square, etc, that are pretty much copying.
Except Ohm's law. That doesn't quite fit intuitive models of math, and when you actually go to use it you usually wind up wanting to know power dissipation, which is nonlinear in a very bizarre multi-variable way in some real-life use cases, since you're usually looking at systems as a whole.
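For the curious, a tiny illustration of where that nonlinearity comes from (standard formulas; the resistance value is an arbitrary example): combining Ohm's law V = I*R with P = V*I gives P = I^2*R, so doubling the current through a fixed resistor quadruples the dissipated power:

```python
# Power dissipated in a resistor: P = V*I = (I*R)*I = I^2 * R
def power_dissipated(current_a, resistance_ohm):
    return current_a ** 2 * resistance_ohm

r = 10.0  # ohms, arbitrary example value
print(power_dissipated(1.0, r))  # 10.0 W
print(power_dissipated(2.0, r))  # 40.0 W -- twice the current, four times the power
```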
In a nutshell, whatever this article suggests, just don't.
This oversimplifies to a point where all learners learn the same way. They don't.
I don't even see the wider jaw in that cartoonist's depiction, and I'd never recognize the man from it: a good cartoon will amplify features of someone, but this is a completely non-existent feature of the face being drawn. If anything, the guy has an overly oval face compared to an average face.
So basically, based on the wrong premise, it happily leads you to a wrong conclusion.
A better take-away would be to attempt to recognize multiple ways to learn something, and make an effort to see what works best for any single individual. If you can't afford that (too time consuming, thus too expensive, to cater to each individual student), choose what you optimize for: having most kids learn to a particular (likely lower) standard, or having "most-compatible" (eg. in maths, those who kinda already have the mathematical, algorithmic, abstract mind) get the best of their talent. But you will be compromising either way.
That is: all models are wrong, some are useful. Including this one.
Have you ever taught a class? I have, and I absolutely need simplified (aka wrong, but the right kind of wrong) models of learning to be able to operate at all.
I haven't taught a class, but I don't think that discredits my opinion at all. Why do you think it does?
I never said you don't need simplification, but the way the article poses it is disingenuous at best. E.g. look at the example of multiplication they give and how it's "technically correct" to call multiplication mostly reducing numbers. Bollocks. Neither "making things bigger" nor "reducing numbers" is technically correct: neither applies.
The natural approach to learning multiplication is to work with objects, and thus natural numbers. That's why it's called "multiplication": you make multiples of something.
You can easily introduce negative numbers as being in debt for X things, and in that sense, multiplication still only increases your debt. Negative numbers are a shorthand for "you owe me this".
The next step is to introduce rational numbers, which are parts of something. That's pretty clear too. Then you introduce multiples of parts of something, and it all still makes sense (quarter of a quarter is now something slightly different). Particularly inclined students will "feel" that you are now entering somewhat abstract "multiplication".
Something similar goes for real and complex numbers, though that requires a bit of a leap of faith.
This is a proper way to teach multiplication. It starts simple, yet it's never wrong.
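The "parts of parts" step can even be played with directly using exact fractions; a minimal sketch:

```python
# "Multiples of parts" with exact rational arithmetic: multiplying by a
# whole number still makes multiples, and a quarter of a quarter is the
# slightly more abstract step the text mentions.
from fractions import Fraction

quarter = Fraction(1, 4)
print(3 * quarter)        # 3/4: three multiples of a part
print(quarter * quarter)  # 1/16: a quarter of a quarter
```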
The premise of the article makes it come up with a completely unnecessary and untrue statement: "People generally multiply positive numbers greater than 1, so multiplication makes things larger." People mostly multiply natural numbers: this is what drives the intuition and the naming of the operation (technically this still falls under the article's statement, but that statement is unnecessarily technical in restricting itself to numbers greater than 1 while allowing rational and real numbers, though I am not sure where complex numbers fit in ;-)).
By focusing on technicalities, the article misses the natural way of simplifying things that is never wrong.
Mathematics today is beautifully built from very simple concepts up. Depending on the level you are teaching at, it requires different levels of suspension of disbelief.