Thanks for sharing this particular Feynman lecture. The treatment here, using what are basically the Peano postulates to derive the field properties of the real numbers and then the basic structure of secondary school mathematics, follows the treatment of Landau's Grundlagen der Analysis (Foundations of Analysis), a concise book that Feynman was probably aware of as he presented his lecture. Feynman of course added a sense of excitement and wonder that makes this lecture charming to read. Such treatments of the foundations of secondary school mathematics are fairly commonplace in the better university textbooks of mathematics. I first learned of Landau's book in a discussion of mathematics education in a Usenet newsgroup back in the 1990s, and bought the German edition on the recommendation of Michael Spivak's famous textbook Calculus, which follows a similar approach (but starting from the field properties of real numbers taken as axioms). In those days, I'm pretty sure, the standard calculus textbook at Caltech, where Feynman taught, was Apostol's textbook, which starts a little bit differently but also takes a theorem-proof approach.
Nice I was going to mention Spivak but you beat me to it. I stumbled on it in a library 20 years ago and was hooked. I bought the Differential Geometry books with the pretty covers and lost my book collection before I could do much with them. I'll have to check out Apostol, I only know the Number Theory book.
A lot of the issues around complex numbers can be intuitively resolved by drawing a unit circle around 0 on a number line, and reasoning backwards to a definition of i, rather than reasoning forwards from sqrt(-1).
It is often taught by its historical development, so first as an alleged imaginary solution to an equation with no real roots.
This has it backwards as far as intuition is concerned. The geometric interpretation is the obvious one, and the use of `i` in finding zeros is a special case.
In many American schools Mathematics is taught as if it were composed solely of magic recipes. Students need only to memorize the patterns described in the red boxes in the textbook to pass.
It's a lot easier to see if you imagine i to be a symbol R that rotates vectors in the plane, since e^(i ɸ) = cos ɸ + i sin ɸ doesn't use the additive property of i at all.
(Well... I guess it does in a sense, because all it means to be additive is that it's linear, so you can add up all the complex-valued terms in e^(i ɸ) and give their coefficients the name sin().)
Figuring out what e^(iɸ) means requires figuring out "What `i` is" and "What e^x means on non-numbers" at the same time. That is, it requires you to perform two intellectual jumps at once instead of one at a time. No wonder it is so confusing.
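A quick illustration of the rotation view (my own sketch, in Python's notation, where i is written `1j`):

```python
# Treating i as a quarter-turn: multiplying a complex number by 1j
# rotates the corresponding point in the plane 90 degrees
# counterclockwise.
z = 3 + 2j
print(z * 1j)        # (-2+3j): the point (3, 2) lands on (-2, 3)
print(z * 1j * 1j)   # (-3-2j): two quarter turns = multiplying by -1
```

Two quarter turns compose into a half turn, which is exactly multiplication by -1; that is the geometric content of i^2 = -1.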
If these concepts are made sufficiently simple, I imagine that we could live in a world where we also teach e^(a d/dx) f(x) = f(x + a) in high school.
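As a toy illustration of that operator identity (my own sketch, not from the lecture): for a polynomial, the Taylor series exp(a d/dx) f(x) = sum_n (a^n / n!) f^(n)(x) terminates, so the shift comes out exactly.

```python
import math

def shift_via_series(coeffs, a, x, max_terms=20):
    """Apply exp(a * d/dx) to the polynomial whose k-th coefficient is
    coeffs[k], by summing a**n / n! * f^(n)(x) term by term."""
    def deriv(c):
        return [k * c[k] for k in range(1, len(c))]
    def ev(c, x):
        return sum(ck * x**k for k, ck in enumerate(c))
    total, c = 0.0, list(coeffs)
    for n in range(max_terms):
        total += a**n / math.factorial(n) * ev(c, x)
        c = deriv(c)
        if not c:          # the series terminates for a polynomial
            break
    return total

f = [1.0, -2.0, 0.0, 3.0]                      # f(x) = 1 - 2x + 3x^3
a, x = 0.7, 1.3
lhs = shift_via_series(f, a, x)
rhs = sum(ck * (x + a)**k for k, ck in enumerate(f))   # f(x + a)
print(abs(lhs - rhs) < 1e-9)                   # True: the series shifts f
```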
Pardon my mathematical ignorance here, but I've always been curious whether negative numbers are the only alternate set that extends from the origin at zero, or merely the only set diametrically opposed to their positive counterparts. Are there in fact an infinite number of these lines extending from the origin at zero, radiating from it in all directions, with what we think of as the positive and negative numbers simply one pair of rays arbitrarily chosen as our base units?
I often think that a lot of math would work out more easily if we _only_ used polar coordinates and regarded the negative numbers as a "separate number line" rather than a continuation of the positive numbers.
(In particular, if you know about delta functions... a lot of weirdness around x=0 goes away if you write everything in terms of r sgn(r) and take the derivatives of both terms. e.g. this gives "for free" the fact that the divergence of the field r̂/r^2 is 4 pi delta^3(r).)
I have heard of systems in which one sticks more lines out from 0 than just the positive and negative numbers. At some level that's what R^2 is, with four copies of the positive number line, but I don't see a strong reason why in principle you couldn't have an odd number of lines, which would correspond to... uh... R^1.5. But you have to define how these lines rotate into each other, and it is gonna be weird.
This is one take, and one that has become very popular, but it's not necessarily the only take. In particular it presupposes that your number-like indeterminates can be both (a) multiplied and (b) added to numbers (and, implicitly, divided). Naturally the solution has to be a division algebra. If instead you asked the question "for what values of O would O^2 (v) = -v", or even just O^4 (v) = v, then you would be more content having the answer live in a different space, of operators on vectors rather than vectors themselves, instead of in a field extension of the present space. Of course they are basically isomorphic but I think the alternate interpretations are useful to keep in mind so that we don't accidentally assume our way into a box of our own making.
edit: I should add, by O^2 I mean O ∘ O, so there's no definition of "multiplication" on these necessarily, just composition.
I thought they were asking about whether a coordinate system with basis elements e_1, e_2, ..., could be re-parameterized after rotation, and whether the re-parameterizations are infinite. The answer is simple via geometric algebra: yes.
Given a rotor R = e^(-I theta/2) = cos(theta/2) - I sin(theta/2), where I = e_1 ^ e_2 (* is the geometric product, . is the scalar product, ^ is the wedge product), you get the relationship in the new coordinate system:
x' = R * x * ~R
Since it's parameterized for any theta in [0..4pi], there are infinitely many of them; furthermore, you get to pick which path you take to do the transformation along the way - either the 'negative' direction [0..2pi] or the 'positive' direction [2pi..4pi].
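A minimal numeric sketch of that last point (my own encoding, not from the comment above): in 2D, identifying the bivector e_1 ^ e_2 with the imaginary unit lets the rotor and its sandwich product be written with ordinary complex numbers.

```python
import cmath
import math

# Assumed 2D encoding: identify the bivector e1 ^ e2 with 1j, so the
# rotor for a rotation by theta is R = exp(-1j * theta / 2).
def rotor(theta):
    return cmath.exp(-1j * theta / 2)

def sandwich(x, R):
    # For a 2D vector encoded as the complex number x, the sandwich
    # R * x * ~R reduces to x * conj(R)**2, i.e. x * exp(1j * theta).
    return x * R.conjugate() ** 2

theta = math.pi / 3
R1, R2 = rotor(theta), rotor(theta + 2 * math.pi)

print(abs(R1 + R2) < 1e-12)                 # True: the rotors differ by a sign...
print(abs(sandwich(1 + 0j, R1)
          - sandwich(1 + 0j, R2)) < 1e-12)  # True: ...yet rotate vectors identically
```

The sign flip between theta and theta + 2pi is invisible in the sandwich product, which is why the parameterization runs over [0..4pi] rather than [0..2pi].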
I think it's worth revisiting elementary algebra, because the way most of us learned it in grade school doesn't really do it justice.
With that said, we abandoned `ab = ba` because it's not useful for e.g. linear algebra. Elementary algebra is a very specific (but very useful) mathematical "DSL" over the reals. It's also not necessarily going to help you learn to reason about the kinds of abstractions you have whilst programming, per se, because we can't reverse `a . b` to `b . a` when we code either.
Make no mistake, this is a knowledge for knowledge's sake endeavor. A liberal arts of the STEM fields, if you will.
Commutativity has been abandoned? What?! Nothing could be farther from the truth!
The requirement for order independence is literally everywhere … where would we be if we couldn't add, take set unions, or form lattice joins without fear of getting the order wrong?! Imagine getting two different waveforms when adding one to the other, depending on the order!
I think the argument is more that commutativity of operations is context-dependent, even when the operation is written as multiplication. For example, knowing whether the terms of a multiplication can be commuted requires knowing whether the operands are scalars or matrices.
Integer overflows/underflows don't affect commutativity, unless you refer to undefined behavior, which is...undefined and found only in C and C++ among common languages. 2's complement integer arithmetic is commutative for both addition and multiplication.
Thanks to BeetleB for pointing out that addition in IEEE floats is indeed commutative (I originally claimed "+0 + -0 = +0 vs. -0 + +0 = -0" which is incorrect).
As for IEEE-754 floats, addition is commutative as long as you don't care about exact bit patterns:
NaN + NaN may return different bit patterns. The result is still NaN, so the only way you can tell this apart is by bit-casting floats to ints or byte arrays.
Multiplication over floats is also commutative modulo the caveat above.
First, I will note that your result above depends on the rounding mode.
Second, IEEE 754 mandates that +0 and -0 are equal (i.e. any equality operator should return True when comparing these two). Therefore both expressions are equal.
NaN has several representations in bits, but they are all "equal" to one another.[1] If an operation gives you NaN, then so will doing it commutatively. It doesn't matter whether the underlying bits are the same.
[1] Except for the signaling aspect. But I believe that is preserved in commutative operations.
You're right, producing +/- 0 depends on the rounding mode and it is commutative, I forgot about that completely. I edited my comment to fix that claim.
Also, yes, +0 == -0, but they can produce different results when used in the same expression, so the distinction does matter (although this doesn't affect commutativity, which is the larger point). For example, let f(x) = 1 / x. Then f(+0) = +inf, f(-0) = -inf.
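A quick check of the signed-zero behavior in Python, which uses IEEE-754 doubles. (Note that Python raises ZeroDivisionError on float division by zero, so the sign is observed with `math.copysign` rather than 1/x.)

```python
import math

# The two zeros compare equal but carry different signs,
# observable via copysign.
pz, nz = 0.0, -0.0
print(pz == nz)                                          # True
print(math.copysign(1.0, pz), math.copysign(1.0, nz))    # 1.0 -1.0

# Addition is commutative, including the signed-zero corner case:
# under the default rounding mode, +0 + -0 and -0 + +0 both give +0.
s1, s2 = pz + nz, nz + pz
print(math.copysign(1.0, s1) == math.copysign(1.0, s2))  # True
```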
I also agree with you about NaN, that's why I mentioned having to go outside floating point numbers (bit-casting).
> (I originally claimed "+0 + -0 = +0 vs. -0 + +0 = -0" which is incorrect)
Specifically, (-0.0) + (+0.0) = (+0.0) + (-0.0) = +0.0 (assuming round-to-nearest-or-even). OTOH, (-0.0) + (-0.0) = -0.0. This has nothing to do with +0.0 == -0.0 for comparison, addition just is commutative outright[0].
0: Pedantically, I'm not sure IEEE-754 requires the specific choice of which NaN you get when you do `some_nan + a_different_nan` versus `a_different_nan + some_nan` to be commutative, but it should.
Sure, but the GP alluded to "general purpose programming" in a separate comment, by which I take it they meant something like conventional python/Java code where one uses signed integers and doesn't worry about under/overflow.
Order independence is not a property of general-purpose programming. As much as we'd love for it to go away, order dependence is very much a reality for day-to-day programmers.
Not sure what you mean by it not being a property of general-purpose programming. Do you mean it as, "I don't have to account for it in my day-to-day work"?
If so, I agree with you, but not quite in the way you mean (I think). We don't account for it just as fish don't have to account for water. The fact that you can swap two rows in an excel file and have the results be unchanged is a property that we just take for granted. It needs to be pointed out how remarkable it is, because there are situations where this doesn't work. Debits and credits in finance are not commutative because of overdraft limits. I'm sure you know all this, so I'm curious what you really mean.
There's plenty of commutative operations in everyday programming.
a = f(c)
b = g(d)
can be rearranged if f and g take different input data, and calling independent functions is something seen literally everywhere in programming. Optimizing compilers often take advantage of that.
In the context of linear algebra, it would also be good for folks to know that matrix multiplication (/linear transformation composition) is not commutative.
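A concrete two-by-two illustration (a hypothetical example of mine): composing a quarter-turn rotation with a reflection gives different results depending on the order.

```python
# Matrix multiplication, i.e. composition of linear transformations,
# is generally not commutative.
def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

rot90 = [[0, -1], [1, 0]]       # rotate 90 degrees counterclockwise
flipx = [[1, 0], [0, -1]]       # reflect across the x-axis

print(matmul(rot90, flipx))     # [[0, 1], [1, 0]]
print(matmul(flipx, rot90))     # [[0, -1], [-1, 0]]
```

Geometrically, "reflect then rotate" and "rotate then reflect" are two different reflections, which is exactly what the two products show.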
Isn’t it more often associativity that is required for parallel programming? Being able to parallelize “a+b+c+d” only requires associativity to order the operations as “(a+b) + (c+d)”. Sure, there’s additional benefits for memory locality if you can rearrange the term to “(a+c) + (b+d)”, assuming that the four terms are stored in contiguous memory and each sum is computed with vector operations, but that’s not strictly required for parallelization.
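A small sketch of that point (my own example): a tree reduction only reorders the parenthesization, so it needs associativity but never swaps operands, and therefore works for non-commutative operations too.

```python
# Pairwise tree reduction: combines adjacent elements level by level,
# relying on associativity only - operand order is preserved.
def tree_reduce(op, xs):
    while len(xs) > 1:
        xs = [op(xs[i], xs[i + 1]) if i + 1 < len(xs) else xs[i]
              for i in range(0, len(xs), 2)]
    return xs[0]

# String concatenation is associative but not commutative, yet the
# tree reduction still produces the in-order result.
print(tree_reduce(lambda a, b: a + b, list("abcdefg")))   # abcdefg
```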
If you receive b before a and combine them as b*a, that's commutative. Both properties are useful in parallel programming, but you may only need one in a specific case, depending on your application.
That particular part glossed over the details. Commutativity is a property of some groups and not others. It is a property of the "normal" numbers, so we all learn it in school. But there are lots of groups that aren't commutative.
I remember reading this chapter and being blown away by Feynman's "construction" of the whole of mathematics (as I knew it at the time) in what seems like a barely hour-long lecture. In particular, the successive "expansions" always seemed so logical and consequential.
I'm a little surprised that the professor didn't demonstrate how the e^(ix) = cos(x) + isin(x) identity can be used to quickly and intuitively show a more specific famous equation:
e^(ipi) = -1
You only have to see that cos(pi) = -1 and sin(pi) = 0, giving e^(i pi) = -1 + i*0 = -1.
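The identity is easy to check numerically, e.g. with Python's cmath module:

```python
import cmath
import math

# Numerical check of Euler's formula at x = pi:
# e^(i*pi) = cos(pi) + i*sin(pi) = -1.
z = cmath.exp(1j * math.pi)
print(abs(z - (-1)) < 1e-15)   # True: equal up to floating-point rounding
```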
It doesn't sit with me well to see this being downvoted. I assume downvoters found it dismissive, but I do not read it that way.
I agree, at first it *is* surprising to see a Nobel laureate walk from the most-obvious-count-on-my-fingers-elementary-school-math all the way up to Euler's Formula, only to stop there without taking the very short step, done in the comment above, to land on Euler's Equation. After all, that is how I see it done most often.
The goal here is not to reproduce famous results. That would be the "we could bring forth this formula in two minutes or so, and be done with it" thing that is deliberately called out at the start. Instead, it is explained
> Every so often it is a great pleasure to look back to see what territory has been covered, and what the great map or plan of the whole thing is.
Seeing famous relations reduced to one another is probably enjoyable for you, and judging how many authors do it I think you've got a lot of good company. What is done here is different. It starts with things we all know as children and ends with a relationship between algebra and geometry, covering lots of mathematical apparatuses in between. It is notable that this is done without relying on the formality of landing on "famous results" at each step. That approach, combined with the easygoing language, is what I found most enjoyable about the writing.
Shoutout to Richard Feynman! I didn’t immediately see his name appear on the page; but once I began reading his words, I glanced at the URL and saw that, yes, it is a Feynman lecture! Lucid and clear; slicing through obscure jargon in a fun and playful—but masterful—way; Feynman totally rocks and is worth reading further!
If you want to learn calculus the way Feynman did, seek out "Calculus for the Practical Man"[1], which is the textbook he used to learn. Fair to say it's nowhere near as good as say Stewart or Spivak, which are the ones I have.
Every time I read such lucid explanations of math like this I'm filled with resentment for the math instruction I received in junior high and high school in the U.S. There's a real sense of 'play' that these explanations evoke and that make thinking about numbers and their relationships genuinely fun and interesting. That sense of play was entirely absent from my early math education. It was all "These are the rules. Memorize them for the quiz tomorrow."
I can't help feeling like my math upbringing was akin to a child being raised by parents who speak their own made-up language. Integrating with the rest of the normal-language-speaking world is anxiety inducing and filled with challenges that may never be completely overcome.
The ability to communicate math this way is honestly rare. It comes from a combination of deep understanding, long experience in communicating math, and a certain level of "culturing" that is specific to the academic experience.
Among the best, Feynman was singular in his ability to communicate math and physics.
In other words, don't be so hard on the teachers who were disappointing in comparison to the stellar examples you see from top mathematical communicators. What you're reading is quite rare and, while education quality could certainly improve, it's not fair to expect this of a 5th grade teacher who covers 5 topics in a day. Even for the best, developing this type of material takes time and thought that a school teacher probably does not have.
Disagree. The purpose of a textbook and a lecture is very different. A good textbook can be a helpful resource for teaching and lecturing, but it is not sufficient to guarantee high quality math education. Conversely, a good educator who deeply understands the material can deliver fantastic education without a good textbook. Claiming that profs writing bad textbooks is the cause of poor quality in class math instruction is absurd.
> Claiming that profs writing bad textbooks is the cause of poor quality in class math instruction is absurd.
It certainly doesn't help. The tendency to pile more and more into standards, and then to have haphazard treatment in the textbooks, with problems that don't make sense... isn't great.
Stick a new teacher in the classroom, and they're going to run their book's recommended pacing and content. And even a veteran is probably going to lean on the book a lot in a pinch.
And, your course needs to fit together with 2 other teachers who are too likely to be running the absurd pacing and content in the courses before and after yours. The rushed pace leaves no choice but to devote a huge fraction of the time to procedural knowledge.
The net result doesn't serve anyone: the top students are left unchallenged and without the context and enrichment that could let them really grow. The bottom students are in painful struggle. And the middle are perpetually slightly confused, learning specific tools that they'll immediately forget when the unit completes.
Doing one's homework is an excellent way of exposing and filling gaps in one's understanding. I'm more or less adequate at math, but I'd be better if I had done more of my homework.
Yes, people who follow the rules and do well in "school maths" are very likely to also do well and succeed in "higher mathematics".
Unfortunately, this selection misses (most?) children who may not be well suited to "school maths" - for whatever reason. But these children may succeed just as well in "higher mathematics".
Two anecdotes:
(1) June Huh dropped out of high school and stagnated for 6 years in university. In his 6th year, he ran into Fields Medalist Heisuke Hironaka. It was only then that his "slow thinking" and deep creative insight (perhaps the things that hindered him in "school maths" type courses?) proved to be fruitful in higher mathematics. June Huh now has a Fields Medal.
(2) I was frequently in trouble at school and underachieved relative to my predicted grades. I resented the rote learning and arbitrariness of "school mathematics". Due to some miracle I'm currently working towards a PhD in theoretical physics, in the mathematics department of a top university, and I also spend about 90% of my free time working through various advanced maths textbooks for fun. Turns out I'm quite suited for thinking about higher mathematics, despite not being particularly well disposed for school. If my school experience had been different, I probably would have done a PhD in pure mathematics instead.
> people who follow the rules and do well in "school maths" are very likely to also do well and succeed in "higher mathematics".
I somewhat disagree. There were plenty of students who started to hit higher classes and just didn't have the aptitude for it. They really didn't know it wasn't their thing until junior year of undergrad, despite always being told they were "good at math" as kids.
Junior year in college is when they start doing proofs. This is a crime.
"Back in my day," my school district adopted a math curriculum that introduced sets in first grade, and eased us into proofs. We were not unfamiliar with proofs when we hit high school geometry, which was almost entirely proofs. Also, by doing proofs we could recognize that the manipulations we were doing in the regular problem sets could be seen as mini-proofs, rather than just guessing the right algorithm and grinding through it without knowing why.
When my kids took math, no proofs. Even geometry was all problems and no proofs. Moreover, kids are all aware of the conventional wisdom that "you just need math to get through school, you will never use it after you graduate."
For me, proofs were what made math come alive, and I started college as a math major. Today, despite my theoretical bent, I'm one of the few people at my workplace who is willing to solve practical math problems that don't have a canned solution in a software package.
Agree. Saw that as a math major undergrad: there were some people with more of an engineering bent who just crushed multivariable calc, differential equations, stats, and numerical methods, but then got stuck at abstract algebra and point-set topology proofs and the like, because there was no concrete application or “real world” anchor for the work.
No doubt. But in a ranking of math skills, being especially consistent in studies/homework is maybe number 5. The problem I am pointing out is that math teachers often have only that one.
I never did any homework, and that worked great until undergrad, where the strategy of reading things on my phone all day instead of listening to anything in class stopped working quite so well.
I'm not very good at proofs, but following arbitrary rules provided with no motivation? Decades of experience!
> "Every time I read such lucid explanations of math like this I'm filled with resentment for the math instruction I received in junior high and high school in the U.S. There's a real sense of 'play' that these explanations evoke and that make thinking about numbers and their relationships genuinely fun and interesting."
You could wonder why you feel this way. If there is a sense of play which makes it interesting, and you are reading it and enjoying it today, why isn't that enough? What is the resentment about - you haven't missed out on the interesting math explanation - it's right here, you're reading it and enjoying it.
(This is the work of cognitive behavioural therapy - "I read a math thing which I found interesting, but instead of feeling elevated, happy, awed in the presence of brilliance, grateful that I stumbled upon such a thing when I could have gone my whole life not knowing about it, I instead jumped to feelings of resentment about things which happened many years ago, leaving me in a bad mood. I wonder what in my head made that connection and why?" Ref: a different person I was replying to in a different thread who was claiming that CBT is about colouring-in while handwaving problems away or waiting for acceptance that life sucks to bestow itself upon you).
This isn’t about enjoying the explanation now, it’s about the academic success and open doors one could have had if their teachers had been better communicators. Math doesn’t get studied in a vacuum, it’s the biggest selection criterion for STEM education.
You could be right, that’s one possible explanation, but that comment didn’t read like “I resent my highschool math teacher because I don’t earn enough in a STEM job” to me.
Upset because of all that was missed out on. The realisation of what might have been.
Could I have been a decent mathematician if the opportunity was not missed while I was totally unaware of its existence? Obviously there can be no definitive answer to that question.
And it’s also ok to be a little resentful of having been tortured for many hours with something that clearly could have been a lot of fun.
> "Could I have been a decent mathematician if the opportunity was not missed while I was totally unaware of its existence? Obviously there can be no definitive answer to that question."
But there can be a definitive (personal, subjective) answer to the question "why am I torturing myself with resentment over this hypothetical world which never existed? Why does 'being a decent mathematician' have such a hold over me whereas 'being a decent sculptor' or 'being a decent botanist' is emotionally neutral or uninteresting by comparison?"
It's quite possible that studying apple tree cultivation and propagation and plant genetics and growth factors could be the most interesting thing you've never been exposed to a good teacher on, and that you could have had a fulfilling and satisfying career doing that, if only, if only.
> "And it’s also ok to be a little resentful of having been tortured for many hours with something that clearly could have been a lot of fun."
Are you equally resentful of being "tortured" with all the other subjects you don't care about and weren't interested in? Does listening to good music make you resentful of your highschool music teacher? Does listening to people speak Spanish make you resentful of your highschool Spanish teacher? Anything "could have been a lot of fun" with the right people, right? Bad days at work can be a lot of fun with a good team and good management but you don't live in resentment every time you go into a shop where the cashier seems happy, going "Imagine how much better my life could have been, I'm full of resentment of the bitchy store manager I worked for at age 18, woe, woe" - not at all.
With hindsight, there was nothing stopping anyone studying math independently in highschool, forming a study group of friends, trying to make it fun, asking other teachers or students, trying to get some money together to pool for a tutor; pinning the next twenty years of resentment on MRS JONES WHO DIDN'T MAKE MATH FUN AND RUINED MY LIFE is a mental behaviour pattern that deserves debugging - or at least noticing - not defending.
You and plonk nailed it. I was being a bit hyperbolic in my original comment; regret is a perfect waste of time. But, as you say, I can't help but wonder how different my life would be had I discovered the joy that is mathematics when I was young.
Similarly, I squeaked out of high school before it was required to learn a second language. At the time I was high-five-ing myself, thinking I'd dodged a bullet. Now, as an adult, I can only begin to imagine how much of the world is closed off to me.
I feel the same way. My college years math was a clueless grind (to me) and I don't think I made the connection to the real world until long after school was behind me. There's a very good site called BetterExplained, where I go to when I need to review some concept. I wish I had that when I was in school.
I think it’s very common to have uninspiring math teachers, but it’s not uniformly the case across the United States. That site, for example, is at Caltech. The math teachers I had in suburban Chicago in public schools were superlative.
I think this paragraph in particular should be explained to all introductory students in any STEM-related field. It sets the stage for a basic yet profound understanding of what mathematics is all about:
> "To discuss this subject we start in the middle. We suppose that we already know what integers are, what zero is, and what it means to increase a number by one unit. You may say, “That is not in the middle!” But it is the middle from a mathematical standpoint, because we could go even further back and describe the theory of sets in order to derive some of these properties of integers. But we are not going in that direction, the direction of mathematical philosophy and mathematical logic, but rather in the other direction, from the assumption that we know what integers are and we know how to count."
Beyond that, an understanding of algebraic concepts in terms of how equations can be manipulated from one form to another (and of the rationale for why one can) might benefit from the addition of the concepts of distribution and commutation, which are not included in this lecture (although the important basic idea of a successor is).
This was an issue with my high school math classes too, but the other thing that made for something of a stumbling block for me was the total absence of and refusal to produce practical examples of the math being taught in action.
The way my mind works, in order to grasp a concept well enough to be able to actively use it, I need to see it in action by way of a non-contrived, realistic example. The way any math past basic arithmetic tends to be taught in abstract dramatically slows acquisition.
It’s a quality that cuts both ways. It doesn’t work well with a lot of traditional academics, but it enables me to self-teach highly applied subjects like programming with little friction.
I used to think this was just restricted to pre-college math, and to East-Asian parents. After I had erstwhile-parent types infantilize me over my adult hobbies, I realized, you need a creative person in order to facilitate creative discovery. But it takes guts to try to learn something the "right" way. Imagine going through a whole semester, never once learning what was going to be on the AP test. And it takes some stature to teach it too. Imagine doing a writing class, where there absolutely must not be any rules or criteria for grading, where you really must say, I'll know it when I see it, but there really are going to be some yeses and nos.
What do you want to produce? People who can come up with new solutions to new problems on their own, or people who do cookie-cutter rules? How do you keep doing that throughout your life, long after school? People say they want the former until they come up against the reality of uncertainty, the possibility that there might not actually be any answers, and then they go right back to memorizing and teaching rules. They go back to justifying things the way they are.
> Imagine going through a whole semester, never once learning what was going to be on the AP test.
This is how I learn mathematics – mostly because I'm empirically incapable of doing otherwise. I will achieve a lower grade than I “could”, and miss out on many opportunities as a result.
Really smart people, and sometimes even just moderately smart people, coast along as A, B, or C students while learning what they want and just doing the minimum of the requirements to coast along. And then a bunch of other people have academic-like side-interests that they pursue in their free time.
Push the language analogy further. In a sense, a child really is raised by parents speaking a made-up language. It just happens to be a locally shared made-up language.
Like math, you brute-force language learning: imitation and memorization. Only after years, or decades, can you go from basic language understanding to appreciating the beauty of words, poetry, literature. I don’t think there is a shortcut in language or math. You have to go through the unfun multiplication-table, spelling-test phase to build the foundation for higher-level appreciation.
[1] https://maa.org/press/maa-reviews/calculus-4