Oh lovely, can't wait to give this a watch. Freya dives deep, and gives wonderful talks. Her video "The Continuity of Splines"[0] is my favorite watch of the past year.
I guess I'm unclear on what physical concept she's getting at by multiplying vectors. Depending on which concept you're trying to calculate with your multiplication, you have dot products (like work) and cross products (like torque).
Defining "multiplication" to be "any function that takes two arguments and outputs one" isn't a very interesting definition of multiplication. The word is a lot more useful if you put constraints on it like saying for something to be a multiplicative operator it has to respect (a + b) * c = a * c + b * c (the distributive property).
Once you put a "reasonable" set of constraints on it... you discover that you can't actually multiply vectors (no function exists that satisfies the properties you want). Though the talk isn't about proving that (or justifying the set of constraints that mean you can't multiply vectors) and instead goes off in another direction of extending your vector space to a bigger space (like how the complex numbers are a bigger space than the reals) where you can define a reasonable multiplication operator.
Yes, that was an example of one property you probably want, not a set sufficient to make it such that no such operator exists.
Another property you want (and the talk uses) is that the operator is from V x V to something. I.e. we are multiplying two vectors (because that's what we asked for in the title), not a scalar and a vector. That excludes your counterexample, but still isn't nearly enough to make it so that no multiplication operator exists.
I'll be honest and say I'm not listing properties here because I don't remember what properties are needed to make it so you can't define the operator... hopefully someone who has studied this a bit more recently or thoroughly than me can chime in.
You need slightly more than being a ring. It's possible to make a ring (even a field) over R^n provided you don't care about interactions with scalars. For example: Just take any one-to-one map from R^n to R and then apply the operations in R before mapping the results back. It won't make any geometric sense, but it will be a ring.
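For concreteness, here's a toy Python sketch of that construction for R^2 (the names and the fixed digit precision are my own choices, and floating point makes this only an approximation of a true bijection):

    # Transport multiplication from R to R^2 via a digit-interleaving map.
    # Toy only: fixed precision, and float digit extraction is approximate.
    PREC = 8  # decimal digits kept per component

    def digits(x, n):
        # first n decimal digits of x in [0, 1)
        return [int(x * 10 ** (i + 1)) % 10 for i in range(n)]

    def from_digits(ds):
        return sum(d / 10 ** (i + 1) for i, d in enumerate(ds))

    def pack(v):
        # one-to-one-ish map [0,1)^2 -> [0,1): interleave decimal digits
        dx, dy = digits(v[0], PREC), digits(v[1], PREC)
        return from_digits([d for pair in zip(dx, dy) for d in pair])

    def unpack(t):
        ds = digits(t, 2 * PREC)
        return from_digits(ds[0::2]), from_digits(ds[1::2])

    def weird_mul(u, v):
        # a ring-style product on [0,1)^2 with no geometric meaning
        return unpack(pack(u) * pack(v))

    print(weird_mul((0.5, 0.25), (0.5, 0.5)))  # ~(0.267, 0.825): "valid", but geometric nonsense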
I've been blanking on what exactly the interactions with scalars that we need to preserve are...
If I remember correctly, multiplication requires a "zero" and an "identity," and for something to be a field each nonzero element needs an inverse. I imagine we can define multiplication on R^2 just as we do for C. So by that logic we ought to be able to define such an operation on any R^(power of 2).
The cross product is a peculiar animal, as it only exists in 3-dimensional space. There is no unique "cross product" in 4 dimensions or higher. (In two dimensions we cheat and define the result of the cross product as the scalar magnitude of what would be the component in the third dimension, if it existed.) Furthermore, it turns out that the result of taking the cross product of two vectors is itself not a vector.
In physics we interpret a vector not simply as "an ordered list of numbers," but as a geometric quantity that responds to coordinate system changes in the expected way. Suppose that we have two vectors, a and b, and we take their cross product. Now suppose we choose to work in the "mirror image" coordinate system. Our choice of coordinate system should not affect physical outcomes. But while "a" and "b" are inverted in our mirror image coordinate system, the cross product "a x b" does NOT invert.
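A quick numpy check of this (my own toy example, not from the talk):

    import numpy as np

    a = np.array([1.0, 2.0, 3.0])
    b = np.array([4.0, 5.0, 6.0])

    P = -np.eye(3)  # parity inversion: mirror every axis

    # Cross product of the mirrored vectors...
    print(np.cross(P @ a, P @ b))  # [-3.  6. -3.] -- identical to cross(a, b)
    # ...versus how a "true" vector would transform:
    print(P @ np.cross(a, b))      # [ 3. -6.  3.]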
Introductory physics textbooks proceed to tell us that "well actually," the result of a cross product (such as an angular velocity vector or the Poynting vector of electromagnetism) is actually a "pseudovector."
Other formalisms treat the cross product in a more hygienic and general manner.
This is all to say that familiar mechanisms like the "dot product" and "cross product" are not necessarily as "natural" as you may have been led to believe.
> the "dot product" and "cross product" are not necessarily as "natural" as you may have been led to believe
The cross product, sure: its problem is that it dualizes unnecessarily, making you deal with a normal vector when you almost always just want the plane.
What I was getting at is that "standard vector analysis" is a choice, and it turns out that there are alternatives where things are defined differently.
A bivector is just as good for the purpose. Think of it as a generalization of a directional arrow (vector) into an oriented area (bivector). The OP talk shows good visualizations if you haven't watched yet.
If you define the cross product through minors, then you can extend it to any dimension, just with a different number of arguments: n-1 of them in R^n. In R^2 it takes one argument: [{x,y}] = {y,-x}, for example; in R^4 it takes three, etc. It has the same properties as in R^3 with respect to linearity and anticommutativity.
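A sketch of that definition in numpy (the function name is mine); the i-th component is the signed minor with column i deleted:

    import numpy as np

    def generalized_cross(*vectors):
        vs = np.array(vectors, dtype=float)  # (n-1) x n matrix of inputs
        n = vs.shape[1]
        assert vs.shape[0] == n - 1
        return np.array([(-1) ** i * np.linalg.det(np.delete(vs, i, axis=1))
                         for i in range(n)])

    print(generalized_cross([1, 0, 0], [0, 1, 0]))  # [0. 0. 1.] -- matches R^3
    print(generalized_cross([1, 2]))                # [ 2. -1.] -- the {y,-x} case in R^2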
So you can of course multiply some vectors- the real numbers are a vectorspace, and they have a perfectly nice product. Even R^2 can have a nice multiplication- (a,b)(c,d) = (ac - bd, bc + ad), the product of the complex numbers. However, it's much trickier to define a multiplication on an arbitrary vectorspace V such that
1) For any vectors u,v in V, the product u*v is in V. (this rules out the dot product as a general product)
2) For u,v in V, u*v = v*u
3) For u,v,w in V, v*(u+w) = (v*u) + (v*w)
4) For u,v in V and s in F (the field V is a vectorspace over), s(u*v) = (su)*v = u*(sv)
Under these restrictions you can still cook up products, but fewer and sadder ones. In general you will not have a multiplicative inverse, for instance.
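If it helps, here's a quick numerical check (my own sketch) that the complex-style product on R^2 satisfies 2)-4); 1) holds by construction, since the output is again a pair:

    import random

    def mul(u, v):  # (a,b)(c,d) = (ac - bd, bc + ad)
        (a, b), (c, d) = u, v
        return (a * c - b * d, b * c + a * d)

    def add(u, v):
        return (u[0] + v[0], u[1] + v[1])

    def scale(s, u):
        return (s * u[0], s * u[1])

    u, v, w = [(random.random(), random.random()) for _ in range(3)]
    s = random.random()
    close = lambda p, q: all(abs(x - y) < 1e-12 for x, y in zip(p, q))

    print(close(mul(u, v), mul(v, u)))                          # 2) commutativity
    print(close(mul(v, add(u, w)), add(mul(v, u), mul(v, w))))  # 3) distributivity
    print(close(scale(s, mul(u, v)), mul(scale(s, u), v)))      # 4) scalar compatibility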
If so, I wonder what happens if you drop rule 2 and insist that for most vectors u there exists a v such that u*v = 1. As icing, let us also say there is a 0 vector. That could be something.
This is known as the Hadamard product and is covered in the video. The tl;dr is that, while it certainly has uses, it doesn't represent multiplication of vector spaces in any reasonable way (in particular, it gives different results when there's a change of basis, while other notions of "product", including dot product, are invariant).
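A small numpy demonstration of that basis dependence (my own example, not from the video):

    import numpy as np

    t = 0.3  # rotate the basis by an arbitrary angle
    R = np.array([[np.cos(t), -np.sin(t)],
                  [np.sin(t),  np.cos(t)]])

    u = np.array([1.0, 2.0])
    v = np.array([3.0, 4.0])

    # Dot product: the same number in either basis.
    print(u @ v, (R @ u) @ (R @ v))   # 11.0 11.0 (up to float error)

    # Hadamard product: multiplying the rotated vectors is NOT the same as
    # rotating the product -- the result depends on the basis.
    print(R @ (u * v))        # [0.502... 8.529...]
    print((R @ u) * (R @ v))  # [0.613... 10.386...]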
If you allow regular real numbers to have a "basis" as shown by your example, you can't simply multiply them either. You treat them as one-dimensional vectors and you are back to dot products, cross products, etc.
With regular multiplication, the numbers are the "real" thing you are interested in.
In a vector space, the fundamental object is the vector. Writing it as a sequence of numbers with an assumed basis is a notational convenience.
If you want to define a function for vectors, then you need that function to give the same result regardless of the basis you use to represent the vectors. When you hear mathematicians talk about proving that a function is "well defined," this is what they are talking about.
Of course, you might be working with some specific structure where component-wise multiplication is meaningful and well defined. That structure might happen to also be a vector space, and the notation used to write it might happen to coincide with vector notation under the "obvious" choice of basis.
Other specific vector spaces have their own quirky notion of multiplication. For instance polynomials can be viewed as a vector space, with an obvious basis of { x^n }, but polynomial multiplication looks very different from component wise multiplication.
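Concretely, in the { x^n } basis the product of two coefficient vectors is their convolution, e.g.:

    import numpy as np

    p = np.array([1, 2])     # 1 + 2x
    q = np.array([3, 0, 1])  # 3 + x^2

    # (1 + 2x)(3 + x^2) = 3 + 6x + x^2 + 2x^3
    print(np.convolve(p, q))  # [3 6 1 2]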
You need the function to give a result that has the same properties, for some specified set of properties, regardless of the basis. GP's question is perfectly reasonable if you haven't already accepted that this set of properties is the only set of properties that matters. The thesis could be better stated "if we define multiplication to have properties X, you can't multiply vectors" which is much less divisive.
I think GP's question (and mine) is why those properties, and no others, are important.
Perhaps a dumb question, but why do we store embeddings as "vectors" and not "points"? I thought the difference was that vectors have magnitude, but an embedding doesn't have a magnitude - they are just points in an n-dimensional space?
In computing we often like to conflate "ordered tuples of numbers" with "vectors." This vocabulary is even cooked into the C++ standard library.
The difference is that mathematical vectors support some additional operations, such as addition and scalar multiplication.
A vector in C++ is not a mathematical vector, since we can't add two vectors x+y, nor can we perform scalar multiplication a*v.
For mathematical vectors we have this interpretation: An abstract vector is constructed by multiplying the numbers in the ordered tuple each by a corresponding abstract "basis vector" and adding up the results. The numbers are just the "coordinates" of a vector with respect to a particular basis.
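A small numpy illustration of this (the basis vectors are chosen arbitrarily by me):

    import numpy as np

    f0, f1 = np.array([1.0, 1.0]), np.array([1.0, -1.0])  # a non-standard basis

    v = np.array([3.0, 1.0])  # the vector, written in the standard basis

    # Coordinates of the same abstract vector in the basis {f0, f1}:
    c = np.linalg.solve(np.column_stack([f0, f1]), v)
    print(c)                      # [2. 1.] -- a different tuple of numbers
    print(c[0] * f0 + c[1] * f1)  # [3. 1.] -- but the same vector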
It may or may not make sense to talk about "basis vectors" in your application.
Does it make sense to perform "coordinate transformations" on your objects?
Another test is, "Do you want to use linear algebra?" If so, your objects are probably vectors.
A similar but more egregious argument comes about with regard to tensors. Mathematically a tensor is a kind of function that takes vectors and co-vectors as arguments. A matrix, when coupled with the rules for multiplying matrices by row and column vectors, is a tensor.
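In code, that just means the matrix gets evaluated on its two arguments (a toy numpy example of mine):

    import numpy as np

    M = np.array([[1.0, 2.0],
                  [3.0, 4.0]])
    u = np.array([1.0, 0.0])  # covector (row) argument
    v = np.array([0.0, 1.0])  # vector (column) argument

    print(u @ M @ v)  # 2.0 -- the tensor evaluated on (u, v)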
But an arbitrary n-dimensional array of numbers is not (necessarily) a tensor in the mathematical sense. Unfortunately that term was co-opted by the ML crowd because it sounds cool. :-)
Getting back to what you mentioned about vectors having magnitude - in an abstract vector space, there is no definition of magnitude. It's not until you define an inner product that magnitude becomes defined. In this sense the grade-school definition of a vector as "a quantity with a magnitude and direction" does not necessarily comport with the standard definition of a mathematical vector space.
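For concreteness: once you pick an inner product <.,.>, magnitude falls out as ||v|| = sqrt(<v, v>); with the standard dot product on R^n this is the familiar Euclidean length.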
We like to say that "a tensor is an object that transforms like a tensor" and the same is true for vectors. "A vector is an object that transforms like a vector" under coordinate transformations, while also supporting addition and scalar multiplication.
To address your question more directly: typically "points" (insofar as they are relative to a coordinate system) really are "vectors". But general tuples of numbers are not necessarily.
There is a subtle difference between lay names and technical math names. Lay names keep nouns and verbs separate: a number may be multiplied. In programming terms, they are data and functions. But in math terms this combination is itself a noun, e.g. a group. Many lay terms appear to be only data but actually carry an implied mathematical operation.
There are lots of perspectives on vectors and tobinfricke gives one. Let me give another.
Given a coordinate system, a vector can be represented by a tuple of numbers, just like a point can be represented by a tuple of numbers. The point p, and its position vector i.e. the vector from the origin to p will have the same tuple of numbers. The magnitude of the vector corresponds to the distance of p to the origin. So points or vectors, well it's just a choice of words without a material difference.
If you use the word vectors then you do kind of sort of imply that you could do the vector operations, scalar-multiplication, and vector addition, and getting something semi-useful out. This is indeed sometimes done with embeddings. But most of the time it's in the form of an affine combination (weighted sum where weights sum to 1) which is something you can do for points too.
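For example (with made-up embeddings), an affine combination looks like:

    import numpy as np

    e1 = np.array([0.2, 0.9, -0.1])
    e2 = np.array([0.8, 0.1, 0.4])

    w = 0.25
    blend = w * e1 + (1 - w) * e2  # weights sum to 1, so this makes sense for points too
    print(blend)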
The concept that we call a "point" gives us an (n-dimensional) "affine space". Affine spaces don't require any sort of coordinate system, axes, origin, etc. which makes them quite general. For example, consider the sleepers of a railway track, or the crossing-points of a chain-link fence, or the hands on a clock face, or the electrical potential at various positions around a circuit, or a date, or the temperatures of various objects, etc.
It makes no sense to "add" or "multiply" points; but we can find the difference between two points. The result will actually be a vector (in the examples above: a distance and direction along the track; a distance and direction along the 2D plane of the fence; an angle; a voltage; a duration; and a temperature difference). We can add such vectors to our points; if we add the vector (pointA - pointB) to pointB, we get pointA! This relationship between points and vectors leads to the concept of a "torsor" https://math.ucr.edu/home/baez/torsors.html
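A sketch of that relationship with two toy Python types (my own names): point minus point is a vector, point plus vector is a point, and point plus point simply isn't defined:

    from dataclasses import dataclass

    @dataclass
    class Vector:
        dx: float
        dy: float

    @dataclass
    class Point:
        x: float
        y: float
        def __sub__(self, other: "Point") -> Vector:
            return Vector(self.x - other.x, self.y - other.y)
        def __add__(self, v: Vector) -> "Point":
            return Point(self.x + v.dx, self.y + v.dy)

    a, b = Point(1, 2), Point(4, 6)
    print(b + (a - b))  # Point(x=1, y=2) -- adding (pointA - pointB) to pointB gives pointA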
Vectors live in an (n-dimensional) "vector space", which requires more concepts than an affine space; e.g. a notion of "zero", and (once we pick an inner product) notions of "size" and "direction". This is less general, but lets us do operations like adding and scaling vectors, as well as the various notions of multiplication defined here.
Some vector spaces arise naturally, e.g. taking the angle between clock hands gives us a natural zero (the difference between identical positions) and a natural size (a full turn), although whether positive/negative indicates clockwise/anticlockwise is still arbitrary. Other times we will "impose" some arbitrary coordinate system on an affine space, since vector operations are so useful; often ignoring the space's affine nature entirely! That way we can treat "point" interchangeably with "vector from the origin"; even though most of the fancy things we want to do are only defined for the latter (e.g. taking dot-products, comparing cosine similarity, etc.)
For example, here are some arbitrary coordinates that we impose on affine spaces every day:
The definitions of quaternions and dual quaternions look particularly neat. Are there other (non-"geometric algebra") ways to define (dual) quaternions without reference to a basis?
Edit: I guess the "scalar and vector parts" definition at Wikipedia[1] doesn't use a basis, but it's not exactly pretty...
Highly recommend Freya's deep dives into various game development topics. She also has Twitch VODs of developing those videos, which are also fascinating.
Somehow, in your links, there isn't a single explanation anywhere (that's not in the form of a video that I don't have time or inclination to watch) about what "geometric algebra" is supposed to mean. I'm trying to gather what this is supposed to be from the catchphrases on the front page, and honestly, this just looks like linear algebra.
Can you explain what "geometric algebra" is supposed to be? Or just link to written explanations? I'm a mathematician (algebraic topology), so don't be afraid to get technical. How is this different from linear algebra? The only things I can see are some tensor products and exterior products, and a few couple of division algebra structures. Can you enlighten me? Is there any actually new math behind all this?
My take is a bit different. May I quote but edit myself:
(on: Visualizing quaternions (2018))
Once you accept that an imaginary number is not a straight real number but a number about rotation, you have two numbers that are not the same, living in different dimensions: one about a straight line and one about rotation, i.e.
x0 ops x1 (another number system), where x0, x1 are real.
You then have the complex numbers a + bi, where i^2 = -1,
and you can have the split-complex numbers a + bj, where j^2 = 1, etc.
The next move is not only to the usual quaternions (which one can do), but to Clifford algebras:
x0 ops x1 n1 ops x2 n2 ops …
where the xi are all real, but
each ni is a different number system, representing things not on the real line but in a different, higher dimension.
For example, Clifford geometry defines the second unit not as i or j but as a plane, so you have
x0 + x1<plane> … and so on
with ever higher-dimensional objects (especially if one may say x0 is x0<line>).
Like complex numbers, it simplifies a lot of things. Spinors, say, would be easier to define this way.
After seeing the video I was a bit surprised by two of her comments, coming after the most understandable introduction to this kind of geometry. Shocking last 5 minutes.
The first is that it is not that useful, just a different way of grouping. I am aware of this take from both physics and computer people. But it is a totally different way of looking at things and should be taught in high school, or at least undergrad. A bad cycle here. It has a lot of use for helping normal people understand physics, whilst the tensor …
Also, her struggle with thinking about shifting to another game engine: Unreal … too complex, and Godot … too engineering … her remark basically said she may have to go back to game programming (to make a living).
And they made dozens of hour long videos about that? That's insane. And the arrogance of calling it "geometric algebra" like they invented a new field of math?!
Perhaps, but it's not new math (Clifford algebras are almost 200 years old). Shrouding it in a veil of buzzwords and pizzaz makes you look more like con artists than scientists. A sentence like this one is just absurd:
> Clifford's Geometric Algebra enables a unified, intuitive and fresh perspective on vector spaces, giving elements of arbitrary dimensionality a natural home.
I never did quite 'get' geometric algebra. I mean the exterior algebra gives a couple of clear generalisations, and for the ones that require a metric you can typically use the Hodge star to find a generalisation.
Geometric algebra then blends all of this together, and I'm still not entirely convinced that this improves things. Is it actually ever handy to have to deal with mixed degree multivectors?
Even without using mixed-degree multivectors, the fact that the existence and the properties of all the different kinds of multivectors results from a small set of simple axioms is very satisfying in itself.
Before all these concepts were unified by geometric algebra, the system of physical quantities as taught in most places was a huge mess of many different kinds of quantities, scalars, polar vectors, axial vectors, pseudoscalars, tensors, pseudotensors, complex numbers, quaternions, spinors and so on.
It was not at all obvious why there are so many kinds of quantities, what the relationships between them are, whether there are any other kinds of quantities besides those already studied, etc.
Geometric algebra has brought order in this chaos and it has enabled a much deeper and more complete understanding of physics, by reducing a long list of seemingly arbitrary rules to a much smaller set of axioms, and by deriving all the many kinds of physical quantities from the vectors in the strict sense, i.e. from the translations of the space, which are themselves derived from the points of the affine space (as equivalence classes of point pairs).
(Both logically and historically, in physics the vectors are more fundamental than the scalars. The scalars are obtained by dividing collinear vectors, i.e. they are equivalence classes of pairs of collinear vectors. This division operation is a.k.a. measurement and what are now named as "real numbers" were named as "measures" in the past, for more than two millennia. For any Archimedean group it is possible to define a division operation using the Axiom of Archimedes, generating a set of scalars over which the original group is a vector space).
Yeah but like all of that is true about the exterior algebra as well. Except exterior algebra is a bit more explicit about where you use the metric, which is really convenient when the metric starts to become important.
Clifford (i.e. "geometric") algebras are of some mathematical interest, but for physics the only real value I've seen from them is in offering a fairly nice presentation of spin groups, which is ultimately where the Dirac matrices come from. The geometric algebra advocacy, so far as I can tell, comes entirely from engineers who were never given a proper account of tensors in the first place: it's certainly an improvement over whatever godawful basis-dependent stuff gets taught there.
I'd be interested to know your opinion on space-time algebras. They seem like they would provide a nice way of unifying spatial rotations and Lorentz boosts, but that may be blind optimism as your last sentence seems to have been written to describe me specifically...
Spacetime algebra is just the Clifford algebra Cl_{1,3}(R), so all of the above applies.
> They seem like they would provide a nice way of unifying spatial rotations and Lorentz boosts
Yes and no: the right way to unify rotations and boosts is to consider them as the orientation-preserving elements of the Lorentz group (sometimes called the proper Lorentz group). You can construct this from the corresponding Clifford algebra, but it's somewhat technical and not physically well-motivated until you start dealing with spinors. It's also the group of symmetries of spacetime that leave the origin unchanged, which is far, far more natural.
I'm not versed at all in the Hodge star operator, but it feels like Maxwell's equations expressed in both systems[1] might locate the answer. With the Hodge star:
dF = 0
d * F = J
Expressed in GA:
∇F = J
The eyeball difference between them (dF = 0) would be your "how GA then blends all of this together," if I understand correctly. I'd guesstimate that dF = 0 is akin to Gauss' law, and that maybe GA somewhat incorporates the fact that the curl of a gradient is the zero field.
These are not quite equivalent, since using the Hodge dual explicitly incorporates the geometry of space in a way left implicit by your geometric algebra formulation. The appropriate exterior algebra analogue here is simply `F = dA`, from which dF = 0 follows automatically since d² = 0. F is the electromagnetic field tensor, d is the exterior derivative, A is the 4-potential.
Thank you so much for the links! I just happen to be endeavoring to seriously learn this stuff at the moment, driven by interests in physics, differential geometry, and gamedev. I'm seriously jazzed to check these out, thank you.
No it's not. Plenty of people don't like math and it has nothing to do with how the universe works. The way that math is taught to most people is absolutely atrocious, involving rote memorization, following rules for the sake of following rules, very little intuition involved.
If anything, most people who express a hatred of math do so because it's taught to them in a way that completely divorces it from the universe or anything whatsoever.
Most people I know who appreciate math did not learn it at school, but learned it either at home from their parents or independently. I have also made it my own responsibility to make sure my daughter learns math from me, applies math to everyday situations, and can develop a basic mathematical intuition.
> The way that math is taught to most people is absolutely atrocious, involving rote memorization, following rules for the sake of following rules, very little intuition involved.
The older I get the more I realize the importance of memorization if you want to _actually_ apply maths to solve problems, as opposed to learning for the pleasure of it. When I was a kid I was all about the ideas.
So there is definitely a subtlety here, which is not that memorization is bad in and of itself, but that learning things by memorizing them is a very short term strategy for actual understanding.
For example I use math on an almost daily basis working as a quant, and so yes I happen to have memorized a great deal of math. But that memorization did not happen by sitting and explicitly memorizing things, memorizing formulas, memorizing rules or procedures or theorems. The memorization came over time and through repeated usage naturally.
But the way math is taught at school, students are pushed into a corner where, if they want to do well on the test, the quiz, the exam, the path of least resistance is to memorize a narrow set of specific "material" that will be tested, hyper-fixating on that material.
Which is easy to say if it's something you use a lot. But many things are just stepping stones to, or the fundamentals of, the part you do use a lot.
I never gave it much thought in school, but I'm glad we did things like memorize multiplication tables. Being able to instantly know all multiplications of numbers up to 12 makes reasoning about numbers much faster.
That's probably true for the vast majority of things, isn't it?
I also hate ballet, but I suspect that if I made an effort I would learn to appreciate it. However, I don't make it a point to go around loudly telling everyone that I hate ballet as if that was a badge of honor.
Personally, I found this comment to be helpful. I don't appreciate it when speakers "get cute" with what I regard as misleading titles like this one has.
On one hand, this comment saved me from watching this 50 minute video where I'd also learn that you can multiply vectors. So very efficient usage of time. On the other hand, I'll probably watch the video anyway because I expect it to be more interesting and informative, and go into a lot more depth. So I'm not really sure this comment helped me at all.
[0] https://www.youtube.com/watch?v=jvPPXbo87ds