One neat trick I’ve learned is that you can use the points on a Fibonacci sphere to optimally compress unit vectors, for things like normal textures. For example, if you have an array of 1024 points representing a Fibonacci sphere, you can compress unit vectors into lg(1024)=10 bits with a nearest neighbor search and decompress with an O(1) table lookup.
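A minimal sketch of that round trip in Python (the `fibonacci_sphere`, `compress`, and `decompress` names are mine; the point construction is the usual golden-angle one):

```python
import numpy as np

def fibonacci_sphere(n=1024):
    """Codebook of n near-uniform unit vectors via the golden angle."""
    i = np.arange(n)
    golden = (1 + 5 ** 0.5) / 2
    theta = 2 * np.pi * i / golden         # longitude steps by the golden angle
    z = 1 - (2 * i + 1) / n                # equal-area latitude bands
    r = np.sqrt(1 - z * z)
    return np.stack([r * np.cos(theta), r * np.sin(theta), z], axis=1)

TABLE = fibonacci_sphere(1024)             # 10-bit codebook

def compress(v):
    """Nearest-neighbor search: unit vector -> 10-bit index."""
    return int(np.argmax(TABLE @ v))       # max dot product = min angle

def decompress(idx):
    """O(1) table lookup: index -> unit vector."""
    return TABLE[idx]
```

The search here is brute force; a k-d tree or a latitude-band trick can make the encode side fast too, but the decode stays a single indexed read.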
In fact, the general strategy works in higher dimensions as well. Spread some points on the hypersurface of a unit 3-sphere with some kind of energy minimization simulation, and the resulting array of 4D unit vectors can be used to compress quaternions!
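A toy version of such a simulation, assuming simple inverse-square repulsion with renormalization back onto the sphere each step (all names and parameters here are mine, not from any particular library):

```python
import numpy as np

def repulsion_codebook(n=128, steps=300, lr=0.02, seed=0):
    """Sketch: spread n points on the unit 3-sphere (S^3 in R^4) by
    simulated electrostatic repulsion, projecting back each step."""
    rng = np.random.default_rng(seed)
    p = rng.normal(size=(n, 4))
    p /= np.linalg.norm(p, axis=1, keepdims=True)
    for _ in range(steps):
        diff = p[:, None, :] - p[None, :, :]           # pairwise differences
        d2 = np.sum(diff * diff, axis=-1) + np.eye(n)  # +eye avoids /0 on diagonal
        force = np.sum(diff / d2[..., None] ** 1.5, axis=1)  # inverse-square push
        p += lr * force
        p /= np.linalg.norm(p, axis=1, keepdims=True)  # project back to S^3
    return p
```

The resulting rows serve as a quaternion codebook; note that for rotations you'd also want to treat q and -q as equivalent when doing the nearest-neighbor search.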
I used a similar method of quantizing normal vectors, but I settled for 240 of them, so one byte per normal. IIRC it was based on a dodecahedron: split the faces into triangles, then split each of those into 4, and use the face normals. I don't recall it being as complicated as it sounds, but it does work rather nicely. I could precompute view-dependent shading for each normal once per frame and then use the normal index as a color index instead. CPU implementation.
I would expect the energy minimization approach to perform better than anything else, for vectors distributed uniformly on the surface of the N-sphere. Why go with the Fibonacci sphere instead?
The Fibonacci sphere arrangement has the advantage that the point coordinates can be computed from a function, so you don't even need a lookup table to decode:
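For instance, a minimal sketch in Python (the `decode` name and `GOLDEN` constant are mine):

```python
from math import pi, sqrt, cos, sin

GOLDEN = (1 + sqrt(5)) / 2  # golden ratio

def decode(i, n=1024):
    """Closed-form Fibonacci-sphere point: index -> unit vector, no table."""
    theta = 2 * pi * i / GOLDEN    # longitude advances by the golden angle
    z = 1 - (2 * i + 1) / n        # equal-area latitude band
    r = sqrt(1 - z * z)
    return (r * cos(theta), r * sin(theta), z)
```

So the decoder is pure arithmetic on the index, which matters if you'd rather not spend cache on a 1024-entry table.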
One situation where the Fibonacci method is preferred is when you want a direct construction method, codable in a few lines, rather than an indirect iterative one.
The second is that because an energy minimization method is minimizing the sum of forces, it more closely minimizes the average distance between points, rather than the absolute minimum distance, which is what packing distance focuses on.
As I describe in the article, different methods produce similar but slightly different solutions. An optimal solution for one objective function may not be optimal for a different objective function. I then give details about how the solution that optimizes the volume of the convex hull differs from the solution that optimizes for packing distance, etc.
If you just want N in a certain range, you can use a triangle-faced polyhedron and successively quadruple or triple the number of faces, then use the face normals as points. This gives visually appealing distributions without any real oddities.
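As a rough sketch of that idea, starting from an octahedron for brevity (a dodecahedron pre-split into triangles works the same way) and using normalized face centroids, which for sphere-inscribed triangles point along the outward face normals (all function names are mine):

```python
import numpy as np

def octahedron_faces():
    """8 triangles of a unit octahedron, the simplest triangle-faced seed."""
    e = np.eye(3)
    return [(sx * e[0], sy * e[1], sz * e[2])
            for sx in (1, -1) for sy in (1, -1) for sz in (1, -1)]

def subdivide(faces):
    """Split each spherical triangle into 4 via normalized edge midpoints."""
    out = []
    for a, b, c in faces:
        ab = (a + b) / np.linalg.norm(a + b)
        bc = (b + c) / np.linalg.norm(b + c)
        ca = (c + a) / np.linalg.norm(c + a)
        out += [(a, ab, ca), (ab, b, bc), (ca, bc, c), (ab, bc, ca)]
    return out

def face_normals(faces):
    """Normalized centroids, pointing along each face's outward normal."""
    cent = np.array([(a + b + c) / 3 for a, b, c in faces])
    return cent / np.linalg.norm(cent, axis=1, keepdims=True)

faces = octahedron_faces()             # 8 faces
faces = subdivide(subdivide(faces))    # 8 -> 32 -> 128 faces
points = face_normals(faces)           # 128 well-spread unit vectors
```

Each subdivision quadruples N, so you pick the seed polyhedron and depth that land closest to the count you want.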
I ran into this problem working on differential equations that model pattern formation (reaction-diffusion equations, originally postulated by Turing in the 1950s). The equations are highly nonlinear, but some solutions can be found when solving the problem on a sphere. You get spot solutions that dynamically move essentially to the minimum-energy configuration (Fekete points, I believe they're called). BTW, Neil Sloane, of OEIS fame, has a list of the best packings, up to n=100 I believe [0].
Things get interesting when you also allow the sphere to grow: the spots start to split (and sometimes annihilate). Understanding how the spots move on the sphere is itself a very interesting problem.
Quite cool how the author picks a topic, explains it well, and on top of that presents some novel results. He did this before with his article on minimum-discrepancy sequences, which is one of my favourite results in math.
For purposes of nearest neighbors this seems like an incredibly interesting shape to inscribe into:
The sphere, despite having spherical properties, also maintains linear properties due to the corrugation. To me that means we can try to inscribe orthogonal properties into both of the spaces.
My understanding of these geometries isn't complex enough to make the connections, so my question is this:
Do you think it's feasible to use shapes with this 'corrugated' property to make better nearest-neighbor compression?
My intuition tells me that you can use the shape's linear nature to push apart independent components and inscribe the rest of the details into the spherical components. Or perhaps the opposite way.
I don't have any intelligent comments on your question, but I wanted to say that I am a fan of Quanta magazine, but somehow had missed this really cool article. So thanks for pointing me to this fascinating field. ;)
Good article, but it'll take some time to understand. The %1 notation is interesting: I used to use {..} for taking the fractional part. %1 is intuitively easy, though it doesn't look particularly good...
Yeah. I think his website is extremely old and hasn't been updated in the last decade or so. Despite this I linked to it because he is a legend in this field, so I think it is still the definitive reference.
As far as I understand, part of the story as to why the dodecahedron and the cube fall short is due to their non-triangular faces.
Did the article switch the dodecahedron and icosahedron? It specified that the icosahedron is optimal for 12 points and the dodecahedron for 20 which seems backwards to me.
Can you explain the notation [0,1)^2 for the unit square? Does the 2 represent the spatial dimensionality, so [0,1)^3 is the unit cube? And why is 0 inclusive but 1 exclusive?
"The first is that this mapping is area-preserving, not distance-preserving." Which area is being preserved?
Is there a volume preserving choice function?
What are the points t0 and t3? Are those the locations of the singularity points? What is the definition of those "singularity points"? Is it that seeming void in the center of the Fibonacci spiral? And does that void not exist in the unit-square case?
So [0,1)^1 is a line interval, [0,1)^2 is the unit square, [0,1)^3 is the unit cube, and [0,1)^d is a d-dimensional cube.
As for why only one boundary can be included: the context is usually that practitioners want a region where one edge wraps to the opposite edge. They treat [0,1)^2 as if it were actually a 2-dimensional torus, so the two boundaries map to the same point and you can only include one of them.
In our case, since we are using x % 1 (the fractional part of x), the fractional part can be 0 (if x = 3.0, say), but it can never be exactly 1.
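A quick check of that behavior in Python (whose `%` returns a result in [0, 1) even for negative inputs):

```python
# The fractional part via % 1 always lands in the half-open interval [0, 1).
xs = [3.0, 2.75, -0.25]
fracs = [x % 1 for x in xs]   # -> [0.0, 0.75, 0.75]
```

So 0 is reachable (x = 3.0), but 1 never is, which is exactly the [0,1) convention.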
In recent weeks I've done some work on procedural meshes and filling space with various layout strategies. The posted article was a good introduction, and then I found this very approachable paper with a great overview of different algorithms: "Point Picking and Distributing on the Disc and Sphere".