Traditionally, if you have a real scene that you want to be able to reproduce as 3D graphics on the computer, you either:
1. Have an artist model it by hand. This is obviously expensive. And the result will be stylized by the artist, with quality that varies with the artist's skill, accidental inaccuracies, etc.
2. Use photogrammetry to convert a collection of photos into 3D meshes and textures. Still a fair chunk of work. Highly accurate, but quality varies wildly. Meshes and textures tend to be heavyweight yet low-detail. Reflections and shininess in general don't work. Glass, mirrors, and translucent objects don't work. Only solid, hard surfaces work. Nothing fuzzy.
Splatting is an alternative to photogrammetry that also takes photos as input and produces visually similar, often superior results. Shiny/reflective/fuzzy stuff all works. I've even seen an example with a large lens.
However, the representation is different. Instead of a mesh and textures, the scene is represented as fuzzy blobs that may have view-angle-dependent color and transparency. This is actually an old idea, but it was difficult to render quickly until recently.
The big innovation, though, is to take advantage of the mathematical properties of "fuzzy blobs" defined by differentiable equations, such as 3D Gaussians. That makes them suitable for manipulation by many of the same techniques used under the hood in training deep-learning AIs: mainly, back-propagation.
So, the idea of rendering scenes with various kinds of splats has been around for 20+ years. What's new is using back-propagation to fit splats to a collection of photos in order to model a scene automatically. Until recently, splats were largely modeled by artists or by brute-force algorithms.
Because this idea fits so well with the current hot topic in AI research, a lot of AI researchers are having tons of fun expanding on it. New enhancements to the technique are being published daily.
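To make the back-propagation point concrete, here's a minimal toy sketch (my own illustration, not the actual 3D Gaussian splatting pipeline): it fits a handful of isotropic 2D Gaussian blobs to a synthetic target image with plain gradient descent in PyTorch. The real technique adds anisotropic 3D covariances, view-dependent color, alpha compositing, and camera projection, but the core "differentiable blobs + photometric loss" loop looks like this.

```python
# Toy illustration: fit N isotropic 2D Gaussian "blobs" to a target
# image by gradient descent. Every parameter is differentiable, so
# autograd pushes positions, sizes, and intensities all at once.
import torch

H, W, N = 32, 32, 8                       # image size, number of blobs

# Hypothetical target "photo": a bright diagonal stripe.
ys, xs = torch.meshgrid(torch.linspace(0, 1, H),
                        torch.linspace(0, 1, W), indexing="ij")
target = torch.exp(-((xs - ys) ** 2) / 0.02)

# Learnable blob parameters: position, log-scale, intensity.
mu = torch.rand(N, 2, requires_grad=True)
log_s = torch.full((N,), -2.0, requires_grad=True)
amp = torch.rand(N, requires_grad=True)

opt = torch.optim.Adam([mu, log_s, amp], lr=0.05)
for step in range(500):
    opt.zero_grad()
    # Render: sum of isotropic Gaussians evaluated at every pixel.
    d2 = (xs[None] - mu[:, 0, None, None]) ** 2 \
       + (ys[None] - mu[:, 1, None, None]) ** 2
    sigma = torch.exp(log_s)[:, None, None]
    img = (amp[:, None, None] * torch.exp(-d2 / sigma ** 2)).sum(0)
    loss = ((img - target) ** 2).mean()   # photometric loss vs. the "photo"
    loss.backward()                       # back-propagation through the renderer
    opt.step()
```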
You can use the "Hasty Pudding trick" to truncate the cipher output without risking duplicates: if an encrypted value falls outside the range you're sampling from, just encrypt it again, repeating until it lands inside. Since the cipher is a permutation, you can never get duplicates. It is inefficient if the block space of the cipher is much larger than the sampling space, though.
However, you can also use a Feistel network to build a block cipher for an arbitrary bit length. So you can always bound the block space to no more than twice the sampling space, which is okay. A sketch of both ideas follows.
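Here is a minimal sketch of both ideas in Python (the round function is an ad-hoc SHA-256 construction, not a vetted cipher, and all names are mine): a balanced Feistel network gives a permutation on 2k-bit blocks for whatever k you like, and cycle-walking (the Hasty Pudding trick) maps it down to an arbitrary range without duplicates.

```python
# Sketch only: a toy balanced Feistel permutation plus cycle-walking.
# A real deployment would use a vetted format-preserving encryption scheme.
import hashlib

def feistel_encrypt(x: int, half_bits: int, keys: list[int]) -> int:
    """Permute a (2 * half_bits)-bit integer with a balanced Feistel network."""
    mask = (1 << half_bits) - 1
    left, right = x >> half_bits, x & mask
    for key in keys:
        digest = hashlib.sha256(f"{key}:{right}".encode()).digest()
        round_out = int.from_bytes(digest, "big") & mask   # toy round function
        left, right = right, left ^ round_out
    return (left << half_bits) | right

def permute_below(x: int, n: int, keys: list[int]) -> int:
    """Map x in range(n) to a unique value in range(n) via cycle-walking."""
    assert 0 <= x < n
    # Smallest even block width covering n; rounding up to an even width
    # can push the block space slightly past 2n, but the expected number
    # of walk steps stays small.
    half_bits = ((n - 1).bit_length() + 1) // 2
    y = feistel_encrypt(x, half_bits, keys)
    while y >= n:               # out of range: encrypt again; never repeats
        y = feistel_encrypt(y, half_bits, keys)
    return y

# Sanity check: it really is a permutation of range(n).
keys = [3, 1, 4, 1]             # toy round keys; use at least 3 rounds
assert sorted(permute_below(i, 1000, keys) for i in range(1000)) == list(range(1000))
```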
Onai | | San Jose / New York | FULL TIME, CONTRACTORS, GRADUATE INTERNS, POSTDOCTORAL FELLOWS, REMOTE
We're tackling exciting, difficult challenges and building offerings relevant to interesting real-world problems in a variety of fields. We have particular strengths in dispersed computation, functional programming, cryptography, and deep learning.
We're currently most interested in engineers with solid experience in Rust, Haskell/Idris, or cryptography. We also have openings for enthusiastic developers or researchers who might lack this precise experience but are eager and able to learn. We welcome internship/fellowship interest from postdoctoral scholars or senior graduate students.
We do not presently have openings for current/recent undergraduates.
Send your resume to [email protected] and we'll let you know if there's a potential fit.
Learning about ECS (entity component systems) and how to use them made gamedev so much easier for me. Although now I'm hamstrung by only being able to work on games in languages/frameworks with a solid ECS library (does a good, community-agreed-upon one exist for Rust yet?).
ECS, the observer pattern, and behavior trees are probably the three main things that answered all the "how on earth do they build something this complex" questions for me.
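For anyone who hasn't met the pattern, here's a minimal sketch of the ECS idea in Python (my own toy, not any particular library): entities are just IDs, components are plain data keyed by entity, and systems are functions that run over every entity holding the components they care about.

```python
# Toy ECS: entities are integer IDs, components live in per-type tables,
# and systems query for entities that have all required components.
from dataclasses import dataclass

@dataclass
class Position:
    x: float
    y: float

@dataclass
class Velocity:
    dx: float
    dy: float

class World:
    def __init__(self):
        self._next_id = 0
        self._tables = {}                 # component type -> {entity: value}

    def spawn(self, *components):
        eid, self._next_id = self._next_id, self._next_id + 1
        for comp in components:
            self._tables.setdefault(type(comp), {})[eid] = comp
        return eid

    def query(self, *types):
        ids = set.intersection(*(set(self._tables.get(t, {})) for t in types))
        for eid in sorted(ids):
            yield tuple(self._tables[t][eid] for t in types)

def movement_system(world: World, dt: float):
    # Runs on every entity that has both Position and Velocity.
    for pos, vel in world.query(Position, Velocity):
        pos.x += vel.dx * dt
        pos.y += vel.dy * dt

world = World()
world.spawn(Position(0.0, 0.0), Velocity(1.0, 2.0))
world.spawn(Position(5.0, 5.0))           # no Velocity, so movement skips it
movement_system(world, dt=0.016)
```

Real ECS libraries add archetype storage and cache-friendly iteration, but the payoff is the same: new behavior is a new component plus a new system, not a deeper class hierarchy.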
I mostly use it for trivial stuff like proper Unicode arrows → instead of ASCII ->, but I think it's pretty cool to have the option to type any Unicode character I want.
I've found https://www.telepresence.io/ to be helpful. It lets you seamlessly integrate a locally running process into a K8s cluster, allowing for fast iteration and easy debugging.
I dislike violin plots because they don't give a good sense of how many points there are overall, and this can be very misleading if you're trying to compare segments of different sizes. They also look like female genitalia, and I'm 100% serious when I say this tends to distract people for laughs.
Edit: well, I suppose I should clarify that the comment on violin plots is implementation-dependent and biased by my personal preferences for visualization libraries.
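To illustrate the size complaint, here's a hedged matplotlib sketch (the data and labels are made up): two segments with wildly different counts produce equally confident-looking violins, so annotating each one with its n is a cheap fix.

```python
# Two violins, 1000 points vs. 12: the shapes look equally authoritative
# unless you annotate the counts.
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(0)
segments = {"segment A": rng.normal(0.0, 1.0, 1000),
            "segment B": rng.normal(0.3, 1.0, 12)}

fig, ax = plt.subplots()
ax.violinplot(list(segments.values()), positions=range(len(segments)))
for pos, (name, data) in enumerate(segments.items()):
    ax.text(pos, data.max() + 0.3, f"n={len(data)}", ha="center")
ax.set_xticks(range(len(segments)), segments.keys())
plt.show()
```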
Can you say where the scariest and most ambitious convincing pitch was on the following scale?
1) We're going to build the next Facebook!
2) We're going to found the next Apple!
3) Our product will create sweeping political change! This will produce a major economic revolution in at least one country! (Seasteading would be change on this level if it worked; creating a new country successfully is around the same level of change as this.)
4) Our product is the next nuclear weapon. You wouldn't want that in the wrong hands, would you?
5) This is going to be the equivalent of the invention of electricity if it works out.
6) We're going to make an IQ-enhancing drug and produce basic change in the human condition.
7) We're going to build serious Drexler-class molecular nanotechnology.
8) We're going to upload a human brain into a computer.
9) We're going to build a recursively self-improving Artificial Intelligence.
10) We think we've figured out how to hack into the computer our universe is running on.
https://radiancefields.com/