
Yes! This really feels next-gen. After all, you're not actually interested in editing the 2D image itself; that's just an array of pixels. You want to edit what it represents, and this approach allows exactly that. It will be very interesting to see where this leads!


Or analogous to how you convert audio waveform data into the frequency domain with the fast Fourier transform, modify it there, and convert it back into a waveform again.
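Roughly, that round trip looks like this (a minimal NumPy sketch; the test tone and the 1 kHz cutoff are made up for illustration):

    import numpy as np

    sample_rate = 44100
    t = np.linspace(0, 1, sample_rate, endpoint=False)
    # A 440 Hz tone plus a 2 kHz component we want to attenuate.
    waveform = np.sin(2 * np.pi * 440 * t) + 0.5 * np.sin(2 * np.pi * 2000 * t)

    spectrum = np.fft.rfft(waveform)                    # waveform -> frequency domain
    freqs = np.fft.rfftfreq(len(waveform), d=1 / sample_rate)
    spectrum[freqs > 1000] *= 0.1                       # edit in the frequency domain
    edited = np.fft.irfft(spectrum, n=len(waveform))    # back to a waveform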

Their examples do, however, still look a bit like distorted pixel data. The hands of the children seem to warp with the cloth, something they could easily have prevented.

The cloth also looks very static despite being animated, mainly because its shading never changes. If they had more information about the scene from multiple cameras (or perhaps inferred it from the color data), the Gaussian splat would be more accurate and could even incorporate the altered angle/surface normal after modification to simulate the changed specular highlights as it animates.
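For the highlights specifically: given a per-splat normal, even a standard Blinn-Phong specular term shows why re-shading per frame would help (an illustrative sketch, not anything from their method; the light/view directions and shininess are made up):

    import numpy as np

    def specular(normal, light_dir, view_dir, shininess=32.0):
        # Blinn-Phong specular term: intensity depends on the surface normal,
        # so a deformed splat's normal shifts its highlight.
        n = normal / np.linalg.norm(normal)
        h = light_dir + view_dir
        h = h / np.linalg.norm(h)
        return max(float(np.dot(n, h)), 0.0) ** shininess

    light = np.array([0.0, 0.7, 0.7])
    view = np.array([0.0, 0.0, 1.0])
    flat   = specular(np.array([0.0, 0.0, 1.0]),  light, view)  # undeformed normal
    tilted = specular(np.array([0.0, 0.3, 0.95]), light, view)  # normal after the cloth bends
    # flat != tilted: updating shading as the normals change would make
    # the animated cloth read as 3D instead of a static texture.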



