duckworthd's comments | Hacker News

I hope to release the code in the new year, but we have some big dependencies that need to be released first. In the meantime, you can already begin hacking on the live viewer: https://github.com/smerf-3d/smerf-3d.github.io/blob/main/vie...


3D understanding as a field is very much in its infancy. Good work is being done in this area, but we've got a long way to go yet. SMERF is all about "view synthesis" -- rendering realistic images -- with no attempt at semantic understanding or segmentation.


"It's my VR-deployed SMERF CLIP model with LLM integration, and I want it now!"

It is funny how quickly goalposts move! I love to see progress though, and wow, is progress happening fast!


It's not always moving goalposts - sometimes a new technology progresses on some aspects and regresses in others.

This technology is a significant step forward in some ways - but people are going to compare it to state of the art 3D renders and think that it's more impressive than it actually is.

Eventually this sort of thing will have an understanding of lighting (de-lighting and light source manipulation) and spatial structure (and eventually spatio-temporal structure).

Right now it has none of that, but a layman will look at the output and think that what they're seeing is significantly closer to that than it really is, due to largely cosmetic similarities.


I can't say. I'm not familiar with BD in Cyberpunk.


https://youtu.be/KXXGS3MGCro?t=118

It's a sort of replayable cutscene that happens a couple of times in the game, one you can wander through. The noteworthy bit is that it's rendered out of voxels that look very similar to the demos, but at a much lower resolution, and if you push the frustum into any objects, you get the same kind of effect where the surface breaks into blocks.


Interesting effect. It does look very voxel-y. I'm not a video game developer at heart, so I can only guess how it was implemented. I doubt NeRF models were involved, but I wouldn't be surprised if some sort of voxel discretization was.


It seems like it might even just be some kind of shader.


If you think about how they created this from the POV of the game-creation pipeline, then that probably is the way. If this is done by creating a shader on top of "plain old" 3D assets, then aside from the programmers/artists involved with creating that shader, everyone else can go about their business with minimal retraining. There probably was a lot of content to create, so that optimization likely took priority over other methods of implementing this effect.
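If it really is a shader, one plausible approach is quantizing shading to a coarse world-space grid. Here's a rough sketch of that idea as a three.js material -- the material itself, its voxelSize uniform, and the per-cell coloring are purely illustrative guesses on my part, not anything from the actual game:

    import * as THREE from 'three';

    // Hypothetical "voxelize" material: snap the shading position to a
    // coarse world-space grid so every fragment inside a cell gets the
    // same color, which reads as a solid block from the camera.
    const voxelMaterial = new THREE.ShaderMaterial({
      uniforms: { voxelSize: { value: 0.05 } },
      vertexShader: `
        varying vec3 vWorldPos;
        void main() {
          vWorldPos = (modelMatrix * vec4(position, 1.0)).xyz;
          gl_Position = projectionMatrix * modelViewMatrix * vec4(position, 1.0);
        }
      `,
      fragmentShader: `
        uniform float voxelSize;
        varying vec3 vWorldPos;
        void main() {
          vec3 cell = floor(vWorldPos / voxelSize);
          // Placeholder per-cell color; a real effect would sample the
          // underlying asset's textures and lighting instead.
          vec3 color = 0.5 + 0.5 * fract(cell * 0.1031);
          gl_FragColor = vec4(color, 1.0);
        }
      `,
    });

    // Swap onto an existing mesh when the effect should kick in:
    // mesh.material = voxelMaterial;

The pipeline appeal is exactly what you describe: artists keep authoring ordinary assets, and the blocky look is applied at render time by swapping materials.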


I won't say too much about this, but the amount of buzz around articles these days is more a "research today" sort of thing. Top conferences like CVPR receive thousands of submissions each year, and there's a lot of upside to getting your work in front of as many eyeballs as possible.

By no means do I claim that SMERF is the be-all-end-all in real-time rendering, but I do believe it's a solid step in the right direction. There are all kinds of ways to improve this work and others in the field: smaller representation sizes, faster training, higher quality, and fewer input images would all make this technology more accessible.


We unfortunately haven't tested our web viewer in Firefox. Let us know which platform you're running on and we'll do our best to take a look in the new year (holiday vacation!).

In the meantime, give it a shot in a WebKit- or Chromium-based browser. I've had good results with Safari on iPhone and Chrome on Android/MacBook/Windows.
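If you'd like to include more detail in a report, a quick probe like this in the developer console will tell you whether WebGL2 is available and which renderer string your browser exposes. These are plain browser APIs, nothing specific to our viewer (and I'm assuming WebGL2 is the relevant requirement here):

    // Quick probe: is WebGL2 available, and which GPU/driver string does
    // the browser report? Falls back to the generic RENDERER string if
    // the debug extension is unavailable.
    const canvas = document.createElement('canvas');
    const gl = canvas.getContext('webgl2');
    if (!gl) {
      console.log('WebGL2 unavailable in this browser.');
    } else {
      const ext = gl.getExtension('WEBGL_debug_renderer_info');
      const renderer = ext
        ? gl.getParameter(ext.UNMASKED_RENDERER_WEBGL)
        : gl.getParameter(gl.RENDERER);
      console.log('WebGL2 OK, renderer:', renderer);
    }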


Collaboration is a thing at the Big G :)


Be careful with this one! Luma's offering requires that the camera follow the recorded video path. Our method lets the camera go wherever you desire!


Nice discovery :). Check the developer console: it'll tell you.


Thank you!


All these details and more are in our technical paper! In short: SMERF training takes much longer, SMERF rendering is nearly as fast as 3DGS when a CUDA GPU is available, and quality is visibly higher than 3DGS on large scenes and slightly higher on smaller scenes.

https://arxiv.org/abs/2312.07541


Is it possible to use Zip-NeRF to train GS to eliminate the floaters?


Maybe! That's the seed of a completely different research paper :)

