
Does an open-source toolchain exist for capturing, processing, and hosting navigable 3D walkthroughs like this (e.g. something like an open-source Matterport)?


Not yet, as far as I'm aware. The current flow involves a DSLR for capture, COLMAP for camera parameter estimation, one codebase for training a teacher model, our codebase for training SMERF, and our web viewer for rendering models.
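For anyone curious what that flow looks like end to end, here's a minimal sketch in Python. Only the COLMAP step uses a real CLI; the paths and the teacher-model / SMERF training steps are placeholders, since the exact commands depend on the codebases involved. Verify flag names against `colmap help` for your COLMAP version.

    #!/usr/bin/env python3
    """Sketch of the capture -> COLMAP -> training -> viewer flow described above."""
    import subprocess
    from pathlib import Path

    IMAGES = Path("capture/images")    # DSLR frames (hypothetical layout)
    DB = Path("capture/database.db")
    SPARSE = Path("capture/sparse")

    def run(cmd):
        print("+", " ".join(cmd))
        subprocess.run(cmd, check=True)

    # 1. Camera parameter estimation with COLMAP (structure from motion).
    SPARSE.mkdir(parents=True, exist_ok=True)
    run(["colmap", "feature_extractor",
         "--database_path", str(DB),
         "--image_path", str(IMAGES)])
    run(["colmap", "exhaustive_matcher",
         "--database_path", str(DB)])
    run(["colmap", "mapper",
         "--database_path", str(DB),
         "--image_path", str(IMAGES),
         "--output_path", str(SPARSE)])

    # 2. Train the teacher model, then distill into SMERF (placeholders --
    #    substitute the actual training scripts from the respective codebases).
    # run(["python", "train_teacher.py", "--scene_dir", "capture"])
    # run(["python", "train_smerf.py", "--scene_dir", "capture"])

    # 3. Serve the exported assets to the web viewer, e.g. with any static file server.
    # run(["python", "-m", "http.server", "--directory", "export"])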

Sounds like an opportunity!


Is there a significant advantage to capturing with a DSLR versus the camera on a decent phone?


The big differences are access to fisheye lenses, a burst mode that can run for minutes at a time, and the ability to minimize the amount of in-camera post-processing. In principle the capture could be done with a smartphone, but doing so is pretty time-consuming right now.


You don't need a toolchain for capturing; you just need the data. Get it now and process it when better tools become available. There are guides for shooting for photogrammetry and NeRF that are generally applicable to what you need to do.
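One approach those guides commonly recommend is to record slow, overlapping video and extract still frames for later processing. A minimal sketch, with illustrative paths and frame rate (check `ffmpeg -h` for your build):

    import subprocess
    from pathlib import Path

    VIDEO = Path("capture/walkthrough.mp4")   # hypothetical capture file
    FRAMES = Path("capture/frames")
    FRAMES.mkdir(parents=True, exist_ok=True)

    # Pull ~2 frames per second as high-quality JPEGs; downstream SfM tools
    # (COLMAP, etc.) can consume the resulting image folder later.
    subprocess.run([
        "ffmpeg", "-i", str(VIDEO),
        "-vf", "fps=2",
        "-qscale:v", "2",
        str(FRAMES / "frame_%05d.jpg"),
    ], check=True)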



