
Well, maybe it works on Obsidian vaults for note taking heh, but with LLaMA models' 2k-token context window it'd get a tenth of the way through before starting to drop context. Likely useless without something like a 100k-context model.


Well, you wouldn't feed the whole vault to the model; you'd use embeddings to find the content most relevant to the question being asked.


Is that actually a thing yet? Proper vector-DB integration? I'd sure like to see some demos of that, as it's been hyped up a lot but I haven't really seen anyone deploy anything substantial with it yet.


Even PrivateGPT does that, using Chroma as the vector DB.
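The retrieval pattern described above can be sketched without any external services. Here a toy bag-of-words vector stands in for a real embedding model; in practice you'd call an embedding model and store vectors in something like Chroma. All names and the sample vault below are made up for illustration:

```python
import math
from collections import Counter

def embed(text):
    # Toy "embedding": a term-frequency vector over whitespace tokens.
    # A real setup would call an embedding model here instead.
    return Counter(text.lower().split())

def cosine(a, b):
    # Cosine similarity between two sparse term-frequency vectors.
    dot = sum(a[t] * b[t] for t in a)
    norm = (math.sqrt(sum(v * v for v in a.values()))
            * math.sqrt(sum(v * v for v in b.values())))
    return dot / norm if norm else 0.0

def top_notes(question, notes, k=2):
    # Rank vault notes by similarity to the question. Only the top-k
    # would be pasted into the LLM prompt, keeping it inside a small
    # (e.g. 2k-token) context window.
    q = embed(question)
    return sorted(notes, key=lambda n: cosine(q, embed(n)), reverse=True)[:k]

# Hypothetical mini-vault for demonstration.
vault = [
    "Meeting notes: project roadmap and Q3 deadlines",
    "Recipe: sourdough starter feeding schedule",
    "Obsidian plugin ideas for linking daily notes",
]
print(top_notes("what are the Q3 roadmap deadlines?", vault, k=1))
```

A vector DB does the same ranking, just with learned embeddings and an index that scales past a handful of notes.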



