Well, maybe it works on Obsidian vaults for note taking, heh, but with LLaMA models' 2k-token context window it'd get a tenth of the way through before starting to drop context. Likely useless without something like a 100k-context model.
Is that actually a thing yet? Proper vector DB integration? I'd sure like to see some demos of that; it's been hyped up a lot, but I haven't seen anyone deploy anything proper with it yet.
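For what it's worth, the basic idea is simple enough to sketch: embed the notes, find the few nearest to the query, and pack only those into the model's small context window instead of the whole vault. Here's a toy stdlib-only version, where the bag-of-words "embedding" and the linear scan are crude stand-ins for a real embedding model and an actual vector DB (all names here are made up for illustration):

```python
import math
from collections import Counter

def embed(text):
    # Hypothetical stand-in for a real embedding model.
    return Counter(text.lower().split())

def cosine(a, b):
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query, notes, k=2):
    # A vector DB would do this lookup with an approximate index;
    # here it's just a ranked linear scan.
    q = embed(query)
    ranked = sorted(notes, key=lambda n: cosine(q, embed(n)), reverse=True)
    return ranked[:k]

def build_prompt(query, notes, budget=2000):
    # Only the retrieved notes go into the prompt, trimmed to the
    # model's token budget (word count as a crude token proxy).
    context, used = [], 0
    for note in retrieve(query, notes):
        words = note.split()
        if used + len(words) > budget:
            break
        context.append(note)
        used += len(words)
    return "\n\n".join(context) + "\n\nQ: " + query

notes = [
    "Obsidian stores each note as a plain markdown file in the vault.",
    "Sourdough starter needs feeding twice a day at room temperature.",
    "Backlinks in Obsidian connect related notes into a graph.",
]
print(build_prompt("how does obsidian link notes together", notes))
```

The point being: the context window only has to fit the handful of retrieved notes plus the question, so even a 2k model can answer over a vault far bigger than 2k tokens, as long as retrieval surfaces the right notes.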