The main issue is that few people know about retrieval-augmented generation and its variants, and even fewer know how to build effective versions of it.
Hooking up a real, high-quality, well-configured knowledge graph (or other retrieval mechanism) alongside template/constraint tooling to powerful long-context models (with full attention, not the BS linear kinds) is an absolute game changer and minimizes the risk of hallucination.
But few are doing this, and thus the public believes that LLMs are not trustworthy.
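The pattern is simpler than it sounds. Here's a minimal sketch in Python: a toy knowledge graph, naive lexical retrieval, and a prompt template that constrains the model to answer only from the retrieved facts. Everything here beyond the general shape is an assumption for illustration, including the placeholder `llm_generate`, which stands in for whatever long-context model client you actually use.

```python
# Minimal sketch of graph-grounded RAG: retrieve facts, constrain the model to them.
from typing import NamedTuple


class Triple(NamedTuple):
    subject: str
    predicate: str
    obj: str


# Toy knowledge graph. In a real system this is a curated, well-configured store.
KG = [
    Triple("Rust", "first_released", "2015"),
    Triple("Rust", "designed_by", "Graydon Hoare"),
    Triple("Go", "first_released", "2012"),
]

# Template/constraint layer: the model may only use the supplied facts.
PROMPT_TEMPLATE = """Answer the question using ONLY the facts below.
If the facts do not contain the answer, reply exactly: "Not in the knowledge base."

Facts:
{facts}

Question: {question}
Answer:"""


def retrieve(question: str, graph: list[Triple], k: int = 5) -> list[Triple]:
    """Naive lexical retrieval: rank triples by word overlap with the question."""
    words = set(question.lower().split())
    return sorted(
        graph,
        key=lambda t: -len(words & set(f"{t.subject} {t.predicate} {t.obj}".lower().split())),
    )[:k]


def llm_generate(prompt: str) -> str:
    """Placeholder for a call to a long-context model; swap in a real client here."""
    return prompt  # echoes the grounded prompt so the sketch runs standalone


def answer(question: str) -> str:
    facts = retrieve(question, KG)
    fact_lines = "\n".join(
        f"- {t.subject} {t.predicate.replace('_', ' ')} {t.obj}" for t in facts
    )
    return llm_generate(PROMPT_TEMPLATE.format(facts=fact_lines, question=question))


if __name__ == "__main__":
    print(answer("When was Rust first released?"))
</code>
```

The point of the sketch is the division of labor: retrieval supplies the facts, the template forbids answering outside them, and the long-context model has room to take all of that in. Production systems replace the word-overlap ranking with proper graph or vector search and the echo stub with a real model call, but the shape stays the same.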