1. Use LLM, possibly already grounded by typical RAG results, to generate initial answer.
2. Extract factual claims / statements from (1). E.g. using some LLM.
3. Verify each fact from (2). E.g. using a separate RAG system where the prompt focuses on this single fact.
4. Rerun the system with the results from (3) and possibly (1).
If so, this (and variations) isn't really anything new. These kinds of workflows have been used for years (time flies!).
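The four steps above can be sketched as a simple loop. This is a minimal sketch, not a real implementation: all LLM and RAG calls are stubbed with canned, deliberately-wrong data, and every function name here is a placeholder you'd swap for actual model/retrieval calls.

```python
def generate_answer(question, feedback=None):
    # Step 1 (and step 4 on reruns): stub LLM. A real system would call
    # a model, optionally grounded by RAG results; here we return a canned
    # answer and "correct" it when verification feedback is passed back in.
    if feedback:
        return "Paris is the capital of France."
    return "Paris is the capital of France. Paris was founded in 1900."

def extract_claims(answer):
    # Step 2: stub claim extraction -- one claim per sentence.
    # A real system would use an LLM to split out atomic factual claims.
    return [s.strip() for s in answer.split(".") if s.strip()]

def verify_claim(claim):
    # Step 3: stub per-claim verification. A real system would run a
    # focused RAG query for this single claim; here we pretend the
    # retrieval step rejects the (intentionally false) founding-date claim.
    return {"claim": claim, "supported": "1900" not in claim}

def answer_with_verification(question, max_rounds=2):
    answer = generate_answer(question)
    for _ in range(max_rounds):
        verdicts = [verify_claim(c) for c in extract_claims(answer)]
        if all(v["supported"] for v in verdicts):
            return answer
        # Step 4: rerun generation with the verification results folded in.
        answer = generate_answer(question, feedback=verdicts)
    return answer

print(answer_with_verification("What is the capital of France?"))
```

The loop terminates either when every extracted claim passes verification or after a fixed number of rerun rounds, which bounds cost when the model keeps producing unverifiable claims.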