
What I imagine

1. Use an LLM, possibly already grounded by typical RAG results, to generate an initial answer.

2. Extract the factual claims / statements from (1). E.g. using some LLM.

3. Verify each fact from (2). E.g. using a separate RAG system where the prompt focuses on that single fact.

4. Rerun the system with the results from (3) and possibly (1).

If so, this (and variations on it) isn't really anything new. These kinds of workflows have been used for years (time flies!).
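
A minimal sketch of that loop in Python, assuming two hypothetical helpers (call_llm and retrieve) that stand in for whatever model client and RAG retriever you actually use:

    # Rough sketch of the workflow above. call_llm() and retrieve() are
    # hypothetical placeholders, not any particular library's API.

    def call_llm(prompt: str) -> str:
        raise NotImplementedError  # plug in your LLM client

    def retrieve(query: str) -> list[str]:
        raise NotImplementedError  # plug in your RAG retriever

    def answer_with_verification(question: str, rounds: int = 2) -> str:
        # Step 1: initial answer, grounded by a normal RAG pass.
        docs = "\n".join(retrieve(question))
        answer = call_llm(f"Context:\n{docs}\n\nQuestion: {question}")

        for _ in range(rounds):
            # Step 2: extract factual claims from the draft answer.
            claims = call_llm(
                "List every factual claim in the text below, one per line:\n"
                + answer
            ).splitlines()

            # Step 3: verify each claim with a focused retrieval + check.
            verdicts = []
            for claim in claims:
                evidence = "\n".join(retrieve(claim))
                verdicts.append(claim + " -> " + call_llm(
                    f"Evidence:\n{evidence}\n\nClaim: {claim}\n"
                    "Reply SUPPORTED, REFUTED, or UNKNOWN, with a short reason."
                ))

            # Step 4: rerun with the draft answer and the verification results.
            answer = call_llm(
                f"Question: {question}\n\nDraft answer:\n{answer}\n\n"
                "Fact-check results:\n" + "\n".join(verdicts) + "\n\n"
                "Rewrite the answer, fixing or dropping unsupported claims."
            )
        return answer

The rounds parameter just controls how many times step 4's rerun happens before returning the answer.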


