visarga on April 14, 2023 | on: Building LLM Applications for Production
In open-book mode it does not hallucinate; that only happens in closed-book mode. So if you put a piece of text in the prompt, you can trust the summary will be factual. You can also use it for information extraction (text to JSON).
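(A minimal sketch of that kind of open-book extraction, assuming the OpenAI Python SDK v1.x; the model name, sample text, and JSON field names below are illustrative placeholders, not something taken from this thread.)

    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    source_text = """
    Acme Corp reported Q1 revenue of $12.4M, up 8% year over year.
    CEO Jane Doe said hiring will slow in the second half.
    """

    # Open-book prompting: the source text sits in the prompt itself,
    # so the model is asked to restate what is in front of it rather
    # than recall facts from its training data.
    response = client.chat.completions.create(
        model="gpt-4",
        temperature=0,
        messages=[
            {
                "role": "system",
                "content": (
                    "Extract facts only from the provided text. "
                    'Reply with JSON: {"company": str, "revenue": str, '
                    '"yoy_growth": str, "ceo": str}. '
                    "Use null for any field not stated in the text."
                ),
            },
            {"role": "user", "content": source_text},
        ],
    )

    print(response.choices[0].message.content)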
layer8 on April 14, 2023
What are you basing this assessment on? My understanding is that it can in principle still hallucinate, though with a lower probability.
visarga on April 14, 2023
I experimented with information extraction using GPT-3 and GPT-4.
goatlover on April 14, 2023
I've had it hallucinate with text I've fed it. More so with 3.5 than 4, but it has happened.