
Sure - I guess "domain" was the wrong word; "use case" is a better phrasing.

LLMs have a tendency to hallucinate at a rate that makes them untrustworthy at scale without a human in the loop. The more open-ended the prompt, the higher the hallucination rate. Here I mean minor things, like swapping a negative, that can fundamentally change a result.

Thus, anywhere we trust a computer to perform reliable logic, we cannot trust an LLM, because its error rate is too high.

Methods such as RAG can box in an LLM and keep it on track, but that error rate means it can never be mission critical, a la business logic, and keeps it at the level of a toy.
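To make the "boxing in" concrete, here is a toy sketch of the RAG pattern: retrieve relevant snippets and tell the model to answer only from them. The keyword-overlap retriever and the DOCS list are purely illustrative assumptions; real pipelines use embedding search over a vector store.

```python
# Toy illustration of RAG-style "boxing in": retrieve relevant snippets and
# instruct the model to answer only from them. The naive keyword-overlap
# retriever below is illustrative, not what production systems use.
DOCS = [
    "Refund policy: purchases can be returned within 30 days with a receipt.",
    "Shipping: standard delivery takes 5-7 business days within the US.",
]

def retrieve(question: str, docs: list[str], k: int = 1) -> list[str]:
    # Rank documents by naive word overlap with the question.
    q_words = set(question.lower().split())
    return sorted(docs, key=lambda d: -len(q_words & set(d.lower().split())))[:k]

def build_prompt(question: str) -> str:
    context = "\n".join(retrieve(question, DOCS))
    # Constraining the model to the retrieved context lowers the hallucination
    # rate, but does not eliminate it -- it can still swap a negative.
    return (
        "Answer using ONLY the context below. If the answer is not in the "
        f"context, say 'unknown'.\n\nContext:\n{context}\n\nQuestion: {question}"
    )

print(build_prompt("How long do I have to return an item?"))
```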

Where LLMs are game changers is ETL pipelines / data scrapers. I used to work at Clearbit, where we built thousands of lines of code just to extract the address of a company's HQ or determine whether a company is owned by another org. LLMs just do that... for free. With LLMs, data extraction from free-form text is now a solved problem, and that's god damn mind-blowing to me.
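A minimal sketch of that kind of extraction, assuming the official `openai` Python client and an API key in the environment; this is not Clearbit's actual pipeline, and the model name and JSON fields are illustrative placeholders:

```python
# Sketch: extract a company's HQ address and parent organization from
# free-form text with an LLM, returning structured JSON.
# Assumes the `openai` Python client and OPENAI_API_KEY are set up;
# model name and field names are placeholders for illustration.
import json
from openai import OpenAI

client = OpenAI()

PROMPT = """Extract the following fields from the text and return only JSON:
  hq_address: the company's headquarters address, or null if not stated
  parent_company: the organization that owns the company, or null if independent

Text:
{text}"""

def extract_company_facts(text: str) -> dict:
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[{"role": "user", "content": PROMPT.format(text=text)}],
        response_format={"type": "json_object"},  # force a JSON response
        temperature=0,  # reduce (but not eliminate) output variance
    )
    return json.loads(response.choices[0].message.content)

print(extract_company_facts(
    "Acme Robotics, a wholly owned subsidiary of Globex Corp, is "
    "headquartered at 500 Market St, San Francisco, CA."
))
```

Given the hallucination caveat above, outputs like this still want validation (address parsing, spot checks) before they feed anything downstream.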


