
Right: LLMs have a "jagged frontier". They are really good at some things and terrible at other things, but figuring out WHAT those things are is extremely unintuitive.

You have to spend a lot of time experimenting with them to develop good intuitions for where they make sense to apply.

I expect the people who think LLMs are useless are mostly people who haven't invested that time yet. This happens a lot, because the AI vendors themselves don't exactly advertise their systems as "great at some stuff, terrible at other stuff, and here's how to figure out which is which".


