
They are human in the sense that they are reinforced, by humans, to exhibit human-like behavior: a human byproduct.




Is the solution to sycophancy just a very clever prompt that forces logical reasoning? Do we want our LLMs to be scientifically accurate and truthful, or creative and exploratory in nature? Fuzzy systems like LLMs will always have these kinds of tradeoffs, so there should be a better UI with accessible "traits" (devil's advocate, therapist, expert doctor, finance advisor) that one can invoke.
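A minimal sketch of what "invokable traits" could look like in practice: map each trait to a system prompt and prepend it to the chat request. The trait names and prompt wording here are hypothetical illustrations, not any real product's API.

```python
# Hypothetical trait registry: each trait is a system prompt that steers
# the model's behavior. Names and wording are made up for illustration.
TRAITS = {
    "devils_advocate": "Challenge the user's claims and surface counterarguments.",
    "therapist": "Respond with empathy; prioritize wellbeing over nitpicking.",
    "expert_doctor": "Be clinically precise; state uncertainty explicitly and avoid flattery.",
    "finance_advisor": "Be conservative; state risks before benefits.",
}

def build_messages(trait: str, user_prompt: str) -> list[dict]:
    """Prepend the chosen trait's system prompt to a chat-style message list."""
    if trait not in TRAITS:
        raise KeyError(f"unknown trait: {trait}")
    return [
        {"role": "system", "content": TRAITS[trait]},
        {"role": "user", "content": user_prompt},
    ]

msgs = build_messages("devils_advocate", "My startup idea can't fail.")
```

The same message list could then be passed to any chat-completion endpoint; the point is only that the trait selection lives in the UI, not buried in the user's prompt.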


