Hacker News



Tay (allegedly) learned from repeated interaction with users; the current generation of LLMs can't do that. They're trained once and then that's it.


Do you think Tay's user interactions were novel, or is race-based hatred a consistent, persistent strain of human garbage that made it into the corpus used to train LLMs?

We're literally trying to shove as much data as possible into these things, after all.

What I'm implying is that you think you made a point, but you didn't.



