Hacker Newsnew | past | comments | ask | show | jobs | submitlogin

I think this will remain to be seen. Wasn't there a paper linked here on HN recently, that claimed, that even few examples are sufficient, to poison LLMs? (I didn't read that paper, and merely interpreted the meaning of the title.)


I don't think it remains to be seen. I think it's obvious that the completely explicit exploit is going to be more effective.




Guidelines | FAQ | Lists | API | Security | Legal | Apply to YC | Contact

Search: