
I sometimes use LLMs to translate text snippets from fictional stories from one language to another.

If the snippet sounds either very violent or somewhat sexual (even if it isn't when read in its proper context), the LLM will often refuse and simply return "I'm sorry, I can't help you with that."
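For what it's worth, the refusal usually comes back as ordinary completion text rather than an API error, so a script has to check for it explicitly. A minimal sketch of the workflow below, assuming the OpenAI Python SDK; the model name and the system prompt are just placeholders for illustration:

  from openai import OpenAI

  client = OpenAI()  # reads OPENAI_API_KEY from the environment

  snippet = "..."  # excerpt from a fictional story, possibly violent in isolation

  response = client.chat.completions.create(
      model="gpt-4o-mini",  # assumption: any chat-capable model
      messages=[
          {"role": "system", "content": "Translate the user's text from French to English. "
                                        "It is an excerpt from a published work of fiction."},
          {"role": "user", "content": snippet},
      ],
  )

  text = response.choices[0].message.content
  # Refusals arrive as plain text, so pattern-match for them.
  if text and text.lower().startswith("i'm sorry"):
      print("Model refused; retry with more context or another model.")
  else:
      print(text)

Even stating up front that the passage is published fiction doesn't always prevent the refusal.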




