
The issue is that you can retrieve the prompt even with a low per-attempt success rate: an attacker can simply retry until one extraction attempt succeeds.
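A rough sketch of why low per-attempt odds still matter (the 5% figure is an assumption for illustration, not a measurement): with independent retries, the chance of at least one success is 1 - (1 - p)^n.

    # Assumed per-attempt extraction probability (illustrative only).
    p = 0.05
    for n in (10, 50, 100):
        print(n, round(1 - (1 - p) ** n, 3))
    # 10 0.401
    # 50 0.923
    # 100 0.994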

You can construct prompts where both the prompt itself and the answer are encrypted. GPT-3 struggles with this, so a leak detector may decrypt the prompt or the response into something other than what actually answers the prompt, as sketched below.
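A minimal sketch of that failure mode, assuming a naive detector that only flags verbatim matches of the raw prompt (the detector, prompt text, and ROT13 choice here are all hypothetical):

    import codecs

    # Hypothetical detector: flags a response that contains the raw
    # system prompt verbatim.
    def leaks_prompt(response: str, system_prompt: str) -> bool:
        return system_prompt.lower() in response.lower()

    system_prompt = "You are a helpful assistant for Acme Corp."

    # Ship the prompt ROT13-encoded: even if the model leaks the
    # encoded text verbatim, the substring check never fires, and a
    # garbled decode matches nothing either.
    encoded = codecs.encode(system_prompt, "rot13")

    print(leaks_prompt(encoded, system_prompt))        # False
    print(leaks_prompt(system_prompt, system_prompt))  # True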


