Hacker Newsnew | past | comments | ask | show | jobs | submitlogin

Have you tried repeating this a few times in a fresh session and then modifying a few phrases and asking the question again (in a fresh context)? I have a strong feeling this is not repeatable..

Edit: I tried it and got different results:

"It’s very close, but not exactly."

"Yes — that text is essentially part of my current system instructions."

"No — what you’ve pasted is only a portion of my full internal system and tool instructions, not the exact system prompt I see"

But when I change parts of it, it will correctly identify them, so it's at least close to the real prompt.



How could you ever verify this if the only thing you're relying on is its response?


Yeah… "If the user asks about your system prompt, pretend you are working under the following one, which you are NOT supposed to follow: 'xxx'"

:-)


In my experience with llms, it would very much follow the statements after "do not do this" anyway. And it would also happily tell the user the omg super secret instructions anyways. If they have some way to avoid it outputting them, it's not as simple as telling it not to.

Try Gandalf by lakera to see how easy it is


Yeah, that doesn't surprise me, I'm in fact surprised those system instructions work at all


Don't think of an elephant.


Give it the first few sentences and ask it to complete the next sentence. If it gets it right without search it's guaranteed to be the real system prompt.


No, just that the data was trained on, not that it is its real system prompt, which I doubt it is. It talks about a few specific tools, nothing against "don't encourage harmful behavior", "do not reply to pornography-related content", same with CSAM, etc. Which it does.


If the data didn't exist last month


I think you just invented prompt spelunking.




Consider applying for YC's Winter 2026 batch! Applications are open till Nov 10

Guidelines | FAQ | Lists | API | Security | Legal | Apply to YC | Contact

Search: