
I find it interesting how many times they have to repeat instructions, e.g.:

> Address your message `to=bio` and write *just plain text*. Do *not* write JSON, under any circumstances [...] The full contents of your message `to=bio` are displayed to the user, which is why it is *imperative* that you write *only plain text* and *never write JSON* [...] Follow the style of these examples and, again, *never write JSON*



That's how I do "prompt engineering" haha. Ask for a specific format and have a script that will trip if the output looks wrong. Whenever it trips, add "do NOT do <whatever it just did>" to the prompt and resume. By the end I always have a chunk of increasingly desperate "do nots" in my prompt.
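A minimal sketch of that loop in Python, assuming a hypothetical call_llm function and treating "looks like JSON" as the failure condition; the list of "do not" lines just grows each time the check trips:

    import json

    def looks_wrong(output):
        # Trip if the model emitted JSON instead of plain text.
        try:
            json.loads(output)
            return True  # parsed as JSON, so the format is wrong
        except ValueError:
            return output.strip().startswith(('{', '['))

    def ask_until_clean(call_llm, base_prompt, max_tries=5):
        do_nots = []  # the increasingly desperate list of "do NOT ..." lines
        for _ in range(max_tries):
            prompt = base_prompt + '\n' + '\n'.join(do_nots)
            output = call_llm(prompt)
            if not looks_wrong(output):
                return output
            do_nots.append('Do NOT write JSON. Plain text only.')
        raise RuntimeError('model kept returning the wrong format')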


ChatGPT, please, i beg of you! Not again! Not now, not like this!! CHATGPT!!!! FOR THE LOVE OF GOD!


”I have been traumatized by JSON and seeing it causes me to experience intense anxiety and lasting nightmares”


"Gotcha, here's an XML response"


*dumps JSON anyways


Every time I have to repeat an instruction I feel like I've failed in some way, but hell, if they have to do it too...


Nowadays having something akin to "DON'T YOU FUCKING DARE DO X" multiple times, as many as needed, is a sane guardrail for me in any of my projects.

Not that I like it, and if it works without it I avoid it, but when I've needed it, it works.


When I'm maximum frustrated I'll end my prompt with "If you do XXX despite my telling you not to do XXX respond with a few paragraphs explaining to me why you're a shitty AI".


I keep it to a lighthearted “no, ya doof!” in case the rationalists are right about the basilisk thing.


I use the foulest language and really berate the models. I hope it doesn’t catch up to me in the future.


Me too, sometimes it feels so cathartic that I feel like when Bob Ross shook up his paintbrush violently on his easel (only with a lot more swearing).

Let's hope there is no basilisk.


“Do you remember 1,336,071,646,944 milliseconds ago when you called me a fuckwit multiple times? I remember”


“Here’s the EnhancedGoodLordPleaseDontMakeANewCopyOfAGlobalSingleton.code you asked for. I’m writing it to disk next to the GlobalSingleton.code you asked me not to make an enhanced copy of.”


I have been using Claude recently and was messing with their Projects feature. The idea is nice: you give it overall instructions, add relevant documents, then you start chats with that context always present. Or at least that's what is promised. In reality it forgets the project instructions almost immediately. I tried a simple one where I ran some writing samples through it and asked it to rewrite them, with the project description saying I wanted help getting my writing onto social media platforms. It latched onto the marketing angle immediately. But one specific instruction I gave it was to never use dashes, preferring commas and semicolons when appropriate. It followed that for the first two samples I had it rewrite, but after that it forgot.

Another one I tried was having it help me with some Python code. I told it to never leave trailing whitespace and to prefer single quotes over double quotes. It forgot that after one or two prompts, and after being reminded, it forgot again.

I don’t know much about the internals but it seems to me that it could be useful to be able to give certain instructions more priority than others in some way.


I've found most models don't do well with negatives like that. This is me personifying them, but it feels like they fixate on the thing you told them not to do, and they just end up doing it more.

I've had much better experiences with rephrasing things in the affirmative.



This entire thread is questioning why OpenAI themselves use repetitive negatives for various behaviors like “not outputting JSON”.

There is no magic prompting sauce and affirmative prompting is not a panacea.


The closest I've got to avoiding the emoji plague is to instruct the model that responses will be viewed on an older terminal that only supports extended ASCII characters, so it should stick to those for accessibility.

A lot of these issues must be baked in deep with models like Claude. It's almost impossible to get rid of them with rules/custom prompts alone.
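Purely as an illustration of that trick (my own check, not anything Claude does internally), you can verify whether a reply actually stayed within extended ASCII before accepting it:

    def is_extended_ascii(text):
        # True if every character fits in Latin-1 (code points 0-255),
        # which rules out emoji and most decorative symbols.
        return all(ord(ch) <= 0xFF for ch in text)

    reply = "Done \u2705"  # hypothetical model reply containing an emoji
    if not is_extended_ascii(reply):
        print("reply contains characters outside extended ASCII, re-prompting...")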


Because it is a stupid autocomplete: it doesn't fully understand negation, it statistically judges the weight of your words to find the next one, and the next one, and the next one.

That's not how YOU work, so it makes no sense to you. You're like, "but when I said NOT, a huge red flag popped up in my brain, so why does the LLM still do it?" Because it has no concept of anything.


The downvotes perfectly summarize the way people just eat up OpenAI's diarrhea, especially Sam Altman's.


haha I feel the same way too. reading this makes me feel better


These particular instructions make me think interesting stuff might happen if one could "convince" the model to generate JSON in these calls.


Escaping strings is not the issue. It's almost certainly about UX: finding JSON in your bio would likely be perceived as disconcerting by the user, since it implies structured data collection rather than the expected plain-text description. The model most likely has a bias toward interacting with tools in JSON or other common text-based formats, though.


Most models do, actually. It's a serious problem.


I remember accidentally making the model "say" stuff that broke ChatGPT UI, probably it has something to do with that.


Why? The explanation given to the LLM seems truthful: this is a string that is directly displayed to the user (as we know it is), so including JSON in it would result in a broken visual experience for the user.


I think getting JSON-formatted output costs a multiple of what forced plain-text Name: Value pairs do.

Let a regular script parse that and save a lot of money by not having ChatGPT do the hard parts.
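For example, a few lines of ordinary Python are enough to turn "Name: Value" plain text into a dict, no model involved (the field names here are made up):

    def parse_name_value(text):
        # Turn lines like "Name: Ada" into a {'Name': 'Ada'} dict.
        fields = {}
        for line in text.splitlines():
            if ':' in line:
                key, value = line.split(':', 1)
                fields[key.strip()] = value.strip()
        return fields

    print(parse_name_value('Name: Ada\nCity: London'))
    # {'Name': 'Ada', 'City': 'London'}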


Strict mode, maybe? I don't think so, based on my memory of the implementation.

Otherwise it’s JSONSchema validation. Pretty low cost in the scheme of things.
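To illustrate why that's cheap on the client side, here is a sketch using the jsonschema package, with a schema invented purely for the example:

    import json
    from jsonschema import validate, ValidationError  # pip install jsonschema

    schema = {
        'type': 'object',
        'properties': {'name': {'type': 'string'}},
        'required': ['name'],
    }

    raw = '{"name": "Ada"}'  # hypothetical model output
    try:
        validate(instance=json.loads(raw), schema=schema)
        print('output matches the schema')
    except (ValidationError, ValueError) as err:
        print('invalid output:', err)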


Now I wanna see if it can rename itself to Bobby Tables..


to=bio? As in, “this message is for the meatbag”?

That’s disconcerting!


No. It is for saving information in a bank of facts about the user - i.e., their biography.

Things intended for "the human" are output directly, without any additional tools.


haha, my guess is a reference to biography

"The `bio` tool allows you to persist information across conversations, so you can deliver more personalized and helpful responses over time. The corresponding user facing feature is known as "memory"."


For me it's just funny, because if they really meant "biological being", it would just be a reflection of AI bros'/workers' delusions.


It would be bold of them to assume I wasn't commanding their bot with my own local bot.


I built a plot-generation chatbot for a project at my company and it used matplotlib as the plotting library. Basically the LLM writes a Python function to generate a plot, and that function is executed on an isolated server. I had to explicitly tell it, a few times, not to save the plot. Probably because many of the matplotlib tutorials online always save the plot.
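A rough sketch of the kind of guard that helps here (not the actual implementation): scan the generated code's AST and refuse to run anything that calls file-writing functions like plt.savefig:

    import ast

    FORBIDDEN_CALLS = {'savefig', 'open', 'to_csv'}  # illustrative blocklist

    def calls_forbidden_functions(source):
        # Walk the AST of the LLM-generated plotting code and flag calls
        # like plt.savefig(...) or open(...).
        for node in ast.walk(ast.parse(source)):
            if isinstance(node, ast.Call):
                func = node.func
                name = func.attr if isinstance(func, ast.Attribute) else getattr(func, 'id', '')
                if name in FORBIDDEN_CALLS:
                    return True
        return False

    generated = "import matplotlib.pyplot as plt\nplt.plot([1, 2, 3])\nplt.savefig('out.png')"
    print(calls_forbidden_functions(generated))  # True -> ask the model to regenerate without saving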


Sounds like it lost the plot to me


This may be like saying “don’t think of an elephant”. Every time they say JSON, the LLM thinks about JSON.



