Hacker News

> The OpenAI chat completion endpoint encourages the second-person prompting you describe, so that could be why you see it a lot. My understanding is that a transformation is applied to the user input prompts before being fed to the underlying model, so it's possible that the model receives a more natural transcription-style prompt.

There is something bizarre about talking to a "natural language" "chat" interface through a weirdly constructed pseudo-representation, only to have it reconstructed into a more natural prompt further down the stack, so the model can produce tokens that resemble real chat records.
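To make the transformation in question concrete, here is a minimal sketch of how a chat-completions message list might be flattened into a transcript-style prompt before reaching the model. The role labels and separator format below are assumptions for illustration only, not OpenAI's actual internal representation.

```python
# Hypothetical illustration: flatten a chat-completions message list
# into a transcript-style prompt. Labels and separators are assumptions.

def to_transcript(messages):
    """Render a list of {role, content} dicts as a plain transcript."""
    role_labels = {"system": "System", "user": "User", "assistant": "Assistant"}
    lines = [
        f"{role_labels.get(m['role'], m['role'])}: {m['content']}"
        for m in messages
    ]
    # End with the assistant label so the model continues the dialogue.
    return "\n".join(lines) + "\nAssistant:"

messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Summarize this thread."},
]
print(to_transcript(messages))
```

Under this (assumed) scheme, the structured API payload and the "natural" transcript the model sees are just two renderings of the same conversation.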




