Hacker News

You address the "personalization" misconception, but to people who don't have that misconception and are instead concerned about data retention in a more general sense, this article is unclear and seems self-contradictory.


What's unclear? I have a whole section about "Reasons to worry anyway".


"ChatGPT and other LLMs don’t remember everything you say" in the title is contradicted by the "Reasons to worry anyway", because OpenAI does remember (store) everything I say in non-opted-out chat interface, and there's no guarantee that a future ChatGPT based on the next model won't "remember" it in some way.

The article reads as "no, but actually yes".


Maybe I should have put the word "instantly" in there:

Training is not the same as chatting: ChatGPT and other LLMs don’t instantly remember everything you say


They may not internalise it instantly. They certainly do "remember" (in the colloquial sense) by writing it to a hard drive somewhere.

This article feels like a game of semantics.


In the sense of a chat bot that people are interacting with, it doesn't remember. That's an important distinction, regardless of what OpenAI does to save your interactions somewhere for whatever purposes they may have in mind.
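The distinction being drawn can be illustrated without any real API: a model endpoint behaves as a pure function of the messages it receives on each request, so any apparent "memory" across turns exists only because the client resends the transcript. A minimal sketch, where fake_llm is a hypothetical stand-in for a model call, not OpenAI's actual API:

```python
# Sketch: the model itself is stateless; the "memory" in a chat session
# lives in the client-maintained transcript that gets resent every turn.
# fake_llm is a hypothetical stand-in; it can only see the messages list
# passed to it on this one request.

def fake_llm(messages):
    """Pretend model: answers based solely on the transcript it was given."""
    names = [m["content"].split()[-1] for m in messages
             if m["role"] == "user" and m["content"].startswith("My name is")]
    if names:
        return f"Hello, {names[-1]}!"
    return "I don't know your name."

# Turn 1: only the new message is sent, so there is nothing to "remember".
print(fake_llm([{"role": "user", "content": "What is my name?"}]))

# Turn 2: the client resends the whole transcript; the recall comes from
# the transcript, not from any state inside the model.
history = [{"role": "user", "content": "My name is Ada"}]
history.append({"role": "user", "content": "What is my name?"})
print(fake_llm(history))
```

Whether the provider also logs those transcripts server-side is a separate question from whether the model retains anything between calls, which is the semantic gap this thread is arguing over.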





