They could very well have an "agent" that scrapes all conversations and tags elements that look interesting to feed into the training of a future model. At least, that's what I would do if I were in their shoes.
They presumably have a moderator agent that runs separately from ChatGPT and triggers the "This content may violate our content policy" orange box via `type: moderation` messages. They could just as well have a number of other agents.
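A minimal sketch of what such a separate moderation pass might look like, assuming it scans messages after the fact and emits `type: moderation` records pointing at flagged messages. The keyword check, field names, and message shape here are all my own guesses for illustration, not OpenAI's actual implementation:

```python
# Hypothetical sketch: a moderation pass that runs independently of the
# chat model and emits `type: moderation` messages for flagged content.
# The toy keyword check stands in for whatever real classifier they use.

def moderate(conversation):
    """Scan chat messages and yield a moderation record for each one
    that trips the (toy) policy check."""
    flagged_terms = {"violence", "self-harm"}  # stand-in for a real classifier
    for msg in conversation:
        if any(term in msg["content"].lower() for term in flagged_terms):
            yield {
                "type": "moderation",
                "target_id": msg["id"],
                "reason": "This content may violate our content policy",
            }

conversation = [
    {"id": "m1", "content": "Hello there"},
    {"id": "m2", "content": "A message mentioning violence"},
]
flags = list(moderate(conversation))
```

Running the pass over the sample conversation flags only the second message; the client would then render the orange warning box next to `target_id`.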