GPT is not a person. It doesn't categorize subjects. It models patterns of text.
A success would mean that your prompt's text pattern is strongly represented in the model; a failure would mean it isn't.
Nothing about that has any bearing on logic.
An LLM is 100% inferred patterns.
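To make "models patterns of text" concrete, here is a minimal sketch using a small open model (GPT-2 via Hugging Face transformers, picked purely for illustration, not because it's the model under discussion). All the model returns is a probability distribution over possible next tokens, shaped by patterns in its training text. When it completes "Therefore, Socrates is" with "mortal", that looks like logic, but it's pattern completion.

```python
# Sketch: inspect the next-token distribution of a small causal LM.
# Assumes the transformers and torch packages are installed.
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

prompt = "All men are mortal. Socrates is a man. Therefore, Socrates is"
inputs = tokenizer(prompt, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits  # shape: [batch, seq_len, vocab]

# Probability distribution over the *next* token, given the text so far.
probs = torch.softmax(logits[0, -1], dim=-1)
top = torch.topk(probs, 5)
for p, idx in zip(top.values, top.indices):
    print(f"{tokenizer.decode(idx):>12}  {p.item():.3f}")
```

There's no "syllogism" step anywhere in that computation; the output is just the continuation that best matches the statistical patterns the model inferred from text.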