That's an interesting one, I'll have to think through that a bit more.
Just first thoughts here, but I don't think (2) is off the table. The model wouldn't necessarily have to have been trained on the exact algorithm and its outputs. Forcing the model to work one step at a time and show each step may push it into a spot where it doesn't comprehend the whole algorithm, but has broken the work into steps small enough that each one looks similar to Python code it was trained on, so it can accurately predict the output.
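As a rough illustration of the kind of step-at-a-time breakdown I mean (the toy program and trace are mine, not from the original post):

    # Toy program, invented for illustration
    xs = [3, 1, 2]
    total = 0
    for x in xs:
        total += x * x
    print(total)

    # The step-at-a-time trace the model would be prompted to produce:
    #   x = 3: total = 0 + 9  = 9
    #   x = 1: total = 9 + 1  = 10
    #   x = 2: total = 10 + 4 = 14
    #   Output: 14

Each individual line of that trace is a trivial pattern the model has plausibly seen thousands of times, even if it never saw this exact program.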
I'm also assuming here that the person posting it didn't try a number of times before GPT got it right; they could have cherry-picked.
More importantly, though, we still have to assume this output would require Python comprehension. We can't inspect the model as it works and don't know what is going on internally; it just appears to be a problem hard enough to require comprehension.
2. This was the original ChatGPT, i.e. the GPT3.5 model, pre-GPT4, pre-turbo, etc
3. This capability was present as early as GPT3, just the base model: you'd prompt it like "<python program> Program Output:" and it would predict the output
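To make that concrete, here's roughly what that prompting looks like (the toy program is mine, and the "davinci" engine name plus the old completions-style API call are assumptions about how it would have been invoked at the time, not something from the post):

    import openai

    # Toy program, invented for illustration
    program = "nums = [4, 7, 1]\nprint(sum(nums) * 2)\n"

    # The base model just sees the program text followed by a cue for the output
    prompt = program + "\nProgram Output:"

    # Greedy decoding, so the continuation is the model's single best guess
    resp = openai.Completion.create(
        engine="davinci",   # GPT-3 base model (assumed name)
        prompt=prompt,
        max_tokens=16,
        temperature=0,
    )
    print(resp["choices"][0]["text"])  # ideally "24"

No instruction-following involved; the base model is just continuing the text with the most plausible next tokens.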