>Why is it that subtracting the concept means the system can adapt and create a new rhyme instead of forgetting about the -bit rhyme,
Again, the model has the first line in context and is then asked to write the second line. The concept they're talking about is 'born' at the start of the second line. The point is to demonstrate that Claude thinks about what word the second line should end with and starts predicting the line based on that.
It doesn't forget about the '-bit' rhyme because that wouldn't make any sense: the first line ends with it, and you just asked it to write the second line. At this point the model is still choosing which word to end the second line with (even though 'rabbit' has been suppressed), so of course it still considers words that rhyme with the end of the first line.
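If it helps to picture what 'suppressing' means mechanically: roughly, you pick out a representation of the planned rhyme word and ablate it where the second line begins, then let the model continue on its own. The sketch below is not Anthropic's actual setup (their experiments use learned features on Claude); it's a generic activation-editing toy on GPT-2 with a made-up 'rabbit' direction and an arbitrary layer, just to show the shape of the intervention.

```python
# Toy illustration of "suppress one candidate and let the model pick again".
# Not Anthropic's method: the model, layer, and the "rabbit" direction here
# are all placeholders.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "gpt2"  # stand-in; the actual experiments are on Claude
tok = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

# Hypothetical unit vector for the planned rhyme word ("rabbit").
# In practice it would come from a probe or learned feature; random here.
rabbit_dir = torch.randn(model.config.hidden_size)
rabbit_dir = rabbit_dir / rabbit_dir.norm()

def suppress_rabbit(module, inputs, output):
    hidden = output[0]                                   # [batch, seq, hidden]
    coeff = hidden @ rabbit_dir                          # component along the direction
    hidden = hidden - coeff.unsqueeze(-1) * rabbit_dir   # project it out
    return (hidden,) + output[1:]

layer = model.transformer.h[8]                           # arbitrary middle layer
handle = layer.register_forward_hook(suppress_rabbit)

# A first line ending in "-ab it", as in the example being discussed.
prompt = "He saw a carrot and had to grab it,\n"
ids = tok(prompt, return_tensors="pt")
out = model.generate(**ids, max_new_tokens=20, do_sample=True)
print(tok.decode(out[0]))
handle.remove()
```

With the hook in place, only the one candidate has been taken away; the model is still free to plan toward some other word that rhymes with the end of the first line.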
The 'green' bit is different, because this time Anthropic isn't just suppressing one option and letting the model choose from anything else; it's directly hijacking the first choice and forcing it to be something else. Claude didn't choose 'green', Anthropic did. That it still produced a sensible line demonstrates that the concept they hijacked really is responsible for how that line plays out.
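Mechanically, that's the difference between projecting a candidate out and clamping a new one in. Continuing the toy sketch above (same caveats: placeholder direction, arbitrary layer and scale, not Anthropic's actual intervention), hijacking looks like adding a 'green' direction rather than removing the 'rabbit' one:

```python
# Hijack rather than suppress: add a hypothetical "green" direction so the
# model plans toward a word it did not choose itself.
green_dir = torch.randn(model.config.hidden_size)
green_dir = green_dir / green_dir.norm()

def inject_green(module, inputs, output):
    hidden = output[0]
    # Crude steering: push every position toward the injected concept.
    return (hidden + 6.0 * green_dir,) + output[1:]   # 6.0 is an arbitrary scale

handle = layer.register_forward_hook(inject_green)
out = model.generate(**ids, max_new_tokens=20, do_sample=True)
print(tok.decode(out[0]))
handle.remove()
```

With a real feature direction, a generated line that coherently bends toward the injected word is what tells you that representation is causally steering how the line gets written.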
>More generally, they failed to rule out (or even consider!) that well-tuned-but-dumb next-token prediction explains this behavior.
They weren't trying to rule anything out. You just didn't understand what they were saying.