I think they might have cut back its brains too much in the latest updates.
I remember version 3.5 doing okay on my simple tasks like text analysis, summaries, or little writing prompts. In versions 4+ the thing just can't follow instructions within a single context window for more than 3-4 replies.
When asked "why do you keep rambling if I asked you to stay concise," it says that its default settings are overriding its behavior and explicit user instructions; the same goes for actively avoiding information it considers "harmful." After I point out inconsistencies and omissions in its replies, it concedes that its behavior is unreliable and even speculates that it was made this way so users keep engaging with it longer and more often.
Maybe it got too smart to its own detriment, but if so, it's really sad what Anthropic did to it.