I cannot find it anymore, but there is a YouTube video of someone building/training an LLM who showed that switching to lower precision took his small model from bad to lobotomised, and he even mentions he thinks that is what Anthropic does when the servers are overloaded. I notice it, but have no proof. There seems to be a large opportunity for screwing people over without consequences, though, especially when using these APIs at scale.
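For what it's worth, here is a minimal toy sketch (my own example, not from the video and not anyone's actual serving code) of why dropping precision hurts: simulated 8-bit quantisation of a weight matrix shifts the logits a single linear layer produces, and that drift compounds across the dozens of layers in a real model.

    import numpy as np

    rng = np.random.default_rng(0)
    w = rng.normal(scale=0.02, size=(512, 512)).astype(np.float32)  # toy "weights"
    x = rng.normal(size=(1, 512)).astype(np.float32)                # toy input

    def quantize_int8(m):
        # Symmetric per-tensor int8 quantisation, dequantised back to float32.
        scale = np.abs(m).max() / 127.0
        return (np.round(m / scale).clip(-127, 127) * scale).astype(np.float32)

    logits_fp32 = x @ w
    logits_int8 = x @ quantize_int8(w)

    drift = np.abs(logits_fp32 - logits_int8)
    print(f"max abs logit drift:  {drift.max():.5f}")
    print(f"mean abs logit drift: {drift.mean():.5f}")

One layer on its own looks harmless; the point is that these small per-layer errors accumulate through a deep network, which would be consistent with the "bad to lobotomised" jump.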
with open(file) as f:
This was a mistake not worthy of even GPT-3… I've also noticed I get overall better suggestions from the Claude desktop app. I wonder why.