> the statistical model can miss the hidden rules that were a part of the thinking that went into the content that was used for training.
Makes sense. Hidden rules such as, "recommending a package works only if I know the package actually exists and I’m at least somewhat familiar with it."
Now that I think about it, this is pretty similar to cargo-culting.
LLMs don’t really “know” though. If you look at the recent Anthropic findings, they show that large language models can do math like addition, but they do it in a strange way, and when you ask the model how it arrived at the solution, it describes a method that is completely different from how it actually did it.