
> the statistical model can miss the hidden rules that were a part of the thinking that went into the content that was used for training.

Makes sense. Hidden rules such as, "recommending a package works only if I know the package actually exists and I’m at least somewhat familiar with it."

Now that I think about it, this is pretty similar to cargo-culting.




And cargo-culting is in fact exactly what happens when people act as LLMs.


LLMs don’t really “know,” though. If you look at the recent Anthropic findings, they show that large language models can do math like addition, but they do it in a strange way, and when you ask the model how it arrived at the solution, it describes a method that is completely different from how it actually did it.


That's the point. It's one of the implicit, real-world rules that were underlying the training set.


> Now that I think about it, this is pretty similar to cargo-culting.

In other news: the CTO just sent out an email stating we should be using AI every day as our "thought partner."


VC-culting actually works though, if you act like a unicorn people will invest.



