
While I agree overall, LLMs are doing pattern matching, just in a very complicated way.

You transform your training data into a very strange, high-dimensional space. Then, when you write an input, you effectively compute the distance between that input and the nearest points in that space.

So, in some sense, you pattern-match your input against the training data. Of course, in a way that is very non-intuitive for humans.
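
To make that concrete, here's a toy nearest-neighbour sketch of the idea. It is only an analogy: embed() and the cosine-similarity lookup below are made up for illustration, not how a transformer actually represents or retrieves anything.

    import zlib
    import numpy as np

    def embed(text, dim=64):
        # Toy stand-in for the learned representation: a deterministic
        # pseudo-random vector per text. A real model's "space" is implicit
        # in its weights and far more structured than this.
        rng = np.random.default_rng(zlib.crc32(text.encode()))
        return rng.standard_normal(dim)

    train_texts = ["tower of hanoi", "river crossing", "chess endgame"]
    train_vecs = np.stack([embed(t) for t in train_texts])

    def closest_match(query):
        # "Pattern matching": find the training point nearest to the input
        # (by cosine similarity) in the high-dimensional space.
        q = embed(query)
        sims = train_vecs @ q / (np.linalg.norm(train_vecs, axis=1) * np.linalg.norm(q) + 1e-12)
        return train_texts[int(np.argmax(sims))], float(sims.max())

    print(closest_match("a puzzle nobody has published before"))

The point of the toy is just that every input lands somewhere in that space and gets matched to something, whether or not that match looks related to us.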

Now, this doesn't necessarily imply claims like "models cannot solve new problems they have never seen before". We don't know whether our problem gets matched to something that looks completely unrelated to us but is close in that space.

So with your experiments, if the model is able to solve a new puzzle it has never seen before, you'll never know exactly why, but that doesn't rule out that the new puzzle was, in some sense, matched to previous data in the training set.



