
I guess the idea is that, by asking the model to do something that is inherently hard for it, we might learn something about the baseline smartness of each model, which could be considered a predictor of performance at other tasks too.

