
Wrong comment to respond to; if you can’t wait to reply, that might indicate it’s time to take a step back.

>Good thing they don't make sweeping declarations or say anything about that meaning narrow learning without transfer.

That's exactly what that previous quote means. Did you read the methodology? They train on a universal training set and then have to fine-tune it on a closely related training set for it to work. In other words, the first step is not good enough to transfer on its own and needs to be fine-tuned. In that context, the quote implies the fine-tuning pushes the model away from a generalizable one toward a narrow model that no longer works outside that specific application. Apropos of this entire discussion, it means the model does not perform well in novel domains. If it could truly "understand proteins in any definition," it wouldn't need to be retrained for each application. The word you used ('any') literally means "without specification"; the model needs to be specifically tuned to the protein family of interest.
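
To make the distinction concrete, here is a minimal sketch of the two-stage setup I'm describing, written against a PyTorch-style interface. The model, data loaders, and hyperparameters are hypothetical placeholders, not the actual pipeline from either paper; the point is only that there are two separate training stages, and the second one specializes the weights to one protein family.

    # Toy two-stage training: broad pretraining, then family-specific fine-tuning.
    # Everything here (model size, data, steps) is a stand-in, not the paper's setup.
    import torch
    import torch.nn as nn

    VOCAB = 25          # ~20 amino acids plus special tokens
    EMBED, HIDDEN = 64, 128

    class TinyProteinLM(nn.Module):
        """Toy sequence model standing in for a large protein language model."""
        def __init__(self):
            super().__init__()
            self.embed = nn.Embedding(VOCAB, EMBED)
            self.rnn = nn.GRU(EMBED, HIDDEN, batch_first=True)
            self.head = nn.Linear(HIDDEN, VOCAB)

        def forward(self, tokens):
            h, _ = self.rnn(self.embed(tokens))
            return self.head(h)

    def train(model, batches, lr, steps):
        opt = torch.optim.Adam(model.parameters(), lr=lr)
        loss_fn = nn.CrossEntropyLoss()
        for _ in range(steps):
            tokens = next(batches)                 # (batch, seq_len) of residue ids
            logits = model(tokens[:, :-1])         # predict the next residue
            loss = loss_fn(logits.reshape(-1, VOCAB), tokens[:, 1:].reshape(-1))
            opt.zero_grad(); loss.backward(); opt.step()

    def random_batches(seed):
        """Stand-in for real data: 'universal' vs. family-specific loaders."""
        gen = torch.Generator().manual_seed(seed)
        while True:
            yield torch.randint(0, VOCAB, (8, 50), generator=gen)

    model = TinyProteinLM()
    # Stage 1: pretrain on the broad, "universal" corpus.
    train(model, random_batches(seed=0), lr=1e-3, steps=100)
    # Stage 2: fine-tune on one protein family; the claim at issue is that this
    # step is what makes the model useful, and it also narrows the weights
    # toward that family rather than proteins in general.
    train(model, random_batches(seed=1), lr=1e-4, steps=100)

The second call is the crux: if the stage-1 model generalized on its own, you could skip it.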

You are quoting an entirely different publication in your response. You should use the paper I quoted from to refute my statement; otherwise this is the definition of cherry-picking. Can you explain why the two studies came to different conclusions? It sure seems like you're not reading the work to learn, just grasping at straws to be "right." I have zero interest in a conversation where someone jumps from one abstract to another just to argue rather than adding anything of substance.


