
By now, one can find thousands of hours of online discussion about popular papers such as "Attention is All You Need". It should be possible to generate something similar without using the paper itself as a source -- and I suspect that's what the AI does.

In other words: I suspect the output is heavily derivative of online discussions, not of the papers themselves.

Of course, the real proof would be to see the output for entirely new papers.



Many of the papers shown are much newer than "Attention is All You Need" (perhaps all of them?) and much less talked about (probably all of them, too).

It shouldn't be surprising that an LLM is able to understand a paper -- just upload one to Claude 3.5 Sonnet and see for yourself.
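
(To make "upload one" concrete, here is a minimal sketch using the Anthropic Python SDK's PDF document blocks; the filename and prompt are placeholders, and depending on SDK version PDF input may still require a beta header.)

    # Minimal sketch: ask Claude 3.5 Sonnet about a paper PDF via the
    # Anthropic Messages API. "paper.pdf" and the prompt are placeholders.
    import base64
    import anthropic

    client = anthropic.Anthropic()  # expects ANTHROPIC_API_KEY in the environment

    with open("paper.pdf", "rb") as f:
        pdf_b64 = base64.standard_b64encode(f.read()).decode("utf-8")

    message = client.messages.create(
        model="claude-3-5-sonnet-20241022",
        max_tokens=1024,
        messages=[{
            "role": "user",
            "content": [
                # Attach the paper as a base64-encoded PDF document block.
                {"type": "document",
                 "source": {"type": "base64",
                            "media_type": "application/pdf",
                            "data": pdf_b64}},
                {"type": "text",
                 "text": "Summarize this paper's main contribution in a few sentences."},
            ],
        }],
    )
    print(message.content[0].text)

Of course, that only shows the model can produce a plausible summary; it doesn't settle whether it "understood" the paper or is drawing on discussions of it.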


Of course, "Attention is All You Need" was one of the most discussed papers in our field; there are entire podcast episodes dedicated to it, so it should be easy for an LLM to generate a new episode about it.

As for the other papers: assuming they were impactful, they must have been discussed by others who highlighted what their contribution is, what is controversial, and so on.

In other words: the LLM doesn't have to "understand" the paper; it can simply parrot what others have been saying/writing about it.

(For example: a podcast about Google Illuminate could use our brief exchange to discuss the possible merits of this technology.)



