By now, we can find thousands of hours of discussions online about popular papers such as "Attention is All You Need". It should be possible to generate something similar without using the paper as a source -- and I suspect that's what the AI does.
Of course, "Attention is All You Need" was one of the most discussed papers in our field; there are entire podcast episodes dedicated to it, so it should be easy for a LLM to create a new one.
As for all the other papers: assuming they were impactful, others must have referred to them, highlighting what their contribution is, what is controversial about them, and so on.
In other words: the LLM doesn't have to "understand" the paper; it can simply parrot what others have been saying/writing about it.
(For example: a podcast about Google Illuminate could use our brief exchange to discuss the possible merits of this technology.)
In short: I suspect that the output is heavily derivative of online discussions, and not based on the papers themselves.
Of course, the real proof would be to see the output for entirely new papers.