
One problem I see with this is legitimizing LLM-extracted content as canon. The realistic human speech masks the fact that the LLM might be hallucinating or highlighting the wrong parts of a book/paper as important.


Happens in the very first example:

[Attention is All You Need - 1:07]

> Voice A: How did the "Attention is All You Need" paper address this sequential processing bottleneck of RNNs?

> Voice B: So, instead of going step-by-step like RNNs, they introduced a model called the Transformer - hence the title.

What title? The paper is entitled "Attention is All You Need".

People are fooling themselves. These are stochastic parrots cosplaying as academics.


Agreed. Another example in the first minute of the "Attention is all you need" one.

"[Transformers .. replaced...] ...the suspects from the time.. recurrent networks, convolution, GRUs".

GRU has no place being mentioned here. It's effectively hallucinated, though not strictly wrong: just a misdirecting piece of information that isn't in the original source.

GRU gives off a Ben Kenobi vibe: it died out around the time this paper was published.

But it's also kind of misinforming the listener to state this. GRUs are a subtype of recurrent network. It's a small thing, but I don't think an actual professor would mention GRUs here. They're not relevant (GRUs aren't mentioned in the paper itself), and mentioning both RNNs and GRUs is a bit like saying "Yes, uses both Ice and Frozen Water".

So while the conversational style gives me podcast-keeps-my-attention vibes, I feel an uncanny valley fear. No single small, weird decision is going to rock my world, but it slightly distorts what's important. Yes, a human could list GRUs just the same, and most professors probably make this mistake or others.

But it just feels like this is professing to be the next, all-encompassing thing. I don't see how you can do that and launch this while knowing it produces content like that. At least with humans, you can learn from five humans and take the overall picture: if only one mentions GRUs, you move on. If there's one AI source, or AI sources that all tend to make the same mistake (e.g. continuing to list an inappropriate item to keep the conversational style going), that's very different.

I don't like it.


It then explains, right afterwards, that the key thing the transformer does is rely on a mechanism called attention. It makes more sense in that context, IMO.


I recently listened to a great episode of "This American Life" [1] that talked about this very subject. It was released in June 2023, which might be ancient history in AI terms, but it discusses whether LLMs are just parrots. It's a nice episode intended for general audiences, so it's pretty enjoyable, and experts are interviewed, so it also comes across as authoritative.

[1] https://www.thisamericanlife.org/803/greetings-people-of-ear...


I had the exact same thought: "Did this summary misrepresent the title?" Indeed, it did. However, I thought the end-to-end implementation was decent.

> These are stochastic parrots cosplaying as academics.

LOL


You left this out:

"The transformer processes the entire sequence all at once by using something called self attention"


This is the very next sentence, so it is a little odd that "hence the title" comes before, and not after, "...using something called self attention."

My take is that these are nitpicks, though. I can't count the number of podcasts I've listened to where the subject is my area of expertise and I find mistakes or misinterpretations at the margins, while basically 90% or more of the content is accurate.


Noticed this as well. But on second thought: That's how humans talk - far from perfect. :)


In a sense they are parrots. But the comparison misses cases where LLMs are good and parrots are useless.


The top list on Apple Podcasts is full of real humans intentionally lying or manipulating information, which makes me worry much less about computer-generated lies.


Even if society is kind of collapsing in that direction, people are still less likely to listen to a random influencer's take on biochemistry than to a professor of biochemistry. These LLMs know about as much about the topic they're summarizing as a toddler does, and they should be treated with just as much skepticism.

There are hacks everywhere, but humans who lie sometimes face consequences (libel/slander) that we can control. Computers are generally thought of as unbiased and "smart", so if they lie, people are more likely to listen.


We'll have to see how it holds up for general books. The books they highlighted are all very old and very famous, so the training set of whatever LLM they use definitely has a huge amount of human-written content about them, and the papers are all relatively short.


There are only so many hours in the day, so giving people the choice to consume content in this form doesn’t seem all that bad.

It would be good to lead off with a disclaimer.


Frankly, humans also sometimes remember things incorrectly or pay excess attention to the less significant topics while discussing a book.

In this regard, LLMs are imperfect like ourselves, just to a different extent.


We can find thousands of hours of discussions about popular papers such as "Attention is All You Need". It should be possible to generate something similar without using the paper as a source -- and I suspect that's what the AI is doing here.

In other words: it's not summarising the paper in a clever way, it is summarising all the discussions that have been made about it.



