
> Your argument, on the other hand, is indistinguishable from cynical AI opinions going back decades. It could be made any time. Zero new insight. Zero predictive capacity.

Pointing out logical fallacies?

Lol.





> Historical "technological progress" can't be used as argument for any particular technology.

The history here is billions of years of natural information system evolution: metabolic, RNA, DNA, and protein networks; epigenetic, intracellular, intercellular, and active membrane signaling; nerve precursors, peptides, hormones, neurons, ganglia, nerve nets, brains.

Thousands of years of human information systems. Hundreds of years of technological information systems. Decades of digital information systems. Now, in just the last few years, progress year to year is unlike any seen before.

Significant innovations are being reported virtually every day.

Yes, track records carry weight. Especially when there is no good reason to expect a break, and every tangible reason to believe nothing is slowing down, right up to today.

"Past is not a predictor of future behavior" is about asset gains relative to asset prices in markets where predictable gains have had their profitability removed by the predictive pricing of others. A highly specific feedback situation making predicting asset gains less predictable even when companies do maintain strong predictable trends in fundamentals.

It is a narrow, specific, second-order effect.

It is the worst possible argument for anything outside of those special conditions.

Every single thing you have ever learned was predicated on the past having strong predictive qualities.

You should understand what an argument means before throwing it into contexts where its preconditions don't hold.

> Right now, if we are talking about AI, we're talking about specific technologies, which may just as well fail and remain inconsequential in the grand scheme of things, like most technologies, most things really, did in the past. Even more so since we don't understand much anything in either human or artificial cognition. Again and again, we've been wrong about predicting the limits and challenges in computation.

> Your argument [...] is indistinguishable from cynical AI opinions going back decades. It could be made any time. Zero new insight. Zero predictive capacity.

To be clearer: nobody could tell when you wrote that just by reading it. It isn't an argument; it's a free-floating opinion. And you have not made it any more relevant today than it would have been in all the decades, and through all the technological transitions, up until now. Your opinion was equally "applicable" then, and no less wrong.

This is what "Zero new insight. Zero predictive capacity" refers to.

> Substantive negative arguments about AI progress have been made. See "Perceptrons" by Marvin Minsky and Seymour Papert for an example of what a solid negative argument looks like. It delivered insights. It made some sense at the time.

Here you go:

https://en.wikipedia.org/wiki/Perceptrons_(book)



