
I have not yet seen AI do a critical evaluation of data sources. AI will contradict primary sources if the contradiction is more prevalent in the training data.

Something about the whole approach is bugged.

My pet peeve: "Unix System Resources" as an explanation for the /usr directory is a term that did not exist until the turn of the millennium (rumor has it a c't journalist made it up in 1999), but AI will retcon it into the FHS (5 years earlier) or into Ritchie/Thompson/Kernighan (27 years earlier).



> Something about the whole approach is bugged.

The bug is that LLMs are fundamentally designed for natural language processing and prediction, not logic or reasoning.

We may get to actual AI eventually, but an LLM architecture either won't be involved at all, or it will act as one part of the system, mimicking the language center of a brain.



