
But the people who would apply these things are humans, who are capable of making their own decisions, informed and constrained by societal norms and laws.

The idea that "x is technically possible, therefore x is inevitable" - the favored line of the tech oligarchs who are themselves directly furthering x - is sadly cargo-culted by many rank-and-file technologists. Apply the same defective reasoning to, say, nuclear weapons, and we would all be dead by now.

It is possible for humans to agree that something should not be done, and to prevent it from being done. This is especially true of these LLMs, which require ingesting massive amounts of human work (ignoring copyright along the way, by the by) and then massive amounts of computation to process it.

That we could choose to do something about this should not be controversial, regardless of the motivations of those driving AI.
