
I'm calling it a rounding error in comparison to a future advanced AI, and relative to the impact of the cultures, laws, and economies we're embedded in. And yes, those are still responsible for countless deaths - so imagine how bad it would be if we had to contend with alien minds, whether space aliens or AIs.


> I'm calling it a rounding error in comparison to a future advanced AI

Maybe compared to what you imagine future AI will be like, but we don't even know what AI will be capable of in 2024. My counterpoint is that if there is a sensation, emotion, or choice notable enough, surely it has been described in words many times over. Everything is in the text corpus.

What makes humans superior to AI is not language mastery but feedback. We get richer, more immediate feedback, and we get it from the physical world, from our tools, and from other people. AI has nobody to ask except us; until recently it didn't get to use tools, and embodiment isn't there yet.

Another missing ability in current-gen LLMs is continual learning. LLMs can only do RAG and shuffle information around in limited-length prompts. There is no proper long-term memory except the training process itself; not even fine-tuning is good enough to learn new abilities.

So the main issues of AI are memory and integration into the environment; they are already super-aligned to humanity by learning to model text. We already know LLMs are great at simulating opinion polls [1] - you just have to prompt the model with a bunch of diverse personas. They are aligned to each and every type of human.

[1] Out of One, Many: Using Language Models to Simulate Human Samples https://www.cambridge.org/core/journals/political-analysis/a...
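The persona-prompting idea above can be sketched in a few lines. This is a hypothetical illustration, not the paper's actual code: the helper names (`persona_prompt`, `tally`) and the persona fields are made up, and the actual LLM call is stubbed out since any API would do.

```python
from collections import Counter

def persona_prompt(persona: dict, question: str) -> str:
    # Condition the model on a first-person backstory before the poll question.
    backstory = (
        f"I am a {persona['age']}-year-old {persona['occupation']} "
        f"from {persona['region']}. Politically, I lean {persona['leaning']}."
    )
    return f"{backstory}\nQuestion: {question}\nAnswer (yes/no):"

def tally(answers: list[str]) -> dict:
    # Aggregate one answer per simulated persona into poll-style percentages.
    counts = Counter(a.strip().lower() for a in answers)
    total = sum(counts.values())
    return {option: 100.0 * n / total for option, n in counts.items()}

personas = [
    {"age": 34, "occupation": "teacher", "region": "Ohio", "leaning": "liberal"},
    {"age": 61, "occupation": "farmer", "region": "Texas", "leaning": "conservative"},
]
question = "Should the city fund more public transit?"

prompts = [persona_prompt(p, question) for p in personas]
# In a real run, each prompt would be sent to the LLM; here the answers are stubbed.
answers = ["yes", "no"]
print(tally(answers))  # {'yes': 50.0, 'no': 50.0}
```

The point is that the diversity lives in the personas, not the model: one model, many conditioning backstories, one aggregated "poll".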



