But why would we do that even if we could? Making a very expensive machine act like a human is essentially useless; there is no shortage of humans on Earth. It wouldn't even be a great model of a human brain.
The reason we are doing all this is for its potential uses: writing letters, writing code, helping customers, finding information, and so on. Even AGI is not about making artificial humans; it is about solving general problems (that's the "G").
And even if we could make artificial humans, there would be a philosophical problem. Since the idea is to make these AIs work for us, if we make them as human-like as possible, isn't that slavery? It would be like making artificial meat but insisting that the meat-making machine be conscious so that it can feel being slaughtered.
Because right now, the major reason people still deny that LLMs are "intelligent" is that they have no connection to, or understanding of, the things they are saying. You can make one say 1+1=2, but it inherently has no real concept of what one thing or two things are. Its neural network has simply learned the weights that give the most statistically likely answer based on what it was trained on, i.e. text.
So instead of training it that way, the network could potentially be trained to "perceive" or "model" reality beyond the digital world. The only way we know of, or have enough experience and data for, is through our own kind of experience. An embodied AI is, I think, required for anything to actually grasp real concepts, or at least come as close to them as possible.
And without that inherent understanding, no matter how useful a model is, it will never be a "general" intelligence.
It makes sense, then, to have an embodied AI, i.e. a robot. Self-driving cars count.
But it doesn't have to be modeled after humans. The purpose of humans, if we can call it that, is to make more of themselves, like all forms of life. That's not what we build robots for; we don't even give robots the physical ability to do that. Giving them a human mind (assuming we could) would not be a good fit. Wrong body, wrong purpose.