I find the term AGI quite misleading. It promotes the idea that the wonders of the human mind (knowledge, feelings, sensations, and so on) can be quantified and turned into something algorithmic. I think a lot of AI researchers could do with a basic introduction to the philosophy of science and the different forms of knowledge (episteme, phronesis, techne), and perhaps also to the structure of scientific revolutions (Thomas Kuhn, notably). The whole paradigm of AI presupposes that generalised intelligence can exist without biology, feelings, or a body and senses. This is one of the reasons AI reached the so-called "AI winter" of the 1970s, when researchers boiled language and human knowledge down to algorithmic manipulation of symbols.
As an AI researcher who did a degree in philosophy, I think most of the things you mentioned are pretty irrelevant. How to build an AI with "feelings, sensations, etc." is indeed a mystery, but we don't need to aim for that; we just need to aim for intelligence, which can be defined without reference to consciousness or qualia. Similarly, if we can build a working, fully automated Chinese room that passes the Turing Test, then whether or not it fits Searle's definition of "understanding" is a moot point (especially since his understanding of "understanding" is pretty weird).
More generally, although we should be heavily inspired by human intelligence when designing machine intelligence, it's a mistake to use the way humans think to define intelligence. Kuhn's account of scientific revolutions, for example, is primarily descriptive, not prescriptive. We can certainly imagine possible setups where science doesn't proceed like that, which may well be superior. Science isn't defined by revolutions, but by experimentally searching for the truth. In the same way, knowledge isn't defined by having a body, but by having beliefs which correspond with the state of the world.
Replace "Chinese" with "logarithms" and the Chinese Room seems ridiculous. Searle could produce logarithms without knowing how to compute them; that doesn't mean the knowledge to produce a logarithm isn't contained in the room -- just that Searle isn't where that information is encoded. By the same logic, a computer can't perform any operation, because the RAM alone can't perform operations.
EDIT: John Searle’s Chinese Room argument is a good (and fun) place to start: https://en.wikipedia.org/wiki/Chinese_room