That's a good question. I think I might classify that as solving a novel problem. I have no idea whether LLMs can currently do that consistently. Maybe they can.
The idea that "understanding" could be modeled with general-purpose transformers and the connections between words doesn't sound absolutely insane to me.
How do you define "understanding a concept" - what do you get from a system that "understands" a concept versus one that doesn't?