I agree with you to a point, but I think the only reason it can't understand the context of a system is that it hasn't been trained on that system's code and documentation, which is obviously coming soon.
I'm not sure training these models on code and documentation will make that much of a difference. These models struggle significantly with subtlety, relevance, and correctness. They also don't have a theory of their own knowledge or confidence, and so tend to "hallucinate" and put out confidently-worded nonsense, especially on complex and nuanced topics.
A big part of my job in software is having a sharp grasp of my own ignorance, the ability to weigh a variety of tradeoffs, and the ability to convey my confidence in my own abilities and my team's. I'm not sure this is possible for this generation of AI.
The hallucination part comes down to a lack of constraints. The AI can recognize constraints it's given, but it can't recognize what it doesn't know when it lacks context.
Prohibit all physical meetings. Force all communication through mediums that can be pipelined. Feed the AI everything (accounting, contracts, law, etc.). The work will then be to architect the AI to produce the optimal response.
The first company to figure this out will be too far ahead. I don't think anyone will come anywhere close to competing, including nation states.