You're asking for a GPU die at least as large as NVIDIA's TU102, which cost $1k in 2018 when paired with only 11GB of RAM (because $1k couldn't get you a fully-enabled die with all twelve memory channels and 12GB of RAM). I think you're off by at least a factor of two in your cost estimates.
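For context, capacity on those cards follows directly from bus width: each 32-bit GDDR channel drives one chip. A minimal sketch of the arithmetic, assuming the 1GB (8Gb) GDDR6 parts typical of that era:

```python
# VRAM capacity from bus width: one GDDR chip per 32-bit channel.
# Chip density assumed: 1 GB (8 Gb) GDDR6, typical for 2018-era cards.

def vram_gb(bus_width_bits: int, gb_per_chip: int = 1) -> int:
    channels = bus_width_bits // 32  # one chip per 32-bit channel
    return channels * gb_per_chip

# Fully-enabled TU102 (384-bit bus) -> 12 GB
print(vram_gb(384))  # 12
# RTX 2080 Ti ships with one channel disabled (352-bit) -> 11 GB
print(vram_gb(352))  # 11
```

Disabling a channel is how a partially-defective die still gets sold, which is why the cheaper card loses a gigabyte.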
Though Intel should also identify, say, the top-100 finetuners and just send cards to them for free, on the down low. That would create some market pressure.
Intel has Xeon Phi, which was a spin-off of their first attempt at a GPU, so they already have a lot of tech in place they can reuse. They don't need to go with GDDRx/HBMx designs that require large dies.
I don't want to prolong this discussion, but maybe you don't realise that some of the people who replied to you either design hardware for a living or have been in the hardware industry for more than 20 years.
It would be interesting if those saying that a regular GPU with 128GB of VRAM cannot be made would explain how Qualcomm was able to make this card. It is not a big stretch to imagine a GPU with the same memory configuration. Note that Qualcomm did not use HBM for this.
Somehow Apple did it with the M3/M4 Max, likely designed by folks who are also on HN. The question is how many of those years spent designing HW were also spent educating oneself on the latest and best ways to do it.
Even LPDDR requires a large die. It only moves things from technologically impossible to merely economically impractical. A 512-bit bus is still very inconveniently large for a single die.
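To put numbers on why anyone bothers with a 512-bit bus despite the die-area cost: peak bandwidth scales linearly with pin count. A quick sketch, assuming LPDDR5X at 8533 MT/s per pin (the rate Apple uses on the M4 Max):

```python
# Peak memory bandwidth: pins * transfers/sec / 8 bits-per-byte.
# Data rate assumed: LPDDR5X at 8533 MT/s per pin (as on Apple's M4 Max).

def peak_bw_gbs(bus_width_bits: int, mts: float) -> float:
    return bus_width_bits * mts * 1e6 / 8 / 1e9

print(round(peak_bw_gbs(512, 8533)))  # ~546 GB/s, the M4 Max figure
print(round(peak_bw_gbs(128, 8533)))  # ~137 GB/s on a modest 128-bit die
```

So a cheap die with a narrow bus simply can't reach GPU-class bandwidth no matter how much LPDDR capacity you hang off it; the 512 pads around the die edge are the price of the bandwidth, not the capacity.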