No, you would need to spend “eye watering amounts of compute” to do it, similar to hiring a lot of developers to produce the code. Compiling the code into an executable format is a tiny part of that cost.
I still think of millions of dollars of GPU spend crunching away for a month as a compiler.
A very slow, very expensive compiler - but it's still taking the source code (the training material and model architecture) and compiling that into a binary executable (the model).
Maybe it helps to think about this at a much smaller scale. There are plenty of interesting machine learning models which can be trained on a laptop in a few seconds (or a few minutes). That process feels very much like running a compiler - it often takes less time than building a lot of large C++ projects.
Running on a GPU cluster for a month is the exact same process, just scaled up.
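To make the laptop-scale version concrete, here's a minimal sketch (my own toy example, not anything from the thread): a single-neuron perceptron "compiled" from a truth table. The "source code" is the training data plus the architecture; the "binary" is the learned weights. This finishes in well under a second.

```python
# Toy illustration of "training as compilation": learn the OR function
# with a single perceptron. Pure Python, no dependencies.

def train_perceptron(data, epochs=20, lr=0.1):
    """Train a single-neuron perceptron; returns the learned weights and bias."""
    w = [0.0, 0.0]
    b = 0.0
    for _ in range(epochs):
        for x, target in data:
            # Step activation: fire if the weighted sum crosses zero.
            pred = 1 if (w[0] * x[0] + w[1] * x[1] + b) > 0 else 0
            err = target - pred
            # Perceptron update rule: nudge weights toward the target.
            w[0] += lr * err * x[0]
            w[1] += lr * err * x[1]
            b += lr * err
    return w, b

# The "source code": the OR truth table.
OR_DATA = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]

w, b = train_perceptron(OR_DATA)

def predict(x):
    return 1 if (w[0] * x[0] + w[1] * x[1] + b) > 0 else 0

print([predict(x) for x, _ in OR_DATA])  # → [0, 1, 1, 1]
```

Swap the four-row truth table for a few terabytes of text and the single neuron for billions of parameters, and you have the month-long GPU-cluster version of the same loop.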
Huge projects like Microsoft Windows take hours to compile, and that process often runs on expensive clusters, but it's still considered compilation.
That’s the dirty secret of why GPT-4 is better. But they’ll tell you it has to do with chaining GPT-3s together, more fine-tuning, etc. They go to poorer countries and recruit people to work on training the AI.
Not to mention all the uncompensated work of humans around the world who put their content up on the Web.