Yep. Any large GenAI image model (beyond SD 1.5) is hideously slow on Macs, irrespective of how much RAM you cram in - whereas I can spit out a 1024x1024 image from the Flux.1 Dev model in ~15 seconds on an RTX 4090.
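
For reference, a minimal sketch of that kind of run using Hugging Face diffusers' FluxPipeline (the prompt and step count are just placeholders, and it assumes you have access to the gated FLUX.1-dev weights and a CUDA GPU):

```python
# Sketch: generate a 1024x1024 image with Flux.1 Dev via Hugging Face diffusers.
# Assumes a CUDA GPU (e.g. an RTX 4090) and access to the gated
# "black-forest-labs/FLUX.1-dev" checkpoint on the Hugging Face Hub.
import torch
from diffusers import FluxPipeline

pipe = FluxPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-dev",
    torch_dtype=torch.bfloat16,      # bf16 weights to keep memory use down
)
# Offload idle components to system RAM so the pipeline fits in 24 GB of VRAM.
pipe.enable_model_cpu_offload()

image = pipe(
    "a photo of a red fox in a snowy forest",  # placeholder prompt
    height=1024,
    width=1024,
    num_inference_steps=50,          # fewer steps trades quality for speed
    guidance_scale=3.5,
).images[0]
image.save("fox.png")
```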
A 4080 won't do video generation due to its limited VRAM. The GPU doesn't have to be as fast there - it can be 5x slower and still be way faster than a CPU. And Intel can iterate from there.