What keeps rendering on CPUs now is entirely the cost/perf of access to larger amounts of VRAM. GPUs are strictly better in almost every way for rendering (we could argue about technical precision, FP calculations, etc., but with modern cards those arguments are largely semantics: the output can be accurate to the point that no human watching for entertainment purposes will be able to spot any physical inaccuracies arising from a GPU render vs. a CPU render), except that the large amounts of VRAM needed are still quite expensive.
But that's already been changing, and we are seeing studios move to fully GPU-based pipelines. Wylie Co, a major visual effects company (Dune Part 1 and 2, Marvel movies, The Last of Us, a bunch of others), is now a 100% GPU shop. The trend is towards more and more GPU rendering, not less.
With AI providing another strong incentive towards increasing the amount of VRAM on GPUs, I don't see any reason to believe that trend will reverse.
I'm not sure how true that is anymore. From the outside it seems they're at least moving to a CPU/GPU hybrid (which makes a lot of sense), judging by new features landing in RenderMan that continue to add more GPU support (like XPU).
Hard to know without getting information from people at Pixar really.
Not sure how much sense it would make for Pixar to spend a lot of engineering hours on things they wouldn't touch in their own rendering pipeline. As far as I know, most of the feature development comes from their own rendering requirements rather than from outside customers.
I expect GPU hardware to specialize, like Google's TPU. The TPU feels like ARM for these AI workloads: once you start running them at scale, you care about the cost/perf tradeoff for most use cases.
> CPU/GPU share the same RAM AFAIK.
This depends on the GPU. I believe Apple has unified memory, but most GPUs, from my limited experience writing kernels, have their own memory. CUDA pretty heavily has a device memory vs. host memory abstraction.
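To make that concrete, here's a minimal toy sketch (not from any real renderer) of what CUDA's host/device split looks like: everything the kernel touches has to be explicitly allocated in device VRAM and copied across the bus, which is exactly why VRAM capacity becomes the constraint for large scenes.

    // toy example: doubling an array on the GPU via explicit host<->device copies
    #include <cuda_runtime.h>
    #include <stdio.h>

    __global__ void scale(float *x, int n) {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < n) x[i] *= 2.0f;          // runs on the GPU, sees only device memory
    }

    int main(void) {
        const int n = 1 << 20;
        size_t bytes = n * sizeof(float);

        float *h_x = (float *)malloc(bytes);          // host (CPU) RAM
        for (int i = 0; i < n; i++) h_x[i] = 1.0f;

        float *d_x;
        cudaMalloc(&d_x, bytes);                      // separate device (GPU) VRAM
        cudaMemcpy(d_x, h_x, bytes, cudaMemcpyHostToDevice);   // explicit copy over the bus

        scale<<<(n + 255) / 256, 256>>>(d_x, n);
        cudaMemcpy(h_x, d_x, bytes, cudaMemcpyDeviceToHost);   // copy results back

        printf("h_x[0] = %f\n", h_x[0]);              // prints 2.0
        cudaFree(d_x);
        free(h_x);
        return 0;
    }

On a unified-memory system like Apple Silicon that copy step effectively disappears, which is the distinction being made above.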
Customers like Pixar could probably push this even further with a more recent Nvidia rack and Mellanox networking. Networking a couple of Mac Studios over Thunderbolt doesn't have a hope of competing at that scale.
I wonder if we’ll end up in a situation like rendered movies, where big studios like Pixar use CPUs (not GPUs) to render their movies due to the cost/perf (and access to larger amounts of RAM).
https://news.ycombinator.com/item?id=25616372