I haven't done any shader programming, but can we even say that about those languages? It might just be my own inexperience talking, but for anything more complicated than matrix multiplication the innards of a GPU seem just as opaque as a CPU's.
OpenGL started out as a pretty high-level API, and 1.0 certainly doesn't map closely to modern hardware at all. But APIs and hardware have moved closer together over time, and stuff like CUDA and Vulkan use models that are a pretty close match to the hardware they run on. When writing CUDA you can reasonably estimate the number of cycles an operation will take, and benchmarking will agree, unlike on CPUs, which have become so non-deterministic that they are much harder to reason about.
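To make that concrete, here's a minimal sketch of the kind of microbenchmark that checks this (my own illustration, not something from the thread): a single thread runs a chain of dependent FMAs and reads the SM cycle counter before and after. Because each FMA consumes the previous result, the chain executes serially, so elapsed cycles divided by chain length should land close to the dependent-issue latency of an FP32 FMA, which is around 4 cycles on recent NVIDIA architectures.

    #include <cstdio>
    #include <cuda_runtime.h>

    // Length of the dependent FMA chain (arbitrary, just needs to be
    // long enough to dwarf the clock64() overhead).
    constexpr int N_ITER = 1024;

    __global__ void fma_chain(float a, float b, float *out, long long *cycles) {
        float x = a;
        long long start = clock64();       // SM cycle counter before the chain
        #pragma unroll
        for (int i = 0; i < N_ITER; ++i) {
            x = fmaf(x, b, 0.5f);          // each FMA depends on the previous x
        }
        long long stop = clock64();        // SM cycle counter after the chain
        *out = x;                          // keep the result live so the loop
                                           // isn't optimized away
        *cycles = stop - start;
    }

    int main() {
        float *d_out;
        long long *d_cycles;
        cudaMalloc(&d_out, sizeof(float));
        cudaMalloc(&d_cycles, sizeof(long long));

        // One block, one thread: no scheduling noise from other warps.
        fma_chain<<<1, 1>>>(1.0f, 1.000001f, d_out, d_cycles);
        cudaDeviceSynchronize();

        long long cycles;
        cudaMemcpy(&cycles, d_cycles, sizeof(long long), cudaMemcpyDeviceToHost);
        printf("cycles per FMA: %.2f\n", (double)cycles / N_ITER);

        cudaFree(d_out);
        cudaFree(d_cycles);
        return 0;
    }

The point is that a number like this comes out stable run to run; try the same trick on a modern out-of-order CPU and caches, branch prediction, and frequency scaling will smear it all over the place.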
That said, I wouldn't look to those as examples of how to design a good "low-level" CPU language, as CPUs and GPUs solve very different problems.