The conversion is pretty expensive, and very sensitive to numerical errors. Okay as a preprocessing step, but I'd strongly advise against doing it in a shader.
If you store the image in Lab coordinates, then converting back to RGB is two 3×3 matrix-vector multiplications and three cubes. The hue shift would be another 2×2 matrix-vector multiplication if you have (cos θ, sin θ). Is that that expensive?
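For concreteness, a minimal C sketch of that path (the struct names and the hue-shift helper are just illustrative; the matrices are the ones from Ottosson's Oklab reference implementation):

```c
typedef struct { float L, a, b; } Lab;
typedef struct { float r, g, b; } RGB;

/* Oklab -> linear sRGB: one 3x3 matrix, three cubes, another 3x3 matrix.
   Constants as published in Ottosson's Oklab reference code. */
static RGB oklab_to_linear_srgb(Lab c)
{
    float l_ = c.L + 0.3963377774f * c.a + 0.2158037573f * c.b;
    float m_ = c.L - 0.1055613458f * c.a - 0.0638541728f * c.b;
    float s_ = c.L - 0.0894841775f * c.a - 1.2914855480f * c.b;

    float l = l_ * l_ * l_;
    float m = m_ * m_ * m_;
    float s = s_ * s_ * s_;

    return (RGB){
        +4.0767416621f * l - 3.3077115913f * m + 0.2309699292f * s,
        -1.2684380046f * l + 2.6097574011f * m - 0.3413193965f * s,
        -0.0041960863f * l - 0.7034186147f * m + 1.7076147010f * s,
    };
}

/* Hue shift: rotate the (a, b) chroma vector by theta, i.e. one 2x2 matrix
   if (cos theta, sin theta) are precomputed. */
static Lab hue_shift(Lab c, float cos_t, float sin_t)
{
    return (Lab){ c.L, c.a * cos_t - c.b * sin_t, c.a * sin_t + c.b * cos_t };
}
```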
(Obviously preprocessing would be a lot better if you can get away with it.)
I think it changes the nature of this technique from a cheap, simple trick – just one low-precision matmul – to a less general, more complex computation that's many times more expensive. To me that starts to feel wasteful, even if GPUs can churn through it anyway. If you're merely implementing a color tweak in a game, you can come up with plenty of simpler formulas that look nice.
I've written Oklab and Okhsl implementations — that cube amplifies errors greatly, and needing a full-precision cbrt on the way in makes it effectively a one-way trip for real-time graphics.
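For reference, a sketch of the direction that needs the cube root (linear sRGB → Oklab), again with the matrices from Ottosson's reference implementation; this is the part that's awkward to do per pixel at low precision:

```c
#include <math.h>

typedef struct { float L, a, b; } Lab;
typedef struct { float r, g, b; } RGB;

/* Linear sRGB -> Oklab: the forward direction needs three cube roots,
   which is where half-precision shader math starts to hurt. */
static Lab linear_srgb_to_oklab(RGB c)
{
    float l = 0.4122214708f * c.r + 0.5363325363f * c.g + 0.0514459929f * c.b;
    float m = 0.2119034982f * c.r + 0.6806995451f * c.g + 0.1073969566f * c.b;
    float s = 0.0883024619f * c.r + 0.2817188376f * c.g + 0.6299787005f * c.b;

    float l_ = cbrtf(l);
    float m_ = cbrtf(m);
    float s_ = cbrtf(s);

    return (Lab){
        0.2104542553f * l_ + 0.7936177850f * m_ - 0.0040720468f * s_,
        1.9779984951f * l_ - 2.4285922050f * m_ + 0.4505937099f * s_,
        0.0259040371f * l_ + 0.7827717662f * m_ - 0.8086757660f * s_,
    };
}
```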
I have a feeling it would be good to do these sorts of transformations in a perceptually uniform color space anyway. Ottosson has some examples of color blending in different color spaces: https://bottosson.github.io/posts/colorwrong/#comparisons
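Roughly what blending in Oklab amounts to (a hypothetical `oklab_mix` helper; the Oklab↔RGB conversions sit on either side of it, once per endpoint):

```c
typedef struct { float L, a, b; } Lab;

/* Blending in a perceptually uniform space is just a component-wise lerp
   between the two Oklab endpoints. */
static Lab oklab_mix(Lab x, Lab y, float t)
{
    return (Lab){
        x.L + t * (y.L - x.L),
        x.a + t * (y.a - x.a),
        x.b + t * (y.b - x.b),
    };
}
```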