OpenGL has a model of the hardware pipeline that is quite old. A lot of things that are expressed as OpenGL state are now actually implemented in software, as part of the final compiled shader on the GPU. For example, GLSL code does not define the data format in which vertex attributes are stored in their buffers; that is set when providing the attribute pointers. The driver then has to put an appropriate decoding sequence for the buffer into the shader machine code. Similar things happen for fragment shader outputs and blending these days. This can lead to situations where you're in the middle of a frame and perform a state change that pulls the rug out from under the shader variants the driver has created for you so far. The driver then has to go off and rewrite and re-upload shader code before the command you actually requested can run.
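A conceptual sketch of what this looks like from the driver's side (plain Python, not real driver code; the class and names are invented for illustration): compiled shader variants are cached under a key that includes state the GLSL source never mentions, such as the vertex attribute format, so changing that state mid-frame misses the cache and forces a recompile of the "same" shader.

```python
# Hypothetical model of a driver's shader variant cache.
# The cache key includes non-GLSL state (here: the vertex attribute
# format), so a mid-frame format change triggers a recompile.

class Driver:
    def __init__(self):
        self.cache = {}    # (shader_source, attrib_format) -> machine code
        self.compiles = 0  # how many actual compilations happened

    def draw(self, shader_source, attrib_format):
        key = (shader_source, attrib_format)
        if key not in self.cache:
            # Patch a decode sequence for this attribute format
            # into the compiled shader.
            self.cache[key] = f"decode<{attrib_format}> + {shader_source}"
            self.compiles += 1
        return self.cache[key]

driver = Driver()
driver.draw("my_vertex_shader", "float32x3")  # first draw: compile
driver.draw("my_vertex_shader", "float32x3")  # same state: cache hit
driver.draw("my_vertex_shader", "int16x3")    # format changed: recompile
print(driver.compiles)  # -> 2
```

The point is only that the second compilation happens at draw time, in the middle of the frame, which is exactly the "inopportune time" a driver wants to avoid.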
More modern interfaces now force you to clump a lot of state together into pretty big immutable state objects (e.g. pipeline objects) so that the driver has to deal with fewer surprises at inopportune times.
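The pipeline-object idea can be sketched the same way (again a conceptual illustration in the style of Vulkan/Metal/D3D12, not any real API; all names here are invented): every piece of recompile-sensitive state is bundled into one immutable object, so the driver sees the complete picture once, at creation time, and can compile the final shader up front.

```python
# Hypothetical immutable "pipeline object": shader source plus all the
# state that can end up baked into the compiled shader.

from dataclasses import dataclass

@dataclass(frozen=True)  # frozen: state cannot be mutated after creation
class Pipeline:
    shader_source: str
    attrib_format: str
    blend_mode: str

def create_pipeline(state: Pipeline) -> str:
    # All state is known here, so the final machine code (including the
    # attribute decode and blend sequences) is built exactly once.
    return (f"decode<{state.attrib_format}> + {state.shader_source}"
            f" + blend<{state.blend_mode}>")

p = create_pipeline(Pipeline("my_vertex_shader", "float32x3", "alpha"))
print(p)  # -> decode<float32x3> + my_vertex_shader + blend<alpha>
```

"Changing state" then means creating and binding a different pipeline object, which applications typically do at load time rather than mid-frame, so there are no surprises left for the driver to handle during rendering.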
I think I understand now. Ideally the GLSL shader code is compiled once and sent to the GPU and used as-is to render many frames.
But if you use the stateful OpenGL API to change state from the CPU side in the middle of rendering, you can invalidate the shader code that was already compiled.
It had not occurred to me because the library I am using makes it difficult to do that, encouraging setting the state up front and running the shaders against the buffers as a single "render" call.