It's always surprised me how the world of digital video is a cousin of IT yet is impenetrable to people outside the video industry. The way they refer to resolutions, colors, networking, and storage is (almost deliberately?) different.
This gives an idea of the parameters we cover for the roughly 200 different models of broadcast cameras we support so far. These only tweak image quality, which is the job of the video engineer (vision engineer in the UK). We usually don't cover the camera's other functions, which are more intended for the camera operator. The difficulty is bringing some consistency to so many different cameras and protocols.
Do you "normalize" the parameters to some intermediate config so that everything behind that just needs to work with that uniform intermediate config? What about settings that are unique to a given device?
That was the idea—we started by normalizing all the standard parameters found in most cameras. The challenge came when we had to incorporate brand-specific parameters, many of which are only used by a single manufacturer. Operators also weren’t keen on having values changed from what the camera itself provided, as some settings serve as familiar reference points. For example, they know the right detail enhancement values to use for football or studio work. So, we kept normalization for the key functions where it made sense, but for other parameters, we now try to stay as close as possible to the camera’s native values.
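To make that hybrid approach concrete, here is a minimal sketch in Python (all the names, parameters, and ranges below are hypothetical, not our actual API): core parameters get a normalized definition with per-brand range mappings, while brand-specific parameters pass through untouched in the camera's native units.

    from dataclasses import dataclass

    @dataclass
    class ParamMapping:
        """Maps a normalized 0.0-1.0 value onto one brand's native range."""
        native_min: int
        native_max: int

        def to_native(self, norm: float) -> int:
            span = self.native_max - self.native_min
            return round(self.native_min + norm * span)

    # Hypothetical per-brand mappings for one shared parameter.
    MASTER_BLACK = {
        "brand_a": ParamMapping(native_min=-99, native_max=99),
        "brand_b": ParamMapping(native_min=0, native_max=4095),
    }

    def set_master_black(brand: str, norm: float) -> int:
        # Normalized path: the same 0.0-1.0 control surface for every camera.
        return MASTER_BLACK[brand].to_native(norm)

    def set_native(param: str, value: int) -> tuple[str, int]:
        # Brand-specific path: no normalization, so operators keep the
        # reference values they know from the camera's own menus.
        return (param, value)

    print(set_master_black("brand_a", 0.5))  # 0
    print(set_master_black("brand_b", 0.5))  # 2048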
As for the MQTT topics: they function as a kind of universal API, at least internally. Some partners and customers are already using them to automate certain functions. However, we haven't officially released anything yet, as we can't guarantee stability or prevent breaking changes at this stage.
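Purely as an unofficial illustration of what automating over such topics could look like (the topic layout below is invented, and I'm assuming the Python paho-mqtt 2.x client; nothing here is a published API):

    import json
    import paho.mqtt.client as mqtt

    SET_TOPIC = "cameras/cam01/master_black/set"  # hypothetical topic
    STATE_TOPIC = "cameras/cam01/master_black"    # hypothetical topic

    def on_message(client, userdata, msg):
        # The gateway publishes confirmed state back on the state topic,
        # so every subscribed client stays in sync with the camera.
        print(f"{msg.topic}: {msg.payload.decode()}")

    client = mqtt.Client(mqtt.CallbackAPIVersion.VERSION2)  # paho-mqtt 2.x
    client.on_message = on_message
    client.connect("localhost", 1883)  # assumption: a local broker
    client.subscribe(STATE_TOPIC)

    # Request a new value; a gateway would translate this into the
    # camera's native protocol and report the resulting state.
    client.publish(SET_TOPIC, json.dumps({"value": 0.5}))
    client.loop_forever()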
People who only ever work with 'consumer' video equipment need extra training and a back-to-basics set of reading material to understand things like the difference between 4:2:0 and 4:2:2 chroma subsampling, why serious cinema cameras record ungraded video, or what the color grading process in a post-production workflow looks like (and the different aesthetic choices grading makes possible). That's before even getting into raw YUV/y4m uncompressed video, very-high-bitrate barely-compressed video, or generating proxy footage to edit with because the raw footage is too much of a firehose of data to handle even on a serious workstation...
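To make the firehose point concrete, here's some back-of-the-envelope Python for uncompressed planar YUV frame sizes (it ignores padding and container overhead, so treat the numbers as rough):

    def frame_bytes(width, height, subsampling, bit_depth=10):
        # Chroma samples per luma sample: 4:2:2 halves them, 4:2:0 quarters them.
        chroma = {"4:4:4": 1.0, "4:2:2": 0.5, "4:2:0": 0.25}[subsampling]
        samples = width * height * (1 + 2 * chroma)  # one luma + two chroma planes
        return samples * bit_depth / 8

    for fmt in ("4:2:0", "4:2:2", "4:4:4"):
        per_frame = frame_bytes(3840, 2160, fmt)  # UHD frame
        gbps = per_frame * 8 * 60 / 1e9           # at 60 fps
        print(f"{fmt}: {per_frame / 1e6:.0f} MB/frame, {gbps:.1f} Gb/s")

    # 4:2:0: 16 MB/frame, 7.5 Gb/s
    # 4:2:2: 21 MB/frame, 10.0 Gb/s
    # 4:4:4: 31 MB/frame, 14.9 Gb/s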
I would say that unless you have a professional reason, there's very little benefit for the average end-user in doing a deep dive into it. If your intention is to spend $7,000 on a RED camera and then $13,000 on lenses, gimbal, cage, follow focus, matte box, memory cards, etc. to build a small, cost-effective single-camera production package, then by all means, dig into it.
There is a significant distinction between shading and grading.
Shading is essential in the TV industry, where the goal is to match all cameras perfectly in exposure, tone curve, and color, so that switching between camera angles shows no jump in skin tones, fine detail, or the green of the grass and blue of the sky. A crucial aspect of shading is accurately reproducing the colors of sponsor logos, which can sometimes be the starting point, as that's where the money comes from. Creativity plays a lesser role here; the focus is on following industry standards such as ITU-R BT.709 for SDR, or ITU-R BT.2020 and HLG for HDR.
Grading, on the other hand, is a creative process meant to give a distinctive look to a production. Traditionally done in post-production, it can now also be applied in real time using tools similar to those found in post-production software, and it is often still refined further in post. Live grading is commonly used for events such as concerts and fashion shows, where you want to look different from TV productions.
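For a rough intuition of the "matching" half of shading, here is a toy Python computation of per-channel gains that pull one camera's grey-card reading onto a reference camera's. Real shading is done live on remote control panels against waveform monitors and vectorscopes, not like this, and the numbers below are invented:

    REFERENCE_GREY = (412, 418, 405)  # R, G, B code values from the reference camera

    def match_gains(measured, reference=REFERENCE_GREY):
        # Per-channel gain that maps this camera's grey card onto the reference.
        return tuple(ref / meas for ref, meas in zip(reference, measured))

    cam2_grey = (430, 410, 380)  # the same grey card, as seen by camera 2
    r, g, b = match_gains(cam2_grey)
    print(f"R x{r:.3f}, G x{g:.3f}, B x{b:.3f}")  # R gain < 1 pulls down the warm cast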
TIL about shading, and I'm surprised how little I've seen this term in grading tutorials. While different, I feel like shading is something that should be learnt before grading.