I think you're picturing a different level of the network stack than I had in mind. Yes, above the physical level they will be explicitly using very sophisticated codes. But I think physically it is the case that messages are transmitted using pulses of photons, where a pulse will contain many photons and will lose ~5% of its photons per kilometer when travelling through fiber (which is why amplifiers are needed along the way). In this case the "repetition code" is the number of photons in a pulse.
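To put numbers on that loss rate: ~5%/km is about 0.22 dB/km (in the ballpark of real 1550 nm telecom fiber), and the photon count falls off exponentially with distance. A toy calculation (the pulse size is a made-up round number):

```python
import math

loss_per_km = 0.05       # ~5% of photons lost per km (~0.22 dB/km)
n_photons = 1_000_000    # hypothetical photons in one transmitted pulse

for d in (10, 50, 100, 200):
    surviving = n_photons * (1 - loss_per_km) ** d
    db = 10 * d * math.log10(1 - loss_per_km)
    print(f"{d:4d} km: {surviving:12,.0f} photons ({db:+.1f} dB)")
```

After ~100 km only ~0.6% of the photons remain, which is why amplifiers get spaced every several tens of kilometers.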
But we are classical, so I think it's wrong (or at least confusing) to talk about the many photons as repetition codes. Then we might as well start to call all classical phenomena repetition codes. Also how would you define SNR when doing this?
Repetition codes have a very clearly defined meaning in communication theory, using them to mean something else is very confusing.
> Then we might as well start to call all classical phenomena repetition codes
All classical phenomena are repetition codes (e.g., https://arxiv.org/abs/0903.5082 ). And this is perfectly compatible with the meaning in communication theory, except that the symbols we're talking about are the states of the fundamental physical degrees of freedom.
In the exact same sense, the von Neumann entropy of a density matrix is the Shannon entropy of its spectrum, and no one says "we shouldn't call that the Shannon entropy because Shannon originally intended to apply it to macroscopic signals on a communication line".
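That identity is easy to check numerically. A small sketch (NumPy/SciPy, with an arbitrary full-rank density matrix chosen just for illustration):

```python
import numpy as np
from scipy.linalg import logm

# An arbitrary full-rank 2x2 density matrix: a mixture of |0> and |+>.
ket0 = np.array([1.0, 0.0])
ketp = np.array([1.0, 1.0]) / np.sqrt(2)
rho = 0.7 * np.outer(ket0, ket0) + 0.3 * np.outer(ketp, ketp)

# von Neumann entropy: S(rho) = -Tr(rho log rho), converted to bits.
S_vn = -np.trace(rho @ logm(rho)).real / np.log(2)

# Shannon entropy of the spectrum (the eigenvalues of rho).
evals = np.linalg.eigvalsh(rho)
S_sh = -sum(p * np.log2(p) for p in evals if p > 1e-12)

print(S_vn, S_sh)  # agree up to floating-point noise
```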
I think what they mean is that classical systems use more than a single packet of energy to represent a state. Our “digital systems” are actually analog systems driven into saturation. Each time you want to set a bit in memory, you need enough energy to fill up a capacitor (or a similar related concept), which is far more than a single electron. From the viewpoint of quantum systems, this is repetition/redundancy, and it provides robustness against stray electrons (e.g., from induced currents or interference).
As we build systems that require less power, we are also building systems that use fewer electrons to represent a single bit.
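For a rough sense of scale, here's a back-of-envelope count of electrons per bit, assuming a DRAM-like cell capacitance of ~25 fF charged to 1 V (both numbers are ballpark assumptions, not specs for any particular process):

```python
# Electrons needed to charge one bit cell: n = Q/e = C*V/e.
C = 25e-15          # farads (assumed DRAM-like cell capacitance)
V = 1.0             # volts (assumed rail)
e = 1.602e-19       # elementary charge, coulombs

electrons = C * V / e
print(f"~{electrons:,.0f} electrons per bit")   # ~156,000
```

At that scale, a few hundred stray electrons from interference are nowhere near enough to flip the bit, which is exactly the redundancy point above.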
Another fun tidbit in this space comes down to signal propagation in a lossy medium: we don’t actually use square waves for our clocks, since a square wave is composed of many different frequencies. Different frequencies propagate at different speeds, so a square wave looks less and less like a square as it travels, and it also takes more power than a simple sine wave. If you remember your Fourier/Laplace, you'll recall that a sine is a single frequency, so it stays coherent through a conductor and takes the least power to generate.
Edit: I’m talking about electrons here, but the concept of many packets for a single state extends to most of our communication channels today … e.g., radio communications like we see to/from Voyager.
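To make the Fourier point concrete, here's a small illustrative sketch (NumPy) building a square wave from its odd-harmonic partial sums; each added term is a higher frequency that a dispersive, lossy line will smear:

```python
import numpy as np

# A square wave's Fourier series uses only odd harmonics k, each with
# amplitude 4/(pi*k). More terms -> sharper edges -> more high
# frequencies, which is exactly what disperses in a lossy medium.
t = np.linspace(0, 2 * np.pi, 1000)
for n_terms in (1, 3, 10):
    wave = sum(4 / (np.pi * k) * np.sin(k * t)
               for k in range(1, 2 * n_terms, 2))
    print(f"{n_terms:2d} term(s): peak = {wave.max():.3f}")
```

With a single term it's a pure sine; every term after that only adds harmonics that cost power and disperse.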
Some people suggest that digital computing and neural networks are a bad fit, and that we should be using analog devices.
That sounds very appealing at first. But we have (at least) two problems:
First, our transistors dissipate almost no energy when they are either 'fully open' or 'fully closed'. Because either there's approximately no current, or approximately no resistance. Holding them partially open, like you'd do in analog processing, would produce a lot of heat.
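To put toy numbers on that (all values hypothetical, just to show the scaling of power dissipated in the device itself):

```python
# Power dissipated *in the transistor* is V_ds * I. A switch spends
# its time with either I ~ 0 (off) or V_ds ~ 0 (on); an analog bias
# point leaves both large at once. Toy numbers:
I_on, V_rail = 1e-3, 1.0          # 1 mA load current, 1 V rail

P_off    = V_rail * 1e-9          # off: ~1 nA leakage across full rail
P_on     = 0.01 * I_on            # on: ~10 mV residual drop at 1 mA
P_analog = (V_rail / 2) * I_on    # half-open: ~0.5 V dropped at 1 mA

print(f"off: {P_off*1e6:.3f} uW, on: {P_on*1e6:.1f} uW, "
      f"half-open: {P_analog*1e6:.1f} uW")
```

Under these assumed numbers the half-open device burns orders of magnitude more than either saturated state.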
The second problem: electrons are discrete, and thanks to miniaturisation and faster and faster clockspeeds, we are actually getting into realms where that makes a difference. So either you have to accept that the maximum resolution of activation of your analog neuron is fairly small (perhaps 10 bits or so?), which is not that much better than using your transistors in binary only; or you'll have to use much larger transistors in your neural chips.
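One back-of-envelope way to see where a figure like 10 bits could come from: if an analog level is carried by N electrons, shot noise goes as sqrt(N), so SNR ~ sqrt(N) and the usable resolution is roughly log2(sqrt(N)) bits. A crude model, not a device spec:

```python
import math

# N electrons -> shot noise ~ sqrt(N) -> SNR ~ sqrt(N)
# -> effective resolution ~ log2(sqrt(N)) bits.
for n_electrons in (1e4, 1e6, 1e8):
    bits = math.log2(math.sqrt(n_electrons))
    print(f"{n_electrons:>13,.0f} electrons -> ~{bits:4.1f} bits")
```

A million electrons per activation already only buys you about 10 bits in this model.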
Both problems together mean that analog computing for neural networks isn't really competitive with digital computing. (Outside of some very niche applications, perhaps.)
I think all of your points are valid. I also think that we have optimized heavily for this state of technology. If we figured out that analog computing was somehow superior in a big way, I bet we would find ways of reducing power etc in analog designs.
One way that analog computing would be really neat for neural networks is in speed. The way it might not be so great is in reliability (or repeatability, specifically). Analog systems are more susceptible to noise as well as variation from fabrication processes. Running things at saturation makes them easier to design, test, and mass-produce.
Specifically to the point of the comparatively low reliability / high variation of analog systems: an interesting property of neural nets is that they can be robust to noise when trained with the same type of noise they will encounter at inference.
Whatever the speed trade-offs between the digital and analog design spaces, it's an interesting thing to consider that neural nets can automatically account for the encoding medium's variability. That perhaps makes neural solutions a good fit for low-power analog media which otherwise aren't useful for classical computing.
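A minimal sketch of that train-with-the-noise idea (NumPy, with multiplicative weight noise standing in for device variation; all numbers are made up for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy regression task "computed" on an imperfect analog medium:
# every use of the weights perturbs them multiplicatively.
X = rng.normal(size=(256, 8))
y = X @ rng.normal(size=8)

def train(noise_std, lr=0.05, steps=3000):
    w = np.zeros(8)
    for _ in range(steps):
        eps = rng.normal(scale=noise_std, size=w.shape)
        pred = X @ (w * (1 + eps))            # noisy forward pass
        grad = 2 * (X * (1 + eps)).T @ (pred - y) / len(X)
        w -= lr * grad
    return w

def noisy_mse(w, noise_std=0.5, trials=200):
    # Evaluate on the same kind of noisy medium used at inference.
    errs = [np.mean((X @ (w * (1 + rng.normal(scale=noise_std,
                                              size=w.shape))) - y) ** 2)
            for _ in range(trials)]
    return float(np.mean(errs))

for train_noise in (0.0, 0.5):
    w = train(train_noise)
    print(f"trained with noise std {train_noise}: "
          f"MSE on noisy medium = {noisy_mse(w):.2f}")
```

The noise-trained weights should come out smaller (a ridge-like regularization effect), so they lose less accuracy to the medium's variability.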
> This insight captures the essence of Quantum Darwinism: Only states that produce multiple informational offspring – multiple imprints on the environment – can be found out from small fragments of E. The origin of the emergent classicality is then not just survival of the fittest states (the idea already captured by einselection), but their ability to “procreate”, to deposit multiple records – copies of themselves – throughout E.
Basically, quantum information is unknown. All possible classical states are mixed together. Once you measure it, it decoheres into a classical system where all measurements are highly correlated: some states exist, and others don't. A quantum coin flip is heads or tails on top, and tails or heads on bottom. But once you measure it, it all decoheres at once. You get a classical coin flip that is heads on top and tails on bottom, or vice versa. You can't get heads on top while the bottom stays unsettled between tails and heads, as if it carried a second, independent bit of information.
The same message is written twice, on the top and the bottom of the coin. You can look at the top of coin or the bottom and get the same result. If the top of the coin melts, you can still read the result of the flip from the bottom.
All the objective classical facts out there (e.g., the location of the moon, the existence of the dinosaurs, what I ate for breakfast) are recorded in many degrees of freedom (e.g., the photons scattered off the moon, the dinosaur bones, and the microscopic bits of bagel in the trash). By the no-cloning theorem, quantum information cannot be amplified, so the information we can access this way is necessarily classical. But these classical facts are the exception rather than the rule, as far as the full arena of quantum mechanics is concerned. There are always many more degrees of freedom that are not redundantly recorded, for the simple reason that you need to use one degree of freedom to record another.
Even if the world were fundamentally classical (which it's not), the only things you could actually know about it would be the tiny subset that is amplified to macroscopic scales, necessarily producing many records in, if nothing else, the many atoms in the neurons in your brain.
Oh, you can have multiple layers of error correcting coding.
E.g., Google stores data internally with something like Reed-Solomon error correction, but typically also keeps two independent copies.
So they have a repetition code at the classical nano-scale, then Reed-Solomon error correction at the next level, and at the highest level they apply repetition again.
There's nothing confusing about this, as long as you are careful to make sure that your listener knows which level you are talking about.
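A toy sketch of the layering (pure-Python repetition code with majority vote; in the scenario above, a real Reed-Solomon implementation would sit between the two repetition layers):

```python
import random
from collections import Counter

random.seed(1)

def rep_encode(bits, n=3):
    """Physical-layer repetition: send each bit n times."""
    return [b for bit in bits for b in [bit] * n]

def rep_decode(bits, n=3):
    """Majority vote over each group of n received bits."""
    return [Counter(bits[i:i + n]).most_common(1)[0][0]
            for i in range(0, len(bits), n)]

def noisy_channel(bits, flip_prob=0.1):
    return [b ^ (random.random() < flip_prob) for b in bits]

message = [1, 0, 1, 1, 0, 0, 1, 0]

# Layer 1: repetition per bit (the "many photons per pulse" layer).
# Layer 2: two independent copies of the whole encoded message
# (standing in for the replication layer described above).
copies = [noisy_channel(rep_encode(message)) for _ in range(2)]
decoded = [rep_decode(c) for c in copies]
print("copy 0 ok:", decoded[0] == message)
print("copy 1 ok:", decoded[1] == message)
```

Each layer independently buys error tolerance, and as long as you say which layer you mean, there's no ambiguity about the word "repetition".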
> Repetition codes have a very clearly defined meaning in communication theory, using them to mean something else is very confusing.
OP used them exactly with the orthodox meaning as far as I can tell.
Yeah, I agree it's unusual to describe "increased brightness" as "bigger distance repetition code". But I think it'll be a useful analogy in context, and I'd of course explain that.