What Is Analog Computing? (quantamagazine.org)
49 points by walterbell on Aug 7, 2024 | 45 comments


It's interesting that analog computers embody both meanings of "analog" - that of continuous quantities vs discrete/digital ones, and also of one thing being comparable to ("an analog of", cf analogy) something else.

The article seems to put a bit of a silly spin on an interesting subject by wanting to present analog computers as a potential alternative to digital ones, when in fact that is only rarely going to be possible.

Digital computers support programming - solving problems by describing the process by which they are to be solved - whereas analog computers support solving a limited class of problems by building an analog of the problem (which you could think of as "describing" the problem rather than how to solve it), either using analog electronics (a patch-panel analog computer) or by building a physical analog of it, such as the Antikythera mechanism, which models the motions of the planets.

The article oddly omits any mention of quantum computers which (unless I'm totally misguided!) are essentially a type of analog computer - one that is no replacement for digital computers and general problem solving since it can't be programmed, but rather only works when one can configure it to be an analog of the problem to be solved. As with any other type of analog computer, the problem is then solved by letting the system run and the dynamics play out.


Quantum/classical is orthogonal to the analog/digital distinction. There are analog quantum computers and digital quantum computers.

Analog quantum computers (like annealers) are simpler to make, and they have a lot more play w.r.t. their building blocks when trying to make interesting effects... but ultimately they are limited by noise. Digital quantum computers restrict themselves to finite gate sets, often requiring expensive decompositions to do basic operations (e.g. [1]), but those gates are compatible with error correction, so noise can be suppressed arbitrarily (e.g. [2]). It's very similar to how an analog classical computer can do addition with two resistors and a junction, but the accuracy is limited by the precision of the resistance, whereas a digital classical computer will decompose the addition problem into bits and gain more precision by adding bits, so that increasing precision becomes about increasing quantity instead of quality.

[1]: https://www.mathstat.dal.ca/~selinger/newsynth/

[2]: https://arxiv.org/abs/1208.0928
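
A minimal numerical sketch of that quality-vs-quantity point (plain Python; the 1% tolerance and the toy fixed-point adder are illustrative assumptions, not a model of any real hardware): the analog summer's error is pinned by component precision, while the digital adder's error shrinks as you add bits.

    import random

    def analog_sum(a, b, tolerance=0.01):
        # Resistor summing junction: each input is scaled by a gain that is
        # only known to within the component tolerance.
        ga = 1 + random.uniform(-tolerance, tolerance)
        gb = 1 + random.uniform(-tolerance, tolerance)
        return ga * a + gb * b

    def digital_sum(a, b, bits):
        # Fixed-point addition: quantize inputs to 'bits' fractional bits, then add exactly.
        scale = 2 ** bits
        return (round(a * scale) + round(b * scale)) / scale

    a, b = 0.123456789, 0.987654321
    exact = a + b
    print("analog error (1% parts):", abs(analog_sum(a, b) - exact))
    for bits in (8, 16, 24):
        print(f"digital error ({bits} bits):", abs(digital_sum(a, b, bits) - exact))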


Right, but even with the "digital quantum computer" (i.e. error-corrected quantum computer), aren't they still used in the same fashion as an analog computer - one has to configure (cf patch panel) the computer into being an analog of the problem to be solved, then let the natural dynamics do its thing, rather than being able to program it to perform some step-wise algorithm of the user's choosing?


No, that's not true at all.

For example, a chemistry simulation can be done in first quantization, where the state is a list of superposed two's-complement integers indicating the positions of the electrons (as opposed to a more direct one-qubit=one-position mapping). And the list is initialized in a way that satisfies the Pauli exclusion principle by using a sorting network [1]. This is presumably not at all how Nature does it.

Another example is factoring. Shor's factoring algorithm is not at all about behaving analogously to a physical system. It's about modular exponentiation and Fourier transforms; math things, not physics things.

Yet another example is Hamming weight phasing. If you need to rotate many qubits by a common angle around the Z axis, you can achieve that effect more cheaply by temporarily computing their Hamming weight (under superposition) and then rotating the first qubit of the Hamming weight register by the angle, the second by twice the angle, the third by four times the angle, etc. And actually that various-rotation-angles operation can then also be replaced by an addition into a special reusable phase gradient state; this achieves the desired effect by phase kickback [2]. Adding the Hamming weight of their spins into a helper state is probably not how Nature goes about precessing electrons in a uniform magnetic field.

[1]: https://arxiv.org/abs/1711.10460

[2]: https://arxiv.org/abs/1709.06648
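
To make the phase bookkeeping above concrete, here's a plain-arithmetic sketch (ordinary Python on classical bit strings, no quantum library - it only checks the accounting, not the quantum mechanics): phasing bit k of the weight register by 2^k times the angle accumulates the same total phase, weight times the angle, as phasing every set qubit by the angle.

    import random

    def phase_per_qubit(bits, theta):
        # Baseline: every qubit that is |1> picks up a phase of theta.
        return sum(b * theta for b in bits)

    def phase_via_hamming_weight(bits, theta):
        # Hamming weight phasing: compute the weight, then phase bit k of the
        # weight register by 2**k * theta.  The total is weight * theta.
        weight = sum(bits)
        return sum(((weight >> k) & 1) * (2 ** k) * theta
                   for k in range(max(1, weight.bit_length())))

    theta = 0.1234
    bits = [random.randint(0, 1) for _ in range(20)]
    assert abs(phase_per_qubit(bits, theta) - phase_via_hamming_weight(bits, theta)) < 1e-9
    print("both give a total phase of", phase_per_qubit(bits, theta))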


Isn't that where the term analog for continuous values comes from though? That they are voltages that vary in proportion to physical quantities? And therefore analogous to those quantities.



Makes sense - I was wondering about the connection, but lazily looking up the etymology of the word didn't provide any obvious answer.


Quantum computers deserve to be mentioned in discussions about analog computers. Under the assumption that nature is quantum at its core, it follows that every analog computer could instead be replaced by a quantum computer running a suitable algorithm.

This connection opens doors for research in both directions: how to design quantum algorithms, and how to build physical computing systems that make use of quantum mechanics.


The reason we ended up with digital logic is noise. Hysteresis from digital gates was the only way to make such large systems from such small devices without all of our signaling turning into a garbled mess. Analog processing has its niches, and I suspect the biggest niche will be where that noise is a feature rather than a hindrance, something like continuous-time neural networks.
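
A toy illustration of why that hysteresis matters (plain Python; the noise level and thresholds are made up): a single comparator threshold on a noisy logic transition chatters, while a Schmitt-trigger-style pair of thresholds recovers a clean bit.

    import random

    random.seed(0)
    # A slow 0 -> 1 logic transition with additive noise on the wire.
    signal = [min(1.0, i / 200) + random.gauss(0, 0.05) for i in range(400)]

    def count_transitions(outs):
        return sum(1 for a, b in zip(outs, outs[1:]) if a != b)

    # Single comparator at 0.5: noise near the threshold causes chatter.
    single = [1 if v > 0.5 else 0 for v in signal]

    # Hysteresis: switch high above 0.6, low below 0.4, otherwise hold state.
    state, hyst = 0, []
    for v in signal:
        if v > 0.6:
            state = 1
        elif v < 0.4:
            state = 0
        hyst.append(state)

    print("transitions without hysteresis:", count_transitions(single))  # many
    print("transitions with hysteresis:   ", count_transitions(hyst))    # one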


Neuromorphic hardware is an area where I encountered analogue computing [1]. Biological neurons would be modeled by a leaky integration (resistor/capacitor) unit. The system was 10^5 times faster than real-time - too fast to use it for robotics - and consumed little power, but was sensitive to temperature (much like our brains). If I recall correctly, the technology has been used at CERN, as digital HW would have required too-high clock speeds. I have no clue what happened to the technology, but there were other attempts at neuromorphic, analogue hardware. It was very exciting to observe and use this research!

[1] https://brainscales.kip.uni-heidelberg.de/

edit: newer link: https://open-neuromorphic.org/neuromorphic-computing/hardwar...
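
For anyone unfamiliar with the leaky-integration model mentioned above, here's a minimal discrete-time sketch (plain Python; the time constant, threshold, and input current are arbitrary illustrative values, not BrainScaleS parameters): the membrane voltage follows an RC leak driven by an input current and resets when it crosses threshold.

    # Leaky integrate-and-fire neuron: dV/dt = (-V + R*I) / tau, spike when V >= V_th.
    tau, R = 20e-3, 1.0          # membrane time constant (s), resistance (arbitrary units)
    V_th, V_reset = 1.0, 0.0     # spike threshold and reset voltage
    dt, T = 1e-4, 0.2            # time step and total simulated time (s)
    I = 1.2                      # constant input current (drives V toward R*I = 1.2 > V_th)

    V, spikes = 0.0, []
    for step in range(int(T / dt)):
        V += dt * (-V + R * I) / tau      # Euler step of the RC leak
        if V >= V_th:
            spikes.append(step * dt)
            V = V_reset
    print(f"{len(spikes)} spikes in {T*1000:.0f} ms; first at {spikes[0]*1000:.1f} ms")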


I worked on a similar project - the Stanford Braindrop chip. It's a really interesting technology; what happened is that most people don't really know how to train those systems. There's a group called Vivum that seems to have a solution.

I work with QDI systems, and I've long suspected that it would be possible to use those same design principles to make analog circuits robust to timing variation. QDI design is about sequencing discrete events using digital operators - AND and OR. I wonder if it is possible to do the same with continuous "events" using the equivalent analog operators, mix and sum.


We got some nice results with spiking/pulsed networks, but the small number of units limits the applications, so we usually end up in a simulator or using more abstract models. There seems to be a commercial product, but also only with 0.5K neurons; that might be enough for 1D data processing, though, and fill a good niche there (1 mW!) [2]

[2] https://open-neuromorphic.org/neuromorphic-computing/hardwar...


> the Stanford Braindrop chip

Interesting.

Just read about it and there are familiar names on the author list. I really wish this type of technology would gain more traction, but I am afraid it will not receive the focus it deserves, considering the direction of current AI research.

> I wonder if it is possible to do the same with continuous "events" using the equivalent analog operators, mix and sum.

Don't know much about QDI but sounds promising.


I actually worked for a startup that makes tiny FPAAs (Field-Programmable Analog Arrays) for the low-power battery market. Their major appeal was that you could reprogram the chip to synthesize a real analog network, offloading the signal-processing portion of your product for a tiny fraction of the power cost.

The key thing is that analog components are influenced much more by environmental fluctuations (think temperature, pressure, and manufacturing variation), which impacts the "compute" of an analog network. Their novelty was that the chip can be "trimmed" to offset these impacts using floating-gate MOSFETs - the same kind used in flash memory - as an analog offset. It works surprisingly well, and I suspect that if they can capture the low-power market we'll see a revitalization of analog compute in the embedded space. It would be really exciting to see this enter the high-bandwidth control system world!


Can you share their name and do they have a consumer product already?


Well it's not just about noise. There is also loss. It's easier to reconstruct and/or amplify a digital signal than an analogue one. Also it's easier to build repeatable digital systems that don't require calibration compared to analogue ones.

Worth noting the exception here which is current loops.


Also density and speed. And precision.

This is a rare misfire from Quanta. No, there is no practical way to model anything non-trivial - especially not ML - with analog hardware of any kind.

Analog hardware just isn't practical for models of equivalent complexity. And even if it was practical, it wouldn't be any more energy efficient.

Whether it's wheels and pulleys or electric currents in capacitors and resistors, analog hardware has to do real work moving energy and mass around.

Modern digital models do an insane amount of work. But each step takes an almost infinitesimal amount of energy, and the amount of energy used for each operation has been decreasing steadily over time.


> No, there is no practical way to model anything non-trivial - especially not ML - with analog hardware of any kind.

Are you aware that multiple companies (IBM, Intel, others) have prototype neuromorphic chips that use analog units to process incoming signals and apply the activation function?

IBM's NorthPole chip has been provided to the DoD for testing, and IBM's published results look pretty promising (faster and less energy for NN workloads compared to Nvidia GPU).

Intel's Loihi 2 chip has been provided to Sandia National Laboratories for testing, with presumably similar performance benefits to IBM's.

There are many others with neuromorphic chips in progress.

My opinion is that AI workloads will shift to neuromorphic chips as fast as the technology can mature. The only question is which company to buy stock in, not sure who will win.

EDIT: The chips I listed above are NOT analog, they are digital but with alternate architecture to reduce memory access. I've read about IBM's test chips that were analog and assumed these "neuromorphic" chips were analog.


The article ends flat and abruptly. I was setting myself up for a long and nice journey in the style of Quanta Magazine - what happened?


The reader view on mobile seems broken. Hiding the reader shows a longer article.


I don't think so, it ends like this:

"The advantages of digital computing are real, but so are the drawbacks. Perhaps, by reaching back to computing’s past, researchers will be able to steer a sustainable path toward our computational future.

Correction: August 2, 2024: The $100 billion data center would require 5 gigawatts of power, not 5 megawatts."

So it's the end of the article indeed.


Encore! Encore!


The internet archive has this interesting educational film about mechanical computers on ships: https://archive.org/details/27794FireControlComputersPt1


I've been reading a lot recently about the World War 2 Pacific naval conflict. The things done with mechanical fire controllers were amazing! The US had radar-linked anti-aircraft and surface guns, plus radar proximity shells, all mechanically/analogue driven.

Thank you for the link! :)


Here’s an example of a general purpose, analog computer:

https://museum.syssrc.com/artifact/exhibits/251/

I’ll note that there’s been plenty of work on general-purpose analog computing. Analog’s competitiveness also lives on in mixed-signal ASICs. The problem with new products, though, is that analog requires fully custom circuit design. Outside some niche uses, you can’t just synthesize them from higher-level specs like you can digital designs with standard cells.

That puts analog at a severe disadvantage on NRE cost. Especially as both design rules and mask costs increase with every node shrink.


Indeed, analog circuits are much more process sensitive (while digital circuits are defect sensitive), which makes their design slower and much more closely coupled to the specific process node that is being targeted.

This is why nowadays bleeding-edge process nodes might be lacking in IO (especially fast serial IO) when they're first released.

Additionally, analog just hasn't scaled for quite a while now. Even traditionally analog circuits like PLLs and IO are getting more and more digital, as digital scaling makes more digital architectures competitive in area and power where they weren't before.


I would say an analog synthesizer is an analog computer. The inputs are, for example, voltages representing notes, and the computational output is a waveform. I prefer analog because digital synthesizers, at least for subtractive synthesis, are only a simulation of the real instrument. The downsides are not unlike those of other analog computers: noise and temperature dependence distort the output.
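
As a concrete example of the kind of simulation being described, here's a minimal digital sketch of subtractive synthesis (plain Python; the pitch and cutoff are arbitrary): a sawtooth oscillator fed through a one-pole low-pass, i.e. a discretized RC filter standing in for the analog filter section.

    import math

    sr = 44100                      # sample rate (Hz)
    f0 = 110.0                      # oscillator pitch (Hz), an A2
    cutoff = 800.0                  # filter cutoff (Hz)

    # Sawtooth oscillator: ramps from -1 to 1 once per period.
    saw = [2.0 * ((f0 * n / sr) % 1.0) - 1.0 for n in range(sr)]

    # One-pole low-pass (discretized RC): y[n] = y[n-1] + a * (x[n] - y[n-1]).
    a = 1.0 - math.exp(-2.0 * math.pi * cutoff / sr)
    y, out = 0.0, []
    for x in saw:
        y += a * (x - y)
        out.append(y)

    print("peak before filter:", max(abs(v) for v in saw))
    print("peak after filter: ", max(abs(v) for v in out))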


Veritasium has a good video about analog computers. It covers how they work, what they are used for, pros and cons compared to digital, what they look like today, a bit of the history and their possible future.

https://youtu.be/GVsUOuSjvcg


This is one of those things where I don't get why people go so gaga over them. The article starts with:

"But it’s not obvious why a system that operates using discrete chunks of information would be good at modeling our continuous, analog world."

OK, fine. But, if my "discrete" modelling is using 64 bits of precision, or, in decimal, about 19 digits of precision, and your analog system on a good day has about 3, how could your analog system even "tell" mine is discrete? It can't "see" that finely in the first place.

Oh, you don't like how it's excessively precise? It is off-the-shelf numerical tech to track precision through calculations. It isn't something every programmer knows, but it's been in books for years.

Oh, by the way, it is also off-the-shelf numerical tech to estimate errors using digital computers. That's going to be a hard trick to match in your analog system. Also, digital systems offer many tradeoffs with regard to error levels and computation time. Another thing your analog system is going to have a hard time with.

"But floating point numbers have lots of problems with instability and such." Most of those problems will manifest in your analog computers, too. Many of those aren't so much about floating point per se as just doing inadvisable things with numbers in general, e.g., if you set up an analog computer to subtract two quantities that come out "close to zero" and then divide by that quantity, your analog computer is going to behave in much the same way that a digital computer would. Ye Olde Classic "add a whole bunch of small numbers together only for them all to disappear because of the large number" is also the sort of problem that you can only get because digital is good enough to reveal it in the first place; accurate adding together millions of very tiny numbers is something your analog computer almost certainly just plain can't do at all.

A carefully crafted analog system might be able to outpace a digital system, but it's not going to be the norm. Normally the digital system will outpace the analog system by absurd amounts.

Now, are there niche cases where analog may be interesting? Yes, absolutely. Neural nets are an interesting possibility. Obviously a big possibility. There's a handful of others. But it's a niche within a niche. There's no problem with that. There's plenty of interesting niches within niches. You can make careers in a niche within a niche. But I don't know what it is about this one that convinces so many people that there's some sort of revolution budding and we're on the cusp of an amazing new era of analog computing that will do so many amazing things for everyone... as opposed to a niche within a niche. We had that revolution. It was digital computing. It is the single largest revolution in the history of man as measured by orders of magnitude of improvement from start to now, and the digital revolution is still not done. Analog is not going to overtake that. It doesn't have the headroom. It doesn't have the orders of magnitude of improvement possible. That's why it was left behind in the first place. The delta between my kid 3D printing a clock mechanism and the finest Swiss watchmaker who ever lived is still a small single-digit number of orders of magnitude in terms of their mechanical precision, and in the digital world we still expect that degree of progress about every 5-10 years or so... and that's slower than it was for many decades.


Analog computing (aka: OpAmps) will always exist in situations where the A-to-D converter is a problem.

For example: 24-bit ADCs may exist but with nonlinearity errors, offset errors and noise problems.

You will likely get better results out of Instrumentation OpAmps (specifically designed OpAmps for (A-B) * C type operations). Less noise, more accuracy and more linearity across wider voltages and currents.
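
A toy sketch of how those ADC error terms swamp a converter's nominal resolution (plain Python; the offset and noise magnitudes are made up, not from any real part's datasheet): compare the RMS error of an ideal 24-bit quantizer against the same quantizer with a small offset and some input-referred noise.

    import random

    random.seed(1)
    FS = 1.0                 # full-scale range (V)
    bits = 24
    lsb = FS / (2 ** bits)

    def adc(v, offset=0.0, noise=0.0):
        # Quantize v (0..FS) to 'bits' bits, with optional offset and rms noise.
        v = v + offset + random.gauss(0.0, noise)
        code = max(0, min(2 ** bits - 1, round(v / lsb)))
        return code * lsb

    ramp = [i * FS / 10000 for i in range(10000)]

    def rms_err(**kw):
        return (sum((adc(v, **kw) - v) ** 2 for v in ramp) / len(ramp)) ** 0.5

    print("ideal 24-bit, rms error in LSBs:     ", rms_err() / lsb)
    print("with 100 uV offset + 50 uV rms noise:", rms_err(offset=100e-6, noise=50e-6) / lsb)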

------

And if my analog design is already better than the best possible A-to-D converters out there, then no digital technique can ever possibly catch up. Just converting the analog signal to the digital world already introduced significant errors before the computer even booted up... let alone the final D-to-A conversion back. (Most things today need a voltage or current applied to operate, possibly in proportion to the computed values.)

I expect radio (WiFi, cellphones etc. etc.) to remain analog for the near future. SDR techniques are crazy good today but still expensive.


I don't get the sense that people are expecting that sort of thing, but rather that analog computers will move into what digital computers are doing in a big way, and I just don't see that happening. I think the people who are talking about this sort of thing wouldn't even call what you're talking about 'computing'.

I'm comfortable calling it that. I'm happy to take an expansive definition.

When one end of the output is analog, it makes perfect sense that you may have a stack of components that are operating in that regime. Similarly, I'm sure top-end robot arms have a lot of components designed to do things that you could model in the control software as some operation, but it's easier in every way to add things like real physical damping and such. That sure isn't going anywhere, but neither is that about to start displacing the digital portions; the symbiosis is probably already near optimal.


I suspect a lot of the hype comes from people who have never tinkered with electronics.

Noise is always a problem. Just trying to measure a small signal is a problem since your instruments (which are not perfect) can influence the circuit. Component tolerances are also a problem - if you're trying to do something super precise, you'll have to spend extra money on better components, which is why measurement instruments such as oscilloscopes are so expensive.

In some ways the hardware fights against you in analog design. Digital just brushes that away by clamping all possible values to just two.


Precise analog designs are tuned with provable error bounds.

Actually, a digital potentiometer is likely used today to tune extremely tight tolerances. So there's a bit of digital technique, but overall it's analog (since digital pots are just turning analog resistors on/off with switches).

You buy 5% accurate components everywhere then just trim the errors down to 0.01%. Not easy per se, but 99.99% accuracy is doable at the student level with the right designs.

Professional level precision analog designs are the 7+ digit multimeters (99.99999% accuracy). Yes, built out of just 1% accurate parts.

If you need matching pairs, they make specially matched resistor chips or whatever.

--------

The concepts aren't hard, btw, though maybe slightly out of date. The book The Art of Electronics analyzes a 7+ digit accurate multimeter design from Agilent or something.

Instead of using an analog pot to tune, today you'd use a digital pot. Otherwise, I'm pretty sure the overall analysis and technique holds in today's electronics market.
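
A back-of-the-envelope sketch of that trimming arithmetic (plain Python; the 10 kΩ target, 5% part, and 10-bit digital pot are illustrative values): a coarse part plus a finely stepped series trim can null the initial error down to roughly the trim's step size.

    import random

    random.seed(2)
    R_target = 10_000.0                # ohms, the resistance we actually want

    # Use a nominal part a bit below target so the series trim can always reach it,
    # and give it a 5% manufacturing tolerance.
    trim_range, steps = 1_200.0, 1024  # a 10-bit digital pot spanning 0..1.2 kohm
    R_part = (R_target - trim_range / 2) * (1 + random.uniform(-0.05, 0.05))

    step = trim_range / steps
    best = min(range(steps), key=lambda k: abs(R_part + k * step - R_target))
    R_trimmed = R_part + best * step

    print("untrimmed error: %+.2f%%" % (100 * (R_part - R_target) / R_target))
    print("trimmed error:   %+.4f%%" % (100 * (R_trimmed - R_target) / R_target))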


Interesting. Would that be doable at a VLSI scale? I imagine that for large scale analog computers (like, neural networks) all these trimpots would become a challenge...


True.

VLSI does have techniques though. A clocked (switched) capacitor can sorta-kinda act as a trimpot if you calibrate the clock, for example. And capacitors and clocks are more amenable to VLSI.


As someone who once dabbled with Cadence Innovus to design a standard cell in university, there is basically no way that an analog standard cell is going to end up smaller.

I don't really know much about analog computers, but I assume they're going to need at least an op amp per analog adder and those aren't going to be small.


> This is one of those things I don't get why people go so gaga over them

I think it's because they look so fucking cool.


Well, there's no arguing with that. Electronics used to look a lot cooler, but now all the gizmos and geegaws that used to look so cool are just a few chips on a green board.


Why do computers have clocks?

So they don't have to do everything at once.

(the joke is that analog computers don't have clocks... Do they?)


> (the joke is that analog computers don't have clocks... Do they?)

Not a digital clock, no.

But your modem controller (2.4GHz, 5GHz or whatever) has a carrier frequency that the analog radio chips are adding, multiplying, exponentiating and decimating to pull the data out of the radio waves.

And computation to pull data out of (or push data into) a 2.4GHz channel is best done with analog today. Sure, SDR exists, but it's far more expensive than some PLLs and other analog techniques.


The old jumper-wire style analog computers I'm envisioning did not have a clock to my knowledge [1].

Does the animal brain have a clock?

[1] ChatGPT says otherwise — says, yes, they did have a clock of sorts. I'm wondering if something like a voltage sweep (triangle or sine wave?) would be a kind of "clock input" for an analog computer though.


I sort of maintained one of them back in the early '80s, and I seem to remember that it packed a wave generator as one of its inputs, which could kind of be used as a clock. But don't quote me - I'm strictly a digital guy!


I would assume that neurons would behave more like a wavefront processor, but even this analogy is still a big stretch.


> analog computers don't have clocks

Laws of Physics?


The OpAmp is still one of the most common chips used today, and one of the most obvious analog computers in use.

And yet not even one paragraph in this article?



