For things like machine learning I wonder how much extra performance could be squeezed out by simply working with continuous values at the analog level instead of encoding them as bits and pushing them through a big indirect network of NANDs.


This is something that has been tried, basically constructing an analog matrix multiply / dot product, and it gives reasonable power efficiency at low levels of precision. At higher precision, the required analog accuracy leads to dramatic power efficiency losses (each extra bit costs about 2x the power), so int8 is probably the sweet spot. The main issues are that it is pretty inflexible and costly to design versus a digital int8 MAC array, hard to port to newer process nodes, etc.
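For a rough sense of that tradeoff, here is a back-of-the-envelope sketch in Python. The 2x-per-bit analog cost is the parent's estimate, and the quadratic digital cost is a crude rule of thumb for multiplier complexity, not a measurement:

    # Relative energy per MAC as precision grows, under two assumptions:
    # analog cost ~ 2**bits (the parent's 2x-per-bit figure) and digital
    # multiplier cost ~ bits**2 (a rough rule of thumb).
    for bits in (4, 6, 8, 10, 12):
        analog = 2 ** bits
        digital = bits ** 2
        print(f"{bits:2d} bits: analog ~{analog:5d}x, digital ~{digital:4d}x")

Past int8 or so, the exponential analog curve blows past the polynomial digital one, which is the claimed sweet spot.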


I have wondered this and occasionally seen some related news.

Transistors can do more than switch on and off; there is also the linear region of operation, where the gate voltage allows a proportional current to flow.

So you would be constructing an analog computer. Perhaps in operation it would resemble a meat computer (a brain) a little more, since the activation potential of a neuron is an analog signal from another neuron. (I think? A weak activation might trigger half of a neuron's outputs, while a strong activation might trigger all of them.)

I don’t think we know how to construct such a computer, or how it would perform set computations. The weights in the neural net would become something like capacitance at the gates of transistors. Computation, I suppose, would just be inference, or thinking?

Maybe with the help of LLM tools we will be able to design such things. As far as I know there is nothing like an analog FPGA, where you program the weights instead of doing whatever you do to an FPGA: making or breaking connections and telling LUTs their identity.
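For what it's worth, one commonly proposed way to "program the weights" is a resistive crossbar, where conductances play the role this comment gives to gate capacitances: Ohm's law does the multiplies and Kirchhoff's current law does the sums. A minimal NumPy sketch of the idea, with a made-up 2% device noise figure:

    import numpy as np

    rng = np.random.default_rng(0)

    G = rng.uniform(0.0, 1.0, size=(4, 8))   # weights stored as conductances
    V = rng.uniform(-1.0, 1.0, size=8)       # inputs applied as voltages

    I_ideal = G @ V                          # ideal crossbar output currents

    # Real devices drift and mismatch; model a 2% multiplicative error.
    G_noisy = G * (1 + rng.normal(0.0, 0.02, size=G.shape))
    I_noisy = G_noisy @ V

    print("ideal:", I_ideal)
    print("noisy:", I_noisy)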


You don't think we know how to construct an analog computer? We have decades of experience designing analog computers to run fire control systems for large guns.

https://maritime.org/doc/firecontrol/index.php


We also have a pretty decent amount of experience with (pulsed/spiking) artificial neural networks in analog hardware, e.g. [1]. Very energy efficient, but still hard to scale.

[1] https://www.kip.uni-heidelberg.de/Veroeffentlichungen/detail...


That’s a very cool abstract, thanks. I suppose it’s the plasticity that poses a pretty serious challenge.

Anyway, if this kind of computer were so great, maybe we should just encourage people to employ the human reproductive system to make more.

There’s a certain irony to critics of current AI. Yes, these systems lack certain capabilities that humans possess, it’s true! Maybe we should make sure we keep it that way?


It's possible, but analog multiplication is hard, and small analog circuits tend to be very noisy. I think there is a startup working on an accelerator chip based on this principle, though.


There are optical accelerators on the market that - I believe - do that already, such as https://qant.com/photonic-computing/


You lose a lot of stability. Each operation's result is slightly off, and the errors accumulate and compound. For deep learning in particular, many operations are carried out in sequence, and the error rate can become unacceptable.
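A quick simulation of that compounding, assuming (arbitrarily) a 1% multiplicative error per sequential operation:

    import numpy as np

    # If each operation multiplies the signal by (1 + eps), eps ~ N(0, sigma),
    # the relative error after `depth` operations grows ~ sqrt(depth) * sigma.
    rng = np.random.default_rng(0)
    sigma = 0.01                     # assumed 1% error per operation
    for depth in (1, 10, 100, 1000):
        err = (1 + rng.normal(0.0, sigma, size=(10_000, depth))).prod(axis=1)
        print(f"depth {depth:4d}: std of accumulated error ~{err.std():.3f}")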


Deep learning is actually very tolerant to imprecision, which is why it is typically given as an application for analog computing.

It is already common practice to deliberately inject noise into the network (dropout) at rates up to 50% in order to prevent overfitting.
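For the unfamiliar, a minimal sketch of (inverted) dropout, the noise-injection trick mentioned above:

    import numpy as np

    # Zero each activation with probability p at training time, then rescale
    # the survivors so the expected activation is unchanged.
    def dropout(x, p=0.5, rng=None):
        if rng is None:
            rng = np.random.default_rng()
        mask = rng.random(x.shape) >= p
        return x * mask / (1 - p)

    print(dropout(np.ones(10), p=0.5))   # roughly half the entries zeroed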


Isn't it just for inference? Also, differentiating through an analog circuit looks... interesting. Keep the inputs constant, wiggle one weight a bit, record how the output changed, move on to the next weight, repeat. Is there something more efficient, I wonder?
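What this describes is finite-difference weight perturbation, which costs one full circuit evaluation per weight. A toy version:

    import numpy as np

    # Estimate dL/dw by nudging each weight and re-measuring the output:
    # O(number of weights) forward passes, vs. O(1) extra for backprop.
    def fd_gradient(loss, w, eps=1e-3):
        g = np.zeros_like(w)
        base = loss(w)
        for i in range(w.size):
            w_pert = w.copy()
            w_pert.flat[i] += eps
            g.flat[i] = (loss(w_pert) - base) / eps
        return g

    loss = lambda w: ((w - 1.0) ** 2).sum()
    print(fd_gradient(loss, np.array([1.0, -2.0, 0.5])))   # ~ [0, -6, -1]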


Definitely, if your analog substrate implements matrix-vector multiplication (one of the most common approaches in this area), then your differentiation algorithm is the usual backpropagation, which has rank-1 weight updates. With some architectures this can be implemented in O(1) time simply by running the circuit in a "reverse" configuration (inputs become outputs and vice versa). With ideal analog devices, this would be many orders of magnitude more efficient than a GPU.
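A NumPy sketch of both points for a single linear layer, idealized (no device noise): the weight gradient is a rank-1 outer product, and the input gradient is the transposed multiply you would get by driving the array from the output side:

    import numpy as np

    rng = np.random.default_rng(0)
    W = rng.normal(size=(4, 8))      # weights programmed into the array
    x = rng.normal(size=(8, 1))      # input vector

    y = W @ x                        # forward pass: the analog MVM
    delta = 2 * (y - 1.0)            # upstream error dL/dy for L = sum((y-1)**2)

    dW = delta @ x.T                 # rank-1 weight update (outer product)
    dx = W.T @ delta                 # "reverse" configuration: transpose MVM
    print(dW.shape, dx.shape)        # (4, 8) (8, 1)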


TLC/QLC/MLC cells in SSDs are exactly this, so it is already used, and it gives you a sense of the limits of current technology.


> "TLC, QLC, MLC"

For those unaware of these acronyms (me):

TLC = Triple-Level Cell

QLC = Quad-Level Cell

MLC = Multi-Level Cell


For those unaware, MLC used to mean a two-level cell (two bits per cell).

Quad-level is the current practical maximum.
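The names are confusing because "level" in the marketing terms counts bits, while the cell physically has to resolve 2^bits distinct charge states, which is exactly the analog-precision wall discussed upthread:

    # Bits per cell vs. charge levels the cell must reliably distinguish.
    for name, bits in (("SLC", 1), ("MLC", 2), ("TLC", 3), ("QLC", 4)):
        print(f"{name}: {bits} bit(s)/cell -> {2 ** bits} levels")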



