How so? All primitive operations are backward stable, and unlike addition and subtraction, division and multiplication are well-conditioned.
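A quick empirical illustration of that in Python (a sketch, not a proof: round-to-nearest makes each double multiply correctly rounded, so its relative error is at most 2^-53):

    import random
    from fractions import Fraction

    # Each IEEE 754 double multiply is correctly rounded, so its
    # relative error is bounded by 2^-53. Check that empirically.
    eps = Fraction(1, 2 ** 53)
    for _ in range(10_000):
        x, y = random.random(), random.random()
        exact = Fraction(x) * Fraction(y)   # exact rational product
        computed = Fraction(x * y)          # what the hardware returned
        assert abs(computed - exact) <= eps * abs(exact)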


It's more about range than precision. Products of small numbers get small a lot faster than sums do, and division involving very small numbers runs into the same trouble. The smallest possible quad-precision number is a lot smaller than the smallest possible double-precision one.
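For a concrete sense of scale (a minimal double-precision sketch; the quad figure is IEEE binary128's):

    # A product of 400 factors of 0.1 is 1e-400, below even double
    # precision's denormal floor (~4.9e-324), so it underflows to zero.
    p = 1.0
    for _ in range(400):
        p *= 0.1
    print(p)          # 0.0 -- underflowed
    # The corresponding sum is trivially representable:
    print(400 * 0.1)  # 40.0
    # Quad precision (binary128) bottoms out near 3.4e-4932 instead.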


But the main feature of floating point is that you keep the same relative precision at every scale. As long as you're not hitting infinity or denormals, multiplying doesn't lose information the way adding a big number to a small one does.
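Compare (a small double-precision sketch):

    # Addition absorbs the small operand entirely once the exponents
    # differ by more than ~16 decimal digits:
    print(1e16 + 1.0 == 1e16)   # True -- the 1.0 vanished
    # Multiplication keeps full relative precision at any scale:
    print(1e-300 * 3.0)         # 3e-300, no information lost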

Do stats often deal with distinct probabilities below 10^-300? And do they need to store them everywhere, or just in a couple of variables that could do something clever?


> Do stats often deal with distinct probabilities below 10^-300?

Yes, and with a very wide dynamic range, though you try hard to avoid it using other tricks. A lot of methods involve something resembling optimization of a likelihood function, which is often (naively) a product of a lot of probabilities, potentially hundreds or more, or might involve the ratio of two very small numbers. Starting far from the optimum those probabilities are often tiny, and even close to the optimum they can still be unreasonably small while spanning an extremely wide dynamic range. When you really can't avoid it, there are usually only a few operations where increased precision helps, and even then it's usually a band-aid in lieu of a better algorithm or a trick that avoids explicitly computing such small values. Still, I've had a few cases where it helped a bit.
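The most common such trick is working in log space, where the product becomes a sum (a minimal sketch with made-up probabilities, not necessarily the trick the parent had in mind):

    import math

    # 50 independent probabilities of 1e-12: the raw product is 1e-600,
    # which underflows double precision, but the log-likelihood is a
    # perfectly ordinary number.
    probs = [1e-12] * 50
    print(math.prod(probs))                 # 0.0 -- underflow
    print(sum(math.log(p) for p in probs))  # about -1381.55

A ratio of two tiny likelihoods likewise becomes a difference of two ordinary log-likelihoods.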



