
Divergent math library implementations are the other main category, and in many practical cases you may also have to worry about the parallelization factor changing things. For completeness' sake, I might as well add in approximate functions, but if you're using an approximate inverse-square-root instruction, well, you should probably expect the result to differ on different hardware.
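
To make the parallelization point concrete, here's a toy C sketch (numbers chosen purely for illustration): summing the same values in the order a two-way parallel reduction would use gives a different answer than the sequential loop, because float addition isn't associative.

    #include <stdio.h>

    int main(void) {
        float vals[4] = {1e8f, 1.0f, -1e8f, 1.0f};

        /* Sequential left-to-right sum. */
        float seq = 0.0f;
        for (int i = 0; i < 4; i++)
            seq += vals[i];

        /* "Two-thread" order: a partial sum per half, then combine. */
        float part0 = vals[0] + vals[1];  /* 1e8 + 1 rounds back to 1e8 */
        float part1 = vals[2] + vals[3];  /* -1e8 + 1 rounds back to -1e8 */
        float par = part0 + part1;

        printf("sequential = %g, pairwise = %g\n", seq, par);  /* 1 vs 0 */
        return 0;
    }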

On the plus side, x87 excess precision is largely a thing of the past, and we've seen some major pushes towards getting rid of FTZ/DAZ (I think we're at the point where even the offload architectures are mandating denormal support?). Assuming Intel figures out how to fully get rid of denormal penalties on its hardware, we're probably a decade or so out from making -ffast-math no longer imply denormal flushing, yay. (We're also seeing a lot of progress on high-speed implementations of correctly-rounded libm functions, so I expect standard libraries to start requiring correctly-rounded implementations as well.)
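
As a quick illustration of what FTZ/DAZ actually does (an x86-with-SSE sketch; the intrinsics are the standard <pmmintrin.h> MXCSR macros, and -ffast-math builds typically flip the same bits at startup): turning on flush-to-zero/denormals-are-zero makes a subnormal-valued computation come out as exactly 0 instead of the tiny IEEE result.

    #include <stdio.h>
    #include <pmmintrin.h>  /* _MM_SET_FLUSH_ZERO_MODE, _MM_SET_DENORMALS_ZERO_MODE */

    static double tiny_quotient(void) {
        volatile double x = 1e-310;  /* already subnormal for double */
        volatile double y = 1e10;
        return x / y;                /* even deeper in the subnormal range */
    }

    int main(void) {
        printf("default IEEE: %g\n", tiny_quotient());  /* ~1e-320 */

        _MM_SET_FLUSH_ZERO_MODE(_MM_FLUSH_ZERO_ON);
        _MM_SET_DENORMALS_ZERO_MODE(_MM_DENORMALS_ZERO_ON);

        printf("FTZ/DAZ on:   %g\n", tiny_quotient());  /* flushed to 0 */
        return 0;
    }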




The definition I use for determinism is "same inputs and same order = same results", down to the compiler level. All modern compilers on all modern platforms that I've tested take steps to ensure that for everything except transcendental and special functions (where it'd be an unreasonable guarantee).
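
In code, that definition amounts to comparing bit patterns rather than using an epsilon tolerance; a minimal sketch (toy values, C):

    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>

    /* Same inputs, same operations, same order. */
    static double accumulate(const double *xs, int n) {
        double acc = 0.0;
        for (int i = 0; i < n; i++)
            acc += xs[i] * xs[i];
        return acc;
    }

    int main(void) {
        double xs[] = {0.1, 0.2, 0.3, 0.4};
        double a = accumulate(xs, 4);
        double b = accumulate(xs, 4);

        uint64_t abits, bbits;
        memcpy(&abits, &a, sizeof abits);
        memcpy(&bbits, &b, sizeof bbits);

        printf("bit-identical: %s\n", abits == bbits ? "yes" : "no");
        return 0;
    }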

I'm somewhat less interested in correctness of the results, so long as they're consistent. rlibm and related are definitely neat, but I'm not optimistic they'll become mainstream.



