
This is overstating the extent of the replication "crisis" (which is a terrible name). The reality is that fraud rates are much lower than in pretty much any other part of society. The irony is also that the problems have increased significantly because scientists constantly have to justify the "monetary value" of their research. That incentivizes overstating impact (everyone who's ever written a grant application knows how ridiculous it is: everything is supposed to be high gain/high risk, but at the same time you're supposed to already know everything you'll find out and how you will use the results).


One reason that people often overlook is that it's much easier (and much less error-prone for the user) to give an instruction that uses the CLI instead of a GUI tool. E.g. if someone asked how to add a new user who's in the usb group on Linux, I would always tell the person `adduser --ingroup usb [username]` instead of giving GUI instructions, which are longer and depend on which desktop the person uses.


If you think a single add-user command is comparable to things like using GrapheneOS or adb USB injection chains, then you've missed the point here.


It is mentioned. In the packages it says from Google to Proton Mail.


There is a Neovim "fork" of org mode that defines a spec for its format (which differs somewhat from the org mode format though).


It's worth remembering that airplanes have to take off, land and taxi as well. So while radiation levels might be safe while the plane is flying at altitude, things might be very different where planes have to take off and land.


I don't quite understand what you mean. Are you talking about color stops? You can add many different ones in Inkscape (the UI of the gradient tool is admittedly quite bad though). Or do you mean a sort of double gradient, i.e. it goes from red to blue from left to right, but from opaque to transparent from top to bottom? I never had to use such a gradient, so I'm not sure if it's possible.


Take a triangle, put a different color on each vertex, tri-interpolate


This is not supported in SVG. There was a Mesh Gradient feature planned for SVG v2.0, but AFAIK that was removed from the draft. It's a shame. Here is an article discussing that. (2018, mind you)

https://librearts.org/2018/05/gradient-meshes-and-hatching-t...

EDIT: I assumed this is an SVG renderer, but now I think it may not be bound by SVG limitations.


I was talking with a colleague the other day and we came to the conclusion that, in our experience, if you're using LLMs as a programming aid, models are really being optimised for the wrong things.

At work I often compare locally run 4-30B models against various GPTs (we can only use non-local models for a few things, because of confidentiality issues). While e.g. GPT-4o gives better results on average, the chance of it making parts of the response up is high enough that one has to invest a significant amount of effort to check and iterate over the results. So the overall effort is not much lower compared to the low-parameter models.

The problem is both are just too slow to really iterate quickly, which makes things painful. I'd rather have a lower quality model (but with a large context) that gives me near-instant responses instead of a higher quality model that is slow. I guess that doesn't give you the same headlines as an improved score on some evaluation.


I think this article overstates the importance of the problems even for scientific software. In the scientific code I've written, noise processes are often orders of magnitude larger than what is discussed here, and I believe this applies to many (most?) simulations modelling the real world (i.e. physics, chemistry, ...). At the same time, enabling fast-math has often yielded a very significant (>10%) performance boost.

I find the discussion of -fassociative-math particularly interesting, because I assume that most people writing code that translates a mathematical formula into a simulation will not know which order of operations would be the most accurate, and will simply codify their own derivation of the equation to be simulated (which could have the operations in any order). So if this switch changes your results, it probably means that you should have a long hard look at the equations you're simulating and at which ordering gives you the most correct results.
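
To make this concrete, here is a tiny C sketch (made-up values, nothing from the article) of two mathematically identical groupings of the same sum giving different floating-point answers; -fassociative-math is what allows the compiler to switch between such groupings:

    #include <stdio.h>

    int main(void)
    {
        double big = 1e20, small = 1.0;
        /* both expressions are exactly 1.0 in real-number arithmetic */
        printf("%g\n", (small + big) - big); /* prints 0: small is rounded away */
        printf("%g\n", small + (big - big)); /* prints 1 */
        return 0;
    }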

That said I appreciate that the considerations might be quite different for libraries and in particular simulations for mathematics.


It would be nice if there were some syntax for "math order matters, this is the order I want it done in".

Then all other math will be fast-math, except where annotated.


I thought most languages have this? If you simply write a formula, the operations are ordered according to the language specification. If you want a different ordering, you use parentheses.

Not sure how that interacts with this fast-math thing; I don't use C.


That’s a different kind of ordering.

Imagine a function like Python's `sum(list)`. In the abstract, Python should be able to add those values in any order it wants. Maybe it could spawn threads so that one sums the first half of the list while another sums the second half at the same time, and then you return the sum of those intermediate values. You could imagine a clever `sum()` being many times faster, especially using SIMD instructions or a GPU or something.

But alas, you can’t optimize like that with common IEEE-754 floats and expect to get the same answer out as when using the simple one-at-a-time addition. The result depends on what order you add the numbers together. Order them differently and you very well may get a different answer.

That’s the kind of ordering we’re talking about here.
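
A small C sketch of that effect (made-up numbers, and C rather than Python simply to keep it self-contained): summing the same array one element at a time versus in two halves gives two different answers, both "correct" under IEEE-754 rounding:

    #include <stdio.h>

    int main(void)
    {
        /* one large value followed by many small ones */
        float x[16];
        x[0] = 1e8f;
        for (int i = 1; i < 16; i++) x[i] = 1.0f;

        /* one-at-a-time, left to right: every +1.0f is rounded away */
        float seq = 0.0f;
        for (int i = 0; i < 16; i++) seq += x[i];

        /* "two threads": sum each half separately, then combine */
        float lo = 0.0f, hi = 0.0f;
        for (int i = 0; i < 8; i++)  lo += x[i];
        for (int i = 8; i < 16; i++) hi += x[i];

        printf("sequential: %.1f\n", seq);     /* 100000000.0 */
        printf("two halves: %.1f\n", lo + hi); /* 100000008.0 */
        return 0;
    }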


The article mentioned that gcc and clang have such extensions. Having it in the language is nice though, and that's the approach Zig took.


I worked in CAD, robotics and now semiconductor optics. In every single field, floating-point precision down to the very last digits was a huge issue.


"precision" is an ambiguous term here. There's reproducibility (getting the same results every time), accuracy (getting as close as possible to same results computed with infinite precision), and the native format precision.

-ffast-math sacrifices both the first and the second for performance. Compilers usually sacrifice the first for the second by default with things like automatic FMA contraction. This isn't a necessary trade-off, it's just easier.
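
As a minimal illustration (hand-picked values), an FMA keeps the full product a*b before adding c, so a contracted a*b + c can give a different result than the separately rounded version:

    #include <math.h>
    #include <stdio.h>

    int main(void)
    {
        double a = 1.0 + 0x1p-27, b = 1.0 - 0x1p-27, c = -1.0;
        /* a*b rounds to exactly 1.0 in double, so the two-step version gives 0,
           while fma() keeps the unrounded product and returns -2^-54;
           the plain expression may itself be contracted, depending on -ffp-contract */
        printf("a*b + c      = %g\n", a * b + c);
        printf("fma(a, b, c) = %g\n", fma(a, b, c));
        return 0;
    }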

There are very few cases where you actually need accuracy down to the ULP though. No robot can do anything meaningful with femtometer-level precision, for example. Instead you choose a development balance between reproducibility (relatively easy) and accuracy (extremely hard). In robotics, that will usually swing a bit towards reproducibility. CAD would swing more towards accuracy.


Interesting, I stand corrected. In most of the fields I'm aware of, one could easily work in 32-bit without any issues.

I find the robotics example quite surprising in particular. I think the precision of most input sensors is less than 16 bits or so. If your inputs have that much noise on them, how come you need so much precision in your calculations?


The precision isn't uniform across the range of possible inputs. This means you need a higher bit depth, even though "you aren't really using it", just so you can establish a good base precision that you are sure you are hitting across the whole range. The part where you say "most sensors" is doing a lot of the heavy lifting here.
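
As a rough illustration (a small C sketch, not tied to any particular sensor), the spacing between adjacent 32-bit floats grows with magnitude, so the precision you actually get depends on where in the range your values sit:

    #include <math.h>
    #include <stdio.h>

    int main(void)
    {
        /* distance to the next representable float above each value */
        printf("spacing near 1.0: %g\n", nextafterf(1.0f, 2.0f) - 1.0f); /* ~1.2e-7 */
        printf("spacing near 1e6: %g\n", nextafterf(1e6f, 2e6f) - 1e6f); /* 0.0625 */
        return 0;
    }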


It matters for reproducibility between software versions, right?

I work in audio software and we have some comparison tests that compare the audio output of a chain of audio effects with a previous result. If we make some small refactoring of the code and the compiler decides to re-organize the arithmetic operations then we might suddenly get a slightly different output. So of course we disable fast-math.

One thing we do enable, though, is flushing denormals to zero. That is predictable behavior and it saves some execution time.
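
For reference, a minimal sketch of how that's typically done on x86 with SSE intrinsics (the helper name is made up; other platforms have their own mechanisms):

    #include <xmmintrin.h> /* _MM_SET_FLUSH_ZERO_MODE */
    #include <pmmintrin.h> /* _MM_SET_DENORMALS_ZERO_MODE */

    /* Call once per audio thread before processing: denormal results are
       flushed to zero (FTZ) and denormal inputs are treated as zero (DAZ). */
    static void enable_flush_to_zero(void)
    {
        _MM_SET_FLUSH_ZERO_MODE(_MM_FLUSH_ZERO_ON);
        _MM_SET_DENORMALS_ZERO_MODE(_MM_DENORMALS_ZERO_ON);
    }

    int main(void)
    {
        enable_flush_to_zero();
        /* ... process audio ... */
        return 0;
    }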


Yeah that is the killer for me. I'm not particularly attached to IEEE semantics. Unfortunately the replacement is that your results can change between any two compiles, for nearly any reason. Even if you think you don't care about tiny precision variances: consider that if you ever score and rank things with an algorithm that involves floats, the resulting order can change.


What exactly is the trademark violation that they are "obliged" to defend? Somebody putting the locations and opening times of their restaurants on a map?


In this case it was their logo mark. AFAIK that's the only claim that they actually made as well. Reading anything else into this story is ridiculous.


What do you believe is actually covered by trademark law? Maybe the name of the website, but clearly the location and open status can't be; that would mean Google and many other map providers are violating trademarks on a massive scale. Or, another example: those websites with maps of petrol stations and their prices?

