This is actually desired though, at least by some programs. For example, say you have a function with a very expensive loop that repeatedly performs a null check and executes some extra code if the pointer is null, but never sets it. This function is inlined into a caller that dereferences the checked value, with no null check of its own, both before and after the loop--which proves to the compiler that it isn't null. So: do you want to tell the compiler it can't optimize out the null check and the extra code in the loop? That it can't reuse the value loaded at the first dereference instead of reloading it? If so, what is the compiler still allowed to optimize out or reorder?
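To make the scenario concrete, here's a minimal C sketch (all names are made up for illustration):

    #include <stdio.h>

    /* Expensive loop: re-checks the pointer on every iteration, runs some
       extra code if it's null, but never writes to it. */
    static long sum_with_check(const int *p, long n) {
        long total = 0;
        for (long i = 0; i < n; i++) {
            if (p == NULL) {                  /* the check in question */
                fprintf(stderr, "p was null at iteration %ld\n", i);
                return -1;                    /* the "extra code" */
            }
            total += *p;
        }
        return total;
    }

    long caller(const int *p, long n) {
        long before = *p;                  /* dereference: UB if p is null */
        long total = sum_with_check(p, n); /* gets inlined here */
        long after = *p;                   /* dereferenced again after */
        return before + total + after;
    }

Once sum_with_check is inlined, the surrounding dereferences let the compiler conclude p != NULL (a null p would already be UB), so it may delete the check and the error branch from every iteration--and since nothing in the loop stores to *p, it may also reuse the value loaded for `before` instead of reloading it.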
Now, to see why this might actually produce a bug in working code--say some other thread has access to the not-null value and racily (non-atomically) sets it to null. Or (since most compilers are very conservative about checks on values that escape a function, because they can't do proper alias analysis), some code accidentally overflows a buffer and overwrites the pointer with null while intending to write something else entirely. Suddenly, this obvious optimization becomes invalid!
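A minimal sketch of the racy variant (the global shared_p and both functions are hypothetical):

    #include <stddef.h>

    /* Global visible to multiple threads. No atomics: any concurrent
       write is a data race, which is UB in C11. */
    int *shared_p;

    /* Thread A: the programmer expects the per-iteration check to notice
       when thread B clears shared_p. */
    long worker(long n) {
        long total = 0;
        for (long i = 0; i < n; i++) {
            if (shared_p == NULL)     /* intended as a "stop" signal */
                return total;
            total += *shared_p;
        }
        return total;
    }

    /* Thread B: non-atomic write, racing with worker(). */
    void stop(void) { shared_p = NULL; }

Because the unsynchronized read is a data race, the compiler may assume no other thread writes shared_p during the loop, load it once, and hoist or delete the null check--so the intended "stop" signal is never observed. Making the accesses _Atomic is what actually forbids that transformation.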
Arguments to the effect of "the compiler shouldn't optimize out that loop by assuming the absence of undefined behavior" are basically arguments for compilers to leave tons of performance on the table, because sometimes C programs don't follow the standard (e.g. they forget to use atomics, or index out of bounds). It's a legitimate argument, but I don't think people would be too happy to find their C programs losing to Java in benchmarks at -O3, either.
There may be programs that want such behavior, but I've never intentionally written one. That's why I personally avoid C, and wish I didn't have to work in environments coded in C.
I would seriously accept everything running at half speed in exchange for the certainty of not being subject to C-level bugs. But as Rust grows in popularity, it looks like I won't need to worry about that.
Well, any code that triggers undefined behavior is already buggy by definition. I think it would be a lot more fruitful if, instead of blaming compilers for doing their job (optimizing code in a language that permits all sorts of potentially unsafe behavior), people enumerated the specific kinds of UB they take issue with. For example, a lot of people don't consider integer overflow, an oversized bitshift, nonterminating loops, type punning without a union, or "benign" data races to be automatic bugs in themselves. Some people don't even consider a null pointer dereference an automatic bug (but what about a field access through a null pointer, or an array index that happens to land on a valid, non-null page? Is the compiler allowed to lower field accesses to plain pointer arithmetic, or not?).
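Here's that last parenthetical in code (the struct layout is contrived so the offset lands on a plausibly mapped address):

    #include <stddef.h>

    struct big {
        char pad[1 << 16];
        int field;             /* offsetof(struct big, field) == 65536 */
    };

    int read_field(struct big *p) {
        /* If p == NULL this is UB per the standard, yet the obvious
           lowering is pointer arithmetic: a load from (char *)p + 65536,
           i.e. from address 65536 when p is null--possibly a mapped,
           non-null page. Whether the compiler may treat this access as
           proof that p != NULL (and e.g. remove a later null check) is
           exactly the question. */
        return p->field;
    }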
Anyway, this is all fine, but as you can imagine you lose a lot of optimizations that are facilitated by all that UB. So compiler authors should then counter with some way to opt back into the original undefined semantics (for instance, references in C++ and restrict pointers in C), or provide compile-time checking that prevents the misuse which defeats optimizations (e.g. Rust's Send+Sync for avoiding data races, or UnsafeCell for signaling the lack of restrict semantics / raw pointers for the lack of non-nullability).
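restrict is a good example of the opt-in route: the programmer deliberately takes back an "undefined if violated" contract in exchange for optimization. A minimal sketch:

    /* Without restrict, dst and src may alias, so the compiler must
       assume a store to dst[i] can change src[j] and must keep the loads
       and stores in order. With restrict, the programmer promises no
       overlap; violating the promise is UB, but honoring it unlocks
       vectorization and load/store reordering. */
    void scale(long n, double *restrict dst,
               const double *restrict src, double k) {
        for (long i = 0; i < n; i++)
            dst[i] = src[i] * k;
    }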