
> So this mode needs to set user expectations appropriately: your code breaking between compiler releases is a feature, not a bug.

Good luck. I feel that the C++ community values backward compatibility way too much for this to succeed. Most package maintainers are not going to like it one bit.



There has been plenty of breakage throughout ISO revisions.

The biggest problem is ABI. In theory that isn't something the standard cares about; in practice all compiler vendors do, so proposals that break the ABI of existing binary libraries tend to be an issue.

Another issue is that WG21 nowadays is full of people without compiler experience, willing to push through their proposals even without implementations, which compiler vendors are then supposed to suck up and implement somehow.

Around the C++14 timeframe it became cool to join WG21, and now the process is completely broken; there are more than 200 members.

There is no overall vision guiding things per se; everyone gets to submit their pet proposal and then has to champion it.

Most of these folks aren't that keen on security, hence the kind of baby steps that have been happening.


Compilers at least allow specifying the standard to target, which solves the ISO revision issue. But breaking within the same -std=... setting is quite a bit more annoying, forcing either indefinite patching of otherwise-complete, functional codebases, or keeping potentially every compiler version on your system, both of which are pretty terrible options.


Breaking within the same std is impossible to prevent in compiled languages with enough freedom in how things are built.

As for the C ABI many talk about, most of them have no idea what they are actually talking about.

First of all, it is the OS ABI, in operating systems that happened to be written in C.

Secondly, even C binary libraries have plenty of breakage opportunities within the same std and compiler.

ABI stability, even in languages that kind of promise it, is in reality a half promise.

Bytecode, or some part of the language, is guaranteed to be stable while being tied to a specific version; not all build flags fall under the promise, and not much is promised about the standard library.

Even good examples that go to great lengths, like Java, .NET or Swift, aren't fully ABI safe.


> First of all, it is the OS ABI, in operating systems that happened to be written in C.

It may be per-OS (I wouldn't try linking Linux and NT object files even if they were both compiled from C by GCC with matching versions and everything), but enough details come from C that I think it's fair to call it a C ABI. Like, I can write Unix software in Pascal, but in order to write to stdout that code is gonna have to convert Pascal strings into C strings. OTOH, Pascal binaries using Pascal libraries can use Pascal semantics even on an OS that uses C ABIs.
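(To make the string point concrete, here's a rough sketch in C of the conversion such a Pascal program, or its runtime, would have to do before handing a string to a C-ABI interface; the struct and function names are made up for illustration.)

    #include <stdio.h>
    #include <string.h>

    /* A Pascal-style short string: a length byte followed by the
       characters, with no NUL terminator. */
    struct pascal_string {
        unsigned char len;
        char data[255];
    };

    /* Before calling anything that expects a C string, the bytes have to
       be copied out and NUL-terminated -- the conversion in question. */
    void write_pascal_string(const struct pascal_string *s) {
        char buf[256];
        memcpy(buf, s->data, s->len);
        buf[s->len] = '\0';
        fputs(buf, stdout);
    }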


Strings are the easy part.

Try linking two binary libraries on Linux, both compiled with GCC, but not with exactly the same compiler flags or the same data padding for things like structures.
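A minimal sketch of the padding point, assuming one of the two builds packs its structures (e.g. via #pragma pack or a flag like -fpack-struct): the "same" struct ends up with different sizes and field offsets, so two binaries built with different settings disagree about its layout.

    #include <stdio.h>
    #include <stddef.h>
    #include <stdint.h>

    /* Natural alignment: on x86-64, payload typically sits at offset 8
       and the struct is 16 bytes. */
    struct msg_default {
        uint32_t type;
        uint64_t payload;
    };

    /* Packed build of the very same fields: payload moves to offset 4
       and the struct shrinks to 12 bytes. */
    #pragma pack(push, 1)
    struct msg_packed {
        uint32_t type;
        uint64_t payload;
    };
    #pragma pack(pop)

    int main(void) {
        printf("default: size=%zu payload@%zu\n",
               sizeof(struct msg_default),
               offsetof(struct msg_default, payload));
        printf("packed : size=%zu payload@%zu\n",
               sizeof(struct msg_packed),
               offsetof(struct msg_packed, payload));
        return 0;
    }

Pass a struct like that across a library boundary where the two sides picked different settings and data gets silently misread, even though both sides are "the same C compiled by the same GCC".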

Since committee people can explain it even better,

"To Save C, We Must Save ABI"

https://thephd.dev/to-save-c-we-must-save-abi-fixing-c-funct...


Sorry, this is nonsense. Binaries link just fine on Linux. If you use a compiler flag that changes the ABI, then you are on your own, of course, but the GCC documentation makes it very clear which specific flags those are. There are some corner cases where you get problems if you use different compilers, e.g. atomic alignment (by adopting the broken C++ design into C) and some other corner cases where compilers did things slightly differently.


> e.g. atomic alignment (by adopting the broken C++ design into C)

I would like to learn more about that. Do you mean this:

https://gcc.gnu.org/bugzilla/show_bug.cgi?id=65146


Things like this, resulting in different alignment between Clang and GCC on x86_64 for _Atomic struct { char a[3]; }; see: https://godbolt.org/z/v5hsjhzj9

The problem is that in C++ these atomics are library types, but in C they are built-in types which should have a clearly specified ABI. But the goal was to make them compatible with C++ library types, which is a rather stupid idea and pulls in even more problems.
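A small probe for the divergence being discussed: compile it with both gcc and clang on x86_64 and compare the numbers; per the bug report above, the two compilers disagree about the size/alignment of an _Atomic struct of three chars.

    #include <stdio.h>

    typedef struct { char a[3]; } s3;

    int main(void) {
        printf("plain  : size=%zu align=%zu\n",
               sizeof(s3), _Alignof(s3));
        printf("_Atomic: size=%zu align=%zu\n",
               sizeof(_Atomic(s3)), _Alignof(_Atomic(s3)));
        return 0;
    }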


Assuming you have control over all binaries.

The fact that one gets issues with multiple compilers is exactly yet more proof that there is no such thing as an official C ABI.


There are official platform ABIs. They are not called C ABIs, though.


It's certainly not impossible to write code that breaks, or modify a library in an ABI-incompatible way, but ABI stability, at least on Linux, does largely Just Work™. A missing older shared library can be quite problematic, but that's largely it.

And while, yes, there are times where ABIs are broken, compiler versions affecting things would add a whole other uncontrollable axis on top of that. I would quite like to avoid a world of "this library can only be used by code compiled with clang-25" as much as possible.


"Works most of the time, probably" isn't really the meaning of stable.


Can't solve the issue of "you just don't have the library (or a specific version thereof) installed".

But you can make it worse by changing "You must have version X of library Y installed" to "You must have version X of library Y compiled by compiler Z installed".

As-is, one can reasonably achieve ABI stability for their C library if they want to; really all it takes is "don't modify exposed types or signatures of exposed functions" and "don't use intmax_t", and currently you can actually break the latter.
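As a sketch of what following those two rules looks like in practice (all names here are hypothetical), a header that only exposes an opaque handle and fixed-width types leaves the library free to change its internals without breaking already-compiled callers:

    #ifndef FOO_H
    #define FOO_H

    #include <stdint.h>

    /* Layout stays private to the .c file, so fields can be added or
       reordered without affecting callers. */
    typedef struct foo foo;

    /* Exposed signatures never change, and use fixed-width types rather
       than intmax_t, whose definition the platform could in principle
       widen out from under you. */
    foo    *foo_open(const char *path);
    int64_t foo_count(const foo *f);
    void    foo_close(foo *f);

    #endif /* FOO_H */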


You forgot about binaries compiled with incompatible build or linker flags.

There is a reason why commercial SDKs ship several combinations of their libraries.

Release, debug, multi-threaded, with math emulation, with fast math, specific CPU ISA with and without SIMD, and these are only the most common ones.


Release vs debug shouldn't affect ABI (or, at least, the library author can decide whether it does; all it takes is... not having `#if DEBUG` in your exposed header files changing types or function signatures).
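For illustration (hypothetical names), this is the kind of exposed header that parenthetical is warning about: the type's layout depends on a build-mode macro, so a release-mode caller linked against a debug-mode build of the library silently disagrees about the struct.

    #include <stddef.h>

    struct parser {
        const char *cursor;
        size_t      remaining;
    #ifdef DEBUG
        size_t      bytes_consumed;   /* extra field only in debug builds:
                                         size and offsets now differ between
                                         the two configurations */
    #endif
    };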

Multi-threading doesn't affect ABI in any way at all.

fast-math doesn't affect ABI (maybe you mean the setting of FTZ/DAZ? but modern clang & gcc don't do that either, and anyway that breaks everything float-related in general; the ABI itself is one of the few float things that doesn't immediately break, really).

Presence or absence of SIMD extensions, hard-float, or indeed any other form of extension, also doesn't modify the ABI by itself.

There's a separate -mabi=... that controls hard-float & co, but generally people don't touch that, and those that do, well, have a clear indication of "abi" in "-mabi" that tells them that they're touching something about ABI. (SIMD does have some asterisks on passing around SIMD types, but gcc does give a -Wpsabi warning when using a natively-unsupported SIMD type in a function signature; and this affects only very specialized low-level libraries, and said functions should be marked via an attribute to assume the presence of the necessary intended extension anyway, and probably are header-only and thus unaffected in the first place)
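A sketch of that SIMD asterisk, assuming x86 and GCC/Clang: passing a vector type by value in a public signature drags the vector extension into the calling convention, which is what -Wpsabi complains about, and a target attribute is the kind of per-function marking mentioned above.

    #include <immintrin.h>

    /* Compiled without -mavx, a definition or call involving this
       signature draws a warning along the lines of "AVX vector argument
       without AVX enabled changes the ABI". */
    __m256 scale(__m256 v, float s);

    /* Declaring the requirement on the function itself makes the
       assumption explicit. */
    __attribute__((target("avx")))
    __m256 scale_avx(__m256 v, float s) {
        return _mm256_mul_ps(v, _mm256_set1_ps(s));
    }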

That said, it would probably make sense to have a way to configure -mabi at the function level (if this doesn't already exist).

General CPU ISA is one thing that does inescapably affect the ABI of compiled programs; but you can have a stable ABI within one ISA. Yes, there's the wider requirement of "You must have version X of library Y for ISA W installed", but "You must have version X of library Y for ISA W compiled by compiler Z installed" is still worse.


Forgetting that C now has threading capabilities, some of which can get exposed via the ABI?

C89 was a long time ago.

We are not talking about what gcc and clang do in their specific implementations; we are talking about C.

All those examples with compiler flags are exactly workarounds for the one true ABI that C doesn't actually have.


I'm of course not saying that C has a single universal ABI. But for any single platform (OS+ISA) where it is possible and meaningful to have shared libraries in the first place, there's a pretty clear single ABI to follow, so the distinction is, for practical purposes, completely meaningless. (ok windows does have some mess, but that's windows for you, it'll achieve a mess in anything)

Still have no clue what you mean by threading; sure, threads exist, even officially so in C11, but they still in no way whatsoever affect ABI any more than any other part of the standard library does, i.e. it's "as stable as the stdlib impl is".


This is honestly what pisses me off about the whole ABI thing. The ABI is defined by the OS, not the compiler. The compiler just implements the ABI, but somehow everyone lets their OS be defined by what a particular C/C++ implementation does. This then leads to FFI realistically only being possible by using a C/C++ compiler for interfacing, which defeats the point of an OS wide ABI.


I would consider the compiler to be part of the OS. You can't use an OS, if there is no way to create programs for it.


Assuming the code is position independent, why can't the linker translate the ABI?


Maybe some things could be translated by a linker, but a linker can't change the size/layout of an in-memory data structure, and there's no info on what to translate from anyway, even if info were added on what to translate to.


Data sizes, alignment, the way stuff is loaded into registers, all that can change.


My favourite weird ABI choice is: Who does what for a barrier? The barrier needs one party to do work for a full fence, but which party that should be doesn't matter... There can be an x86 FENCE over on the store side, or we can put the x86 FENCE on the load. We could do both but that's pointless, so don't do that. However if we do neither we haven't built a barrier and now there's probably a horrible bug.
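In C11-atomics terms, the choice being described looks roughly like this (which instruction sequence each side gets is the compiler's mapping decision, nothing visible in the source):

    #include <stdatomic.h>

    atomic_int flag;

    void publisher(void) {
        /* Convention A: the store side pays -- e.g. MOV + MFENCE, or a
           single XCHG -- and the matching load is a plain MOV. */
        atomic_store_explicit(&flag, 1, memory_order_seq_cst);
    }

    int consumer(void) {
        /* Convention B instead puts the fence on the load and leaves the
           store plain. Either side may carry it, but if the store was
           built by a compiler that expects the load to pay, and the load
           by one that expects the store to pay, neither emits a fence and
           the barrier quietly isn't there. */
        return atomic_load_explicit(&flag, memory_order_seq_cst);
    }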


I wish the additional proposal that would have added Rust-like editions along with C++ modules had been accepted. So sad it didn't pass.


I don't like that statement (or that whole paragraph) one bit either. My packages breaking between compiler releases is most definitely a big fat bug.

If bounds checks are going to be added, cool, -fstl-bounds-check. Or -fhardened like GCC. But not by default.

Working existing code is working existing code, I don't care if it looks "suspicious" to some random guy's random compiler feature.


I'm kind of with you on the coding-style warning flags. It does really bother me that some opinionated person has decided that the way I use parentheses needs to be punished.

But I totally disagree with your second point. Running code often has real problems with race conditions, error handling, unwanted memory reuse, out of bounds pointers, etc. If a new version of the compiler can prove these things for me, that's invaluable.


I too love those features. Just put them behind an option, so that existing scripts etc. still continue to work.

If many of those features are being added and the flags might add up to become a pain, then even a group flag -f-new-safety-features or whatever.


For many, backwards compatibility == long term employment.



