D does compile time function evaluation every time there's a ConstExpression in the grammar. It did this back in 2007. It required no changes to the grammar or syntax, it just enhanced constant folding to be able to do function calls.
We aren't talking about just compile time function evaluation of ConstExpressions. C++ had that all the way back in C++11, and many compilers were doing CTFE long before that as an optimisation (but as a programmer you couldn't really control when that would happen).
Compilers are always allowed to do more CTFE than what the C++ standard specifies, and many do. The standard just moves the lower bar for what all compilers must now evaluate at compile time, and what a C++ programmer may expect from all compilers.
Since C++11, they have been massively extending the subset of the language that can be evaluated at compile time. C++14 allowed compile-time evaluation of loops and conditionals, along with local variables. C++20 allowed CTFE functions to do memory allocation (as long as it was also freed), along with calling object constructors/destructors.
The main new thing in C++26 is the ability to do CTFE placement new, which is something I remember missing the last time I tried to do fancy C++ stuff, along with marking large chunks of the standard library as constexpr.
It has been a very exciting fifteen years. As a minor counter argument, other languages will let you run any code at compile time. Because it's code. And it can run whenever you want. C++ has that too I suppose, you just have to weave some of your control flow through cmake.
A big discussion item for constexpr in C++ was evaluation of floating point at compile time.
Because depending on your current running CPU flags, you can easily end up in a case where a function does not give the same result at run time versus at compile time, confusing users. How does that work in D (and, for that matter, any other language with compile-time evaluation)?
Here's the thing - that cannot be fixed. Consider the x87 FPU. It does all computations to 80 bits, regardless of how you set the flags for it. The only way to get it to round to double/float is to write the value out to memory, then read it back in.
Java tried this fix. The result was a catastrophe, because it was terribly slow. They backed out of it.
The problem has since faded away, as x86_64 has XMM registers for float/double math.
C still allows float to be done in double precision.
D evaluates CTFE in the precision specified by the code.
More to the point, constexpr does supply something D doesn't do. Whether D runs a calculation at compile time or run time is 100% up to the programmer. The trigger is a Constant-Expression in the grammar, a keyword is not required.
BTW, D's ImportC C compiler does the same thing - a constant expression in the grammar is always evaluated at compile time! (Whereas other C compilers just issue an error.)
int sum(int a, int b) { return a + b; }
_Static_assert(sum(1,2) == 3, "message"); // works in ImportC
_Static_assert(sum(1,2) == 3, "message"); // gcc error: expression in static assertion is not constant
Tbf, C and C++ compilers have done constant folding for function calls since forever too (as has pretty much any compiled language, AFAIK), just not as a language feature but as an optimizer feature:
Because C++ is a completely overengineered boondoggle of a language where every little change destabilizes the Jenga tower that C++ had built on top of C even more ;)
int sum(int a, int b) { return a + b; }
_Static_assert(sum(1,2) == 3, "message");
gcc -c -O x.c
x.c:2:17: error: expression in static assertion is not constant
_Static_assert(sum(1,2) == 3, "message");
Constexpr as a keyword which slowly applies in more places over decades is technically indefensible. It has generated an astonishing quantity of work for the committee over the years, though, which in some sense is a win.
I think there's something in the psychology of developers that appreciates the byzantine web of stuff which looks like it will work but doesn't.
Instead of annotating functions with "maybe this will be evaluated by the compiler", you could simply not do that, and the compiler would still evaluate them as a QoI (quality of implementation) thing. The usual justification is that it's a helpful comment for developers. Consteval is much the same idea, and constinit also.
An alternative design would be "global initialisations are evaluated at compile time where possible, like everything else, and you don't need to annotate anything to get that". Which would require exactly the same compiler work as constexpr but wouldn't leave that word scattered around codebases in similar fashion to register.
I always wondered why constexpr needs to be an explicit marker. We could define a set of core constexpr things (actually this already exists in the Standard) and then automatically make something constexpr if it only contains constexpr things. I don't want to have to write constexpr on every function to semantically "allow" it to be constexpr even though functionally it already could be done at compile time...
Same story with noexcept too.
One reason which matters with libraries is that slapping constexpr on a function is, in a way, a promise by the author that the function is indeed meant to be constexpr.
If the author then changes the internals of the function so that it can no longer be constexpr, then they've broken that promise. If constexpr was implicit, then a client could come to depend on it being constexpr and then a change to the internals could break the client code.
I think being explicit is a good thing in C++. Suppose there were no constexpr in C++ and the following worked:
inline int foo(int x) { return x + 42; }
int arr[foo(1)];
I think it would qualify as spooky action at a distance if modifying foo caused arr to be malformed. And if they are in different libraries, it restricts the ways the original function can be changed, making backwards compatibility slightly harder.
This would make more sense if constexpr was actually constant like say, Rust's const.
The Rust const fn foo, which gives back x + 42 for any x, is genuinely assured to be executed at compile time when given a constant parameter. If we modify the definition of foo so that it's not constant, the compiler rejects our code.
But C++ constexpr just says "Oh, this might be constant, or it might not, and, if it isn't don't worry about that, any constant uses will now magically fail to compile", exactly the spooky action at a distance you didn't want.
When originally conceived it served more or less the purpose you imagine, but of course people wanted to "generalize" it to cover cases which actually aren't constant and so we got to where we are today.
Slapping the constexpr keyword on a function is useless by itself, but it becomes useful when you combine it with a constexpr or constinit variable. Which is not all that different from Rust:
// C++
constexpr Foo bar() { /* ... */ }
constexpr Foo CONSTANT = bar(); // Guaranteed to be evaluated at compile time
constinit Foo VARIABLE = bar(); // Guaranteed to be evaluated at compile time
// Rust
const fn bar() -> Foo { /* ... */ }
const CONSTANT: Foo = bar(); // Guaranteed to be evaluated at compile time
static VARIABLE: Foo = bar(); // May or may not be evaluated at compile time
So Rust is actually less powerful than C++ when it comes to non-constant globals because AFAIK it doesn't have any equivalent to constinit.
That Rust constant named CONSTANT is an actual constant (like an old-school #define), whereas in C++ what you've named CONSTANT is an immutable variable with a constant initial value that the language promises you can refer to from other constant contexts. This is a subtle difference, but it's probably at the heart of how C++ programmers end up needing constfoo, constbar, const_yet_more_stuff, because their nomenclature is so screwed up.
The VARIABLE in Rust is an immutable static variable, similar to the thing you named CONSTANT in C++, and likewise it is constant evaluated, so it happens at compile time; there is no "may or may not" here, because the value is getting baked into the executable.
If we want to promise only that our mutable variable is constant initialized, like the C++ VARIABLE, in Rust we write that as:
let mut actually_varies = const { bar() }; // Guaranteed to be evaluated at compile time
And finally, yes if we write only:
let mut just_normal = bar(); // This may or may not be evaluated at compile time
So, I count more possibilities in Rust, and IMNSHO their naming is clearer.
static VARIABLE: Foo = bar(); // May or may not be evaluated at compile time
Under what circumstances would this not be evaluated at compile time? As far as I know the initializer must be const.
Unlike C++, Rust does not have pre-main (or pre-_start) runtime initializers, unless you use something like the ctor crate. And the contents of that static must be already initialized, otherwise reading it (especially via FFI) would produce unknown values.
To evaluate something at runtime you'd have to use a LazyCell (and LazyCell::new() is const), but the LazyCell's construction itself will still be evaluated at compile time.
I don't understand why it is so complex in C++.