
The one thing that sold me on Rust (going from C++) was that there is a single way errors are propagated: the Result type. No need to bother with exceptions, functions returning bool, functions returning 0 on success, functions returning 0 on error, functions returning -1 on error, functions returning negative errno on error, functions taking optional pointer to bool to indicate error (optionally), functions taking reference to std::error_code to set an error (and having an overload with the same name that throws an exception on error if you forget to pass the std::error_code)... I understand there's 30 years of history, but it is still annoying that even the standard library is not consistent (or striving for consistency).

Then you top it off with the `?` shortcut and the functional interface of Result, and suddenly error handling becomes fun and easy to deal with, rather than just "return false" with a "TODO: figure out error handling".
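For readers who haven't seen it, a minimal sketch of what that looks like (the file name and error type here are made up for illustration):

    use std::fs;

    // Every fallible step returns a Result, and `?` forwards the error to
    // the caller instead of a bool / -1 / errno convention.
    fn read_port(path: &str) -> Result<u16, String> {
        let text = fs::read_to_string(path).map_err(|e| e.to_string())?;
        let port = text.trim().parse::<u16>().map_err(|e| e.to_string())?;
        Ok(port)
    }

    fn main() {
        match read_port("port.txt") {
            Ok(p) => println!("port = {p}"),
            Err(e) => eprintln!("failed to read port: {e}"),
        }
    }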




The result type does make for some great API design, but SerenityOS shows that this same paradigm also works fine in C++. That includes something similar to the ? operator, though it's closer to a raw function call.

SerenityOS is the first functional OS (as in "boots on actual hardware and has a GUI") I've seen that dares to question the 1970s int main(), using modern C++ constructs instead, and the API is simply a lot better.

I can imagine someone writing a better standard library for C++ that works a whole lot like Rust's standard library does. Begone with the archaic integer types, make use of the power your language offers!

If we're comparing C++ and Rust, I think the ease of use of enum classes/structs is probably a bigger difference. You can get pretty close, but Rust avoids a lot of boilerplate that makes them quite usable, especially when combined with the match keyword.
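Roughly what that looks like on the Rust side (a made-up example; the closest C++ equivalents are an enum class plus a std::variant, or a hand-rolled tagged struct, plus the visitation boilerplate):

    // A sum type carrying a different payload per variant.
    enum Shape {
        Circle { radius: f64 },
        Rect { w: f64, h: f64 },
    }

    fn area(s: &Shape) -> f64 {
        // The compiler checks that every variant is handled.
        match s {
            Shape::Circle { radius } => std::f64::consts::PI * radius * radius,
            Shape::Rect { w, h } => w * h,
        }
    }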

I think c++, the language, is ready for the modern world. However, c++, the community, seems to be stuck at least 20 years in the past.


Google has been doing a very similar, but definitely somewhat uglier, thing with StatusOr<...> and Status (as seen in absl and protobuf) for quite some time.

A long time ago, there was talk about a similar concept for C++ based on exception objects in a more "standard" way that could feasibly be added to the standard library, the expected<T> class. And... in C++23, std::expected does exist[1], and you don't need to use exception objects or anything awkward like that, it can work with arbitrary error types just like Result. Unfortunately, it's so horrifically late to the party that I'm not sure if C++23 will make it to critical adoption quickly enough for any major C++ library to actually adopt it, unless C++ has another massive resurgence like it did after C++11. That said, if you're writing C++ code and you want a "standard" mechanism like the Result type, it's probably the closest thing there will ever be.

[1]: https://en.cppreference.com/w/cpp/utility/expected


I had a look. In classic C++ style, if you use *x to get the ‘expected’ value, when it’s an error object (you forgot to check first and return the error), it’s undefined behaviour!

Messing up error handling isn’t hard to do, so putting undefined behaviour here feels very dangerous to me, but it is the C++ way.


The reason it works this way is there's legitimately no easy way around it. You're not guaranteed a reasonable zero value for any type, so you can't do the slightly better Go thing (defined behavior but still wrong... Not great.) and you certainly can't do the Rust thing, because... There's no pattern matching. You can't conditionally enter a branch based on the presence of a value.
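For reference, "the Rust thing" being described is roughly this (illustrative sketch):

    fn lookup(id: u32) -> Option<String> {
        if id == 1 { Some("alice".to_string()) } else { None }
    }

    fn main() {
        // You get at the value through a branch that proves it exists
        // (or through methods like unwrap(), which panic - never UB).
        match lookup(1) {
            Some(name) => println!("found {name}"),
            None => println!("no such user"),
        }

        // Shorthand form:
        if let Some(name) = lookup(2) {
            println!("found {name}");
        }
    }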

There really is no reasonable workaround here, the language needs to be amended to make this safe and ergonomic. They tried to be cheeky with some of the other APIs, like std::variant, but really the best you can do is chuck the conditional branch into a lambda (or other function-based implementation of visitors) and the ergonomics of that are pretty unimpressive.

Edit: but maybe fortune will change in the future, for anyone who still cares:

https://www.open-std.org/jtc1/sc22/wg21/docs/papers/2024/p26...


You could assert. You could throw. I can't understand how, in this modern age where so many programs end up getting hacked, introducing more UB seems like a good idea.

This is one of the major reasons I switched to Rust, just to escape spending my whole life worrying about bugs caused by UB.


Assertions are debug-only. Exceptions are usually not guaranteed to be available and much of the standard library doesn't require them. You could std::abort, and that's about it.

I think the issue is that this just isn't particularly good either. If you do that, then you can't catch it like an exception, but you also can't statically verify that it won't happen.

C++ needs less of both undefined behavior and runtime errors. It needs more compile-time errors. It needs pattern matching.


I agree these things would be better, but I don’t understand how anyone can think UB is better than abort.

(Going to moan for a bit, and I realise you aren’t responsible for the C++ standards mess!)

I have been hearing for about… 20 years now that UB gives compilers and tools the freedom to produce any error catching they like, but all it seems to have done in the main is give them the freedom to produce hard to debug crash code.

You can of course usually turn on some kind of “debug mode” in some compilers, but why not just enforce that as standard? Compilers would still be free to add a “standards non-compliant” go fast mode if they like.


> but why not just enforce that as standard

I don’t think people want that as standard. The whole point of using C++ tends to be because you can do whatever you need to for the sake of performance. The language is also heavily driven by firms that need extreme performance (because otherwise why not use a higher level language)

There are knobs like stdlib assertions and ubsan, but those are opt-in because there's a cost to them. Part of it is also the commitment to backwards compatibility: code that compiled before should generally compile now (though there are exceptions to that unofficial rule).


There does not need to be an additional cost for this.

Most users will do this:

1. Check if there is a value

2. Get the value

There is nothing theoretically preventing the compiler from enforcing that step 1 happens before step 2, especially if the compiler is able to combine the control flow branch with the process of conditionally getting the value. The practical issue is that there's no way to express this in C++ at all. The best you can do is the visitor pattern, which has horrible ergonomics and you can only hope it doesn't cause worse code generation too.

Some users want to do this:

1. Grab the value without checking to see if it's valid. They are sure it will be valid and can't or don't want to eat the cost of checking.

There is nothing theoretically preventing this from existing as a separate method.

I'm not a rust fanboy (seriously, check my GitHub @jchv and look at how much Rust I write, it's approximately zero) but Rust has this solved six ways through Sunday. It can do both of these cases just fine. The only caveat is that you have to wrap the latter case in an unsafe, but either way, you're not eating any costs you don't want to.
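To make the two cases concrete, something like this (a sketch; unwrap_unchecked is one way to opt out of the check, and it requires an unsafe block):

    fn main() {
        let values = vec![10i32, 20, 30];

        // Case 1: check and get in one step; the branch and the access
        // cannot be separated or reordered.
        if let Some(v) = values.first() {
            println!("first = {v}");
        }

        // Case 2: you are sure the value is there and refuse to pay for a
        // check; you must say so explicitly with `unsafe`.
        let v = unsafe { values.get(0).unwrap_unchecked() };
        println!("first = {v}");
    }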

C++ can do this too. C++ has an active proposal for a feature that can fix this problem and make much more ergonomic std::variant possible, too.

https://www.open-std.org/jtc1/sc22/wg21/docs/papers/2024/p26...

Of course, this is one single microcosm in the storied history of C++ failing to adequately address the problem of undefined behavior proliferating the language, so I don't have high hopes.


I might have misinterpreted what you meant by "standard" (I was originally thinking you meant that the standard dictates all operations must be safe, and that doing things like the unsafe/unchecked options you referenced would be outside the standard). I realized that you might mean it like it should be the default behavior, so correct me if I'm wrong there.

Yeah I agree that sane defaults would be nice and that things like unchecked accesses should be more explicit. Having linear types to properly enforce issues with things like use-after-move would also be awesome. I don’t know that anyone has ever accused C++ of being particularly ergonomic. Sum types and pattern matching would be awesome.

Nothing wrong with being a Rust fanboy :), I have written some (https://github.com/afnanenayet/diffsitter).


A lot of UB is things you wouldn't do anyway. While it is possible to define divide by zero or integer overflow, what would it mean? If your code does either of those things, you have a bug in your code (a few encryption algorithms depend on specific overflow behavior - if your language promises that same behavior it is useful).

Since CPUs handle such things differently, whatever you define to happen means that the compiler has to insert an if to check on any CPU that doesn't work how you define it - all for something that you probably are not doing. The cost is too high in a tight loop when you know this won't even happen (but the compiler does not).

No


This is a bad answer too, IMO.

I think there is a solid case for the existence of undefined behavior; even Rust has it, it's nothing absurd in concept, and you do describe some reasoning for why it should probably exist.

However, and here's the real kicker, it really does not need to exist for this case. The real reason it exists for this case is due to increasingly glaring deficiencies in the C++ language, namely, again, the lack of any form of pattern matching for control flow. Because of this, there's no way for a library author, including the STL itself, to actually handle this situation succinctly.

Undefined behavior indeed should exist, but not for common cases like "oops, I didn't check to see if there was actually a value here before accessing it." Armed with a moderately sufficient programming language, the compiler can handle that. Undefined behavior should be more like "I know you (the compiler) can't know this is safe, but I already know that this unsafe thing I'm doing is actually correct, so don't generate safeguards for me; let what happens, happen." This is what modern programming languages aim to do. C++ does that for shit like basic arithmetic, and that's why we get to have the same fucking CVEs for 20+ years, over and over in an endless loop. "Just get better at programming" is a nice platitude, but it doesn't work. Even if it was possible for me to become absolutely perfect and simply just never make any mistakes ever (lol) it doesn't matter because there's no chance in hell you'll ever manage that across a meaningful segment of the industry, including the parts of the industry you depend on (like your OS, or cryptography libraries, and so on...)

And I don't think the issue is that the STL "doesn't care" about the possibility that you might accidentally do something that makes no sense. Seriously, take a look at the design of std::variant: it is pretty obvious that they wanted to design a "safe" union. In fact, what the hell would the point of designing another unsafe union be in the first place? So they go the other route. std::variant has getters that throw exceptions on bad accesses instead of undefined behavior. This is literally the exact same type of problem that std::expected has. std::expected is essentially just a special case of a type-safe union with exactly two possible values, an expected and unexpected value (though since std::variant is tagged off of types, there is the obvious caveat that std::expected isn't quite a subset of std::variant, since std::expected could have the same type for both the expected and unexpected values.)

So, what's wrong? Here's what's wrong. C++ Modules were first proposed in 2004[1]. C++20 finally introduced a version of modules and lo and behold, they mostly suck[2] and mostly aren't used by anyone (Seriously: they're not even fully supported by CMake right now.) Andrei Alexandrescu has been talking about std::expected since at least 2018[3] and it just now finally managed to get into the standard in C++23, and god knows if anyone will ever actually use it. And finally, pattern matching was originally proposed by none other than Bjarne himself (and Gabriel Dos Reis) in 2019[4] and who knows when it will make it into the standard. (I hope soon enough so it can be adopted before the heat death of the Universe, but I think that's only if we get exceptionally lucky.)

Now I'm not saying that adding new and bold features to a language as old and complex as C++ could possibly ever be easy or quick, but the pace that C++ evolves at is sometimes so slow that it's hard to come to any conclusion other than that the C++ standard and the process behind it is simply broken. It's just that simple. I don't care what changes it would take to get things moving more efficiently: it's not my job to figure that out. It doesn't matter why, either. The point is, at the end of the day, it can't take this long for features to land just for them to wind up not even being very good, and there are plenty of other programming languages that have done better with less resources.

I think it's obvious at this point that C++ will never get a handle on all of the undefined behavior; they've just introduced far too much undefined behavior all throughout the language and standard library in ways that are going to be hard to fix, especially while maintaining backwards compatibility. It should go without saying that a meaningful "safe" subset of C++ that can guarantee safety from memory errors, concurrency errors or most types of undefined behavior is simply never going to happen. Ever. It's not that it isn't possible to do, or that it's not worth doing, it's that C++ won't. (And yes, I'm aware of the attempts at this; they didn't work.)

The uncontrolled proliferation of undefined behavior is ultimately what is killing C++, and a lot of very trivial cases could be avoided, if only the language was capable of it, but it's not.

[1]: https://www.open-std.org/jtc1/sc22/wg21/docs/papers/2004/n17...

[2]: https://vector-of-bool.github.io/2019/01/27/modules-doa.html

[3]: https://www.youtube.com/watch?v=PH4WBuE1BHI

[4]: https://www.open-std.org/jtc1/sc22/wg21/docs/papers/2019/p13...


I cannot follow your rant... I'll do my best to respond, but I'm probably not understanding something.

Divide by zero must be undefined behavior in any performant language. On x86 you either have an if before running the divide (which of course in some cases the compiler can optimize out, but only if it can determine the value is not zero), or the CPU will trap into the OS - different OSes handle this in different ways, but most not in a way that makes it possible to figure out where you were and thus do something about it. This just came up in the C++ std-proposals mailing list in the past couple weeks.

AFAIK all common CPUs have the same behavior on integer overflow (two's complement). However in almost all cases (again, some encryption code is an exception) that behavior is useless to real code, and so if it happens your code has a bug either way. Thus we may as well let compilers optimize assuming it cannot happen, since if it does happen you have a bug no matter what we define it as. (C++ is used on CPUs that are not two's complement as well, but we could call this implementation defined or unspecified; it doesn't change that you have a bug if you invoke it.)

For std::expected - new benchmarks are showing that, in the real world and with optimized exception handlers, exceptions are faster than schemes like expected. Microbenchmarks that show exceptions are slower are easy to create, but real-world exceptions that unwind more than a couple of function calls show different results.

As for modules, support is finally here and early adopters are using it. The road was long, but it is finally proving it worked.

Long roads are a good thing. C++ has avoided a lot of bad designs by thinking about problems for a long time. Details often matter, and move-fast languages tend to run into problems when something doesn't work as well as they want. I'm glad C++ standardization is slow - it is already a mess without adding more half-baked features to the language.


The problem is 'undefined behaviour' is far too powerful.

Why not make division by zero implementation-defined? I'm happy with my compiler telling me my program will get terminated if I divide by zero, no problem. Let's even say it "may" be terminated (because maybe the division is optimised out if we don't actually need to calculate it, fine).

My problem is that UB lets compilers do all kinds of weird things, like assume if I write:

    int dividebyzero = 0;
    if(y == 0) { dividebyzero = 1; }
    z=x/y;
then the compiler may set dividebyzero to always be 0, because 'obviously' y can't be 0 - otherwise I would have invoked undefined behaviour.

Also, for two's complement, it's fairly common that people want wrapping behaviour. Also, I don't think basically anyone is using C++ on non-two's-complement CPUs (neither gcc nor clang supports them), and even if it does run on such CPUs, why not still require a well-defined behaviour, in the same way C++ runs on 32-bit and 64-bit systems, but we don't say asking for the size of a pointer is undefined behaviour -- everyone just defines what it is on their system!


> Divide by zero must be undefined behavior in any performant language. On x86 you either have an if before running the divide (which of course in some cases the compiler can optimize out, but only if it can determine the value is not zero), or the CPU will trap into the OS - different OSes handle this in different ways, but most not in a way that makes it possible to figure out where you were and thus do something about it. This just came up in the C++ std-proposals mailing list in the past couple weeks.

I mean look, I already agree that it's not necessarily unreasonable to have undefined behavior, but this statement is purely false. You absolutely can eat your cake and have it too. Here's how:

- Split the operation in two: safe, checked division, and fast, unchecked division.

- OR, Stronger typing; a "not-zero" type that represents a numeric type where you can guarantee the value isn't zero. If you can't eat the cost of runtime checks, you can unsafely cast to this.

I think the former is a good fit for C++.

C++ does not have to do what Rust does, but for sake of argument, let's talk about it. What Rust does here is simple, it just defines divide-by-zero to panic. How? Multiple ways:

- If it knows statically it will panic, that's a compilation error.

- If it knows statically it can not be zero, it generates unchecked division.

- If it does not know statically, it generates a branch. (Though it is free to implement this however it wants; could be done using CPU exceptions/traps if they wanted.)

What if you really do need "unsafe" division? Well, that is possible, with unchecked_div. Most people do not need unchecked_div. If you think you do but you haven't benchmarked yet, you do not. It doesn't get any simpler than that. This is especially the case if you're working on modern CPUs with massive pipelines and branch predictors; a lot of these checks wind up having a very close to zero cost.
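A quick sketch of the default vs. checked split (leaving the unchecked variant aside):

    fn main() {
        let x: i32 = 10;
        let y: i32 = 0;

        // The default `/` panics at runtime on a zero divisor
        // ("attempt to divide by zero") instead of being UB:
        // let z = x / y;

        // The checked form makes the failure a value you have to look at:
        assert_eq!(x.checked_div(y), None);
        assert_eq!(x.checked_div(2), Some(5));
    }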

> AFAIK all common CPUs have the same behavior on integer overflow (two's complement). However in almost all cases (again, some encryption code is an exception) that behavior is useless to real code, and so if it happens your code has a bug either way. Thus we may as well let compilers optimize assuming it cannot happen, since if it does happen you have a bug no matter what we define it as. (C++ is used on CPUs that are not two's complement as well, but we could call this implementation defined or unspecified; it doesn't change that you have a bug if you invoke it.)

It would be better to just do checked arithmetic by default; the compiler can often statically eliminate the checks, you can opt out of them if you need performance and know what you're doing, and the cost of checks is unlikely to be noticed on modern processors.
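A sketch of what per-call-site opt-in looks like (made-up values):

    fn main() {
        let a: u8 = 250;

        // Pick the overflow behavior explicitly at the call site:
        assert_eq!(a.checked_add(10), None);        // failure as a value
        assert_eq!(a.wrapping_add(10), 4);          // two's-complement wrap
        assert_eq!(a.saturating_add(10), u8::MAX);  // clamp at the bound

        // Plain `a + 10` panics in debug builds and wraps in release,
        // unless overflow-checks is enabled in the build profile.
    }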

It doesn't matter that this usually isn't a problem. It only has to be a problem once to cause a serious CVE. (Spoiler alert: it has happened more than once.)

> For std::expected - new benchmarks are showing that, in the real world and with optimized exception handlers, exceptions are faster than schemes like expected. Microbenchmarks that show exceptions are slower are easy to create, but real-world exceptions that unwind more than a couple of function calls show different results.

You can always use stack unwinding or exceptions if you want to; that's also present in Rust too, in the form of panic. The nice thing about something like std::expected is that it theoretically can bridge the gap between code that uses exceptions and code that doesn't: you can catch an exception and stuff it into the `e` of an std::expected value, or you can take the `e` value of an std::expected and throw it. In theory this should not have much higher cost than simply throwing.

> As for modules, support is finally here and early adopters are using it. The road was long, but it is finally proving it worked.

Last I was at Google, they seemed to have ruled out C++ modules because as-designed they are basically guaranteed to make compilation times worse.

For CMake, you can't really rely on C++ Modules. Firstly, the Makefile generator which is default on most platforms literally does not and as far as I know will not support C++ Modules. Secondly, it doesn't support header units or importing the STL as modules. For all intents and purposes, it would be difficult to even use this for anything.

For Bazel, there is no C++ Modules support to my knowledge.

While fact-checking myself, I found this handy website:

https://arewemodulesyet.org/tools/

...which shows CMake as supporting modules, green check mark, no notes! So that really makes me wonder what value you can place on the other green checkmarks.

> Long roads are a good thing. C++ has avoided a lot of bad designs by thinking about problems for a long time. Details often matter, and move-fast languages tend to run into problems when something doesn't work as well as they want. I'm glad C++ standardization is slow - it is already a mess without adding more half-baked features to the language.

I'm glad you are happy with the C++ standardization process. I'm not. Not only do things take many years, they're also half-baked at the end of the process. You're right that C++ still winds up with a huge mess of half-baked features even with as slow as the development process is, and modules are a great example of that.

The true answer is that the C++ committee is a fucking mess. I won't sit here and try to make that argument; plenty of people have done a damningly good job at it better than I ever could. What I will say is that C faces a lot of similar problems to C++ and somehow still manages to make better progress anyways. The failure of the C++ standard committee could be told in many different ways. A good relatively recent example is the success of the #embed directive[1]. Of course, the reason why it was successful was because it was added to C instead of C++.

Why can't C++ do that? I dunno. Ask Bjarne and friends.

[1]: https://thephd.dev/finally-embed-in-c23


> What if you really do need "unsafe" division? Well, that is possible, with unchecked_div. Most people do not need unchecked_div. If you think you do but you haven't benchmarked yet, you do not. It doesn't get any simpler than that.

This attitude is why modern software is dogshit slow. People make this "if you haven't benchmarked, it doesn't matter" argument thousands of times and the result is that every program I run is slower than molasses in Siberia. I don't care about "safety" at the expense of performance.


> This attitude is why modern software is dogshit slow.

Bullshit. Here's my proof: We don't even do this. There's a ton of software that isn't safe from undefined behavior and it's still slow as shit.

> People make this "if you haven't benchmarked, it doesn't matter" argument thousands of times and the result is that every program I run is slower than molasses in Siberia. I don't care about "safety" at the expense of performance.

If you can't imagine a world where there's nuance between "we should occasionally eat 0.5-1ns on checking an unsafe division" and "We should ship an entire web browser with every text editor and chat app" the problem is with you. If you want your software to be fast, it has to be benchmarked, just like if you want it to be stable, it has to be tested. There's really no exceptions here, you can't just guess these things.


Modern software is dogshit slow because it's a pile of JS dependencies twenty layers deep.

Checked arithmetic is plenty fast. And as for safety vs performance, quickly computing the incorrect result is strictly less useful than slowly computing the correct one.


Google has an odd C++ style guide that rules out a lot of useful things for their own reasons.

There is no reason why make could not work with modules if someone wanted to go through the effort. The CMake people have even outlined what needs to be done. Ninja is so much nicer that you should switch anyway - I did more than 10 years ago.


I do use Ninja when I use CMake, but honestly that mostly comes down to the fact that the Makefiles generated by CMake are horrifically slow. I don't particularly love CMake, I only use it because the C++ ecosystem really has nothing better to offer. (And there's no chance I'm going to redistribute a project that can't build with the Makefile generator, at least unless and until Ninja is default.)

Anyway, the Google C++ style guide has nothing to do with why C++ modules aren't and won't be used at Google, it's because as-implemented modules are not an obvious win. They can theoretically improve performance, but they can and do also make some cases worse than before.

I don't think most organizations will adopt modules at this rate. I suspect the early adopters will wind up being the only adopters for this one.


I agree very much with what you wrote.

> the lack of any form of pattern matching for control flow

Growing features after the fact is hard. Look at the monumental effort to get generics into Go. Look at how even though Python 3.10 introduced the match statement, it is a statement and not an expression - you can't write `x = match ...`, unlike Rust and Java 14. So it doesn't surprise me that C++ struggles with this.

> Undefined behavior indeed should exist

Agreed. Rust throws up its hands in narrow cases ( https://doc.rust-lang.org/reference/behavior-considered-unde... ), and even Java says that calling Thread.stop() and forcing monitor unlocks can lead to corrupted data and UB.

> but not for common cases like

Yes, C/C++ have far, far too many UB cases. Even down to idiotically simple things like "failing to end a source file with newline". C and C++ have liberally sprinkled UB as a cop-out like no other language.

> C++ does that for shit like basic arithmetic

I spent an unhealthy amount of time understanding the rules of integer types and arithmetic in C/C++. Other languages like Rust are as capable without the extreme mental complexity. https://www.nayuki.io/page/summary-of-c-cpp-integer-rules

Oh and, `(uint16_t)0xFFFF * (uint16_t)0xFFFF` will cause a signed 32-bit integer overflow on most platforms, and that is UB and will eat your baby. Scared yet? C/C++ rules are batshit insane.

> "Just get better at programming" is a nice platitude, but it doesn't work.

Correct. Far too often, I hear a conversation like "C/C++ have too many UB, why can't we make it safer?" "Just learn to write better code, dumbass". No, literal decades of watching the industry tells us that the same mistakes keep happening over and over again. The evidence is overwhelming that the languages need to change, not the programmers.

> it's obvious at this point that C++ will never get a handle on all of the undefined behavior; they've just introduced far too much undefined behavior all throughout the language and standard library

True.

> in ways that are going to be hard to fix, especially while maintaining backwards compatibility

Technically not true. Specifying undefined behavior is easy, and this has already been done in many ways. For example, -fwrapv makes signed overflow defined to wrap around. For example, you could zero-initialize every local variable and change malloc() to behave like calloc(), so that reading uninitialized memory always returns zero. And because the previous behavior was undefined anyway, literally any substitute behavior is valid.

The problem isn't maintaining backward compatibility, it's maintaining performance compatibility. Allegedly, undefined behavior allows the compiler to optimize out redundant arithmetic, redundant null checks, etc. I believe this is what stops the standards committees from simply defining some kind of reasonable behavior for what is currently considered UB.

> a meaningful "safe" subset of C++ that can guarantee safety from memory errors, concurrency errors or most types of undefined behavior is simply never going to happen

I think it has already happened. Fil-C seems like a capable approach to transpile C/C++ and add a managed runtime - and without much overhead. https://github.com/pizlonator/llvm-project-deluge/blob/delug...

> The uncontrolled proliferation of undefined behavior is ultimately what is killing C++

It's death by a thousand cuts, and it hurts language learners the most. I can write C and C++ code without UB, but it took me a long time to get there - with a lot of education and practice. And UB-free code can be awkward to write. The worst part of it is that the knowledge is very C/C++-specific and is useless in other languages because they don't have those classes of UB to begin with.

I dabbled in C++ programming for about 10 years before I discovered Rust. Once I wrote my first few Rust programs, I was hooked. Suddenly, I stopped worrying about all the stupid complexities and language minutiae of C++. Rust just made sense out of the box. It provided far fewer ways to do things ( https://www.nayuki.io/page/near-duplicate-features-of-cplusp... ), and the easy way is usually the safe and correct way.

To me, Rust is C++ done right. It has the expressive power and compactness of C++ but almost none of the downsides. It is the true intellectual successor to C++. C++ needs to hurry up and die already.


Culturally, I think C++ has a policy of "there's no single right answer." Which leads to there being no wrong answers. We just need more answers so everyone's happy. Which is worse.


Abort would be fine here. Operator* on expected is intended to be used when you have already verified the result wasn't an error.


Of course you can do the Rust thing, it's just taking a function object.


`StatusOr<T>::operator*` there is akin to `Result<T, _>::unwrap()`. In C++, unwrapping looks like dereferencing a pointer, which is scary and likely UB already.

But as you learn to work with StatusOr you'll end up just using ASSIGN_OR_RETURN every time, and dereferencing remains scary. I guess the complaint is that C++ won't guarantee that the execution will stop, but that's the C++ way after you drop all safety checks in `StatusOr::operator*` to gain performance.


This is the idiomatic way in C++. I'm not even sure what your proposed alternative is -- as other commenters have noted, an exception or "panic" are not actual options.

Every pointer dereference, array access, and even integer truncation is UB in C++. This isn't rust.

A static analyzer can and does catch these errors and others internally. Typical usage of StatusOr is via macros like ASSIGN_OR_RETURN and RETURN_IF_ERROR; actually using the * operator would definitely draw my attention in code review.


Very similar footgun on std::optional::operator*. The big C++ libraries do at least have (debug-only) assertions on misuse.


There’s a few backports around, not quite the same as having first class support, though.


I believe the latest versions of GCC, Clang, MSVC and XCode/AppleClang all support std::expected, in C++23 mode.


Facebook's Folly has a similar type: folly::Expected (dating to 2016).


If I had to guess, that idea came from Andrei Alexandrescu.



> I think c++, the language, is ready for the modern world. However, c++, the community, seems to be stuck at least 20 years in the past.

Good point. A language that gets updated by adding a lot of features is DIVERGING from a community that mostly consists of people who still use a lot of the C baggage in C++, with only a few folks who use a lot of template abstraction at the other end of the spectrum.

Since in larger systems you will want to re-use a lot of code via open source libraries, one is inevitably stuck not just in one past, but in several versions of older C++, depending on when the code to be re-used was written, which C++ standard was stable enough then, and which parts of it the author adopted.

Not to speak of the paradigm choice to be made (object-oriented versus functional versus generic programming with templates).

It's easier to have, like Rust offers it, a single way of doing things properly. (But what I miss in Rust is a single streamlined standard library - organized class library - like Java has had it from early days on, it instead feels like "a pile of crates").


Just give Rust 36 years of field use, to see how it goes.


36 years is counting from the first CFront release. Counting the same way for Rust, it's been around since 2006. It's got almost 20 years under its belt already.

edit: what's with people downvoting a straight fact?


Rust 0.1, the first public release, came out in January 2012. CFront 1.0, the first commercial release, came out in 1985.

The public existence of Rust is 13 years, during which computing has not changed that much to be honest. Now compare this to the prehistory that is 1985, when CFront came out, already made for backwards compatibility with C.


I grew up with all the classic 8 bit micros, and to be honest, it doesn't feel like computing has changed at all since 1985. My workstation, while a billion times faster, is still code compatible with a Datapoint 2200 from 1970.

The memory model, interrupt model, packetized networking, digital storage, all function more or less identically.

In embedded, I still see Z80s and M68ks like nothing's changed.

I'd love to see more concrete implementations of adiabatic circuits, weird architectures like the mill, integrated FPGAs, etc. HP's The Machine effort was a rare exciting new thing until they walked back all the exciting parts. CXL seems like about the most interesting new thing in a bit.


Does GPU thingy count as something that has changed with computing?


It may go on to be as important as the FPU [0]. Amazingly enough you can still get one for a Classic II [1].

0. https://en.wikipedia.org/wiki/Floating-point_unit

1. https://www.tindie.com/products/jurassicomp/68882-fpu-card-f...


Yeah, I almost called that out. Probably should have. GPU/NPU feels new (at least for us folks who could never afford a Cray). Probably the biggest change in the last 20 years, especially if you classify it with other multi-core development.


Today a byte is 8 bits. That was not always the case back then, for example.


> I grew up with all the classic 8 bit micros

Meaning that all the machines I've ever cared about have had 8 bit bytes. The TI-99/4A, TRS-80, Commodore 64 and 128, Tandy 1000 8088, Apple ][, Macintosh Classic, etc.

Many were launched in the late 70s. By 1985 we were well into the era of PC compatibles.


in 1985 PC compatibles were talked about, but systems like the VAX and mainframes were still very common and considered the real computers, while PCs were toys for executives. PCs had already shown enough value (via word processors and spreadsheets) that everyone knew they were not going away. PCs lacked things like multi-tasking that even then "real" computers had had for decades.


> in 1985 PC compatibles were talked about

My https://en.wikipedia.org/wiki/Tandy_1000 came out in 1984. And it was a relatively late entry to the market, it was near peak 8088 with what was considered high end graphics and sound for the day, far better than the IBM PC which debuted in 1981 and only lasted until 1987.


Because it is counting since CFront 2.0, the first official release with industry use in UNIX systems.

So that would be Rust 1.0, released in 2015, not 2006, putting it down to a decade.

And the point still stands when looking at any ecosystem that has been in use long enough, with strong backwards compatibility - not only the language, but the whole ecosystem. Eventually editions alone won't make it, and just like those languages, Rust will gain its own warts.


Fair enough. I can cop to getting the CFront date wrong. Still, a decade since 1.0 is non-trivial.

> eventually editions alone won't make it, and just like those languages, Rust will gain its own warts.

That's possible. Though C++ hasn't had editions, or the HIR / MIR separation, the increased strictness, wonderful tooling, or the benefit of learning from the mistakes made with C++. Noting that, it seems reasonable to expect Rust to collect less cruft and paint itself into fewer corners over a similar period of time. Since C++ has been going for 36 years, it seems Rust will outlive me. Past that, I'm not sure I care.


C++ editions are -std=something; people keep forgetting Rust editions are quite limited in what they actually allow in grammar and semantic changes across versions, and they don't cover standard library changes.

IDEs are wonderful tooling; maybe people should get their heads outside UNIX CLIs and MS-DOS-like TUIs.

Then there is the whole ecosystem of libraries, books, SDKs and industry standards.


I'm not sure who in your mind is forgetting that, or what the rest of your comment means to communicate.

Who are you speaking to who hasn't explored all those things in depth?

I see Rust's restrictions as a huge advantage over C++ here. Even with respect to editions. Rust has always given me the impression of a language designed from the start to be approximately what C++ is today, without the cruft, in which safety is opt-out, not opt-in. And the restrictions seem more likely to preserve that than not.

C/C++ folks seem to see Rust's restrictions as anti-features without realizing that C/C++'s lack of restriction resulted in the situation they have today.

I only maintain a few projects in each language, so I haven't run into every sort of issue for either, but that's very much how it feels to me still, several years and several projects in.


Many of the members of the Rust Evangelism Strike Force, as the main audience. That is who it is targeted at, given the usual kind of content that some write about.

I agree that Rust is designed to be like C++ is today, without the cruft, except that all languages, if they survive long enough in the market beyond the adoption curve, will eventually get their own cruft.

Not realizing this only means that 30 years from now, if current languages haven't yet been fully replaced by AI-based tools, there will be some language designed to be like Rust is then, but without the cruft.

The strength of C++ code today is its ecosystem; that is why we reach for it: having to write CUDA or DirectX, maybe diving into the innards of Java, the CLR, V8, GCC or LLVM, doing HPC with OpenACC, OpenMP or MPI, Metal, Unreal, Godot, Unity.

Likewise I don't reach for C for fun, the less the merrier, rather POSIX, OpenGL, Vulkan,....


> Many of the members of the Rust Evangelism Strike Force, as main audience.

Well I'm not them. I'm just a regular old software developer.

> The strength of C++ code today is on the ecosystem

Ecosystem is why I jumped ship from C++ to Rust. The difference in difficulty integrating a random library into my project is night and day. What might take a week or a month in C++ (integrating disparate build systems, establishing types and lifetimes of library objects and function calls, etc) takes me 20 minutes in Rust. And in general I find the libraries to be much smaller, more modular, and easier to consume piecemeal rather than a whole BOOST or QT at a time.

And while the Rust libraries are younger, I find them to be more stable, and often more featureful and with better code coverage. The language seems to lend itself to completionism.


A lot of people using C++ don't actually use any libraries. I've observed the opposite with Rust.

People choose C++ because it's a flexible language that lets you do whatever you want. Meanwhile Rust is a constrained and opinionated thing that only works if you do things "the right way".


> People choose C++ because it's a flexible language that lets you do whatever you want.

You went on a bit too long. C++ lets you do whatever. Whether you wanted that is not its concern. That's handily illustrated in Matt Godbolt's talk - you provided a floating point value but that's inappropriate? Whatever. Negative values for unsigned? Whatever.

This has terrible ergonomics and the consequences were entirely predictable.


It's just strong typing. You can do it in C++ too.


I've seen it argued that, in practice, there are two C++ communities. One is fundamentally OK with constantly upgrading their code (those with enterprise refactoring tools are obviously in this camp, but it's more a matter of attitude than technology) and the other isn't. C++ is fundamentally caught between those two.


This is the truth. I interview a lot of C++ programmers and it amazes me how many have gone their whole careers barely touching C++11 let alone anything later. The extreme reach of C++ software (embedded, UIs, apps, high-speed networking, services, gaming) is both a blessing and a curse and I understand why the committee is hesitant to introduce breaking changes at the expense of slow progress on things like reflection.


C++ carries so much on its back and this makes its evolution over the past decade even more impressive.


Yes, people keep forgetting C++ was made public with CFront 2.0 back in 1989, 36 years of backwards compatibility, to a certain extent.


C++ is C compatible, so more than 50 years of backward compatibility. Even today the vast majority of C programs can be compiled as C++ and they just work. Often such programs run faster, because C++ has a few additions that the compiler can use to optimize better; in practice C programs generally mean the stronger rules anyway (but of course when they don't, the program is wrong).


<pedantry corner>CFront was never compatible with K&R C to the best of my knowledge, so the actual start date would be whenever C89-style code was in widespread use; I'm not sure how long before 1989 that was.


I can tell you that during 1999 - 2003, the aC compiler we had installed on our HP-UX 11 development servers still had issues with C89; we had #defines to use K&R C function declarations when coding on that system.


Kind of, compatible with C89 as language, and with C23 to the extent of library functions that can be written in that subset, or with C++ features.

And yes, being a "Typescript for C" born at the same place as UNIX and C, is both what fostered its adoption, among compiler and OS vendors, and also what brings some pains trying to herd cats to write safe code.


I created a library "cpp-match" that tries to bring the "?" operator into C++; however, it uses a GNU-specific feature (https://gcc.gnu.org/onlinedocs/gcc/Statement-Exprs.html). I did support MSVC by falling back to using exceptions for the short-circuit mechanism.

However it seems like C++ wants to only provide this kind of pattern via monadic operations.


You can't really do Try (which is that operator's name in Rust) because C++ lacks a ControlFlow type which is how Try reflects the type's decision about whether to exit early.

You can imitate the beginner experience of the ? operator as magically handling trivial error cases by "just knowing" what should happen, but it's not the same thing as the current Try feature.

Barry Revzin has a proposal for some future C++ (let's say C++29) to introduce statement expressions; the syntax is very ugly even by C++ standards, but it would semantically solve the problem you had.


> The one thing that sold me on Rust (going from C++) was that there is a single way errors are propagated: the Result type. No need to bother with exceptions

This isn't really true since Rust has panics. It would be nice to have out-of-the-box support for a "no panics" subset of Rust, which would also make it easier to properly support linear (no auto-drop) types.


I wish more people (and crate authors) would treat panic!() as it really should be treated: only for absolutely unrecoverable errors that indicate that some sort of state is corrupted and that continuing wouldn't be safe from a data- or program-integrity perspective.

Even then, though, I do see a need to catch panics in some situations: if I'm writing some sort of API or web service, and there's some inconsistency in a particular request (even if it's because of a bug I've written), I probably really would prefer only that request to abort, not for the entire process to be torn down, terminating any other in-flight requests that might be just fine.

But otherwise, you really should just not be catching panics at all.
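A minimal sketch of that per-request pattern, assuming the default panic=unwind and a made-up handler:

    use std::panic::{catch_unwind, AssertUnwindSafe};

    fn handle_request(body: &str) -> String {
        // A bug in here (unwrap on a parse failure) panics.
        let n: usize = body.trim().parse().unwrap();
        format!("ok: {n}")
    }

    fn main() {
        for body in ["42", "not-a-number"] {
            // Turn a panic in this one request into a 500-style response
            // instead of tearing the whole process down.
            match catch_unwind(AssertUnwindSafe(|| handle_request(body))) {
                Ok(resp) => println!("200 {resp}"),
                Err(_) => println!("500 internal error"),
            }
        }
    }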


> only for absolutely unrecoverable errors

Unfortunately even the Rust core language doesn't treat them this way.

I think it's arguably the single biggest design mistake in the Rust language. It prevents a ton of useful stuff like temporarily moving out of mutable references.
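For anyone wondering what that refers to, a small sketch of the workaround people reach for today (std::mem::take/replace), since a plain move out of a &mut would leave a hole behind if a panic unwound through it:

    fn upgrade(slot: &mut String) {
        // `let owned = *slot;` does not compile: you cannot move out of a
        // &mut, partly because a panic before putting something back would
        // let unwinding observe an invalid String.
        let owned = std::mem::take(slot);  // leaves "" behind for now
        *slot = owned + " (upgraded)";     // put a value back
    }

    fn main() {
        let mut s = String::from("value");
        upgrade(&mut s);
        println!("{s}");
    }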

They've done a shockingly good job with the language overall, but this is definitely a wart.


> I probably really would prefer only that request to abort, not for the entire process to be torn down,

This is a sign you are writing an operating system instead of using one. Your web server should be handling requests from a pool of processes - so that you get real memory isolation and can crash when there is a problem.


Even if you used a pool of processes, that's still not one process per request, and you still don't want one request crashing to tear down unrelated requests.


I question both things. I would first of all handle each request in its own process.

If there was a special case that would not work, then the design dictates that requests are not independent and there must be risk of interference (they are in the same process!)

What I definitely do not want is a bug ridden “crashable async sub task” system built in my web program.


This is simply a wrong idea about how to write web servers. You're giving up scalability massively, only to gain a minor amount of safety - one that is virtually irrelevant in a memory safe language, which you should anyway use. The overhead of process-per-request, or even thread-per-request, is absurd if you're already using a memory safe language.


> You're giving up scalability massively

you’re vastly over estimating the overhead of processes and number of simultaneous web connections.

> only to gain a minor amount of safety

What you’re telling me is performance (memory?) is such a high priority you’re willing to make correctness and security tradeoffs.

And I'm saying that's ok, but one of those tradeoffs is that crashing might bring down more than one request.

> one that is virtually irrelevant in a memory safe language

Your memory safe language uses C libraries in its process.

Memory safe languages have bugs all the time. The attack surface is every line of your program and runtime.

Memory is only one kind of resource and privilege. Process isolation is key for managing resource access - for example file descriptors.

Chrome is a case study in these principles. Everybody thought isolating JS and HTML pages should be easy - nobody could get it right, and Chrome instead wrapped each page in a process.


Please find one web server being actively developed using one process per request.

Handling thousands of concurrent requests is table stakes for a simple web server. Handling thousands of concurrent processes is beyond most OSs. The context switching overhead alone would consume much of the CPU of the system. Even hundreds of processes will mean a good fraction of the CPU being spent solely on context switching - which is a terrible place to be.


> Handling thousands of concurrent processes is beyond most OS

It works fine on Linux - the operating system for the internet. Have you tried it?

> good fraction of the CPU being spent solely on context switching

I was waiting for this one. Threads and processes do the same amount of context switching. The overhead of a process switch is a little higher. The main cost is memory.


> Threads and processes do the same amount of context switching.

Yes, therefore real webservers use a limited number of threads/processes (in the same ballpark as the number of CPU cores). The modern approach is to use green threads, which are really cheap to switch: it is like store registers, read registers and jmp.

> The main cost is memory.

The main cost is scheduling, not switching per se. Preemptive multitasking needs to deal with priorities to not waste time, and the algorithms that do it are mostly O(N). All these O(N) calculations need to be completed multiple times per second; the higher the frequency of switching, the more work to do. When you have thousands of processes it is the main cost. If you have tens of thousands it starts to bite hard.


> The main cost is scheduling, not switching per se. Preemptive multitasking needs to deal with priorities to not waste time, and algorithms that do it

The person I am having a conversation with is advocating for threads instead of processes. How do you think threads work?

> Modern approach is to use green threads which are really cheap to switch, it is like store registers, read registers and jmp.

That’s certainly the popular approach. As I said at the beginning this approach is making a mini operating system with more bugs and less security rather than leveraging the capabilities of your operating system.

Once again, I'm waiting to hear about your experience of maxing out processes and after that having to switch to green threads.


> The person I am having a conversation with is advocating for threads instead of processes. How do you think threads work?

I was certainly not, I explicitly said that thread-per-request is as bad as process-per-request. I could even agree that it's the worse of both worlds to some extent - none of the isolation, almost all of the overhead (except if you're using a language with a heavy runtime, like Java, where spawning a new JVM has a huge cost compared to a new thread in an existing JVM).

Modern operating systems provide many mechanisms for doing async IO specifically to prevent the need for spawning and switching between thousands of processes. Linux in particular has invested heavily in this, from select, to poll, to epoll, and now unto io_uring.

OS process schedulers are really a poor tool for doing massively parallel IO. They are a general purpose algorithm that has to keep in mind many possible types of heterogeneous processes, and has no insight into the plausible behaviors of those. For a constrained problem like parallel IO, it's a much better idea to use a purpose-built algorithm and tool. And they have simply not been optimized with this kind of scale in mind, because it's a much more important and common use case to run quickly for a small number of processes than it is to scale up to thousands. There's a reason typical ulimit configurations are limited to around 1000 threads/processes per system for all common distros.


> Linux in particular has invested heavily in this, from select, to poll, to epoll, and now unto io_uring.

Correction. People who wanted to do async IO went and added additional support for it. The primary driver is node.js.

> And they have simply not been optimized with this kind of scale in mind,

yes, processes do not sacrifice security and reliability. That’s the difference.

The fallacy here is assuming that a process is just worse for hand-wavy reasons and that your language feature has a secret sauce.

If it’s not context switching then that means you have other scheduling problems because you cannot be pre-empted.

> There's a reason typical ulimit configurations are limited to around 1000 threads/processes per system

STILL waiting to hear about your experience of maxing out Linux processes on a web server - and then fixing it with green threads.

I suspect it hasn’t happened.


> The person I am having a conversation with is advocating for threads instead of processes. How do you think threads work?

Are they? I looked back and I've found this quote of them: "The overhead of process-per-request, or even thread-per-request, is absurd if you're already using a memory safe language." Doesn't seem as an advocacy for thread-per-request to me.

> As I said at the beginning this approach is making a mini operating system with more bugs and less security rather than leveraging the capabilities of your operating system.

Let's look at Apache for example. It starts a few processes and/or threads, but then each thread deals with a lot of connections. The threads Apache starts are for spreading work over several CPUs and maybe to overcome some limits of select/poll/epoll. The main approach is to track the state of a connection, and when something happens on a socket, Apache finds the state of the connection and deals with the events on the socket. Then it stores the new state and moves on to deal with other sockets in the same manner.

It is like green threads but without green threads. Green threads streamline all this state keeping by allowing each connection to have its own stack. And I'd say it is easier to do right than to write a finite automaton for HTTP/HTTPS.

> Once again, I'm waiting to hear about your experience of maxing out processes and after that having to switch to green threads.

Oh, I didn't. A long long time ago I was reading stuff on networking. All of it was of one opinion: 10k kernel tasks may be a tolerable solution, but 100k is bad. IIRC Apache had a document describing its internal architecture and explaining why it is as it is.

So I wouldn't even try to start thousands of threads. I mean I tried to start 1000s of processes when I was young and learned about fork-bombs, and this experience confirmed it for me, that 1000s of processes is not a really good idea.

Moreover I completely agree with them: if you use a memory-safe language, then it is strange to pay the costs of preemptive multitasking just to have separate virtual address spaces. I mean, it would be better to get a virtual machine with a JIT compiler, and run code for different connections on different instances of the virtual machine. O(1) complexity of cooperative switching will beat O(N) complexity of preemptive switching. To my mind hardware memory management is overrated.


> Lets look at Apache for example

Apache has years of engineering work - and almost weekly patches to fix issues related to security. Many of these security issues would go away if they were not using special techniques to optimize performance.

But the best part of the web is that it's modular. So now your application doesn't need to do that. It can leverage those benefits without a complexity cascade.

For example, Apache can manage more connections than your application needs running processes for.

> I was reading stuff on networking….

That’s exactly my point. Too many people are repeating advice from Google or Facebook and not actually thinking about real problems they face.

Can you serve more requests using specialized task management? Yes. You can make a mini-OS with fewer features to squeeze out more scheduling performance and that’s what some big companies did.

But you will pay for that with reduced security and reliability. To bring it back to my original complaint - you must accept that a crash can bring down multiple requests.

And it’s an insane default to design Rust around. It’s especially confusing to make all these arguments about how “unsafe” languages are, but then ignore OS safety in hopes of squeezing out a little more perf.

> So I wouldn't even try to start thousands of threads.

Please try it before arguing it doesn’t work. Fork bombing is recursive and unrelated.

> if you use a memory-safe language, then it is strange to pay costs for preemptive multitasking just to have separate virtual address spaces

Then why do these “memory-safe” languages need constant security patches? Why does Chrome need to wrap each page’s JS in its own process?

In theory you’re right. If they are actually memory-safe then you don’t need to consider address spaces. But in practice the attack surface is massive and processes give you stronger invariants.


We did that at Dropbox in Python for a while. Though they switched to async after I left.


> you’re vastly over estimating the overhead of processes and number of simultaneous web connections.

It's less about the actual overhead of the process and more about the savings you get from sharing. You can reuse database connections, have in-memory caches, in-memory rate limits and various other things. You can use shared memory (which is very difficult to manage) or an additional common process, but either way you are effectively back to square one with regards to shared state that can be corrupted.


You certainly can get savings. I question how often you need that.

I just said one of the costs of those savings is that a crash may bring down multiple requests - and you should design with that trade-off in mind.


Using a Rust lib from Swift on macOS I definitely want to catch panics - to access security scoped resources in Rust I need the Rust code to execute in process (I believe) but I’d also like it not to crash the entire app.


Would you consider panics acceptable when you think the code cannot panic in practice? E.g. unwrapping/expecting a value for a key in a map when you inserted that value before and know it hasn't been removed?

You could still have a panic, though, if your assumptions turn out to be wrong.


Obviously yes. For the same reason it's acceptable that myvec[i] panics (it will panic if i is out of bounds - but you already figured out that i is in bounds) and that a / b panics for integers a and b (it will panic if b is zero, but if your code is not buggy you already checked that b is nonzero before dividing, right?).

Panic is absolutely fine for bugs, and it's indeed what should happen when code is buggy. That's because buggy code can make absolutely no guarantees on whether it is okay to continue (arbitrary data structures may be corrupted for instance)

Indeed it's hard to "handle an error" when the error means the code is buggy, because you can rarely do anything meaningful about that.

This is of course a problem for code that can't be interrupted, which includes the Linux kernel (they note the bug, but continue anyway) and embedded systems.

Note that if panic=unwind you have the opportunity to catch the panic. This is usually done by systems that process multiple unrelated requests in the same program: in this case it's okay if only one such request is aborted (in HTTP, it would return a 5xx error), provided you manually verify that no data structure shared by requests can get corrupted. If you do one thread per request, Rust does this automatically; if you have a smaller threadpool with an async runtime, then the runtime needs to catch panics for this to work.
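
A minimal sketch of that per-request isolation (hypothetical handler and paths, assuming the default panic=unwind):

    use std::panic::{catch_unwind, AssertUnwindSafe};

    // Hypothetical buggy handler: panics on a path with too few segments.
    fn handle_request(path: &str) -> String {
        let parts: Vec<&str> = path.split('/').collect();
        format!("resource = {}", parts[2])
    }

    fn main() {
        for path in ["/users/42", "/health"] {
            // A panic aborts only this request and becomes a 5xx-style
            // response instead of killing the whole server. (The default
            // panic hook still prints the panic message to stderr.)
            match catch_unwind(AssertUnwindSafe(|| handle_request(path))) {
                Ok(body) => println!("200 OK: {body}"),
                Err(_) => println!("500 Internal Server Error"),
            }
        }
    }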


> Note that if panic=unwind you have the opportunity to catch the panic.

And now your language has exceptions - which break control flow and make reasoning about a program very difficult - and hard to optimize for a compiler.


Yeah, but this isn't the only bad thing about unwinding. Much worse than just catching panics is the fact that a panic in a thread takes down only that thread (except if it is in the main thread). If your program is multithreaded, panic=unwind makes it much harder to understand how it reacts to errors, unless you take measures to shut down the program if any thread panics (which again, requires catch_unwind if you have unwinding). Also: that's why locks in Rust have poisoning; it exists so that panics propagate between threads: if a thread panics while holding a lock, any other thread attempting to acquire that lock will panic too (which is better than a deadlock for sure).
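
A tiny self-contained illustration of that poisoning behavior (made-up data, assuming the default panic=unwind):

    use std::sync::{Arc, Mutex};
    use std::thread;

    fn main() {
        let data = Arc::new(Mutex::new(vec![1, 2, 3]));
        let data2 = Arc::clone(&data);
        // This thread panics while holding the lock, which poisons it.
        let _ = thread::spawn(move || {
            let _guard = data2.lock().unwrap();
            panic!("bug while holding the lock");
        })
        .join();
        // The panic killed only that thread, but the poison flag makes the
        // failure visible here: lock() now returns Err(PoisonError).
        assert!(data.lock().is_err());
    }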

And that's why my programs get compiled with panic=abort, which makes panics just quit the program, with no ability to catch them and no zombie states where some threads have panicked and others keep going.

But see, catch_unwind is an escape hatch. It's not meant to be used as a general error handling mechanism, and even when doing FFI, Rust code typically converts exceptions from other languages into Results (at a performance cost, but who cares). But Rust needs an escape hatch; it is a low-level language.

And there is at least one case where the catch_unwind is fully warranted: when you have an async web server with multiple concurrent requests and you need panics to take down only a single request, and not the whole server (that would be a DoS vector). If that weren't possible, then async Rust couldn't have feature parity with sync Rust (which uses a thread-per-request model, and where panics kill the thread corresponding to the request)


> when you have an async web server with multiple concurrent requests and you need panics to take down only a single request

Addressed in sibling thread - it’s a poor default to design Rust around.


Not the same person, but I first try and figure out an API that allows me to not panic in the first place.

Panics are a runtime memory safe way to encode an invariant, but I will generally prefer a compile time invariant if possible and not too cumbersome.

However, yes I will panic if I'm not already using unsafe and I can clearly prove the invariant I'm working with.


I don't speak for anyone else but I'm not using `unwrap` and `expect`. I understand the scenario you outlined but I've accepted it as a compromise and will `match` on a map's fetching function and will have an `Err` branch.

I will fight against program aborts as hard as I possibly can. I don't mind boilerplate to be the price paid and will provide detailed error messages even in such obscure error branches.

Again, speaking only for myself. My philosophy is: the program is no good for me dead.


> the program is no good for me dead

That may be true, but the program may actually be bad for you if it does something unexpected due to an unforeseen state.


Agreed, that's why I don't catch panics either -- if we get to that point I'm viewing the program as corrupted. I'm simply saying that I do my utmost to never use potentially panicking Rust API and prefer to add boilerplate for `Err` branching.


So what do you do in the error branch if something like out-of-bounds index happens? Wrap and propagate the error to the caller?


Usually yes. But I lean much more to writing library-like code, I admit.

When I have to make a decision on an app-level, it becomes a different game though. I don't have a clear-and-cut answer for that.


This implies that every function in your library that ever has to do anything that might error out - e.g. integer arithmetic or array indexing - has to be declared as returning the corresponding Result to propagate the error. Which means that you are now imposing this requirement (to check for internal logic bugs in library code) onto the user of your library.


Well, I don't write code as huge as that though, nor does it have as many layers.

Usually I just use the `?` and `.map_err` (or `anyhow` / `thiserror`) to delegate and move on with life.

I have a few places where I do pattern-matches to avoid exactly what you described: imposing the extra internal complexity to users. Which is indeed a bad thing and I am trying to fight it. Not always succeeding.


Honestly, I don't think libraries should ever panic. Just return an UnspecifiedError with some sort of string. I work daily with Rust, but I wish no_std and an arbitrary no_panic had better support.


Example docs for `foo() -> Result<(), UnspecifiedError>`:

    # Errors

    `foo` returns an error called `UnspecifiedError`, but this only
    happens when an anticipated bug in the implementation occurs. Since
    there are no known such bugs, this API never returns an error. If
    an error is ever returned, then that is proof that there is a bug
    in the implementation. This error should be rendered differently
    to end users to make it clear they've hit a bug and not just a
    normal error condition.
Imagine if I designed `regex`'s API like this. What a shit show that would be.

If you want a less flippant take down of this idea and a more complete description of my position, please see: https://burntsushi.net/unwrap/

> Honestly, I don't think libraries should ever panic. Just return an UnspecifiedError with some sort of string.

The latter is not a solution to the former. The latter is a solution to libraries having panicking branches. But panics or other logically incorrect behavior can still occur as a result of bugs.


Funny that as a user of this library, I would just unwrap this, and it results in the same outcome as if the library panicked.


Yes. A panic is the right thing to do, and it's just fine if the library does it for you.


My main issue with panics is poor interop across FFI boundaries.


This is like saying, "my main issue with bugs is that they result in undesirable behavior."

Panicking should always be treated as a bug. They are assertions.


This is already a thing, I do this right now. You configure the linter to forbid panics, unwraps, and even arithmetic side effects at compile time.

You can configure your lints in your workspace-level Cargo.toml (the folder of crates)

    [workspace.lints.clippy]
    pedantic = { level = "warn", priority = -1 }
    # arithmetic_side_effects = "deny"
    unwrap_used = "deny"
    expect_used = "deny"
    panic = "deny"

then in your crate Cargo.toml:

    [lints]
    workspace = true

Then you can’t even compile the code without proper error handling. Combine that with thiserror or anyhow with the backtrace feature and you can yeet errors with the “?” operator, or match on them, map_err, map_or_else, ignore them, etc.

[1] https://rust-lang.github.io/rust-clippy/master/index.html#un...


The issue with this in practice is that there are always cases where panics are absolutely the correct course of action. When program state is bad enough that you can't safely continue, you need to panic (and core dump in dev). Otherwise you are likely just creating an integrity minefield for you to debug later.

Not saying there aren't applications where using these lints couldn't be alright (web servers maybe), but at least in my experience (mostly doing CLI, graphics, and embedded stuff) trying to keep the program alive leads to more problems than it solves.


The comment you're replying to specifically wanted a "no panics" version of Rust.

It's totally normal practice for a library to have this as a standard.


Indent by 4 spaces to get code blocks on HN.

    Like
    this



You only need 2. https://news.ycombinator.com/formatdoc

> Text after a blank line that is indented by two or more spaces is reproduced verbatim. (This is intended for code.)


  Thank
  you


But can you deny the use of all operations that might panic, like indexing an array?


Yes, looks like you can, try indexing_slicing

https://rust-lang.github.io/rust-clippy/master/#indexing_sli...


There's a lint for indexing an array, but not for all maybe-panicking operations. For example, the `copy_from_slice` method on slices (https://doc.rust-lang.org/std/primitive.slice.html#method.co...) doesn't have a clippy lint for it, even though it will panic if given the wrong length.
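
For example, this illustrative snippet should pass the lints mentioned above (no unwrap, no indexing) yet still panics at runtime because the slice lengths differ:

    fn main() {
        let src = [1u8, 2, 3];
        let mut dst = [0u8; 2];
        // Panics: source and destination lengths differ, and clippy currently
        // has no lint that flags this call.
        dst.copy_from_slice(&src);
    }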


It's pretty difficult to have no panics, because many functions allocate memory and what are they supposed to do when there is no memory left? Also many functions use addition and what is one supposed to do in case of overflow?


>many functions allocate memory and what are they supposed to do when there is no memory left?

Return an AllocationError. Rust unfortunately picked the wrong default here for the sake of convenience, along with the default of assuming a global allocator. It's now trying to add in explicit allocators and allocation failure handling (A:Allocator type param) at the cost of splitting the ecosystem (all third-party code, including parts of libstd itself like std::io::Read::read_to_end, only work with A=GlobalAlloc).

Zig for example does it right by having explicit allocators from the start, plus good support for having the allocator outside the type (ArrayList vs ArrayListUnmanaged) so that multiple values within a composite type can all use the same allocator.

>Also many functions use addition and what is one supposed to do in case of overflow?

Return an error ( https://doc.rust-lang.org/stable/std/primitive.i64.html#meth... ) or a signal that overflow occurred ( https://doc.rust-lang.org/stable/std/primitive.i64.html#meth... ). Or use wrapping addition ( https://doc.rust-lang.org/stable/std/primitive.i64.html#meth... ) if that was intended.

Note that for the checked case, it is possible to have a newtype wrapper that impls std::ops::Add etc, so that you can continue using the compact `+` etc instead of the cumbersome `.checked_add(...)` etc. For the wrapping case libstd already has such a newtype: std::num::Wrapping.
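
A minimal sketch of such a wrapper (a hypothetical `Checked` type, not from libstd; overflow is absorbed into an inner Option so `+` stays usable and you check once at the end):

    #[derive(Clone, Copy, Debug, PartialEq)]
    struct Checked(Option<i64>);

    impl From<i64> for Checked {
        fn from(v: i64) -> Self {
            Checked(Some(v))
        }
    }

    impl std::ops::Add for Checked {
        type Output = Checked;
        fn add(self, rhs: Checked) -> Checked {
            // None stays None; otherwise defer to libstd's checked_add.
            Checked(self.0.zip(rhs.0).and_then(|(a, b)| a.checked_add(b)))
        }
    }

    fn main() {
        let sum = Checked::from(i64::MAX) + Checked::from(1);
        assert_eq!(sum.0, None); // overflow detected, no panic
    }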

Also, there is a clippy lint for disallowing `+` etc ( https://rust-lang.github.io/rust-clippy/master/index.html#ar... ), though I assume only the most masochistic people enable it. I actually tried to enable it once for some parsing code where I wanted to enforce checked arithmetic, but it pointlessly triggered on my Checked wrapper (as described in the previous paragraph) so I ended up disabling it.


> Rust unfortunately picked the wrong default here for the sake of convenience, along with the default of assuming a global allocator. [...] Zig for example does it right by having explicit allocators from the start

Rust picked the right default for applications that run in an OS whereas Zig picked the right default for embedded. Both are good for their respective domains, neither is good at both domains. Zig's choice is verbose and useless on a typical desktop OS, especially with overcommit, whereas Rust's choice is problematic for embedded where things just work differently.


Various kinds of "desktop" applications like databases and video games use custom non-global allocators - per-thread, per-arena, etc - because they have specific memory allocation and usage patterns that a generic allocator does not handle as well as targeted ones can.

My current $dayjob involves a "server" application that needs to run in a strict memory limit. We had to write our own allocator and collections because the default ones' insistence on using GlobalAlloc infallibly doesn't work for us.

Thinking that only "embedded" cares about custom allocators is just naive.


> Thinking that only "embedded" cares about custom allocators is just naive.

I said absolutely no such thing? In my $dayjob working on graphics I, too, have used custom allocators for various things, primarily in C++ though, not Rust. But that in no way makes the default of a global allocator wrong, and often those custom allocators have specialized constraints that you can exploit with custom containers, too, so it's not like you'd be reaching for the stdlib versions probably anyway.


I don't see why you would have to write your own - there are plenty of options in the crate ecosystem, but perhaps you found them insufficient?

As a video game developer, I've found the case for custom general-purpose allocators pretty weak in practice. It's exceedingly rare that you really want complicated nonlinear data structures, such as hash maps, to use a bump-allocator. One rehash and your fixed size arena blows up completely.

95% of use cases are covered by reusing flat data structures (`Vec`, `BinaryHeap`, etc.) between frames.


> there are plenty of options in the crate ecosystem

Who writes the crates?


That's public information. It's up to you to make the choice whether to trust someone, but the least you can do is look at the code and see if it matches what you would have done.


The allocator we wrote for $dayjob is essentially a buffer pool with a configurable number of "tiers" of buffers. "Static tiers" have N pre-allocated buffers of S bytes each, where N and S are provided by configuration for each tier. The "dynamic" tier malloc's on demand and can provide up to S bytes; it tracks how many bytes it has currently allocated.

Requests are matched against the smallest tier that can satisfy them (static tiers before dynamic). If no tier can satisfy it (static tiers are too small or empty, dynamic tier's "remaining" count is too low), then that's an allocation failure and handled by the caller accordingly. Eg if the request was for the initial buffer for accepting a client connection, the client is disconnected.
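
A rough sketch of that tier-matching logic (all names, fields and sizes here are made up; the real implementation is closed source):

    struct StaticTier { buf_size: usize, free: Vec<Vec<u8>> }
    struct DynamicTier { max_size: usize, budget: usize }

    struct PoolAllocator {
        static_tiers: Vec<StaticTier>, // sorted by buf_size, smallest first
        dynamic: DynamicTier,
    }

    struct AllocFailure;

    impl PoolAllocator {
        fn allocate(&mut self, len: usize) -> Result<Vec<u8>, AllocFailure> {
            // Smallest static tier that is big enough and still has a free buffer.
            for tier in &mut self.static_tiers {
                if tier.buf_size >= len {
                    if let Some(buf) = tier.free.pop() {
                        return Ok(buf);
                    }
                }
            }
            // Fall back to the dynamic tier, which allocates on demand within a budget.
            if self.dynamic.max_size >= len && self.dynamic.budget >= len {
                self.dynamic.budget -= len;
                return Ok(vec![0u8; len]);
            }
            // No tier can satisfy the request: the caller handles the failure,
            // e.g. by disconnecting the client.
            Err(AllocFailure)
        }
    }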

When a buffer is returned to the allocator it's matched up to the tier it came from - if it came from a static tier it's placed back in that tier's list, if it came from the dynamic tier it's free()d and the tier's used counter is decremented.

Buffers have a simple API similar to the bytes crate - "owned buffers" allow &mut access, "shared buffers" provide only & access and cloning them just increments a refcount, owned buffers can be split into smaller owned buffers or frozen into shared buffers, etc.

The allocator also has an API to query its usage as an aggregate percentage, which can be used to do things like proactively perform backpressure on new connections (reject them and let them retry later or connect to a different server) when the pool is above a threshold while continuing to service existing connections without a threshold.

The allocator can also be configured to allocate using `mmap(tempfile)` instead of malloc, because some parts of the server store small, infrequently-used data, so they can take the hit of storing their data "on disk", ie paged out of RAM, to leave RAM available for everything else. (We can't rely on the presence of a swapfile so there's no guarantee that regular memory will be able to be paged out.)

As for crates.io, there is no option. We need local allocators because different parts of the server use different instances of the above allocator with different tier configs. Stable Rust only supports replacing GlobalAlloc; everything to do with local allocators is unstable, and we don't intend to switch to nightly just for this. Also FWIW our allocator has both a sync and async API for allocation (some of the allocator instances are expected to run at capacity most of the time, so async allocation with a timeout provides some slack and backpressure as opposed to rejecting requests synchronously and causing churn), so it won't completely line up with std::alloc::Allocator even if/when that does get stabilized. (But the async allocation is used in a localized part of the server so we might consider having both an Allocator impl and the async direct API.)

And so because we need local allocators, we had to write our own replacements of Vec, Queue, Box, Arc, etc because the API for using custom A with them is also unstable.


Did you publish these by any chance?


Sorry, the code is closed source.


> Zig for example does it right by having explicit allocators from the start

Odin has them, too, optionally (and usually).


> Rust unfortunately picked the wrong default here

I partially disagree with this. Using Zig style allocators doesn't really fit with Rust ergonomics, as it would require pretty extensive lifetime annotations. With no_std, you absolutely can roll your own allocation styles, at the price of more manual lifetime annotations.

I do hope though that some library comes along that allows for Zig style collections, with the associated lifetimes... (It's been a bit painful rolling my own local allocator for audio processing).


Explicit allocators do work with Rust, as evidenced by them already working for libstd's types, as I said. The mistake was to not have them from day one which has caused most code to assume GlobalAlloc.

As long as the type is generic on the allocator, the lifetimes of the allocator don't appear in the type. So eg if your allocator is using a stack array in main then your allocator happens to be backed by `&'a [MaybeUninit<u8>]`, but things like Vec<T, A> instantiated with A = YourAllocator<'a> don't need to be concerned with 'a themselves.

Eg: https://play.rust-lang.org/?version=nightly&mode=debug&editi... do_something_with doesn't need to have any lifetimes from the allocator.

If by Zig-style allocators you specifically mean type-erased allocators, as a way to not have to parameterize everything on A:Allocator, then yes the equivalent in Rust would be a &'a dyn Allocator that has an infectious 'a lifetime parameter instead. Given the choice between an infectious type parameter and infectious lifetime parameter I'd take the former.


Ah, my bad, I guess I've been misunderstanding how the Allocator proposal works all along (I thought it was only for 'static allocators, this actually makes a lot more sense!).

I guess all that to say, I agree then that this should've been in std from day one.


The problem is, everything should have been there since day 1. It’s still unclear which API Rust should end up with, even today, which is why it isn’t stable yet.


Looking forward to the API when it's stabilised. Have there been any updates on the progress of allocators of this general area of Rust over the past year?


I haven’t paid that close of attention, but there have been two major APIs that people seem to be deciding between. We’ll see.


>Return an AllocationError. Rust unfortunately picked the wrong default here for the sake of convenience, along with the default of assuming a global allocator. It's now trying to add in explicit allocators and allocation failure handling

Going from panic to panic free in Rust is as simple as choosing 'function' vs 'try_function'. The actual mistakes in Rust were the ones where the non-try version should have produced a panic by default. Adding Box::try_new next to Box::new is easy.
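
For what it's worth, one such fallible path already exists on stable for Vec via try_reserve; a minimal sketch (the helper function is made up):

    use std::collections::TryReserveError;

    // try_reserve returns a Result instead of aborting when allocation fails.
    fn append_zeros(buf: &mut Vec<u8>, extra: usize) -> Result<(), TryReserveError> {
        buf.try_reserve(extra)?; // surfaces allocation failure to the caller
        buf.extend(std::iter::repeat(0u8).take(extra));
        Ok(())
    }

    fn main() {
        let mut buf = Vec::new();
        match append_zeros(&mut buf, 16) {
            Ok(()) => println!("appended, len = {}", buf.len()),
            Err(e) => println!("allocation failed: {e}"),
        }
    }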

There are only two major applications of panic free code in Rust: critical sections inside mutexes and unsafe code (because panic safety is harder to write than panic free code). In almost every other case it is far more fruitful to use fuzzing and model checking to explicitly look for panics.


In order to have truly ergonomic no_panic code in Rust you'd need parametricity over the panic behavior: a single Box::new that the context can determine to be panicky or Result-based. It has to be context determined and not explicitly code determined, so that the topmost request for the no_panic version propagates all the way down through the entire stack to the stdlib. If you squint just a bit, you can see this is the same as maybe-async, maybe-const, maybe-allocate, maybe wrapping/overflowing math, etc. So there's an option to just add try_ methods across the entire stdlib, which all the code between your API and the underlying API then needs to use and expose, or to push for a generic language-level mechanism for this, which complicates the language, compiler and library code further. Or do both.


>what are they supposed to do when there is no memory left

Well on Linux they are apparently supposed to return memory anyway and at some point in the future possibly SEGV your process when you happen to dereference some unrelated pointer.


You can tell Linux that you don't want overcommit. You will probably discover that you're now even more miserable and change it back, but it's an option.


Whenever I switch off overcommitting, every program on my system (that I'm using) dies, one by one, over the course of 2–5 seconds, followed by Xorg. It's quite pretty.


I did that and even with enormous amounts of free memory, Chrome and other Chromium browsers just die.

They require overcommit just to open an empty window.


Don't know about your parent poster but I didn't take it 100% literally. Obviously if there's no memory left then you crash; the kernel would likely murder your program half a second later anyway.

But for arithmetic Rust has non-aborting bounds-checking APIs, if my memory serves.

And that's what I try hard to do in my Rust code, e.g. never frivolously use `unwrap` or `expect`. And just generally try hard to never use an API that can crash. You can write a few error branches that might never get triggered. It's not the end of the world.


Dealing with integer overflow is much more burdensome than dealing with allocation failure, IME. Relatively speaking, allocation failure is closer to file descriptor limits in terms of how it affects code structure. But then I mostly use C when I'm not using a scripting language. In languages like Rust and C++ there's a lot of hidden allocation in the high-level libraries that seem to be popular, perhaps because the notion that "there's nothing you can do" has infected too many minds.

Of course, just like with opening files or integer arithmetic, if you don't pay any attention to handling the errors up front when writing your code, it can be an onerous if not impossible task to refactor things after the fact.


Oh I agree, don't get me wrong. Both are pretty gnarly.

I was approaching these problems strictly from the point of view of what can Rust do today really, nothing else. To me having checked and non-panicking API for integer overflows / underflows at least gives you some agency.

If you don't have memory, well, usually you are cooked. Though one area where Rust could become even better is giving us some API to reserve more memory upfront, maybe? Or I don't know, maybe adopt some of the memory-arena crates into the stdlib.

But yeah, agreed. Not the types of problems I want to have anymore (because I did have them in the past).


In C I simply use -fsanitize=signed-integer-overflow if I expect no overflow and checked arithmetic when I need to handle overflow. I do not think this is worse than in any other languages and seems less annoying than Rust. If I am lazy, I let allocation failure trap on null pointer dereference which is also safe, out-of-bounds accesses are avoided by -fsanitize=bounds (I avoid pointer arithmetic and unsafe casts where I can and essentially treat it like Rust's "unsafe").


Rust provides a default integer of each common size and signedness, for which overflow is prohibited [but this prohibition may not be enforced in release compiled binaries depending on your chosen settings for the compiler, in this case what happens is not promised but today it will wrap - it's wrong to write code which does this on purpose - see the wrapping types below if you want that - but it won't cause UB if you do it anyway]

Rust also provides Wrapping and Saturating wrapper types for these integers, which wrap (255 + 1 == 0) or saturate (255 + 1 == 255). Depending on your CPU either or both of these might just be "how the computer works anyway" and will accordingly be very fast. Neither of them is how humans normally think about arithmetic.
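
A quick illustration of both wrappers (Saturating<T> requires a reasonably recent Rust, 1.74+):

    use std::num::{Saturating, Wrapping};

    fn main() {
        // Wrapping arithmetic: 255 + 1 == 0 for u8.
        assert_eq!((Wrapping(255u8) + Wrapping(1)).0, 0);
        // Saturating arithmetic: 255 + 1 == 255 for u8.
        assert_eq!((Saturating(255u8) + Saturating(1)).0, 255);
    }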

Furthermore, Rust also provides operations which do all of the above, as well as the more fundamental "with carry" type operations where you get two results from the operation and must write your algorithms accordingly, and explicitly fallible operations where if you would overflow your operation reports that it did not succeed.


Additions are easy: by default overflow panics in debug builds and wraps in release builds, and you can make the intended behavior explicit with the checked_ (or wrapping_/saturating_) methods.

Assuming that you are not using much recursion, you can eliminate most of the heap-related memory panics by adding limited reservation checks for dynamic data that is allocated based on user input/external data. You should also use statically sized types whenever possible. They are also faster.


Wrapping on overflow is wrong because this is not the math we expect. As a result, errors and vulnerabilities occur (look at Linux kernel for example).


It depends on the context. Of course the result may cause vulnerabilities if the surrounding program logic depends on it. But yeah, generally I would agree.


> what are they supposed to do when there is no memory left?

You abandon the current activity and bubble up the error to a stage where that effort can be tossed out or retried sometime later. i.e. Use the same error handling approach you would have to use for any other unreliable operation like networking.


> Also many functions use addition and what is one supposed to do in case of overflow?

Honestly this is where you'd throw an exception. It's a shame Rust refuses to have them, they are absolutely perfect for things like this...


I'm confused by this, because a panic is essentially an exception. They can be thrown and caught (although it's extremely discouraged to do so).

The only place where it would be different is if you explicitly set panics to abort instead of unwind, but that's not default behavior.


`panic` isn’t really an error that you have to (or can) handle, it’s for unrecoverable errors. Sort of like C++ assertions.

Also there is the no_panic crate, which uses macros to require the compiler to prove that a given function cannot panic.


You can handle panics. It’s for unrecoverable errors, but internally it does stack unwinding by default like exceptions in C++.

You see this whenever you use cargo test. If a single test panics, it doesn’t abort the whole program. The panic is “caught”. It still runs all the other tests and reports the failure.


> but internally it does stack unwinding by default

Although as a library vendor, you kind of have to assume your library could be compiled into an app configured with panic=abort, in which case it will not do that.


Well, kinda. It's more similar to RuntimeException in Java, in that there are times where you do actually want to catch and recover from them.

But on those places, you better know exactly what you are doing.


I would say that Segmentation Fault is better comparison with C++ :-D


that's kind of a thing with https://docs.rs/no-panic/latest/no_panic/ or no std and custom panic handlers.

not sure what the latest is in the space, if I recall there are some subtleties


That's a neat hack, but it would be a lot nicer to have explicit support as part of the language.


That's going to be difficult because the language itself requires panic support to properly implement indexing, slicing, and integer division. There are checked methods that can be used instead, but to truly eliminate panics, the ordinary operators would have to be banned when used with non-const arguments, and this restriction would have to propagate to all dependencies as well.


Yes that’s right. The feature really wants compiler support for that reason. The simplest version wouldn’t be too hard to implement. Every function just exports a flag on whether or not it (or any callees) can panic. Then we have a nopanic keyword which emits a compiler error if the function (or any callee) panics.

It would be annoying to use - as you say, you couldn’t even add regular numbers together or index into an array in nopanic code. But there are ways to work around it (like the wrapping types).

One problem is that implicit nopanic would add a new way to break semver compatibility in APIs. Eg, imagine a public API that just happens to not be able to panic. If the code is changed subtly, it could easily start panicking again. That could break callers, so it has to be a major version bump. You’d probably have to require explicit nopanic at API boundaries. (Else assume all public functions from other crates can panic). And because of that, public APIs like std would need to be plastered with nopanic markers everywhere. It’s also not clear how that works through trait impls.


Yeah, this is how it works with no_std.


No? https://godbolt.org/z/jEc36vP3P

As far as I can tell, no_std doesn't change anything with regard to either the usability of panicking operators like integer division, slice indexing, etc. (they're still usable) nor on whether they panic on invalid input (they still do).


The problem is false positives. Even if you can clearly see that some function will never panic (though it uses some feature which may panic), the compiler might not always see that. If the compiler says there are no panics, then there are no panics; but is that enough to justify adding it to the language, if you then mostly have to avoid features that might panic?


I do not want a library to panic though, I want to handle the error myself.


Let's say the library panics because there was an out-of-bounds array access on some internal (to that library) array due to a bug in their code. How will you handle this error yourself, and how is the library supposed to propagate it to you in the first place without unwinding?


Ensure all bounds and invariants are checked, and return Result<T, E> or a custom error or something. As I said, I do not want a library to panic. It should be up to the user of the library. When I write libraries, I make sure that the users of the library are able to handle the errors themselves. Imagine using some library, but they use assert() or panic() instead of returning an error for you to handle, that would frustrate me.


Maybe contrarian, but imo the `Result` type, while kind of nice, still suffers from plenty of annoyances, including sometimes not working with the (manpages-approved) `dyn Error`, sometimes having to `into()` weird library errors that don't propagate properly, or worse: `map_err()` them; I mean, at this point, the `anyhow` crate is basically mandatory from an ergonomics standpoint in every Rust project I start. Also, `?` doesn't work in closures, etc.

So, while this is an improvement over C++ (and that is not saying much at all), it's still implemented in a pretty clumsy way.


There's some space for improvement, but really... not a lot? Result is a pretty basic type, sure, but needing to choose a dependency to get a nicer abstraction is not generally considered a problem for Rust. The stdlib is not really batteries included.

Doing error handling properly is hard, but it's a lot harder when error types lose information (integer/bool returns) or you can't really tell what errors you might get (exceptions, except for checked exceptions which have their own issues).

Sometimes error handling comes down to "tell the user", where all that info is not ideal. It's too verbose, and that's when you need anyhow.

In other cases where you need details, anyhow is terrible. Instead you want something like thiserror, or just roll your own error type. Then you keep a lot more information, which might allow for better handling. (HttpError or IoError - try a different server? ParseError - maybe a different parse format? etc.)

So I'm not sure it's that Result is clumsy, so much that there are a lot of ways to handle errors. So you have to pick a library to match your use case. That seems acceptable to me?

FWIW, errors not propagating via `?` is entirely a problem with the error type being propagated to. And `?` in closures does work, occasionally with some type annotation required.


I agree with you, but it’s definitely inconvenient. Result also doesn’t capture a stack trace. I spent a long time tracking down bugs in some custom binary parsing code a while ago because I had no idea where my Result::Errs were coming from. I could have switched to another library - but I didn’t want to inflict extra dependencies on people using my crate.

As you say, it’s not “batteries included”. I think that’s a fine answer given rust is a systems language. But in application code I want batteries to be included. I don’t want to need to opt in to the right 3rd party library.

I think rust could learn a thing or two from Swift here. Swift’s equivalent is better thought through. Result is more part of the language, and less just bolted on:

https://docs.swift.org/swift-book/documentation/the-swift-pr...


> the `anyhow` crate is basically mandatory from an ergonomics standpoint in every Rust project I start

If you use `anyhow`, then all you know is that the function may `Err`, but you do not know how - this is no better than calling a function that may `throw` any kind of `Throwable`. Not saying it's bad, it is just not that much different from the error handling in Kotlin or C#.


Yeah, `anyhow` is basically Go error handling.

Better than C, sufficient in most cases if you're writing an app, to be avoided if you're writing a lib. There are alternatives such as `snafu` or `thiserror` that are better if you need to actually catch the error.


I find myself working through a hierarchy of error handling maturity as a project matures.

Initial proof of concepts just get panics (usually with a message).

Then functions start to be fallible, by adding anyhow and considering all errors to still be fatal, but at least nicely reporting backtraces (or other things! context doesn't have to just be a message).

Then if a project is around long enough, swap anyhow to thiserror to express what failure modes a function has.


I know a C code base that treats all socket errors the same and just retries for a limited time. However there are errors that make no sense to retry, like invalid socket or socket not connected. It is necessary to know which socket error occurred. I like how the POSIX API defines an errno and documents the values. Of course this depends on accurate documentation.


This is an IDE/documentation problem in a lot of cases though. No one writes code badly intentionally, but we are time constrained - tracking down every type of error which can happen and what it means is time consuming and you're likely to get it wrong.

Whereas going with "I probably want to retry a few times" is guessing that most of your problems are the common case, while you're not entirely sure the platform you're on will report the uncommon cases with sane semantics.


Yes. I prefer ‘snafu’ but there are a few, and you could always roll your own.


+1 for snafu. It lets you blend anyhow style errors for application code with precise errors for library code. .context/.with_context is also a lovely way to propagate errors between different Result types.


How does that compare to "thiserror for libraries and anyhow for applications"?


You don't have to keep converting between error types :)


Yeah, with SNAFU I try to encourage people going all-in on very fine-grained error types. I love it (unsurprisingly).


? definitely works in closures, but it often takes a little finagling to get working, like specifying the return type of the closure or setting the return type of a collect to a Result<Vec<_>>


Mapping a Vec<T> to Result<U, E> and collecting them into a single Result<Vec<U>, E> made me feel like a ninja when I first learned it was supported. I’m a little worried it’s too confusing to read for others, but it works so well.

Combined with futures::try_join_all for async closures, you can use it to do a bunch of fallible tasks in parallel too, it’s great.
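
A minimal example of that collect idiom (made-up inputs):

    fn main() {
        // collect() can target Result<Vec<_>, _>: the first Err short-circuits
        // and becomes the overall result.
        let bad: Result<Vec<i32>, std::num::ParseIntError> =
            ["1", "2", "x"].iter().map(|s| s.parse::<i32>()).collect();
        assert!(bad.is_err());

        let ok: Result<Vec<i32>, std::num::ParseIntError> =
            ["1", "2", "3"].iter().map(|s| s.parse()).collect();
        assert_eq!(ok, Ok(vec![1, 2, 3]));
    }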


A couple of those annoyances are just library developers being too lazy to give informative error types which is far from a Rust-specific problem


Generally, I agree the situation with errors is much better in Rust in the ways you describe. But, there are also panics which you can catch_unwind[1], set_hook[2] for, define a #[panic_handler][3] for, etc.

[1] https://doc.rust-lang.org/std/panic/fn.catch_unwind.html

[2] https://doc.rust-lang.org/std/panic/fn.set_hook.html

[3] https://doc.rust-lang.org/nomicon/panic-handler.html


Yeah, in anything but heavily multi-threaded servers, it's usually best to immediately crash on a panic. Panics don't mean "a normal error occurred", they mean, "This program is cursed and our fundamental assumptions are wrong." So it's normal for a unit test harness to catch panics. And you may occasionally catch them and kill an entire client connection, sort of the way Erlang handles major failures. But most programs should just exit immediately.


The Result type still requires quite a few lines of boilerplate if one needs to add custom data to it. And as a replacement for exceptions with automatic stack trace attachment it is relatively poor.

In any case I will take Rust's Result over the C++ mess any time, especially given that we have two C++s, one with exception support and one without, making code incompatible between the two.


FWIW, stack traces are part of C++ now and you can construct custom error types that automagically attach them if desired. Result types largely already exist in recent C++ editions if you want them.

I use completely custom error handling stacks in C++ and they are quite slick these days, thanks to improvements in the language.


What I'd really like to see is stack traces annotated with the values of selected local variables. A few years ago I tried that in a C++ code base where exceptions were disabled, using macros and something like an error context passed by reference. But the result was ugly and I realized that I had zero chance of getting it adopted.

With Rust's Result and powerful macros it is easier to implement.


The Result type isn't really enough for fun and easy error handling. I usually also need to reach for libraries like anyhow https://docs.rs/anyhow/latest/anyhow/. Otherwise, you still need to think about the different error types returned by different libraries.

Back at Google, it was truly an error handling nirvana because they had StatusOr which makes sure that the error type is just Status, a standardized company-wide type that still allows significant custom errors that map to standardized error categories.


unfortunately it's not so simple. that's the convention. depending on the library you're using it might be a special type of Error, or special type of Result, something needs to be transformed, `?` might not work in that case (unless you transform/map it), etc.

I like Rust, but it's not as clean in practice as you describe.


There are patterns to address it such as creating your own Result type alias with the error type parameter (E) fixed to an error type you own:

    type Result<T> = result::Result<T, MyError>;

    #[derive(Debug)]
    enum MyError {
        IOError(String)
        // ...
    }
Your owned (i.e. not third-party) Error type is a sum type of error types that might be thrown by other libraries, with a newtype wrapper (`IOError`) on top.

Then implement the `From` trait to map errors from third-party libraries to your own custom Error space:

    impl From<io::Error> for MyError {
        fn from(e: io::Error) -> MyError {
            MyError::IOError(e.to_string())
        }
    }
Now you can convert any result into a single type that you control by transforming the errors:

    return sender
        .write_all(msg.as_bytes())
        .map_err(|e| e.into());
There is a little boilerplate and mapping between error spaces that is required but I don't find it that onerous.


I scratch my head when people try to justify Rust's implementation of X. It looks absolutely horrendous, IMO.

I would rather have what OCaml has: https://ocaml.org/docs/error-handling.


You can use anyhow, but yeah zig generally does errors better IMO


Errors are where I find zig severely lacking. They can't carry context. Like if you're parsing a JSON file and it fails, you can know that it failed but not where it failed within the file. Their solution in the standard library for cases like this was to handle printing to stderr internally, but that is incredibly hacky.


The std has a diagnostics struct you can give to the JSON parser. Zig is a manually memory managed language, so it doesn't have payloads in errors for a good reason.


Manual memory management is not a reason Zig couldn't have supported sum types for error returns. You don't need an allocator involved at all to return a discriminated union like Rust uses.

I actually find Zig quite a pleasant language other than my gripe with not handling more complex errors as cleanly.


You would have to give up on the global error set with sum types, unless you want to see the global error set bloat to the largest sum type. Also, if the error payload needs allocating, what are you going to do? Allocating is a no-no here. I know Rust will just panic on OOM, but Zig chooses not to do that. Even if you allocated, since Zig has no borrow checker, error handling becomes a nightmare as you now have to deal with freeing the potential allocations.

Using out parameters for context, or a diagnostics pattern like the std does, is not bad at all IMO.

The only nice thing error payloads give you is better error messages in case of uncaught errors.


You can use anyhow::Result, and the ? will work for any Error.


I work in a new-ish C++ codebase (mid-2021 origin) that uses a Result-like type everywhere (folly::Expected, but you get std::expected in C++23). We have a C pre-processor macro instead of `?` (yes, it's a little less ergonomic, but it's usable). It makes it relatively nice to work in.

That said, I'd prefer to be working in Rust. The C++ code we call into can just raise exceptions anywhere implicitly; there are a hell of a lot of things you can accidentally do wrong without warning; class/method syntax is excessively verbose, etc.


Failure is not an option, it's a Result<T,E>


Proper error handling is the biggest problem in a vast majority of programs and rust makes that straightforward by providing a framework that works really well. I hate the `?` shortcut though. It's used horribly in many rust programs that I've seen because the programmers just use it as a half assed replacement for exceptions. Another gripe I have is that most library authors don't document what errors are returned in what situations and you're left making guesses or navigating through the library code to figure this out.


Error handling and propagation is one of those things I found the most irritating and struggled[1] with the most as I learned Rust, and to be honest, I'm still not sure I understand or like Rust's way. Decades of C++ and Python has strongly biased me towards the try/except pattern.

1: https://news.ycombinator.com/item?id=41543183


Counterpoint: Decades of C++/Python/Java/... has strongly biased me against the try/except pattern.

It's obviously subjective in many ways. However, what I dislike the most is that try/except hides the error path from me when I'm reading code. Decades of trying to figure out why that stacktrace is happening in production suddenly has given me a strong dislike for that path being hidden from me when I'm writing my code.


There should be a way to have the function/method document what sort of stuff can go wrong, and what kinds of exceptions you can get out of it.

It could be some kind of an exception check thing, where you would either have to make sure that you handle the error locally somehow, or propagate it upwards. Sadly programming is not ready for such ideas yet.

---

I jest, but this is exactly what checked exceptions are for. And the irony of stuff like Rust's use of `Result<T, E>` and similarly ML-ey stuff is that in practice they end up with what are essentially just checked exceptions, except with the error type information being somewhere else.

Of course, people might argue that checked exceptions suck because they've seen the way Java has handled them, but like... that's Java. And I'm sorry, but Java isn't the definition of how checked exceptions can work. But because of Java having "tainted" the idea, it's not explored any further, because we instead just assume that it's bad by construction and then end up doing the same thing anyway, only slightly different.


> There should be a way to have the function/method document what sort of stuff can go wrong, and what kinds of exceptions you can get out of it.

The key phrase you're looking for is "algebraic effect systems". Right now they're a pretty esoteric thing only really seen in PL research, but at one point so was most of the stuff we now take for granted in Rust. Maybe someday they'll make their way to mainstream languages in an ergonomic way.


I totally agree with you that Java checked exceptions suck. IME exceptions are far more ergonomic than Result.

Nim has a good take on exception tracking that's elegant, performant, and only on request (unlike Java's attempt).


Honestly I'm not even sure that Java checked exceptions are so bad in general compared to Result<T, E>. The amount of verbiage is roughly the same.

Where Java failed is the inability to write generic code that uses checked exceptions - e.g. a higher-order function should be able to say, "I take argument f, and I might throw anything that f() throws, plus E1". But that, as you rightly point out, is a Java problem, not a checked exception problem. In fact, one of the more advanced proposals for lambda functions in Java tackled this exact issue (but unfortunately they went with a simpler proposal that didn't).


I liked checked exceptions. I just think they were overused. Had a CS prof that summed up the optimal case like this:

Programmer's fault: runtime exception

Not programmer's fault: checked exception

Reading from a file but the disk fails? Not programmer's fault. IOException (checked). Missed a null somewhere? Programmer's fault. NullPointerException (unchecked).


Lack of parametrization meant that any interface that could be implemented in a way that could e.g. throw IOException had to declare it on its methods, even if only a single implementation out of several actually used it. And API clients then had to handle those exceptions even if they knew that they never use the interface implementation that could throw.

Or, alternatively, the interface wouldn't declare it as thrown, and then you couldn't implement it in terms of disk I/O without rewrapping everything into unchecked exceptions. A good example of that is java.util.Map, if you try to implement it on top of a file-based value store.


there are answers in the thread you linked that show how easy and clean the error handling can be.

it can look just like a more-efficient `except` clauses with all the safety, clarity, and convenience that enums provide.

Here's an example:

* Implementing an error type with enums: https://git.deuxfleurs.fr/Deuxfleurs/garage/src/branch/main/...

* Which derives from a more general error type with even more helpful enums: https://git.deuxfleurs.fr/Deuxfleurs/garage/src/branch/main/...

* Then some straightforward handling of the error: https://git.deuxfleurs.fr/Deuxfleurs/garage/src/branch/main/...


abseil's "StatusOr" is roughly like Rust's Result type, and is what is used inside Google's C++ codebases (where exceptions are mostly forbidden)

https://github.com/abseil/abseil-cpp/blob/master/absl/status...


I wish Option and Result weren’t exclusive. Sometimes a method can return an error, no result or a valid result. Some crates return an error for “no result”, which feels wrong to me. My solution is to wrap Result<Option>, but it still feels clunky.

I could of course create my own type for this, but then it won’t work with the ? operator.


I think Result<Option> is the way to go. It describes precisely that: was it Ok? if yes, was there a value?

I could imagine situations where an empty return value would constitute an Error, but in 99% of cases returning None would be better.

Result<Option> may feel clunky, but if I can give one recommendation when it comes to Rust, it is that you should not weigh your own aesthetic feelings about the code too heavily, as that will lead to a lot of pain in many cases — work with the grain of the language, not against it, even if the result does not satisfy you. In this case I'd highly recommend just using Result<Option> and not worrying about it.

You being able to compose/nest those base types and unwraping or matching them in different sections of your code is a strength not a weakness.
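
A small sketch of what that shape looks like in practice (hypothetical lookup function over a string-keyed map):

    use std::collections::HashMap;
    use std::num::ParseIntError;

    // Three distinct outcomes:
    //   Err(_)      -> the stored value was malformed
    //   Ok(None)    -> the key simply isn't there
    //   Ok(Some(v)) -> found and parsed
    fn lookup(db: &HashMap<String, String>, key: &str) -> Result<Option<i64>, ParseIntError> {
        match db.get(key) {
            None => Ok(None),
            Some(raw) => Ok(Some(raw.parse()?)), // `?` still works for the error half
        }
    }

    fn main() {
        let db = HashMap::from([("a".to_string(), "42".to_string())]);
        assert_eq!(lookup(&db, "a"), Ok(Some(42)));
        assert_eq!(lookup(&db, "missing"), Ok(None));
    }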


Result<Option> is the correct way to represent this, and if you need further convincing, libstd uses it for the same reason: https://doc.rust-lang.org/stable/std/primitive.slice.html?se...


For things like this I find that ? still works well enough, but I tend to write code like

    match x(y) {
        Ok(None) => "not found".into(),
        Ok(Some(x)) => x,
        Err(e) => handle_error(e),
    }
Because of pattern matching, I often also have one arm for specific errors to handle them specifically in the same way as the ok branches above.


> I could of course create my own type for this, but then it won’t work with the ? operator.

This is what the Try[^1] trait is aiming to solve, but it's not stabilized yet.

[^1]: https://rust-lang.github.io/rfcs/3058-try-trait-v2.html


This sounds valid. Lookup in a db can be something or nothing or error.

Just need a function that allows lifting option to result.
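
That lifting function already exists in libstd as Option::ok_or / ok_or_else; a small sketch (the error type and data are made up):

    use std::collections::HashMap;

    #[derive(Debug)]
    struct NotFound(u64);

    fn find_user(db: &HashMap<u64, String>, id: u64) -> Result<String, NotFound> {
        // None becomes Err(NotFound(id)), Some becomes Ok.
        db.get(&id).cloned().ok_or(NotFound(id))
    }

    fn main() {
        let db = HashMap::from([(1u64, "alice".to_string())]);
        assert_eq!(find_user(&db, 1).unwrap(), "alice");
        assert!(find_user(&db, 2).is_err());
    }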


Well, I think returning "not found" as an error makes sense when the action performed was an "update X" and X doesn't exist. Result<Option> is totally normal where it makes sense, though.


Convention-wise Go is even better. On the one hand, there is zero magic in error handling ("Errors are values" and interface type 'error' is nothing special), on the other hand it's kind of a convention (slightly enforced by linters) that functions that return errors use this type and it's the last return parameter.

Nothing prevents people from doing it their own way (error int codes, bool handling, Result types, panics, etc), but it's just the easiest way that handles 99% of the error handling cases well, so it sticks and gives a nice feeling of predictability to error handling patterns in Go codebases.


It's also highly dependent upon the team's skill and diligence. You can easily ignore errors and skip error handling in Go with predictably hilarious results.

In Rust, you can't just skip error handling. You have to proactively do something generally unwise (and highly visible!) like call .unwrap() or you have to actually handle the error condition.

Go still relies on goodwill and a good night's sleep. The Rust compiler will guard against laziness and sleep deprivation, because ultimately programming languages are about people, not the computers.


IMHO the ugly thing about Result and Option (and a couple of other Rust features) is that they are stdlib types, basic functionality like this should be language syntax (this is also my main critique of 'modern C++').

And those 'special' stdlib types wouldn't be half as useful without supporting language syntax, so why not go the full way and just implement everything in the language?


Uh, nope. Your language needs to be able to define these types. So they belong into the stdlib because they are useful, not because they are special.

You might add syntactic sugar on top, but you don't want these kinds of things in your fundamental language definition.


> Then you top it on with `?` shortcut

I really wish java used `?` as a shorthand to declare and propagate checked exceptions of called function.


One of the strengths of C++ is the ability to build features like this as a library, and not hardcode it into the language design.

Unless you specifically want the ‘?’ operator, you can get pretty close to this with some clever use of templates and operator overloading.

If universal function call syntax becomes standardized, this will look even more functional and elegant.


Rust also started with it as a library, as try!, before ?. There were reasons why it was worth making syntax, after years of experience with it as a macro.


why not just read the function you are calling to determine the way it expects you to handle errors?

after all, if a library exposes too many functions to you, it isn't a good library.

what good is it for me to have a result type if i have to call 27 functions with 27 different result types just to rotate a cube?


Ok, I'm at like 0 knowledge on the Rust side, so bear that in mind. Also, to note that I'm genuinely curious about this answer.

Why can't I return an integer on error? What's preventing me from writing Rust like C++?


You can write a Rust function that returns `i32` where a negative value indicates an error case. Nothing in Rust prevents you from doing that. But Rust does have facilities that may offer a nicer way of solving your underlying problem.

For instance, a common example of the "integer on error" pattern in other languages is `array.index_of(element)`, returning a non-negative index if found or a negative value if not found. In Rust, the return type of `Iterator::position` is instead `Option<usize>`. You can't accidentally forget to check whether it's present. You could still write your own `index_of(&self, element: &T) -> isize /* negative if not found */` if that's your preference.

https://doc.rust-lang.org/std/iter/trait.Iterator.html#metho...
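
A small sketch of the contrast; `Iterator::position` is the real std method, while `index_of` here is a made-up sentinel-style wrapper:

    // C-style sentinel: the caller has to remember that -1 means "not found".
    fn index_of(haystack: &[i32], needle: i32) -> isize {
        haystack
            .iter()
            .position(|&x| x == needle)
            .map(|i| i as isize)
            .unwrap_or(-1)
    }

    fn main() {
        let data = [10, 20, 30];

        // Nothing forces a check before this value gets used as an index.
        let i = index_of(&data, 40);
        println!("sentinel index: {i}");

        // The Option-based equivalent: the compiler makes you deal with None.
        match data.iter().position(|&x| x == 40) {
            Some(i) => println!("found at index {i}"),
            None => println!("not found"),
        }
    }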


Nothing prevents you, you just get uglier code and more possibility of confusion.


Did you ever actually program in Rust?

In my experience, a lot of the code is dedicated to "correctly transforming between different Result / Error types".
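
For what it's worth, the kind of conversion code being described usually looks roughly like this (hypothetical `AppError` type; crates like thiserror exist largely to generate this boilerplate):

    use std::fmt;
    use std::num::ParseIntError;

    // A hypothetical application-level error type.
    #[derive(Debug)]
    enum AppError {
        BadInput(ParseIntError),
        Io(std::io::Error),
    }

    impl fmt::Display for AppError {
        fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
            match self {
                AppError::BadInput(e) => write!(f, "bad input: {e}"),
                AppError::Io(e) => write!(f, "I/O error: {e}"),
            }
        }
    }

    // These From impls are what let `?` convert errors automatically...
    impl From<ParseIntError> for AppError {
        fn from(e: ParseIntError) -> Self {
            AppError::BadInput(e)
        }
    }

    impl From<std::io::Error> for AppError {
        fn from(e: std::io::Error) -> Self {
            AppError::Io(e)
        }
    }

    // ...so the call sites themselves can stay short:
    fn port_from_file(path: &str) -> Result<u16, AppError> {
        let text = std::fs::read_to_string(path)?;
        Ok(text.trim().parse::<u16>()?)
    }

    fn main() {
        match port_from_file("port.txt") {
            Ok(port) => println!("port = {port}"),
            Err(e) => eprintln!("{e}"),
        }
    }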

Much more verbose than exceptions, despite most of the time pretending they're just exceptions (i.e. the `?` operator).

Why not just implement exceptions instead?

(TBH I fully expect this comment to be downvoted, then Rust to implement exceptions in 10 years... Something similar happened when I suggested generics in Go.)


I've only worked with exceptions, so I can't really comprehend the book-keeping required without them. To me it's a separation of concerns: the happy path only involves happy code. The "side channel" for the unhappy path is an exception, with an exception handler at a layer of the abstraction where it's meaningful, yet happy, code. By "happy" I mean code that's simply doing the direct practical work of trying to accomplish something, so it doesn't need to worry about when things go terribly wrong.

Being blind to the alternative, and mostly authoring lower level libraries, what's the benefit of not having exceptions? I understand how they're completely inappropriate for an OS, a realtime system, etc, but what about the rest? Or is that the problem: once you have the concept, you've polluted everything?


There are mostly drawbacks to not having exceptions; I'm not really aware of any real benefits of going without them.

People often complain about:

- performance impact, either in speed (most languages) or binary size (C++); this, however, is mostly an implementation concern and wouldn't affect Rust at all, since exceptions could simply be compiler-level syntax sugar, compiled exactly the same way the Result type is today (that said, an optional stack trace is another potential issue, and would have a performance cost even in Rust)

- checked exceptions - this is particularly a concern in Java, which has a fairly poor type system (closed subtyping only) and no type inference, so declaring all unhandled exceptions is tedious

- unchecked exceptions - in this case "every exception can happen anywhere", so it's ostensibly unsafe (well, that's just how life is, and even Java has unchecked exceptions such as OutOfMemoryError that can happen anywhere) - some people claim that "exceptions can't just randomly jump out of code" is a benefit of not having exceptions, but those people usually sweep OutOfMemory and DivisionByZero under the rug (e.g. in Rust, where they simply "crash" the program)

Rust would obviously fit the checked-exceptions path, as the Result implementation basically is "poor man's checked exceptions". It would only need to flip the syntax sugar - propagate by default (and implicitly), "catch" and materialize using `?` - and make Error an open enum (so that you can add cases implicitly - see e.g. OCaml's extensible exception type `exn` [1]) - and that's basically it!

[1] https://ocaml.org/manual/5.3/extensiblevariants.html
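
Rust has no extensible error enum in the OCaml `exn` sense; the closest common approximation today is a boxed trait object, which at least lets any error type flow through `?` (sketch, with a hypothetical `load_port` function):

    use std::error::Error;
    use std::fs;

    // Box<dyn Error> acts as a catch-all error type: any concrete error that
    // implements std::error::Error converts into it automatically via `?`.
    fn load_port(path: &str) -> Result<u16, Box<dyn Error>> {
        let text = fs::read_to_string(path)?;   // io::Error -> Box<dyn Error>
        let port = text.trim().parse::<u16>()?; // ParseIntError -> Box<dyn Error>
        Ok(port)
    }

    fn main() {
        match load_port("port.txt") {
            Ok(port) => println!("port = {port}"),
            Err(e) => eprintln!("failed: {e}"),
        }
    }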


In places where you only want to pass an error to the caller, Rust lets you just add "?" to the call that returns a Result/Option. And the Rust compiler will check that every potential error is handled.

I wouldn't say that it's the tedious part of the language.


The result type is obviously insufficient for writing nontrivial programs, because nontrivial programs fail in nontrivial ways that need exceptional control flow. The result type does not work because you have to choose between immediate callers handling failures (they don't always have the context to do so because they're not aware of the context of callers higher up on the call stack) or between propagating all of your error values all the way up the stack to the error handling point and making your program fantastically brittle and insanely hard to refactor.


The Result type works for an awful lot of people. Be careful with absolute statements like "does not work." When it works for many others, they might just assume it's a skill issue.


When I say "it doesn't work" I mean that it doesn't allow you to write good code, not "doesn't work" as in the sense that people don't like it. That latter one doesn't make any sense, as languages like PHP "work" for many tens (hundreds?) of thousands of people.

I'm well aware of the tendency of Rust programmers to write bad code, constrained by the language, and then be deluded into thinking that that's good code.


Lord, grant me the confidence of this man who claims objective understanding of what does and does not constitute "good code".


I note that you did nothing to refute my point about why error-handling-via-return-values is insufficient and instead resort to emotional manipulation and logical fallacies.

This seems to happen a lot in the Rust community when people point out flaws in the language.


Why should I defend Rust? You haven't even defined "good code", which has eluded the best minds in the field for as long as the field has existed.

You are not a serious person.


I don't need to define "good code" for my argument.

You're not a person capable of using logic, apparently. As is characteristic of Rust zealots.


> When I say "it doesn't work" I mean that it doesn't allow you to write good code

But you skipped over how you are defining, "good code"? Without that part, "doesn't allow" cannot be evaluated in the context of Java or C++ or Python or Go or Rust.

Logic.


Apparently you have difficulty with reading comprehension, because my original comment already defines "good code":

> The result type does not work because you have to choose between immediate callers handling failures (they don't always have the context to do so because they're not aware of the context of callers higher up on the call stack) or between propagating all of your error values all the way up the stack to the error handling point and making your program fantastically brittle and insanely hard to refactor.

It's interactions like this that are the reason why my organization isn't adopting Rust.


Defined good code TO YOU. You've failed to recognize there is no objective and universally recognized metric for good code within our industry. The closest thing we have to it is number of defects per line of code and how severe those defects are through CVEs. That's it.

You've mistaken your personal preferences and aesthetic sense for absolute truth.

The arrogance is astounding.


> have to choose between immediate callers handling failures (they don't always have the context to do so because they're not aware of the context of callers higher up on the call stack) or between propagating all of your error values all the way up the stack to the error handling point and making your program fantastically brittle and insanely hard to refactor.

This is a bad choice to have to make. If you believe otherwise, you are an incompetent software engineer. Full stop. These are not personal preferences or aesthetics - these are facts. If you can't parse why these are bad, then you're incapable of basic logic.

> within our industry

Of course, we can already see that because you're resorting to fallacies instead of addressing the point itself.

> You've failed to recognize there is no objective and universally recognized metric for good code within our industry.

Literally none of that is relevant to the point that I'm making. I don't have to define "good code" in order to point out that something is "bad code".

I'm going to print out this thread and show it to anyone who says that they're considering learning Rust as a warning that this is what the community is like - incapable of using logic, unwilling to admit the slightest fault in their religion despite factual evidence to the contrary, and willing to use dishonest rhetoric and fallacies in their defense.


You've certainly demonstrated something, though likely not what you intended.


You can inspect error values in Rust, handle some errors, and bubble up others, with an ordinary match statement.

Exactly like try/catch.
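
A rough sketch of that shape (hypothetical `read_settings` helper): one error kind is handled locally, everything else is bubbled up, much like catching one exception type and rethrowing the rest.

    use std::fs;
    use std::io::{self, ErrorKind};

    // Handle "file not found" locally, bubble every other I/O error up.
    fn read_settings(path: &str) -> Result<String, io::Error> {
        match fs::read_to_string(path) {
            Ok(contents) => Ok(contents),
            Err(e) if e.kind() == ErrorKind::NotFound => Ok(String::new()), // use defaults
            Err(e) => Err(e), // bubble up to the caller
        }
    }

    fn main() -> Result<(), io::Error> {
        let settings = read_settings("settings.conf")?;
        println!("{} bytes of settings", settings.len());
        Ok(())
    }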


> The result type is obviously insufficient for writing nontrivial programs

Counterpoint: there are many non-trivial programs written in Rust, and they use Result for error handling.


Pointing to programs written with design flaws caused by the flaw in the programming language does not invalidate the claim that the flaw exists and negatively affects those programs.

You can write any program in COBOL. Most people would say that it's an insufficient language for doing so.

"Insufficient" here obviously does not mean that it's impossible to write non-trivial programs, just that they'll have bad code.


I like so much about Rust.

But I hear compiling is too slow.

Is it a serious problem in practice?


Absolutely, the compile times are the biggest drawback IMO. Everywhere I've worked that builds large systems in Rust has eventually ended up spending a good amount of dev time trying to get CI/CD pipeline times down to something sane.

Besides developer productivity it can be an issue when you need a critical fix to go out quickly and your pipelines take 60+ minutes.


If you have the money to throw at it, you can get a long way optimising CI pipelines just by throwing faster hardware at it. The sort of server you could rent for ~$150/month might easily be ~5x faster than your typical Github Actions hosted runner.


Besides faster hardware, one of the main features (and drawbacks) you get with self-hosted runners is the option to break through build isolation, and have performant caches between builds.

With many other build systems I'd be hesitant to do that, but since Cargo is very good about what to rebuild for incremental builds, keeping the cache around is a huge speed boost.


Yes, this is often the best "low-hanging fruit" option, but it can get expensive. It depends on how you value your developer time.


Don't use a single monolithic crate. Break your project up into multiple crates. Not only does this help with compile time (the individual crate compiles can be parallelized), it also tends to help with API design as well.


Every project I've worked on used a workspace with many crates. Generally that only gets you so far on large projects.


It compiles different files separately, right?

With some exceptions for core data structures, it seems that if you only modified a few files in a large project the total compilation time would be quick no matter how slow the compiler was.


Sorta. The "compilation unit" is a single crate, but rustc is now also parallel, and LLVM can also be configured to run in parallel IIRC.

Rust compile times have been improving over time as the compiler gets incrementally rewritten and optimised.


We have 60-minute deploy pipelines and we're in Python. Just mentioning that since, in theory, we are not penalized for long compile times.

The ability to test quickly and get fast feedback is manna from the gods in software development. Organizations should keep it right below customer satisfaction and growth as a driving metric.


I can't speak for a bigger Rust project, but my experience with C++ (mostly with CMake) is so awful that I don't think it can get any worse.

As with any bigger C++ project, there are something like three build tools, two different packaging systems, and likely one or even several code generators involved.


That does not answer the OP's question at all.


"It can't get any worse than C++" That's my response. So just use rust. In the long run you'll save time as well.


It is slow, and yes, it is a problem, but given that typical Rust code generally needs fewer full compiles to get to working tests (with more time spent actively in the editor, with an incremental compiler like Rust Analyzer), it usually balances out.

Cargo also has good caching out of the box. While Cargo is not the best build system, it's a good, easy-to-use one, so you generally get fast incremental compiles during development when you only edit one file. Docker workflows like cargo-chef also make heavy use of this.


Compile times are the reason why I'm sticking with C++, especially with the recent progress on modules. I want people with weaker computers to be able to build and contribute to the software I write, and Rust is not the language for that.


C++ compiles quickly? News to me.


I worked in the chromium C++ source tree for years and compiling there was orders of magnitude slower than any Rust source tree I've worked in so far.

Granted, there aren't any Rust projects that large yet, but I feel like compilation speed is something that can be worked around with tooling (distributed build farms, etc.). C++'s lack of safety and proclivity for use-after-free errors are harder to fix.


Are there Rust projects within an order of magnitude of Chromium's size?


Almost a quarter of Firefox's compiled code is Rust: https://4e6.github.io/firefox-lang-stats/


Nice. That's more than I expected. What's the compilation time compared to, say, the c++ portion?


It depends on where you're coming from. For me, Rust has replaced a lot of Python code and a lot of C# code, so yes, the Rust compilation is slow by comparison. However, it really hasn't adversely affected (AFAICT) my/our iteration speed on projects, and there are aspects of Rust that have significantly sped things up (eg, compilation failures help detect bugs before they make it into code that we're testing/running).

Is it a serious problem? I'd say 'no', but YMMV.


Yes, Rust compiling is slow. Then again, I wouldn't say that C++ is exactly speedy in that area either. Nor Java. None of those are even in the same zip code as Go's compile speed.

So if you're cool with C++ or Java compile times, Rust will generally be fine. If you're coming from Go, Rust compiles will feel positively glacial.


Compilation is indeed slow, and I do find it frustrating sometimes, but all the other benefits Rust brings more than make up for it in my book.


People who say "Rust compiling is so slow" have never experienced what building large projects was like in the mid-1990s or so. It's totally fine. Besides, there's also https://xkcd.com/303/


Or maybe they have experienced what it was like and they don't want to go back.


Not really relevant. The benchmark is how other language toolchains perform today, not what they failed to do 30 years ago. I don't think we'd find it acceptable to go back to mid-'90s build times in other languages, so why should we be ok with it with something like Rust?


And panics?


Those are generally used as asserts, not control flow / error handling.
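
A small sketch of that split (hypothetical functions): recoverable failures go in the return type, broken invariants panic like a failed assert.

    use std::num::ParseIntError;

    // Recoverable failure: expressed in the type, handled by the caller.
    fn parse_age(input: &str) -> Result<u8, ParseIntError> {
        input.trim().parse()
    }

    // Programmer error / broken invariant: panic, like a failed assert.
    fn average(values: &[f64]) -> f64 {
        assert!(!values.is_empty(), "average() called with no values");
        values.iter().sum::<f64>() / values.len() as f64
    }

    fn main() {
        match parse_age("42") {
            Ok(age) => println!("age = {age}"),
            Err(e) => eprintln!("bad age: {e}"),
        }
        println!("avg = {}", average(&[1.0, 2.0, 3.0]));
    }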


It's true, but using unwrap is a bit boring. I mean... boring is good, but it's also boring.


You shouldn't be using unwrap.



