As a society we've proven over and over again that we're unable to solve these problems that require coordination against greed. We've pulled the smartphone out of Pandora's box.
There's a $500B industry selling the phones, $2.5 trillion selling telecom services, trillions more selling social media, and most of the economy involves selling products over the internet. Those are some HUGE incentives to maintain the status quo, or to get people even more addicted.
I don't think our society is capable of answering that question and starting a Dune-style "Butlerian Jihad" and destroying all machines-that-think.
Absolutely agree. People made the bad decisions. But the bad choices existed. People who don't understand that the bad choices are bad choices, or worse, think there is no possible way a choice could be bad, are far more likely to end up being the people who made bad decisions than people who understand that the decisions mattered.
Don't go running around telling people that they can dig the Panama Canal with three toothpicks and a spare weekend, and if they fail, well by golly they just didn't have enough grit and gumption like us awesome folks who could have done it with only two. Tool choice matters. In fact I can hardly process how anyone can be an engineer and think that it doesn't, let alone how they can think it's some sort of engineering wisdom to claim that it doesn't matter what tools you use to do a project.
Of course, picking the tool is only the moment the project may fail. It is not the moment the project succeeds; there's still a lot of using it correctly that will be necessary, and plenty of further opportunities to fail even with the correct tool. But at least success is within the range of possibilities. You can forestall that possibility entirely on day one with incorrect tool choices.
Sure, people with ZERO enterprise experience might say: hey, let's write our <main software> in PHP - I had a lot of fun 15 years ago with that, and I'm sure we can make it work. That's the "3 toothpicks" in your example.
But is anyone REASONABLY competent going to do that? They might pick C/C++, Go, Rust, Java, etc. Those aren't a choice between "bulldozers and toothpicks" - they are more akin to choosing between Caterpillar, Volvo, or Hitachi as the vendor of choice for construction equipment. They may have some gaps in their specific list of equipment, they may charge too much for one specific tool, your workers may have experience with one but not the other, etc.
Your example should be the textbook definition of a strawman argument.
> Tool choice matters. In fact I can hardly process how anyone can be an engineer and think that it doesn't, let alone how they can think it's some sort of engineering wisdom to claim that it doesn't matter what tools you use to do a project
Just to be clear, I wasn't trying to claim this; tooling certainly matters, at the very least, for the happiness and welfare of an engineering team! But, the article tries to claim things like "choosing a programming language is the single most expensive economic decision your company will make" and outside of a few extreme edge cases, I just can't agree with that particular thesis. Even the examples of bad decision-making you pose in your sibling comments, like writing a database in Go or "almost failing" by using sketchy niche datastores, are actually examples of this exact thing: these projects made huge engineering mistakes only to achieve some level of success as a business. Would they have been more successful if they made better engineering decisions? Possibly, but again, language and framework just was not the most important decision or factor driving an outcome.
I'm not saying that means we shouldn't care about making good engineering choices; there are easy ways to do things and hard ways to do things, and certainly I'm going to advocate for and work with people and at companies that favor the easy ways to do things. But when it comes to overall outcomes, I'll stand by having seen far more projects sacrificed to analysis paralysis, rewrites, rewrite-related hand wringing, and language/tooling hubris than sabotaged by poor language and framework choices.
> But programmers seem to feel that they are above this stuff.
No, programmers are well below this stuff. Budgets for medical research, treatments, and trials are far larger than IT budgets. And for a typical IT project the underlying point is that even if the requirements were wrong, or changed halfway through, software is malleable enough to be changed and refactored - all while remaining in use during the change cycle.
> The main driver of a project's success is almost always driven by: the composition of employees working on the project, and the competence of the people architecting the project.
This is my experience too. I’d go a bit further and say the leads are the primary driver of success. Because ultimately, if the composition of the people on a project is incorrect, it’s the lead’s responsibility to realize and change it.
I recommend including links to these within the submission itself. Landing on the game with no context isn’t a great experience. It also isn’t easy to share a link with others this way.
Apologies, the context is the talk, and we didn't submit this visualizer link to HN ourselves!
We are working on a tutorial stage, but people tend to hear about TB, TigerStyle, and our DST first, appreciate the talks, and then find the visualizers in that context.
I always think more languages should support Result… but only to handle expected error states. For example, you may expect that some functions may time out or that a user may attempt an action with an invalid configuration (e.g., malformed JSON).
Exceptions should be reserved for developer errors like edge cases that haven’t been considered or invalid bounds which mistakenly went unchecked.
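A minimal Rust sketch of that split (the function name and error message are illustrative, not from any particular codebase): the expected failure - bad user input - is modeled as a `Result` the caller must handle, while a true developer error would instead be a panic.

```rust
// Hypothetical example: invalid user input is an *expected* error state,
// so it is returned as a Result rather than thrown or panicked.
fn parse_port(input: &str) -> Result<u16, String> {
    input
        .trim()
        .parse::<u16>()
        .map_err(|e| format!("invalid port {:?}: {}", input, e))
}

fn main() {
    // The caller is forced to acknowledge both outcomes explicitly.
    match parse_port("8080") {
        Ok(p) => println!("listening on {}", p),
        Err(msg) => eprintln!("config error: {}", msg),
    }
}
```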
I find it kind of funny that this is almost exactly how Java's much-maligned "checked exceptions" work. Everything old is new again.
In Java, when you declare a function that returns type T but might also throw exceptions of type A or B, the language treats it as though the function returned a Result<T, A|B>. And it forces the caller to either handle all possible cases, or declare that you're rethrowing the exception, in which case the behavior is the same as Rust's ? operator. (Except better, because you get stack traces for free.)
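The Rust side of that analogy, as a small sketch (names hypothetical): the error type appears in the signature the way `throws` does in Java, and `?` plays the role of declaring a rethrow.

```rust
use std::num::ParseIntError;

// The signature advertises the error, much like a Java method
// declaring `throws NumberFormatException`.
fn read_number(s: &str) -> Result<i64, ParseIntError> {
    s.parse::<i64>()
}

// `?` is the counterpart of rethrowing: on Err it returns early,
// handing the error to *this* function's caller.
fn double_number(s: &str) -> Result<i64, ParseIntError> {
    let n = read_number(s)?;
    Ok(n * 2)
}
```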
Java's distinction between Runtime and Checked Exception makes sense, and is pretty much the same panic vs Result distinction Rust makes. But Java's execution of the concept is terrible.
1. Checked exceptions don't integrate well with the type system (especially generics) and functional programming. They're also incompatible with creating convenient helper functions, like the ones Rust offers on Result.
2. Converting checked exceptions into runtime exceptions is extremely verbose, because Java assumed that the type of an error distinguishes between these cases. In reality, errors usually start out as expected in low-level functions but become unexpected at a higher level. In Rust that's a simple `unwrap`/`expect`. Similarly, converting a low-level error type to a higher-level error type is a simple `map_err`.
3. Propagation of checked exception is implicit, unlike `?` in Rust
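Point 2 in a short Rust sketch (the `ConfigError` type and function names are made up for illustration): one `map_err` translates the low-level error, and one `expect` turns an "expected" error into an "unexpected" panic at the level where it should never happen.

```rust
#[derive(Debug)]
struct ConfigError(String); // hypothetical higher-level error type

// Low level: failure is expected, so it's a Result.
fn read_level(s: &str) -> Result<u8, std::num::ParseIntError> {
    s.parse::<u8>()
}

// Mid level: translate the error type in one call
// (the Java equivalent is a catch-and-wrap block).
fn load_level(s: &str) -> Result<u8, ConfigError> {
    read_level(s).map_err(|e| ConfigError(format!("bad level {:?}: {}", s, e)))
}

fn main() {
    // Top level: here a failure would be a programmer mistake,
    // so one word converts it into a panic.
    let level = load_level("3").expect("built-in default must parse");
    assert_eq!(level, 3);
}
```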
Though Rust's implementation does have its weaknesses as well. I'd love the ability to use `Result<T, A | B>` instead of needing to define a new enum type.
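The workaround for the missing `A | B` union is the dedicated enum the comment alludes to; a sketch with hypothetical names, where a `From` impl lets `?` do the conversion automatically:

```rust
use std::num::ParseIntError;

// Stand-in for the anonymous union `ParseIntError | Empty`.
#[derive(Debug)]
enum ReadError {
    Parse(ParseIntError),
    Empty,
}

impl From<ParseIntError> for ReadError {
    fn from(e: ParseIntError) -> Self {
        ReadError::Parse(e)
    }
}

fn read_value(s: &str) -> Result<i32, ReadError> {
    if s.is_empty() {
        return Err(ReadError::Empty);
    }
    // `?` converts ParseIntError into ReadError via the From impl.
    Ok(s.parse::<i32>()?)
}
```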
I wish I could upvote this more. I can totally understand GP's sentiment, but we need to dispel the myth that results are just checked exceptions with better PR.
I think the first issue is the most important one, and this is not just an implementation issue. Java eschewed generics in its first few versions. This is understandable, because generics were quite a new concept back then, with the only mainstream languages implementing them being Ada and C++ - and the C++ implementation was brand new (in 1991) and quite problematic; it wouldn't work for Java. That said, this was a mistake in hindsight, and it contributed to a lot of pain down the road. In this case, Java wanted to have exception safety, but the only way they could implement it was as another language feature that cannot interact with anything else.
Without the composability provided by the type system, dealing with checked exceptions was always a pain, so most Java programmers just ended up wrapping them with runtime exceptions. Using checked exceptions "correctly" meant extremely verbose error handling with a crushing signal-to-noise ratio. Rust just does this more ergonomically (especially with crates like anyhow and thiserror).
I like the declaration side. I think part of where it misses the mark is the syntax on the caller side.
I feel like standard conditionals are enough to handle user errors while the heavy machinery of try-catch feels appropriately reserved for unexpected errors.
Probably, the problem with Java's `try-catch` is that it's not composable, and there's an abundance of unchecked exceptions (which could mess up the `catch` type). In Rust, you can just use `?` to short-circuit return, or do another chained method call: `result.map(...).and_then(...).unwrap_or(...)`.
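That chaining style in a runnable sketch (inputs are arbitrary): the value flows through the `Result` combinators without any control-flow machinery until the final fallback.

```rust
fn main() {
    // Chained combinators instead of try/catch blocks.
    let answer = "41"
        .parse::<i32>()     // Result<i32, ParseIntError>
        .map(|n| n + 1)     // runs only on the Ok path
        .unwrap_or(0);      // fallback used only on the Err path
    assert_eq!(answer, 42);

    // Same chain, failing parse: the fallback kicks in.
    let fallback = "not a number".parse::<i32>().map(|n| n + 1).unwrap_or(0);
    assert_eq!(fallback, 0);
}
```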
And more importantly, I don't think there's any JEP trying to improve checked exception handling.
While Java gets the blame, the concept was already present in CLU, Modula-3 and C++ before Java was even an idea.
I also find a certain irony in the fact that forced checked results are exactly the same idea from a CS type-theory point of view, even if the implementation path is a different one.
Java's checked exceptions experiment was very painful in various ways that directly exposing an error state as part of the return value is not, so I wouldn't quite characterize this as "Everything old is new again."
The first big thing is that Java, especially in the days of when checked exceptions were a really big thing and less so in modern Java, was really into a certain kind of inheritance and interface design that didn't play well with error states and focused on the happy path. It is very difficult to make Java-esque interfaces that play well with checked exceptions because they like to abstract across network calls, in-memory structures, filesystem operations, and other side effectful tasks that have very different exception structures. An interface might have a single `writeData` method that might be backed by alternatively a write into an in-memory dictionary, a filesystem key-value store, a stateless REST API, or a bidirectional WebSocket channel which all have wildly different exceptions that can occur.
The second thing is that because checked exceptions were not actual return values but rather had their own special channel, they often did not play well with other Java API decisions, such as streams or anything with `Runnable` that involved the equivalent of a higher-order function (a function that takes another function as an argument). If, say, you had something you wanted to call in a `Stream.map` that threw a checked exception, you couldn't use it, even if you noted in the enclosing method that you were throwing a checked exception, because there was no way of telling `Stream.map` "if the function being `map`ped throws an exception, rethrow it". This arose because checked exceptions weren't actual return values and therefore couldn't be manipulated the same way. You could get around it, but only with shenanigans that would need to be repeated every time the issue came up for another API.
On the other hand, if this wasn't a checked exception but was directly a part of the return value of a function, it would be trivial to handle this through the usual generics that Java has. And that is what something like `Result` accomplishes.
IMHO the mapping issue comes from functions not being first class, so all types require Functor-like interfaces which are needlessly verbose. Splitting these is not semantically different than a function that returns a value vs a function that returns a Result.
I have little love for Java, but explicitly typed checked exceptions are something I miss frequently in other languages.
No I think it's a deeper issue than that. In particular because exceptions aren't a return value, you can't make a function generic over both values and exceptions at the same time. This would persist even with first class functions.
If you want to be generic over exceptions, you have to throw an exception. It would be nice to have, e.g., a single `map` method that throws an exception when the function being called throws one, and doesn't throw when the function being called doesn't. But as it stands, if you want to be able to throw a checked exception at all, you have to mark that your higher-order function throws checked exceptions, even when you would prefer to be more generic, so that you only throw a checked exception if the function being called throws one.
The only reason this works in other languages is because they’ve made the choice to return objects that represent success+value or error and then added explicit syntax to support those types. That means function signatures put the error in the return type instead of a separate exception channel, but that’s really only a syntactic difference. It’s otherwise isomorphic.
Only it is not considered by the type checker. Result brings errors into the realm of properly typed code that you can reason about. Checked exceptions are a bad idea that did not work out (makes writing functional code tedious, messes with control flow, exceptions are not in the type system).
The only difference between a `fun doThing: Result<X, SomeError>` and a `fun doThing: X throws SomeError` is that with the checked exception, unpacking of the result is mandatory.
You're still free to wrap the X or SomeError into a tuple after you get one or the other. There is no loss of type specificity. It is no harder to "write functional code" - anything that would go in the left() gets chained off the function call result, and anything that would go in the right() goes into the appropriate catch block.
You've discarded the error type, which trivialised the example. Rust's error propagation keeps the error value (or converts it to the target type).
The difference is that Result is a value, which can be stored and operated on like any other value. Exceptions aren't, and need to be propagated separately. This is more apparent in generic code, which can work with Result without knowing it's a Result. For example, if you have a helper that calls a callback in parallel on every element of an array, the callback can return Result, and the parallel executor doesn't need to care (and returns you an array of results, which you can inspect however you want). OTOH with exceptions, the executor would need to catch the exception and store it somehow in the returned array.
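A minimal sketch of that executor point (sequential rather than actually parallel, to keep it short; `run_all` is a hypothetical helper): the generic code never mentions `Result`, yet callbacks returning one work unmodified.

```rust
// Generic helper: maps a callback over elements. It has no idea
// (and no need to know) that U might be a Result.
fn run_all<T, U>(items: Vec<T>, f: impl Fn(T) -> U) -> Vec<U> {
    items.into_iter().map(f).collect()
}

fn main() {
    let inputs = vec!["1", "two", "3"];
    // Here U = Result<i32, ParseIntError>; run_all is none the wiser.
    let results = run_all(inputs, |s| s.parse::<i32>());
    // The caller inspects successes and failures however it wants.
    let oks: Vec<i32> = results.into_iter().filter_map(|r| r.ok()).collect();
    assert_eq!(oks, vec![1, 3]);
}
```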
Either works, but now you have two ways of returning errors, and they aren't even mutually exclusive (an Either-returning function can still throw).
Catch-and-wrap doesn't compose in generic code. When every call may throw, it isn't returning its return type T, but actually an Either<T, Exception> - yet you lack a type system capable of reasoning about that explicitly. You get an incomplete return type, because the missing information lives in function signatures and control flow, not in the types being returned. It's not immediately obvious how this breaks type systems if you keep throwing, but throwing stops being an option when you want to separate returning errors from the immediate act of changing control flow - for example, when you collect multiple results without stopping on the first error. Then you need a type capturing the full result of a call.
If you write a generic map() function that takes T and returns U, it composes well only if exceptions don't alter the types. If you map A->B, B->C, C->D, it trivially chains to A->D without exceptions. An identity function naturally gives you A->A mapping. This works with Results without special-casing them. It can handle int->Result, Result->int, Result->Result, it's all the same, universally. It works the same whether you map over a single element, or an array of elements.
But if every map callback could throw, then you don't get a clean T->U mapping, only T -> Either<U, Exception>. You don't have an identity function! You end up with Either<Either<Either<... when you chain them, unless you special-case collapsing of Eithers in your map function. The difference is that with Result, any transformation of Result<T, E1> to Result<U, E2> (or any other combo) is done inside the concrete functions, abstracted away from callers. But if a function call throws, the type change and transformation of the type is forced upon the caller. It can't be abstracted away from the caller. The map() needs to know about Either, and have a strategy for wrapping and unwrapping them.
catch lets you convert exceptions to values, and throw converts values to exceptions, so in the end you can make it work for any specific use case, but it's just this extra clunky conversion step you have to keep doing, and you juggle between two competing designs that don't compose well. With Result, you have one way of returning errors that is more general and more composable, without a second, incomplete, less flexible mechanism that it has to be converted to and from.
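The composition argument above can be seen in a few lines of Rust (inputs arbitrary): each `map` stage has a clean `T -> U` type, and the one stage that transforms a `Result` does so *inside* its own closure, so no Either-wrapping is ever forced on the pipeline itself.

```rust
fn main() {
    let chained: Vec<_> = vec!["1", "x"]
        .into_iter()
        .map(|s| s.trim())          // &str -> &str (identity-like stage)
        .map(|s| s.parse::<i32>())  // &str -> Result<i32, _>
        .map(|r| r.map(|n| n * 2))  // the Result transform stays inside the closure
        .collect();
    assert_eq!(chained[0], Ok(2));
    assert!(chained[1].is_err());
}
```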
I think you're missing the key point about return types with checked exceptions.
`int thing()` in Java returns type `int`. `int thing() throws AnException` in Java returns type `int | AnException`, with language-mandated destructuring: the `int` goes into the normal return path and `AnException` into a required `catch` block.
The argument you're making, that the compiler doesn't know the return type and "you lack type system capable of reasoning about that explicitly", is false. Just because the function says its return type is `int` doesn't mean the compiler is unaware there are three possible returns, and also doesn't mean the programmer is unaware of that.
The argument you are making applies to UNchecked exceptions and does not apply to CHECKED exceptions.
It's not a single return type T that is a sum type. It's two control flow paths returning one type each, and that's a major difference, because the types and control flow are complected together in a way that poorly interacts with the type system.
It's not `fn(T) -> U` where U may be whatever it wants, including Ok|Exception in some cases. It's `fn(T) -> U throws E`, and the `U throws E` part is not a type on its own. It's part of the function signature, but lacks a directly corresponding type for U|E values. It's a separate not-a-type thing that doesn't exist as a value, but is an effect of control flow changes. It needs to be caught and converted to a real value with a nameable type before it can work like a value. Returning Either<U, E> isn't the `U throws E` thing either. Java's special alternative way of returning either U or E is not a return type, but two control flow paths returning one type each.
The compiler is fully aware of what's happening, but it's not the same mechanism as Result. By focusing on "can this be done at all", you miss the whole point of Result achieving this in a more elegant way, with fewer special non-type things in the language. Being just a regular value with a real type, which simply works everywhere values work without altering control flow, is the main improvement of Result over checked exceptions. Removal of try/catch from the language is the advantage and simplification that Result brings.
Result proves that Java's special-case exception checking is duplicating work of type checking, which needlessly lives half outside of the realm of typed values. Java's checked exceptions could be removed from the language entirely, because it's just a duplicate redundant type checker, with less power and less generality than the type checker for values.
I usually divide things into "errors" (which are really "invariant violations") and "exceptions". "Exceptions", as the name implies, are exceptional and few, and when they happen they're usually my fault. "Errors", on the other hand, happen a lot (depending on the user) and are usually surfaced to the user.
That's subtly different. Whose fault it is is secondary; what primarily matters is whether you should continue with the rest of the process.
There is always a cleanup layer, the trick is to choose well between 1 and 2:
1. Some code in the same OS process is able to bring data back to order.
2. OS can kill the process and thus kill any corruption that was in its address space.
3. Hardware on/off button can kill the entire RAM content and thus kill any corruption that spilled over it.
Take for example an unexpected divide by zero, that's a classic invariant violation. The entire process should blow up right there because:
- it knows that the data in memory is currently corrupted,
- it has no code to gently handle the corruption,
- and it knows the worst scenario that can happen: some "graceful stop" routine might decide to save the corrupted data to disk/database/third-party.

An unrecoverable panic (uncatchable exception) is a very good generic idea, because a persistently-corrupted-data bug is a hundred times worse than any died-with-ugly-message bug as far as users are concerned.
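A tiny sketch of that stance (function and message are made up): an invariant check that panics rather than limping on, on the theory that dying loudly beats persisting garbage.

```rust
// If count is zero here, some earlier logic is already broken; continuing
// would risk writing meaningless data downstream, so panic instead.
fn average(total: i64, count: i64) -> i64 {
    assert!(count != 0, "invariant violated: count must be nonzero");
    total / count
}

fn main() {
    assert_eq!(average(10, 2), 5);
    // average(1, 0) would abort the whole process with the message above.
}
```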
> Take for example an unexpected divide by zero, that's a classic invariant violation. The entire process should blow up right there because:
> - it knows that the data in memory is currently corrupted,
> - it has no code to gently handle the corruption,
are these conditions or implications?
- corruption isn't necessary: it could be the consequence of a bug
- corruption can be handled sometimes: fetch the data from the source again, apply data correction algorithms
- you know and control what the graceful stop does; maybe not save the corrupted data to disk? or maybe save it to disk and ask the user to send it to you
I made several assumptions to build an argument for why some errors should result in process-level termination.
By "corruption" I mean much more than merely a hardware bit error. I mean any situation when data no longer has a meaning for a user.
By "unexpected" I mean that the program has not been prepared to deal with the situation: there is no code there to "fetch the data from the source again", etc.
> you know and control what the graceful stop does
No, in fact I'm exploring here situations when I don't know this with certainty.
Both bugs and exceptions can be reasonably thought of as programmer error, but they are not the same kind of programmer error. Exceptions are flaws in the code — conditions that could have been caught at compile time with a sufficiently advanced language/compiler. Whereas bugs are conditions that are programmatically sound, but violate human expectations.
Bugs require execution so a compiler cannot find bugs.
Exceptions also require execution, but that does not suggest that they are encompassed by bugs. The lack of a third term tells us that there is no overlap. If "bug" covered both exceptions and cases where human expectations are violated, there would necessarily be another term just for the case where human expectations are violated. But there is no such term...
...that I've ever heard. If it is that I've been living under a rock, do tell.
No, you haven’t been living under a rock; your definitions are just off, and you didn’t read carefully what I wrote. Or you’re just being overly pedantic.
I wrote bugs can cover exceptions. Think hard about what that sentence means in English. If I can do something it means I can both do something and not do something. So that means there are exceptions that are bugs and exceptions that are not bugs.
The reason why it’s important to illustrate this difference is because a large, large number of exceptions occur as a bug.
> So that means there are exceptions that are bugs
Which, like before, is false. Are you, perhaps, confusing exceptions (the concept; what we're talking about here) with the data structure of the same name? A bug could erroneously lead to the creation of an exception (data structure), but that wouldn't be an exception (concept), that'd just be a bug.
There is no situation where an exception (concept) is also a bug, unless you take "bug" to mean something akin to general "programmer error", where exceptions are subset thereof. But, as before, we know that's not how "bug" is used as there is no other terminology for human expectation violations to fill the void. No need as that is already what "bug" refers to.
> a large, large number of exceptions occur as a bug.
That doesn't really make any sense. Maybe if, again, you have confused exceptions (concept) with exceptions (data structure), then it could start to mean something, but even then "large, large" is one hell of a claim. Let's be real: The vast majority of exceptions (data structure) are created by programmers who mistakenly believe that "exception" is another word for "error". While not exactly a good idea, that use falls under neither exceptions (concept) nor bugs. In second place, exceptions (data structure) are created due to exceptions (concept). I'm sure exceptions (data structure) being created due to bugs has happened before, but I've never seen it.
You’re arguing as though reality cares about your word boundaries. Saying “exceptions are not bugs” because one is a concept and the other is a data structure is like saying a flat tire isn’t a car problem because the air is outside the vehicle. It’s technically true in the same way “rain is just water, not weather” is true. Nobody outside the argument needs a whiteboard to see what’s gone wrong.
An exception is what happens when a bug hits the runtime. The program trips, falls, and throws a message about it. Calling that “not a bug” is like saying your toaster catching fire isn’t a malfunction because combustion is a physical process, not an electrical one. Or saying a sinking ship isn’t in trouble because “leak” is a noun and “disaster” is a category.
In practice, exceptions are how bugs announce their presence. You can keep redrawing the philosophical fence around which one counts as which, but it’s the same as a child insisting the spilled milk isn’t a mess because it’s just “liquid on the table.” Everyone else has already grabbed the paper towels.
> An exception is what happens when a bug hits the runtime.
Remember, "exception" is short for "exceptional event". "An exceptional event is what happens when a bug hits the runtime." means... what, exactly? And let's not forget that you said that not all exceptions are bugs so it seems we also have "An exceptional event is what happens when a bug does not hit the runtime." What is that supposed to mean?
Returning us from la-la land, an exceptional event, or exception, refers to encountering a fundamental flaw in the instruction. Something like divide by zero or accessing an out of bounds index in an array. Things that you would never have reason to carry out and that a more advanced compiler could have reasonably alerted you to before execution.
Bugs are different. They can only be determined under the watchful eye of a human deciding that something isn't behaving correctly. Perhaps your program has a button that is expected to turn on an LED, but instead it launches a nuclear missile. While that is reasonably considered programmer error, that's not a fundamental flaw — that's just not properly capturing the human intent. In another universe that button launching nuclear missiles very well could be the intended behaviour. There is no universe where accessing an out of bounds index is intended.
> exceptions are how bugs announce their presence
Bugs aren't announceable, fundamentally. They can be noticed by a human who understands the intent of the software, but it is impossible for the machine to determine what is and what isn't a bug. The machine can determine what is an exception, though, and that's why exceptions often get special data structures and control flows in programming languages while bugs don't.
You’re still mistaking your own definitions for truth. You’re describing programming as if words create reality, not the other way around. Your claim that bugs “can only be determined by humans” and “machines cannot announce them” is like saying a car cannot have a problem until a mechanic gives it a name. The smoke pouring from the hood does not wait for linguistic permission to exist.
Exceptions are the runtime form of program faults. The fact that we can construct them synthetically or that some are anticipated does not erase their relationship to bugs. You seem to believe that because we can imagine a world where launching a missile is “intended,” it stops being a bug. By that logic, nothing in computing can ever be wrong as long as someone somewhere hypothetically wanted it that way. That isn’t philosophy. It’s a child’s loophole.
Your “fundamental flaw in the instruction” definition collapses immediately under reality. A division by zero is only “fundamental” because of human intent: we chose to define arithmetic that way. Under your logic, if we wrote a compiler that quietly handled it as infinity, exceptions would vanish from the universe. That should tell you that your ontology is not describing truth but just the current convention of language design.
The machine can’t “determine” bugs? Of course it can’t. The machine can’t determine anything. It executes. Yet you just admitted exceptions exist because “the machine determines them.” You’ve built a castle out of circular definitions and are calling it a worldview.
In practice, exceptions are one of the main observable ways that bugs manifest. The rest of us live in the world where programs crash, not in the world where we rename crashes until they sound academic.
> [...] is like saying a car cannot have a problem until a mechanic gives it a name. The smoke pouring from the hood does not wait for linguistic permission to exist.
That doesn't make any sense. But there may be something salvageable in your analogy. Consider the difference between manufacturing defects and wear and tear. You can understand how each, while both able to lead to failure, are different categories of problems?
Good. Now, analogies can only go so far, but we can extrapolate some similarity from that in software. Meaning that both bugs and exceptions are problems, but they are not of the same category. They originate from different conditions. Exceptions are fundamental flaws in the code, while bugs are deviation from human intent.
> Yet you just admitted exceptions exist because “the machine determines them.”
That's right. The machine can determine when an exception occurs. For example, divide by zero. There is no way the machine can carry on once this happens. The machine is capable of detecting such a condition. Which is very different from dividing by one when you actually intended to divide by two. This is what differentiates exceptions from bugs, and why we have more than one word to describe these things.
> exceptions are one of the main observable ways that bugs manifest.
Maybe you're still confusing the concept with the data structure? Absolutely, a bug can lead to creating an exception (data structure). Consider:
string sky = "blue";
if (sky == "blue") {  // was intended to be: sky != "blue"
    throw "the sky was expected to be blue";  // produces an exception data structure (message, call stack, etc.)
}
But that wouldn't be an exception (concept), that'd just be a bug. The fault is in the inverted conditional, not a fundamental execution flaw.
The presence of an exception (data structure) does not imply that there is an exception (concept). The overlapping use of words is a bit unfortunate, perhaps — albeit consistent with error (concept) and error (data structure), but the difference between a data structure and a concept should not confuse anyone.
Exceptions (concept), on the other hand, look more like:
object.methodThatDoesNotExist();
The code is fundamentally flawed. The machine is, in this case, quite able to determine that you screwed up. If you have a sufficiently advanced compiler, you'd be warned at compile time that methodThatDoesNotExist cannot be called. But if that condition evades the compiler then it would lead to an exceptional event at run time instead.
First, your categories are not universal. They depend on the language and the runtime.
1. Divide by zero
In IEEE floating point you can define it to yield Infinity or NaN and keep going. In some languages it raises an exception. In C with integers you get undefined behavior with no exception at all. If your claim were about a fundamental flaw of code itself, the result would not change across environments. It does. So your category is a policy choice, not a law of nature.
2. Out of bounds
In Java you get an IndexOutOfBoundsException. In C you can trample memory and never raise anything. Same code pattern, different outcome. Again this shows your supposed bright line is just a design decision.
3. methodThatDoesNotExist
In a statically typed language this is a compile time error and never throws at runtime. In a dynamic language it can throw NoSuchMethod at runtime. Same text, different category, based on tooling. That is not a fundamental metaphysics of exceptions. It is the compiler and runtime drawing the line.
Conclusion from 1 through 3
Your definition of exception as a fundamental flaw collapses the moment we cross a language boundary. What you are calling fundamental is actually configurable convention.
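To make points 1 through 3 concrete, here is a quick Python sketch (Python chosen only for brevity; the missing-method name is made up). Note that Python raises ZeroDivisionError even for IEEE floats, where the hardware itself would happily produce infinity, which is exactly the kind of per-language policy being described:

```python
# 1. Divide by zero: Python's policy is to raise, even for floats,
#    although IEEE-754 arithmetic would yield infinity and continue.
try:
    1.0 / 0.0
except ZeroDivisionError as e:
    float_div = type(e).__name__

# 2. Out of bounds: Python raises IndexError; C would silently read
#    whatever bytes happen to sit past the end of the array.
try:
    [1, 2, 3][10]
except IndexError as e:
    oob = type(e).__name__

# 3. Missing method: resolved at runtime here; a statically typed
#    language would have rejected this at compile time instead.
try:
    "text".method_that_does_not_exist()
except AttributeError as e:
    missing = type(e).__name__

print(float_div, oob, missing)  # ZeroDivisionError IndexError AttributeError
```

Same three conditions, and in each case whether (and when) an exception appears is decided by the language, not by the condition itself.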
⸻
Second, your machine versus human distinction is self defeating.
You say the machine can determine exceptions but only humans can determine bugs. The machine does not determine anything. It executes rules we wrote. If a rule maps a state to raise an exception, the machine raises one. Whether that state is a bug is defined by the spec. If the spec says division by zero must never occur in production, any runtime that hits it is a bug. If the spec says handle it as Infinity, then hitting it is not a bug. The label follows intent and contract, not the instruction pointer.
Obvious example
A payment service has a rule that user input must be validated before division. A request slips through with divisor zero and the service throws. That is a bug even if the language designer felt very proud of raising an ArithmeticException. Flip the rule. If the service is specified to accept zero and return a sentinel value, then raising an exception is itself the bug.
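The two specs in the payment example can be sketched side by side; function names and the sentinel are hypothetical:

```python
def divide_strict(amount, divisor):
    # Spec A: divisor zero must never reach this point; if it does,
    # the ZeroDivisionError raised here is evidence of an upstream bug.
    return amount / divisor

def divide_lenient(amount, divisor):
    # Spec B: zero is a legal input; return a sentinel instead.
    if divisor == 0:
        return None  # under Spec B, *raising* here would itself be the bug
    return amount / divisor

print(divide_lenient(10, 0))  # None: within spec, no bug
try:
    divide_strict(10, 0)
except ZeroDivisionError:
    print("Spec A violated upstream")  # the exception marks the bug
```

Identical runtime state, opposite classifications, with only the spec flipped between them.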
⸻
Third, the data structure versus concept move does not rescue the argument.
Yes, we all know you can throw an object on purpose. Yes, we all know many libraries create rich exception objects. None of that changes the core point. In a nontrivial system, unhandled exceptions or exceptions raised in states that the spec forbids are bugs by definition. The exception is the transport. The bug is the violation that made the transport fire.
Your own toy example proves it
You show an inverted conditional that throws. You say that is not an exception in the conceptual sense. But the system would still page the on call, still fail the request, still violate the acceptance test. Every operator and every customer would call that a bug. You are trying to win by relabeling the alarm as not part of the fire drill.
⸻
Fourth, here are obvious scenarios that make the logic clear to anyone.
Elevator
Spec: doors must never open between floors. Code hits that state and trips a safety exception. If that ever happens in production, it is a bug. If the safety system decides to log and keep moving instead, and the spec requires a stop, then not throwing is the bug. The exception path is just the messenger.
Bank transfers
Spec: never double charge. Off by one posts the same transfer twice. The code throws on the second insert due to a unique constraint. If it throws, the incident is a bug. If it does not throw because the database had no constraint and you silently post two charges, that outcome is a bug. In both cases the presence or absence of an exception is not the category. The spec is the category. The bug is the deviation.
Medical device
Spec: sensor readings out of range must be clamped and logged, not crash the loop. If the runtime raises an exception and the loop dies, that exception is the manifestation of a bug. If the runtime never raises and you keep using garbage without logging, that silent path is the bug. Same state space. Different handling. The spec decides which path is the failure.
Toaster
Spec: pop after two minutes. The timer arithmetic overflows and you get a runtime exception in the control loop. That exception is how the bug announced itself. In a different implementation the overflow wraps and the toaster never pops. No exception at all. Still a bug.
These are obvious because everyone can see that what matters is the contract. The exception is only a vehicle for discovery.
⸻
Fifth, the correct taxonomy already exists and it does not match your story.
Industry standards separate three things. A fault is the cause in the code or design. An error is the incorrect internal state. A failure is the externally visible deviation. Exceptions are one mechanism for surfacing an error or failure. When an exception causes a failure relative to the spec, we have a bug. When an exception is expected and fully handled within the spec, we do not. This mapping is stable across languages in a way your concept versus data structure split is not.
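The fault/error/failure chain can be sketched with a hypothetical discount function (names illustrative):

```python
def discount(price, percent):
    # FAULT: the comparison should be `percent > 100`, not `>= 100`.
    if percent >= 100:
        raise ValueError("percent out of range")
    return price * (1 - percent / 100)

# ERROR: for the legal input percent=100, the program enters an
# incorrect internal state (the guard wrongly treats it as invalid).
# FAILURE: the externally visible deviation, surfaced via an exception.
try:
    discount(50, 100)  # spec allows a 100% discount, so this should be 0
except ValueError:
    print("failure: legal input rejected")  # the exception is the messenger
```

One fault in the code, one erroneous state, one visible failure; the exception is merely how the failure surfaced.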
⸻
Final point
Your thesis needs both of these to be true at once.
A) Exceptions mark fundamental flaws independent of intent.
B) Bugs are only deviations from intent.
But we have shown counterexamples where intent flips the classification and where language choice flips the behavior. That means A and B cannot both stand. Once A falls, your whole distinction reduces to wordplay. Once B stands, the only consistent rule is this:
If an exception leads the program to violate its requirements, it is evidence of a bug. If it does not, it is routine control flow. In practice many exceptions are how bugs announce themselves. That is why production teams triage exception rates and not treatises on metaphysics.
> What you are calling fundamental is actually configurable convention.
Not within the topic's layer of abstraction. I can see you put a lot of thought into this, and if you were writing in a vacuum it might even be interesting, but discussion relies on context...
> If that ever happens in production, it is a bug.
Right. A bug, not an exception. Overloading the use of exception data structures and associated control flows does not imply that there is an exception in concept, just like using an error data structure to represent an email address[1] does not imply that you are dealing with an error in concept. It's just an email address.
> If it throws, the incident is a bug.
Right. A bug, not an exception. Again, just because you overloaded the use of a feature intended for dealing with exceptions for other purposes does not mean that you have an exception in concept. Just as an email address does not become a conceptual error just because you put it in an error data structure[1].
> If the runtime raises an exception and the loop dies, that exception is the manifestation of a bug.
Right. A bug, not an exception. I sense a recurring theme here. It is clear now that you've undeniably confused exceptions (data structure) with exceptions (concept). You obviously missed a lot of context so this may come as a surprise, but we were talking about the concept of programmer error. Data structures used in an implementation find no relevance here.
> In practice many exceptions are how bugs announce themselves.
As bugs and exceptions are different categories within the broader concept of programmer error, you could equally say "In practice many bugs are how bugs announce themselves". How, exactly, am I to grok that? I honestly have no idea what to make of that statement.
>Not within the topic’s layer of abstraction. I can see you put a lot of thought into this, and if you were writing in a vacuum it might even be interesting, but discussion relies on context…
You keep invoking “context” as a shield for avoiding substance. Context doesn’t change semantics. Arithmetic faults, null dereferences, and contract violations are not “configurable conventions” no matter how much pseudo-philosophical varnish you apply. Calling fundamental runtime behavior a “configurable convention” is like claiming gravity is optional if you talk about it at the right “layer of abstraction.” It sounds deep until anyone with an actual engineering background reads it.
⸻
>Right. A bug, not an exception. Overloading the use of exception data structures and associated control flows does not imply that there is an exception in concept, just like using an error data structure to represent an email address[1] does not imply that you are dealing with an error in concept. It’s just an email address.
Your “email analogy” is nonsense. The fact that something can be misused doesn’t invalidate its conceptual role. By that logic, if someone writes int banana = "hello", integers stop existing. Exceptions are not defined by their container type. They are defined by the runtime condition they represent. The fact that you have to keep insisting otherwise shows you have no grasp of the operational meaning of exceptions beyond the syntax of throw.
⸻
>Right. A bug, not an exception. Again, just because you overloaded the use of a feature intended for dealing with exceptions for other purposes does not mean that you have an exception in concept. Just as an email address does not become a conceptual error just because you put it in an error data structure[1].
Repeating the same broken analogy doesn’t make it coherent. You keep hammering at this contrived misuse like it proves anything. It doesn’t. It’s a straw man. Nobody is arguing that misusing the mechanism redefines the concept. You’ve constructed a toy example where you intentionally misuse a type and then triumphantly point to your own confusion as evidence. This is the intellectual equivalent of setting your keyboard on fire and declaring that typing is impossible.
⸻
>Right. A bug, not an exception. I sense a recurring theme here. It is clear now that you’ve undeniably confused exceptions (data structure) with exceptions (concept). You obviously missed a lot of context so this may come as a surprise, but we were talking about the concept of programmer error. Data structures used in an implementation find no relevance here.
You’re projecting your own confusion. You can’t even decide which layer you’re talking about. One moment you say exceptions are “data structures,” the next you say the data structure is irrelevant. You wave your hands at “concepts” but those concepts are precisely instantiated through those data structures. You don’t get to discard the implementation layer when it undermines your argument. Pretending the runtime’s behavior is irrelevant to the “concept of programmer error” is a convenient way to avoid admitting that bugs and exceptions are related manifestations of the same fault system. You’re not clarifying categories; you’re just renaming them until they stop overlapping.
⸻
>As bugs and exceptions are different categories within the broader concept of programmer error, you could equally say “In practice many bugs are how bugs announce themselves”. How, exactly, am I to grok that? I honestly have no idea what to make of that statement.
Of course you don’t. The problem isn’t the statement; it’s your refusal to recognize that categories can overlap without collapsing. Saying exceptions often announce bugs is perfectly coherent: bugs are latent causes, exceptions are how those causes surface at runtime. Your confusion stems from trying to turn mutually informative categories into mutually exclusive ones. It’s the same error a freshman makes when they insist that “rain isn’t weather because weather causes rain.”
⸻
[1] The Error("foo@example.com") example doesn’t reveal deep insight; it reveals that you can misuse syntax. That’s not philosophy, it’s just bad code.
> They are defined by the runtime condition they represent.
Exactly. Exceptions represent conditions that violate the rules of the computing environment, while bugs represent conditions that violate "business" rules.
> Saying exceptions often announce bugs is perfectly coherent
Not really. If it were coherent we wouldn't be here. But perhaps what you are grasping to say is that it is possible that a bug could "corrupt" the computing environment, which could then also lead to an exception later on?
Allowing user input of "0" where the business rules say that "0" input is invalid would be considered a bug, and later when that 0 input is used as a divisor would see an exception (in an environment where divide by 0 is not permitted, for those who struggle). The exception in this case would likely reveal that there is also a bug in the user input.
But that does not imply that exceptions encompass bugs. Those are independent events, both wanting their own independent resolutions.
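The causal chain in the "0" example can be sketched as follows (the validation rule and function names are hypothetical):

```python
def validate(user_input):
    # BUG: business rules say "0" is invalid input, but this check
    # only verifies the input is numeric and so lets "0" through.
    return user_input.isdigit()

def process(user_input):
    if not validate(user_input):
        raise ValueError("invalid input")
    return 100 / int(user_input)  # EXCEPTION surfaces when input is "0"

try:
    process("0")
except ZeroDivisionError:
    # Two independent resolutions: tighten validate(), and decide what
    # this code path should do if a zero ever reaches it anyway.
    print("exception revealed the upstream validation bug")
```

The exception and the bug each get their own fix, which is the "independent resolutions" point above.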
Neither exceptions, the concept, nor exceptions, the data structure, have anything to do with control flow. You could be thinking of exception handlers, which refers to a control flow mechanism, but if you were you'd be going completely off the rails.
> Do you doubt you can use exceptions for control flow?
The same way you can use bugs for control flow, I suppose, given that bugs and exceptions are different categories within the same general idea of programmer mistakes. With bugs, of course, being mistakes around "business" rules, and exceptions being mistakes made around computing environment rules.
> Of course you need to combine with exception handlers.
How do you combine a programmer making a mistake with exception handlers? Exception handlers only exist inside programming languages. Are you under the impression that programmers also live inside programming languages? That's... interesting.
> With bugs, of course, being mistakes around "business" rules, and exceptions being mistakes made around computing environment rules.
For me, bugs are all mistakes which manifest themselves in software misbehaving.
Maybe it surfaces by a program crash of an uncaught exception or an error message presented to the user, a program termination, a wrong result, ...
> How do you combine a programmer making a mistake with exception handlers?
I sometimes write a bug leading to an exception, catch the exception, then try the operation again, or show an error to the user.
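That catch-and-retry pattern looks roughly like this; the Flaky operation is a stand-in for whatever transiently fails:

```python
class Flaky:
    """Stand-in operation that fails a fixed number of times, then succeeds."""
    def __init__(self, failures):
        self.failures = failures
    def __call__(self):
        if self.failures > 0:
            self.failures -= 1
            raise ConnectionError("transient failure")
        return "ok"

def with_retries(op, attempts=3):
    for _ in range(attempts):
        try:
            return op()
        except ConnectionError:
            pass  # swallow and retry
    return "error: please try again later"  # surfaced to the user

print(with_retries(Flaky(1)))  # ok
print(with_retries(Flaky(5)))  # error: please try again later
```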
>>They are defined by the runtime condition they represent.
>Exactly. Exceptions represent conditions that violate the rules of the computing environment, while bugs represent conditions that violate “business” rules.
You just moved the goalposts. In real systems the rules are one stack of contracts: language rules, library rules, service rules, business rules. A violation at any layer is a bug relative to that layer’s contract. Many exceptions are thrown for business rules as a matter of design.
Examples:
• ArgumentNullException when a domain service requires a nonempty customer id
• IllegalStateException when a workflow step is called out of order
• ValidationException on a failed business invariant
These are not violations of the CPU. They are domain failures surfaced as exceptions by choice. Your split collapses the second we leave toy examples.
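A minimal sketch of a domain failure surfaced as an exception by choice; ValidationError and create_order are hypothetical names, not from any particular framework:

```python
class ValidationError(Exception):
    """Raised when a business invariant is violated."""

def create_order(customer_id, quantity):
    # Domain contract, enforced via exceptions by deliberate design.
    if not customer_id:
        raise ValidationError("customer id must be nonempty")
    if quantity <= 0:
        raise ValidationError("quantity must be positive")
    return {"customer": customer_id, "quantity": quantity}

try:
    create_order("", 3)
except ValidationError as e:
    print("domain failure surfaced as an exception:", e)
```

Nothing here violates a rule of the computing environment; the exception exists purely to signal a business-rule violation.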
⸻
>>Saying exceptions often announce bugs is perfectly coherent
>Not really. If it were coherent we wouldn’t be here.
This is rhetoric, not a rebuttal. The production fact is simple. When an unhandled exception takes down a request and that outcome violates the service contract, that incident is triaged as a bug. If it is handled and stays within the contract, it is routine control flow. Teams do this every day without metaphysics.
⸻
>But perhaps what you are grasping to say is that it is possible that a bug could “corrupt” the computing environment, which could then also lead to an exception later on?
No need to reach for corruption. Ordinary causal chains are enough.
Bad input passes validation when the spec says it must not. That is a bug. Later division by that input throws. The exception is the observable manifestation of the earlier bug. One cause. One effect. No magic.
If your design instead clamps zero or returns a sentinel, then the exception would be the bug. The label follows the spec, not your taxonomy.
⸻
>Allowing user input of “0” where the business rules say that “0” input is invalid would be considered a bug, and later when that 0 input is used as a divisor would see an exception … The exception in this case would likely reveal that there is also a bug in the user input.
You have just restated that exceptions announce bugs. Your own example concedes the point you claimed was incoherent. The exception is how the system made the hidden violation visible. That is exactly what observability is for.
⸻
>But that does not imply that exceptions encompass bugs. Those are independent events, both wanting their own independent resolutions.
Independent events would mean no causal relation. Yet your own scenario shows a direct chain from the validation bug to the later exception. They are linked by cause and effect and are handled together in a single incident. Ops does not file two unrelated tickets and pretend the crash and the root cause are strangers.
⸻
The correct model is simple and general:
• A spec defines allowed states at every layer.
• A bug is entry into a disallowed state relative to that spec.
• An exception is a mechanism that signals some disallowed states.
From this it follows:
• Many bugs surface as exceptions.
• Some exceptions are not bugs because the spec expects and handles them.
• Some bugs do not raise exceptions because the language or code does not signal them.
Your attempt to split computing environment rules from business rules ignores that both are contracts, and violations of either are bugs. Your own examples demonstrate the causal link you say does not exist.
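The three consequences above can each be shown in a few lines; the cache and the off-by-one function are hypothetical:

```python
cache = {}

def compute(key):
    return len(key)  # stand-in for real work

def lookup(key):
    # An exception that is NOT a bug: the spec expects and handles a
    # KeyError here as routine control flow (a cache miss).
    try:
        return cache[key]
    except KeyError:
        value = compute(key)
        cache[key] = value
        return value

def buggy_total(prices):
    # A bug with NO exception: off-by-one silently drops the last item.
    return sum(prices[:-1])

print(lookup("hello"))          # 5: the KeyError never left the contract
print(buggy_total([1, 2, 3]))   # 3, but the spec expects 6; nothing raised
```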
Nope. This carries the exact same semantic intent as my original comment. I've had to severely dumb down the phrasing over the course of discussion as it is clear you don't have a firm grasp on computing, and may continue to dumb it down even more if you continue to display that you don't understand, but there is nowhere for the goalposts to go. They were firmly set long before my time and cannot be moved.
> and violations of either are bugs.
They are both programmer error. "Bug" and "exception" are different labels we use to offer more refinement in exactly what kind of error was made. If I erroneously write code that operates on a null pointer, contrary to the rules of the computing environment, I created an exception. If I erroneously wrote code to paint a widget blue when the business people intended it to be red, I created a bug. While you may not understand the value in differentiating — to the user who only sees that the program isn't functioning correctly it is all the same, right? — programmers who work on projects in industry do, hence why they created different words for different conditions.
> the language or code does not signal them.
It seems you continue to confuse the exception data structure with exception as a categorical type of programmer error. The discussion is, was, and will only ever be about the latter. There was no mention of programming languages at the onset of our discussion and turning us towards that is off-topic. You seem to be here in good faith, so let's keep it that way by staying true to the original topic.
> If I erroneously try to operate on a null pointer, I created an exception.
This is only true if some library/framework you use creates an exception for you.
Why do you operate on a null pointer in the first place? Well, you didn't because you painted something in the wrong color, but because you passed a null pointer to a piece of code, which should not have received a null pointer.
> This is only true if some library/framework you use creates an exception for you.
No. That's like saying you can't have errors unless your language/library/framework has an error datatype. Quite possibly the stupidest thing I've ever heard. A language doesn't need exception data structures or exception handlers for a programmer to violate a rule of the computing environment.
So the language/library/framework does not create an exception when operating on the null pointer, but instead it does not do anything (when it should change the color of a widget from blue to red).
Now you have a bug by operating on a null pointer, which supposedly is an exception, while exceptions cannot be bugs?
I'm currently working on something that requires a GPU with CUDA at runtime. If something went wrong while initializing the GPU, then that'd be an exception/bug/"programming error" most likely. If the user somehow ended up sending data to the GPU that isn't compatible or couldn't be converted, then that'd be a user error; they could probably fix that themselves.
But then for example if there is no GPU at all on the system, it's neither a "programming error" nor something the user could really do something about, but it is exceptional, and requires us to stop and not continue.
> If something went wrong while initializing the GPU, then that'd be an exceptuion/bug/"programming error" most likely.
That depends on whether it is due to the programmer making a mistake in the code or an environmental condition (e.g. failing hardware). The former is exceptional if detected, a bug if not detected (i.e. the program errantly carries on as if nothing happened, much to the dismay of the user), while the latter is a regular error.
> But then for example if there is no GPU at all on the system, it's neither a "programming error" nor something the user could really do something about, but it is exceptional
Not having a GPU isn't exceptional in any sense of the word. It is very much an expected condition. Normally the programmer will probe the system to detect if there is one and if there isn't, fall back to some other option (e.g. CPU processing or, at very least, gracefully exiting with feedback on how to resolve).
The programmer failing to do that is exceptional, though. Exceptions are "runtime compiler errors". A theoretical compiler could detect that you forgot to check for the presence of a GPU before your program is ever run.
The grey area is malfunctioning CPU/memory. That isn't programmer error, but we also don't have a good way to think about it as a regular error either. This is what "bug" was originally intended to refer to, but that usage moved on long ago and there is seemingly no replacement.
> Exceptions should be reserved for developer errors like edge cases that haven’t been considered or invalid bounds which mistakenly went unchecked.
Isn't this what assertions are for? How would a user even know what exceptions they are supposed to catch?
IMO exceptions are for errors that the caller can handle in a meaningful way. Random programmer errors are not that.
In practice, exceptions are not very different from Result types, they are just a different style of programming. For example, C++ got std::expected because many people either can't or don't want to use exceptions; the use case, however, is pretty much the same.
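The claimed equivalence between the two styles can be sketched with the same fallible parse written both ways (the tuple is a minimal stand-in for a proper Result type like std::expected):

```python
def parse_exc(text):
    # Exception style: failure propagates as a ValueError.
    return int(text)

def parse_result(text):
    # Result style: failure is an ordinary return value, (ok, payload).
    try:
        return (True, int(text))
    except ValueError as e:
        return (False, str(e))

# Same use case, different plumbing:
try:
    parse_exc("abc")
except ValueError:
    print("exception style: handled at the call site")

ok, val = parse_result("abc")
if not ok:
    print("result style: handled by inspecting the return value")
```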
I’ve often seen assertions throw exceptions when violated. Users don’t catch exceptions, developers do. Users interact with the software through things like failure pop ups. You’d need to check that there’s a failure to show one, hence returning a Result to represent the success/fail state.
You’re getting downvoted which is unfortunate because I think you make a worthwhile point.
Emotionally I disagree with you. It feels like a bully is getting what a bully deserves. Logically, I think you are right though. Crowds just aren’t equipped to handle these situations. There are cases where the wisdom of the crowd is correct, but there are many more where it multiplies harms.
The underlying problem is that it never feels like justice is being served. Another comment mentions that there should be harsher punishment for false DMCAs. I don’t think the “wisdom of the crowd” approach is the best way to right those wrongs, but I lament that modern justice has not been up to the task.
The problem is they are both drugs and productivity devices. I have two iPads and I love them. I use a Mini exclusively for book reading and logging workouts. I use a Pro for video calls and occasionally YouTube videos.
The addiction didn’t reach me through them… on the other hand, here I am posting HN comments instead of doing something productive, so it did reach me through my phone.
Your comments on here show you get into the same pointless arguments and make the same dumb quips everyone on Reddit does. Not sure what makes you think you're doing anything different here
The funny bit about it is that the story still holds up to some extent if it’s a lie since the whole thing is about a liar making folks think he’s more credentialed than he really is.
As a society we do get to answer these questions.