
I use a parser to check for a non-empty list return. It's called a unit test.

All joking aside, what is the practical difference or benefit here compared to testing?




Very simple.

Say you have a user that inputs a list. And then you have another function somewhere in the code that takes a list, sorts it, and uses the max value to send a notification email.

If it's done your way, then you'll have to add an assertion in that function, such as

    if(input_list.length < 1) explode("Bug! Bug! You can't call me with an empty list")
And you will have to write multiple unit tests, _each time_ you call this function from somewhere, to make sure it's never called with an empty list.

If you use a compiler... all of this goes away. The function just expects you to give it a non-empty list, and if you don't, the code simply doesn't compile.

Less code to maintain = win.

The price you pay for this is learning, understanding, and using a more advanced type system. That's not a small price to pay. Pick your poison.
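
A minimal Haskell sketch of what I mean (the names parseScores and notifyMax, and the Either-based boundary, are made up for illustration):

  import Data.List.NonEmpty (NonEmpty(..))

  -- Boundary: parse the raw user input exactly once.
  parseScores :: [Int] -> Either String (NonEmpty Int)
  parseScores (x:xs) = Right (x :| xs)
  parseScores []     = Left "input list cannot be empty"

  -- Core logic: the NonEmpty type guarantees a max exists, so no
  -- runtime emptiness check (and no unit test for it) is needed here.
  notifyMax :: NonEmpty Int -> String
  notifyMax xs = "email: max value is " ++ show (maximum xs)

  main :: IO ()
  main = case parseScores [3, 1, 4] of
    Left err -> putStrLn ("rejected at the boundary: " ++ err)
    Right xs -> putStrLn (notifyMax xs)

Calling notifyMax with a plain [Int] simply doesn't compile, so no call site can forget the check.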


Your example isn’t even what the articles attempt to suggest, and indeed, it’s not even possible, since there’s no way to ensure that the program passes non-empty lists at compile time.

For example, the grandparent article compares these:

  validateNonEmpty :: [a] -> IO ()
  validateNonEmpty (_:_) = pure ()
  validateNonEmpty [] = throwIO $ userError "list cannot be empty"
 
  parseNonEmpty :: [a] -> IO (NonEmpty a)
  parseNonEmpty (x:xs) = pure (x:|xs)
  parseNonEmpty [] = throwIO $ userError "list cannot be empty"
He then states:

  “Both of these functions check the same thing, but parseNonEmpty gives the caller access to the information it learned, while validateNonEmpty just throws it away.”
But the caller still needs to deal with the empty case. It’s just been moved higher up in the code, which makes checking easier. He refers to this as “parsing”:

  “parseNonEmpty is a perfectly cromulent parser: it parses lists into non-empty lists, signaling failure by terminating the program with an error message”
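For concreteness, here's roughly how a caller consumes that result (a sketch assuming the quoted parseNonEmpty and its imports are in scope; Data.List.NonEmpty is abbreviated NE):

  import qualified Data.List.NonEmpty as NE

  main :: IO ()
  main = do
    xs <- parseNonEmpty [1, 2, 3 :: Int]  -- the empty case is handled here, once
    print (NE.head xs)                    -- total: NE.head cannot fail
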
The parent article, however, seems to want to take this further; but again, actually making use of this as suggested, by parsing, would still require unit tests.

A compiler can’t solve the problem of preventing the class of bug you’re talking about, which is why there always needs to be some code that handles cases like empty files or lists, etc.

To understand why, check out the halting problem:

https://cs.stackexchange.com/questions/72014/what-cannot-be-...

https://en.m.wikipedia.org/wiki/Halting_problem


I think your misunderstanding lies here:

> But the caller still needs to deal with the empty case.

Yes, the caller has to deal with it. In my example, the caller is the function that takes the user input. But every subsequent part of the application (in my example, the sorting/email function) does _not_ have to deal with it anymore.

So essentially: you only have to deal with errors at the boundaries of your application, and after that the rest of your code (usually the vast majority) just works, without checking for errors again and again.

So to answer your summary:

> A compiler can’t solve the problem of preventing the class of bug you’re talking about, hence why there always needs to be some code that handles cases like empty files or lists

Yes. The difference is: do you want to handle these cases over and over in each function of your code base, or just once at the boundary?
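
A small sketch of that difference (maxValueChecked and maxValue are illustrative names, not from the article):

  import Data.List (sort)
  import Data.List.NonEmpty (NonEmpty)

  -- Handle it in every function: each caller needs the check,
  -- and each call site needs a unit test for the empty case.
  maxValueChecked :: [Int] -> Int
  maxValueChecked [] = error "bug: empty list"
  maxValueChecked xs = last (sort xs)

  -- Handle it once at the boundary: the type rules out the empty
  -- case, so there is nothing left here to check or to unit-test.
  maxValue :: NonEmpty Int -> Int
  maxValue = maximum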


OK, well, I agree with that assessment. I just don’t think the article has made the case that the methods it talks about simplify handling these things at the boundary in some special way.

There are numerous ways, even in dynamically typed languages to use abstraction to raise the level of such errors and handle them in a single place.

For instance, let’s take Ruby. Rails has a “.blank?” method that wraps up handling empty strings, lists, nils, etc. So if I use that abstraction, I’ve lifted having to check all those issues out of every function; instead, I now need to ensure the “blank?” method does what it’s supposed to.

So there’s definitely value in abstraction, and in moving certain classes of errors up the layers of the application. And yes, I can see how some fancy uses of type systems (in this case, Haskell’s) can make this work. The grandparent article seems to leave it there.

This parent article, though, seems to muddle things with the idea that parsing can do even better, and that’s where I think the idea is half-baked.


Just to put a nail in it, a previous commenter above spells it out:

  “And some of those simpler solutions do come from the dynamic programming world. For example, ‘You can then go further and remove the more primitive operations, only allowing access to the higher level abstractions,’ is another excellent way to make illegal states unrepresentable. And you don't need shiny new programming languages or rocket powered type systems to do it. I was really rather disappointed that that section of the article gave a nod to Dijkstra, but completely failed to mention Alan Kay's The Early History of Smalltalk[1], which was, to an approximation, several thousand words' worth of grinding away on that point.”
So I feel like maybe we all were confused by what the article was suggesting, if indeed it suggested anything concrete at all.

1. http://worrydream.com/EarlyHistoryOfSmalltalk/


> The grandparent article seems to leave it there.

I think the article unfortunately just assumes that you are already familiar with using an advanced statically typed language and will draw the conclusions about the advantages automatically.


I am actually very familiar with advanced statically typed languages; however, the article isn’t trying to make the case that such languages have any advantages (another debate entirely). It’s instead making the case that adding “parsing” to such languages helps deal with a certain set of bugs, but it still seemed pretty non-specific and, frankly, doesn’t seem to cover any new ground in any interesting way.


No one genuinely has an answer to this?

I get that we’re talking about Haskell here, which is compiled. But why wouldn’t we talk about running the code through unit tests?

For example, the article talks about a given function that requires a number between one and five... and then goes on to specify specific grammar rules.

How is that better than simply typing that up in a test?

And by simply running test suites, you eliminate issues with bugs in your parser system.

What am I missing?


When you have a non-emptiness property on a type, the compiler can guarantee that all current and future occurrences of values of that type in the system will be consistent with the specified non-empty property. In other words, the emphasis moves from the limited set of units you are able (and willing) to test to invariants that hold for all occurrences of the type at all times, regardless of how many units end up using them.
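
Concretely, the guarantee comes by construction, not by testing values: in Haskell, for example, the only way to get a NonEmpty a out of a plain list is through a function like nonEmpty, whose Maybe result forces every call site to handle the empty case before a NonEmpty can exist at all. A sketch (describeMax is an illustrative name):

  import Data.List.NonEmpty (nonEmpty)

  describeMax :: [Int] -> String
  describeMax raw = case nonEmpty raw of
    Nothing -> "no values supplied"            -- the empty case, handled once
    Just xs -> "max is " ++ show (maximum xs)  -- safe: xs is proved non-empty

So nothing enumerates or tests values; the type checker rejects, at compile time, any program where a possibly-empty list reaches code that demands a NonEmpty.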


How does one guarantee that for all values? If we’re talking about a parser, wouldn’t it need to test all the values? And if you’re not testing all the values, aside from simple cases, it’s not something that could just be computed (NP-hardness, the halting problem, etc.)... so wouldn’t the parser still just need to test some preset number of values?

I guess maybe we’re talking about something like a pre-baked set of unit tests for certain circumstances?





