The only tough part is changing your way of thinking. Most people making the switch find it tough because they’re trying to apply imperative techniques in a functional setting and struggle when they can’t find an if-else statement or a for loop. But if you were never taught to think in terms of conditional branching or looping over indexes, you’ll save a lot of time.
Exactly. I've long held the sentiment that pure functional programming is easier than imperative programming. The only hard part is switching from an imperative mindset.
By what measure? Haskell can be a huge productivity multiplier. The standard library is built on many powerful, unifying, and consistent mathematical abstractions. For example, there is almost no boilerplate to write for traversal, mapping, error handling, etc. The average Pythonista simply has no idea what they are missing. But Haskell doesn't have anywhere near the third-party ecosystem of Python, so it is less productive by some measures.
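As one concrete illustration of that kind of unifying abstraction, here is a sketch of "map a failable function over a list and stop at the first error", done with plain traverse and no hand-written loop or error plumbing (parsePort is a made-up validator, just to show the shape):

    import Text.Read (readMaybe)

    -- Hypothetical validator, invented for this example.
    parsePort :: String -> Either String Int
    parsePort s = case readMaybe s of
      Just n | n > 0 && n < 65536 -> Right n
      _                           -> Left ("bad port: " ++ s)

    -- Either every port parses, or you get the first error.
    parsePorts :: [String] -> Either String [Int]
    parsePorts = traverse parsePort

    main :: IO ()
    main = do
      print (parsePorts ["80", "443"])   -- Right [80,443]
      print (parsePorts ["80", "oops"])  -- Left "bad port: oops"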
> That needs a qualifier: it can make easy problems tough if you're not familiar with how to solve them in a functional context.
All problems are easy if you are familiar with how to solve them. Unfortunately, figuring out how to solve them is part of the problem, and that can be unusually hard in the case of functional programming. Like solving something with recursion instead of loops plus state. There is a reason cookbooks use loops, not recursion.
Not really, certain problems are just inherently harder to express in a purely functional way than they are in an imperative way (and the reverse is just as true). For example, computing a histogram is much simpler in imperative terms (keep an array of counts, go through the original list, add 1 to the array element corresponding to the current list element) than in a purely functional style, especially if you need a somewhat efficient implementation.
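For concreteness, the purely functional version usually ends up going through a persistent map rather than a mutable array, which is where the efficiency concern comes in; a minimal sketch using Data.Map.Strict from containers:

    import qualified Data.Map.Strict as Map

    -- Count occurrences with a persistent map instead of a mutable array.
    histogram :: Ord a => [a] -> Map.Map a Int
    histogram = Map.fromListWith (+) . map (\x -> (x, 1))

    main :: IO ()
    main = print (histogram "mississippi")
    -- fromList [('i',4),('m',1),('p',2),('s',4)]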
My favorite example of this is implementing quicksort. It's significantly easier in C than it is in Haskell.
I know you're being sarcastic, but even this linked-list quicksort could be made more efficient. By defining lesser and greater separately, you're traversing the list twice. You could compute them at the same time by changing those last two lines to this:
      where
        (lesser, greater) = partition (< p) xs
I just learned about the existence of this function today, and wonder why I had never seen it used before.
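For reference, a sketch of the full list-based version with that change, so each level only walks the input once (partition comes from Data.List):

    import Data.List (partition)

    quicksort :: Ord a => [a] -> [a]
    quicksort []     = []
    quicksort (p:xs) = quicksort lesser ++ [p] ++ quicksort greater
      where
        (lesser, greater) = partition (< p) xs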
Heh, I've used some version of partition in my TS/JS code ever since underscore.js. But "Haskell Quicksort" is basically a code meme now, and I didn't think to optimize it (though I think the original was a one-liner using list comprehensions).
Arguably the issue with "quicksort in Haskell" is not that it's "hard" to implement, but rather that it defeats the whole purpose of using a "purely functional" language.
The pragmatic way to look at Haskell is not that it's purely functional, but rather that you could write imperative code, er, "Monads", if you wanted (see the sketch below), and that gives a "functional by default" environment, whereas most imperative languages default to mutable objects etc. that are not friendly to functional-style programming.
But then more modern languages are catching on with immutable-by-default variables etc., so in the end the differences between newer languages may not be that great after all...
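To make that "imperative code via Monads" point concrete, here is a toy sketch of imperative-looking Haskell, accumulating into a mutable counter inside a loop (not idiomatic for this job, just an illustration):

    import Data.IORef

    -- The mutation is real, but it is confined to IO, so the rest of
    -- the program stays functional by default.
    sumImperative :: [Int] -> IO Int
    sumImperative xs = do
      total <- newIORef 0
      mapM_ (\x -> modifyIORef' total (+ x)) xs
      readIORef total

    main :: IO ()
    main = sumImperative [1 .. 10] >>= print   -- 55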
Well, I'd say using an entirely different collection type than the rest of the language (STArray instead of [a]) is already a big complication. It also ends up being more than double the size of the Java code. And, as the author admits, it's actually even slower than the original not-quicksort implementation above, because it has to make a copy of the original list and then return a copy of the mutated array.
So, one of the best sorting algorithms ever devised is not actually usable to sort a [a] in Haskell... I maintain this is a good example of making an easy problem tough.
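For anyone curious what the mutable-array route can look like, here is a minimal sketch using STArray with a Lomuto partition; note it still copies the list into the array and back out, which is exactly the overhead being discussed:

    import Data.Array (elems)
    import Data.Array.ST (STArray, newListArray, readArray, writeArray, runSTArray)

    -- Sort a list by copying it into a mutable STArray, sorting in place,
    -- and reading the result back out as an immutable array.
    quicksortST :: Ord a => [a] -> [a]
    quicksortST xs = elems $ runSTArray $ do
      let n = length xs
      arr <- newListArray (0, n - 1) xs
      sortRange arr 0 (n - 1)
      return arr
      where
        sortRange arr lo hi
          | lo >= hi  = return ()
          | otherwise = do
              p <- lomuto arr lo hi
              sortRange arr lo (p - 1)
              sortRange arr (p + 1) hi
        -- Lomuto partition: the last element is the pivot.
        lomuto arr lo hi = do
          pivot <- readArray arr hi
          let go i j
                | j >= hi   = swap arr i hi >> return i
                | otherwise = do
                    xj <- readArray arr j
                    if xj < pivot
                      then swap arr i j >> go (i + 1) (j + 1)
                      else go i (j + 1)
          go lo lo
        swap arr i j = do
          a <- readArray arr i
          b <- readArray arr j
          writeArray arr i b
          writeArray arr j a

    main :: IO ()
    main = print (quicksortST [3, 1, 4, 1, 5, 9, 2, 6])  -- [1,1,2,3,4,5,6,9]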
> I'd say using an entirely different collection type than the rest of language (STArray instead of [a]) is already a big complication
Haskell uses many different collection types, just like any other language. Why not?
> it's actually even slower than the original not-quicksort implementation above, because it actually has to make a copy of the original list, and then return a copy of the mutated array.
Sure, but it could also not do that, if callers are happy to provide a mutable array, just like any other language ...
> one of the best sorting algorithms ever devised is not actually usable to sort a [a] in Haskell
Indeed! One of the best algorithms for sorting a mutable array can't be used on an immutable data type, just like any other language ...
None of this invalidates your original claim that "It's significantly easier in C than it is in Haskell" of course.
quicksort2 has a very reasonable constraint of Array; it's the helpers that use the STArray implementation. I suspect it wouldn't be hard to port to MArray, though I don't know that it would perform any better (maybe avoiding some copies, since MArray is actually usable outside of runST). I also suspect the overhead of the copies pays off with larger lists, given the lack of space leaks compared to the naive algorithm. Some benchmarks of different-sized lists would have been nice.
I'm not a cheerleader for Haskell; it's not the most appropriate language for every job (certainly not for quicksort). But some folks suddenly become hyper-optimizing assembly programmers whenever anyone has the thought of porting a SQL CRUD app to another language... Horses for courses and all that.
Absolutely! Any beginner can readily combine the catamorphisms and anamorphisms in `recursion-schemes`, or use the ready-made hylomorphisms for common tasks such as setting a value in a data structure. What could be simpler? /s
> makes tough problems easy and easy problems tough
And because of mutual recursion, that means that tough is easy (and easy tough). In other words, if we call the class of tough problems T and easy problems NT, we have T==NT, given FP.
Yes, and AFAIK, you're pretty much free to cause side-effects in functional languages; it's just a bit awkward and somewhat discouraged. It's kind of like how C still has goto, yet it's still a structured programming language.
Even in Haskell, which tries to control side-effects more, it's not hard; it's just that it's stuck with an "IO" annotation. Any function that calls an IO function also becomes an IO function; it's infectious.
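A minimal illustration of that infectiousness (function names made up for the example):

    import Data.Char (toUpper)

    -- A pure helper: no IO in its type, so it cannot perform effects.
    shout :: String -> String
    shout = map toUpper

    -- getLine is an IO action, so anything that calls it is pushed into
    -- IO as well; the annotation propagates up the call chain.
    readName :: IO String
    readName = getLine

    greet :: IO ()
    greet = do
      name <- readName
      putStrLn ("Hello, " ++ shout name)

    main :: IO ()
    main = greet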
There's some real confusion about what "functional" means, because it depends on who's speaking. Writing _exclusively_ pure functions is a Haskell thing. A much looser definition of the functional style would be to say that you mostly write pure functions over immutable data, but when you have to actually do a side effect you write a procedure that talks to the outside world and go on with your day. If you go digging in this site's archives from about ten years ago, I recall numerous debates about what constituted functional programming, and whether the latter counts at all. But if we _are_ talking about Haskell, the answer to your question is obviously "Monads."
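A sketch of that looser style, with the logic in pure functions over immutable data and the side effects confined to a thin procedure at the edge (the file name is a placeholder invented for the example):

    -- Pure core: an ordinary function, trivially testable.
    average :: [Double] -> Double
    average [] = 0
    average xs = sum xs / fromIntegral (length xs)

    -- Effectful shell: one small procedure that talks to the outside
    -- world, then you go on with your day.
    main :: IO ()
    main = do
      contents <- readFile "numbers.txt"
      let nums = map read (lines contents) :: [Double]
      print (average nums)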
This is not true in my personal experience.
As has been famously said (paraphrased): Functional programming makes tough problems easy and easy problems tough.
In other words, the value of functional programming depends on your domain.