IMHO, in the context of the Tar-Pit paper (programming for services and applications rather than electronics), mutation is merely a means to an end, the end being: some piece of memory holds the value I expect (e.g., to transmit, store, or show information). Thus I disagree that mutable state is essential complexity. For the rest of the post, I don't understand what you mean by "walking the line between avoiding/ignoring mutable state", and I wish you had elaborated on what you mean by "play very well with Turing neighbors", because I cannot really connect with that.
Regarding functional programming and mutation. In a typed "pure-FP" language like Haskell:
- you need to explicitly declare what constitutes a valid value
- which drives what valid values are observable between program steps
- which informs how corrupted a data-structure can be in your program
For instance, take a tree-shaped structure like `Tree1 = Leaf Int32 | Node Int32 Tree1 Tree1` and some `MutableRef Tree1`. You know that from the ref you can read a whole Tree1, and you can change the content of the MutableRef, but you have to give a whole new Tree1. In particular, you are sure to never read "a half-initialized tree" or "a tree that was changed while I was reading it", because these things just do not exist in the realm of valid Tree1s. Such simple guarantees are really important to many programs but are not given in most programming languages.
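To make that concrete, here is a minimal Haskell sketch with `IORef` playing the role of MutableRef (the `bump`/`relabel` names are just for illustration):

> import Data.Int (Int32)
> import Data.IORef (IORef, readIORef, atomicModifyIORef')
>
> data Tree1 = Leaf Int32 | Node Int32 Tree1 Tree1
>
> -- A reader always sees a whole, valid Tree1; a writer can only
> -- swap in another whole, valid Tree1 (here, atomically).
> bump :: Tree1 -> Tree1
> bump (Leaf n)     = Leaf (n + 1)
> bump (Node n l r) = Node (n + 1) (bump l) (bump r)
>
> relabel :: IORef Tree1 -> IO ()
> relabel ref = atomicModifyIORef' ref (\t -> (bump t, ()))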
Of course, if for some reason (e.g., performance) you need something more flexible and are willing to pay the extra complexity, you can have it. As an illustration, you can augment your Tree1 with some MutableRef: `Tree2 = Leaf Int32 | Node Int32 Tree2 Tree2 | Cell (MutableRef Tree1)`. Here Tree2 contains mutable references to valid Tree1s, so Tree2 is not fractally complicated, though you could do that too. With Tree2, you can interleave reads, writes, and other operations while working on the tree. These extra effects may be required (for performance reasons, for instance) at the expense of inviting surprising behaviors (bugs). In pure-FP it is clear that with Tree2 we lost the ability to "read/write a whole tree in a single program step". Thus, if the control flow of the code is not linear (e.g., multi-threaded, callback-heavy), you may have fun things occurring, like printing one half of the tree at a given time and the other half after a mutation, resulting in printing a tree that never existed. Enforcing global properties like a heap invariant becomes inconvenient, because it is actually hard once we let mutation in. I'd go as far as saying that a Tree2 doesn't exist as a tree: given a Tree2, we are merely capable of enumerating tree-shaped chunks piecewise.
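Continuing the sketch above (constructors renamed to avoid a clash with Tree1; `snapshot` is just an illustrative name), the loss shows up in the types: walking a Tree2 is an IO action, so reads can interleave with writes:

> data Tree2
>   = Leaf2 Int32
>   | Node2 Int32 Tree2 Tree2
>   | Cell (IORef Tree1)
>
> -- "Reading the whole tree" is no longer a single step: every Cell
> -- is a separate IO read, and the refs may change between reads.
> snapshot :: Tree2 -> IO Tree1
> snapshot (Leaf2 n)     = pure (Leaf n)
> snapshot (Node2 n l r) = Node n <$> snapshot l <*> snapshot r
> snapshot (Cell ref)    = readIORef ref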
This reply is long already, so I won't go into how Haskell has various flavors of mutable refs in IO, ST, and STM, and recently even got linear types to allow fine-grained composition of atomic mutations, interleaving, and prevention of effects. But overall I find that pure-FP takes mutability much more seriously than more mainstream languages; pure-FP just keeps us honest about where non-determinism bites. The collective failure to realize this is one of the key reasons why we are in The Tar-Pit: violated in-memory invariants can compound even outside programs, by forcing outside/future users to deal with things like misleading information being shown to users, urgently upgrading your fleet of servers, days of one-off data cleanup, and so on and so forth.
I'm not sure we need a whole new programming paradigm and new languages (with the hiccup that they'll require bindings to FFI libs that do not offer as many guarantees). If you have a rich domain and want a performance boost from "fenced" mutations, Haskell has linear types, software transactional memory, and the ST monad, which are three ways to constrain side effects over references. If you need tight control of "the CPU as a state machine", for instance for IO-heavy services, then you have Rust, Ada, and other C-targeting DSLs.
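For example, a minimal sketch of the ST-style fence (the function name is made up): the mutation is real, but `runST` guarantees it cannot leak out of the computation:

> import Control.Monad.ST (runST)
> import Data.STRef (newSTRef, modifySTRef', readSTRef)
>
> -- Imperative-style accumulation inside; callers only ever see a pure Int.
> sumFenced :: [Int] -> Int
> sumFenced xs = runST $ do
>   acc <- newSTRef 0
>   mapM_ (\x -> modifySTRef' acc (+ x)) xs
>   readSTRef acc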
The way I've done it for my own notes (in a custom-made static-blog generator) is to have a template dedicated to "notes" and some over-engineered pipeline for similarly small datasets [1].
Dhall is advertised as a configuration language, but you can do a tad more with it. I use it in my blog engine to fit a use case I found poorly addressed by other approaches: small-cardinality datasets that benefit from type-checking and templating (e.g., a list of notes, a photo gallery). I don't claim the idea is especially novel, but I found the use case rare and interesting enough to write up some explanations, a design, and a demo here:
https://lucasdicioccio.github.io/dhall-section-demo.html
There is a myth that "monads" are some sort of magical insight -- they are not.
Difficult to understand: likely yes, because the myth is not groundless. What "monads" capture, with a lot of ceremony, is how to combine things: (0) the things we want to combine share some structure/properties, (1) we can inspect the first thing before deciding what the second thing is, (2) we can inspect both before deciding what the resulting combination is. What requires a lot of thought is appreciating why "inspect, decide, combine" are unified in a single concept.
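A tiny Haskell illustration of "inspect, decide, combine" with the Maybe monad (the lookup tables are hypothetical):

> -- (1) inspect the result of the first lookup before deciding what the
> --     second step is; (2) combine both into the final result.
> userCity :: String -> [(String, String)] -> [(String, String)] -> Maybe String
> userCity name users cities = do
>   userId <- lookup name users      -- inspect; Nothing short-circuits here
>   city   <- lookup userId cities   -- decided based on the first result
>   pure (name <> " lives in " <> city)  -- combine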
Important: indeed, because in Haskell-like languages monads are pervasive and even have syntactic support. The concept is also extremely useful when manipulating or approaching libraries that implement some monadic behaviour (e.g., promises in JS), because the "mental model" is rigorous. If you tell someone a library is a monadic DSL to express business rules in a specific domain, you're giving them a head start.
Some final lament: there's a fraction of people who have found that disparaging (or over-hyping) the concept is a sure-fire way to gain social standing. Thus, when learning the concept of monads, one situational difficulty we should not understate is having to overcome the peer pressure from one's circle of colleagues/friends. Forging one's own understanding and opinions takes more detachment than the typical tech job provides.
I wouldn't call that trivial, but rather "common", and for these cases we could even use something like catMaybes:
> sum . catMaybes
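For example, in GHCi (`catMaybes` comes from `Data.Maybe`):

> ghci> import Data.Maybe (catMaybes)
> ghci> sum . catMaybes $ [Just 1, Nothing, Just 41]
> 42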
Lenses (imho) shine when you have to write data extractors for arbitrarily complex and ill-documented APIs with corner cases, or for things like HTML scraping. The beauty of lenses is that you get one composable language across libraries. For instance, if your HTTP library has lenses and your XML library has lenses, you can safely compose a getElementsByTagName with some XML parsing and data extraction:
> response ^.. responseBody . xml . folding universe . named (only "p") . text
and the bonus is that if you need to filter, you can do that with a similar mechanism (e.g., filtering based on some HTML meta tag).
Overall it feels like SQL for arbitrary structures: dense but powerful.
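To make the filtering point concrete without pulling in HTTP/XML libraries, here is a self-contained sketch using only `lens` on a made-up `Post` record:

> {-# LANGUAGE TemplateHaskell #-}
> import Control.Lens
>
> data Post = Post { _title :: String, _tags :: [String] } deriving Show
> makeLenses ''Post
>
> -- titles of the posts tagged "haskell", using the same composable vocabulary
> haskellTitles :: [Post] -> [String]
> haskellTitles posts =
>   posts ^.. folded . filtered (elem "haskell" . view tags) . title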
I think it comes down to a mix of (a) how people (and robots) organize and find information, and (b) tool limitations and genericity.
Regarding (a), I personally like the view that content lives in a "flat world" on top of which we collate different structures to organize/filter the same set of content. In that worldview, web users' entry points can be more than directory listings. A great inspiration is how Wikipedia offers a way to find which articles use a given picture: each picture acts like a "category", the same way that "recent changes" is another filter on the same "flat world" of articles.
However, what is immensely difficult is standardizing these in a world where de-facto implementations burgeon, flourish, and eventually become out of touch (e.g., sitemaps, RSS, OpenGraph). Hence we are stuck with very limited but very generic tools (b), for which rules like "directory-listing on the slash separator" or "generate a JSON of the whole site's connections to display as an interactive graph" (which I do on my personal blog) are merely local workarounds that require a bit of duct tape to work.
I've recently written my own static-blog generator. On the one hand, I got irritated trying the popular ones, or figured out I would quickly hit their limitations. On the other hand, the effort I would have spent searching for and evaluating the hundreds of other existing ones is on par with re-implementing my own.
This comment misses the point of the article, which argues against an urban legend that Haskell is too difficult "unless you are this tall" (a PhD, an E.T., a superhuman). I also had a shot at explaining why people are already tall enough in a past short "comic strip" [0]. I think the article does a good job of reassuring readers that the urban legend is a myth.
Now, there are many reasons why one would want to use Haskell: for instance, to have fun, to understand some pattern better, to write expanding-brain memes, or because they are good at solving problems with it. It is fine if your reasons do not intersect with other people's reasons. You can look for answers on whether Haskell matches your reasons in some other essays [1,2,3]. Maybe it is true for many people that their needs and desires are entirely covered by their own toolset. As a curious/optimistic person, I find it incredibly pedantic to say I'll never ever need to learn/use something (there's different goodness in everything).
Personally, practicing Haskell led me to appreciate the importance of, and trade-offs in, isolating/interleaving side effects. Similarly, it gave me some vocabulary to articulate my thoughts about properties of systems. Both are super important in the large, when you architect software and systems. There are other ways to learn that (e.g., a collection of specialized languages), but at least for me, Haskell helped build an intuition around these topics. That said, the prevalence of negative and derailing comments in discussions about Haskell can be demotivating (but our industry is like this).
Mostly talking about some engineering and project management topics. It often revolves around decision-making (in a broad sense).