> A lot of people think that functional programming is mostly about avoiding mutation at all costs.
People should try to stop thinking of mutation as something to be avoided, and start thinking of it as something to be managed.
Mutating state is good. That's usually the whole point.
What's bad is when you create "accidental state" that goes unmanaged or that requires an infeasible effort to maintain. What you want is a source of truth for any given bit of mutable state, plus a mechanism to update anything that depends on that state when it is changed.
The second part is where functional programming shines -- you have a simple way to just recompute all the derived things. And since it's presumably the same way the derived things were computed in the first place, you don't have to worry about its logic getting out-of-sync.
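A minimal Rust sketch of that idea, with hypothetical names (`Cart`, `total`, etc. are illustrative, not from any real API): one source of truth, with derived values recomputed by the same pure function every time instead of being cached and kept in sync by hand.

```rust
// Sketch: one source of truth plus recomputed derived state.
struct Cart {
    prices: Vec<u32>, // the single source of truth, in cents
}

impl Cart {
    // A derived value is a pure function of the source of truth,
    // so its logic can never drift out of sync with a cached copy.
    fn total(&self) -> u32 {
        self.prices.iter().sum()
    }

    fn add(&mut self, price: u32) {
        self.prices.push(price);
        // No cached total to patch up: readers just call total() again,
        // which reruns the same logic that produced the old value.
    }
}

fn main() {
    let mut cart = Cart { prices: vec![100, 250] };
    assert_eq!(cart.total(), 350);
    cart.add(50);
    assert_eq!(cart.total(), 400);
}
```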
I like to say that immutability was a really good idea in the 1990s, especially considering how counterculture it would have been at the time. I don't mean that dismissively or patronizingly, I'm serious. It was a good cutting-edge idea.
However, nobody had any experience with it. Now we do. And I think what that experience generally says is that it's a bit overkill. We can do better. Like Rust. Or possibly linear types, though that is, I think, much more speculative right now. Or other choices. I like mutable islands in a generally immutable/unshared global space as a design point myself: mutability's main problem is that it gets exponentially more complicated to deal with as the domain of mutability grows, but if you confine mutability to lots of little domains that don't cross (ideally enforced by the compiler, but not necessarily), it really isn't that scary.
It was a necessary step in the evolution of programming ideas, but it's an awful lot to ask that it be The One True Idea for all time, in all places, and that nobody in the intervening decades could come up with anything that was in any way an improvement in any problem space.
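A small Rust sketch of the "mutable islands" design point (the function and data here are made up for illustration): mutation lives only inside one function, while everything shared outside it stays immutable.

```rust
// Mutation is confined to a small "island": the Vec is mutable only
// inside normalize, and callers only ever see the finished result.
fn normalize(input: &[f64]) -> Vec<f64> {
    let max = input.iter().cloned().fold(f64::MIN, f64::max);
    let mut out = Vec::with_capacity(input.len()); // local, unshared mutation
    for &x in input {
        out.push(x / max);
    }
    out // ownership moves out; the mutable island is gone
}

fn main() {
    let data = vec![1.0, 2.0, 4.0]; // shared, never mutated
    let scaled = normalize(&data);
    assert_eq!(scaled, vec![0.25, 0.5, 1.0]);
}
```

Because the domain of mutability never crosses the function boundary, the scary global reasoning about who else might be mutating `out` simply doesn't arise.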
> I like to say that immutability was a really good idea in the 1990s, especially considering how counterculture it would have been at the time. I don't mean that dismissively or patronizingly, I'm serious. It was a good cutting-edge idea.
> However, nobody had any experience with it. Now we do.
Working intimately with C++ in the '90s, immutability as a concept was neither considered counterculture nor without significant experience employing it. At that time and in C++, it was commonly known as "const correctness" and was a key code review topic.
Go back another decade or two when K&R C ruled the land and that's a different story ;-).
The funniest thing is that to compile Rust, with its various compiler analyses (like the borrow checker, etc.), the code has to be transformed into non-mutating Static Single Assignment (SSA) form :V
And in the functional world, Lean 4 achieves language-level support for local mutation, early return, etc. in the same way, by converting it into non-mutating code.
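A rough Rust illustration of the idea (just a sketch, not actual compiler internals): the mutable loop and the mutation-free fold below compute the same thing, and the fold is close in spirit to the single-assignment form such loops get lowered to, where every "update" is a fresh value rather than an overwrite.

```rust
// A loop written with local mutation...
fn sum_squares_mut(xs: &[i64]) -> i64 {
    let mut acc = 0;
    for &x in xs {
        acc += x * x;
    }
    acc
}

// ...and the same computation with no mutation at all: each "update"
// of acc becomes a new value threaded through the fold, much like the
// single-assignment form a compiler lowers the mutable loop to.
fn sum_squares_fold(xs: &[i64]) -> i64 {
    xs.iter().fold(0, |acc, &x| acc + x * x)
}

fn main() {
    let xs = [1, 2, 3];
    assert_eq!(sum_squares_mut(&xs), 14);
    assert_eq!(sum_squares_mut(&xs), sum_squares_fold(&xs));
}
```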
All this immutability discussion is more about going against "old school object orientation" which promoted keeping state spread out around a lot of mutable object instances.
This is a hill I will die on. Probably literally. If you are writing in a metaphor of equations, then yes, mutation is almost certainly going to bite you. If you are writing in a metaphor of process, you almost certainly want to manage it, as you say.
I feel that early texts were good at this. Turtle Geometry is my personal favorite book in this vein. I seem to recall we spent a long time going over how to double buffer graphics so that you could be working on one buffer while letting the system draw the other. Not sure what texts we used for that, back in the day.
Later texts, though, go through a lot of hurdles to hide the fact that things are actively changing. The entire point is to change things.
I'm not entirely clear what you mean. I can easily see malloc/free in the realm of "incidental complexity." Many of the abstractions in process descriptions are absolutely not incidental, though?
Control-flow is an awkward choice there, as goto is not necessarily fundamental to how control flow works in a lot of code. And I have absolutely used labeled break/continue in Java for a control loop that ran great until people tried to refactor it to use more indirect control.
I also find it interesting because I greatly prefer code that you can read left-to-right and top-to-bottom to know what it is intended to do.
At any rate, my original intent was to discuss how code that is controlling something works really well if you embrace a metaphor for the code you are in.
> The second part is where functional programming shines -- you have a simple way to just recompute all the derived things. And since it's presumably the same way the derived things were computed in the first place, you don't have to worry about its logic getting out-of-sync.
Thinking about this further, it is also performance-related. I think there is an interesting relationship between FP and DOD:
A technique of data oriented design is to keep state minimal and lazily derive data when you actually need it, which may involve recomputing things. The rationale is that compressed, normalized data requires less fetching from memory and computation on it is faster.
In contrast, caching and buffering, both of which are heavily stateful and require a lot of additional memory, are often necessary because they minimize inherently slow operations that are out of your control. Those kinds of things are often best implemented as (computational) objects with encapsulated internal state and small, general interfaces, like OO has taught us.
But once the data is in your control, this mindset has to be flipped on its head. You want to model your in-memory data not that differently from how you'd model data for a database: neatly aligned, normalized data, with computed columns, views and queries to get richer answers.
Interestingly, if you follow this approach, code starts to look more like functional code, because you potentially need the whole context to derive values from it, and a lot less like independent objects that send messages to each other.
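A small Rust sketch of that database-like modeling (all names here are hypothetical): flat, normalized rows as the only stored state, with a "view" implemented as a plain function over the whole dataset, like a SQL aggregate, rather than a cached field on individual objects.

```rust
// Normalized "table": one flat row type, no per-object caches.
struct Order {
    customer_id: u32,
    amount_cents: u64,
}

// A computed column / view: derived on demand from the whole context,
// instead of being stored on each customer and kept in sync manually.
fn total_for_customer(orders: &[Order], customer_id: u32) -> u64 {
    orders
        .iter()
        .filter(|o| o.customer_id == customer_id)
        .map(|o| o.amount_cents)
        .sum()
}

fn main() {
    let orders = vec![
        Order { customer_id: 1, amount_cents: 500 },
        Order { customer_id: 2, amount_cents: 300 },
        Order { customer_id: 1, amount_cents: 200 },
    ];
    assert_eq!(total_for_customer(&orders, 1), 700);
    assert_eq!(total_for_customer(&orders, 2), 300);
}
```

Note that the query function needs the whole `orders` slice as context, which is exactly the functional flavor described above: derivations over the full dataset rather than messages between stateful objects.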
That should read, “plus one mechanism”. Another place where people get into trouble is thinking “a” means >= 1 instead of exactly one.
Things get bad when the state has multiple entities trying, and failing, to manage it consistently. Functional programming tends to add friction that discourages keeping a lot of state, which to some extent controls the superlinear complexity of state management by making each piece dearer.