> But we've still ended up with only two basic formalisms that almost(?) all mathematics is expressed in terms of: geometry (visual mathematics), and algebra (symbolic mathematics, including formal logic.) Because those are the formalisms that we have mental hardware to comprehend.
> We'd certainly never take the formalisms that do work for us, throw them out, and replace them with non-human-mind-comprehensible formalisms instead. Why would we bother? What would we gain?
The dominance of those formalisms is at least as much about the physical hardware as the mental hardware. In my head, a mathematical proof is usually something like a tree structure or even a wiki: I have an overview that A implies B and that B and C together imply D, and then if I "zoom in" there's all sorts of structure to the steps that get from A to B. Of course if I were to write this proof on paper or a blackboard then I'd have to flatten it (though not completely; I'd certainly break it up into lemma x and sub-lemma y), but saying that paper is the human-mind-comprehensible formalism is putting the cart before the horse. The kinds of richer structures that we can use with computers - Jupyter-style notebooks, or wiki-like linked cross-references - make it easier for humans to comprehend, not harder.
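That DAG-shaped proof structure can even be written down directly; here's a sketch in Lean of the overview level (the propositions and hypothesis names are just placeholders for illustration), where "zooming in" would mean expanding each hypothesis into a sub-lemma of its own:

```lean
-- Overview level of the proof: A ⇒ B, and B together with C ⇒ D.
-- hAB and hBCD are the collapsed sub-proofs you'd expand by "zooming in".
variable {A B C D : Prop}

theorem overview (hAB : A → B) (hBCD : B ∧ C → D)
    (hA : A) (hC : C) : D :=
  hBCD ⟨hAB hA, hC⟩
```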
> Can you think in terms of what a JetBrains codebase will look like after you refactor it, without actually doing the refactoring and seeing the result? How about with two or three such applications in play? It gets pretty hard, no? Because those aren't just pure symbol-manip. They're not just moving letters around on the mental blackboard that represents the current module. The non-local effects aren't strictly intuitive. Each layer of that that you have to hold solely in your head bogs your thinking down. In a high-friction language, you just don't bother to try to work too far ahead with such changes; you just "take things one step at a time", applying each change and then re-learning the codebase in light of it; sometimes applying a change and then backing out once you realize you've got your sequencing wrong.
Completely disagree. The result of the refactoring is always the obvious thing that you'd expect, and having the IDE take care of the details of which characters change in which files makes it much easier to focus on the actual change you're making rather than getting bogged down in those mechanics. I can't imagine understanding programming as manipulating a flat sequence of symbols; it would be like trying to work in Brainfuck.
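To make that concrete, here's a toy sketch using Python's `ast` module (nothing like a real IDE's rename engine, and it ignores scoping) of why a semantic rename does "the obvious thing" where textual search-and-replace doesn't:

```python
# Toy sketch: rename by walking the syntax tree, not by editing characters.
# A semantic rename only touches identifiers that actually resolve to the
# renamed name; textual replace would mangle anything containing it.
import ast

source = """
total = 0
def add(total_delta):
    return total_delta
"""

class RenameName(ast.NodeTransformer):
    """Rename Name nodes with a matching identifier, leave the rest alone."""
    def __init__(self, old, new):
        self.old, self.new = old, new

    def visit_Name(self, node):
        if node.id == self.old:
            node.id = self.new
        return node

tree = ast.parse(source)
renamed = ast.unparse(RenameName("total", "grand_total").visit(tree))
print(renamed)
# 'total_delta' survives intact, whereas source.replace("total", "grand_total")
# would have mangled it into 'grand_total_delta'.
```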
> Ask a composer: what do you think about when you compose music? Do you think of the sound; or do you picture the score? Where music's concerned, I would guess pretty much everyone is thinking about the sound, because that's a lot more intuitive. Someone could probably write a song entirely by manipulating a score in their mind—and make it sound good!—but it'd be hell to do, in comparison.
Again I think you're getting this completely backwards. I have the structure of the program in my head, and it's something much richer than a sequence of symbols; it's a graph of directed connections, and much more besides. An IDE that can do things like show references on hover gives me a much closer representation of the program than a linear sequence of symbols does, and brings me much closer to being able to manipulate the program itself; manipulating the sequence of symbols sounds exactly like manipulating the score instead of manipulating the sound.
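Here's a toy version of that graph view (again just a sketch, not how any IDE actually models code): recover a directed call graph from the flat text, which is much closer to the structure I actually think in:

```python
# Toy sketch: treat a program as a graph of directed references
# (who calls whom) instead of a flat stream of symbols.
import ast
from collections import defaultdict

source = """
def parse(s):
    return s.split()

def evaluate(tokens):
    return len(tokens)

def run(s):
    return evaluate(parse(s))
"""

tree = ast.parse(source)
defined = {n.name for n in tree.body if isinstance(n, ast.FunctionDef)}

# edges[f] = the set of locally-defined functions that f calls;
# roughly the "references" an IDE would show on hover.
edges = defaultdict(set)
for fn in tree.body:
    if isinstance(fn, ast.FunctionDef):
        for node in ast.walk(fn):
            if isinstance(node, ast.Call) and isinstance(node.func, ast.Name):
                if node.func.id in defined:
                    edges[fn.name].add(node.func.id)

print(dict(edges))  # run references both parse and evaluate
```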