"He hadn’t quite internalized left-hand side vs. right-hand side." This is noteworthy.
In mathematics, equality is symmetric (a=b is the same as b=a).
In programming, variable assignment is asymmetric (a=b is not the same as b=a).
Some programming languages recognize this but make the wrong change: they alter the equality operator to == or === while keeping = for assignment. Others get it right, e.g., := as assignment.
This is a tradeoff: one hard day in your teens ("But but but equals should work both ways!" head explodes) plus thirty seconds every time you pick up a new language ("Assignment operator is equals? Got it."), versus typing more several dozen or hundred times a day, every day, for the rest of your professional life.
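A minimal sketch of the asymmetry the thread is describing, in Python (where, as above, = is assignment and == is equality):

```python
# Assignment is asymmetric: the right-hand side is evaluated,
# then the result is stored in the name on the left.
a = 3
b = a       # b becomes 3; a is unchanged
b = 7
a = b       # "a = b" is not "b = a": now a is 7
assert a == 7 and b == 7

# Equality, by contrast, is symmetric:
assert (a == b) == (b == a)
```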
Yes, but don't overlook the "Let x =" syntax of mathematics. Programming has just dropped the "let". So, really there is a choice to be made about how to resolve that ambiguity in a language parser. Visual Basic has the nice case of A = B = C setting A equal to the boolean expression B = C.
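For contrast with the Visual Basic case above, a quick sketch of how Python resolves the same ambiguity: = is only legal as a statement and == only as an expression, so the parser never has to guess.

```python
b, c = 5, 5

# The Visual Basic reading of A = B = C, made explicit:
# assign the boolean result of the comparison to a.
a = (b == c)
assert a is True

# Chained == is a single comparison, not a nested assignment:
assert b == c == 5
```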
I don't know where I picked this up, but I use asymmetric whitespace around the assignment operator: a= b. It provides a subtle hint that assignment is not equality.
I know that's what it does under the covers, but is there any reason not to think of it as assignment in your head? That's the mental construct I use in my extremely limited Haskell coding, anyway.
As lanaer said, "<-" is really syntax sugar for ">>=" (bind), and could mean a variety of things (including assignment). "=" is binding a variable, as in "let", and is not symmetric. One could argue that a better name for "=" would be "->", as in "case".
I would like to see someone take one thousand twelve year olds, stratify them and randomly split them up into groups of two hundred for the summer and teach each group basic programming using five different languages: Python, Basic, Scheme, and a couple more, then at the end, perform some type of objective programming skills evaluation to determine which language really is the best for beginning learners.
I tried that with just two kids using PLT Scheme and JavaScript. The biggest hurdle is convincing them that software is useful outside the development environment; we wrote apps for the desktop/GUI, console, and MySpace. The kids then tried to customize everything, including system utilities, Open Office, and their own 3rd-party games. It was baffling how much they expected from their programming tools; they demanded full integration between the compiler and whatever else they were running on their system. They don't understand that some software is extensible (the Dr. Scheme IDE) and some is not (Minesweeper, Bejeweled), but that was mainly my fault, because I did some magic behind the scenes and put their JavaScript "commands" in the Firefox toolbar.
My nephews kept thinking of functions, operators, and special forms (conditionals and iteration) as "objects". They wanted to put "title", "value" and "callback" on the "wrong" objects. For example, they could bind a callback to a button and have it fire when the button was clicked; they wanted to attach a similar callback to a block of code and have it fired when the block was executed (a trivial problem to fix: just call your callback from the body of the code block, but they wanted to use callback syntax).
I am sure they would have loved Squeak, but I didn't have the time to learn it in time; my previous experience with Smalltalk had been brief, mainly with GST and theoretical uses of Self.
Kids are a tougher crowd to please than programmers with some "training", because children have not yet accepted whatever limitations are imposed on us by "industry". They want the Ideal.
Even better, once they've learned one, see how they perform relative to one another at learning one or more of the other languages.
See if there is a language that is further upstream than the rest, and also see if there is a language that is such a paradigm shift from the rest, and perhaps even from typical human-thought patterns, that many people from every group struggle to learn it. For those who don't, see if you can discover what is different about how they reason their way through programming.
Or perhaps find the language that everyone just gets once they've been exposed to one or more other languages. See if it really is the easiest, and more sparsely featured of the bunch, or if it just does a remarkably good job of unifying the concepts from all the others.
Just a couple of thoughts as I read your comment. :-)
> such a paradigm shift from the rest, and perhaps even from typical human-thought patterns
For me, this is Haskell. Trying to write any decently large program (say, a Super Mario clone) causes brain hemorrhage.
I've also met people who learned Haskell as a first programming language (common in UK unis). A lot of them wind up being crappy C++/Java/Python programmers, because they keep waving their arms and expecting magic to happen like it does in Haskell.
Indeed, I learned Python and Haskell simultaneously during a seriously nerdy week (the gf. was away :) and my Python code came out with a bloody lambda on every other line! Then I realized I was trying to re-implement laziness, but obviously Python is not lazy at all (that's right: arguments get evaluated eagerly, the moment you call, not when the result is actually needed).
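A small sketch of where Python's eagerness actually bites (the names here are made up for illustration): wrapping a call in a lambda does delay evaluation of the body, but ordinary call arguments are always evaluated before the callee runs, whether or not it uses them.

```python
log = []

def noisy(x):
    # Record that we were evaluated, then return our argument.
    log.append(x)
    return x

# A lambda delays evaluation of its body (a poor man's thunk)...
thunk = lambda: noisy(1)
assert log == []        # nothing has been evaluated yet
thunk()
assert log == [1]

# ...but call arguments are evaluated eagerly, even if ignored:
def ignore(_):
    return None

ignore(noisy(2))
assert log == [1, 2]    # noisy(2) ran despite being unused
```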
> For me, this is Haskell. Trying to write any decently large program (say, a Super Mario clone) causes brain hemorrhage.
Programming mostly in Haskell, I could say the same (brain hemorrhage) about imperative programming languages. They are just so damn (unnecessarily) hard to program in.
It would seem you would need more than one group for each language in order to determine if it was the language or the teacher that has the greatest influence on a student's ability to learn to program.
I should have put "teach" in quotes; I'm from the unschooling camp, if you're familiar with that. I think the "teachers", if they existed at all, would be minimalists.
I had a similar experience when helping a friend with a bug in his code from an introductory programming course at my alma mater. The programming assignment involved manipulation of an array of data in C, and there was a counter variable involved to keep track of which element of the array was currently being manipulated.
He understood that there was a relationship between the array and the counter variable, but he took the relationship too far. He thought that the counter variable needed to represent the element of the array, and so he was assigning the value of the particular element in the array to the counter variable. In his mind, the counter variable didn't just provide the element number to look for; it was the element he was looking for, or at least it needed to become so.
Both this, and the color example provided by the author, demonstrate how many implied relationships we depend on within the programs we write, and how many people believe that those relationships need to be explicitly stated or else the compiler won't understand what we mean.
I don't think I was able to get my friend to really understand what was going on and why we couldn't just 'tell' the computer to 'relate' these two things for us. It was a level of such naive concreteness that his mind just fought against descending to. I wonder how someone might educate people in thinking at this level? The optimist in me says it might be possible; the pessimist suggests that it might be in the same realm as trying to get people to understand pointers and recursion, which enough people I've read suggest is extremely hard for some (many?) people to understand.
This is where variable names help. I wouldn't call it a "counter", I would instead call it an "index" or "position". Then you have a separate variable called "element" for the specific element of the array, or whatever.
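The naming advice above, sketched in Python rather than the original C (the variable names are just illustrative):

```python
data = [10, 20, 30]

# "index" is a position into the array; "element" is the value
# stored at that position. Related, but never interchangeable.
index = 1
element = data[index]
assert element == 20

# Advancing the loop means changing the index, not the element:
index = index + 1
assert data[index] == 30
```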
It's interesting to read that. I've always found it hard to explain programming language problems, because I internalized so many conventions so long ago. It can be a challenge to plumb your memory and list all the interesting bits.
7th grade, TI-BASIC. That's what got it started for me. The book that came with the TI-83 had a little chapter on the built-in language and I ran with it. No student with any knowledge of algebra and symbolic mathematics should be making these errors (shown in the article) with named arguments though...that's just weird. Who sees "0 = x" in class?
Thanks for the flashback. I wonder how many HN members got started on their TI-83. I remember even the most remedial users figuring out how to store enough data to cheat in pre-calc. IMO, the language is less relevant, it just takes a project.
The first real applications I ever wrote were TI-Basic apps I wrote for my TI-89 back in Algebra class.
I remember coding them directly with the keyboard on the calculator. I wrote all kinds of things: tools to help me solve equations, geometry solvers, games. It was a blast, and I think that's what got me interested in programming.
Thanks TI! Maybe middle schools should work on developing an introductory programming curriculum to introduce kids to the idea of programming. It's probably more important than most things middle schoolers are learning, especially considering I spent so much of my time in math class programming my calculator.
As far as serious work goes, I only got as far as really simple stuff to take care of the quadratic formula, etc. I was all about the text adventure back then (slow, memory-leaking DrugWars-style game, but people liked it!).
I do. But then, I learned my arithmetic in arabic ;-)
(OT: I can also write mirror-text at my natural hand-writing pace, and often get the order of drawing individual glyphs "wrong"; I might, for example, make a dot first and then a vertical bar to write an 'i'. The occasional "russian" backward or upside-down letter creeps in when I am tired, etc.)
I actually think that this rule/convention of having the value on the right and the location on the left is a little arbitrary and potentially confusing. To someone who's learning to program, putting something that happens after (the assignment) before something that happens first (computation of the value to assign) can be counterintuitive.
Once you get used to it you stop thinking about it, but it's important to realize that this concept isn't as intuitive as it may appear.
Oh, and you didn't ever use the store arrow key (STO->) on your TI-83 to store a value? With this operation the variable goes on the right of the expression, not the left.
I remember in sixth grade when they revealed to us that "3 \cdot a" and "3a" were additional notations for "3 x a" (three times a) and encouraged use of those notations on homework, I happily wrote "a3" as well because multiplication is commutative.
True, the dependent variable should be on the left. Sometimes it matters less, for example to express that two sides have the same length, a = b and b = a would both be acceptable.
Possibly more important, you would never see:
a = 3
a = 4
Begs the question: which is it, 3 or 4? Or does 3 = 4 too? Just a reminder that the operators really are equality, not assignment.
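The contrast above, made concrete in Python: as equations those two lines contradict each other, but as statements they simply overwrite the same location in turn.

```python
a = 3
a = 4
# No contradiction: the earlier binding is simply gone.
assert a == 4
```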