Actually, the article content doesn't seem to match the title.
The best predictor according to the article (50% of variance) is fluid intelligence. Language aptitude was a distant second place at 8% of variance explained, narrowly edging out memory for third place.
I don't think anyone is at all surprised by a result that says "being able to think logically in novel situations is a better predictor of whether you will be able to pick up programming quickly than all other predictors combined".
Nor am I particularly surprised to learn that the component of mathematical expertise not derived from fluid intelligence (e.g. the ability to perform mental arithmetic) isn't predictive of the ability to program.
There are several different results. Language aptitude was the best predictor for learning rate. Fluid intelligence was the best predictor for programming accuracy (the numbers you quote), and for declarative knowledge.
But yeah, the actual title of this 2020 article is "Relating Natural Language Aptitude to Individual Differences in Learning Programming Languages". The current HN title, "Predictor of learning to code is language aptitude - not math/cognitive ability", doesn't appear in the article, and I don't see how it is supported (beyond numeracy specifically not showing up strongly), given that they found general cognitive ability to be a significant predictor.
> Actually, article content doesn’t match the title … best predictor is fluid intelligence
On the contrary, the title, for both this submission and the article, emphasizes the “learning” phase, while you’re citing the success outcomes bit. So not actually.
On learning, the article says:
> Learning: When the six predictors of Python learning rate (language aptitude, numeracy, fluid reasoning, working memory span, working memory updating, and right fronto-temporal beta power) competed to explain variance, the best fitting model included four predictors: language aptitude, fluid reasoning (RAPM), right fronto-temporal beta power, and numeracy. This model was highly significant [F(4,28) = 15.44, p < 0.001], and explained 72% of the total variance in Python learning rate. Language aptitude was the strongest predictor, explaining 43.1% of the variance, followed by fluid reasoning, which contributed an additional 12.8% of the variance, right fronto-temporal beta power, which explained 10%, and numeracy scores, which explained 6.1% of the variance.
TL;DR: For “learning”, language aptitude was the strongest predictor, explaining 43.1% of the variance, followed by fluid reasoning, which contributed an additional 12.8% of the variance…
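For anyone unfamiliar with how those "additional variance" figures arise: they come from hierarchical regression, where each predictor's contribution is the increase in R² when it enters the model after the earlier ones. A toy sketch with synthetic data (the predictor names and coefficients here are made up for illustration, not the study's actual measurements):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 36  # same sample size as the study

# Synthetic predictors and outcome (purely illustrative)
language = rng.normal(size=n)
fluid = rng.normal(size=n)
outcome = 0.7 * language + 0.4 * fluid + rng.normal(scale=0.5, size=n)

def r_squared(predictors, y):
    """R^2 of an ordinary least-squares fit with an intercept."""
    X = np.column_stack([np.ones(len(y))] + list(predictors))
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    return 1 - resid.var() / y.var()

r2_lang = r_squared([language], outcome)
r2_both = r_squared([language, fluid], outcome)
print(f"language alone:    R^2 = {r2_lang:.3f}")
print(f"+ fluid reasoning: R^2 = {r2_both:.3f} "
      f"(additional {r2_both - r2_lang:.3f})")
```

The "additional variance" for the second predictor is just the difference between the two R² values; note that R² can never decrease when a predictor is added, which is why the contributions in the paper sum neatly toward the total 72%.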
I don’t think that a study on such a complex matter, involving just 36 participants, really advances science in any way.
That said, as somebody who has worked almost exclusively on enterprise software, my job has mainly been to write code that the next dev would understand. It was a sort of indirect communication with the next dev (who could be me, two years after writing the code), so I can see why language skills matter. It may be different for people working on graphics engines or mathematical problems.
> Rate of learning, programming accuracy, and post-test declarative knowledge were used as outcome measures in 36 individuals who participated in ten 45-minute Python training sessions.
Ten 45-minute Python training sessions?
I don't really think this is indicative of learning to code.
Would be interesting to see a much longer study looking at CS undergrads or perhaps coding bootcamp students.
Once you get into Data Structures & Algorithms or something then you'll probably see math ability start to play a role.
Not only that, but imperative Python is quite different from, e.g., declarative programming (SQL), functional programming (Haskell, OCaml), metaprogramming (templates/macros), …
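Purely to illustrate the paradigm gap: here is the same task (summing the squares of the even numbers) written first in an imperative style and then in a more declarative/functional style, both in Python since that's what the study used:

```python
# Imperative style: explicit state mutation and control flow
def sum_even_squares_imperative(numbers):
    total = 0
    for n in numbers:
        if n % 2 == 0:
            total += n * n
    return total

# Declarative/functional style: describe the result, not the steps
def sum_even_squares_declarative(numbers):
    return sum(n * n for n in numbers if n % 2 == 0)

print(sum_even_squares_imperative(range(10)))   # 120
print(sum_even_squares_declarative(range(10)))  # 120
```

A short course that drills the first style says little about how quickly someone would pick up the second, let alone SQL or Haskell.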
Entirely anecdotally, I find that a maths/science background sometimes feels like more of an impediment to teaching someone practical code than an arts/philosophy/language one.
Teaching is so often about finding the right metaphor, and students who are comfortable with metaphor tend to grasp the concepts faster.
This doesn't hold for the deeper bits - the specifics of boolean algebra etc. - there you do want that rigorous, stepped approach to understanding. But in terms of quickly getting to grips with a language and its paradigms - programming, rather than computer science - the more practice a student has at working with highly abstract concepts the better. That's something that does often come with a STEM background, but pretty much always with an arts one.
This paper has been posted in various forms in various places, but it never stops being full of shit. The "coding" test was essentially memorizing some Python syntax, which makes it meaningless. Algorithms and abstraction are just math, and syntax is syntax; great programmers are generally good at the former, which the study doesn't even attempt to test for. I'm all for diversity in tech, but the narrative does get tiring after a while.
Would love to see a larger study to know more about the “first evidence that measures of intrinsic network connectivity obtained from resting-state (rs)EEG can be used to predict Python learning outcomes”.
Is the title not telling on someone's biases, conflating cognitive ability with mathematics and dismissing language aptitude as not cognitive by comparison?
This could contribute to outliers, and looking at the raw data there is too much scatter for me to see a definitive comparison.
Plus, with a common y-axis but different x-axis scales, the slopes are not as directly comparable as I would like.
More helpful normalization might be possible with a much larger data set.
However, it can be seen that only two participants had learning-rate scores of 1.6 or above, so those must be the same two points on each applicable graph. Notice they were at the high end of all three tested characteristics.