The sense I'm getting is that the output will look authoritative, but may not work. When it doesn't work, the failure may be obvious, or it may be latent. What does someone need to know in order to detect latent flaws (and to ask the right questions in the first place)?
I imagine it helps to know a lot of the same basics, but as AI gets better at reliably performing certain kinds of tasks, it becomes OK to treat more and more of them as 'black boxes'. I'm trying to figure out what those black boxes are, and what they will be in the future. Because the more time you spend learning something that becomes irrelevant, the less time you can spend learning other stuff that remains relevant (coding or otherwise).
You try to run the code, or your kid tries to run it. I do plan to eventually build in an auto-debugging mode, to the extent that's possible. I'd say about half the time, feeding the error message or a bug description back into the AI coding model is enough for it to fix the bug on its own.
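For what it's worth, the loop I have in mind is roughly the sketch below: run the script, and if it fails, hand the source plus the traceback back to the model and retry with its revision. The `ask_model_for_fix` helper is hypothetical, just a stand-in for whatever model you'd actually call.

```python
import subprocess
import sys

MAX_ATTEMPTS = 3

def ask_model_for_fix(source: str, error: str) -> str:
    """Hypothetical stand-in: send the current source plus the error text
    to your coding model of choice and return its revised version."""
    raise NotImplementedError("wire this up to a real model API")

def auto_debug(path: str) -> bool:
    """Run the script at `path`; on failure, feed the traceback back to
    the model and retry with its suggested fix. Returns True on success."""
    for _ in range(MAX_ATTEMPTS):
        result = subprocess.run(
            [sys.executable, path], capture_output=True, text=True
        )
        if result.returncode == 0:
            return True  # ran cleanly -- latent bugs may of course remain
        with open(path) as f:
            source = f.read()
        fixed = ask_model_for_fix(source, result.stderr)
        with open(path, "w") as f:
            f.write(fixed)
    return False
```

Note this only catches bugs that surface as errors at runtime; the latent kind, where the code runs but does the wrong thing, still needs a human (or a test) to notice.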