> But it seems people have warped “interpreted” to mean JIT to compensate for the advancements in scripting runtimes. That is a bastardisation of the term in my opinion.
Python's not JIT, either. It reads bytecode - which AFAIK is just the source code but tokenized - and it runs it, one operation at a time. It doesn't compile anything to native CPU instructions.
That’s the 2nd time you’ve posted that and it wasn’t right the first time you said it either.
CPython’s virtual machine is stack-based, so the byte code is more than just tokenised source code.
In fact there’d be very little point in just tokenising the source code and interpreting that, because it gives you no performance benefit over running straight off the source. Compiling to a stack machine, on the other hand, does allow you to make stronger assertions about the runtime.
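To make that concrete, here’s a quick sketch using nothing but the standard library - the disassembly CPython gives you is stack-machine operations, not a token stream:

```python
import dis

def add(a, b):
    return a + b

# CPython compiled this function to bytecode for its stack-based VM the
# moment the def statement was executed. The disassembly shows pushes and
# pops (LOAD_FAST, BINARY_ADD / BINARY_OP depending on version,
# RETURN_VALUE), nothing resembling tokenised source text.
dis.dis(add)
```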
One could argue that the byte code in the VM is interpreted, but one could also argue that the instructions in a Windows PE or Linux ELF are interpreted too. However, normal people don’t say that. Normal people define “interpreted” as languages that execute from source. CPython doesn’t do this: it compiles to byte code that is executed on a stack machine.
Which is why I keep saying the term “interpreted” is misused these days.
Or to put it another way: Visual Basic and Java behaved similarly in the 90s. They compiled to P-code/byte code that would execute inside a virtual machine, and at that time the p-code (as some called it - not to be confused with human-readable pseudocode, though “pseudo” is technically what the “P” stands for in “P-code”) was interpreted instruction by instruction inside a stack machine.
Those languages were not classed as “interpreted”.
The only distinction between them and CPython is that they were AOT and CPython is JIT. And now we are back to my point about how you’re conflating “interpretation” with “JIT”.
>The only distinction between them and CPython is that they were AOT and CPython is JIT. And now we are back to my point about how you’re conflating “interpretation” with “JIT”.
Talking about conflating though, AOT and JIT mean different things in a programming context...
I’m not conflating the terms AOT and JIT. I’m using examples of how AOT compilers work to illustrate that modern JIT compilers might have passes that are described as an interpreter, but that doesn’t make the language an interpreted language.
I.e. many languages are still called “interpreted” despite the fact that their compiler functions more or less the same as those of many “compiled” languages, except that rather than being invoked by the developer and the byte code shipped, it’s invoked by the user with the source shipped. But the underlying compiler tech is roughly the same (i.e. the language is compiled, not interpreted).
Thus the reason people call (for example) Python and JavaScript “interpreted” is outdated habit rather than technical accuracy.
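For what it’s worth, you can even invoke CPython’s compile step yourself, developer-side, much like an AOT build. A minimal sketch (“my_package” is just a placeholder directory for the example):

```python
import compileall

# Walk a package directory and byte-compile every module up front,
# writing the cached .pyc files under __pycache__ - the same work CPython
# would otherwise do on first import.
compileall.compile_dir("my_package", quiet=1)
```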
Edit:
Let’s phrase this a different way. The context is “what is an interpreted language?”
Python compiles to byte code that runs on a stack machine. That stack machine might be a VM that offers an abstraction between the host and Python, but nonetheless the byte code is still a new form of code. Much like you could compile C to machine code and you no longer have C. Or Nim to C. Or C to WASM. In every instance you’re compiling from one language to another (using the term “language” in a looser sense here).
Now you could argue that the byte code is an interpreted language, and in the case of CPython that is definitely true. But that doesn’t extend the definition backwards to Python.
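You can poke at that boundary directly: compile() hands back a code object whose raw bytecode is the thing CPython’s eval loop actually interprets, and it looks nothing like Python source. A quick sketch:

```python
# The source text is compiled to a code object; co_code holds the raw
# bytecode the CPython eval loop interprets - a different "language"
# from the Python source it came from.
code_obj = compile("x = 1 + 2", "<demo>", "exec")
print(type(code_obj))    # <class 'code'>
print(code_obj.co_code)  # opaque bytes, not tokenised source
exec(code_obj)           # the VM interprets the bytecode, not the text
```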
The reason I say the definition cannot be extended backwards is that we already have precedent for it not happening, with languages like Java (at least with regard to its 90s implementation - I understand things have evolved since, but I haven’t poked at the JVM internals for a while).
So what is the difference between Java and Python to make this weird double standard?
The difference is (or rather was) just that JIT languages like Python used to be fully interpreted and are thus lazily still referred to that way, whereas AOT languages like Java were often lumped in the same category as C (I’m not saying their compilation processes are equivalent, because clearly they’re not, but colloquially people too often lump them in the same group due to them both being AOT).
Hence my comparisons to some AOT languages when demonstrating how JIT compilers are similar. And hence my statement that, aside from shell scripting, no popular language is interpreted these days. It’s just too damn slow, and compilers are fast, so it makes logical sense to compile to byte code (even machine code in some instances) and have that generated code interpreted instead.
Personally (as someone who writes compilers for fun) I think this distinction is pretty obvious and very important to make. But it seems to have thrown a lot of people.
So to summarise: Python isn’t an interpreted language these days. Though its runtime does have an interpretation stage, it’s not interpreting Python source. And the same is true of some languages we don’t colloquially call “interpreted”.
That’s a property of the JIT compiler though, not a lack of compilation. You want to keep compile times low, so you only analyse functions on demand (and cache the byte code).
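A minimal sketch of the caching half of that, using CPython’s own machinery (hello_demo.py is a throwaway file created just for the example):

```python
import pathlib
import py_compile
import importlib.util

# Throwaway module purely for the demo.
src = pathlib.Path("hello_demo.py")
src.write_text("def greet():\n    return 'hi'\n")

# Byte-compile it the way CPython would on first import, and show where the
# cached bytecode lives; later runs load the .pyc rather than re-compiling
# the source.
pyc = py_compile.compile(str(src))
print(pyc)                                         # e.g. __pycache__/hello_demo.cpython-312.pyc
print(importlib.util.cache_from_source(str(src)))  # the cache path CPython itself would use
```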
If CPython behaved identically to Java’s compiler, people would moan about start-up times.
Some AOT languages can mimic this behaviour too with hot-loading code, though a lot of them might still perform some syntax analysis first, given that’s an expectation. (For what it’s worth, some “scripting languages” can do a complete check on the source, including unused functions - e.g. there’s an optional flag to do this in Perl 5.)
I will concede things are a lot more nuanced than I perhaps gave credit for, though.