
> Tooth's processors didn't have nearly enough RAM to run Lisp directly [1] so instead we used a custom-designed compiler written in Lisp to generate 6811 code.

Was thinking about this point in the 2002 article.

As a Python user, I could imagine a lot of pain going into "try and compile my Python code" (notably, hashing and operator overloading mean I might end up having to ship the entire Python object model in my binary). But perhaps Common Lisp avoids a lot of that mess by being "just" functions?




In Lisp you can write

    (compile-bunny 
      '(setq x (+ (aref y i) x)))
and it looks just like normal code, but really you are just passing a data structure to the compile-bunny function. The direct equivalent in Python is awful:

    compile_bunny(['setq', 'x', ['+', ['aref', 'y', 'i'], 'x']])
but you can build up a pretty decent EDSL in Python for this sort of thing with operator overloading, similar to what SymPy does:

    x.set(y[i] + x).compile()
That looks a lot more like normal Python code, but getting it working is a bigger hassle than in more conventional Lisps where you can just quote whatever S-expression. (Of course, your compiler has to handle each type of expression sooner or later, so this isn't as big a deal as it might sound.)

But neither one of these is compiling your Python code. Python's bytecode may give you a head start on that if you do want to do it.


> your compiler has to handle each type of expression sooner or later, so this isn't as big a deal as it might sound

It's a bigger deal than you think. Not only can you leverage the CL parser on the front end so that you can convert code to data simply by prefixing it with a quote, but you can also leverage the CL compiler on the back end to generate the final code. So you don't have to build a complete back end. All you have to do is translate your language into Common Lisp and then pass that on to the CL compiler. That is almost always orders of magnitude simpler than writing a complete back end. The only time it doesn't work is if your language has semantics that are fundamentally different from CL. (The classic example of this is call/cc.) But 99% of the time you don't need this, and so 99% of the time writing a compiler in CL is 1% of the work it otherwise would be.
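
To make that concrete, here's a minimal sketch (a made-up toy infix language, nothing to do with the actual project): the whole back end is just a translation into CL followed by a call to COMPILE.

    (defun infix->cl (form)
      ;; Translate a toy infix form like ((a + b) * 2) into prefix CL.
      (if (atom form)
          form
          (destructuring-bind (lhs op rhs) form
            (list op (infix->cl lhs) (infix->cl rhs)))))

    (defun compile-infix (args form)
      ;; Hand the translated form to the host CL compiler.
      (compile nil `(lambda ,args ,(infix->cl form))))

    ;; (funcall (compile-infix '(a b) '((a + b) * 2)) 3 4) => 14
All the instruction selection, register allocation, and optimization comes from the host compiler; the only code you wrote is the translation.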


This is awkward because I'm mansplaining your own project back to you, but, as I understand it, you were compiling to the 68(HC?)11, which your CL compiler back end didn't support, so I don't think it helped you generate the final code in that way. Unless I've misunderstood something fundamental?

Depending on how the compiler was structured, it could have helped you in some other way, like doing target-independent optimizations, if there was a way to get some kind of IR out of it instead of native code for the wrong CPU.

I agree that there are lots of cases where the approach you're describing works very well and has a huge payoff, though I've only done it with C, JS, and Lua compilers rather than CL compilers. Usually it's been more like 20% of the work rather than 1% of the work, but maybe that depends on the level of optimization you're hoping for.


You are correct. I was talking about writing compilers in CL in general, not specifically for the 6811.


Oh, I agree then.

It's a little more complicated than that, and I can't fully explain it in an HN comment. I'd have to give you a crash course in compiler design. But the TL;DR is that if you're building a custom language, Lisp gives you the AST (abstract syntax tree -- look it up if you don't already know) for free. You can also sometimes leverage parts of the Lisp compiler depending on what machine you're targeting, though for the 6811 that was not the case.
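
To make the "AST for free" part concrete (just a sketch, not code from the project), the quoted form is already the tree, so you can take it apart with ordinary list operations:

    (let ((form '(setq x (+ (aref y i) x))))
      (list (first form)            ; => SETQ
            (first (third form))    ; => +
            (second (third form)))) ; => (AREF Y I)
No lexer or parser to write -- the Lisp reader already did that for you.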


I imagine - though I don't know - that it might have some similarities with GOAL; see https://en.wikipedia.org/wiki/Game_Oriented_Assembly_Lisp and https://opengoal.dev/docs/intro/


yeah I got that you get the AST.... I suppose you get the macro capabilities "for free" as well so in a lot of cases the actual code you're looking to transform might be quite simple?

I was mostly worried that the underlying CL is so dynamic that it would be hard to compile things down cleanly, but I'm probably overthinking it.

This reminds me a bit of Emscripten's problem, though. Emscripten translates "machine code" (LLVM IR, I believe) to JavaScript, and to get good performance it has to try really hard to rediscover things like switch statements and for loops in the IR, since using those constructs in the JS is what leads to good perf.

Anyways, I suppose that the core Lisp semantics are going to be so small that it all works out quite well.


CL and Scheme mostly aren't that dynamic. CL has functions like map and elt which operate on runtime-determined sequence types, but it's more common to use things like mapcar and nth or aref which aren't polymorphic at all. Arithmetic is pretty polymorphic in both, which can be an efficiency problem.
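
A sketch of what that buys you (standard CL type declarations; how much a given compiler exploits them is implementation-dependent): with the array type declared, AREF needs no runtime dispatch, and you could add fixnum declarations to tame the polymorphic arithmetic too.

    (defun sum-vec (v)
      ;; With the array type declared, AREF needs no runtime dispatch.
      (declare (type (simple-array fixnum (*)) v))
      (let ((s 0))
        (dotimes (i (length v) s)
          (incf s (aref v i)))))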

A lot of things you'd do dynamically in Python (with operator overloading or the like) are done statically (with macros, as you allude) in more traditional Lisps like CL and Scheme.
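
For example (a sketch, reusing the hypothetical compile-bunny from upthread), a macro can do the quoting at expansion time, so there's no runtime dispatch the way there is with operator overloading:

    (defmacro bunny (form)
      ;; The macro sees the source form itself, before anything runs.
      `(compile-bunny ',form))

    ;; (bunny (setq x (+ (aref y i) x)))
    ;; expands to (COMPILE-BUNNY '(SETQ X (+ (AREF Y I) X)))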

The core Scheme semantics are pretty small. Common Lisp has significantly hairier core semantics.



