hwpythonner's comments | Hacker News

Compilers and optimizers are great tools for some use cases, but not all.

Just to name a few limitations:

- Many rely heavily on the CPython runtime, meaning garbage collection, interoperability, and object semantics are still governed by CPython’s model.

- They’re rarely designed with embedded or real-time use cases in mind: large binaries, non-deterministic execution (due to the underlying architecture or GC behavior), and limited control over timing.

If these solutions were truly turnkey and broadly capable, CPython wouldn't still dominate—and there’d be no reason for MicroPython to exist either.


PyXL is a bit more direct :)


PyXL deliberately avoids tying itself to Python’s high-level syntax or rapid surface changes.

The system compiles Python source to CPython bytecode, and then from bytecode to a hardware-friendly instruction set. Since it builds on bytecode rather than raw syntax, it's largely insulated from most language-level changes. The bytecode format evolves slowly, and updates typically mean handling a few new opcodes in the compiler, not reworking the hardware.
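As a concrete illustration of the first stage, CPython itself will show you the bytecode a toolchain like this consumes. The sketch below uses the standard dis module and covers only the source-to-bytecode step, not PyXL's bytecode-to-ISA translation:

    import dis

    # CPython compiles the source; the resulting opcode stream is the
    # stable surface a bytecode-level toolchain targets. (Exact opcode
    # names vary a little between CPython versions.)
    code = compile("x = a + b * 2", "<example>", "exec")
    for ins in dis.get_instructions(code):
        print(ins.opname, ins.argrepr)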

Long-term, the hardware ISA is designed to remain fixed, with most future updates handled entirely in the toolchain. That separation ensures PyXL can evolve with Python without needing silicon changes.


Fair point if you're looking at it through a strict compiler-theory lens, but just to clarify: when I say "runs Python directly," I mean there is no virtual machine or interpreter loop involved. The processor executes logic derived from Python bytecode instructions.

What gets executed is a direct mapping of Python semantics to hardware. In that sense, this is more “direct” than most systems running Python.

This phrasing is about conveying the architectural distinction: Python logic executed natively in hardware, not interpreted in software.
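To make that distinction concrete, here is a purely hypothetical sketch of what a bytecode-to-hardware lowering table could look like. The opcode names on the left are real CPython ones (3.10-era); the hardware mnemonics on the right are invented for illustration and are not PyXL's actual ISA:

    # Hypothetical lowering table: CPython opcodes -> fictional HW ops.
    BYTECODE_TO_HW = {
        "LOAD_FAST":    ["HW_PUSH_LOCAL"],  # push a local onto the stack
        "BINARY_ADD":   ["HW_ALU_ADD"],     # pop two operands, push sum
        "STORE_FAST":   ["HW_POP_LOCAL"],   # pop into a local slot
        "RETURN_VALUE": ["HW_RET"],         # pop result, return to caller
    }

    def lower(instructions):
        """Expand CPython instructions into (fictional) hardware ops."""
        for ins in instructions:
            # Anything unmapped would trap to a software handler.
            yield from BYTECODE_TO_HW.get(ins.opname, ["HW_TRAP"])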


Wouldn't an AoT Python-to-x86 compiler lead to a similar situation where the x86 processor would "run Python directly"?


Thank you so much — that really means a lot!

It's still early days and there’s a lot more work ahead, but I'm very excited about the possibilities.

I definitely see areas like embedded ML and TinyML as a natural fit — Python execution on low-power devices opens up a lot of doors that weren't practical before.


Great question!

You're right that it can definitely be faster — there's real room for optimization.

When I have time, I may write a blog post explaining where the cycles go, why it's different from raw assembler toggling, and how it could be improved.

Also, just to keep things in perspective, don't forget to compare apples to apples: on a Pyboard running MicroPython, a simple GPIO roundtrip takes about 14 microseconds. PyXL already achieves 480 nanoseconds, so it's a very different baseline.
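For a sense of what that MicroPython number measures, a roundtrip benchmark on a Pyboard might look like the sketch below. The pin names and wiring are assumptions (X1 driven as an output, read back on X2 through a jumper), and the actual benchmark behind the 14 µs figure may differ:

    import time
    from pyb import Pin  # Pyboard-specific GPIO module

    out_pin = Pin("X1", Pin.OUT_PP)  # push-pull output
    in_pin = Pin("X2", Pin.IN)       # input, jumpered to X1

    N = 1000
    t0 = time.ticks_us()
    for _ in range(N):
        out_pin.high()
        while not in_pin.value():    # wait to observe the rising edge
            pass
        out_pin.low()
        while in_pin.value():        # wait to observe the falling edge
            pass
    t1 = time.ticks_us()
    print("avg roundtrip:", time.ticks_diff(t1, t0) / (2 * N), "us")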

Thanks for raising it — it's a very good point.


Thank you!


No, I'm not paying for ModelSim. I've been using free tools like Icarus Verilog — it's been good enough for my needs so far. If I need more performance later, I might migrate to Verilator. I could also use Vivado's built-in XSim, but coming from a software background, I generally prefer Unix-style tools over heavier hardware IDEs.


Python’s execution model is already very stack-oriented — CPython bytecode operates by pushing and popping values almost constantly. Building PyXL as a stack machine made it much more natural to map Python semantics directly onto hardware, without forcing an unnatural register-based structure on it. It also avoids a lot of register allocation overhead (renaming and such).
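You can see this directly with the dis module; a minimal example (opcode names from CPython 3.10, newer versions rename some of them):

    import dis

    def add(a, b):
        return a + b

    dis.dis(add)
    # LOAD_FAST    a   <- push a onto the value stack
    # LOAD_FAST    b   <- push b
    # BINARY_ADD       <- pop both operands, push their sum
    # RETURN_VALUE     <- pop the result and return it
    #
    # A hardware stack machine can execute this shape as-is, with no
    # register allocation or renaming pass in between.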


Thanks so much — really appreciate it! Yes, it's been a one-person project so far — just a lot of spare time, persistence, and iteration.

