Hacker News

What kind of performance gains can you get from that? Seems to me that 99.9% of the compute happens remotely.


Probably minimal, it's mostly lower memory use from not having to boot a separate V8 engine with its own GC, like how Electron apps have to boot a separate browser engine. But CPU-wise it's not doing anything interesting on the client.

The neat thing for me is just not needing to set up a Node environment. You can copy the native binary and it should run as-is.


There are various solutions for turning node programs into standalone binaries, by more-or-less including Node within the binary. One I've used before with success is "pkg".


I would rather not have to package the entire runtime of a language just to run one program, hence Rust is a great choice for this.


Don’t have to wait 2 seconds for the runtime to start.
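One way to see that startup cost concretely is to time a bare runtime launch. A minimal sketch, using Python's own interpreter startup as a stand-in for Node's (absolute numbers will differ, but the shape of the overhead is the same):

```python
import subprocess
import sys
import time

# Time how long it takes to launch the interpreter, run an empty
# program, and exit: pure runtime startup overhead, no real work.
start = time.perf_counter()
subprocess.run([sys.executable, "-c", ""], check=True)
elapsed = time.perf_counter() - start

print(f"runtime startup: {elapsed * 1000:.1f} ms")
```

A compiled Rust binary skips this step entirely: there is no engine or GC to initialize before `main` runs.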


Yeah, they could probably get away with doing this in Go.


Sure, both languages are equally fit for this kind of task.


If you're parsing JSON or other serialization formats, I expect that can be nontrivial. Yes, it's dominated by the LLM call, but serialization is pure CPU work.

Also, ideally your lightweight client logic can run on a small device/server with bounded memory usage. If OpenAI spins up a server for each Codex query, the size of that server matters (at scale, that's cost), so shaving off MBs of overhead is worthwhile.
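To get a rough sense of that CPU cost, here is a sketch in Python with a synthetic payload (stdlib `json`; the exact numbers depend on machine and parser, but deserializing a few MB of JSON is measurable, non-I/O work):

```python
import json
import time

# Build a synthetic multi-MB JSON payload of the kind a client
# might receive from an API and have to deserialize.
payload = json.dumps([{"id": i, "text": "x" * 100} for i in range(50_000)])

start = time.perf_counter()
data = json.loads(payload)  # pure CPU: no I/O, no network
elapsed = time.perf_counter() - start

print(f"parsed {len(payload) / 1e6:.1f} MB in {elapsed * 1000:.1f} ms")
```

The same workload in a compiled language avoids interpreter overhead, though the bigger win claimed in the thread is memory, not parse speed.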


Probably uses less memory?

The big one is not having Node as a dependency. Performance, extensibility, and safety alone don't really warrant a rewrite.




