
Well, no language is perfect, but Nim can be used in almost every domain because of its compilation targets (C, C++, JS) and its fast compile times (who needs interpretation when compile times are that fast!):

* Shell scripting, though I still assume most people will just use Bash: https://github.com/Vindaar/shell

* Frontend: https://github.com/karaxnim/karax or you could bind to an existing JS library.

* Backend: For something Flask-like: https://github.com/dom96/jester or something with more defaults https://github.com/planety/prologue

* Scientific computing: the wonderful SciNim https://github.com/SciNim

* Blockchain: Status has some of the biggest Nim codebases currently in production https://github.com/status-im?q=&type=&language=nim&sort=

* Gamedev: Also used in production: https://github.com/pragmagic/godot-nim and due to easy C and C++ interop, you get access to a lot of gamedev libraries!

* Embedded: this is a domain I know very little about but for example https://github.com/elcritch/nesper or https://github.com/PMunch/badger for fun Nim+embedded stuff!
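For anyone who hasn't seen the multi-backend story in action, here's a minimal sketch (assumes a working Nim install; `hello.nim` is a made-up filename):

```shell
# a one-line program, purely for illustration
echo 'echo "hello from Nim"' > hello.nim

# compile through the C backend and run the native binary
nim c -r hello.nim

# emit JavaScript from the same source (the frontend use case)
nim js hello.nim
```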

Most of the disadvantages come from tooling and lack of $$$ support.



> who needs interpretation when compile times are that fast!

Well, interpretation is pretty useful for a REPL. And a REPL is not just useful to avoid compilation, but also as a way to explore a new API. And, most importantly, to preserve the results of long computations when you do not know yet what to do with it. If computing a value takes half an hour, you certainly don't want to recompute it each time you change something. Rather, you keep an open session, such as a REPL or a notebook, and keep computing with the already existing value


This is exactly right. What is the REPL story with Nim? Having used a REPL, I cannot even imagine doing research & analytics without one.

FWIW, this comparison between R, Pandas and Nim dataframes is quite encouraging: https://gist.github.com/Vindaar/6908c038707c7d8293049edb3d20...

This is one of the aspects that self-professed contenders to R/Python in data science often get wrong. The very bare minimum is a well-supported and thought-out dataframe library. Without that, the language is basically dead in the water. Nim seems to have a very well-thought-out API that also avoids many of the annoying aspects of Pandas (e.g. the huge waste coming from eagerly computing each vectorized operation into a separate array).


I did quench (most of) my thirst for a REPL by building a notebook system (plug): https://github.com/pietroppeter/nimib

Based on that, and using a book theme, the SciNim getting-started documentation is being built, e.g.: https://scinim.github.io/getting-started/basics/data_wrangli...


My statement was more of an exaggeration than an absolute truth. REPLs are really nice, but the REPL story with Nim is less nice.

There is https://github.com/inim-repl/INim and the built-in `nim secret`.

There is also a Jupyter kernel: https://github.com/stisa/jupyternim
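For anyone who wants to try those, a quick sketch of getting each going (assumes `nim` and `nimble` are on your PATH; `inim` is, to my knowledge, the name INim is published under on nimble):

```shell
# community REPL (history, highlighting), built on top of the compiler
nimble install inim
inim

# or the REPL that ships with the compiler itself (officially unsupported)
nim secret
```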


Actually, the bare minimum is a well-supported and centralised numeric library providing arrays, matrices, and the base tools.


Perhaps for some things.

Most of my work is time series analysis and I refuse to use an environment where samples are not explicitly labelled/timestamped and where the tooling does not support seamless operations that take this labeling into account.

So for my use case, a fully featured dataframe library is indeed a must.


See my comment from the other reply on this question for potential solutions, but as an FYI for those curious, Nim does come with a VM (NimScript) that comes in very handy for such purposes: https://nim-lang.org/docs/nims.html
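A tiny NimScript sketch of what that looks like (the filename `explore.nims` is made up; `nim e` evaluates the script in the compiler's VM, so no C compiler is invoked):

```shell
cat > explore.nims <<'EOF'
# plain Nim, interpreted by the compiler's VM rather than compiled
for i in 1 .. 3:
  echo "iteration ", i
EOF
nim e explore.nims
```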


> don't want to recompute it each time you change something

True... May I introduce you to the filesystem?


Wow, what a great invention I have been missing! You made my day! :-)


Had a project once where 70% or so of the 8-month runtime was de/serialization: an 800 GB or so data wad and 16 GB of RAM, all messily, multiply interlinked, and not even the indexes would fit into RAM. It sucked.

But the architecture that imposed meant we were surprisingly resilient to power outages.


I looked at the emitted JS when the last Nim story came out. It needs some work there. Lots of globals with names that would likely collide with other existing JS.


Yeah, we definitely need to resolve this and it sounds like a fun project! If you or someone else has the time for this I (and I'm sure the rest of the community too) would love to help lead you in the right direction :)


Also one of Ethereum's proof-of-stake clients is written in Nim: https://nimbus.team/docs/index.html



I haven’t used Nim outside of some hello-world toying a few years back. Is compiling actually that fast? I thought the Nim compiler compiled to C and then called gcc (or whatever system compiler); isn’t that slow in practice?

Any large Nim projects that anyone can point me to would be helpful too; I’ll give the compiler a shot later today!


The proof is in the link!

> Fast compile times: a full compiler rebuild takes ~12s (Rust: 15min, gcc: 30min+, clang: 1hr+, Go: 90s) [2].

The compiler is really big and self-hosted too.

For large Nim projects check out https://github.com/mratsim/Arraymancer or https://github.com/treeform/pixie (personal faves).


Thanks for the links, I’ll check those out.


When using Nim for "scripting" it is recommended to use TCC (the Tiny C Compiler) for rapid compilation. It is genuinely fast.
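A sketch of that workflow, assuming TCC is installed where Nim can find it (`script.nim` is a placeholder name):

```shell
# swap the C backend to the Tiny C Compiler, compile, and run in one go
nim c --cc:tcc -r script.nim
```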


* Backend: For something Jinja/Twig-like: https://github.com/enthus1ast/nimja



