Nim is everything Python and Go should be/want to be, as far as language features and semantics go (haha).
It has their easy-to-learn properties, but a better type system, and its generics and macros are arguably more useful and more usable than what either language offers.
It deserves far more attention than it currently gets.
I've been using nim for a few years. I wouldn't have made either design choice (I always do qualified imports; it's just a few extra characters of typing). That said, there is a reason these superficial things always come up in HN threads. If you haven't used a language before, syntax is the only thing you can talk about. It's the equivalent of low effort political banter and the dopamine hit you get from junk food or watching a tiktok. These things melted away after my first 30 minutes with the language, and that's about when the real work began.
This just feels like gatekeeping. I like nim, I've used it, talked to Dom and the author a few times in chat, filed some small bug reports. But even if I hadn't, I'd still hate the case insensitivity; I'm an experienced developer and I don't need to have used a language to know I will like or dislike some aspect of it.
To me it feels more like gate opening: if you feel discouraged from trying a language by this specific feature that you do not like, do not fear, come in and try it, since as far as we know nobody actually using the language has suffered from this feature.
I don't think Nim has anything that I'd call a "global namespace". By default importing a module will include its exported symbols into the local module, but that doesn't impact anything globally.
In my opinion, this is the correct default, at least for Nim. Operator overloading and UFCS (https://en.wikipedia.org/wiki/Uniform_Function_Call_Syntax) don't really work if I have to prefix everything with a module name, and the static nature of Nim means that I'll get a compiler error if there's any ambiguity. There are downsides as well, but language design is all about tradeoffs, and I think Nim got this one right.
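To make that concrete, here is a minimal sketch using nothing but std/strutils - the point being that a plain import plus UFCS lets these all be the same call:

    import std/strutils

    echo split("a b c", " ")           # @["a", "b", "c"]
    echo "a b c".split(" ")            # same proc, method-call style via UFCS
    echo "1 2 3".split(" ").join("+")  # chains read naturally: "1+2+3"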
Regarding the case insensitivity, I was initially put off by this as well, but in 2 years of using Nim as my primary language, I have never, ever, encountered a real-life issue with it. I've never seen a Nim code base that uses mixed casing, and never encountered or heard of a bug caused by this behaviour. However, it has allowed my own code bases to stay 100% consistent, regardless of the code style of my dependencies, even when those dependencies are written in other languages. Contrast this with Python where the `logging` module uses different casing than everything else, so you're forced to use an inconsistent style if you want to consume it. This type of thing is a non-issue in Nim. I think a case sensitive Nim would still be a fine language, but in my experience the pros of being mostly insensitive outweigh the cons.
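For the curious, the rule is narrower than "case insensitive" sounds: the first character is case sensitive, and the rest compares ignoring case and underscores. A quick sketch:

    proc getAttr(name: string): string = "attr: " & name

    # all of these resolve to the same proc
    echo getAttr("x")
    echo get_attr("x")
    echo getattr("x")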
In the end you may still disagree with both of these decisions, which is fine. Just understand that it's a nuanced discussion, and there's some solid reasoning behind their choices.
Implicit imports on the global namespaces? What do you mean?
Case insensitivity is a somewhat complex topic in Nim, as there are some places where the case actually matters, plus there are some more rules for equivalence of identifiers (related to `_` IIRC). In any case, it was never a problem for me in practice; nimsuggest works quite well.
> Implicit imports on the global namespaces? What do you mean?
Not OP, but they are referring to `import std/strformat` (from the homepage). That just imported things into the global namespace. Maybe that isn't required and it's just for short examples, I don't know.
Go and Python can both import into the global namespace too (`from thing import *` and `import . somepackage`), but they are pretty frowned upon in most projects.
> Not OP, but they are referring to `import std/strformat` (from homepage).
> That just imported things into the global namespace.
That's what tripped me up: there's definitely no such thing as importing into GLOBAL namespace. That's an import into a top-level (true) namespace of a MODULE. In Python, if you put something into the dict returned from `globals()`, you are indeed importing into global namespace, but I've never seen it actually done. Anyway, that might seem like nitpicking, but it's an important distinction. [EDIT: C `#include`s are an example of actually importing into global namespace]
With that out of the way: the wildcard imports are frowned upon for a reason. What reason? Basically, it's hard to tell a) what identifiers were imported, b) if any of them are still used in the module (ie. if it's safe to remove the import). With multiple wildcard imports in one file you also get c) hard to tell where a given identifier comes from.
Now, none of these problems apply to Nim. Nim is statically typed. The compiler and the tooling knows exactly which identifier comes from where. If you delete all the places you used identifiers from a wildcard import, the compiler and the tooling will tell you that it's safe to delete the import. Another problem with wildcard imports is that the imported modules can change, leading to disasters like silently shadowing an identifier imported from somewhere else. Again, this cannot happen in Nim - it simply won't compile.
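A sketch of the "it simply won't compile" part (module and proc names made up, error message paraphrased):

    # a.nim
    proc greet*(): string = "hello from a"

    # b.nim
    proc greet*(): string = "hello from b"

    # main.nim
    import a, b
    # echo greet()   # error: ambiguous call; both a.greet and b.greet match
    echo a.greet()   # qualifying the call resolves it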
Every single language feature has its pros and cons, but you have to remember to weigh them only in the particular context they're used in. What I mean is that wildcard imports in Python and Nim are very, very different beasts, and shouldn't be directly compared, if at all. It would make more sense to compare Nim to Elixir or Scala in this regard.
Note: obviously, Nim also has selective imports and whole module imports, too.
Nim having selective imports solves any annoyances I would have.
But you mentioned all the reasons it was ok in Nim, and the answer is always that the compiler knows. That isn't why I personally dislike this type of import. I dislike it as a user. I find it super annoying not knowing where an identifier is coming from.
I love having ‘fmt.Printf()’ instead of ‘Printf()’. Trivial example I know. I’m sure everything becomes fine once you are familiar with the ecosystem though.
or, if you wanted to keep uniform call syntax, you'd have to invent some new syntax:
"a string".split<strutils>(" ").join<sequtils>("\n")
Which also doesn't look very appealing.
So, there are trade-offs here, and you obviously might not like the choices made by Nim, which is fine! But, there are reasons for the design decisions made by the language implementers, and they are almost never predicated on a single feature: all the features of a language interact with each other, sometimes in very strange and unpredictable ways. So it makes sense to consider the features you're interested in within their proper context :)
[1] `import modname` and `from modname import fun1, fun2`
I don't mind the comparison but do wish the comparison wouldn't so often be "it's a better Python."
I've been programming Python 14+ years and I've dabbled in Nim and I agree it's a very different language. Yes, there are similarities, but my expectations about it being a lot like Python from the community and docs actually led to some significant frustration and discouragement.
"Inspired by"..."borrows concepts from" are better descriptors IMO. But it would be better if an unqualified "it's like Python" disappeared.
As much as I hate it for correctness, 2 is better than 3 indeed. No sane developer encodes meaning in the case of variable names; in the overwhelming majority of cases it's a typo.
In the C world ALL_CAPS very often signify preprocessor macro identifiers (or some other "class" of identifier), but, yeah..they may all be "insane". ;-)
OMG, what if that's why C went with significant case? I mean, it's fine to write "#import", so why didn't K&R use a special syntax for macros, like $MACRO or something, and leave the uppercase as a convention?
I am no C historian, but I believe part of the idea was to be able to have macro wrappers around proper language symbols like "#define sin(x) real_sin(x, 1e-7)" or some such, for example. The "warning character" (what you used `$` for) was thus only for introduction (#define) not for reference. Whether the initial intent or not, this idea certainly came to become an application, also invoked via compiler command line "-Dsin=real_sin" style definition. With this you want the case conventions/sensitivity to be the same in the preproc/macro language as the base language.
FWIW, I do think that the late 1960s saw a broad "new prog lang" evolution away from case insensitive (often ALL CAPS due to low resolution dot matrix printers/display devices) stylistic tendencies in, e.g. LISP and FORTRAN, towards case sensitivity and lowercase. So, I suspect this was the salient driver - "We have lowercase now/text is more legible! Let's use it!". (Yes, mechanical typewriters/typesetting had it for decades to centuries, but programming was not done on those, but with printers and punch cards and just starting to be on CRTs.) There are analogies with Unicode in PLs these days (though there are obviously also other motivations these days/decades with the compute world i18n).
Those are just the two languages I spend the most time in, but I'm sure it's trivial to find numerous examples of case carrying meaning in case-sensitive languages. Especially ones which encourage camel casing.
The options are `nim c --styleCheck:hint --styleCheck:usages` (you need both). Though not on by default, you can put them in your config.nims or nim.cfg to make them the default for your code.
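For a per-project default, something like this in config.nims should work (a sketch, assuming switch() mirrors the command-line flags one to one, which is how I've used it):

    # config.nims
    switch("styleCheck", "hint")
    switch("styleCheck", "usages")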
The global namespacing is what allows UFCS to work. You can rewrite parseInt("6") to "6".parseInt, but you can't rewrite strutils.parseInt("6") to anything.
Yes, you have listed the two things that have stopped me from giving Nim a chance. I especially dislike the global namespaces, it makes it very hard to understand where a function is defined.
As someone learning programming, should I try nim for my first compiled language, instead of C++? I know it's likely harder to find answers in the Internet, but at the same time it looks easier, coming from Python and JS.
If nim picks up pace in a few years and I already have a sense of it, it may be a good for prospects too.
I'd recommend it. The syntax should make you feel at home if you know python, though the type system (and std lib) does make it feel like a very different language once you actually use it.
One advantage over C++ is that you don't need to learn make, IDEs or a complex build system. The nim compiler and the nimble package manager just take care of building for you. "nim c main.nim" and off you go, ending up with a single executable, no matter how many imports, modules, etc you have.
I do recommend this as a gentle introduction: https://nim-by-example.github.io/ and recommend you use VS Code with the Nim extension by saem.
Not OP but will definitely recommend Nim if you are intimidated by C++. Nim first compiles to C and then that C code is compiled to binary. If you know python already, you will feel right at home and you get better performance and a great type system and metaprogramming. But the eco-system is definitely not as mature as Python or JS.
The documentation is good enough that I don't need to google most of the stuff while writing Nim, having types definitely help with that.
If you already know python and JS I think you probably won't have too much trouble with Nim, but why not add something new like memory management to learn about? Rust is complicated, but it has training wheels, and its wasm support would integrate with your JS knowledge. Alternatively modern C is still used everywhere and will teach you a lot about hardware.
> but why not add something new like memory management to learn about?
You can learn that with Nim too. It supports manual memory management like C. The new ARC/ORC GC also works more like modern C++ than a traditional GC.
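A minimal sketch of the manual route, using plain alloc/dealloc from system:

    # no GC involvement for this memory
    let p = cast[ptr int](alloc(sizeof(int)))
    p[] = 42
    echo p[]
    dealloc(p)

    # or the typed helper
    let q = create(int)   # zero-initialised ptr int
    q[] = 7
    echo q[]
    dealloc(q)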
> its wasm support would integrate with your JS knowledge
Nim can also compile to JS. Or you could compile it first to C and then compile that to wasm, although that's not supported out of the box.
Those are good points. I have found that languages that don't force me to use a feature don't teach as well as ones that do. I usually don't want to use those languages later though (Java and OOP, or Haskell and functional) because I like having options. Nim gives you lots of options, it sounds like.
Go yes. Rust? I wouldn’t say that Rust is easier although I may be biased with my long C/C++ background. The ownership model is a pretty complex thing to learn. I guess maybe if you Box a lot?
It does have an easier on-ramp story for getting started/adding dependencies and that may be important for getting started.
> The ownership model is a pretty complex thing to learn
It is, but if you want to write C or C++ that doesn't crash, or just silently work incorrectly then you have to internalise these rules anyway. And having a compiler that gives you a helpful error message when you get it wrong is much easier than getting a segfault at runtime that may not even occur in a proximate part of the code. Let me put it this way: in 5 years of using Rust I am yet to need a debugger, and only need even println debugging rarely.
That’s not strictly true IMO. There are things that are trivially expressible in C++ that require a lot of complexity on the Rust side to prove to the compiler the code is safe.
I don’t disagree with you on the safety aspects* and 100% real production code should really be starting in Rust. From a learning perspective though… not as sold yet. There’s a lot of things nicer (the stdlib is 100x better and more ergonomic and more pythonic in terms of “batteries included”). There’s a different set of edges though and none of the ownership concepts you learn really transfer anywhere else so you’re learning Rust’isms but not general systems programming things (just like goroutines teach you Go’isms and not what high performance thread safety means).
* The debugger claim feels specious because segfaults aren’t the only source of bugs that need debugging. How do you deal with an unexpected panic or logic bugs?
As someone who properly learned C++ only some years ago, I can confidently state that Rust is harder to learn than C++. You can effectively code in C++ by learning only 20% of the language, but Rust has a far larger mental overhead to start coding. You have to do everything "The Rust Special Sauce Way".
Rust is easier than C/C++, but not at first. After you learn the semantics of it, it's very smooth sailing (at least until you get into the most exotic needs, like building your own async runtime).
I mean perhaps if you’re writing 20 line toy programs, but to do anything non-trivial in either language you’re going to want to use a library, and that’s already hard in C++
I would recommend just vanilla C for maximum learning value. Learning about stack vs heap, pointers, memory allocation, etc. lets you learn more about what the computer is actually doing. It will make the computer seem less like a magic box, and you will see the things other languages abstract away.
I sincerely hope Nim will pick up more steam. I find it quite enjoyable and a good jack of all trades. You can literally write anything in Nim, in a reasonable amount of time while having decent performance if you have good libraries, frameworks and documentation.
Something I'm excited about: v1.6.2 integrates support for (not yet released) Nimble[1] v0.14, which will introduce project lockfiles. I've had terrible experiences with lockfiles in JS land, but they are sorely needed for Nim projects as (fingers crossed) they'll allow for reproducible builds without having to resort to the nimbus-build-system[2]. The latter isn't completely horrible — a lot of much appreciated hard work has gone into it, and it's been a real workhorse — but some days it feels like a big ball and chain.
I'll be much happier when I can cruise along with choosenim[3] and lockfiles and not have to worry about Makefile + submodules shenanigans.
I've vaguely heard of Nim in the past and for some reason always thought it was some limited scripting language. But this looks like a pretty serious candidate for a nice backend language.
I'm getting tired of how cumbersome Go is and have had my eye on Crystal. Anyone have any insight on how Nim stacks up to Crystal?
I have written toy programs in Nim and Crystal, and the Nim experience is much more polished.
The Nim compiler is as fast as the Crystal compiler is slow!
I think Crystal is more enjoyable to use, personally. I am not a Ruby programmer, but Crystal just somehow feels fun to use. Nim by contrast feels very utilitarian, and frankly more practical.
I found the Nim language documentation better than Crystal, the Crystal standard library docs are just as good as Nim.
Nim overall feels like an "early 1.0-ish" language. Crystal feels like they went to 1.0 much too early. I would probably consider using Nim in a medium-sized application for work, whereas I wouldn't feel comfortable with Crystal for more than a personal hobby project.
They do have some things in common (macros, C interop, static types), but aren't really similar languages. I used to lump them together in my head as "those two new languages", but I don't anymore after trying them both.
If you are checking out new-ish languages, I hear Kotlin has some good design decisions that people appreciate (but that the tooling is pretty bad outside of Jetbrains IDEs).
In case you want a quick example of a backend application in Nim, I've written a small post[1] describing how to create a room based chat using it and HTMX. In total, it's around 70 lines of code (though it doesn't cover more advanced use cases, like username collision). I too have moved from Go to Nim, the syntax and ease in which I can turn a feature into code has been much better.
Genuinely curious - why should I care about Nim? There are already a plethora of general purpose languages (both compiled and interpreted). Also, is anyone using Nim today? What's the adoption?
For me it's the intersection of performance like Rust, the readability like Python, the ability to generate native executables without dependencies like Go and a very cool type system.
All while delivering fast compile speeds!
Last time I tried nimlsp was in June-ish, using neovim.
It was an okay experience, but things broke down quickly when using macros and I got red squigglies everywhere because things rely on a certain setting. Compared to the VSCode experience, it's really lacking though.
It's certainly been getting better and the focus is also on tooling, so I'm hopeful!
I've just looked at some benchmarks, and even though Nim claims C-like performance, that never seems to be confirmed by independent tests. It is usually a bit behind C / C++ / Rust / Crystal, and roughly on-par with Golang.
Fast enough for sure, but "half the speed of C" would be more honest it seems?
I suppose the makers of Nim named that flag for a reason - giving up memory safety by disabling bounds/overflow checks should never be the default for networked software in a production setting, so benchmarking in that mode would paint an unrealistic picture.
What you want are comparable compiler flags across languages, say "optimized for performance, yet retaining safety" and "go as fast as possible and disable all brakes". Which, to be fair, is the default for C anyway, but this is not a desirable default. I'm sure you can similarly game Rust benchmarks by using "unsafe".
Rust doesn’t have a compiler flag for this, instead it has separate functions that don’t do the checking. E.g. Vec has a .get_unchecked function.
Such functions can only be used in an unsafe block, and should only be used if bounds checking is handled elsewhere such that it would be impossible to cause an out of bounds error at runtime.
It’s also often possible to get rust to elide bounds checks by using iterators, which often don’t need them as they know the length up front.
If you're going into production with Nim code and execution speed is a top concern, you're probably going to remove all the safety belts after thoroughly testing, valgrind'ing, etc., i.e. you're probably going to opt for -d:danger instead of -d:release. But maybe not, depends on the project I guess.
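i.e. the two modes in question:

    # keeps bounds/overflow checks, full optimisations
    nim c -d:release main.nim

    # additionally disables runtime checks ("no safety belts")
    nim c -d:danger main.nim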
Which benchmarks? I play the language benchmark game sometimes and I can always get within 10% of the fastest contender. Beating the fastest contender, though, is rarely possible without using the unsafe subset of Nim, hence `-d:danger`
I've dug some more, and indeed in numeric tests Nim is often close to C. It's hard to find good data though, with documented compiler flags and recent versions.
it depends a lot on where you are coming from. If you are a Python developer/user (without any other information it is possibly the most likely case nowadays), you might find it solves a lot of current Python pain points (speed, portability, lack of typing, package system) and it has some added bonuses (great macro system, compiles to JS, works great for embedded).
If you are not a Python developer/user but you are somehow interested in new languages like Rust, Julia or Zig (you might be both), you might want to hear a different take on how Nim is also able to solve some of the problems they solve (a better C++, a better Python/R/Matlab for scientific computing, a better C - not that those languages reduce to those aspects...).
I think that what is happening now with Rust, Julia and Zig is great and I am happy that many people are looking at those languages and growing their ecosystem and their communities. Part of the drive of people there is to be able to make a significant improvement in the evolution of those languages (harder to do that in C, C++, Java, Python, C#, Swift, ...). I just happen to particularly like Nim and love being involved in it so far.
A recent article that explains the philosophy of Nim is the Zen of Nim by Araq (Nim's BDFL) [0]. Nim has a universal vision in the sense that it can really be used for everything (from kernel programming to web development, from scientific computing to game development, from embedded to developing compilers and interpreters, from short throwaway scripts to critical infrastructure). That doesn't mean it is necessarily for _everyone_. Taste varies a lot between people, and that is a good thing.
Several years back, I wanted to know what the hype was about with regard to Rust and started looking into Rust a bit. In the process I also stumbled across Nim and decided that Nim suited those needs that I might have had for Rust even better than Rust did.
The killer feature for me is that it transpiles both to C and to JavaScript and I like the clean Python-like syntax.
I now use Nim whenever I write a decent chunk of "pure logic". By that, I mean something that does something complex and useful without leaving the confines of its own codebase all too much. In those cases it's nice to have the option of portability between the C ecosystem and JavaScript ecosystem.
When I write "glue code" (and, unfortunately, most code that most people write falls into that category) where I basically just put bits of language ecosystem together in a trivial way to achieve something useful, then I use Python because for such tasks the sheer size of the language ecosystem is king.
Araq created Nim because he wanted to. Others use it because they want to. Why does it need a raison d'être ?
What it has: garbage collection with the option to turn it off, small binaries, compiles to C (and other languages) as an intermediate step so it's easy to integrate into existing codebases, it's very fast without needing much trickery, it has all the features everyone always asks for with more being added all the time, etc...
Is it better than Rust, C++, Go, Java, whatever? Probably not in a strict sense. But it's pleasant to write plus above features. It's like Pascal meets Python meets Go but compiles to C and JS.
Agreed. First world problem, perhaps, but I've tried a couple of times to get into nimlang, and this feature is such an anti-pattern (anti-feature) that it drove me crazy. I could not easily and reliably grep/search anything. Not to mention that reading code requires extra mental overhead (especially being new to the language), since `getAttr`, `get_attr`, etc., are actually the same thing.
Why this was implemented into the language itself, instead of left as a suggestion or a standard is beyond me.
quick edit: also, everything is imported globally (if that's the right term? like in python `from package import *`). So when you see function call, you need to look it up all the time where it comes from.
This complaint comes from people who haven't used the language.
> Not to mention that reading code requires extra mental overhead (especially being new to the language), that `getAttr`, `get_attr`, etc., are actually the same thing.
On the contrary, the language prevents confusion due to mixing getAttr and get_attr in the same codebase and bugs from using the wrong one.
Unsurprisingly, many safety-critical environments have policies to enforce consistent naming styles.
The linter will convert both to "getAttr" and the compiler will complain if the user tries to define the same proc twice.
There are two kinds of case-insensitivity in identifiers. In either case, identifier cases (and here, also underscores) are normalized, but case-agnostic languages would allow any mix of them while case-pedantic languages would disallow any pair of identifiers normalizing into the same name (they may still give helpful errors based on that normalized name though). I don't think case-agnostic languages provide any value not already provided by case-pedantic languages.
It is okay to have a feature (NOT necessarily this feature) that translates other conventions to a single consistent convention; for example a language may convert a name "foo_bar" into "fooBar" when used in C bindings, preferably with an escape hatch. However case-agnostic languages do not try to do that, they give zero indication for what convention to use (cf. function names in PHP, which is a horrible mess). No single convention is better than others, but a single consistent convention does matter.
My personal blocker is that identifiers are all imported globally by convention, so when you see that there is a call to a method called "get", you have to get to the top of the file or mouse over the call to see what lib it is from. A "get" from the http lib is not the same as a "get" from the kv store lib.
There is some logic as to why that is. Here [1] is an explanation for why it makes sense but the tldr is that you don't want to be manually importing functions such as `$` and `+`. In languages like Python, those are defined as methods on the object being imported (e.g. `.__str__()`) so they come along for free. Not so in Nim. If there's a conflict (same name, same signature), the compiler will warn you but it's extremely rare.
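A sketch of what that looks like in practice (hypothetical module/type names):

    # vec.nim
    type Vec* = object
      x*, y*: float

    proc `$`*(v: Vec): string = "(" & $v.x & ", " & $v.y & ")"
    proc `+`*(a, b: Vec): Vec = Vec(x: a.x + b.x, y: a.y + b.y)

    # main.nim
    import vec
    let v = Vec(x: 1.0, y: 2.0) + Vec(x: 3.0, y: 4.0)
    echo v   # echo finds `$` for Vec because the plain import brought it into scope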
Thank you for the link but it doesn't address the issue I have. It's not about types, or about the compiler being "unsure". It's about me, as a developer, reading code someone else wrote, not knowing directly what package a call is from. I need to leave my current context to have the answer.
I can do `mypackage.mymethod` but it will only be in my own code, because it's not the convention
There are plenty of cases in Python and similar languages where it's not clear where a method is defined, consider `myClassInstance.myMethod`, how do you find its definition? You do not immediately know which class it belongs to, nor where that class is defined. This is especially the case when you've got classes inheriting from multiple levels of other classes.
To put things in context, I don't come from a Python background but from a Go background, where methods are always called with their package (unless it's in the current package). I got used to it because it makes the context clear.
Ah that makes sense. I agree with you; I’m not a huge fan of trying to infer where the types came from myself either when reading code on GitHub since it doesn’t have the inference that my IDE does.
This is quite bad. Relying on uppercase/lowercase equivalences in a Unicode world is by definition a code smell, even if the comparison is forced to be ASCII-only. This whole ordeal causes all sorts of issues once Unicode letters are allowed, because they will pass through `toLowerAscii` untouched, and that is bound to cause confusion or to force people to avoid using Unicode in identifiers altogether.
Case insensitivity is a Western-only concept that should die ASAP, it's immensely complicated to pull off right and it opens a massive can of worms that makes no sense (see the Turkey Test for more about this).
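To make the toLowerAscii point concrete (std/strutils):

    import std/strutils

    echo toLowerAscii("MELA")  # "mela" - plain ASCII folds as expected
    echo toLowerAscii("TÈ")    # "tÈ"   - the non-ASCII È passes through untouched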
Keywords are in English, and 99% of identifiers in all code I've met are too, even when comments are in French or Russian.
Pascal and old basic (among other languages) have been case insensitive for decades, and that has not been a problem.
In UI, you are by all means correct. But code is a formal language that happens to be expressed in Latin letters. APL is the only real language that doesn’t impose a western character set.
> Pascal and old basic (among other languages) have been case insensitive for decades
... because back then Unicode was still a pipe dream in the mind of some visionary. Everything used 8 bit encoding, and everyone assumed ASCII or at least something compatible with the 7 bit subset of ASCII.
Nowadays files are formatted in UTF-8, and most modern languages actually fully support UTF-8 identifiers. Nim itself supports UTF-8 "letters" in identifiers, and what is "upper case" or "lower case" 100% depends on the current locale. Restricting your case normalization logic to ASCII is __really bad__, because it basically means that non-Latin letters in identifiers won't be normalized, with possibly unexpected consequences.
> APL is the only real language that doesn’t impose a western character set.
Rust, Go, Swift, Python... and the list goes on. Even C++ can optionally support Unicode in identifiers (for instance, Clang and GCC do indeed support things like `constexpr auto 黒 { "lol" };`).
> ... because back then Unicode was still a pipe dream in the mind of some visionary. Everything used 8 bit encoding, and everyone assumed ASCII or at least something compatible with the 7 bit subset of ASCII.
They could still have been case sensitive (C was), so I don't understand how that's relevant to the idea that "case insensitivity is a problem".
> Rust, Go, Swift, Python
All of these languages impose ASCII for their keywords and directives. They allow you to use other characters for identifiers, but impose ASCII in everything that has pre-defined semantics. Original APL is the only "real"/practical language that I'm aware of that gave up the "western centric view" of the world to the point that it doesn't have a single English keyword. (Brainfuck, etc. exist as well, but ....)
And they all impose a left-to-right reading order, which is just as western-centric. Arabic/Farsi/Hebrew go right-to-left, and there are languages that can also go top-to-bottom.
I think the outrage about "western centrism" is misguided. This is a formal system, and just like math, it reflects some history by using latin letters and left-to-right for the predefined symbols, and even preferred use of latin characters in identifiers.
> Nim itself supports UTF-8 "letters" in identifiers, and what is "upper case" or "lower case" 100% depends from the current locale.
If that's true, that may be a problem. I'll look into that, thanks for pointing out - from memory, Nim only folds the lower 7-bit by a 32 difference in ascii code, so it is well defined regardless of locale, but I'll check.
The whole idea of utf-8 in identifiers is a minefield, whether you fold case or not; e.g:
"Εхаmрⅼе" and "Example" have no single letter in common (I chose them that way using[0]) and no language that allows utf-8 identifiers is going to warn you about that.
> They could still have been case sensitive (C was), so I don't understand how that's relevant to the idea that "case insensitivity is a problem".
The point here is that case insensitivity is only a viable option if you severely limit the encoding allowed in whatever you are using - be it a programming language, filesystem, etc. If the encoding of your files is basically akin to ASCII or ISO-whatever (which was what BASIC and Pascal used back in the day), then case insensitivity is trivial and safe.
This whole thing breaks apart as soon as you enter a Unicode world and start accepting identifiers containing more than ASCII, and then the whole concept of "case insensitive" becomes obsolete and outright wrong.
The Unicode equivalent of "case insensitive" is Normalization [0] and it's a big heck of a minefield because it is defined depending on the locale in use. For instance, "FILE.TXT" and "file.txt" are to be considered equivalent under en_US, but not under tr_TR, where the lower case version of "FILE.TXT" is "fıle.txt" and the upper case version of "file.txt" is "FİLE.TXT". This means that normalizing strings can lead to unexpected results depending on the locale, which is especially problematic with filesystems (where a path may exist or not depending on the locale).
> Nim only folds the lower 7-bit by a 32 difference in ascii code, so it is well defined regardless of locale
yes, it is well defined but allowing the entirety of the Unicode letters also means that identifiers may contain glyphs from alphabets that have separate cases, chiefly Greek and Russian, or even accented letters such as `è` or `ö`. Case insensitivity instead of proper normalization makes them potentially confusing, and quite breaks the intent behind allowing Unicode identifiers by making non-US locales second class citizens.
IMHO it is arguably very confusing to non-English speakers that 'mela' is equivalent to 'MELA' but 'tè' isn't equivalent to 'TÈ' while 'Tè' is. It basically means you have to remember what letters are ASCII and what are not, which makes the whole "case insensitive" a potential source of confusion.
I think it is safe to say that in 2021 case insensitivity is an obsolete concept and an obstacle to proper internationalization. Case insensitivity only really works on legacy encodings and with the basic Latin alphabet, and you can rest assured it will be almost always improperly implemented anyway.
I understand your point, but still disagree with it. As I see it, the real problem is unicode identifiers, as I demonstrated with "Example" above, and as follows from your demonstrations as well. Unlike the thousands of unicode characters - which are unlikely to all be familiar to any single person, and whose meaning and behaviour (casing, conjugation, pre-joined pairs, precomposed versions, etc.) differ between cultures -
the ASCII case folding, as employed by Nim and Pascal, refers to 26 specific, well-known characters. It's a non-issue.
Indeed I too find this off-putting. I understand the rationale behind it, but developers are used to extreme attention to detail, and this discards an important aspect of detail in the important area of naming.
>It allows programmers to mostly use their own preferred spelling style, be it humpStyle or snake_style
It's interesting that some languages, like Go, are specifically designed to avoid it (there's a built-in formatting tool for a single coding convention), while in other languages having a zoo of different styles is viewed as a great idea and the language is designed for it. Or is it about linking with existing C libraries? In that case, I'd introduce some sort of name mapping via attributes/annotations as a whitelist, instead of allowing this behavior by default.
I personally still use Python because I miss list and dict comprehensions.
I know there is a `collect` macro in `sugar` module but it is nowhere close to the python comprehensions. The code is too verbose and basically is just the same multiline for loop :-(
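For reference, the `collect` version I mean (a rough sketch with std/sugar):

    import std/sugar

    # Python: [x*x for x in range(10) if x % 2 == 0]
    let squares = collect(newSeq):
      for x in 0 ..< 10:
        if x mod 2 == 0: x * x

    echo squares   # @[0, 4, 16, 36, 64]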
Nim seems to be targeting the space between Python and Go, which is wide open for disruption. (Also touches at the edges of managed memory systems-y light languages like Java/C#.)
Python is a dynamically typed cowboy land and is painfully slow for many applications. People want types, speed, and fixes for decades of baggage, but they like Python ergonomics.
Go until recently lacked generics (it still doesn't have full generics), has bad error handling, and other weird features.
Rust and Swift also seem to be interested in this space. Rust can be heavy. Swift too Apple-centric.
Nim is doing a good job and is growing at a steady clip.
The language that is missing is a simple language that can work well with C or C++, compiles fast, is easy to read, has a good lib, etc.
There are not enough non-GC languages that are statically compiled.
I really, really want a pythonic, statically compiled language. I don't want templates, OOP, abstract stuff, just a C-like language, more readable, with good syntactic sugar. My dream is that there would be as much time, money and effort spent on Python as was spent on JS engines like v8 and SpiderMonkey.
There are barriers that make python not really viable everywhere, and I wish it would not be the case.
I always thought of Python as being a massive language. If it was simple, it should be easy to compile and optimize.
I mean, I would feel like an idiot when someone would point out a feature that makes Python code faster. Python classes are quite massive and complicated (with a bunch of tweaks one can do through attributes).
> Genuinely curious - why should I care about Nim?
You definitely shouldn't. If you need a reason to care, then Nim is simply not for you. You won't benefit from Nim, and Nim won't benefit from you. It's best to agree to disagree and walk away from each other (assuming Nim can walk).
EDIT: I got downvoted a bit here, probably because the above seemed rude and/or dismissive? If so, sorry, that wasn't my intention. What I meant to say is that with languages like Nim it doesn't make sense to be interested in them if you're not already interested in programming languages. It'll be another 10-20 years before Nim becomes something the general populace of programmers should (or, if we're lucky, would have to) care about. So if you don't have a particular reason to be interested in Nim, chances are you won't get such a reason from anything that can be said about Nim at this time.
Basically, asking the quoted question already means that there's nothing you'd care about in Nim.
I think others have covered this question well, but for my $0.02 - I've found it easy to write code in Nim that's half the size of the equivalent Java/Node/etc. and easily outperforms it (with increased predictability, which is the big + for me).
The default runtime also doesn't have stop the world pauses and uses a similar message passing mechanism to Go.
It’s just subtly different than anything else out there. It for the most part feels like a Python-like language, but compiles to C so is very fast.
It also has one of the better macro systems out there. I was interested in developing a programming language with macros, and it has a ton of documentation about it. It’s really well thought out.
I’m not saying to run a business on it, but there are lots of interesting things out there. Language design is still a very active area of innovation.
Of the current crop of new(ish) systems languages, it’s the one that feels most fun to me. It reminds me of Ruby, not necessarily technically (obviously there’s Crystal for that), but in the sense that it’s a breath of fresh air coming from more austere languages.
I wouldn’t make a career or company bet on it (compared to Rust which I think is a very sensible choice) but I’m always keen to noodle away at new hobby projects in Nim.
They kind of occupy two different niches, IMHO. I see Nim more as a competitor to Go, while Zig mostly wants to replace C and somewhat competes with Rust in that regard.
Zig has a work-in-progress C backend, so it doesn't only generate LLVM IR. Also, to be more specific, Zig's take is that macros are awesome but are best implemented as comptime (compile-time evaluation, à la C++'s constexpr).
I would like to use Nim for data wrangling tasks - for shipping compiled data munging CLI utilities.
Looking at https://nim-lang.github.io/Nim/lib.html it seems to have many of the batteries built in - HTML, JSON parsing, nice set of collections. I wonder how good SQL and especially noSQL support is.
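e.g. a quick std/json sketch:

    import std/json

    let doc = parseJson("""{"name": "nim", "stars": 3}""")
    echo doc["name"].getStr   # nim
    echo doc["stars"].getInt  # 3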
My worry is that there must be rough edges. Plus pandas (not just numpy) is just so convenient at times.
I love Nim's syntax, but is there a specific reason why e.g. the SFML binding cannot imitate C++ RAII approach of handling resources? There is .close everywhere.
Destruction used to be nondeterministic in Nim. Right now though, if you use ARC/ORC instead of the default GC (ORC will be made the default GC soon) you can have RAII for reference types (RAII is available for value types by default, just write a destructor for your type).
The SFML binding will probably be updated to use RAII when ORC is made the default GC.
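Roughly what value-type RAII looks like (a sketch; I'd build it with --gc:orc):

    type Buffer = object
      data: pointer

    proc `=destroy`(b: var Buffer) =
      if b.data != nil:
        dealloc(b.data)

    proc `=copy`(dest: var Buffer, src: Buffer) {.error.}  # forbid accidental copies

    proc initBuffer(size: Natural): Buffer =
      Buffer(data: alloc(size))

    proc demo() =
      let b = initBuffer(64)
      # ... use b ...
      # b.data is freed automatically when b goes out of scope

    demo()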
it does come across as arrogant. A humbler way to put it would be "Macros cannot change Nim's syntax because they transform the AST after it has already been parsed"
That's not the reasoning though. Not allowing macros to change the syntax gives Nim programmers an anchor and reduces friction from needing to learn brand new syntax rules for any big DSLs.
I still see more crypto/blockchain adoration here than disdain, although the tables might be turning slowly now.
HN users are much like magpies: once an idea shoots up, you can't swing a cat without hitting a zealot. If you are ambivalent or skeptic, you'll be furiously explained into the corner. Then, when the fad fizzes out and its inevitable drawbacks are publicized by more and more sources, the "told you so" camp starts merrily chasing the dwindling numbers of zealots around, while some former zealots get depressed from their high hopes being squandered and the resulting dopamine deficiency.
A new shiny comes in (say, "rewrite everything in Rust"), the cycle repeats.
You say "web3/blockchain", I hear "homeopathy", "astrology", and "bioenergetic therapy". The HN crowd might be fad-gullible, but stupid-gullible? I don't think so.