A cost model for Nim (nim-lang.org)
193 points by WithinReason on Nov 11, 2022 | 75 comments


Nice to see this here! I helped edit parts of this, feel free to ask me questions. Personally I’ve quite enjoyed using Nim in RTOS and embedded systems. I believe it opens up the full power of embedded programming to non-experts.

It’s hard to describe how nice the combination of ARC/ORC memory management with the TLSF allocator is for ready-to-go real-time code and systems programming.

This combo gets 90-95% of the speed and size of manual memory management while being much easier to program. Generally, performance and real-time metrics like latency are a matter of optimizing your code, not rewriting it. Since Nim v2 integrates the allocator, it also makes it trivial to debug allocations or check for leaks.
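
A tiny sketch (mine, not from the article) of the kind of check this enables, using the `getOccupiedMem` counter that Nim's own allocator exposes:

    # build: nim c --mm:orc -d:release leakcheck.nim
    proc work() =
      var s = newSeq[int](1_000)
      for i in 0 ..< s.len:
        s[i] = i
      # under ARC/ORC `s` is freed deterministically when it goes out of scope

    when isMainModule:
      let before = getOccupiedMem()
      work()
      echo "heap delta after work(): ", getOccupiedMem() - before, " bytes"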


Particularly relevant context for this post is that Nim v2 is expected to be released within 2022, with the main change being the ARC/ORC memory management discussed in the blog post.

For a general review of what Nim v2 will be about:

- Nim 2 video by Araq at https://nim-lang.org/nimconf2022/

- all the details about the goals of v2 are in the roadmap RFC "Cylons are here" https://github.com/nim-lang/RFCs/issues/437


Good write-up. Optimizing for embedded, real-time systems is a nice way for them to carve out a little niche among similar languages.

Aside: I tinkered with Nim a long time ago (when it was called Nimrod) and thought it was pleasant. I'd be interested to hear from anyone in the HN community who has used Nim in production. What did you use it for? How was the experience?


A well-known and nice app that is built with Nim is Nitter (https://github.com/zedeus/nitter), a free and open source alternative Twitter front-end focused on privacy and performance.


It runs part of my billing system and was nice to use, particularly with the community support, which was fast and effective through their online Nim tools.

Some of the debug/error messages were frustrating red herrings, which IIRC I was told was an area needing improvement, at least at the time.


As a frequent Nim user, I can unfortunately confirm that misleading error messages are still a problem.


I would gladly work at a company that used Nim as its primary language.


I found it very pleasant as well! It was Pythonesque code that compiled. Now Python code makes me want to pull my hair out. I don't know what I like anymore.


Nim is a Wirthian language, not a Python language, but Python has always had some Wirthian touches, so...


I've tried learning it a couple times, but the syntax seems so irregular and non-uniform. It's confusing and annoying.


Nim gives a bit more choice in many dimensions than many languages -- how to manage memory, whether to use the stdlib at all for things like hash tables, and yes, also syntactic choices like several ways to call a function. This can actually be convenient in constructing a DSL for something with minimal fuss. While `func arg1 arg2` might look weird in "real" "code", it might look great inside some DSL and you can just have it be a "regular Nim invocation" instead of something special to get that.
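
For example (a toy of mine, not from the thread), the same proc can already be invoked several ways:

    proc double(x: int): int = x * 2

    let a = double(21)    # regular call
    let b = double 21     # command syntax, no parentheses
    let c = 21.double()   # method-call syntax (uniform function call)
    let d = 21.double     # same, parentheses optional
    echo a, " ", b, " ", c, " ", d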

There are also compile-time superpowers like macros that just receive a parsed AST. That can be used to "re-parse" or "re-compile" external code as in https://github.com/c-blake/cligen. So, trade-offs like in all of life.
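
A minimal sketch of that compile-time side (my own toy, not cligen itself): a macro simply receives the parsed AST of whatever block you hand it and can inspect or rewrite it before emitting code:

    import std/macros

    macro showAst(body: untyped): untyped =
      echo body.treeRepr   # printed at compile time
      result = body        # emit the code unchanged

    showAst:
      let x = 1 + 2
      echo x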

There is even a book called The Paradox of Choice [1]. I think there is just a spectrum/distribution of human predisposition where some like to have things "standardized & packaged up for them" while others prefer to invent their own rules... and there's enough variation within the population that people have to learn to agree to disagree more. (EDIT: and FWIW, I think this is context-modulated - the same person could be on one end of the spectrum about gardening and the other about software.)

I do feel like the syntax is far less chaotic than Perl.

[1] https://en.wikipedia.org/wiki/The_Paradox_of_Choice


I think this is because it takes more stylistic influence from Oberon than it does from Python.

https://en.wikipedia.org/wiki/Nim_(programming_language)#Inf...


I've always been surprised Nim hasn't taken off more.

All the productivity of a language like Python but with the performance of C.


The reason is tooling. The CLion plugin doesn't even do type checking and is a year out of date. The language server has been in progress for five years and is still not done. The VS Code plugin is okay, but refactoring, auto-imports, and autocomplete are nowhere near as easy as in other languages like Java in JetBrains products. Not to mention the lack of a good debugger.

I hope JetBrains can prioritize the CLion plugin. It would help a lot.


Due to the abstraction that Nim offers, things like autocomplete and refactoring are not needed as much as in Java.


If I start typing newSeq, give me the option to auto import from std, etc. These little things add up.

Also, basic usage of generics doesn't really work with the VS Code plugin, which is the best one.


> If I start typing newSeq, give me the option to auto import from std, etc.

Good idea! I've made a proof-of-concept Neovim plugin: https://sr.ht/~xigoi/nvim-summon/

It wasn't difficult, so if you use a different editor, you can make one too (feel free to reuse my documentation scraper).


I am considering betting my next big project on Nim. If it works out and I can work on it full time then I'll have lots of time to help Nim!


Neat! But I pay for Clion, so I'm hoping they can prioritize this so I don't have to write plugins myself... :)

We'll see how I can contribute when I start my next Nim project.


Note: looks like generics do work; the issue was that my file names had dashes in them. Evidently that is not supported, but I didn't get any errors :p


Without disagreeing on the specific points about the tooling, I’m not convinced. Golang took off with a far weaker tooling story. Maybe the table stakes have changed in a decade - but I believe other factors are more important. Moreover I remain skeptical that even if those tooling issues were fixed tomorrow it would help Nim turn the corner.


> Golang took off with a far weaker tooling story

All Go had to do was say, “Built by Google” and people came running.


Well, the "designed by UNIX co-creator" piece helped too I'm sure.


Dart was built by Google, and nobody wants to touch it. Well, a few diehard fans.

Go is a nice language that gets a lot right. Saying people only use it because of Google is kinda shortsighted.


Because Chrome kind of sabotaged the project by dropping Dartium, while the Angular team decided to sponsor TypeScript and moved away from Dart.

The language was saved from dying by AdWords, which had just migrated from GWT into AngularDart.

Key designers from Dart 1.0 eventually left Google, like Gilad and Lund.

Flutter gave it a new life, but it is also the only reason to use Dart: Flutter apps.


Maybe! It would get me on board, though :) I'm working on a new server side project and am using Java for now, because of this reason. I hope someday I have time to help build IDE plugins and so on :)


> Golang took off with a far weaker tooling story

Corporate marketing budget and related astroturfing.


I have really enjoyed using it for random scripts and small programs, but it definitely is slower than plain C for simple things like "loop over lines of stdin and do some basic calculations". Of course, you also get a lot of nice abstraction, memory safety, GC, a great standard library, compiler-checked effects (!), less-noisy syntax, and a nice easy compiler CLI, and that's a tradeoff I'm happy to make. Not to mention that I would feel much more comfortable writing a bigger application in Nim than in C.


I have not found this slower-than-C to be the case. You may need to use something like `iterator getDelims` in https://github.com/c-blake/cligen/blob/master/cligen/osUt.ni... to manage memory with fewer copies/less alloc, "more like C", though, or perhaps `std/memfiles` memory-mapped IO.

More pithy ways to put it are that "there are no speed limits" or "Nim responds to optimization effort about as well as other low level languages like C". You can deploy SIMD intrinsics, for example. In my experience, it's not that hard to "pay only for what you use".

As a more concrete thing, I have timed (yes, on one CPU, one test case, etc..many caveats) the tokenizer used in https://github.com/c-blake/bu/blob/main/rp.nim to be faster than that used by the Rust xsv.

Of course, you really shouldn't tokenize a lot if it's costly/data is big, but rather save a binary answer that does not need parsing (or perhaps more accurately is natively parsed by the CPU).
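
A rough sketch of the `std/memfiles` route mentioned above (the file name is made up): iterate over memory-mapped slices instead of allocating a string per line.

    import std/memfiles

    var mf = memfiles.open("data.txt")   # maps the file; no read-time copies
    var bytes = 0
    for line in memSlices(mf):           # MemSlice = pointer + length, no string alloc
      bytes += line.size
    mf.close()
    echo bytes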


This stuff honestly goes over my head and beyond my current skillset, but it's useful to know that there is no "speed limit" in the hands of someone who knows what they're doing.

My experience has been with basically translating Python and shell scripts into Nim, and occasionally comparing them with my own feeble attempts at doing the same in C.


Just a note that the "loop over lines of stdin and do some basic calculations" use case highlights probably one of the places where it's really easy to blow up Nim's performance benefits -- string or seq copies. :/

The issue is that Nim's string library ends up making copies when doing things. It's a sane default and ensures sane behavior. However, it often leads beginners to make slower programs, sometimes even slower than equivalent Python ones. If you used `strcpy` in C a lot, you'd get similar results.

Often the solution is trivial and is just to use the `var` or `openArray[T]` variants of procs. It's one of the points in the blog post and is crucial for good real-time code. I'm hopeful the new `lent` and `view` features will make it easier to use non-copy slices on strings. That'd be nice!
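
A small sketch of the buffer-reuse style (my example, not from the post), which avoids the per-line string allocations:

    import std/strutils

    var line = newStringOfCap(256)   # one buffer, reused for every line
    var total = 0
    while stdin.readLine(line):      # fills `line` in place, returns false on EOF
      for tok in line.splitWhitespace:
        total += tok.len             # note: the tokens yielded here are still copies
    echo total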


That slowness is tied to the GC, which you can swap in and out with others or turn off. I feel like this is a really undersold feature of the language.


It's a mixture of allocation, copying, and reclamation. You can actually side-step a lot of such issues in a mostly mm-agnostic way with a `var` like interface in Nim -- sticking to one initial allocation like the common pattern of caller-allocated storage in C. These days on three-level cache server CPUs the real cost of hitting virgin, uncached memory is probably worse than other pieces..DRAM latency is >10x L3, but almost everything "depends on..".
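
A tiny sketch of that caller-allocated pattern (my own example), which works the same under any of Nim's memory management modes:

    proc fillSquares(buf: var seq[int]; n: int) =
      buf.setLen(n)                  # reuses existing capacity when possible
      for i in 0 ..< n:
        buf[i] = i * i

    var scratch: seq[int]
    for round in 0 ..< 3:
      fillSquares(scratch, 1000)     # only the first round actually allocates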


100% my experience too. GC in Nim is a rounding error at worst and is often statically elided, anyway. Performance is almost entirely about memory access patterns and you have the same access to this as C.

In other words, performance in Nim is entirely down to the algorithm used, which is a nice place to be.


Just to be sure, did you compile it with -d:release/-d:danger? Forgetting those flags is a common mistake people make when benchmarking Nim.


No 800-pound gorilla has taken the language under its wing, so it's really difficult to build mindshare. Without a Google, Microsoft, etc. who can throw unlimited funding at full-time developers polishing rough edges, writing docs, or implementing libraries, a new language is always lacking vs established alternatives.


The only thing close is some Ethereum tools like these:

- https://github.com/status-im/nwaku

- https://github.com/status-im/nimbus-eth2

But I think the real killer feature to me is the JavaScript target. It seems like a real dream for a fullstack experience.


> the real killer feature to me is the javascript target

Agree, this is amazing because you can share code and data structures between front and backend (for example: https://github.com/karaxnim/karax).

Also, it's really nice having high level stuff like metaprogramming and static typing spanning both targets. Things like reading a spec file and generating statically checked APIs for server/client is straightforward, which opens up a lot of possibilities.
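
A hedged sketch of what the sharing can look like (the type and proc names are made up):

    # shared.nim -- compiled by both `nim c` and `nim js`
    type User* = object
      name*: string
      age*: int

    proc valid*(u: User): bool =
      u.name.len > 0 and u.age >= 0

    when defined(js):
      proc log*(u: User) = echo "js: ", u.name      # ends up as console.log in the browser
    else:
      proc log*(u: User) = echo "native: ", u.name  # goes to stdout in the server binary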


Honestly as someone that has used Nim for this extensively, I think that these days with WASM and TypeScript available this isn’t much of a killer feature anymore.


Only if you consider the web side.

Backend and frontend in the same language is a massive feature for many reasons, not least performance and uniformity. Adding TypeScript increases interface friction and complexity instead of simplifying things.

Of course, right now this may or may not be easier/harder/feasible with just Nim depending on the project, and many more people know TypeScript, but the ideal of one language > many languages is a worthy one.


What do you mean? I am considering it from both sides.

TypeScript already allows this, i.e. you can build the backend and frontend in one language (TypeScript). Same for most languages that compile to WASM.

My point is that with TypeScript this killer feature is already realised and in a way that is far more familiar to the many thousands of developers that already know JavaScript.


> you can build the backend and frontend in one language (TypeScript)

Sure, from the uniformity side, but what about performance?

It's not just latency performance either, it's resource performance and overheads, such as with embedded servers.

> with TypeScript this killer feature is already realised

It really depends on your requirements whether this is true. Personally, a systems language I can use for web stuff is more useful and has a wider scope to me than running JavaScript or WASM in a systems role, even though it is technically possible to do so.


In the general case I think you underestimate how performant JavaScript JITs are these days. For 99.9% of use cases the performance is enough.

Sure, for embedded it might not be workable, but again, since we're discussing killer features I think we need to think bigger. Embedded is far too small a niche to be considered Nim's killer feature, especially when this space is already dominated by Rust, which supports WASM very well (and therefore hits the performance + uniformity you desire).


> In the general case I think you underestimate how performant JavaScript JITs are these days.

No, I'm well aware of JS and WASM performance, internals, and pitfalls.

> For 99.9% of use cases the performance is enough.

Again, it depends on your perspective and requirements. Statistically, PHP is enough for most people, so why use TypeScript? At the edges of the resources bell curve, however, the details do matter. It's not enough to say that for those concerns you must use a different language and potentially a whole different set of semantics, intricacies, libraries, and workflows.

> I think we need to think bigger. Embedded is far too small a niche to be considered to be Nim's killer feature

That wasn't my point, and ironically you might be underestimating how much minimising power and resource usage drives much of the modern world. Memory/storage + CPU ticks == money.

My point is that a language that can act as a universal tool without sacrificing efficiency, productivity, or extensibility (via AST macros), is itself a 'killer feature'. Nim hits that sweet spot of being easy to read and write as well as 'close to the metal'. It lets me target JS, embedded, and natively interface with C and C++ ABI. That gives me a lot of options with one language. There are other languages that have some of these traits, but not all together in a nice package IMO.

> especially when this space is already dominated by Rust

You mean C?


> Statistically, PHP is enough for most people so why use TypeScript?

In the context of our discussion PHP doesn't make sense as it doesn't have native support for running in the browser. TypeScript is specifically a good choice because it has been built with the browser and the backend in mind.

> you might be underestimating how much minimising power and resource usage drives much of the modern world. Memory/storage + CPU ticks == money.

I have learned over the years that what actually drives the modern world is pragmatism. Sure, performance matters, but when you have to reimplement hundreds of libraries from scratch because the ecosystem doesn't exist then you are losing far too much time (and your implementations of those libraries are likely to be suboptimal). You're better off losing some performance and picking the established language, so that you actually ship your project within a couple of months instead of a couple of years.

Nim has been going now for 15 years and has thus far failed to reach critical mass. With each year gaining that critical mass becomes less and less likely. A killer feature for Nim needs to be so beneficial that it overcomes the lacking ecosystem.

> You mean C?

I was referring to the space of languages that work well on embedded and in the browser. C doesn't support the browser side well.


> TypeScript is specifically a good choice because it has been built with the browser and the backend in mind.

Sure, if your only use case is web front/back end uniformity.

My use case covers web front and back end, but also custom embedded, HPC, ML, simulations, and a lot of other stuff that is simply not viable with TS/JS or WASM. This language lets me share abstractions and reuse code across all this stuff and use C/C++/JS/Python ecosystems directly.

All the above are possible with just JS. Yes, you can even run JS on a microcontroller: https://www.espruino.com/

The V8 JS/WASM engine, however, is written in C++ for good reason. TS/JS/WASM/web is built on people counting ticks and aligning bytes to give even the floppiest callback spaghetti of frameworks the headroom to be responsive as abstractions increase.

> I have learned over the years that what actually drives the modern world is pragmatism.

I agree, except I think it's more pragmatic to have one language that can service any industry. TS/JS ain't it, from my perspective.

> reimplement hundreds of libraries from scratch because the ecosystem doesn't exist then you are losing far too much time

Why would you reimplement libraries from scratch instead of using FFI? FFI is one of the many, apparently unrecognised, 'killer features'. Specifically killer is FFI with ABI compatibility to C++. The pragmatic approach is always to stand on the shoulders of giants and FFI with existing libraries for the reasons you listed.
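
To illustrate with a toy sketch (the wrapper name is mine): binding a C function is a single pragma, no glue code required.

    # binds libc's strlen directly
    proc cStrlen(s: cstring): csize_t {.importc: "strlen", header: "<string.h>".}

    echo cStrlen("hello")   # 5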

I'm surprised to hear you say that to be honest as you've used the language for years. Is there an experience behind it?

> You're better off losing some performance and picking the established language, so that you actually ship your project within a couple of months instead of a couple of years.

Well, I don't disagree with the sentiment, but this is more of a commentary on the job market/cost of business than capability.

Again, though, it really depends what you're doing. Chucking out a web app for an internal tool and distributed websites have different requirements.

> Nim has been going now for 15 years and has thus far failed to reach critical mass. With each year gaining that critical mass becomes less and less likely.

This is a strange perspective to me. Programming languages aren't food left out of the fridge, they're tools to do a job.

Python dates from the late 1980s and took decades to even be widely known. Why did people start using Python? Because it was readable, convenient, and very productive. Why did it become popular? Because people used it more...

What you call 'critical mass' is just a visible hiring pool.

Companies need confidence to find or train people in the language, and developers need confidence they can pay their bills by learning it. Confidence is increased when people hear of business use, say, if a FAANG started some minor project with it, and it feeds back.


I'd agree embedded is a pretty niche area, and that it's likely not a killer app for Nim. Though it does bring a fair bit of attention.

However Rust doesn't dominate embedded. That's still C or even C++.

Rust has a lot of embedded crates, but supports a pretty small subset of hardware given the number of crates and amount of effort poured into them. The trait system particularly sucks for modeling hardware, IMHO, and with the "rewrite the world" mentality there's still quite a bit of pain.

So it seems rosier than it actually is -- for now. Rust on embedded is gaining momentum.


> However Rust doesn't dominate embedded. That's still C or even C++.

I don't disagree. What I meant specifically is that Rust dominates the space where you want to target embedded and the browser (Rust does well on embedded and has good WASM support). C/C++ does not support the browser well at all (you can say that it supports WASM too, but Rust has far more front-end frameworks that actually target the browser than C afaik).


Ah, gotcha. Rust's tooling for WASM + embedded is pretty good.

I actually want to get Nim + WASM running on embedded so we could do our own MicroPython like setup and not require people to know how to flash the hardware directly.

I know Andreas doesn't care for WASM, but Nim could support native WASM relatively easily. It already does a "re-looper" for the JS backend AFAICT, which is the biggest challenge of WASM.


Who’s the 800 lb gorilla for Rust (because it seems to have large community behind it)?


Almost every large tech company that uses C++ heavily is jumping onto the Rust train.


Mozilla funded multiple full time developers for years. While not a “big” company, it does have significant financial resources and creates a very impactful product.


Yes, in fact, almost every successful language of the last decade has had such support: Mozilla funded a Rust team for years, Google funded a Go team, Apple a Swift team, Microsoft a TypeScript team, etc.

There are examples of rising languages that never had such corporate sponsorship, like Zig and Julia, but I actually can't think of one as successful as Rust/Go/Swift/etc.?


I agree, but a counterpoint: Dart from Google. Despite the resources poured into Dart, it does not enjoy Go-levels of popularity. Flutter + Dart is now the focus, but it's too early to see if the language will grow in adoption.


Because it wanted to take over the Web via Dartium, then Chrome dropped support for it; it would have almost disbanded if it weren't for the AdWords team, and Flutter gave it a second life.


I almost mentioned zig as a counterpoint. I’m sure it is a HN bubble thing, but zig does feel like it has a lot of momentum for a grassroots language without a juggernaut supporting development.


Not super related to the entirety of this blogpost but this bit stuck out to me:

  > "If your domain is not “hard” real-time but “soft” real-time on a conventional OS, you can “pin” a thread to particular core via system.pinToCpu"
This is probably one of the easiest/nicest API's I've seen for doing this in a language.
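
A minimal sketch of how it reads (my example; needs --threads:on, and the CPU number is arbitrary):

    proc worker() {.thread.} =
      echo "hello from a pinned thread"

    var t: Thread[void]
    createThread(t, worker)
    pinToCpu(t, 2)     # affinity hint for CPU 2; numbering is OS-specific
    joinThread(t)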


Unfortunately this pinToCpu seems to be discouraged or ineffective (not for Nim-related reasons) :/ See this discussion: https://forum.nim-lang.org/t/9596#63093


Not sure why it would be discouraged because of big.LITTLE. Pinning to a big core or pinning to a little core is still a potentially useful thing.


>Not sure why it would be discouraged

pinToCPU (at least in Nim) just sets scheduling affinity, but does not hard-allocate like the Linux isolcpus boot parameter.."pin" makes it sound more powerful than it is. It is more like POSIX `nice` than POSIX `mlock`.

But also, `nice` exists for a reason. Your agenda might be to do something as much as possible on a Little core which is indeed potentially useful. pinToCPU takes a cpu number and you can usually know from that number whether it is Big or Little.

The real concern may be that the OS has a more global picture of system activity than any one program..and so discourages "second-guessing" the scheduler. OTOH, the OS probably does not have a picture of how important timely completion of any particular thing is - honestly that could almost be end-user-provided data (much like there is a `nice` command).

A lot of the time people motivate affinity just as a "use the same L1/L2 cache" for "absolute max perf", and in that sense you may be getting better perf, but over fewer cores (only the big ones) or you may not do the ahead of time work to map out big-little CPU numbers.

Scheduling is also very OS-specific. So, reasoning about what Y you get when you ask for X can also be tricky in any OS-portable setting.

Sometimes things are "so complicated" that people "who know" just discourage those who do not. My inclination is instead to explain what it can/cannot/does/does not do better rather than discourage its use.


> Most of Nim’s standard library collections are based on hashing and only offer O(1) behavior on the average case. Usually this is not good enough for a hard real-time setting and so these have to be avoided. In fact, throughout Nim’s history people found cases of pathologically bad performance for these data structures. Instead containers based on BTrees can be used that offer O(log N) operations for lookup/insertion/deletion.

I'm surprised that hash maps can stray far enough from O(1) to become unusable in hard real-time. Is it because RT systems have to use lower-quality PRNGs?


Probably a better way of looking at it is that hard real time is hard, in several senses of the term. You don't really want any operations that have variable time at all. If that feels like a really difficult specification to work with, it is.

Soft real time is more common. Soft real time theoretically has the same considerations, but in general you have enough performance that you can wave away many of these difficulties: yes, technically my hash table is not 100% reliable every time, but I can show it'll never have more than 128 entries, I've got 25ms targets for this task, and I've got 32MB when my code only uses about 4 max, and these numbers are all large enough that I can just not care about the resizing operation.

Many systems I've written are just straight-up accidentally soft real time in practice, simply because running them on even a moderately powerful modern PC gives them a hundred times more power than they really need, and they're already 10-50 times faster than they "need" to be even before optimizing them. So, barring pathological network performance that would trash any system, they're essentially soft real time already. You don't back your way into hard real time, though.


>I've got many systems I've written that are just straight-up accidentally soft-real-time in practice simply because running them on even a moderately-powerful modern PC is a hundred times more power than they really need, and they're already 10-50 times faster than they "need" to be even before optimizing them, so barring pathological network performance that would trash any system, they're essentially soft real time already.

But the categorization of real-time depends on the requirements of the problem, not how difficult it is to meet them. If your problem tolerates a few missed deadlines every now and then, and your hardware is such that a naive implementation misses deadlines so frequently that it's useless, so you need to optimize it thoroughly to get the misses under control, it's still soft real-time. It doesn't only become soft real-time once technology advances enough that even an inefficient implementation misses a deadline just occasionally and tolerably.


When targeting hard real time you want strong worst-case guarantees, not weak probabilistic arguments like "we hope the hashes are distributed well enough to achieve O(1) in practice".

The malicious version of this is hash DoS, where an attacker chooses the values inserted into the hash table so they have the same hash (or at least same bucket). Randomized hashing using a secure hash mitigates this somewhat, but the attack is still possible to some degree, especially if the hash secret is long lived.

Another issue is that inserting into hash tables is usually only amortized O(1), with the collection growing exponentially when it's full. So most insert operations are cheap, but occasionally you get an O(n) insertion. However it doesn't sound like this is what the article is talking about.
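
One hedged mitigation for the resize spike specifically (a sketch of mine, with made-up numbers): size the table for the assumed worst case up front so the hot path shouldn't have to grow it.

    import std/tables

    const maxEntries = 4096                    # assumed worst-case element count
    var t = initTable[int, float](maxEntries)  # pre-sized so the hot loop shouldn't trigger a grow
    for i in 0 ..< 1000:
      t[i] = i.float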


While I agree with all you said, one can expand a bit on it. There are concrete max search depth expectations for well randomizing hash functions such as:

    https://www.sciencedirect.com/science/article/abs/pii/019667748790040X
which is notably logarithmic - not unlike a B-Tree.

When these expectations are exceeded you can at least detect a DoS attack. If you wait until such are seen, you can activate a "more random" mitigation on the fly at about the same cost as "the next resize/re-org/whatnot".

All you need to do is instrument your search to track the depth. There is an example of such a strategy in Nim at https://github.com/c-blake/adix for simple Robin Hood linear-probed tables, and a formula-based one from the above paper inside https://github.com/c-blake/suggest

This is all more about the "attack" scenario than the "truly hard" real-time scenario, of course.


Having a better PRNG shouldn't help worst case should it?

And worst case is the only thing that matters in a hard real time system.


The thing is that in this scenario you need to consider the worst case access, not the average. So instead of O(1) accesses you're stuck with O(n).


I did some work on Nim's hash tables back in 2020, specifically with OrderedTable, comparable to a Python dict where insertion order is preserved. I stumbled on this table module in a roundabout way, via Nim's database module, db_sqlite. The db_sqlite module was much slower than Python for simple tests, and on investigation, I found that it didn't automatically handle prepared statement caching like Python's sqlite3 module. There were some other issues with db_sqlite, like blob handling and null handling, which led me to a different SQLite interface, tiny_sqlite. This was a big improvement, handling both nulls and blobs, and the developer was great to work with. But it also didn't support prepared statement caching. I filed an issue and he implemented it, using Nim's OrderedTable to simulate an LRU cache by adding a new prepared statement and deleting the oldest one if the cache was too big:

https://github.com/GULPF/tiny_sqlite/issues/3

Performance was hugely improved. There was another LRUCache implementation I played with, and when using that for the statement cache, performance was 25% faster than OrderedTable. That didn't make much sense to me for a 100-entry hash table, so I started running some tests comparing LRUCache and OrderedTable. What I discovered is that OrderedTable delete operations created an entirely new copy of the table, minus the entry being deleted, on every delete. That seemed pretty crazy, especially since it was already showing up as performance problems in a 100-entry table.

The tiny_sqlite developer switched to LRUCache, and I did some work on the OrderedTable implementation to make deletes O(1) as expected with hash table operations:

https://github.com/nim-lang/Nim/pull/14995

After spending a lot of time on this, I finally gave up. The problems were:

- the JSON implementation used OrderedTables and never did deletes. JSON benchmark performance was rather sacred, so changing OrderedTables to be slightly slower/larger (I used a doubly-linked list) was not desirable, even if it changed delete performance from O(n) to O(1)

- the Nim compiler also used OrderedTables and never did deletes

- Nim tables allowed multiple values for the same key (I did help get that deprecated).

- alternatives were proposed by others that maintained insertion order until a delete occurred, but then the table could become unordered. That made no sense to me.

The TL;DR is: if you use Nim tables, don't use OrderedTable unless you can afford to make a copy of the table on every delete.

Current Nim OrderedTable delete code: https://github.com/nim-lang/Nim/blob/15bffc20ed8da26e68c88bb...

Issue for db_sqlite not handling nulls, blobs, statement cache: https://github.com/nim-lang/Nim/issues/13559
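
For reference, a stripped-down sketch of that eviction pattern (names are mine); the `del` call is where the hidden O(n) rebuild bites:

    import std/tables

    const maxCached = 100
    var cache = initOrderedTable[string, string]()

    proc put(key, val: string) =
      cache[key] = val
      if cache.len > maxCached:
        var oldest: string
        for k in cache.keys:    # keys come back in insertion order, so the first is the oldest
          oldest = k
          break
        cache.del(oldest)       # currently rebuilds the whole table: O(n)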


Unrelated to the post, but if you are curious what an HN spike looks like, you can follow it in real time here: https://plausible.io/nim-lang.org?period=day&date=2022-11-11

Nim recently implemented public analytics for its website, and I think it is a fun thing and something I have not seen around much: https://nim-lang.org/blog/2022/11/01/this-month-with-nim.htm...

Do you know of other open source projects which have public analytics?


Had no idea plausible.io existed. Thanks for posting the link for this. Just what I need for a project :)


Oh, Plausible works great! It is paid but open source, and you can host it yourself. Very nice guys run it. There are also other nice options out there, but this is the only one I know of where it is very easy and straightforward to make your analytics public!


Fewer than 10K unique visitors. Much lower than what I expected from the HN front page.


Interesting to see more focus on embedded systems. This is definitely good news for Ratel!


which is this great project btw: https://github.com/PMunch/ratel

what happened to that cool landing page you had?


Nice post.



