
Inko defaults to heap allocating objects (except for Int, Float, Nil, and Bool), meaning the data that pointers point to stays in a stable place. Thus, moving a value around just means moving a pointer, not a memcpy of the underlying data. This in turn means it's fine to keep references around.

To prevent you from _dropping_ a value while references still exist, Inko uses runtime reference counting. Owned values being moved around incurs no reference counting cost, but creating and dropping references does (just a regular increment for most objects, so pretty cheap). When a value is dropped, the reference count is checked, and if it's _not_ zero a runtime panic is produced, terminating the program. Refer to https://docs.inko-lang.org/manual/latest/getting-started/mem... for some additional details.

This setup is perfectly memory safe and sound, though in its current form the debugging experience for such errors (which are surprisingly rare, though that might just be me) is a bit painful; that's something I want to improve over time.

My long-term vision is to start adding more compile-time checks, such that maybe 80% of the cases where a reference outlives its owned value are detected at compile-time. For the remaining 20% or so, the runtime fallback would be used.

In theory this should provide a good balance and only require a fraction of the mental cost associated with Rust. Whether that will work out remains to be seen :)



> To prevent you from _dropping_ a value while references still exist, Inko uses runtime reference counting.

> Inko doesn't rely on garbage collection to manage memory.

It sounds like Inko is in fact garbage collected? I have no problem with a refcounted language, it's totally reasonable, but reference counting is garbage collection. Am I misunderstanding something here?


The reference counts are used to prevent dropping a value that still has references to it, but it doesn't dictate _when_ that drop takes place. Instead, an owned value is dropped as soon as it goes out of scope, just like Rust.

So no, Inko isn't garbage collected :)


Sounds like the worst of both worlds. You have the overhead of reference counting and you still have to fight a borrow checker?


The cost of reference counting is only present when creating and destroying aliases; moving owned values around incurs no cost. That alone significantly reduces the overhead.

In addition, for objects other than String, Channel and processes, the count is just a regular increment (i.e. `count += 1`), not an atomic increment.

It's true that this can have an impact on caches, as the count increment is a write to the object, and so over time I hope to add optimizations to remove redundant increments whenever possible.

As for the borrow checker: Inko doesn't have one, at least not like Rust (i.e. it's perfectly fine to both mutably and immutably borrow a value at the same time), so the cost of that isn't present.


I'm not sure I understand but I'll have to just look into it.


Colloquially, "garbage collection" typically refers to non-deterministic automatic memory management (and/or stop-the-world), whereas ref-counting is typically considered deterministic.

Not really correct in an academic sense, but this isn't the only language I've seen talk about ref-counting as something other than garbage collection


In common usage of the terms, "reference counting" and "garbage collection" are completely different.


Well, they shouldn't be.


The first bold claim I see on the page is:

    Deterministic automatic memory management

but it actually looks like it's neither deterministic (refcounts!) nor really memory management (deallocating memory can randomly crash the program?!). This is even worse than C, where e.g. a use-after-free can crash a program, but then at least you're the one doing the wrong thing!


The reference counts don't dictate when memory is released, that happens when an owned value goes out of scope, just as is the case for Rust. The reference counts are merely used as a form of correctness checking. The result is that allocations, destructors, and deallocations are perfectly deterministic.

Deallocating memory itself doesn't crash the program either; rather, it's a check performed _before_ doing so (though that's mostly pedantry). This strictly _is_ better than C, because if the program kept running you'd trigger undefined behaviour, and all sorts of nasty things can happen.

If you're familiar with Rust, this idea is somewhat similar to Rust's RefCell type, which lets you defer borrow checking to the runtime, at the cost of potentially triggering a panic if you try to mutably borrow the cell's contents when another borrow already exists.

You can also find some backstory on the idea Inko uses from this 2006 paper (mirrored by Inko as the original source is no longer available): https://inko-lang.org/papers/ownership.pdf


I believe Swift does something similar.


How is this worse than C? In C the program might not even crash, and instead have a remote code execution vulnerability.


How does this work for collection types (e.g. a dynamic array) that can reallocate after growing, if there are multiple references, with at least one being mutable?

Is there an extra pointer hop as compared to, for example, Rust's Vec? I.e., is the value on the stack a pointer to some heap data that has a pointer to the actual array data?


Arrays store just the pointers to their values; storing them inline isn't supported. Like so:

    ptr to array  --> array
                      header
                      size
                      capacity
                      [
                        val1 ptr --> val1
                        val2 ptr --> val2
                      ]
At some point I'd like to support stack allocated values as an optimization, including the ability to allocate values directly into arrays, but that's not a priority at this point.


Hmm, I wasn't talking about the values in the array (whether those are pointers or Int or whatever doesn't matter), but about the allocation of the array data itself.

If the collection has capacity=2 (2 words) and another element is pushed in, typically you'd double the capacity, allocate new memory (4 words), copy the data over, and deallocate the old data.

If the square brackets in your diagram actually represents another pointer then I think we're on the same page, but otherwise I don't see how the data could be allocated in the same chunk as the header/size/capacity if there can be multiple (potentially mutable) references.

(hopefully the formatting on this works)

    ptr -> header
           size
           capacity
           data ptr -> [ val1
                         val2 ]


Ah gotcha. Yes, the actual layout of arrays is:

    header
    size
    capacity
    data ptr
Where `data ptr` is a raw pointer to data that is realloc'd as the array grows.



