I wonder if they explored using a GC like https://github.com/Manishearth/rust-gc with Rust. I think that probably removes all of the borrow checker / cycle impedance mismatch while providing a path to remove the GC from the critical path altogether. Of course, the Rust GC crates are probably more immature, and maybe slower, than Go's, so if there's no path to getting rid of cycles as part of down-the-road perf optimization, then Go makes more sense.
I want to stress that for anyone looking to use Neovim as an IDE (lite), all you need is something that speaks the Language Server Protocol (LSP). You don't really need to download all these configuration frameworks.
- No debugger support; I'll use a CLI debugger or an IDE if I need one.
I've been using this setup for a very long time, and I barely touch my init.vim anymore. Here's a recent thread from the neovim subreddit where the OP talks about how much effort it takes to properly set up a configuration framework https://www.reddit.com/r/neovim/comments/11p6iiu/i_love_vim_...:
> I want to fix my problems and consolidate my environments BUT setting it up is too painful and I don't need another hobby as a job (I already have servers and a 3d printer lol). I've tried multiple times this week to set up either pure neovim, lunarvim, nvChad, astrovim and LazyVim starter, and there's always something that I can't find how to change even after searching online, not even taking into account that setting it up takes like a day for each. I don't really want to dedicate a whole month to reading docs, debugging and discovering plugins to fix issues that I'm hitting, and I don't want to blindly learn some commands only to then throw them away because that distro didn't work out.
I feel some exasperation when I see query builders that throw a query string back at the user and ask them to map the results themselves. That's easily the most tedious and mistake-prone part of using SQL queries. In the case of my library, projection and mapping are handled by the same callback function, so in order to SELECT a field you simply map it, and it's automatically added to the SELECT clause.
I skim-read it but couldn't find an example of what I think of as the challenging part: a join of 3 tables (well, even two).
When you join 3 tables (assuming "has many" and "has many through" relationships), what you get back is enormous, duplicated rows in tabular form, but in software that data is usually represented as a graph.
I'd love a library that helps build these massive rows back into relationships.
Please forgive me if your library does this; while I saw the "mapping function", I didn't see anything to help me build back graphs. I can map rows "easily", but I cannot recreate associations easily; that requires a bunch of work.
Thank you for the examples. I see the joins example, but they seem to be about creating queries, not mapping data.
sql = "select blog.name, post.content, author.display_name from blog join post on blog.id = post.blog_id join author on post.author_id = author.id"
Assuming the relationship "many blogs have many posts, and each post has one author", I'd expect something along the lines of (pseudocode):
schemaOnTheFly = Blogs{}.HasMany(Posts{}.HasOne(Author{})) // Sorry, the syntax for this doesn't really exist
blogs := query.Exec(sql, params, schemaOnTheFly)
fmt.Printf("%+v\n", blogs[0].Posts[0].Author)
That's what I'd expect. Do notice that the schema is per-query; I'll let the developer handle the shared portion of the schema (it might be shared by a few queries).
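For what it's worth, the regrouping step being asked for here can be sketched in a few lines. This is hypothetical code, not any particular library's API; the row shape mirrors the three-table join above, and all names (`rows_to_blogs`, `display_name`, and so on) are made up:

```python
# Regroup flat joined rows (blog JOIN post JOIN author) into a graph.
# Row shape: (blog_id, blog_name, post_id, post_content, author_name).
def rows_to_blogs(rows):
    blogs = {}
    for blog_id, blog_name, post_id, content, author_name in rows:
        blog = blogs.setdefault(blog_id, {"name": blog_name, "posts": {}})
        # Posts keyed by post_id so rows duplicated by deeper joins collapse.
        blog["posts"].setdefault(post_id, {
            "content": content,
            "author": {"display_name": author_name},
        })
    return [{"name": b["name"], "posts": list(b["posts"].values())}
            for b in blogs.values()]

rows = [
    (1, "b1", 10, "hello", "alice"),
    (1, "b1", 11, "world", "bob"),
    (2, "b2", 12, "hey", "alice"),
]
blogs = rows_to_blogs(rows)
print(blogs[0]["posts"][0]["author"])  # {'display_name': 'alice'}
```

The same idea generalizes: one dict per level of the graph, keyed by that level's primary key, so the duplicate rows a join produces collapse instead of becoming duplicate children.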
Hi, I was the one who made that comment. The JSON incompatibilities I was referring to at the time were in an older version of the JSON proposal. The current version of the JSON operators in SQLite mimics Postgres (and MySQL) perfectly, and I'm very happy about that.
In the older version of the proposal, -> was identical to ->> except -> returned NULL on malformed JSON (while ->> raised an error). Both -> and ->> would automatically convert a JSON-encoded SQL string '"like this"' into an SQL string 'like this'. This is not how the -> operator behaves in Postgres and MySQL, and my examples were simply trying to point out that incompatibility.
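The current, Postgres-compatible behavior is easy to check from Python's stdlib sqlite3 module, assuming the linked SQLite is 3.38 or newer (the release where -> and ->> landed):

```python
import sqlite3

con = sqlite3.connect(":memory:")
if sqlite3.sqlite_version_info >= (3, 38, 0):
    # -> extracts the value as JSON text: the quotes survive.
    print(con.execute("""SELECT '{"a": "hi"}' -> '$.a'""").fetchone()[0])   # "hi"
    # ->> unwraps it into a plain SQL value, as in Postgres and MySQL.
    print(con.execute("""SELECT '{"a": "hi"}' ->> '$.a'""").fetchone()[0])  # hi
```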
I've seen a very good article on this: https://www.trek10.com/blog/dynamodb-single-table-relational.... It's an insightful dive into how KV stores can model relational data. However, I also feel that these 'NoSQL can do relational modelling!' proponents are kind of hijacking the term: a relational model is understood to be based on relational algebra, and DynamoDB's Everything is One Big Table design ain't it.
NoSQL can model relational data, but it is not a relational model (as understood by E.F. Codd).
Also, NoSQL data access patterns are pretty much set in stone, which is kind of what the relational model explicitly avoids.
For modelling trees, I much prefer a list of ancestors to the parent-child adjacency list, because it doesn't require recursion to answer basic questions. I thought about how I would model threaded comments in SQL some time ago and wrote down an example of what I mean:
"This module implements a data type ltree for representing labels of data stored in a hierarchical tree-like structure. Extensive facilities for searching through label trees are provided."
Given that the author also wrote https://scattered-thoughts.net/writing/select-wat-from-sql/, I doubt it's that he knows nothing about inner joins. I think he was just using that equivalent form to compare it with the terser `foo.bar.quux`. It is pretty strange to compare it with `fk_join(foo, 'bar_id', bar, 'quux_id', quux)`, though, because SQL already has the equivalent `foo JOIN bar USING (bar_id) JOIN quux USING (quux_id)`.
[19:14] why not C#?
Dimitri: Was C# considered?
Anders: It was, but I will say that I think Go definitely is -- it's, I'd say, the lowest-level language we can get to and still have automatic garbage collection. It's the most native-first language we can get to and still have automatic GC. In C#, it's sort of bytecode first, if you will; there is some ahead-of-time compilation available, but it's not on all platforms and it doesn't have a decade or more of hardening. It was not geared that way to begin with. Additionally, I think Go has a little more expressiveness when it comes to data structure layout, inline structs, and so forth. For us, one additional thing is that our JavaScript codebase is written in a highly functional style -- we use very few classes; in fact, the core compiler doesn't use classes at all -- and that is actually a characteristic of Go as well. Go is based on functions and data structures, whereas C# is heavily OOP-oriented, and we would have had to switch to an OOP paradigm to move to C#. That transition would have involved more friction than switching to Go. Ultimately, that was the path of least resistance for us.
Dimitri: Great -- I mean, I have questions about that. I've struggled in the past a lot with Go in functional programming, but I'm glad to hear you say that those aren't struggles for you. That was one of my questions.
Anders: When I say functional programming here, I mean sort of functional in the plain sense that we're dealing with functions and data structures as opposed to objects. I'm not talking about pattern matching, higher-kinded types, and monads.
[12:34] why not Rust?
Anders: When you have a product that has been in use for more than a decade, with millions of programmers and God knows how many millions of lines of code out there, you are going to be faced with the longest tail of incompatibilities you could imagine. So, from the get-go, we knew that the only way this was going to be meaningful was if we ported the existing code base. The existing code base makes certain assumptions -- specifically, it assumes that there is automatic garbage collection -- and that pretty much limited our choices. That heavily ruled out Rust. I mean, in Rust you have memory management, but it's not automatic; you can get reference counting or whatever, but then, in addition to that, there's the borrow checker and the rather stringent constraints it puts on you around ownership of data structures. In particular, it effectively outlaws cyclic data structures, and all of our data structures are heavily cyclic.
(https://www.reddit.com/r/golang/comments/1j8shzb/microsoft_r...)