Hacker News | bokwoon's comments

https://www.youtube.com/watch?v=10qowKUW82U

[19:14] why not C#?

Dimitri: Was C# considered?

Anders: It was, but I will say that I think Go definitely is -- it's, I'd say, the lowest-level language we can get to and still have automatic garbage collection. It's the most native-first language we can get to and still have automatic GC. In C#, it's sort of bytecode first, if you will; there is some ahead-of-time compilation available, but it's not on all platforms and it doesn't have a decade or more of hardening. It was not geared that way to begin with. Additionally, I think Go has a little more expressiveness when it comes to data structure layout, inline structs, and so forth. For us, one additional thing is that our JavaScript codebase is written in a highly functional style -- we use very few classes; in fact, the core compiler doesn't use classes at all -- and that is actually a characteristic of Go as well. Go is based on functions and data structures, whereas C# is heavily OOP-oriented, and we would have had to switch to an OOP paradigm to move to C#. That transition would have involved more friction than switching to Go. Ultimately, that was the path of least resistance for us.

Dimitri: Great -- I mean, I have questions about that. I've struggled in the past a lot with Go in functional programming, but I'm glad to hear you say that those aren't struggles for you. That was one of my questions.

Anders: When I say functional programming here, I mean sort of functional in the plain sense that we're dealing with functions and data structures as opposed to objects. I'm not talking about pattern matching, higher-kinded types, and monads.

[12:34] why not Rust?

Anders: When you have a product that has been in use for more than a decade, with millions of programmers and, God knows how many millions of lines of code out there, you are going to be faced with the longest tail of incompatibilities you could imagine. So, from the get-go, we knew that the only way this was going to be meaningful was if we ported the existing code base. The existing code base makes certain assumptions -- specifically, it assumes that there is automatic garbage collection -- and that pretty much limited our choices. That heavily ruled out Rust. I mean, in Rust you have memory management, but it's not automatic; you can get reference counting or whatever you could, but then, in addition to that, there's the borrow checker and the rather stringent constraints it puts on you around ownership of data structures. In particular, it effectively outlaws cyclic data structures, and all of our data structures are heavily cyclic.
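To illustrate the cyclic-structures point (example mine, not from the talk): nodes that point both up and down form reference cycles, which a tracing garbage collector like Go's reclaims automatically, whereas reference counting leaks them without extra machinery such as weak references.

```go
package main

import "fmt"

// Node is a hypothetical AST-like node. The Parent back-pointer plus the
// Children slice create a cycle; Go's tracing GC collects the whole cycle
// once it becomes unreachable.
type Node struct {
	Name     string
	Parent   *Node
	Children []*Node
}

func addChild(parent *Node, name string) *Node {
	child := &Node{Name: name, Parent: parent} // back-pointer closes the cycle
	parent.Children = append(parent.Children, child)
	return child
}

func main() {
	root := &Node{Name: "root"}
	leaf := addChild(root, "leaf")
	fmt.Println(leaf.Parent.Name) // root
	// When root and leaf go out of scope, the GC reclaims the entire cycle.
}
```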

(https://www.reddit.com/r/golang/comments/1j8shzb/microsoft_r...)


I wonder if they explored using a GC crate like https://github.com/Manishearth/rust-gc with Rust. I think that would probably remove the borrow-checker/cycle impedance mismatch while providing a path to removing GC from the critical path altogether. Of course, the Rust GC crates are probably less mature, and maybe slower, than Go's, so if there's no path to getting rid of cycles as part of down-the-road perf optimization, then Go makes more sense.


>C# is heavily OOP-oriented, and we would have had to switch to an OOP paradigm to move to C#

They could have used static classes in C#.


https://twitter.com/edgedatabase/status/1620582614703964160

EdgeQL:

    select Child {name}
    filter .<child[is Parent].name = 'Uma Thurman';
SQL:

    select child.name
    from child
    join parent_child_rel using (child_id)
    join parent using (parent_id)
    where parent.name = 'Uma Thurman';
IMO EdgeQL is going too far with the sigils


I want to stress that for anyone looking to use Neovim as an IDE (lite), all you need is something that speaks the Language Server Protocol (LSP). You don't really need to download all these configuration frameworks.

- Neovim's native LSP client (with the nvim-lspconfig plugin) handles completions, go-to-definition, linting, formatting and refactoring. (Here's my init.vim config for the LSP: https://gist.github.com/bokwoon95/d9420fce4836f6b518b02bd60a...).

- Instead of trying to get an autocompletion plugin to work, just use Vim's native omnicompletion (<C-x><C-o>).

- Instead of a plugin manager, use Vim 8's native packages (https://vi.stackexchange.com/a/9523). I use a custom shell script for updating plugins via git (https://gist.github.com/bokwoon95/172ecc04039afdbe9425678946...)

- I use Fern.vim for a file explorer.

- I use Telescope.nvim for fuzzy jump-to-file.

- Dynamic statusline is just a few lines of config (https://gist.github.com/bokwoon95/d9420fce4836f6b518b02bd60a...), no statusline plugin needed.

- No debugger support, I'll use a CLI debugger or an IDE if I need one.

I've been using this setup for a very long time, and I barely touch my init.vim anymore. Here's a recent thread from the Neovim subreddit where the OP talks about how much effort it takes to properly set up a configuration framework: https://www.reddit.com/r/neovim/comments/11p6iiu/i_love_vim_...

> I want to fix my problems and consolidate my environments BUT setting it up is too painful and I don't another hobby as a job (I already have servers and a 3d printer lol). I've tried multiple times this week to setup either pure neovim, lunarvim, nvChad, astrovim and LazyVim starter and there's always something that I can't find how to change even after searching online and not even taking into account that setting it up takes like a day for each. I don't really want to dedicate a whole month to reading docs, debugging and discovering plugins to fix issues that i'm hitting and I don't wan to blindly learn some commands to then throw them away because that distro didn't work out.


> at the same time, they provide little (or nothing) utilities to map the data coming from a complex query, back to objects

Example of a query builder with built-in mapping capabilities (it’s mine):

https://github.com/bokwoon95/sq

I feel some exasperation when I see query builders that throw a query string back at the user and ask them to map the results themselves. That's easily the most tedious and mistake-prone part of using SQL queries. In the case of my library, projection and mapping are handled by the same callback function, so in order to SELECT a field you basically map it and it's automatically added to the SELECT clause.
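A rough sketch of the idea (invented names, not my library's actual API): run the mapper callback once in a "dry run" to discover which columns it reads, use that to build the SELECT clause, then run it again per row to map results.

```go
package main

import "fmt"

// Row is a stand-in for a result row. Asking a row for a column both
// registers that column (during the dry-run pass) and reads the value
// (during the fetch pass).
type Row struct {
	collecting bool              // true during the dry-run pass
	columns    []string          // columns registered so far
	values     map[string]string // current row's values during the fetch pass
}

func (r *Row) String(column string) string {
	if r.collecting {
		r.columns = append(r.columns, column)
		return ""
	}
	return r.values[column]
}

// fetchAll runs the mapper once in collecting mode to build the SELECT list,
// then once per (simulated) database row to map results.
func fetchAll[T any](rows []map[string]string, mapper func(*Row) T) ([]string, []T) {
	r := &Row{collecting: true}
	mapper(r) // dry run: discover which columns the mapper needs
	r.collecting = false
	var out []T
	for _, values := range rows {
		r.values = values
		out = append(out, mapper(r))
	}
	return r.columns, out
}

func main() {
	dbRows := []map[string]string{{"name": "alice"}, {"name": "bob"}}
	columns, names := fetchAll(dbRows, func(r *Row) string {
		return r.String("name") // selecting and mapping in one place
	})
	fmt.Println(columns, names) // [name] [alice bob]
}
```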


I skim-read it but couldn't find an example of what I think of as challenging: a join of 3 tables (well, even two).

When you join 3 tables (assuming "has many" and "has many through" relationships), what you get back is many enormous rows, all in tabular form, but in the software this is usually represented as a graph. I'd love a library that helps build these massive rows back into relationships.

Please forgive me if your library does this; I saw the "mapping function", but I didn't see anything to help me build back graphs. I can map rows "easily", but I cannot recreate associations easily; it requires a bunch of work.


> I didn't see anything to help me build back graphs

Hmm you've certainly given me something to think about. Thanks.

BTW joins are not challenging, but you made me realize I didn't show any joins in my basic examples. Here is an UPDATE with JOIN in the meantime: https://bokwoon.neocities.org/sq.html#postgres-update-with-j....


Thank you for the examples. I see the joins example, but they seem to be about creating queries, not mapping data.

    sql = "select blog.name, post.content, author.display_name from blog join post on blog.id = post.blog_id join author on post.author_id = author.id"
Assuming the relationships (many blogs have many posts, and posts have one author), I'd expect something along the lines of (pseudocode):

    schemaOnTheFly = Blogs{}.HasMany(Posts{}.HasOne(Author{})) // Sorry, the syntax for this doesn't really exist
    blogs := query.Exec(sql, params, schemaOnTheFly)

    fmt.Printf("%+v\n", blogs[0].Posts[0].Author)
That's what I'd expect. Note that the schema is per-query; I'll let the developer handle sharing parts of the schema (they might be shared by a few queries).
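Sketching in Go what I mean by building the graph back (made-up types, not any library's API): once the flat join rows are mapped, they can be folded into nested structs by keying on the parent.

```go
package main

import "fmt"

// joinedRow is the flat shape of one row from blog JOIN post JOIN author.
type joinedRow struct {
	BlogName, PostContent, AuthorName string
}

type Author struct{ DisplayName string }
type Post struct {
	Content string
	Author  Author
}
type Blog struct {
	Name  string
	Posts []Post
}

// groupBlogs folds the flat rows back into Blog -> Posts -> Author,
// keyed on blog name (a real implementation would key on primary keys).
func groupBlogs(rows []joinedRow) []Blog {
	index := map[string]int{} // blog name -> position in blogs
	var blogs []Blog
	for _, row := range rows {
		i, seen := index[row.BlogName]
		if !seen {
			i = len(blogs)
			index[row.BlogName] = i
			blogs = append(blogs, Blog{Name: row.BlogName})
		}
		blogs[i].Posts = append(blogs[i].Posts, Post{
			Content: row.PostContent,
			Author:  Author{DisplayName: row.AuthorName},
		})
	}
	return blogs
}

func main() {
	rows := []joinedRow{
		{"b1", "p1", "ann"},
		{"b1", "p2", "bob"},
		{"b2", "p3", "ann"},
	}
	blogs := groupBlogs(rows)
	fmt.Println(blogs[0].Posts[1].Author.DisplayName) // bob
}
```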


> I assume that the cascading done on GitHub would be done on a FK constraint

GitHub doesn’t use foreign keys, the cascade delete must have been manually implemented.

https://news.ycombinator.com/item?id=21486494


That it took half an hour to do all that pointless activity says something.


They use Rails, right? It's probably a cascade delete, but done in the application; Rails makes that very easy.


That's by far the most important feature ngrok provides; I've never used ngrok for its ability to replay requests.


Hi, I was the one who made that comment. The JSON incompatibilities I was referring to at the time were in an older version of the JSON proposal. The current version of the JSON operators in SQLite mimics Postgres (and MySQL) perfectly, and I'm very happy about that.

In the older version of the proposal, -> was identical to ->> except that -> returned NULL on malformed JSON (while ->> raised an error). Both -> and ->> would automatically convert a JSON-encoded SQL string '"like this"' into an SQL string 'like this'. This is not how the -> operator behaves in Postgres and MySQL, and my examples were simply trying to point out that incompatibility.


I also didn't realize MySQL supported these JSON query operators now, huh. Since MySQL 5.7.9 in 2015, apparently? Not that new!

https://dev.mysql.com/doc/refman/5.7/en/json-search-function...


Nice, thanks!


I've seen a very good article on this: https://www.trek10.com/blog/dynamodb-single-table-relational.... It's an insightful dive into how KV stores can model relational data. However, I also feel that these "NoSQL can do relational modelling!" proponents are kind of hijacking the term: a relational model is understood to be based on relational algebra, and DynamoDB's everything-in-one-big-table design ain't it.

NoSQL can model relational data, but it is not a relational model (as understood by E.F. Codd).

Also, NoSQL data access patterns are pretty much set in stone, which is kind of what the relational model explicitly avoids.


For modelling trees, I much prefer using a list of ancestors over the parent-child adjacency list, because it doesn't require recursion to answer basic questions. I had thought about how I would model threaded comments in SQL some time ago and wrote down an example of what I mean:

https://gist.github.com/bokwoon95/4fd34a78e72b2935e78ec0f40e...
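Sketching the idea in Go rather than SQL (types and field names made up for illustration): each comment stores its full ancestor path, so "all descendants of X" becomes a simple prefix match (a LIKE 'prefix%' in SQL) instead of a recursive query.

```go
package main

import (
	"fmt"
	"strings"
)

// Comment stores the IDs of all its ancestors as a path string (a
// materialized path), e.g. "1/2/3" for comment 3 under 2 under 1.
// Unlike a parent_id adjacency list, finding a whole subtree needs
// no recursion.
type Comment struct {
	ID   int
	Path string // ancestor IDs joined by "/", ending in this comment's ID
}

// descendants returns every comment whose path passes through ancestorID.
func descendants(comments []Comment, ancestorID int) []Comment {
	prefix := fmt.Sprintf("%d/", ancestorID)
	var out []Comment
	for _, c := range comments {
		if strings.HasPrefix(c.Path, prefix) || strings.Contains(c.Path, "/"+prefix) {
			out = append(out, c)
		}
	}
	return out
}

func main() {
	comments := []Comment{
		{1, "1"},
		{2, "1/2"},
		{3, "1/2/3"},
		{4, "4"},
	}
	for _, c := range descendants(comments, 1) {
		fmt.Println(c.ID) // prints 2, then 3
	}
}
```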


The data structure you used is covered by the native Postgres data type "ltree", right?

https://www.postgresql.org/docs/current/ltree.html

  "This module implements a data type ltree for representing labels of data stored in a hierarchical tree-like structure. Extensive facilities for searching through label trees are provided."


Given that the author also wrote https://scattered-thoughts.net/writing/select-wat-from-sql/, I don't think he knows nothing about inner joins. I think he was just using that equivalent form to compare it with the terser `foo.bar.quux`. It is pretty strange to compare it with `fk_join(foo, 'bar_id', bar, 'quux_id', quux)` though, because SQL already has the equivalent `foo JOIN bar USING (bar_id) JOIN quux USING (quux_id)`.


Yeah, it reads like going down the path of writing an SQL-like parser without actually having used SQL in anger.

