eatonphil's comments | Hacker News

What is the reason or benefit of them being so secretive about this?

Usually, though not always, a company will tell you if they're making money on something, and if they're not, they beat around the bush like this. Notice how, for example, Gwynne Shotwell never beats around the bush like this when talking about Starlink.

Notice the weird language:

> That’s making progress in terms of unit economics very, very positive.

He says the "progress" is "very, very positive," but if you're not paying close enough attention, you might come away thinking that the unit economics themselves are what's very, very positive.

All that said, what he's saying makes sense. They're able to charge more for their rides since they offer the convenience of not having to deal with a driver, and they're not paying the driver, who is the most expensive part, so yea, I'm bullish on them.


They don't gain much from disclosing anything, imo; their competition reads every word they say. I'm not sure it matters that much, but as a habit I don't see why they should disclose exact numbers.

Waymo doesn't gain anything. Google, i.e. Alphabet Inc., does.

Especially these days. Every scrap of news that could pump the stock price is publicized aggressively.

And this makes the absence of such actions suspicious.


Not really: these past few years, listed companies tend to be _very_ pessimistic in their quarterly projections, and then reveal either that it wasn't that bad and nothing changes, or that it was great and their valuation shoots up. Weirdly, the market doesn't react to those pessimistic projections, so it seems it's just a safe play for CEOs. They've started doing that in Europe as well.

Because no highly indebted company is going to merely "strongly hint" that they aren't hemorrhaging cash like everyone assumes; they will absolutely let you know. "Hints" are just best-effort accounting aesthetics to make it seem like the dream is just around the corner.

They have to follow SEC rules about disclosing it.

Guessing from the relevant master's thesis linked on his site and his time at Materialize and TigerBeetle, Jamie's been working on databases for at least like 15 years?

At least personally when Jamie says something I listen. :)


Many of these points are not compelling to me when 1) you can filter both rows and columns (in Postgres logical replication, anyway [0]) and 2) you have SQL views.

[0] https://www.postgresql.org/docs/current/logical-replication-...
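
To illustrate point 1, here's a minimal sketch of a row- and column-filtered publication, assuming psycopg2, a Postgres 15+ publisher, and a hypothetical orders table (all names are made up for the example):

    # Sketch: column- and row-filtered logical replication (Postgres 15+).
    # Assumes psycopg2 and a hypothetical `orders` table; run on the publisher.
    import psycopg2

    conn = psycopg2.connect("dbname=app")  # hypothetical DSN
    conn.autocommit = True
    cur = conn.cursor()

    # Publish only two columns, and only the rows matching the filter.
    cur.execute("""
        CREATE PUBLICATION active_orders
            FOR TABLE orders (id, status)
            WHERE (status = 'active');
    """)

    # A subscriber would then point at this publication, e.g.:
    #   CREATE SUBSCRIPTION sub CONNECTION '...' PUBLICATION active_orders;
    cur.close()
    conn.close()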


Is it possible to create a filter that can work over a complex join operation?

That's what IVM systems like Noria can do. With an application plus a cache, the application computes the final result and stores it in the cache; with these new IVM systems, you get that precomputed data directly from the database.

Views in Postgres are not materialized, right? So every small delta would require a refresh of the entire view.
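
(For reference, a plain Postgres view is just a stored query that's re-executed on every read; a materialized view stores the result, but REFRESH MATERIALIZED VIEW recomputes the whole query rather than applying the delta. A minimal sketch, again assuming psycopg2 and a hypothetical orders table:)

    # Sketch: plain view vs. materialized view in Postgres.
    # Assumes psycopg2 and a hypothetical `orders` table.
    import psycopg2

    conn = psycopg2.connect("dbname=app")  # hypothetical DSN
    conn.autocommit = True
    cur = conn.cursor()

    # Plain view: a saved query, recomputed on every read.
    cur.execute("""
        CREATE VIEW order_totals AS
            SELECT customer_id, sum(amount) AS total
            FROM orders GROUP BY customer_id;
    """)

    # Materialized view: stored results, but updating them means
    # recomputing the whole query, not applying the small delta.
    cur.execute("""
        CREATE MATERIALIZED VIEW order_totals_mat AS
            SELECT customer_id, sum(amount) AS total
            FROM orders GROUP BY customer_id;
    """)
    cur.execute("REFRESH MATERIALIZED VIEW order_totals_mat;")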


Take a look at Alex Miller's diagrams for what function calls are actually doing on various systems.

https://transactional.blog/how-to-learn/disk-io


> Sqlite's test suite simulates just about every kind of failure you can imagine

The page you link even mentions scenarios they know do happen and that they still assume won't. So even sqlite doesn't make anywhere near as strong a claim as you make.

> SQLite assumes that the operating system will buffer writes and that a write request will return before data has actually been stored in the mass storage device. SQLite further assumes that write operations will be reordered by the operating system. For this reason, SQLite does a "flush" or "fsync" operation at key points. SQLite assumes that the flush or fsync will not return until all pending write operations for the file that is being flushed have completed. We are told that the flush and fsync primitives are broken on some versions of Windows and Linux. This is unfortunate. It opens SQLite up to the possibility of database corruption following a power loss in the middle of a commit. However, there is nothing that SQLite can do to test for or remedy the situation. SQLite assumes that the operating system that it is running on works as advertised. If that is not quite the case, well then hopefully you will not lose power too often.
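
The pattern SQLite is describing is essentially "write, then fsync, and trust that fsync does what it says." A minimal sketch of that write path, just to show where the assumption sits (the filename is made up):

    # Sketch: the write-then-fsync pattern a commit relies on.
    # If fsync returns before the data is actually durable, a power loss
    # here can still lose or corrupt the "committed" data: exactly the
    # scenario the SQLite docs say they can't detect or remedy.
    import os

    def durable_write(path: str, data: bytes) -> None:
        fd = os.open(path, os.O_WRONLY | os.O_CREAT, 0o644)
        try:
            os.write(fd, data)
            # The OS may buffer and reorder writes; fsync is the barrier
            # the commit protocol depends on.
            os.fsync(fd)
        finally:
            os.close(fd)

    durable_write("journal.tmp", b"pending transaction data")  # hypothetical file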


There was a time when Oracle databases used raw disk partitions to minimize the OS's influence on what happens between memory and storage. It was mostly for multiple instances looking at the same SCSI device (Oracle Parallel Server).

I don't think that is often done now.


> So even sqlite doesn't make anywhere near as strong a claim as you make.

And? If you write to a disk and later that disk is missing, you don't have durability. SQLite cannot automatically commit your writes to a satellite for durability against a species-ending event on Earth, and hence its "durability" has limits, exactly as they spell out.


You're arguing a strawman; I pointed at a specific example. Sticking with that example: they could probe for this behavior or this OS version and crash immediately, telling the user to update their OS. Instead, it seems they acknowledge the issue exists and hope it doesn't happen. Which, hey, everybody does, but that's not the claim OP was making.


It’s not really a library’s job to cover all bases like you’re suggesting. They outline the failure scenarios fairly well, and users are expected to take note.


To the contrary: when all the good New York meetups (Papers We Love, Linux User Group) didn't come back, and inspired by the continuously running Munich Database Meetup and TUMuchData, I started the NYC Systems Coffee Club and co-started NYC Systems (a talk series), after which came Berlin Systems Group, Bengaluru Systems Meetup, San Francisco Systems Club, Systems from HEL, DC Systems, Vancouver Systems, South Bay Systems, and Seattle Systems.

So I think people are really eager for high quality talks and chances to gather with smart people.

What's more, I think there are not enough meetups in almost any major city to satisfy the demand from speakers or attendees. For example, NYC Systems gets hundreds of people asking to speak (we have 12 speakers a year) and 2-3x as many attendees wanting to come as we have space for.


Is this sqlite built from source or a distro sqlite? It's possible the defaults differ with build settings.


The one avinassh shows is macOS's SQLite under /usr/bin/sqlite3. In general it also has some other weird settings, like not having the concat() function, last I checked.
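
One way to see the differences is to ask each binary for its version and compile options. A sketch, assuming Apple's /usr/bin/sqlite3 and a Homebrew build (the paths are assumptions; adjust to wherever your other build lives):

    # Sketch: compare the system SQLite binary against another build.
    import subprocess

    SQL = "SELECT sqlite_version(); PRAGMA compile_options;"

    for binary in ["/usr/bin/sqlite3", "/opt/homebrew/opt/sqlite3/bin/sqlite3"]:
        result = subprocess.run([binary, ":memory:", SQL],
                                capture_output=True, text=True)
        print(binary)
        print(result.stdout)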


The Apple-built macOS SQLite is something.

Another oddity: mysteriously reserving 12 bytes per page for whatever reason, making databases created with it forever incompatible with the checksum VFS.

Another: having three different layers of fsync to avoid ever actually doing an F_FULLFSYNC, even when you ask it for a full fsync (read up on F_BARRIERFSYNC).
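
Both oddities are visible from the outside: the reserved-bytes count is a single byte at offset 20 of the SQLite file header, and F_FULLFSYNC is an fcntl you can issue yourself on macOS. A sketch (the database path is hypothetical):

    # Sketch: read the per-page reserved bytes from an SQLite file header,
    # then force a real flush with F_FULLFSYNC (macOS only).
    import fcntl
    import os

    # Byte 20 of the 100-byte header is "reserved space per page";
    # Apple's SQLite tends to set it to 12, stock builds leave it at 0.
    with open("app.db", "rb") as f:  # hypothetical database file
        header = f.read(100)
        print("reserved bytes per page:", header[20])

    # fsync() on macOS may only reach the drive's cache; F_FULLFSYNC asks
    # the drive itself to flush.
    fd = os.open("app.db", os.O_RDWR)
    try:
        fcntl.fcntl(fd, fcntl.F_FULLFSYNC)
    finally:
        os.close(fd)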


> it also has some other weird settings

You also can't load extensions with `.load` (presumably for security, but a pain in the arse).

    user ~ $ echo | /opt/homebrew/opt/sqlite3/bin/sqlite3 '.load'
    [2025-08-25T09:27:54Z INFO  sqlite_zstd::create_extension] [sqlite-zstd] initialized
    user ~ $ echo | /usr/bin/sqlite3 '.load'
    Error: unknown command or invalid arguments:  "load". Enter ".help" for help


Anyone can implement Raft. There are plenty of implementations of it not written by Google engineers, including a custom one in the product I work on. And developers in the Software Internals Discord are constantly in there asking questions on the road to implementing Raft or Viewstamped Replication.


I believe the parent is referring to pre-Raft consensus algorithms like Paxos. I recall the explanation of Paxos being a lengthy PDF while the explanation of Raft is a single webpage, mostly visual.


Could be; it was a little ambiguously worded. That said, single-decree Paxos is much simpler than Raft, though I agree The Part-Time Parliament's analogy is a pain to read. It's better if you just ignore the beginning chunk of the paper and read the appendix instead; A1, The Basic Protocol, is simpler to understand.
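
For a sense of how small single-decree Paxos is, here's a toy, in-memory sketch of the two phases from that appendix. It's my own paraphrase (no networking, no stable storage, no retries with higher numbers), not Lamport's pseudocode:

    # Toy sketch of single-decree Paxos (the Synod "basic protocol"):
    # phase 1 (prepare/promise), then phase 2 (accept). In-memory only.
    class Acceptor:
        def __init__(self):
            self.promised = -1      # highest proposal number promised
            self.accepted_n = -1    # number of the highest accepted proposal
            self.accepted_v = None  # its value

        def prepare(self, n):
            # Phase 1b: promise to ignore proposals numbered below n, and
            # report the highest-numbered proposal already accepted (if any).
            if n > self.promised:
                self.promised = n
                return True, self.accepted_n, self.accepted_v
            return False, None, None

        def accept(self, n, v):
            # Phase 2b: accept unless a higher number has been promised.
            if n >= self.promised:
                self.promised = self.accepted_n = n
                self.accepted_v = v
                return True
            return False

    def propose(acceptors, n, value):
        # Phase 1a: prepare(n) to everyone; need promises from a majority.
        promises = [a.prepare(n) for a in acceptors]
        granted = [(an, av) for ok, an, av in promises if ok]
        if len(granted) <= len(acceptors) // 2:
            return None  # beaten by a higher number; retry with a larger n

        # If any acceptor already accepted a value, we must propose that one.
        prior_n, prior_v = max(granted, key=lambda g: g[0])
        chosen = prior_v if prior_n >= 0 else value

        # Phase 2a: accept(n, chosen); chosen once a majority accepts.
        accepts = sum(a.accept(n, chosen) for a in acceptors)
        return chosen if accepts > len(acceptors) // 2 else None

    acceptors = [Acceptor() for _ in range(5)]
    print(propose(acceptors, n=1, value="x"))  # -> x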


There’s also the side-by-side Paxos/Raft comparison in Howard & Mortier’s “Consensus on consensus”[1] paper, which is not enough to understand either by itself, but a great help if you have a longer explanation you're going through.

[1] https://dl.acm.org/doi/10.1145/3380787.3393681


Other way around.

Step 1 of Raft is for the distributed nodes to come to consensus on a fact, i.e. who the leader is.

ALL of Paxos is the distributed nodes coming to consensus on a fact.

Raft just sounds easier because its descriptions use nice-sounding prose and gloss over the details.


Check out Alex Miller's Data Replication Design Spectrum for what you might use instead of Raft for replication specifically, or what tweaks you might make to Raft for better throughput or space efficiency.

https://transactional.blog/blog/2024-data-replication-design...


Who isn't? Cockroach rewrote Postgres in Go. CedarDB rewrote Postgres in C++.

And then to lesser degrees you've got Yugabyte, AlloyDB, and Aurora DSQL (and certainly more I'm forgetting) that only replace parts of Postgres.


Neither Cockroach nor CedarDB rewrote anything; they built things from scratch and just used the same client protocol. There are a bunch of other unrelated databases using the Postgres protocol, btw.


I'm not talking about speaking the protocol. I'm talking about trying as hard as they can to be indistinguishable from Postgres (to a non-operations user). And that list is very small.

