
If I'm at the point of dropping foreign keys, why not just switch to DynamoDB?


> Imagine SQL was instead based on an actual programming language, with variables and functions.

This is what the entire article is about. That paragraph is meant to illustrate the problem with SQL through an analogy.


The article is about functors: some new concept that gives you a tiny subset of the functionality that a full language provides. Most languages have loops, control structures, and the ability to store any data in a variable.


This suffers from the "sufficiently smart compiler" problem. The query planner that can do what I mean with maximal efficiency is always just over the horizon. There's always yet another query that can't be optimized automatically.


It's not a problem in MSSQL, so solving the fat head of problems is clearly possible.

The escape hatch in MSSQL for the long tail is materialising an intermediate result to a temp table.
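For concreteness, a minimal sketch of that escape hatch in T-SQL (schema and names are made up):

    -- Materialise the expensive intermediate result into a temp table.
    SELECT o.customer_id, SUM(o.total) AS total_spend
    INTO #spend_by_customer
    FROM dbo.Orders AS o
    WHERE o.created_at >= '2024-01-01'
    GROUP BY o.customer_id;

    -- The second half of the query is now planned against a concrete table
    -- with real row counts and statistics, instead of a deep subquery.
    SELECT c.name, s.total_spend
    FROM #spend_by_customer AS s
    JOIN dbo.Customers AS c ON c.id = s.customer_id
    WHERE s.total_spend > 1000;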


One significant difference between PG and MSSQL is that MSSQL caches query plans, which allows its optimizer to be much more sophisticated (and slower).

PG re-optimizes every query over and over unless you manually do a prepared statement, and that only lasts for the session it's prepared in. Therefore it's very important that the optimizer not take much time before execution.
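As a sketch, this is what the manual escape hatch looks like in plain SQL (table and column names are hypothetical); the prepared statement, and whatever plan reuse it enables, disappears when the session ends:

    -- Parse and plan once, within this session only.
    PREPARE orders_for_user (int) AS
        SELECT id, total
        FROM orders
        WHERE user_id = $1;

    -- Later executions can reuse the plan, subject to plan_cache_mode
    -- (by default Postgres still tries custom plans for the first few runs).
    EXECUTE orders_for_user(42);
    EXECUTE orders_for_user(99);

    DEALLOCATE orders_for_user;  -- or just close the connection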


> PG re-optimizes every query over and over unless you manually do a prepared statement, and that only lasts for the session it's prepared in.

But you really should be using at least a query builder that does the prepared-statement thing under the hood, not least because writing dynamic queries with string concatenation sucks.


Parameterized SQL is a good thing regardless of plan caching, but it also helps with systems that do plan caching.

That the client must explicitly prepare the statement, and that the preparation is tied to the connection, is pretty clunky, and it still means the statement must be replanned for each separate connection.

Also, since the PG devs assume prepared statements are not the norm, they avoid letting the optimizer take its time: they shoot for a fast optimizer rather than the fastest query plan.

DBs like MSSQL and Oracle concern themselves much less with how long it takes to optimize, since repeated queries will all use the same plan automatically. That also allows interesting features like saving and loading plans from different systems and freezing them so they don't suddenly go bad in production. Those systems also support hints, unlike PG.
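For example, in T-SQL you can pin an index or cap parallelism inline (the index name and schema here are invented), which stock PG has no equivalent for without extensions:

    SELECT o.id, o.total
    FROM dbo.Orders AS o WITH (INDEX (IX_Orders_CustomerId))
    WHERE o.customer_id = 42
    OPTION (MAXDOP 1);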


At least in Postgres, table-valued functions can't take tables as arguments, only scalars. That's the main difference: functors can not just return tables, but also take tables satisfying some interface as arguments.

https://www.postgresql.org/docs/7.3/xfunc-tablefunctions.htm...

I thought I had written a footnote or appendix about this but I guess I forgot.
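To make the asymmetry concrete, here is a sketch of a set-returning function in Postgres (schema invented): it can return a table, but every parameter has to be a scalar.

    -- The function can *return* a table...
    CREATE FUNCTION recent_orders(since date)
    RETURNS TABLE (order_id int, total numeric)
    LANGUAGE sql AS $$
        SELECT id, total FROM orders WHERE created_at >= since;
    $$;

    SELECT * FROM recent_orders('2024-01-01');

    -- ...but there is no way to declare a parameter like "any table with
    -- columns (id int, created_at date, total numeric)" and then call
    -- recent_orders(some_other_table). That's the gap functors fill.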


MSSQL can take tables as arguments if they are table variables declared to be of a pre-defined table type. But that restriction limits their use a lot.
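A rough sketch of what that looks like (type, schema, and names are invented); the table argument has to be a variable of a table type declared up front, and it's READONLY inside the function:

    CREATE TYPE dbo.IdList AS TABLE (id int PRIMARY KEY);
    GO

    CREATE FUNCTION dbo.OrdersForCustomers (@ids dbo.IdList READONLY)
    RETURNS TABLE
    AS RETURN (
        SELECT o.*
        FROM dbo.Orders AS o
        JOIN @ids AS i ON i.id = o.customer_id
    );
    GO

    DECLARE @wanted dbo.IdList;
    INSERT INTO @wanted (id) VALUES (1), (2), (3);
    SELECT * FROM dbo.OrdersForCustomers(@wanted);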


> LLMs have proved amazingly facile with language.

If you took a transcript of a conversation with Claude 3.6 Sonnet, and sent it back in time even five years ago (just before the GPT-3 paper was published), nobody would believe it was real. They would say that it was fake, or that it was witchcraft. And whoever believed it was real would instantly acknowledge that the Turing test had been passed. This refusal to update beliefs on new evidence is very tiresome.


Similarly if you could let a person from five years ago have a spoken conversation with ChatGPT Advanced Voice mode or Gemini Live. For me five years ago, the only giveaways that the voice on the other end might not be human would have been its abilities to answer questions instantaneously about almost any subject and to speak many different languages.

The NotebookLM “podcasters” would have been equally convincing to me.


The whole point of the post is that many have updated their beliefs too much.


And they were right. Television created nothing of value.


It broke movie studios as gatekeepers for mass distribution.


> And they were right. Television created nothing of value.

Exactly. Bad things can and do become normalized and then unremarked upon. Some people confuse that phenomenon with those things actually being good.

People often have clearer eyes at the transition.



The author is an ESL speaker.


FSRS gives you the same retention for less work, or, equivalently, you can remember more things for the same amount of time worked.


While being more opaque & difficult to self-correct for. How much more work are we talking about? A theoretical couple of minutes in a year? Not worth it.


Using the scheduler estimates from the FSRS simulator [1], with desired retention held equal at 85%, I saw approximately 20-30% improvements in workload upon switching to FSRS from SM-2. Even disregarding the "internal" improvements, the set of parameters that need modification (and that present risk to performant scheduling) is heavily reduced: you only set desired retention explicitly (a benefit in and of itself), plus minor decisions (e.g. whether to include suspended cards). Interpretability really is far less of an issue than efficiency, and frankly the achievements of the team behind FSRS (including their decision to make it publicly available) should be lauded.

[1] https://colab.research.google.com/github/open-spaced-repetit...


Ok, I've identified the root of my confusion here. Mochi has two FAQs:

https://mochi.cards/faq.html says Mochi uses SM-2 without EF adjustment and without resetting intervals. This is outdated.

https://mochi.cards/docs/faq/ says Mochi multiplies the interval by a number >1 on recall and by a number in [0,1] on forgetting. This FAQ is linked from the front page and seems to be the correct one.

And, indeed, checking the review settings for my deck shows the multiplier settings.
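So with, say, a 2.0 multiplier on recall and 0.5 on a miss (made-up values, not Mochi's defaults), a card sitting at a 10-day interval would move to 20 days if remembered and drop to 5 days if forgotten.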


Hey sorry about that. I will take down the old FAQ, or at least redirect it to the new one. The other thing not mentioned in that section of the FAQ (that I will add now) is the re-review phase. When you miss a card during review it will go to the re-review queue. Missing it again in that phase will reset its interval to 1.

Since I created Mochi there have been some new algorithms developed. Most notably FSRS [0] looks promising.

[0] https://github.com/open-spaced-repetition/free-spaced-repeti...


Would you consider supporting FSRS in Mochi?

