This made sense for product catalogs, employee/department records, and e-commerce style use cases.
But it's an extremely poor fit for storing a world model that LLMs are building in an opaque and probabilistic way.
Prediction: a new data model will take over in the next 5 years. It might borrow principles from many decades of relational DB work, but it will also differ in fundamental ways.
GraphQL was designed to add types and remote data-fetching abstractions to a large existing PHP server-side code base. Cypher is designed to work closer to storage, although there are many implementations that run Cypher on top of anything ("table functions" in Ladybug).
Neo4j's implementation of Cypher didn't emphasize types: a relatively schemaless design made it easy to get started. The Kuzu/Ladybug implementation of Cypher, by contrast, is closer to DuckDB SQL.
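To make the typed-versus-schemaless contrast concrete, here's a minimal sketch using the kuzu Python package (the Ladybug rename may have moved module names, so treat the import as an assumption):

    import kuzu  # package name from the Kuzu era; Ladybug may differ

    db = kuzu.Database("./demo_db")
    conn = kuzu.Connection(db)

    # Typed schema up front, DuckDB-style: column types are part of the DDL.
    conn.execute("CREATE NODE TABLE Person(name STRING, age INT64, PRIMARY KEY(name))")
    conn.execute("CREATE REL TABLE Knows(FROM Person TO Person, since INT64)")

    # Neo4j would let you CREATE (:Person {name: 'Ada', age: 36}) with no
    # declared schema; here the insert is checked against the typed table.
    conn.execute("CREATE (:Person {name: 'Ada', age: 36})")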
They both have their places in computing as long as we have terminology that's clear and unambiguous.
Look at the number of comments in this story that refer to GraphQL as GQL (which is an ISO standard).
Got it, I didn't realize. Checking out the docs, it looks like GQL is based on Cypher. So people in the thread were talking about it under its common name GQL rather than its original name Cypher, and I missed that.
Store your graphs in Parquet files on object storage or in DuckDB files, and query them using strongly typed Cypher with advanced factorized join algorithms (details in a VLDB 2023 paper, from when the project was called Kuzu).
Looking to serve externalized knowledge with small language models using this infra. Watch Andrej Karpathy's "cognitive core" podcast appearances for more details.
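A minimal sketch of the Parquet-plus-typed-Cypher workflow above, again via the kuzu Python API; the file names and schema are made up for illustration, and object-storage paths depend on which extensions your build has:

    import kuzu

    db = kuzu.Database("./graph_db")
    conn = kuzu.Connection(db)

    conn.execute("CREATE NODE TABLE User(id INT64, name STRING, PRIMARY KEY(id))")
    conn.execute("CREATE REL TABLE Follows(FROM User TO User)")

    # Bulk-load straight from Parquet; the format is inferred from the extension.
    conn.execute('COPY User FROM "users.parquet"')
    conn.execute('COPY Follows FROM "follows.parquet"')

    # A multi-hop query like this is where factorized joins pay off.
    result = conn.execute(
        "MATCH (a:User)-[:Follows]->(b:User)-[:Follows]->(c:User) "
        "RETURN a.name, count(*) AS two_hop ORDER BY two_hop DESC LIMIT 5"
    )
    while result.has_next():
        print(result.get_next())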
For every person trying to move an old code base from COBOL to Java to remove tech debt, there is an equal number of people who want to rewrite a working C++ code base in Rust/Go/Zig.
Leaders who know it's a people problem, and who have read the Jerry Weinberg book, see both sides of the problem.
See my other comment in the thread. I would argue that anything that uses arcane dynamic stuff in Python should be renamed to .dpy, while the vast majority of commonly used constructs retain .py.
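Hypothetical examples of where that line might fall (the .dpy split is just this proposal, not anything Python ships):

    # stays .py: the boring majority a static subset can handle
    class User:
        def __init__(self, name: str) -> None:
            self.name = name

    def greet(u: User) -> str:
        return f"hello {u.name}"

    # would become .dpy: the arcane dynamic minority
    class Lazy:
        def __getattr__(self, attr: str) -> str:
            # attributes invented at runtime defeat static analysis
            return f"computed:{attr}"

    def patch() -> None:
        # monkey-patching a class after the fact
        User.shout = lambda self: self.name.upper()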
The issue in HN threads like this is that everyone is out to promote their favorite language or their favorite Python framework that uses dynamic stuff. The majoritarian ethos and the hacker ethos of Python don't always line up.
As Chris Lattner was saying on a recent podcast, he wrote much of Swift at home on nights/weekends over 18 months. We need someone like that to do this for SPy.
I'd imagine a lot of packages that you may want to use make deep use of some of these obscure features. So much of Django's magical "it just works" behavior is surely various kinds of deep introspection.
Not sure an AI can fix it yet. It's not just adding type annotations.
The position I take is that such obscure code in the guts of a popular package could be slowing down large amounts of deployed code elsewhere. If such code must exist, it should be marked as special (like how Cython does it).
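For a flavor of the kind of introspection in question, here's a stripped-down, Django-ish sketch (not Django's actual code): a metaclass that walks the class body to discover fields at definition time, which is exactly the sort of construct that resists static typing:

    class Field:
        def __init__(self, column_type: str) -> None:
            self.column_type = column_type

    class ModelMeta(type):
        def __new__(mcls, name, bases, ns):
            # Introspect the class body at definition time and collect fields,
            # the way an ORM builds its schema from a declarative class.
            fields = {k: v for k, v in ns.items() if isinstance(v, Field)}
            cls = super().__new__(mcls, name, bases, ns)
            cls._fields = fields
            return cls

    class Model(metaclass=ModelMeta):
        pass

    class Article(Model):
        title = Field("TEXT")
        views = Field("INTEGER")

    print(Article._fields)  # {'title': ..., 'views': ...} -- discovered, not declared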
Beyond adding type annotations, there are other important problems to solve when translating Python to Rust (the most popular path in py2many so far).
This is why I've urged the FastAPI and pydantic maintainers to give up on BaseModel and use fquery.pydantic/fquery.sqlmodel decorators instead. They translate much better.
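A sketch of why decorated plain dataclasses translate better than BaseModel; the fquery import path below is assumed from the package names above, not verified:

    from dataclasses import dataclass

    # from fquery.pydantic import pydantic   # assumed import path, unverified

    @dataclass
    class User:
        id: int
        name: str

    # A plain dataclass has an obvious 1:1 Rust translation:
    #
    #     struct User { id: i64, name: String }
    #
    # whereas a pydantic BaseModel subclass assembles its fields through a
    # metaclass and validators at import time, which a transpiler like
    # py2many can't map onto a plain struct.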
One of the problems with Ceph is that it operates at neither the highest-throughput nor the lowest-latency point of the design space.
DAOS seemed promising a couple of years ago, but in terms of popularity it seems to be stuck: no Ubuntu packages, no widespread deployment, and Optane got killed.
Yet the NVMe + metadata approach seemed promising.
Would love to see more databases fork it and adapt it to their needs.
Or, if folks have looked at it and decided against it, an analysis of why would be super interesting.
https://adsharma.github.io/more-performance-hints/