
Very cool that it’s based on RDF triples. I’m really curious about your product. What does performance look like? The TerminusDB website says it’s in-memory? Can you have a database larger than your RAM?


RDF triples turn out to be a crucial part of the architecture because they make describing deltas really straightforward: a delta is just the set of triples that were added and the set of triples that were removed.

Performance is good. You do get some degradation in query time as you append more layers, but you can squash them down to a single layer to speed things up. Often we keep a query branch where the layers are optimized for querying and another branch with all the commit history in place. We are also working on what we call delta roll-ups: squashes that keep the history. Hopefully you'll soon be able to automate the roll-ups to keep query performance at a specified level, a bit like VACUUM in Postgres.

It is in-memory, so queries are limited to what fits in RAM, but everything persists to disk, and we are betting that memory will keep getting bigger and cheaper.
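
To make the delta idea concrete, here is a rough sketch in Rust (toy code, not the actual terminusdb-store internals): each layer records only the triples added and removed relative to its parent, and a squash collapses the whole chain into a single layer so a query only has to consult one delta.

    use std::collections::HashSet;

    // A triple is just (subject, predicate, object); plain strings here for brevity.
    type Triple = (String, String, String);

    // Each append-only layer records only what changed relative to its parent.
    struct Layer {
        added: HashSet<Triple>,
        removed: HashSet<Triple>,
    }

    // Materialise the full triple set by replaying layers oldest-to-newest.
    fn materialise(layers: &[Layer]) -> HashSet<Triple> {
        let mut triples = HashSet::new();
        for layer in layers {
            for t in &layer.removed {
                triples.remove(t);
            }
            for t in &layer.added {
                triples.insert(t.clone());
            }
        }
        triples
    }

    // "Squash": collapse the history into one layer, trading history for query speed.
    fn squash(layers: &[Layer]) -> Layer {
        Layer {
            added: materialise(layers),
            removed: HashSet::new(),
        }
    }

    fn main() {
        let t = |s: &str, p: &str, o: &str| (s.to_string(), p.to_string(), o.to_string());
        let history = vec![
            Layer {
                added: [t("alice", "knows", "bob")].into_iter().collect(),
                removed: HashSet::new(),
            },
            Layer {
                added: [t("bob", "knows", "carol")].into_iter().collect(),
                removed: [t("alice", "knows", "bob")].into_iter().collect(),
            },
        ];
        let flat = squash(&history);
        assert_eq!(flat.added, materialise(&history));
        println!("{} triples after squash", flat.added.len());
    }

A roll-up in this picture would keep the per-layer history around while also maintaining a squashed layer for queries; the sketch above only shows the plain squash.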


+1, just wanted to say thanks for pointing out the use of RDF; I would have missed that.

I have been into the semantic web since year zero, and after years of doing deep learning work I just started a Knowledge Graph job two weeks ago.

Anyway thanks, I am going to dig into TerminusDB as soon as I get back from my morning hike.

EDIT: wow, TerminusDB is written in SWI-Prolog.


Yes - the server is in SWI-Prolog and the distributed store is in Rust. Great combo, we think.

We tried to take some of the best ideas from the semantic web and make them as practical as possible. Great to hear that people are getting knowledge graph jobs out there!



