See, it's the exact opposite for me, although my experience is mostly a) building giant cubes in giant enterprise orgs with hourly data volumes you couldn't fit in memory, and b) 10-15 years old (so the hardware sucked and we didn't have DuckDB). But yeah, I don't think the O in OLAP standing for 'online' ever really made sense.

I'm curious how much of this article is OLAP-specific vs. just generic good practice for tuning batch insert chunk size. The whole "batch your writes, use 100k rows or 1s worth of data" advice applies equally to pretty much any database; they seem to be ignoring the built-in bulkload methods so they can argue that INSERTs are slow and then fix that by adding Kafka, for reasons? Maybe I'm missing something.
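To illustrate: even a dumb buffer in front of the client gets you the batching for free. Rough sketch using the official Node client (@clickhouse/client); the table name, row shape, and thresholds are all made up:

    import { createClient } from '@clickhouse/client';

    const client = createClient({ url: 'http://localhost:8123' });

    // Hypothetical row shape and table, purely for illustration.
    type Event = { ts: string; user_id: number; payload: string };

    const MAX_ROWS = 100_000; // the "100k rows" rule of thumb
    const MAX_AGE_MS = 1_000; // ...or "1s worth of data", whichever hits first

    const buffer: Event[] = [];
    let lastFlush = Date.now();

    async function write(event: Event) {
      buffer.push(event);
      if (buffer.length >= MAX_ROWS || Date.now() - lastFlush >= MAX_AGE_MS) {
        await flush();
      }
    }

    async function flush() {
      if (buffer.length === 0) return;
      const rows = buffer.splice(0, buffer.length);
      lastFlush = Date.now();
      // One INSERT per batch instead of one per row. Nothing OLAP-specific
      // here; the same trick works against Postgres or anything else.
      await client.insert({ table: 'events', values: rows, format: 'JSONEachRow' });
    }

About the only thing Kafka adds on top of this is durability for the in-flight buffer; it isn't what makes the inserts fast.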



Author here—this article was meant to highlight how you can optimize writes to CH with streams.

If you want to directly insert data into ClickHouse with MooseStack, we have a direct insert method that allows you to use ClickHouse's bulkload methods.

Here's the implementation: https://github.com/514-labs/moosestack/blob/43a2576de2e22743...

Documentation is here: https://docs.fiveonefour.com/moose/olap/insert-data#performa...
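For a rough sense of what that boils down to under the hood: the official Node client accepts a Readable stream, so the whole batch goes over as a single bulk INSERT. This is a sketch against that client, not our actual MooseStack code; the table and rows are placeholders:

    import { createClient } from '@clickhouse/client';
    import { Readable } from 'node:stream';

    const client = createClient({ url: 'http://localhost:8123' });

    // An object-mode Readable feeds client.insert directly, so rows are
    // streamed to ClickHouse as one bulk INSERT rather than one per row.
    const rows = Readable.from([
      { ts: '2024-01-01T00:00:00Z', user_id: 1, payload: 'a' },
      { ts: '2024-01-01T00:00:01Z', user_id: 2, payload: 'b' },
    ]);

    await client.insert({ table: 'events', values: rows, format: 'JSONEachRow' });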

Would love to hear your thoughts on our direct insert implementation!



