Yeah, I tend to work with string data around the 15 to 30 ish char count, so I'm also sceptical of German strings when it comes to raw memory usage. What really interests me is that, in theory, an SSTable built from German strings that point into a larger text buffer further down could result in fewer page faults during a binary search? Maybe?
I mean, if you're doing binary searches, the first thing to do is to switch to Eytzinger searches (at least for the prefixes; it's unclear if Eytzing-sorting large strings matters), which are much more cache-friendly. (Incidentally, Eytzinger was Austrian, not German - same language though!)
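For the curious, a minimal sketch of the layout and the search - plain Python just to show the index math (the cache win obviously needs a flat native array; `eytzinger` and `lower_bound` are my names):

```python
def eytzinger(sorted_keys):
    """Re-lay sorted keys into BFS (Eytzinger) order; slot 0 stays unused."""
    out = [None] * (len(sorted_keys) + 1)

    def fill(k, i):
        # An in-order walk over the implicit heap assigns sorted keys to BFS slots.
        if k < len(out):
            i = fill(2 * k, i)
            out[k] = sorted_keys[i]
            i = fill(2 * k + 1, i + 1)
        return i

    fill(1, 0)
    return out


def lower_bound(eyt, x):
    """Smallest key >= x, or None if every key is < x."""
    n = len(eyt) - 1
    k = 1
    while k <= n:
        # Descend: right child if the current key is too small, else left child.
        k = 2 * k + (eyt[k] < x)
    # Undo the trailing run of right-turns: shift past the lowest zero bit of k.
    k >>= (~k & (k + 1)).bit_length()
    return eyt[k] if k else None


eyt = eytzinger([1, 3, 5, 7, 9, 11, 13])  # -> [None, 7, 3, 11, 1, 5, 9, 13]
```

The descent touches one contiguous level after another instead of jumping around the middle of the array, which is where the cache-friendliness comes from.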
Second, I'd probably switch to struct-of-arrays instead of array-of-structs, since the search part only needs the (head of the) key. Possibly some trie-like logic would also help (that is, avoid duplicating the common prefix; instead point to an array of continuations rather than a single one), but beware of costly indirections.
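The struct-of-arrays point can be sketched like this, assuming a fixed prefix width (`PREFIX_LEN` and `find` are made up): search a contiguous array of key heads first, and only touch full keys inside the run where the prefix ties:

```python
import bisect

PREFIX_LEN = 4  # made-up width; German strings happen to keep a 4-byte prefix inline

keys = sorted(["apple", "apricot", "banana", "cherry"])
# The hot array: just the key heads, contiguous and cache-friendly.
prefixes = [k[:PREFIX_LEN] for k in keys]


def find(x):
    """Index of x in keys, or None; full keys are touched only on prefix ties."""
    xp = x[:PREFIX_LEN]
    lo = bisect.bisect_left(prefixes, xp)
    hi = bisect.bisect_right(prefixes, xp)
    # Finish the search over the (usually tiny) prefix-equal run.
    i = bisect.bisect_left(keys, x, lo, hi)
    return i if i < hi and keys[i] == x else None
```

This works because a prefix of a key sorts consistently with the key itself, so the prefix array and the key array share the same order.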
Third, I would make sure the user is able to specify the SSO limit, since different domains will have different requirements.
Fourth ... if you're column-oriented (or for simple data in the first place), consider implicit union of different `Set` (or whatever) implementations. For example, store all short strings in one array, and all long strings in a different array.
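A toy version of that implicit union, with an arbitrary `SSO_LIMIT` of 12 (which, per the point above, you'd want user-tunable) - the probe string's own length picks the array, so no tag bits are needed:

```python
import bisect

SSO_LIMIT = 12  # arbitrary cutoff; should really be configurable per domain


class SplitStringSet:
    """Toy 'implicit union': short and long strings live in separate sorted arrays."""

    def __init__(self, strings):
        self.short = sorted(s for s in strings if len(s) <= SSO_LIMIT)
        self.long = sorted(s for s in strings if len(s) > SSO_LIMIT)

    def __contains__(self, s):
        # The probe's length tells us which array could possibly hold it.
        arr = self.short if len(s) <= SSO_LIMIT else self.long
        i = bisect.bisect_left(arr, s)
        return i < len(arr) and arr[i] == s
```

In a real column store the short array could be a flat fixed-width buffer with no pointers at all, which is where the win comes from.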
I did not know about Eytzing-sorting, I'll look into it after my vacation, thanks! And yeah, our current system is column-oriented, and I already tried most of your recommendations (including tries), but the biggest limitation we face is that different kinds of queries are better served by different kinds of data structures, and you can only store a few of them on disk before the cost to index becomes too big.
Whoops, I dropped the "er" the second time. I've also seen it labeled "cache-friendly binary sort", and sometimes the Breadth-First-Search aspect is played up (but that's unsearchable).
Logical-order (in-order DFS) iteration is obvious and trivial, though a lot of tasks are fine with physical-order (BFS) iteration. Beware that some tasks that nominally don't care about input order still perform much better on already-sorted input, though.
Conversion between logical index and physical index (if you need it - chances are you don't) can still be done efficiently, but requires some (fun!) thought, especially for the non-power-of-2 tail.
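For the curious, here is one way to do the physical-to-logical direction per query in O(log² n), by replaying the root-to-node path and counting subtree sizes - the clipping in `subtree_size` is what quietly absorbs the non-power-of-2 tail (helper names are mine):

```python
def subtree_size(k, n):
    """Node count of the subtree rooted at heap index k (1-indexed), clipped to n."""
    size, lo, hi = 0, k, k
    while lo <= n:
        size += min(hi, n) - lo + 1  # this level, minus whatever falls off the tail
        lo, hi = 2 * lo, 2 * hi + 1
    return size


def physical_to_logical(k, n):
    """Sorted-order (0-based) rank of the element stored at Eytzinger slot k."""
    acc, node = 0, 1
    # Replay the root-to-k path: the bits of k below its leading 1 bit.
    for bit in bin(k)[3:]:
        if bit == "0":
            node = 2 * node  # going left skips no predecessors
        else:
            # Going right skips the whole left subtree plus the node itself.
            acc += subtree_size(2 * node, n) + 1
            node = 2 * node + 1
    return acc + subtree_size(2 * k, n)  # plus k's own left subtree
```

For a table built once and queried forever, just materializing the permutation during the O(n) build is of course also fine.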
Explicit prefetching is trivial to arrange for k-great-grandchildren if needed (though the width of course grows exponentially). Using an n-ary rather than binary tree is possible, but I'm not sure of the cache effects in practice. Switching to a linear search just at the last level may be worth it? All of this means more complex logic, though.
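The prefetch targets are easy to compute because, in the Eytzinger layout, all descendants of a node a fixed number of levels down form one contiguous block of indices (sketch, helper name is mine):

```python
def descendants(k, levels):
    """Heap indices of node k's descendants exactly `levels` below it.

    They form one contiguous run, so in native code a prefetch or two
    (e.g. __builtin_prefetch on the first element) covers the whole block.
    """
    lo = k << levels
    return range(lo, lo + (1 << levels))
```

That contiguity is why prefetching a few levels ahead is cheap here, and also why the exponential width eventually stops fitting in a cache line.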