Anybody have experience using duckdb to quickly select a page of filtered transactions from a single table with a couple of billion records and, say, 30 columns, where each column can be filtered with a simple WHERE clause? Think 10 years of payment order data. I'm wondering since this is not an analytical scenario.
Doing that in postgres takes some time, and even a simple count(*) takes a long time (with all columns indexed).
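Concretely, this is the access pattern I mean, as a minimal sketch (table and column names are made up, using the DuckDB Python client):

    import duckdb

    # Hypothetical schema: adjust table/column names to your data.
    con = duckdb.connect("payments.duckdb")

    page_size = 50
    page = 3  # zero-based page index

    # Paged, filtered select. DuckDB only reads the referenced columns,
    # so listing them explicitly beats SELECT * on a 30-column table.
    rows = con.execute(
        """
        SELECT order_id, created_at, amount, currency, status
        FROM payment_orders
        WHERE status = ? AND currency = ? AND amount >= ?
        ORDER BY order_id  -- stable sort so pages don't shuffle between requests
        LIMIT ? OFFSET ?
        """,
        ["SETTLED", "EUR", 100.0, page_size, page * page_size],
    ).fetchall()

    # count(*) with a filter is still a scan, but columnar and parallelized.
    total = con.execute(
        "SELECT count(*) FROM payment_orders WHERE status = ?", ["SETTLED"]
    ).fetchone()[0]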
I've used duckdb a lot at this scale, and I would not expect something like this to take more than a few seconds, if that. The only slow duckdb queries I have encountered either involve complex joins or glob across many files.
I'm not so sure the common index algorithms help speed up a count. How often is the table updated? If it's updated often and also queried often for the count, run the count on a schedule and store the result separately; if the count isn't queried often, refresh it less frequently.
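Roughly this pattern, sketched with the DuckDB client (hypothetical names; the same idea works in postgres):

    import duckdb

    con = duckdb.connect("payments.duckdb")
    con.execute("""
        CREATE TABLE IF NOT EXISTS stats_cache (
            name VARCHAR PRIMARY KEY,
            value BIGINT,
            refreshed_at TIMESTAMPTZ
        )
    """)

    def refresh_count():
        # The expensive full scan runs here, on a schedule, not per request.
        n = con.execute("SELECT count(*) FROM payment_orders").fetchone()[0]
        con.execute(
            "INSERT OR REPLACE INTO stats_cache VALUES ('payment_orders_count', ?, now())",
            [n],
        )

    def cached_count():
        # Readers hit the one-row cache instead of scanning billions of rows.
        return con.execute(
            "SELECT value FROM stats_cache WHERE name = 'payment_orders_count'"
        ).fetchone()[0]

Call refresh_count() from cron or whatever scheduler you already have, with the interval tuned to how often the table changes and how stale a count you can tolerate.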
From what you describe, I'd expect a list of column-value pairs under a WHERE to resolve pretty fast if it uses indices and doesn't fetch large amounts of data at once.
People lose fat on a calorie-restricted diet. How you get there, whether by counting calories, improving metabolism, or changing insulin levels, is a different matter.
Vegan and keto diets can both be calorie restricted, as can any macronutrient mixture. That doesn't mean they're sustainable, though. If you are hungry all the time, you can stay on a diet for a while, but not forever. Since insulin is the primary storage hormone, reducing it will make you less fat (just look at type 1 diabetics). We now know that carbs are the strongest promoters of insulin, that fat has zero influence, and protein some. We have drugs like metformin or GLP-1 agonists that brute-force some of this, and they work.
So, we know that sugar is mostly bad and that fat and protein are not. Ofc, some fats are bad for other reasons (by promoting inflammation), but that has nothing to do with obesity.
The thing about the keto diet is that "hungry all the time" simply... doesn't happen. In fact, the bigger problem for keto dieters tends to be feeling satiated all the time and consequently undereating.
"Hungry all the time" is actually vegan thing, but plants have so few calories and pass through so quickly that vegans end up being skinny despite eating literally all the time.
You mean leaves, not plants? Cereals, beans, fruits, and some roots have plenty of calories, but your true fatty friends are all sorts of seeds and nuts. You can also buy their fat extract: oil.
It is not just an issue of raw calories but of how much the body can absorb. Fruitarians, for example, tend to be corpse-skinny despite fruit being full of sugar, because most of that sugar simply passes through. So effective calories are lower than the sugar content would indicate.
But grains and seeds do seem to be quite obesogenic, yes.
I'm adding the word "obesogenic" to my toolbox, love how it sounds!
I don't know any fruitarians, but what you describe makes sense. However, the vegans I know aren't "hungry all the time". Some are skinny and some fat, but I wouldn't say the average is skinny; you wouldn't notice their size if you didn't know their diet. Might be personal bias though, I don't know of studies on vegans' hunger or BMI.
It is definitely not protein. I tried the carnivore diet for a while (had massive issues tolerating carbs lol), and the higher my protein intake was, the hungrier I felt. Reducing protein and increasing fat also increased satiety.
Turns out, it is fats that produce satiety signals, and the effect seems by far the strongest with saturated fats, weaker with monounsaturated fats, while polyunsaturated fats actually induce hunger as strongly as, or even more strongly than, carbohydrates do. The idea that "protein induces satiety" is a side effect of the fact that most (though not all) protein foods tend to be quite fatty.
Good point. Have two programs - one checking every even number and returning odd if not even. Then have a second program checking every odd number and returning even if not odd. Then a simple program dispatches to either randomly, so in the long run you get good performance from each.
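In that spirit, a minimal sketch of the architecture (names and language are mine):

    import random

    def check_evens(n):
        # Program 1: scan every even number; if we never hit n, it's odd.
        for candidate in range(0, abs(n) + 1, 2):
            if candidate == abs(n):
                return "even"
        return "odd"

    def check_odds(n):
        # Program 2: scan every odd number; if we never hit n, it's even.
        for candidate in range(1, abs(n) + 1, 2):
            if candidate == abs(n):
                return "odd"
        return "even"

    def parity(n):
        # The dispatcher: random routing means each program handles half
        # the load in the long run, so both show good average performance.
        return random.choice([check_evens, check_odds])(n)

    print(parity(42))  # even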
Your mention of Microservices opened up my mind to additional possibilities. How about we create a microservice for each integer, then deploy 4 billion of them. Send a request to all of them simultaneously. Only one of them will respond with the answer. We still need to decide how to deploy those microservices - one per machine, or multiple per machine?
You brought up an important opportunity for optimization. If you know the distribution of your data, it may make more sense to implement it in terms of the odd numbers and leave even numbers as the fallback. It's important to profile with a realistic distribution of data to make sure you're targeting the correct parity of numbers.
> Your brain wouldn’t know what to do with it, nutritionally speaking.
At first. If the food has nutrients that are important to the brain, it will recognize that in the future. There are animal experiments confirming this.
In that case, keep your MS license where there is a migration problem, simple as that. There is no need for the entire gov sector to pay so you and your team can use custom formulas.
Second, it's open source. You and your AI army can inspect the code if you wish. The same is true for literally every other piece of software, so I don't see the point you are making.
Oh, are they famous? Who are they and where would I know them from?
>"You and your AI army can inspect the code if you wish."
Nah. Though, if you want to pay a consulting fee, sure!
>"The same is true for literary every other software, so I don't see a point you are making."
The point I'm making is that if you care about security, you shouldn't install an update manager from some random dude, especially when it hasn't been touched in 6 years.
And if you don't recognize why software that manages your updates is riskier than most software, you really shouldn't install an update manager from some random dude.