"Fairly unprofitable [if you ignore all the parts that generate revenue.]"
I will admit that gambling $0.16 in skins on pro matches when I was 15 was a lot of fun. Maybe I'm lucky to have gotten away (relatively) unscathed, but I do have a little nostalgia for those days.
As someone who can't eat gluten, I look for that label because gluten isn't one of the mandatory allergens in the US. I would prefer it if things were simply labelled "contains gluten"; instead I have to evaluate the ingredients of anything without a GF label to see whether it's suitable.
Computers can generate optimal solutions to arbitrary positions. There's no need to apply the human-optimized ergonomic algorithms or methods that humans use to speedsolve.
The nice thing about compiler optimizations is that you can improve the performance of existing CPUs without physically touching them. Year by year, you squeeze more out of the machine someone designed. It adds up.
Imagine the environmental impact you would have if you optimized Python's performance by 1%. How much CO2 would you be removing from the atmosphere? It's likely to overshadow the environmental footprint of you, your family, and all your friends combined. Hell, maybe it's the entire city you live in. All because someone spent time implementing a few bitwise tricks.
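To make "a few bitwise tricks" a bit more concrete, here's a sketch of the classic mask-instead-of-modulo trick (my own illustration, not actual CPython code):

    # Hedged illustration of a typical bitwise micro-optimization: replacing a
    # modulo by a power-of-two table size with a single AND, as interpreters
    # commonly do for hash-table slot lookups.
    def slot_index_div(hash_value: int, table_size: int) -> int:
        return hash_value % table_size  # one integer division per lookup

    def slot_index_mask(hash_value: int, table_size: int) -> int:
        # Only valid when table_size is a power of two; the AND is far cheaper
        # than a division on most CPUs.
        assert table_size & (table_size - 1) == 0
        return hash_value & (table_size - 1)

    assert slot_index_div(123456789, 64) == slot_index_mask(123456789, 64)

Multiplied across every hash lookup a program performs, savings like this are exactly the kind of thing that add up.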
> Imagine the environmental impact you would have if you optimized Python's performance by 1%.
I imagine this would also increase the complexity of the Python interpreter by more than 1%, which in turn would increase the number of bugs in the interpreter by more than 1%, which would burn both manpower and CPU cycles on a global scale.
(I assume that optimizations which reduce complexity are already exhausted, e.g. stuff like "removing redundant function calls".)
This is an assumption that a reasonable person naively comes up with.
Then if you actually go ahead and check, it turns out it's not true! It's quite a shocking revelation.
When you dig into the popular compilers/runtimes (with the exception of things like LLVM), many of them still have low-hanging fruit of the form:
a = b + c - b
Yes, the above is still not fully optimized in the official implementations of some popular programming languages.
Also, an optimization like "removing redundant function calls" isn't a binary on/off switch. You can do it better or worse. Sometimes you can remove them, sometimes not. If you improve your analysis, you can do more of that and improve performance. The same goes for dead-store elimination (DSE), common-subexpression elimination (CSE), and so on.
In many languages, you can't just optimize a + b - b willy-nilly; there can be side effects, and non-obvious interactions abound. For instance, in JavaScript, where every number is a 64-bit double, a + b - b is definitely not the same as a, given large enough or small enough a or b. The same certainly holds for floats in LLVM.
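To make the floating-point caveat concrete (Python floats are also IEEE 754 doubles, so the same effect shows up; the values are just my illustration):

    a = 1.0
    b = 1e16            # large enough that a is lost in the rounding
    print(a + b - b)    # 0.0, not 1.0: (1.0 + 1e16) rounds back to 1e16
    print(a)            # 1.0

So the a + b - b => a rewrite is only sound when the compiler can prove the operands are integers (or the language's semantics otherwise allow it).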
It's an example of how trivial some of this low-hanging fruit is; the above is a concrete case that I have personally implemented (arithmetic on in-register 64/32-bit integers). You can get into semantics and restrictions, but I think the point I'm raising is clear.
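For the curious, here is a toy sketch of the kind of pattern match such a pass does (my own illustration, not any particular compiler's IR; the rewrite is only sound for integer arithmetic, per the caveats above):

    from dataclasses import dataclass

    @dataclass(frozen=True)
    class Var:
        name: str

    @dataclass(frozen=True)
    class BinOp:
        op: str      # "+" or "-"
        lhs: object
        rhs: object

    def simplify(expr):
        """Rewrite (x + y) - y and (y + x) - y into x (integers only)."""
        if (isinstance(expr, BinOp) and expr.op == "-"
                and isinstance(expr.lhs, BinOp) and expr.lhs.op == "+"):
            add = expr.lhs
            if add.rhs == expr.rhs:   # (x + y) - y
                return add.lhs
            if add.lhs == expr.rhs:   # (y + x) - y
                return add.rhs
        return expr

    # "a = b + c - b" parses as (b + c) - b, which the second case catches.
    print(simplify(BinOp("-", BinOp("+", Var("b"), Var("c")), Var("b"))))  # Var(name='c')

Real implementations work on a bytecode/SSA IR and have to check overflow semantics and side effects first, but the matching itself really is about this simple.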
Why? That law seems to say that software performance improvements aren’t relevant, while this article shows that the improvements to Postgres have been significant.
Is it because you view the 15% as a low number? Because it really, really isn’t in this context. It’s certainly smaller than the 60% from your linked law, especially if you do the whole 15/10 thing, but you really shouldn’t compare Postgres’s performance to hardware increases because they aren’t the same. You would need absolutely massive amounts of hardware improvement to match even a 1% performance gain on what is being measured here.
I don’t think the law is as silly as other posters here do, but it’s talking about programming-language compile times. I wouldn’t compare something as unimportant as that to what is arguably one of the most important things in computer science… considering how much data storage and consumption mean to our world.
That seems like a very weak “law”. Is it meant as a joke? The supporting evidence is just numbers apparently pulled from nowhere (“let’s assume…”), and the conclusion is wildly off base. They seem to imply that if optimization improves the performance of much of the world’s software by 4% each year, then it’s a waste of time.
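For scale, that “4% each year” figure compounds to quite a lot; a quick back-of-envelope (my own arithmetic, not numbers from the linked law):

    import math

    annual_gain = 0.04
    print(math.log(2) / math.log(1 + annual_gain))   # ~17.7 years to double performance
    print((1 + annual_gain) ** 10)                   # ~1.48x after a decade

Hardly negligible across “much of the world’s software”.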
Moore’s law is the only comparison given. I wonder what the comparative expense is to develop faster and faster hardware vs. to keep improving compilers. Depending on how the ROIs compare (in dollars per percent gained, let’s say), maybe this “law” is worth some weight.
OTOH, the Postgres article in question seems to show diminishing returns from optimization, which belies the whole premise of the “law”, which assumes the gain is consistent from year to year. And it might prove Proebsting’s implied point about optimization being a bad investment over the long term!
WCA Delegate here. The limiting factor generally isn't cost, but the need to handle cubes of various sizes and other puzzles entirely. Having a robot that can only scramble a 3x3 isn't useful for the majority of competitions.
I've always wanted maglev but didn't want to spend a lot of money. Is that a pretty good one to get? Would love a maglev main. I currently use a GTS M.
For $10 a month, all Copilot has to do is save me a handful of minutes each month for it to be worth it.
Most of my current work is creating new React components. In these tasks, Copilot probably saves me 20 or 30 minutes a day on average. Often more. It's not always right, but often close enough, and I'm not committing my code without testing it anyway.
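Rough math on that, assuming developer time is worth (say) $75/hour; the rate is purely an assumption for illustration:

    hourly_rate = 75.0                   # assumed cost of developer time
    copilot_per_month = 10.0

    break_even = copilot_per_month / hourly_rate * 60
    print(break_even)                    # 8.0 minutes saved per month to break even

    value = 25 / 60 * 21 * hourly_rate   # ~25 min/day over ~21 workdays
    print(round(value))                  # ~656 dollars of time per month at that rate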