> How did we get to no one being impressed by 20% better PER YEAR already.
When has 20% been impressive? When the Intel-to-M1 transition happened, the jump was huge... not 20%. I can't think of anything with a 20% jump that made waves, even outside of tech.
When I used to do user benchmarking, 20% was often the first data point where users would be able to notice something was faster.
4 minutes vs 5 minutes. That's great! It's kind of expected that we'll make SOME progress, so what's the low bar... 10%? And then we should be impressed with 20%?
People aren't upgrading from the M1, M2, or M3 in large numbers... so I don't think it's just me who isn't wowed.
It is only 20% on certain tasks, not across the board, and it depends on what the specific bottlenecks are for a given workflow. I'm not saying it's bad, but it's not enough to fork over a few thousand every year to upgrade. In 1992 a 486 DX2 @ 66 MHz was king... In 2000 (8 years later) Intel and AMD were both over 1 GHz. By 2008 you had 4 cores, 8 threads at 3.2 GHz.
I used to upgrade about every year or two because there were massive gains... now I'll hold out 3-5 years and not even think twice about it.
It’s a bit crazy to compare across such massive gaps in time though, isn’t it?
Also, who in their right mind gets a new computer every year? No one (sane / without a giant disposable income) even gets a new phone every year anymore.
Intel chips were getting faster. It's well documented (and glaringly obvious in the i9 16") that Apple just didn't want to accommodate the full TDP. They tweaked their ACPI tables to run the chips right up to the junction temp, so they were both constantly hot and constantly throttling. Apple configured all of its Intel machines this way: a software workaround for Apple-designed hardware that simply couldn't cope with the thermal load.
We know this because the Intel MacBook Pro chassis was only ever used to run Apple Silicon chips that were passively cooled, not Pro/Max variants. The old MBP chassis designs are so awful that Apple doesn't consider them viable for cooling ARM CPUs. I blame Ive, not Intel.
Do you consider margin-of-error, single-digit gains to be worth arguing over? Intel offered 14nm for 4 years straight: Skylake, Kaby Lake, Coffee Lake, Coffee Lake Refresh—four different names, same process node, and 3-7% gains each year. Such fast.
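For scale, compounding those single-digit bumps over the whole four-year stretch barely moves the needle (quick sketch, just plugging in the 3% and 7% endpoints, nothing measured):

    # rough compounding of the yearly gains mentioned above (illustrative only)
    for yearly in (0.03, 0.07):
        print(f"{yearly:.0%}/yr -> {(1 + yearly) ** 4:.2f}x over four years")
    # 3%/yr -> 1.13x
    # 7%/yr -> 1.31x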
> The old MBP chassis designs are so awful that Apple doesn't consider them viable for cooling ARM CPUs
You don't put a 15-20W chip into a thermal system built for 90W+. The old chassis wasn't "too awful" for Apple Silicon; it was completely unnecessary.
The 13" MBP chassis is not built for 90W though, let alone 50W. Intel was making 30W i7 chips and they were still throttling in that chassis. I think we have enough benefit of hindsight to blame Apple's egregious and power-hungry ACPI tables for not throttling to safe temps. I own several other laptops that do not hit 90c, ever.
What's pretty sad is that they could have just slightly underclocked and undervolted the Intel chips, kept around 95% of the performance, and avoided the constant janky throttling. Whenever I'd spin up my background services in Docker, the laptop was nearly unusable for 4-5 minutes, and even then it should have done better.
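That trade-off is plausible on a back-of-envelope basis, since dynamic CPU power scales roughly with voltage squared times frequency; the 5% figures below are purely illustrative, not measurements from any particular MacBook:

    # dynamic power approximation: P ~ C * V^2 * f (illustrative numbers only)
    clock_scale = 0.95    # ~5% underclock -> ~95% of peak performance
    voltage_scale = 0.95  # ~5% undervolt that the lower clock typically allows
    power_scale = clock_scale * voltage_scale ** 2
    print(f"~{power_scale:.0%} of the dynamic power at ~{clock_scale:.0%} performance")
    # -> ~86% of the power for ~95% of the performance, i.e. meaningfully less heat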
Macs barely got faster for ages with Intel - they just got hotter and shorter on battery life.
20% per year compounds to a doubling roughly every 4 years. That is awesome.
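Quick sanity check on the compounding, taking the 20%/year figure from the parent as given:

    import math

    annual = 1.20                                                    # 20% faster each year
    print(f"{annual ** 4:.2f}x after 4 years")                       # 2.07x
    print(f"doubles in {math.log(2) / math.log(annual):.1f} years")  # 3.8 years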