With the immense success of Apple Silicon, is there a long-term future for x86? I ask that facetiously, of course, because Intel makes extremely important and ubiquitous CPUs used in so many market segments, and takes in so much money, that it can spend endlessly chasing a better, vastly more power-efficient design.
But they haven't been able to make much progress visible to my eye. I'm hopeful that some of the other ARM projects seeking to match Apple's incredible hardware succeed in performance and energy use, so we can have more desktop systems using them.
Worth noting: a big part of Apple Silicon's success is not just ARM but the fact that Apple throws enough money at TSMC to buy out all production of the latest and greatest node, locking out any competitors (excluding Intel and Samsung, but they're still playing catch-up).
ARM also has a few problems that x86 doesn't, IMO: it needs a standardized boot sequence like x86 has, and there's the whole problem of device makers shipping devices on outdated, no-longer-updated forks of Linux thanks to the mess of drivers, device trees, etc.
I'm an idiot when it comes to understanding CPUs, but as an end user of them, my feeling is that the M1 Pro MBP I bought at release is so absurdly good that I can't imagine needing a new laptop for a decade. It runs silent and cool and simply never struggles with anything. Since Hetzner started offering ARM VMs on their cloud service, I've set up several servers on them, which have been trouble-free, run cool, and offer well over twice the value of AMD64. I've no doubt you know what you're talking about with regard to the problems that need fixing, but to a CPU idiot like me, they just seem like magic, and I would absolutely hate to go back to using chips that give you third-degree burns if you use a laptop on your lap.
For workloads not offloaded to dedicated hardware (NN/LLM), I say CPUs for end users have been "good enough" for four or five years.
The impressive things have been in efficiency, and in implementing new kinds of logic/instructions/architectures.
I'd be plenty happy with my 5+ year old laptop if it had more than 40 minutes of battery life (Yeah I need to order a new battery. Yes, batteries used to be replaceable in laptops).
For desktop CPUs, I'd say things have been "good enough" for 12 years.
Sandy Bridge was released in 2011 and was the last Intel CPU that was at least 25% faster than its predecessor. Since then, each generational bump has been around 10% or so.
IMO there's a strong case for Haswell as the first "good enough" CPU. It's not a ton faster than Sandy Bridge, but it adds AVX2 and FMA3, which in lots of software can easily be a 2x performance difference. (And, possibly more importantly, if you define that as your baseline, you can compile all x86 applications with the same arch without giving up much performance. See the sketch below.)
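To make the FMA3 point concrete, here's a minimal sketch in C (the values are made-up placeholders, not from the thread). FMA3 lets one instruction do a fused multiply-add across 8 floats; compiling with something like gcc -O2 -march=haswell makes AVX2/FMA3 your baseline:

    #include <immintrin.h>
    #include <stdio.h>

    int main(void) {
        __m256 a = _mm256_set1_ps(2.0f);  /* 8 floats, all 2.0 */
        __m256 b = _mm256_set1_ps(3.0f);
        __m256 c = _mm256_set1_ps(1.0f);
        /* d = a*b + c across 8 lanes in a single FMA3 instruction */
        __m256 d = _mm256_fmadd_ps(a, b, c);
        float out[8];
        _mm256_storeu_ps(out, d);
        printf("%f\n", out[0]); /* 7.000000 */
        return 0;
    }

Pre-Haswell x86 would need a separate multiply and add (and narrower vectors on pre-AVX parts), which is where that ~2x in FMA-heavy code comes from.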
Generally agree with the statement - I'm really happy with the M1 chip. But as applications become more demanding over time, a decade might be too optimistic given the boom in generative AI.
Generally this is deemed the “Jevons Paradox”, and it's part of why, ironically, consoles don't hold back game development but actually improve the experience for the majority of people: by setting hard caps on how much demand can actually be placed on a computer for a baseline experience.
Nvidia’s Grace Hopper architecture is probably the future here: a dedicated high-performance “GP” offload chip for big calculations, with a small chip doing switching, routing, and access control.
I don't think Apple is interested in a standardized ARM boot sequence. I believe that the current proprietary stuff they have is totally fit for their goals.
Yep. Thankfully that is the case with ARM Macs, and there is a dedicated community of hackers figuring everything out.
Though I hear that for ARM servers there may now be a standard for booting; I haven't looked into it. It makes sense, since servers can run a variety of OSes depending on the work. I'd hope such standardization makes its way to consumer devices, but I'm not holding my breath based on how things are going.
Apple wouldn't want Linux to be a first-class citizen on their hardware, competing with them, so this is likely a feature, not a bug, for them. Having Linux perpetually six months or more behind and a little buggier than macOS keeps it exactly where they want it. They don't want Linux competitive with or better than macOS, but that free work can drive some hardware sales to a group who otherwise might not buy. It's quite brilliant, actually, in an "evil genius" sort of way.
Yeah, unfortunately that's the case for most products. Only when a product is overwhelmingly popular, like Apple's, does that kind of community reverse-engineering effort usually materialize.
As long as Apple's processors are tied to Apple's ecosystem, their success doesn't say much about the future of x86. Apple isn't going to win all the desktop and laptop business. I don't think they're interested in supplying processors and whatnots to the rest of the computer builders. They don't have a platform appropriate for servers. For those swayed by clock speed, Apple doesn't compete at all; sometimes you do need that small % throughput increase at large % power increase.
I think x86 has a future in consumer hardware, but it's not going to be in endlessly chasing maximum theoretical performance by making CPUs ever more power hungry. Performance per watt is now the key figure for anything aside from high end enthusiast PCs (and even then, many enthusiasts can't actually make good use of more raw power than is currently on the table).
With that in mind, my thought is that it'd make sense to split off gigahertz-chasing enthusiast CPUs into a dedicated product line and brand while mainstream CPUs focus squarely on excellent zero-tweaking perf per watt.
me too, circa 2003/4 I overclocked a 2.4GHz Northwood P4 to 3.3GHz. The P4/NetBurst architecture had some of the worst IPC of any chip ever, I think; only raw MHz made it competitive, until the Athlon 64, that is.
Sure they do, so long as there are enough performance cores to go with it.
Good CPUs today are vastly overpowered for gaming, and it's not remotely a close matter.
There are only a few AAA titles that can push, say, an i7-12700, much less anything better. The CPU segment is years ahead of where it needs to be for gaming; there's absolutely zero practical concern around e-cores for gaming. Several years from now, that i7-12700 will still be great for AAA titles.
> there's only so much windows crap running in the background that needs e-cores
Sounds like a challenge to Microsoft. I'm sure they can find more shit to do in the "background", like demanding you stand up and say "McDonald's" before you can have a password prompt.
Games are one of the few areas where performance matters. Sure, there's unoptimized waste everywhere, but if you release a game that can only hit 20 FPS on latest-gen hardware, you're going to be ridiculed.
Which is to ignore the elephant in the room: most games are now built for the console market. Consoles are underpowered relative to enthusiast PC builds (which, according to Steam surveys, are a small percentage of the population).
There have been no dramatic IPC shifts recently within the same family of processors, like Intel Core. That makes clock speed a pretty decent proxy for peak single-core perf: you can compare a 4GHz part from a couple of years ago and know with some accuracy what to expect at 6GHz.
Since there are small but positive IPC improvements each generation, it's a reasonable assumption/expectation that IPC is at least no worse in gen 14 than in the generations immediately before it.
Sure, until there are benchmarks we can't know this. (If I did know, I'd no doubt be under NDA or embargo.) But I'd be willing to bet the IPC difference is pretty small.
> There are also plenty of other reasons speed doesn't translate well to performance, like thermal limits and memory bottlenecking.
Indeed, the more interesting unknown for a new CPU gen these days isn't what's happened to IPC but how long, and on how many cores, it can actually sustain peak clock. For real-world perf that's probably the much bigger unknown. When I see 6GHz, I wonder if that's one core for one second…
But that’s precisely why the thing that’s reasonably easy to guess is exactly that: the peak/instantaneous/theoretical single-core perf. Actual “real world” perf is much harder to make guesses about.
If you take the IPC of the last gen and the clock of the next gen, you get a pretty good value for guesstimating the peak single-core perf (rough sketch below).
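As a back-of-the-envelope sketch (the IPC and clock values below are made-up placeholders, not measured numbers), the estimate is just instructions/cycle times cycles/second:

    #include <stdio.h>

    int main(void) {
        /* Hypothetical illustrative values, not benchmark data */
        double ipc_last_gen   = 5.0;  /* sustained instructions per cycle, last gen */
        double clock_next_ghz = 6.0;  /* advertised peak boost clock, next gen (GHz) */
        /* peak instructions/second ~= IPC * clock */
        double est_peak_gips = ipc_last_gen * clock_next_ghz;
        printf("estimated peak: %.1f billion instructions/s\n", est_peak_gips);
        return 0;
    }

Whether the chip can actually hold that clock for more than an instant is, as noted above, the real unknown.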
You say that, but applications use orders of magnitude more system resources these days because they're built on several layers of abstraction. Some modern applications feel more sluggish than their feature-complete counterparts from back in the day.
Correct, we somehow lost our grip on things, yet we still aspire to a simpler ecosystem. But I wouldn't give up on Python, which powers so much of my digital life with only SciTE and a command prompt.
I am the target market for these chips. I have a 5-year-old i9 gaming PC that needs replacing... I'll wait for the next generation.
all they've done is pump even more power into last year's design and add some more pointless e-cores (into an ENTHUSIAST line that doesn't care about them; these are gaming desktops, not laptops)
if this is kicking ass I'd like to see what failure looks like
I'd still take a 7800X3D over anything Intel currently offers. Only if they cut power usage in half at the same performance would I consider them to be kicking ass.
Higher clock speeds in processors always come with more self-interference (crosstalk), signal degradation, signal reflection, etc., so a clock significantly above typical desktop speeds points to interesting strategies and problem-solving. The article doesn't really talk about the how, but performance aside, a 6GHz clock is notable in its own right. That's just not apparent unless you're familiar with the problems of higher clock speeds.
Yes, it just comments that the i9 reaches speeds up to 6GHz. It doesn't imply a relation to performance unless you don't know what clock speed does in a processor.