Intel's Latest I9 Chip Has Speeds Up to 6GHz (lifehacker.com)
39 points by HiroProtagonist on Oct 16, 2023 | 63 comments




With the immense success of Apple Silicon on ARM, is there a long-term future for x86? I ask that facetiously of course, because Intel makes extremely important and ubiquitous CPUs used in so many market segments, and takes in so much money, that it can spend endlessly chasing a better design that is vastly more power efficient.

But they haven't been able to make much progress visible to my eye. I am hopeful that some of the other ARM projects that seek to match Apple's incredible hardware succeed in performance and energy use, and that we can have more desktop systems using them.


Worth noting, a big part of Apple Silicon's success is not just ARM, but the fact that they throw enough money at TSMC to buy out all production of the latest and greatest node, locking out any competitors (excluding Intel and Samsung, but they are still playing catch-up).

ARM also has a few problems that need to be fixed that are not seen in x86, IMO: the lack of a standardized boot sequence like x86 has, and the whole problem of device makers releasing devices running outdated, no-longer-updated forks of Linux due to the whole mess of drivers, device trees, etc.


I'm an idiot when it comes to understanding CPUs, but as an end user of them, my feeling is that the M1 Pro MBP I bought on release is so absurdly good that I can't imagine needing to buy a new laptop for a decade. It runs silent and cool and simply never struggles with anything. Since Hetzner started offering ARM VMs on their cloud service, I've set up several servers on it which have been trouble-free, run cool, and offer way over twice the value of AMD64. I've no doubt you know what you're talking about with regard to problems that need fixing, but to a CPU idiot like me, they just seem like magic and I would absolutely hate to ever go back to using chips that cause 3rd-degree burns if you use a laptop on your lap.


For workloads not offloaded to dedicated hardware (NN/LLM), I say CPUs for end users have been "good enough" for four or five years.

The impressive things have been in efficiency, and in implementing new kinds of logic/instructions/architectures.

I'd be plenty happy with my 5+ year old laptop if it had more than 40 minutes of battery life (Yeah I need to order a new battery. Yes, batteries used to be replaceable in laptops).


For desktop CPUs I would say things have been "good enough" for 12 years.

Sandy Bridge was released in 2011, and was the last generation from Intel that was at least 25% better than its predecessor. Ever since, each generational bump has been ~10% or so.

I still work & game on an i5-2500k.


IMO there's a strong case for Haswell as the first "good enough" CPU. It's not a ton faster than Sandy Bridge, but it adds AVX2 and FMA3, which in lots of software can easily be a 2x performance difference. (And, possibly more importantly, if you define that as your baseline, you can compile all x86 applications with the same arch without giving up much performance.)
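To make the 2x concrete, here's a minimal sketch (my own toy example, not from any real codebase) of the kind of loop where it shows up; with something like -O3 -march=haswell, gcc and clang can turn the loop body into 256-bit AVX2 fused multiply-adds instead of scalar math:

    /* saxpy-style loop: y[i] = a*x[i] + y[i].
       Built for a generic baseline (e.g. -O3 -march=x86-64) this stays
       scalar or SSE; with -O3 -march=haswell the compiler can emit FMA
       instructions over 8 floats at a time, which is where the big win
       comes from. */
    #include <stddef.h>

    void saxpy(float *restrict y, const float *restrict x, float a, size_t n) {
        for (size_t i = 0; i < n; i++)
            y[i] = a * x[i] + y[i];
    }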


Generally agree with the statement - I'm really happy with the M1 chip. But as applications become more demanding over time, a decade might be too optimistic given the boom in generative AI.

Also +1 on how silent it is.


Generally this is deemed “Jevons Paradox” and is part of why, ironically, consoles do not hold back game development but actually improve the experience for the majority of people - by setting hard caps on how much demand can actually be requested of a computer for a baseline experience.


Nvidia’s Grace-Hopper arch is probably the future here — a dedicated high-performance offload “GP” chip for big calculations with a small chip doing switching and routing and access control


You are mistaking a small subset of computer systems (home and small servers) for a silver bullet.

In my case servers have 512GB-1TB of RAM, and the cost of running nodes with a lesser amount scales extremely badly.


I don't think Apple is interested in a standardized ARM boot sequence. I believe that the current proprietary stuff they have is totally fit for their goals.


Yet it's easier to dual boot ARM Mac devices with Linux than most other ARM devices.


Yep. Thankfully that is the case with ARM macs, and there is a dedicated community of hackers who are figuring everything out.

Though I hear that for ARM servers there might now be a standard for booting; I haven't looked into it. Makes sense, since servers can run a variety of OSes depending on the work. I would hope such standardization makes its way to consumer devices, but I'm not holding my breath based on how things are going.


Hackers continually reverse engineering proprietary magic isn't a sustainable strategy.

Ask the '90s-'10s open-source GPU driver folks.


Apple wouldn't want Linux to be a first-class citizen on their hardware, competing with them, so this is likely a feature, not a bug, for them. Having Linux always be 6 months or more behind and a little bit buggy (more buggy than macOS at least) keeps it exactly where they want it. They don't want Linux competitive with or better than macOS, but having that free work available can drive some hardware sales to a group who otherwise might not buy their machines. It's quite brilliant actually, in kind of an "evil genius" sort of way.


Yeah, unfortunately that is the case for most products. Only when they are overwhelmingly popular, as Apple products are, does that kind of reverse engineering usually prove sustainable.


After all, having a macOS Darwin kernel bootstrap your Linux kernel is not any worse than having a GPU do it.


> a big part of Apple Silicon success is the fact that they throw enough money at TSMC to buy out all production of the latest and greatest node

I see this all the time and I'm skeptical.

Are you saying that Apple M(latest-1) is not faster and more thermally efficient than Intel/AMD(current) ?


Once you stop looking at Intel the situation is a lot less dire. The x86 perf/watt problem is primarily an Intel problem not an x86 problem.


While not as efficient as Apple Silicon, I've had good experiences with AMD processors in mobile hardware, noticeably better than Intel as well.


It's not like Intel hasn't been there before, though.


Isn’t it latest node *and* advanced packaging?


As long as Apple's processors are tied to Apple's ecosystem, their success doesn't say much about the future of x86. Apple isn't going to win all the desktop and laptop business. I don't think they're interested in supplying processors and whatnot to the rest of the computer builders. They don't have a platform appropriate for servers. For those swayed by clock speed, Apple doesn't compete at all; sometimes you do need that small % throughput increase at a large % power increase.


I think x86 has a future in consumer hardware, but it's not going to be in endlessly chasing maximum theoretical performance by making CPUs ever more power hungry. Performance per watt is now the key figure for anything aside from high end enthusiast PCs (and even then, many enthusiasts can't actually make good use of more raw power than is currently on the table).

With that in mind, my thought is that it'd make sense to split off gigahertz-chasing enthusiast CPUs into a dedicated product line and brand while mainstream CPUs focus squarely on excellent zero-tweaking perf per watt.


Intel's Latest I9 Chip Has Temperatures Up to 15 Million Degrees Celsius


I feel nostalgic; I had a P4 Northwood, I recall. It was kinda cool back then, but not TDP-wise :)


Any time my office got chilly, I'd `make -j` the Linux kernel a few times.


Me too; circa 2003/4 I overclocked a 2.4GHz Northwood P4 to 3.3GHz. The P4/NetBurst architecture had some of the worst IPC of any chip ever, I think; only raw MHz made it competitive, until the Athlon 64 that is.


So it took us 6 years (1997-2003) to go from 300MHz to 3GHz, but another 20 years to go from 3GHz to 6GHz.


> So it took us 6 years (1997-2003) to go from 300MHz to 3GHz, but another 20 years to go from 3GHz to 6GHz.

Well, with all competition gone (Alpha, PA-RISC, SPARC, MIPS, some x86 vendors), you can finally relax.


Intel: gamers don't want tens of e-cores

The 10900K was the highlight


Sure they do, so long as there are enough performance cores to go with it.

Good CPUs today are vastly overpowered for gaming, and it's not remotely a close matter.

There are only a few AAA titles that can push, e.g., an i7-12700, much less anything better. The CPU segment is years ahead of where it needs to be in regard to gaming; there's absolutely zero practical concern in gaming around e-cores. Several years from now that i7-12700 will still be great for gaming as it pertains to AAA titles.


> Sure they do, so long as there are enough performance cores to go with it.

9th gen i9 had 8 cores, 10th gen i9 had 10 cores, 11th gen went back to 8 and it's been stuck there since

the 14900k is benchmarking worse than the 14700k

there's only so much windows crap running in the background that needs e-cores

> The CPU segment is years ahead of where it needs to be in regards to gaming, there's absolutely zero practical concern in gaming around e-cores.

speak for yourself, my machine is CPU capped in esports titles after about 300fps or so

more e-cores sat idle does nothing here


> there's only so much windows crap running in the background that needs e-cores

Sounds like a challenge to Microsoft. I'm sure they can find more shit to do in the "background", like demanding you stand up and say McDonald's before you could have a password prompt.


> Sounds like a challenge to Microsoft. I'm sure they can find more shit to do in the "background",

What do you think svchost.exe is for ?


> Several years from now

You are greatly underestimating the ability of software to "utilize" available hardware gains.


Games are one of the few areas where performance matters. Sure, there is unoptimized waste everywhere, but if you release a game that can only get 20FPS on the latest-gen hardware, you are going to be ridiculed.

Which is to ignore the elephant in the room: most games are now built for the console market. Consoles are underpowered relative to enthusiast PC builds (which, according to Steam surveys, make up a low percentage of the player population).


> but if you release a game that can only get 20FPS on the latest-gen hardware, you are going to be ridiculed.

Crysis developers would beg to differ. /s


I like your use of quotes cause it sure seems like a lot of those hardware gains get sucked up by bloat while trying to maintain performance.


Clock speed is not an indicator of performance.


But it's a good indicator of TDP. Those 6GHz could fry an egg.


I have a 13900K and can confirm, it's basically an air fryer.


For per-core performance, it surely is.


Clock only translates to performance when combined with Instructions-Per-Clock.


There have been no dramatic shifts in IPC recently, at least within the same family of processors like Intel Core. That makes clock speed a pretty decent proxy for (peak single-core) perf. You can compare a 4GHz part from a couple of years ago and know with some accuracy what to expect from 6GHz.


That's tautological. Commenter above is saying that IPC may have gone down. You're asserting that it didn't without any evidence.

There are also plenty of other reasons speed doesn't translate well to performance, like thermal limits and memory bottlenecking.


Since there are small yet positive IPC improvements each generation it’s a reasonable assumption/expectation that at least IPC is no worse in gen 14 than in the generations immediately before it.

Sure, until there are benchmarks we can't know this. If I knew this I'd no doubt be under NDA or embargo. But I'd be willing to bet that the IPC difference is pretty small.

> There are also plenty of other reasons speed doesn't translate well to performance, like thermal limits and memory bottlenecking.

Indeed, the more interesting unknown for a new CPU gen these days isn’t what’s happened to IPC but how long and how many cores can actually run at peak clock. In terms of real perf that’s probably a much bigger unknown factor in the performance. When I see 6GHz I wonder if that’s for 1 core for one second…

But that’s exactly why the thing that’s reasonably easy to guess is exactly that: the peak/instantaneous/theoretical single core perf. And actual “real world” perf is much harder to make any guesses about.

If you take the IPC of the last gen and the clock of the next gen you have a pretty good value if you want to guesstimate the peak single core perf.
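For what it's worth, the guesstimate I mean is just last-gen IPC x next-gen clock; a toy sketch with made-up placeholder numbers (not measured values for any real part):

    /* Guesstimate peak single-core throughput from last-gen IPC and the
       next-gen advertised boost clock. Both numbers are placeholders. */
    #include <stdio.h>

    int main(void) {
        double last_gen_ipc = 4.0;   /* assumed instructions per cycle, prior gen */
        double next_gen_ghz = 6.0;   /* advertised peak boost clock, GHz */
        printf("~%.0f billion instructions/sec on one core, at peak\n",
               last_gen_ipc * next_gen_ghz);
        return 0;
    }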


no it's not, it's practical.

correlation does not imply causation, but it sure as shit points a gigantic neon sign at it.


It used to be clocks per instruction, we are spoiled nowadays.


You say that, but applications use up orders of magnitude more system resources these days due to being built upon several layers of abstractions. Some modern applications feel more sluggish than their feature-complete counterparts from back in the day.


Correct, we somehow lost our grip on things, but we still aspire to a simpler ecosystem. I wouldn't give up on Python, though, which helps me power so much of my digital life with only SciTE and a command prompt.


Assuming you're comparing two chips with the same architecture.

Intel Prescott chips hit just shy of 4GHz and their performance certainly doesn't hold up.


The reality is that Intel is back to kicking ass in the CPU market and it's a glorious thing to see. Good competition is a wonderful thing.

AMD has its work cut out for it, Intel is awake and desperate again.


this has to be satire right?

I am the target market for these chips; I have a 5-year-old i9 gaming PC that needs replacing... I'll wait until the next generation

all they've done is pump even more power into last year's design and add some more pointless e-cores (into an ENTHUSIAST line that doesn't care about them, they're gaming desktops, not laptops)

if this is kicking ass I'd like to see what failure looks like


Kicking ass by rebadging last year's CPU with a new name and a small overclock?


I'd still take a 7800X3D over anything Intel currently offers. Only if they can cut the power usage in half with the same performance would I consider them to be kicking ass.


From the sidelines it looks like Intel just keeps cranking up the TDP to chase higher numbers.


I don't see where the link says that clock speed is an indicator of performance.


Well, it's implied. Why else would it be notable?


Higher clock speeds in processors always come with additional self-interference (crosstalk), signal degradation, signal reflection, etc., so such a significantly higher clock speed than typical desktop processors indicates interesting strategies and problem-solving. This article doesn't really talk about why, but even with performance aside, a 6GHz clock is notable. It's not implied unless you're unfamiliar with the problems of higher clock speeds.


Did you look at the title?


Yes, it just comments that the I9 chip has speeds up to 6GHz. It doesn't imply a relation to performance unless you don't know what clock speed does in a processor.



