The iPhone has higher single-core performance than a desktop CPU with a 105W TDP.
Of course the desktop chip has more cores. But am I missing something here?
Is this test actually representative of device performance?
Or are certain desktop features not tested?
Does the whole RISC vs CISC divide between x86 and ARM make a difference?
Assuming core counts were equal, would a desktop CPU and an Apple SoC run at equivalent performance if both were running Ubuntu and running native compiled code?
Geekbench is not measuring raw performance in terms of operations per second. It's measuring very specific use cases (their current blurb mentions AI and ML; in the past it mentioned synthetic tests to approximate browsers). Because thermal constraints would prevent Apple from competing with a brute-force approach, Apple has been more willing to include specialised hardware for tasks like AI/ML, as in the A12. Of course, single-core AI/ML performance is a bit of a silly metric, but it's one thing Geekbench is claiming to measure here that Apple probably wins at. I think encryption/decryption similarly got hardware acceleration sooner on Apple platforms, Safari is able to make better use of the GPU with less varied hardware to support, and I think real web browsing is another component in the Geekbench tests.
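One concrete example of that kind of specialised hardware: the Armv8 Cryptography Extensions let a single pair of instructions perform an entire AES round. A minimal sketch using the NEON intrinsics (my illustration, not anything from Geekbench or Apple; assumes a compiler targeting Armv8-A with the crypto extension enabled, e.g. -march=armv8-a+crypto):

    #include <arm_neon.h>   // NEON + crypto intrinsics
    #include <cstdint>

    // One hardware AES encryption round:
    //   AESE  = AddRoundKey + SubBytes + ShiftRows
    //   AESMC = MixColumns
    // A full AES-128 encryption chains these operations across ten rounds (the last round
    // skips MixColumns); doing the same work in pure software costs dozens of table lookups
    // and XORs per 16-byte block.
    static inline uint8x16_t aes_encrypt_round(uint8x16_t block, uint8x16_t round_key) {
        return vaesmcq_u8(vaeseq_u8(block, round_key));
    }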
Operations per second is a notoriously useless measure, which is why we have higher-level benchmarks like Geekbench that are actually incredibly broad in what they test, performing a lot of real-world-style activities in a larger macro-benchmark suite.
The Bionic chips aren't cheating their way to a win. They win Geekbench, and virtually any other cross-platform test you can throw at them. As I mentioned in another post, my iPhone 11 absolutely lays waste to my laptop with an i7-7700HQ in the JetStream 2 benchmark. Now, this is a JavaScript benchmark that runs on completely different software stacks / OS / etc. (my i7 running Windows 10, Chrome 80, etc.), but it exercises layer upon layer of platform performance. And my big beefy i7 is beaten by a tiny mobile processor. It is quite remarkable.
It's a blazingly fast little processor. We would probably have seen Apple use these chips in other hardware sooner if they hadn't always been suspicious that Intel was sandbagging in some way and was ready to wow the industry.
The Geekbench score is a pretty useless metric when you have a specific workload in mind. If you are choosing a machine for video editing you're not going to look at the Geekbench score. If you are choosing a machine for compiling code you're not going to look at the Geekbench score. If you need a machine for gaming you're not going to look at the Geekbench score. If you have no specific requirements then you might not even care about having the best performance.
I don’t think comparing Geekbench scores directly between completely different processor architectures tells you that much about real-world application performance. It’s a synthetic benchmark that may be affected disproportionately by factors that aren’t very significant for ‘normal’ use cases, e.g. specific optimizations (compiler, hardware, etc.) that hit the happy path for some part of the benchmark on one architecture but not the other.
But the main difference is thermals and performance under sustained load. So far none of the Apple SoCs have been shown to handle sustained workloads at their highest frequency. The CPU in the most recent iPhones is known to throttle under load quite quickly, and nobody but Apple knows how it would perform in an active cooling setup.
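A crude way to observe that throttling yourself (my own sketch, not something from the thread): pin a serially dependent integer kernel at 100% and log throughput once per second. On an actively cooled desktop the numbers stay flat; on a passively cooled phone SoC they typically sag after a few tens of seconds as clocks drop.

    #include <chrono>
    #include <cstdint>
    #include <cstdio>

    int main() {
        using clock = std::chrono::steady_clock;
        uint64_t x = 88172645463325252ULL;              // xorshift64 state; the serial dependency
                                                        // keeps the loop from being optimized away
        for (int second = 0; second < 120; ++second) {
            uint64_t batches = 0;
            const auto end = clock::now() + std::chrono::seconds(1);
            do {
                for (int i = 0; i < (1 << 16); ++i) {   // work in batches between clock reads
                    x ^= x << 13; x ^= x >> 7; x ^= x << 17;
                }
                ++batches;
            } while (clock::now() < end);
            std::printf("t=%3ds  %.1f M iterations/s\n", second, batches * (1 << 16) / 1e6);
            if (x == 0) return 1;                       // never true, but keeps x observably live
        }
        return 0;
    }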
"Assuming core count was equal. Would a desktop CPU and an Apple SoC run at equivalent performance if it was running Ubuntu and running native compiled code?"
Yes. Of course there are going to be edge cases where one or the other is going to shine particularly well, but overall the performance is going to be close, if not giving a nod to the Apple device.
This is the reason there have been widespread expectations that Apple would move their desktop/laptop platforms to their own chips, and if rumors are true that will happen next year. In a situation where their chips had credible heat dissipation and weren't subject to thermal throttling, it would be very impressive. I mean, it's already spectacularly impressive, but it would be quite dominant with real cooling.
Apple needs to be careful, though, and that explains their patience: there has always been the potential that Intel comes out with a game changer.
Of course in such discussions everyone is going to discount whatever benchmarks are used. Yet we've seen this in benchmark after benchmark, on generalist tasks where no trick instruction can explain the result. Out of curiosity I just ran JetStream 2: it yields an 81.6 score on my i7 laptop, and 153 on my iPhone 11. That's the iPhone, with its incredibly poor heat dissipation.
The ARM / x86 / x86_64 / CISC / RISC distinctions are all abstract, higher-level notions now and aren't the reason. Apple's team has just proven astonishingly good at designing chips. Quite honestly, I thought it would turn out otherwise and Apple would end up begging the industry for whatever the new hotness was.
> Does the whole RISC vs CISC divide between x86 and ARM make a difference?
Not really. Arm hasn't been RISC for quite a while now (there's a "Floating-point Javascript Convert to Signed fixed-point, rounding toward Zero" instruction (FJCVTZS!), SIMD, and so on).
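That JavaScript-convert instruction (FJCVTZS, added in Armv8.3-A) is even exposed as a compiler intrinsic. A minimal sketch, assuming a toolchain with ACLE support for the JSCVT feature (illustrative only; real JS engines emit this from their JITs rather than from C++):

    #if defined(__ARM_FEATURE_JCVT)
    #include <arm_acle.h>
    #endif
    #include <cstdint>

    // Convert a double to int32_t with JavaScript's ToInt32 semantics.
    int32_t js_to_int32(double x) {
    #if defined(__ARM_FEATURE_JCVT)
        return __jcvt(x);   // single FJCVTZS instruction
    #else
        // Simplified fallback for illustration: handles NaN and in-range values like JS,
        // but returns 0 instead of wrapping modulo 2^32 for out-of-range inputs.
        if (!(x > -2147483649.0 && x < 2147483648.0)) return 0;
        return static_cast<int32_t>(x);
    #endif
    }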
The difference can be explained partially by the memory model: x86 has total store ordering, which can be slower than Arm's weak memory model (it allows the hardware to be more creative).
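To make the memory-model point a bit more concrete, here is a small C++ sketch (mine, not the commenter's). The release/acquire pair below is essentially free on x86 because TSO already provides that ordering (plain loads and stores), whereas on AArch64 it compiles to STLR/LDAR; conversely, relaxed operations give an Arm core reordering freedom that an x86 core never gets to use.

    #include <atomic>
    #include <cstdio>
    #include <thread>

    std::atomic<int>  data{0};
    std::atomic<bool> ready{false};

    void producer() {
        data.store(42, std::memory_order_relaxed);
        // Release: everything written before this store is visible to an
        // acquire load that observes 'true'. Plain store on x86, STLR on Arm.
        ready.store(true, std::memory_order_release);
    }

    void consumer() {
        // Acquire: pairs with the release store above. Plain load on x86, LDAR on Arm.
        while (!ready.load(std::memory_order_acquire)) { /* spin */ }
        std::printf("%d\n", data.load(std::memory_order_relaxed));  // guaranteed to print 42
    }

    int main() {
        std::thread t1(producer), t2(consumer);
        t1.join();
        t2.join();
        return 0;
    }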
> running native compiled code
There's more to it than 'running native code'. It depends a lot on what code is running (any CPU implementing the above JavaScript instruction would be much faster on a web benchmark, for example). It also depends on the compiler. If the code is control-flow heavy, there isn't much to do except having large caches and wide pipes, which most high-end, out-of-order CPUs already have.
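As a small illustration of the 'it also depends on the compiler' point (my example): the same control-flow-heavy source can end up as an unpredictable branch per element or as a branch-free conditional select (CMOV on x86, CSEL on AArch64), with very different performance on data the branch predictor can't learn. Whether that transformation happens depends on the compiler, the optimization level, and the target.

    #include <cstddef>
    #include <cstdint>

    // Likely compiled with a conditional branch per element: fast when the data is
    // predictable, slow (a misprediction every few elements) when it isn't.
    int64_t sum_above_branchy(const int32_t* v, std::size_t n, int32_t threshold) {
        int64_t sum = 0;
        for (std::size_t i = 0; i < n; ++i) {
            if (v[i] > threshold) sum += v[i];
        }
        return sum;
    }

    // The ternary is typically turned into CSEL/CMOV or vectorized, so the cost no
    // longer depends on how predictable the comparison is.
    int64_t sum_above_branchless(const int32_t* v, std::size_t n, int32_t threshold) {
        int64_t sum = 0;
        for (std::size_t i = 0; i < n; ++i) {
            sum += (v[i] > threshold) ? v[i] : 0;
        }
        return sum;
    }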
I read an analysis somewhere (anandtech?) which suggested that a lot of the performance of Apple's chips could be attributed to them having a really large / fast cache system.
For reference, the two Geekbench results I was comparing:
iPhone 11: https://browser.geekbench.com/v5/cpu/1611448
AMD Ryzen 9 3900X (Zen 2, 12 cores): https://browser.geekbench.com/v5/cpu/1611445
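Regarding the large/fast cache point above, here is a rough sketch of the classic pointer-chasing microbenchmark used to expose this (my own illustration; sizes and iteration counts are arbitrary). Average time per hop jumps each time the working set spills out of a cache level, which is exactly where unusually large and fast caches pay off.

    #include <chrono>
    #include <cstdint>
    #include <cstdio>
    #include <numeric>
    #include <random>
    #include <utility>
    #include <vector>

    // Build a single random cycle over [0, n) (Sattolo's algorithm) so that following
    // next[i] visits every element; the random order defeats the hardware prefetcher.
    static std::vector<uint32_t> make_cycle(std::size_t n) {
        std::vector<uint32_t> next(n);
        std::iota(next.begin(), next.end(), 0u);
        std::mt19937 rng(42);
        for (std::size_t i = n - 1; i > 0; --i) {
            std::uniform_int_distribution<std::size_t> pick(0, i - 1);
            std::swap(next[i], next[pick(rng)]);
        }
        return next;
    }

    static double ns_per_access(std::size_t bytes) {
        const std::size_t n = bytes / sizeof(uint32_t);
        const std::vector<uint32_t> next = make_cycle(n);
        const std::size_t hops = 20'000'000;
        uint32_t i = 0;
        const auto t0 = std::chrono::steady_clock::now();
        for (std::size_t h = 0; h < hops; ++h) i = next[i];   // each load depends on the last
        const auto t1 = std::chrono::steady_clock::now();
        if (i == 0xFFFFFFFFu) std::puts("");                  // keep the chain observably live
        return std::chrono::duration<double, std::nano>(t1 - t0).count() / hops;
    }

    int main() {
        // Working sets chosen to straddle typical L1 (tens of KB), L2/L3 (hundreds of KB
        // to tens of MB) and DRAM.
        for (std::size_t kb : {16, 64, 256, 1024, 4096, 16384, 65536}) {
            std::printf("%6zu KB: %5.2f ns/access\n", kb, ns_per_access(kb * 1024));
        }
        return 0;
    }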