Intel launches Core Ultra processors (techradar.com)
164 points by talboren on Dec 14, 2023 | 276 comments


Best benchmarks I've seen for these so far. Essentially great gains for Intel, but still falling short in many ways of AMD 7000, AMD 8000 (most likely), and especially Apple M3. https://www.youtube.com/watch?v=7wZjhlYqZ2I


Dave 2D and Hardware Canucks had more or less the opposite take:

https://www.youtube.com/watch?v=WH-qtuVRS2c

https://www.youtube.com/watch?v=Obtc24lwbrw

The Just Josh numbers you posted don't square with the ones above. It'll be a while before benchmarks settle, and as with all new devices there will be burps. We'll just see.

But taking a blind average of all of the above and applying it to my own requirements: I'm seeing a mid-range device with 80% of the single-core performance of an M3 Max and significantly lower idle/regular-use power consumption. That sounds like a better device than a MacBook Pro to me.


> I'm seeing a mid-range device with 80% of the single-core performance of an M3 Max and significantly lower idle/regular-use power consumption. That sounds like a better device than a MacBook Pro to me.

Depends how much you focus on the AI parts of the announcement, I guess. I mean, the marketing makes this sound like their brand new, absolute highest end laptop CPU, not their mid-range laptop offering. This article is about an "AI Everywhere event" with a marketing presentation [1] talking about "leadership", "the latest LLMs" and "Local LLaMa2-7B".

Except when it comes to LLMs, it's critical to have lots of high-bandwidth memory. The M3 Max can be configured with up to 128GB of memory at 300GB/s, whereas this caps out at 64GB of LPDDR5, and Intel haven't seen fit to mention memory bandwidth. Given the "LPDDR5/x 7467 MT/s" spec, I can't believe it'll be any more than 100GB/s (which I guess is why they aren't mentioning it).
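
A rough sketch of why that bandwidth figure matters (the ~4 GB quantized 7B model size below is an assumption, not a figure from the article): single-stream LLM decoding is usually memory-bandwidth bound, since every generated token has to stream the full weight set through the memory bus once, so bandwidth divided by model size gives an upper bound on tokens/second.

    # Back-of-the-envelope sketch; bandwidth and model-size figures are assumptions.
    def max_tokens_per_second(mem_bandwidth_gbs: float, model_size_gb: float) -> float:
        """Upper bound on decode speed if all weights are read once per token."""
        return mem_bandwidth_gbs / model_size_gb

    MODEL_GB = 4.0  # LLaMA2-7B at ~4-bit quantization is roughly 4 GB of weights

    for label, bw in [("M3 Max, 300 GB/s", 300.0), ("~100 GB/s LPDDR5x guess", 100.0)]:
        print(f"{label}: <= {max_tokens_per_second(bw, MODEL_GB):.0f} tokens/s")
    # M3 Max, 300 GB/s: <= 75 tokens/s
    # ~100 GB/s LPDDR5x guess: <= 25 tokens/s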

It's good to see Intel is paying more attention to ML, but they're clearly still playing catch up to Apple, let alone to nvidia.

[1] https://download.intel.com/newsroom/2023/ai/ai-everywhere-20...


You need to watch the Dave2D video again; he says the exact opposite of what you are saying here.

Core Ultra is just catching up with AMD 7000, and idle consumption is bad compared to the M3.


At what point does it not really matter anymore for desktops/laptops?

How common is the use case of full CPU loads for hours on battery? Or eking out a 10th hour of light-use autonomy?


If you use an Apple Silicon Mac you won't be asking that last question. Being able to reasonably go days without charging with regular usage is quite nice and opens this laptop up to new ways of being used. There's no other device in my inventory that I just don't have to worry about battery with.

My use case doesn't often pin cores these days, but I imagine the same goes for the full CPU load question. It's not common, because it's not possible. Enabling that enables new ways of using it.

And outside of traditional computer use, those questions are both easily answered if you consider handheld PC gaming devices. People post excitedly about their strategies and successes when they squeeze out an extra hour of game time on both the high and low ends.


I went into the office yesterday and forgot my power brick at home. I just got a new MacBook Pro with M2 Max so I figured I'd see how long I could go on battery before needing to borrow someone's charger.

I started the day at 7:30, worked most of the day on it with multiple apps open - Chrome, Safari, Teams, Slack, VS Code, IntelliJ, DataGrip, Postman, Outlook, etc. Connected to a wireless keyboard and mouse, and streaming music to my AirPods for a few hours of the day. Most of my actively used apps were IntelliJ, Chrome, and Teams (admittedly not doing a LOT of building/running Java cause I was resolving build issues for a chunk of the day). I ended the day at 4:00 with 47% battery remaining.

That was unheard of on my last i9 MBP that could barely survive a few hours on battery, even when it was new.


> Being able to reasonably go days without charging with regular usage is quite nice

No, it's much worse than that for us Intel laptop users. Our machines make loud noises, overheat, and throttle to the point where it affects Zoom video chats. The fan on my X1 Carbon even comes on when booting the machine!


My 14" M3 Max 128GB devours the battery in less than 1.5h when inferencing LLaMA 2 70B and is as noisy as a SpaceX rocket... The same goes for my Zephyrus G14 7840U 4090 64+16GB.


Meteor Lake is behind in both performance and efficiency compared to the M3 family.

https://www.notebookcheck.net/Intel-Meteor-Lake-Analysis-Cor...


I had big expectations for Meteor Lake. However, it seems that the compute part of the CPU is built on a 7nm-class process and that single-core performance is worse than Raptor Lake's.

I wonder if using TSMC's 3nm process, as Apple did, would have improved things considerably.


No, the CPU tile, which is the only part made by Intel in Meteor Lake, is made in the new "Intel 4" CMOS process.

While it appears that with "Intel 4" Intel has succeeded in reducing power consumption enough to be competitive with CPUs made on older TSMC processes, like the AMD CPUs, history has repeated itself: exactly as at the launch of the Intel "14 nm" process in 2014 and of the Intel "10 nm" process in 2018 and 2019, Intel is unable to reach clock frequencies on the new manufacturing process as high as on its previous, mature manufacturing process.

Because the P-cores of Meteor Lake have the same microarchitecture as the P-cores of Alder Lake and Raptor Lake, but a lower turbo clock frequency, they have a lower single-thread performance.

Nevertheless, the Meteor Lake E-cores are improved and the clock frequency when all cores are active is higher for Meteor Lake, due to the improved power efficiency, so the multi-threaded performance of Meteor Lake is better than in the previous Intel CPUs.

The best part of Meteor Lake is the GPU, made at TSMC, which has 4/3 times as many FP32 execution units as AMD Phoenix or Hawk Point, at a clock frequency higher than 3/4 of the AMD frequency, so the Intel iGPU is faster than the AMD iGPU, and it is twice as fast as the older Intel iGPUs.

The other nice feature of Meteor Lake is that the SoC tile (made by TSMC) includes everything needed when a computer is mostly idle, such as when reading documents or watching movies. In such cases both the CPU tile, with its 6+8 faster cores, and the GPU tile, with the 3D graphics, can be shut down completely, greatly reducing power consumption for anything that can be handled by the two slowest cores and the video display engine.

For consumers who do not need the higher performance of 45 W CPUs, laptop or SFF computers with Meteor Lake are a great choice.

While one model of 45 W Meteor Lake will be available some time later (Core Ultra 9), it will likely be too expensive and too hard to find in comparison with the 45-W AMD CPUs.


The SoC being able to do that is a really nice feature for laptops playing thin client on battery power. I don't think AMD has the same trick available at present.


That's alright though. Scaling pains. Glad they finally entered the "4nm race", and it seems like genuine innovation even if not perfected.


To my knowledge the 4 in "Intel 4" is unrelated to the nanometers (considering "Intel 7" was 10nm++).


I guess we will need to see the benchmarks. I’m personally a bit skeptical that you can get decent AI performance out of a CPU even with a built in accelerator, but maybe Intel pulled a rabbit out of their hat.


Why not? If you have an accelerator, you have dedicated hardware for the task. There is really no fundamental reason it can't be as good as a GPU; if a GPU were the best architecture, it would just be a GPU.

The main question is whether they allocated enough die space and power budget to the accelerator, or whether it is anemic even for basic tasks.


For one thing, the memory on GPUs is generally much faster, with a wider bus than what typical CPUs have. The die sizes can be quite large, too. Putting a full sized CPU in the same package as the accelerator means making some sacrifices.


> There is really no fundamental reason it can't be as good as a GPU; if a GPU were the best architecture, it would just be a GPU.

There is a reason, and that reason is called memory bandwidth.


How useful is an integrated GPU for AI workloads? People generally seem to use things like a 3090 or 4090.


GPUs are huge, TPUs etc are also quite large, and CPUs are tiny. I'm no expert on any of these, but intuitively you're losing something cramming that functionality into a way smaller chip. Probably something to do with bandwidth (where big helps) vs latency (where small helps).


The GPU chip itself isn't that big if you take the heat sink off. But it has power delivery, memory, etc. on the board.


Even the GPU chip itself is still much larger than the CPU chip.


Software is the fundamental component of AI.

It's basically only this month that AMD accelerators got LLM support.


One benefit of an AI accelerator on CPU die is that the accelerator gets access to system memory. If you build the system with lots of high-bandwidth system memory, as Apple does with their M3 Max systems, then the system can run more capable AI models locally.

For example, currently most consumer GPUs max out at 16GB of VRAM, but usually have 8GB or 12GB (except for more expensive workstation cards and similar [1]). But if you can spec a system with 32GB, 64GB, 96GB, or 128GB or more of system memory with sufficient bandwidth to the CPU, then the accelerator can run a broader range of bigger models locally.

Afaik Apple M2 Pro and M3 Pro/Max systems are the only ones with that capability right now.

[1]:https://www.nvidia.com/en-us/design-visualization/desktop-gr...
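
A minimal sketch of the capacity side of that argument (parameter counts and quantization levels below are illustrative assumptions, ignoring KV cache and activation overhead): the weight footprint alone shows why a 70B model won't fit in a 24GB card but fits comfortably in 64GB+ of unified or system memory.

    # Approximate memory needed just to hold the weights; KV cache and
    # activation overhead are ignored to keep the sketch simple.
    def weights_gb(params_billion: float, bits_per_weight: int) -> float:
        return params_billion * 1e9 * bits_per_weight / 8 / 1e9

    for params in (7, 13, 70):
        for bits in (16, 8, 4):
            gb = weights_gb(params, bits)
            print(f"{params:>2}B @ {bits:>2}-bit: {gb:6.1f} GB "
                  f"(fits 24 GB VRAM: {'yes' if gb <= 24 else 'no'})")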


>One benefit of an AI accelerator on CPU die is that the accelerator gets access to system memory. If you build the system with lots of high-bandwidth system memory, as Apple does with their M3 Max systems, then the system can run more capable AI models locally.

Should Nvidia be afraid of Apple? Is OpenAI going to ditch their A100 for Macbook Airs?


Not Apple since they’re not a player in datacenter AI, only local laptop/desktop AI. But AMD yes, since AMD is developing similar architecture APU systems that will roll out in 2024. Intel is also developing them, but unclear if their NPU and GPU performance will be comparable.

https://www.amd.com/en/products/accelerators/instinct/mi300....

https://www.techradar.com/computing/cpu/intel-launches-core-...

There is also some tech that lets you put some model layers in the Nvidia GPU's local VRAM and the rest in system RAM. It's slower though. Some discussion of that here:

https://www.reddit.com/r/LocalLLaMA/comments/18ire2a/best_mo...

https://www.reddit.com/r/LocalLLaMA/comments/18io4lp/how_are...
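
For the layer-split approach, here is a minimal sketch using llama-cpp-python, assuming a CUDA-enabled build and a local quantized GGUF file (the file name and layer count are hypothetical):

    # Split inference: n_gpu_layers layers live in GPU VRAM, the rest in system RAM.
    from llama_cpp import Llama

    llm = Llama(
        model_path="llama-2-13b.Q4_K_M.gguf",  # hypothetical local file
        n_gpu_layers=35,  # offload as many layers as fit in VRAM
        n_ctx=4096,
    )

    out = llm("Q: Why is partial offloading slower than full-GPU inference? A:",
              max_tokens=64)
    print(out["choices"][0]["text"])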


> Should Nvidia be afraid of Apple?

Depends on what you think about how technologies become successful.

Some people would say the foundation of nvidia's current dominance is their mass-market gaming GPUs. While every other High Performance Computing technology wanted you to call their enterprise sales team and pay a big markup, any academic or open source developer could get their hands on an nvidia GPU - and that led to a huge academic and open source ecosystem.

In recent years, nvidia has decided if you want more than 24GB of vram, you've got to call their enterprise sales team and pay a big markup. And a lot of people working on LLMs want more than 24GB of vram.

On the other hand, nvidia do have a lot of momentum. And of course cloud options are available these days.


We need to define what the benchmark for decent is. There are a lot of smaller models/uses that, for a variety of reasons, are beneficial to run locally: audio transcription, a lot of image analysis, and so on. Much of it doesn't require a 250W, $2000 GPU. You aren't going to run an LLM on this, but loads of stuff will work great.
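
As a small example of that kind of local workload, a hedged sketch with openai-whisper (the audio file name is hypothetical; the "base" model is small enough to run comfortably on a laptop CPU):

    # Local speech-to-text on a laptop-class CPU; no discrete GPU required.
    import whisper

    model = whisper.load_model("base")        # ~140 MB model, CPU-friendly
    result = model.transcribe("meeting.mp3")  # hypothetical local audio file
    print(result["text"])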


Intel marketing emphasizes AI (and iGPU) after it became clear that their Meteor Lake generation would not meet its efficiency targets and wouldn’t significantly improve upon previous generations.


Even worse for Intel, AMD announced their 8040 series with both better efficiency and an NPU which (with what few details we've had so far) can possibly hit much higher TOPS.


The 7000-series processors have an NPU on the higher end.


Intel Meteor Lake Analysis - Core Ultra 7 155H only convinces with GPU performance

https://www.notebookcheck.net/Intel-Meteor-Lake-Analysis-Cor...


It's not the CPU, it's the memory bandwidth, isn't it?


The Samsung Galaxy Book4 was announced an hour ago with tomorrow's date (must have gone out at midnight Korea time; I'm ET), and it will be using the Core Ultra series. [1]

At 28W of power consumption, it will be interesting to see if these can start to close the gap on battery life between PC and Apple.

[1] https://news.samsung.com/global/introducing-galaxy-book4-ser...


> At 28W of power consumption

For the lesser SKUs, but only at the base clock speed.

> All run at a base TDP of 28 W, with a maximum turbo TDP of up to 115 W.

https://www.anandtech.com/show/21185/intel-releases-core-ult...

The base TDP for the Core Ultra 9 is listed as 45 watts.


I don't think "base clock speed" has been a well-defined quantity or officially published specification for a few generations now. It's really just the default/recommended PL1 (long term turbo power limit) value, and what clock speed that corresponds to will depend on the workload.


I think we can be absolutely certain that the benchmark results Intel highlights are not what you can expect to see while the chip is limited to running at a 28 watt TDP.


There's nothing new or special about that 28W number: that's been one of Intel's standard product segments for years.


Yeah, the 28W number is a load consumption. The driver of "battery life" is idle management, which has as much or more to do with OS software as with hardware design.

(Also, it needs to be said, with simple battery capacity. Apple ships a monster 72 Wh unit in the current MBP, which is fully half again as much as typical Intel/AMD/Chromebook devices.)


> Yeah, the 28W number is a load consumption.

More specifically, it's the sustained power limit used as a target parameter for the processor's power management control loop. It's not a measurement of any specific workload, and it doesn't take into account any part of the system outside the Intel chip; DRAM, display, storage, etc. are not included. Laptop OEMs also have some leeway to tweak those power limits, and there's wide variation in their cooling systems, so the achievable power draw within safe temperature limits can be quite different.
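
As a concrete illustration (a sketch, assuming a Linux laptop with the intel_rapl powercap driver loaded), the long-term and short-term package limits described above are exposed in sysfs:

    # Read the package power limits (constraint 0 = long_term/PL1,
    # constraint 1 = short_term/PL2) via the powercap interface.
    from pathlib import Path

    pkg = Path("/sys/class/powercap/intel-rapl:0")  # package 0, if present

    if pkg.exists():
        for idx in (0, 1):
            name = (pkg / f"constraint_{idx}_name").read_text().strip()
            uw = int((pkg / f"constraint_{idx}_power_limit_uw").read_text())
            print(f"{name}: {uw / 1e6:.1f} W")
    else:
        print("intel_rapl powercap interface not available on this machine")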


>these can start to close the gap on battery life between PC and Apple

I was hoping the same.

But after seeing the benchmarks from notebookcheck.com it's clear they can't. For the same performance you are going to consume more power. And part of the problem, I think, is the compute part of the CPU being made on Intel's 7nm-class process vs TSMC 3nm.


Intel 14th gen Core Ultra 14900H

Intel and AMD are both absolutely terrible at naming processors.


Apple's A21 processor will likely use TSMC's A14 process, not to be confused with Apple's A14 processor.


Is the M2 Max better than the M2 Ultra? Surely Max is the maximum so it's the top model right? Oh no, we're going the video game graphics settings route then?

It's more a matter of which naming scheme you know.


Apple is bad too, truthfully.

Old school BMW naming schemes were the bomb. The first digit was the chassis, then the second and third were displacement. 330, 525, 755, etc.

M2X, M2XL, M2XXL would have been cooler than the max/ultra shenanigans.


That's not a real processor name though.


If you don't use at least one superlative, it's utter crap. Ultra Pro Max+


Ultra Pro Max+ Championship Edition


> Ultra Pro Max+ Championship Edition

Well, depending on where you are, that's bad. In the UK you'd want:

> Ultra Pro Max+ Premier League Edition


What about "The best CPU in the world, really, and you are a loser if you don't buy it"?


with Knuckles


Fatal1ty Series


That name is taken already by nVidia's power connectors.


Your humor is on fire. Wait...


Brings back memories of Street Fighter.


This looks mid compared to Apple and AMD. I don't know who is supposed to be impressed. Gamers should all be switching to AMD soon, it's a bit embarrassing to even be running Intel this year. Maybe Intel should focus on server, where it still dominates, since for whatever reason Arm64/Apple hasn't taken over yet.


Intel is getting its ass kicked in server and has been for literally years. The AMD Epyc hardware is better on every dimension. Intel's marketing is generally more effective.


> since for whatever reason Arm64/Apple hasn't taken over yet.

A server running MacOS?


They used to be the nicest looking things in the rack:

https://commons.m.wikimedia.org/wiki/File:Xserve_G5.jpg


Actual benchmarks and useful info here https://youtu.be/WH-qtuVRS2c


Funny, as the AnandTech article wrote "Despite today being the official launch of the Core Ultra series and Meteor Lake platform, you won’t find any reviews for the hardware. And we’re not sure you’ll be able to find much hardware, either", and yet this YouTube channel reviews two such devices in their video. Benchmark data from the video:

Cinebench R23: 13218

Cinebench 2024: 701

Geekbench 6.2: 12319

Fire Strike GPU: 9317

Time Spy: 3283

Geekbench 6.2 Compute: 30101

Battery life (summary): In their various tests (not many apparent details on exactly how it was done) very comparable with the M2 air/7840U devices.

Gaming FPS (1080P Medium, iGPU): Generally better than the 780M

Overwatch 2: 82 FPS

Apex Legends: 73 FPS

Diablo 4: 50 FPS

Starfield: 31 FPS

Counter-Strike 2: 67 FPS

Valorant: 137 FPS


>Geekbench 6.2: 12319

I wish there were single-core benchmarks.



Thanks. It still needs a 50% speedup to reach A17 performance on a clock-for-clock basis. (Not a fair comparison, since they optimise for different things, but still interesting to see.)


I would trust Cinebench more than Geekbench. It's funny how Cinebench is biased against Apple, while most other benchmarks are not. And that is for both mobile and laptop CPUs.


>It's funny how Cinebench is biased against Apple,

I don't see that. Cinebench still shows Apple about the same distance ahead of its competition.


Executive summary (all numbers from the 155H part): Arc GPU looks fantastic and market-leading (though more so on rendering than compute, which I guess is to be expected since they're pushing the AI unit for those workloads?). Multicore CPU benchmarks running about 20-40% above the M2 Mac tested and 10% above AMD's 7840U. Battery life tests (a benchmark category I personally hate since it conflates so many different things) seem to have reached parity with the AMD and Apple devices, which doesn't sound notable but this was a BIG shortcoming with ADL/RPL.

On the whole... this sounds big. If these numbers check out in further testing then all of a sudden Intel's back on top of the laptop game.


AMD announced a successor to the 7840, mostly with _more_ NPU.


What do they use to test GPU Compute?


How does the idle power consumption compare with AMD/Apple? While peak power draw is interesting, most gains will come if the LP E-cores bring down consumption while in a meeting, browsing the internet, reading docs, etc.


It's honestly a bit sad. Intel is kicking up such a storm with this launch, and the product is about on par with AMD APUs that have been available for months and months. Intel is super lucky they paid off the entire market during the time AMD was weak.


And AMD just refreshed and bumped up those NPUs.


I wish they'd beefed up their Arc iGPU to at least match the RTX 3060; that would allow people to play most games without needing to buy a separate video card... that would be a huge win.


Is it even physically possible to compete with Apple when Apple booked all (90%?) of the next gen manufacturing capacity from TSMC?


Intel still runs its own fabs


Apple has the advantage of vertical integration.

Intel is gonna have to find an OEM that can squeeze every bit of juice out of the CPU to make it compete.


"find an OEM"

Pretty sure current OEMs have done this for decades for their laptops (you can find laptops with the same CPU but wildly different battery life). They are not idiots who just slap a CPU into any laptop with random components.


The software people at laptop OEMs most definitely are idiots, and Intel's processor designs are increasingly reliant on software being smart enough to properly manage and use that hardware. Intel and Microsoft are doing a mediocre job at best of handling the software challenges, and then the OEMs ship the system with crapware that makes it impossible for the processor to stay in its low-power idle states.

And judging by how many laptop OEMs actually bothered to ship the Arc discrete GPUs, a fair number of them must also be dumb suckers.


Don't those OEMs tend to design around Intel? I think they'll be ok when it comes to integration.


Apple has the advantage of TSMC.


You might have missed it, because this article somehow failed to mention it, but half this chip is TSMC's silicon.

Going forward, Intel is going to be TSMC's second biggest 3nm customer. They're not using TSMC N3 here but it shows their commitment (to temporarily giving up on their own process).


Thanks, I was wondering about that. Shame Intel can't catch up on their own.


Their strategy is to be back out in front with 18A in 2025. I'm not holding my breath.


Unless Apple plans to make their computers more affordable to 80% of the worldwide desktop users, it will hardly matter.


I’d argue that the optics still matter, and that it’s not Apple who Intel are worried about.

Intel have to contend with: “this Apple chip over here is more efficient, why can’t you do the same?”. Apple blew a hole in the side of the age old x86 armour by proving the arch isn’t untouchable.

Then other companies like Qualcomm can finally capitalize on that damage to the x86 armour.


Imagine if NVIDIA had been able to buy ARM. That could be a threat in a couple directions.


Perhaps. Though I don’t think they need to. I don’t see what owning Arm would have done for them since they can already make arm processors, and could do custom cores.

Though perhaps the opposite direction is more interesting and what you were getting at: what if the off the shelf arm cores had NVIDIA’s engineering behind it.


Nvidia could have shifted the model by firing the customers and bringing things in-house. That'd involve phasing out ARM licensees and buying up leading node fab capacity to manufacture their own stuff.

If they pulled that off, that Nvidia would have been very powerful. CUDA + ARM + leading node would have put them in a similar position that Intel had with x86 back in the day.

The competition would have struggled there - nobody has a CUDA equivalent, only AMD has a GPU equivalent, and nobody has an install base of software like ARM's. RISC-V (or Intel), even executing super fast, would have been years away from being competitive across the board like this.

It might have also wrecked AMD, since the weak point of their GPU strategy is reliance on x86. If Nvidia+ARM came on strong, that'd weaken x86's stranglehold on PCs, which would ultimately weaken AMD since they don't have a leading position in other markets to fall back on.

Nvidia would have a lock on mobile devices, the increasingly AI-based datacenters, as well as some degree of PC side including gamers. Microsoft would pick up on the software side and we'd have Winvidia instead of Wintel.


This sounds like one of the easiest anti-trust suits, it would be foolish for Nvidia to do this.


Well antitrust is basically what stopped the deal from happening [1]. If they had gone through with the deal and done this strategy I imagine it would have been a slow 5-10 year shift, not an immediate action.

> The main concern among regulators was that Nvidia, after the transaction, would have the ability and incentives to restrict access to its rivals to Arm´s technology, which would eventually lead to higher prices, less choice and reduced innovation. [2]

[1]: https://www.ftc.gov/legal-library/browse/cases-proceedings/2...

[2]: https://www.pymnts.com/news/regulation/2022/collapse-of-nvid...


>Nvidia could have shifted the model by firing the customers and bringing things in-house.

That wouldn't work since many companies already had been licensing the ARM architecture, not just cores.


Well there'd be a transition phase and an honoring of existing contracts to some extent but the way to do it would be to slow roll the release of customer ISA updates or stop providing ISA updates altogether, e.g. anything past ARMv9.4-A would be in-house only. After 4-5 years competition would be struggling.


NVIDIA would charge their customers an ARM and a leg.


amd a leg


or 2 arms and a peg


I wish they'd capitalize on it by making ARM as well supported as x64 is, where any program, any OS, any line of code ever written will most likely run flawlessly. Meanwhile the average ARM computer struggles to support even a single version and distro of Linux. Like why is ARM such a fucking travesty when it comes to compatibility?


Your expectations seem unrealistic to me.

1. Even x86(_64) can’t run every line of code ever written flawlessly. It can only run the code that is on a compatible architecture, where compatibility largely breaks based on intrinsics used or how old the arch is. You already cannot run everything on a modern x86 processor, only a subset and that’ll get worse when Intel drops support for 16 bit soon.

2. That leaves translation or emulation for the arches that are non-native. Which is exactly what arm does today on all three of the big OSs. macOS, windows and Linux all have translation/emulation layers now so can run most x86 code with varying levels of performance penalty.

3. Many Linux distros support arm. I’m not sure where you’re coming from on this. It’s been multiple decades of support at this point, and even the raspberry pi is a decade old itself now as the poster child for consumer arm64 Linux. They may not support every flavor of it, but that’s also true for x86 systems.


> even the raspberry pi is a decade old itself now as the poster child for consumer arm64 Linux

That's a pretty good example actually, the various Pi versions are probably the best supported ARM in existence, and even they are incredibly limited in what they can run.

For some reason the way OS support works on ARM is that every OS needs to explicitly support the exact underlying hardware or it doesn't run. For example the recently released Pi 5 can only really run two OSes right now: Pi OS 12 and Ubuntu 23.10. How is that possible, I ask? Why the fuck isn't the required firmware shipped with the SoC and made compliant to run any aarch64 build of anything? It's not like it's new hardware either; it uses a dated five-year-old Cortex-A76.

Meanwhile x64 has apparently done the opposite and standardized hardware to a level where software support is completely irrelevant. Pick any new or old version of Windows or Linux or FreeBSD or whatever, pick any motherboard, CPU, GPU, disk combo and it'll install and just work (with a few exceptions). It baffles me that this standardization that's been a blessing on x64 is impossible to achieve with ARM. I don't need a specific release of Debian with firmware from Gigabyte to work with their motherboard, it doesn't give a shit if it's an Intel or AMD CPU or something third entirely, but for ARM this level of support is apparently like asking for cold fusion.

> that’s also true for x86 systems.

Really? I mean I suppose there must be some very specific OSes out there that aren't compatible, but I've yet to hear of any. Hell, you can even run Android.


That exists, actually; you're looking for systems like https://libre.computer/products/ that support UEFI by default, so you can just grab a generic OS image and have it work because device enumeration works the same as on PC and not the... "interesting" hard-coded stuff that's weirdly common in ARM. The official marketing is ... I think this is it? The Arm SystemReady program - https://www.arm.com/architecture/system-architectures/system... but I find it easier to just say UEFI.


I have two of their Sweet Potato AML-S905X-CC-V2 boards running Fedora IoT for some containers under Podman. Very fun devices to work with so far.


I'll have to keep an eye on that list, one of these might actually be a solid RPi alternative eventually if they can get good long term OS support this way. All I'm seeing on it right now are somewhat obsolete boards though, 2 or 4 GB of memory at most, LPDDR3 and 4, no wifi chip.


Your arguments seem to be around device tree support rather than the actual cores and arch.

That’s largely where the holdup is. Most arm devices use a variety of more unique supplementary hardware that often only distribute their support in binary blobs. So due to lack of ubiquity, the support in distros varies.

If you could skip the rest of the device and focus on the processor itself, the distros would largely all run as long as they didn’t remove support explicitly.

This is the same process as on x86. It’s just that the hardware vendors are also interested in selling the components by themselves, and therefore have a vested interest in adding support to the Linux kernel.

It’s very much the case that when new hardware comes out, you need a new kernel version to support it properly. That is true of processors, GPUs, and even motherboards. They don’t just magically function; a lot of work goes into submitting the patches to the kernel prior to their availability.

Since arm manufacturers right now have no interest in that market, they don’t do the same legwork. They could. If Intel or AMD entered the fray it would definitely change the makeup.

The one other big issue is there’s no standard BIOS system for arm. But again, it’s just down to the hardware manufacturers having no interest as you’re not going to be switching out cores on their devices.


Device Trees don't magically cause incompatibilities either. They're just a declarative specification of the non-discoverable hardware that exists. The old BIOS systems are so much worse than DT and harder to test properly.


Yeah great point. I think people just take today’s state of things for granted and don’t realize how much has been going on behind the scenes to enable it today. And it’s still not great.


DTs are part of the problem.

It is one of those "devil in the details" kinds. In theory, DT would be okay, but it's not. The issue starts with HW vendors failing to create 100% (backwards) compatible hardware. For example, if they need a uart, they fail to hook up the standard baud rate divisor and instead use a clock controller somewhere else in the machine because its already there. So now DT describes this relationship, and someone needs to go hack up the uart driver to understand it needs to twiddle a clock controller rather than use the standard registers. Then, of course, it needs to be powered up/down, but DT doesn't have a standard way to provide an ACPI-like method for that functionality. So now it either ends up describing part of the voltage regulation/distribution network, or it needs a custom mailbox driver to talk to the firmware to power the device on/off. Again, this requires kernel changes. And that is just an example of a uart, it gets worse the more complex the device is.

On x86, step one is hardware compatibility, so nothing usually needs to be changed in the kernel for the machine to understand how to setup an interrupt controller/uart/whatever. The PC also went through the plug and play (PnP) revolution in the 1990's and generally continues to utilize self-describing busses (pci, usb) or at least make things that aren't inherently self-describing look that way. Ex: intel making the memory controller look like a pci root complex integrated endpoint, which is crazy but solves many software detection/configuration issues.

Second, the UEFI specification effectively mandates that all the hardware is released to the OS in a configured/working manner. This avoids problems where Linux needs to install device-specific firmware for things like USB controllers/whatever because there is already working firmware, and unless Linux wants to replace it, all the HW will generally work as is. Arm UEFIs frequently fail at this, particularly uboot ones, which only configure enough hardware to load grub/etc, then the kernel shows up and has to reset/load firmware/etc as though the device were just cold powered on.

Thirdly, ACPI provides a standard power management abstraction that scales from old pentium from the 1990s where it is just traping to SMM, to the latest servers and laptops with dedicated power management microcontrollers, removing all the clock/regulator/phy/GPIO/I2C/SPI/etc logic from the kernel, which are the kinds of things that change not only from SoC to Soc but board to board or board revision to board revision. So, it cuts out loads and loads of cruft that need kernel drivers just to boot. Nothing stops amd/intel from adding drivers for this stuff, but it is simply unnecessary to boot and utilize the platform. While on arm, its pretty much mandated with DT because the firmware->OS layers are all over the place and are different with every single arm machine that isn't a server.

So, the fact that someone can hack a DT and some drivers allows the hardware vendors to shrug and continue as usual. If they were told, "Sorry, your HW isn't Linux compatible," they would quickly clean up their act. And why put in any effort, when random people will fund Asahi-like efforts to reverse engineer it and make it work? Zero effort on Apple's part, and they get Linux support.


> "For some reason the way OS support works on ARM is that every OS needs to explicitly support the exact underlying hardware or it doesn't run."

x64 servers and PCs are compatible because all of them adhere to an architectural standard backed by a suite of compatibility tests defined by Microsoft: https://learn.microsoft.com/en-us/windows-hardware/design/co... and https://learn.microsoft.com/en-us/windows-hardware/test/hlk/ . The ARM world has no similar agreed-on standard.


Per another comment, there apparently is something along the lines of that now: https://www.arm.com/architecture/system-architectures/system...

Took them long enough.


It seems almost like it's all historical accident from the IBM PC wars. Now all the companies want vendor lock in, and developers don't seem to get excited anymore.

Back in the day people liked tech and wanted to use it everywhere and have everything just work every time. Now it seems like everyone just wants to tinker with small projects.

Like, look at all the stuff Bluetooth 5 can do. If that existed in 1997, I think there would be about 5x the excitement. But there are probably more people working on customizing their window manager than doing anything with the big name standards.


> It baffles me that this standardization that's been a blessing on x64 is impossible to achieve with ARM.

Intel only sold CPUs at one point. They needed compatibility with all sorts of hardware. ARM manufacturers, on the other hand, sell SOCs, so they don't want any sort of compatibility.


Not sure what you mean by that.

The only issue I see compatibility-wise is that ARM doesn't have a standard method for an OS kernel to auto-discover peripherals. x86_64 has UEFI and ACPI. ARM manufacturers could adopt that if they wanted to, but apparently they (mostly) don't want to.

Otherwise, non-assembly code written for x86_64 tends to run just fine on ARM, when compiled for it.


> Not sure what you mean by that.

Traditionally the Linux-on-ARM market has sucked because every single board computer and phone and tablet has needed its own specially compiled kernel with its own set of kernel patches, and probably its own half-assed Linux distro that never gets any updates.

You can't even buy a USB wifi or bluetooth stick and expect it to work without checking the chipset - let alone use a single distro across the BeagleBone Black, the Xilinx Zynq, the iMX6 and the RPi.


>ARM manufacturers could adopt that if they wanted to, but apparently they (mostly) don't want to.

ARM manufacturers don't want compatibility.


>I wish they'd capitalize on it by making ARM as well supported as x64 is, where any program, any OS, any line of code ever written will most likely run flawlessly.

Who is they? X86 had IBM to establish a standard. And when IBM wasn't interested anymore, it was Intel who stepped in.


Nobody cares outside the enthusiast market - they just care about price, that it runs Windows, the latest CPU, availability, and the name recognition of Intel and of makers like Dell, Acer or Lenovo.

Office, Youtube and WWW.


This isn't relevant when one CPU can go anywhere and the other can't.


Qualcomm sell chips to more diverse uses than Intel do.

If Apple’s opened the gateway on them finally cracking into the desktop/laptop market by removing the stigma of their previous arm offerings, then I’d argue they’re very serious competition to Intel.

The only market I don’t see them tackling is consumer sales of just the chips, but that’s honestly such a small percent of sales to begin with for these companies.


The market for CPUs is dominated by cloud sales. Desktop and laptop CPUs are just such a small business in comparison to everything else these days.

And in the cloud, custom arm seems to be the path that was chosen


>And in the cloud, custom arm seems to be the path that was chosen

Doesn't the cloud run on x86?


More and more of AWS is available to run on Graviton. It can often be cheaper to use the Graviton instances when possible. And why not, when all you're caring about is having a PostgreSQL RDS instance or OpenSearch or an Athena query, it doesn't really make much difference if its an Intel or AMD or Graviton CPU under the hood so long as the performance per dollar is right.

Microsoft and Google have had Ampere-based CPUs available for over a year. They're still very x86-heavy though.

And don't forget, if you're really craving PowerPC in the cloud you can get LPARs on IBM's cloud for another flavor of non-x86 cloud compute.


The more relevant question is AMD vs. Apple. Intel is pretty much out of the equation now. Anybody have insights into the advantages and disadvantages from both sides from that perspective?


> Anybody have insights into the advantages and disadvantages from both sides from that perspective?

Do either of them have the volume to supply the server market? At the end of the day that's why Intel is still relevant.


Are there actual volume and production issues at TSMC?

I figured Intel is still "relevant" because of past momentum, not because of actual specs or production capacity.

So while AMD is relevant, Intel is only "relevant" in quotes.


I think Intel are still relevant on the basis of having broadly competitive specs (not quite as far along as others, but pretty close) and their own production capacity.

There is enough demand for both TSMC's and Intel's capacity.


In terms of performance for value they aren't relevant. If you have cash and you want the best bang for your buck, Intel is not a rational option. That's just reality, but some people are in denial.

In terms of potential for future success and raw ranking, yeah intel is still relevant in this area. But given that most people on HN are consumers I would think "relevant" applies to my first statement. They aren't currently a rational choice when you are making a purchase decision. But I guess I'm wrong. People will buy a shittier product just out of patriotism I guess? Nothing wrong with that.


HPC nowadays seems to be using AMD EPYC exclusively.


AWS is pushing their ARM (Graviton) chips over the Intel ones. The price/performance is just better, generally speaking.


> Anybody have insights into the advantages and disadvantages from both sides from that perspective

Yes. Having a lot of cash buys you exclusivity for TSMC 3nm and then for TSMC 2nm.


Yeah, so which team is winning on that front? Apple or AMD?


> Apple blew a hole in the side of the age old x86 armour by proving the arch isn’t untouchable.

The fact that they had to create an emulation/virtualization layer (Rosetta) says otherwise. The x86 instruction set is so entrenched that I don't think Intel has any fear of it being uprooted any time in this half-century.


> The fact that they had to create an emulation/virtualization layer (Rosetta) says otherwise.

That is a nonsensical and ahistorical take.

Apple created an emulation layer because they were migrating ISA and thus had a backlog of legacy which they needed to support for users to switch over.

They did the same from PPC to x86, and from 68k to PPC, and nobody would have accused those of being entrenched.


A half century is a pretty long time. I wouldn't be surprised if even Intel has moved on in 30 years.


But that’s not the point is it?

Apple is getting amazing performance at far lower power consumption.

People have been waiting for Intel’s response. If they flub this and it isn’t good enough, that’s a huge tell of weakness. AMD, Qualcomm, and everyone else will be chomping at the bit to take advantage.

Even if the others can’t beat Intel in performance per watt, performance per dollar may sway things.


>Apple is getting amazing performance at far lower power consumption.

They do that also for iOS. But people still buy Android devices with weaker Qualcomm CPUs. Because not every person in this world is rich enough to pay $2000 on an iPhone. For some people, even $200 is a big effort.

And there's other reasons, too. I can afford an iPhone, but I don't think it's worth that kind of money. And the camera on my current phone is better. As I love photography, having a better camera is more important to me than crunching better scores in Geekbench 6.

On the laptop side, I finally gave up and bought a Macbook Pro for movie watching and couch browsing, because I dislike having the device plugged in. But for work I still use x86 laptops and desktops. Performance is higher than on Apple devices, for lower price, it's just the consumption that is higher. And I don't care about that aspect since I always keep my work device plugged in, hooked to external displays and docking stations.


> Because not every person in this world is rich enough to pay $2000 on an iPhone.

Luckily the cheapest iPhone starts at $429 then.


You can get a new Android phone for $100. The cheapest iPhone also looks and feels like a phone from 2018 with its huge bezels, LCD screen and 64GB storage. Compare a similarly priced Galaxy A54 or Pixel 6a. iPhone SE is faster but unless you're gaming are you really going to notice?


You're gonna notice when your budget Android is barely usable after a couple years. It's well known at this point iPhones age better.

The iPhone SE performs basically the same as the most expensive model you can buy. And it's not about gaming, it's about basic tasks - moving around the OS, cold starting apps, editing photos/videos, etc.


Not everyone cares that much about how their phone looks or wants huge amounts of storage. If you're in it for the long run (6-8 years of support, so why not?), the faster CPU will be useful, while the slower Androids will become intolerably slow.


I’m only comparing CPU stats. This isn’t about device prices.

Apple made Intel look bad, Intel needs to respond. If they can’t do that by now it’s going to be a real big tell of how they’re doing.

Nothing I wrote said people had to buy $2000 laptops or $1200 phones.


Tests by independent reviewers (i.e., those not dependent on access to Apple for their continued existence) have demonstrated that the M series gains are almost entirely due to node advances, meaning that Intel could do nothing and catch up as soon as the chip fabs have capacity for non-Apple orders for their latest nodes.


The old "anyone who disagrees with my world view is being bought" argument.

M series gains aren't just to do with the node. It's also because of the unified memory architecture and most importantly the fact that custom silicon e.g. Neural Engine is taken advantage of through the OS and SDKs.


That's true, given that the M-series gains are exclusively against Intel chips and are less performant than comparable AMD chips, upon which Apple's memory architecture is based.


> Tests by independent reviewers (i.e., those not dependent on access to Apple for their continued existence) have demonstrated that the M series gains are almost entirely due to node advances

That’s a bold claim conspicuously missing data. Can you cite a source?


Well, I'm out of my depth, but it doesn't seem entirely inaccurate to say that their node advantage allowed them to sort of throw lots of hardware (transistors) at the problem.

https://www.anandtech.com/show/16226/apple-silicon-m1-a14-de...

    What really defines Apple’s Firestorm CPU core 
    from other designs in the industry is just the 
    sheer width of the microarchitecture. Featuring 
    an 8-wide decode block, Apple’s Firestorm is by 
    far the current widest commercialized design in 
    the industry.
For a list of relative transistor counts among CPUs:

    https://en.wikipedia.org/wiki/Transistor_count
Perhaps more to the point, I think the onus would be on somebody who claims Apple is doing something uniquely spectacular with their chips, other than simply capitalizing on a massive transistor count and power efficiency advantage made possible by their node advantage.

By the way, I'm not knocking what Apple has achieved. I recently upgraded from an 2018 Intel MBP to a 2021 M1 Max MBP. And, holy smokes. This thing is fast and it is effortlessly fast. I have been running all 10 cores pretty hard for the job I'm currently doing. I am absolutely beyond impressed.

But, from an engineering standpoint, it's definitely worth wrapping our heads around what Apple has and has not achieved here. I don't think they have any magic fairy dust or special sauce.


That quote is kind of the point: nobody would say that Apple didn’t effectively take advantage of the new process nodes but they didn’t _just_ take an off-the-shelf ARM core and run it n% faster, they also did significant work which has made their chips faster than those stock designs even on the same process. Android phones tend to be 4-5 years behind iPhone hardware because even when their chips are made on the same process as the older iPhone, they aren’t the same designs and by all accounts Apple’s design team is outperforming ARM, Qualcomm (prior to the Nuvia acquisition, we’ll see how much that shifts when they ship), and Samsung. Changing something like the execution width of a processor isn’t like adding “-n 8” to a build script, it has huge ripple effects throughout the design and that’s expensive to do.

The underlying challenge here is vertical integration: Apple can invest more in chip design not just on the CPU but also balancing with accelerators for things like AI or media processing, security, etc. because they work closely across the OS and app teams to see what they need & make sure it’s actually used (if they add a new coprocessor to reduce need for certain CPU features, it’s priority 1 for the next OS release). Microsoft and Intel or Samsung/Qualcomm and Google work together, of course, but they have different goals and budgets, they need to support multiple companies worth of corporate overhead on the same sale, and they can’t focus too much on any one deal because they need to hedge their bets.

This cuts the other way, of course: Apple’s stuff is best at the mobile and up to mid-range, but beyond that if you need more than a certain amount of RAM or features like advanced virtualization, they don’t have an alternative the way Google can choose between AMD and Intel for GCP with no concern about affecting the Pixel lineup. We’ll see whether Apple stumbles badly enough for this to matter.


Yes, but Intel is still mostly vertically integrated and I think a lot of people are looking to see if they can compete without completely changing their business model.

Intel says this is built on their Intel 4 process, which is, as I understand, their own 7nm process: https://www.intel.com/content/www/us/en/newsroom/resources/c...


> Intel says this is built on their Intel 4 process, which is, as I understand, their own 7nm process:

nm are meaningless marketing numbers. In terms of performance/density Intel 4 should be in same ballpark as '4nm' of Samsung/TSMC.


I know they're all marketing numbers. My point stands that everyone is watching to see how they compete, because their fab tech has been playing catch up.


>In terms of performance/density Intel 4 should be in same ballpark as '4nm' of Samsung/TSMC.

Any proofs to back that up?


It is true that Intel was ahead of TSMC for transistor density relative to the "nm" labelling of their nodes, which is exactly why the naming was changed. Intel 10nm was roughly equivalent to TSMC 7nm in density. They aren't beating TSMC in real terms, though. Intel is still lagging after their embarrassing stall developing the 10nm node.


Citations required.

More importantly, Apple doesn't fab its own chips. TSMC does. Everyone has access to the same vendor. Similarly everyone has access to ARM cores. Yet somehow Apple managed to build an Intel/AMD competitor and others didn't? What am I missing?



Yes. Absolutely, this. For those unwilling to click the link I think the title says it all:

"Apple Bought All of TSMC's 3nm Capacity for an Entire Year"


Only the M3 was made with this latest 3nm process. However, M1 and M2 were produced using N5 and N5P respectively. Samsung, Qualcomm and AMD all have access to 5nm processes. In fact, Samsung fabs their own 5nm chips, while AMD's 7000 series / Zen4 chips were fabbed using the N5 process. However, they are not nearly as competitive as Apple's chips. The whole premise that "Apple's M series chips are only fast because of the process" is incorrect.


During the M2 series Apple had a node disadvantage compared to AMD and Apple was still ahead.

Being the first to buy 3nm doesn't prove what was initially claimed.


> Everyone has access to the same vendor.

Unlikely.

Who can pay as much as Apple - likely in advance and for exclusivity.

My guess is that Apple is financing some of the equipment (TSMC is a high capital business, and Apple has overseas retained profits it doesn't want to repatriate) and there will be contracts for the exclusive use of the new equipment nodes. With everything designed for taxation efficiency.


AMD, Intel, Google, Samsung, and Qualcomm aren’t as big as Apple (a recent development which still sounds odd to say) but they’re all big enough to get first class support from TSMC even if several of them didn’t have the resources to compete directly.

Using the latest process certainly helps but look at the older ones as well - it’s not like the performance gap disappeared when competing AMD processors were launched on the TSMC 5nm process, but that really highlighted the different trade offs those teams make: Zen4 CPUs certainly dusted the Apple chips, but the ones which did were desktop / server designs using far more power, too, since that’s where the money is in the PC side.


Right. Anyone can beat Apple if you’re willing to burn electricity. Intel still makes the absolute top chips the public can buy. There is no contest in GPUs, the discrete AMD/nVidia cards stomp on Apple for the same reason.

But for similar performance, like in the MacBook Air or the MacBook Pros, Intel was embarrassed. The M1s were so much faster and cooler than the Intel chips they replaced it was hilarious.

The Mac Pro is absolutely the weak spot. Apple doesn’t sell enough so it seems unlikely they will spend the money to try and keep up with Intel there. The first Apple Silicon Mac Pro is not what people wanted. And I don’t know if that machine will be coming.


Apple's AI advantage is its unified memory architecture. Nvidia seems to artificially stratify the market by memory segregation.

As soon as Intel or AMD say fuck it and shoot the moon on memory, Apple's distinction evaporates.

I assume Intel is the most likely to do this by 2025.


Which reviewers are you talking about?


The big money in laptop sales is in enterprise fleet sales which is mostly Dell/HP/Lenovo. Retraining a large organization to use Mac OS instead of Windows just because the Apple CPU is 10% faster or gives 10% more battery life or some such is not going to happen.


First off, that is a really faulty assumption. Enterprise laptops make almost nothing for most laptop manufacturers. When I bought Dell laptops at a F500, they were selling us laptops at cost and only making money on support / software contracts. HP/Dell/Lenovo are all making profit margins between 2% and 8%, vs 25% for Apple.

Compared to 10 or 15 years ago, mac laptop adoption is increasing. It feels like most tech companies or software engineers are using macs.

Dell, HP and Lenovo all ship massively more laptops, but make a fraction of the margin that Apple does on their hardware. IIRC Apple made more profit than all 3 combined. And each one of those has software arms of their companies that help their bottom line.


> It feels like most tech companies or software engineers are using macs.

This is a very US-centric perspective. I honestly don't know a single software engineer who uses a mac as their primary machine (except where a mac is necessary or very useful, such as writing apps for macOS or iOS). The typical machine that people who do software development use privately is rather some high-quality ThinkPad model with either GNU/Linux or Windows on it.


In India we get MacBooks or ThinkPads at work, depending on the company. High-volume consultancies give out ThinkPads, probably due to cost, while product companies all use MacBooks as far as I have seen.


I think this is how it is largely across the US as well (not the Valley or NYC). Support at the corporate level for Apple products isn't a serious thing for large corps. They expect to operate at the same margins as consumer companies.

Apple is tolerated because some users demand it, but for the vast majority of mid- and lower-level employees? You're getting a Dell.

I will update this to say I think I underestimated the share Apple has in the enterprise - they are, according to one source, at 25% of enterprise share across the world.


Very global workforce, one of the biggest banks globally, nobody has any Apple device from work, even at its most expensive locations :)

I know, hard to grok for the SV crowd who often worship Apple unlike any other company, but I would fight very hard not to have anything like that.

CPU power alone is almost meaningless to me; bottlenecks are always elsewhere, and Apple brings absolutely nothing on top of cheaper competition with a much better corporate support package in that area.


EU unicorn with 5k+ people, whole organization (including sales, marketing, etc.) is strictly Mac only. MBPs and Airs.


Stripe?


What regions are you talking about?


For software developer desktops, MacOS is about as popular as Linux, and both still trail Windows.

https://www.statista.com/statistics/869211/worldwide-softwar...

https://insights.stackoverflow.com/survey/2021#section-most-...


Anecdotally, over the last 15+ years working in tech, only 2 of my employers out of 7 were windows shops. Over the last 10 years it has been 0.


> It feels like most tech companies or software engineers are using macs.

Unpopular take around here, but here goes: after 15+ years of developing on Windows targeting Linux deployments, ~3 of which were spent on WSL2 as soon as it shipped, I switched to macOS. It sucks for backend development; I'm currently in my second year and I'm only getting more confused as to why it's so popular. I guess it's for everything except actual development.


> I'm only getting more confused as to why it's so popular.

I have a theory that there are a lot of users in places like the public sector and behemoth corporations who get issued with the cheapest laptop on the market, by default.

And because Apple's laptops start at $1500 that means for people in those organisations, if you ask for Windows you get a $500 laptop with a 1080p screen and a 1-hour battery life, but if you ask for a Mac you get a $1500 laptop with a retina screen and an 8 hour battery life.


I know a lot of people use this trick. At the office I am still stuck with a 2016-model Windows machine, but with a low-end spec that is closer to 2014. That is coming up on 10 years of technology progress gone. And with increased awareness of security, every single year they add more security programs, applications, policies, etc. to the machine, which makes it feel slower than a machine with an HDD.

Unfortunately my section is one of those that hasn't been cleared to use a Mac, which means I could never do something like that.


> It feels like most tech companies or software engineers are using macs.

At my last few workplaces we targeted Kubernetes/Linux but used Windows laptops. I don't know how using Macs would have helped us do a better job.


> they were selling us laptops at cost and only making money on support / software contracts.

Those contracts only come when the laptops are sold. Companies aren't giving Dell a contract to support Macs or HPs. So you should count the profit from both as profit of the PC sale. Though I agree Apple has a higher profit margin.


Funnily enough, we paid Dell to support a number of IBM laptops at one place.


I don’t disagree but HP has much higher profit margins than Dell and Lenovo in some segments.


IBM of "IBM PC" fame realised years ago that the TCO for Macs is lower than for Windows PCs, and TCO is a consideration weighed almost exclusively by corporate buyers rather than individuals.


Hardly matters to whom? Intel would surely like computers based on their processors to sell to the same demographic that Apple sells to. That’s where all the profit is.


To 80% of worldwide consumers.


These chips matter to that 80%, just not this year.


Except Apple is the remaining 18%, allowing 2% for the penguin folks.


And a good fraction of those penguin folks - like yours truly - run those penguins on top of apples.


It matters a lot, because those 20% hold more economic power.


Or unless Qualcomm comes up with better chips and Windows on ARM becomes a thing.


80% of worldwide desktop users are business users.

Apple makes consumer electronics.


> Apple makes consumer electronics.

Actually, Apple is being used in the enterprise more and more. It's a developer darling, and since for a good few years now there has been real Office for Mac, combined with very good MDM capabilities, it stopped being just a consumer device quite a while back.


> there's real Office for Mac

which sucks balls, if I may say so, as a self-appointed Office expert


Developers are a very small group among "enterprise" customers. Ever seen a non-software company? Or maybe just look at what the HR or sales people at your company use. More likely it's Windows rather than Mac.


>developers are a very small group among "enterprise" customers

I am a developer, and at my last three workplaces we used Windows laptops even though we targeted Kubernetes/Linux.

Some business people had Macs, though.


Out of curiosity: were you working in startups or enterprise/outsourcing companies in the previous jobs where they gave you Windows laptops?


No startups, no outsourcing, software companies with their own products.


Thanks for the answer.

I've seen this at companies that started in the 90s or earlier. Newer ones are mostly Macs.


> Or maybe just look at what HR or sales people at your company use.

MacBook Air.


> Its a developer darling

While this is undoubtedly true, I still don't comprehend why.

The development workflow on Linux is an order of magnitude better than on OSX. Package management is built into the distro, the directory structure is sane, LUKS is (probably) less likely to be backdoored by the NSA than FileVault, you can use keyboard-centric window managers, tiling window managers, etc., far more of the system is available/configurable via scripting or the terminal, you don't get grey screens of death regularly, it doesn't send tons of telemetry back to the mothership, and you don't have to reset the PRAM and SMC every goddamn week just to get the thing to boot.

OSX is probably only a "good" comparative developer experience if you're forced to use XCode for something. Otherwise, it's a disaster.


> you don't get grey screens of death regularly

Is this a thing that happens to people? I haven't seen one in a decade.

> and you don't have to reset the PRAM and SMC every goddamn week just to get the thing to boot.

The last time I did that was maybe 5 years ago.


> The development workflow on linux is an order of magnitude better than on OSX.

I guess your workflow doesn't include getting Linux to work.

As in, if you run into an issue, you're somewhat on your own because of your unique Linux setup. Except maybe if you're on Ubuntu or something.

On a Mac, you can more easily Google for people with similar environment settings who failed to build dependencies in the same way.


> As in, if you run into an issue, you’re somewhat on your own because of your unique Linux setup. Maybe except if you’re on Ubuntu or something.

But uhh, I do just use Ubuntu? I haven't even changed the wallpaper. Everything works, nothing has broken in years.

Surely Linux doesn't have to mean some kind of special snowflake Gentoo setup.


If you are spending time getting it to work, I'm not sure that's a bad thing. It makes you a better developer!


Sysadmining your machine doesn’t make you a better developer.


> While this is undoubtedly true, I still don't comprehend why.

If you want to make iOS apps you need an Apple computer. Then people get used to it, and like most people, they will say they prefer whatever they are used to.


I think you are basing this on your dislike of MacOS. And statements like "LUKS is (probably) less likely to be backdoored by the NSA than FileVault" will not win you any credibility points.

For developers, MacOS brings the UI friendliness of Windows with the full capability of a *NIX machine. Basically you get the best of both worlds, with MDM capabilities on top, which companies actually like and very much need.


>"While this is undoubtedly true, I still don't comprehend why."

If one develops apps for the Apple ecosystem then it is a must-have. If not, I'd say it is a darling of some specific categories of users, not your general developer.

Personally, except for a couple of encounters, I have not needed to develop for Mac, hence I do not have one and could not care less if the brand disappeared tomorrow. You can't really generalize here.


> Package management is built into the distro

Brew can be installed with one command and a couple minutes of waiting time.

> you don't get grey screens of death regularly [...] and you don't have to reset the PRAM and SMC every goddamn week just to get the thing to boot.

I've been using a Mac for a year now and I have no idea what you're talking about. So far it hasn't crashed once or had any other problems except for a few bugs in preinstalled applications.

I'm not saying it's better than Linux, but everything is preconfigured well enough that I don't constantly get annoyed by what comes out of the box. The terminal and browser work pretty much exactly the same on both operating systems. OSX isn't great, but at least it's not Windows, and the hardware is amazing.


For the past ten years I've worked at multiple companies where the engineers were issued Macs. We're talking about four companies and several hundred engineers in total.

And friend, this is nonsense right here:

    you don't get grey screens of death regularly, [...] 
    and you don't have to reset the PRAM and SMC every 
    goddamn week just to get the thing to boot.
I don't know where you've been working or what the maniacs around you have been doing to your Macs, but I guess I worked with the luckiest ~250 Mac owners in the world because that stuff was not a thing for us.


All I care about for development is to have Visual Studio, VS Code and Docker.


I want my machine to just work, not be another programming environment, and Macs give me that.

> you don't get grey screens of death regularly, it doesn't send tons of telemetry back to the mothership, and you don't have to reset the PRAM and SMC every goddamn week just to get the thing to boot.

I’ve been using Macs only for 5 years straight and have no idea what you’re talking about.

Also, shortcuts on Mac just make sense, unlike Windows and Linux.


There’s been a real Office for Mac for multiple decades


Yes, we all remember the Bill Gates hovering over Jobs event, the investment announcement, and the confirmation of Office for OS X. I think what the parent was referring to, though, is that the performance of macOS Office was trash until around Office 2016 or maybe Office 2019 (with multiple earlier hiccups along the way, e.g. the delay on the x86 port).


Being used in != designed for

Industry applications are calling for general purpose computing. Yet Apple only delivers products and services aimed at end consumers.


Tbh, Intel hasn't been designed for anything. It's an organically grown ISA, based on legacy and monopolist market share.


Maybe not being designed for anything is better than being designed for a specific audience.


iPhones, iPads, and Macs are general purpose computers, by any commonly accepted definition.

> Apple only delivers products and services aimed at end consumers.

That is categorically untrue

e.g https://www.apple.com/business/


Can you list some of the "general purpose computing" tasks used by "businesses" for which Macs aren't suitable?

I mean, they're certainly not the price leaders, but you seem to be saying that Macs are downright unsuited to things "businesses" do.


Imagine you're a car manufacturer. You are looking for a fast single-board computer with a fast GPU for your FSD capabilities (say). The Mac Mini might fit the bill. How are you going to do this without rebranding everything about the Mini? You want the car to start up without the Apple logo on the screen, and without the Apple startup chime. Nor do you want system update notifications. Basically you want to remove the entire OS.

Replace car manufacturer by medical equipment manufacturer, etc.


None of what you're describing is "general-purpose computing".


It is general purpose computing from the perspective that a client company can build anything they want with it. Which is not true for Apple products. They are end user products aimed at the end consumer.


You are very wrong about this statement. You can't take a Dell/HP/ThinkPad laptop and transform it into a car multimedia device.

You are confusing computer parts with devices. Yes, it is true Apple won't sell you a CPU to put it on the motherboard of your choosing, but it is also true they don't operate in this market.


Right, but increasingly many of those “end users” are large businesses.


Feels like you really moved the goalposts a few times there within the same thread. Bit of a wild ride. First, you were talking about desktop users:

    80% of worldwide desktop users are business users.
Then you moved on to:

    general purpose computing
Which could mean a lot of things, but based on the context -- a thread about the latest chips that Intel has created for the desktop/notebook market -- it sure seemed like we were talking about that.

Now you've focused on a subset of general purpose computing: embedded computing options.

Well, you are correct. Apple, like many computer manufacturers and chip makers, does not compete in the embedded space. It seems like a curious thing to point out, since nobody was talking about it and nobody was under any illusions that Apple competes there, but hey, you're right.

Can I play too? Apple does not compete in the toaster oven market. Apple does not compete in the inertial guidance systems market for cruise missiles. Apple does not manufacture sneakers. Look at me, being correct over here.


You've described a really narrow embedded niche which has nothing to do with general computing.


In a world where most of the applications people care about are SaaS, the OS is almost irrelevant to the user, whether they're a business user or a consumer. There are a few exclusive applications with high user numbers on Windows, Linux, and Macintosh, but the majority are available on all three, and those have far smaller numbers than SaaS applications delivered through the web browser.


Yet MacBooks, iPads, and the like are common managed assets at many large companies.


Still applies: 80% of mobile device users use Android, not Apple CPUs.


And Apple's got 80% of the profits!


And everything is iOS first, lol. Is there anything going for Android except for FOSS scene?


Mean/median income/wealth of an iOS user has gotta be several times that of an android phone user.


It's not 2002 anymore. Apple has about 1/4 of the enterprise desktop market.


Not every country lives on US style salaries.


Enterprise IT spending doesn't come from salaries. And either way, the size of an addressable market is orthogonal to whether anyone targets it or has offerings for it. Market targeting and segmentation are about more than just price.

Apple targets the premium consumer, SMB, and enterprise markets.


>Enterprise IT spending doesn't come from salaries.

But it's directly related. If you save money on salaries, you'll save money on IT equipment, office rent, and perks as well.

I've been working in central Europe for a decade and have never worked at a company offering Macs to its workforce. If you move to the super expensive cities (Munich, London, Stockholm, Amsterdam, etc.), where wages are much higher, you'll start seeing more MacBooks offered by employers, because when you pay a single worker 100k/year and spend millions on your office building, what's another 3k/employee for a laptop? Peanuts. Similarly, it's rare to see a job that pays 2k euros/month but hands you a 3k MacBook.


> Similarly it's rare to see jobs paid 2k Euros/month but get a 3k Macbook from your employer.

My first job (small dev agency) after moving to Amsterdam was all Macs despite peanuts salary.

They regularly offered decommissioned Macs at a big discount, so that brings back some of the cost.


Also depends how much profit they were making. If they made bank by underpaying you and overselling you to wealthy Amsterdam customers, then they can probably afford Macs for everyone. Also depends on the type of work: boring enterprise SW dev is usually all Windows, but frontend and mobile tend to be more Macs.


True, that’s exactly what happened.


    what's another 3k/employee for a laptop
Macs are more expensive and that cost is often not worth it, but $3K?

For mundane business use we're talking about more like $1K (USD) for a Macbook Air versus roughly $700 for a halfway decent Dell.

Or then again... maybe $200 for a Chromebook is a better comparison, for companies that just want to give their employees a way to run web apps and check email.


> $1K (USD) for a Macbook Air

8GB RAM and a 256GB SSD? Even my phone has double that.


An Air with 16GB is $200 extra, so $1299.

Is $200 a good deal for that extra 8GB? Oh hell no. But we're still an extremely long way away from $3K as initially claimed and the idea of "basic" business users needing >8GB is pretty debatable.

This discussion is ridiculous. If somebody is going to claim that Product XYZ is overpriced but obviously has no idea what they're talking about and is objectively off by a factor of greater than 2x -- 130%! -- that's worth pointing out.
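
For what it's worth, here's the arithmetic behind that "2x / 130%" figure as a quick sketch (Swift, purely illustrative; the prices are the ones quoted in this thread, not authoritative):

    import Foundation

    // Sanity check of the "$3K Mac" claim against the price quoted above.
    let claimedPrice = 3000.0   // the "$3K MacBook" figure
    let actualPrice = 1299.0    // MacBook Air with 16GB, as priced above

    let ratio = claimedPrice / actualPrice                          // ~2.31x
    let overstatement = (claimedPrice - actualPrice) / actualPrice  // ~1.31
    print(String(format: "%.2fx, or about %.0f%% too high", ratio, overstatement * 100))
    // Prints roughly: "2.31x, or about 131% too high"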

By the way, looking at Dell's mainline 13" Latitude laptops with 16GB, guess what? They start at $1,065 and most of them are more than that.

https://www.dell.com/en-us/shop/scc/sr/laptops/latitude-lapt...

I do know you can certainly get a new Dell with 16GB for less, or even just buy a carload of old laptops from a local e-cycler for like $100 each and slap some DIMMs in them. I've done it. I mean, I get it. I'm not saying everybody should roll out a fleet of Macs; I'm pointing out that it's not some totally outlandish thing that costs "$3K" per user.


You don't get to become a multi trillion dollar company by offering good value for money on HW upgrades.


>Apple has about 1/4 of the enterprise desktop market.

Anything to back that?



* Apple makes social status projecting consumer electronics


It's true. Nothing says middle class more than toting around the shiny silver status symbol.


I have no idea why you're being downvoted. Most people who buy Apple for their personal use buy it for the status.

It's really no big secret but of course HNers will love to think that technical merit plays a large role in their selection. The average Apple consumer does not care about the underlying hardware.


It is likely being poorly received because it is phrased as if it is suggesting a dichotomy that doesn't exist. Product perception and positioning almost always has multiple dimensions, and it certainly does in the case of choosing a computer. There are many differences between Apple computers and other brands, other than the name. I'll leave enumeration of that list to the reader, but you can choose any one of them, and people may make purchasing decisions based on them.


I think many buy Apple for status; they keep buying Apple because the products generally work better, with less hassle, for regular people who just need to accomplish tasks with their computers.


    Most people who buy Apple for their personal use buy it for the status.
Not even remotely HN worthy.


I’m curious what kind of person would come into my home and think I’m fancy because they saw my old ass MacBook Pro thrown on a bookshelf.

Or someone who saw me getting coffee and was like “woooow, that guy's rich! He has an iPhone!”

Who the hell thinks like that?


Probably more like "that guy's not poor, he has an iPhone"

Then again, there are those people who claim to be repulsed by non-Apple products, which I assume is some kind of social status proxy: https://metro.co.uk/2021/12/26/owning-an-android-is-a-major-...


Unless you’re getting the really low-end android devices, you’re not saving much versus a comparable iPhone even before you account for the longer lifetime, and that’s for something you use many times a day, every day. If it’s a status symbol, it’s one of the most accessible in history - a Starbucks barista can have the same phone as their CEO for a few days’ pay, which certainly isn’t true of their food, shoes, clothing, car, or residence.


Probably just Real Hackers(tm) with a chip on their shoulders.


Maybe Intel should start stepping up their Arc GPU game first.


Sure, many more people want to play games than read hallucinations of 7B models.


Intel's Core Ultra packs new tech, but can it really match Apple's M3's ecosystem and energy efficiency?


I think ecosystem is a big thing to call out here. A decent proportion of why the M* CPUs are so efficient is that the OS is built around them: tasks are scheduled onto the appropriate CPU cores based on developer hints, things like the widgets framework minimise the power spent updating data that doesn't need it, and background updates for other apps are batched rather than each app waking up for network traffic at random. While it's possible to write inefficient software for macOS, if you follow the happy path you'll likely come out the other side with something that respects the user's battery.
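
To make the "developer hints" bit concrete, here's a minimal sketch using Dispatch quality-of-service classes (Swift; the work items and the sleep are just placeholders for a demo, not anyone's actual app code):

    import Dispatch
    import Foundation

    // Low-priority housekeeping: the scheduler may defer or batch this,
    // and on Apple silicon it tends to land on the efficiency cores.
    DispatchQueue.global(qos: .background).async {
        print("refreshing caches")
    }

    // Work the user is actively waiting on: prioritised, and typically
    // routed to the performance cores.
    DispatchQueue.global(qos: .userInitiated).async {
        print("rendering preview")
    }

    // Keep this demo process alive long enough for the async work to run.
    Thread.sleep(forTimeInterval: 1)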


> A decent proportion of why the M* CPUs are so efficient is that the OS is built around them, with tasks being appropriately scheduled to CPU cores based on developer hints

It helps, but it's far from the whole story. Asahi Linux developers have reported exceptionally good battery life on Linux. IIRC they were getting around 7 hours before the GPU driver was implemented, when the machine was continually burning ~60% all-core CPU just to render.

I believe the hardware itself (and the firmware) handles a lot of the power management. Aside from which, Apple's CPU cores do seem to be straight up more power efficient than anybody else's.


>Aside from which, Apple's CPU cores do seem to be straight up more power efficient than anybody else's.

That reads as: TSMC 3nm seems to be more power efficient than any other process.


I might be mistaken, but I believe Apple's first-gen M1 chips (TSMC 5nm) are still more power efficient than AMD's Zen 4 chips (also TSMC 5nm), although the difference isn't huge (~10-20%?).


Does the ARM architecture have much to do with it?


7 hours is still quite a bit below the 20-22 hours you get with macOS, though. It may seem like a lot given how bad battery life is on non-Apple computers, but it's still less than a third of what it could be.


7 hours is incredible if the CPU was really at 60% sustained usage.


Do Mac app devs really do much to reduce power consumption? It doesn't seem like anything special to me, but I haven't actually shipped anything Mac-specific in a long time.

Most of the power on a given day will go to web apps anyway, which ofc the dev doesn't optimize for Mac, and it seems like the Macs do that more efficiently too.


On what do you base these conclusions? There are no numbers here.

As far as the components, Apple Silicon also contains an NPU and GPU. Thus far, there is no reason to believe that Intel has anything significant here aside from chiplets, a technology in use elsewhere already.


Reading the article. I'm not making any hard claims.

I think the M3 is most likely still going to be better, but I want it to have more competition.


"Chiplets" doesn't adequately describe the situation. The packaging tech Intel is using for these new laptop processors is far more advanced than the packaging tech AMD has been using for any of their chiplet-based consumer CPUs.


The packaging tech actually has to be less advanced to be economical.

"That was the first generation. Xilinx worked with TSMC on CoWoS. Their codename was CoWoS. It’s a funny name for TSMC’s silicon interposer. That was a first-generation advanced package technology... I talked to one of their VP. I talked to them many, many times, until one time, I had dinner with [a Qualcomm] VP, and he just very casually told me, he said, you know, “If you want to sell that to me, I would only pay one cent per millimeter square.” One cent per millimeter square. He said, “That’s the only cost I will pay for it.” I said, “How come you didn't tell me earlier?” He said, “You should know that. Why I should tell you? You should know that.” ...Our second generation called InFO meet that criteria and it was sell like a hotcake. So that one word saved my life and the InFO was why Apple was hooked by TSMC."

https://www.computerhistory.org/collections/catalog/10279267...


The packaging AMD uses for their desktop processors is cheaper, and much worse in terms of performance and power efficiency: moving data between chiplets is too expensive to work well for a laptop processor. (That hasn't stopped them from shipping the latest version in a BGA package for use in massive gaming laptops to compete against Intel's desktop-silicon-in-laptop-package parts, but the idle power draw still sucks.)


IMO Intel has over-engineered Core Ultra. The packaging is very advanced but what if that complexity isn't needed? Never send a bunch of chiplets to do a single die's job.


In the short term, this makes them much less dependent on their first EUV process, since that's now only used for a small chiplet of just the CPU cores, while the rest is on a more affordable and reliable TSMC process. That risk reduction may have been critical to being ready to ship this year.

In the long run, I think they also need this kind of packaging to become affordable enough to use across their consumer lineup, not just for the exotic data center parts. Making a big change like this mobile-first and bringing it to desktop in a subsequent generation worked out alright for them with Ice Lake and their 10nm process, so I'm not surprised to see them do it again.


Per guidelines[0]: "Otherwise please use the original title, unless it is misleading or linkbait; don't editorialize."

The title of this should be "Intel launches Core Ultra processors to power up your next laptop with AI"

[0]: https://news.ycombinator.com/newsguidelines.html


That sounds hella editorialized.


fwiw, when I wrote this reply, the post I responded to said the original title was uneditorialized.


FTA:

“AI innovation is poised to raise the digital economy’s impact up to as much as one-third of global gross domestic product,” Intel CEO Pat Gelsinger said in an Intel press release.

What an incredibly strange thing to say. It's neither sober enough to imply that AI won't completely deliver on the hype, nor breathless enough to imply an AI singularity.

Why only up to one third? Anything more sounds "un-CEO-like", I suppose?


You think estimating that AI will be impactful but won't deliver the singularity is incredibly strange? I'm struggling to imagine what a non-strange statement would be to you.


I do think it's very weird that the upper bound is presented with confidence, but the lower bound is not, yes.

I also think that it's weird because the one-third figure itself appears to have materialized from nothing but hype and wishful thinking, so why is it an upper bound? No singularity required -- even just human-level AGI would clearly be more than a one-third GDP impact.


> Bringing the biggest CPU architecture upgrade to the everyday laptop
>
> Intel's new Core Ultra processors introduce three new major architecture features to its laptop portfolio.

Phew, I was racking my brain trying to remember when the M3 processors became better than Intel's. On laptops.



