This is quite concerning honestly. I don't mind ARM being acquired, and I don't mind Nvidia acquiring things. But I'm concerned about this combination.
Nvidia is a pretty hostile company to others in the market. They have a track record of vigorously pushing their market dominance and their own way of doing things. They view making custom designs as beneath them. Their custom console GPU designs - in the original Xbox and in the PlayStation 3 - were considered failures largely because of terrible cooperation from Nvidia [0]. Apple is probably more demanding than other PC builders and has completely fallen out with them. Nvidia has famously failed to cooperate with the Linux community on the standardized graphics stack supported by Intel and AMD and keeps pushing proprietary stuff. There are more examples.
It's hard not to make "hostile" too much of a value judgement. Nvidia has been an extremely successful company partly because of it. It's alright if working well with others is not in their corporate culture. Clearly it's working, and Nvidia, for all their faults, is still innovating.
But this culture won't fly if your core business is developing chip designs for others. It's also a problem if you are the gatekeeper of a CPU instruction set that a metric ton of other infrastructure increasingly depends on. I really, really hope ARM's current business will be allowed to run independently, as ARM knows how to do this and Nvidia has shown time and time again that it does not understand it at all. But I'm pessimistic about that. I'm afraid Nvidia will gut ARM the company, the ARM architectures, and the ARM instruction set in the long run.
[0]: An interesting counterpoint would be the Nintendo Switch running on Nvidia Tegra hardware, but all the evidence points to this chip being a 100% vanilla Nvidia Tegra X1 that Nvidia was already selling themselves (to the point that its bootloader could be unlocked like a standard Tegra's, leading to the Switch Fusee-Gelee exploit).
You are not wrong, but the facts you have cherry-picked fail to portray the whole picture.
For example, you paint it as if Nvidia is the only company Apple has had problems with, yet Apple has parted ways with Intel, IBM (Power PCs), and many other companies in the past.
The claim that Nintendo is the only company nvidia successfully collaborates with is just wrong:
- nvidia manufactures GPU chips, collaborates with dozens of OEMs to ship graphics cards
- nvidia collaborates with IBM which ships Power8,9,10 processors all with nvidia technology
- nvidia collaborates with OS vendors like microsoft very successfully
- nvidia collaborated with mellanox successfully and acquired it
- nvidia collaborates with ARM today...
The claim that nvidia is bad at open source because it does not open source its Linux driver is also quite wrong, since NVIDIA contributes many, many hours of paid developer time to open source, has many open source products, donates money to many open source organizations, and contributes paid manpower to many of them as well...
I mean, this is not nvidia specific.
You can take any big company, e.g. Apple, and paint a horrible picture by cherry-picking things (no Vulkan support on macOS forcing everyone to use Metal, not open sourcing their C++ toolchain, etc.), yet Apple does many good things too (open sourced parts of their toolchain like LLVM, open sourced Swift, etc.).
I mean, you even try to paint this as if Nvidia is the only company that Apple has parted ways with, yet Apple has a long track record of parting ways with other companies (IBM's PowerPC processors, Intel, ...). I'm pretty sure that the moment Apple is able to produce a competitive GFX card, they will part ways with AMD as well.
> The claim that nvidia is bad at open source because it does not open source its Linux driver is also quite wrong [...]
Hey! Wait a second, there. Nvidia isn't bad because it has a proprietary Linux driver. Nvidia is bad because it actively undermines open-source.
Quoting Linus Torvalds (2012) [0]:
> I'm also happy to very publicly point out that Nvidia has been one of the worst trouble spots we've had with hardware manufacturers, and that is really sad because then Nvidia tries to sell chips - a lot of chips - into the Android Market. Nvidia has been the single worst company we've ever dealt with.
> [Lifts middle finger] So Nvidia, fuck you.
Nvidia managed to push some PR blurbs about how it was improving the open-source driver in 2014, but six years later, Nouveau is still crap compared to their proprietary driver [1].
Drew DeVault, on Nvidia support in Sway [2]:
> Nvidia, on the other hand, have been fucking assholes and have treated Linux like utter shit for our entire relationship. About a year ago they announced “Wayland support” for their proprietary driver. This included KMS and DRM support (years late, I might add), but not GBM support. They shipped something called EGLStreams instead, a concept that had been discussed and shot down by the Linux graphics development community before. They did this because it makes it easier for them to keep their driver proprietary without having to work with Linux developers on it. Without GBM, Nvidia does not support Wayland, and they were real pricks for making some announcement like they actually did.
In recent years, Linux computers have become a major revenue source for Nvidia thanks to deep learning. However, it's not desktop users behind this but servers, due to Nvidia's proprietary CUDA API. If they open sourced it or rebuilt it on top of Mesa, it would be easier for AMD to implement CUDA and get access to the deep learning ecosystem that's currently locked into CUDA, and Nvidia's sales would take a huge hit. So I think it's even more likely that their drivers remain proprietary.
I don't have so much of a problem with CUDA staying closed, but rather Nvidia sabotaging Nouveau through signed firmware which they don't release (and obfuscate in their blob). Nouveau would probably be decent by now (not as fast or feature-complete, but usable for real workloads on newer cards) if it weren't for the fact that Nvidia has added features which have the direct effect of making a competitive open source driver impossible.
Maybe something will change on this soon. There was speculation about this: https://www.phoronix.com/scan.php?page=news_item&px=NVIDIA-O.... But I'm not holding my breath, and it would be nice if the solution wasn't "wait and hope until Nvidia releases the software necessary to control their GPUs".
> I don't have so much of a problem with CUDA staying closed, but rather Nvidia sabotaging Nouveau through signed firmware which they don't release (and obfuscate in their blob)
Do you have more info on this? There is a big difference between not supporting open source and actively sabotaging it. What are they doing, exactly?
Look up "nouveau signed firmware". Phoronix has a bunch of articles on it. The Nouveau developers also talk about it at FOSDEM 2018 (and probably a later conference). This comment is a good intro: https://www.phoronix.com/forums/forum/linux-graphics-x-org-d...
TL;DR: starting with the 9xx series, Nvidia started making it so their GPUs would only run firmware signed by them (likely to prevent counterfeits, e.g. 2060s sold as 2080s). Without that firmware it is impossible to control the fans or reclock the GPU. There's no workaround, so even as the person who owns the device, I can't run my own firmware. AMD has signed firmware too, but they actually release sufficient blobs to fully run the device.
That is because AMD split graphics and compute, and you are looking at a non-compute card.
The Vega-based chips work quite well, and the Radeon VII is (or was, I'm afraid) an excellent value proposition.
So this is well outside my area of expertise, but that seems weird. I want AMD to shepherd the ecosystem to a point where I can run PyTorch with some support from a graphics card. Supporting some graphics cards and not others doesn't sound very promising.
It is amazing to watch how much of a struggle AMD is having with getting PyTorch to work with ROCm. It makes me appreciate what a good job Nvidia must have done with CUDA.
AMD: some open source drivers, but many things are inoperative
Nvidia: all closed source, but everything works
Source: I have a Vega 56 and I am gonna give it the sledgehammer when the 3000 series arrives. Fuck that shit to high heavens, my dude. ROCm is buggy as fuck. Only the bare basics work on Linux. Drivers are buggy and crash all the time. Even the most basic bullshit is not implemented right. Like, there is no fan control. That's how bad it is.
PyTorch supports ROCm quite well if you can follow the three-step build instructions (https://lernapparat.de/pytorch-rocm/).
TVM.ai also supports ROCm well.
TF I didn't check.
FP32 performance for the VII is comparable to an RTX 2080 Ti, e.g. on ResNet training. Tensor Cores are cool, but 90% of people don't use FP16 that much.
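To make "supports ROCm quite well" a bit more concrete, this is roughly the sanity check I'd run (a minimal sketch only; it assumes a ROCm build of PyTorch installed per the instructions above, and the same code runs unchanged on a CUDA box because the ROCm wheels reuse the "cuda" device name):

    import torch

    print(torch.__version__)
    print(torch.cuda.is_available())          # True on a working ROCm (or CUDA) install
    if torch.cuda.is_available():
        print(torch.cuda.get_device_name(0))  # e.g. a Radeon VII or an RTX card
        x = torch.randn(4096, 4096, device="cuda")
        y = torch.randn(4096, 4096, device="cuda")
        z = x @ y                             # plain FP32 matmul, the common case
        # FP16 via autocast is what Tensor Cores accelerate on the Nvidia side;
        # whether it buys you much on a given AMD card is another question.
        with torch.cuda.amp.autocast():
            z16 = x @ y
        torch.cuda.synchronize()
        print(z.dtype, z16.dtype)             # torch.float32 torch.float16

If that runs and reports your card, ordinary training code generally works as-is.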
I'm not sure that is the best reply that stackoverflow has ever seen.
I can only speak for myself, but my understanding is that the Navi cards are neither intended nor advertised for compute.
Now, one might wish to get a consumer card based on Arcturus after the Instinct MI100 is released, or dislike that the Radeon VII is discontinued, or even that the lineup wasn't split into render and compute lines. But basically, to me the complaint about Navi not being supported by ROCm sounds a bit like buying a boat and complaining that it can't fly.
Driver support was bumpy for me with Linux 5.0 or so, but I haven't had trouble with just running whatever kernel Debian ships for quite a while now. But I must admit I don't use the GPU for anything but compute, so I don't know about graphics.
But your mileage may vary; I don't want to make people swear.
A GPU is a GPU; it's all just programs running on the shaders.
If AMD is unwilling to properly support its consumer GPUs in GPGPU workloads and NVIDIA is, then well... sounds like a good reason not to buy AMD.
It's really not a major ask; people have been running GPGPU programs on consumer cards since forever, even though they're "consumer" and sold as graphics cards and not compute accelerators. NVIDIA basically does this on purpose: people use their personal hardware to get a foot in the door of the ecosystem and end up writing programs that get run on the big compute GPUs NVIDIA makes big profits on. It's very intentional.
If AMD chooses not to do that, well... can't really blame people for avoiding their stuff when NVIDIA is willing to let you do this stuff on their cards and AMD isn't.
This divestiture happened long ago: if you're running Linux anywhere other than a server, then Intel and AMD are the only players making chips that work right out of the box.
Even there, though, only Intel has a buttery smooth experience. Ryzen for laptops is half-baked (terrible USB-C docking performance, occasional crashing on Windows and Linux with the 2xxxU series CPU/GPU chips), and AMD GPUs still require manual intervention to load proprietary firmware.
AMD does make some performant mobile GPUs though; they work well in Debian!
How so? Aren't you still trapped on Xorg with their proprietary drivers?
I switched to Wayland a few years back as vsync is quite nice to have, but whenever I go back to Xorg for AnyDesk or TeamViewer on AMD or Intel there is a fair bit of tearing.
Nvidia could have a competitive open source driver tomorrow if they released redistributable firmware that allowed reclocking the GPU.
> How so? Aren't you still trapped on Xorg with their proprietary drivers?
As a FreeBSD user I'm still "trapped" on Xorg anyway, and in any case I'd rather stick with what works than learn some whole new way of doing things for marginal benefits.
How do you go back to Xorg? Logout/login and/or reboot, or is there a better way? I'm asking because I often need screen sharing to work with my customers (demos, checking problems, etc.) and I'm sticking with Xorg because Wayland doesn't do screen sharing.
You don't even have to go to open source. You can see this hostile behaviour from their top-paying clients!
Microsoft's own previous-gen Xbox emulator on the next-gen Xbox (I think it was the original Xbox emulated on the 360, but I might be wrong) was impacted by the team having to reverse-engineer the GPU, because Nvidia refused to let the emulator people have access to the documentation provided to the original team.
Sadly I have to agree. Ryzen 3400G here; getting hardware transcoding on the iGPU is something I still haven't sorted out. There have been several recent issues in the kernel, in the AGESA firmware (I suspect there might be newer versions with potential fixes that my mobo manufacturer hasn't released yet; this is 1.0.0.4 Patch B), and in the drivers.
I've had several rounds of hunting down and compiling various versions of driver packages, modules and kernels from source, trying third-party PPAs, to no avail. The amdgpu/amdgpupro mess adds another layer of confusion.
I am not sure if I am missing some update, need to set some undocumented kernel flag and/or BIOS setting, if it's a software issue, or if I just made a mistake somewhere. Debian 10/11.
Meanwhile, as much as I wanted to get away from Intel, their drivers have never posed any issue at all.
I believe AGESA 1.0.0.4 Patch B broke something for the APUs. You should try either upgrading or downgrading your BIOS; what worked for me was downgrading to AGESA 1.0.0.3 ABB. Both Windows and Linux have stopped crashing now, although I still get the occasional lockup when browsing with Firefox on Linux. I found the culprit after stumbling into this thread:
https://old.reddit.com/r/AMDHelp/comments/gj9kpz/bsod_new_pc...
[citation needed]
My experience directly conflicts with this and IIUC most GNU/Linux users have exactly the opposite impression. Maybe you're thinking of some past situation?
I would say that's a much more prevalent attitude in the Windows and Mac worlds. Linux tries to keep compatibility with really old software. It was only 4 years ago that major distros started to require at least an i686, aka the Pentium Pro, released in November 1995!
But at some point you have to consider if it's really worth keeping a 10 year old laptop around. It's painful to say goodbye to them, I know, I have been there, but for me it's just not worth it.
Asus sold the laptop with Windows 7 support as well, the drivers kept being updated up to Windows 8.1, and thanks to the Windows driver ABI, those drivers work perfectly fine in Windows 10.
No need to throw away a perfectly working laptop to enjoy the DirectX 11 and OpenGL 4.1 capabilities that it was sold for.
No, I am thinking of the situation where the open source driver didn't have OpenCL support and the AMD driver (fglrx) didn't compile, or required dependencies so old that they were dropped by the packagers for Arch Linux, all this for a few years until they came out with something that actually works, when I've never ever had an issue with Nvidia. Also, AMD never figured out how to fix screen tearing, or at least not in a sane way that doesn't involve trial-and-error editing of xorg.conf.
Even on Windows, AMD drivers are the most unstable, buggy software that's ever been shipped. It's been a long-standing joke that AMD "has no drivers".
Take any AMD 5000 series card. I had a top-of-the-line 5700.
Still no driver for compute a year later. I'm so happy I decided to return it and switch to Intel instead of waiting for AMD, or some random Joe in their free time, to add support for it to the open source driver.
So yeah. I'd take a working proprietary driver over no driver any day.
I have recently used Radeon 550, 560, 570, and 5500 cards on an AMD 5050e (yes, that old!), Ryzen 1600 (non-AF), 3100, and 3600, and all have worked fine on Ubuntu 16.04, 18.04, and 20.04. In fact, on average I have found the various hardware configurations to be about 5% faster on Linux than Windows.
It's anecdotal of course, but my RX560 has been absolutely flawless on both Ubuntu and openSUSE, literally out of the box support on a standard install.
Is this an ad hominem? Linus does not mention a single thing there that they are actually doing wrong.
> Drew DeVault, on Nvidia support in Sway [2]:
Nvidia has added Wayland support to both KDE and GNOME. Drew just does not want to support the nvidia way in wlroots, which is a super super niche WM toolkit whose "major" user is sway, another super super niche WM.
Drew is angry for two reasons. First, sway users complain to them that sway does not work with nvidia hardware, which, as a user of a WM, is a rightful thing to complain about. Second, Drew does not want to support the nvidia way, and is angry at nvidia because they do not support the way that wlroots has chosen.
It is 100% ok for Drew to say that they don't want to maintain two code paths, and for wlroots and sway not to support nvidia. It is also 100% ok for nvidia to consider wlroots too niche to be worth the effort.
What's IMO not ok is for Drew to feel entitled to getting nvidia to support wlroots. Nvidia does not owe wlroots anything.
---
IMO when it comes to drivers and open-source, a lot of the anger and conflict seems to stem from a sentiment of entitlement.
I read online comments _every day_ from people who have bought some hardware that's advertised as "does not support Linux" (or macOS, or whatever) being angry at the hardware manufacturer (why doesn't your hardware support the platform it says it does not support? I'm entitled to support!!!), at the dozens of volunteers who reverse engineer and develop open source drivers for free (why doesn't the open source driver that you develop in your free time work correctly? I'm entitled to you working for free for me so that I can watch netflix!), etc. etc. etc.
The truth of the matter is that for people using nvidia hardware on Linux for machine learning, CAD, rendering, visualization, games, etc., the hardware works just fine if they use the only driver nvidia supports, on the platforms nvidia says it supports.
The only complaints I hear are from people buying nvidia to do something they know is not supported and then lashing out at everybody else out of entitlement.
What exactly are you invested in in this discussion? We were originally discussing how Nvidia's business practices don't match ARM's business practices. But you seem to just want to take on people's personal views on Nvidia now.
You're now somehow arguing with people that they should stop complaining about Nvidia's business practices. I would agree with that in the sense that Nvidia can do whatever they want: nobody is obliged to buy Nvidia, and Nvidia is not obliged to cater to everyone's needs. It's a free enough market. But even if you don't agree with some or most of the complaints, surely you must agree that Nvidia's track record of pissing off both other companies and people is problematic when they take control of a company with an ecosystem-driven business model like ARM's?
> I would agree with that in the sense that Nvidia can do whatever they want: nobody is obliged to buy Nvidia, and Nvidia is not obliged to cater to everyone's needs. It's a free enough market.
I'd agree with you that this is OP's argument; however, its main flaw is in explicitly omitting the fact that NVidia is not the only party that's "free" to do things.
We're not obliged to buy their cards and we aren't obliged to stay silent regarding its treatment of the open-source community and why we think it would be bad for them to acquire ARM.
I am always amazed at the amount of pro-corporate spin from (presumably) regular people who are little more than occasional customers.
> We were originally discussing how Nvidia's business practices don't match ARM's business practices.
We still are. I asked "which specific business practices are these?", and was only pointed to ad hominems, entitlement, and one-sided arguments.
Feel free to continue discussing that in the other parent thread. I'm interested in multiple views on this.
> You're now somehow arguing with people that they should stop complaining about Nvidia's business practices
No. I couldn't care less about nvidia, but when somebody acts like an entitled choosing beggar, I point that out. And there is a lot of entitlement in the arguments that people are making about why nvidia is bad at working with others.
Nvidia has some of the best drivers for Linux there are. This driver is not open source and is distributed as a binary blob. Nvidia is very clear that this is the only driver that they support on Linux, and if you are not fine with that, they are fine with you not buying their products. This driver supports all of their products very well (as opposed to AMD's, for example), it is developed by people paid full time to do it (as opposed to most of their competitors, which also rely on people helping with their drivers in their free time; this is not necessarily bad, but it is what it is), and some of their developments are contributed back to open source, for free.
People are angry about this. Why? The only thing that comes to mind is entitlement. Somebody wants to use an nvidia card on Linux without using their proprietary driver. They know this is not supported. Yet they buy the card anyway, and then they complain. They do not only complain about nvidia. They also complain about, e.g., nouveau being bad, the Linux kernel being bad, and many other things. As if nvidia, or the people working on nouveau or the Linux kernel for free in their free time, owed them anything.
I respect people not wanting to use closed source software. Don't use Windows, don't use macOS, use alternatives. Want to use Linux? Don't use nvidia if you don't want to.
No, thank you. Can we do without the Reddit attitude here please? The choosing beggars, the needless quoting of debate fallacies in what is not a debate?
If you ask people for their impression of Nvidia's business practices and they give you their opinion, you can't somehow invalidate that opinion by retorting with debate fallacies. That's the "fallacy fallacy", if you're sensitive to that. This is not a debate competition about who's right; this is people giving their opinions based on Nvidia's past and current actions. You asked a question, and they answered. This is not a competition. Please give them the basic respect of acknowledging their opinion.
We are discussing a topic, and people threw out multiple arguments that do not make sense.
You are claiming that I should just shut up and respect their feelings, but that is worthless.
Two examples:
---
Somebody's argument was: "Linus doesn't like them, therefore I don't like them".
The reason these are called logical fallacies is that the arguments are illogical. I told them that this was a logical fallacy (argument from authority: just because someone with authority makes an argument does not mean they are right), and asked them _why_, what is it that Linus and you do not like?
I am happy I did that, because many of them raised multiple actually valuable arguments in response. For example, that nvidia's hardware throttles down if the firmware is not signed, and this makes the open source drivers slower for no good reason.
That's a valid and valuable argument. "Linus doesn't like them" is worthless.
The person who raised this argument learned something from somebody else who knew what Linus did not like, and so did I.
---
The same happened when I called out the entitled choosing beggars. "Why are you angry at nvidia for not providing an open source driver? You knew before buying their product that only the binary driver was supported."
Read the responses. The reason they are angry is that they have no choice but to use nvidia, because the competing products (AMD in those cases) are much worse. AMD does have open source drivers, but they are crap, and they don't support many of AMD's products, at least for compute, which is something that many (including myself) use for work.
These people have picked a platform that values open source code, but because their job requires them to actually get some work done, they must use nvidia, and they don't like having to compromise on a proprietary driver.
Honestly, I think this is still entitlement, but I definitely sympathize with the frustration of having to make compromises one does not like.
---
From the point of view of whether nvidia buying ARM is good or bad: I still have no idea. ARM does _a lot_ of open source work; its major markets are the Android and Linux communities.
I understand that people are afraid that Nvidia will turn ARM into a bad open source player. It can happen. But without Android, iOS, and Linux, ARM is worthless. So a different outcome of this could be that NVIDIA buying ARM ends up making NVIDIA more open source friendly, since at least the Linux market is important for nvidia as well (~50% of their revenue).
It definitely makes sense for regulatory authorities to only allow somebody to buy ARM who will preserve ARM's deals with current vendors (Apple, Google, Samsung, etc.), and who will also preserve ARM's open platform efforts.
If nvidia does not agree to that, they should not be allowed to buy ARM.
Speaking as a Linux graphics developer, I can confirm that NVidia indeed is a pretty terrible actor. There could be a viable Open Source driver for most of their GPUs tomorrow if they changed some licensing; NVidia knows this.
A NVidia purchase of ARM would also create a lot of conflicts of interest.
How many ARM chips will the average consumer buy in their life? How much profit will ARM make on each of those chips? How many Tiktok videos will the average consumer watch in their life? How much profit will Tiktok make on each of those views?
ARM doesn't manufacture chips, they license ARM Processor 'designs'.
ARM is more of a household name for their use in mobile phones but that is just the tip of the iceberg.
I think you underestimate how many ARM chips you have in a single car or delivery truck.
Add to that:
- farming machinery
- construction machinery
- factory line automation
- elevators, escalators
- EV chargers
- fridges, washing machines, ovens
- medical devices such as drug-infusion pumps, ventilators, surgical machinery, etc.
- Auxiliary modules in aeroplanes and shipping containers.
- Infrastructure for Road, Rail, Power Grid with ARM processors running headless embedded systems
- Anything the size of a pebble with Bluetooth connectivity uses Nordic's nRF chips (which are, yet again, ARM chips)
ARM processors are hiding in plain sight in the world all around you.
I understand the point you make, from a business standpoint, about how TikTok might scale. The thing that is bizarre to me, and where I agree with the parent's sentiment, is how disconnected the valuation is from the real-world impact and objective _usefulness_ of ARM versus TikTok.
> The thing that is bizarre to me, and where I agree with the parent's sentiment, is how disconnected the valuation is from the real-world impact and objective _usefulness_ of ARM versus TikTok.
Also how easily the technology behind Tiktok can be duplicated compared to ARM.
Anyway, it's probably all about the network effect and brand value.
Per-video profit isn't really meaningful; these kinds of companies report revenue (and thus profit) based on the number of active users.
Consider that in 2017, 21 billion ARM chips were manufactured (doubled in 4 years, from 10 billion in 2013), and that ARM's licensing fees are over 2% of chip cost for current high-end designs (and they're talking about raising that even more). They have 95%+ of the mobile phone market, are making inroads in the server market, and will soon be in every Apple laptop, which I expect will grow the market for ARM laptops even outside Apple. It wouldn't be a stretch to find them in common desktop computers after that. They're in smart TVs, washing machines, robot vacuum cleaners, and all other kinds of smart (and non-smart) appliances. Even SSDs have their own embedded ARM chip. And that's just home/consumer stuff; haven't even scratched industrial/commercial applications, of which there are a ton.
I could easily see yearly production at over 100 billion chips before 2030, probably even before 2025. While I'd love to see something like RISC-V take off commercially, I don't think that's realistic.
Meanwhile, social media users are incredibly fickle; platforms are subject to fad and fashion. Certainly Facebook and Instagram are still huge behemoths, but their growth is nothing like it once was, with people -- especially younger people, trying to distinguish themselves from their older, boring relatives -- flocking to TikTok. I fully expect TikTok will be in Instagram's boat in under 10 years, with some other platform taking its place.
ARM seems like an amazingly great short-, medium-, and long-term bet, while TikTok feels like a nice short- (maybe medium-, if they're lucky) term money-maker, and even that feels like a big maybe: I have no idea what their ad revenue per user looks like, but it's probably not great since their audience skews younger. Teenagers and college kids don't have much in the way of discretionary income. Then again, TikTok doesn't pay their creators like e.g. YouTube does, so they get to keep all that ad revenue.
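A rough back-of-the-envelope using the figures above (the average chip price is purely an assumption for illustration, not a reported figure):

    chips_per_year = 21e9     # ~21 billion Arm-based chips shipped in 2017 (see above)
    avg_chip_price = 5.00     # assumed blended average selling price in USD; most of the
                              # volume is cheap microcontrollers, not flagship phone SoCs
    royalty_rate = 0.02       # "over 2% of chip cost" for high-end designs, per above;
                              # applying it across the board is deliberately generous

    royalties = chips_per_year * avg_chip_price * royalty_rate
    print(f"~${royalties / 1e9:.1f}B per year in royalties")   # ~$2.1B with these inputs

Even with generous inputs it's a steady royalty stream tied to physical shipments, which is the point of the comparison: it's much harder to see that kind of durability in per-view ad revenue.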
21 billion is 3 per person per year for everyone in the world, including rural goat herders etc. who don’t consume very many. I don’t doubt the number but it’s still wild.
TikTok is doubling every year, and is getting around one billion views per day. I'd imagine that TikTok's growth over the next five years is going to be both faster and larger than ARM's. ARM makes around the same per chip (10 cents) that YouTube is making per view.
Now, certainly TikTok might not be sustainable and might disappear off the face of the earth tomorrow. Or it might become a juggernaut that overtakes Facebook.
If you don’t see RISC-V taking off commercially, then why is ARM trying to sell? I ask because my understanding was that they were trying to exit because of RISC-V.
I think it's just trying to sell because Softbank is a venture capital company; they only care about short-term profits.
PS I applaud RISC-V but it won't take over the market for a long time, and it wouldn't drive ARM out completely, I'm sure. Intel's had many competitors and they're doing just fine (even despite screwing up repeatedly with their processes!)
Look at all the failed attempts to move away from x86(/64). Even Intel tried it with Itanium and failed; HP has to pay them to keep making it so they can fulfill their server contracts. I'm sure ARM has a similar hold on the mobile market.
> Intel's had many competitors and they're doing just fine (even despite screwing up repeatedly with their processes!)
With AWS offering ARM systems, all the Chromebooks, Apple, the complete loss of the phone market, Intel’s staying power is about to be tested to the extreme.
Most of those don't matter and didn't matter anyway: Chromebooks and Apple desktops are a tiny portion of a small market; Apple's volume chips are iPhones. Intel never really had a grip on any kind of mobile handset market and hasn't for years, despite that they still post record profits. That's because the margins on handsets are very slim, unless you're Apple.
The only actual major change here is AWS offering Graviton, which actually hints at their real cash cow: datacenter SKUs with absurd markup. Something like 80% of their profit margins are here. More accurately, the change is that there are now viable silicon competitors to Intel in the performance department. So it's now clear that ultra-integrated hyperscalers who can actually afford tape out costs (7nm CPUs are not cheap to produce in volume) have an option to vertically integrate with e.g. Neoverse. Smaller players will not do this still, because alternative options like Rome will be adequate. But the only reason any of them are changing anything is cost savings, because now there are actual viable competitors to Intel when there were zero of them for like, 15 years. Producing cutting edge silicon products isn't easy, but it's very profitable, it turns out.
To be clear, Intel isn't charging $10,000 for a Xeon Platinum because it costs $9500 to make and they make $500 in profit. (Likewise, AMD doesn't produce competitors at 1/5th the price because they made a revolutionary, scientific breakthrough in processor design.) They're charging what you'll pay, not what it takes to produce. Seeing as they currently still have a complete stranglehold on the datacenter industry and make more in a quarter than most of their competitors do in several years, I suspect they've got much more "staying power" than the watercooler chat on this website would lead you to believe.
The Softbank guy invested a ton of money in WeWork. He tried to sell WeWork for $60 billion, but before that happened WeWork's valuation dropped to $2-5 billion (a huge loss). That was in 2019. Afterwards, Softbank invested another $10 billion to try to save it. WeWork owns, and also pays rent on, hundreds of office buildings in the most expensive zones of all the major capitals in the world. COVID in 2020 means these super expensive offices are now empty, since WeWork customers pay a premium to be able to cancel their leases in under a week. So essentially, WeWork is broke, worth 0, and Softbank has lost dozens of billions on it.
On top of that, Softbank owns a huge chunk of Uber, which is also worth close to zero now that people are not travelling due to COVID...
So... yeah... Softbank is selling ARM because they must. They are super broke, and investors are going to pull the money that remains out. Selling ARM and giving investors a tiny benefit so that they keep their money is better than them taking a huge loss this year.
MySpace is dead, dead. You'll still find devices, especially in manufacturing, that are still running on a MIPS chip and are essential to the process. You'll still find them in various cheap toys on the shelves.
MIPS _should_ be dead; half the manufacturers of the chip have stopped. But Imagination Technologies still sells a considerable number to Apple every year.
That article makes me wonder if BeOS would have survived if they had switched from Hobbit to ARM chips and stuck with selling hardware+software, and later on added x86/x64/PPC/POWER. A tight integration of hw+sw has worked for Apple.
I loved BeOS, but there was an even more fundamental problem that put a limit on its days: a failure to anticipate the coming need for home computers to become more secure. At the same time that Microsoft and Apple were both working frantically to ditch their old single-user desktop operating systems and replace them with, in effect, spruced-up versions of existing server/workstation operating systems, Be was trying to launch a brand new OS on the dying model. Had they survived even a couple of years longer, they would have had to reckon with that, and they simply didn't have the resources to navigate such a fundamental transition.
By this logic, Tic Tac should be worth more than all car manufacturers combined, since there are more Tic Tacs in a $1 pack than there are cars I will ever buy in my entire life. And I'm sure their margin is much higher, percentage-wise, at least compared to most car manufacturers.
Let's be real here: TikTok is a big social network, yes, but ARM owns most of the embedded market. Every smartphone your average consumer buys runs an ARM chip. And hardware is harder to replace than software.
Maybe ARM should copy the business model of more valuable companies like TikTok by harvesting and selling its users' data. Surely the data flowing through an SoC is worth something.
It is quite simple really. You seem to be under the mistaken impression that valuations in tech are based in any way on logic. They're not. They're completely hype-driven.
That's how this year's myspace, which people will have trouble remembering 5 years from now, can get a higher "valuation" than a large semiconductor company with a 30 year track record.
Investors and the financial sector are proving time and time again that they're unable to learn from their mistakes, through no "fault" of their own, because apparently it's human nature to just be horribly bad at this.
It amazes me that people think investors somehow learned anything from the dot-com bubble, given they've been repeating all of their other major mistakes every odd year or so.
I am starting to see a lot of these similar comments around the internet.
I think it is because people are now so used to Apple's and Amazon's trillion-dollar valuations, with Apple closing in on 2 trillion, that $32B seems low or (relatively) cheap.
The reality is that ARM was quite overvalued when it was purchased by Softbank.
We don't really live in that world anymore: what is value investing when the Federal Reserve sets the price? More and more, "intangibles" like brand value are becoming more important on balance sheets than investors want to admit.
For me, Tesla's stock value is where it was confirmed that stock valuations have diverged from any data points. I understand there is potentially a huge upside there, but no real data points would justify anything close to the current valuation.
Especially in comparison to 9 months ago, the fundamentals did not really change (competitive landscape, etc.). There was the Cybertruck announcement, though.
> There could be a viable Open Source driver for most of their GPUs tomorrow if they changed some licensing
I always just assumed that interfacing proprietary IP with the GPL is a tricky legal business. One slip, and all your IP becomes open source.
Do you have a source explaining what licensing changes they would have to make and what impact that would have for Linux and Nvidia? I'd like to read that.
Nvidia has a very different model for what they're trying to get out of their drivers. They spent something like 5x more than AMD on driver developers, and would then send engineers to work with AAA game studios to "optimize their games" for Nvidia. Good so far. But then these optimizations went so far as fixing (in the driver) broken game code. Apparently it was so bad that games were being shipped without even issuing BeginScene/EndScene on DirectX.
Hence AMD's push for Mantle and then Vulkan. The console-like API is the carrot to get people to use an API that has a validation layer, so that third parties can easily say "wow, what a broken game" rather than "wow, this new game runs on Nvidia and not AMD, what broken AMD drivers".
Nvidia open sourcing their drivers would destroy a large chunk of their competitive advantage, and the drivers are so intertwined with the IP of the games they have hacks for that I'd be surprised if they would ever want to open source them, or even could if they wanted to.
The problem was never opening the existing driver.
It was:
- all kinds of problems wrt. the integration of the driver into the Linux ecosystem, including the proprietary driver having quality issues for anything but headless CUDA.
- nvidia getting in the way of the implementation of an open source alternative to their driver
But most of these games don't even exist on Linux, so they wouldn't have to fix all that stuff. As a Linux user I'd gladly do without that bloat anyway (which also explains why a "driver" has to be 500MB, lol).
AMD supported the Radeon and AMDGPU open drivers rather than releasing their own; there is no reason Nvidia can't provide documentation and simple open source firmware for their GPUs.
We couldn't care less about the hacks and kludges baked into the proprietary Nvidia drivers and firmware focused on DirectX-powered gaming. The current path Nvidia has chosen with signed firmware locks open source developers out of much of the low-level operation of their GPUs.
> But then these optimizations went so far as fixing (in the driver) broken game code.
AMD does the exact same thing and always has. When you see shaders come down the wire you can replace them with better-optimized or more performant versions. It's almost always fixing driver "bugs" in the game rather than actual game bugs. And the distinction is important.
I do agree with you, but that element is something everyone has to do to remain competitive in games. Developers will only optimize for one platform (because they're crunching), and 9 times out of 10 that's an RTX 2080 Ti.
Nvidia did more than that, hooking their drivers in the same way that they would for benchmarks: "oh, actually ignore this API call", "oh, actually issue these two calls when you see this one call with this signature", "yes, this game requested synchronization here, but just ignore it", those kinds of things.
While yes AMD did similar things when they could, it was way less prevalent (if only because they didn't have the staff necessary to pull it off to the same degree).
> I always just assumed that interfacing proprietary IP with the GPL is a tricky legal business. One slip, and all your IP becomes open source.
This persistent bit of FUD really needs to die. Yes, you have to be careful, but at this point it's ridiculously well known what is obviously correct and what is obviously incorrect when dealing with the GPL. I'm sure there are some grey areas that haven't been worked out, but avoiding those is fairly simple.
Nvidia is already in a weird grey area, releasing binary blobs with an "open source" shim that adapts its interfaces to the GPL kernel. As much as the Linux kernel's pragmatic approach toward licensing helps make it easier on some hardware manufacturers, sometimes I wish they'd take a hard line and refuse to carve out exceptions for binary drivers, inasmuch as those can sometimes/always be considered derived works.
I don't know if the statement about interfacing with the GPL is true or not, but your statement first calls it FUD, meaning you believe it is false, and then says that all the driver code should be considered a derivative work and therefore be subject to the GPL, meaning that the original statement you called FUD is actually true.
Seems to me that a lot of GPL advocates are actually responsible for a good part of the GPL FUD.
> This persistent bit of FUD really needs to die. [...] I'm sure there are some grey areas that haven't been worked out
Way to contradict yourself.
> but avoiding those is fairly simple.
[Citation needed].
> As much as the Linux kernel's pragmatic approach toward licensing helps make it easier on some hardware manufacturers, sometimes I wish they'd take a hard line and refuse to carve out exceptions for binary drivers, inasmuch as those can sometimes/always be considered derived works.
Maybe this is what needs to happen to force companies to change their mindset, but where I work, lawyers tell us to (1) never contribute to any GPL'ed code, (2) never distribute GPL'ed code to anybody (e.g. not even in a docker container), etc.
Their argument is: a single slip could require us to publish all of our code and make all of our IP open, and to make sure this doesn't happen, an army of lawyers, software engineers, and managers would need to review every single code contribution that has something to do with the GPL. So the risks are very high, the cost of doing this right is very high as well, and the reward is... what, exactly? So in practice this means we can't touch GPL'ed code with a 10-foot pole; it is not worth the hassle. If I were to ask my manager, they would tell me it is not worth it. If they ask their manager, they will be told the same. Etc.
BSD code? No problem, we contribute to hundreds of BSD-, MIT-, Apache-, ... licensed open source projects. Management tells us to just focus on those.
Nouveau is a highly capable open source driver for NVIDIA GPUs based on reverse-engineering.
For some older card generations (e.g. GTX 600 series) it was competitive with the official driver. But in every hardware generation since then, the GPU requires signed firmware in order to run at any decent clock speed.
The necessary signed firmware is present inside the proprietary driver, but nouveau can't load it because it's against the ToS to redistribute it.
Most GPU features are available but run at 0.1x speed or slower because of this single reason. Nvidia could absolutely fix this "tomorrow" if they were motivated.
Solution: download the Nvidia blob, isolate the binary firmware, extract, load. Is it fun? Absolutely not. Desperate times and unconscionable acts of legalism call for equally extreme levels of overly contrived legalism circumvention.
At this point I've gotten so bloody tired of the games people play with IP that I'm arriving at the point where I think I wouldn't even mind being part of the collateral damage of our industry being burned to the ground through the complete dissolution of any software-delivery or related contract. If you sell me hardware, and play shenanigans to keep me from being able to use it to its fullest capability, you're violating the intent of the right of first sale.
To be honest, I think every graphics card should have to be sold bundled with enough information for a layperson (or, I'll throw out a bone, a reasonably adept engineer) to write their own hardware driver/firmware. Without that requirement, this industry will never change.
AMD didn't directly open source their driver because of legal issues.
The point is that it's not about open sourcing your proprietary driver, but about not getting in the way of an alternative open source driver, maybe even lending it a bit of a hand, even if just unofficially.
I think if I were nvidia I might go in the direction of having a fully, or at least partially, open source driver for graphics and a not-so-open-source driver for headless CUDA (potentially running alongside an Intel integrated-graphics-based head/GUI).
Though I don't know what they plan wrt. ARM desktops/servers, so this might conflict with their strategies there.
The GPL can't cause your code to automatically become GPL licensed. It can only prevent you from distributing the combination of your incompatibly licensed code and others' GPL code.
>I always just assumed that interfacing proprietary IP with the GPL is a tricky legal business. One slip, and all your IP becomes open source.
The only tricky things involve blatantly betraying the spirit of the agreement while trying to pretend to follow the letter and hoping a judge supports your interesting reading of the law.
Even so there is no provision in law wherein someone can sue you and magically come into possession of your IP.
Take a look at a recent snapshot of changesets and lines of code to the Linux kernel contributed by various employers: https://lwn.net/Articles/816162/
Arm themselves is listed at 1.8% by changesets; but Linaro is a software development shop funded by Arm and other Arm licensees to work on Arm support in various free software, and they contributed 4% of changesets and 8.8% by lines of code. And Code Aurora Forum is an effort to help various hardware vendors, many of whom are Arm vendors, get drivers upstreamed, and they contributed 1.8% by changesets and 10.1% by lines changed. A number of other top companies listed are also Arm licensees, though their support may be for random drivers or other CPU architectures as well.
However, Arm and companies in the Arm ecosystem do make up a fairly large amount of the code contributed to Linux, even if much of it is just drivers for random hardware.
And Arm and Linaro developers also contribute to GCC, LLVM, Rust, and more.
There’s no defending Nvidia’s approach to Linux and OSS. It is plain awful no matter how you try to twist reality. And it is actively damaging because it forces extra work on OSS maintainers and frustrates users. You should not be required to install a binary blob in 2020 to get basic functionality (like fan control) to work. Optimus and Wayland support are painfully, purposely bad.
> You should not be required to install a binary blob in 2020 to get basic functionality (like fan control) to work.
You are not required to do that. Use nouveau, buy an AMD or intel GFX card.
You are not entitled to it either. The people developing nouveau in their free time don't owe you anything, and nvidia does not owe you an open source driver either.
I don't really understand the entitlement here. None of the drivers on my windows and macosx machines are open source. They are all binary blobs.
I don't use nvidia GFX cards on linux anymore (intel suffices for my needs), but when I did, I was happy to have a working driver at all. That was a huge upgrade from my previous ATI card, which had no driver at all. Hell, I even tried using AMD's ROCm recently on Linux with a 5700 card, and it wasn't supported at all... I would have been very happy to hear that AMD had a binary driver that made it work, but unfortunately it doesn't.
And that was very disappointing, because I thought AMD had good open source driver support. At least when buying Nvidia for Linux, you know beforehand that you are going to have to use a proprietary driver, and if that makes you uncomfortable, you can just buy something else.
> You are not required to do that. Use nouveau, buy an AMD or intel GFX card.
Has internet discussion really fallen so low that everything needs to be spelled out and no context can ever be implied?
We're in a thread about NVidia, so of course OP's talking about NVidia hardware here. Yeah, they can get AMD, but that does not change their (valid) criticisms of NVidia one bit.
> I don't really understand the entitlement here. None of the drivers on my windows and macosx machines are open source. They are all binary blobs.
Aaand?
Windows and macOS have different standards for drivers than many Linux users do. Is it really that surprising that users who went with an open-source operating system find open-source drivers desirable too?
I find it really weird to assume that because something is happening somewhere, it's some kind of an "objective fact of reality" that has to be true for everyone, everywhere.
When you shop for things, are you looking for certain features in a product? Would you perhaps suggest in a review that you'd be happier if a product had a certain feature or that you'd be more likely to recommend it?
It's the same thing. NVidia is not some lonely developer on GitHub hacking during their lunch break on free software.
Do you also assume that the kind of music you find interesting is objectively interesting for everyone?
This has nothing to do with entitlement. It's listing reasons for why someone thinks NVidia buying ARM is a bad idea.
> Is it really that surprising that users who went with an open-source operating system find open-source drivers desirable too? When you shop for things, are you looking for certain features in a product? Would you perhaps suggest in a review that you'd be happier if a product had a certain feature or that you'd be more likely to recommend it?
It is to me. When I buy a car, I do not leave a 1 star review stating "This car is not a motorcycle; extremely disappointed.".
That's exactly how these comments being made sound to me. Nvidia is very clear that they only support their proprietary driver, and they deliver on that.
I have had many GFX cards from all vendors over the years, and I've had to send one back because the vendor wasn't honest about things like that.
Do I wish nvidia had good open source drivers? Sure. Do I blame nvidia for these not existing? Not really. That would be like blaming Microsoft or Apple for not making all their software open source.
I do however blame vendors that do advertise good open source driver support that ends up being crap.
What does any of this have to do with nvidia buying or not buying ARM? Probably nothing.
What nvidia does with their GFX driver and what they would do with ARM can be as different as what Microsoft does with Windows and what it does with GitHub.
> When I buy a car, I do not leave a 1 star review stating "This car is not a motorcycle; extremely disappointed."
That's a bad analogy. A better one would be: it's like you bought a car (an open-source operating system) and this one accessory supplier is selling you what are really motorcycle parts that only barely fit the car (a less-than-great proprietary driver when you are explicitly on an open system).
Additionally, they are extremely secretive, absolutely refuse to answer any sort of question, and, by implementing various forms of DRM, won't allow you to modify the parts you purchased from them to fit better.
You can just not buy those parts and indeed that's what many users are doing.
This is separate from raising concerns about this somewhat dodgy parts manufacturer potentially acquiring another manufacturer, specifically one that does require a lot of cooperation with others by its very nature.
> Nvidia is very clear that they only support their proprietary driver, and they deliver on that.
It's more complex than that. They seem to actively implement features to make it purposely more difficult to develop an independent open-source driver. This is rather different from just being passively indifferent to open source. Moreover, their proprietary driver can be less than stellar too, so I'm not so sure they "deliver" even on that.
Therefore we, Linux users, can refuse to support a company that only supports their (lacking) proprietary driver and certainly we are within our rights to raise concerns about its purchase of ARM given its actively hostile approach to open-source.
I would probably agree with you if everything were modular, commodity, and easily swappable. If I decide I won't buy hardware with nvidia in it, that chops out a chunk of the possible laptops I can have. It means I can't repurpose older hardware; sure, hindsight may be 20/20, but perhaps I didn't have the foresight 7 years ago to realize I'd want to run Linux on something today (yeah, older hardware is better supported, but it's by no means universal). It means that I can't run some things that require CUDA and don't support something like OpenCL.
And you can argue that that still is all fine, and that if you're making a choice to run Linux, then you have to accept trade offs. And I'm sympathetic to that argument.
But you're also trying to say that we're not allowed to be angry at a company that's been hostile to our interests. And that's not a fair thing to require of us. If nvidia perhaps simply didn't care about supporting Linux at all, and just said, with equanimity, "sorry, we're not interested; please use one of our competitors or rely on a possibly-unreliable community-supported, reverse-engineered solution", then maybe it would be sorta ok. But they don't do that. They foist binary blobs on us, provide poor support, promise big things, never deliver, and actively try to force their programming model on the community as a whole, or require that the community do twice the amount of work to support their hardware. That's an abusive relationship.
Open source graphics stack developers have tried their hardest to fit nvidia into the game not because they care about nvidia, but because they care about their users, who may have nvidia hardware for a vast variety of reasons not entirely under their control, and developers want their stuff to work for their users. Open source developers have been treated so poorly by nvidia that they're finally starting to take the extreme step of deciding not to support people with nvidia hardware. I don't think you appreciate what a big deal that is, to be so fed up that you make a conscious choice to leave a double-digit percentage of your users and potential users out in the cold.
> None of the drivers on my windows and macosx machines are open source. They are all binary blobs.
Not sure how that's relevant. Windows and macOS are proprietary platforms. Linux is not, and should not be required to conform to the practices and norms of other platforms.
I don't know why this is downvoted. If anything, Nvidia has been providing quality drivers for Linux for decades, and it was the only way to have a decent GPU supported by Linux in the 2000s, as ATI/AMD cards were awful on Linux.
> For example, you paint it as if Nvidia is the only company Apple has had problems with, yet Apple has parted ways with Intel, IBM (Power PCs), and many other companies in the past.
But for entirely different reasons. Apple switched from PowerPC to Intel because the PowerPC processors IBM was offering weren't competitive. They switched from Intel for some combination of the same reason (Intel's performance advantage has eroded) and to bring production in-house, not because Intel was quarrelsome to do business with.
Meanwhile, Apple refused to do business with nVidia even at a time when nVidia unambiguously had the most performant GPUs.
Apple didn't part ways with Intel and IBM because they were difficult to work with. They parted ways because Intel and IBM fell behind in performance. Nvidia has certainly not, and Apple has paid a price with worse graphics and machine learning support on Macs since their split. It's clearly different.
Correct. IBM didn't care to make a power-efficient processor, and Motorola didn't see the benefit of multimedia extensions in their processors because they needed those processors for their network devices.
Nvidia introduced a set of laptop GPUs that had a high rate of failure. Instead of working with their customers and eating some of the cost of repairing these laptops, they told them to deal with it. Apple, being one of those customers, got upset, was left holding the bag of shit, and hasn't worked with them since.
Intel and AMD have used their x86/AMD64 patents to block Nvidia from entering the x86 CPU market.
Nvidia purchasing ARM will hurt not the large ARM licensees like Apple and Samsung, but the ones that need to use the CPU in devices that don't need any of the multimedia extensions Nvidia will be pushing.
With Intel it's a bit more complicated than that. I think running Macs on their own processors is a cost saver for them and allows more control. And Intel's CPU shortages have hurt their shipping schedules. I don't think this transition is about CPU performance.
But yeah I don't think it's about collaboration either.
> The claim that nvidia is bad at open source because it does not open source its Linux driver is also quite wrong, since NVIDIA contributes many many hours of paid developer time open source, has many open source products, donates money to many open source organizations, contributes with paid manpower to many open source organizations as well...
Out of curiosity, is there any large open source product from NVidia? I can't think of any.
Only for their own hardware (RAPIDS, all the cuda libraries, etc.), which other companies like AMD have just forked and modified to work on their hardware keeping the algorithms intact.
Not agreeing or disagreeing, just want to point out that LLVM was always open source and wasn't developed by Apple. Apple just happened to hire the dev who initially wrote it.
Most LLVM development has, to my knowledge, been funded by Apple. LLVM and Clang have seen the bulk of their work done on Apple's payroll.
It is a bit like WebKit. It was based on KHTML which was an open source HTML renderer. But Apple expanded that so greatly on their own payroll that it is hard to call WebKit anything but an Apple product.
This is not true anymore. Apple funded a significant part of LLVM/Clang work in the 2010s, and then again with the aarch64 backend, but nowadays Google and Intel contribute much more to LLVM than Apple.
I think the common theme in your examples is that in these situations other parties bend to Nvidia's demands. Nvidia has no problem with other parties bending to their demands. But when another company or organization requires Nvidia to bend to their demands, things go awry almost without exception.
EDIT - for added detail:
> - nvidia manufactures GPU chips, collaborates with dozens of OEMs to ship graphics cards
Most (all?) of which bend to Nvidia's demands because Nvidia's been extremely successful in getting end users to want their chips, making the Nvidia chip a selling point.
> - nvidia collaborates with IBM which ships Power8,9,10 processors all with nvidia technology
IBM bends to Nvidia's demands so POWER can remain a relevant HPC platform.
> - nvidia collaborates with OS vendors like microsoft very successfully
Microsoft is the only significant OS vendor with which Nvidia collaborates successfully. It's true - but for the longest time Nvidia would have been out of business if they didn't. I will concede this point, but I don't find this is enough to paint a different picture.
> - nvidia collaborated with mellanox successfully and acquired it
Mellanox bent to Nvidia's demands to such an extent that they were acquired.
> - nvidia collaborates with ARM today...
Collaboration in what sense? My impression is that Nvidia and ARM have a plain passive customer/supplier relationship today.
> The claim that nvidia is bad at open source because it does not open source its Linux driver is also quite wrong, since NVIDIA contributes many many hours of paid developer time open source, has many open source products, donates money to many open source organizations, contributes with paid manpower to many open source organizations as well...
Nvidia is humongously behind their competitors Intel and AMD in open source contribution, despite having far more R&D in graphics. They are terrible at open source compared to the "industry standard" of their market, and only partake as far as it serves their short-term needs.
They are perfectly entitled to behave this way, by the way. But Nvidia's open source track record is only more evidence that they don't understand how to work in an open ecosystem, not less.
> You can take any big company, e.g., Apple, and paint a horrible case by cherry picking things (no Vulkan support on MacOSX forcing everyone to use Metal, they don't open source their C++ toolchain, etc.), yet Apple does many good things too (open sourced parts of their toolchain like LLVM, open source swift, etc.).
The "whataboutism" is valid but completely irrelevant here. I would also not appreciate Apple buying ARM.
> For example, you paint it as if Nvidia is the only company Apple has had problems with, yet Apple has parted ways with Intel, IBM (Power PCs), and many other companies in the past.
Apple has parted ways with Intel, IBM, Motorola, Samsung (SoCs) and PowerVR for technology strategy reasons, not relationship reasons. Apple had no reason to part ways with Nvidia for technical reasons (especially considering they went to AMD instead), but did so because of the terrible relationship they built.
> Apple had no reason to part ways with Nvidia for technical reasons (especially considering they went to AMD instead), but did so because of the terrible relationship they built.
I'm typing this on a MacBook with an Nvidia GPU that was created in 2012, many years after the failing laptop GPU debacle. AFAIK, Apple used that GPU until 2015?
I'd wager that Apple has been using AMD for something as mundane as offering better pricing, rather than disagreement 12 years ago. (Again: despite all the lawsuits, Apple is still a major Samsung customer.)
> I'd wager that Apple has been using AMD for something as mundane as offering better pricing, rather than disagreement 12 years ago. (Again: despite all the lawsuits, Apple is still a major Samsung customer.)
This used to be true, as Apple swapped between AMD and Nvidia chips several times in 2000-2015. Then Nvidia and Apple fell out, and Apple has not used Nvidia chips in new designs in 5 years - a timeframe in which Nvidia coincidentally achieved its largest technical advantages over AMD. Apple goes as far as to actively prevent Nvidia's own macOS eGPU driver from working on modern macOS. A simple pricing dispute does not appear to be a good explanation here.
Whatever the reason may be, the fact that Apple used Nvidia GPUs until 2015 at least debunks the endlessly repeated theory that it was because of the broken GPUs of 2008.
What Apple is really worried about with NVIDIA is vendor lock-in. They own the walled garden, they absolutely are not going to accept an "external dependency" they can't control.
CUDA is such an "external dependency". It locks you in to something that's not an Apple product.
I never quite got how the Apple story was an indictment of NVIDIA. The first-gen RoHS-compliant solders sucked and would fracture... how exactly is that NVIDIA's fault? In fact, wouldn't Apple have been the ones who chose that particular solder?
It is the same issue that caused the Xbox 360 red-ring-of-death, and caused "baking your graphics card" to become a thing (including AMD cards). It basically affected everyone in the industry at the time, and Apple would not have gotten any different outcome from AMD had they been in the hotseat at the time. They were just throwing a tantrum because they're apple damn it and they can't have failures! Must be the supplier's fault.
That one has always struck me as a "bridezilla" story where Apple thinks they're big enough to push their problems onto their suppliers and NVIDIA said no.
And as far as the Xbox thing... Microsoft was demanding a discount and NVIDIA is free to say no. If they wanted a price break partway through the generation, it probably should have been negotiated in the purchase agreements in the first place. NVIDIA needs to turn a profit too and likely structured their financial expectations of the deal in a particular way based on the deal that was signed.
Those are always the go-to "OMG NVIDIA so terrible!" stories and neither of them really strike me as something where NVIDIA did anything particularly wrong.
> Microsoft is the only significant OS vendor with which Nvidia collaborates successfully
Canonical, which ships nvidia's proprietary driver with Ubuntu, is another quite major OS vendor that collaborates with nvidia successfully. Recently, Ubuntu's wayland-based desktop environment was also the first to work with nvidia's driver (the results of this work are open source).
Apple is parting with Intel over quality control issues. That is the same reason Apple parted with IBM and Motorola - that, and Intel chips were faster.
You will find ARM Macs are cheaper than Intel Macs, and even if they're not as fast, they consume less power thanks to mobile technology.
Microsoft had the Surface tablet with ARM chips and an ARM version of Windows, which didn't sell well - but then Microsoft is not Apple, and Apple won't make the same mistakes.
Not really. There is nothing any future owner of ARM can do to cut Apple out, which is why Apple is not interested in purchasing ARM themselves. They co-developed the ARM6 with Acorn and VLSI, and their license allows them to build on the ARM core. The most Nvidia can do is try to outperform Apple, but that will come at the cost of customers that don't need desktop features like GPUs. https://en.wikipedia.org/wiki/ARM_architecture#Architectural...
I'm not normally one to defend Nvidia (particularly not from my Linux laptop), but at least the Xbox and PS3 issues never really seemed to be their fault from what I've heard on the grapevine.
Xbox: The Xbox's security was broken, and Nvidia apparently took the high road, claimed a loss on all existing chips in the supply chain (claiming a loss for the quarter out of nowhere and tanking their stock for a bit) and allowed Microsoft to ship a new initial boot ROM as quickly as possible for a minimum of cost to Microsoft. When that new mask ROM was cracked within a week of release, Microsoft went back to Nvidia looking for the same deal, and Nvidia apparently told them to pound sand - and in fact said that they would be doing no additional work on these chips, not even die shrinks (hence why there was no OG Xbox Slim). Admittedly there were other reasons why Microsoft felt like Nvidia still owed them, but it was a bit of a toxic relationship for everyone involved.
PS3: Nvidia was never supposed to supply the GPU until the eleventh hour. The Cell was originally supposed to crank up to 5GHz (one of the first casualties of the end of Dennard scaling, and of how it affected Moore's law as we conceived it), and there were supposed to be two Cell processors in the original design with no dedicated GPU. When that fell through and they could only crank them up to 3.2GHz, they made a deal with Nvidia at the last second to create a core with a new bus interconnect to attach to the Cell. And that chip was very close to the state of the art from Nvidia. Most of its problems were centered around duct-taping a discrete PC GPU into the console with time running out on the clock, and I don't think anyone else would have been able to deliver a better solution under those circumstances.
Like I said, Nvidia is a scummy company in a lot of respects, but I don't think the Xbox/PS3 issues are necessarily their fault.
I would agree that general Nvidia troubles don't particularly stand out in the PS3 hardware design clusterfuck, but Microsoft's and Nvidia's falling out really is indicative of terrible relationship management even if it was from both sides. Again my point is that Nvidia just doesn't understand how to work together, how not to view everything as a zero sum game. That doesn't mean that Nvidia is the only bad actor in these situations, but Nvidia really does end up in these situations quite a lot.
That is just not true. And I am with the parent commenters - I don't normally come out and defend Nvidia.
If working with everyone meant stepping back and relenting in every possible way, then Nvidia would not be profitable. I am not sure why Microsoft felt they were entitled to anything from Nvidia. Nvidia just said no. It was that simple.
Nvidia wants to protect their business interests, and that is what business is all about. And yet everyone on the internet seems to think the company should do open source or throw resources into it, etc.
I've already mentioned in the top parent comment that Nvidia is perfectly entitled to behave this way. They clearly know how to run a successful business in this way. I have bought Nvidia chips in the past and will continue to do so in the future when they are the best option for my use case - I don't really try to personify companies or products like this.
I am just pointing out that Nvidia's evident opinion on how to run a business (their corporate culture) is not in line with cultivating an open ecosystem the way ARM does. And the cultivation of this ecosystem is ARM's key to success here. Nvidia is entitled to run their business how they want, but I'm very much hoping that that way of working does not translate to how they will run ARM.
People everywhere in this thread are having huge difficulty separating the point "Nvidia's way of doing business does not match ARM's" from "I have personal beef with Nvidia's way of doing business". I'm trying to make the former argument.
> That is just not true.
Out of curiosity - what isn't true here? Am I missing facts, or are you expressing disagreement with my reading of the business situation? If the latter is based on some understanding I have some personal beef with Nvidia, then please reconsider.
The ARM company is not just about the instruction set architecture. The ISA wouldn't be interesting at all if no good processors were built with it [0]. For RISC-V to succeed, it requires a company that builds some good processor designs for it - for smartwatches, smartphones, tablets, laptops, desktops - and licenses them to others. That company (one or multiple) does not (yet) exist, and is not easy to build.
[0]: Which is exactly why SPARC and, with one exception, Power are dying, and why RISC-V is yet to deliver. Nobody (bar IBM's POWER line) is building good processors with those ISAs that make it worth the effort to use them. Nothing to do with the ISA - you just need chips people are interested in using.
Yep. It's difficult to build a community - you need enough mass to attract further interest. From a business view you end up with the question of "Why bother with RISC-V when ARM is doing what we need and has enough critical mass to keep things going forward?"
About the only thing that could force that to change would be another company buying up ARM and changing the licensing mechanisms (e.g. pricing or even removing some license options) going forward... or just wrecking the product utterly.
I do think RISC-V has an opportunity here, but only if ARM sells out to NV and NV screws this up as hard as they're likely to in that situation.
> For RISC-V to succeed, it requires a company that builds some good processor designs for it - for smartwatches, smartphones, tablets, laptops desktops - and licenses that to others
The way I see it is that this may actually generate incentive for someone to do that. One of the reasons it isn't happening yet is that there's no real need while ARM vendors keep supplying chips, and no real chance with ARM vendors as competition. This could, in theory, clear the way.
This is assuming it would have to be a new company, rather than an existing company like Qualcomm or AMD which could produce a processor with a different ISA if nVidia/ARM became unreasonable to deal with.
This is particularly true for Android because basically the entire thing is written in portable languages and the apps even run on a bytecode VM already, so switching to another architecture or even supporting multiple architectures at the same time wouldn't be that hard.
> This is particularly true for Android because basically the entire thing is written in portable languages and the apps even run on a bytecode VM already, so switching to another architecture or even supporting multiple architectures at the same time wouldn't be that hard.
Google could easily afford to design their own RISC-V CPUs and port Android to it, if they thought it was in their strategic interests to do so.
I think it really depends on how nVidia-owned Arm behaves. If it behaves the same as Softbank-owned Arm, I don't think Google would bother. If it starts to behave differently, in a way which upsets the Android ecosystem, Google might do something like this. (I imagine they'll give it some time to see whether Arm's behaviour changes post-acquisition.)
I’m not saying your claim is wrong but it’s not clear to me how that article backs the claim that China is pushing hard on RISC-V. It seems more like the old CEO of Arm China was doing no good very bad things, most likely for his own benefit.
“Arm revealed that an investigation had uncovered undisclosed conflicts of interest as well as violations of employee rules.”
That was intended to support the second part of my comment (although I can definitely see how that wasn’t clear - sorry about that). A lot of Chinese companies have come out with new RISC-V designs and it’s clear they’re prioritizing making it a possible alternative platform in the case that Arm can no longer be used. The RISC-V foundation also decided to move from the US to Switzerland to avoid the exact sorts of restrictions that have been placed on Arm.
This can happen anywhere in the world. In order to remove a CEO, you have to follow the proper process. Allen Wu claims that the process wasn't followed and, therefore, his dismissal was illegal and void in effect.
It's not like he was dismissed and he just didn't leave his office. He's challenging the legality of his dismissal.
a•verse /əˈvɜrs/ (adj.): Having a strong feeling of opposition to; unwilling: Not averse to spending the night here.
a•verse (ə vûrs′), (adj.): Having a strong feeling of opposition, antipathy, repugnance, etc.; opposed: He is not averse to having a drink now and then.
I apologize. I've been grumpy about neurological issues causing me to have more trouble typing than usual the last few weeks since a lower back injury. My response was over the top and didn't need to be so short-tempered, so again I apologize.
They can also design their own ISA. An ISA is a document, they can write their own. Now, can you think of reasons why they wouldn't want to do it that don't also apply to MIPS?
I was thinking the same thing. Nvidia could start charging obscene fees for ARM licenses, but then RISC-V is poised to receive more investment and become increasingly mainstream. Not such a bad thing. We can switch architectures. The toolchain is maturing. Imagine if more companies started making high-performance RISC-V chips?
Don't forget the GeForce Partner Program they pushed a while back, which required partners to make their gaming brands exclusive to GeForce products. They ended up cancelling it, and I bet the reason was all the anti-competitive violations the FTC would have slapped on them.
While Nvidia has a vastly superior product to AMD and Intel, they have less than 20% of the GPU market. Intel has held greater than 50% market share since 2010.
It is very hard to make an anti-competition case against someone who is consistently 2nd or 3rd in the market.
The “GPU market” is not an ideal market (e.g. with total fungibility of goods, low, inelastic demand, etc.) - Intel has most of the share of GPUs sold because it's impossible for NVIDIA - or anyone else - to compete with Intel in the spaces where Intel supplies without competition: CPU-integrated GPUs.
On a related note: with PCs now definitely heading towards ARM, this is a sensible move by NVIDIA: they could now sell GeForce-integrated ARM chips for future Windows and Linux boxes - and then they would be the ones with the dominant marketshare.
If only as a point of balance: Intel integrated GPUs are the safe choice on Linux. If they were not, there would be space for competitors - entry-level GPUs, AMD iGPUs.
The market relevant to the anti-competitiveness question isn't all GPUs, it's gaming hardware. The sole GPU providers for that space are AMD and Nvidia.
Nvidia's GPP would require manufacturers such as ASUS, Gigabyte, MSI, HP, Dell, etc. to have their gaming brands only use GeForce GPUs. So all the well-known gaming brands such as Alienware, Voodoo, ROG, Aorus and Omen would only be allowed to have GeForce. Nvidia already has aggressive marketing plastering their brand across every esports competition, which is fair game, but the GPP would be a contractual obligation to not use AMD products.
Which is a perfectly reasonable and legal thing to do, even if you don't personally like it.
Nike is the exclusive clothing brand of every NBA team. American Airlines is the exclusive airline of the Dallas Cowboys. UFC fighters can only wear Reebok apparel the entire week leading up to a fight.
Heck, I worked for a company that signed an exclusive deal to only use a single server vendor.
> Heck, I worked for a company that signed an exclusive deal to only use a single server vendor.
How did that work out? Did your company secure a good rate - and/or did the vendor become complacent once they realised they didn’t have to compete anymore? Did the contract require minimum levels of improvement in server reliability and performance with each future product generation?
It's a tough spot to be in. AMD and Intel split the largest chunk of the cake because their products are cheap, so they make money in volume.
The only reason nvidia has 20% of the GPU market at all is because their products are better, but without volume, there is very little separating you from losing the market.
If NVIDIA slips behind AMD and Intel performance-wise during one generation, the competition will have cheaper and better products, so it's pretty much game over.
>While Nvidia has a vastly superior product to AMD
Which product? It can't possibly be their GPUs you mean because that would be hilariously wrong. That is like saying a Lamborghini is a better car than a VW because it has a higher top speed.
To me, and to many, many buyers, AMD is the superior product. To most, Intel has the best product by far (business laptops, Chromebooks, etc.).
AMD has the best iGPUs available, which (unlike Intel's) are actually fast enough to play a lot of games. They're also significantly more power efficient as a result of 7nm. For any use where this is fast enough -- and this is a huge percentage of the PC market -- nVidia has no answer to this.
AMD is the only option for a performant GPU with reasonable open source drivers. Intel has the drivers but they don't currently offer discrete GPUs at all. nVidia doesn't have the drivers.
AMD makes it a lot easier to do GPU virtualization.
AMD GPUs are used in basically all modern game consoles, so games that run on both are often better optimized for them.
They also have the best price/performance in the ~$300 range, which is the sweet spot for discrete GPUs.
At this point, do Apple and Qualcomm even depend on ARM's new designs? In the same way that AMD branched from Intel but are still mostly compatible, can the same thing happen in mobile chipsets?
Apple very much does not and is reported to have a perpetual license to the ISA. Apple will most likely be fine even in the worst case scenario.
Qualcomm, however, has been shipping rebranded or tweaked (which of the two is unclear) standard ARM CPU core designs since 2017. They very much depend on ARM doing a lot of heavy lifting.
A few years ago 15 companies - among them Qualcomm, Apple, Intel, Nvidia, Microsoft, Samsung, Huawei and more - had an architecture license, which is perpetual, so that probably puts them all on safe ground. I'm sure that the specific licensing terms can vary, but I doubt someone like Qualcomm didn't take any precautions for exactly such an eventuality given how much they rely on being able to ship new ARM-based SoCs. They probably gave up on designing custom cores because of the effort/costs involved and the fact that 99% of the Android market doesn't really require it. But they'd still have to ship ARM cores, standard or not.
Apple is probably the safest of the bunch given how they helped build ARM.
ARM announced it would cut ties with Huawei after the US ban but reconsidered the decision less than half a year later so I assume that the architecture license is either usually iron clad or simply too valuable to both sides to give up.
Perpetual means they can indefinitely deliver as many designs as they want using the ISA they licensed. Not "perpetual for everything ARM present and future". It's similar to the perpetual multi-use license except the holder has more freedom with the modifications and customized designs. All other licenses are time limited.
And again, the terms of the license may vary. I have the impression that Apple has a far more permissive license than anyone else out there for example.
Qualcomm has shown in the past to be able to build great custom ARM CPUs not based on an ARM standard design. But it seems they decided the investment was not worth it after their custom Kryo design (which was not a complete failure but definitely not better than what ARM was producing at the time). But I think they'll need to go back to their own silicon at some point if this acquisition happens.
For sure Huawei and Samsung (and smaller manufacturers like Rockchip, Mediatek, Allwinner) don't have an impressive track record designing custom CPU IP and definitely not custom GPU IP. These guys should be terribly alarmed if this were to happen.
Definitely in the short run, because of the understandable fear among NVIDIA's competitors of using what would now be NVIDIA's technology. Maybe in the mid run, if those fears begin to crystallize. Unlikely in the long run - I'd assume NVIDIA would spin ARM off before killing it entirely, since buying ARM would be a multi-billion investment.
Indeed. Nvidia shouldn't be allowed to buy it, given their status (not just reputation) as an anti-competitive bully.
But anti-trust is so diluted and toothless these days, that the deal will probably be simply rubber stamped. If they aren't stopping existing anti-competitive behavior, why wouldn't they allow such bullies to gain even more power?
Yeah, I, too am worried about this. I would have rather had a conglomerate of companies with Apple being one buying them and keeping them private. But oh well. Hopefully Nvidia does right by all of ARM’s existing customers.
With this buy Nvidia has GPUs, CPUs, networking, what else do they need to be a vertically integrated shop?
If all 'closed' companies would support Linux as well as NVIDIA does then I would throw a party. Keep in mind that they don't have to open up their stuff. Instead, they support it to the hilt and as long as I've been using Linux and Nvidia together (2006 or so) they've never let me down.
> Neumann created a company that destroyed value at a blistering pace and nonetheless extracted a billion dollars for himself. He lit $10 billion of SoftBank’s money on fire and then went back to them and demanded a 10% commission. What an absolute legend.
Is the global industry (cloud, PC, peripheral, mobile, embedded, IoT, wearable, automotive, robotics, broadband, camera/VR/TV, energy, medical, aerospace and military) loss of Arm independence our only societal solution to a failed experiment in real-estate financial engineering?
Why would Arm be valued at $10B publicly and $32B+ privately? Nvidia shareholders would be paying a premium for ... what exactly? Did Softbank overpay for Arm?
Is Arm not profitable as a standalone business? They recently raised some license fees by 4X.
I don’t believe NVidia will pay $30B. But certainly they might believe ARM has value outside its current cash flow and mediocre growth. Like strategically combining technologies.
I’m skeptical that will work, but Son was dumb enough to pay $31B with no strategic value.
> Nvidia shareholders would be paying a premium for ... what exactly?
They'd be paying a premium for a path to an all-nvidia datacenter & supercomputer.
Consider HPC applications like Oak Ridge's Frontier supercomputer. They went with an all AMD approach in part due to AMD's CPUs & GPUs being able to talk directly over the high-speed Infinity Fabric bus. Nvidia's HPC GPUs can't really compete with that, since neither Intel nor AMD are exactly in a hurry to help integrate Nvidia GPUs into their CPUs.
This makes ARM potentially uniquely valuable to Nvidia - they can then do custom server CPUs to get that tight CPU & GPU integration for HPC applications.
A $30B acquihire would be impressive, 100 times more than Amazon paid for Annapurna, the team who built AWS Graviton server CPU on top of Arm's reference design. If the HR department is having so much trouble hiring Arm engineers that Nvidia needs to pay 30 billion dollars to hire a CPU design team, something's wrong. Nvidia already has CPU design teams, e.g. they made a 2014 Transmeta-like design.
> .. this chip is fascinating. NVIDIA has taken the parts of Transmeta's initial approach that made sense and adopted them for the modern market and the ARM ecosystem -- while pairing them with the excellent GPU performance of Tegra K1's Kepler-based solution.
> there’s an interesting theory ... that Denver is actually a reincarnation of Nvidia’s plans to build an x86 CPU, which was ongoing in the mid-2000s but never made it to market. To get around x86 licensing issues, Nvidia’s chip would essentially use a software abstraction layer to catch incoming x86 machine code (from the operating system and your apps) and convert/morph it into instructions that can be understood by the underlying hardware.
Which other Arm licensee has been talking about x86/Arm instruction morphing in 2020?
If the goal of acqui-billion-hiring the Arm reference design team is to prevent other companies from using those designs, that would endanger smaller vendors in the Arm supply chain, along with many of the devices that run modern society. Regulators may not like that.
An ARM owned and fully controlled by NVIDIA is probably worth more to them than an independent and reasonably neutral ARM who's willing to do business with NVIDIA's competitors. Maybe not $22B more, though.
Doing an IPO would mean they will use the money raised meaningfully. Shareholders probably see more upside with Nvidia integration. I'm not really sure what ARM needs a bunch of money for in an IPO; they are pretty established.
The goal of independence is typically to execute on a vision.
According to some comments in this thread, the alternative is the slow destruction of the neutral Arm ecosystem. While some new baseline could be established in a few years, many Arm customers could face a material disruption in their supply chain.
With the US Fed supporting public markets, including corporate bond purchases of companies that include automakers with a supply chain dependent on Arm, there is no shortage of entities who have a vested interest in Arm's success.
If existing Arm management can't write a compelling S1 in the era of IoT, satellites, robots, edge compute, power-efficient clouds, self-driving cars and Arm-powered Apple computers, watches, and glasses, there will be no shortage of applicants.
ARM was publicly traded between 1998 and 2016. In that period its value multiplied about 25x, not counting the premium of the acquisition. Could you elaborate, please? Where do you see the disaster? (Honest question).
Apple is a small, although significant, part of ARM's total market share. And that 25x is, as I said, without taking into account the premium. If you do, and there are good arguments to do so, the valuation growth is 35x, in almost 20 years.
Regarding innovation, ARM's been at it since 1990. I'm sure it's not the same now as it was 30 years ago, but we're well past the point where one can reasonably fear it to be an unsustainable business. Last time I heard numbers, they were talking about more than 50 billion devices shipped with ARM IP in them. That is a massive market.
You don't answer my question. Why wouldn't licensing businesses work as publicly traded companies? What's the fundamental difference, especially in an increasingly fabless market, between a company licensing IP to other companies and a company selling productized IP to consumers?
At the moment ARM lives or dies by the success of the ecosystem as a whole.
When it's owned by a customer, this may no longer be the case, and there are huge potential conflicts of interest. For example, would an Nvidia-owned ARM offer a new license to a firm that would be a significant competitor to an existing Nvidia product (e.g. Tegra)? Will Nvidia hinder the development efforts of other competitors? Will Nvidia give itself access to new designs first? How will it maintain appropriate barriers to the flow of information about competitors' new designs to its own design teams?
I can see this getting very significant regulatory scrutiny and rightly so.
China is already aware and aiming to supply 70% of their own demand for chips [0]. Thanks to that, we might also see a rise in RISC-V chips [1], which could even get the attention of other states besides China and India.
I think Trump started something unintentionally that might put other countries in a better position to deal with the American semiconductor hegemony 5-10 years from now.
IMO, Nvidia should be allowed to buy ARM just for it to get bad enough for people to want to buy non NVIDIA products. For years NVIDIA has had shitty business practices, but I bet most people on HN (and the rest of the world) don't give 2 shits about competition and market leadership. They just buy NVIDIA because it's the standard.
Things have to get worse before they get better. It's really the only way humans and the public seem to be able to learn.
It's Nvidia circling back for a kill on Intel. They didn't go head to head, even though they wanted to. Instead they built a completely different space for themselves within data centers, got a foothold, expanded (Mellanox), and are now going for the missing piece, which will also allow them to expand the battleground with Intel outside of data centers. Interesting times, and Nvidia, so far, has shown they know their strategic moves.
Nvidia is making a play for the data center business not the desktop business. Especially with its Mellanox acquisition, Nvidia wants to build high performance data centers (they have GPUs and networking but need CPUs since most computing is general purpose). I doubt that they'll succeed though since they don't have the software layer to provide the 'private cloud' that they are looking to build.
They basically want to run a cloud built around feeding enormous amounts of data to their GPUs, for AI operations and the like. This is also why they bought SwiftStack in March: to manage that storage.
NXP (formerly Freescale) generally has good docs and tools for their iMX series SoCs. For some of them you have to sign up to get the reference manuals though.
RISC-V is inherently a customizable ISA though, whereas ARM implementations are very specific about what they require to be called an "ARM processor". This wouldn't change with this acquisition.
No, they're isolated for a reason, with the RISC-V processor being used as the controller to manage the behavior of the other parts of the chip. Beyond just licensing costs, ARM is expensive because it requires implementing a lot. With that chip being RISC-V, they can make it as minimal and perfectly tuned as possible, so it's slow when it can afford to be cheap and fast when it needs to be.
That isn't the same at all. Canonical being a major backer of Linux is significant, but Linux is sufficiently open and diversified that it does not stand or fall by one party, albeit there are some parties with more influence than others.
The other part of it is Softbank will be rescued from their idiotic and incredibly wasteful investments. They bought so much crap and had too much money and they still managed to buy a few valuable things, probably by accident. It's slightly infuriating. At least I'm happy to see amd succeeding by actual intelligent engineering.
So basically, it will be two companies which own both the CPU and GPU stack (AMD/ATI and Nvidia/ARM), and Intel will just sort of end up by the wayside. Not really what I expected.
FWIW, Intel has been pushing towards launching a dedicated discrete GPU platform. If/when they recover, this would place us in having three distinct CPU/GPU companies.
If I was Intel, I would be going straight for the TPU market. GPUs carry a bunch of baggage from the "G = graphics" legacy. The real money maker is not likely to be gamers (although that has been a healthy enough market). The future of those vector processing monsters is going to be ML (and maybe crypto). This is the difference between attempting to leapfrog and trying to catch up.
> The future of those vector processing monsters is going to be ML (and maybe crypto)
That's a heavy bet on ML and crypto(-currency? -graphy?). Has ML, so far, really made any industry-changing inroads in any industry? I'm not entirely discounting the value of ML or crypto, just questioning the premature hype train that exists in tech circles (especially HN).
Well, yes that is the point. My theory is that the gaming market for GPUs is well understood. I don't think there are any lurking surprises on the number of new gamers buying high-end PCs (or mobile devices with hefty graphics capabilities) in the foreseeable future.
However, if one or more of the multitude of new start-ups entering the ML and crypto (-currency) space end up being the next Amazon/Google/Facebook then that would be both unforeseeable and unbelievably transformative. Maybe it won't happen (that is the risk) but my intuition suggests something is going to come out of that work.
I mean, it didn't work out for Sony when they threw a bunch of SPUs in the PS3. They went back to a traditional design for their next two consoles. So not every risk pans out!
Is that opinion based on anything firmer than a pessimistic outlook?
Lots of people jump on any trend, it doesn't mean the hype is unjustified or that nobody is "moving the needle" with that trend. Recommendations (e-commerce, advertising, music, etc) have been pretty revolutionized by it.
As a personal anecdote, the quality of the recommendations I receive across the board has been roughly inversely proportional to the level hype around ML in the tech press and academia.
Cf. YouTube, Amazon and Netflix (products that bet BIG on recommendations) being incapable of recommending compelling material.
ML contributes to a significant fraction of revenue at three of the world's largest companies (Amazon, Google, Facebook - largely through recommendations and ad ranking). It also drives numerous features that other tech companies build into their products to stay competitive (think FaceID on iPhone). Hard to argue that it doesn't move the needle...
A ton. Look at the nearest device around you, chances are it runs Siri, Alexa, Cortana, or Google voice assistant. This will only grow.
Same with machine vision. It's going to be everywhere - not just self-driving trucks (which, unlike cars, are going to be big soon), but also security devices, warehouse automation, etc.
All this is normally run on vector / tensor processors, both in huge datacenters and on local beefy devices (where a stock built-in GPU alongside ARM cores is not cutting it).
This is a growing market with a lot of potential.
Licensing CUDA could be quite a hassle, though. OpenCL is open but less widely used.
nvidia's largest revenue driver, gaming, made 1.4B dollars last year (up 56% YoY). nvidia's second largest, "data center" (AI) made 968M (up 43% YoY). Other revenue was 661M. Up to you if nvidia's second largest revenue center, of nearly a billion/year is "industry changing"
> The future of those vector processing monsters is going to be ML (and maybe crypto).
Hopefully some of those cryptocurrencies (until they get proof-of-stake fully worked out) move to memory-hard proof-of-work using Curve25519, Ring Learning With Errors (New Hope), and ChaCha20-Poly1305, so cryptocurrency accelerators can pull double duty as quantum-resistant TLS accelerators.
I'm not necessarily meaning dedicated instructions, but things like vectorized add, xor, and shift/rotate instructions, at least 53 bit x 53 bit -> 106 bit integer multiplies (more likely 64 x 64 -> 128), and other somewhat generic operations that tend to be useful in modern cryptography.
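To make that concrete, here's a minimal C++ sketch (my own illustration, assuming a GCC/Clang target where the non-standard unsigned __int128 extension is available) of the two primitive shapes being described: the 64 x 64 -> 128 widening multiply that Curve25519-style field arithmetic is built on, and the add/xor/rotate pattern of a ChaCha20 quarter-round. Hardware that makes wide or vectorized versions of exactly these operations fast would serve both proof-of-work and TLS:

    #include <cstdint>

    // 64 x 64 -> 128-bit widening multiply: the core primitive of multi-limb
    // field multiplication (e.g. Curve25519). unsigned __int128 is a
    // GCC/Clang extension, not standard C++.
    inline void mul64x64(uint64_t a, uint64_t b, uint64_t& hi, uint64_t& lo) {
        unsigned __int128 p = static_cast<unsigned __int128>(a) * b;
        lo = static_cast<uint64_t>(p);
        hi = static_cast<uint64_t>(p >> 64);
    }

    // The "add, xor, shift/rotate" workload: one ChaCha20 quarter-round,
    // which is nothing but 32-bit adds, xors and rotates - an ideal target
    // for vectorization or dedicated hardware.
    inline uint32_t rotl32(uint32_t x, int r) {
        return (x << r) | (x >> (32 - r));
    }

    inline void chacha_quarter_round(uint32_t& a, uint32_t& b,
                                     uint32_t& c, uint32_t& d) {
        a += b; d ^= a; d = rotl32(d, 16);
        c += d; b ^= c; b = rotl32(b, 12);
        a += b; d ^= a; d = rotl32(d, 8);
        c += d; b ^= c; b = rotl32(b, 7);
    }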
The one thing I don't get is, there are a lot of machines out there that would gain a lot from specialized search hardware (think about Prolog acceleration engines, but lower level). For a start, every database server (SQL or NoSQL) would benefit.
It is also hardware that is similar to ML acceleration: it needs better integer and boolean (algebra, not branching) support, and has a stronger focus on memory access (which ML acceleration also needs, but gains less from). So how come nobody even speaks about this?
You would need large memory bandwidth and a good set of cache pre-population heuristics (putting it directly on the memory is a way to get the bandwidth).
ML would benefit from both too, as would highly complex graphics and physics simulation. The cache pre-population is probably at odds with low latency graphics.
From what we know so far (https://www.tomshardware.com/news/intel-xe-graphics-all-we-k...), it will be a while before Intel competes in the GPU space. The first offering of Xe graphics (still not out yet) will probably not be competitive with cards that AMD and Nvidia released over a year ago.
If Intel can survive their current CPU manufacturing issues, manage to innovate on their design again, and manage to improve the Xe design in a couple generations, they might be in a good position in several years. I (as a layman) give them a 50/50 shot at recovering or just abandoning the desktop CPU and GPU market.
> The first offering of Xe graphics (still not out yet) will probably not be competitive with cards that AMD and Nvidia released over a year ago.
Being only a year behind the market leaders with your first product actually seems pretty impressive to me. Especially if that's at the (unreasonably priced) top of the line, and they have something competitive in the higher volume but less sexy down-market segments.
Intel's i860 was released in 1989; that evolved into the i740 in 1998, and later on into the KNC, KNL, Xeon Phi line, etc.
The >= KNC products have all been "one generation behind" the competition. When the Intel Xe is released, Intel will have been trying to enter this market for about 30 years.
This market is more important now than ever before. I hope that they keep pushing and do not axe the Intel Xe after the first couple of generations only to realize 10 years later that they want to try again.
They had a bad product that they had to sell at below cost to get market share. The x86 tax is pretty small for a big out of order desktop core but it's much more real at Atom's scale.
I think radio tech is an issue in the mobile space. It is heavily patented and the big players who can integrate this technology into their chips can offer much more energy efficient solutions.
Intel had that in-house too... it just also kinda sucked...
That is the thing with a lot of these side projects Intel is always working on. It would be great if they actually delivered good products, but they often spend billions acquiring these companies and developing these products only to turn out one or two broken products and then dump the whole project.
I think this time is different with Xe, but I can't blame anyone for looking at the past history and being dubious that Intel is in it for the long haul.
> I wouldn't count Intel out yet. Sure, things don't look that great
I always get a kick out of the sentiment toward Intel on HN.
Intel is booming financially. Things have never been better for them in that respect. They have every opportunity to fix their mess.
Intel has eight times (!) the operating income of Nvidia, with a smaller market cap.
Intel is one of the world's most profitable corporations. $26 billion in operating income the last four quarters. Their margins are extreme. Their sales are at an all-time high. Their latest quarterly report was stellar.
In just 2 1/2 years Intel has added a business the size of Nvidia and AMD combined.
If they can't utilize their current profit-printing position to recover, then they certainly deserve their tombstone. Nobody has ever had an easier opportunity to find their footing.
This sounds very similar to the situation Nokia was in around 2007:
- Nokia was booming financially. Things had never been better for them in that respect.
- Nokia had XX times (!) the operating income of Apple's mobile business.
- Nokia was one of the world's most profitable corporations.
And, yet, the writing was on the wall. Nokia was doomed once the smartphone era came. That's where Intel is today: AMD crushes them on the high-end general purpose CPUs. ARM crushes them on I/O performance and the low-end for general purpose CPUs. GPUs crush Intel in the middle, for special-purpose (mainly single-precision floating point) computing.
Right now, large portions of new computer sales, and an even larger portion of the high-margin cpu sales, come from cloud computing. AMD and ARM are stealing huge market share from Intel on that front. I don't see that momentum changing any time soon.
There's a reason that Intel has 8x the operating income of NVidia while having a smaller market cap. It's not because of where they are currently; it's where they are going. Stock market valuations are forward-looking, and the future doesn't look so bright for Intel.
Nokia would have been fine - just switch the flagship from Symbian to Android, and continue with feature phones and Maemo. After so many years and under a different manufacturer, the brand is still alive.
Stock market valuations are forward-looking, but they aren't always predicting the right future.
Personally I won't be betting on or against Intel - it wouldn't shock me if they follow the Nokia route, it also wouldn't shock me if they come out with a new generation that puts them back on top within the next few years.
> Intel is booming financially. Things have never been better for them in that respect. They have every opportunity to fix their mess.
So was RIM in 2010. Profits are a trailing indicator. The PC market is really small compared to the mobile market and declining. While Apple only has 10% of the overall market, it has a much higher percentage of high end personal computers and Intel is about to lose Apple as a customer.
PCs also are having longer refresh cycles. What does “recovery” look like? PC sales going up? That's not going to happen.
They still have the server market, and while that is probably growing, Amazon is pushing its own ARM processors hard, and MS and Google can't be too far behind.
Intel has a habit of snatching defeat from the jaws of victory.
Their decades of more-or-less monopoly status have made them complacent; their revenue is still high because of the inertia built into the market, which simply will not vanish.
The datacenter, for example, is still dominated by Xeon not because people like Xeon over Epyc but because there is not an easy migration path between the two platforms. If I was building a whole new server farm with all new VMs, I would choose Epyc all day... but if I need to upgrade hosts in an existing farm with no downtime, well, that will need to be Xeon then...
When it comes to desktops/laptops though, Lenovo's AMD line is attractive and suffers no such problems
I find it humorous too, but I think it's easily explained: we get a constant stream of stories about the plucky underdogs with their fancy engineering achievements. HN loves both industry disruption and engineering achievements, so it's a sort of self-reinforcing reality distortion field. See Tesla for a very similar sort of story.
The innovators and underdogs are always great to see, and they fuel our collective imagination, so it's no surprise that they dominate the HN front page. Of course, that mind-share dominance is in stark contrast to the well-entrenched money-printing machines they're trying to disrupt, who are happy to keep dominating their respective industries year after year instead.
This is very true. They got this profitable though by gutting every long term investment in new forays: they sold off their ARM business, their modem business, they never bothered to make a serious GPU or mobile chip, etc.
The margins on those big Xeon chips have been so good that they ditched everything else, and painted themselves into a corner, sitting by the sidelines for the past 20 years as new markets emerged.
Right, that's kinda my point, maybe not strongly enough stated. The "don't look that great" is "they'll probably have to buy fab resources from someone else" to keep up perf-wise (like their competitors, who are fabless), not "they're going bankrupt anytime soon".
Is this comparable to that situation? AMD & Intel were building chips basically to the same standard. ARM/Nvidia vs. Intel would be a lot more asymmetric.
From the outside it sounds more like Intel has been infighting about this for 20+ years...
The original i740 was theoretically a capable card; although fairly hampered by being forced to use Main Memory for Textures. Intel eventually backed down from the graphics market back then, and instead continued to use the 740 as a basis for the integrated graphics in the i810/815 chipsets.
But, as GPUs became closer to what we saw as real GPUs, Intel continued to press on with the idea that keeping things done in the CPU was better for them (i.e. encouraging upgrades to higher end CPUs vs selling more lower margin graphics cards.)
You saw a similar pattern with the 845/855/865: Shaders were all done in software (Hey, it finally almost justified Netburst, right? ;)
And this pattern seems to continue with various forms of infighting between groups up to this day.
The other consistent problem they have had is driver compatibility/capability.
Also, the i740 drivers were really bad. I had one, and I remember all kinds of bugs and graphical glitches in games that worked fine on a GeForce 2 MX that I got later.
Heh... ever since the days of the Chips and Technologies acquisition, if not sooner, they've vacillated between wanting to be in the graphics business and not being in the graphics business...
The top level comment was about companies owning a CPU and GPU stack. NVIDIA licenses ARM IP now, if they bought ARM they would actually own the designs, putting them on the level of AMD and Intel where they would have greater control over the technology.
Just licensing the designs puts a company on a lower chipmaking tier with Qualcomm, Samsung, Apple, Huawei, etc.
> NVIDIA licenses ARM IP now, if they bought ARM they would actually own the designs
NVIDIA Carmel is NVIDIA's own Arm v8.2-A design [0]. They only licence the architecture. The core is quite interesting, as it's doing dynamic recompilation for an underlying VLIW architecture.
However, NVIDIA Orin[1] (followup for Tegra Xavier) will use a Cortex-A78 core[2] (formerly Hercules) licenced from ARM.
One difference is that AMD only offers integrated graphics with weak CPUs, while Intel, unless something's changed recently, offers them on all their consumer models.
Intel has started offering "F" SKU processors without integrated graphics only very recently, with 9th and 10th gen Core CPUs.
But yes, otherwise nearly every Intel CPU has integrated graphics, whereas only a few select AMD CPUs have integrated graphics (and AMD brands them as APUs, not CPUs).
I'd be okay with only a few select CPUs, if even one of them was a reasonably powerful one. Instead, it's only the bottom of the barrel CPUs performance-wise.
It seems that is changing somewhat with the 4000-series APUs, but guess what, those are only going to be sold to OEMs, not individuals.
It's all rather frustrating, since I'm still on an i7-4770k and wouldn't mind an upgrade.
Right. That's not a significant upgrade IMO. I'm not even sure it would be worth it if it merely required a CPU swap. Since it actually requires a new motherboard, it's not even close to worth it.
A real upgrade would be to a 3900X (passmark: 32861), or at the very least, a 3600 (passmark: 17828). But those require a discrete GPU.
The 4700G looks like it would more or less suffice (passmark: unknown, ~18k?) , but it won't be sold to individuals, only OEMs.
> 3.25x the GPU.
As far as I can tell, I've never run into any limits of the 4770K's iGPU, so I don't think this matters. Running dual 1920x1200 monitors.
It would also let you upgrade from DDR3 to DDR4 and ~double your memory bandwidth. But if you wait another year or two, you could jump right to a DDR5 system :)
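Rough numbers behind the "~double" claim, assuming dual-channel DDR3-1600 on the Haswell board versus dual-channel DDR4-3200 on a current one (theoretical peak only; real-world gains are smaller):

    #include <cstdio>

    // Theoretical peak bandwidth = transfer rate (MT/s) x 8 bytes per
    // transfer (64-bit channel) x number of channels.
    constexpr double peak_gb_s(double mt_per_s, int channels) {
        return mt_per_s * 8.0 * channels / 1000.0;  // GB/s
    }

    int main() {
        std::printf("DDR3-1600, dual channel: %.1f GB/s\n", peak_gb_s(1600, 2)); // 25.6
        std::printf("DDR4-3200, dual channel: %.1f GB/s\n", peak_gb_s(3200, 2)); // 51.2
    }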
Maybe I should review AMD's GPU offerings again. Do you happen to know anything about this? Last time I was looking for (fanless + dirt cheap + dual display), and couldn't find anything that fit all 3. However... I didn't ask the question, does the fan run all the time, or only under heavy load?
Also, with lots of games reportedly working on Linux these days, maybe I should replace "dirt cheap" with "reasonably cheap."
Do you not know Intel is close to releasing their own GPUs or do you just think they will just fail at it?
Even with the acquisition of ARM, I don't see Nvidia any better off than Intel at this moment as far as CPU/GPU stack goes. Frankly, I would think AMD would be the one to end up by the wayside since they still are weak on the software side.
I think there's a good chance they'll fail. This is true of any new venture, so it's a bit lame to say, but there are reasons:
Right now I have a fairly decent GPU in my Macbook which I've hardly used. Very little supports it, because it's not nVidia. I can't use it for AI training, for example. Sure, it might work ok for some games, but Macbooks aren't really for gaming, and nVidia has captured that market nicely anyway.
Things can change; maybe Intel's software stack will be incredible. I don't know. But they have quite a hill to climb before they reach that summit.
There's ROCm [1] though. It's just that almost every ML platform has blindly bent to the NVIDIA vendor lock-in. CUDA is a disaster, like DirectX was back in the day. One day it will go, hopefully soon enough.
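For a sense of what the portability layer looks like: AMD's HIP (part of ROCm) mirrors the CUDA runtime API almost call-for-call, so CUDA-shaped code ports largely by renaming. A minimal sketch (my own example, not any framework's actual backend) that should build with hipcc:

    #include <hip/hip_runtime.h>
    #include <vector>
    #include <cstdio>

    // Trivial SAXPY kernel - same shape as the CUDA version; only the
    // host-side API prefix changes (hipMalloc vs cudaMalloc, etc.).
    __global__ void saxpy(int n, float a, const float* x, float* y) {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < n) y[i] = a * x[i] + y[i];
    }

    int main() {
        const int n = 1 << 20;
        std::vector<float> hx(n, 1.0f), hy(n, 2.0f);

        float *dx = nullptr, *dy = nullptr;
        hipMalloc(reinterpret_cast<void**>(&dx), n * sizeof(float));
        hipMalloc(reinterpret_cast<void**>(&dy), n * sizeof(float));
        hipMemcpy(dx, hx.data(), n * sizeof(float), hipMemcpyHostToDevice);
        hipMemcpy(dy, hy.data(), n * sizeof(float), hipMemcpyHostToDevice);

        hipLaunchKernelGGL(saxpy, dim3((n + 255) / 256), dim3(256), 0, 0,
                           n, 2.0f, dx, dy);

        hipMemcpy(hy.data(), dy, n * sizeof(float), hipMemcpyDeviceToHost);
        std::printf("y[0] = %f\n", hy[0]);  // expect 4.0

        hipFree(dx);
        hipFree(dy);
        return 0;
    }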
Intel has a better record than AMD of providing software, contributing to OSS, and supporting developers who build on its hardware, so they could possibly do better than Radeon in some areas if they can deliver great hardware.
>Do you not know Intel is close to releasing their own GPUs or do you just think they will just fail at it?
Given this is about their third attempt at releasing a high performance gpu, I think skepticism is warranted until they’re actually selling something to the general public.
They will fail. If they release anything less than a card as fast as a standard nvidia card and with as good driver support, they have absolutely no chance.
nvidia has a huge market, people buy their stuff because it's fast and there's software support for everything from AAA games to scientific/engineering modelling to ML.
Intel took far, far too long to come up with a viable graphics ecosystem to ever be successful.
Maybe I'm jaded, but my current hope is they will fail at a discrete GPU, then dump that tech into making their onboard graphics much better and use that as a selling point for the CPUs, thus helping everyone in the long run.
There's a huge market for mobile devices that aren't Apple and are fast. Perhaps this would offer better SoC prospects than whatever slop Qualcomm is dishing out.
Where is this “huge” non Apple mobile market? The mobile market outside of Apple is a commoditized race to the bottom. The average selling price of an Android phone is around $270.
The ASP of iPhones is $760. Also, Apple never sold an iPhone 7 for $200. If you are finding one, they are being subsidized by the carrier and they are still paying the wholesale cost.
While ARM can be great, I don't think anyone is entirely writing off x86. Intel owns a lot of fab capacity. And while I don't get as hyped on it as other people, Intel going hard into RISC-V could change the game for ARM (if it was handled in a non-typical-Intel way).
But Intel has already threatened MS with a lawsuit if it tries to emulate x86 for its ARM products. Meanwhile, when Apple switched from PowerPC to x86, part of the agreement was a share of patents which gave Intel access to the multimedia extensions of the PowerPC (AltiVec?). If this "share" went both ways then Apple might be able to provide x86 translation on their SoC. NOTE: there is no evidence of this anywhere other than the fact that MS was blocked when working on Windows 10 for ARM, and the Apple-Intel agreements.
If this does happen I think Intel will not sweat it because it will be only Apple. Apple has no interest in selling CPUs. They want to be able to make severe changes (cut the fat) between revisions and not have people crying about having to update their architecture to support it.
IANAL, but it's been argued here that the x86_64 patents will expire soon (the specification was available in 2000, the first processor in 2003; US patents run 20 years from filing, so patents filed around then lapse around 2020-2023). Probably neither Apple nor MS will be blocked; MS had a problem earlier since it decided to release while the patents were still in force, rather than wait.
As a founder of ARM, Apple is grandfathered into a "perpetual architecture license," clear sailing unless nVidia deprecated ARM in favor of something completely new (as I understand it), which seems unlikely to say the least.
Apple was a founding member of AIM (Apple, IBM, Motorola) for PowerPC. (As rumour has it, after DEC refused to go for a higher-volume lower-margin Alpha AXP derivative when Apple came knocking for an m68k replacement, Apple then asked IBM for a higher-volume lower-margin POWER derivative, leading to AIM and PowerPC.) As far as I know, only Acorn had a role in founding/spinning off ARM.
On the other hand, I would be very surprised if Apple wasn't smart enough to get a very-long-term/perpetual license on the ARM instruction set before investing heavily in custom core design.
"In the late 1980s, Apple Computer and VLSI Technology started working with Acorn on newer versions of the Arm core. In 1990, Acorn spun off the design team into a new company named Advanced RISC Machines Ltd."
" Apple has invested about $3 million (roughly 1.5 million pounds) for a 30% interest in the company, dubbed Advanced Risc Machines Ltd. (ARM), but the exact ownership stake of VLSI and Acorn was not disclosed. "
IIRC they used StrongARM for the last models, the Apple MessagePad 2000 and 2100. (I bought a used MP2K after the Newton was discontinued that I used in high school. Fun little device!)
Apple was part of the change from "Acorn RISC Machine" to "Advanced RISC Machines", so basically a founder of modern ARM. AIM was completely separate and later.
Apple and a bunch of others also have an "Arm architectural licence" which IIRC cannot be taken back.
Nvidia hardware is great but my biggest concern is that this would eventually spell the end for Raspberry Pis and other good things that are ARM. That would not be cool.
Designs of future ARM versions could end up being skewed toward NVIDIA's needs, or worse, designed (or information withheld) in ways that kill other embedded products and monopolize the embedded market.
They could theoretically simply stop licensing ARM to Broadcom, although that might invoke some anti-trust suits.
And I'd hardly call Intel focused on one use-case. They seem to have their hands in all sorts of random-as-heck product lines like IoT devices and such, most of which rarely see much long-term support.
The random stuff at Intel are usually little experiments to see how complementary tech drives Xeon sales. When they pitched IoT stuff, the real "sale" was to get Intel-based edge devices in place to herd the IoT. And to gather "intel" about what customers are planning around opportunities/threats like 5G.
I think that effort in particular (about 5 years ago for me) was a warning sign about Intel -- they didn't internalize that your smart water meter would very soon have a chip beefy enough to connect to a cellular or other long-range network and just hit a cloud endpoint.
Isn't this a monopoly? Doesn't seem very legal. If the future is so dependent on GPU compute, and there are only 2 companies (AMD/NVIDIA) making them... isn't it insane to only have 2 companies the entire world depends upon?
Edit: ok Duopoly, but still... kinda insane that only 2 companies in the world do it.
Part of it is the software world's fault as well. Almost the entire open source machine learning ecosystem is written for NVIDIA's CUDA and nothing else.
And almost all competent non-CUDA platforms (e.g. Google TPUs, Tesla's secret in-house hardware) haven't been open-sourced, or even sold to consumers, which further enables the NVIDIA monopoly.
It’s really hard to call a duopoly much better than a monopoly. Sure you can probably identify points, but neither outcome is anywhere near efficient. I don’t know that it should necessarily be illegal. But it is market concentration.
But isn't a duopoly, by definition, two companies not competing with each other? I might be wrong, but I remember it being about the two companies making deals with each other to keep prices high and not innovate or improve, like ISPs in the US.
Anticonsumer practices really need a test of the impact on consumers, similar to how hiring practices are tested. There is a give and take in most broadband suppliers, for example, where there is no written agreement to not compete but they don't compete.
This would be bad. Not because of the CPU business - I think RISC V will eventually make that irrelevant. Once CPUs are open source commodities, the next big thing is GPUs. This merger will eliminate a GPU maker, and one that licenses the IP at that.
I think you are confused about what RISC-V is and how these things work.
RISC-V is just an ISA.
You will have open source RISC-V microarchitectures for processes that are a few generations old. You can use the same design for a long time when performance is not so important.
You will not get open source optimized high performance microarchitectures for the latest process and large volumes. These cost $100s of millions to design and the work is repeated every few years for a new process. Every design is closely optimized for the latest fab technology. They have patents.
Intel, AMD, Nvidia, ARM, all have to design new microarchitectures every few years.
It's not just doing some VHDL design. It involves research, pathfinding, large scale design, simulation, verification and working closely with fab business. The software alone costs millions and part of it is redesigned. Timelines are tight or the design becomes outdated over time.
"Building" for the latest process and large volumes is another story, but as far as I can see, large scale logic design is something not _that_ far away from software. Large scale, open source, and performant software designs exist in the wild. (see Linux, llvm, ...)
Why wouldn't we get a logic netlist which could perform reasonably well when placed on silicon by people who know what they are doing? (Yeah, lots of handwave.) I'm asking this out of curiosity. Not an expert in the field by any means.
SonicBOOM is an open source RISC-V core that is clock-for-clock competitive with the best from ARM. There is still the issue of matching clock speed and tweaking for a given process.
Foundries actually have an interest in helping to optimize an open core for their process as a selling point since it can be reused by multiple customers.
Their own paper[0] shows it performing at similar levels to the A72, a four year old core. Those are obviously really impressive results for an open source core.
Agreed, this reminds me of the AMD acquisition of ATI back in 2006. It almost killed AMD, and now Nvidia is thinking about doing the same? Technically the reverse, since Nvidia is the GPU company acquiring a CPU company.
At the same time, Nvidia may be trying to hedge its future as its other competitors (Intel, AMD, even Apple) all have their own CPU and GPU designs. The animosity with Apple has shown the power dynamics and the high stakes.
Apple's CPUs are still ARM64 cores at heart, even if heavily modified.
A thought I've been kicking around, though I fully understand it to be incredibly unlikely: NVIDIA could simply terminate any license Apple has to use ARM at all. The move would arguably be done out of pure spite, "payback" for the GeForce 8600M issues that cost them ~$200 million and $3 billion in market cap, in part due to Apple pushing for a recall.
Apple also seemingly pushed Nvidia utterly out the door, going as far as to completely block NVIDIA from providing drivers for users to use their products even in "eGPU" external enclosures on newer versions of macOS. Even if only a minority of Apple users ever bought Nvidia cards, being completely banned from an OEM's entire lineup would likely ruffle feathers.
Apple has an "ARM Architecture License". They have a custom implementation that's very different from the reference design (which is why they're miles ahead of other ARM CPU makers). I'm sure the license has contractual obligations that protects Apple. In short, Apple is in no danger, most likely.
You're swallowing the poison pill with pride. The reason why no ARM customer actually wants to buy ARM is because there is almost no way to make money off of it. If you cancel licenses ARM is going to implode and $32 billion worth of value locked up in ARM will vanish into thin air. If you want to extract value out of ARM you'll have to play nice and that is definitively not worth $32 billion either.
You have to license the ARM ISA as well, you can't just freely implement it. IANAL, but as far as US law is concerned, I don't think you're violating copyright by cloning an ISA, but if that ISA is covered by any patents, you'd be fairly screwed without licensing. ARM definitely considers there to be enforceable patents on their ISA (just follow their aggressive assault on any open source HDL projects that are ARM compatible).
If Nvidia bought ARM and decided to find some legal way to terminate the contract with Apple out of spite, Apple would have to find another ISA for their "Apple Silicon".
ARM has previously shut down open source software emulation of parts of the ARM ISA.
For a while QEMU could not implement ARMv7, I think, until ARM changed their mind and started permitting it. There was an open source processor design on opencores.org that got pulled too.
The reasoning was something like "to implement these instructions you must have read the ARM documentation, which is only available under a license which prohibits certain things in exchange for reading it".
Theoretically Apple could probably outbid Nvidia, but realistically regulators would never let Apple make that purchase so it's irrelevant that they have the cash to do so.
I'm not particularly familiar with US antitrust law specifics...
One big point that came up in the congressional hearing the other day was how Google, when buying DoubleClick, said (under oath to congress) that not only would they not merge data but that they legally couldn't if they wanted to - and then years later did just that.
Is there any way to acquire in such a way that Apple would own ARM but there'd be a complete firewall between them, with ARM having a separate board, CEO, etc. and nothing between them except the technical ownership (and any contracts between the two companies)?
I hope not, as I'm in favour of breaking up huge companies myself... but if legal firms can have a system in place to firewall information between clients, I don't see why a similar legal arrangement couldn't be feasible, allowing Apple to buy on the condition that they have no say over operations, with selling ARM being their only way of influencing the company in any way.
Of course, owning a company where you have no control at all isn't great, but in this case it might be worth it if Apple trusts ARM to keep doing well without Apple's help, and if it would prevent someone like NVIDIA from shutting Apple out.
And would there be any antitrust issues if Apple bought a 500 year license to freely (or at a pre-set calculation of pricing) use any and all current and future ARM designs?
I should have read the article before commenting! But I guess that means Apple aren't worried, either because their contracts are solid enough and/or they already have backup plans ready that are cheaper than buying ARM.
(Or, of course, Apple are making a huge mistake. But seems a bit less likely to me.)
To my knowledge, "Apple Silicon" or whatever you want to call it is an ARM64 ISA, with Apple extensions.
Third parties have been able to get iOS running in an emulator for security research, and Android(!) has even been ported to the iOS devices that contain the "checkm8" Boot ROM exploit (though with things like GPU, Wi-Fi etc. in varying states of completion).
ISA is just about the instruction set that the silicon is designed to interpret. ISA compatibility does not imply any shared heritage in the silicon between Apple and Arm Ltd cores, just like AMD and Intel share no silicon design even though they both sell processors implementing the AMD64 ISA.
(The silicon implementation of an ISA is referred to as the microarchitecture, btw.)
Right, but the ARM ISA is covered by various patents, so you have to get a license for it. ARM has aggressively shut down any open efforts to make ARM compatible cores (without licenses).
In a GPU the ISA isn't decoupled from the architecture in the way it is for a post-Pentium Pro CPU. Having a fixed ISA that you couldn't change later when you wanted to make architectural changes would be something of a millstone to be carrying around for a GPU.
It's much more advantageous to be able to respin/redesign parts of the GPU for a new architecture since the user interface is at a much much higher level compared to a CPU. They basically only have to certify that it'll be API compatible at CUDA/OpenCL/Vulkan/OpenGL/DirectX level and no more. All of those APIs specify that the drivers are responsible for turning it into the hardware language, so every program is already re-compiled for any new hardware. This does lead to tiny rendering differences in the end (it shouldn't but it frequently does, due to bug fixes and rounding changes). So because they aren't required to keep that architectural similarity anymore, they're free to change as they need new features or come up with better designs (frequently to allow more SIMD/MIMD style stuff, and greater memory bandwidth utilization). I doubt they really change all that much between two generations, but they change enough that exact compatibility isn't really worth working at.
If you want to look at some historical examples where this wasn't quite the case, look at the old 3DFX VooDoo series. They did add features but they kept compatibility to the point where even up to a VooDoo 5 would work with software that only supported the VooDoo 1. (n.b. This is based on my memory of the era, i could be wrong). They had other business problems, but it meant that adding completely new features and changes in Glide (their API) was more difficult.
Of course they have ISAs; my point is that the economics of standardization around a single ISA a la RISC-V isn't as good by virtue of the way we use GPUs on today's computers. You could make a "GPU-V", but why would a manufacturer use it?
> GPUs are generally black boxes that you throw code at.
umm... what? what does that even mean? lol
I could kind of maybe begin to understand your argument from the graphics side, as users mostly interact with it at an API level, however keep in mind that shader languages work the same way "CPU languages" do. It's all still compiled to assembly, and there's no reason that you couldn't make an open instruction set for a GPU the same as for a CPU. This is especially obvious when it comes to compute workloads, as you're probably just writing "regular code".
Now, that said, would it be a good idea? I don't really see the benefit. A barebones GPU ISA would be too stripped back to do anything at all, and one with the specific accelerations needed to be useful will always want to be kept under wraps.
Just 'cause Nvidia might want to keep architectural access under wraps doesn't necessarily mean that everyone else is going to, or that they have to in order to maintain a competitive advantage. CPU architectures are public knowledge, because people need to write compilers for them, and there are still all sorts of other barriers to entry and patent protections that would allow maintaining competitive advantage through new architectural innovations. This smells less of a competitive risk and more of a cultural problem.
I'm reminded of the argument over low-level graphics APIs almost a decade ago. AMD had worked together with DICE to write a new API for their graphics cards called Mantle, while Nvidia was pushing "AZDO" techniques about how to get the best performance out of existing OpenGL 4. Low-level APIs were supposed to be too complicated for graphics programmers for too little benefit. Nvidia's idea was that we just needed to get developers onto the OpenGL happy path and then all the CPU overhead of the API would melt away.
Of course, AMD's idea won, and pretty much every modern graphics API (DX12, Metal, WebGPU) provides low-level abstractions similar to how the hardware actually works. Hell, SPIR-V is already halfway to being a GPU ISA. The reason why OpenGL became such a high-overhead API was specifically because of this idea of "oh no, we can't tell you how the magic works". Actually getting all the performance out of the hardware became harder and harder because you were programming for a device model that was obsolete 10 years ago. Hell, things like explicit multi-GPU were just flat-out impossible. "Here's the tools to be high performance on our hardware" will always beat out "stay on our magic compiler's happy path" any day of the week.
You could make a standardized GPU instruction set but why would anyone use it? We don't currently access GPUs at that level, like we do with the CPU.
It's technically possible but the economics isn't there (was my point). The cost of making a new GPU generally includes writing drivers and shader compilers anyway, so there's not much of a motivation to bother complying with a standard. It would be different if we did expose them at a lower level (i.e. if CPU were programmed with a jitted bytecode then we wouldn't see as much focus on ISA as long as the higher level semantics were preserved)
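To make that concrete, here is a rough sketch (assuming the pyopencl package and a working OpenCL driver are installed): the kernel is handed to the driver as plain source text and compiled at run time for whatever ISA the installed GPU happens to use, which is why nothing forces vendors to freeze a public instruction set.

    # Sketch: GPU code shipped as source, compiled by the driver at run time.
    import numpy as np
    import pyopencl as cl

    src = """
    __kernel void scale(__global float *buf, const float k) {
        int i = get_global_id(0);
        buf[i] = buf[i] * k;
    }
    """

    ctx = cl.create_some_context()
    queue = cl.CommandQueue(ctx)
    prog = cl.Program(ctx, src).build()   # driver compiles to the device's own ISA here

    data = np.arange(16, dtype=np.float32)
    buf = cl.Buffer(ctx, cl.mem_flags.READ_WRITE | cl.mem_flags.COPY_HOST_PTR, hostbuf=data)
    prog.scale(queue, data.shape, None, buf, np.float32(2.0))
    cl.enqueue_copy(queue, data, buf)
    print(data)   # [0, 2, 4, ...]

The same source runs unchanged on a GPU released years later with a completely different internal ISA; only the driver's compiler needs to know about it.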
SPIR-V looks like a promising standardization. It cannot be translated directly into silicon, but it doesn't have to be. Intel also essentially emulates x86 and runs RISC internally.
>Intel also essentially emulates x86 and runs RISC internally.
By that logic anything emulates its ISA because that is the definition of an ISA. An ISA is just the public interface of a processor. You are wrong about what x86 processors run internally. Several micro ops can be fused into a single complex one which is something that cannot be described with a term from the 60s. Come on, let the RISC corpse rot in peace. It's long overdue.
I've been studying and blogging about GPU compute for a while, and can confidently assert that GPUs are in fact astonishingly complicated. As evidence, I cite Volume 7 of the Intel Kaby Lake GPU programmers manual:
That's almost 1000 pages, and one of 16 volumes, it just happens to be the one most relevant for programmers. If this is your idea of "simple," I'd really like to see your idea of a complex chip.
The most complex circuit on the GPU would be the thing that chops up the incoming command stream and turns it into something the matrix multipliers can work on.
I get the feeling you're only really thinking about machine learning style workloads. Your statement doesn't seem to take into account scatter/gather logic for memory traffic (including combine logic for uniforms), resolution of bank conflicts, sorting logic for making blend operations have in-order semantics, the fine rasterizer (which is called the "crown jewels of the hardware graphics pipeline" in an Nvidia paper), etc. More to the point, these are all things that CPUs don't have to deal with.
Conversely, there is a lot of logic on a modern CPU to extract parallelism from a single thread, stuff like register renaming, scoreboards for out of order execution, and highly sophisticated branch prediction units. I get the feeling this is the main stuff you're talking about. But this source of complexity does not dramatically outweigh the GPU-specific complexity I cited above.
No, there is hardware for it, and it makes a big difference. Ballpark 2x, but it can be more or less depending on the details of the workload (ie shader complexity).
One way to get an empirical handle on this question is to write a rasterization pipeline entirely in software and run it in GPU compute. The classic Laine and Karras paper does exactly that:
An intriguing thought experiment is to imagine a stripped-down, highly simplified GPU that is much more of a highly parallel CPU than a traditional graphics architecture. This is, to some extent, what Tim Sweeney was talking about (11 years ago now!) in his provocative talk "The end of the GPU roadmap". My personal sense is that such a thing would indeed be possible but would be a performance regression on the order of 2x, which would not fly in today's competitive world. But if one were trying to spin up a GPU effort from scratch (say, motivated by national independence more than cost/performance competitiveness), it would be an interesting place to start.
The host interface has to be one of the simplest parts of the system, and I mean no disrespect to the fine engineers who work on that. Even the various internal task schedulers look more complex to me.
If you don't have insider's knowledge of how these things are made, I suggest using less certain language.
Just because a big part of the chip is the shading units, it doesn't mean it's simple or that there's no space for sophistication. Have you even been following the recent advancements in GPUs?
There is a lot of space for absolutely everything to improve. Especially now that Ray Tracing is a possibility and it uses the GPU in a very different way compared to old rasterization. Expect to see a whole lot of new instructions in the next years.
> 90%+ of the core logic area (stuff that is not i/o, power, memory, or clock distribution) on the GPU are very basic matrix multipliers.
>All best possible arithmetic circuits, multipliers, dividers, etc. are public knowledge.
Combine these 2 statements and most GPUs would have roughly identical performance characteristics (performance/Watt, performance/mm2, etc)
And yet, you see that both AMD and Nvidia GPUs (but especially the latter) have seen massive changes in architecture and performance.
As for the 90% number itself: look at any modern GPU die shot and you'll see that 40% is dedicated just to moving data in and out of the chip. Memory controllers, L2 caches, raster functions, geometry handling, crossbars, ...
And within the remaining 60%, there are large amounts of caches, texture units, instruction decoders etc.
The pure math portions, the ALUs, are but a small part of the whole thing.
I don't know enough about the very low level details of CPUs and GPUs to judge which ones are more complex, but against the claim that there's no space for sophistication, I can at least confidently say that I know much more than you.
Matrix multipliers? As in those tensor cores that are only used by convolutional neural networks? Aren't you forgetting something? Like the entire rest of the GPU? You're looking at this from an extremely narrow machine learning focused point of view.
Is that last sentence provable? If so, that's an impressively-strong statement (to state that the provably-most-efficient mathematical-computation circuit designs are known).
RISC-V is in danger of imploding from its near-infinite flexibility.
It is driven largely by academics who lack a pragmatic drive in areas of time-to-market, and it is being explored by companies for profit motives only. NXP, NVIDIA, Western Digital see it as a way to cut costs due to Arm license fees.
RISC-V smells like Transmeta. I lived through that hype machine.
Transmeta as in a company that was strong armed by Intel in a court case Transmeta could have won if they hadn't run out of cash and time (and they ran out of those things because of that court case)?
Transmeta was never a viable solution: it was a pet project billed as a disrupter on the basis of it being a dynamic architecture. Do I need to explain the mountain of issues with their primary objectives or do you want to google that?
"MIPS Open" came after the RISC-V announcement, and is still currently somewhat of a joke. Half the links on the MIPS Open site are dead.
I think one of the major points for RISC-V was to avoid the possibility of patent encumbrance of the ISA so that it can be freely used for educational purposes. My computer architecture courses 5-6 years ago used MIPS I heavily. MIPS was not open at the time, but any patents for the MIPS-I ISA had long since expired.
POWER is actually open, but it is tremendously more complicated. RISC-V by comparison feels like it borrows heavily from the early MIPS ISAs, just with a relaxation of the fixed sized instructions and no architectural delay slots and a commitment to an extensible ISA (MIPS had the coprocessor interface, but I digress).
The following is my own experience - while obviously high performance CPU cores are the product of intelligent multi-person teams and many resources, I believe RISC-V is simple enough that a college student or two could implement a compliant RV32I core in an HDL course if they knew anything about computer architecture. It wouldn't be a peak performance design by any measure (if it was they should be hired by a CPU company), but I think that's actually a point of RISC-V as an educational platform AND a platform for production CPU cores.
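To give a flavour of how small the base ISA is, here's a toy decode-and-execute function for a single RV32I instruction (ADDI), purely as an illustration; the field offsets come from the published spec, and this is obviously nothing like a real core:

    # Toy RV32I decode/execute for one I-type instruction (ADDI).
    def sign_extend(value, bits):
        mask = 1 << (bits - 1)
        return (value ^ mask) - mask

    def execute_addi(instr, regs):
        opcode = instr & 0x7F
        rd     = (instr >> 7)  & 0x1F
        funct3 = (instr >> 12) & 0x7
        rs1    = (instr >> 15) & 0x1F
        imm    = sign_extend(instr >> 20, 12)      # bits 31:20, sign-extended
        assert opcode == 0b0010011 and funct3 == 0b000, "not an ADDI"
        if rd != 0:                                # x0 is hard-wired to zero
            regs[rd] = (regs[rs1] + imm) & 0xFFFFFFFF
        return regs

    regs = [0] * 32
    execute_addi(0x00500093, regs)                 # addi x1, x0, 5
    print(regs[1])                                 # 5

A real pipeline, caches and an MMU are where the actual work is, which is the point: the base ISA itself is the easy part.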
As a teaching tool RISC-V is clearly great, as it is for companies that want to add custom instructions to their microcontrollers like NVidia or WD. But if I was looking to design a core to run user applications then to me it looks like everything is stacked in favor of Power. The complexity of the ISA is dwarfed by the complexity of a performant superscalar architecture. And to be performant in RISC-V you'd probably be needing extensive instruction fusion and variable length instructions anyways further equalizing things. And you really need the B extension which hasn't been standardized yet. Plus binary compatibility is a big concern on application cores and ISA extensions get in the way of that.
The MIPS Open ISA project was actually shut down after only a year. [^1]
POWER has technically only been "open" for a little under a year. OpenPOWER was always a thing, but this used to mean that companies could participate in the ISA development process and then pay license fees to build a chip. This changed last year when POWER went royalty-free (like RISC-V).
The real definition of "open" is whether you can answer the following questions in the negative:
- Do I need to pay someone for a license to implement the ISA?
- Is the ISA patent-encumbered?
RISC-V was the only game in town for a long time and thereby attracted large companies (including NVIDIA) and startups that were interested in building their own microprocessors, but didn't want to pay license fees or get sued into oblivion.
Because they aren’t as open in reality as they sounded when announced.
The advantage of those two (and of ARM) is that there are actual implementations with decades of development behind them. Yes, some technical debt, but also many painfully-learned good decisions.
RISC-V, which I'm really excited about, you can think of as a PRD (a product requirements document from the customer's perspective). That's what an ISA is. Each team builds to meet it using their own implementation, none of which is more than a few years old yet. But the teams incorporate decades of learning, and have largely a blank sheet to start with. I think it will be great... but isn't yet.
No, if anything it's overvalued. Previous year's profit was just over $400m. At a 10x multiple that's a $4bn price tag. If you go the year before it was over $600m, which at a 10x multiple is $6bn. Even a crazy 30x multiple would only be $18bn. Softbank massively overpaid, which is why Softbank is SELLING Arm and other companies, because they racked up massive debt.
Interest rates are at a record low. AMD is at a P/E ratio of like 100; the average P/E ratio today is around 15 IIRC. With growth expectations increasing because of Apple's adoption and all, plus record low interest rates, a 60x multiple is not insane.
That was a mistype, thanks for catching that. I meant 25, but I haven't kept up with the last quarter so I have no idea what it looks like with all the recent insanity, so that might be pretty far off as well
This is more from the perspective of increasing ARM adoption industry-wide, as people begin writing code on and optimizing code for ARM machines. It could trickle over into increased adoption by other companies and other sectors.
Nvidia's own P/E ratio is around 80, it looks like. It's not actually that insane. Overvalued? Maybe. But all this is saying is that, basically, you think your current required rate of return minus your expected growth rate for ARM is around 1.7%, and your required rate of return is going to be pretty low right now because interest rates are so low. You value future earnings more and you think it's going to grow a lot. You might be wrong, and for a company like AMD with a P/E ratio above a hundred you might be betting on an awful lot of growth happening, but in this environment these are not "insane" numbers.
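For what it's worth, the back-of-the-envelope arithmetic behind those numbers (using the figures quoted in this thread, not official filings):

    # Rough valuation arithmetic, using numbers quoted in this thread.
    price  = 32e9        # rumoured deal size
    profit = 0.4e9       # "just over $400m" annual profit mentioned above

    print(f"implied earnings multiple: {price / profit:.0f}x")   # ~80x

    # Crude perpetuity rule of thumb: P/E ~ 1 / (r - g), so a given multiple
    # implies a spread between required return and expected growth of:
    for pe in (10, 30, 60, 80):
        print(f"P/E {pe:>3}x  ->  r - g ~ {100 / pe:.1f}%")

A 60x multiple corresponds to r - g of about 1.7%, which is where that figure comes from; whether that spread is plausible for ARM is the whole argument.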
What you brought up above in this thread is how expensive the acquisition looks as a multiple of earnings. That multiplier is the price of the company vs the earnings for the company, which is literally just a simplified P/E ratio that ignores dilution etc. You may be thinking of revenue multiple based valuation instead?
Neither is a great indicator for companies that are experiencing significant growth. ARM Holdings has nearly tripled revenue over the past decade. So you have to weight future growth, as well as potential future market opportunities.
ARM + nVidia can be a powerhouse combo, especially in the cloud/server market.
NVidia is already an ARM licensee. What does owning ARM holdings give them besides a massive amount of debt? They could license everything ARM owns for decades for less than this buyout would cost.
If I sell 100 lemonades per day and can't pay my rent, yet my neighbor sells 50 lemonades but manages the business in a way to pay his rent and a salary, how can my business possibly be more valuable? It's a silly example but I don't think it's far fetched. Similarly, Apple is worth far more than 20% of the phone market despite having a grasp on only 20% (give or take, I don't recall the exact number) market shares. There's a reason companies aren't valued based on revenue.
But it's definitely not reflected in any financial metrics. If you own Walmart for a year, you'll make around 4 billion, while if you own Google, you'll make around 7. And Google is likely going to be growing faster than Walmart.
If you look at the products these chips are in, you can see the breakdown: the cost of the IP license from ARM, the price to make that chip, the price that chip sold for in that product, let alone the final product price. The margins are thinnest in ARM's area, and for each chip they sell, the others down the line have larger margins. I don't have TSMC's production costs and what they charge at hand, but I'd hazard that what TSMC makes per chip is possibly more than what ARM makes per chip.
Maybe that's why Softbank wants to raise those licence costs. But with alternatives now around, it's not as easy an equation as just raising those costs: for many controllers RISC-V has become more than fine, and some HD manufacturers are already transitioning for a few cents of extra savings at current prices, let alone if the ARM licence fees increased.
Sure, ARM is here, there and everywhere, but it got that way as much on the cheap cost of using that IP as on the array of IP packaged up. So yes, they make money, but if you break down how they make money, ARM is regular/reliable, hence fewer spikes in income either way.
Logically, for ARM to increase revenue, it would need to branch out into other markets. Nvidia honestly would be a good fit for that, compared to shafting customers on licensing costs, which may well see an increase anyway. But for the returns Softbank wants, ARM could not bear the level of increases they would need, and Softbank knows this - hence selling or IPOing ARM is best for Softbank and also for ARM.
> I don't have TSMC's production costs and what they charge at hand, but I'd hazard that what TSMC makes per chip is possibly more than what ARM makes per chip.
Most ARM chips made aren’t pushing 7nm ultra expensive processes - and you are probably still correct.
Why do you think ARM is undervalued? While I agree that, considering the value the ARM arch brought to the mobile world, it may seem undervalued when you compare it to the insane valuation of some BS software companies, their business model of licensing IP doesn't allow them to make mega money. Apple and others even have perpetual licenses.
Sure, if they wanted to, they could change the licensing model to bleed their customers dry and increase the share price, but that would only work temporarily, as everyone would then accelerate the movement to RISC-V.
ARM(the company, not the Arch) has peaked and will probably just stagnate for the foreseeable future.
I just find it hard to believe that Softbank would settle for $1B in profit for a highly successful business. They may have overpaid themselves, or ARM is losing them money, but why would Nvidia want to pay $32B for a money-losing business?
They probably have no idea how to hype up ARM. It's like I bought out Apple Store in NY in cash but have no idea what to do with it other than to admire it
Also keep in mind that Softbank has needed money to pay off their large debt load, which probably reduces the amount of leverage they have in negotiations.
I wonder if it would be possible for a consortium of companies like Apple, Microsoft, and Google to swoop in and outbid Nvidia? All of them rely on customization agreements with ARM, and Nvidia, being a chip maker, would be competition. And a consortium like that would allow the members to keep their changes to themselves, so whatever Qualcomm and Microsoft are doing with the SQ1, or Apple with their Silicon stuff, wouldn't have to be shared with all consortium members -- or with a competing chip maker like Nvidia.
Apple has a perpetual license on the ARM instruction set, so they just pay a royalty per chip. Apple owning ARM would be a massive problem for regulators, as they'd be the sole source of chips to their competition.
MS and Google don't make chips, and MS doesn't even license any ARM tech (I don't think, they just make software and buy CPUs, they don't make any). They're a bad fit. Google's dipping their toes in it, but I don't think they're doing fully custom silicon. Most companies buy the rights and layouts, and tweak those layouts.
NVidia makes chips, but they don't make many mobile chips, and virtually no chips for phones compared to Samsung and Qualcomm. They're a better fit.
MS designed and produced custom silicon for HoloLens 2 (the Holographic Processing Unit 2.0) [0][1]. The Microsoft SQ1 in Surface Pro X is probably also produced by them, though the design is in collaboration with Qualcomm.
Nvidia might be a chip maker, but they aren't really a competing chip maker. The only real competitor to Nvidia is AMD, which does not use ARM IP in any significant amount that I know of.
Nvidia might be tempted to block ARM customers if they had competing designs to push them to, but they don't. So they would be sacrificing revenue without any other sales to make up for it. It doesn't seem like a good idea to me.
This would be a different story if a company like Apple were buying ARM. They could definitely benefit by gouging Samsung and Google on license fees but they've got to balance that with the chance of getting fined for anti-competitive practices.
So much like a shared patent pool, which in effect does partially describe ARM.
But when those companies can get access to those patents at more accountant-friendly costs, without having to balance the risk/asset/management aspects of owning part of that slice, then the motivation and corporate culture that would be needed to drive such an investment is just not there.
That's why having at least three "heavies" as part of the consortium will enforce good behavior or at least keep it from getting out of control. Well-armed (legal teams) companies are polite companies... (except Oracle, but they are mainly a litigation firm that also sells databases).
True, yet the incentive is just not there business-wise over what they have now, and it would be a hard sell to the board: what advantages does it give them?
It actually makes more sense for some governments to buy ARM than for many companies, so at least we can be thankful (so far) that avenue has not transpired. It might even be best overall if ARM was partially IPO'd. If Softbank sold a 50% stake via IPO, I dare say that way of selling the company would yield the best and quickest return - the best of both worlds - and I'd still not rule that option out. Indeed, mooted interest from many large companies sniffing around as a prelude to an IPO would not be unheard of.
I too share your aversion towards Oracle's business practices.
I think you'd have significant regulatory back-pressure preventing Microsoft, Google, and Apple - between them they effectively own all the operating systems and app stores of the world. There's a real problem here that will prevent competition from outside (QCOM & Samsung are two massive ARM chip designers for mobile, there's a bunch more in the embedded space). It's important to look at the risks and not just the upsides as there are both.
Apple has never shown much interest in B2B-type licensing businesses that aren't 100% Apple, or in potentially owning something their competitors need, with all the legal complications that brings.
Meanwhile Apple's home grown chip initiatives are mature, not sure they need much from ARM.
Why is ARM selling itself? Maybe I am missing something here, but if Qualcomm can generate the bulk of their revenues through licensing fees, shouldn't ARM be bringing in more? It feels like going public would be a profitable route for their investors. I wonder why they aren't opting for that.
And what would that argument be? That the increase in unemployment leads to more new businesses being founded that can somehow miraculously afford luxury(-priced) offices?
That a lot of businesses are deciding they don't need as much full-time office space as it seemed like last year. Long term leases are expensive. On-demand, short-term leases are looking more attractive.
Wireless technology is a patent minefield. CPUs generally aren't because the techniques for internals are old, and internal design, manufacturing, and especially validation (where intel fell flat lately) methodologies dominate the costs and quality.
For much of ARM's recent volume use in MCUs and slightly larger embedded devices, there is credible threat from first party usage of RISC-V (see WD, Nvidia).
Access to the ISA itself can be of high value; see x86 and s390x for prime examples. Although I don't really see how ARM could pull that off outside of being an acquisition like this, and making the licensing process onerous enough that people move to buying chips from nvidia instead of doing their own designs. In such a scenario, RISC-V can become a credible threat to phones too, and the server thing pundits keep pushing for the past 15 years never happens.
So there is a lot of value here, but it's pretty hard to grow as a pure licensing play as ARM has been since there are many risks and opportunities for price compression.
ARM isn’t selling itself. SoftBank bought ARM several years ago. I would imagine all the cash they hemorrhaged on wework has finally forced their hand.
I think it's for the datacenter. They have already purchased Mellanox. In many HPC applications, having fast GPUs and fast interconnects is 99% of what you need. A reasonably competent CPU architecture would round this out, and I don't think RISC-V is there yet.
ARM have also made some HPC plays, e.g. buying Allinea (probably the best supercomputer debugging tool).
ARM would become owned by an American company. It has to comply with American restrictions on dealing with China but that would make it essentially no different from any other American company, either legally or in fact.
So in the current context this would be bound to raise eyebrows in Beijing, and China could only react by doubling-down on developing domestic alternatives.
Yep, the United States government gaining the ability to directly block the licensing of ARM reference designs from companies like Huawei's HiSilicon (and the fabless chip designers such as Rockchip) is a VERY big deal. It's a very different situation to the US government having to pressure Japan's SoftBank, UK's ARM Holdings (or their respective governments).
I'm kind of hoping this trend will push investment into something open like RISC-V. Up-and-coming countries should definitely be thinking: if the US targets China today, it could be them next.
Can anyone explain what is going on between Apple and Nvidia? Seems Apple will not add Nvidia hardware to its machines no matter what. What’s the back story to that?
And where this is related is I wonder if Apple will have to relent (assuming the purchase goes through) and do business with Nvidia since it licenses some technology from ARM. Or, have I got it wrong here? Does Apple not rely on ARM?
NVIDIA really ratfucked them with the GPUs in MacBook Pros maybe a decade ago. They started failing like crazy, NVIDIA said they fixed it, they hadn’t, and then when things went to litigation NVIDIA blamed Apple for the GPU failures. Apple ran a giant recall and swore off using NVIDIA ever again.
A little tidbit, and I of course don't have any proof to back this up, but: for the eligible Mac models to qualify for a free MLB (main logic board) replacement from Apple, the computers HAD to fail a specific diagnostic (called VST - video systems test). The repair couldn't be classified as "covered" unless there was a record of this test failing. I'm also certain that privately nvidia agreed to cover the cost (or some of the cost) of these repairs, but stipulated they would only cover the computers that failed the test. Most computers did fail the test, but I definitely saw some machines that absolutely were experiencing GPU issues but wouldn't have been eligible (most of the time Apple did the right thing and paid for it out of pocket).
Sorry for rambling! Just thought it was an interesting tidbit
Google "macbook faulty nvidia". e.g. [1] references to "arrogance and bluster" from Nvidia.
My personal experience: I had a 2011 Macbook Pro with an nvidia card. It started to fail randomly. Apple identified that certain nvidia GPUs were failing and created a test the "Geniuses" would run. My Macbook always passed, even though it kept throwing noise on the screen anywhere other than the Apple store. Eventually it finally failed their test: Four days after the (extended) warranty period. They refused to replace it. Bitterly, the best option for me was to pay the $800 for a new board.
Wow. I have had Apple do me right for free repairs out of warranty (and past AppleCare expiration) but several things applied: 1) it was in California, where retail culture is less combative about helping the customer; 2) the laptop did have AppleCare; 3) it was at the Palo Alto store, which tends to have very well educated employees, with cross flow between the employee pool and corporate; 4) the laptop was purchased at an Apple store, which inexplicably gives them more leeway (well, possibly due to their certainty it was always in Apple's hands prior to direct sale to the customer).
For what it's worth, I had almost exactly the same experience with the same 2011 MacBook as 'lowbloodsugar'. From my point of view, it was clearly starting to fail while still in AppleCare warranty (snow on the screen when doing anything graphics intensive), but somehow it passed their 'tests'.
Shortly after the warranty ended, the Nvidia card failed for good. On the bright side, it was a dual-video system, and I was able to keep it running a few years longer through complicated booting rituals that convinced it to boot with the lower power Intel graphics (although I lost the ability to recover from sleep).
I'm not sure why we have such different experiences with Apple customer service. I was also in the Bay Area, but not Palo Alto. It's possible that with a more aggressive approach I could have gotten them to fix it. Instead, I accepted their verdict, and now spread the word through forums like this that their warranties are not to be trusted.
I'm worried that you are right, and that I had an AMD card as well. I'm unsure, as the focus of my disappointment is Apple, rather than the manufacturer of the card.
That's a super, super shitty experience. I made a comment somewhere else in this thread outlining my theory that Apple were so strict with the test because part of the financial arrangement with nvidia stipulated the machine had to fail a diagnostic they provided or contributed to.
Still, as someone who used to work as a "genius", it would have been easy to cover your MLB repair under warranty for a million reasons, so the technician failed you there, and then to not do anything for you 4 days outside of eligibility, especially based on your experience... that's fucked. Not all "genius" bar people are like that. If something similar happens in future, call AppleCare and ask to speak to "tier 2". They sometimes have leeway to issue coverage in extenuating circumstances.
I think people are overlooking a worse outcome. Chinese companies have already bought the Asian subsidiary of ARM, if China bought ARM who knows how they would retaliate with it given the current attacks on Huawei. They might use it as leverage.
Realistically we need an open, unowned architecture like RISC-V, because whoever buys ARM will cause concern given how hypercompetitive mobile is; the incentive to abuse the ownership is high.
We really want to avoid another Oracle/Java scenario as well.
Looking at how Qualcomm has a virtual monopoly on mobile processors, Nvidia buying ARM and getting back into the mobile space might be a good thing. Though I think this is more a play for desktop processors: Intel is getting into the discrete GPU market and AMD already is there. Every year integrated GPUs get better at giving good-enough performance, so the market for discrete GPUs will shrink; in the end Nvidia might need its own CPU to stay relevant.
Oh no, look what you have done, WeWork. Who would have thought that rich kids burning VC money, who were themselves running an elaborate Ponzi scheme, would end up as a threat to the democracy of computing.
Of course the U.S. Govt. wouldn't have problems with this, unlike the Broadcom-Qualcomm deal; on the contrary, this will put the American semiconductor industry in a more dominant position.
RISC-V is the only hope for the rest of the world now.
This isn't a bad move strategy-wise. If the Apple performance is as awesome as the rumours and hype suggest, there are going to be a lot of people looking for new ARM chips, with no player obviously ahead. nVidia has looked a couple of times at building their own x86 core, and ARM cores may be a better bet.
At this point - AMD, Intel, Apple are all looking at fully integrated APU/CPU/GPU stacks. That leaves NV out in the cold if they don't do something.
1) There are many domains that don’t care about single threaded CPU performance or CPU performance at all. I would say gaming is becoming one such domain
2) This might soon change if Microsoft and Apple succeed in their quest to push ARM machines to the consumer market. I can imagine that an ARM MacBook is all that's needed to start an avalanche in adoption.
This might be a lucky break for Intel. If NVidia buys ARM, this might slow the ARM ecosystem's intrusion on Wintel. As others have noted, NVidia will probably go and develop a data center CPU to complement their other offerings. Qualcomm already tried that and failed though [0], so NVidia's effort may meet the same fate several years from now. Regardless, Intel would get a little more time to work through its process problems.
There is no easy market for $32B of any stock. You cannot just make a sell order for that quantity and expect that order to be filled.
The stock market is also there for ARM stock. There is no need to trade for nvidia shares at an undesirable price if there is no immediate cash in it.
Nvidia could potentially raise money from an FPO or similar instruments and leverage the advantageous stock price they have, i.e. issue fewer new shares than they would have to at a lower price, but the only way SoftBank will consider a deal is if they get cash.
There actually is. For example, at the last annual Berkshire shareholders meeting, Buffett talked about how easy it was for them to unload billions worth of airline stock, 100% of their holding.
On a typical single day, $4B worth of NVIDIA stock is traded.
Wonder how much of this is pushed by US policy to secure IC dominance, like getting TSMC to build a US fab. Barr was floating the idea of getting Cisco to acquire Ericsson for 5G; it's not a stretch to also ask Nvidia to buy ARM.
This makes no sense. nVidia is already transitioning away from ARM and shipping RISC-V controllers in their GPUs. They already gave up on Denver, their high perf ARM server chip. Why would they buy ARM?
Pure uneducated guesses from me: maybe the reason for giving up on Denver would no longer be a reason if they owned ARM. Or maybe they see a future use of ARM's tech that it will benefit them to own, or they are scared of competition from ARM (or from other companies working with ARM) in a current or future area. Or just that they believe ARM is a solid business that will make money whoever owns it.
But I don't have a clue if it's a good idea or not.
They're using RISC-V for their micro controllers. I don't think we have any reason to believe they won't stick with ARM for application cores. And they've tended to ping-pong between Denver cores and standard ARM cores so I wouldn't entirely write Denver off.
Controlling ARM Holdings, the company, is a very different value prop than a single ARM chip.
When they made those decisions ARM wasn't available to buy; SoftBank was on top of the world and didn't have to sell anything. A year back, do you think SoftBank would have considered a proposal so close to their purchase price?
I can see why Apple wouldn't want to buy ARM since that would put them in very delicate position, but I'd be surprised if Intel or AMD weren't trying to buy ARM as well.
This acquisition is in US interests and can be supported by the US government. They are trying to stop the silicon industry from leaking abroad (the famous CHIPS Act), and to block Huawei and its partners.
I hope that this ends up giving Nvidia enough leverage over Qualcomm that they can start putting Tegra chips in smartphones again without cellular radio patent disputes.
People in this thread discuss what Nvidia would do with ARM after they buy it, but I'm honestly not even sure anyone could buy ARM at all. The article barely even mentions the anti-competitive nightmare that it would probably become, and if Nvidia agreed to let everyone else keep buying licenses and designs as they do now, then you have to ask whether there is any reason for them to buy ARM at all.
Antitrust nowadays is just a buzzword politicians throw around every couple years in reference to household names like FAANG in order to get reelected. Unless your average grandma knows the name Nvidia, don't count on those in power to even pretend to do something about this.
Why? From where I sit there will still be plenty of (or at least no less) competition in the CPU/GPU space--especially considering the fact that ARM itself is not a semiconductor manufacturer.
That's a lot of money and must be very tempting.
Let's hope that ARM doesn't get bullied into biting the apple.
Nvidia sees the writing on the wall with GPUs and knows it must acquire the future king of CPUs to survive.
Is nvidia seriously going to get into the ARM business? Are they going to grok the business of embedded development and microcontrollers? I can't help but think that this would mean the end of ARM as we know it.
The Arm people would still be there running the company, it's not like they all get fired and overnight it's Nvidia people with no experience doing it.
In ML GPUs they are a monopoly, but man, they do a great job providing support for TF and PyTorch where AMD hasn't invested. So in the cloud Nvidia is the only way to go.
Well, nvidia needs to create an operating system now, and we could end up with three ecosystems competitive with each other. Good for consumers, I say.
Apple is not fundamentally against the use of nVidia products. nVidia refused to implement the Metal APIs knowing full well that in a few years they'd be replaced by Apple's own silicon. Why spend 1-2 years implementing a feature that will be legacy 1-2 years after it ships?
> Apple is not fundamentally against the use of nVidia products.
Are you sure about that? I don't know this story about Metal, but I believe Apple has refused to certify an nVidia driver on Mac since like 2018 (I think), effectively cutting Apple users off from using nVidia products on Apple platforms.
My theory on why Apple prefers AMD follows from Nvidia's main advantage over AMD being software, both their drivers and CUDA.
Apple needs/wants to be in full control of their driver stack. They don't necessarily want to write it themselves, but they want tighter control than Nvidia is willing to give them. AMD on the other hand is much happier to offload responsibility and accept help.
I don't think this set of circumstances would come up for an ARM processor.
Apple has an ARM architecture license from the very early days. I don't know if it's perpetual or what the terms are.
Apple doesn't use any stock ARM cores and already develops their own compiler, libraries, OS so how much do they rely on ARM, Inc?
ARM was originally Acorn RISC Machine. (I had an ARM desktop in 1988.) ARM was spun off as a separate company when Apple joined in, with a 43% stake. I imagine that when they sold it, they kept a perpetual license. [1]
Apple is currently against NVIDIA GPUs simply because they got screwed over with NVIDIA GPUs before in terms of hardware reliability and such, which prompted them to switch to AMD for GPUs. It doesn't have much to do with NVIDIA as a company and more to do with their hardware.
But when it comes to ARM chips, Apple designs their own ARM-compliant chips, so NVIDIA owning ARM would do nothing in that aspect imo. Especially since iPhone/iPad chips have been ARM-based for the past decade, so I don't think that Apple really has a good option or reason to switch away from ARM at this point.
I'm not an expert, but I thought Apple's reason for ditching Nvidia was speculated to be that their ecosystem was closed, and integrating Nvidia slowed down development and led to failures. I don't think there's an absolute opposition to using Nvidia-owned stuff, especially for ARM, where Apple licenses IP and custom designs the hardware on their own timeline.
Apple's customers don't care about NVidia -- they don't do serious computation, scientific computing. Our Hollywood clients all use PC-architecture machines with NVidia for video editing and rendering. The writers, etc, use Macs.
So Apple can save the cost and get higher profit margins by not having powerful GPUs in their products.
I know a lot of VFX and animation people that do serious rendering / ray-tracing computation and do wish they could use nVidia GPUs in their Macs. It's worth considering how, unlike writers, rendering needs machines not just for all the animators, but for the render farms too.
Apple typically has forward-looking supply chain contracts. I imagine ARM is under contract for the licensing for long enough that Apple will build out their own design team.
Already, TSMC is the manufacturer for the cores, so Apple really doesn't have too much of a dependency on ARM in the short term.
I think what we'll see over the next 3-5 years is a divergence from the ARM-licensed design to an in-house-designed architecture: "Apple Silicon II"
Apple already has a perpetual license for the Arm ISA (analogous to the API). The actual chip design has been Apple custom for several years now. Apple does not get their chip designs from Arm.
1) Nvidia may produce good-performing GPUs, but it doesn't provide open source drivers and is not very cooperative with the Linux desktop. The drivers are also quite buggy.
2) Due to (1), I believe that this acquisition might promote locked-down devices. I don't want this situation to occur.
3) This acquisition may have some effect on Apple since Apple is transitioning to ARM hardware. If so, Apple may even transition to some other hardware architecture like RISC-V or OpenPOWER once again.
Doesn't sound legit. ARM was purchased by SoftBank in 2016 for 32bn USD, and since then it might have grown more than 20% per year. Selling a growing company with bright prospects today for the same price you paid 4 years ago makes no sense, even for a struggling Softbank.
Is there a way to get rid of these paywalls? So annoying to click on a popular post with plenty of upvotes only to find it is paywall-restricted. Incognito mode works, but it's inefficient.
Wouldn't Alphabet be able to buy Arm? I think as long as it is separate from Google it should be fine. It would be the company starting with "A" in their portfolio.