How did Qualcomm snatch defeat from the jaws of victory with their Oryon SoC? (semiaccurate.com)
68 points by Tuldok on Sept 29, 2023 | 20 comments



This is one of the dumbest things I've ever seen.

To sum up:

- Qualcomm mandated that manufacturers use their power supply chips with their processor

- these power supply chips are wildly inappropriate for laptops, requiring 4-6 in parallel when a single properly spec'd chip would work

- said chips are designed for space-constrained phones, and are so small they require a different type of PCB

- those PCBs can't handle the power and need to double the layer count

- cost of PCBs becomes absolutely ludicrous at this scale

- Qualcomm refuses to allow manufacturers to use a more appropriate power supply

- manufacturers literally cannot afford the design and stopped using Qualcomm

- Qualcomm is now paying manufacturers to use their chipset, and is losing money on the deal

I don't even know how to express what an absolutely terrible idea this is. The Qualcomm chips are not fit for purpose, at all. Totally inappropriate for the application. No engineer in their right mind would go for this.

Qualcomm is mandating that manufacturers eat a fuckton of cost in raw materials, drastically reduce the efficiency and battery life of their product, and charge way more than the market would pay for an underpowered, overheating laptop with a pitiful battery life. For... Reasons?

It makes no sense at all. No one is going to use this chipset. If any devices do make it to market, their terrible performance will absolutely trash Qualcomm's reputation in this space and consumers won't buy them in the future.

Absolutely insane


A bit off topic, but it keeps surprising me how many dedicated chips cellphones have.

iPhone 15 Pro has 10 different power management chips. It has one dedicated transmitter and two receiver chips. It has two different RF front-end chips. Plus four chips for cellular modem, WiFi/BT, UWB, and NFC. And then two chips for envelope tracking and clock generation.

I have this feeling that designs in general really are multi-chip, and no one bothers engineering alternatives. Almost everyone goes out and buys the recommended power supply chip for the chip they want to use. Qualcomm not having an actual decent power supply solution & just asking people to keep adding more copies of the known power supply unit seems both all too predictable & also ridiculous. And it seems all too typical that no one, even at the very upper end, is willing to take the time & risk to engineer a better support solution of their own.

https://www.ifixit.com/Guide/iPhone+15+Pro+Max+Chip+ID/16532...


It’s interesting. A big reason why Qualcomm got into making CPUs in the first place was that they could sell them as a value add to the modem chips everyone was already using: now you can have both in one chip!

We have seen Apple try to make a modem of course, but it seems to have fizzled out again. Wonder why they never looked at WiFi/BT; it’s a lot easier, and Intel recently sold theirs IIRC.


Apple isn’t making WiFi/BT precisely because it’s not that hard. There are a number of potential companies they can partner with, so they can negotiate better rates. CPUs and modems are different: they effectively had a single supplier that would work for their requirements, which meant that they couldn’t differentiate their products from other companies and had to deal with a third-party supplier who essentially had them over a barrel at the negotiating table.

If they can pull off making their own modem, Qualcomm can’t charge them like they’re the only option even if not all iPhones use the new modem. The makers of various easier to make ICs already know they are not the only option and charge accordingly.


> Wonder why they never looked at WiFi/BT...

They have

https://www.extremetech.com/computing/142553-apple-acquires-...


> For... Reasons?

"Hey, engineer, can we bundle our chipset with something to force the buyers to buy both?"

"Well, maybe, we can use the power supply for some appli.."

"Great, starting selling a forced bundle to a notebook vendors tomorrow."

*Engineer hesitates for some seconds*

"Please, be advised what these power chips are not..."

*C-level glances at the engineer*

"Did I asked you to how to do the business?"

"Sorry. I would leave now."

"Good."

Don't forget: not only are they a business conglomerate whose top lost touch with the rest of the company decades ago, it's also a company with "never say no to a senior" in its DNA.


This all sounds so inept on Qualcomm’s part that I have trouble believing the explanation.

Qualcomm is trying to make money on the whole shebang, not on the PMICs. They’re trying to sell, for lack of a better word, a system on multiple chips. Intel and AMD do this too (see their chipsets, for example). When someone makes a laptop, the CPU vendor accounts for some of the BOM cost, and that’s their revenue. The manufacturer is willing to pay some total BOM cost at a given performance point, and they’ll buy more units at a lower total. So the CPU vendor wants the total to be as low as possible and their portion of it to be as high as possible. Except to the extent that supplying PMICs (if done competently) diverts more of the BOM to them without increasing the total, they don’t really care whose PMICs are used in the grand scheme of things. And they certainly wouldn’t torpedo the entire product launch over PMICs.

I bet this is actually just overconfidence in their product. They designed around their proprietary PMIC, and that design is important: a CPU draws variable amounts of power, and the system will not work if the power and voltages are wrong. For bonus points, the CPU would much rather operate at lower power (skip clocks or whatever) than simply crash if an intensive workload starts and the battery plus whatever power supply is connected can’t keep up. [0]. So I bet they tried to get all this right, they designed around their PMICs, they didn’t realize the chip format was inappropriate, and they can’t fix it in time for market. Oops.
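To make the "throttle rather than crash" behavior concrete, here is a rough sketch of the kind of governor loop involved. To be clear, this is not Qualcomm's actual firmware; the power budget and every function name (read_package_power_mw, get_cpu_freq_khz, set_cpu_freq_khz) are invented stand-ins for the real PMIC telemetry and clock-driver hooks.

    /* Conceptual sketch, not any vendor's real design: shed clock speed when
     * measured power exceeds what the battery/adapter can sustain, instead of
     * letting the rail sag and the SoC crash. All names are hypothetical. */
    #define POWER_BUDGET_MW  28000   /* hypothetical sustained budget */
    #define FREQ_STEP_KHZ    100000

    /* Platform hooks -- stand-ins for real PMIC/clock-driver calls. */
    extern int  read_package_power_mw(void);
    extern int  get_cpu_freq_khz(void);
    extern void set_cpu_freq_khz(int khz);

    void power_governor_tick(void) {
        int drawn = read_package_power_mw();
        int freq  = get_cpu_freq_khz();

        if (drawn > POWER_BUDGET_MW) {
            /* Over budget: step down to a lower clock rather than brown out. */
            set_cpu_freq_khz(freq - FREQ_STEP_KHZ);
        } else if (drawn < POWER_BUDGET_MW * 9 / 10) {
            /* Comfortable headroom: claw performance back gradually. */
            set_cpu_freq_khz(freq + FREQ_STEP_KHZ);
        }
    }

The point is that the telemetry, the clock control, and the PMIC behavior all have to be co-designed, which is exactly why a CPU vendor would want to design around its own PMIC in the first place.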

Qualcomm has never done the laptop thing for real before. Intel, AMD, and Apple have.

[0] Anyone remember Apple’s issues here?


I distinctly recall my disappointment when I heard it was Qualcomm that acquired Nuvia because I knew they'd find a way to f*ck it up. This is unfortunately incredibly on brand for Qualcomm. They do the same bundling with the 5G Radios and Snapdragon SoC's. They're more of a patent law firm than a tech company.


I was mostly disappointed because I hoped that Nuvia would result in a product that was fairly open from the perspective of users. You can buy an Intel or AMD CPU, pair it with pretty much anything, and do whatever you want with it. You need to deal with awkward firmware messes and a frequently messy security ecosystem, but it mostly just works. You don’t need software blobs from Intel or (ignoring sometimes messy graphics) AMD.

Even Apple is fairly good in this regard.

Qualcomm, OTOH, does not have a good track record in this regard.


Remember, and for those who don't know: this is SemiAccurate, known for hyperbole. I can't remember a single thing they got right.


Honestly, was Qualcomm ever not a master of proprietary cash grabs? After all, even in the 90s they had their own standards as opposed to GSM. It is in their DNA.

We could have been better off had any of the more "universal" chip companies won the Android SoC race (like TI, Freescale and so on)


Even after GSM they lobbied for their CDMA standard, which is why 3G and its successors are based on it, thanks to their patents.

It's too bad because GSM was good tech with guaranteed cell capacity and better behavior on weak connections.


DEC's Alpha was doomed at release, because the operating systems licensed with it were a minimum of $30k just to get the install CD (for VMS or OSF/1 "Unix").

There must have been desperation among the design team, as 3rd party Alpha motherboards slowly began to appear that ran basic Linux.

However, Alpha was a glutton for power. DEC then designed the StrongARM, which amazingly worked out far better for low power applications.

I think that I can hear DEC designers screaming inside Qualcomm.


Not sure if this is weird revisionism or simple ignorance, but it's not remotely right (I worked for a DEC reseller back in those days and had lots of quality time with Alpha (which I miss)).

Neither VMS nor OSF/1 were remotely like $30k unless you're talking about the DEC 4000/7000/10000 superminis (which ran to the US$1m range). I'd have to dig up a BoM from the old days to be exact, but the workstation license was in the US$100s/low US$1k unit price, depending on the box, and install media shipped with the box AFAIK. I have a couple of dozen copies of OSF/1 install media in a box in storage someplace so I guess I should get them on eBay ASAP.

Other than the early DEC 3000 line, Alpha ran M$ WinNT on DEC hardware with the right ARC firmware (user installable, tho I seem to recall there were some NT-only products), and Linux was ported very quickly. Many of our customers never even considered VMS or OSF/1 AFAIK. We sold a lot of Alphas to run NT+SQLServer in otherwise x86 Windows shops.

Third party Alpha motherboards existed but were never a big deal in the market mostly because the DEC OEM motherboards (e.g. PC164) were very good and reasonably priced (for the high end), and came in standard ATX form factors. There were, however, a lot of 3rd parties who took DEC OEM boards, put them in their own case and sold them under their own brand.

Comparing Alpha and StrongARM is...weird, since they were for completely different markets. It's like saying Xeon is doomed because Atom or ARM9 is so much more power efficient. And in Alpha's heyday, absolutely no one was talking about TDP outside of mobile.

Alpha's problem IMHO was there was never a 'low end' to capture the market. Alpha was always a 'high-end PC' to 'workstation' class chip (no, Multia/21066 don't count...they were awful). In the end, AMD got 64-bit, AMD & Intel got performance competitive quickly, and at the end of the day software is more important than hardware: why would end users want x86-64 on the low end and AXP on the high end when they can have one arch across the board?


Linux and Windows NT were also available.


...after quite some time.

That delay was the kiss of death for an architecture that could never have a low-power implementation (and thus could not run in a laptop) and had an insanely weak memory-ordering model in SMP configurations.

No, Alpha would not have survived. There was just no way.
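For anyone who hasn't run into Alpha's memory model: it was relaxed enough that even data-dependent loads could be reordered, which is why Linux grew read_barrier_depends() essentially just for Alpha. A minimal C11-style sketch of the hazard (illustrative only; the struct and names are made up):

    #include <stdatomic.h>

    typedef struct { int payload; } msg_t;
    static _Atomic(msg_t *) shared_msg;

    void publisher(msg_t *m) {
        m->payload = 42;
        /* Release store: the payload write must become visible before the pointer. */
        atomic_store_explicit(&shared_msg, m, memory_order_release);
    }

    int consumer(void) {
        /* Acquire (or an explicit barrier) is required here. On Alpha, a plain
         * relaxed load of the pointer followed by *m could legally observe the
         * OLD payload, because even dependent loads may be reordered. */
        msg_t *m = atomic_load_explicit(&shared_msg, memory_order_acquire);
        return m ? m->payload : -1;
    }

On x86 and even most ARM/POWER cores the address dependency alone keeps those two loads ordered; Alpha was the outlier that needed an explicit barrier, which made portable SMP code miserable.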


EV4 shipped in '92. WinNT for Alpha shipped in '93 or close to it. Is that really 'quite some time'?


I see from the article that: "Qualcomm has an exclusive on ARM laptops until the end of 2024"

So no one else is allowed to make an ARM laptop chip?


This was like listening to one of Qualcomm's engineering leaders try to explain, repeatedly, to business leadership why what they were doing was not going to work.

Popcorn worthy.


That was a very hard article to read (more on this later)



