
FWIW, Intel has been pushing towards launching a dedicated discrete GPU platform. If/when they recover, that would leave us with three distinct CPU/GPU companies.


If I were Intel, I would be going straight for the TPU market. GPUs carry a bunch of legacy from the G = Graphics part of the name. The real money maker is not likely to be gamers (although that has been a healthy enough market). The future of those vector processing monsters is going to be ML (and maybe crypto). This is the difference between attempting to leapfrog and trying to catch up.


> The future of those vector processing monsters is going to be ML (and maybe crypto)

That's a heavy bet on ML and crypto(-currency? -graphy?). Has ML, so far, really made any industry-changing inroads in any industry? I'm not entirely discounting the value of ML or crypto, just questioning the premature hype train that exists in tech circles (especially HN).


>That's a heavy bet on ML and crypto

Well, yes, that is the point. My theory is that the gaming market for GPUs is well understood. I don't think there are any lurking surprises in the number of new gamers buying high-end PCs (or mobile devices with hefty graphics capabilities) in the foreseeable future.

However, if one or more of the multitude of new start-ups entering the ML and crypto(-currency) space ends up being the next Amazon/Google/Facebook, that would be both unforeseeable and unbelievably transformative. Maybe it won't happen (that is the risk), but my intuition suggests something is going to come out of that work.

I mean, it didn't work out for Sony when they threw a bunch of SPUs in the PS3. They went back to a traditional design for their next two consoles. So not every risk pans out!


> Has ML, so far, really made any industry-changing inroads in any industry?

Does the tech industry not count, or are you only considering industries that are typically slower moving?


The tech industry claims it applies machine learning all over the place, but I doubt it actually moves the needle much.


Is that opinion based on anything firmer than a pessimistic outlook?

Lots of people jump on any trend, but that doesn't mean the hype is unjustified or that nobody is "moving the needle" with that trend. Recommendations (e-commerce, advertising, music, etc.) have been pretty revolutionized by it.


As a personal anecdote, the quality of the recommendations I receive across the board has been roughly inversely proportional to the level of hype around ML in the tech press and academia.

Cf. YouTube, Amazon, and Netflix (products that bet BIG on recommendations) being incapable of recommending compelling material.


> Recommendations (e-commerce, advertising, music, etc) have been pretty revolutionized by it.

I'd like to learn more about how they've been revolutionized by it. Any sources you could share?


ML contributes to a significant fraction of revenue at three of the world's largest companies (Amazon, Google, Facebook - largely through recommendations and ad ranking). It also drives numerous features that other tech companies build into their products to stay competitive (think FaceID on iPhone). Hard to argue that it doesn't move the needle...


A ton. Look at the nearest device around you; chances are it runs Siri, Alexa, Cortana, or the Google voice assistant. This will only grow.

Same with machine vision. It's going to be everywhere: not just self-driving trucks (which, unlike cars, are going to be big soon), but also security devices, warehouse automation, etc.

All this is normally run on vector / tensor processors, both in huge datacenters and on local beefy devices (where a stock built-in GPU alongside ARM cores is not cutting it).

This is a growing market with a lot of potential.

Licensing CUDA could be quite a hassle, though. OpenCL is open but less widely used.


> Has ML, so far, really made any industry-changing inroads in any industry?

It is (IIRC) a pretty fundamental part of self-driving tech. I honestly think this is what drives a lot of Nvidia's valuation.


Nvidia's largest revenue driver, gaming, made $1.4B last year (up 56% YoY). Its second largest, "data center" (AI), made $968M (up 43% YoY). Other revenue was $661M. Up to you whether Nvidia's second largest revenue center, at nearly a billion a year, counts as "industry changing".


> crypto(-currency? -graphy?)

TPUs are massively parallel Float16 engines - not really applicable to anything outside of ML.


> The future of those vector processing monsters is going to be ML (and maybe crypto).

Hopefully some of those cryptocurrencies (until they get proof-of-stake fully worked out) move to memory-hard proof-of-work built on Curve25519, Ring Learning With Errors (New Hope), and ChaCha20-Poly1305, so cryptocurrency accelerators can pull double duty as quantum-resistant TLS accelerators.

I don't necessarily mean dedicated instructions, but things like vectorized add, xor, and shift/rotate instructions, at least 53-bit x 53-bit -> 106-bit integer multiplies (more likely 64 x 64 -> 128), and other somewhat generic operations that tend to be useful in modern cryptography.
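
To make that concrete, here is a rough C sketch (purely my own illustration, not anything any vendor has announced): the ChaCha20 quarter round is nothing but 32-bit add, xor, and rotate, and Curve25519-style field arithmetic leans on 64 x 64 -> 128-bit multiplies. Hardware that runs many of these lanes in parallel serves both mining-style workloads and bulk TLS.

    #include <stdint.h>

    /* Illustration only: ChaCha20's quarter round uses nothing but
       32-bit add, xor, and rotate -- exactly the kind of "somewhat
       generic" vector operations described above. */
    static inline uint32_t rotl32(uint32_t x, int n) {
        return (x << n) | (x >> (32 - n));
    }

    static void quarter_round(uint32_t *a, uint32_t *b,
                              uint32_t *c, uint32_t *d) {
        *a += *b; *d ^= *a; *d = rotl32(*d, 16);
        *c += *d; *b ^= *c; *b = rotl32(*b, 12);
        *a += *b; *d ^= *a; *d = rotl32(*d, 8);
        *c += *d; *b ^= *c; *b = rotl32(*b, 7);
    }

    /* Curve25519-style field multiplication wants full-width products,
       i.e. 64 x 64 -> 128-bit multiplies (GCC/Clang __int128 shown). */
    static void mul64_wide(uint64_t x, uint64_t y,
                           uint64_t *hi, uint64_t *lo) {
        unsigned __int128 p = (unsigned __int128)x * y;
        *hi = (uint64_t)(p >> 64);
        *lo = (uint64_t)p;
    }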


This is what they tried to do with Nervana and are trying again to do with Habana.


The one thing I don't get: there are a lot of machines out there that would gain a great deal from specialized search hardware (think Prolog acceleration engines, but lower level). For a start, every database server (SQL or NoSQL) would benefit.

It is also hardware similar to ML acceleration: it needs better integer and boolean (algebra, not branching) support, and it has a stronger focus on memory access (which ML acceleration also needs, but gains less from). So how come nobody even speaks about this?
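
To make the database case concrete, here is a toy C sketch (the function and names are made up purely for illustration) of the kind of columnar predicate scan every SQL/NoSQL engine runs constantly; a search accelerator would effectively hard-wire this loop as a wide compare-and-compress sitting next to memory:

    #include <stddef.h>
    #include <stdint.h>

    /* Toy columnar range scan: collect the row ids whose value lies in
       [lo, hi]. The workload is integer compares, boolean combining,
       and raw memory bandwidth -- the profile described above. */
    static size_t range_scan(const int64_t *col, size_t n,
                             int64_t lo, int64_t hi,
                             uint32_t *out_row_ids) {
        size_t hits = 0;
        for (size_t i = 0; i < n; i++) {
            if (col[i] >= lo && col[i] <= hi) {
                out_row_ids[hits++] = (uint32_t)i;
            }
        }
        return hits;
    }

Scale that up to joins and index probes, and the overlap with ML accelerators (minus most of the floating point) is pretty clear.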


I don't understand how database servers would benefit. You would have to add the search hardware directly to DRAM for any meaningful gains.


You would need large memory bandwidth and a good set of cache pre-population heuristics (putting it directly on the memory is a way to get the bandwidth).

ML would benefit from both too, as would highly complex graphics and physics simulation. The cache pre-population is probably at odds with low latency graphics.


They have. Their first purchase (Nervana) hasn’t worked out for them so they are now working through their purchase of the more conventional Habana.


Intel did go for the TPU market. It was called the Nervana chip, and they cancelled it.


From what we know so far (https://www.tomshardware.com/news/intel-xe-graphics-all-we-k...), it will be a while before Intel competes in the GPU space. The first offering of Xe graphics (still not out yet) will probably not be competitive with cards that AMD and Nvidia released over a year ago.

If Intel can survive their current CPU manufacturing issues, manage to innovate on their designs again, and improve the Xe design over a couple of generations, they might be in a good position in several years. I (as a layman) give them a 50/50 shot at recovering versus just abandoning the desktop CPU and GPU markets.


> The first offering of Xe graphics (still not out yet) will probably not be competitive with cards that AMD and Nvidia released over a year ago.

Being only a year behind the market leaders with your first product actually seems pretty impressive to me, especially if that's at the (unreasonably priced) top of the line and they have something competitive in the higher-volume but less sexy down-market segments.


Intel's i860 was released in 1989; that evolved into the i740 in 1998, and later on into the KNC, KNL, Xeon Phi, etc.

The >= KNC products have all been "one generation behind" the competition. When the Intel Xe is released, Intel will have been trying to enter this market for about 30 years.

This market is more important now than ever before. I hope that they keep pushing and do not axe the Intel Xe after the first couple of generations only to realize 10 years later that they want to try again.


I wonder why Intel abandoned the mobile market. I think only a single smartphone was released using an Intel CPU.


They had a bad product that they had to sell at below cost to get market share. The x86 tax is pretty small for a big out-of-order desktop core, but it's much more real at Atom's scale.


I think radio tech is an issue in the mobile space. It is heavily patented, and the big players who can integrate this technology into their chips can offer much more energy-efficient solutions.


Intel had that in-house too... it just also kinda sucked...

That is the thing with a lot of these side projects Intel is always working on. It would be great if they actually delivered good products, but they often spend billions acquiring companies and developing products, only to ship one or two broken ones and then dump the whole project.

I think this time is different with Xe, but I can't blame anyone for looking at the track record and being dubious that Intel is in it for the long haul.


I had one model from Asus; battery life was terrible, and if you were doing anything remotely intensive it doubled as a nice hand warmer...

I know that these probably are solvable problems, but they left a pretty bad taste...


The first TAG Heuer Android Wear device ran an Atom chip. Fun times with the power optimization, that was.


Not enough profit for their liking was the reason.


Not their first try on that front, and Intel's pushes into new product branches have not exactly gone well over the past decade or so.



