Not since the Ozaki scheme appeared. Getting good high-precision performance out of low-precision tensor units has unlocked some very interesting uses of GPUs with weak fp64 performance.
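To make the idea concrete, here's a minimal JAX sketch of the general trick, not the actual Ozaki scheme (which uses an exponent-aware, error-free split into several slices so that each partial product is exact in the low precision); this is just a toy two-term split to show the flavor, and the function names are mine:

```python
import jax
import jax.numpy as jnp

# JAX defaults to float32; enable float64 for the reference values and residuals.
jax.config.update("jax_enable_x64", True)

def split_f64_to_f32(a64):
    # Leading fp32 part plus an fp32 residual, so hi + lo carries ~48 bits of mantissa.
    hi = a64.astype(jnp.float32)
    lo = (a64 - hi.astype(jnp.float64)).astype(jnp.float32)
    return hi, lo

def high_precision_matmul(a64, b64):
    # A few low-precision matmuls, accumulated in fp64, recover most of the lost accuracy.
    a_hi, a_lo = split_f64_to_f32(a64)
    b_hi, b_lo = split_f64_to_f32(b64)
    parts = (a_hi @ b_hi, a_hi @ b_lo, a_lo @ b_hi)  # the lo*lo term is negligible here
    return sum(p.astype(jnp.float64) for p in parts)

a = jax.random.normal(jax.random.PRNGKey(0), (256, 256), dtype=jnp.float64)
b = jax.random.normal(jax.random.PRNGKey(1), (256, 256), dtype=jnp.float64)
naive = (a.astype(jnp.float32) @ b.astype(jnp.float32)).astype(jnp.float64)
print(jnp.max(jnp.abs(naive - a @ b)),
      jnp.max(jnp.abs(high_precision_matmul(a, b) - a @ b)))
```

The split matmuls are much closer to the fp64 reference than the straight fp32 product, which is the whole appeal: the expensive work runs on the fast low-precision units.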
We were saying the same thing about AI less than one decade ago, of course... and then the Vaswani paper came out. What if it turns out that when it comes to quantum computing, "Used Pinball Machine Parts Are All You Need"?
I'm not sure if this is what you mean, but by the same logic it's not a "graphics company" or a gaming company either. A "chipmaker", as they say, specialising in highly parallel, application-specific compute.
Indeed, why would they not call themselves NvidAI to begin with? This company has twice already been super lucky to have their products used for the "wrong" thing (given that GPUs were created to accelerate graphics, not mining or inference).
3 times, if you count the physics GPGPU boom that Nvidia rode before cryptocurrencies.
And other than maybe the crypto stuff, luck had nothing to do with it. Nvidia was ready to support these other use cases because in a very real way they made them happen. Nvidia hardware is not particularly better for these workloads than competitors. The reason they are the $4.6T company is that all the foundational software was built on them. And the reason for that is that JHH invested heavily in supporting the development of that software, before anyone else realized there was a market there worth investing in. He made the call to make all future GPUs support CUDA in 2006, before there were heavy users.
I don't think the physics processing units were ever big. This was mostly just offloading some of their physics processes from the CPU to the GPU. It could be seen as a feature of GPUs for games, like ray-tracing acceleration.
That's not what I was referring to. I was talking about NV selling GPGPUs for HPC loads, starting with the Tesla generation. They were mostly used for CFD.
I don't think it's luck. They invested in CUDA long before the AI hype.
They quietly (at first) developed general purpose accelerators for a specific type of parallel compute. It turns out there are more and more applications being discovered for those.
It looks a lot like visionary long term planning to me.
I find myself reaching for JAX more and more in places where I would have used numpy in the past. The performance difference is insane once you learn how to leverage this style of parallelization.
Are you able to share a bit more - enough to explain to others doing similar work whether this "JAX > numpy" advantage applies to their work (and thus whether they'd be well off learning enough JAX to make use of it themselves)?
A lot of it really is a drop-in replacement for numpy that runs insanely fast on the GPU.
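A minimal sketch of what that looks like in practice (my own toy example, not the parent's code): swap numpy for jax.numpy and wrap the hot path in jax.jit.

```python
import jax
import jax.numpy as jnp

@jax.jit
def normalize_rows(x):
    # Written exactly as you would with numpy, but compiled and run on the accelerator.
    return x / jnp.linalg.norm(x, axis=1, keepdims=True)

x = jax.random.normal(jax.random.PRNGKey(0), (10_000, 512))
y = normalize_rows(x)  # first call compiles; subsequent calls are fast
```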
That said, you do need to adapt to its constraints somewhat. Some things you can't do inside jitted functions, and some things need to be done differently.
For example, finding the most common value along some dimension in a matrix on the GPU is often best done by sorting along that dimension and taking a cumulative sum, which sort of blew my mind when I first learnt it.
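Here's a sketch of one way to do that in JAX (my own reconstruction, working along the last axis and using a cumulative max over run-start indices rather than literally a cumulative sum; the parent's exact recipe may differ):

```python
import jax
import jax.numpy as jnp

@jax.jit
def mode_along_last_axis(x):
    n = x.shape[-1]
    s = jnp.sort(x, axis=-1)  # equal values become contiguous runs
    # 1 where a new run of equal values begins, 0 elsewhere.
    starts = jnp.concatenate(
        [jnp.ones(s.shape[:-1] + (1,), dtype=jnp.int32),
         (s[..., 1:] != s[..., :-1]).astype(jnp.int32)],
        axis=-1)
    idx = jnp.arange(n)
    # Index at which the current run began (cumulative max of run-start positions).
    run_start = jax.lax.cummax(idx * starts, axis=x.ndim - 1)
    run_len = idx - run_start + 1            # length of the run up to this element
    best = jnp.argmax(run_len, axis=-1)      # last element of the longest run
    return jnp.take_along_axis(s, best[..., None], axis=-1)[..., 0]

x = jax.random.randint(jax.random.PRNGKey(0), (4, 1000), 0, 10)
print(mode_along_last_axis(x))
```

The nice part is that everything is a fixed-shape sort plus elementwise and scan-style ops, which is exactly what the GPU and jit handle well; there's no data-dependent branching or dynamic shapes.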
GN did a video a few weeks ago showing a slide from Nvidia's shareholder meeting, which made clear that gaming is now a tiny part of Nvidia's revenue.
Basically, almost half of their revenue is pure profit and all of that comes from AI.
There's a lot of software involved in GPUs, and NVIDIA's winning strategy has been that the software is great. They maintained a stable ecosystem across most of their consumer and workstation/server stack for many years before crypto, AI and GPU-focused HPC really blew up. AMD has generally better hardware but poor enough software that "fine wine" is a thing (i.e. the software takes many years post-hardware-launch to properly utilize the hardware). For example, they only recently got around to making AI libraries usable on the pre-COVID 5700 XT.
NVIDIA basically owns the market because of the stability of the CUDA ecosystem. So, I think it might be fair to call them an AI company, though I definitely wouldn't call them just a hardware maker.
As someone who codes in CUDA daily: putting out and maintaining so many different libraries implementing complex multi-stage GPU algorithms efficiently, at many different levels of abstraction, without a ton of edge-case bugs everywhere, alongside maintaining all of the tooling for debugging and profiling, and still shipping regular updates, is quite a bit beyond "barely passable". It's a feat only matched by a handful of other companies.
I mean, afaik the consumer GPU portion of their business has always been tiny in comparison to enterprise (except right at the start of the company's history, I believe).
In a way, it's the scientific/AI/etc. enterprise use of Nvidia hardware that enables the sale of consumer GPUs as a side effect (consumer cards are just byproducts of workstation cards having a certain yield, so flawed chips can be used in consumer cards).
No, gaming was historically the largest share of NVIDIA's revenue (up until 2023). Only with the recent AI boom did this change.
TIL that "uppercase" and "lowercase" come from the actual cases where the type was stored. When I saw this picture https://cdn8.openculture.com/2025/09/23222831/OC-Printing-Ty... from the article, it all became clear. Even the numbers (for which you don't have to press "shift" on the keyboard either) were stored in the lower case (which was obviously for the most often used type). This then translated nicely to the typewriter: no shift for the more common characters (lowercase), shift for the less common (uppercase).
Yes. However, in the US at least, the California job case was much more popular in the 20th century, and it moved the capitals to the right of the "lower case" letters rather than above them. Nevertheless, during the years when I was a typesetter we still called them "upper case" and never "right case".
If you buy an old typecase at an antiques store in the US, 99% of the time it will be a California job case.
How has mathematics gotten so abstract? My understanding was that mathematics was abstract from the very beginning. Sure, you can say that two cows plus two more cows makes four cows, but that already is an abstraction - someone who has no knowledge of math might object that one cow is rarely exactly the same as another cow, so just assigning the value "1" to any cow you see is an oversimplification. Of course, simple examples such as this can be translated into intuitive concepts more easily, but they are still abstract.
It is abstract in the strict sense, of course. Every science is, as "abstract" simply means "not concrete". All reasoning is abstract in the sense that all reasoning by definition involves concepts, and concepts are by definition abstract.
Numbers, for example, are abstract in the sense that you cannot find concrete numbers walking around or falling off trees or whatever. They're quantities abstracted from concrete particulars.
What the author is concerned with is how mathematics became so abstract.
You have abstractions that bear no apparent relation to concrete reality, at least not through any direct correspondence. You have degrees of abstraction that generalize various fields of mathematics in ways that are increasingly far removed from concrete reality.
Mathematics arose from ancient humans' need to count and measure. Even the invention/discovery of calculus was in service of physics. It has probably only been 300 years or so since mathematics became symbolic; before that it was more geometric and more attached to the physical world.
Leibniz (late 1600s) helped to popularize negative numbers. At the time most mathematicians thought they were "absurd" and "fictitious".
Almost from the first time people started writing about mathematics, they were writing about it in an abstract way. The Egyptians and the Babylonians kept things relatively concrete and mostly stuck to word problems (although lists of Pythagorean triples are evidence of very early "number theory"), but Greece, China and India were all working in abstractions relatively early.
Symbolic here refers to doing math with placeholders, be they letters or something else. The ancient world had notations for recording numbers, but much less so for doing math with them - say, long division.
> My understanding was that mathematics was abstract from the very beginning.
It wasn't; but that's a common misunderstanding born of many centuries of common practice.
So, how has maths gotten so abstract? Easy: it has been taken over by abstraction astronauts(1), who have existed throughout all eras (and not just in software engineering).
Mathematics was created by unofficial engineers as a way to better accomplish useful activities (guessing the best time of year to start migrating, and later harvesting; counting what portion of harvest should be collected to fill the granaries for the whole winter; building temples for the Pharaoh that wouldn't collapse...)
But then, it was adopted by thinkers who enjoyed the activity in itself and started exploring it out of sheer joy; math stopped representing "something that needed doing in an efficient way" and came to be seen as "something to think about to its last consequences".
Then it was merged into philosophy, with considerations about perfect regular solids, or things like the (misunderstood) metaphor of shadows in Plato's cave (which people interpreted as being about duality of the essences, when it was merely an allegory on clarity of thinking and explanation). Going from an intuitive physical reality such as natural numbers ("we have two cows", or "two fingers") to the current understanding of numbers as an abstract entity ("the universe has the essence of number 'two' floating beyond the orbit of Uranus"(2)) was a consequence of that historical process, when layers upon layers of abstraction took thinkers further and further away from the practical origins of math.
> That is, numbers were specifically used to abstract over how other things behave using simple and strict rules. No?
Agreed that math is built on language. But math is not any specific set of abstractions; time and again mathematicians have found that if you change the definitions and axioms, you get a quite different set of abstractions (different numbers, geometries, infinite sets...). Does that mean the previous math ceases to exist when you find a contradiction in it? No, it's just that you start talking about new objects, because you have gained new knowledge.
The math is not in the specific objects you find, it's in the process of finding them. Rationalism consists in thinking one step at a time, with rigor. Math is the language by which you express rational thought in a very precise, unambiguous way. You can express many different thoughts, even inconsistent ones, with the same precise language of mathematics.
Agreed that we grew math to be that way. But there is an easy-to-trace history in the names of the numbers: reals, rationals, imaginary, etc. They were largely named based on how they were seen to relate to physical things.
I think Asimov wrote something along those lines in one of his Robot novels (which leads us back nicely to "iRobot", er, "I, Robot"): why build a vacuum robot, a window-cleaner robot, a robot lawnmower, a robot car, etc., when you can build a humanoid robot that can operate all the devices designed for humans? OK, currently it's "if you could build" rather than "when you can build", but looking at it from the science fiction writer's perspective, it makes sense...
I think the idea that the bipedal human form is the most efficient is a bit naive. The vacuum robot is more optimal because its small form allows it to reach all parts of the floor and fit under small spaces we usually can't get to easily. We build tools to do jobs more effectively, not just to mimic how we've done them before.
Call me naive, but I would presume that even a junior, once they start working at a company, should be familiar enough with a language that they know all the basic syntax, idioms etc. Still, even if they are, over-using some language features will make your code less readable (to anyone). E.g. some will prefer good ol' if/else to the notorious ternary operator and its many descendants. But that brings us back to your own personal taste...
Because the current right-leaning US administration (OK, some would call it far right - or rather, if you go by the standards of pretty much any other country, you'd have to classify it as far right) is so fond of conspiracy theories and rejects science? But yes, in principle I agree that accepting scientific consensus shouldn't be a partisan issue...
Serious question I’d like you and others to consider: why shouldn’t they be [a jobs program]?
Seriously, think about it for a moment. We have built a global society on the singular foundation that one must work to survive, yet we do not provide guarantees of work to provide the wages needed for survival of the labor class. It effectively condemns the unluckiest among us to suffering and an early death solely because they could not find gainful employment for whatever reason - disability, illness, lack of skills, insufficient wages, missing credentials, regime policies, the list goes on.
In that context, shouldn’t we be mandating companies at least abstain from layoffs when they’re posting profits or engaging in share buybacks, for a set period of time? Should we not be mandating governments provide job guarantees and minimum wages that enable every worker to have a safe, livable home and nutritious food?
Just something to chew on, because you’re right in that companies aren’t a jobs program.
Case in point: the current US administration (https://www.independent.co.uk/news/world/americas/sin-empath...)