
Yeah, "being a dick" is kind of the opposite of "being empathetic".

Case in point: the current US administration (https://www.independent.co.uk/news/world/americas/sin-empath...)


This isn't Reddit; you don't need to inject politics where it doesn't belong.

The thought expressed in the title came to my mind when I saw Nvidia described as an "AI company" in the press recently...

An object is what it does. NVIDIA is making the most money through AI, so that's what it is now to the market

The hardware is heavily optimized for low-precision matrix math, which is pretty much only used for AI.

Not since the Ozaki scheme appeared. Getting good high-precision performance out of low-precision tensor units has unlocked some very interesting uses for GPUs with weak FP64 performance.

And in graphics rasterization.

Those parts (the tensor cores) aren't used for rasterization.

While raster units are separate from tensor cores, both can be leveraged for image rendering. The simplest example of this is Nvidia's DLSS.

Just like it was a crypto company. It's a computational fad chaser.

Next up: quantum. And that will be the end of them.


They're just selling. That's it. If 50 Pittsburgh Steelers fans show up to your bar every Sunday, congrats, you're a Steelers bar now.

I don't think they will fall for the quantum hype. Jensen has publicly stated that quantum computing is at least decades away from being useful.

We were saying the same thing about AI less than one decade ago, of course... and then the Vaswani paper came out. What if it turns out that when it comes to quantum computing, "Used Pinball Machine Parts Are All You Need"?

Nvidia is selling hardware. What the buyers are doing with it doesn't change anything about Nvidia.

A company selling knives is not considered a butcher or cook, despite the main uses of knives being just that.


A company that sells knives, and also invests heavily in restaurants, might be considered to be in the restaurant business, however

Nvidia spends a lot of money investing in downstream AI companies, in what feels like a rather incestuous circle


I'm not sure if this is what you mean too, but by the same logic it's not a 'graphics company' or a gaming company either. 'Chipmaker', as they say, specialising in highly parallel, application-specific compute.

But it clearly does, as NVIDIA rolls out hardware and software optimised for deployment as AI compute.

You have a point. Then it's a "compute" company.

Indeed, why would they not call themselves NvidAI to begin with? This company has twice already been super lucky to have their products used for the wrong thing (given GPUs were created to accelerate graphics, not mining or inference).

3 times, if you count the physics GPGPU boom that Nvidia rode before cryptocurrencies.

And other than maybe the crypto stuff, luck had nothing to do with it. Nvidia was ready to support these other use cases because in a very real way they made them happen. Nvidia hardware is not particularly better for these workloads than competitors. The reason they are the $4.6T company is that all the foundational software was built on them. And the reason for that is that JHH invested heavily in supporting the development of that software, before anyone else realized there was a market there worth investing in. He made the call to make all future GPUs support CUDA in 2006, before there were heavy users.


I don't think the physics processing units were ever big. This was mostly just offloading some of their physics processes from the CPU to the GPU. It could be seen as a feature of GPUs for games, like ray-tracing acceleration.

That's not what I was referring to. I was talking about NV selling GPGPUs for HPC loads, starting with the Tesla generation. They were mostly used for CFD.

They still had a boom of being used for a lot of HPC loads, even non-AI supercomputers, although it was quickly dwarfed by their other markets.

I don't think it's luck. They invested in CUDA long before the AI hype.

They quietly (at first) developed general purpose accelerators for a specific type of parallel compute. It turns out there are more and more applications being discovered for those.

It looks a lot like visionary long term planning to me.

I find myself reaching for Jax more and more in places where I would have used numpy in the past. The performance difference is insane once you learn how to leverage this style of parallelization.


Are you able to share a bit more, enough to help others doing similar work figure out whether this "Jax > numpy" aspect applies to their work (and thus whether they'd be well off learning enough Jax to make use of it themselves)?

This should be a good starting point:

https://docs.jax.dev/en/latest/jax.numpy.html

A lot of this really is a drop-in replacement for numpy that runs insanely fast on the GPU.

That said you do need to adapt to its constraints somewhat. Some things you can't do in the jitted functions, and some things need to be done differently.

For example, finding the most common value along some dimension in a matrix on the GPU is often best done by sorting along that dimension and taking a cumulative sum, which sort of blew my mind when I first learnt it.
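
Roughly, a minimal jax.numpy sketch of that sort-plus-cumulative-ops idea could look like this (the function name and the exact formulation are illustrative, just one possible way to do it):

    import jax
    import jax.numpy as jnp

    @jax.jit
    def mode_along_last_axis(x):
        # Most common value along the last axis, GPU-friendly:
        # sort, then track run lengths with cumulative ops instead of looping.
        s = jnp.sort(x, axis=-1)
        # 1 where a new run of equal values starts, 0 inside a run
        new_run = jnp.concatenate(
            [jnp.ones(s.shape[:-1] + (1,), dtype=jnp.int32),
             (s[..., 1:] != s[..., :-1]).astype(jnp.int32)],
            axis=-1)
        idx = jnp.arange(s.shape[-1])
        # index where the current run started (cumulative max of run-start positions)
        run_start = jax.lax.cummax(jnp.where(new_run == 1, idx, 0), axis=x.ndim - 1)
        run_len_so_far = idx - run_start + 1
        # the position with the longest run-so-far holds (an occurrence of) the mode
        best = jnp.argmax(run_len_so_far, axis=-1)
        return jnp.take_along_axis(s, best[..., None], axis=-1)[..., 0]

    x = jnp.array([[3, 1, 3, 2, 3, 1],
                   [5, 5, 2, 2, 2, 7]])
    print(mode_along_last_axis(x))  # [3 2]

Everything stays in jnp/lax primitives, so it jit-compiles and runs as a few parallel ops on the GPU instead of a Python loop over rows.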


Or that parallel computing is immensely useful in general and that more use cases will be found for it in the future beyond AI.

At some point, maybe it isn’t luck anymore but a general trend towards parallel computing.


> parallel computing.

Maybe because the acronym PCU would invite too many toilet jokes.

"No, I see the pee" and at least another that I'd rather not express in polite company ))


To be fair, the percentage of their revenue derived from AI-related sales is much higher now than before. Why is that not accurate?

GN did a video a few weeks ago in which they showed a slide from Nvidia's shareholder meeting indicating that gaming was only a tiny part of Nvidia's revenue.

Basically, almost half of their revenue is pure profit and all of that comes from AI.

While the slide looked a lot nicer, the data is also available on their site https://nvidianews.nvidia.com/news/nvidia-announces-financia...



Just because customers use their hardware for AI does not mean the hardware maker is an AI company.

There's a lot of software involved in GPUs, and NVIDIA's winning strategy has been that the software is great. They maintained a stable ecosystem across most of their consumer and workstation/server stack for many years before crypto, AI and GPU-focused HPC really blew up. AMD has generally better hardware but poor enough software that "fine wine" is a thing (i.e. the software takes many years post-hardware-launch to properly utilize the hardware). For example, they only recently got around to making AI libraries usable on the pre-COVID 5700 XT.

NVIDIA basically owns the market because of the stability of the CUDA ecosystem. So, I think it might be fair to call them an AI company, though I definitely wouldn't call them just a hardware maker.


*barely passable software while their competitors literally shit the bed, but I take your point.

As someone who codes in CUDA daily: putting out and maintaining so many different libraries implementing complex multi-stage GPU algorithms efficiently at many different levels of abstraction, without a ton of edge-case bugs everywhere, alongside maintaining all of the tooling for debugging and profiling, and still shipping regular updates, is quite a bit beyond "barely passable". It's a feat only matched by a handful of other companies.

Literally?

"Literally" as an intensifier predates the United States.

You aren't even "dying on this hill", people like you are inventing a hill made out of dead bodies.


Literally inventing a hill made out of dead bodies, or figuratively?

https://news.ycombinator.com/item?id=45487334

When more of their revenue comes from AI than graphics, and they're literally removing graphics output from their hardware...


This is similar to evolution. Evolution repurposes old systems for newer tasks. The GPU name is stuck but it has been deployed for AI.

I mean, AFAIK the consumer GPU portion of their business has always been tiny in comparison to enterprise (except right at the start of the company's history, I believe).

In a way it's the scientific/AI/etc enterprise use of Nvidia hardware that enables the sale of consumer GPUs as a side effect (which are just byproducts of workstation cards having a certain yield - so flawed chips can be used in consumer cards).


No, gaming was historically NVIDIA's largest revenue segment (up until 2023). Only with the recent AI boom did this change.

Source (I am not sure how reliable this is because I got this from ChatGPT, but I remember seeing something similar from other sources): https://www.fool.com/investing/2024/02/12/gaming-was-nvidias....


Nvidia started as a gaming company and gaming was the majority of their business until the last 5-10 years.

TIL that "uppercase" and "lowercase" come from the actual cases where the types were stored. When I saw this picture https://cdn8.openculture.com/2025/09/23222831/OC-Printing-Ty... from the article, it all became clear. Even the numbers (for which you don't have to press "shift" on the keyboard either) were stored in the lower case (which was obviously reserved for the most often used types). This then translated nicely to the typewriter: no shift for the more commonly used characters (lowercase), shift for the less common ones (uppercase).

Wikipedia: https://en.wikipedia.org/wiki/Letter_case#Terminology


Yes. However in the US at least, the California job case was much more popular in the 20th century and it moved the capitals to the right of the "lower case" letters rather than above them. Nevertheless during the years when I was a typesetter we still called them "upper case" and never "right case".

If you buy an old typecase at an antiques store in the US, 99% of the time it will be a California job case.

https://en.wikipedia.org/wiki/California_job_case


How has mathematics gotten so abstract? My understanding was that mathematics was abstract from the very beginning. Sure, you can say that two cows plus two more cows makes four cows, but that already is an abstraction - someone who has no knowledge of math might object that one cow is rarely exactly the same as another cow, so just assigning the value "1" to any cow you see is an oversimplification. Of course, simple examples such as this can be translated into intuitive concepts more easily, but they are still abstract.

It is abstract in the strict sense, of course. Every science is, as "abstract" simply means "not concrete". All reasoning is abstract in the sense that all reasoning by definition involves concepts, and concepts are by definition abstract.

Numbers, for example, are abstract in the sense that you cannot find concrete numbers walking around or falling off trees or whatever. They're quantities abstracted from concrete particulars.

What the author is concerned with is how mathematics became so abstract.

You have abstractions that bear no apparent relation to concrete reality, at least not according to any direct correspondence. You have degrees of abstraction that generalize various fields of mathematics in ways that are increasingly far removed from concrete reality.


Right? Math is abstraction at its very core. It's a ridiculous premise, acting as if this is anything other than ancient.

Mathematics arose from ancient humans' need to count and measure. Even the invention/discovery of calculus was in service of physics. It has probably only been 300 years or so since mathematics became symbolic; before that it was more geometric and more attached to the physical world.

Leibniz (late 1600s) helped to popularize negative numbers. At the time most mathematicians thought they were "absurd" and "fictitious".

No, not highly abstract from the beginning.


Almost from the first time people started writing about mathematics, they were writing about it in an abstract way. The Egyptians and the Babylonians kept things relatively concrete and mostly stuck to word problems (although lists of Pythagorean triples are evidence of very early "number theory"), but Greece, China and India were all working in abstractions relatively early.

In particular, ancient Greek geometry at least after 300 BC proceeded from axioms, which is a central component of the abstract approach.

> Leibniz (late 1600s) helped to popularize negative numbers.

Wasn't that imaginary numbers?



He didn't connect the dots, so no, he didn't do calculus, even if he did some things related to it.

Sorry what? Ancient humans invented symbols to count. How is that not symbolic?

Geometry is "attached" to the physical world… but in an abstract way… but you can point to the thing you're measuring, so maybe it doesn't count…

Abstraction was perfected if not invented by mathematics.


Symbolic here refers to doing math with placeholders, be it letters or something else. The ancient world had notations for recording numbers, but much less so for doing math with them - say, long division.

> My understanding was that mathematics was abstract from the very beginning.

It wasn't, but that's a common misunderstanding born of many centuries of common practice.

So, how has maths gotten so abstract? Easy: it has been taken over by abstraction astronauts [1], who have existed throughout all eras (and not just in software engineering).

Mathematics was created by unofficial engineers as a way to better accomplish useful activities (guessing the best time of year to start migrating, and later harvesting; counting what portion of harvest should be collected to fill the granaries for the whole winter; building temples for the Pharaoh that wouldn't collapse...)

But then it was adopted by thinkers who enjoyed the activity for its own sake and started exploring it out of sheer joy; math stopped representing "something that needed doing in an efficient way" and came to be seen as "something to think about to its last consequences".

Then it was merged into philosophy, with considerations about perfect regular solids, or things like the (misunderstood) metaphor of shadows in Plato's cave (which people interpreted as being about duality of the essences, when it was merely an allegory on clarity of thinking and explanation). Going from an intuitive physical reality such as natural numbers ("we have two cows", or "two fingers") to the current understanding of numbers as an abstract entity ("the universe has the essence of number 'two' floating beyond the orbit of Uranus" [2]) was a consequence of that historical process, when layers upon layers of abstraction took thinkers further and further away from the practical origins of math.

[1] https://www.joelonsoftware.com/2001/04/21/dont-let-architect...

[2] https://en.wikipedia.org/wiki/Hyperuranion


I think it is fair to say that it was always an abstraction. But, crucially, it was built on language as much as on empiricism.

That is, numbers were specifically used to abstract over how other things behave using simple and strict rules. No?


> That is, numbers were specifically used to abstract over how other things behave using simple and strict rules. No?

Agree that math is built on language. But math is not any specific set of abstractions; time and again mathematicians have found that if you change the definitions and axioms, you arrive at a quite different set of abstractions (different numbers, geometries, infinite sets...). Does the previous math cease to exist when you find a contradiction in it? No, it's just that you start talking about new objects, because you have gained new knowledge.

The math is not in the specific objects you find, it's in the process of finding them. Rationalism consists of thinking one step at a time, with rigor. Math is the language by which you express rational thought in a very precise, unambiguous way. You can express many different thoughts, even inconsistent ones, with the same precise language of mathematics.


Agreed that we grew math to be that way. But there is an easy-to-trace history in the names of the numbers: reals, rationals, imaginaries, etc. They were largely named based on how the language relates them to physical things.

I think Asimov wrote something along those lines in one of his Robot cycle novels (which leads us back nicely to "iRobot", er, "I, Robot"): why build a vacuum robot, a window-cleaner robot, a robot lawnmower, a robot car etc. when you can build a humanoid robot which can operate all the devices designed for humans? Ok, currently it's "if you could build" rather than "when you can build", but looking at it from the science fiction writer's perspective, it makes sense...

I think settling on the idea that the bipedal human form is the most efficient one is a bit naive. The vacuum robot is more optimal because its small form allows it to reach all parts of the floor and get into small spaces we usually can't reach easily. We build tools to do jobs more effectively, not just to mimic how we've done them before.

Thanks for the heads-up. I just cleared the history for the last hour; that works too (but using incognito mode is definitely better).

Call me naive, but I would presume that even a junior, once they start working at a company, should be familiar enough with a language that they know all the basic syntax, idioms etc. Still, even if they are, over-using some language features will make your code less readable (to anyone). E.g. some will prefer good ol' if/else to the notorious ternary operator and its many descendants. But that brings us back to your own personal taste...
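
For instance, compare these two (purely hypothetical) Python snippets, which do the same thing; which one reads better comes down to exactly that personal taste:

    # Hypothetical example: the same logic written two ways.

    # Chained conditional expressions pack everything into one line...
    def label_terse(status: int) -> str:
        return "ok" if status < 300 else "redirect" if status < 400 else "error"

    # ...while plain if/else reads more like prose, one decision per line.
    def label_explicit(status: int) -> str:
        if status < 300:
            return "ok"
        if status < 400:
            return "redirect"
        return "error"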

Because the current right-leaning US administration (ok, some would call it far right - or rather, if you go by the standards of pretty much any other country, you'd have to classify it as far right) is so fond of conspiracy theories and rejects science? But yes, in principle I agree that accepting scientific consensus shouldn't be a partisan issue...

> Accenture are just trying to sell it as a good thing.

Layoffs to increase profits are a good thing for shareholders. Not so much for those laid off of course, but who cares about those?


That's kind of a one-sided view. Yes, you need to treat your employees honorably, but a company isn't a jobs program, either.

Serious question I’d like you and others to consider: why shouldn’t they be [a jobs program]?

Seriously, think about it for a moment. We have built a global society on the singular foundation that one must work to survive, yet we do not provide guarantees of work to provide the wages needed for survival of the labor class. It effectively condemns the unluckiest among us to suffering and an early death solely because they could not find gainful employment for whatever reason - disability, illness, lack of skills, insufficient wages, missing credentials, regime policies, the list goes on.

In that context, shouldn’t we be mandating companies at least abstain from layoffs when they’re posting profits or engaging in share buybacks, for a set period of time? Should we not be mandating governments provide job guarantees and minimum wages that enable every worker to have a safe, livable home and nutritious food?

Just something to chew on, because you’re right in that companies aren’t a jobs program.

But maybe they should be.


Ok, maybe that was a little bit clickbaity, but the first sentence should clarify it:

> The challenge, simulate millions of particles in golang, multi-player enabled, cpu only, smart tv compatible.

Usually you wouldn't do that on the server, but if you want performance metrics, it's probably easier to measure on the server than on X clients?

