
I love this. It speaks to me in a similar way to a lot of the AI zeitgeist—why shouldn’t we optimize for how the brain actually operates at scale versus centuries-old ideas about ligatures designed for reading by candlelight? (In the AI case, the equivalent is a romanticism for having to learn and prove memory in such a rote way.)


A bit of a tangent, but I always thought it’d be cool to have certain libraries printed out in very high quality as posters. Redux was one example in particular—something very concise yet powerful, and kind of worth admiring to that extent.


Intel as a store of value?


It’s easy to be cynical specifically in this case, when Elon has in the past very gleefully amplified AI fakes to drum up social sentiment.


I don't get it. Is the implication that Elon/Tesla/X specifically promoted/amplified the post?


I infer that the implication is "that's rich coming from Elon/Tesla" because Elon is not honest and amplifies misinformation often?

(not singling Elon out, he's one of many)


The implication is that Elon is a massive hypocrite for complaining when these dishonest tactics are used against him because he uses them all the time.


During the election, Musk promoted a lot of deepfakes about Kamala Harris, including fake images generated using Grok. He's a total asshole.

https://www.nbcnews.com/tech/misinformation/kamala-harris-de...


Here's the video - it was very obviously a parody.

https://x.com/MrReaganUSA/status/1816826660089733492


> - it was very obviously a parody.

Not to the stupid.



"couldn't have happened to a nicer guy"


With all the lensing going on out there, is it possible for us to observe light from our sun (and potentially our planet) from billions of years ago?

A cool achievement would be observing the moon/earth separation event(s).


Theoretically yes, but although this black hole is big enough to make that more realistic, the redirected light would have lost so much energy that we’d likely be unable to observe it. We’d need an orbital hypertelescope to even stand a chance. Even then we wouldn’t see the earth, because it would be drowned out by the sun.

The bigger problem is all the dust and other stars in the way. I’m not aware of any black holes close enough that would have a direct path for the light to cross without being absorbed and scattered.


The other problem is the angle at which the light must be redirected. The Cosmic Horseshoe is composed of two systems almost directly in line; the light comes from the farther system and bends slightly around the black hole to come to us. I don't know if a 180-degree bend is possible.

Also, the foreground galaxy/supermassive black hole in the Cosmic Horseshoe is 5.6 billion light years away, so any light that could come from our solar system, go around the black hole, and come back to our hypothetical hypertelescope would be over 11 billion years old - well over double the age of our sun.

Sagittarius A* in our own galaxy is, of course, directly in the galactic plane and therefore badly occluded by dust, but it would be interesting to look at as it's only 27k light years away. In the absence of that pesky dust, it would give us a picture of the solar system as of the Paleolithic. Andromeda, at 2.5 million light years away, would give us 5-million-year-old light. There are other black holes in the Milky Way on the order of a thousand light years away which are not at the center of the galaxy but have masses comparable to or slightly larger than our sun's; these are far closer (within a few thousand light years) but have much smaller gravitational fields. Luminous intensity drops off with the square of the distance, but I'm not sure how the gravitational field strength affects the ability of a particular black hole to bend light.
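For a rough sense of the numbers, here's a minimal sketch in Python using the standard weak-field deflection formula alpha = 4GM/(c^2 b); the masses and impact parameters below are illustrative assumptions, not measured values:

  import math

  G, c = 6.674e-11, 2.998e8     # gravitational constant, speed of light (SI)
  M_SUN = 1.989e30              # solar mass, kg
  AU = 1.496e11                 # astronomical unit, m

  def deflection_deg(mass_solar, b_m):
      # Weak-field bend angle in degrees; only valid when the impact
      # parameter b is much larger than the Schwarzschild radius.
      return math.degrees(4 * G * mass_solar * M_SUN / (c**2 * b_m))

  # Sgr A* (~4.3 million solar masses), light passing at 100 AU:
  print(deflection_deg(4.3e6, 100 * AU))   # ~0.1 degrees
  # A ~10-solar-mass stellar black hole, light passing at 1 AU:
  print(deflection_deg(10, AU))            # ~2e-5 degrees

  # Round-trip light times for the "mirror" idea:
  print(2 * 26_700)   # Sgr A*: ~53,400 years (the Paleolithic)
  print(2 * 2.5e6)    # Andromeda: ~5 million years

To first order the bend scales as mass over impact parameter, which is why those nearby stellar-mass holes make far weaker mirrors than a supermassive one.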


> The other problem is the angle at which the light must be redirected. The Cosmic Horseshoe is composed of two systems almost directly in line; the light comes from the farther system and bends slightly around the black hole to come to us. I don't know if a 180-degree bend is possible.

It is possible to get a deflection angle of 180 degrees, but below a few million solar masses, hitting the “sweet spot” between the photon sphere and the boundary of the shadow would basically be a once-in-the-lifetime-of-the-universe type of probability, if it were possible at all. At billions of solar masses that sweet spot becomes much bigger, but then those holes are much further away.
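To put numbers on that sweet spot: for a Schwarzschild black hole the critical impact parameter is b_c = 3*sqrt(3)*GM/c^2, and since it scales linearly with mass, the window for extreme bending is roughly a thousand times wider for a multi-billion-solar-mass hole than for Sgr A*. A minimal sketch (the masses are illustrative):

  import math

  G, c, M_SUN = 6.674e-11, 2.998e8, 1.989e30   # SI units

  def critical_impact_km(mass_solar):
      # b_c = 3*sqrt(3)*GM/c^2: photons passing just outside this
      # distance can be bent through very large angles (even ~180 deg).
      return 3 * math.sqrt(3) * G * mass_solar * M_SUN / c**2 / 1e3

  print(critical_impact_km(4.3e6))   # Sgr A*: ~3.3e7 km
  print(critical_impact_km(5e9))     # a 5-billion-solar-mass hole: ~3.8e10 km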


> well over double the age of our sun.

In this insanely hypothetical scenario, would it be possible to see a sun before our sun? (In the same galactic vicinity)


I was under the impression that our sun is not large enough to form the heavier elements found on Earth, which means a supernova or a collision of neutron stars had to be responsible for creating these elements. Some of the material flying off that explosion formed our solar system, so we could see those progenitor stars.


I thought elements were created inside stars and dispersed by supernovas... Our sun has clearly not exploded yet (and I don't think it's big enough to ever go supernova), so why does it matter what elements it can create?


This is true. Potatoes are valuable in calorie-deprived situations, but they are not a healthy long-term food due to the thrashing they do to insulin management.


These figures are for a very small number of people. What this leaves out is that frontier AI is being developed by an incredibly small number of extremely smart people who have migrated between big tech, frontier AI labs, and other ventures.

Yes, the figures are nuts. But compare them to F1 or soccer salaries for top athletes. A single big name can drive billions in that context at least, and much more in the context of AI. $50M-$100M/year, particularly when some or most is stock, is rational.


It’s just a matter of taste, but I am pleased to see publicity for people with compensation packages that greatly exceed those of actors and athletes. It’s about time the nerds got some recognition. My hope is that researchers get the level of celebrity they deserve and inspire young people to put their minds to building great things.


I think I'm mostly with you but it also depends how it exactly plays out.

Like, I definitely think it is better for society if the economic forces are incentivizing the pursuit of knowledge more than the pursuit of pure entertainment[0]. But I think we also need to be a bit careful here. You need some celebrities to be the embodiment of an idea, but the distribution can be too sharp and undermine what I think we both agree is the goal.

Yeah, I think, on average, a $100M researcher is generating more net good for society (and the world) than a $100M sports player or actor. Maybe not in every instance, but I feel pretty confident about this on average. But at the same time, do we get more with one $100M researcher or 100 $1M researchers? It's important to recognize that we're talking about such large sums of money that at any of these levels people would be living in extreme luxury. Even in SV the per capita income is <$150k/yr, while the median income is about half that. You'd easily be in the top 1%. (The top 10% threshold for San Jose is $275k/yr.)

I think we also need to be a bit careful in recognizing how motivation can misalign incentives and goals. Is the money encouraging more people to do research and push humanity's knowledge forward? Or is the money now just another target for people who simply want money and have no interest in advancing humanity's knowledge? Obviously it is a lot more complicated and both are happening, but it is worth recognizing that if things shift towards the latter, then it actually becomes harder to achieve the original goals.

So on paper, I'm 100% with you. But I'm not exactly sure the paper is matching reality.

[0] To be clear, I don't think entertainment has no value. It has a lot and it plays a critical role in society.


I think it's pretty funny because, for example, Katalin Karikó was thought to be working in some backwater, on this "mRNA" thing that could barely get published before COVID... and the original LLM/transformer people were well qualified, but they weren't pulling in a quarter billion dollars while kicking around trying to improve machine translation of languages, a time-honored AI endeavor going back to the 1950s. They came upon something with outstanding empirical properties.

For whatever reason, remuneration seems more concentrated than fundamentals. I don't begrudge those involved their good luck, though: I've had more than my fair share of good luck in my life, so I'm not the one with standing to complain.


  > Katalin Karikó was thought to be working in some backwater, on this "mRNA" thing that could barely get published
There's a ton of examples like this, and it is quite common in Nobel-level work. You don't make breakthroughs by maintaining the status quo. Unfortunately, that means that to do great things you can't just "play it safe".


Intel made an ad series based on a similar idea in ~2010.

“Our Rock Stars Aren't Like Your Rock Stars”

https://youtu.be/7l_oTgKMi-s


It’s closer to actors and athletes than we’d all hope, in that most people get a pittance or are out of work while a select few make figures that hit newspapers.


Agree, and it's not as if these "super smart top researchers" aren't feeding off the insanely hard work of open-source contributors who basically got paid nothing to do a bunch of the work they're profiting off.

None of these models are operating in a vacuum.


Sounds vindictive. And yet. According to Forbes, the top 8 richest people have a tech background, most of whom are "nerdy" by some definition.


Those are nerds who did founding rather than being an employee, though. Maybe that's the distinction they're trying to make?


I don’t think this distinction actually exists. At that salary this person is moving from being a founder of his own company to being a founder of his own business unit inside Facebook.


The money these millions come from is already based on nerds having gotten incredibly rich (i.e., big tech). The recognition is arguably yet to follow.


Nerds run the entire world; how much recognition do they need?!


Not really the same, is it? Actors are hired to act. Athletes get paid to improve the sport. It's not like nerds are poached to do academic research or nerd out to their hearts' content. This is a business transaction that Zuck intends to make money from.

Locking up more of the world's information behind their login wall, or increasing their ad sales slightly, is not enough to make that kind of money. We can only speculate, of course, but at the same time I think the general idea is pretty clear: AI will soon have a lot of power, and control over that power is thought to be valuable.

The bit about "building great things" certainly rings true. Just not in the same way artists or scientists do.


How do you know they are nerds?


What I don't understand in this AI race is that #2 or #3 is not years behind #1; as I understand it, they're months behind at worst. Does that head start really matter enough to justify those crazy comps? It will take years for large corporations to integrate these things, and years for the general public to change their habits. And if the .com era taught us anything, it's that none of the ultimate winners were first to market.


There is a group of wealthy individuals who have bought into the idea that the singularity (AIs improving themselves faster than humans can) is months away. Whoever gets there first will get compound growth first, and no one will be able to catch up.

If you do not believe this narrative, then your .com era comment is a pretty good analysis.


  > There is a group of wealthy individuals who have bought into the idea that the singularity is months away.
My question is "how many months need to pass until they realize it isn't months away?"

What, it used to be 2025? Then 2027? Now 2030? I know these are not all the same people, but the trend is to keep pushing it back. I guess Elon has been saying full self-driving is a year away since 2016, so maybe this belief can sustain itself for quite some time.

So my second question is: does the expectation of achievements being so close lengthen the time to make such achievements?

I don't think it is insane to think it could. If you think it is really close, you'll underestimate the size of certain problems and claim people are making mountains out of molehills. So you put your efforts elsewhere, only to find that those things weren't molehills after all.

Predictions are hard and I think a lot of people confuse critiques with lack of motivation. Some people do find flaws and use them as excuses to claim everything is fruitless. But I think most people that find flaws are doing so in an effort to actually push things forward. I mean isn't that the job of any engineer or scientist? You can't solve problems if you can't identify problems. Triaging and prioritizing problems is a whole other mess, but it is harder to do when you're working at the edge of known knowledge. Little details are often not so little.


> My question is "how many months need to pass until they realize it isn't months away?"

It's going to persist until shareholders punish them for it. My guess is it's going to be some near-random trigger, such as a little-known AI company declaring bankruptcy but becoming widely reported. Suddenly, investing in AI with no roadmap to profitability will become unfashionable; budget cuts, down-rounds, bankruptcies, and consolidation will follow. But there's no telling when this will be, as there's elite convergence to keep the hype going for now.


Indeed, this will continue as long as the market allows it to. There is a well-known quote about how long markets can stay irrational, but I would like to point out where we are right now:

Telco capex was $100 billion at the peak of the IT bubble, give or take. There's going to be $400 billion of investment in AI in 2025.


Why are we acting like this group of wealthy individuals doesn't know what's happening?

They know it may or may not happen ("months away" is ridiculous), but they still need to do it anyway, since if you don't ride the wave, you miss the wave.

The stock market hasn't been rational since... forever? Pump-and-dumps happen all the time.


What I don't understand is, with such a small gap, why this isn't a huge boon for research.

While there's a lot of money going towards research, there's less than there was a few years ago. There's been a shift towards engineering research and ML Engineer hiring, with fewer positions for lower-level research than there were just a few years ago. I'm not saying don't do the higher-level research, just that it seems weird not to do the lower-level work when the gap is so small.

I really suspect that the winner is going to be the one that isn't putting speed above all else. Like you said, first to market isn't everything. But if first to market is all that matters, then you're also more likely to just be responding to noise in the system: the noisy signal of figuring out what that market is in the first place. It's really easy to get off track with that and lose sight of the actual directions you need to pursue.


LLaMA 4 is barely better than LLaMA 3.3, so a year of development didn't bring any worthwhile gains for Meta, and execs are likely panicking in order not to slip further, given what even a resource-constrained DeepSeek did to them.


  > given what even a resource-constrained DeepSeek did to them.
I think a lot of people have a grave misunderstanding of DeepSeek. The conversation is usually framed as a comparison to OpenAI. But this would be like comparing how much it cost to make the first iPhone (the literal first working one, not how much each Gen 1 iPhone cost to make) with the cost to make any smartphone a few years later. It's a lot easier and cheaper to make something when you have an example in hand, just like it is a lot easier to learn calculus than it is to invent calculus.

That framing weirdly undermines DeepSeek's own accomplishments. They did do some impressive stuff. But that's a much more technical and less exciting story (at least to the average person; it definitely is exciting to other AI researchers).


If comparing DeepSeek to OpenAI, sure. But OP is comparing DeepSeek to Meta: both are followers.


Yes, but there's also more to what I said than just leader-follower.


But muh China.


Yeah, this makes zero sense. Also, unlike a pop star or even a footballer, who are at least reasonably reliable, AI research is like 95% luck. It's very unlikely that any AI researcher who has had a big breakthrough will have a second one.

Remember capsule networks?


Hm, I thought that these salaries were offered to actual "giants" like Jeff Dean or someone extremely knowledgeable in the specifics of what the "business side" of AI might look like (CEOs, etc.). Can someone clarify what is so special about this specific person? He is not a "top tier athlete" - I looked at his academic profile and it does not seem impressive to me by any measure. He'd make an alright (not even particularly great) assistant professor at a second-tier university - which is impressive, but by no means unique enough to explain this compensation.


I think the key was multimodality. Meta made a big move in combining text, audio, and images. I remember ImageBind was pretty cool. Allen AI has published some notable models, and Matt seems to have expertise in multimodal models. Molmo looks really cool.


Looking at Molmo description:

"Our key innovation is a new collection of datasets called PixMo that includes a novel highly-detailed image caption dataset collected entirely from human annotators using speech-based descriptions, and a diverse mixture of fine-tuning datasets that enable new capabilities. Notably, PixMo includes innovative 2D pointing data that enables Molmo to answer questions not just using natural language but also using non verbal cues. We believe this opens up important future directions for VLMs enabling agents to interact in virtual and physical worlds. The success of our approach relies on careful choices for the model architecture details, a well-tuned training pipeline, and most critically the quality of our newly collected datasets, all of which we have released."

This is a solid engineering project with a research component - they collected some data that ended up being quite useful when combined with pre-existing tech. But this is not rocket science and not a unique insight. And I don't want to devalue the importance of solid engineering work, but you normally don't get paid as much for non-unique engineering expertise. This by no means sounds unique to me. This seems like a good senior-staff research eng project at a big tech company these days. You don't get paid $250M for that kind of work. I know very talented people who do this kind of work in big tech, and from what I can tell, many of them appear to have much more fundamental insight and experience, and have led larger teams of engineers, and their comp does not surpass $1-2M tops (taking a very generous upper bound).


A PhD dropout with an alright (passable) academic record, who worked in a 1.5-tier lab on a fairly pedestrian project (multimodal LLMs and agents, sure), and started a startup... Really trying not to sound bitter - good for him, I guess - but does it indicate that there's something really fucked up with how talent is being acquired?


> and started a startup

You bring up the only relevant data point at the end, as a throw-in. Nobody outside of academia cares about your PhD and work history if you have a startup that is impressive to them. That's the only reason he's being paid.


Molmo was pretty slick


Frontier AI that scales – these people all have extensive experience with developing systems that operate with hundreds of millions of users.

Don’t get me wrong, they are smart people - but so are thousands of other researchers you find in academia etc. The difference here is the scale of the operation.


Yeah, I guess if you have a datacenter that costs $100B, even hiring a humble CUDA assembly wizard that can optimize your code to run 10% faster is worth $10B to the company.


10% is an enormous amount. Let’s say 1%.

Even if it’s 1%, at the scale you’re talking that’s $1B to the company. So still worth it.

Wild.
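Back-of-the-envelope, using the hypothetical figures from this thread (not real company numbers):

  budget = 100e9   # $100B of datacenter spend, hypothetical
  for gain in (0.10, 0.01):
      # A fractional speedup frees up roughly that fraction of the
      # budget as effective extra capacity.
      print(f"{gain:.0%} -> ${gain * budget / 1e9:.0f}B")
  # 10% -> $10B
  # 1% -> $1B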


Yeah, but then you factor in the costs of building and running the machines vs. the revenue, and realize they are actually burning money in the blind hope of major breakthroughs in cognitive understanding that could be centuries away.


I can print a jersey with Neymar's name on it and drive revenue. I can't do that with some AI researcher. They have to actually deliver, and I don't see how a person with a $100M net worth will do anything other than coast.


Top athletes have stats to measure them by. For these researchers, I guess there are papers? How do you know who did what with multiple authors? How do you figure out who is Jordan vs. Steve Kerr?


Yeah, who knew that Kerr would have the more successful overall career in basketball?


This is not remotely true. The gap between them as players far exceeds the gap elsewhere, combined, many times over.

People forget Kerr was a bad GM.


A very major difference is that top athletes bring in real tangible money via ticket / merch sales and sponsorships, whereas top AI researchers bring in pseudo-money via investor speculation. The AI money is far more likely to vanish.


It's best to look at this as expected value. A top AI researcher has the potential to bring in a lot more $$ than a top athlete, but of course there is a big risk factor on top of that.


The expected value is itself a random variable; there is always a chance you mischaracterized the underlying distribution. For sports stars the variance in the expected value is extremely small, even if the variance in the sample value is quite large - it might be hard to predict how an individual sports star will do, but there is enough data to get a sense of the overall distribution and identify potential outliers.

For AI researchers pursuing AGI, the variance between distributions is arguably even worse than the variance between samples - there's no past data whatsoever to build estimates from; it's all vibes.
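In statistics terms: with n past observations, the uncertainty in your estimate of the expected value shrinks like sigma/sqrt(n), so a deep athlete track record gives a tight estimate while zero AGI precedent gives none. A toy sketch (all numbers invented):

  import random, statistics

  random.seed(0)
  # Pretend career payoffs: heavy-tailed, so high sample variance.
  payoffs = [random.lognormvariate(0, 2) for _ in range(10_000)]
  sigma = statistics.stdev(payoffs)

  for n in (10_000, 10):
      # Standard error of the estimated mean with n observations.
      print(f"n={n}: std error of the mean ~ {sigma / n**0.5:.2f}")
  # With n = 0 (no AGI precedent), the expected value is pure guesswork.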


We’ve seen $T+ scale impacts from AI over the past few years.

You can argue the distribution is hard to pin down (hence my note on risk), but let’s not pretend there’s zero precedent.

If it turns out to be another winter, at least it will have been a fucking blizzard.


The distribution is merely tricky to pin down when looking at overall AI spend, i.e. these "$T+ scale impacts."

But the distribution for individual researcher salaries really is pure guesswork. How does the datapoint of "Attention Is All You Need" fit into this distribution? The authors had very comfortable Google salaries but certainly not 9-figure contracts. And OpenAI and Anthropic (along with NVIDIA's elevated valuation) are founded on their work.


When "Attention Is All You Need" was published, the market as it stands today didn't exist. It's like comparing the pre-Jordan NBA to the post-Jordan NBA. Same game, different league.

I'd argue the top individual researchers figure into the overall AI spend. They are the people leading teams/labs and are a marketable asset in a number of ways. Extrapolate this further outward - why does Jony Ive deserve to be part of a $6B acquihire? Why does Mira Murati deserve to be leading a 5-month-old company valued at $12B with only 50 employees? Neither contributed fundamental research leading to where we are today.


Seriously. The transformer coupled with tons of compute is why we got here. When that paper came out and people (AI researchers) saw the results, many were confused or unconvinced. No one had any clue such an architecture would yield the results it has. AI systems have always been far more art than science, and we still don't even really know why they work. I feel like that idea being stumbled upon was sort of more luck than anything…


If you imagine hard enough, you can expect anything. See, e.g., Extraordinary Popular Delusions and the Madness of Crowds.


Sure, but the idea these hires could pay out big is within the realm of actual reality, even if AGI itself remains a pipe dream. It’s not like AI hasn’t already had a massive impact on global commerce and markets.


My understanding is that the bulk of revenue comes from television contracts. There has been speculation that that could easily shrink in the future if the charges become more granular and non-sports-watching people stop subsidizing the sports-watching people. That seems analogous to the AI money.


Another major difference is that BigTech is bigger than these global sporting institutions.

How much revenue does Google make in a day? £700m+.


Oof. Trying awfully hard to have a bad day there, eh?


Rational inside a deeply oligopolistic and speculative market.


F1 or soccer salaries are high because these are MARKETABLE people. The people themselves are a marketable brand.

They're not high because of performance/results alone.


The current product _must_ simply be a funding mechanism for whatever AI solution will ultimately define them. The idea that we’ll continue to have rigidly defined design mockups and specifications seems relatively naive compared to generative UX defined by the user and their interaction preferences.


Imagine, in that future world you describe, how immensely valuable a human artist will be: with originality, wit, and brilliance, their design will completely conquer any competitor generating the slop you dream of.


Agreed. I do still worry that this will upend the status quo. As with any ecosystem: slow changes are fine, but fast changes can be catastrophic.

I’ve worked with many people over the years who are good enough at their job, but will be replaced by AI (management’s choice, not mine). I’m probably one of those people as a mediocre engineer who prioritized family over career.

I have some backup plans, but it’s still tough and going to affect lots of people.


That will always be valuable, but I don’t think that’s what most designers are doing. If AI can copy flat design and Corporate Memphis style, it’ll compete just fine with the average designer.


We must live in different realities because most design has almost no creativity or originality at all. "Good design" means the website/design looks exactly the same as everything else.

We have the tools to do anything imaginable with film and video but the top box office films right now in the US are all completely derivative, non-creative human slop.

"Good design" is so trivial to do with generative AI.

We hardly live in 1910 Paris with all the cool people drinking absinthe in between cranking out all these artistic masterpieces.


As much as a skilled potter making fancy coffee cups conquers IKEA. Most people don't care about craftsmanship, and our economy nudges everyone towards convenient slop.


On the subject of LLMs and cats, I continue to find it disappointing that if you search for one of the leading AI services in the Apple App Store, they all seem to have converged on images of cats in their first app screenshot as the most-converting image in that setting.

Edit: a quick re-search shows they’ve differentiated a bit. But why are cats the lowest common denominator? As someone who is allergic to them, any cat reference immediately falls flat (personal problem, I know).


What a terrible website

