
Jupyter use is declining because coding agents got really good. Multiplayer mode is not going to save it.

Samsung has access to the same ASML tools but has much lower yield than TSMC. Chip making is hard.

MLX and MPS are two completely different teams within Apple. It's more that the MPS team doesn't have control over or visibility into the PyTorch roadmap, and can only contribute so much from their side.

The kid carrier looks dangerously top heavy.


You can train a CNN to find bounding boxes of text first, then run a VLM on each box.
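
The two-stage idea can be sketched as a small pipeline. This is a minimal illustration, not a real implementation: `detect_text_boxes` and `read_crop` are hypothetical stand-ins for a trained CNN text detector and a VLM call, and the "image" is just a nested list.

```python
def detect_text_boxes(image):
    """Stand-in detector: returns (x, y, w, h) boxes.
    A real system would run a CNN trained for text regions here."""
    # Pretend we found two text regions in the image.
    return [(0, 0, 2, 1), (1, 1, 2, 1)]

def read_crop(region):
    """Stand-in VLM: 'transcribes' a cropped region.
    A real system would send the crop to a vision-language model."""
    return "".join(str(px) for row in region for px in row)

def crop(image, box):
    x, y, w, h = box
    return [row[x:x + w] for row in image[y:y + h]]

def ocr_pipeline(image):
    # Detect boxes first, then read each crop independently, so the
    # VLM only sees small, focused regions instead of the whole page.
    return [read_crop(crop(image, box)) for box in detect_text_boxes(image)]

image = [[1, 2, 3], [4, 5, 6]]
print(ocr_pipeline(image))  # → ['12', '56'], one transcription per box
```

The point of the split is that the detector is cheap to run over the whole image, while the expensive VLM call only ever sees cropped regions.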


Using building geometry to correct GNSS signals is new.


I recall Uber looking into this around 8 years ago? Don’t know if it went past a publication.


Yep, they also used 3D building data for correcting position in urban canyons: https://www.uber.com/blog/rethinking-gps/


A lot of investment is banking on AGI. There's no sign AGI is going to happen this decade.


What's a sign it's going to happen ever?


I used to believe in AGI but the more AI has advanced the more I’ve come to realize that there’s no magic level of intelligence that can cure cancer and figure out warp drives. You need data, which requires experimentation, which requires labor and resources of which there is a finite supply. If you had AGI tomorrow and asked it to cure cancer, it would just ask for more experimental data and resources. Isn’t that what the greatest minds in cancer research would say as well? Why do we think that just being more rational or being able to compute better than humans would be sufficient to solve the problem?

It’s very possible that human beings today are already doing the most intelligent things they can given the data and resources they have available. This whole idea that there’s a magic property called intelligence that can solve every problem when it reaches a sufficient level, regardless of what data and resources it has to work with, increasingly just seems like the fantasy of people who think they’re very intelligent.


Generally, I agree, but it also depends on perspective. Intelligence exists on many levels and manifests differently across species. From a monkey's standpoint, if they were capable of such reflection, they might perceive themselves as the most capable creatures in their environment. Yet humans possess cognitive abilities that go far beyond that: abstract reasoning, cumulative culture, large-scale cooperation, etc.

A chimpanzee can use tools and solve problems, but it will never construct a factory, design an iPhone, or build even a simple wooden house. Humans can, because our intelligence operates at a qualitatively different level.

As humans, we can easily visualize and reason about 2D and 3D spaces; it's natural because our sensory systems evolved to navigate a 3D world. But can we truly conceive of a million dimensions, let alone visualize them? We can describe them mathematically, but not intuitively grasp them. Our brains are not built for that kind of complexity.

Now imagine a form of intelligence that can directly perceive and reason about such high-dimensional structures. Entirely new kinds of understanding and capabilities might emerge. If a being could fully comprehend the underlying rules of the universe, it might not need to perform physical experiments at all; it could simply simulate outcomes internally.

Of course that's speculative, but it just illustrates how deeply intelligence is shaped and limited by its biological foundation.


> If a being could fully comprehend the underlying rules of the universe, it might not need to perform physical experiments at all, it could simply simulate outcomes internally.

It likely couldn't, though, that's the problem.

At a basic level, for whatever abstract system you can think of, there must be an optimal physical implementation of that system: the fastest physically realizable implementation of it. If that physical implementation were to exist in reality, no intelligence could reliably predict its behavior, because that would imply access to a faster implementation, which cannot exist.

The issue is that most physical systems are arguably the optimal implementation of whatever it is that they do. They aren't implementations of simple abstract ideas like adders or matrix multipliers, they're chaotic systems that follow no specifications. They just do what they do. How do you approximate chaotic systems which, for all you know, may depend on any minute details? On what basis do we think it is likely that there exists a computer circuit that can simulate their outcomes before they happen? It's magical thinking.
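
The sensitivity being described here has a standard toy illustration: the logistic map with r = 4, a textbook chaotic system. Two states that differ by one part in ten billion soon behave completely differently, so any simulator that drops "minute details" loses the outcome. A minimal sketch:

```python
def logistic(x, r=4.0):
    # The logistic map x -> r*x*(1-x); chaotic for r = 4.
    return r * x * (1.0 - x)

# Two nearly identical initial conditions, differing by 1e-10.
a, b = 0.3, 0.3 + 1e-10

max_gap = 0.0
for _ in range(100):
    a, b = logistic(a), logistic(b)
    max_gap = max(max_gap, abs(a - b))

# The tiny initial difference is amplified to a macroscopic one:
# the trajectories end up in completely different places.
print(max_gap)
```

The gap grows roughly exponentially (the map's Lyapunov exponent is ln 2), so after a few dozen steps the two trajectories are as far apart as the system's bounds allow.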

Note that intelligence has to simulate outcomes, because it has to control them. It has to prove to itself that its actions will help achieve its goals. Evolution doesn't have this limitation: it's not an agent, it doesn't have goals, it doesn't simulate outcomes, stuff just happens. In that sense it's likely that certain things can evolve that cannot be intelligently designed (as in designed, constructed and then controlled). It's quite possible intelligence itself falls in that category and we can't create and control AGI, and AGI can't improve itself and control the outcome either, and so on.


I agree that computational irreducibility and chaos impose hard limits on prediction. Even if an intelligence understood every law of physics, it might still be unable to simulate reality faster than reality itself, since the physical world is effectively its own optimal computation.

I guess where my speculation comes in is that "simulation" doesn’t necessarily have to mean perfect 1:1 physical emulation. Maybe a higher intelligence could model useful abstractions/approximations, simplified but still predictive frameworks that are accurate enough for control and reasoning even in chaotic domains.

After all, humans already do this in a primitive way: we can't simulate every particle of the atmosphere, but we can predict weather patterns statistically. So perhaps the difference between us and a much higher intelligence wouldn't be breaking physics, but rather having much deeper and more general abstractions that capture reality's essential structure better.

In that sense, it's not "magical thinking", I just acknowledge that our cognitive compression algorithms (our abstractions) are extremely limited. A mind that could discover higher order abstractions might not outrun physics, but it could reason about reality in qualitatively new ways.


> A chimpanzee can use tools and solve problems, but it will never construct a factory, design an iPhone, or build even a simple wooden house. Humans can, because our intelligence operates at a qualitatively different level.

Humans existed in the world for hundreds of thousands of years before they did any of those things, with the exception of the wooden hut, which took less time than that. But even that wasn't instant.

Your example doesn't entirely contradict the argument that it takes time and experimentation as well, that intellect isn't the only limiting factor.


My point wasn't so much about how fast humans achieved these things, but about what's possible at all given a certain cognitive architecture. Chimpanzees could live for another million years and still wouldn't build a factory, not because they don't have enough time, but because they lack the cognitive and cultural mechanisms to accumulate and transmit abstract knowledge.

So while I completely agree that intelligence alone isn't the only factor, it is the whole foundation.


> Chimpanzees could live for another million years and still wouldn't build a factory, not because they don't have enough time, but because they lack the cognitive and cultural mechanisms

Given a million years, that could change.


I think I see what you’re getting at, but the difference between apes and humans isn’t that we can reason in 3D. If someone could actually articulate the intellectual breakthrough that makes humans smarter than apes, then maybe I would accept there’s some intellectual ability AI could achieve that we don’t have, but I don’t see how it could be higher dimensional reasoning.


Agreed.

And, if you had AGI tomorrow and asked it to figure out FTL warp drives, it would just explain to you how it's not going to happen. It is impossible, the end. In fact the request is fantasy, nigh nonsensical and self-contradictory.

Isn’t that what the greatest minds in physics would say as well? Yes, yes it is.

No debate will be entered into on this topic by me today.


Actually, no, it isn't. They say it isn't necessarily possible, but not self-contradictory as far as we know. It's good that you aren't going to debate this.

https://en.wikipedia.org/wiki/Alcubierre_drive


You failed reading comprehension.


You think I'm the one who's failing here?

You said:

"(...) if you had AGI tomorrow and asked it to figure out FTL warp drives, it would just explain to you how it's not going to happen. It is impossible, the end. In fact the request is fantasy, nigh nonsensical and self-contradictory."

"Isn’t that what the greatest minds in physics would say as well? Yes, yes it is."

That is not in fact what the greatest minds in physics would say. Your meta-knowledge of physics has failed you here, resulting in you posting embarrassing misinformation. I'm just having to correct it to prevent you from misleading anyone else.


You failed to realise that I'm not debating you, I'm berating you. Some people see statements like "not debating" as a personal challenge, a reason to get aggressive. Let's be clear: they are not nice people, and you don't want to be a troll like them.


Yes, I can see that you're just trolling, not debating. I appreciate the fact that you aren't debating, because I don't want to have to correct more of your misinformation. I don't think your berating is productive either, although it does demonstrate that—as you said—you are not a nice person.


AGI isn't a synonym for smarter-than-human.


What’s your point? I’m saying there’s no level of smartness that can cure cancer; the bottleneck is data and experimentation, not a shortage of smartness/intelligence.


And I'm saying that AGI doesn't imply a level of smartness at all.


Eliezer’s short story “That Alien Message” provides a convincing argument that humans are cognitively limited, not data-limited, through the device of a fictional world where people think faster: https://www.lesswrong.com/posts/5wMcKNAwB6X4mp9og/that-alien...

> Yes. There is. The theoretical limit is that every time you see 1 additional bit, it cannot be expected to eliminate more than half of the remaining hypotheses (half the remaining probability mass, rather). And that a redundant message, cannot convey more information than the compressed version of itself. Nor can a bit convey any information about a quantity, with which it has correlation exactly zero, across the probable worlds you imagine.

> But nothing I've depicted this human civilization doing, even begins to approach the theoretical limits set by the formalism of Solomonoff induction.

This is also a commonplace in behavioral economics; the whole foundation of the field is that people in general don't think hard enough to fully exploit the information available to them, because they don't have the time or the energy.

——

Of course, that doesn't mean that great intelligence could figure out warp drives. Maybe warp drives are actually physically impossible! https://en.wikipedia.org/wiki/Warp_drive says:

> A warp drive or a drive enabling space warp is a fictional superluminal (faster than the speed of light) spacecraft propulsion system in many science fiction works, most notably Star Trek,[1] and a subject of ongoing real-life physics research. (...)

> The creation of such a bubble requires exotic matter—substances with negative energy density (a violation of the Weak Energy Condition). Casimir effect experiments have hinted at the existence of negative energy in quantum fields, but practical production at the required scale remains speculative.

——

Cancer, however, is clearly curable, and indeed often cured nowadays. It wouldn't be terribly surprising if we already had enough data to figure out how to solve it the rest of the time. We already have complete genomes for many species, AlphaFold has solved the protein-folding problem, research oncology studies routinely sequence tumors nowadays, and IHEC says they already have "comprehensive sets of reference epigenomes", so with enough computational power, or more efficient simulation algorithms, we could probably simulate an entire human body much faster than real time with enough fidelity to simulate cancer, thus enabling us to test candidate drug molecules against a particular cancer instantly.

Also, of course, once you can build reliable nanobots, you can just program them to kill a particular kind of cancer cell, then inject them.

Understanding this does not require believing that "intelligence that can solve every problem when it reaches a sufficient level, regardless of what data and resources it has to work with", which I think is a strawman you have made up. It doesn't even require believing that sufficient intelligence can solve every problem if it has sufficient data and resources to work with. It only requires understanding that being able to do the same thing regular humans do, but much faster, would be sufficient to cure cancer.

——

There does seem to be an open question about how general intelligence is. We know that there isn't much difference in intelligence between people; 90+% of the human population can learn to write a computer program, make a pit-fired pot from clay, haggle in a bazaar, paint a realistic portrait, speak Chinese, fix a broken pipe, interrogate a suspect and notice when he contradicts himself, fletch an arrow, make a convincing argument in court, program a VCR, write poetry, solve a Rubik's cube, make a béchamel sauce, weave a cloth, sing a five-minute lullaby, sew a seam, or machine a screw thread on a lathe. (They might not be able to learn all of them, because it depends on what they spend time on.)

And, as far as we know, no other animal species can do any of those things: not chimpanzees, not dolphins, not octopodes, not African grey parrots. And most of them aren't instinctive activities even in humans—many didn't exist 1000 years ago, and some didn't exist even 100 years ago.

So humans clearly have some fairly flexible facility that these other species lack. "Intelligence" is the usual name for that facility.

But it's not perfectly general. For example, it involves some degree of ability to imagine three-dimensional space. Some of the humans can also reason about four- or five-dimensional spaces, but this is a much slower and more difficult process, far out of proportion to the underlying mathematical difficulty of the problem. And it's plausible that this is beyond the cognitive ability of large parts of the population. And maybe there are other problems that some other sort of intelligence would find easy, but which the humans don't even notice because it's incomprehensible to them.


Regarding "Alien Message", I don't find that story particularly convincing. I think it's muddled and contrived.

The basic issue is that we have to deduce stuff about the world we live in, using resources from the world we live in. In the story, the data bandwidth is contrived to be insanely smaller than the compute bandwidth, but that's not realistic. In reality, we are surrounded by chaotic physical systems that operate on raw hardware. They are, in fact, quite fast, and probably impossible to simulate efficiently. For instance, we can obviously never build a computer that can simulate the behavior of its own circuitry, using said circuitry, faster than it operates. But I think there's a lot of physical systems that are just like that.

Being data-limited means that we get data slower than we can analyze and process it. It is certainly possible to improve our ability to analyze data, but I don't think we can assume that the best physically realizable intelligence would overcome data limitation, nor that it would be cost-effective in the first place, compared to simply gathering more data and experimenting more.


> Regarding "Alien Message", I don't find that story particularly convincing. I think it's muddled and contrived.

Well, yes, it's from Eliezer Yudkowsky. The kind of people who generally find him persuasive will do so. Those who don't find him convincing, or even find him somewhat of a crank, like the other self-proclaimed "rationalists", will do so too. "Muddled" is correct; he lacks rigour in everything, but certainly brings the word count.


You're the guy who in https://news.ycombinator.com/item?id=45517647 I demonstrated was a physics crank: unskilled and unaware of it, dismissing the Alcubierre metric as "fantasy, nigh nonsensical and self-contradictory", unlike actual physicists. And, when I presented the evidence that that's not what actual physicists say about it, you responded by heaping personal abuse on me. Perhaps you posted this comment later as an additional form of ego defense, since it implicitly calls me a crank, by implying that I'm a "rationalist"?


Those are odd claims, but they don't interest me. You have not and are not demonstrating anything outside of your own fixations. Project much?


You seem to be agreeing with the story's thesis, rather than disagreeing. The story claims that we get an enormous amount of data from which we could compute much more than we do. You, too, are claiming that we get an enormous amount of data from which we could compute much more than we do. If that's true, then we aren't limited by our data, which is what I meant by "data-limited"—although you seem to mean the opposite, "we get data slower than we can analyze and process it", in which we are limited not by the data but by the processing. This tends to rebut the claim above, "If you had AGI tomorrow and asked it to cure cancer, it would just ask for more experimental data and resources."

It may very well be true that you could cure cancer even faster or more cheaply with more experimental data, but that's irrelevant to the claim that more experimental data is necessary.

It may also be the case that there's no "shortcut" to simulating a human body well enough to test drugs against a simulated tumor faster than real time—that is, that you need to have enough memory to track every simulated atom. (The success of AlphaFold suggests that this is not the case, as does the ability of humans to survive things like electric shocks, but let's be conservative.) But a human body only contains on the order of 10²⁴ atoms, so you can just build a computer with 10²⁸ words of memory, and processing power to match. It might be millions of times larger than a human body, but that's okay; there's plenty of mass out there to turn into computronium. It doesn't make it physically unrealizable.

Relatedly, you may be interested in seeing Mr. Rogers confronting the paperclip maximizer: https://www.youtube.com/watch?v=T-zJ1spML5c


It's not a strawman, it's a thought experiment: if the premise of AGI is that a superintelligence could do all these amazing things, what could it do today if it existed but only had its superintelligence? My suggestion is that even something a billion times more intelligent than a human being might not be able to cure cancer with the information it has available today. Yes, it could build simulations and throw a lot of computing power at these problems, but is the bottleneck intelligence, or computing power to run the algorithms and simulations? You're conflating the two. No one disagrees that a billion times more computing power could solve big problems; the disagreement is whether a billion times more intelligence has any meaningful value, which was the point of isolating that variable in my thought experiment.


It's fair that I'm conflating raw computational power with strategic usage of that power. And it is at least theoretically conceivable that brute force computational power is not something that could be replaced by clever algorithms.

But if you agree that with 10²⁸ times more computational power we could almost surely cure cancer without gathering much more data, then you agree that we have enough empirical data and just need to analyze it better. We're sort of arguing about the details of what kinds of approaches to analyzing the data better would work best.

I'll continue that argument about details a bit more here. So far, even with merely human intelligence, hard computational problems like car crash simulation, protein folding, and mixed integer-linear programming (optimization) have continued to gain even more efficiency from algorithmic improvements than from hardware improvements.

According to our current understanding of complexity theory, we should expect this to continue to be the case. An enormous class of practically important problems are known to be NP-complete, so unless P = NP, they take exponential time: solving a problem of size N requires k**N steps. Hardware advances and bigger compute budgets allow us to do more steps, while algorithmic improvements reduce k.

To be concrete, let's say k = 1.02, we have a data center full of 4096 1-teraflops GPUs, and we can afford to wait a month (2.6 megaseconds) for an answer. So we can apply about 10²² operations to the problem, which lets us solve problems up to about size N = 2600. Now suppose we get more budget and build out 1000 such data centers, so we can apply 10²⁵ ops, but without improving our algorithms. This allows us to handle N = 2900.

But suppose that instead we improve the heuristics in our algorithm to reduce k from 1.02 to 1.01. Suddenly we can handle N = 5100, twice as big.

We can easily calculate how many data centers we would need to reach the same problem size without the more intelligent algorithm. It's about 6 × 10²¹ data centers.

For NP-complete problems, unless P = NP, brute-force computing power lets you solve logarithmically larger problems, while intelligence lets you solve linearly larger problems, equivalent to an exponentially larger amount of computation.
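
The arithmetic above can be worked out explicitly. This sketch assumes the stated model: solving a problem of size N costs k**N steps, and one data center delivers 4096 GPUs × 10¹² flops × 2.6 × 10⁶ seconds ≈ 10²² operations per month.

```python
import math

def max_problem_size(ops, k):
    # Largest N with k**N <= ops, i.e. N = log(ops) / log(k).
    return math.log(ops) / math.log(k)

ops_per_dc = 4096 * 1e12 * 2.6e6            # ≈ 1.1e22, the "about 10²²" figure

n_base  = max_problem_size(1e22, 1.02)      # one data center, k = 1.02 → ~2600
n_more  = max_problem_size(1e25, 1.02)      # 1000 data centers, same k → ~2900
n_smart = max_problem_size(1e22, 1.01)      # one data center, k = 1.01 → ~5100

# Data centers needed to reach n_smart with the dumber k = 1.02 algorithm:
dcs_needed = 1.02 ** n_smart / 1e22         # ≈ 6e21 data centers
print(round(n_base), round(n_more), round(n_smart), f"{dcs_needed:.0e}")
```

A thousandfold hardware build-out buys ~300 extra units of problem size, while halving the exponent base roughly doubles it, which is the "logarithmic vs linear" contrast in the paragraph above.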


Humans. There are arrangements of atoms that, if constructed and activated, act perfectly like human intelligence. Because they are human intelligence.

Human intelligence must be deterministic, any other conclusion is equivalent to the claim that there is some sort of "soul" for lack of better term. If human intelligence is deterministic, then it can be written in software.

Thus, if we continue to strive to design/create/invent such, it is inevitable that eventually it must happen. Failures to date can be attributed to various factors, but the gist is that we haven't yet identified the principles of intelligent software.

My guess is that we need less than 5 million years further development time even in a worst-case scenario. With luck and proper investment, we can get it down well below the 1 million year mark.


> Human intelligence must be deterministic, any other conclusion is equivalent to the claim that there is some sort of "soul" for lack of better term.

No, not all processes follow deterministic Newtonian mechanics. It could also be random, unpredictable at times. Are there random processes in the human brain? Yes, there are random quantum processes in every atom, and there are atoms in the brain.

Yes, this is no less materialistic: Humans are still proof that either you believe in souls or such, or that human level intelligence can be made from material atoms. But it's not deterministic.

But also, LLMs are not anywhere close to becoming human level intelligence.


>It could also be random, unpredictable at times.

It isn't. But if it were, we can also write that into the algorithm.

>But also, LLMs are not anywhere close to becoming human level intelligence.

They're no farther than about 5 million years distant.


"Human intelligence must be deterministic, any other conclusion is equivalent to the claim that there is some sort of "soul" for lack of better term. "

Determinism is a metaphysical concept like mathematical platonism or ghosts.


> Thus, if we continue to strive to design/create/invent such, it is inevitable that eventually it must happen.

~200 years of industrial revolution and we already fucked up beyond the point of no return, I don't think we'll have resources to continue on this trajectory for 1m years. We might very well be accelerating towards a brick wall, there is absolutely no guarantee we'll hit AGI before hitting the wall


>We might very well be accelerating towards a brick wall, there is absolutely no guarantee we'll hit AGI before hitting the wall

We've already set the course for human extinction, we're about 6-8 generations away from absolute human extinction. We became functionally extinct 10-15 years ago. Still, if we had another 5 million years, I'm one hundred percent certain we could crack AGI.


> if deterministic, then can be done in software.

You just need a few Dyson spheres and someone omniscient to give you all the parameter values. Easy peasy.

Just like cracking any encryption: you just brute force all possible passwords. Perfectly deterministic decryption method.

</s>


There need to be breakthrough papers or hardware that can expand context size exponentially. Or a new model that can address long-term learning.


That's what people have said about technologies in every decade, Sam


That's one way to lower the interest rate.


If nobody can borrow money for anything it doesn't matter what the interest rate is.

I'm very smart!


Powell said explicitly a year or two ago that it was his goal to bump unemployment numbers to tame inflation. Kinda sick to think about, but, here we are I guess.


> a year or two ago

3 years ago. September 2022. https://edition.cnn.com/2022/09/23/economy/powell-fed-labor-...

Unemployment rates were back down to pre-COVID 2020 levels; the only time they were even lower was before the 1969 recession. And inflation was at its highest since the 80s.

The statement & monetary policy decision was entirely appropriate at the time.


Thanks!

I actually tried a brief search to get a more accurate time, but all the results were skewed heavily to the last few months so I gave up. I'm impressed you found it.


That's the inherent nature of the interest rate. Both unemployment and inflation have bad consequences, so the role of the central bank is to balance rates to target a specific inflation rate.


and inflation can continue forever! no problems with that at all, the economy is definitely built on money and not a finite supply of resources constrained by entropy.


I actually don't see a problem with that because what is useful information in a price system is the relative price of things, not the absolute price, which is arbitrary.


Isn’t that the charge of the Federal Reserve? I think this is what he is required to do statutorily.


An astute and accurate observation. However, there is no numeric target set in the mandate you allude to: "The Federal Reserve was created by Congress in 1913 to provide the nation with a safer, more flexible, and more stable monetary and financial system. In 1977, Congress amended the Federal Reserve Act (FRA) to provide greater clarity about the goals of monetary policy. The amended FRA directs the Board of Governors and the FOMC to conduct monetary policy “so as to promote effectively the goals of maximum employment, stable prices, and moderate long-term interest rates.” [https://www.federalreserve.gov/aboutthefed/files/the-fed-exp...]


What he is doing is counter to the Fed charter, but if you're pro-Capital, you like some unemployment because it disciplines Labor.

> The chair's main responsibility is to carry out the mandate of the Fed, which is to promote the goals of maximum employment, stable prices, and moderate long-term interest rates.

(per Investopedia https://www.investopedia.com/articles/investing/082415/what-...)


> The chair's main responsibility is to carry out the mandate of the Fed, which is to promote the goals of maximum employment, stable prices, and moderate long-term interest rates

You’re correctly quoting your source. But this is crap, as their source [1] makes no reference to “moderate long-term interest rates”.

The Fed is mandated to promote “maximum employment” and “stable prices” [2]. (It defines the former as “the highest level of employment or lowest level of unemployment that the economy can sustain while maintaining a stable inflation rate.”) If inflation is unstable, the economy is above maximum employment.

[1] https://www.federalreserve.gov/paymentsystems/coin_about.htm

[2] https://www.federalreserve.gov/aboutthefed/fedexplained/mone...


Isn't the idea that maximum employment, stable prices, and moderate long-term interest rates are somewhat in tension with each other though? Which would mean the mandate is to balance those three things – e.g. maximize employment to the extent possible while maintaining stable prices and moderate interest rates.


Yes, let's blame Powell; he pushed Trump into putting tariffs on everything and gutted the remaining manufacturing jobs.


And either way, voters around the world have made it extremely clear that price inflation annoys them more than high unemployment. So if Powell's options are to cause unemployment or allow continued inflation, I can see why he'd pick the former.


Powell doesn't decide or control the unemployment rate. His job is simply to react to it.


> Powell said explicitly a year or two ago that it was his goal to bump unemployment numbers to tame inflation. Kinda sick to think about, but, here we are I guess.

This is basic monetary economics via the Phillips curve (https://en.wikipedia.org/wiki/Phillips_curve). There's a strong relationship between the unemployment rate and the inflation rate. Of course, that is based on historical data from normal times, and since the 2010s financial crisis and the years of ZIRP that followed, we are now in very abnormal times, with Trump's tariffs and general fuckery of the economic system.


Those left coast elites.


I bet they rehearsed a dozen times and never failed this badly live. Got to give them props for keeping the live demos. Apple has neutered its demos so much that they're now basically 2-hour-long commercials.


The new Apple presentations are much more information dense, and tailored to the main (online) audience. They’re clearly better.


More dense, but less trustworthy. I don't think they would have pushed Apple Intelligence the way they did if there had been a live demo.


Live Apple demos were always held together with duct tape in the first place. That first "live" iPhone demo had a memorized sequence that Jobs needed to use to keep the whole phone OS from hard crashing.


During that first iPhone demo they also had a portable cell tower (cell on wheels) just off-stage to mimic a better signal strength than it was capable of. NYTimes write-up on the whole thing is worth the read [0].

[0] https://web.archive.org/web/20250310045704/https://www.nytim...


That _was_ worth it indeed--thanks :)


There was one demo where Steve Jobs told everyone to turn off their WiFi.


It was the demo of FaceTime on the iPhone 4, IIRC.


Even with that, live demos are far better than the hour-long produced demos.


They also force the developers to make it work, under threat of being fired, and, in the case of Steve Jobs's ire, under threat of being yeeted into the sun along with their ancestors and descendants.


They are boring infomercials now. The live audience used to keep it from feeling too prepackaged.


You gotta keep your infomercials engaging:

https://www.youtube.com/watch?v=DgJS2tQPGKQ

Microsoft really nailed the genre. (Although I learned just now while looking up the link that this one was an internal parody, never aired.)


And so boring. I would take Jobs presenting a live demo over any of this heavily produced stuff.

