
This is so true. When you get DSA wrong, you end up needing insanely complex system designs to compensate -- and being great at Testing just can't keep up with the curse of dimensionality from having more moving parts.


Those human imperfections likely decrease randomness - for example, cards that started adjacent are more likely to remain adjacent than by strict chance.
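
A quick, hedged sketch to illustrate (the "sloppy riffle" model below is my own toy assumption, not anything from the article): simulate a few imperfect riffles and count how many originally adjacent pairs stay adjacent, versus a uniformly random shuffle.

    import random

    def sloppy_riffle(deck):
        # Toy model of an imperfect human riffle (an assumption for illustration):
        # cut near the middle, then drop small clumps of 1-3 cards from each half
        # instead of interleaving one card at a time.
        cut = len(deck) // 2 + random.randint(-3, 3)
        piles = [deck[:cut], deck[cut:]]
        out = []
        while piles[0] or piles[1]:
            # Pick a pile with probability proportional to its remaining size.
            p = 0 if random.random() < len(piles[0]) / (len(piles[0]) + len(piles[1])) else 1
            take = random.randint(1, 3)
            out.extend(piles[p][:take])
            del piles[p][:take]
        return out

    def few_sloppy_riffles(deck, rounds=3):
        for _ in range(rounds):
            deck = sloppy_riffle(deck)
        return deck

    def adjacent_pairs_kept(shuffled):
        # How many originally adjacent pairs (c, c+1) end up next to each other.
        pos = {card: i for i, card in enumerate(shuffled)}
        return sum(abs(pos[c] - pos[c + 1]) == 1 for c in range(len(shuffled) - 1))

    def average(shuffler, n=52, trials=5000):
        return sum(adjacent_pairs_kept(shuffler(list(range(n)))) for _ in range(trials)) / trials

    print("3 sloppy riffles:", average(few_sloppy_riffles))                  # noticeably above chance
    print("uniform shuffle :", average(lambda d: random.sample(d, len(d))))  # ~2 preserved pairs for n=52

The gap between the two numbers is the "decreased randomness" in question.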


They most definitely decrease randomness.

But I guess the article’s point is that human imperfections offset that with lower correlated failure modes.


This all day. Programmer since C64, C++, Java, F#, Python, JavaScript, and everything in between. Code was never the point, but it wasn't just commerce either - it's fun making machines do things they couldn't before. AI is an S-tier upgrade to that mission.


Feels like cultivating acceptance and indifference to your own entanglements is the most isolationist thing you can actually do. To be entangled is to be biased about what's happening to you... do we think the crocodile was indifferent to the escape of his prey, or to being culled in an act of revenge?

Anyway, if folks enjoy this theme I recommend Scavengers Reign, which does a beautiful job of illustrating struggle with biological entanglement.


Drawing on your other comment about spurious correlations, might there be a more direct mathematical test for an unexpectedly high number of aligned correlations?
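
One candidate, as a hedged sketch of my own framing (not something from the thread): a permutation test on the count of strong pairwise correlations -- shuffle each column independently to break any real structure, recount, and see where the observed count sits in that null distribution.

    import numpy as np

    def count_strong_correlations(X, threshold=0.5):
        # X: (n_samples, n_features). Count feature pairs with |Pearson r| >= threshold.
        r = np.corrcoef(X, rowvar=False)
        iu = np.triu_indices_from(r, k=1)
        return int(np.sum(np.abs(r[iu]) >= threshold))

    def permutation_pvalue(X, threshold=0.5, n_perm=1000, seed=0):
        # Shuffle each column independently to destroy real cross-column structure,
        # then ask how often chance alone yields at least as many strong correlations.
        rng = np.random.default_rng(seed)
        observed = count_strong_correlations(X, threshold)
        null_counts = [
            count_strong_correlations(
                np.column_stack([rng.permutation(col) for col in X.T]), threshold)
            for _ in range(n_perm)
        ]
        p = (1 + sum(c >= observed for c in null_counts)) / (1 + n_perm)
        return observed, p

    # On pure noise the p-value should be large:
    X = np.random.default_rng(1).normal(size=(200, 12))
    print(permutation_pvalue(X))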


Trust was its own reason. It's useful for the whole world to have a currency and business environment that operates by rules, even when the rules aren't perfect or fair.

That environment isn't being outcompeted by better, more fair rules - it's just getting vandalized for a few people's gain, and creating risk for everyone else.


I'm totally with you on the value prop at the time we signed up. I was more surprised that it sounds like you are reluctant to delete now, when the company is going through an unpredictable transition.

Did I get that right? If so, is there an ongoing value you want to maintain, or is it more out of respect for the organization that provided you value in the past?


Nice and provocative read! Is it fair to restate the argument as follows?

- New tech (e.g., RL, cheaper inference) is enabling agentic interactions that fulfill more of the application layer.

- Foundation model companies realize this and are adapting their business models by building complementary UX and withholding API access to integrated models.

- Application layer value props will be squeezed out, disappointing a big chunk of AI investors and complementary infrastructure providers.

If so, any thoughts on the following?

- If agentic performance is enabled by models specialized through RL (e.g. Deep Research's o3+browsing), why won't we get open versions of these models that application providers can use?

- Incumbent application providers can put up barriers to agentic access of the data they control. How does their data incumbency and vertical specialization weigh against the relative value of agents built by model providers?


Hi. Yes this is wholly correct.

On the second set of points:

* Well, I'm very much involved in making more open models: pretrained the first model on free and open data without copyright issues, released the first version of GRPO that can run on Google Colab (based on Will Brown's work). Yet, even then, I have to be realistic: open source RL has a data issue. We don't have the action sequence data nor the recipes (emulators) that could make it possible to replicate, even on a very small scale, what big labs are currently working on.

* Agreed on this, and I'm seeing this dynamic already in a few areas. Now, it's still going to be uphill, as some of the data can be bought and advanced pipelines can shortcut some of the need for it, since models can be trained directly on simulated environments.


Thanks for the reply - and for the open AI work!

> We don't have the action sequence data nor the recipes (emulators) that could make it possible to replicate even on a very small scale what big labs are currently working on.

Sounds like an interesting opportunity for application-layer incumbents that want to enable OSS model advancement...


Answering the first question, if I understand it correctly.

The missing piece is data, obviously. With search and code, it's easier to get the data, so you get such specialized products. What is likely to happen is:

1/ Many large companies work with some early design partners to develop solutions. They have the data + subject matter expertise, and the design partners bring in the skills. This way we see a new wave of RL agent startups grow. My guess is that this engagement would look different compared to a typical SaaS engagement. Some companies might do it in-house; some won't, because maintaining such systems is a task.

2/ These companies open source part of their dataset, which can be consumed by OSS devs to create better agents. This is more common in tech, where a path to monopoly is to commoditize the immediately previous layer. Might play out elsewhere too, though I do not have a high degree of confidence here.


Why will application layer value props be squeezed out? And if so, where does value accrue going forward in an RL first world?


I think the discussion here is confusing the algorithm for the output. It's true that diffusion can rewrite tokens during generation, but it is doing so for consistency with the evolving output -- not "accuracy". I'm unaware of any research which shows that the final product, when iteration stops, is less likely to contain hallucinations than with autoregression.

With that said, I'm still excited about diffusion -- if it offers different cost points, and different interaction modes with generated text, it will be useful.


Your (3) is beautifully said... and to prove the point, we are perfectly able to make computational systems that can manipulate symbols in the same way that Gödel did to verify the incompleteness theorem. Humans and computers are both able to do work within either logical system, and incapable of doing work that crosses between them.

Everything that makes "truth" slippery makes "intelligence" and "consciousness" even more slippery and subjective. This is why AGI has such negative impact on AI discourse -- the cause only advances when we focus on improving at measurable tasks.


Indeed, proof-checking tools like Lean can reason about their own logical systems and prove their own incompleteness, but I doubt Penrose would conclude that they are not formal systems as a result.

I like to think people can still make progress on questions of intelligence and consciousness though. Michael Levin's work comes to mind, for instance. Science is just at very early stages of understanding =)

