TeMPOraL's comments | Hacker News

It depends on what parties you go to :). Business events are where you get logos of programming languages and various technologies; hacker events are where you get the more opinionated and artsy ones.

There's the common, and IMO boring, sociological take on AI/Singularity vs. religion: humans are so infantile they prefer worshipping advanced technology to understanding it. I.e.: we turn anything too difficult into religion.

Then there's the more interesting, speculative take: sufficiently advanced systems acquire properties associated with deities, such as being ever-present, acting in mysterious ways, and seeming omnipotent or at least omniscient in limited domains. Related, per Arthur C. Clarke, "any sufficiently advanced technology is indistinguishable from magic". I.e. we turn things into religion when it's pragmatic.

Then there's the speculative-fiction take: maybe this isn't the first time humanity has been here; maybe the propensity for religious practice and thinking is a consequence of humanity's previous, otherwise forgotten, dealings with advanced technology :).


> Yes, LLMs are literally just matmul. How can anything useful, much less intelligent, emerge from multiplying numbers really fast? But then again, how can anything intelligent emerge from a wet mass of brain cells? After all, we're just meat. How can meat think?

LLMs actually hint at an answer to that, but most people seem to be focusing too much on matmuls or (on the other end) specific training inputs to pay attention to where the interesting things happen.

Training an LLM builds up a structure in high-dimensional space, and inference is a way to query the shape of that structure. That's literally the "quality of quantity", reified. This is what all those matmuls are doing.
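
To make that concrete, here's a toy sketch (the shapes, names, and the single "layer" are made up for illustration; no real LLM is this simple): a forward pass is just a few matrix multiplications that look up a point in the learned structure and read an answer back out.

    # Toy sketch, not any real model: inference as nothing but matrix
    # multiplications querying a structure that training left behind.
    import numpy as np

    rng = np.random.default_rng(0)

    # Pretend these weights are the "structure in high-dimensional space".
    # In a real LLM they'd be learned; here they're random and tiny.
    W_embed  = rng.normal(size=(1000, 64))   # token id -> point in latent space
    W_hidden = rng.normal(size=(64, 64))     # one "layer" of the structure
    W_out    = rng.normal(size=(64, 1000))   # latent point -> next-token scores

    def next_token_scores(token_id: int) -> np.ndarray:
        """Query the structure: embed, transform, read out. All matmuls."""
        x = W_embed[token_id]               # where this token sits in the space
        h = np.maximum(x @ W_hidden, 0.0)   # matmul + nonlinearity
        return h @ W_out                    # matmul back to a score per token

    print(next_token_scores(42).shape)      # (1000,) - a score for every next token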

How can anything useful, much less intelligent, emerge from a bunch of matmuls or a wet mass of brain cells? That's the wrong level of abstraction. How can a general-purpose quasi-intelligence emerge from a stupidly high-dimensional latent space that embeds rich information about the world? That's the interesting question to ponder, and it starts with an important realization: it's not obvious why it couldn't.


I've been around long enough to have seen the saying "As soon as it works, no one calls it AI anymore" play out many times.

It is almost infuriating how dismissive people become of such amazing technologies once they understand them. If anything, progress is often marked by things becoming simpler rather than more complex. The SpaceX Raptor engine versions are such a cool example of that.


For better or worse, it's also what makes for reliable systems.

> cheap phones

Maybe this is where the difference comes from?

> Eventually I got fed up, and started using hand-me-down iPhones for second phones.

Wonder how it would go if you tried a hand-me-down Galaxy flagship - that would be a fairer comparison. Cheap Androids are not in the same category as iPhones.


Updates themselves are a security risk, because they're a first-party malware vector. Pick your poison.

Right. Ads in the dialer? I saw that once, with one of the Chinese brands. Samsung? Never.

In fact, I've been exclusively on Samsung phones for over a decade now, never had any experience remotely similar to what GP describes. My greatest annoyances are 1) Bixby, and 2) apps being pretty basic and missing obvious functionality (but then it's not like any other vendor offers better apps...).

I'm going to guess GP is in the US; I'm in the EU, and maybe phones for EU market come with less of this kind of bullshit.


> If this were the case, why isn’t most innovation done outside the US?

Are you measuring by where the work is done, or where the people signing their names on it live? Two different things.


> The Dead Internet and the triumph of quantity over quality loom

always_has_been.jpg

The Internet drowned in garbage back when they coined the term "content marketing". That was long before transformer models were a thing.

People have this weird impression that LLMs created a deluge of slop and reduced the overall quality of most text on-line. I disagree - the quality arguably improved, since SOTA LLMs write better than most people. The only thing that dropped is content marketing salaries. You were already reading slop; LLMs just let the publisher skip the human content-spouter middleman.


I’m old enough to remember people complaining about the exact same thing, except they called it Eternal September.

Back in my day, we lamented the loss of bang paths for email... and you had to pay Robert Elz to bring in a newsgroup because munnari connected Australia to the world...

Yeah, I'm old.


I am too, and the wisdom of old age has taught me: once Eternal September hits a community, it's time to walk away.

I dunno if HN is at that point yet, but it's certainly creeping closer compared to where it was 5-10 years ago. Reddit passed the point of no return within the last few years.


Eternal September is a different, unrelated problem. Slop isn't the next generation of clueless kids - it's marketing communication.

Right, but they're flying them close on purpose - the point is, at first glance it looks feasible, and the close-formation aspect has enough benefits that it's worth exploring further. For me, it's the first time I've seen the idea of exploiting constellations for benefit within the system (here, communication between satellites), rather than externally (synthetic aperture telescopes/beaming, or just more satellites = lower orbit = cheaper).
