Well, maybe it isn't so insane?

Server-side rendering has been with us since the beginning, and it still works great.

Client-side page manipulation has its place in the world, but there's nothing wrong with the server sending page fragments, especially when you can work with a nice tech stack on the backend to generate them.
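
For illustration, a fragment endpoint can be trivially small. Here's a minimal sketch using Node's built-in http module (the /cart/items path and the markup are made up for the example); the client just swaps the returned HTML into the page, e.g. with htmx or a small fetch() handler:

    import { createServer } from "node:http";

    const server = createServer((req, res) => {
      if (req.url === "/cart/items") {
        // Send just the fragment, not a full document; all the
        // templating logic stays on the server.
        res.writeHead(200, { "Content-Type": "text/html" });
        res.end('<ul id="cart-items"><li>Widget x2</li></ul>');
      } else {
        res.writeHead(404);
        res.end();
      }
    });

    server.listen(3000);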


Sure. The problem with some frameworks is that they attach server events to things that should be handled on the front end without a round trip.

For instance, I've seen pages with a server-linked HTML button that would open a details panel. That button should open the panel without sending the event and waiting for a response from the server, unless there is a very, very specific reason for it.
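
Something like this handles it entirely client-side (a sketch in TypeScript; the element ids are hypothetical):

    // Toggle the panel locally: no server round trip needed.
    const button = document.querySelector<HTMLButtonElement>("#show-details");
    const panel = document.querySelector<HTMLElement>("#details-panel");

    button?.addEventListener("click", () => {
      panel?.toggleAttribute("hidden");
    });

(Or skip the script entirely and use a native <details> element.)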


In a work context, for me at least, this class of reply can actually be pretty useful. It indicates somebody already minimally investigated a thing and may have at least some information about it, but they're hedging on certainty by letting me know "the robots say."

It's a huge asterisk to avoid stating something as a fact, but it indicates something that could/should be explored further.

(This would be nonsense if they sent me an email or wrote an issue up this way or something, but in an ad-hoc conversation it makes sense to me)

I think this is different than on HN or other message boards; it's not really used by people to hedge here. If they don't actually personally believe something to be the case (or have a question to ask), why are they posting anyway? No value there.


> can actually be pretty useful. It indicates somebody already minimally investigated a thing

Every time this happens to me at work one of two things happens:

1) I know a bit about the topic, and they're proudly regurgitating an LLM about an aspect of the topic we didn't discuss last time. They think they're telling me something I don't know, while in reality they're exposing how haphazard their LLM use was.

2) I don't know about the topic, so I have to judge the usefulness of what they say based on all the times that person did scenario Number 1.


Yeah, if the person doing it is smart, I would trust that they used a reasonable prompt and ruled out flagrant BS answers. Sometimes the key thing is just knowing the name of the thing you're looking for. It's just as good/annoying as reporting what a Google search gives for the answer. I guess I assume people will mostly do the AI query/search and then decide to share the answer based on how good or useful it seems.

I really like this framing.

A lot of people mysticise them or want to quibble about intelligence, and then we end up out in the "no true Scotsman" weeds.

We know what they are, how they work, what they generate. Let them be their own thing, do not anthropomorphize them.

The thing we don't understand so well is humans, really.


> Like, is there truly an agentic way to go 10x or is there some catch? At this point while I'm not thrilled about the idea of just "vibe coding" all the time, I'm fine with facing reality.

Below is based on my experience using (currently) mostly GPT-5 with open-source code assistants.

For a new project with straightforward functionality? I think you (and "you" being "basically anybody who can code at all") can probably manage to go 10x the pace of a junior engineer of yesteryear.

Things get a lot trickier when you have complex business logic to express and backwards compatibility to maintain in an existing codebase. Writing out these kinds of requirements in natural language is its own skillset (which can be developed), and this process takes time in and of itself.

The more confusing the requirements, the more error-prone the process becomes, though. The model can do things "correctly" but oops, maybe you forgot something in your description, and now the whole thing will be wrong. And the fact that you didn't write the code means that you missed out on your opportunity to fix / think about stuff in the first pass of implementation (i.e. you need to seriously review stuff, which also slows you down).

Sometimes iterating over English instructions will take longer than just writing/expressing things in code from the start. But sometimes it will be a lot faster too.

Basically the easy stuff is way easier but the more complex stuff is still going to require a lot of hand holding and a lot of manual review.


Yes, and as much as I hate to defend modern UI designers, I believe the icons in the menus here are actually extremely helpful (even when duplicated).

In the first example, when I want to find the option to add or delete a row in this massive menu, the icons clearly convey the purpose. I can instantly filter a huge list of possibilities down to a few relevant entries.


Yes; I think it depends a lot on how accepted the icon is. Every few months when I have to open ST's CubeProgrammer, my brain substantially deteriorates because they don't use text and the icons for their main tabs are not always understandable to me. (And the thing's otherwise a relic from 20+ years ago.)

X, volume & mute icon, disk icon, upload/download icon -- these are all fine and good; you don't need to spell those out for me because their use is so widespread that even if I didn't know what they meant, it'd be very useful to learn. I have no idea what a generic "integrated circuit" icon means, though, and I doubt anyone will use it elsewhere to mean the same thing, so I just click around randomly until I find what I'm looking for, like I'd do with the previously-unlabeled 6-switch panel in my living room where the positioning of the switches has no obvious relation to the physical placement of the ceiling lights.

I think Apple's menu actually shows exactly what I want and expect: show icons if they'll help me, don't show them if they won't. Though in some cases, I think Apple could apply some more icons (like for "Stop", there are a few good choices for that).


I'm not sure this is bad? It's still maintained, and it isn't like there are frequent revolutions in UI design - if it works, it works.

Slow and boring is a pretty nice place to be.


It doesn't really work for me. The first thing I always do with it is install a taskbar extension.

Lest we forget, they initially rose to prominence by being cheaper than the existing solutions, not better, and I suppose this is a tradeoff a lot of their customers are willing to make.

Is it actually worse than the status quo though? I'm not so sure.

I hate this era of consolidation but Warner and HBO have already degraded, so this may be the least bad outcome here.


I don't want you to think I'm picking on you, but I've been thinking about the MBA-bullshittism "consolidation" for a while. It's really a euphemism for "trust formation", right? It seems like we fought tooth and nail just 100 years ago to set up real antitrust laws, with real teeth... and now every industry is "consolidated". What's going on in health and seed and cars makes me seethe.

If you want some considered thoughts on consolidation and antitrust implications, Cory Doctorow's writings are interesting. Some relevant examples:

"Hate the player AND the game (10 Sep 2025)" https://pluralistic.net/2025/09/10/say-their-names/#object-p...

"The one weird monopoly trick that gave us Walmart and Amazon and killed Main Street (14 Aug 2024)" https://pluralistic.net/2024/08/14/the-price-is-wright/#enfo...

"End of the line for Reaganomics (13 Aug 2021)" https://pluralistic.net/2021/08/13/post-bork-era/#manne-down

"10 Oct 2022 Antitrust is – and always has been – about fairness" https://pluralistic.net/2022/10/10/play-fair/#bedoya

And his archives for more:

https://pluralistic.net/tag/monopoly/

https://pluralistic.net/tag/antitrust/


The laws only exist if people are willing to apply them.

Yeah, HBO has moved decidedly down market.

Apple is at least trying to fill their old niche. It seems quite telling that the only company trying to do the whole “prestige TV” thing is a kind of side project for a hardware company. At least nobody can buy them, though.


Apple has a family-friendly bent, which is sadly the same thing HBO has been degraded by. Disneyfied. Adult-oriented, HBO-quality shows don't exist anymore, do they?

Apple seems to have a no-nudity policy more or less (or at least, minimal nudity).

I dunno. Sex is part of human existence so it shouldn’t be off-limits for media IMO. But the sort of perfunctory thing where every show on Netflix or HBO has to have some nudity in the first couple episodes was a bit annoying. I don’t mind the lack of nudity in Apple’s stuff. There’s a balance that Apple falls on the “overly conservative” side of, though.

What’s adult mean to you? Nudity, violence, I dunno. Severance considers things like self-identity, the fake personalities, and the fake social constructs of our workplaces… it seems more adult to me than a gangster or cowboy story.

I also quite like Pluribus, it feels like actual sci-fi (in the same way 3 Body Problem from Netflix does, actually—legit sci-fi, not action heroes in space).


> Apple

do we really want big tech to also control our media?


> want

I described what is happening, not what I want to have happen.

Anyway it is entertainment media, not news media, so less of a big deal. But yeah it would be nice if somebody else tried.


Warner Bros has had their best summer in years (Sinners, Superman, etc). HBO still makes highly regarded prestige TV series (The Last Of Us, Task, etc). This is just false.

That video game/superhero IP adaptations are considered "prestige TV" says more about diminished creative expectations than about HBO continuing to uphold its traditional high standards.

Nothing against people who like them, to each his own. But the throughput of quality programming out of HBO has dropped off a cliff through its multiple changes in ownership.


Sure, and you can point out the stats all day long, but you're not going to defeat irrational parental concerns with this One Weird Trick.

So much of the way we treat education is based on vibes rather than reality.


Gemini "fast" provides this response:

> Yes, 2026 is the next calendar year. The current year is 2025.

I wonder how much self-inflicted brand damage Google is doing by using its crappier model and inserting it into all search results?

If they can't afford to run a decent model for their flagship product, maybe they shouldn't be doing anything at all...


"Brand damage" is a pre-2020s way of thinking. Brands are openly degrading themselves for short term results now and have found that it's sufficiently tolerated.

> sufficiently tolerated.

Just to point out that if consumers tolerate the changes, then by definition, there isn't brand damage.

What that tolerance level is cannot be known with 100% confidence ahead of time, and I would argue that a business should have principles that build long-term customer value (and hence business value) rather than being short-term focused.


Right, I'm saying brand damage isn't so much a thing anymore, with the way so much has enshittified and those brands survive. There's still some nonzero risk to it, but certainly where tech companies are involved it's not as feared as it traditionally has been.

I'm sad to say it works on me. Sometimes I know I want an AI response. Instead of going to an AI provider, I just type the prompt into the URL bar and hit enter to search Google, because I know I'll get Google's AI blurb.

I used to write my search query as search terms; now I write it as an AI prompt.


See, this makes perfect sense... if the thing were actually reliable enough, but the current implementation is wrong a disturbingly high percent of the time.

If anything they should be throwing more money at it right now to get people hooked, then use a cheaper model later once people have already incorporated it into their workflows.

