
Your response is like seeing the cops go to the wrong house, kick in your neighbor's door, and break the ornaments in their entryway, and then saying to yourself, "Good. I hate yellow, and would never have any of that tacky shit in my house."

As the first sentence of your comment indicates, the fact that it's supported and there for people to use doesn't result (and hasn't resulted) in you being forced to use it in your projects.


Yes, but software complexity, and browser complexity especially, has ballooned enormously over the years. And while XSLT probably plays only a tiny part in that, it's likely embedded in every Electron app that takes 500 MB to do what could be done in 1 MB; it makes it incrementally harder to build and maintain a competing browser; etc., etc. It's not zero cost.

I do tend to support backwards compatibility over constant updates and breakage, and over the needless hoops that e.g. Apple often puts its developers through. But having grown up and worked in the overexuberant era of XML-for-everything, 1000-page semantic-web specifications, and OOP AbstractFactoryTemplateManagerFactories, I'm glad to put some of that behind us.

If that makes me some kind of Gestapo, so be it.


Point to the part of your comment that has any-fucking-thing to do with the topic at hand (i.e. engages with the actual substance of the comment that it's posted as a reply to). Your comment starts with "Yes but", as if to present itself as a rebuttal or rejoinder to something that was said, but then proceeds into total non-sequitur. It's an unrestrained attempt to change the subject and makes for a not-very-hard-to-spot type of misdirection.

Your neighbors' ugly yellow tchotchkes have in no way forced you—nor will they ever force you—to ornament your house with XSLT printouts.


Alright, you're extremely rude and combative so I'll probably tap out here.

But consider if the "yellow tchotchkes" draw some power from my house, produce some stinky blue smoke that occasionally wafts over, and require a government-paid maintenance person to occasionally stop by and work on them, whom I partly pay for with my taxes.

That's my point.


> Alright, you're extremely rude

In contrast to poisoning the discussion with subtle conversational antimatter while wearing a veneer of amiability. My comments in this thread are not insidiously off-topic non-replies presented as somehow relevant in apposition to the ones that precede them.

> consider if the "yellow tchotchkes" draw some power from my house, produce some stinky blue smoke that occasionally wafts over, requires a government-paid maintenance person to occasionally stop by and work on, that I partly pay for with my taxes

Anyone making the "overwork the analogy" move automatically loses, always. Even ignoring that, nothing in this sentence even makes any sense wrt XSLT in the browser or the role of browser makers as stewards of stable Web standards. It's devoid of any cogent point and communicates no insight on the topic at hand.


Remove crappy JS APIs and other web tech first before deprecating XSLT, which is a true-blue public standard. For folks who don't enable JS, XSLT is a life-saver for XML data.

If we're talking about removing things for security reasons, the ticking time bomb that is WebUSB seems to me to top the list of things that are dangerous and not actually standards (it is Chrome-only), and yet a bunch of websites treat it as a good reason to be Chrome-only.

But XSLT can be replicated with JavaScript and the reverse is, sadly, untrue.

So if only one needed to go, it seems obvious which it should be.


Where's the best collection of (or entry point to) what you've written about Chrome's use of GNOME's XML libraries, the maintenance burden, and the dearth of offers by browser makers to foot the bill?

> When that solution isn't wanted, the polyfill offers another path.

A solution is only a solution if it solves the problem.

This sort of thing, basically a "reverse X/Y problem", is an intellectually dishonest maneuver, where a thing is dubbed a "solution" after just, like, redefining the problem to not include the parts that make it a problem.

The problem is that there is content that works today that will break after the Chrome team follows through on their announced plans of shirking on their responsibility to not break the Web. That's what the problem is. Any "solution" that involves people having to go around un-breaking things that the web browser broke is not a solution to the problem that the Chrome team's actions call for people to go around un-breaking things that the web browser broke.

> As mentioned previously, the RSS/Atom XML feed can be augmented with one line, <script src="xslt-polyfill.min.js" xmlns="http://www.w3.org/1999/xhtml"></script>, which will maintain the existing behavior of XSLT-based transformation to HTML.

Oh, yeah? It's that easy? So the Chrome team is going to ship a solution where when it encounters un-un-fucked content that depends on XSLT, Chrome will transparently fix it up as if someone had injected this polyfill import into the page, right? Or is this another instance where well-paid engineers on the Chrome team who elected to accept the responsibility of maintaining the stability of the Web have decided that they like the getting-paid part but don't like the maintaining-the-stability-of-the-Web part and are talking out of both sides of their mouths?


> So the Chrome team is going to ship a solution where when it encounters un-un-fucked content that depends on XSLT, Chrome will transparently fix it up as if someone had injected this polyfill import into the page, right?

As with most things on the web, the answer is "They will if it breaks a website that a critical mass of users care about."

And that's the issue with XSLT: it won't.


> As with most things on the web, the answer is "They will if it breaks a website that a critical mass of users care about."

This is a (poor) attempt at gaslighting/retconning.

The phrase "Don't break the Web" is not original to this thread.

(I can't say I look forward to your follow-up reply employing sleights of hand like claims about how stuff like Flash that was never standardized, or the withdrawal of experimental APIs that weren't both stable/finalized and implemented by all the major browsers, or the long tail of stuff on developer.mozilla.org that is marked "deprecated" (but nonetheless still manages to work) are evidence of your claim and that browser makers really do have a history of doing this sort of thing. This is in fact the first time something like this has actually happened—all because there are engineers working on browsers at Google (and Mozilla and Apple) who are either confused about how the Web differs from, say, Android and iOS, or resentful of their colleagues who get to work on vendor SDKs where the API surface area is routinely rev'd to remove whatever they've decided no longer aligns with their vision for their platform. That's not what the Web is, and those engineers can and should go work on Android and iOS instead of sabotaging the far more important project of attending to the only successful attempt at a vendor-neutral, ubiquitous, highly accessible substrate for information access that no one owns and that doesn't fuck over the people who rely on it being stable.)


Mozilla's own site expounds on "Don't Break the Web" as "Web browser vendors should be able to implement new web technologies without causing a difference in rendering or functionality that would cause their users to think a website is broken and try another browser as a result."

There is no meaningful risk of that here. The percentage of web users who are trying to understand content via XSLT'd RSS is nearly zero, and for everyone who is, there is either a polyfill or server-side rendering to correct the issue.

> and those engineers can and should go work on Android and iOS

With respect: taken to its logical conclusion, that would be how the web as a client-renderable system dies and is replaced by Android and iOS apps as the primary portal to interacting with HTTP servers.

If the machine is so complex that nobody wants to maintain it, it will go unmaintained and be left behind.


> Mozilla's own site expounds on "Don't Break the Web" as

I'm a former Mozillian. I don't give a shit how it has been retconned by whomever happened to be writing copy that day, if indeed they have—it isn't even worth bothering to check.

"Don't break the Web" means exactly what it sounds like it means. Anything else is a lie.

> If the machine is so complex that nobody wants to maintain it, it will go unmaintained

There isn't a shortage of people willing to work on Web browsers. What there is is a finite amount of resources (in the form of compensation to engineers), and a set of people holding engineering positions at browser companies who, by those engineers' own admission, do not satisfy the conditions of being both personally qualified and equipped to do the necessary work here. But instead of stepping down from their role and freeing up resources to be better allocated to those who are equipped and willing, they're keeping their feet planted for the simple selfish reason that doing otherwise might entail a salary reduction and/or diminished power and influence. So they sacrifice the Web.

Fuck them.


Sounds like you're volunteering to maintain the XSLT support in Firefox. That's great! That means when Chrome tries to decommission it, users who rely upon it will flock to Firefox instead because it's the only browser that works right with the websites they use.

Firefox ascendancy is overdue. Best of luck!


Do you still beat your wife?

Don't understand the question, but I do wish you well in your future endeavors!

> They all agreed to remove it.

All those people suck, too.

Were you counting on a different response?

> XSLT is extremely unpopular and worse than JS in every way

This isn't a quorum of folks torpedoing a proposed standard. This is a decades-old stable spec and an established part of the Web platform, and welching on their end of the deal will break things, contra "Don't break the Web".


You can continue to use XSLT server-side to emit HTML if you are deeply, deeply concerned about the technology.


I don't think that applies here (especially since I didn't even ask a question).

"I'm sad it's going away in the client!"

"So move it to the server, and the end-user will get essentially the same experience."

Am I missing something here?


> especially since I didn't even ask a question

Oh, that's the operative part? Accept my apologies. What I meant to say is, "I can see that you're deeply, deeply concerned about being able to continue beating your wife. I think you should reconsider your position on this matter."

No question mark, see? So I should be good now.

> Am I missing something here?

Probably not. People who engage in the sort of underhandedness linked to above generally don't do it without knowing that they're doing it. They're not missing anything. It's deliberate.

So, too, I would guess, is the case with you—particularly since your current reply is now employing another familiar underhanded rhetorical move. Familiar because I already called it out within the same comment section:

> The problem is that there is content that works today that will break after the Chrome team follows through on their announced plans of shirking on their responsibility to not break the Web. That's what the problem is. Any "solution" that involves people having to go around un-breaking things that the web browser broke is not a solution to the problem that the Chrome team's actions call for people to go around un-breaking things that the web browser broke.

<https://news.ycombinator.com/item?id=45824392>


I seem to have personally offended you, and for that I am sorry.

This seems personal to you, so I'll bow out of further discourse on the subject, as it is not particularly personal to me. The websites I maintain use a framework to build RSS output, and the framework will be modified to do server-side transformation or load the polyfill as needed to provide a proper HTML display experience for end-users who want that.


I'm not involved in any capacity with the development or use of Jupyter—I think ipynb is fundamentally flawed at a deep level, starting with its (I)Python roots—but this company's framing of their product as "the successor to Jupyter notebook" comes across as passive aggressive at best and misleading at worst. What is their relationship to Jupyter besides building a Jupyter alternative?

What are some of the flaws surrounding IPython and ipynb in particular?

I was going to bring up the same claim—of it being a "Letter, from Robert Hooke to Gottfried Wilhelm Leibniz". It's clearly not written with that intent.

While reading, I first took it to be a journal entry. The penmanship also supports this. But the second person "you" at the end is a confounding detail. A journal entry in the form of a letter to himself is possible, but doesn't seem plausible.

The word you've labelled "[deviate?]" in your copy is definitely not "deviate" in the manuscript. I'm certain that the first letter is "R", and the second-to-last letter is probably a "d" followed by "e" (compare to "undenyable" and "persuade"). The letter following "R" could be "i", but really could be anything. It's unfortunate that it's not as straightforward as just crafting a regex and grepping /usr/share/dict/words, because whatever Hooke meant, it's likely to be an archaic spelling. "Recede" spelled as "Ricede" works grammatically, but I don't think that's it, either.
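If the spelling were modern, the search itself would be trivial. A sketch (the length bounds in the pattern are my guess):

    # First letter "R", ends in "de", a few unknown letters in
    # between. Case-insensitive to be safe.
    grep -iE '^r[a-z]{2,4}de$' /usr/share/dict/words

But, as said, an archaic spelling won't be in a modern word list, which is the rub.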


FYI I agree with you on that word: letter by letter it looks to me like "Roeade", but I can't figure out what English word that would be, either.

28, not 20.

Imagine if we had a system where you could just deposit the source code for a program you work on into a "depository". You could set it up so your team could "admit" the changes that have your approval, but it doesn't allow third parties to modify what's in your depository (even if it's a library that you're using that they wrote). When you build/deploy your program, you only compile/run third-party versions that have been admitted to the depository, and you never just eagerly fetch other versions that purport to be updates right before build time. If there is an update, you can download a copy and admit it to your repo at the normal time that you verify that your program actually needs the update. Even if it sounds far-fetched, I imagine we could get by with a system like this.

You're describing a custom registry. These exist IRL (e.g. JFrog Artifactory). Useful for managing allow-listed packages which have met whatever criteria you might have (e.g. CVE-free, based on your security tool of choice). Use of a custom registry, and a sane package manager (pnpm, not npm) and its lockfile, will significantly enhance your supply-chain security.

No. I am literally describing bog-standard use of an ordinary VCS/SCM where the code for e.g. Skia, sqlite, libpng, etc. is placed in a "third-party/" subdirectory. Except I'm deliberately using the words "admit" and "depository" here instead of "commit" and "repository" in keeping with the theme—of the widespread failure of people to use SCMs to manage the corresponding source code required to build their product/project.
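In concrete terms, it's nothing more exotic than this (a sketch; the library, version, and URL are illustrative):

    # Vendor a known-good copy of a third-party library into the tree
    # and commit ("admit") it like any other source file.
    curl -LO https://sqlite.org/2024/sqlite-amalgamation-3450000.zip
    unzip sqlite-amalgamation-3450000.zip -d third-party/
    git add third-party/sqlite-amalgamation-3450000
    git commit -m "third-party: admit SQLite 3.45.0 amalgamation"

Builds then compile exactly what's in the tree; nothing is fetched at build time.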

Overlay version control systems like NPM, Cargo, etc. and their harebrained schemes involving "lockfiles" to paper over their deficiencies have evidently totally destroyed not just folks' ability to conceive of just using an SCM like Git or Mercurial to manage source the way they're made for, without introducing a second, half-assed, "registry"-dependent VCS into the mix, but also the ability to recognize when a comment on the subject is dripping in the most obvious, easily detectable irony.


Yeah, people invented the concept of packages and package management because they couldn’t conceive of vendoring (which is weird considering basically all package managers make use of it themselves) and surely not because package management has actual benefits.

Maybe in a perfect world, we’d all use a better VCS whose equivalent of submodules actually could do that job. We are not in that world yet.


Do you understand the reasons, and are you able to clearly articulate them? Are you able to describe the tangible benefits in the form of a set of falsifiable claims—without resorting to hand-waving or appeals to the perceived status quo or scoffing as if the reasons are self-evident and not in question or subject to scrutiny?

I'm not altogether surprised at the negative reaction to this comment, but I am at a loss to really get into the head of the reader who is so unhappy with it. Let's give it another shot:

You wrote—alluding to, but without actually stating—the reasons why registries and package managers for out-of-tree packages that subvert the base-level VCS were created:

> Yeah, people invented the concept of packages and package management because they couldn’t conceive of vendoring (which is weird considering basically all package managers make use of it themselves) and surely not because package management has actual benefits.

This is a sarcastic comment. It uses irony to make the case that the aforementioned trio (packages, package managers, package registries) were created for good reason (their "actual benefits").

Do you know what the reasons are? Can you reply here stating those reasons? Be explicit. Preferably, putting it into words in a way that can be tested (falsified)—like the way the claim, "We can reduce the size of our assets on $PROJECT_Z by storing the image data as PNG instead of raw bitmaps" is a claim that lends itself to being tested/falsified—and not just merely alluding to the good reasons for doing $X vs $Y.

What, specifically, are the reasons that make these out-of-tree, never-committed packages (and the associated infrastructure involving package registries, etc.) a good strategy? What problem does this solve? Again: please be specific. Can it be measured quantitatively?


This is exactly what Swift Package Manager does. No drama in the Swift Package world AFAIK.

Does the lockfile not solve this?

Not really, because you can't easily see what changed when you get a new version. When you check third-party code into your VCS, everything that changed in a new version is easily visible with `git diff` before you commit the new changes. With a lockfile, the only diff is that the hash changed.
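Illustrated (the path and the lockfile excerpt are made up for the example):

    # Vendored: the update is itself an ordinary, reviewable diff.
    git diff -- third_party/libfoo/

    # Lockfile: the same update reviews as an opaque pointer swap:
    #   -    "version": "1.2.3",
    #   -    "integrity": "sha512-AAAA...",
    #   +    "version": "1.2.4",
    #   +    "integrity": "sha512-BBBB...",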

Not if you use git submodules, which is how most people would end up using such a scheme in practice (and the handful of people who do this have ended up using submodules).
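For example (repo URL and names are hypothetical):

    # The dependency is pinned to an exact commit...
    git submodule add https://example.com/libfoo.git third_party/libfoo
    git -C third_party/libfoo checkout 1a2b3c4
    git add .gitmodules third_party/libfoo
    git commit -m "Pin libfoo"
    # ...but `git diff` on a later update shows only the changed
    # "Subproject commit" hash, not the upstream source changes.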

Go-style vendoring does dump everything into a directory but that has other downsides. I also question how effectively you can audit dependencies this way -- C developers don't have to do this unless there's a problem they're debugging, and at least for C it is maybe a tractable problem to audit your entire dependency graph for every release (of which there are relatively few).

Unfortunately IMHO the core issue is that making the packaging and shipping of libraries easy necessarily leads to an explosion of libraries with no mechanism to review them -- you cannot solve the latter without sacrificing the former. There were some attempts to crowd-source auditing as plugins for these package managers but none of them bore fruit AFAIK (there is cargo-audit but that only solves one part of the puzzle -- there really needs to be a way to mark packages as "probably trustworthy" and "really untrustworthy" based on ratings in a hard-to-gamify way).


The problem is that not enough people care about reviewing dependencies’ code. Adding what they consider noise to the diff doesn’t help much (especially if what you end up diffing is actually build output).

What is "this"?

this = deps getting updated when you don't want or don't expect them to

We do not appear to have a shared understanding of the problem to be solved.

This is because you have redefined the problem—partly as a way of allowing you to avoid addressing it, and partly to allow you to speak of lockfiles as a solution to that problem. See <https://news.ycombinator.com/item?id=45824392>.

Lockfiles do not solve the problem. They are the problem.

This is what I wrote:

> Overlay version control systems like NPM, Cargo, etc. and their harebrained schemes involving "lockfiles" to paper over their deficiencies have evidently totally destroyed […] folks' ability to conceive of just using an SCM like Git

That's the problem that I'm talking about—lockfiles. Or, more specifically: the insistence on practicing a form of version control (i.e. this style of dependency management that the current crop of package managers tell people is the Right Way to do things) that leads to the use of lockfiles—for the sole purpose of papering over the issues that were only introduced by this kind of package manager—and for people to be totally unaware of the water they're swimming in under this arrangement.

Everyone is familiar with the concept of "a solution in need of a problem". That's exactly what lockfiles are.


Huh? "Just use git" is kind of nonsensical in the context of this discussion.

Oh, okay.

Now you have the opposite problem, where a vulnerability could be found in one of your dependencies but you don't get the fix until the next "normal time that you verify that your program actually needs the update".

If a security issue is found, that creates the "normal time".

That is, when a security issue is found, regardless of supply chain tooling one would update.

That there is a little cache/mirror thing in the middle is of little consequence in that case.

And for all other cases the blessed versions in your mirror are better even if not latest.


This is how most software used to work before internet package managers, and it turns out that the same people who aren't good at checking their dependencies before automatically upgrading are also not good at constantly monitoring their dependencies for vulnerabilities.

You are describing BSD ports from the '90s. FreeBSD ports date back to 1993.

Also, Gentoo, dating back to 2003.

And today.

So, vendoring?

That's a bingo.

Well, in the Java world, Maven has had custom repositories which have done this for the last 20+ years.

That is exactly what I do.

This post contains some interesting ideas and poses (or at least suggestively alludes to) a few thought-provoking questions but is weakened by spending too much of its word (and the author's thinking) budget on tangents about LLMs.

Side note: mashups and widget engines occupied a substantial part of technophiles' focus (incl. power users and programmers) 15–20 years ago. The W3C chartered a working group to investigate harmonizing different implementations. That interest eventually evaporated, and they all went away. It's almost eerie how rare it is to find any modern reference to something that consumed so much attention at the time. It'd be reasonable to wager that the majority of programmers under 25 have never even heard of Konfabulator and aren't aware of the hype that existed around other vendors' similar offerings.

I'm waiting for when a new browser maker comes along and gains market share by shaking up the conventional browser UI, offering stuff like a widget engine built into the browser and basic missing functionality like better UX around site logins (including its own native UI for ordinary (i.e. non-Cookie-based) HTTP auth), native support for dealing with tabular data (like sorting tables) and CSV, and of course direct authoring of Web resources—instead of offloading that to e.g. Google Docs and startups like Notion, whose browser-based apps don't clearly separate the editor/tooling from the content, which in turn means it never really feels like first-class media that's really "of" the Web.


OpenDoc [1] was another attempt in this space.

I think the fundamental problem was that no one ever figured out a business model around components. You can get people to buy an application and the application could edit its own files. But it's not clear how a document or app that contains a mash-up of pieces of code written by different companies is paid for.

Would users be willing to pay for a component that let them add charts to their word processing docs? Would that mean no one else could open the doc unless they had the same component? It didn't seem at the time like there was a business model that held together.

(The somewhat related counter-example is modern digital audio workstations. Third-party plug-ins ["VSTs"] are a remarkably successful model there for both users and businesses. And users do seem to understand and accept that, yes, if your project uses some audio plug-ins then anyone else you collaborate with needs to have those same plug-ins.)

[1]: https://en.wikipedia.org/wiki/OpenDoc


I reckon components either have to be free, or the "platform" pays top creators. The latter is hard... but one could execute it better than TikTok, which did it very inequitably, but it was still incentive enough.

Mashup engines were so gloriously hopeful.

It feels like the Opera browser was super early (as usual), with widgets (2008). Eventually we had a widget spec (2013). Overall, PWAs cover a lot of this terrain today as packagable standalone webapps, but back then there was also a lot more excitement in the world about small UI programs that overlayed and/or worked with your desktop at large, which we don't see today (but omgosh I wish Project Fugu had browbeaten Android into having a capable web home-screen widget option!) https://www.wired.com/2008/05/opera-targets-widget-developer... https://www.w3.org/TR/widgets-apis/

But more than widgets, there were such interesting attempts ongoing to stitch together an inter-site, inter-networked web. Google Buzz's protocols, and especially the Salmon protocol (2010), were a forerunner of Mastodon: a way for discrete digital identities to push Atom/RSS-like entries (such as a "like" or comment) at each other. It was trying to work under the Open Web Foundation (OWF). http://blog.jclark.com/2010/02/tour-of-open-standards-used-b... https://news.ycombinator.com/item?id=1140893

Even before this, the OpenSocial folks were working on very ambitious cross-site data and widget systems. I really think some deep-diving technical retrospectives on OpenSocial would be incredible, and could maybe help shake us out of the rut we're in, where uncomposable, consumeristic computing is the only thing we can even think to do. https://en.wikipedia.org/wiki/OpenSocial


A browser like that would be so great! I wonder if a chromium base (as is trending) would enable it...

outerHTML is an attribute of Element and DocumentFragment is not an Element.

Where do the standards say it ought to work?

