Hacker News | tannhaeuser's comments

Is it really? Last commit was a day ago [1].

Btw I'm not going to whoever's fscking clickbaity YT channel for a markup-language topic of all things. Provide a transcript, OP!

[1]: https://gitlab.gnome.org/GNOME/libxml2/-/tree/master/doc?ref...


If only it had stagnated around gnome 2.0.

MATE exists. You can use it right now.

I do. It's great that the UI has stagnated, but unfortunately the UX has too: things like Bluetooth not being integrated with the DE, and various details that we take for granted not working correctly.

Face ID is severely lacking compared to MS Hello, simple as. It's at best 50:50 hit or miss, compared to Hello, which always logs me in. Granted, that figure doesn't include false positives, but the difference is substantial and makes Apple's implementation look really lame, to the point I'd like to see it removed.

I haven't had that happen, so I think it works fairly well. Even with a mask.

In fact, it works so well, for me, that I was worried that it was too generous, but it is actually very secure.


TFA's URL reads why-css-bad.html, pointing at its originally intended title, lol.

> [your browser rephrasing your text into smaller words] might be slowly becoming possible but is clearly outside the bounds of what CSS would do

When it comes to CSS, nothing seems out of scope. It's gaining conditionals (if) as we speak.

In hindsight, I wish Pavel had used something other than Z3/SMTLib for his CSS formal semantics, such as a Prolog-derived CLP language. Not that Z3 is bad, but the hope was that CSS spec authors could pick up the ruleset and test/maintain it as new CSS features land, rather than unapologetically continuing to pile crap onto CSS unhinged. Or alternatively, expose layout to JS and then provide a common JS implementation for layout computation, as the Houdini project was proposing. I mean, that's what JS is for, right?


Is anyone really using Bazel outside Google in any meaningful capacity? A number of once really popular and widely used projects, such as Closure Compiler, GWT/J2CL, Guava and other Java libs, and supposedly lots of Go stuff (not to speak of k8s, where people seem satisfied it's a black box), are dying behind Bazel walls.


> Is anyone really using bazel outside Google in any meaningful capacity?

Yes. For instance, Stripe uses Bazel internally for ~all of its builds. https://stripe.com/blog/fast-secure-builds-choose-two

For other users, you might peruse the Bazelcon 2025 schedule, which happened earlier this month: https://bazelcon2025.sched.com/


“dying behind bazel walls” is a bit dramatic when it’s a freely available tool that anyone can learn and use.

imo, it’s also among the very few options that try to solve the hard problems of build systems (alongside Buck and maybe Nix).


Open source projects? Maybe less so.

But there are definitely companies that use Bazel in a major way.


I lost a month to Bazel a few years ago. The documentation had so many holes, and what was there was either out of date or wildly inaccurate. You could not produce an Angular build using the tutorials as written. Everything was wrong. I'm sure Bazel is great if you have a team of people to write bespoke libraries on top of it for each of your targets. I ended up using turbo for the frontend and uv workspaces on the backend.


The Swiss company I work at (~300 employees) maintains a monorepo with projects in multiple languages that is managed with Bazel.


As far as custom shortforms for fully tagged angle-bracket markup are concerned, people are reinventing SGML, which has been able to handle markdown and other custom syntaxes since 1986.


I've been meaning to see how close I can come to Markdown syntax using SGML's SHORTREF and perhaps architectural forms.


Markdown inline syntax is straightforward to capture using SGML SHORTREF. What's more difficult (impossible, actually) are things such as reference links, where a markdown processor is supposed to pull text (the title of a link) from wherever it's defined, before or after its usage.
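To illustrate the emphasis case, a rough sketch (entity and map names are made up here, and the "*" string has to be added as a short reference delimiter in the SGML declaration for this to work):

    <!ENTITY start-em "<em>">
    <!ENTITY end-em   "</em>">
    <!SHORTREF in-text "*" start-em>  -- "*" in running text opens emphasis --
    <!SHORTREF in-em   "*" end-em>    -- the next "*" closes it again --
    <!USEMAP in-text p>
    <!USEMAP in-em   em>

With those maps active, "this is *emphasised* text" inside a p element parses as if the author had typed the em start and end tags.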

Haven't heard about archforms in a while ;) but they're not a technique for custom syntax, and since markdown is specified as a Wiki syntax with a canonical mapping to HTML, there's no need for the kind of simplistic element and token renaming possible with archforms.


I've used archforms in my custom markup before: https://r6.ca/HtmlAsSgml.html

For example, I added an <nbsp> attribute to turn all spaces into non-breaking spaces, and used archforms to remove the attribute afterwards.

But yeah, maybe for Markdown you don't need archforms. On the other hand, perhaps there is some super clever way to use archforms to get your reference links working.


Found it interesting that they've lost their non-profit tax-exempt status in Germany, and are now establishing a non-profit in the US to attract investors while dev and ops remain in a German gGmbH.


They're part-way through setting up a Belgian non-profit entity (AISBL) which will be the main organisation.

From https://blog.joinmastodon.org/2025/11/the-future-is-ours-to-...

>A vital aspect of our restructuring initiative is transitioning Mastodon to a new European not-for-profit entity. Our intent is to form a Belgian AISBL as the future home of the Mastodon organisation.

>As an update on our current status, Mastodon is continuing to run day-to-day operations through the Mastodon gGmbH entity (the Mastodon gGmbH entity automatically became a for-profit as a result of its charitable status being stripped away in Germany). The US-based 501(c)(3) continues to function as a strategic overlay and fundraising hub, and as a short-term solution until the AISBL is ready, the 501(c)(3) will own the trademark and other assets. We intend to transfer those assets as soon as the AISBL is ready. To enable tax-deductible donations for German donors, we partnered with WE AID as our fiscal sponsor.


My understanding is that the non-profit in the US exists exclusively to handle fundraising from US donors who might not be able to give to non-US organizations for tax reasons.


Or for a tax receipt. I give modestly to Wikipedia, but their lack of a Canadian entity means I direct the bulk of my giving toward entities that I get the CRA kickback for.


Though if you are in the US, odds are you don't need this tax deduction anyway. Few people understand how US taxes work, so they give their accountant all their deductions because they know tax deductions exist. They don't realize that the standard deduction applies and that they don't/can't deduct anything.

(if you do apply deductions then this matters)


Before the SALT cap of $10K, it could easily matter. Prior to the cap, I itemized deductions every year - easy if you have a mortgage, live in a high income tax state, and both spouses work. In those days the standard deduction was $12K, and just our state taxes exceeded that amount - forget about mortgage + charity.

Even after the SALT cap, on some years itemizing was better.

And I believe as of next year (or the one after?), the SALT cap is going up significantly. Back to itemizing every year.


Prolog was introduced to capture natural language - in a logic/symbolic way that didn't prove as powerful as today's LLMs, for sure, but this still means there is a large corpus of direct English-to-Prolog mappings available for training, and the mapping rules are also much more straightforward by design. You can pretty much translate simple sentences 1:1 into Prolog clauses, as in the classic boring example

    % "the boy eats the apple"
    eats(boy, apple).
This is being taken advantage of in Prolog code generation using LLMs. In the Quantum Prolog example, the LLM is also instructed not to generate search strategies/algorithms but just planning domain representation and action clauses for changing those domain state clauses which is natural enough in vanilla Prolog.
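For illustration, a generic STRIPS-style planning sketch (made-up predicates, not Quantum Prolog's actual domain representation): the domain state lives in plain facts, and each action clause lists the facts it requires, retracts, and asserts, leaving the search itself to a generic solver:

    % current state as plain facts
    at(robot, room1).
    connected(room1, room2).

    % action(Name, Preconditions, FactsRemoved, FactsAdded)
    action(move(From, To),
           [at(robot, From), connected(From, To)],
           [at(robot, From)],
           [at(robot, To)]).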

The results are quite a bit more powerful, closer to end-user problems, and further up the food chain than the usual LLM coding tasks for Python and JavaScript, such as boilerplate code generation and similarly idiosyncratic problems.


"large corpus" - large compared to the amount of Python on Github or the amount of JavaScript on all the webpages Google has ever indexed? Quantum Prolog doesn't have any relevant looking DuckDuckGo results, I found it in an old comment of yours here[1] but the link goes to a redirect which is blocked by uBlock rules and on to several more redirects beyond which I didn't get to a page. In your linked comment you write:

> "has convenient built-in recursive-decent parsing with backtracking built-in into the language semantics, but also has bottom-up parsing facilities for defining operator precedence parsers. That's why it's very convenient for building DSLs"

which I agree with, for humans. What I am arguing is that LLMs don't have the same notion of "convenient". Them dumping hundreds of lines of convoluted 'unreadable' Python (or C or Go or anything) to implement "half of Common Lisp" or "half of a Prolog engine" for a single task is fine, they don't have to read it, and it gets the same result. What would be different is if it got a significantly better result, which I would find interesting but haven't seen a good reason why it would.

[1] https://news.ycombinator.com/item?id=40523633


Worth noting XSLT is actually based on DSSSL, the Scheme-based document transformation and styling language of SGML. Core SGML already has "link processes" as a means to associate simple transforms/renames reusing other markup machinery concepts such as attributes, but it also introduces a rather low-level automaton construct to describe context-dependent and stateful transformations (the kind that would've been used for recto/verso rendering on even/odd print pages).

I think it's interesting because XSLT, based on DSSSL, is already Turing-complete, and thus the XML world lacked a "simple" sub-Turing transformation, templating, and mapping macro language that could be put in the hands of power users without going all the way to introducing a programming language requiring proper development cycles, unit testing, test harnesses, etc. so as not to inevitably explode in the hands of users. The idea of SGML is very much that you define your own little markup vocabulary for the kind of document you want to create at hand, including powerful features for ad-hoc custom Wiki markup such as markdown, and then create a canonical mapping to a rendering language such as HTML; a perspective completely lost in web development with its nonsensical "semantic HTML" postulates and delivery of absurd amounts of CSS microsyntax.


As a youngster entering the IT professional circles, I was enamoured with SGML: creating my own DTDs for humane entry for my static site generator, editing my SGML source document with Emacs sgml-mode. I worked on TEI and DocBook documents too (and was there something related to Dewey coding system for libraries?).

However, processing fully compliant SGML, before you even introduce DSSSL into the picture, was a nightmare. With the only open-source and at the same time the only fully compliant parser (nsgmls) being hard to build on contemporary systems, let alone run, really using SGML for anything was an exercise in frustration.

As an engineering mind, I loved the fact you could create documents that are concise yet meaningful, and really express the semantics of your application as efficiently as possible. But I created my own parsers for my subset, and did not really support all of the features.

HTML was also redefined to be an SGML application with 4.0.

I originally frowned on XML as a simplification to make it work for computers vs for humans, but with the XML, XSLT, XPath... specs, even that was too complex for most. And I heavily used libxml2 and libxslt to develop some open source tooling for documentation, and it was full of landmines.

All this to say that SGML has really spectacularly failed (IMO) due to sheer flexibility and complexity. And going for "semantic HTML" in lieu of SGML + DSSSL or XML + XSLT was really an attempt to find that balance of meaning and simplicity.

It's the common cycle as old as software engineering itself.


> HTML was also redefined to be an SGML application with 4.0

Nope, it was intended as SGML from the get go; cf [1].

> SGML has really spectacularly failed (IMO) due to sheer flexibility and complexity

HTML (and thus SGML) is the most used document language there ever has been, by far.

[1]: https://info.cern.ch/hypertext/WWW/MarkUp/MarkUp.html


I stand corrected: HTML was defined as an SGML application from the very first published version in 1993 (https://www.w3.org/MarkUp/draft-ietf-iiir-html-01.txt), but I know the original draft in 1990-91 was heavily SGML inspired even if it didn't really conform to the spec (nor provide a DTD). Thanks for pointing this out, it's funny how memory can play games on us :)

While HTML is clearly the most used document markup language there has ever been, almost nobody is using an SGML-compliant parser to parse and process it, and most are not even bothering with the DTD itself; not to mention that HTML5 does not provide a DTD and really can't even be expressed with an SGML DTD.

So while HTML used to be one of SGML's "applications" (document types, along with a formal definition), on the web it was never treated as such, but as a very specific language that is merely inspired by SGML, and only loosely follows the spec at that (since day 1, all browsers accepted "invalid" HTML and they still do).

Ascribing the success to SGML is completely backwards, IMHO: HTML was successful despite it being based on SGML, and for all intents and purposes, majority never really cared about the relationship.


Completely correct and the operative phrase here is “absurd amounts” which actually captures our entire contemporary computing stack in almost every dimension that matters.


The entire point of markup attributes is to contain rendering hints that themselves aren't rendered to the user as such. Hell, angle-bracket markup itself was introduced to unify and put a limit to syntactic proliferation. But somehow "we" arrived at creating the monstrosity that is CSS, and then even at putting CSS and JS into inline element content with bogus special escaping and comment-parsing rules rather than into attributes and external resources.

The enormous proliferation of syntax and super-complicated layout models doesn't stop markup haters from crying wolf over entities (text macros) representing a security risk in markup, however; go figure.


But did it ever actually work in practice? As I remember it, the XSLT-backed websites still needed "absurd amounts of CSS microsyntax". You could not do everything you needed with XSLT, so you needed to use both XSLT and CSS. Also, coding in XSLT was generally painful, even more so than writing CSS (which I think is another poorly designed language).

It is all well and good to talk about theoretical alternatives that would have been better but we are talking here about a concrete attempt which never worked beyond trivial examples. Why should we keep that alive because of something theoretical which in my opinion never existed?


XSLT is a template language. CSS is a styling language. They have nothing to do with each other. You have data in some XML-based format. You write a template using XSLT to transform that data into HTML. And then you use CSS to make that HTML look pretty. These technologies work very well with each other.
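As a toy illustration (invented element names, not anyone's actual stylesheet), an XSLT template turns record-oriented XML into HTML, and the CSS then only has to target the generated classes:

    <xsl:stylesheet version="1.0"
        xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
      <!-- turn each <item> record into an HTML list entry -->
      <xsl:template match="/items">
        <ul class="item-list">
          <xsl:apply-templates select="item"/>
        </ul>
      </xsl:template>
      <xsl:template match="item">
        <li class="item"><xsl:value-of select="name"/></li>
      </xsl:template>
    </xsl:stylesheet>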


The question is why developers even want to contaminate markup with programming constructs when they already have everything they could ask for, including an object literal syntax ("JSON") for arbitrary graphs that can also encode a DOM.

SGML (XML, HTML) is for content (text) authors, not developers, but webdevs just don't get it and look at it as a solution to developer problems.


Because you need quasiquoting to construct trees decently, and JavaScript doesn’t have any worth a damn.

