ongy's comments | Hacker News

But the offline-enabled property allows exactly that.

Both sides type offline and only sync later. Neither would like their change to just be discarded.
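To make that concrete, here is a minimal sketch (Python, with made-up field names, not taken from any of the apps discussed) of what "last change wins" does to two offline edits:

    # Minimal last-write-wins sketch: each replica keeps (value, timestamp);
    # on sync, whichever edit carries the newer timestamp simply overwrites.
    from dataclasses import dataclass

    @dataclass
    class Edit:
        value: str
        ts: float  # wall-clock time of the edit

    def merge_lww(a: Edit, b: Edit) -> Edit:
        # The later edit "wins"; the other offline edit is silently discarded.
        return a if a.ts >= b.ts else b

    # Two users edit the same note while offline...
    alice = Edit("shopping: milk, eggs", ts=1000.0)
    bob = Edit("shopping: bread", ts=1007.0)

    # ...and after they reconnect, Alice's change is gone with no warning.
    print(merge_lww(alice, bob).value)  # -> "shopping: bread"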


I was responding only to the idea of having no conflict resolution: last edit wins (proposed in a great-grandparent comment):

https://news.ycombinator.com/item?id=45341335 "We have a local-first app. Our approach? Just ignore the conflicts. The last change wins."

If you can see the edits being made in real time, keystroke by keystroke, that pretty much solves that problem.

As for offline editing, either don't support it (then you're not local-anything obviously) or you can have some lame workflow like "the document was changed by another user ..."


What does 'bucked' mean in this context?


Like a ‘bucking bronco,’ a horse kicking high and trying to throw off its rider - you ‘buck’ something to get it off of you, to get it off your back, to shake it off, to get out from under it. You might also see it in ‘bucking a trend.’

So it means “to act against” / “to escape from” the rules.


> In Go 1.24, we are introducing a new, experimental testing/synctest package

Clearly a mature mechanism we'd see in large companies...


EAD does indeed look like a good example of why we shouldn't use XML.


hah hah yeah, these scoped content examples would be a joy to do in JSON

https://www.loc.gov/ead/tglib1998/tlin125.html


Yes. A sane schema that actually encapsulates the data would be a lot easier to read.

Earlier I had only seen the mix of values in the body and values in tags, with one even being a tag called "value".

Thanks for showing more examples of XML being used to write unreadable messes.


You must find reading HTML a slog.


I do. That's why I have a browser render it to a format that makes sense for human consumption.

Granted, HTML actually makes sense in the XML-ish format (I don't remember if it's technically compliant), since it weaves formatting into semantically uninterrupted text.

If that's not the case, I don't see a real benefit to using XML over anything sane (not YAML... binary formats, depending on the use case).


>I do. That's why I have a browser render it to a format that makes sense for human consumption.

I guess if that's the standard, then reading any data format is also a slog, because hey, most data and document formats get rendered as something for "human consumption". That said, when one is a programmer one often has to read the format without the rendering, so, your witty reply aside, I guess you must find that task a slog where HTML is concerned.

This is too bad, because most mixed content formats like EAD, HTML, etc. are like that. If you want humans to be able to write the content with, say, a paragraph that has a link inside it, you're going to write it as mixed content, because that works best based on millions of programmer and editor hours over decades, and JSON would be crap for it.

Is it super great? Nope, it's only the best way of writing document formats (with highly technical content and a mix of structured and unstructured content) that we currently know of, in the same way that democracy is the worst form of politics except for all the others, and multiple other examples of things that suck in the world but are better than all the alternatives.

I didn't say EAD was great, I said it was better than JSON for what it needed to do, part of which is having humans write mixed content.

Believe me, I have certainly seen people who are JSON enthusiasts try to replicate mixed content type documents in JSON, and it has always ended up looking at least as bad as any XML, but without all the tooling that makes XML easier to write, and with a tendency toward brittleness, because doing mixed content in JSON means a lot of character escaping.
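As a made-up illustration of that escaping problem (Python, using json.dumps; the node names are hypothetical, not from any real schema), here is one short paragraph with an inline link, next to one common way of encoding the same thing in JSON:

    import json

    # Mixed content as an author would type it:
    xml_mixed = '<p>See the <a href="https://example.org/guide">finding aid</a> for <emph>details</emph>.</p>'

    # A common JSON attempt: alternating text runs and typed nodes.
    json_mixed = json.dumps([
        "See the ",
        {"type": "link", "href": "https://example.org/guide", "children": ["finding aid"]},
        " for ",
        {"type": "emph", "children": ["details"]},
        ".",
    ])

    print(xml_mixed)
    print(json_mixed)
    # Nobody writes the JSON form by hand, and embedding it inside another
    # JSON string doubles the quoting and escaping again.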

I'm going to end here with the observation that I doubt you are actually acquainted with the workflows of editors, writers, and publishing industries, or with the use of markup formats in any sort of long-running company using these things? I just have a feeling on this matter. It seems like your technical area of expertise is not the area you are critiquing. Some of these companies are actually quite technically advanced, so I'm just putting it out there that you might not be as aware as you think of the requirements of the parts of the world that use the things you would, of course, build in a superior manner if only you were given the task to do so.


Calling XML human readable is a stretch. It can be with some tooling, but JSON is easier to read both with tooling and without. The schema matters for how human readable the serialization is, but I know significantly fewer people who can parse an XML file by sight than JSON.

Efficient is also... questionable. IIRC it requires full Turing-machine power even to validate (it surely does to fully parse). By which metric is XML efficient?


By efficiency, I mean it's text and compresses well. If we mean speed, there are extremely fast XML parsers around; see this page [0] for the state of the art.

For hands-on experience, I used rapidxml for parsing said 3D object files. A 116K XML file is parsed instantly (the rapidxml library's aim is to have speed parity with strlen() on the same file, and they deliver).

Converting the same XML to my own memory model took less than 1ms including creation of classes and interlinking them.

This was on 2010s era hardware (a 3rd generation i7 3770K to be precise).

Verifying the same file against an XSLT would add some milliseconds, not more. Considering the core of the problem might take hours on end torturing memory and CPU, a single 20ms overhead is basically free.

I believe JSON's and XML's readability is directly correlated with how the file is designed and written (incl. terminology and how it's formatted), but to be frank, I have seen both good and bad examples of both.

If you can mentally parse HTML, you can mentally parse XML. I tend to learn to parse any markup and programming language mentally so I can simulate them in my mind, but I might be an outlier.

If you're designing a file format based on either for computers only, approaching the readability of Perl-level regular expressions is not hard.

Oops, forgot the link:

[0]: https://pugixml.org/benchmark.html


> Calling XML human readable is a stretch.

That’s always been the main flaw of XML.

There are very few use cases where you wouldn't be better served by an equivalent, more efficient binary format.

You will need a tool to debug xml anyway as soon as it gets a bit complex.


With this you have an efficient binary format and the generality of XML:

https://en.m.wikipedia.org/wiki/Efficient_XML_Interchange

But somehow Google forgot to implement this.


A simple text editor of today (Vim, Kate) can sanity-check an XML file in real time. Why debug?


Because issues with XML are pretty much never sanity-check issues. After all, XML is pretty much never written by hand but by tools, which will most likely produce valid XML.

Most of the time you will actually be debugging what's inside the file to understand why it caused an issue and to find out whether that comes from the writing or the receiving side.

It’s pretty much like with a binary format honestly. XML basically has all the downside of one with none of the upside.


I mean, I found it pretty trivial to write parsers for my XML files, which are not simple ones, TBH. The simplest one contains a bit more than 1700 lines.

It's also pretty easy to emit, "I didn't find what I'm looking for under $ELEMENT" while parsing the file, or "I expected a string but I got $SOMETHING at element $ELEMENT".

Maybe I'm biased because I have worked with XML files for more than a decade, but I never spent more than 30 seconds debugging an XML parsing process.

Also, this was one of the first parts I "sealed" in said codebase and never touched again, because it worked even when the incoming file was badly formed (by erroring out correctly and cleanly).
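Not the parser described above (that one is much larger and not in Python), but a minimal sketch of the same defensive style, with hypothetical element names, using Python's xml.etree:

    import xml.etree.ElementTree as ET

    class FormatError(Exception):
        pass

    def parse_material(path):
        root = ET.parse(path).getroot()

        mat = root.find("material")
        if mat is None:
            raise FormatError("I didn't find <material> under <%s>" % root.tag)

        density_el = mat.find("density")
        if density_el is None or density_el.text is None:
            raise FormatError("I didn't find <density> under <material>")

        try:
            density = float(density_el.text)
        except ValueError:
            raise FormatError("I expected a number but got %r at <density>" % density_el.text)

        return {"name": mat.get("name", "unnamed"), "density": density}

The point being that every failure mode names the element it was looking at, so a badly formed file errors out cleanly instead of crashing.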


> It's also pretty easy to emit, "I didn't find what I'm looking for under $ELEMENT" while parsing the file, or "I expected a string but I got $SOMETHING at element $ELEMENT".

I think we are actually in agreement. You could do exactly the same with a binary format without having to deal with the cumbersomeness of XML, which is my point.

You are already treating XML like one, writing errors in your own parsers and "sealing" it.

What’s the added value of xml then?


> cumbersomeness of xml...

Telling the parser to navigate to the first element named $ELEMENT, checking a couple of conditions, and assigning values in a defensive manner is not cumbersome in my opinion.

I would not call parsing binary formats cumbersome (I'm a demoscene fan, so I aspire to match their elegance and performance in my codebases), but it's not the pragmatic approach for the particular problem at hand.

So, we arrive at your next question:

> What’s the added value of xml then?

It's several things. Let me try to explain.

First of all, it's a self-documenting text format. I don't need extensive documentation for it. I have a spec, but someone opening it in a text editor can see what it is and understand how it works. When half (or most) of the users of your code are non-CS researchers, that's a huge plus.

Talking about non-CS researchers, these folks will be the ones generating these files from different inputs. Writing XML in any programming language, incl. FORTRAN and MATLAB (not kidding), is 1000 times easier than writing a binary blob.
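A rough sketch of that point (Python here, with made-up element names; the same few formatted write statements translate directly to FORTRAN's write or MATLAB's fprintf):

    from xml.sax.saxutils import escape

    def write_results(path, version, samples):
        # XML output is just formatted text plus a little escaping.
        with open(path, "w", encoding="utf-8") as f:
            f.write('<?xml version="1.0" encoding="UTF-8"?>\n')
            f.write('<results version="%s">\n' % version)
            for name, value in samples:
                f.write('  <sample name="%s">%.17g</sample>\n'
                        % (escape(name, {'"': "&quot;"}), value))
            f.write('</results>\n')

    write_results("run001.xml", "1.2",
                  [("density", 997.0479), ("viscosity", 0.0010016)])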

Expanding the file format I have developed on XML is extremely easy. You change a version number and maybe add a couple of paths to your parser, and you're done. If you feel fancy, allow for backwards compatibility, or just throw an error if you don't like the version (this is for non-CS folks mostly; I'm not that cheap). I don't need to work with nasty offsets or slight behavior differences that make me pull my hair out.

Preservation is much easier, too. Scientific software rots much quicker than conventional software, so keeping the file format readable is better for preservation.

"Sealing" in that project's parlance means "verify and don't touch it again". When you're comparing your results with a ground truth with 32 significant digits, you don't poke here and there leisurely. If it works, you add a disclaimer that the file is "verified at YYYYMMDD", and is closed for modifications, unless necessary. Same principle is also valid for performance reasons.

So, building a complex file format over XML makes sense. It makes the format accessible, cross-platform, easier to preserve and more.


It's kinda funny to see "not human readable" as an argument in favor of JSON over XML, when the former doesn't even have comments.


And yet, it's still easier for me to parse with my eyes


You are making two assumptions that are generally not true:

* In the US
* Interviewee is currently unemployed

Even then, the typical candidate we interview for tech positions can usually survive quite well for a couple of months.


A couple of months is one thing, but a typical time back to work after a tech layoff is currently more like six or eight months.


I'm speaking to what I know. Sue me.


At best WPA2. WEP is broken in ways that don't need human fault.

The only downside of TOTP compared to FIDO and friends (from a security perspective) is the lack of phishing resistance.


Because of how humans work, TOTP can give false confidence to the user, which is a further downside.

Grandma goes to fakesite.com not realising it isn't her real site. It asks her for the TOTP code, she provides her TOTP code and it works. She is reassured - if this wasn't her real site why would the code work?

Now, in theory a neutral security assessor can see that's not reassuring, but that's not how humans work; the fact that there was a challenge-response feels like security, even though for all they know it was accepting any input.

Phishing sites generally have a milder version of this effect. I have vanity mail, so I own the "mail provider" handling my email, and yet of course I get those phishing mails saying that, as the "Administrators" of my vanity domain, they need me to type in my password. But they don't know my password of course, so filling in their form with crap "works" the same as anything else; fuckyouscammers, sure, that's a reasonable password.

These schemes can't work if you don't rely on stupid shared human secrets ("Passwords") everywhere, but we did and it seems many people are really enthusiastic to keep doing that, so I doubt we'll escape from this self-imposed status. I wanted to make a web site that mimics the famous reusable Onion article but I've never gotten around to it. "No way to prevent this"
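For reference, a minimal RFC 6238 sketch in Python (the secret value is made up) of what a TOTP code actually is: nothing in the computation identifies the site asking for it, which is exactly why a real-time phishing relay works, whereas a WebAuthn response is signed over the relying-party identifier and won't verify for the wrong site.

    import hmac, hashlib, struct, time

    def totp(secret, digits=6, step=30):
        # HMAC-SHA1 over the current 30-second counter (RFC 6238 / RFC 4226).
        counter = int(time.time() // step)
        mac = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
        offset = mac[-1] & 0x0F
        code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
        return str(code % 10 ** digits).zfill(digits)

    # The same six digits verify whether the real site or a phishing relay
    # asked for them; the code is bound only to the shared secret and the clock.
    print(totp(b"shared-secret-provisioned-at-enrollment"))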


Find me a grandma using TOTP. It would confuse them too much.


Huh? We're not asking random grandparents to implement TOTP, only to use it, and that's necessary for a lot of basic remote work and so on these days.


I clearly said "using" not "implementing".


Hence my "Huh". Everybody working in my team uses TOTP if they don't have their own YubiKey, which most do not. Most of them aren't close to as old as I am, but some are indeed grandparents; it's as if you were astonished that anybody over age 40 can type.


That's a pretty major downside to OTPs, and certainly not one that can be offhandedly dismissed.


It is for the general population. I don't think HN users, for instance, are particularly concerned about phishing sites.


Python users (pypi.org) who were using TOTP just got hit.

"If the user had enrolled a Security Device for PyPI second factor authentication, the attacker would not have been able to use the second factor, as the WebAuthn protocol requires the user to physically interact with a hardware security key, or use a browser-based implementation, which would not be possible if the user was not on the legitimate PyPI.org website (Relying Party Identifier)."

https://blog.pypi.org/posts/2025-07-31-incident-report-phish...


Zero days exist, and something like tapjacking can be used to obscure and capture those TOTPs.

Don't use TOTPs if you have the option to use passkeys/WebAuthn.

Short video example: https://taptrap.click/


The core devs were all paid to work on it when I was still active.

Not sure how many of the gnome (mutter) people are paid. Last I checked, the nvidia support was donated by nvidia (paid) for both KDE and Gnome.

I think KDE got some work sponsored by valve (before gamescope), though I'm not quite sure on that.

Overall, outside the sway/wlroots group I was a part of at the time, people generally worked adjacent to or directly on Wayland for their day jobs.


My home network is 1 Gbit; so is my Internet connection.

I.e., no traffic beyond my legitimate saturation can reach the ISP.

I have saturated my link with QUIC or WireGuard (logical or) plenty of times.

The lack of any response at high data rates would be an indicator. I've only tried that once, and it failed gloriously due to congestion. I don't think there are many real protocols that are unidirectional without even ACKs.


I think that's the point. You don't have to run the full kernel to run some linux tools.

Though I don't think it ever supported Docker. And it wasn't really expected to, since the entire namespaces+cgroups stuff is way deeper than just some surface-level syscall shims.

