> I'm creating a blog platform using this concept.
Yeah, please don’t; this is an abomination that’s fun for demonstrating and teaching how these things work, but should absolutely never be used in reality.
You could use this as your data model foundation upon which to build a generator, but you should never under any circumstances actually serve this stuff.
This is such a depressing comment. It's the opposite of the hacker spirit. The web has made it easier than ever to discourage people from trying anything new.
New ideas seem like bad ideas, because otherwise people would be doing them. One could imagine your objection applying to React: "What a horrible idea. It's a nice hack, but under no circumstances should you actually build webpages like this. Webpages are built out of HTML and valid JavaScript, which this is not..."
This is just a bad idea. Apart from abusing browser rendering quirks (which is unfortunately a sad tradition of the web, something that has claimed the life of entire technology stacks like XML/XHTML), it removes any hope of producing webpages accessible from screen readers and similar. In comparison, JS-only pages look like model citizens. This is the equivalent of abusing IE conditional comments and other CSS-parsing bugs to style your website, but worse (at least that technique was useful).
If JS-generated sites can support ARIA then I don't really see why this couldn't do the same thing, since it's just a JS site with a funky initial payload.
The "fun" of this site is that it's its own API. Sure there are better ways to accomplish this but abusing quirks for fun and profit is the hacker spirit.
Hacking is mostly playing around and using things in ways the designer didn't imagine at the time.
This is a good hack. It brings a smile to the face and warms the heart (if a good hack is what you were looking for). And I think that was the point anyway, not to come up with a professional architectural pattern.
So long as the JavaScript executes, this doesn’t actually harm accessibility: as it loads, it slurps the JSON, and turns it into a perfectly normal web page; rather like XSLT, as others have pointed out. And quirks mode isn’t that serious a problem. It’s a mild nuisance at most, really.
I disagree with this removing any hope of accessibility. If anything, it brings hope for websites to be more accessible than ever before. Imagine a screen reader that instantly and flawlessly knows which content to read and how to navigate through it. If a consistent JSON format were adopted and used, this would be a reality.
You’re misunderstanding the purpose of this JSON/HTML combination, and I get the impression you’re probably not familiar with how screen readers work, either. The JSON is purely transient, being projected to the HTML DOM. The JSON has no standard semantics; the idea is that it should be whatever shape makes sense for your use case, and then that you should use JavaScript to project it to HTML. Think of it as a server-side templating language that takes a bag of data and decides how to write the HTML, except that the document is the data, plus an extra piece that embeds the template to apply to the data.
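To make that concrete, here is a rough sketch of such a projection. The data shape and key names are invented for illustration; the real project's format may differ.

```js
// A minimal sketch, assuming a made-up data shape: project a JSON bag
// into ordinary HTML on the client, much like a server-side template would.
function render(data) {
  document.title = data.title;
  const main = document.createElement("main");
  for (const post of data.posts) {
    const article = document.createElement("article");
    const heading = document.createElement("h2");
    heading.textContent = post.title;
    const body = document.createElement("p");
    body.textContent = post.body;
    article.append(heading, body);
    main.appendChild(article);
  }
  document.body.replaceChildren(main);
}

render({ title: "My blog", posts: [{ title: "Hello", body: "First post." }] });
```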
The web is pretty much best-in-class for accessibility matters. (There are a few isolated cases where native desktop or mobile apps can do better, mostly to do with efficiency.)

HTML elements have defined semantics, so that things like headings and links are automatically navigable, and sections, headers, footers and navigation lists become waypoints. Then ARIA attributes can be used to provide any further metadata necessary, such as to mark up a tabs widget to show how to interact with it (sketched below). And that’s still key—accessibility needs to care about interactions (which tab is open? and did the content available change?), so state matters.

Thus, accessibility tools will never care about any format that you are projecting from, like this JSON; they must only care about what is materialised, which is the HTML DOM. (Besides all that, the only sort of “consistent JSON format” that you could have would be basically an encoding of the HTML, which would be verbose and subjectively ugly compared to the HTML serialisation, e.g. ["a", {"href": "/"}, ["Home"]] or {"tagName": "a", "href": "/", "children": ["Home"]} instead of <a href="/">Home</a>, and miss the whole point here that the JSON is representing data rather than what the user sees.)
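For instance, the tabs widget mentioned above would be marked up along these lines (a sketch following the WAI-ARIA tabs pattern; the keyboard and focus wiring is elided):

```html
<div role="tablist" aria-label="Entries">
  <button role="tab" id="tab-posts" aria-selected="true" aria-controls="panel-posts">Posts</button>
  <button role="tab" id="tab-about" aria-selected="false" aria-controls="panel-about" tabindex="-1">About</button>
</div>
<div role="tabpanel" id="panel-posts" aria-labelledby="tab-posts">…</div>
<div role="tabpanel" id="panel-about" aria-labelledby="tab-about" hidden>…</div>
```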
If you’re not familiar with accessibility stuff, I heartily recommend looking into it. If you can, find a blind person and see if you can watch them using a computer or phone. It’s really fascinating (I’ve never seen anyone be bored by it) and super useful if you ever contribute to making just about anything on a computer. Even people making documents in a word processor can learn things like “use actual headings rather than just making the text bigger and bold, because the semantics are useful”.
I think the difference is that nothing in React is abusing fault tolerance in a browser to the extent that this is. This very specifically works only because browsers don't generally care that an HTML page is anything close to valid HTML. There's no guarantee at all that this will continue to be the case, and it's reasonable to assume that some browser in the future might decide to render things that are very clearly a JSON document as a JSON document instead - fundamentally breaking the core architecture of your system.
React is a different approach, but it plays by the rules and doesn't serve invalid HTML or JavaScript.
In fact, at least Firefox already has support for displaying JSON files. I assume this does not happen here because eventually, there actually is an <html> tag.
The primary reason for calling it an abomination is that if they actually plan on having this used (especially if by other people) it will spectacularly break the moment Chrome decides to make their quirks mode more strict.
It's a fun hack, although not exactly unique. See also the by now very old "Website in a PNG file" concept, which does the exact same thing: https://gist.github.com/gasman/2560551
This will never break. I still think it's a bad idea, but not for that reason. Browser vendors are strongly against ever making backwards incompatible changes - especially a change like this that would likely affect thousands of websites. Chrome is not going to suddenly make their HTML rendering engine more strict.
React and its ilk were designed for use in apps, where there are meaningful advantages in powering things entirely with client-side scripting rather than generating server-side HTML and possibly enhancing it on the client side with scripting.
They were then abused by increasingly many people for rendering static content, things like blogs.
These people were using the wrong tool for the job, and it has had a detrimental effect on the web.
Now, pages are regularly far heavier than before, with expensive client-side code doing stuff that should have been done server-side in almost all cases. It became popular enough that search engines eventually had to cave and introduce a full JavaScript execution environment in their indexers, tooling everywhere got a lot more complicated, and the last state of the web was worse than the first.
There is a place for things like React: in rich apps, and perhaps even for server-side rendering, though I’m not fond of that for things like blogs because it encourages you to end up depending on it on the client side too.
But client-side JavaScript app frameworks made some formerly-impossible things possible, and formerly-complex-and-unmaintainable things tractable.
This monstrosity, on the other hand, offers no actual benefits for the user, and does introduce a few new problems (quirks mode for styling, and an unnecessary dependency on JavaScript). And so I say it should never be exposed to the client side. Use it as an input format for your blog generator if you like, but don’t try shipping a wonky JSON/HTML polyglot directly.
Look. We're on a Show HN. You call his work a monstrosity with no benefits for the user. You call it an abomination. When you've used up all your insults, what's left? What are you going to call something deserving of hate? It's so petty and bitter, and you need to go have fun with something. Go play!
As for your React commentary, it's 4:42am my friend, and I was just sad to see someone take such a hot steamy dump on someone's work on a Show HN thread without a single other person standing up for them. But all of your points about React can be summed up as "well, yes, that's what happens when something is successful: history is rewritten to make it seem like it had a place from the beginning."
If someone was like "Show HN: React - a new way to write websites," it feels like a guarantee you'd be right there like "But it breaks when you turn off Javascript!" Meanwhile, even Tor admitted defeat long ago and enabled JS by default.
I’m specifically objecting because of the text “I'm creating a blog platform using this concept” and what else the author is saying here. As a technical demonstration, it’s fine—various people don’t realise you can do things like this, and I’m all in favour of showing people these sorts of things, as it does get people thinking in interesting ways. But the author seems to think that this is a good idea, which I’m afraid it’s just not, so I can’t mince words—though perhaps I have expressed myself a bit more strongly than is seemly (thank you for pulling me up on that).
Of React (and its ilk: React was by no means the first project along these lines; as an example, I can think of having hit a couple of full JS-required Knockout sites well before React was a thing, where they would have been better as prerendered HTML), I’m not saying that history was rewritten to say it had a place from the beginning, but rather that there was a place for it from the beginning. There is a certain type of app where there are very substantial benefits for the user in doing at least some parts on the client side (that was where the jQuery style of progressive enhancement started, and then things like Backbone steadily expanded it), and architecturally there are substantial benefits to going all in on client-side rendering if you need this sort of enhancement (this was what Knockout tended towards, and what ExtJS and React more fully realised). But this had costs, too, in that it broke the traditional model, making life harder for all kinds of tooling and making pages heavier, so that it shouldn’t be used everywhere.
React was not initially intended as a way to write web sites, but rather web apps. It’s an important distinction. For apps like Facebook and Twitter, the advantages of server-side rendering were not so applicable, and the benefits of full client-side rendering more marked. Unfortunately, the SPA craze grew further, and people liked using one tool everywhere, and so it became more and more common to use React in places where it was inappropriate at the time; until finally tooling like search engines caved on the whole JavaScript thing.
I still don’t like how often I find normal websites depending on JavaScript for fundamental rendering (it may not surprise you to discover that I browse with JavaScript disabled by default—mostly for performance and minimisation of annoyances), but at least React has purpose and some advantages, and has done from the start.
Meanwhile, this thing here doesn’t get you any of the benefits of client-side rendering (things like lighter and faster subsequent page loads and transitions, by using AJAX and semantic knowledge), but does carry all of the costs of client-side rendering: poorer performance, and making life harder for tooling of all kinds.
I agree with you. People should just explore the possibilities that exist to the fullest, harmless extent. There's no harm in this. Along the way, they might find something entirely new. Blocking off that path with "don't ever do this" is just closed-minded.
It's a good thing people exist that aren't deterred by gatekeeping comments like these and actually try to innovate or simply play around and have fun with programming.
But what do you think one gains by serving JSON instead of serving valid HTML, like a body with just `<data src="actual.json"></data>`?
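Something like this, say: valid HTML whose only job is to fetch the JSON and hand it to a renderer (the file name comes from the comment above; the rendering is a stand-in):

```html
<!doctype html>
<html lang="en">
<head><meta charset="utf-8"><title>Loading…</title></head>
<body>
<script type="module">
  // Fetch the same JSON payload, but from a page that parses as real HTML.
  const data = await fetch("actual.json").then((r) => r.json());
  const pre = document.createElement("pre");
  pre.textContent = JSON.stringify(data, null, 2); // stand-in for a real renderer
  document.body.append(pre);
</script>
</body>
</html>
```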
If you can retain full functionality (and hack on your ideas) AND be standards compliant (to make sure someone who decides to start offering a new web browser doesn't have to worry about 10% of websites serving this instead of valid HTML), then you should do that.
You're both right. Suggesting other people try your blog built on this platform is giving people bad advice, and the comment you're replying to is overly negative.
I think the thing that's annoying is trying to sell other people on using such a tool. Anyone crazy enough to try it should jump right in, but don't try to talk people into it as an actual good blog option.
I call this sort of project "linux on a wristwatch." It's totally cool and a fun hack, but there's no real utility to it beyond an art piece.
Moreover, this is not even "plain JSON". While I don't think the OP makes this exact claim, an arbitrary JSON document with the `#render` bootstrap won't necessarily work, because the JSON can contain valid HTML tags... Interesting as it stands, utterly useless in practice.
Can’t you put the render at the top, and have the script block before anything else, to make sure a) it always executes and b) it removes all the JSON content from the page?
Or actually, you can probably just put some closing angles before the start of your render to make sure you don’t get broken from above. Little sketchy though.
If the script were at the top, it would run before any of the rest of the JSON gets inserted into the DOM, so it would (1) not have access to it (unless it sets up an async callback to do it) and (2) not be able to remove it from the DOM (until that callback runs).
The script might be able to set up mutation observers that notice stuff being added to the DOM and immediately remove it into a buffer the script maintains. This might actually be pretty viable.
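Something like this, perhaps (untested; real parsers also grow text nodes incrementally, so characterData mutations would need handling too):

```js
// Sketch: run this from a <script> at the top of the document, buffer the
// JSON text as the parser appends it, and keep it out of the rendered page.
const buffer = [];
const observer = new MutationObserver((mutations) => {
  for (const m of mutations) {
    for (const node of m.addedNodes) {
      buffer.push(node.textContent ?? "");
      node.remove();
    }
  }
});
observer.observe(document.documentElement, { childList: true, subtree: true });

document.addEventListener("DOMContentLoaded", () => {
  observer.disconnect();
  // Hand the recovered text to the renderer; JSON repair/parsing elided.
  console.log(buffer.join(""));
});
```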
> Or actually, you can probably just put some closing angles before the start of your render to make sure you don’t get broken from above.
That won't help with the fact that the data will actually be "corrupted" by the HTML parser. This is why all the HTML inside the JSON in the example is HTML-encoded.
One plausible fix for _that_ is to have a <plaintext> tag right after your <script>. So put the #render as the first thing in the JSON, set up mutation observers in the script, <plaintext> to prevent HTML-parsing of the rest of the doc, and this might be pretty robust to random HTML bits in the JSON data.
> One plausible fix for _that_ is to have a <plaintext> tag right after your <script>.
That seems more plausible since `<plaintext>` can't be closed. In fact mutation observers can be used to get and process the partially downloaded JSON before onload (I'm pretty sure this is possible but haven't tested, YMMV). That would be still horrible as a general solution, but might be actually an interesting solution for more limited situations.
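Put together, the layout being discussed would look roughly like this. A sketch only: the script would still have to strip the dangling string remnant from the plaintext element's text before JSON-parsing the rest, and the leading text would need to be removed from the rendered page.

```json
{"#render": "<script>document.addEventListener('DOMContentLoaded',()=>{const raw=document.getElementsByTagName('plaintext')[0].textContent;console.log(raw);/* strip the leading remnant, JSON-parse, render */})</script><plaintext>",
 "title": "Hello",
 "body": "No need to HTML-encode <em>anything</em> past this point."}
```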
It's kind of the author's problem and not ours. If they want to figure out how to make a quirks-mode page work correctly, be my guest. It will just be unnecessarily painful.
Quirks mode is not the only problem of this approach.
It may not significantly affect users of modern browsers in their default configuration with good internet connections, but it does affect plenty of other things.
You want to parse things in the document? Now you need a whole different suite of tools from the usual tools you use. Your library that parses all the meta tags, identifies the content, &c. is now useless. Now you need either a full JavaScript execution environment, or a JSON parser instead of an HTML parser (and that JSON has completely lost the semantics that HTML provides, so you can’t query things like “document title” or “meta description”).
You have JavaScript disabled? Here, have a mess that, well, it’s better than most client-side rendering things in that the content is still probably there, rather than the screen just being blank, but there are reasons why you should always prefer server-side rendering for things like blogs.
You have a slow or unreliable internet connection? Now the page is taking longer to load, and until the JavaScript loads, the page is empty—and it may fail to load.
I object to people doing things like this as more than a fun technical demonstration because it does harm some users.
> You want to parse things in the document? Now you need a whole different suite of tools from the usual tools you use. Your library that parses all the meta tags, identifies the content, &c. is now useless.
They're no less useless than for SPAs that render their content in JS. That's all this site is. If your web scraping suite can't handle content loaded from JS then you're already locked out of most of the web.
Same if you disable JS. Most of the web will be broken for you, and it takes an annoying amount of effort to enable the minimum necessary scripts.
For just general web pages (as distinct from apps), it’s not common to actually require JavaScript. I know this because I’ve had JavaScript disabled by default for the last two or three years, and it’s really not all that much of a bother.
The key thing here is that SPAs normally get some kind of interactivity benefits from being written in that style (though I confess they break things that the platform provides, by reimplementing them badly, at least as often), such as loading same-site links faster. But this thing doesn’t do that; it’s purely a projection, like XSLT. It should be done as part of a generator or server, rather than on the client side.
Maybe browsers should artificially delay the rendering of pages in quirks mode to create an incentive for website creators to use proper formats/techniques.
I can imagine the author had their fun, and I also had fun reading the article, but I don't think this is leading to an accessible internet.
> Maybe browsers should artifically delay the rendering of pages in the quirks-mode to create incentive for website creators to use proper formats/techniques.
Interesting idea, but Google has a dominant position in both the browser space and the search-engine space. They can punish such pages in the Google search rankings, and this avoids the obvious retort of Why are you deliberately making my browser worse?
> Yeah, please don’t; this is an abomination that’s fun for demonstrating and teaching how these things work, but should absolutely never be used in reality.
Would you say the same about taking a browser -- which was designed to be a document viewer for researchers -- and turning it into an entire application execution platform like we have now?
Point is: It's silly to say things like this, because this is how innovation happens.
You know, if we could just go back and make that un-happen, we probably would have built an actual internet application platform. That probably would have been a lot better than the layers of hackery that we deal with now, but ok, that's pure speculation.
Perhaps, but that doesn't mean it would be better.
Look at how many over-engineered platforms/frameworks Microsoft made, and it just ended up being over-complicated or reached a point where it was no longer worth continuing.
No, please read my other comments here—the point is that this thing doesn’t solve anything, but introduces various problems.
The only tenuous claim it can have is that the document is JSON, so if you want to parse the data, maybe you’ll find it easier? But in practice this is not useful: you can already embed or link to a JSON representation in the HTML, and that JSON representation then won’t be constrained by having to embed the renderer, either.
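For example (the URL, ID, and data shape are placeholders):

```html
<!-- Link out to a standalone JSON representation… -->
<link rel="alternate" type="application/json" href="/posts/hello.json">

<!-- …or embed it inline as an inert data block. -->
<script type="application/json" id="post-data">
  {"title": "Hello", "body": "Rides along in the page without being rendered."}
</script>
<script>
  // Page scripts (or scrapers running JS) can read the embedded copy directly.
  const data = JSON.parse(document.getElementById("post-data").textContent);
</script>
```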
It solves the problem of having to use multiple languages across front/back-end by using JavaScript/JSON for everything. It may seem as ridiculous as using JavaScript to write back-end server-side and desktop software, yet along came Node and industry hype, and here we are today doing exactly that.
About the blog platform, I honestly thought the author was being sarcastic or joking when they put it out there.
It is not even a valid JSON response to begin with (it's served with text/html), and if the fetcher is configured to ignore that, you might as well configure the origin server to serve the JSON response based on the Accept header.
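A rough sketch of that in plain Node (file names are placeholders, and reading from disk per request is just to keep the sketch short):

```js
// Serve HTML or JSON from the same URL, keyed off the Accept header.
const http = require("http");
const fs = require("fs");

http.createServer((req, res) => {
  const wantsJson = (req.headers.accept ?? "").includes("application/json");
  res.writeHead(200, {
    "Content-Type": wantsJson ? "application/json" : "text/html; charset=utf-8",
  });
  res.end(fs.readFileSync(wantsJson ? "post.json" : "post.html"));
}).listen(8080);
```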
Well, not much worse than all the other fancy JS ways to build a page that everyone and his mother are using today. They are all headache inducingly bad.
That last line has varying degrees of invisibility in different markdown viewers I looked at.
Obviously there's optimization that could be had here but this is equally hacky IMO and simpler since you can just write MD instead of JSON. You lose a couple of key features though -- templated components for example. However, I imagine you could shoehorn those in without much effort.
This has the advantage that without JS enabled you'll still get the content, albeit as poorly formatted markdown in the browser.
It's a valid JSON document that's basically unrelated to the HTML that ends up being displayed, so it's not exactly equivalent to the submitted link.
What the submitted link achieves is impossible without JS, which is also why it's a cute hack and should only be used in production with the knowledge that it's not going to work well for clients without JS.