Oh, and video editors. DAWs. Console-y video games. Compilers. Linkers. Operating systems. The overwhelming majority of mobile apps. Dental imaging systems. Robot assisted surgery systems. CNC controllers. Scientific data analysis. GPT-N.
Jeesh. Browser-as-platform offers some amazing possibilities, but get your head out of the sand and grasp the scope of what software actually is and does.
>video editors. DAWs. Console-y video games. Compilers. Linkers. Operating systems. The overwhelming majority of mobile apps. Dental imaging systems. Robot assisted surgery systems. CNC controllers. Scientific data analysis. GPT-N.
Because many of those already offer web interfaces, or can be done online
>get your head out of the sand and grasp the scope of what software actually is and does.
Interestingly, your experience is a stronger counterpoint to the article than the OP's.
The author says that every software domain that gets touched by the web browser gets swallowed by it.
Perhaps it is early days and all compilers will be web-based soon (makes me think of that dystopian horror story "The Right to Read"), but browsers have touched every aspect of the compilation process, and besides JS I would say the majority of compilers are still not web-based.
WebAssembly is a sort of web-based compiler. Compiling WebAssembly to x86 isn't that much different from compiling LLVM's IR to x86. And nowadays, you can interop WebAssembly with most modern languages, making it a target of choice for libraries with cross-platform/language needs.
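To make that concrete, here's a minimal sketch of treating WASM as a plain compilation target, no browser involved. The module below is hand-encoded (it corresponds to a tiny WAT module exporting an `add` function), and the same `WebAssembly` API runs it in Node, Deno, or a browser:

```javascript
// Hand-encoded WebAssembly module equivalent to the WAT:
//   (module (func (export "add") (param i32 i32) (result i32)
//     local.get 0  local.get 1  i32.add))
const wasmBytes = new Uint8Array([
  0x00, 0x61, 0x73, 0x6d, // magic number: "\0asm"
  0x01, 0x00, 0x00, 0x00, // binary format version 1
  // type section: one function type, (i32, i32) -> i32
  0x01, 0x07, 0x01, 0x60, 0x02, 0x7f, 0x7f, 0x01, 0x7f,
  // function section: one function, using type index 0
  0x03, 0x02, 0x01, 0x00,
  // export section: export function 0 under the name "add"
  0x07, 0x07, 0x01, 0x03, 0x61, 0x64, 0x64, 0x00, 0x00,
  // code section: local.get 0, local.get 1, i32.add, end
  0x0a, 0x09, 0x01, 0x07, 0x00, 0x20, 0x00, 0x20, 0x01, 0x6a, 0x0b,
]);

// Compile and instantiate synchronously; the runtime JIT-compiles
// these portable bytes to native machine code, much like an LLVM
// backend lowering IR to x86.
const mod = new WebAssembly.Module(wasmBytes);
const instance = new WebAssembly.Instance(mod);
console.log(instance.exports.add(2, 3)); // 5
```

In practice you'd compile the bytes from C, Rust, Go, etc. with a toolchain like Emscripten or `wasm32` targets rather than writing them by hand; the point is that the consuming side is just another VM embedding.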
> As browsers become more capable, this is encompassing a large fraction of software standards. In a certain sense, (nearly) all software is now web software.
This is just bullshit, that can only be written by someone too immersed in web-centric development to understand the full scope of software development and design.
They were being intentionally hyperbolic. Most of the domains you listed have at least been touched by one web technology or another, whether that be JavaScript, HTTP, video codecs, etc. Nobody's saying literally all software, but virtually every domain has been affected at least a little bit by a web-related standard or technology. Chill out.
Out of that list, the OS and the browser itself are the only ones that couldn't be done in the browser in theory. Yes, even robot-assisted surgery systems, although that kind of disturbs me a little too. You could just do the GUI in the browser and the actual hardware control in a real-time scheduled process. But even then, what's stopping some sort of WASM real-time scheduling extension from being created?
The rest are gradually moving towards the browser as performance issues get fixed, e.g., WebGPU, WASM, etc. It's going to take time, but the incentives are still strong to gradually move whatever can be put in the browser into the browser, and to keep creating more web extensions to clear any remaining roadblocks.
In the long run, the OS under the browser, the browser itself, and real time/embedded type code that runs on hardware too constrained or critical, are the only things safe from being swallowed up by the browser's world. The modern browser platform is like what POSIX or the JRE were meant to be, except it actually worked.
I was recently trying to get some basic stuff done in the web version of Excel. I have an Apple Magic Mouse, which has a touchpad, so you can do gestures and scroll in any direction. The spreadsheet had a lot of columns, so I was doing a lot of left and right scrolling to get around, but most of the time I tried to scroll to the left (by swiping right on the touchpad), it'd start activating the browser's back gesture to take me back to the last page I was on before opening the spreadsheet. Eventually I gave up and used the desktop app, and it was like a breath of fresh air (which is saying something for an Office app).
Trying to port desktop apps to the browser is already so limiting because the browser itself gets first claim to so many keyboard shortcuts. Want to make your app do something when the user hits Cmd+L? Well, good luck, because that already focuses the address bar. And you certainly don't get to use Cmd+W, because that will close your entire app!
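For illustration, here's the usual pattern (the shortcut set and function names are mine, not from any particular app): a page can claim some chords by calling `preventDefault()` in a keydown handler, but browser-reserved chords like Cmd+W or Cmd+T are handled before the page ever sees them, so there is nothing to cancel:

```javascript
// Chords our hypothetical app wants to claim (used with Cmd/Ctrl).
// Cmd+S or Cmd+K can usually be intercepted via preventDefault();
// Cmd+W and Cmd+T are reserved by the browser and never reach us.
const APP_SHORTCUTS = new Set(["s", "k"]);

// Returns true if the app claimed the chord (and suppressed the
// browser's default action), false if the browser keeps it.
function handleShortcut(event) {
  if ((event.metaKey || event.ctrlKey) && APP_SHORTCUTS.has(event.key)) {
    event.preventDefault();
    return true;
  }
  return false;
}

// In a real page you would wire this up with:
//   document.addEventListener("keydown", handleShortcut);
```

The function takes any object with `metaKey`/`ctrlKey`/`key`/`preventDefault`, so it can be exercised outside a browser with a mock event, but the core limitation stands: for reserved shortcuts the handler simply never fires.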
Yeah, this is the rub with the genericization of everything - application design for most folks will begin to look bland (it already has with WWDC) in an effort for WebAssembly, SwiftUI and the like to "write once, run anywhere."
We’ve forgotten that the beauty of computers is that we get to do things we’ve never done before, not just pump out code.
IMHO we're worse off for it. I still keep around non-web versions of old software like Picasa that are so much better than the web alternatives now being offered.
We've given up a lot for being able to run in a web browser. Web UI is janky and fussy and time consuming to use. I've been using computers since 1988, and the UI now is the worst it's ever been for anything other than dicking around doing half-assed work.
These are some seriously rose tinted glasses. What I remember about computers in the 90’s was incompatibility and inconsistency.
The last paragraph of the article is worth revisiting.
I couldn’t use the same features between Windows and Mac, and the Linux version of commercial apps would barely work if they existed at all.
Fighting with codecs, flash player, macromedia, incompatible file formats, ActiveX, RealPlayer, etc.
No way to easily extend functionality in the way that web based apps’ APIs offer. How do you connect information between two unrelated applications in the 90’s? The answer is that you didn’t!
On top of that, plenty of applications are still just regular local applications. We can use Picasa as an example. Many of its alternatives are not even web-based, including Apple Photos (a native app that does not require iCloud or the Internet at all), Adobe Lightroom, and many free alternatives like gThumb.
The thing is, most web-based applications are web-based because the web is so powerful and useful.
For example, VSCode would lose its powerful extension ecosystem if it wasn’t web-based. The whole reason it’s so successful is that any web developer can make an extension easily. On top of that, the app works the same on Mac, Windows, Linux, even inside a web browser.
What would we gain by the app no longer being based on web technologies that wouldn’t be offset by the lost functionality?
Offline apps are still developed, and they’re developed for uses where that makes more sense. For example, DaVinci Resolve, Affinity Photo, and Final Cut Pro are all regular non-web applications, because that’s what makes sense for them.
- Fighting with codecs? Indeed, gone if you use the web because one or two browser makers force a lowest common denominator codec on everyone to keep their own IP costs low. Would your use case benefit from a higher quality codec and you're willing to pay for it? Pound sand, you get what Mozilla wants you to get even if you don't use Firefox.
- Flash player? Indeed, gone because one man didn't like it. Would you benefit from the highly artist friendly timeline and vector art system Flash had, the compact binary format, the powerful animation system? Did you enjoy animated web comics made by people with tons of artistic skill and no coding abilities? Pound sand, you get CSS animations and you can roll the rest on your own (which nobody does).
- How do you connect information between two unrelated applications in the 1990s? Wow, pick your poison. DCOM/OLE was the most advanced implementation of this, but Apple had OpenDoc. Apps could expose components that implemented standardized APIs, they could expose sophisticated object models allowing you to reflect and script them from many kinds of language, they had a more powerful equivalent of iframing and so on. What does web tech give you that can match this? Nothing! If two web apps can connect to each other it's because of a bespoke integration. Browsers think they're drawing scientific documents so they don't have any notion of a page having an API. Every web API is a horrible hack in which you pretend some code is actually a pile of "documents". Schemas, API conventions, tooling, language binding is all totally scattershot. There's not really any such thing as just scripting a web app from a language of your choice like Windows managed decades ago, it's all custom each and every time.
"most web-based applications are web-based because the web is so powerful and useful"
Will have to disagree with that. The web lacks many basic features once considered essential.
People like making web apps for the desktop (not mobile) because of things like sandboxing, convenient server connectivity, "instant on" without (un)installation, clickable URLs without extra effort, no piracy risks, monitoring and metrics coming for free, blurred lines between documents and apps etc. Basically because of the stuff that browsers happen to expose as an artifact of their implementation and evolution path.
There's other reasons too (e.g. Microsoft/Apple dropped the ball on window management, a ball picked up by tabbed browsers) but I won't try and make a comprehensive list of all of them here. That's a future project :)
Oh, also and hugely importantly, because distributing desktop apps is much harder than it should be - there's lots of legacy tech and platform differences making it awkward. That's absolutely fixable though. In fact, my new startup is getting ready to launch a tool that makes distributing desktop apps as easy as distributing web apps. It's not quite launched yet but there's a mailing list form you can fill out at https://hydraulic.software if you're interested. The goal here is to start at the beginning: make it easy to distribute apps outside the browser, and then start to explore platforms that retain most of the benefits of a browser whilst fixing their weak points. It should be especially useful for the community of developers interested in building decentralized apps, which are a very poor fit for web tech.
That’s the thing about the old way of interfacing with applications, it was different on every platform, limiting its usefulness in practice. REST APIs are universal.
I think it’s telling that you’re working on a startup to fix desktop app distribution problems, even though there are dozens of distribution methods for desktop apps. The fact that commercial installer software companies still operate says it all.
The distribution model of the web is precisely what is so amazing about it. It is already decentralized. It has zero distribution friction.
Sure, if you buy some SaaS app, that’s not really decentralized, but I can download a self-hosted app (e.g., nextcloud), install it on my server, and now I can interface with it and “run the application” from any system in the world instantly. It can also talk to other systems that aren’t even running on my server or in my country.
The web already solved distribution. More than solved it. That’s why everyone is using it.
A REST API is really an app specific network protocol that happens to build on HTTP instead of TCP. It's not actually an API in the sense we're talking about above. Most obviously there's no standard way for a web server to say "I implement this standard typed interface X", and even if there was, there's no way to ask browsers to enumerate all the web apps the user "has" that implements interface X because browsers don't have any notion of a user "having" an app to begin with. That's sometimes beneficial, and sometimes it just means things can't integrate with each other.
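To illustrate the contrast (a sketch; `implementsContract` and the spell-checker shape are made-up examples, and the analogy to COM's QueryInterface is loose): in a component system the caller can ask at runtime whether an object implements a named contract, whereas the "interface" of a REST payload exists only in out-of-band documentation - the client parses JSON and hopes:

```javascript
// A hand-rolled runtime contract check, of the kind COM's
// QueryInterface made systematic across processes and languages.
function implementsContract(obj, methods) {
  return methods.every((m) => typeof obj[m] === "function");
}

// A local component that happens to satisfy an "ISpellChecker" shape.
const spellChecker = {
  check: (text) => text.split(" ").filter((w) => w === "teh"),
  suggest: (word) => (word === "teh" ? ["the"] : []),
};

// Runtime-verifiable: the caller can discover the capability.
const ok = implementsContract(spellChecker, ["check", "suggest"]);

// The REST equivalent: the response is just JSON. Nothing lets a
// client enumerate "all services implementing ISpellChecker"; the
// payload shape is a convention agreed on in documentation.
const restResponse = JSON.parse('{"misspelled": ["teh"]}');
```

Tools like OpenAPI schemas narrow the gap on the description side, but there is still no browser-level notion of "the apps this user has that implement interface X".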
In contrast stuff like DCOM/OLE supported tons of features that REST doesn't even try to address so it's an apples/oranges comparison to a large extent.
Don't get me wrong - decoupling the app APIs from the underlying operating systems has advantages, and I'm not trying to bring DCOM back, but clearly the advantages aren't that huge because on mobile a more modern take on the exact same concepts stomped web tech completely. Everyone wants native Android/iOS apps and they want them so badly that companies will write the same app frontend twice to satisfy that need.
"The distribution model of the web is precisely what is so amazing about it. It is already decentralized. It has zero distribution friction."
True, but this is engineering. There are always tradeoffs. It gets this zero distribution friction by simply ignoring any use cases that don't fit its document-oriented model. Most obviously, every approach to making the web support poor or non-existent connectivity is terrible and has never gained traction, and that's a problem. Many, many apps need to be able to run offline. Anything that manages a critical process of any kind simply cannot handle being served from the cloud as a web app:
• Industrial control
• Medical
• Servers themselves
• Military
• Much gaming (you play games when you're bored, why are you bored, maybe because you don't have the internet ...)
• Anything to do with managing the network itself
• Anything where latency is critical (can't beat the speed of light)
• Airgapped for security
etc. Apps in these fields don't just experience friction when trying to use the web, they experience a brick wall.
Then there are the many, many apps that struggle to ram their square pegs into the round hole of a browser, e.g. embedded devices struggle because browsers won't automatically discover them, because browsers are slowly turning the screws on requiring TLS but the TLS model doesn't really work for embedded devices you connect to locally. Developer tools. And so on and so forth.
Finally, there are other common distribution use cases browsers don't even try to support:
• Downgrading if the server ships a regression. "Regression" doesn't have to mean bug or outage, it can mean deliberate UI redesign, hence all the screaming whenever web apps change their UI layouts. Your workforce productivity can drop overnight because some SaaS shipped a new UI nobody asked for, and you're just screwed at that point. If it didn't come at a good time for your business or the new version is a downgrade for you, once again the web cordially invites you to try pounding sand. It might help you feel better.
• Apps that aren't controlled by a single organization (hence why web apps suck for the blockchain/decentralization community - Bitcoin 0.1 wasn't a web app for good reasons).
• Apps that need to keep working even if the originating organization goes bankrupt.
• Apps that need to separate data storage from software distribution. The origin concept doesn't make this easy.
A lot of companies in the mobile world basically wish mobile apps were distributed like web apps, and the mobile world is slowly moving towards a web-like distribution model with things like Android dynamic features and App Clips.
Web apps are infinitely expandable, with instant rollbacks and rollouts and no update & app store review cycle, which can add weeks of delay between code complete and landing on a user's doorstep. Because native updates are not instant, you have to be a lot more cautious with deploys than you would be in an equivalent web app.
You could also hack it today via dynamic libraries that you update on demand via a supervisor shell for your actual app. There is nothing necessary with browsers and web specific tech to do all of the above.
Yes, there are lots of benefits to having apps be streamed on demand with an open world approach. I'm not trying to argue that mobile distribution is universally better, just observing that people have a choice there and they avoid the web platform. Effectively trading a different distribution model, maybe a worse one, for other benefits.
"Web apps are infinitely expandable, instant rollbacks and roll outs and no need for an update & app store review cycle which can add weeks of delays between code complete and landing on a user's doorstep"
This is an advantage for developers and may or may not be an advantage for users, depending on whether the upgrade is truly an upgrade or not. Web SaaS firms simply pretend their software always gets better and any disagreement is mere "change aversion" but their incentives aren't always aligned with the end users, many of whom may simply not need new versions. The web can't handle this scenario because browsers have no understanding of version numbers to begin with.
No, they don't have a choice in the mobile world. Google and Apple de facto decide the distribution model and place a lot of restrictions on how you can distribute updates to your app, forcing people into a specific model for app updates. Certain web features that would make web apps significantly more competitive with mobile apps stay unimplemented on Apple platforms, which, if you follow some webdevs on Twitter, you'll see them complain about constantly.
Apple explicitly does not allow unreviewed binary code downloads, but do allow a sort of gimped version with javascript. If apps could manage their own update cycles and methods, you'd be seeing a lot more of this dynamic behavior today vs. the half solutions that are things like react native and homegrown equivalent. I wouldn't be surprised if android has similar restrictions here and there.
Furthermore, it's not really devs/engineers who want these instant testing cycles, but PMs and businesses. Businesses want it because it lets them execute faster and make more money via faster A/B test cycles, and lose less money from client-side outages via much faster rollbacks. The cultural appetite for this is voracious, and because of that business-side demand I don't see it going away. Eventually software is going to need an internet connection whether you like it or not, and there won't really be any such thing as version numbers anymore for the vast majority of software.
It's basically Darwinian evolution, because it makes more money and users don't really think about versions or updates that much at all unless special marketing is done to make differentiation of versions. Like evolution, the solution might be unpleasant, but it stays in place if it is effective.
People do have a choice on mobile. They can make web apps or native apps and they choose native. You can blame Apple for not implementing every last web standard, but Google definitely do, and people make native Android apps in preference to web apps too. So clearly the web isn't all that competitive when put up against motivated OS teams. Windows hasn't had that for a long time, nor has Linux or macOS (not motivated in the right way) so the Chrome guys have taken over there.
Still, it's just not the case that the web would win if not for Apple. And "I wouldn't be surprised if android has similar restrictions here and there", well, be surprised. Android allows side-loading as a condition of licensing it, which allows self-updating apps. Telegram do this for example. They've also added features for app streaming to the Play Store and so on.
Fast rollouts and A/B testing isn't something specific to the web, it just happens to be a consequence of how it works. As for client side outages, well, that's really something that we associate with web apps. I've never had a sudden outage of a desktop or mobile app unless it was just a frontend for a service that went away. Updates can contain bugs but updates so broken they take everything down - no, doesn't happen, gets picked up in testing and/or during the incremental rollout because not everyone updates at once. Whereas it happens to web apps pretty regularly.
> How do you connect information between two unrelated applications in the 1990s?
The answer is another question. It depends on whether the two had any overlap/business talking to one another. If they did, you'd:
1. Save it to a common file format, e.g. .csv or .wav, then load it.
2. Cut and paste the rich text/media data directly.
That handled just about every case unless you were wanting to pipe random binary data into a Word doc. There are now more standard formats so it has improved, but often already worked in the 90s.
The real problem in the 90s, to my mind, was the lack of a stable (as in reliable) operating system that could work in 8-12 MB of RAM and had lots of software. Every OS missed at least one of those qualifications. OS/2, NT 3 & 4, and Linux were getting close but didn't have enough professional apps.
I used to be on the "Flash is awesome! Don't kill it!" team. But I've changed my mind. Flash needed to die. It existed in a world where everyone was on desktop with a mouse. Fixing it to be responsive, so that the majority of creations made in Flash worked on both desktop and mobile and adjusted from landscape to portrait, would have basically broken Flash, IMO.
There was Flash for mobile, it did work. Meanwhile, web content by default doesn't really work on mobile screens either. You need to carefully adapt it. There are frameworks that can help and provide sensible defaults, but there's no particular reason Flash couldn't do the same. Towards the end of its life it wasn't just an animation system but a full app platform.
Realistically, the reason Jobs hated Flash was that he demanded total control over the software stack. Adobe did license the Flash plugin code, and SWF was a documented/open-ish format, but presumably there was some corporate politics there. As Jobs put it, he didn't want "a third party layer of software coming between the platform and the developer".
I started out my career writing a small business application that was sideloaded on 20+ devices. There were many bugs, as this was my first professional project. I was literally figuring out how APIs work and how to move data between the android app and the server. Many poor decisions were made.
Every update was extremely painful as users were not technical, so often I would have to collect the devices and perform the updates myself. It was a mess.
Then, I decided to use the web instead. I was more experienced in it and bugs were much easier to handle. Every device on the local network could access it, no need for installs. You do miss out on some built-in features but for me it was a good sacrifice.
If this is the case I hope it gets better because the standards process seems to be failing these days. I need to keep at least two browsers on a machine in case a site I use doesn't work with one of them. It seems that since the standards process is largely controlled by commercial vendors we're regressing back to that era of the web where vendors largely ignored standards and touted their own tech/extensions/etc.
Update: Not to mention how often buggy they can be and how terrible the performance is in terms of energy usage, responsiveness, etc.
For those of us maintaining any of the billions of lines of legacy code that runs the majority of IT... nope. A lot of use cases can be covered by the web ecosystem, but a lot can't as well.
This seems to be a common fantasy or fallacy of developers under a certain age (I'm too far past that age to guess which). I suspect because of the amount of dedication required to keep up with front-end technologies.
The reason it's so complex is because it's a bad fit. Soon enough everything will be WASM and we'll be right back where we were in the 90's with the "thin client" vs "thick client" debate.
The good part of it is that there ARE some problems being solved. WASM is a good start on trusted computing, and that's not going to go away ever. Unless that other fantasy of my generation (X-Gen) comes true - post-apocalyptic societal collapse. Not sure that could be called a "fantasy", maybe "collective nightmare" is a better term for it.
That's fair, it was hard to condense the gist of it into a few words. I think (hope!) the opening paragraph is a good summary of what it's actually about, though:
> Here’s a theory: browsers, by sheer force of adoption, become the standard bearer for every domain of software they touch, even usurping incumbent standard bearers. As browsers become more capable, this is encompassing a large fraction of software standards. In a certain sense, (nearly) all software is now web software.
In this sense “web software” is broadly meant in the same sense that software that deals with Linux standards might be called “linux software”, and the point of the article is the surprising amount of influence browser developers have over software (and even hardware) we don't usually think of as being “web software”.
Definitely. I think it's a nice way to frame it and I didn't mean to imply it was deceptive. I just see lots of people in the comments here saying things like "Nuh-uh! Apps X, Y, and Z are still written in native languages", which if they'd clicked the link they'd know is not at all the point being made in the article
But web standards are crappy for decent GUI/CRUD applications that typically are used on a desktop, which is most business software. Web standards natively lack state handling, and the DOM makes sanely controlling the position of text a royal bear. We really need a stateful GUI markup standard. Bending the web to do that job makes for really ugly stacks and/or resulting software.
Or just use a shell that understands structured data and can convert output to JSON. In Nushell you can query the free space, append `| to json`, and you're done.
DAWs use XML, to the author's point. I'm not familiar enough with AAA development but I wouldn't be surprised at all if JSON and XML make a showing there as well. That's not to mention that if the game is online, it certainly uses some combination of HTTP and sockets.
XML predates the web because, as wikipedia puts it:
> XML is an application profile of SGML (ISO 8879)
The same page also notes:
> most of XML comes from SGML unchanged.
XML is, therefore, not "web technology". It has been used within a web context, but predates the web in almost all senses other than having actually been created by the W3C.
> XML predates the web because, as wikipedia puts it:
> > XML is an application profile of SGML (ISO 8879)
That doesn't mean it predates the web... HTML is an application profile of SGML, too, so by your argument HTML predates the web (as it would have to, if XML did, not only for that reason, but because factually HTML predates XML) and is not web technology. This, of course, is silly.
XML was consciously modelled on HTML but generalized for arbitrary data. It is very much web, or at least web-inspired, technology.
SGML -> HTML -> world decides HTML is too limited -> XML
All are the progeny of SGML; HTML is "novel" because of how limited it was; by opening up the scope of XML to more or less all of SGML, it is much less "of the web" than HTML was.
I was working with and paying attention to markup schemes and languages from the mid-80s. At the time XML emerged, I remember a giant collective yawn from that side of the software world, precisely because it just seemed that W3C had decided to embrace-but-not-really-extend SGML.
But the order you recite to support this description is the same order as in my description you are responding to. What does “backwards” mean to you?
> by opening up the scope of XML to more or less all of SGML, it is much less "of the web" than HTML was.
It's true that XML covers about the entire semantic space of SGML, but it deliberately does so with a much smaller syntactic space, inspired largely by HTML in its approach.
To say it wasn't “of the web” ignores the history of inspiration, development, standardization, and use.
> I was working with and paying attention to markup schemes and languages from the mid-80s. At the time XML emerged, I remember a giant collective yawn from that side of the software world, precisely because it just seemed that W3C had decided to embrace-but-not-really-extend SGML.
Yeah, and like the famous Slashdot CmdrTaco response to the iPod (“No wireless. Less space than a Nomad. Lame.”), that response completely missed what was relevant in the real world. XML (as bad a reputation for heavyweight implementations as it has compared to some more modern data representations), while in principle an application of SGML, is much lighter weight to implement tooling for and a much smaller conceptual space for humans to understand (traits it shared with its inspiration, HTML).
Progress isn't always by extension, it's often by recognizing and eliminating what is superfluous to requirements.
“It seems that perfection is achieved, not when there is nothing more to add, but when there is nothing left to take away.” — Antoine de Saint Exupéry
Don't get me wrong. I am a huge fan of XML and use it extensively with my own non-browser-platform software. It has indeed turned out to be far more consequential than SGML.
But how much of that has anything to do with the use of XML on the web? I'd argue not much at all. XML's real success has been as one of the first data formats that could really be used to represent more or less anything at all, and very quickly had excellent parser implementations for all the languages that mattered.
The entire "impact" consisted of the significance that Apple is getting into that space: a company with a powerful brand backed by a history of delivering popular consumer tech. I'm sure that wasn't lost on Malda; but his remarks at the moment were about the specs (the what, not the who). There was no reason for a non-Apple-fanatic to get that iPod.
Which DAWs use XML specifically? I haven't dug deep, but I never found any XML when dealing with any DAW.
Games don't use HTTP (or WebSockets, which I suppose you mean) for gameplay. Maybe for stats, lobbies or such, but gameplay networking mostly happens over UDP.
There's a huge lesson in this. Web's layout engine is really bad, but because so many developers have experience in it and it works everywhere, it makes sense to build Electron apps.
The "works everywhere" part is the original reason (developer mindshare grew from that). It's hard to fathom how big a chasm it was to cross for a truly cross-platform open-standard GUI environment to reach critical mass such that every general purpose computer + OS must support it out of the box. It's not like Sun, Macromedia and Trolltech didn't take their shot, but these things need skin in the game from the OEM/OS side in order to succeed. It's the same structural advantages that ultimately loosened Microsoft's iron grip on profitable software in the early 00s.
>HTML started as a format for hypertext documents, but as it became more capable, applications jumped on it as a cross-platform UI layout engine.
I wish HTML actually allowed markup of hypertext. The most productive parts of the design space got truncated by the mistake of flattening markup INTO the text.
Well, not all of it. But there is so much desktop software moving to the web, and that is a good thing. You don't have to install it to use it, and much of it doesn't need the performance and native capability of bare metal.
OK, let's break down and analyze the author's arguments, which IMO are good, but perhaps are more of a jumping-off point than a conclusion. He argues that web tech tends to leak out of browsers and into other domains. There are lots of examples of this. This occurs because:
1. Web tech is hard to change after launch due to the sheer size of the web and the amount of abandoned content, so it remains stable and is designed to be general. Developers like both.
2. Browser tech is subsidized by ads and hardware sales so has big budgets yet gets given away for free.
3. Browser makers (or rather, Google/Mozilla) insist on patent-free technologies. This can potentially kill off or stymie the development of more advanced tech, in particular in the codec space, but it's convenient.
He then argues that WebAssembly, WebGPU and QUIC will be the next technologies to go the way of JSON, XML etc and get widespread adoption outside the browser. And that's about it. The author passes little judgement on this trend beyond observing that it happens.
There's definitely a lot of reasons why our industry has settled on (let's face it) abusing a document format and document renderers as a way to distribute software, and Paul identifies some of them. In particular the rigorous commitment to backwards compatibility in a way that's PM-promotion-proof is hard to beat. It comes from a combination of the enormous amount of free content that any new browser (version) wants to be able to render, along with the network reachability of that content allowing very solid regression testing programmes. Other platforms tend not to test against their user's codebases in the same way that browser makers do.
Still, there's a lot of dissatisfaction with web tech out there because it was simply never meant for what we do with it and the seams are really showing these days. The sheer number of pre-processors and languages-transpiling-to-other-languages that modern web tech involves is a testament to that.
It feels like there's scope for competitors to the web stack to arise. I've noticed a lot of developers like to spin implausible stories about the importance of being on the web and how "nobody downloads apps anymore", but there's not much concrete evidence for this. Mobile/tablet is obviously a major hole in this theory, along with the fact that people actually download Chrome itself, the success of Minecraft, the existence of the entire video games industry etc. People often end up over-betting on web tech because they like the idea of a standards based platform, but then quietly hit its limitations and give up (most notable example: Steve Jobs). I remember when WebGL was new there was a wave of predictions that video games would all be web apps in future. Yeah, didn't happen.
What would a competitor want to emulate about the web? There's a million things that could be written about what the web does well and badly, but to focus on just one issue raised in the article you'd really want a way to discover downloadable apps that were built with the platform, so you could download them and run them against new versions of your platform. Once you can precisely measure how much stuff your new version breaks you can then set metrics around it and drive it to zero, so developers get a rock solid platform with no regressions. It'd also let you quantify the cost of breaking backwards compatibility (which should discourage it).
You'd probably also want a much higher degree of modularity. Although the article claims web tech keeps leaving the browser and being reused this is really more in spite of browser makers, not because of them. The Chrome team don't actually care about anything they do being used outside of Chrome itself, which is why stuff like Electron and NodeJS all come from external organizations, and why there's tons of useful tech in the Chromium codebase that languishes unused. It's not well documented, it's often not particularly well modularized, etc. Any competitor could certainly do better and provide more like an IKEA kit of components that, in one instantiation, happens to provide a browser like experience with sandboxing, tabs, code streaming etc, but in others could be more like an Electron-style library experience.