This article is weird, linking together many unrelated strands of thought.
Like linking reactive programming to “reactive” UIs, when really they mean UIs that are forgiving to their users instead of breaking down.
Or how by coding on the web we’ve lost the immediacy of a UI that runs on our desktop, and the primitives (like undo/redo stacks) that make desktop user interfaces friendlier, at least without having to build them ourselves.
And that new UI frameworks show up claiming to solve the complexity problems React creates, by forgoing functionality React provides and pretending it's unimportant by hiding behind toy examples.
There’s actually not much in this article that doesn’t resonate with a jaded millennial like myself who knows their computer history, but it could have been expressed more cohesively.
As someone who has admittedly worked on approximately 0 user-facing programs, it seems like there are not many revolutionary ideas when it comes to UI.
I have some dream of a sort of user-interface system that exposes controls and data more directly to the user, independent of the author's stylistic choices. Sort of like semantic HTML with style-sheets that are configured on a per-user basis. It would be analogous to the unix shell in the sense that it would allow small programs to easily inter-operate, like plan9's plumbing or something. Instead of having a big monolithic program like kdenlive or blender, you would have a number of modifiable general-use programs that can be re-arranged to fit many use-cases in an extensible way.
But instead of that, it just seems like every toolkit or library wants to be highly specialized and complex, providing for a very specific use-case rather than making the most general, universal user interface possible. Programmers should not be concerned with the appearance of their UIs, like window decorations or the layout of buttons or text fields, for the same reason they should not be concerned with the minutiae of optimizing assembly code: it leads to a non-portable design. That should be left up to the system.
It's a nice idea, but if it were easy it would have been done by now. I think a lot of the problem is that UI tends to be very specific, and often giving one component what it needs to work well imposes requirements on other components.
In order for a UI to work well and be comprehensible, there has to be a sensible hierarchy over the entire layout. If you don't achieve that, the whole might be less than the sum of its parts.
But I think React is actually probably the closest thing to what you are describing. It does a pretty good job of encapsulating UI components into fairly small chunks which can be styled externally to fit into a larger layout.
I also agree it would make sense to break things like Blender into smaller applications. But actually I think that would have more to do with having a standardized interoperable model layer, with a clear interface, so different applications could modify the same data with a different UI layer.
> I have some dream of a sort of user-interface system that exposes controls and data more directly to the user, independent of the author's stylistic choices.
It will never happen.
Companies want to "brand" their UI to use it as a marketing tool. Usability and integration be damned.
Tk, and especially Themed Tk (Ttk), mostly fits this. It even has 3 geometry managers.
I've ported some code from Tk to run on the web (including working correctly in Lynx, as much as possible) [0], and the problem is still impedance mismatch. It's worse when running it on a phone, because you want a fundamentally different UI than you can easily describe. For example, Toplevels (aka windows) don't make any sense on Android or on the Web, but work well on the desktop.
> But instead of that it just seems like every toolkit or library wants to be highly specialized and complex, and provide for a very specific use-case rather than making the most general-possible user interface that is universal.
Blender, and 3d modeling in general, is a very specific use case which probably couldn’t just have some UI code thrown over a library and be expected to work. Unlike displaying some text and an image or two to rant about “those damn kids”, there’s real math behind transforming mouse movements into 3d space. Expecting to display a 3d environment in whatever front-end UI code you want, so it can have a pretty wrapper, is very unrealistic.
I’m having a hard time even imagining how this would work without a large team behind the scenes doing all the heavy lifting to make it even remotely possible. Wasted time IMHO, as having specialized tools for specialized domains is just how these things work.
I understand Haiku (née BeOS) has UI concepts at the OS level, which should probably simplify many things.
For UI, you also need to extend programming languages to add "nested structure with logic" as a first-class construct (which is the real value of JSX, and could, I suppose, be done trivially in Lisp?)
I would love to have Rust + nested-tree syntax + a native UI lib that looks decent on every system.
But it's never going to happen, for now-obvious reasons, so never mind.
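To make the "nested structure with logic" point concrete: this is roughly what JSX desugars to, sketched in plain TypeScript (the names `h` and `VNode` are illustrative, not from any particular library):

```typescript
// A minimal hyperscript-style tree builder, roughly what JSX compiles down to.
type VNode = {
  tag: string;
  props: Record<string, string>;
  children: (VNode | string)[];
};

function h(
  tag: string,
  props: Record<string, string>,
  ...children: (VNode | string)[]
): VNode {
  return { tag, props, children };
}

// "Nested structure with logic": conditionals and maps are embedded
// directly inside the tree as ordinary expressions, no template language.
const items = ["undo", "redo"];
const menu = h(
  "ul",
  { class: "menu" },
  ...items.map((label) => h("li", {}, label)),
  items.length === 0 ? h("li", {}, "(empty)") : h("li", { class: "sep" }, "---")
);

console.log(menu.children.length); // 3
```

Since the tree is built from plain function calls, any host-language construct (loops, ternaries, helper functions) composes with it for free, which is exactly what JSX buys you over string templates.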
desktop environments (from Windows to iOS, including OSX, macOS, KDE, Gnome and everything in between) sort of already do this. they provide services and they provide apps/programs that require those services.
what they don't provide is interchangeability of programs from different environments. probably the closest thing is the freedesktop.org semi-standards, but there's not much compatibility or interoperability between the KDE and Gnome stacks. (KDE's Plasma needs all the usual K-things and Gnome's thing needs, well, I don't know what it needs, but it does almost nothing but needs a bunch of things to do that :D Xfce sort of piggybacks on Gnome, but also uses a lot of their own shit, but at least Xfce is great.)
To me this idea sounds a little bit like how Java Swing works. At least it seems similar to the pluggable LookAndFeel?
I think it was a great idea but it seems like it never got any traction. Or maybe more like Java never got popular on desktop.
I've thought the same thing. A tech that lets you consume data from web sites but visualize it in any way you want. Despite the naysayers in other replies I think it is possible
in the early days of web2, everyone was high on RSS feeds and dreaming about PubSubHubbub ... when a google maps embed was just a few seconds and half a line of code, when OAuth1a and JSONP were all the rage, well, back then 3rd party was not a swearword and it did not automatically mean ad-fucking-track-tech.
back then it was seen as the great, bold, and messy cross-linking organically-grown future.
it was called "semantic web", every day people were announcing how simple it is to encode more and more machine-readable stuff less-and-less intrusively into your HTML or xml+atom.
then facebook became the web, twitter became the truth and one day the dream got mugged, beaten up and left to bleed out in a back alley on July 1 in 2013.
everyone got mad and posted about it on their phone using an app from a nice boring, proactively curated walled garden.
>OpenDoc is a defunct multi-platform software componentry framework standard created by Apple in the 1990s for compound documents, intended as an alternative to Microsoft's proprietary Object Linking and Embedding (OLE).[1] It is one of Apple's earliest experiments with open standards and collaborative development methods with other companies. OpenDoc development was transferred to the non-profit Component Integration Laboratories, Inc. (CI Labs), owned by a growing team of major corporate backers and effectively starting an industry consortium. In 1992, the historic AIM alliance launched between Apple, IBM, and Motorola—with OpenDoc as a foundation. With the return of Steve Jobs to Apple, OpenDoc was discontinued in March 1997.
[...]
>After three years of development on OpenDoc itself, the first OpenDoc-based product release was Apple's CyberDog web browser in May 1996. The second was on August 1, 1996, of IBM's two packages of OpenDoc components for OS/2, available on the Club OpenDoc website for a 30 day free trial: the Person Pak is "components aimed at organizing names, addresses, and other personal information", for use with personal information management (PIM) applications, at $229; and the Table Pak "to store rows and columns in a database file" at $269. IBM then anticipated the release of 50 more components by the end of 1996.[7]
>Steve Jobs handling a tough question at the 1997 Worldwide Developer Conference. He had just returned to Apple as an advisor and was guiding sweeping change at the company. The full video is here - [original video taken down by Apple - new link: https://www.youtube.com/watch?v=GnO7D5UaDig ] - this interaction is at 50:25.
I highly recommend taking the time to watch the entire WWDC 1997 video -- it is historically profound.
Steve Jobs walked on stage and announced he wanted to take questions from the audience, right after his return to Apple from NeXT and the sweeping changes he made.
Some of the best, most difficult questions really made him stop and think before speaking, and he delivered frank, fascinating answers.
We all know what happened next, but it's amazing to hear how deeply and confidently he thought about it before it happened.
>Apple's Worldwide Developers Conference (WWDC) in the San Jose Convention Center (May 13-16) was the first show after the purchase of NeXT, and focused on the efforts to use OpenStep as the foundation of the next Mac OS. The plan at that time was to introduce a new system then known as Rhapsody, which would consist of a version of OpenStep modified with a more Mac-like look and feel, the Yellow Box, along with a Blue Box that allowed existing Mac applications to run under OS emulation.
>The show focused primarily on the work in progress, including a short history of the development efforts since the two development teams had been merged on February 4. Several new additions to the system were also demonstrated, including tabbed and outline views, and a new object-based graphics layer (NSBezier). Source: wikipedia.org
Check out the stylish 8-bit error-diffusion dithered gradients and cheesy '90s drum loop in this delightfully retro but content-free "Apple OpenDoc Technology Intro" -- the takeaway point is that Apple and IBM and Oracle were really pushing OpenDoc and making a lot of promises about supporting it for a while:
>Video clip from the original Apple OpenDoc: A Crucial breakthrough! CD
This one has a much funkier intro, and has more substantial down-to-earth information about OpenDoc. He's got HyperCard and AppleScript in their original boxes on the shelf behind him, so he knows what he's talking about:
>OpenDoc: A New Standard for Compound Documents, a lecture by Kurt Piersol. The video was recorded in March 1994.
>From University Video Communications' catalog:
>"OpenDoc represents a new standard method of building editors which support assembly and editing of compound documents on many computing platforms. Kurt Piersol discusses the need for compound document systems, their requirements, and OpenDoc's industrial and technical approach."
The article started really strong and I believe the point about React and knock-off frameworks not evolving to really solve challenges modern web applications face is good and worth exploring.
Then it took an awkward and very long detour into gushing over Apple design.
And it "circled back" to a rant that seemed to trivialize things like collaborative editing and undo/redo like they were generically solved problems in the past. No, collaborative editing was not a solved problem 20 years ago. It's an evolving space with recent advancements in the CRDT space from Yjs and Automerge really opening things up.
Started really good but I was hoping it would have ended up with an analysis of the gap between where React (and browsers in general) are currently and where it ought to be; with some ideas around how to cross the gap.
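For context on the CRDT point above: here is a minimal sketch of the idea, using a grow-only counter, which is far simpler than the rich shared types Yjs and Automerge actually implement, but shows the core convergence property:

```typescript
// A grow-only counter (G-Counter), one of the simplest CRDTs: each replica
// increments only its own slot, and merge takes the per-replica maximum,
// so concurrent edits converge without any coordination.
type GCounter = Record<string, number>;

function increment(c: GCounter, replica: string): GCounter {
  return { ...c, [replica]: (c[replica] ?? 0) + 1 };
}

function merge(a: GCounter, b: GCounter): GCounter {
  // Element-wise max is commutative, associative, and idempotent,
  // which is what guarantees all replicas converge to the same state.
  const out: GCounter = { ...a };
  for (const [id, n] of Object.entries(b)) {
    out[id] = Math.max(out[id] ?? 0, n);
  }
  return out;
}

function value(c: GCounter): number {
  return Object.values(c).reduce((sum, n) => sum + n, 0);
}

// Two replicas diverge offline, then merge in either order to the same total.
let a: GCounter = {};
let b: GCounter = {};
a = increment(a, "a");
a = increment(a, "a");
b = increment(b, "b");
console.log(value(merge(a, b))); // 3
console.log(value(merge(b, a))); // 3
```

Collaborative text editing needs far more machinery than this (ordering inserts, handling deletes, intent preservation), which is why it remains an active research area rather than a long-solved problem.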
> This article is weird, linking together many unrelated strands of thought.
You know, this isn't the first time I've thought as much about Acko's writing :\ Often there isn't a coherent thesis, but a bunch of interesting related thoughts that don't add up to anything specific.