It also seems to combine the worst of both platforms: Apps.
Is there any OSDEV work that goes towards a more integrated, component-based architecture? As sad as it seems, Unix tools & pipes seem to be the most successful and enduring attempt in that direction. Systems doing that on a language basis seem mostly dead (Lisps, Smalltalks, Oberons), component architecture isn't doing much (OpenDoc as the prime example, CORBA/COM to a minor degree).
I don't think we'll be breaking much new ground when it just comes down to more efficient runtimes, packaging and ways to launch the same old ultra-tightly focused applications.
Component architecture is everywhere on the desktops, just not on those using UNIX text terminals.
COM has been the underlying driving technology of Windows since Vista, when the Windows team took the Longhorn .NET ideas and redid them with COM; since then COM has been improved into WinRT/UWP. Despite its common conflation with the Store (blame the marketing teams), that is what most Windows 10 APIs use to this day, and now even React Native for Windows is built on top of it.
On the Apple side, we have XPC being increasingly used, while Android uses a mix of Binder and Activities; Binder is also the same IPC mechanism powering Treble-based drivers.
XFCE, GNOME and KDE make heavy use of D-Bus.
Then I laugh with joy at the hype around gRPC, as everyone is just rediscovering CORBA.
The ideas from Lisps, Smalltalks and Oberons can be replicated on top of those component stacks, which is what PowerShell actually does to a certain extent on Windows.
I'm not sure where you're getting "COM has been the underlying driving technology of Windows since Vista".
COM and OLE, which is based on COM, have been a core part of the Windows shell since at least Windows 95/NT 4.0/IE 4.
You can see this in the registry, where shell extensions are simply registered OLE components, using OLE Activation to register themselves as toolbars, folder views, context menus and additional commands in context menus.
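For illustration, such a shell extension registration boils down to registry entries along these lines (the CLSID and DLL path below are made-up placeholders, not a real component):

```reg
Windows Registry Editor Version 5.00

; Hypothetical context-menu handler: first the COM class itself,
; registered under its CLSID with the DLL that implements it...
[HKEY_CLASSES_ROOT\CLSID\{11111111-2222-3333-4444-555555555555}\InProcServer32]
@="C:\\Extensions\\MyMenu.dll"
"ThreadingModel"="Apartment"

; ...then the shell hook that tells Explorer to activate it
; for context menus on any file type (*)
[HKEY_CLASSES_ROOT\*\shellex\ContextMenuHandlers\MyMenu]
@="{11111111-2222-3333-4444-555555555555}"
```

Explorer walks ContextMenuHandlers, finds the CLSID, and uses ordinary COM activation (roughly, CoCreateInstance against the InProcServer32 entry) to load the extension in-process.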
If you are referring to drivers or core OS concepts at a lower level, you may be right, but DirectX and class drivers were already using COM in places before Vista.
COM is a standard vtable implementing a single core interface, called IUnknown, which provides a method of reference counting as well as a way to look up implementations of specific other interfaces from a given pointer. These interfaces derive from that base, so they can always be cast to IUnknown.
Edit: saw the reply below; the specific Longhorn components only expanded the scope of what is extensible to include searchable/virtual/indexed filesystem extensions and a few other places, based on the innovative concepts in Longhorn. My reading of your comment originally was that Vista was a departure from what came before, though it appears it may have been intended to say it was more of a return to the simpler COM.
For you to get where I am coming from, for me Longhorn only failed due to the usual set of politics between DevDiv and WinDev that those of us on Windows know so well throughout the years.
Had they actually worked together, Longhorn with its .NET-based stack would have happened, just like it did with Android and ChromeOS. When technical limitations exist, they can be sorted out when everyone rows in the same direction.
So after its failure, and reboot as Vista, the Windows team took many of the Longhorn ideas and redid them as COM libraries.
This decision started a trend (hence my "started with Vista"): all major new APIs added to Windows since then have been COM based instead of classical Win32.
You can see this by following the C++ Hilo tutorial later released for Windows 7.
When Windows 8 came to be, they decided to double down on this direction and came up with WinRT, which, if you read the Ext-VOS paper, looks quite similar. Ext-VOS was what was being designed as the successor to VB/C++/J++, as a COM evolution, before all the events that made .NET happen.
So you got the Windows team creating WinRT; they also had their own shot at what .NET should be, hence why it uses AOT compilation with a compiler that only supports the MSIL subsets they cared about.
As we all know by now, for several reasons this did not turn out as expected; still, UWP (as the COM evolution is now known) remains the way to go for all major APIs, even in the context of Project Reunion.
So we went from Win32 plus some relevant projects using COM, to Win32 mostly frozen at the Windows XP API level plus everything else being COM/UWP based.
This was my point about Vista being the turning point where this took place.
I might be wrong, but this is how I see the evolution of those events.
That's the, excuse my Huttese, friggin' point. We had versions of those tools for ages. And once we even had high hopes of using them outside of a dev's context -- the aforementioned OpenDoc for example, but even early Linux DE's seemed to have some intentions of going that route. I remember using the GNOME CORBA implementation because I needed something that would compile with a proprietary workstation C compiler (no C++).
And it's not even hard to see why we're not doing it -- money and bikeshedding. We're selling apps. Heck, we've gone a step beyond that and don't even do that anymore, we're selling services now. There are two environments where you could go beyond that -- made for hire enterprise software and open source (I've given up on academia, especially after the bachelor-splosion). But that's where the second part comes in play -- most enterprises have given up on trying to come up with some overarching architecture, things are moving too fast and there are too many managers.
Whereas there are too few in open source. Never mind that a lot of open source these days is just commercial turds, github profiling and heavily influenced by what you're doing in your day job.
Conway's Law is really screwing us over.
Yes, we have all the tools to e.g. create a desktop where you could have your favorite editor widget in all contexts, where interoperation isn't just dropping stuff into UTF-8 files, to be massaged by scripts and other applications.
I doubt that it will get better. There's a whole generation starting to fill the FAANG bullpens that never saw anything beyond the cordoned appscapes of their indistinguishable mobile devices. Maybe someone will notice that "hey, I'm using this insanely clever federated way of connecting my microservices with <rpc-du-jour>, what if...". Probably will end up on something like the suckless/unixpr*n pages and go a few steps too far, banging ASCII rocks together to summon the holy gopher.
As much as I like to bash C, even Microsoft was forced to reconsider its position, and the latest MSVC now supports C11 and C17, while UNIX kernels won't ever use anything else.
So anyone serious about OS development should know it, regardless of their opinion towards the language.
As for GObject, there are bindings for almost any relevant language.
You have my solidarity, but I do not see C having a resurgence.
I can share horror stories of expensive hires with MSc/PhD degrees and years of experience having the fright of their lifetime when they see real-world C.
I clearly need people who can do above helloworld level in C.
For many places, there is clearly no alternative to C.
The humongous open codebase of high quality C code, exceeding every other language, is also not going anywhere.
The situation is, simply put, "no alternative to doing it in C, but we cannot do it in C because of some purely non-technical constraint." And I feel that situation is very common.
Chipmakers with billions in cash free to throw at R&D can't make a passable Linux kernel work. The Google Chrome team must employ the best C++ hackers on the market, and yet... and the story goes on like this.
The relative overabundance of webdev/Java people hides the fact that we are not only not gaining developers in conventional programming, but losing them fast to age, career change, or them returning to their home countries in the case of labour migrants.
I can't say they knew nothing at all, or that they were outright lousy.
When it comes to CS fundamentals and algorithmics, most were better than me, who never studied CS academically. That was the reason we took interest in them in the first place.
It's just that you cannot compensate for training and experience with an overall feel of skill. You can hire an ACM olympiad winner, and the guy will still lose it unless he has done C professionally for 3-4 years.
In my engineering degree, doing stuff like B-Trees with their own i-node management module in C, validated against unit tests written by the professor, was a requirement just to qualify for the data structures exam.
Someone lousy at low-level coding won't get through such a curriculum.
Any other "real world" software can be ugly and can be beautifully written in any language, including C (except ones where being cryptic is by design).
Basic C can be picked up in a very short time; there is nothing to be frightened about. When I first laid my hands on C, I wrote working software (a keyboard driver for DOS) on the first day as a learning exercise. Not sure what, other than incompetence as programmers, is causing your expensive PhDs the "fright of their lifetime".
> Not sure what, other than incompetence as programmers, is causing your expensive PhDs the "fright of their lifetime".
Memory management, handwritten loops, pointers everywhere, the absence of ready-made data structures, error handling (or its absence), the epic saga of C strings, sanitizing inputs, and nausea at the slightest smell of SIMD, or of basic binary hackery to begin with.
Those are needed for the tasks C was made to handle: complete control over everything being a primary one. If you do not need those tasks you do not need C. And if you do then get used to it.
Amazing only to those that never looked outside Bell Labs.
Amazing was what Burroughs was doing in 1961, 10 years before C came to be, IBM RISC research in PL/S and PL.8, VAX/VMS stuff in BLISS, Solo OS in Concurrent Pascal, Xerox XDE in Mesa, ....
It was not amazing. It was a decent tool for a particular job. That was all there was to it. Same thing with Rust. There are no silver bullets lying around.
What do the cool kids use to write realtime or realtime-ish software with these days? What gets used for firmware? What gets used to write kernels? What gets used when you have to interact with hardware that uses memory mapped registers?
It might be C++ instead of C for some of the above, but that doesn't make C "a legacy tool".
It's possible to implement GObject subclasses in Rust with little additional overhead compared to doing it in C, but you get the syntactic niceties of for-each loops, traits, and other features of Rust.
librsvg makes good use of these bindings and is undergoing a rewrite in Rust. GStreamer bindings are also becoming more complete, and example implementations of many of the standard plugins have been rewritten. The official implementation is still in C though.
Vala is a neat concept though and a pretty clean language.
> librsvg makes good use of these bindings, and is undergoing a rewrite in Rust.
And is losing support for older CPU architectures as a result.
I really hope we don't see this more before the rust compiler is improved to support older architectures, as otherwise it may become prohibitively difficult to revive older systems...
I mentioned it because some of those "componentised"-style apps do indeed do what amounts to interprocess communication within a single app, for god knows what reason. Android, for example.
[Nushell](https://www.nushell.sh/) is doing something along those lines. They have plenty of work to go before it's perfect, but it's already surprisingly good.
We initially experimented with almost exactly what was described here, but it lacked one thing, and that was third-party expandability. Sadly, I see no way for common and modern apps in their current form to be integrated into anything other than the traditional application format. If this changes, I hope to be the first to jump on board and get the ball rolling.
D-Bus seems to have been doing really well in the past decade. Sound, networking and system management already use it on Linux. It's likely only going to get more common. With its well-defined interfaces, it seems a reasonable solution for system and user components.
What does the "unix"ness of stuff have to do with the lack of popularity of linux as a market? Blaming a technical problem for what is actually social does nothing positive for anyone. Certainly, "linux" itself as a free platform is not necessary for liberation, let alone the litany of other brands you mentioned. Linux is unpopular for many, many other reasons than avoiding technical issues.
If "unix" is considered a success, I'm giving up on coding. What's the point? People will just gravitate towards brand loyalty regardless of the technical underpinnings.
These guys used to be on the Fuchsia subreddit fairly frequently. They were definitely far from experienced or professional. IIRC, the founder is very young and was very much a beginner to OS development.
Hi this is Camden, yeah I'm pretty young. I was very new to OS development. Some of us aren't the most experienced at this kind of stuff, but we're doing pretty well I think for what we can do. A lot of my stuff is learning about kernel development and is mostly what I do. I do a lot with porting devices to fuchsia, and tinkering with the zircon kernel. Thats mostly it.
Usually HN is very positive towards young folks doing stuff, and doesn't measure it by the bar that older people get measured by. Why is this different this time?
Also, even then, you'll probably find people making mistakes, committing hacks, etc. all the time. It's like complaining that the initial Linux release doesn't support x86... Linus didn't write it for x86 originally.
I think the skepticism and criticism of this project is reasonable, given the marketing. Look at it: https://dahliaos.io/
> a modern, secure, lightweight and responsive operating system, combining the best of GNU/Linux and Fuchsia OS
The initial announcement and first couple years of development of linux had no logo or website or anything. Let's compare:
> a (free) operating system (just a hobby, won't be big and professional like gnu) for 386(486) AT clones
I usually just move on without comment ... but I'm kinda annoyed by the trend of the "npm generation" to have all this slick marketing bluster for experimental "just for learning" projects. Go ahead and compare to https://gcc.gnu.org/ - much less slick, much less marketing bluster, much more important project.
This dahliaOS has apparently recently joined the Open Invention Network "joining the likes of Google, IBM, SpaceX, Huawei, Microsoft, Yamaha, Honda, System76, GNOME, Daimler" and is accepting donations. Meanwhile, the founder says:
> A lot of my stuff is learning about kernel development and is mostly what I do. I do a lot with porting devices to fuchsia, and tinkering with the zircon kernel. Thats mostly it.
If there was just a repo and the readme just said that, I don't think there would be any skepticism and criticism here, just some interest and encouragement. But with all this slick marketing, skepticism and criticism is a healthy balance.
That's perfectly understandable. I even feel that I went a little too "slick marketing" with the website, but I have extremely high, maybe even too high, goals for the project.
Having high aspirations for your rock band is perfectly fine, but designing the album cover and the poster for the world tour when you only play some covers from other bands is probably not the best way to make yourself a name in the scene.
I don't know your project in particular so I don't know if this is what you are doing, but I agree with ploxiln in that it is a general trend nowadays.
> but designing the album cover and the poster for the world tour when you only play some covers from other bands is probably not the best way to make yourself a name in the scene.
It does seem to be a trend nowadays. I think the most fitting analogy to what we are doing is "designing the album cover for the world tour when we only have a few songs for it." It most definitely is a little overrepresented on the website, but I'd say the goals and features are completely attainable; they are represented as high when they are really standard features that are relatively easy to implement.
It's fine to have high (even unrealistic!) goals, but IMO you're doing your project a disservice by presenting it as if the future's already here.
I think it's a really interesting project but it's not an honest presentation (not in an exceptional way, it seems to be an existing trend in startup culture so it's understandable that people replicate it).
The wording is definitely a little brash and market-y, but I don't think there is anything on the site that is explicitly false. Of course, it probably comes across as more finished than it actually is, which is a fair reading.
> It's like complaining that the initial Linux release doesn't support x86... Linus didn't write it for x86 originally.
What the hell is this nonsense?
Linus was working on a terminal program for his 386 with native x86 task switching and that's the origin story of Linux. It's clearly recorded on the wikipedia page for the kernel, and he speaks to this in Just For Fun IIRC.
Hmmm I misremembered it then. Thanks for pointing this out.
My general point still stands though. His initial Linux announcement E-Mail has this line:
> It is NOT portable (uses 386 task switching etc), and it probably never will support anything other than AT-harddisks, as that's all I have :-(.
So it wasn't very portable at the start. If you complained about it not being portable to different arches (idk which one was available at that time? SPARC? MIPS?), it would have been unfair. Everyone has to start somewhere.
If anything Linus was extra-humble when announcing it, kind of the opposite of what's going on here. This kind of marketing copy would be appropriate for a mature product, as it is it rubs some people the wrong way.
Startups do the same thing for their first few years. Some companies never drop this attitude. Tesla for example promises a ton of stuff like self-driving, or just building proper cars where it doesn't rain inside [0], but they haven't delivered either of the two yet.
Like it or not, people like this kid run the world.
The tagline ("modern, secure, lightweight and responsive operating system, combining the best of GNU/Linux and Fuchsia OS.") is brash and sets very high standards, so that might contribute a bit.
Besides, the comment you're replying to isn't really bashing them.
I didn’t mean for my comment to be an insult. It is as close to the truth as I remember, and was offered as an explanation for the potentially low code quality mentioned earlier in the thread.
It definitely isn't an insult :). I attempt to keep more quality and professionalism now, the earlier days of the project were, as they say, "turbulent". I do enjoy seeing criticism, as it guides us and often shows us points to improve, especially the code quality, which is being completely overhauled as we progress.
Being positive about young folk doesn't imply a confidence in the tech they produce. This applies just as much to Linus Torvalds as it does to everyone else: you'd be a fool to trust the disk image just based on his enthusiasm.
Indeed. However what is more telling is failing to set up git commit hooks (so one doesn’t need to remember running a formatter) and then randomly mashing the keyboard for the commit message.
Thankfully that's just around for layout reference, working with the Process class was absolutely infuriating. I have made more effort since then to be more verbose and keeping the code up to a decent quality.
Hi, even if it's very early, congrats for starting such an ambitious project!
Instead of aiming for the moon today, may I suggest focusing on only one thing and doing it very well for now.
My suggestion is to target mobile/desktop (convergent) Linux. The story today is not good, and you seem to have achieved some interesting things with your UI shell that might take years to achieve with the usual Linux technologies: GNOME or KDE.
You may even be able to quickly make a business by selling pre-flashed Linux devices like Raspberry Pis, PinePhones, Pinebooks... with your beautiful and easy-to-use UI.
Why does a new open source OS project have to always shoehorn the entire GNU/Linux ecosystem with X11/Wayland, GTK, DBus and GNOME onto a totally different system?
The only positive takeaway from this is perhaps the learning experience in OS development. Other than that is there a point to this?
For the same reason UNIX actually won against much better OSes: copy-paste of freely available source code. C is the JavaScript/PHP of systems programming languages, and there were much better and more secure platforms out there, including Multics.
That, and the other OSes (Multics, VMS, VM/CMS) were considered competitive advantages and not really available for hacking on under the hood or porting to new architectures.
Unix only "won" in the sense that it's used. It doesn't mean that the situation has improved since the 80s... Y'all can stop boasting about a completely mediocre operating system with zero innovations for the last 30 years that functions about as well as what it was copying from the 80s. Y'all even failed to implement plan9's bind despite being an obvious improvement from what we have today. We're stuck with crappy, but stable, software that keeps you employed out of pure shittiness. Congratulations on sinking to the level of private enterprise.
Not sure exactly what it has to do with GNOME or D-Bus; the desktop is backed by X11 (migration to Wayland is underway, but it's a pain to work with), and the Flutter embedder itself renders with GTK. The project was started as a Fuchsia fork, and we intend to continue working hard on getting that ready, but sadly a lack of access to compatible modern hardware forced our hand to temporarily switch over to Linux as the development platform, so the upper layers of our project would not stagnate while we wait for more polishing on Google's end.
For one thing, Fuchsia itself has had support built in for running Linux as a guest OS since roughly two years ago. [0]
For Dahlia specifically, it looks to ship a Linux kernel out of the box to extend its hardware compatibility. Relevant section below:
> Our dual kernel approach allows users with new(er) hardware to take advantage of the Zircon Kernel, while maintaining support for older devices using the Linux Kernel.
I opened the link. The only thing spoken to my screen reader was "enable accessibility". I pressed the button (it didn't accept space bar, but only enter) and it just announced an empty page.
I can't speak to the qualities of the os, but hopefully this won't be the entire a11y experience both on the website and on the os itself if both use flutter.
Yes, this is the kind of things I expect to get better if the tech gets more exposure.
If it doesn't then we'll be surely stuck with half-assed solutions for a truly cross-platform dev experience...
Yeah, sorry about the performance. We kinda put that together as an impromptu web demo, like the old ubuntu tour. Flutter for the web is super sucky right now, and lacks a lot of things like accessibility and normal text interaction because it is backed by a canvas element. Ideally as Flutter matures we will be able to improve both the performance and accessibility aspect of it.
I see two instances of "Uncaught Error: undefined" in the JS console. Strangely, in a different session I instead see "NoSuchMethodError: J.aH(...) is null" and the empty page has a gray background instead of white. I get similar results in both Firefox 81 and Safari 14 on macOS 10.15. Let me know if you want more info.
On Firefox, it's struggling to keep this page alive, to the point that the browser controls are locking everything up. I don't think it's quite ready yet if browsers can't keep up with correctly rendering websites using Flutter.
It is a bit of a mess. It really shouldn't be able to run online, it just happens to be possible, so we decided to put it up. Flutter really isn't quite mature enough for production on the web, (Partially our fault for a decently messy codebase, a rewrite is underway to maximise efficiency).
Do you have an example of a complicated code base that uses flutter? I've never seen an interface designed with it that makes me think "wow. Great interface.", let alone the feeling that it can handle the functionality I would port to it.
If everything is just run as a canvas then you also lose everything that is good about a platform, like accessibility, index-ability (at least on the web), integration with platform tools like copy, selecting text and so, so much more.
I can see canvas rendering as a good target for games and for individual components such as a drawing surface or a chart but using it for a whole app is just throwing out the baby with the bathwater.
Flutter on the web is more complicated than just a canvas element. The project has a fair amount of effort put into a11y on mobile; I’d be really surprised if the web output isn’t also accessible.
It seems like it renders text via an SVG on top of a canvas layer that is the background, so that's a plus for a11y. As for the downsides, and I'm not sure how much this is due to Flutter or the app itself:
* I can't use the browser search for elements outside the viewport
* I can't select text at all
* I can't tab through elements/links at all (which is often seen as a strong indicator of bad a11y)
* Browser back/forward do not work as expected (back exits the app completely, and then forward does not bring me back)
* All the same as the first one, but also clickable elements do not give me any feedback that they are clickable (hand-pointer cursor or any underline/similar effect)
---
Maybe these are just demos that have not thought of these specific things but since it is a web showcase of a framework meant to work on the web (although not exclusively) and these were the first links up there it's a bit disappointing.
My point is that you get most of these things for free (or easy) on the normal web, but flutter seems to try to patch them back onto it instead of using what is there.
Possibility to (re-)introduce binary-only, closed-source hardware driver modules for the Fuchsia kernel? And still get to use all the goodies from GPL userland.
When I looked up Fuchsia and saw that it was Apache licensed, I assumed it was probably because Google is tired of giving back to the community that gave it 99% of the code that it uses. Much easier to do an Apple and take other people's work and not share back when it's not GPL'ed.
Can you give an example of a flutter app people use by choice outside of android? I think my google wifi app uses it and that's about it, and the google wifi app is not exactly the picture of quality software.
The framework seems designed to hold the hand of android developers, not to provide a decent experience to the end user. I have no clue why people keep referencing it as some kind of working cross-platform solution when it clearly caters exclusively to android.
Flutter's team is making efforts to improve it on other platforms; IIRC it powers Stadia, the new Google IoT hardware like the Home Hub (or whatever they are calling that now), and the new Google TV.
To me this looks like the worst of both worlds. Poor UI choice, poor apps and loss of Linux hardware support. Sorry to be negative, but I just don't see where this fits.
How can they release it under the Apache license (which I applaud) if they use the Linux kernel, which is GPL? I always thought those two licenses were incompatible (because of the restrictiveness of the GPL).
The builds are not necessarily licensed as a whole; Fuchsia is Apache, and our source is Apache. I'm not quite sure legally about the final product of it. The underlying Linux system is licensed under various licenses, which are provided with the system. We release the GPL bits under the GPL, and the Apache bits under Apache 2.0, in addition to the other third-party components which have miscellaneous licenses.
The core benefits of fuchsia are the kernel architecture and the security model it allows. If you're using linux for the kernel you have inherently done away with the primary advantage of fuchsia.
I looked briefly at the homepage but the actual architecture was not obvious even though their webpage was "pretty".
Can someone explain what it is they've done in a nice easy TLDR form?
With those benefits, there are also some major caveats that temporarily put a wrench in development. Fuchsia demands a system with a Kaby Lake CPU or newer, which were fairly hard to come by. Thanks to recent innovations in Fuchsia's emulator, it is easier to work on the UI and lower layers of the system, but it is still much easier to develop on and distribute Linux images for the time being.
"dahliaOS provides a fast and stable experience on nearly every computer, from a 2004 desktop tower to the latest generation of mobile notebooks. Our dual kernel approach allows users with new(er) hardware to take advantage of the Zircon Kernel, while maintaining support for older devices using the Linux Kernel."
It uses virtualization to run most/all Linux tools. I'm not quite sure if the functionality is bridged to the UI in Fuchsia yet, but the end product will likely end up looking something like Crostini on Chrome OS.
You can't install it, and this early on you shouldn't. My word alone probably does not carry much weight, but it is relatively safe, likely safer for usage than most current distributions where the only form of security is an up-to-date kernel. We opted to use a sort of stateful/stateless security model, where the system is stateless and verified on boot, and user files are stored inside an encrypted partition with relatively strict permissions for now.
I'm not convinced Google learned anything with Android WRT pushing GPU vendors to release driver source. Fuchsia looks like it will only be a repeat of these problems.
I think they have learned the lesson that GPU vendors will never release their source code. Fuchsia seems to be designed precisely to accommodate that fact: drivers are well isolated and speak to the kernel through a well-defined (and hopefully stable!) interface (I think this is FIDL).
But I don't see how that is relevant here because this thing is based on Linux and X11.