> I've never quite understood why the idea "everything is a file [descriptor]" is often revered as some particularly great insight.
I think the article articulated it decently:
> It is the file descriptor that makes files, devices, and inter-process I/O compatible.
Or if you like, because pushing everything into that single abstraction makes it easier to use, including in ways not considered by the original devs. Consider, for example, exposing battery information. On other systems, you'd need to compile a program using some special kernel API to query the batteries and then check their stats (say, checking charge levels). In Linux, you can just enumerate /sys/class/power_supply and read plain files to get that information.
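For concreteness, here's a minimal C sketch of that, assuming a battery exposing the standard plain-text `capacity` attribute (a percentage); supplies without it are just skipped:

```c
/* Sketch: enumerate /sys/class/power_supply and print each supply's
   charge percentage. Assumes batteries expose the standard "capacity"
   attribute as plain text. */
#include <stdio.h>
#include <dirent.h>

int main(void) {
    DIR *dir = opendir("/sys/class/power_supply");
    if (!dir) { perror("opendir"); return 1; }

    struct dirent *e;
    while ((e = readdir(dir)) != NULL) {
        if (e->d_name[0] == '.') continue;          /* skip . and .. */
        char path[512];
        snprintf(path, sizeof path, "/sys/class/power_supply/%s/capacity",
                 e->d_name);
        FILE *f = fopen(path, "r");                 /* just a file read */
        if (!f) continue;                           /* not a battery */
        int pct;
        if (fscanf(f, "%d", &pct) == 1)
            printf("%s: %d%%\n", e->d_name, pct);
        fclose(f);
    }
    closedir(dir);
    return 0;
}
```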
> On other systems, you'd need to compile a program using some special kernel API to query the batteries and then check their stats (say, checking charge levels)
I asked an LLM how to do this on Windows and got
> wmic path Win32_Battery get EstimatedChargeRemaining
Which doesn't seem meaningfully worse than looking at some sys path; it's not clear what the file abstraction adds for me there.
So you used an existing binary that hits the special kernel API to query the batteries. If you want to do it yourself (e.g. to make your own graphical widget or something), then you have to hit that API yourself. And yes, sysfs is sort of an API too, but it's a simple, uniform one that in many cases can be used via a plain read() instead of needing to figure out some specialized interface.
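For contrast, a sketch of "hitting the API yourself" on Windows. GetSystemPowerStatus happens to be one of the simpler entry points; richer per-battery stats need WMI or the battery IOCTLs:

```c
/* Sketch: querying battery charge on Windows via the Win32 API. */
#include <windows.h>
#include <stdio.h>

int main(void) {
    SYSTEM_POWER_STATUS sps;
    if (!GetSystemPowerStatus(&sps)) {
        fprintf(stderr, "GetSystemPowerStatus failed: %lu\n", GetLastError());
        return 1;
    }
    if (sps.BatteryLifePercent == 255)              /* 255 = unknown */
        printf("battery status unknown\n");
    else
        printf("battery: %d%%\n", sps.BatteryLifePercent);
    return 0;
}
```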
To be clear, I recognize that some kind of general mechanism is useful, I’m just not sure why files and byte streams are considered especially great.
Because the flip side of your example is that you now have a plain text protocol, and if you wanted to do anything else besides cat’ing it to the console, you’re now writing a parser.
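Even the trivial single-integer case illustrates it. A sketch of what consuming that plain-text value robustly looks like (the helper and its bounds here are just for illustration):

```c
/* Sketch: even "read a percentage" means parsing text and deciding
   what malformed input should do. */
#include <stdio.h>
#include <stdlib.h>
#include <errno.h>

/* Parse a sysfs-style integer attribute; returns -1 on malformed input. */
static long parse_capacity(const char *buf) {
    char *end;
    errno = 0;
    long v = strtol(buf, &end, 10);
    if (errno || end == buf || v < 0 || v > 100)
        return -1;                                  /* not a valid percentage */
    return v;
}

int main(void) {
    printf("%ld\n", parse_capacity("87\n"));        /* prints 87 */
    printf("%ld\n", parse_capacity("banana"));      /* prints -1 */
    return 0;
}
```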
> To be clear, I recognize that some kind of general mechanism is useful, I’m just not sure why files and byte streams are considered especially great.
It's one of the local maxima for generality. You could make everything an object or something, but it would require a lot of ecosystem work and eventually get you into a very similar place.
> Because the flip side of your example is that you now have a plain text protocol, and if you wanted to do anything else besides cat’ing it to the console, you’re now writing a parser.
Slight nuance: you could have everything-is-a-file without everything-is-text. Unix usually does both, and I think both are good, but e.g. /dev/video0 is a file but not text. That said, text is also a nice local maximum, and the one that requires the least work to buy into. Contrast, say, PowerShell, which does better... as long as your programs are integrated into that environment.
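A sketch of that distinction, assuming a V4L2 device at /dev/video0: you open() it like any other file, but the conversation is ioctls, not text:

```c
/* Sketch: /dev/video0 is opened like any file, but driven with a
   binary ioctl protocol. VIDIOC_QUERYCAP asks the driver to identify
   itself. */
#include <stdio.h>
#include <fcntl.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <linux/videodev2.h>

int main(void) {
    int fd = open("/dev/video0", O_RDONLY);         /* same open() as any file */
    if (fd < 0) { perror("open /dev/video0"); return 1; }

    struct v4l2_capability cap;
    if (ioctl(fd, VIDIOC_QUERYCAP, &cap) < 0) {     /* but not a text stream */
        perror("VIDIOC_QUERYCAP");
        close(fd);
        return 1;
    }
    printf("driver: %s, card: %s\n",
           (const char *)cap.driver, (const char *)cap.card);
    close(fd);
    return 0;
}
```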