Why V7 Unix matters so much (utcc.utoronto.ca)
218 points by zdw on Nov 26, 2021 | hide | past | favorite | 93 comments


I was shown it in 79. I didn't "get it." I wound up using it in 81/82 and got it immediately. In the gap I'd moved from having the thinnest understanding of what programming was, to completing a CS degree and working in systems management and networking. Once you have had to use a pre-UNIX JCL such as a TOPS-10 system, moving to a unix system was a clear step up. The DEC 10 was a fine machine; it was just harder to sequence discrete units of code into an outcome. The pipe, and a reasonably programmatic shell and related tools, were the killer app moment.

VMS was good, but it was still sys$system:[thing]/path arcane. And had complex database-like file abstractions which I am sure were a godsend to people in that space, but IPC (using an abstraction called "mailboxes") was a nightmare. VMS had file versioning, something I think UNIX missed out on. ZFS snapshots do much the same at a whole-of-FS level. UNIX still won, because it was more logically consistent, and had pipes. BSD4.2 brought sockets, which I still hate, but they worked inside the consistency model.

v7 is where I learned to type. BSD4.1 is where I learned to develop. I still have a fond memory for the v7 pdp11.

unix 32V had deep roots in v7, as did the York unix port of v7 to the Vax with VM extensions. I think by then, BSD made it clear where things were going. System III -> System V were really aberrant, streams aside.

If Linux hadn't emerged, I think the BSD fracture would have led naturally to something dominant in FreeBSD or NetBSD or OpenBSD. But, they lost impetus to mind-share in the world of personal computers.


> VMS was good, but it was still sys$system:[thing]/path arcane. And had complex database-like file abstractions which I am sure were a godsend to people in that space, but IPC (using an abstraction called "mailboxes") was a nightmare.

Indeed. One could not even open a text file without defining an elaborate file descriptor structure that had a dedicated field for every single potential problem that could occur whilst handling the I/O. UNIX? Just give me the file name, here is your file descriptor, an integer, now go away.
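
For contrast, a minimal sketch of the UNIX side of that: open(2) takes a path and some flags and hands back a small integer ("notes.txt" here is just an illustrative name).

  #include <fcntl.h>
  #include <stdio.h>
  #include <unistd.h>

  int main(void) {
      int fd = open("notes.txt", O_RDONLY);   /* just a name: no record format, no field-per-failure block */
      if (fd < 0) {
          perror("open");
          return 1;
      }
      char buf[256];
      ssize_t n = read(fd, buf, sizeof buf);  /* read up to 256 bytes */
      if (n > 0)
          write(STDOUT_FILENO, buf, (size_t)n);
      close(fd);
      return 0;
  }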


VMS --- the operating system from DEC --- has flatfiles, and file creation is as simple as opening one in an editor, much as with Unix.

MVS --- the mainframe OS from IBM --- has a far more complex data storage model, with "disk" storage being "DASD" (direct access storage device), and requiring a long JCL or TSO/ISPF file definition including cylinders and sectors (similar to formatting a DOS disk, rather than simply creating a file).

VMS certainly has its warts, and I much prefer Unix / Linux. But requiring verbose file definitions isn't one of them.


Including if the file is already being used by another process, what could go wrong.


It depends. If one process is writing into the file whereas other processes are reading from the same file, there is not much wrong with that. Otherwise, we wouldn't be able to tee log files in UNIX.

Yes, the UNIX file locking semantics and implementation are at odds with parallel operating systems. The rise of «worse is better» in its glory – I will take three.


What's wrong with a file being used by another process?


All sorts of data corruption possibilities.

Even after UNIX finally added support for file locking, its use relies on all processes making use of the API; if they don't care, they get to access the file anyway.
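
A small sketch of what "advisory" means here, assuming BSD-style flock(2); the hypothetical "shared.dat" only stays consistent if every writer bothers to take the lock:

  #include <fcntl.h>
  #include <stdio.h>
  #include <sys/file.h>
  #include <unistd.h>

  int main(void) {
      int fd = open("shared.dat", O_RDWR | O_CREAT, 0644);
      if (fd < 0) { perror("open"); return 1; }

      if (flock(fd, LOCK_EX) == 0) {          /* advisory exclusive lock */
          /* Another process calling flock() blocks here, but a process
             that just open()s and write()s the file is not stopped at all. */
          write(fd, "critical update\n", 16);
          flock(fd, LOCK_UN);
      }
      close(fd);
      return 0;
  }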

What angers UNIX users about file locking on Windows is actually how proper file locking is done outside of UNIX; Windows is not alone in that regard.


As a self-identifying Unix user, what angers me about file locking on Windows is that I can't delete a file that another process has opened. This often leads to the situation that I can't delete a large directory structure because some unknown program has a file open in said directory. I don't care, just let me delete the directory please! Maybe it's not a "Unix" design, but something done by Linux. The concept that a program can continue using an already opened file while to others it's deleted is quite elegant.
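
A quick sketch of that "deleted but still open" behaviour (the file name is arbitrary): unlink(2) removes the name immediately, while the data survives until the last descriptor is closed.

  #include <fcntl.h>
  #include <stdio.h>
  #include <string.h>
  #include <unistd.h>

  int main(void) {
      int fd = open("scratch.tmp", O_RDWR | O_CREAT | O_TRUNC, 0600);
      if (fd < 0) { perror("open"); return 1; }

      unlink("scratch.tmp");                  /* name gone; ls no longer shows it */

      const char msg[] = "still readable through the open descriptor\n";
      write(fd, msg, strlen(msg));

      lseek(fd, 0, SEEK_SET);                 /* rewind and read it back */
      char buf[64];
      ssize_t n = read(fd, buf, sizeof buf);
      if (n > 0)
          write(STDOUT_FILENO, buf, (size_t)n);

      close(fd);                              /* storage reclaimed only now */
      return 0;
  }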


This times 1000. Windows’s pessimistic file locking is why even trivial system updates require a reboot. Makes my blood boil every time.


Mostly because the people writing those updates don't update themselves (pun intended).

https://docs.microsoft.com/en-us/windows/win32/rstmgr/restar...


It is elegant until it happens to be an important file now lost forever on process shutdown, and yes it is pretty much UNIX.

A very basic attack vector: yank files used by other processes without taking part in the traditional lock file dance.


It can be very bad, e.g. when the file is a database ;) - so here we are, circularity in use-cases :)


Funnily enough, BSD Sockets were essentially a crash port of code from TOPS-20... (which had a much nicer shell than TOPS-10, though)


I liked tops10 but once you used a Unix directory tree the tops10/version model just didn't make sense any more.

If streams had made it out the door sooner sockets might not have swept the world.


I'm curious: what do you use today?


OSX, which is BSD inside (mostly); ubuntu & debian derived stuff on pi4 because "it works"; and freebsd on Dell rack mounts for systems backends (about ten). Some docker and k8s. Some bhyve stuff hosted on a freebsd master. An ix systems NAS on their BSD platform.

I lived in netbsd on thinkpad for quite a while. I used openbsd for network edge VPN and the like. I talk to a lot of BSD people still.


I feel the same - V7 is significant - but for different reasons.

(1) V7 did not use an MMU; there are no shared object libraries. The whole stack feels simpler, less layered. It is straightforward to hand-craft a binary on a pre-ELF system.

(2) V7 was just before select was introduced. V7 was only capable of basic networking like uucp, where you give over the CPU to the networking operation.

A while back I wrote some notes contrasting RetroBSD (2BSD, close to v7) to LiteBSD (4.4 BSD), http://songseed.org/dinghy/d.pic32.platform.html

Both ports were done by the same developer, from different versions of the same codebase, separated by four years. If you compared any releases of Solaris, FreeBSD and Linux from the last twenty years, the differences would seem trivial compared to the differences between RetroBSD and LiteBSD.


V7 supported hardware segmentation registers. Look at sureg() in usr/sys/sys/ureg.c [1]. What it didn't support was pages. That came with 3BSD [2].

[1] https://github.com/v7unix/v7unix/blob/master/v7.tar.gz

[2] https://en.wikipedia.org/wiki/History_of_the_Berkeley_Softwa...

BTW, I think by far the main reason that V7 matters is that it was (somewhat) portable. That begat the workstation boom.


They call them segmentation registers in the source, but it's really paging hardware in the modern parlance. The estabur function in your citation can be seen iterating over the whole virtual address spaces (one each for I and D), and setting the permissions for each 8KB page. It's not super clearly written code, but those ap and dp variables are pointers to PTEs.

The hardware they're programming against is a KT-11D (compatible) MMU. Here's its user guide: http://www.bitsavers.org/pdf/dec/pdp11/1140/KT11D_UsersMan.p...


> (1) V7 did not use an MMU; there are no shared object libraries. The whole stack feels simpler, less layered. It is straightforward to hand-craft a binary on a pre-ELF system.

It is straightforward, or not quite. V7 on the PDP-11, even on models with the MMU, had a 64kB process address space limitation (actually, I think it was 56kB); therefore, the solution was overlay linking (a cheap and nasty implementation of memory paging in user space) for applications that could not fit into the fixed address space, which was pretty arcane to get right.

If the code accidentally crossed the overlay boundary, the application would enter a death spiral: discarding a chunk of address space to load the next overlay, then almost immediately discarding that overlay to load the previous one back in, and starting over again.


If V7 didn't use an MMU, how did it load programs? Did it rely on PIC code? Did it have a relocating loader that it relied upon? Curious how Unix managed to progress as far and as quickly as it did without an MMU while relying on something as notorious as C in an unprotected space. Just seems system stability would always be at stake.

Just curious what their experiences were like, not questioning it.


V7 absolutely used the MMU. V6 did too; you can see its use in Lions' commentary.


The PDP-11 had an MMU, but it was an extremely simple MMU, even simpler than what was available in the Intel 8088/8086.

When an MMU is mentioned in a UNIX context, an MMU like the one in the DEC VAX is understood, which provided features like access protection, dynamic relocation and virtual memory.

The main purpose of the PDP-11 MMU was to extend the maximum memory size from 64 kB to 4 MB, but that was done in a very inconvenient way.

A single process was limited to 64 kB of program code + 64 kB of data. For larger processes one had to use memory overlays, i.e. to swap out parts of the program while other parts were swapped in over the same addresses, but this feature was not used in UNIX; it was, however, used in other operating systems for the PDP-11, e.g. DEC RSX-11M.

In UNIX, the larger memory was exploited by loading in memory many concurrent processes. It was easier to split some task into subtasks done by cooperating processes communicating through pipes, than to attempt to remap some of the 8 kB pages of the 64 kB address space while the program was running, in order to exceed the process size limit.
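
A rough sketch of that pattern in modern C, with pipe(2) and fork(2); each stage gets its own (on a PDP-11, 64 kB) address space rather than sharing one overlaid image. The strings here are purely illustrative.

  #include <stdio.h>
  #include <string.h>
  #include <sys/types.h>
  #include <sys/wait.h>
  #include <unistd.h>

  int main(void) {
      int fds[2];
      if (pipe(fds) < 0) { perror("pipe"); return 1; }

      pid_t pid = fork();
      if (pid < 0) { perror("fork"); return 1; }

      if (pid == 0) {                      /* child: the "producer" stage */
          close(fds[0]);
          const char line[] = "partial result from stage one\n";
          write(fds[1], line, strlen(line));
          close(fds[1]);
          _exit(0);
      }

      close(fds[1]);                       /* parent: the "consumer" stage */
      char buf[128];
      ssize_t n;
      while ((n = read(fds[0], buf, sizeof buf)) > 0)
          write(STDOUT_FILENO, buf, (size_t)n);
      close(fds[0]);
      waitpid(pid, NULL, 0);
      return 0;
  }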


The PDP-11 MMU was more powerful than what existed in the 8088/8086. It provided a supervisor/user distinction and didn't allow user code to go to town on the MMU directly. Yes, being a true 16 bit system, it was limited to 64KB total in each address space, but there was both

* True process isolation since user code couldn't load whatever they wanted into the base registers like on an 8086,

* True pages, so that the entire 64KB didn't have to be contiguous like on an 8086, but was instead broken into 8KB pages that could be mapped, shared between processes, and mixed together in different places in the virtual address space.

Everything you're saying applies equally to the 32bit virt/36bit phys PAE world that we had for a while, and I wouldn't call that a simple MMU.
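
For the curious, a rough sketch of that translation as I understand the KT11-style MMU: the top 3 bits of a 16-bit virtual address pick one of 8 Page Address Registers, whose page address field (in 64-byte units) is added to the 13-bit offset within the 8KB page. This is simplified (no PDR length or access checks), and the register contents below are made up.

  #include <stdint.h>
  #include <stdio.h>

  /* Simplified: ignores I/D spaces, kernel/user mode, PDR length and access bits. */
  static uint32_t kt11_translate(uint16_t va, const uint16_t par[8]) {
      unsigned page   = va >> 13;           /* which 8 KB page (0..7)            */
      unsigned offset = va & 0x1FFF;        /* offset within that page           */
      uint32_t paf    = par[page] & 07777;  /* page address field, 64-byte units */
      return (paf << 6) + offset;           /* physical address                  */
  }

  int main(void) {
      /* Hypothetical contents: virtual page 0 mapped at physical 0200000 (octal). */
      uint16_t par[8] = { 0200000 >> 6, 0, 0, 0, 0, 0, 0, 0 };
      printf("%o\n", (unsigned)kt11_translate(017777, par));  /* prints 217777 */
      return 0;
  }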


It is true that the PDP-11 MMU provided memory protection, unlike 8088/8086.

However that mattered only for the operating system kernel, as it allowed process isolation.

For user programs, the 8088/8086 MMU was a thousand times more convenient, because the segments could start on any 16-byte boundary instead of 8 kB boundaries and there were 4 segments instead of 2 and you could load a pointer to a new segment with a single instruction.

For an IBM PC, it was very easy to write a 512 kB program, you just had to select an appropriate memory model option for the compiler. For a PDP-11, even when having 4 MB of memory, writing a large program was extremely difficult.

You had to partition the code and data of the program into pieces not too small and not too large, to fit in 1 or a few 8 kB pages. You had to partition in such a way that the parts would not need to be active simultaneously and you would not need to swap them frequently. Swapping parts was slow, as it had to be done by an operating system call.

The linker had to be instructed to allocate the overlapping parts to appropriate addresses, so that it would be possible to swap them.


If you wanted to eschew protection for ease of use, you could just map the unibus page into the user process and let them fiddle the attributes directly.

Additionally the 8kb pages were subdivided into 128B contiguous blocks, with a base and limit, allowing smaller granularity.

The feature set that the 8086 gave you was more or less still available, there was just another option that was deemed more useful in most cases.


I have to disagree - I ported v6 and v7 (and System III/V/etc) - at the time we called them 'MMU's - we distinguished between a pdp11 "base and bounds" MMU, a "SUN style" SRAM based MMU (with fixed pages loaded at context switch time), a 68451 (which had power of 2 region mappings loaded at context switch time) - and full paging MMUs (vax, PMMUs, the RISC chips with software replacement when they came out)

But we called all of these "MMUs"


> but this feature was not used in UNIX

I believe this was actually used (to a limited degree) in later releases of 2BSD, as features from 4BSD were backported to the PDP-11 and the kernel ballooned in size such that overlays were necessary. The authors of said overlays seemed rather exasperated by the whole thing, and I vaguely remember watching a talk on YouTube where someone recounted the (apocryphal?) tale of them pushing their pdp-11 out the window and cheering.


The 8088/8086 did not have an MMU.


The 8088/8086 had an MMU that provided only addressing space extension, from 16-bit to 20-bit.

It did not provide memory protection.

The PDP-11 MMU provided memory protection so that process isolation was possible, but its main function was also addressing space extension, from 16-bit to 18-bit in the first version and from 16-bit to 22-bit in the later version, but that function was much less convenient to use than in 8088/8086.


The 8088/8086 segment architecture is not what is generally considered an MMU. Yes, it provides address translation. That's part of what an MMU does. That doesn't make it an MMU, though.

The 80286 is generally considered to be when an MMU was added to the x86 line.


The address translation, either with segments or with pages, is the essential function of an MMU and by far its most complex feature.

Adding flags for additional features piggy-backed over the address translation, e.g. memory protection, is the easy part of a MMU.

80286 added memory protection, which is essential for a multi-user operating system like UNIX. That is why it was the first Intel CPU to which UNIX was ported, but any computer with address space extension must have an MMU.

8088/8086 was indeed unusual in having an MMU without memory protection, because it was intended only for personal computers, while the previous computers expensive enough to need address space extension were all intended for multi-user applications, so all included memory protection in their MMUs.


"Address translation" at the level done by the 8088 is not the most complex part of a MMU. It's a shift and an add.

So, no. That's not what any of the rest of us mean by "MMU".


And in hardware, the shift is free, so it's literally just a 12+16 adder.
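
For concreteness, a tiny sketch of that shift-and-add (real-mode 8086 style, with the usual 20-bit wrap); the segment:offset pairs below are just the classic aliasing example.

  #include <stdint.h>
  #include <stdio.h>

  static uint32_t real_mode_addr(uint16_t segment, uint16_t offset) {
      return (((uint32_t)segment << 4) + offset) & 0xFFFFF;  /* 20-bit physical address */
  }

  int main(void) {
      printf("%05X\n", (unsigned)real_mode_addr(0xB800, 0x0000));  /* B8000 */
      printf("%05X\n", (unsigned)real_mode_addr(0xB000, 0x8000));  /* same byte: B8000 */
      return 0;
  }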


There was a Xenix version for the 8086. Of course, without memory protection, a misbehaving process was able to wreak havoc.

You can run the system in your browser using the pcjs emulator: https://www.pcjs.org/software/pcx86/sys/unix/sco/xenix/086/2...


> The 8088/8086 had an MMU that provided only addressing space extension, from 16-bit to 20-bit.

In hardware this is literally an add. Calling a single add (with aliasing) an MMU is quite a reach. The 80286 is not merely an extension on this as the segment registers actually do indirection. Also in terms of silicon real-estate I think you greatly underestimate the complexity of what you call the “easy” part - anything that can fault is no longer trivial for one - the 8088 cannot do this.


I believe the 8086, as originally designed and implemented, had a segmented address space with base and bounds registers in the style of the CDC 6600. This rudimentary but effective memory management approach was scrapped by Intel to make the 8086 part commensurate with the stepper reticle and semiconductor process of the time.


An email exchange with one of the 8086 architects informs me that no version of the 8086 design had base and bound hardware. Vivid and detailed as my recollection is, I appear to be remembering something that never happened.


The pdp-11 had a very primitive virtual memory system. It was not page based. So you had to swap in and out entire processes (with separate instruction and data spaces you could leave the program in memory, swap out data, and swap in data from another process running the same program).

Later pdp-11 models could have a few processes in ram at the same time.


Only for very low end systems. Most PDP-11s running Unix had something like the KT-11D MMU providing true paging hardware.


You are right. After I wrote my text, I realised that there must have been some kind of paging. Of course, with 8 Kbyte pages in a 64 Kbyte address space it is not the same granularity as today. Imagine unmapping one page to keep the stack separate from the heap. That's 1/8th of your already small address space.

But the part that is also missing is the ability to restart instructions after a page fault. If I remember correctly from some old discussion, that may have been possible for many instructions but not for some popular instructions.

I found an online copy of the reference manual which states: "It is not generally possible to recover from these aborts."

So paging yes, but not in the modern sense where you can recover from a page fault. Which is key to many aspects of a modern unix system. For example, copy-on-write forks.


The 8KB pages could also be broken down into 128 byte blocks, so that you could map a (contiguous) portion of a page, and not have to use the whole 8KB. You could also use the same feature to give yourself guard 'pages' (but really guard blocks). It specifically supported the ability to grow down for stacks and guards.

And the abort issue is around restarting indirect addressing modes, which will result in repeats of accesses. Technically this is an issue if you were pulling pointers directly out of MMIO regions, but it's one of those "doctor it hurts when I do this." "Then don't do that" kind of situation. If you're getting real cute with advanced addressing using MMIO regions to bounce out to other regions, do that in the kernel (as you probably would be anyway). Then you can do all the fun virtual memory tricks.


I don't have any knowledge of V7, but V6 definitely used an MMU (as does xv6).


V7 was released in 1979. If you click on the first link in the article, you get to the wikipedia page on V7, and from there you have a screenshot[1] of what running `ls -l /usr` looked like in a terminal

Two things stand out to me about this screenshot:

1. The output of `ls -l` is very very similar to what my 2021 debian+coreutils produces.

2. The contents of `/usr` are not all that different either! Directories that are common to my system and this screenshot are: games, include, lib, src.

[1] https://en.wikipedia.org/wiki/Version_7_Unix#/media/File:Ver...


UNIX V7 (1979) had more differences compared to the older versions than all the current UNIX descendants have compared to V7.

The POSIX standards correspond mostly with V7 with relatively minor additions from the later ATT and BSD UNIX versions.

Many of the command line utilities that are still used today have appeared only in V7, e.g. the Bourne shell sh, sed, awk, m4, lex, basename, test, expr.

Most command line utilities from V6 or older versions, even those that have the same name as later utilities, may have very different behavior.


I like the UNIX haters handbook. Mostly for 2 reasons:

  - some of the criticism is reasonable and
  - today, most of those haters probably depend on some form of UNIX one way or another.


They mostly did at the time too, I think. The Handbook's sentiment was largely "you can make me use it, but you can't make me like it".


Very HongKonger life today


I came to UNIX via a v7 clone, Mark Williams' Coherent, back in 1991. I have used UNIX and UNIX-like OSs (Linux after 2001) as my everyday system ever since.

After discovering how primitive the much-vaunted Windows 3.1 was, back in the early 1990s, I never went through a 'Windows as my everyday OS' phase at all.


For CLI stuff yes, for those of us into gaming and graphics, only the likes of SGI and NeXT were appealing, and they were quite out of reach.

I had a big UNIX phase, and ironically my graduation thesis was to port an Objective-C based particle engine framework to MFC, as the department was getting rid of its NeXTSTEP Cube and moving to Windows for graphics programming.

Additionally the library archives of the faculty showed me the alternative computing universes and thus UNIX lost its magic to me.


UNIX was well provided for with GUIs all through the early 90s and later. The reason I knew Windows 3.1 was so primitive around 92-93 was that I had been using a GUI at least as advanced as Windows 98 would eventually become about 5 years later (SVR4 in AT&T's UNIX, which then became Novell's UNIX ("Unixware")). In the same period Sun had its SunOS GUI (retrospectively renamed 'Solaris 1') and then its Solaris 2.

My progression was Coherent (CLI) in 1991, followed by AT&T UNIX in 1992, Novell's Unixware in 1993, then Solaris 1 in 1994 and Solaris 2 from 1995 to 2001. (All my UNIXes from 1992 onwards had GUI Desktops. How else would I have used an Internet Browser called 'Mosaic'? I was a very early adopter of that new-fangled Internet. I'd been connected up since my Coherent days in 1991).

By 2001 Linux had, for all intents and purposes, caught up with the commercial UNIXes.


Yeah it had GUIs, painful as hell to program for. Only masochists can enjoy Xlib and Athena Widgets.

And CDE? No thanks.

Compared with Amiga, Windows 3.x and OS/2, UNIX has nothing to be proud of in the GUI chapter, other than outliers like NeXTSTEP or Irix, which in spite of CDE had other GUI capabilities on top.

It was designed for and remains a CLI focused OS, with a graphics buffer on top without a coherent stack.

As for Linux catching up, until 2005 I only used AIX, HP-UX and Solaris in production.

While I was at CERN, Linux was being adopted as a transition away from Solaris to run on the clusters (2003); everyone was either using Windows 2000 or the newly released OS X on their desktops/laptops.

In my group I was one of the Scientific Linux early adopters, and yes, in the context of running CMT, displaying LaTeX-produced documents, Java based GUIs for the accelerator dashboards, or the same POSIX stuff as any other UNIX, it was ok for the research workflows being done there.

Ironically even in such contexts the large majority was either using Word or Framemaker for their papers.


> I was a very early adopter of that new-fangled Internet. I'd been connected up since my Coherent days in 1991

If you were a very early adopter of the Internet in 1991... then how should we describe the activities of Douglas Engelbart or Vint Cerf 2 decades prior?

1983-85 I dialed into BBSes, which was fun, but not as interesting as dialing up and logging into UF servers without credentials, exploring lists and exploiting resources when everyone was asleep. That was 6th-8th grade for me. In the fall of 1989 I was forced to purchase an expensive A/UX license, and I am still enamored with it. Then I was lost in desktop publishing for nearly a decade before my first IT jobs using NetWare in the late 1990s, to using (not admin) RedHat servers in 2001, which happened to coincide with the Win2K and Mac OS X desktops I set my team up to use. I utterly spoiled them with 2 desktops each.


.. then how should we describe the activities of Douglas Engelbart or Vint Cerf 2 decades prior?

That's very true.

In relative terms, I was an early adopter. I clearly remember the buzz that went around when somebody told us that there were now the unbelievable number of 'one million nodes on the Net'. That was four years or so before MSFT 'got religion' and turned on a dime to provide Web software. (Much as I dislike MSFT, that was really fantastic, fast work!)

These days of course, a million nodes more or a million nodes less is merely 'daily noise level'.


For whatever reason, no one uses "World Wide Web." You were an impressively very early adopter of the WWW, or web. Excuse me, please, I've trolled better.


> By 2001 Linux had, for all intents and purposes, caught up with the commercial UNIXes.

No, I do not believe that is correct. That sounds like Penguinista revisionist history. Linux was quite usable yet still half-baked in 2001. In 2005, when IBM tried to switch everything over to Linux, there was a major revolt among AIX administrators that had numerous valid complaints, so even by 2005 Linux was not quite ready for prime time. By the end of 2008, the server install split was about 50/50 Linux/NetBSD. Everyone thinks Linux took over in 2008, but I think it was later, 2012. Linux was super popular without a reason to be. I still, today, do not see it as any better than NetBSD. Grassroots marketing by PC punks blanketing forum posts with nut bar verve made Linux popular, and not by its own merits. Linux is pop music.


Linux was popular because it was fast on x86 hardware. And unlike BSD, all programs which came with the distribution worked. With ports you could get into a situation where one program would not compile, or where it needed a different library than the one installed, which broke things. Also, the BSD wars turned a lot of developers away from BSD.


> Linux was popular because it was fast on x86 hardware.

Because it booted fast? I don't understand that metric. We don't reboot servers. IMO Linux was a fad, but it faked it until it made it in 2008-2012. The death march of Sun had a lot to do with it. Linux, even today, never held a candle to the stability of any Sun system.

> With ports you could get into a situation where one program would not compile or when it needed another library that the one installed which broke things.

Comparing (I assume) apt to ports is Oranges to Apples. Ports is a source-based package management system. Apt is binary based. There are different goals here.

> Also BSD wars turned a lot of developers away from BSD.

What??! Did I miss the "BSD wars?" BSD won the UNIX wars prior to the Millennium. I'm not sure what you're saying. There'd be no GNU without BSD, and Linux would not be an operating system without GNU. Linux is more GNU than Linux.


I remember using Linux around 2005-2006 (SUSE Community edition) and it was stable enough, even though I had driver issues with certain things like Ethernet.


Stable enough for curious and intrepid home or laptop users. But in 2005 Linux was premature for the data center, just had a lot of momentum from fanatics. I used RedHat Linux servers in 2001 for production processing, but this was at a startup (which never quite made it, but also didn't quite die... purchased by another company. I believe still in existence, but with no compelling reason for existence today).


Yeah, that may be the case; as I said, it wouldn't even work with my Ethernet card without tweaking, and Wi-Fi was absolutely not happening.


This was a common experience. Amazing (to me) that today all of the most popular distros have accomplished this, collecting and including all the drivers for all (well, not all, most) of the myriad network interfaces used, so that tweaking for network access is now pretty much built into the install. The same is true for the many BSDs now. I had the same thing happen as you with Darwin (yay, I got it to install! But I can't do a thing with it).

What I recall about linux in the 00's was that Microsoft would break some functionality in Windows environments, and linux would fix it within a few weeks. I saw that as the reason for linux's existence, subverting Microsoft control. I think it may be still true, but LAMP became a pretty good reason for linux, too.


>By 2001 Linux had, for all intents and purposes, caught up with the commercial UNIXes.

And even before, FVWM+RXVT ran circles around Solaris and CDE with most of the software being ported over: GCC, MESA, TCL, TK, commercial SGI tools...

I know, I know, those tools and WM's already run everywhere, but most of the progress was on cheap X86 machines with limited RAM so many people even repurposed 486's with EvilWM, and lots of people were browsing the web with just Links/Lynx and the former 486.

https://www.6809.org.uk/evilwm/

https://paparisa.unpatti.ac.id/linux_journal_archive_1994_20...


No they didn't, CDE wasn't available for free, so the Linux community used what they could put their hands on.

Using FVWM meant being stuck with Xlib and Athena Widgets, while Tcl/Tk was anyway a Sun project, with Tk not really being usable for anything that required graphics performance.


Solaris AND CDE, ofc Solaris bundled CDE. And Linux used FVWM + RXVT because it was a much faster alternative for common Unix work (cli and seldom graphical apps).

Most people used X to run terminals and Netscape, and nothing else. And maybe Gnuplot and XV.

On TK, ofc, but calling C from TCL was granted.


I must have lived on an alternative universe with those HP-UX, Solaris and Aix applications.

As for the rest I guess it has been a Linux thing since the start, given the CLI focus even to this day.

Using fvwm meant having fun with Xlib + Athena Widgets, hardly usable.

As for Tk, anything beyond providing a GUI front-end to a CLI never went further.

DDD is probably the only surviving Tk GUI application that matters.


Wasn't DDD a Xaw3D application?

On Tk software, Tkgate can be really useful.

Also, you forgot TKinter for prototyping. Widely used.

Well, on "hardly usable"... Netscape and a lot of software was statically linked and built against Motif, so most people ran commercial software just fine in the mid-to-late 90's.

A good FVWM+RXVT setup was as good as CDE, if not better because the CPU and RAM usage was far lower as I stated.

I remember software like GV, mpg123, ImageMagick... far from unusable.


twm was also usable, depending on the point of view.

And being usable by whoever didn't want to put the money into real UNIXes didn't mean that Xlib + Athena Widgets were something that one wished to code for.

I surely didn't, and yes I used plenty of Linux systems, starting with Slackware 2.0.


Well, it was the 90's and even W95 was far more usable; Unixen were designed to have a custom DE with highly customized settings, even if CDE and Motif were the standard.

On TWM, FVWM was much faster and more lightweight. RXVT flew against XTerm and dtterm. On high CPU loads TWM could be seen redrawing windows, something atrocious (ctwm fixed that a lot), and, well, as an example, you could code the core of your software in C and glue it to TCL/TK with relative ease. You had bindings and it was fast enough, much more so than pure TCL/TK.

On usability, most Unix folks would run some XTerm and a biggie application such as one for biology, CAD or astronomy.

If something required more windows, FVWM was perfect to switch between virtual pages. Better than a taskbar.


Also one "follow-on" to V7 was plan9 (also from CSRC). So in many ways looking to Plan9 to see how "the forefathers of UNIX" would improve something can be instructive as well.


And Inferno as a follow-up to Plan 9, which keeps being forgotten.


There was V8, V9 and V10 in between, with V8 using BSD as a base, iirc, but creating streams and TLI/XTI instead of BSD Sockets (XTI was a superior API, tbqh)


TLI/XTI are from AT&T SVR3/SVR4, not Research Unix (V8, V9, V10). dmr's V8 streams were the basis for AT&T SVR3/SVR4 STREAMS (all caps!), but quite a bit simpler: https://en.wikipedia.org/wiki/STREAMS.


Very annoying and confusing typo there in first paragraph --- CSRG when the author meant CSRC. CSRC was the Bell Labs group name, CSRG was the UCB group name.


V7 user 1978-1981 here. Reason #1 in the article is surely the main one -- it's the common evolutionary ancestor for the mainstream computing platform we use today.

I think V7 also played a part in establishing the OSS environment we're used to today. It was the first major production OS for which the source was freely available, at least until lawyers put an end to that. I think this produced a generation of programmers who had enjoyed that experience and hence worked to bring it back.


Its popularity and persistence were partly because of its size. Each release after it got a lot bigger. A v6 system fits easily into 128 KB of RAM and there are stripped down ports that run in less. v7 is just a little bigger. Small and comprehensible. See: https://en.wikipedia.org/wiki/Lions%27_Commentary_on_UNIX_6t...

V7 also persisted well into the 80s because it was a lot easier to port. And again the size meant it could fit on small machines. In 1983, the year the IBM PC XT with a hard drive came out, commercial AT&T UNIX was up to System V which was enormous. It wouldn't have fit in the RAM or disk of the XT.


I like the simplicity of early Unix systems.

Unix was originally simpler than Multics, but modern Linux is much larger (even ignoring libraries, device drivers, and networking) and much more complex – while remaining comparatively deficient in several important ways (kernel and utilities are written in an unsafe language, containers are less capable and less secure than rings, mmap is clunkier than unified storage architecture, etc.)


If you think mmap is clunky, you really don't want to see how Windows does it.


I think many people (myself included) think of V7 plus virtual memory plus fast file system plus job control as the ultimate pure version of Unix. Add Berkeley sockets from 4.2 BSD and stop there. Most of the subsequent work including almost all Linux command line implementations have been perverting the Unix philosophy: simplicity and minimalism and greatest reusability first.

Shared object libraries are a bloatware feature from hell. "Hey everybody, let's introduce shared libraries into System V so we can bloat them and introduce new bugs due to missing or incompatible libraries - how does that sound? Because we must squeeze the last drop out of every byte of RAM, that's Sooooooo important!"


You probably want a virtual filesystem as well.

You may want kernel modules.

For anything that is not a completely static server you may want some kind of devfs.

Then there is the issue of graphics output.

Virtual machines require a bit of kernel support.

What about container support?

Personally I don't like shared libraries, but for people who do, you need shared memory support.

Multi-threaded programs? Support for multiple CPUs?


That may be the case but outside of my MacBook I much prefer Linux/GNU to “real” UNIX. It is improved upon in a lot of (subtle) ways and is a lot more feature rich. That’s an opinion though as people who prefer BSD/UNIX tend to think GNU is bloated and inconsistent.


> All other Unixes (BSD, System V, and Linux) are in some sense contaminated by outsiders who did not fully get the Unix philosophy. Both BSD Unix and System III/V added and changed things which people find objectionable and non-Unixy, so ignoring these as 'not in V7 and in the original intentions of Unix' is sometimes seen as convenient [...]

Whoa, it would be very interesting to hear some (obviously opinionated) examples of this. And which modern BSD "gets" the original V7 thinking best? OpenBSD?


I haven't ever run V7 or looked at the code, but I am going to bet that run levels in SysV init are not in V7.

Perhaps some people disliked that innovation, much as the current detractors of systemd do today.


And you would not be mistaken: https://www.freebsd.org/cgi/man.cgi?query=init&apropos=0&sek...

According to Wikipedia, runlevels were indeed introduced by SysV UNIX.


I'm a bit confused about the 3rd paragraph. As far as I know, you had to have a v6 license for BSD, not a v7. I think BSD independently implemented v7 functionality.

There was also the license change going from v6 to v7. With v7, universities were no longer allowed to show v7 source code to students. Many universities were teaching operating system concepts using v6 source code (and the Lions book).

This inspired Andy Tanenbaum to create MINIX as a v7-like operating system written from scratch.


MINIX is the case I use to illustrate the difference between source available, open source and free software. The source of MINIX was available, but its license was very restrictive. Considering it was reasonably functional before linux, the world could be a very different place had a better license been chosen.

It eventually became open source and 3.0 was even serious, but it was too little too late. Besides serving Intel's ME, there is very little use for it today.


The way Andy Tanenbaum tells the story is that he wanted to make sure that every student who got a copy of the operating systems book would also get a copy of the source and binaries of MINIX. For this reason he insisted that the publisher, Prentice Hall, deliver floppies with the book.

Prentice Hall wasn't too keen on doing that but gave in. However, as it normally works with publishers, that also meant transferring the copyrights to them.

There is another part and that is that Andy Tanenbaum didn't consider MINIX a serious operating system. It was designed as a teaching aid. Until MINIX-3 he actively resisted adding complications, like a virtual memory system. So even with an open source license, it probably would have taken a fork and rebranding to turn it into a more serious operating system.

Finally, turning a micro-kernel system like MINIX into a modern unix system is far from trivial. So it is not clear how well an improved version of MINIX would have competed with *BSD.


Linux was mostly a toy at first. I see no reason MINIX couldn't be as successful.


The Minix 2 design pretty much ensured it couldn't be anything more - a lot of early Linux code was about replacing Minix code with something that made it less of a joke, and the experience was fuel for the Tanenbaum-Linus flamewar (for example, Linus had reentrant, "multithreaded" I/O whereas Minix couldn't, due to how it handled I/O).

It was only with Minix 3 that care was taken to include possibly more complex but important techniques instead of forsaking them for simplicity.


Not really. Linus thought he was building a toy kernel, but as soon as it was released it became a real OS as people got it working on their systems, got all the GNU programs running, etc. The X Window System was ported in less than a year, so it was a full-fledged OS almost immediately.

As for MINIX, at the time Linux was released microkernels weren't considered practical outside of research OSs, and wouldn't be until the mid-to-late 90s; even today Mac OS is the only widely used microkernel OS. This was a big topic of Linus's debate with Tanenbaum. I don't recall the exact reasons, but iirc microkernels were/are slower than a monolith, and at a time when computing power was much lower they didn't make sense for workstations.


Linux in the late 90's was a beast. FVWM+RXVT used far less memory and CPU than the typical Sun workstation, at a fraction of the price.


The Lions book eventually became a kind of treasure, because AT&T forbade its use after the license changes; only years later was the situation rectified.

> When AT&T announced UNIX Version 7 at USENIX in June 1979, the academic/research license no longer automatically permitted classroom use.

> However, thousands of computer science students around the world spread photocopies. As they were not being taught it in class, they would sometimes meet after hours to discuss the book. Many pioneers of UNIX and open source had a treasured multiple-generation photocopy.[7]

-- https://en.wikipedia.org/wiki/Lions%27_Commentary_on_UNIX_6t...


What is the non-pure part of later Unix, and why is this spirit great? Not mentioned in the article.



