I view Racket as an academic language used as a vehicle for education and for research. I think Racket does fine in its niche, but it has a lot of compelling competitors, especially for researchers and professional software engineers. Those who want a smaller Scheme can choose from plenty of implementations, and those who want a larger language can choose Common Lisp. For those who don't mind syntax other than S-expressions, there are Haskell and OCaml. Those who want access to the Java or .NET ecosystems could use Scala, Clojure, or F#.
There's nothing wrong with academic/research languages like Racket, Oberon, and Standard ML.
I wish Standard ML had a strong ecosystem and things like a good dependency/package manager. I really liked it. But there is even less of an ecosystem around it than around some other niche languages, and I've gone down the rabbit hole of writing everything myself often enough to know that at some point I will either hit the limit of my energy and burn out, or hit the limits of my mathematical understanding while implementing something. For example: how to generate a normal distribution when the standard library only gives you a uniform one. There are many approaches to approximating it, but to really understand them, you need to understand a lot of math.
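For what it's worth, the usual trick is the Box-Muller transform, which turns two uniform samples into one normally distributed sample. A minimal sketch (in OCaml rather than SML, since the two are close cousins; the function here is purely illustrative):

    (* Box-Muller: two independent uniform samples -> one N(mu, sigma) sample.
       Hypothetical helper; uses only OCaml's stdlib Random and float math. *)
    let normal ~mu ~sigma =
      let u1 = 1.0 -. Random.float 1.0 in          (* in (0, 1], avoids log 0 *)
      let u2 = Random.float 1.0 in
      let z = sqrt (-2.0 *. log u1) *. cos (2.0 *. Float.pi *. u2) in
      mu +. (sigma *. z)

    let () =
      Random.self_init ();
      Printf.printf "%f\n" (normal ~mu:0.0 ~sigma:1.0)

Understanding why it works takes some math; using it doesn't.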
Anyway, I like the language. Felt great writing a few Advent of Code puzzles in SMLNJ.
I admit I'm one of those students who never used Racket in a non-academic setting (but mostly because I needed to contribute to already-existing projects written in different languages), and I was taught Racket by one of its main contributors, John Clements, at Cal Poly San Luis Obispo. However, learning Racket planted a seed in me that would later grow into a love of programming languages beyond industry-standard imperative ones.
I took a two-quarter series of classes from John Clements: the first was a course on programming language interpreters, and the second was a compilers course. The first course was taught entirely in Racket (back in the DrScheme days). As a guy who loved C and wanted to be the next Dennis Ritchie, I remember hating Racket at first, with all of its parentheses, the feeling of being restricted by immutability, and the need to express repetition using recursion. However, we gradually worked our way toward building a meta-circular evaluator for Scheme. The second course was language-agnostic. Our first assignment was to write an interpreter for a subset of Scheme, and we were allowed to use any language. I was tired of Racket and wanted to code in a much more familiar language: C++. Surely this was a sigh of relief, right?
It turned out that C++ was a terrible choice for the job. I ended up writing a complex inheritance hierarchy of expression types, which could have easily been implemented using Racket's pattern matching capabilities. Additionally, C++ requires manual memory management, and this was before the C++11 standard with its introduction of smart pointers. Finally, I learned how functional programming paradigms make testing so much easier, compared to using object-oriented unit testing frameworks and dealing with mutable objects. I managed to get the project done and working in C++, but only after a grueling 40 hours.
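For a sense of the contrast, here is a toy sketch (not the actual assignment) of an expression type and evaluator written with algebraic data types and pattern matching. It's in OCaml rather than Racket, but the point stands: no inheritance hierarchy, no visitors, no manual memory management.

    (* A toy expression type and evaluator; each case is one pattern,
       and the compiler warns if a case is missing. *)
    type expr =
      | Num of float
      | Add of expr * expr
      | If  of expr * expr * expr      (* 0.0 counts as "false" in this toy *)

    let rec eval = function
      | Num n        -> n
      | Add (a, b)   -> eval a +. eval b
      | If (c, t, e) -> if eval c <> 0.0 then eval t else eval e

    let () =
      (* (if 1 (+ 2 3) 0) evaluates to 5 *)
      Printf.printf "%g\n" (eval (If (Num 1.0, Add (Num 2.0, Num 3.0), Num 0.0)))

In C++ (especially pre-C++11), each of those three cases becomes a class, a virtual eval method, and a memory-management question.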
I never complained about Racket after that.
In graduate school, I was taught Scala and Haskell by Cormac Flanagan, who also contributed to Racket. Sometime after graduate school, I got bitten hard by the Smalltalk and Lisp bugs. Now I do a little bit of research on programming languages when I'm not busy teaching classes as a community college professor. I find Futamura projections quite fascinating.
I'm glad I was taught programming languages by John Clements and Cormac Flanagan. They planted seeds that later bloomed into a love for programming languages.
C++ is one of my favourite languages, and I got into a few cool jobs because of my C++ knowledge.
However, given the option, I would mostly reach for managed compiled languages as my first choice, and only reach for something like C++ if really, really required; even then, it would probably be as a native library that gets consumed rather than 100% pure C++.
that's an often repeated misconception about lisps.
lisps are pretty good at low-level programming, but then you'll need to make some compromises like abandoning the reliance on the GC and managing memory manually (which is still a lot easier than in other languages due to the metaprogramming capabilities).
there are lisps that can compile themselves to machine code in 2,000-4,000 LoC altogether (i.e. compiler and assembler included; https://github.com/attila-lendvai/maru).
i'm not saying that there are lisp-based solutions that are ready for use in the industry. what i'm saying is that the lisp language is not at all an obstacle for memory-limited and/or real-time programs. it's just that few people use them, especially in those fields.
and there are interesting experiments for direct compilation, too:
BIT: A Very Compact #Scheme System for #Microcontrollers (#lisp #embedded)
http://www.iro.umontreal.ca/~feeley/papers/DubeFeeleyHOSC05....
"We demonstrate that with this system it is clearly possible to run realistic Scheme programs on a microcontroller with as little as 3 to 4 KB of RAM. Programs that access the whole Scheme library require only 13 KB of ROM."
"Many of the techniques [...] are part of the Scheme and Lisp implementation folklore. [...] We cite relevant previous work for the less well known implementation techniques."
People always point this out as a failure, when it is the contrary.
A programming language being managed doesn't mean we need to close the door to any other kind of resource management.
Unless it is something hard real-time (and there are options there as well), we get to enjoy the productivity of high-level programming while at the same time having the tools at our disposal to do low-level systems stuff, without having to mix languages.
I’m an American living in the San Francisco Bay Area who travels to Japan twice per year. McDonald’s in Japan is better than McDonald’s in America: it is not only cleaner, with better customer service, but also cheaper.
McDonald’s in America wasn’t always expensive; I was in high school and college in the 2000s, when the dollar menu had double cheeseburgers, chicken sandwiches, and small orders of fries. The regular menu didn’t break the bank, either. Prices started shooting upward in the 2010s: first the Double Cheeseburger on the dollar menu got replaced with the McDouble (one slice of cheese instead of two), then it exited the dollar menu and became 2 for $3, then 2 for $4. But after COVID, prices exploded. I remember the first time I saw a fast food combo meal selling for more than $10, about five years ago, and it was the most expensive meal on the menu. Nowadays in my area $10-$12 combo meals are the norm. It’s sad and maddening; my salary hasn’t risen at that rate!
Meanwhile in Japan, I could get a Big Mac meal for around ¥800. Even when the yen was strong, $8 beats $11. At today’s exchange rate (about $5.09), it’s less than half the cost, and with better customer service at that!
I make six figures but I feel like fast food prices in California are a ripoff ($10+ for a crappy meal? No thanks!), and so I quit eating out except when traveling or for entertainment, such as hanging out with friends.
Indeed—I meant specifically the NeXT branch of the family tree because of this exhaustingly long list.
I would very much like to see that quad-fat OS4.2 CD; most NeXT releases around that era drop PA-RISC and are only tri-fat. I only have a 3.3 RISC (HPPA+SPARC) ISO for HPPA coverage.
The big ones you're missing are the Intel i860 (used as a graphics accelerator on NeXTdimension video processing boards—also the original target platform for the Win NT kernel) and the Motorola 88k family, which was briefly explored for the "NeXT RISC machine" in the mid-90s; only one prototype is known to exist. There were non-NeXT ports of Mach to m88k, which may have influenced the decision.
Of course, if we add in the other branches of the Mach family the number of ports gets absurd! It originated on the VAX; OSF/1 adds MIPS and AXP to the list... ultimately RISC-V and Itanium are the only significant ≥32-bit CPUs of the last forty years to not see some kind of Mach port.
But—the ultimate point is that the lion's share of actual work porting the kernel to new hardware is thanks to NeXT and/or NeXT cosplaying as Apple.
I think the hard part about the Linux desktop ecosystem and its development pattern is the cobbled-together nature of the system, where different teams and individuals work on different subsystems with no higher leadership directing how all of these parts should be assembled into a cohesive whole. We have a situation where GUI applications depend on X.org, yet the X.org developers no longer want to work on it. If the desktop Linux ecosystem were more like FreeBSD, in the sense that FreeBSD has control over both the kernel and its bundled userland, there would have been a clearer transition away from X.org, since X.org would have been owned by the overall Linux project. However, that's not how development in the Linux ecosystem works, and what we ended up with is a very messy, dragged-out transition from X to Wayland, complete with competing compositors.
Bazaar-style development seems to work for command-line tools, but I don't think it works well for a coherent desktop experience. We've had so much fragmentation, from KDE/Qt vs GNOME/GTK, to now X11 vs Wayland. Even X11 itself didn't come from the bazaar, but rather from MIT, DEC, and IBM (https://en.wikipedia.org/wiki/X_Window_System).
My dream is to work on an operating system that at least gets us to the 1990s and 2000s when it comes to research ideas.
I have a soft spot for the Smalltalk-80 environment and Lisp machines. They had a single address space. In my opinion, the two most interesting things about these environments are (1) their support for component-based software built from live, dynamic objects, and (2) the malleability of the system, where every aspect can be modified by the user in real time.
Of course, a critical downside of Smalltalk-80 and Lisp machine environments is the lack of security; any piece of code can modify the system. There are two solutions to this that I'm thinking about: (1) capability-based security for objects in the system, and (2) work on single-address space operating systems that still have memory protection (Opal was a research system that had this design; see Sharing and Protection in a Single-Address-Space Operating System [Chase et al. 1994]).
One of the nice things about Lisp is its metaprogramming facilities, from macros to the metaobject protocol. Metaprogramming makes it feasible to implement domain-specific languages that let problems be expressed in terms closer to their domains.
During the late 2000s and early 2010s, Alan Kay's Viewpoints Research Institute had a project named STEPS that investigated the pervasive use of DSLs to implement an entire desktop environment. They did not use Lisp as a substrate, but they did use OMeta (https://tinlizzie.org/ometa/) for handling parsing expression grammars (PEGs), which are used to describe many of the systems in STEPS. Two DSLs that immediately come to mind are one for describing the 2D graphics system and another for describing TCP.
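As a rough illustration of the embedded-DSL idea (this is not OMeta or anything from STEPS, just a hypothetical sketch in OCaml), PEG-style parser combinators let grammar rules read almost like the grammar they describe:

    (* A parser consumes input starting at position i and either fails or
       returns a value plus the next position. *)
    type 'a parser = string -> int -> ('a * int) option

    let char_p c : char parser = fun s i ->
      if i < String.length s && s.[i] = c then Some (c, i + 1) else None

    (* Sequencing: run p, then q on the rest of the input. *)
    let ( >>> ) (p : 'a parser) (q : 'b parser) : ('a * 'b) parser = fun s i ->
      match p s i with
      | None -> None
      | Some (a, j) ->
        (match q s j with None -> None | Some (b, k) -> Some ((a, b), k))

    (* Ordered choice, as in PEGs: try p, fall back to q. *)
    let ( <|> ) (p : 'a parser) (q : 'a parser) : 'a parser = fun s i ->
      match p s i with Some _ as r -> r | None -> q s i

    (* The rule "'a' followed by ('b' / 'c')", written in the DSL itself. *)
    let rule = char_p 'a' >>> (char_p 'b' <|> char_p 'c')

    let () =
      match rule "ac" 0 with
      | Some (_, n) -> Printf.printf "matched %d chars\n" n
      | None -> print_endline "no match"

Lisp macros or OMeta go further by giving the DSL its own syntax, but even this host-language embedding shows how the grammar becomes the program.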
So now I've described my dream substrate: a single-address-space operating system with capability-based security, where each subsystem is expressed as a live object, ideally coded in a DSL.
Now comes the interface. The programmer's interface would be similar to Smalltalk-80 and Lisp machines, with a live REPL for interactive coding. All objects can be accessed programmatically by sending messages to them. The end-user interface would be heavily based on the classic Mac OS, and applications would conform to human interface guidelines similar to System 7.5, but with some updates to reflect usage patterns and lessons in UI/UX that weren't known at the time. Application software would be similar to the OpenDoc vision, where components can be combined based on the user's wishes.
The end result sounds like a synthesis of various Apple projects from the late 1980s until 1996: component-based applications backed by a live object system with capability-based security.
This is my dream, and it's a side project I'd love to build.
My own high-level language, Varyx, has somewhat LISPy internals and is very dynamic — for example, you can annotate a variable with a type that's determined only at run time — and has an eval() that insulates the caller from the payload and vice versa. You can sequester mutable state within a closure, which can't be cracked open. Using an experimental Varyx build with some bindings for Apple's Core Graphics API, I wrote a script that rendered an arrow cursor (which I donated to the ravynOS project).
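For readers unfamiliar with the pattern, the "mutable state sequestered in a closure" idea looks roughly like this (a hypothetical sketch in OCaml, not Varyx): the state is reachable only through the functions returned by the constructor.

    (* make_counter hides a mutable reference; callers only ever see the
       two functions, never the ref itself. *)
    let make_counter () =
      let count = ref 0 in
      let bump () = incr count; !count in
      let peek () = !count in
      (bump, peek)

    let () =
      let bump, peek = make_counter () in
      ignore (bump ());
      ignore (bump ());
      Printf.printf "count = %d\n" (peek ())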
I share the sentiment, which is why I ended up gravitating toward technologies somehow related to it, like Java, .NET, and the related languages in their ecosystems.
It's also why, despite not agreeing with how Google went about Java on Android, I appreciate their approach: this kind of platform apparently only gets adoption with that kind of hard push; otherwise it would be yet another tiny UNIX clone.
Ironically, it is probably the closest thing we have on the market to the Inferno/Limbo ideas in a mainstream OS.
"We love macOS, but we’re not a fan of the ever-closing hardware and ecosystem. So, we are creating ravynOS — an OS aimed to provide the finesse of macOS with the freedom of FreeBSD."
ravynOS seems to be designed for people who love macOS, particularly its interface, its UI guidelines, and its ecosystem of applications, but who do not like the direction Apple has taken under Tim Cook (soldered RAM, limited and inflexible hardware choices, notarization, iOS-influenced interface changes, increased pushiness in advertising Apple's subscription services, etc.) and who would be unhappy with either Windows or the Linux desktop.
Speaking for myself, I used to daily-drive Macs from 2006 through 2021, but I now daily-drive PCs running Windows due primarily to the lack of upgradable RAM in ARM Macs. I'm not a big fan of Windows, but I need some proprietary software packages such as Microsoft Office. This makes switching to desktop Linux difficult.
It would be awesome to use what is essentially a community-driven clone of macOS, where I could keep a Mac-like operating system without needing to worry about Apple's future directions.
On the Unix side of things, I believe the decision to base ravynOS on FreeBSD rather than on Linux may make migrating from macOS to ravynOS easier, since macOS is based on a hybrid Mach/BSD kernel and many of the command-line tools that ship with macOS come from the BSDs; this open-source Unix core is known as Darwin. It's not that a Mac clone can't be built on top of Linux, but FreeBSD is closer to Darwin than Linux is.
It's not "soldered". It's integrated with the SoC. The benefit is memory latency and bandwidth.
If you know Framework, their entire mission is to build upgradeable laptops, and they keep delivering. Now they also wanted to build an incredibly powerful, but small and quiet desktop. They went directly to AMD, asked their engineers to make the memory upgradeable. AMD worked really hard and said not possible, not unless you want all of these cores to sit idle.
The world has moved on. Just as you no longer have discrete cache chips or discrete FPUs, you can't do discrete memory anymore - unless you don't need that level of performance, in which case CAMM is still an excellent choice.
But that's not what Apple does. The M1 redefined the low end. It will remain a great choice in 5 years, even once macOS support for it ends - Asahi remains very decent.
no, they are talking about high performance desktops, mostly. They link to the Framework desktop, which has 256 GB/s memory bandwidth. For comparison, the Apple Mac Pro has 800 GB/s memory bandwidth. Neither manufacturer is able to achieve these speeds using socketed memory.
> no, they are talking about high performance desktops
then i don't really get the "world has moved on" claim. in my bubble socketed RAM is still the way to go, be it for gaming or graphics work. of course Apple users will use a Mac Pro, but saying that the world has moved on when it's about high-performance, deluxe edge cases is a bit hyperbolic.
but maybe my POV is very outdated or whatever, not sure.
I think, but am not totally positive, that this is primarily a concern for local LLM hardware. There are probably other niches, but I don't think it's something most people need or would noticeably benefit from.
So somehow running macOS in 2025 on hot, loud, horrible-battery-life x86-based computers is a good thing?
Not to mention x86 Mac apps are not long for this world. I can’t think of a single application I would miss moving from Macs to Windows. It’s more about the hardware and the integration with the rest of my Apple devices.
Notes and Reminders are extremely good at what they do, and the synchronization with their iOS equivalents is flawless from what I can tell… and fat chance you get to uproot such a thing to a non-Apple OS.
Third party apps other than for media editing seem to be rare, I think Apple has gobbled or rug pulled much of its independent software vendor ecosystem.
Come to think of it, it just dawned on me that most of the proprietary Mac programs I’ve used on Mac OS X/macOS (as opposed to the classic Mac OS) are either from Apple (Preview.app, Dictionary.app, iPhoto/Photos, iTunes/Apple Music, Keynote, iMovie, GarageBand), Microsoft (Office, Teams), or are Electron apps like Zoom and Slack. The only non-Microsoft, non-Electron third-party proprietary applications I’ve used on my Macs in the past 19 years are from the Omni Group, particularly OmniOutliner (which came bundled with my 2006 MacBook) and OmniGraffle.
It seems that what I miss the most about using a Mac whenever I’m on Windows or Linux is Apple’s bundled apps, not necessarily third-party Mac apps since I never used them much to begin with. Makes me think harder.
Apple Mail is also, in my eyes, the only generic mail client out there that really “gets it”.
Thunderbird has always felt clunky in comparison and the recent redesign just made it a different kind of clunky. Everything else is either too minimal (Geary), tries to clone old style Outlook (Evolution), or is tied to/favors a particular provider (Gmail, Outlook, etc).
This. I use Linux as my primary OS (with KDE) and my main complaint, by far, is the email/calendar situation. Mail.app simultaneously just works and gets out of my way, and I haven't seen a Linux email client come close to replicating that.
Every few years I convince myself I'll create a better email client for Linux, and I always start the project enthusiastically and stop soon after, when I get just far enough to be reminded of how complicated email is. Maybe someday I'll take a sabbatical and actually do it...
That’s what I was implying when I said the integration.
As far as indie apps, BBEdit will survive the heat death of the universe and has made it through every Apple transition since at least System 7 in 1992.
Funny enough, I’ve only had one Apple computer during each era: an Apple //e (65C02), a Mac LC II (68K), a PowerMac 6100/60 (classic Mac PPC), a Mac Mini G4 (OS X PPC), a Core Duo Mac Mini (x86), and now an M2 MacBook Air.
I was never really that interested in x86 Macs, and I just bought cheapo Windows PCs that I really didn’t use that much outside of work, except for web browsing and, back in the day, iTunes.
This description really resonates with me, so I guess I’m a potential user.
I’ve been running macOS most of my life. In college I ran Linux on my laptops, but I switched back to macOS as the user experience was better - I could spend far less time messing with things and instead rely on system defaults and first party apps.
Year by year, though, I feel more like I don’t own my computer. I’ve tried switching back to Linux, but I always give up because, despite the freedom, it starts feeling like a chore. Even Asahi Linux on Apple hardware I couldn’t get into.
The ravynOS vision is something I could get behind. A fully packaged, macOS-like user experience, where the default settings are good and things work out of the box. I’d LOVE to have that as an option.
Linux compatibility or even macOS binary compatibility matters less to me than, say, an out-of-the-box Time Machine-like backup tool based on ZFS snapshots. So FreeBSD makes sense from that perspective.
I've been paying attention to this project periodically over the past few years. It would be nice to have a FOSS clone of macOS, similar to how FreeDOS, ReactOS, and Haiku are FOSS clones of MS-DOS, Windows, and BeOS, respectively.
The only thing is that this project has been quite slow going, which is similar to the histories of FreeDOS, ReactOS, and Haiku, where it took a long time for those projects to get to a usable state. It is a lot of work cloning an operating system, especially with an aim for binary compatibility. The Linux kernel benefited from the fact that there was an entire GNU ecosystem of tools that can run on Unix, and even in that case, the GNU ecosystem was seven years in the making in 1991 when the first version of the Linux kernel was released. It would've taken much longer for Linux to have been developed had GNU tools not existed.
Writing an entire operating system is long, hard work, even when provided the resources of companies like Microsoft, Apple, and Google. Hopefully projects like ravynOS and the similar HelloSystem (https://hellosystem.github.io/docs/) will lead to FOSS clones of macOS eventually, even if we need to wait another 5-10 years.
Sometimes it strikes me that something like this might be one of the better litmus tests for AI: if it’s really good enough to start 10x-ing engineers (let alone replacing them), more projects like this should begin to accelerate toward practical usability.
If not, maybe the productivity dividends are mostly shallow.
The problem is that many of these clean-room reimplementations require contributors to not have seen any of the proprietary source. You can't guarantee that with AI, because who knows which training data was used.
> You can't guarantee that with ai because who knows which training data was used
There are no guarantees in life, but with macOS you can know it is rather unlikely any AI was trained on (recent) Apple proprietary source code – because very little of it has been leaked to the general public – and if it hasn't leaked to the general public, the odds are low any mainstream AI would have been trained on it. Now, significant portions of macOS have been open-sourced – but presumably it is okay for you to use that under its open source license – and if not, you can just compare the AI-generated code to that open source code to evaluate similarity.
It is different for Windows, because there have been numerous public leaks of Windows source code, splattered all over GitHub and other places, and so odds are high a mainstream AI has ingested that code during training (even if only by accident).
But, even for Windows – there are tools you can use to compare two code bases for evidence of copying – so you can compare the AI-generated reimplementation of Windows to the leaked Windows source code, and reject it if it looks too similar. (Is it legal to use the leaked Windows source code in that way? Ask a lawyer–is someone violating your copyright if they use your code to do due diligence to ensure they're not violating your copyright? Could be "fair use" in jurisdictions which have such a concept–although again, ask a lawyer to be sure. And see A.V. ex rel. Vanderhye v. iParadigms, L.L.C., 562 F.3d 630 (4th Cir. 2009))
In fact, I'm pretty sure there are SaaS services you can subscribe to which will do this sort of thing for you, and hence they can run the legal risk of actually possessing leaked code for comparison purposes rather than you having to do it directly. But this is another expense which an open source project might not be able to sustain.
Even for Windows – the vast majority of the leaked Windows code is >20 years old now – so if you are implementing some brand new API, odds of accidentally reusing leaked Windows code is significantly reduced.
Other options: decompile the binary, and compare the decompiled source to the AI-generated source. Or compile the AI-generated source and compare it to the Windows binary (this works best if you can use the exact same compiler, version and options as Microsoft did, or as close to the same as is manageable.)
Are those OSes actually that strict about contributors? That’s got to be impossible to verify and I’ve only seen clean room stuff when a competitor is straight up copying another competitor and doesn’t want to get sued
ReactOS froze development to audit their code.[1] Circumstantial evidence was enough to call code not clean. Wine is strict as well. It is impossible to verify beyond all doubt, of course.
I’ve been thinking for a long time about using AI to do binary decompilation for this exact purpose. Needless to say, we’re a fundamental leap forward away from being able to do that.
This was my thought here as well. Getting one piece of software to match another piece of software is something that agentic AI tools are really good at. Like, the one area where they are truly better than humans.
I expect that with the right testing framework setup and accessible to Claude Code or Codex, you could iterate your way to full system compatibility in a mostly automated way.
If anyone on the team is interested in doing this, I’d love to speak to them.
In an actual business environment, you are right that it's not a 10x gain; it's more like 1.5-2x. Most of my job as an engineer is gathering and understanding requirements, testing, managing expectations, making sure everyone is on the same page, etc.; it seems only 10-20% is writing actual code. If I do get AI to write some code, I still need to do all of those other things.
I have used it for my solo startups much more effectively, with no humans to get in the way. I've used AI to fill roles, like designers, that I didn't have to hire for (nor did I have the funds to).
I can build mini AI agents with my engineering skills for simple non-engineering tasks that might otherwise need a human specialist.
Who's paying $30 to run an AI agent to run a single experiment that has a 20% chance of success?
On large code-bases like this, where a lot of context gets pulled in, agents start to cost a lot very quickly, and open source projects like this are usually quite short on money.
I have the unpopular opinion that, just as I witnessed in person the transition from Assembly to high-level languages, eventually many of the tasks that we manually write programs for will be done with programmable agents of some sort.
In an AI-driven OS, there will be less need for bare-bones "classical" programming, other than for the AI infrastructure itself.
Is this possible today? Not really, as the missteps from Google, Apple, and Microsoft are showing; however, eventually we might get there with a different programming paradigm.
Having LLMs generate code is a transition step, just like we run to Compiler Explorer to validate how good the compiler optimizer happens to be.
As someone who daily-drove Macs from 2006 through 2021 and who has since switched back to PCs, this does not surprise me. From 2017 through late 2021 I used a refurbished 2013 "trash can" Mac Pro as my daily driver; I have since moved to a Ryzen 9 3900 build. I actually purchased my Mac Pro from Apple not too long after Apple announced that it was still committed to pro users (https://web.archive.org/web/20170405022702/http://www.anandt...). It shipped with 12GB RAM, and during the COVID lockdowns of 2020 I upgraded it to 64GB RAM. My Mac Pro was a beast, and it's still a very capable machine despite the lack of support for current versions of macOS.
I want user-serviceability and expandability in my computers. I remember a time when Apple delivered this in spades; the Macintosh IIfx, the Quadra lineup, the beige Power Macintosh towers such as the 8600 and 9600, and the "new world" G3, G4, and G5 Power Macs were a testament to this. While I'm at it, let's also remember the NeXT Cube and the NeXTstation, which were wonderful workstations. Fast forward to 2013, and while the "trash can" Mac Pro lacks expansion slots, it does have user-serviceable RAM and storage, and some users have even upgraded its processor. This was a great machine, and it would've been cool had Apple kept updating it, though I know Apple ran into a wall with its dual-GPU approach.
Unfortunately, I'm disappointed with Apple's stewardship of the Mac Pro line. The 2019 Mac Pro is a very beautiful machine that supports unfathomable amounts of RAM (up to 768GB in the base models and up to 1.5TB in the highest-end models!), and I intend to buy one to add to my Mac collection when prices fall, but at the time of release the Mac Pro was prohibitively expensive: $5,999 compared to $2,999 for its predecessor. I was priced out of buying a Mac Pro. The ARM-based Mac Pro has soldered RAM, which means the entire Mac lineup now lacks RAM upgradability, and it is even more expensive at $6,999.
I still pay attention to the Mac; I have a work-issued M3 MacBook Pro and I love its performance and battery life. However, I don't think I'll be buying a Mac for personal use unless Apple changes its direction. I want user-serviceable hardware, and while I'm at it, I want macOS to be unabashedly a workstation OS, not the increasingly iOS-ified environment we have today.
I've been working through The Rust Programming Language (https://doc.rust-lang.org/book/title-page.html) in preparation for this year's Advent of Code. I've been meaning to learn Rust for some time, and I want to use my winter break (I'm a professor) to make some small Rust projects in preparation for a larger side project involving operating systems design that I want to do in Rust.