
Honestly, I think focusing on what is being censored is missing the point. Today it's adult content, which is an easy target. But what about tomorrow? What if they decide your political donation is 'problematic', or the indie news site you subscribe to is 'misinformation'? We're handing a kill switch for legal commerce to a handful of unelected execs.

> Today it's adult content, which is an easy target.

Not even that easy of a target, because the crazy people in America want to call anything that acknowledges the existence of LGBTQ people, or how they exist within greater society, "adult content" or "pornographic."




Banality of evil. Being affable to you in particular doesn't mean not evil, especially to others. And the eliminationism on LGBT status is pretty damn evil.

100% agreed, and I think many people are missing the forest for the trees on this issue.

This stuff always starts off with "think of the children" and then evolves into something else entirely.

How about when a game spews rhetoric about religion being bad (the Assassin's Creed franchise being one example)? Should card processors force Steam to remove those too, as a condition of continuing to use their payments infrastructure?


Correct.

Combine required/trendy KYC with reputation scores and similar attributes for purchase/sale/trade, with machine-agent automated decision flows, with pushing everything into digital, and with near mono/duo/tri-opolization of goods/services.

Then, a near invisible control grid is almost complete.

But that's the goal on the face of it.

Let's use it for good, all.


Isn't that already the case? Gofundme has definitely canceled donations for racist endeavors. That's kinda similar, isn't it?

The 'no more tools to learn' promise is powerful, but here's my hang-up: isn't this... just another tool? A layer between my notes and the web, with its own app and pricing. Feels like we're just trading one set of complexities for another. Is this true simplicity, or just a different kind of abstraction? Maybe I'm just being cynical lol.

I think by tools they mean more complex tools such as static site generators, rather than an application where you select a Notes folder and it does the rest.

Okay, the AI stuff is cool, but that "Containerization framework" mention is kinda huge, right? I mean, native Linux container support on Mac could be a game-changer for my whole workflow, maybe even making Docker less of a headache.


FWIW, here are the repos for the CLI tool [1] and backend [2]. Looks like it is indeed VM-based container support (as opposed to WSLv1-style syscall translation or whatever):

  Containerization provides APIs to:
  [...]
  - Create an optimized Linux kernel for fast boot times.
  - Spawn lightweight virtual machines.
  - Manage the runtime environment of virtual machines.
[1] https://github.com/apple/container

[2] https://github.com/apple/containerization


I'm kinda ignorant about the current state of Linux VMs, but my biggest gripe with VMs is that OS kernels kind of assume they have access to all the RAM the hardware has - unlike the reserve/commit scheme processes use for memory.

Is there a VM technology that can make Linux aware that it's running in a VM, and be able to hand back the memory it uses to the host OS?

Or maybe could Apple patch the kernel to do exactly this?

Running Docker in a VM has always been quite painful on the Mac due to the excess amount of memory it uses, and Macs not really having a lot of RAM.


It's still a problem for containers-in-VMs. You can in theory do something with either memory ballooning or (more modern) memory hotplugging, but the dance between the OS and the hypervisor takes a relatively long time to complete, and Linux just doesn't handle it well (e.g. it inevitably places unmovable pages into newly reserved memory, meaning it can never be unplugged). We never found a good way to let applications running inside the VM transparently allocate memory. You can overprovision memory, and hypervisors won't actually allocate it on the host, and that's the best you can do; but this also has problems, since Linux tends to allocate a bunch of fixed data structures proportional to the size of memory it thinks it has available.


That's called memory ballooning and is supported by KVM on Linux. Proxmox, for example, can do that. It does need support on both the host and the guest.


it's not as straightforward a solution as it sounds, though


> Is there a VM technology that can make Linux aware that it's running in a VM, and be able to hand back the memory it uses to the host OS?

Isn't this an issue of the hypervisor? The guest OS is just told it has X amount of memory available, whether this memory exists or not (hence why you can overallocate memory for VMs); whether the hypervisor allocates the entire amount or just what the guest OS is actually using should depend on the hypervisor itself.


> or just what the guest OS is actually using should depend on the hypervisor itself.

How can the hypervisor know which memory the guest OS is actually using? It might have used some memory in the past and now no longer needs it, but from the POV of the hypervisor it might as well be used.

This is a communication problem between the hypervisor and the guest OS, because the hypervisor manages the physical memory but only the guest OS knows how much memory is actually in use.


A generic VMM cannot, but these are special-purpose VMMs, so they can likely load dedicated kernel-mode drivers into the well-known guest to get that information back out.


The driver would still be part of the guest.


If you control both the VMM and the guest through a driver you have an essentially infinite latitude to set up communications between the two: virtual devices, iommu, interrupts, ...


Just looked it up: the answer is "balloon drivers", special drivers loaded by the guest OS that can request and return unused pages to the host hypervisor.

Apparently Docker for Mac and Windows uses these, but Docker containers tend to grow quite large in terms of memory, so I'm not quite sure how well it works in practice; it certainly overallocates compared to running Docker natively on a Linux host.


The short answer is yes, Linux can be informed to some extent, but often you still want a memory balloon driver so that the host can "allocate" memory out of the VM and reclaim it. It's not entirely trivial, but the tools exist, and it's usually not too bad on vz these days when properly configured.


It’s one reason I don’t like WSL2. When you compile something that needs 30 GB of RAM, the only thing you can do is terminate the WSL2 VM to get that RAM back.


Since late 2023, WSL2 has supported "autoMemoryReclaim", nominally still experimental, but works fine for me.

add:

  [experimental]
  autoMemoryReclaim=gradual

to your .wslconfig

See: https://learn.microsoft.com/en-us/windows/wsl/wsl-config


I just noticed the addition of the container cask when I ran “brew update”.

I chased the package’s source and indeed it’s pointing to this repo.

You can install and use it now on the latest macOS (not 26). I just ran “container run nginx” and it seemed to work alright. Haven’t looked deeper yet.


There’s some problem with networking: if you try to run multiple containers, they won’t see each other. Could probably be solved by running a local VPN or something.


WSLv1 never supported native Docker (AFAIK, perhaps I'm wrong?)

That said, I'd think Apple would actually be much better positioned to try the WSL1 approach. I'd assume Apple's OS is a lot closer to Linux than Windows is.


This doesn't look like WSL1. They're not running Linux syscalls to the macOS kernel, but running Linux in a VM, more like the WSL2[0] approach.

[0] https://devblogs.microsoft.com/commandline/announcing-wsl-2/...


In the end they'd probably run into the same issues that killed WSL1 for Microsoft: the Linux kernel has an enormous surface area, and lots of pretty subtle behaviour, particularly around the stuff that is most critical for containers, like cgroups and user namespaces. There isn't an externally usable test suite that could be used to validate a reimplementation of all these interfaces, because... well, why would there be?

Maintaining a working duplicate of the kernel-userspace interface is a monumental and thankless task, and especially hard to justify when the work has already been done many times over to implement the hardware-kernel interface, and there's literally Hyper-V already built into the OS.


Yeah, it probably would be feasible to dust off the FreeBSD Linux compatibility layer[1] and turn that into native support for Linux apps on Mac.

I think Apple’s main hesitation would be that the Linux userland is all GPL.

[1]: https://docs.freebsd.org/en/books/handbook/linuxemu/


If they built it as a kernel extension it would probably be okay with the GPL.

There’s a huge opportunity for Apple to make kernel development for xnu way better.

Tooling right now is a disaster — very difficult to build a kernel and test it (eg in UTM, etc.).

If they made this better and took more of an OSS openness posture like Microsoft, a lot of incredible things could be built for macOS.

I’ll bet a lot of folks would even port massive parts of the kernel to rust for them for free.


My impression is they’re basically trying to end third party kernel development; macOS has been making it progressively more difficult to use kexts and has been providing alternate toolkits for doing things that used to require drivers.


It's impossible to have "native" support for Linux containers on macOS, since the technology inherently relies on Linux kernel features. So I'm guessing this is Apple rolling out their own Linux virtualization layer (same as WSL). Probably still an improvement over the current mess, but if they just support LXC and not Docker then most devs will still need to install Docker Desktop like they do today.


Apple has had a native hypervisor for some time now. This is probably a baked in clone of something like https://mac.getutm.app/ which provides the stuff on top of the hypervisor.


In case you're wondering, the Hypervisor.framework C API is really neat and straightforward:

1. Creating and configuring a virtual machine:

    hv_vm_create(HV_VM_DEFAULT);
2. Allocating guest memory:

    void* memory = mmap(...);
    hv_vm_map(memory, guest_physical_address, size, HV_MEMORY_READ | HV_MEMORY_WRITE | HV_MEMORY_EXEC);
3. Creating virtual CPUs:

    hv_vcpu_create(&vcpu, HV_VCPU_DEFAULT);
4. Setting registers:

    hv_vcpu_write_register(vcpu, HV_X86_RIP, 0x1000); // Set instruction pointer
    hv_vcpu_write_register(vcpu, HV_X86_RSP, 0x8000); // Stack pointer
5. Running guest code:

    hv_vcpu_run(vcpu);
6. Handling VM exits:

  uint64_t exit_reason = 0;
    hv_vmx_vcpu_read_vmcs(vcpu, VMCS_RO_EXIT_REASON, &exit_reason);


Thanks for this! Apple Silicon?


One of the reasons OrbStack is so great is because they implement their own hypervisor: https://orbstack.dev/

Apple’s stack gives you low-level access to ARM virtualization, and from there Apple has high-level convenience frameworks on top. OrbStack implements all of the high-level code themselves.


How does it compare to Apple’s hypervisor?


Better filesystem support (https://orbstack.dev/blog/fast-filesystem) and memory utilization (https://orbstack.dev/blog/dynamic-memory)


Using a hypervisor means just running a Linux VM, like WSL2 does on Windows. There is nothing native about it.

Native Linux (and Docker) support would be something like WSL1, where Windows kernel implemented Linux syscalls.


Hyper-V is a type 1 hypervisor, so Linux and Windows are both running as virtual machines but they have direct access to hardware resources.

It's possible that Apple has implemented a similar hypervisor here.


Surely if the Windows kernel can be taught to respond to those syscalls, XNU can be taught even more easily. But, AIUI, the Windows kernel already had a concept of "personalities" from back when they were trying to integrate OS/2, so for XNU the zero-to-one step, not the syscalls specifically, could be the huge lift.


WSL1 didn't use the existing support for personalities in NT


XNU similarly has a concept of "flavors" and uses FreeBSD code to provide the BSD flavor. Theoretically, either Linux code or a compatibility layer could be implemented in the kernel in a similar way. The former won't happen due to licensing.


> the Windows kernel already had a concept of "personalities" from back when they were trying to integrate OS/2 so that zero-to-one for XNU could be a huge lift, not the syscalls part specifically

XNU is modular, with its BSD servers on top of Mach. I don’t see this as being a strong advantage of NT.


Exactly. So it wouldn't necessarily be easier. NT is almost a microkernel.


Yep. People consistently underestimate what a great piece of technology NT is; it really was ahead of its time. And it's a shame what Microsoft is doing with it now.


Was it ahead? I am not sure. There was lots of research on microkernels at the time, and NT was a good compromise between a monolithic and a microkernel. It was an engineering product of its age, and a considerably good one. It is still the best popular kernel today; not because it is the best possible with today's resources, but because nobody else cares about core OS design anymore.

I think it is the Unix side that decided to bury its head in the sand. We got Linux. It is free (of charge and of licensing). It supported files, basic drivers, and sockets. It got commercial support for servers. It was all Silicon Valley needed for startups. Anything else is a cost, so nobody cared. Most of the open source microkernel research slowly died after Linux; there is still some with the L4 family.

Now we are overengineering our stacks to get closer to the microkernel capabilities Linux lacks, using containers. I don't want to say it is ripe for disruption, because it is hard and, again, nobody cares (except some network and security equipment, but that's a tiny fraction).


> Was it ahead? I am not sure.

You say this, but then proceed to state that it had a very good design back then informed by research, and still is today. Doesn't that qualify? :-)

NT brought a HAL, proper multi-user ACLs, subsystems in user mode (that alone is amazing, even though they sadly never really gained momentum), preemptive multitasking. And then there's NTFS, with journaling, alternate streams, and shadow copies, and heaps more. A lot of it was very much ahead of UNIX at the time.

> nobody else cares about core OS design anymore.

Agree with you on that one.


> You say this, but then proceed to state that it had a very good design back then informed by research, and still is today. Doesn't that qualify? :-)

I meant that NT was a product that matched the state-of-the-art OS design of its time (the 90s). It was the Unix world that decided to stay behind in the 80s forever.

NT was ahead not because it broke ground and brought the new design aspects of the 2020s to wider audiences, but because the Unix world constantly decides to be hardcore conservative and backwards in OS design. They just accept that a PDP-11 simulator is all you need.

It is similar to how NASA got stuck with the 70s/80s design of the Shuttle. There was research on newer launch systems, but nobody made good engineering applications of it.


Unix 'died' with plan9/9front, which is far more advanced than Unix v7 for a PDP or a DEC, can't remember.

9front is to Unix what NT is to VMS.


It is as native as any Linux cloud instance.


> The Containerization framework enables developers to create, download, or run Linux container images directly on Mac. It's built on an open-source framework optimized for Apple Silicon and provides secure isolation between container images

That's their phrasing, which suggests to me that it's just a virtualization system. Linux container images generally contain the kernel.


> Linux container images generally contain the kernel.

No, containers differ from VMs precisely in requiring dependency on the host kernel.


Hmm, so they do. I assumed that because you pull in a Linux distro, the kernel from that distro is used too, but I guess not. Perhaps they have done some sort of improvement where they have one Linux kernel running via the hypervisor that all containers use. Still can't see them trying to emulate Linux calls, but who knows.


> I assumed that because you pull in a Linux distro, the kernel from that distro is used too,

That's how Docker works on WSL2: it runs on top of a virtualised Linux kernel. WSL2 is pretty tightly integrated with Windows itself, still a Linux VM though. It seems kinda weird for Apple to reinvent the wheel for that kind of thing for containers.


> That's how Docker works on WSL2: it runs on top of a virtualised Linux kernel. WSL2 is pretty tightly integrated with Windows itself, still a Linux VM though. It seems kinda weird for Apple to reinvent the wheel for that kind of thing for containers.

Can't edit my posts on mobile, but I realized that's, what's the word, not useful... But yeah: sharing the kernel between containers while otherwise keeping them isolated allegedly gives VM-esque security without the overhead of a separate VM for each image. There's a lot more to it, but you get the idea.


They usually do contain a kernel, because package managers are too stupid to realise it’s a container, so they install one anyway.


The screenshot in TFA pretty clearly shows docker-like workflows pulling images, showing tags and digests and running what looks to be the official Docker library version of Postgres.


Every container system is "docker-like". Some (like Podman) even have a drop-in replacement for the Docker CLI. Ultimately there are always subtle differences which make swapping between Docker <> Podman <> LXC or whatever else impossible without introducing messy bugs in your workflow, so you need to pick one and stick to it.


If you've not tried it recently, I suggest giving the latest version of Podman another shot. I'm currently using it over Docker, and a lot of the compatibility problems are gone. They've put massive effort into compatibility, including docker compose support.



Yeah, from a quick glance the options are 1:1 mapped, so an

  alias docker='container'

should work, at least for basic and common operations.


What about macOS being derived from BSD? Isn’t that where containers came from: BSD jails?

I know the container ecosystem largely targets Linux; just curious what people’s thoughts are on that.


OS X pulls some components of FreeBSD into kernel space, but not all (and those are very old at this point). It also uses various BSD bits for userspace.

Good read from the horse's mouth:

https://developer.apple.com/library/archive/documentation/Da...


Thank you—I’ll give that a read. :)


"Container" is sort of synonymous with "OCI-compatible container" these days, and OCI itself is basically a retcon standard for Docker (runtime, images, etc.). So from that perspective every "container system" is necessarily "docker-like", and that means Linux namespaces and cgroups.


With a whole generation forgetting they came first in big-iron UNIX, like HP-UX.


Interesting. My experience w/ HP-UX was in the 90s, but this (Integrity Virtual Machines) was released in 2005. I might call out FreeBSD Jails (2000) or Solaris Zones (2005) as an earlier and a more significant case respectively. I appreciate the insight, though, never knew about HP-UX.

https://en.wikipedia.org/wiki/HP_Integrity_Virtual_Machines


HP-UX Vault, released with HP-UX 10.24, in 1996:

https://en.m.wikipedia.org/wiki/HP-UX

What you searched for is an evolution of it.


Does it really matter, tho?


Another reason it matters is they might have done it differently which could inspire future improvements. :)

I like to read bibliographies for that reason—to read books that inspired the author I’m reading at the time. Same goes for code and research papers!


Some people think it matters to properly learn history, instead of urban myths.


History is one thing, who-did-it-first is often just a way to make a point in faction debates. In the broader picture, it makes little difference IMHO.


Conceptually similar, but different implementations. Containers use cgroups in Linux, and there is file system and network virtualization as well. It's not impossible, but it would require quite a bit of work.


Another really good read about containers, jails and zones.

https://blog.jessfraz.com/post/containers-zones-jails-vms/


BSD jails are architected wholly differently from what something like Docker provides.

Jails are first-class citizens that are baked deep into the system.

A tool like Docker relies on multiple Linux features/tools to assemble/create isolation.

Additionally, iirc, the logic for FreeBSD jails never made it into the Darwin kernel.

Someone correct me please.


> BSD jails are architected wholly differently from what something like Docker provides.

> Jails are first-class citizens that are baked deep into the system.

Both very true statements and worth remembering when considering:

> Additionally, iirc, the logic for FreeBSD jails never made it into the Darwin kernel.

You are quite correct, as Darwin is based on XNU[0], which itself has roots in the Mach[1] microkernel. Since XNU[0] is an entirely different OS architecture from that of FreeBSD[3], jails[4] do not exist within it.

The XNU source can be found here[2].

0 - https://en.wikipedia.org/wiki/XNU

1 - https://en.wikipedia.org/wiki/Mach_(kernel)

2 - https://github.com/apple-oss-distributions/xnu

3 - https://cgit.freebsd.org/src/

4 - https://man.freebsd.org/cgi/man.cgi?query=jail&apropos=0&sek...


Thank you for the links I will take a closer look at XNU. It’s neat to see how these projects influence each other.


> Thank you for the links I will take a closer look at XNU.

Another great resource regarding XNU and OS-X (although a bit dated now) is the book:

  Mac OS X Internals
  A Systems Approach[0]
0 - https://openlibrary.org/books/OL27440934M/Mac_OS_X_Internals


This is great! Thank you!


> what something like Docker provides

Docker isn't providing any of the underlying functionality. BSD jails and Linux cgroups etc aren't fundamentally different things.


Jails were explicitly designed for security; cgroups are more generalized and more about resource control, and containers leverage namespaces, capabilities, and AppArmor/SELinux to accomplish what they do.

> Jails create a safe environment independent from the rest of the system. Processes created in this environment cannot access files or resources outside of it.[1]

While you can accomplish similar tasks, they are not equivalent.

Assume Linux containers are jails, and you will have security problems. And on the flip side, k8s pods share UTS, IPC, and network namespaces, yet have independent PID and FS namespaces.

Depending on your use case they may be roughly equivalent, but they are fundamentally different approaches.

[1] https://freebsdfoundation.org/freebsd-project/resources/intr...


WSL throughput is not enough for file-intensive operations. It is much easier and more straightforward to just delete Windows and use Linux.


Unless you need to have a working video or audio config as well.


Using the Linux filesystem has almost no performance penalty under WSL2 since it is a VM. Docker Desktop automatically mounts the correct filesystem. Crossing the OS boundary for Windows files has some overhead of course but that's not the usecase WSL2 is optimized for.

With WSL2 you get the best of both worlds. A system with perfect driver and application support and a Linux-native environment. Hybrid GPUs, webcams, lap sensors etc. all work without any configuration effort. You get good battery life. You can run Autodesk or Photoshop but at the same time you can run Linux apps with almost no performance loss.


FWIW I get better battery life with ubuntu.


Are you comparing against the default vendor image that's filled with adware or a clean Windows install with only drivers? There is a significant power use difference and the latter case has always been more power efficient for me compared to the Linux setup. Powering down Nvidia GPU has never fully worked with Linux for me.


How? What's your laptop brand and model? I've never had better battery life with any machine using ubuntu.


If they implemented the Linux syscall interface in their kernel they absolutely could.


Aren't the syscalls a constant moving target? Didn't even Microsoft fail at keeping up with them in WSL?


Linux is exceptional in that it has stable syscall numbers and guarantees their stability. This is largely why statically linked binaries (and containers) "just work" on Linux, while Windows and macOS inevitably break things with an OS update.

Microsoft frequently tweaks syscall numbers[1], and they make it clear that developers must access functions through e.g. NTDLL. macOS at least has public source files used to generate syscall.h, but they do break things; there was a recent incident where Go programs all broke after a major OS update. Now Go uses libSystem (and dynamic linking)[2].

[1] https://j00ru.vexillium.org/syscalls/nt/64/

[2] https://go.dev/doc/go1.11#runtime


arm64 macOS doesn't even allow statically linked binaries at all.

On the Windows side, the syscall ABI has been stable since Server 2022, to allow running mismatched container releases.


Not Linux syscalls, they are a stable interface as far as the Linux kernel is concerned.


They're not really a moving target (since some distros ship ancient kernels, most software handles the lack of newer syscalls gracefully), but the surface is still pretty big. A single ioctl() or write() syscall could do a billion different things, and a lot of software depends on small bits of this functionality, meaning you have to implement 99% of it to get everything working.


FreeBSD and NetBSD do this.


They didn't.


WSL2 doesn't have a syscall translation layer; WSL1 did, but that approach wasn't feasible, so WSL2 is basically running VMs with the Hyper-V hypervisor.

Apple looks like it's skipped the failed WSL1 design and gone straight for the more successful WSL2 approach.


I installed Orbstack without Docker Desktop.


WSL 1.0, given that WSL 2.0 is a regular Linux VM running on Hyper-V.


I wonder if User-Mode Linux could be ported to macOS...


It would probably be slower than just running a VM.


> Meet Containerization, an open source project written in Swift to create and run Linux containers on your Mac. Learn how Containerization approaches Linux containers securely and privately. Discover how the open-sourced Container CLI tool utilizes the Containerization package to provide simple, yet powerful functionality to build, run, and deploy Linux Containers on Mac.

https://developer.apple.com/videos/play/wwdc2025/346/


> Containerization executes each Linux container inside of its own lightweight virtual machine.

That’s an interesting difference from other Mac container systems. Also, more obviously, it can use Rosetta 2.


Podman Desktop, and probably other Linux-containers on macOS tools, can already create multiple VMs, each hosting a subset of the containers you run on your Mac.

What seems to be different here is that a VM per container is the default, if not the only, configuration. And that instead of mapping ports to containers (which was always a mistake in my opinion), it creates an externally routed interface per machine, similar to how it would work if you'd use macvlan as your network driver in Docker.

Both of those defaults should remove some sharp edges from the current Linux-containers on macOS workflows.


The ground keeps shrinking for Docker Inc.

They sell Docker Desktop for Mac, but that might start being less relevant, and licenses may start to drop.

On Linux there’s just the CLI, which they can’t afford to close up, since people would just move away.

Docker Hub likely can’t compete with the registries built into every other cloud provider.


There is already a paid alternative, Orbstack, for macOS which puts Docker for Mac to shame in terms of usability, features and performance. And then there are open alternatives like Colima.


Used OrbStack for some time; it made my dev team’s M1 machines run our Kubernetes pods in a much lighter fashion. Love it.


How does it compare to Podman, though?


Podman works absolutely beautifully for me, other platforms, I tripped over weird corner cases.


That is why they are now into the reinventing application servers with WebAssembly kind of vibe.


It’s really awful. There’s a certain size at which you can pivot and keep most of your dignity, but for Docker Inc., it’s just ridiculous.


They got Sherlocked.


It's cool but also not as revolutionary as you make it sound. You can already install Podman, Orbstack or Colima right? Not sure which open-source framework they are using, but to me it seems like an OS-level integration of one of these tools. That's definitely a big win and will make things easier for developers, but I'm not sure if it's a gamechanger.


All those tools use a Linux VM (whether managed by Qemu or VZ) to run the actual containers, though, which comes with significant overhead. Native support for running containers -- with no need for a VM -- would be huge.


there's still a VM involved to run a Linux container on a Mac. I wouldn't expect any big performance gains here.


Still needs a VM. It'll be running more VMs than something like OrbStack, which I believe runs just one for its Docker implementation. Whether that means better or worse performance, we'll find out.


Yes, it seems like it's actually a more refined implementation than what currently exists. Call me pleasantly surprised!


The framework that container uses is built in Swift and also open sourced today, along with the CLI tool itself: https://github.com/apple/containerization


It looks like nothing here is new: we have all the building blocks already. What Apple has done is package it all nicely, which is nothing to discount: there's a reason people buy managed services over raw metal for hosting their services, and having a batteries-included development environment is worth a premium over needing to assemble it on your own.


The containerization experience on macOS has historically been underwhelming in terms of performance. Using Docker or Podman on a Mac often feels sluggish and unnecessarily complex compared to native Linux environments. Recently, I experimented with Microsandbox, which was shared here a few weeks ago, and found its performance to be comparable to that of native containers on Linux. This leads me to hope that Apple will soon elevate the developer experience by integrating robust containerization support directly into macOS, eliminating the need for third-party downloads.


Docker at least runs a linux vm that runs all those containers. Which is a lot of needless overhead.

The equivalent of Electron for containers :)


Use Colima.


yeah -- I saw it's built on "open source foundations", do you know what project this is?


My guess is Podman. They released native hypervisor support on macOS last year. https://devclass.com/2024/03/26/podman-5-0-released-with-nat...


My guess is nerdctl and containerd.


The CLI sure looks a lot like Docker.


If I had to guess, colima? But there are a number of open source projects using Apple's virtualisation technologies to run a linux VM to host docker-type containers.

Once you have an engine podman might be the best choice to manage containers, or docker.


Being able to drop Docker Desktop would be great. We're using Podman on macOS now in a couple of places; it's pretty good, but it is another tool. Having the same tool across macOS and Linux would be nice.


Migrate to Orbstack now, and get a lot of sanity back immediately. It’s a drop-in replacement, much faster, and most importantly, gets out of your way.


There's also Rancher Desktop (https://rancherdesktop.io/). Supports moby and containerd; also optionally runs kubernetes.


I have to drop docker desktop at work and move to podman.

I'm the primary author of an amalgamation of GitHub's scripts-to-rule-them-all pattern with docker compose, so my colleagues can just type `script/setup` and `script/server` (and more!) and the underlying scripts handle the rest.

Apple including this natively is nice, but I won't be able to use this because my scripts have to work on Linux and probably WSL
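For anyone curious, the shape of those wrappers is roughly this (a minimal sketch, not my actual scripts; the service name `app` and the `./bin/install` step are assumptions):

```shell
#!/bin/sh
# scripts-to-rule-them-all style wrappers over docker compose.
# COMPOSE can be overridden, e.g. COMPOSE="podman compose" on machines
# that have dropped Docker Desktop.
set -eu
COMPOSE="${COMPOSE:-docker compose}"

setup() {
  $COMPOSE build --pull                 # rebuild images, pulling newer base layers
  $COMPOSE run --rm app ./bin/install   # run the (hypothetical) dependency install in a throwaway container
}

server() {
  $COMPOSE up    # bring the whole stack up in the foreground
}
```

`script/setup` and `script/server` then just call `setup` and `server`; because everything funnels through `$COMPOSE`, swapping Docker for Podman (or, in principle, Apple's new tool) is a one-variable change.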


Orbstack



vminitd is the most interesting part of this.


Colima is my guess, only thing that makes sense here if they are doing a qemu vm type of thing


That's my guess too... Colima, but probably doing a VM using the Virtualization framework. I'll be more curious if you can select x86 containers, or if you'll be limited to arm64/aarch64. Not that it really makes that much of a difference anymore, you can get pretty far with Linux Arm containers and VMs.


Should be easy enough, look for the one with upstream contributions from Apple.

Oh, wait.


They Sherlocked OrbStack.


Well, Orbstack isn't really anything special in terms of its features, it's the implementation that's so much better than all the other ways of spinning up VMs to run containers on macos. TBH, I'm not 100% sure 2025 Apple is capable anymore of delivering a more technically impressive product than orbstack ...


That's a good thing though right?


It would be better for the OrbStack guy if they bought it.


Apple sees some nice code under a pushover license and they just can’t help themselves.


Interestingly it looks like Apple has rewritten much of the Docker stack in Swift rather than using existing Go code.


I thought it's more like Colima than OrbStack

https://github.com/abiosoft/colima


Microsoft did it first to VirtualBox / VMware Workstation though.

That is what I had been using since 2010, until WSL came to be; it has been ages since I last dual booted.


Orbstack has been pretty bulletproof


Orbstack is not free for commercial use

https://orbstack.dev/pricing

Orbstack owners are going to be fuming at this news!


I’ve been using Colima for a long while with zero issues, and that leverages the older virtualization framework.


It's a VM just like WSL... So yeah.


WSL 2 involves a VM. WSL 1, which is still maintained and usable, doesn't.

https://learn.microsoft.com/en-us/windows/wsl/compare-versio...


Ok, I've squeezed containerization into the title above. It's unsatisfactory, since multiple announced-things are also being discussed in this thread, but "Apple's kitchen-sink announcement from WWDC this year" wouldn't be great either, and "Apple supercharges its tools and technologies for developers to foster creativity, innovation, and design" is right out.

https://hn.algolia.com/?dateRange=all&page=0&prefix=true&sor...


Title makes sense to me.

It seems like a big step in the right direction to me. It's hard to tell if it's 100% compatible with Docker or not, but the commands shown are identical (other than swapping docker for container).

Even if it's not 100% compatible, this is huge news.
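If the parity really does hold, migration could be as thin as a shell shim (hypothetical; I haven't verified how much of Docker's flag surface the new `container` CLI actually covers):

```shell
# Forward docker invocations verbatim to Apple's container CLI.
# This only works to the extent the subcommands and flags really are identical.
docker() {
  container "$@"
}

# Existing habits then keep working unchanged, e.g.:
#   docker run --rm alpine echo hi
# becomes, under the hood:
#   container run --rm alpine echo hi
```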


> Apple Announces Foundation Models and Containerization frameworks, etc.

This sounds like apple announced 2 things, AI models and container related stuff I'd change it to something like:

> Apple Announces Foundation Models, Containerization frameworks, more tools


The article says that what was announced is "foundation model frameworks", hence the awkward twist in the title, to get two frameworkses in there.


Small nitpick but "Announces" being capitalized looks a bit weird to me.



Then you would expect "frameworks" to be capitalized as well.



They labeled it a nitpick. Seems fair.


Me too - I thought I'd fixed that! Fixed now, thanks.


We treat our apps like they're finished paintings to be hung on a wall, but Google and Apple treat them like tamagotchis we have to keep alive.


I feel like the real story isn't 'Can We Save Commodore?' but 'What IS Commodore anymore?'. If it's just a trademark disconnected from its original tech, you're not reviving a legend, you're just starting a new company with a famous name.


Commodore as a company died long ago, as can be seen in Deathbed Vigil (a video recorded by one of the employees on the last day).

For a long time the brand has been misused and nearly disappeared. What's left is about 40+ trademarks owned by a holding company. It could have been worse if these were scattered among a lot of different entities. So this is still kind of a big deal, as they can acquire all of them.


Famous with old people.

The new generation has no idea


There are people now that know what a Commodore 64 is but have never heard of an Amiga. The C64 made a cultural impression that lasts today. I'm not sure why, but amongst those born after both machines had had their day, quite a few have an idea what the C64 was.

Talking to the younger generation about the 8 bit era is wild. I mentioned to someone that my first system (TRS-80) had 4k, and they expressed surprise that you could get a monitor that good back then.


> There are people now that know what a Commodire 64 is but have never heard of an Amiga. The C64 made a cultural impression that lasts today.

I would claim that in my generation (people born after Commodore's heyday), those who are interested in retrocomputing topics (a minority) are similarly aware of the C64 and the Amiga and their cultural relevances.


> I mentioned that my first system (TRS-80) had 4k to someone and they expressed surprise at that you could get a monitor that good back then.

Admittedly didn't register with me at first but this is hilarious.


Seems like a made-up interaction, tbh. When would you just say "my TRS-80 had 4k" without specifying 4k of what? Either the other person first said something like "my first computer only had 8MB of RAM", in which case they'd know from context you're talking about memory, or you said "my TRS-80 had 4k" out of nowhere and they had no idea what you were talking about and asked if you meant the monitor.


I assure you it is not made up. Obviously it's not the literal words I used, I'm working from personal recall, not notes. It would have been part of a larger conversation I was having. From memory it was part of my answer to the question "how long have you been programming?"


Oooh, book 'em Danno!


You'd think that's the case, but all these retro YouTubers seem to have inspired a new generation to take up the mantle. I attend the Vintage Computing Festival (the one in NJ) each year, and post pandemic it has been filled to the brim with young Gen Z era people interested in experiencing the computers of yesteryear. Is it enough to make this venture worth it? We shall see. All the retro computing YouTubers, including this one, must have some demographic data to help fill in the picture.


wrong. little idea, but not none


The only 'real' claim I could see to saving Commodore would be something adding backwards compatibility this late in the game. It would be of dubious utility, but it would give a claim to legitimacy. Otherwise you might as well let it stay dead, because there isn't anything to be gained from using it.



Kind of the same situation with Atari


In my daily use, I just want the answer, not a performance. I'd rather it sound like a smart assistant, not my best friend.


This sort of tech is also useful in that situation since it can better understand and deliver vocal nuances (e.g., emphasis/tone that delivers meaning)


I'm always blown away by the vision behind stuff like HyperCard. It was all about giving non-techies the keys to the kingdom.

But looking at today's tech landscape, with its walled gardens and app stores, I can't help but feel we've gone backwards.


Apparently we need to be doing more LSD


I wish safe, tested sources were generally available. I’m 55 this year and would like to try it, but I’m not going to buy street drugs nor am I capable of producing it. Is there a pharmaceutical version of LSD available somewhere in the world through legitimate channels?


Not exactly LSD, but psilocybin clinics have been legalized in certain locations, such as the US state of Oregon. Psilocybin is of the same psychedelic class (tryptamines), so it is not an entirely dissimilar experience, although for me it's less stimulating than LSD, so YMMV.

I understand though that clinics aren't the ideal for many (they are for some), since you aren't allowed to have the trip at home or leave the clinic until it is over.


I actually think I would be more comfortable in a clinic.


Then that may be an option for you. It just needs ... a diagnosis of treatment-resistant depression and a prescription for psilocybin therapy by a specially licensed psychiatrist...


Not sure about "safe and tested" but LSD prodrugs (substances that metabolise into LSD which then works as usual) are available in many places. One example is this https://en.wikipedia.org/wiki/1D-LSD .

Eventually they are made illegal but new ones appear.


If you haven't done it by 55, you probably aren't going to do it. There are easy ways to get safe LSD if you want it. But you do not actually want it.


It's possible to want something but not enough to break the law and risk your safety for it. I use LSD regularly, but that doesn't mean sourcing it is for everyone.


LSD can be quite helpful to the right mind and when used with the right mindset. It can also be quite harmful if used improperly. Still wish it were legal though.


What's worse, in context here, is Apple's distinguished primary role in bringing this about.


It's like they remembered their 1984 advert, and decided they wanted to be the baddy in it.


Idk 2003-2009 was very much the days of the sort of malware and spyware that showed developers in a company didn’t deserve rights anymore


I don't see what that has to do with Hypercard. If anything, Hypercard (or modern HTML) is living proof that you can create and share a secure software runtime with the world.

If developers "didn't deserve rights" for what they did with that, then I don't see how we should let Apple off the hook for PRISM compliance and backdoored Push Notifications.


HyperCard is completely insecure by any reasonable security/privacy standard.


Swift Playgrounds is very much in the spirit of HyperCard, but also gives access to the same APIs the professional developers are using.

It's also designed to be usable and educational for kids.


Yeah, Hypercard or MacPaint (really a demo for Quickdraw). Had he done only one of those two he would still rank as a genius.


From a particular POV, it's the same evolutionary chain. QuickDraw -> MacPaint -> HyperCard.


It's really hard to extract computing from the capitalistic, consumerist cradle within which it was born.

Every other human creative practice and media (poetry, theater, writing, music, painting, etc) have existed in a wide variety of cultures, societies, and economic contexts.

But computing has never existed outside of the immensely expensive and complex factories & supply chains required to produce computing components; and corporations producing software and selling it to other corporations, or to the large consumer class with disposable income that industrialization created.

In that sense the momentum of computing has always been in favor of the corporations manufacturing the computers dictating what can be done with them. We've been lucky to have had a few blips like the free software movement here and there (and the outsized effect they've had on the industry speaks to how much value there is to be found there), but the hard reality that's hard to fight is that if you control the chip factories, you control what can be done with the chips - Apple being the strongest example of this.

We're in dire need of movements pushing back against that. To name one, I'm a big fan of the uxn approach, which is to write software for a lightweight virtual machine that can run on the cheap, abundant, less/non locked down chips of yesteryear that will probably still be available and understandable a century from now.


Part of the problem trying to isolate computing is that it's fundamentally material. Even cloud resources are a flimsy abstraction over a more complex business model. That materialism is part of the issue, too. You can't ever escape the churn, bit rot gets your drives and Hetzner doesn't sell a lifetime plan. If you're not computing for the short-term, you're arguably wasting your time.

I'm not against the idea of a disasterproof runtime, but you're not "pushing back" against the consumerist machine by outlasting it. When high-quality software becomes inaccessible to support some sort of longtermist runtime, low-quality software everywhere sees a rise in popularity.


> But computing has never existed outside of the immensely expensive and complex factories & supply chains required to produce computing components; and corporations producing software and selling it to other corporations, or to the large consumer class with disposable income that industrialization created.

You must be too young to have experienced the time when it was expected that you would build your own computer at home, and either write your own software for it, or get it for free (or just a duplication beer) from the local computer club.


you can only blame capitalism so much for the unpopularity of HyperCard-like things vs Instagram/Facebook/Twitter etc

on some level it is just human nature to want to consume rather than create. just is. it's not great, but let's not act like people haven't tried to make creative new platforms for self-expression and software creation; they all kinda failed


> is just human nature to want to consume than create

That may be true.

But it doesn't really explain why the tools for simple popular creation are not there. There are a lot of people in the world who would use them, even if it's only 1%.


They were there, but nobody used them.

For a long while, Apple computers came with an entire creative suite of programs to make your own content and publish it on the Internet via iWeb.

For a variety of reasons, hardly anyone took advantage of it.


I totally agree


> feel we've gone backwards

The word you are looking for is enshittification.


Honestly, this whole story feels like a massive red flag for the company, not the data scientist.

They loved his attitude when it challenged broken systems, but the second he aimed that same energy at management's BS, he was out. Makes you wonder... is "culture fit" just code for "won't call out your boss"?


I don't actually want to have a deep, philosophical conversation with a blacksmith.

I just want to see that blacksmith close up shop early because he's feuding with the town guard, or give me a discount because his daughter just won the local archery competition. I want a world that reacts to itself, not just to me.

The goal shouldn't be to make NPCs that can pass the Turing test, but to make a world that feels like it has a pulse.


Do you really want that in a scrolls game though? I want the blacksmith to be first npc in the town, more or less always there, with 1 button on the dialog tree to get to the shop menu for me to unload an entire dungeon of loot onto this blacksmith. And he better have ore and leather strips.


And I want to be able to game player statistics using a combination of spells and potions so I can pickpocket the blacksmith and then sell their stuff back at marked-up prices. The traditional RPG numbers-and-skills-and-formulas part of TES was a great joy to exploit.


Agreed, that's the real dream of open world RPGs: dynamic worlds. Perhaps modern AI techniques can help in that a bit, but what you really need is an incredibly intricate simulation.


It doesn’t need to be a deep philosophical conversation. You could be striking up a “buy now pay later” business deal or asking him to produce a specific type of equipment according to your specifications, etc.


>I don't actually want to have a deep, philosophical conversation with a blacksmith.

You didn’t read the article, that’s not what Radiant AI did. This is from twenty years ago and has nothing to do with LLMs.


This isn't a chart of returns; it's a chart of who had nerves of absolute steel. Be honest, who here has actually lived through a major dip and not been tempted to smash that "sell" button?


I don't get tempted to smash the sell button during major dips, I get tempted to smash the buy button. The only time I smashed the sell button was during the stock market bubble in 1999.

I wanted to buy during this year's dip but I took a look at my asset allocation and I was still way overweight in US stocks compared to my target so I couldn't justify it. Hopefully people freak out even more next time.

Stock market crashes are my happy place.


I think the same could be said of anyone who has successfully traded regularly over several years.

To do that kind of business you have to have mastered yourself or set up systems where the emotional rollercoaster ride doesn't change your choices.

> I wanted to buy during this year's dip...

The stock market hasn't had a real crash in quite a long time, as evidenced by a number of things, including P/E values and stock buybacks, a lack of general price discovery beyond chaotic whipsaws, and the indexes topping all-time highs.

People are going to lose the shirts off their backs when it does come, and it will come suddenly without warning. Best to keep that in mind when greed might try to lead you astray. Greed is both a friend and a trader's worst enemy.

Printing money enrolls participants in boom bust cycles. We've had a boom for the last 10+ years nearly straight. The stock market is way overdue for a crash. It's an avalanche-prone area with a massive snowpack built up. There's always some chaotic trigger that gets everything moving again.


> I don't get tempted to smash the sell button during major dips, I get tempted to smash the buy button.

Same. Not quite dollar-cost averaging, but when I'm confident in the long term outlook of a company, if they dip, I like to buy more.

Just before the COVID lockdowns, I sold everything and went to cash for a month or so. Bought back all my previous positions right about the bottom of the dip. Massive gains on that one... hard to time things that well all the time though.

Edit: One additional thing is that you can't be emotionally invested in the investments. That's just a recipe for poorly thought out choices, and ultimately a disaster


>Be honest, who here has actually lived through a major dip and not been tempted to smash that "sell" button?

Honestly, I've lived through four (1987[0], dot-com, 2008 recession, 2020 coronavirus) and was not tempted. For the first two, I was much too far away from retirement to worry about it, and for the third, I was still over ten years away from even beginning to think about retirement distributions, and because of my experience with the first two, again was not tempted. The fact that my investments for much of that time were in pre-tax accounts helped me avoid feeling some pain as well.

Nowadays, I try to keep, in addition to an emergency fund, about a year's worth of retirement distributions in money market equivalents, even inside my IRA.

[0]My employer started a 401k plan prior to 1987

