
That's a good history, but it skips over a lot of the nice security work that really distinguishes Apple's operating systems from Linux or Windows. There's a lack of appreciation out there for just how far ahead Apple now is when it comes to security. I sometimes wonder if one day awareness of this will grow and people working in sensitive contexts will be required to use a Mac by their CISO.

The keystone is the code signing system. It's what allows apps to be granted permissions, or to be sandboxed, and for that to actually stick. Apple doesn't use ELF like most UNIX systems do; they use a format called Mach-O. The differences between ELF and Mach-O aren't important except for one: Mach-O supports an extra section containing a signed code directory. The code directory contains a series of hashes over code pages. The kernel has some understanding of this data structure, and dyld can associate it with the binary or library as it gets loaded. XNU checks the signature over the code directory, and the VM subsystem then hashes code pages as they are faulted in on demand, verifying that each matches the corresponding signed hash in the directory. The hash of the code directory (the "cdhash") can therefore act as a unique identifier for any program in the Apple ecosystem. There's a bug here: the association hangs off the vnode structure, so if you overwrite a signed binary in place and then run it, the kernel gets upset and kills the process, even if the new file has a valid signature. You have to replace the file as a whole for it to recognize the new situation.
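You can see all of this with the codesign tool (values elided here; exact fields vary by OS release):

    $ codesign -dvvv /Applications/Safari.app
    Identifier=com.apple.Safari
    Format=app bundle with Mach-O universal (x86_64 arm64)
    CodeDirectory v=... size=... flags=... hashes=...+... location=embedded
    CDHash=...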

On top of this foundation Apple adds code requirements. These are programs written in a small expression language that specifies constraints over aspects of a code signature. You can write a requirement like "this binary must be signed by Apple", or "this binary can be any version signed by an entity whose identity is X according to certificate authority Y", or "this binary must have a cdhash of Z" (i.e. be that exact binary). Binaries can also expose a designated requirement, which is the requirement by which they'd like to be known to other parties. This system looks like overkill at first, but it enables programs to evolve whilst retaining a stable and unforgeable identity.
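You can dump a binary's requirements with codesign and compile your own expressions with csreq; the output looks something like this (the com.example identifier is made up):

    $ codesign -d -r- /Applications/Safari.app
    designated => identifier "com.apple.Safari" and anchor apple
    $ csreq -r='anchor apple generic and identifier "com.example.mytool"' -b /tmp/req.bin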

The kernel exposes the signing identity of tasks to other tasks via ports. Requirements can then be imposed on those ports using a userspace library that interprets the constraint language. For example, if a program stores a key in the system keychain (which is implemented in user space), the keychain daemon records the designated requirement of the program sending the RPC and checks future requests to use the key against it.
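The same constraint language can be exercised from the command line: codesign's -R flag tests whether a binary satisfies an arbitrary requirement you supply (the failure message below is approximate):

    $ codesign --verify -R='anchor apple' /bin/ls && echo satisfied
    satisfied
    $ codesign --verify -R='identifier "com.apple.Safari"' /bin/ls
    /bin/ls: code failed to satisfy specified code requirement(s)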

This system is abstracted by entitlements. These are key=value pairs that express permissions. Entitlements are an open system and apps can define their own. However, most entitlements are defined by Apple. Some are purely opt-in: you obtain the permission merely by asking for it and the OS grants it automatically and silently. These seem useless at first, but allow the App Store to explain what an app will do up front, and more generally enable a least-privilege stance where apps don't have access to things unless they need them. Some require additional evidence like a provisioning profile: this is a signed CMS data structure provided by Apple that basically says "apps with designated requirement X are allowed to use restricted entitlement Y", and so you must get Apple's permission to use them. And some are basically abused as a generic signed flags system; they aren't security related at all.
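Entitlements are baked into the signature, so they're easy to inspect:

    $ codesign -d --entitlements - /Applications/Safari.app
    # prints the app's entitlement dictionary; sandboxed apps carry
    # com.apple.security.app-sandbox among other keys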

The system is then extended further, again through cooperation of userspace and XNU. Binaries being signable is a start, but many programs have data files too. At this point the Apple security system becomes a bit hacky IMHO: the kernel isn't involved in checking the integrity of data files. Instead a plist is included at a special place in the slightly ad-hoc bundle directory layout, that plist contains a hash of every data file in the bundle (at file rather than page granularity), the hash of the plist itself is placed in the code signature, and finally the whole thing is checked by Gatekeeper on first run. Gatekeeper is asked by the kernel whether it's willing to let a program run, and it decides based on the presence of extended attributes that are placed on files and then propagated by GUI tools like web browsers and decompression utilities. Userspace OS code like Finder invokes Gatekeeper to check out a program when it's first downloaded, and Gatekeeper hashes every file in the bundle to ensure it matches what's signed into the binaries. This is why macOS shows that slow "Verifying app" dialog on first run. Presumably it's done this way to avoid stalling apps that open large data files without using mmap, but it's a pity, because on fast networks the unoptimized Gatekeeper verification can actually take longer than the download itself. Apple doesn't care because they view out-of-store distribution as legacy tech.
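You can watch both halves of this from the shell; the download path here is just an example:

    $ xattr -p com.apple.quarantine ~/Downloads/SomeApp.app    # quarantine flag set by the browser
    $ spctl --assess --verbose ~/Downloads/SomeApp.app         # ask Gatekeeper for its verdict
    ~/Downloads/SomeApp.app: accepted
    source=Notarized Developer ID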

Finally there is Seatbelt, a Lisp-based programming language for expressing sandbox rules. These files are compiled in userspace to some sort of bytecode that's evaluated by the kernel. The language is quite sophisticated and lets you express arbitrary rules for how different system components interact and what they can do, all based on the code signing identities.
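You can still poke at this from userspace via the deprecated-but-present sandbox-exec. A toy profile, assuming the SBPL details haven't shifted in your release:

    $ cat /tmp/demo.sb
    (version 1)
    (allow default)                              ; start permissive
    (deny file-read* (subpath "/private/etc"))   ; carve out one deny rule
    $ sandbox-exec -f /tmp/demo.sb cat /etc/hosts
    cat: /etc/hosts: Operation not permitted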

The above scheme has an obvious loophole that was only closed in recent releases: data files might contain code, and they're only checked once. For any Electron or JVM app this is guaranteed to be the case, because the code ships in a portable format. So one app could potentially inject code into another by editing its data files, and thus subvert code signing. To block this, modern macOS uses Seatbelt to sandbox every single running app. AFAIK there is no unsandboxed code in a modern macOS. One of the policies the sandbox imposes is that apps aren't allowed to modify the data files of other apps unless they've been granted that permission. The policy is quite sophisticated: apps can modify other apps if they're signed by the same legal entity as verified by Apple, apps can allow others matching code requirements to modify them, and users can grant permission on demand. To see this in action, go into Settings -> Privacy & Security -> App Management, revoke the permission for Terminal.app and (re)start it. Run something like 'vim "/Applications/Google Chrome.app/Contents/Info.plist"' and observe that although the file has rw permissions, vim thinks it's read-only.
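The same denial is visible from a plain shell redirect: the POSIX bits say writable, but the open for writing is vetoed (error text is roughly what zsh prints):

    $ ls -l "/Applications/Google Chrome.app/Contents/Info.plist"
    -rw-r--r--  ...
    $ echo x >> "/Applications/Google Chrome.app/Contents/Info.plist"
    zsh: operation not permitted: /Applications/Google Chrome.app/Contents/Info.plist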

Now, I'll admit that my understanding of how this works ends here because I don't work for Apple. AFAIK the kernel doesn't understand app bundles, and I'm not sure how it decides whether an open() syscall should be converted to read only or not. My guess is that the default Seatbelt policy tells the kernel to do an upcall to a security daemon which understands the bundle format and how to read the SQLite permission database. It then compares the designated requirement of the opener against the policies expressed by the bundle and the sandbox to make the decision.



I do not think that "security" is the appropriate name for such features.

In my opinion "security" should always refer to the security of the computer owners or users.

These Apple features can be used to enhance security, but the main purpose for which they have been designed is to give the vendor control over how a computer that they have sold, and which supposedly no longer belongs to them, is used by its nominal owner, i.e. to let Apple decide which programs the end user may run.


On macOS the security system is open even though the codebase is closed. You can disable SIP and get full root access. Gatekeeper can be configured to trust some authority other than Apple, or disabled completely. You can write and load your own sandbox policies. These things aren't well known and require reading obscure man pages, but the capabilities are there.
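For example (csrutil only works from Recovery, spctl's flags have shifted across releases, and MyTool.app is a made-up path):

    $ csrutil disable                # in Recovery: turn SIP off entirely
    $ sudo spctl --master-disable    # pre-macOS 15: allow apps from anywhere
    $ sudo spctl --add --label mytools /Applications/MyTool.app   # add your own rule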

Even in the default out-of-the-box configuration, Apple isn't exercising editorial control over what apps you can run. Out-of-store distribution requires only a verified developer identity and a notarization pass, and notarization is a fully automated malware scan. There's no human in the loop. The App Store is different, of course.
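The notarization flow itself is a couple of CLI calls; the zip name and keychain profile here are examples:

    $ xcrun notarytool submit MyApp.zip --keychain-profile notary --wait
    $ xcrun stapler staple MyApp.app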

Could Apple close up the Mac? Yes. The tech is there to do so and they do it on iOS. But... people have been predicting they'd do this from the first day the unfortunately named Gatekeeper was introduced. Yet they never have.

I totally get the concern, and in the beginning I shared it, but at some point you have to stop speculating and give them credit for what they've actually done. It's much easier to distribute an app Apple executives don't like to a Mac than it is to distribute an app Linux distributors don't like to Linux users, because Linux app distribution barely works if you go outside the "store" (the distro repositories). In theory it should be the other way around, but it's not.


> Even in the default out-of-the-box configuration, Apple isn't exercising editorial control over what apps you can run

Perhaps not in the strictest sense, but Apple continues to ramp up the editorial friction for the end user to run un-notarized applications.

Before macOS 15 I felt that right-click Open was an OK approach, but as we know that's gone; now it's xattr or Settings.app. More egregious is the monthly reminder that an application is doing something that you want it to do.
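The xattr route, for reference (the app path is just an example):

    $ xattr -d com.apple.quarantine /Applications/SomeTool.app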

A level between "disable all security" and what macOS 15 introduces would be appreciated.


More knobs would be nice, yes. Still, nothing stops you from using a customized file manager, browser, archiver etc. that doesn't set the xattrs at all.


Sure, common apps will be notarized and won't run into any warnings/blocks. It's the apps that aren't notarized where we need to dive into the Terminal or Settings.app.


I think you went for a lazy reply rather than actually reading the comment through. Most of the things mentioned here directly improve security for the computer's owner.


> I think you went for a lazy reply rather than actually reading the comment through.

https://news.ycombinator.com/newsguidelines.html

Your reply could have omitted the first sentence.

Many years ago, at Macworld San Francisco, I met "Perry the Cynic", the Apple engineer who added code signing to Mac OS X. Nice person, but I also kind of hate him and wish I could travel back in time to stop this all from happening.


It could have, but I would just replace it with the same link you posted. And we all hate Perry sometimes :)



