
macOS has some very strange ideas about removing permissions. When you allow kexts in modern OS releases, the signature gets added to an SQLite database in /var/db (you have to consult this database to get the signatures if you want to whitelist kexts in an MDM). Now, kexts are quite invasive, hence Apple's caution about allowing them in the first place.
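
For reference, pulling those signatures out looks roughly like this - assuming the database is the KextPolicy file under /var/db/SystemPolicyConfiguration with a kext_policy table keyed on team ID + bundle ID; check the schema on your own OS version before relying on it:

    import sqlite3

    # Path and schema are assumptions - verify them on your macOS version.
    # Reading the file needs root; SIP only gets in the way when you write.
    DB = "/var/db/SystemPolicyConfiguration/KextPolicy"

    conn = sqlite3.connect(DB)
    rows = conn.execute(
        "SELECT team_id, bundle_id, allowed FROM kext_policy"
    ).fetchall()
    conn.close()

    for team_id, bundle_id, allowed in rows:
        if allowed:
            # exactly the two values an MDM kext whitelist payload wants
            print(f"{team_id}\t{bundle_id}")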

What happens if you want to revoke a kext? Delete the entry from the SQLite DB? Nah. Guess what: SIP prevents any and all deletions from that DB. You have to disable SIP to revoke a kext's permissions. And because the signature is not a hash but a two-part vendor/product identifier, it's entirely possible for a malicious version of an existing kext to be released that is still permitted by that signature.
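
The revocation itself would only be a one-line DELETE against that same (assumed) table - it just gets refused while SIP is enabled, which is the whole complaint:

    import sqlite3

    # Hypothetical kext to revoke - identified by team ID + bundle ID,
    # not by a hash, which is exactly the weakness described above.
    TEAM_ID = "EXAMPLE123"
    BUNDLE_ID = "com.example.driver"

    conn = sqlite3.connect("/var/db/SystemPolicyConfiguration/KextPolicy")
    # With SIP enabled this write is blocked; you have to disable SIP first.
    conn.execute(
        "DELETE FROM kext_policy WHERE team_id = ? AND bundle_id = ?",
        (TEAM_ID, BUNDLE_ID),
    )
    conn.commit()
    conn.close()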

As an admin with a security focus, this seems completely backwards to me. I get that Apple don't want to make the permitting operation too difficult in the first place, because these are end-users we're talking about, but the lengths they go to in order to prevent the permissions being revoked are downright strange.


Not any more:

https://www.theverge.com/2018/11/12/18077166/apple-macbook-a...

"The parts affected, according to the document, are the display assembly, logic board, top case, and Touch ID board for the MacBook Pro, and the logic board and flash storage on the iMac Pro."

That doesn't leave much salvageable.


As someone who has parted out his-and-hers old MacBook Airs to fund their new ones, can confirm.

It was surprising how much value the old parts had.

Sad that old units with a broken X will just be scrapped because Apple wants $hundreds for any of its specialized parts.


64GB will cost you $800 now. When you need it, it may cost half that, or less. And Apple was notorious for under-spec'ing the maximum amount of RAM a machine could take - it was common for a machine to accept double Apple's official maximum, because DIMM densities increased over time. Since the integration of the memory controller into the CPU this is less of an issue, but soldered RAM means you're limited to the amount Apple is prepared to give you, which until this model has been far less than the CPU actually supports - the 15" i9 could be spec'd to 32GB, but the CPU could address 64GB.

Apple's attitude to upgrading is to replace the entire machine, which is both expensive and absurd. I bought a 2008 MBP back in uni in 2010, and later upgraded the RAM and disk when I could afford to and had reached the limits of what it had. That machine served me well for 5 years. There's nothing else on the market today I'd trust to be a daily-use machine for 5 years straight.


Even the XPS 13s are more serviceable than their rival 13" MBPs. The RAM may be soldered on both, but the XPS uses a standard NVMe SSD. It's saved me a few times already.


It's not just the risk of data loss; the inconvenience of restoring the machine is much higher too. The only fix for failed storage in a modern Mac notebook is to replace the entire logic board - everything else may be functional except the storage, which is a consumable by most considerations, but now the entire board must be swapped. And if it's out of warranty, which let's face it is extremely likely, then you are completely and utterly stuck. You either get an enormous bill from Apple or an authorised repair shop (and Apple is notoriously cagey about allowing third parties access to its replacement parts), or you have to hunt down a donor machine to do the swap yourself - and now that Apple's invasive security measures require communication with Apple's diagnostic tools to perform certain swaps, a storage failure can render the machine entirely useless for months, if not permanently. Whereas if the disk were replaceable, you'd just order one from the most convenient store, swap it in, and restore from Time Machine.

The Function Key MBPs have a problem with their flash storage where it may randomly and unpredictably die, taking all the data with it. The fix is a firmware update, which also takes all the data with it. Fantastic, Apple. We bought six of those machines, and because the users are actively, y'know, using the damned things, it's not really convenient to tell them they'll be without their machine for a week while the service centre gets around to it, and then spend multiple hours restoring their Time Machine backups. On the plus side, the SSDs in those machines are not soldered. Yes, they're proprietary, but it's something - if those machines suffer a failure, I could grab an SSD on eBay and get them running again. It's almost worth the risk.


Xbox One X.

Now with 40% more confusion.


Mostly bandwidth, reliability and interference robustness. The Wii U certainly proves it's possible, although that's streaming 1280x720. Displays at much higher resolutions require considerably higher bandwidth and are much more expensive as a result. Plus, if your screen is in one place, the source is likely to be there too, so why not just make it more reliable with a cable.
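
Rough numbers for the uncompressed case, assuming 60 Hz and 24 bits per pixel (illustrative only - real links compress heavily):

    # Uncompressed video bandwidth = pixels x bits-per-pixel x refresh rate.
    def gbps(width, height, bpp=24, hz=60):
        return width * height * bpp * hz / 1e9

    print(f"1280x720:  {gbps(1280, 720):5.2f} Gbit/s")   # ~1.3 Gbit/s
    print(f"1920x1080: {gbps(1920, 1080):5.2f} Gbit/s")  # ~3.0 Gbit/s
    print(f"3840x2160: {gbps(3840, 2160):5.2f} Gbit/s")  # ~11.9 Gbit/s

Even with compression doing most of the work, that gap is why streaming 720p to a controller is a very different problem from wireless 4K to a monitor.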


Linux provides /dev/random and /dev/urandom to all applications, as well as the relevant API calls, but the quality of those is always in doubt (especially with urandom, which recycles the entropy pool when it gets too low). CPU hardware PRNGs are supposed to be much higher quality - when they work. So it's all about depending on the CPU to actually do what it's advertising: the CPUID flag advertises a high-quality hardware RNG, but then the firmware bug completely nerfs it.
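
As a quick sanity check of what the CPU is advertising (a sketch for Linux, just reading the CPUID flags the kernel exposes):

    # Check whether the CPU advertises RDRAND on a Linux box.
    flags = set()
    with open("/proc/cpuinfo") as f:
        for line in f:
            if line.startswith("flags"):
                flags.update(line.split(":", 1)[1].split())
                break

    print("rdrand advertised:", "rdrand" in flags)
    # Whether the output is actually any good is exactly what the
    # firmware bug calls into question.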


> So it's all about depending on the CPU to actually do what it's advertising

I know I'm nitpicking but I disagree with that statement.

There's a kernel option called CONFIG_RANDOM_TRUST_CPU which you can set to false. So if, for some reason - even a bad one - you don't want your random numbers generated by your CPU, then that's that. End of discussion. (In theory; I'm not sure whether rdrand is trappable.)
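
(Quick way to check how your kernel was built - a sketch, since the config can live in a couple of places depending on the distro:)

    import gzip, os, platform

    candidates = [f"/boot/config-{platform.release()}", "/proc/config.gz"]

    for path in candidates:
        if os.path.exists(path):
            opener = gzip.open if path.endswith(".gz") else open
            with opener(path, "rt") as f:
                for line in f:
                    if "CONFIG_RANDOM_TRUST_CPU" in line:
                        print(line.strip())
            break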

I get that you're skeptical about the quality of what's provided by /dev/(u)random, and in most cases that's justified. Should I ever feel the need to hook up a hardware random generator, I'd hope programs would use that one instead of assuming they can do better by calling rdrand.


I wasn't aware of this kernel flag (nor am I surprised it exists); it seems like it would be useful to the article's author. However, it's enabled by default, which allows the kernel to automatically set up the environment based on what it can glean from the hardware. So my comment was about the machine's behaviour without intervention.


My understanding is that implementing a PRNG in software gives you a very small entropy pool. At the OS level, the kernel can collect entropy from a vast number of sources, including things an application won't have access to, which is why the OS exposes the RNG to applications in the first place. Rolling your own also means maintaining that PRNG in your software. Basically, it's the same old dependency argument again - there is nothing inherently wrong with using the tools the environment provides rather than building your own, but it does mean those tools have to work properly. This is especially critical for cryptographically secure PRNGs - those are things you really do not want to be maintaining yourself if you can access a high-quality source of random data, but again, if the PRNG doesn't work, you're in deep trouble (see, for example, Yubico's broken PRNG chips in the YubiKey 4). Hardware PRNGs on the CPU itself were supposed to dramatically improve the state of random-data provisioning, but when bugs like this hit, it shows the weakness of depending on the stack beneath you.
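
Which is why, in practice, application code should just lean on the OS interface rather than carrying its own generator - in Python, for instance, that's os.urandom/secrets, which sit directly on the kernel's CSPRNG (and, transitively, on whatever the hardware feeds it):

    import os
    import secrets

    # Thin wrappers over the OS CSPRNG (getrandom()/urandom on Linux),
    # so the quality is only as good as what the kernel - and ultimately
    # the hardware underneath it - provides.
    session_key = os.urandom(32)       # 256 bits of key material
    token = secrets.token_urlsafe(32)  # URL-safe random token

    print(session_key.hex(), token)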


To be clear, you only need some random bytes to seed your cryptographic PRNG. This should of course be gathered from the OS, but after that you only need to reseed once in a blue moon. Of course you shouldn't write and maintain a CSPRNG yourself, but there are many widely used, maintained and scrutinized libraries for this purpose.

For example, seeding ChaCha with 256 bits will give you 1 ZiB of output before it cycles. That should keep you going for a while.
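
A minimal sketch of that pattern with the pyca/cryptography package - seed once from the OS, then treat the ChaCha20 keystream as your random bytes (illustrative, not a hardened implementation):

    import os
    from cryptography.hazmat.primitives.ciphers import Cipher, algorithms

    class ChaChaPRNG:
        """Userspace CSPRNG: 256-bit seed from the OS, output is the
        ChaCha20 keystream (i.e. encrypting zeroes)."""

        def __init__(self):
            key = os.urandom(32)    # 256-bit seed, gathered once from the OS
            nonce = os.urandom(16)  # this API expects a 128-bit nonce
            self._ks = Cipher(algorithms.ChaCha20(key, nonce), mode=None).encryptor()

        def random_bytes(self, n):
            return self._ks.update(b"\x00" * n)

    prng = ChaChaPRNG()
    print(prng.random_bytes(16).hex())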


> My understanding is that implementing a PRNG in software results in a very small entropy pool.

A lot of PRNGs are now implemented as the output of a stream cipher, or of a block cipher in counter mode:

* https://en.wikipedia.org/wiki/Fortuna_(PRNG)

So 128 bits is all that is needed to get going.

Re-key every so often to ensure forward security in case there is a kernel-level compromise.

With AES-NI instructions in most CPUs, several GB/s can be achieved.
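
The same idea with a block cipher in counter mode (again pyca/cryptography, sketch only) - the periodic re-key is what buys the forward security mentioned above; here it simply re-seeds from the OS:

    import os
    from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

    class CtrPRNG:
        """AES-128-CTR keystream as a PRNG, re-keyed periodically."""

        REKEY_AFTER = 1 << 20  # re-key every 1 MiB of output (arbitrary choice)

        def __init__(self):
            self._rekey()

        def _rekey(self):
            key, nonce = os.urandom(16), os.urandom(16)  # 128-bit key, as above
            self._ks = Cipher(algorithms.AES(key), modes.CTR(nonce)).encryptor()
            self._produced = 0

        def random_bytes(self, n):
            out = self._ks.update(b"\x00" * n)
            self._produced += n
            if self._produced >= self.REKEY_AFTER:
                self._rekey()
            return out

    print(CtrPRNG().random_bytes(16).hex())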


> This is especially critical for cryptographically-secure PRNGs - those are things you really do not want to be maintaining yourself

My conclusion would be exactly the opposite: if your RNG is so important that it has to be cryptographically secure, you owe it to your users to put in the time and effort to maintain a proper implementation yourself, or at the very least to use an open-source library that provides this functionality in software. Otherwise you're always going to be at the mercy of a potentially misbehaving environment.

In terms of entropy, you don't really need to "maintain a pool" for CSPRNGs. You either have enough entropy to feed it with, or you don't. Once it is properly seeded, you can squeeze as many random bits out of it as you want (or at least, as many as anyone would ever reasonably need). It's really no different from a stream cipher: the key is the seed, and you're just encrypting zeroes. You don't suddenly need another randomly generated key after encrypting 100 MiB in order to encrypt the next 100 MiB securely.

Another great thing about entropy is that you can't reduce it, which is why you really don't have to spend any time thinking about whether a particular entropy source is well behaved or uniformly distributed or anything like that. You just have to be certain that, overall, you have enough entropy that nobody can guess the entire seed. So anything the OS can give you? Dump it in there. Any kind of user interaction? Dump it in there. The time? CPU jitter? Network jitter? Just put it all in there. 100 MiB of zeroes? You know what, why not - put it on top too, because you literally can't make it worse, only better.
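
A sketch of that "just dump it all in" approach - hash every source together and use the digest as the seed; a weak or even constant input can't undo the entropy the others contribute:

    import hashlib, os, time

    h = hashlib.sha256()
    h.update(os.urandom(32))                     # whatever the OS gives you
    h.update(time.time_ns().to_bytes(8, "big"))  # timing
    h.update(os.getpid().to_bytes(4, "big"))     # process ID
    h.update(b"\x00" * 100)                      # even a pile of zeroes can't hurt
    seed = h.digest()                            # 256-bit seed for the CSPRNG

    print(seed.hex())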


By which time the next release comes out.

You can't win with Apple now.


On OS X you can still upgrade to the old one, though.

