> My work laptop (Mac) needed an OS upgrade recently, top of the release notes was "added 8 emojis" - what? Why is this an OS level feature worth calling out
Because the Software Update page under System Settings is all that normies will ever read, so the text there is aimed at normies.
Meanwhile, techies may be interested in the CVEs listed in the security update notes:
I believe I read somewhere that announcing new emoji drives noticeably more OS upgrades than release notes that only list boring security and stability fixes.
Apple is rumored to have a strategic emoji reserve. The more they want people to install an update, the more emojis from the reserve they release as part of it. Because this is basically the only thing that drives OS updates among average people.
The nuance is that you can have an NI number, then have your visa lapse for whatever reason - you still have the NI number. Hence the requirement to prove your right to work through another means.
Previously you could use proof of British nationality or a physical biometric residence card - but they've been replaced by the digital share code system (which tbh hasn't been too bad)
Sorry I worded that poorly - I was trying to make the point that citizens prove their right to work using passport/birth certificate, and until recently visa holders used a physical BRP, and now a digital system (which oddly enough uses your expired/redundant BRP number as a username)
On the ReDoS aspect, I find the current CVSS rating system lacking. Availability is important, but following secure coding principles (fail closed) I'd much rather my system go down than have integrity/confidentiality compromised.
It's frustrating that a potential availability issue often gets the same (high) rating as an integrity/confidentiality issue.
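For what it's worth, plugging the two cases into the standard CVSS 3.1 calculator shows exactly that: an availability-only vector (the typical ReDoS scoring) and a confidentiality-only vector both come out at 7.5 High.

```text
ReDoS (availability only):            CVSS:3.1/AV:N/AC:L/PR:N/UI:N/S:U/C:N/I:N/A:H  -> 7.5 (High)
Data exposure (confidentiality only): CVSS:3.1/AV:N/AC:L/PR:N/UI:N/S:U/C:H/I:N/A:N  -> 7.5 (High)
```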
I agree, but there's a bit of nuance here. Today scanning steps typically happen post-install, which is wild but is the status quo. Therefore preventing anything from running during install is desirable.
I'd like to see the ability to scan/restrict as part of the installation step become popular; there are some proprietary tools that do this already, but it's not yet a common capability.
Yes. For instance, when we had that crypto-malware npm fiasco a few days back I happened to be updating one of my packages. The audit lit up with dozens of critical issues, but of course this was after it had installed everything. Luckily I had disabled install scripts, so it became a matter of not running the code until I could revert the update.
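For anyone who wants to do the same, it's a one-liner in .npmrc; the trade-off is that packages relying on postinstall build steps then need handling by hand.

```ini
# ~/.npmrc or project-level .npmrc - never run install/postinstall scripts
ignore-scripts=true
```

The same behaviour is available per invocation with `npm ci --ignore-scripts`.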
Basically we severed our connection to the public npm registry completely earlier in the week while this worm plays out.
Unfortunately there wasn't a way to do this without taking our cached "good" public packages down as well, so we later replicated the good cached packages into a new standalone private registry to be the new upstream.
The bit that was not obvious in the moment, but self-evident once we realised, is that the registry we're using took the copy time as the publish time, and therefore our new two-week delay is rejecting the copied packages...
So, sample size of one, but the registry we're using is definitely using upload time, not any metadata in the packages themselves. Good to know the filtering is working.
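If your registry can't do that kind of delay natively, a rough client-side approximation is npm's `before` config, which makes resolution ignore anything the registry reports as published after the given date - with exactly the same caveat that it trusts the registry's publish timestamp (the date below is just an example).

```ini
# .npmrc - only consider versions the registry says were published on or before this date
before=2025-09-01
```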
I think the point around incorporating MFA into the automated publishing flow isn't getting enough attention.
I've got no problem with doing an MFA prompt to confirm a publish by a CI workflow - but last I looked this was a convoluted process of opening an HTTPS tunnel out (using a third-party solution) so that you could provide the code.
I'd love to see either npm or GitHub provide an easy, out-of-the-box way for me to provide/confirm a code during CI.
Publishing a package involves two phases: uploading the package to npmjs, and making it available to users. Right now these two phases are bundled together into one operation.
I think the right way to approach this is to unbundle uploading packages and publishing them so that they're available to end users.
CI systems should be able to build & upload packages in a fully automated manner.
Publishing the uploaded packages should require a human to log into npmjs's website & manually publish the package and go through MFA.
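You can get part of the way there today with dist-tags: CI uploads to a non-default tag so nothing lands on `latest`, and a human promotes it after logging in with MFA (package name/version below are illustrative). It's not a full unbundling - the version is still installable explicitly once uploaded - but the default install path stays gated.

```sh
# CI: upload the build, but don't make it the default install target
npm publish --tag staging

# Human, after MFA login and review: promote to latest
npm dist-tag add my-package@1.2.3 latest
```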
Completely agree tbh, and that would be one of my preferred approaches should npm be the actor to implement a solution.
I also think it makes sense for GitHub to implement the ability to mark a workflow as sensitive and requiring "sudo mode" (MFA prompt) to run. It's not miles away from what they already do around requiring maintainer approval to run workflows on PRs.
Ideally both of these would exist, as not every npm package is published via GitHub Actions (or any CI system), and not every GitHub workflow taking a sensitive action is publishing an npm package.
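On the GitHub side, the closest thing today is probably gating the publish job behind an environment with required reviewers - a manual approval click rather than a true MFA prompt, but the same shape. A rough sketch (workflow, environment and secret names are illustrative):

```yaml
# .github/workflows/publish.yml
jobs:
  publish:
    runs-on: ubuntu-latest
    environment: release   # configured with required reviewers, so the job waits for human approval
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-node@v4
        with:
          node-version: 20
          registry-url: https://registry.npmjs.org
      - run: npm ci
      - run: npm publish
        env:
          NODE_AUTH_TOKEN: ${{ secrets.NPM_TOKEN }}
```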
I'm feeling that maybe the entire concept of "publishing packages" isn't really needed? Instead, the VCS can be used as the "source of truth", with no extra publishing step required.
This is how Go works: you import by URL, e.g. "example.com/whatever/pkgname", which is presumed to be a VCS repo (git, mercurial, subversion, etc.) Versioning is done by VCS tags and branches. You "publish" by adding a tag.
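Roughly, the whole lifecycle looks like this (module path/version are just the example from above):

```sh
# Maintainer "publishes" by tagging the repo - there is no separate upload step
git tag v1.2.3
git push origin v1.2.3

# Consumers fetch that exact tag straight from the VCS host (usually via the module proxy)
go get example.com/whatever/pkgname@v1.2.3
```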
While VCS repos can be and have been compromised, this removes an entire attack surface from the equation. If you read every commit, or a diff between two tags, then you've seen it all. No need to also diff the .tar.gz packages. I believe this would have prevented this entire incident, and I believe also the one from a few weeks ago (AFAIK that also relied only on compromised npm accounts, and not VCS?)
The main downside is that moving a repo is a bit harder, since the import path will change from "host1.com/pkgname" to "otherhost.com/pkgname", or "github.com/oneuser/repo" to "github.com/otheruser/repo". Arguably, this is a feature – opinions are divided.
Other than that, I can't really think of any advantages a "publish package" step adds? Maybe I'm missing something? But to me it seems like a relic from the old "upload tar archive to FTP" days before VCS became ubiquitous (or nigh-ubiquitous, anyway).
There's also a cost: installs take much longer, you need the full toolchain installed, and builds are no longer reproducible due to variations in the local build environment. If everything you do is a first-party CI build of a binary image you deploy, that's okay, but for tools you're installing outside of that kind of environment it adds friction.
Agreed. In the JS world? Hell no. Ironically, doing a local build would itself pull in a bunch of dependencies, whereas now you can at least, technically, have just one built dependency.
None of these are problems for Go: the pull-through proxy is fast and eliminates the need for a toolchain if you just want to download, and Go builds are fully bit-for-bit reproducible.
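Concretely, the defaults already do this - modules come through the public proxy and can be downloaded and checksum-verified without compiling anything:

```sh
# Default since Go 1.13
export GOPROXY=https://proxy.golang.org,direct

# Fetch and verify all module dependencies without building
go mod download
```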
That would be an impossible expectation on the Go toolchain. The pull-through proxy can't magically avoid the need to transfer all dependencies to my device, especially including any native code or other resources. Large projects are going to need to download stuff - think about how some cloud clients build code dynamically from API definitions, or how many codecs wrap native code.
Similarly, newer versions of Go change the compiler (which, to be fair, is a good thing), so even if I start with the same source in Git I might not end up with the same compiled bytes in the result.
Again, none of this is a bad thing: it just means that I want to compile binaries and ship those so they don't unexpectedly change in the future, and my CI pipeline doesn't need to have a full Go build stage when all I want is to use Crane to do something with a container.
We have a Terraform monorepo with many small workspaces (i.e. state files). The amount of disk space used by the .terraform directories on a fully inited clone is wild.
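Assuming most of that space is duplicated provider binaries (rather than modules), Terraform's plugin cache helps a lot - every workspace links to one shared download instead of keeping its own copy:

```hcl
# ~/.terraformrc - the cache directory must already exist
plugin_cache_dir = "$HOME/.terraform.d/plugin-cache"
```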
As a lot of these npm "packages" are glorified code snippets that should never have been individual libraries, perhaps this would drive people to standardise and improve the build tooling, or even move towards having sensibly sized libraries?
Yes, there’s widespread recognition that the small standard library makes JavaScript uniquely dependent on huge trees of packages, and that many of them (e.g. is-arrayish from last week) are no longer necessary but still linger from the era where it was even worse.
However, this isn’t a problem specific to JavaScript – for example, Python has a much richer standard library and we still see the same types of attacks on PyPI. The entire open source world has been built on a concept of trust which was arguably always more optimistic than realistic, and everyone is pivoting – especially after cryptocurrency’s inherent insecurity created enough of a profit margin to incentivize serious attacks.
> Really wish the norm was that companies hosted their own registries for their own usage
Is this not the norm? I've never worked anywhere that didn't use/host their own registry - both for hosting private packages and as a caching proxy to the public registry (and therefore more control over availability and security policy).
https://verdaccio.org/ is my go-to self-hosted solution, but the cloud providers have managed solutions and there's also JFrog Artifactory.
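For reference, a caching-proxy Verdaccio setup is close to its default config - something like this (the private scope name is illustrative):

```yaml
# config.yaml
storage: ./storage

uplinks:
  npmjs:
    url: https://registry.npmjs.org/

packages:
  '@mycompany/*':          # private packages, never proxied upstream
    access: $authenticated
    publish: $authenticated
  '**':
    access: $all
    publish: $authenticated
    proxy: npmjs           # cache-through to the public registry
```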
One corollary of this is that many commercial usages of packages don't contribute much to download stats, as often they download each version at most once.
Another advantage of this would be for CI/CD, where MFA can be a pain.
If I could have a publish token / OIDC auth in CI that required an additional manual approval in the web UI before the package was actually published, I could imagine this working well.
It would help reduce risk from CI system breaches as well.
There are already "package published" notification emails; it's just that at that point it's too late.
> make complexity manageable: strict isolation between components, explicit timing constraints, and simple, well-defined interfaces.
Maybe I'm missing something, but those simple, well-defined interfaces are the types?
I've worked with large JavaScript codebases before TypeScript came on the scene, and functions with a single params object were rife; combined with casual mutation and nested async callbacks, that made for a mess where it was very difficult to get confidence about what was actually being passed to a given function (presumably easier if you'd written it yourself instead of joining later).
A type system so that you can define those interfaces (and check they're adhered to) feels pretty essential to me.
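e.g. the difference between a bare params object and a declared interface (types made up for illustration):

```typescript
// Before: nothing tells you (or the compiler) what callers actually pass
function createUser(params: any) { /* ... */ }

// After: the interface *is* the well-defined, checked contract
interface CreateUserParams {
  email: string;
  displayName: string;
  roles?: string[]; // optional, defaulted inside
}

function createUser(params: CreateUserParams) { /* ... */ }

createUser({ email: "a@example.com", displayName: "Ada" }); // checked at compile time
```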
For me the issue is that at work, with 16GB of RAM, I'm basically always running into swap and having things grind to a halt. My personal workstation has 64GB, and the only time I experience issues is when something's leaking memory.