You should play it while you can. It's a very fun game and the PVE server is some of the best relaxing gameplay I have experienced (as long as falling into an abyss doesn't stress you out)
SpatialOS is doing much more than that. Its main goal was to run a single world spread across multiple server instances, instead of separate worlds on individual servers.
EDIT: Also, how can you determine whether the string should become a date or stay a string? Sure, if it fits the format, you can always convert it to a Date, but if your object is coming from something that intentionally sends ISO dates as strings and you used JSON.parse with a reviver, you would get back a different object.
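A minimal sketch of the ambiguity, assuming a reviver that treats any full ISO-8601-looking string as a date:

```javascript
// Matches full ISO-8601 timestamps like "2018-07-12T10:00:00.000Z"
const isoDate = /^\d{4}-\d{2}-\d{2}T\d{2}:\d{2}:\d{2}(\.\d+)?Z$/;

const reviver = (key, value) =>
  typeof value === 'string' && isoDate.test(value) ? new Date(value) : value;

// Suppose the sender meant createdAt to stay a plain string:
const input = '{"createdAt":"2018-07-12T10:00:00.000Z","note":"2018-07-12"}';

const plain = JSON.parse(input);
const revived = JSON.parse(input, reviver);

console.log(typeof plain.createdAt);            // "string"
console.log(revived.createdAt instanceof Date); // true — not what was sent
```

The reviver has no way to know the sender's intent; it can only guess from the shape of the string.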
Piping is relatively easy to write a function for. Here [0] is a package that I wrote that has a few trivial functions (including pipe) for things such as this.
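For illustration, a trivial pipe fits in one line (a sketch in the same spirit, not the package's actual source):

```javascript
// Left-to-right function composition: pipe(f, g)(x) === g(f(x))
const pipe = (...fns) => (input) => fns.reduce((acc, fn) => fn(acc), input);

const addOne = (x) => x + 1;
const double = (x) => x * 2;

const addThenDouble = pipe(addOne, double);
console.log(addThenDouble(3)); // 8
```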
Not OP, but where I work, we have a hybrid approach for our system.
> Share code between your SPAs, for example, utilities?
We alias a common folder for our utils with webpack, which allows all our code to use the same utilities. We actually have two: one shared between frontend and backend, and one for frontend only.
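For anyone curious, the alias setup looks roughly like this (folder and alias names are invented, not our actual layout):

```javascript
// webpack.config.js (excerpt)
const path = require('path');

module.exports = {
  resolve: {
    alias: {
      // utilities shared between frontend and backend
      '@common': path.resolve(__dirname, 'src/common'),
      // utilities shared across frontend apps only
      '@frontend': path.resolve(__dirname, 'src/frontend-common'),
    },
  },
};
```

Any SPA can then `import { formatDate } from '@common/dates'` without relative-path gymnastics.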
> What if your SPAs have large dependencies (React, Vue), should every SPA come with their own?
Webpack code splitting is how we split our apps. We also build into multiple bundles to keep bundle size small. That does mean that React gets bundled with each, but if you split things appropriately, you can potentially save a few MB by caching each bundle from page to page.
> Should every SPA have its own build tools?
With webpack, you can build all your bundles with one config / command
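Roughly, that's a single config with multiple entry points, something like this (entry names are illustrative):

```javascript
// webpack.config.js (excerpt) — one `webpack` invocation builds every SPA
module.exports = {
  entry: {
    dashboard: './src/dashboard/index.js',
    settings: './src/settings/index.js',
  },
  output: { filename: '[name].[contenthash].js' },
  optimization: {
    // pull modules shared between entries into their own cached chunk
    splitChunks: { chunks: 'all' },
  },
};
```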
> What if there's an update to a library, should it be allowed that one SPA runs on an older library?
We have a root package.json for all of our react apps. This means that we don't update a library unless we can update it across the board. It takes more time to test upgrades, but it also means smaller bundles, easier package management and easier maintenance in the long run.
I clicked the link above, edited my cookie preferences on the pop-up, waited 5 seconds for them to be updated... still updating, so I closed the tab. Modern UX sucks.
I will be honest, I was completely unaware of your other products, not that I would be your target audience. That being said, thank you for not pushing your other products on tracker users. That was something that always annoyed me with Atlassian.
Shameless plug, I made one that uses an AWS Lambda and S3 and encrypts everything before it gets sent over the wire. You can view a live version of it here (https://todos.md/) or view the source here (https://github.com/Prefinem/TODOS.md)
Agreed. I think putting ‘shameless’ before posting a project of yours that answers or relates to the question/topic at hand is... wrong? People shouldn’t feel shame plugging their work, and so, it would follow, ‘shameless’ is unnecessary.
Edit: Not to attack the poster. The poster is following societal convention, basically.
For one, they've marked the issue as resolved before they've completed any forensic analysis to discover if additional compromised packages were uploaded with stolen credentials during the window in which they weren't revoked.
This gives the false sense that it's safe to install packages again.
You're the first person I've seen mention this, which seems like it should be the first and most obvious line of defense against bad stuff like this. +1
I used to be one of the biggest HN proponents of dunking on Node. I would give it a thrashing second only to the windmill dunks I would line up on Go.
These days, with ES6, TypeScript, and React...it's actually all pretty nice.
But NPM is still and forever a mess that just plain didn't learn from its predecessors--and my only guess as to the cavalier treatment of security issues is that the NPM crew's a bunch of premillennial dispensationalists banking on getting raptured (or bought, I guess, but that's way less fun) before the chickens come home to roost, 'cause just about anything else seems implausible.
I am thinking about getting in on Deno strictly because it might end up with a package management solution that is not held captive by an irresponsible venture-backed entity. It's not like anyone's breaking NPM's grip anytime soon. (Even if you tried, their API is so awful that you're functionally incompatible unless you literally copy their internal structure, so...yeah. Great. Hooray.)
NodeJS didn't create those problems - they'd be around with any NodeJS alternative.
btw, nodejs should provide some "isolated" mode (i.e. run as user "nobody-projectName-userName", e.g. "nobody-react-whatcanthisbee") and set up appropriate group permissions.
What saddens me is that there seems to be no way out. There are so many projects that worked fine in the past and now they're using npm, for example PhoneGap.
None of the criticisms of package signing here apply in this case. Package signing does cover this exact issue, which we should describe to be specific:
(1) someone has published a package.
(2) that package has become well known and used enough that the community as a whole trusts that the entity publishing that package is not there for malicious purposes
(3) the maintainer of that package has their account compromised, either by credential leak or by a vulnerability in the package management system itself (in this case, the former and not the latter)
If the package were signed, then the attacker could not have published the fake package that led to issues today. Does this solve the root chain-of-trust issue for whose packages you should trust? No, but nothing other than thoroughly reviewing every line can do that. Does it prevent a random unknown person from masquerading as the maintainer and publishing a malicious package? Yes.
There are real additional trust issues to solve, but let's not let those detract from the fact that package signing would have prevented the exact issue which we saw today. Defense in depth, always.
I'm not going to say that you're wrong in any particular thing you said. You are correct that package signing would have prevented this exact issue we saw today, and I'm not opposed to an argument for defense in depth.
However, I am going to reiterate my original point with a bit of clarification: "it is not clear that package signing will prevent malicious actors from compromising systems using a language package manager with anything approaching the success of distro package managers".
I think what you're proposing amounts to a two-tier system. There is somehow a set of known signatures that are trusted by the npm consumer[0] and then a vast sea of untrusted signatures. Developers sometimes add libraries, and add one (or more?) signatures to their trusted set.
First, in the specific case of NPM, we have an existing system with huge numbers of transitive dependencies and no existing package signing. I don't see how you retrofit signing onto that system. Too many developers will ignore it, and because there are so many libraries, you'll end up with an absolutely massive number of trusted keys.
Second, suppose you're starting from scratch, so you can enforce signatures from day one. You still have the problem of transitive dependencies. It's certainly more work for a malicious actor to either a) create legitimate libraries that will be included in other libraries, then convert them to malware, or b) steal a developer's signing keys, but neither one is remotely impossible, given the large number of libraries involved. You just need one part of that web of libraries to involve a sloppy or malicious developer, and you are hosed.
[0] Interesting question: how is this handled? Is it part of the lockfile for a project? Something system level, with the problems of devs installing random shit on their machines and ending up with too much being trusted, and also lack of reproducible builds?
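One hypothetical shape for it (an invented lockfile-style format, not anything npm actually does) would pin the maintainer key alongside the content hash:

```javascript
// Invented lockfile entry: in addition to the integrity hash, pin the
// public key the package must be signed with. Values are placeholders.
const lockEntry = {
  'some-lib': {
    version: '1.3.0',
    integrity: 'sha512-PLACEHOLDER',
    signedBy: 'ed25519:PLACEHOLDER_PUBKEY',
  },
};
```

Keeping it in the lockfile at least makes the trusted set per-project and reproducible, though it doesn't solve who vets a key the first time it's added.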
That's from the maintainer of PyPI. PyPI shipped SSH Decorator with malware that stole SSH keys. Another case that would have been entirely preventable with package signing. I've concluded they don't know what they're doing either.
I'm not surprised if you don't know about it, because it was [dead] on HN as soon as it was posted... which should tell you a bit about the echo chamber you're in here.
Except none of these comments take into account the fact that actually no one verifies apt package signatures either, and this fairy tale world where all of this is a solved problem in some other domain is not manifest.
The reason that package signing never really matters that much is that once you boil the threat model down to "package publish credentials are compromised" or "package repository infrastructure is compromised", the form of the credentials involved is of little consequence to the former and uninvolved in the latter. The threat is against the client, not intermediates.
The original developer here reused credentials. There is nothing in signing that protects against this attitude. This attitude is the one that also reuses credentials for signing keys, if they're encrypted at all - I'd bet this user has numerous stale ssh keys and never encrypted any of their secrets. Some of the top eslint contributors have multiple short RSA keys on their GitHub. None of them have modern keys.
There are more effective places to invest to better protect users. Auditing infrastructure for example.
The most worrisome attack on that page is the arbitrary package attack, and on my Debian installation it's not feasible. The insecure apt.conf settings are not enabled on Debian by default. Saying that apt's package signing is absolutely ineffective is dishonest.
Note that metadata signing in apt is just indirect package signing. The package sha256 sums are part of the metadata. It looks like dpkg also has support for package signing, but at that point it would be redundant.
This is an obvious ad hominem attack, and it completely misses the point of the pypi post.
If someone says "this problem cannot reasonably be solved", you don't get to discredit them by saying "look, you had the problem!" You have to actually rebut their arguments and say that it can be solved.
In fact, there's a pull request from 2013 for GPG package verification. It took over a year for a response from NPM, shooting it down. There's already an "I told you so" in the thread.[0]
>Thank you for your time and effort.
Yeah, thanks to you too, NPM. No time or effort to go around for progress on this issue in the interim 3 years, apparently.
If the hacker has your password, don't they also have the ability to publish a public key used to verify the signed package? It presumably would protect against distribution of a fake package outside of NPM, but if your NPM account is hosed isn't it too late?
If 2fa is enforced, having your password doesn't get you anything. Publishing an npm package isn't having a Twitter account, it's one of those things where enforcing 2fa shouldn't even be a usability question.
I don't think sandboxing install scripts will help much, as any code in the package will be executed when that package is require()'d. You really need to sandbox the whole of `node`.
* eslint user FooCorp also gets compromised, and a similarly-malicious version of foolib-js gets published that includes the _same code_ to steal tokens
* npm invalidates all tokens
* you decide to use foolib-js, and your newly-minted token is now compromised
While this is possible, I'm willing to give the NPM team at least a little benefit of the doubt that they actually researched the access logs before they state this:
> We determined that access tokens for approximately 4,500 accounts could have been obtained before we acted to close this vulnerability. However, we have not found evidence that any tokens were actually obtained or used to access any npmjs.com account during this window.[1]
I get that it's possible that other modules could already be infected, but it's also true that other modules could have been similarly infected long before this one.
Your quote wonderfully illustrates that npm are either being obfuscatory or entirely missing the point.
How did they determine that tokens for 4,500 accounts could have been obtained, and what is that even supposed to mean? The problem here is that any user of these packages could have had their .npmrc file read and exfiltrated, not just some upstream package maintainer. Were there only 4,500 valid npm tokens or something? I cannot imagine that is the case.
So either they looked at 4,500 packages uploaded during the compromise window and they're not explaining how they undertook to do that, or they don't understand the vector and are minimizing the severity of the issue.
I would assume their logs would possibly tell them which tokens were associated with the users that downloaded v3.7.2. npm probably doesn't need credentials to download a package, so the number of downloads is likely higher. Determining which other packages were affected is another matter entirely, and no one can say this attack vector is bound only by this specific date window. This could've been way more widespread unless they're unpacking payloads and grepping for key pieces of this specific attack.
I think it would be helpful if they could expose some of those logs, but considering the meat of what matters would be the IP addresses (to verify whether your machine or your CI server was compromised), GDPR effectively wiped that possibility off the table. It would almost behoove them to set up a kind of haveibeenpwned service where you can check against stuff like this in the future. It's not like this can't happen again, as the hole hasn't been closed completely; only this one set of compromised packages appears clean for now.
Calling it resolved is worse than doing nothing. If they had done nothing, at least people would know that "If I run npm install now, that's bad". Now they've claimed it is resolved, which tells their users "It's okay to start installing things again" when it isn't safe until an audit has been completed.