This is a very quick route to a maintenance nightmare, imo.
If you have totally unpinned dependencies, and you come back to a project after a year untouched, or 5 years, and it no longer works - which dependency update broke it?
I don't agree that using an outdated package is necessarily a problem at all. Some versions are done! You don't need the latest version of every possible package. You don't necessarily need to update _ever_ (which is why this differs from CI). These updates are often entirely unnecessary churn.
There absolutely are vulnerabilities in some old versions, and those updates are necessary (but tooling & notifications to easily handle this have dramatically improved in recent years, especially on GitHub). There will also be vulnerabilities in new packages though, which may be unknown, and will often not exist in older, much simpler versions.
Using a well-tested version of a dependency that does exactly what you need is not less secure than chasing the latest version at all times without a specific reason.
I've found that manually updating packages on the rare occasions where relevant vulnerabilities arise, and using existing working versions without changes the rest of the time, has been perfectly effective over many years now. Avoiding the shifting sands of external dependencies wherever possible means that a project that worked 5 years ago still works _exactly_ the same today.
I'd rather see more software go in this direction, valuing reproducibility & known correctness (i.e. with isolated, pinned dependencies in some form) over 'always be latest' dependency updates and the complex, hard-to-reproduce bugs that those shifting dependency interactions can create.
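To make "pinned, in some form" concrete, here's a minimal sketch of the idea in Python - the package names and version numbers are just illustrative placeholders, and most ecosystems already give you this via lockfiles (package-lock.json, Cargo.lock, requirements.txt with == pins, and so on):

```python
# Minimal sketch: verify the running environment matches a pinned dependency list.
# The package names and versions below are hypothetical examples, not recommendations.
from importlib.metadata import version, PackageNotFoundError

PINS = {
    "requests": "2.31.0",
    "urllib3": "2.0.7",
}

def check_pins(pins: dict[str, str]) -> list[str]:
    """Return human-readable mismatches between the pins and what is installed."""
    problems = []
    for name, pinned in pins.items():
        try:
            installed = version(name)
        except PackageNotFoundError:
            problems.append(f"{name}: pinned {pinned} but not installed")
            continue
        if installed != pinned:
            problems.append(f"{name}: pinned {pinned} but found {installed}")
    return problems

if __name__ == "__main__":
    mismatches = check_pins(PINS)
    if mismatches:
        raise SystemExit("Environment drifted from pins:\n" + "\n".join(mismatches))
    print("All pinned dependencies match the running environment.")
```

The point isn't this particular script; it's that the pinned state is explicit, checked in, and verifiable, so the project you run today is the project you ran five years ago.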
> If you have totally unpinned dependencies, and you come back to a project after a year untouched, or 5 years, and it no longer works - which dependency update broke it?
In this case, it doesn't matter to me which dependency update broke it; what matters to me is having the tests pass again with all dependencies.
> I don't agree that using an outdated package is necessarily a problem at all. Some versions are done!
If a version is done, then why is there a new release? Using old versions is tech debt that will one day blow up and cost much more to correct than if it had been corrected over time, not to mention the security risk.
> You don't necessarily need to update _ever_
Another dependency might start requiring a newer version of that dependency, in which case aren't we all better off using the latest versions of everything? The cost is some effort; the benefit is more features, security, performance, fewer bugs - basically a better program.
> There will also be vulnerabilities in new packages though, which may be unknown, and will often not exist in older, much simpler versions.
Then why upgrade at all when you have a dependency with a security issue? After all, in your upgrade you might be adding even more unknown security issues that might be even more dangerous.
> Using a well-tested version of a dependency that does exactly what you need is not less secure than chasing the latest version at all times without a specific reason.
If I can test well that a newer version of a dependency works for me, why not upgrade it? There might be performance, security or other bugfixes, and I'm allowing other maintainers of other dependencies to also use that newer version.
> I'd rather see more software go in this direction, valuing reproducibility & known correctness (i.e. with isolated, pinned dependencies in some form) over 'always be latest' dependency updates and the complex, hard-to-reproduce bugs that those shifting dependency interactions can create.
Ok, but then again, a newer version might fix a security bug, and that fix may not have been backported to your old version, especially if it's 5 years old.
Basically you're advocating against continuous integration (I'm talking about the "practice", not the tool that runs automated tests that people call CI).
> In this case, it doesn't matter to me which dependency update broke it; what matters to me is having the tests pass again with all dependencies.
Knowing which change (or changes) broke it can make resolving the issue much faster.
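For example, if you suspect one dependency, you can bisect its release history mechanically. A rough sketch in Python - the package name and version list are hypothetical, and it assumes pip and pytest are available on the PATH:

```python
# Sketch: bisect the release history of a single suspect dependency to find the
# first version that breaks the test suite. Assumes versions[0] is known good and
# versions[-1] is known bad; leaves the last tried version installed.
import subprocess

PACKAGE = "somepackage"                         # hypothetical suspect dependency
VERSIONS = ["1.0", "1.1", "1.2", "2.0", "2.1"]  # oldest (good) -> newest (bad)

def tests_pass_with(ver: str) -> bool:
    # Install the candidate version, then run the test suite as the oracle.
    subprocess.run(["pip", "install", "--quiet", f"{PACKAGE}=={ver}"], check=True)
    return subprocess.run(["pytest", "-q"]).returncode == 0

def first_bad(versions: list[str]) -> str:
    lo, hi = 0, len(versions) - 1   # invariant: versions[lo] passes, versions[hi] fails
    while hi - lo > 1:
        mid = (lo + hi) // 2
        if tests_pass_with(versions[mid]):
            lo = mid
        else:
            hi = mid
    return versions[hi]

print("first breaking version:", first_bad(VERSIONS))
```

With a lockfile checked into version control, `git bisect` over its history gets you the same answer across all dependencies at once.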
> If a version is done, then why is there a new release? Using old versions is tech debt that will one day blow up and cost much more to correct than if it had been corrected over time
Because <new feature> was added, which is totally irrelevant to your use case.
> not to mention the security risk.
As parent mentioned, there is automated tooling for this. Your tooling yells that you are using a version of a package with a vulnerability, so you update.
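And the notification side of that tooling is conceptually simple: check each installed version against a vulnerability database. A rough stdlib-only sketch in Python querying the public OSV.dev API - the watched package names are arbitrary examples, and in practice tools like pip-audit, npm audit, or Dependabot do this end to end for you:

```python
# Sketch: ask the OSV.dev vulnerability database whether the installed versions of a
# few packages have known advisories. Watched package names are hypothetical examples.
import json
from importlib.metadata import version
from urllib.request import Request, urlopen

OSV_URL = "https://api.osv.dev/v1/query"
PACKAGES = ["requests", "urllib3"]  # hypothetical watch list

def known_vulns(name: str, installed: str) -> list[str]:
    # OSV's /v1/query endpoint returns {"vulns": [...]} when advisories exist, {} otherwise.
    query = {"package": {"name": name, "ecosystem": "PyPI"}, "version": installed}
    req = Request(OSV_URL, data=json.dumps(query).encode(),
                  headers={"Content-Type": "application/json"})
    with urlopen(req) as resp:
        result = json.load(resp)
    return [v["id"] for v in result.get("vulns", [])]

for name in PACKAGES:
    installed = version(name)
    ids = known_vulns(name, installed)
    if ids:
        print(f"{name}=={installed}: known advisories {ids} -> time to update")
    else:
        print(f"{name}=={installed}: no known advisories")
```

When that check starts yelling, that's the cue to update; the rest of the time the pins stay put.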
> Another dependency might start requiring a newer version of that dependency, in which case aren't we all better off using the latest versions of everything?
If you don't update A, it doesn't matter if a newer version of A wants a newer version of B.
> The cost is some effort; the benefit is more features, security, performance, fewer bugs - basically a better program.
This assumes bugs and vulnerabilities decrease monotonically over time. This isn't true.
> Then why upgrade at all when you have a dependency with a security issue? After all, in your upgrade you might be adding even more unknown security issues that might be even more dangerous.
Because a known (to the world) security flaw is orders of magnitude more dangerous than an unknown (to the world) one, all else being equal. If there is a CVE for it, there are likely large-scale attempts at exploiting it anywhere it can be found.
> If I can test well that a newer version of a dependency works for me, why not upgrade it? There might be performance, security or other bugfixes, and I'm allowing other maintainers of other dependencies to also use that newer version.
Large amounts of labor. Furthermore, "well-tested" may include "battle tested". Some bugs make it through to deployment, and get caught and fixed. Updating dependencies without a good reason means more potential bugs slipping through, which means more bugs being discovered in deployment and a worse experience for the end user.