> What are the issues with using docker to solve this problem ?
Docker alone doesn't solve the problem and neither does pip unless you take extra steps.
Here's a common use case to demonstrate the issue:
I open source a web app written in Flask and push it to GitHub today with a requirements.txt file that only has top level dependencies (such as Flask, SQLAlchemy, etc.) included, all pinned down to their exact patch version.
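To make that concrete, here's a sketch of what such a top-level-only requirements.txt might look like (the package names come from this example; the exact version numbers are made up for illustration):

```
flask==2.0.1
sqlalchemy==1.4.23
celery==5.1.2
```

Note that nothing here mentions Werkzeug, Vine, etc. — those get resolved by pip at install time, to whatever versions exist that day.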
You come in 3 months from now and clone the project and run docker-compose build.
At that point you're going to get different versions of many sub-dependencies than I had 3 months ago, which can break the build. This has happened multiple times, for example with Celery and its sub-dependency Vine, and with Flask and its sub-dependency Werkzeug.
So the answer is simple, right? Just pip freeze your requirements.txt file. That works, but now you have 100 dependencies in that file when really only about 8 of them are top level dependencies. Maintaining that by hand becomes a nightmare: you basically need to become a human dependency resolver, tracing every package back to whichever top level dependency pulled it in.
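For contrast, here's a truncated, illustrative sample of what pip freeze spits out after installing just those few top-level packages (the real list runs much longer, and the versions here are only examples):

```
amqp==5.0.6
billiard==3.6.4.0
celery==5.1.2
click==8.0.1
flask==2.0.1
itsdangerous==2.0.1
jinja2==3.0.1
kombu==5.1.0
sqlalchemy==1.4.23
vine==5.0.0
werkzeug==2.0.1
```

Three months later, just looking at this file, it's not obvious which lines you chose and which ones pip dragged in for you.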
Fortunately pip has an answer to this with the -c (constraints) flag, but for such a big problem it's not very well documented or talked about.
It is a solvable problem, though: you can have a separate lock file with pip without using any external tools, and the solution works with and without Docker. I have an example of it in this Docker Flask example repo https://github.com/nickjj/docker-flask-example#updating-depe..., but it'll work without Docker too.
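A minimal sketch of that workflow with plain pip (the file name requirements-lock.txt is my own choice here, not necessarily what the linked repo uses; see the repo for the full version):

```shell
# Keep only your ~8 top level deps in requirements.txt, then
# capture every resolved version (subs included) into a lock file:
pip install -r requirements.txt
pip freeze > requirements-lock.txt

# Later, or inside your Dockerfile, install from the top level file
# while constraining all sub-dependencies to the locked versions:
pip install -r requirements.txt -c requirements-lock.txt
```

The nice part is that requirements.txt stays human-readable while the -c file does the exact pinning, and when you want to upgrade you regenerate the lock file instead of hand-editing 100 lines.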