These do not solve fundamental issues as mentioned elsewhere.
Hell, just a few weeks ago I wanted to try out some tool someone made in Python (available as just the Python code specific to that project off GitHub), but it was broken because of some minor compatibility breakage between the version the author used and the current one. The breakage wasn't in the tool itself but in some dependency 3-4 layers down (i.e. some dependency of a dependency of a dependency). Meanwhile, the exact Python version the author used wasn't available in the distro packages.
Eventually I solved it by downloading the source for the exact Python version inside a minimal Debian Docker container, compiling it, and installing the rest via pip (meaning the only thing I changed was the Python version, nothing else).
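Roughly, the workaround looked like this (the base image and the Python release below are just placeholders; substitute whatever version the project was actually written against):

```
docker run -it debian:stable-slim bash

# then, inside the container:
apt-get update && apt-get install -y build-essential wget libssl-dev zlib1g-dev libffi-dev
wget https://www.python.org/ftp/python/3.X.Y/Python-3.X.Y.tgz
tar xf Python-3.X.Y.tgz && cd Python-3.X.Y
./configure && make -j"$(nproc)"
make altinstall                               # installs python3.X alongside, not over, the system python
python3.X -m pip install -r requirements.txt  # everything else exactly as the project specified
```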
Neither conda nor venv really solves the underlying problem.
Most ML project authors only supply a "top-level" requirements file, that is, one that only contains the list of dependencies directly required by the project, not the full graph of both direct and indirect dependencies.
Those dependencies are often unversioned. This means pip will fetch their latest versions, which might be incompatible with what the project actually requires, e.g. numpy 2.x instead of the numpy 1.x the project was developed with.
Even if the direct dependencies are versioned correctly, it's likely that one of them has its own improperly versioned dependency. Maybe the project doesn't depend on numpy directly, but it depends on foo 0.8.14, which depends on bar 3.11.4, which is sloppy and depends on an unversioned numpy, which will resolve to 2.x nowadays even though the code actually needs 1.x.
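To make that concrete with the hypothetical names above (foo, bar, and the unpinned numpy), the shipped file versus what pip actually resolves can look like this (versions purely illustrative):

```
# what the author ships: direct deps only
$ cat requirements.txt
foo==0.8.14

# what actually gets pulled in: foo -> bar -> numpy, with the last hop unpinned
$ pip install -r requirements.txt
$ pip show bar | grep '^Requires:'
Requires: numpy
$ pip show numpy | grep '^Version:'
Version: 2.1.0        # resolves to today's 2.x, even though the code was written against 1.x
```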
You can `pip freeze`, but that doesn't always work, as the graph of required dependencies might differ across platforms and Python versions. It also has the problem of not distinguishing between the constraints the project actually imposes and the pins that are simply whatever pip happened to resolve.
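For illustration (made-up pins), a freeze dump flattens everything into one undifferentiated list:

```
$ pip freeze
bar==3.11.4
foo==0.8.14
numpy==1.26.4
# ...plus every other transitive dependency resolved on this particular
# machine and Python version, direct and indirect pins all mixed together
```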
Then there's the fact that pip installs dependencies one by one instead of looking at the whole list and figuring out which versions will work together. If you need both foo and bar, where foo needs numpy 1.x and the latest bar needs numpy >2.0, you will get numpy >2.0 and a broken foo, instead of a working foo and a slightly older working bar that was still fine with numpy 1.x. This is actually one problem that conda solves, at a serious performance cost.
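With the same hypothetical foo and bar, you can hit this just by installing them in separate steps (which is how it usually happens in practice):

```
pip install foo    # resolver picks a numpy 1.x that foo needs
pip install bar    # latest bar wants numpy >2.0, so numpy gets upgraded;
                   # pip prints a dependency-conflict warning but finishes anyway,
                   # leaving foo broken at runtime
```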
Not to mention the fact that requirements files often fall "out of sync" with what is actually installed. It is too easy to follow the suggestion in some exception message and just `pip install` the missing package, forgetting to add it to requirements.txt.
Then there's the Python version debacle: people often don't specify the version of Python needed, which sometimes makes the requirements uninstallable (or, even worse, subtly broken).
The solution to all these problems is using uv. It keeps a lockfile with the exact set of working versions (always resolved for multiple platforms at once), separate from the version constraints imposed by the project author. It resolves all packages together without conda's performance problems (being written in Rust probably helps). It encourages commands like `uv add`, which install a package and record it as a dependency in a single step. It even tracks Python versions, fetching the exact interpreter needed when necessary.
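A minimal sketch of that workflow (the project name, package names and versions here are just placeholders):

```
uv init myproject && cd myproject
uv python pin 3.11          # record the interpreter version; uv downloads it if missing
uv add "numpy<2" requests   # constraints go into pyproject.toml,
                            # the fully resolved cross-platform graph goes into uv.lock
uv sync                     # recreate the exact locked environment anywhere else
```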
Those are decent options but you can still run into really ugly issues if you try to go back too far in time. An example I ran into in the last year or two was a Python library that linked against the system OpenSSL. A chain of dependencies ultimately required a super old version of this library and it failed to compile against the current system OpenSSL. Had to use virtualenv inside a Docker container that was based on either Ubuntu 18.04 or 20.04 to get it all to work.
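In case it helps anyone stuck in the same spot, the shape of the fix was roughly this (the actual library and versions don't matter):

```
docker run -it ubuntu:18.04 bash

# then, inside the container:
apt-get update && apt-get install -y python3 python3-dev virtualenv build-essential libssl-dev
virtualenv -p python3 /opt/env
/opt/env/bin/pip install -r requirements.txt  # the ancient sub-dependency now compiles
                                              # against 18.04's older system OpenSSL
```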
To a certain extent. The big difference being how difficult it would be to work around or update. If my C code is linking against a library whose API has changed, I can update my code to use the new API so that it builds. In the case I ran into… it wasn’t one of my dependencies that failed to build, it was a dependency’s dependency’s dependency with no clear road out of the mess.
I could have done a whole series of pip install attempts to figure out the minimal version bump to my direct dependency that would pull in a version of the sub-dependency that actually compiles, and then adapted my Python code to that newer version instead of the one it was originally pinned to.