Neither conda nor venv really solves the underlying problem.
Most ML project authors supply only a "top-level" requirements file: one that lists the dependencies the project uses directly, not the full graph of direct and transitive dependencies.
Those dependencies are often unversioned, so pip fetches their latest releases, which may be incompatible with what the project actually needs, e.g. numpy 2.x instead of the numpy 1.x the project was developed against.
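A hypothetical example of what that looks like in practice:

```
# requirements.txt -- direct dependencies only, none of them pinned
numpy
pandas
torch
```

Running `pip install -r requirements.txt` against this today resolves numpy to 2.x, whether or not the code was written against 1.x.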
Even if the direct dependencies are pinned correctly, it's likely that one of them has its own improperly versioned dependency. Maybe the project doesn't depend on numpy directly, but it depends on foo 0.8.14, which depends on bar 3.11.4, which sloppily depends on an unpinned numpy; that resolves to 2.x nowadays even though bar actually needs 1.x.
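Sketching that chain (all names and versions are made up):

```
your-project
└── foo 0.8.14        # pinned by the author
    └── bar 3.11.4    # pinned by foo
        └── numpy     # unpinned: resolves to 2.x today, though bar assumes 1.x
```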
You can `pip freeze`, but that is not a complete fix: the resolved dependency graph can differ across platforms and Python versions, and the frozen file doesn't distinguish the constraints the author actually cares about from the exact pins pip recorded for every transitive dependency.
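For illustration, a frozen file flattens everything into bare pins, annotated here by hand (versions illustrative); `pip freeze` itself prints only `name==version` lines, with no hint of which ones you actually asked for:

```
$ pip freeze
certifi==2024.8.30          # pulled in via requests
charset-normalizer==3.4.0   # pulled in via requests
numpy==1.26.4               # a direct dependency
requests==2.32.3            # a direct dependency
...
```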
Then there's the fact that pip installs dependencies one by one instead of looking at the whole list and figuring out which versions work together. If you need both foo and bar, where foo needs numpy 1.x and the latest bar needs numpy >= 2.0, you end up with numpy 2.x and a broken foo, instead of a working foo alongside a slightly older bar that still worked with numpy 1.x. This is one problem conda actually solves, at a serious performance cost.
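The failure is easy to reproduce with sequential installs (foo and bar are stand-ins for real packages):

```
$ pip install foo          # pulls numpy 1.x, everything works
$ pip install bar          # latest bar wants numpy>=2, so pip upgrades numpy;
                           # foo now breaks at runtime (at best you get a
                           # dependency-conflict warning)
```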
Not to mention that requirements files often fall out of sync with what is actually installed. It is too easy to follow the suggestion in some exception message, `pip install` the missing package, and forget to add it to requirements.txt.
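The classic sequence, with `wandb` as an arbitrary example:

```
$ python train.py
ModuleNotFoundError: No module named 'wandb'
$ pip install wandb        # fixes it locally; requirements.txt never learns about it
```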
Then there's the Python version debacle: people often don't specify which Python version is needed, which sometimes makes the requirements outright uninstallable (or, even worse, subtly broken).
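The fix is a single, rarely written line; in a pyproject.toml it looks something like this (the range itself is illustrative):

```
[project]
requires-python = ">=3.10,<3.13"
```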
The solution to all these problems is uv. It keeps a lockfile with the exact set of working versions, resolved for multiple platforms at once and kept separate from the version constraints declared by the project author. It resolves all packages together, without conda's performance problems (being written in Rust probably helps). It encourages commands like `uv add`, which install a package and record it as a dependency in a single step. It even tracks the Python version itself, fetching the exact interpreter needed when necessary.
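A minimal sketch of that workflow (package names and versions are illustrative):

```
$ uv init                    # creates pyproject.toml
$ uv python pin 3.11         # records the interpreter version in .python-version
$ uv add "numpy<2" pandas    # writes the constraints to pyproject.toml and the
                             # full resolved graph to uv.lock
$ uv sync                    # reproduces the exact environment on another machine
$ uv run python train.py     # runs inside the managed environment
```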