When you deploy the application to a server, the exact dependencies it was developed against get installed there. The server and its configuration exist primarily to run the application, so the server adapts to the needs of the application and gets the dependency versions the app prefers, instead of the application adapting to the server and making do with whatever libraries already happen to be there.
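As a rough sketch of what "exact dependencies" means in practice (the package pins below are just illustrative, not from the original comment), the app ships a fully pinned requirements file:

```
# requirements.txt -- every dependency pinned to the exact version the app was tested with
requests==2.31.0
sqlalchemy==2.0.25
jinja2==3.1.3
```

Then running `pip install -r requirements.txt` on the server (ideally inside a virtualenv) recreates exactly that set, regardless of what the server already had.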
I'd say it's a heavy-handed approach to mitigating more fundamental issues with how Python packages are maintained. If everybody wants to pin different versions, then we're going to have to install different versions of everything, which is what npm does, and I consider that heavier.
Again, it's all a question of point of view: what we see as a package-manager problem that causes us to keep reinventing package managers might actually be a problem with how we maintain our packages. My point of view is the latter. But I'm digressing.
When it comes to installing on "another machine", you don't know what Python they have, you don't know what libc they have, and so on. That is exactly what containers attempt to mitigate, so they seem like exactly the right tool for this problem.
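A minimal Dockerfile sketch makes the point (the image tag, paths, and module name are assumptions for illustration): it pins the interpreter and the base system along with the app's own dependencies, so "another machine" only needs a container runtime.

```dockerfile
# Pin the interpreter *and* the base system (glibc etc.) the app was tested against.
FROM python:3.12-slim

WORKDIR /app

# Install the exact pinned dependencies first, so this layer caches well.
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

# Then add the application code itself.
COPY . .

# "myapp" is a hypothetical package name.
CMD ["python", "-m", "myapp"]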
I think it's a fundamental problem with managing dependencies. On one hand, any given application usually knows which versions of its dependencies it actually supports, so it makes sense for the application to simply bundle those in: in the most extreme cases that's static linking/binary embedding, or (more usually) putting the dependencies in subdirectories of the application's own directory, in cases where the application has a "directory it lives in" instead of being thinly spread all over the system (e.g. over /bin, /etc, /usr/bin/, /usr/lib/, etc.).
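In Python that bundling usually looks like vendoring: copies of the dependencies live under the app's own directory and get put at the front of the import path. A rough sketch, with a made-up `_vendor` layout:

```python
# myapp/__main__.py -- hypothetical layout where dependencies are copied into myapp/_vendor/
import os
import sys

# Prepend the bundled copies so they win over anything installed system-wide.
_VENDOR_DIR = os.path.join(os.path.dirname(__file__), "_vendor")
sys.path.insert(0, _VENDOR_DIR)

import requests  # resolved from myapp/_vendor/, not from the system site-packages


def main():
    # Show which copy actually got imported.
    print(requests.__file__)


if __name__ == "__main__":
    main()
```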
On the other hand, users/sysadmins sometimes want to force the application to use a different version of a dependency, so the application may provide for that by somehow referencing the dependency from the ambient environment: usually that's done by looking for the dependency in a well-known/hard-coded path, or getting that path from a well-known/hard-coded env var, or from a config file (which you also have to get from somewhere), or from some other injection/locator mechanism, of which there are thousands.
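That escape hatch tends to boil down to something like this (the env var name and paths are made up for illustration): check for an explicit override first, then fall back to a conventional location, then to whatever the app bundles.

```python
# Hypothetical locator: let the sysadmin point the app at a different copy of its dependencies.
import os
import sys

candidates = [
    os.environ.get("MYAPP_DEPS_DIR"),                        # 1. explicit override via env var
    "/usr/lib/myapp/deps",                                   # 2. well-known hard-coded path
    os.path.join(os.path.dirname(__file__), "_vendor"),      # 3. the app's own bundled copies
]

for path in candidates:
    if path and os.path.isdir(path):
        sys.path.insert(0, path)
        break
```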
And all this stuff is bloody fractal: we have system-level packaging, then Python's own packaging on top of that, and then some particular Python application may decide to have its own plugin-distribution system of sorts (I've seen that), which goes on top of all of that, not to mention all the stuff happening in parallel (Qt sublibraries, GNOME modules, the npm ecosystem)... well, you get the picture. It kinda reminds me of a "Turing tarpit", and I doubt collapsing all this into one single layer of system-level packaging, with nothing on top, is really practical or even possible.
What about working on a project with a team?
What about deploying the code on another machine?