Their point is that if 1000 programs use the same 1000 libraries, static linking duplicates all those libraries into each binary, taking that much more storage and memory (which can hurt performance as well), effectively putting 1,000,000 library copies in use.
Dynamic linking gives you M binaries + N libraries. Static linking gives you M * N library copies.
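Concretely, with the numbers from that scenario (M = N = 1000, which is their hypothetical, not mine):

    dynamic:  1000 binaries + 1000 shared libraries =     2,000 objects on disk
    static:   1000 binaries * 1000 embedded copies  = 1,000,000 library copies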
But no one is proposing 1000 statically linked programs. No one said every binary in a system; only some binaries are a problem. The 1000-by-1000 scenario is hyperbole, not a useful or valid argument.
What I said, specifically, is that I'd rather have a static binary than a flatpak/snap/appimage/docker/etc. That is a comparison between two specific things, and neither of them is "1000 programs using 1000 libraries."
And some binaries already ship with their own copies of all their libraries anyway, just in forms other than static linking. If there are 1000 flatpaks/snaps/docker images/etc., then that million library copies is already out there in an even worse form than if they were all static binaries. There are not, generally, that many on any given single system yet, though the number is growing, not shrinking.
For all the well-known and obvious benefits of dynamic linking, there are reasons why it's sometimes not a good fit for the task.
And in those cases where, for whatever reason, you want the executable to be self-contained, there are any number of ways to arrange it: a simple tarball with the libs and binary in non-conflicting locations plus a launcher script that sets a custom lib path (or the binary compiled with that lib path baked in), appimage/snap/etc., a full docker or other container, a unikernel, or a plain static binary.
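As a sketch of that first option (the myapp name and the bin/lib layout here are hypothetical; in practice this is often just a two-line shell script, but the idea is the same):

    #!/usr/bin/env python3
    # Hypothetical launcher shipped at the top of the tarball:
    #   myapp/
    #     run          <- this script
    #     bin/myapp    <- the real dynamically linked executable
    #     lib/         <- its bundled shared libraries
    import os
    import sys

    here = os.path.dirname(os.path.abspath(__file__))
    exe = os.path.join(here, "bin", "myapp")

    # Prepend the bundled lib dir so the dynamic loader resolves our
    # copies of the libraries before (or instead of) the system ones.
    env = dict(os.environ)
    env["LD_LIBRARY_PATH"] = os.pathsep.join(
        p for p in (os.path.join(here, "lib"), env.get("LD_LIBRARY_PATH")) if p
    )

    # Replace this process with the real binary, passing arguments through.
    os.execve(exe, [exe, *sys.argv[1:]], env)

The parenthetical's variant skips the wrapper entirely: with GNU toolchains you can bake the search path into the binary at link time, e.g. -Wl,-rpath,'$ORIGIN/lib'.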
All of those give different benefits and incur different costs. Static linking's particular benefit is being dead simple: it's both space- and complexity-efficient compared to any container or bundle system.