H/2 doesn't solve blocking at the TCP level, but it solved another kind of blocking on the protocol level by adding multiplexing.
H/1 pipelining was unusable, so an H/1 client had to wait for a response before sending the next request, which added a ton of latency and made server-side processing serial and latency-sensitive. The solution was to open a dozen separate H/1 connections, but that multiplied setup cost and made congestion control worse by splitting it across many connections.
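To make the multiplexing point concrete, here's a rough Go sketch (the URLs are placeholders, and it assumes the origin speaks h2): against an HTTP/2 server, Go's default client negotiates HTTP/2 over TLS and the concurrent requests below ride one TCP connection as separate streams, whereas an H/1-only origin forces them to queue or open extra connections.

```go
// Minimal sketch: concurrent fetches that, against an h2-capable origin,
// share a single TCP connection as multiplexed HTTP/2 streams.
// example.com is a placeholder host, not a real test target.
package main

import (
	"fmt"
	"net/http"
	"sync"
)

func main() {
	urls := []string{
		"https://example.com/a.css",
		"https://example.com/b.js",
		"https://example.com/c.png",
	}

	var wg sync.WaitGroup
	for _, u := range urls {
		wg.Add(1)
		go func(u string) {
			defer wg.Done()
			resp, err := http.Get(u)
			if err != nil {
				fmt.Println(u, err)
				return
			}
			defer resp.Body.Close()
			// resp.Proto reads "HTTP/2.0" when the requests were
			// multiplexed rather than serialized or spread over
			// extra connections.
			fmt.Println(u, resp.Proto, resp.StatusCode)
		}(u)
	}
	wg.Wait()
}
```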
> it solved another kind of blocking on the protocol level
Indeed! And it works well on low-latency, low-packet-loss networks. On high-packet-loss networks it performs worse than HTTP/1.1, and it gets increasingly worse the larger the page the request is serving.
We pointed this out at the time, but were told that we didn't understand the web.
> H/1 pipelining was unusable,
Yup, but think how easy it would have been to create an HTTP/1.2 with a better spec for pipelining. (But then why not change other bits as well, and soon enough you've got HTTP/2!) Of course pipelining only really works on a low-packet-loss network, because otherwise you get head-of-line blocking.
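For anyone who hasn't done it by hand, this is roughly what pipelining looks like at the socket level (Go sketch; example.com is a placeholder, and plenty of real servers will simply ignore or close pipelined connections). All the requests go out up front, but the responses still have to come back in request order, which is exactly where the head-of-line blocking bites.

```go
// Hand-rolled H/1 pipelining: several GETs written on one TCP connection
// before reading anything back. Responses must return in request order,
// so a slow first resource stalls everything queued behind it.
package main

import (
	"bufio"
	"fmt"
	"io"
	"net"
	"net/http"
)

func main() {
	conn, err := net.Dial("tcp", "example.com:80")
	if err != nil {
		panic(err)
	}
	defer conn.Close()

	paths := []string{"/", "/a", "/b"}
	for _, p := range paths {
		// Pipeline: send every request up front without waiting.
		fmt.Fprintf(conn, "GET %s HTTP/1.1\r\nHost: example.com\r\nConnection: keep-alive\r\n\r\n", p)
	}

	r := bufio.NewReader(conn)
	for _, p := range paths {
		// Responses arrive strictly in the order the requests were sent.
		resp, err := http.ReadResponse(r, nil)
		if err != nil {
			panic(err)
		}
		io.Copy(io.Discard, resp.Body) // drain so the next response can be parsed
		resp.Body.Close()
		fmt.Println(p, resp.Status)
	}
}
```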
> open a dozen separate H/1 connections, but that multiplied setup cost
Indeed, that SSL upgrade is a pain in the arse. But connections are cheap to keep open, so with persistent connections and pooling it's possible to really nail down the latency.
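Something like this is what I mean by pooling (Go sketch; the numbers are illustrative, not tuned recommendations): one long-lived client with keep-alives, so repeat requests to the same origin skip the TCP and TLS setup entirely.

```go
// Persistent connections and pooling: a single shared client whose
// transport keeps idle connections warm, so the second request typically
// reuses the already-open, already-TLS'd connection from the first.
package main

import (
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Transport: &http.Transport{
			MaxIdleConns:        100,              // total idle conns kept warm
			MaxIdleConnsPerHost: 10,               // pool size per origin
			IdleConnTimeout:     90 * time.Second, // how long an idle conn survives
			TLSHandshakeTimeout: 10 * time.Second,
		},
		Timeout: 15 * time.Second,
	}

	for i := 0; i < 2; i++ {
		resp, err := client.Get("https://example.com/")
		if err != nil {
			fmt.Println(err)
			return
		}
		// Draining the body before closing lets the connection go back
		// into the pool for reuse.
		io.Copy(io.Discard, resp.Body)
		resp.Body.Close()
		fmt.Println(resp.Status)
	}
}
```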
Personally, I think the biggest problem with HTTP is that it's a file access protocol, a state interchange protocol and an authentication system all at once. I would tentatively suggest that we adopt websockets to do state (with some extra features like optional schema sharing {yes, I know that's a bit of an anathema}), make HTTP/4 a proper file sharing protocol, and have a third system for authentication token generation, sharing and validation.
However, the real world says that'll never work, so connection pooling over TCP with quick-start TLS would be my way forward.
"HTTP is being used as a file access, state interchange and authentication transport system"
Ideally we would split them out into a dedicated file access protocol, a generic state pipe (i.e. websockets) and some sort of well-documented, easy-to-understand, easy-to-implement and secure authentication mechanism (how hard can that be!?)
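Very roughly, the "generic state pipe" half of that could look like the sketch below (Go, using the third-party gorilla/websocket package; the StateUpdate message shape is something I've made up to stand in for the optional shared schema). The point is just that state flows over a long-lived typed channel, with file serving and auth left to other layers.

```go
// Bare-bones "state pipe" sketch: a websocket endpoint that carries typed
// JSON messages. Uses github.com/gorilla/websocket; StateUpdate is a
// hypothetical schema both ends agree on.
package main

import (
	"log"
	"net/http"

	"github.com/gorilla/websocket"
)

// StateUpdate is an illustrative shared message schema.
type StateUpdate struct {
	Key   string `json:"key"`
	Value string `json:"value"`
}

var upgrader = websocket.Upgrader{}

func stateHandler(w http.ResponseWriter, r *http.Request) {
	conn, err := upgrader.Upgrade(w, r, nil)
	if err != nil {
		log.Println("upgrade:", err)
		return
	}
	defer conn.Close()

	for {
		var msg StateUpdate
		if err := conn.ReadJSON(&msg); err != nil {
			return
		}
		// Echo the update back; a real state pipe would fan it out to
		// other clients or persist it.
		if err := conn.WriteJSON(msg); err != nil {
			return
		}
	}
}

func main() {
	http.HandleFunc("/state", stateHandler)
	log.Fatal(http.ListenAndServe(":8080", nil))
}
```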
But to your point: HTTP was always meant to be stateless. You issue a GET request for an object at a URI, and that object was envisaged to be a file (at least in HTTP/1.0 days). Only with the rise of CGI-bin in the mid-90s did that meaningfully change.
However I'm willing to bet that most of the traffic over HTTP is still files. Hence the assertion.