I've actually been struggling with a problem related to this. A first-time page load with a single 2000-line JavaScript file plus index.html, CSS, and favicon requests nothing but those 4 resources, which is very quick on a keep-alive HTTP/1.1 server.
I've written all of that from scratch because I got tired of maintaining Node.js.
But when splitting the JS file into pieces and using ES6 modules, say 12 different files, Chrome makes 8 TCP connections on 8 different sockets, and each connection has its own TCP handshake (plus a TLS handshake for HTTPS). How do you bundle things without using a build system or a bundler? Import maps help, and it's not difficult to simply hash each asset and copy it to a "dist/" folder with the hash appended, but it's still slow on first page load.
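For reference, the import-map-plus-hashed-filenames approach can look something like this (file names and hashes here are made up):

```html
<script type="importmap">
{
  "imports": {
    "app": "/dist/app.3f2a91.js",
    "utils": "/dist/utils.9bc012.js"
  }
}
</script>
<!-- Source files keep importing bare "app" / "utils";
     only the map needs to change when a hash changes. -->
<script type="module">import "app";</script>
```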
I'm not a web developer professionally (or a network engineer), so I'm learning about web networking myself for the first time. It might be helpful to add a section about "traffic shaping"? I've smacked together a service worker that does the work of caching well enough for now, but I'm definitely doing something rather strange and reinventing something. My page loaded significantly faster when it was just one JS file, no caching needed.
The number of connections is a bit of a red herring here; the problem is (typically) that the browser loads one module, which then tells it to load another module, and so on. Each round trip is wasted time. Preloading gives the browser a flat list of all required dependencies right away. By the way, this applies to everything else: having all required resources declared at the top of the page makes things faster. You can even preload some less-obvious things like background images referenced by CSS files.
This might achieve the "bundling" you want, in the sense that all the preloaded resources can be multiplexed into a single connection. But again, the number of connections is almost nothing compared to the number of round-trips required.
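Concretely, for module scripts the flat list can be declared with modulepreload hints in the document head (paths here are hypothetical):

```html
<!-- The browser fetches all three modules immediately, instead of
     discovering them one import statement at a time. -->
<link rel="modulepreload" href="/dist/app.js">
<link rel="modulepreload" href="/dist/router.js">
<link rel="modulepreload" href="/dist/utils.js">
<!-- Plain preload covers other resources, e.g. a CSS background image: -->
<link rel="preload" href="/img/hero.webp" as="image">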
This is my understanding of HTTP versions and when bundling is necessary:
On HTTP/1.1, Chrome will make up to 6 TCP connections to a host and serialize requests one by one over those connections. This suffers from a waterfall effect: you have to wait for the first 6 requests to complete, then the next batch, and so on. On lossy connections this can also lead to head-of-line blocking on each of those connections, where it has to wait for a packet of one file to arrive before the connection "frees up" for the next request. On HTTP/1.1, bundling becomes necessary pretty quickly to guarantee good performance.
On HTTP/2 it will make 1 TCP connection to a host and multiplex requests over that connection, so they may all download in intermixed chunks. This has less connection-negotiation overhead, and it does not suffer from the waterfall effect to the same degree. However, it still has head-of-line blocking on lossy connections, and this is in fact made worse because there is only one connection: if it's blocked at the TCP level on a single packet of a single file, every request behind that packet is blocked as well. I've done some tests, and on reasonable-quality connections there is not much overhead involved in requesting a hundred files instead of one. The caveat is "on reasonable-quality connections".
On HTTP/3, the QUIC transport runs over UDP instead of TCP. QUIC is not truly connectionless (it still establishes connections), but the transport and TLS handshakes are combined into a single round trip, and streams are delivered independently, so a lost packet only stalls the stream it belongs to rather than the whole connection: no waterfall and no transport-level head-of-line blocking. The flip side is that UDP provides none of TCP's guarantees, so concerns like loss recovery and congestion control move up into the QUIC implementation itself, which complicates things. Browsers already support it, but support at the web server level is patchier, so it will take a while for this feature to roll out to servers everywhere. Once the web moves over to HTTP/3 the performance advantages of bundling should largely disappear.
A service worker and/or careful use of caching can be used as a workaround to lessen the impact of requesting many files over HTTP 1, but this adds implementation complexity in the application and a bug may cause clients to end up in a semi-permanently broken state.
HTTP/2 and above will use one connection to retrieve several files.
Caddy [1] can act as a static file server that defaults to HTTP/2 if all parties support it. No configuration required.
If you allow UDP traffic through your firewall, it will upgrade to HTTP/3 automagically as well.
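A minimal Caddyfile for that setup might look like this (site name and path are placeholders):

```
example.com {
    root * /var/www/site
    file_server
}
```

HTTPS, HTTP/2, and (UDP permitting) HTTP/3 are all enabled by default; the config only says what to serve.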
I like the style, and although it's a bit more anxiety inducing than I would like to put into the world, it is very gripping. Perhaps being gripping, building suspense, is an alright form of anxiety. It certainly makes for compelling writing.
The subject matter feels well covered, but I am not a frequent TikTok user, only peering in the periphery of YouTube shorts when I allow my once-in-a-blue-moon dosage of hyper-modernity.
I've implemented something similar for a pet project, trying to make a very concise HTTP(S) server in raw Python.
It works with bytes for speed and handles a lot of tricky edge cases I couldn't find in other implementations (like request paths that were over 16k bytes long) that I needed for my frontend. It has microsecond-level logging, a packet proxy, and a supervisor, all as separate binaries that interact.
How do you deal with iOS deleting PWA data when unused? I'm building an app that relies on IndexedDB, which afaik is the only persistent storage a PWA can access.
I seem to have some kind of issue with SSL shutdown. I wrote a packet-level proxy in C++, but I still can't figure out the correct incantation to shut down both sides correctly. The Python SSL socket shutdown() function doesn't seem to return anything, so we can't tell if it's finished yet.
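For what it's worth, in Python's ssl module the TLS closing handshake (close_notify in both directions) lives in SSLSocket.unwrap(), not in shutdown(); shutdown() is the plain TCP-level call and always returns None, which is why you can't tell anything from it. A sketch of a graceful close along those lines (the helper name is my own):

```python
import socket
import ssl

def graceful_tls_close(tls_sock: ssl.SSLSocket) -> None:
    """Attempt a bidirectional TLS shutdown before closing the socket."""
    try:
        # unwrap() sends our close_notify, waits for the peer's, and
        # returns the plain underlying socket on success.
        plain = tls_sock.unwrap()
    except (ssl.SSLError, OSError):
        # Peer may have torn the connection down without a close_notify.
        plain = tls_sock
    try:
        plain.shutdown(socket.SHUT_RDWR)
    except OSError:
        pass  # already disconnected
    plain.close()
```

On a non-blocking socket, unwrap() can also raise SSLWantReadError/SSLWantWriteError, in which case you have to retry it once the socket is ready.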
I wrote this as an alternative to socketserver.py from the standard library, as I didn't like all of the customization points and indirection.
Also one thing to note: a single .recv(1024) means the first line of your HTTP request must fit in 1024 bytes, which (if you have a lot of query parameters, like I do on some requests) might require you to do some slightly stranger loop shenanigans like I do. It was tricky to figure out, and I don't think it's a completely generalizable solution, but I heavily recommend something like it for anyone dealing with the same problem.
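For anyone hitting the same thing, the usual fix is to keep calling recv() and buffering until the header terminator appears, instead of trusting a single recv(1024) to contain everything. A rough sketch (the function name and the 64 KiB limit are my own choices):

```python
import socket

def read_request_head(sock: socket.socket, limit: int = 65536) -> tuple[bytes, bytes]:
    """Read until the end of the HTTP header block (b"\r\n\r\n"),
    however the bytes happen to be chunked across recv() calls.

    Returns (head, leftover), where leftover is any body bytes that
    arrived in the same chunk as the end of the headers.
    """
    buf = b""
    while b"\r\n\r\n" not in buf:
        chunk = sock.recv(1024)
        if not chunk:
            raise ConnectionError("peer closed before headers completed")
        buf += chunk
        if len(buf) > limit:
            raise ValueError("header block exceeds limit")
    head, _, leftover = buf.partition(b"\r\n\r\n")
    return head, leftover
```

The leftover bytes matter: with pipelined requests or a request body, discarding them silently corrupts the stream.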
There's a source somewhere for the greatest number of errors happening because of shift changes, where e.g. a doctor's notes are incomplete and don't mark down a detail that later turned out to be important. The study found that much-longer-than-I-thought shifts were better than shorter ones; I believe e.g. 16- or even 24-hour shifts were safer than 8-hour shifts.
Reminds me of Red Alert 2's amphibious transport. The Kirov is also a zeppelin, huh. I have a new appreciation for the alternate history in RA2, of technology that was hyped but fizzled out. Tesla coils as well.
I loot authors I read for grammar, and I picked this and “nb” up from David Foster Wallace, who used both all the time. This usage of “which” is really handy for achieving clarity in certain situations.