No less impressive than the SQLite project itself, especially the 100% branch coverage! That's really hard to pull off, and even harder to maintain as development continues.
Good points; I wonder whether they have measured if the charities that get this money are effective, though. It is a really hard problem: not only to help for the sake of helping, but to help effectively, achieving the intended end results.
I also like and use lazy loading a lot :) But I guess the general question worth asking is: who drives the need for new features and functionalities in the browsers?
Whoever writes the underlying engines for virtually every browser: Apple and Google. They both have agendas that they try to push through them.
Yes! One could argue that we might end up with programmers (experts) going through a training of creating software manually first, before becoming operators of AI, and then also regularly spending some of their working time (10 - 20%?) on keeping those skills sharp - by working on purely educational projects, in the old-school way; but it begs the question:
Does it then really speed us up and generally make things better?
This is a pedantic point no longer worth fighting for, but "begs the question" means something is a circular argument, not "this raises the question".
I love LLMs for learning and discussing concepts, but the AI slop and people using the very peak of the current hype cycle to make quick bucks is causing a lot of societal damage.
Or even the VC winter. Y Combinator put a lot of money into what is basically AI slop. If you can get this much VC funding for this sloppy a product, I do wonder about the future of this arrangement.
Even though the article is mainly about AsyncIO in Python, it does a very good job at explaining various terms used in Concurrency Programming in general and CPU- vs IO-bound processing.
I wonder how they have measured it... But in any case, one might argue that truly human writing will be (already is) a competitive advantage - depending on the domain, of course; we do not always care about that.
On the VPS we use:
- 80 (standard http)
- 443 (standard https)
- 22 (obv for standard ssh)
- 9090 (metrics / internal so I can have an idea of the generic usage like reqs/s and active connections)
Client-Side: The -R 80:localhost:8080 Explained
The 80 in -R 80:localhost:8080 is not a real port on the server. It's a virtual bind port that tells the SSH client what port to "pretend" it's listening on.
No port conflicts - The server doesn't actually bind to port 80 per tunnel. Each tunnel gets an internal listener on 127.0.0.1:random (ephemeral port). The 80 is just metadata passed in the SSH forwarded-tcpip channel. All public traffic comes through single port 443 (HTTPS), routed by subdomain.
So What Ports Are "Available" to Users?
Any port - because it doesn't matter! Users can specify any port in -R:
ssh -t -R 80:localhost:3000 proxy.tunnl.gg # Works
ssh -t -R 8080:localhost:3000 proxy.tunnl.gg # Also works
ssh -t -R 3000:localhost:3000 proxy.tunnl.gg # Also works
ssh -t -R 1:localhost:3000 proxy.tunnl.gg # Even this works!
The number is just passed to the SSH client so it knows which forwarded-tcpip requests to accept. The actual routing is done by subdomain, not port.
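To make that concrete, here is a minimal sketch of what the server pulls out of a "forwarded-tcpip" channel-open payload, following the wire format in RFC 4254 section 7.2 (length-prefixed strings and big-endian uint32s). This is an illustration of the protocol fields, not tunnl.gg's actual implementation:

```python
import struct

def read_ssh_string(buf, off):
    """Read an RFC 4251 string: uint32 length prefix, then that many bytes."""
    (n,) = struct.unpack_from(">I", buf, off)
    return buf[off + 4 : off + 4 + n], off + 4 + n

def parse_forwarded_tcpip(payload):
    """Parse a 'forwarded-tcpip' channel-open payload (RFC 4254 s7.2):
    the address/port the client asked to bind (the '80' in -R 80:...),
    followed by the originating address/port of the incoming connection."""
    addr, off = read_ssh_string(payload, 0)
    (bind_port,) = struct.unpack_from(">I", payload, off)
    off += 4
    orig_addr, off = read_ssh_string(payload, off)
    (orig_port,) = struct.unpack_from(">I", payload, off)
    return addr.decode(), bind_port, orig_addr.decode(), orig_port

def ssh_string(s):
    """Encode an RFC 4251 string."""
    b = s.encode()
    return struct.pack(">I", len(b)) + b

# Build a sample payload as a server would when opening the channel:
payload = (ssh_string("localhost") + struct.pack(">I", 80)
           + ssh_string("203.0.113.7") + struct.pack(">I", 52814))
print(parse_forwarded_tcpip(payload))  # ('localhost', 80, '203.0.113.7', 52814)
```

The bind port here really is just metadata carried in the channel payload; nothing forces the server to have a real socket bound on it.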
Why Use 80 Convention?
It's just convention - many SSH clients expect port 80 for HTTP forwarding. But functionally, any number works because:
- Server extracts BindPort from the SSH request
- Stores it in the tunnel struct
- Sends it back in forwarded-tcpip channel payload
- Client matches on this to forward to correct local port
- The "magic" is that all 1000 possible tunnels share the same public ports (22, 80, 443) and are differentiated by subdomain.
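The subdomain-based routing step can be sketched like this; the tunnel table, subdomain names, and ephemeral ports are hypothetical, but the shape is the point: the public port is shared and only the Host header decides where a request goes:

```python
# Hypothetical routing table: subdomain -> the tunnel's internal ephemeral listener.
tunnels = {
    "happy-otter": ("127.0.0.1", 49213),
    "brave-finch": ("127.0.0.1", 49377),
}

def route(host_header, base_domain="tunnl.gg"):
    """Pick a backend for an incoming HTTPS request by subdomain.
    The client's -R bind port never appears here: every request arrives
    on the shared public port 443, and only the Host header matters."""
    host = host_header.split(":", 1)[0].lower()  # strip any :port suffix
    suffix = "." + base_domain
    if not host.endswith(suffix):
        return None
    subdomain = host[: -len(suffix)]
    return tunnels.get(subdomain)

print(route("happy-otter.tunnl.gg"))      # ('127.0.0.1', 49213)
print(route("happy-otter.tunnl.gg:443"))  # ('127.0.0.1', 49213)
print(route("unknown.tunnl.gg"))          # None
```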