
The reason it is hard to get people to care is the obvious intent difference. People don't tend to get behind the wheel with the conscious intent of killing people, even if they make poor decisions that lead to that. Drivers killing people is much more relatable to the average person, so it's hard to get the average person to have a rational conversation about it.

The situation is the same with drunk driving. Most people can relate to just wanting to get home after having a couple drinks. There was little enforcement of existing laws. Yet the USA was able to turn things around, the culture has changed, fatalities have been slowly dropping, and even police chiefs can no longer talk their way out of going to jail when caught.

The "pavement queens" have been convinced they need larger by companies that sell trucks, because larger trucks have lower legal requirements for fuel efficiency.

Of course it is always someone else’s fault.

This comment makes it seem like people are built differently in the US than they are in the rest of the world, but that obviously isn't true. The roads (particularly intersections, where crashes tend to happen) are in fact built differently though. Urbanist resources like NotJustBikes and Oh The Urbanity! YouTube channels do a great job of highlighting the differences, and how they force drivers to pay attention through the laws of physics rather than the laws of signage.

No, the US has a culture of not giving a single shit about anyone but yourself. A frighteningly large fraction of drivers will do anything they can get away with. Here in the land of the free, rules are for other people, not for me.

Pedestrians are the same people: I often see someone nonchalantly crossing a six-lane street mid-block to get to a parked car on the other side. If a driver happens to be posting on Instagram at that moment, it's another innocent pedestrian taken out by an evil driver; a few minutes later, the same person could be posting on Instagram while running someone else over.

I don't think it's culture though; it's just people genuinely not being punished for putting themselves in danger, or rewarded for avoiding danger, while growing up.


American exceptionalism, even when used as a negative, is a stereotype, and often a fable.

but there are policy differences

american cars are measurably bigger/taller/heavier than in EU/JP. and they drive measurably faster than in EU/JP. and the walking infrastructure (crossroads/pavements) is measurably worse.

also anecdotally it's way easier to get a driving license in the US than in France or Japan (I don't know for the other EU countries) so i suspect there is a higher number of bad drivers on the road, but i have no proof for that.

that said, i went to my license renewal training session in japan last month and they informed us that the most accident-prone situation is similar to the op's one. (left-turn but on green, since turn on red is illegal and we drive on the left). when those happen generally there is a big rework of the spot to avoid repeat accident. and we have a lot of old drivers too...


Yeah, but I don't see how people can get away with ignoring the laws of physics, which is what the parent comment by "Zambyte" mentioned.

Why not both?

Some amount is likely cultural too.

German drivers are objectively WORSE than MOST American drivers, speaking from experience driving thousands of km/mi in both. German drivers accelerate hard completely unnecessarily, take corners quickly, and slam on the brakes when stopping, much more so than in the USA.

The main difference I can attribute the fewer deaths to, from observation and critical thinking, is that Europeans have to be far more vigilant about things appearing from the side: many streets can have cars coming from the right because of what qualifies as a secondary road, and in some cases you must yield to them, so the paranoia is much higher in towns. There are also way more stops, crosswalks, cyclists, and pedestrians in most European towns, further elevating one's alertness. Finally, speed limits in European towns are much lower than anything in equivalent US towns because everything is more compact.

Also of note: truck speed limits in Europe are generally 80 KILOMETERS/H, whereas American truckers frequently drive north of 80 MILES/H. Cattle haulers are known for going 90-100 MILES/H on I-10.

Not driving like a grandma is not a big safety issue if drivers are trained accordingly and expect it. Not looking out for other road users, especially pedestrians, is one.

Citation needed. Maybe, but maybe "some" is essentially zero.

Considering driving and road rules are entirely learned behavior that requires tens to a hundred hours of training before they let you do it unattended, it seems pretty reasonable that the environment you learned in has a pretty big impact on how you drive.

This article might be interesting, and I'm not against AI use. I am not interested in AI slop though, and I immediately lost interest in the banner photo with nonsense text in it.

Author here. Good feedback: the text isn't nonsense, but it requires background knowledge that the man on the right is the rapper Eminem.

That was my first reaction, too, but it’s not actually nonsense - it’s a depiction of Eminem practicing rhymes in a casual conversation.

It’s valid feedback for the author, though. I had to read the article to understand the image.


AI polarization is a little interesting. The AI generated image prompted the parent to not even consider whether the content was on topic. This might be a decent heuristic, but it's bound to throw out a lot of potentially useful stuff as well.

Yeah, it would probably work better if that image were positioned after the reference to Eminem thinking of rhymes all day.

I'm an Eminem fan and didn't get that reference FWIW.

I thought it was some "Silicon Valley bro" that wanted you to drink kelp, and build your biceps or something


Gotcha! Shame on me. Just slapped a "Slim Shady" label on his hoodie. Won't fully stop the bleeding but at least a few more people will get it.

Thanks for the change, sorry if I came off as too aggressive. I've seen some uses of AI that were very similar that strictly made the article worse and it would have been better to simply delete it. I'll concede that I simply didn't get it, and that's a me problem here. I'll give the article a more fair chance when I have some time later :)

Orange door hinge. I got the reference.

I lost interest when I got to the email address box to subscribe. Interrupts the flow and makes me skim the rest.

Sorry for the interruption. We're an indie business with no banner ads, and our newsletter helps us keep the lights on. Hope you enjoyed the piece up to that point.

Conversely, it's perfectly impossible to never go anywhere and still have meaningful relationships with your neighbors.

The important part of the "church" in this context is finding a third space[0]. A church can be a reasonable third space, even for atheists. Particularly in smaller communities with limited resources. Other healthy options include hackerspaces / makerspaces (that is my third space of choice), libraries, parks, sporting events, etc.

[0] https://en.wikipedia.org/wiki/Third_place


Stop using it.

This feels like a "if you don't like where you live just move to a bigger house in a better neighborhood" style response to work related software problems. I.e. many people don't get to choose to run whatever software they'd like to on their work machines, nor are they able to justify changing jobs over control of a bug in the file browser.

Then maybe they actually can believe that Microsoft gets away with it. My comment was multifaceted :P

Ah, I get ya. To me, the hard-to-believe part is not that individual end users can't solve the problem or pressure Microsoft - it's that enterprise IT teams across the country pay massive licensing/support fees, yet core products like this have had outstanding bugs of the same family hanging around for decades. You'd think there would have been enough pressure to make File Explorer more asynchronous by now, given Microsoft talks about how they still tinker with the low-level stuff from decades ago! Even just the mid-sized companies I've worked at have gotten custom patch requests/minor changes in before.

The tough part with (implied) multifaceted comments is that nobody can take them at face value; readers have to assume whatever meaning still makes sense to them (which is a dangerous game) or just not engage.


If you can convince corporate to ditch Microsoft and Azure, I'll buy you a beer.

I was skeptical of the claim that it's faster than traditional SSH, but the README specifies that it is faster at establishing a connection, and that active connections are the same speed. That makes a lot of sense and seems like a reasonable claim to make.

It is not faster in this sense. However, an SSH connection can have multiple substreams, especially for port forwarding. Over a single classical connection, this can lead to head-of-line blocking, where an issue in one stream slows everything down. The QUIC/HTTP3 protocol can solve this.

Does this implementation do that too, or does it just use a single h3 stream?

The answer is yes according to code and documentation [0]:

> The stream multiplexing capabilities of QUIC allow reducing the head-of-line blocking that SSHv2 encounters when multiplexing several SSH channels over the same TCP connection

....

> Each channel runs over a bidirectional HTTP/3 stream and is attached to a single remote terminal session

[0] https://www.ietf.org/archive/id/draft-michel-remote-terminal...


Fun fact: SSH also supports multiple streams. It's called multiplexing.

Multiple streams at the application level, which can be head-of-line blocked due to all being multiplexed on the same transport layer connection.

The former kind of multiplexing addresses functionality, the latter performance.
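
For anyone who hasn't used it: OpenSSH connection sharing is turned on in ~/.ssh/config. A minimal sketch (ControlMaster/ControlPath/ControlPersist are standard OpenSSH options; the host pattern and timeout are illustrative):

    Host *.example.com
      ControlMaster auto
      ControlPath ~/.ssh/cm-%C
      ControlPersist 10m

Every later ssh/scp/sftp to the same host then rides the existing TCP connection as a new channel - which is exactly why they all share one head-of-line.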


Doesn't it run over a single TCP connection in all cases, unless you manually launch multiple connections and manually load-balance your clients across them? As in, it won't/can't open a new TCP connection when you open a new connection in the SOCKS proxy or a port forward. They'll all share one head-of-line and block each other.

Not that I've ever noticed this being an issue (no matter how much we complain, internet here is pretty decent)

Edit: seeing as someone downvoted your hour-old comment just as I was adding this first reply, I guess maybe they 'voted to disagree'... Would be nice if the person would comment. It wasn't me anyway


Although, dollars-to-donuts my bet is that this tool/protocol is much faster than SSH over high-latency links, simply by virtue of using UDP. Not waiting for ACKs before sending more data might be a significant boost for things like scp'ing large files from one part of the world to another.

SSH has low throughput on high latency links, but not because it uses TCP. It is because SSH hardcodes a too-small maximum window size in its protocol, on top of TCP's own.

This SSH window size limit is per ssh "stream", so it could be overcome by many parallel streams, but most programs do not make use of that (scp, rsync, piping data through the ssh command), so they are much slower than plain TCP as measured e.g. by iperf3.
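
To put numbers on that: the ceiling for a window-limited stream is window size divided by round-trip time, independent of link capacity. A quick sketch (2 MB is the figure mentioned elsewhere in this thread; the RTT is illustrative):

    # One window-limited stream can have at most one window of
    # unacknowledged data in flight per round trip.
    window_bytes = 2 * 1024 * 1024     # ~2 MB SSH channel window
    rtt_seconds = 0.100                # 100 ms round trip, illustrative
    print(window_bytes / rtt_seconds / 1e6, "MB/s")  # ~21 MB/s ceiling,
                                                     # even on a 10 Gbit/s link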

I think it's silly that this exists. They should just let TCP handle this.


> I think it's silly that this exists. They should just let TCP handle this.

No, unfortunately it's necessary so that the SSH protocol can multiplex streams independently over a single established connection.

If one of the multiplexed streams stalls because its receiver is blocked or slow, and the receive buffer (for that stream) fills up, then without window-based flow control, that causes head-of-line blocking of all the other streams.

That's fine if you don't mind streams blocking each other, but it's a problem if they should flow independently. It's pretty much a requirement for opportunistic connection sharing by independent processes, as SSH does.

In some situations, this type of multiplexed stream blocking can even result in a deadlock, depending on what's sent over the streams.

Solutions to the problem are to either use window-based flow control, separate from TCP's, or to require all stream receive buffers to expand without limit, which is normally unacceptable.

HTTP/2 does something like this.

I once designed a protocol without this, thinking multiplexing was enough by itself, and found out the hard way when processes got stuck for no apparent reason.
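
For the curious, here's a minimal sketch of the per-stream, credit-based flow control being described (names are hypothetical; SSH's real mechanism is the per-channel window adjusted via SSH_MSG_CHANNEL_WINDOW_ADJUST in RFC 4254):

    import io

    class Channel:
        """One multiplexed stream with credit-based flow control.
        Framing and the real mux loop are omitted for brevity."""
        def __init__(self, credit: int):
            self.credit = credit      # bytes we may still put on the shared wire

        def send(self, data: bytes, wire) -> int:
            n = min(len(data), self.credit)
            if n == 0:
                return 0              # only this channel stalls; others keep flowing
            wire.write(data[:n])
            self.credit -= n
            return n

        def on_window_adjust(self, n: int) -> None:
            self.credit += n          # receiver drained n bytes from its buffer

    wire = io.BytesIO()               # stands in for the shared TCP connection
    slow, fast = Channel(credit=4), Channel(credit=1024)
    print(slow.send(b"x" * 100, wire))  # 4: credit exhausted, this channel stalls
    print(fast.send(b"y" * 100, wire))  # 100: unaffected by the slow channel

A slow receiver exhausts only its own channel's credit; the shared connection keeps moving everyone else's data, which is the property a multiplexing-only design loses.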


Then:

* Give users a config option so I can adjust it to my use case, like I can for TCP. Don't just hardcode some 2 MB value (which was even raised to this in the past, showing how futile hardcoding is: it clearly needs adjusting to people's networks and ever-increasing speeds). It is extremely silly that within my own networks, controlling both endpoints, I cannot achieve TCP speeds over SSH, but I can with nc and a symmetric cipher piped in. It is silly that any TCP/HTTP transfer is reliably faster than SSH.

* Implement data dropping and retransmissions to handle blocking -- like TCP does. It seems obviously asking for trouble to implement multiplexing but then only implement half of the features needed to make it work well.

When one designs a network protocol, shouldn't one of the first sanity checks be "if my connection becomes 1000x faster, does it scale"?


I've just looked at the OpenSSH source, and I agree it should be configurable. That seems like an easy patch if you wanted to do it.

Or, better but more difficult, it should track the dynamic TCP window size, from the OS when possible, combined with end-to-end measurements, and ensure the SSH mux channel windows grow to accommodate the TCP window, without growing so much they starve other channels.

To your second point, you can't do data dropping and retransmission for mux'd channels over a single TCP connection. After data is sent from the application to the kernel socket, it can't be removed from the TCP transmission queue, will be retransmitted by the kernel socket as often as needed, and will reach the destination eventually, provided the TCP connection as a whole survives.

You can do mux'd data dropping and retransmission over a single UDP connection, but that's basically what QUIC is.


Yeah, the longstanding hpn-ssh fork started off by adjusting ssh’s window sizes for long fat pipes.

https://github.com/rapier1/hpn-ssh


You're mixing application layer multiplexing and transport layer multiplexing.

If you use the former without the latter, you'll inevitably have head-of-line blocking issues if your connection is bandwidth or receiver limited.

While not every SSH user uses protocol multiplexing, many do, as it avoids repeated and relatively expensive (in terms of CPU, performance, and logging volume) handshakes.


Off the top of your head do you know of any file transfer tools that do utilize multiple streams?

Yes, I wrote down some that do and don't support it here:

https://github.com/libfuse/sshfs/issues/300


I tend to use 'rclone', which does SSH and more. The '--transfers' arg is useful for handling several files, lol. A single transfer, if I recall correctly, isn't parallelized.

That's not really a common TCP problem. Only when there's something severely weird going on in the return path (e.g. an extremely asymmetric and/or congested return path connection dropping ACKs while the forward path has enough capacity) does the ACK mechanism limit TCP.

Also, HTTP/3 must obviously also be using some kind of acknowledgements, since for fairness reasons alone it must be implementing some congestion control mechanism, and I can't think of one that gets by entirely without positive acknowledgements.

It could well be more efficient than TCP's default "ack every other segment", though. (This helps in the type of connection mentioned above; as far as I know, some DOCSIS modems do this via a mechanism called "ack compression", since TCP is generally tolerant of losing some ACKs.)

In a sense, the win of QUIC/HTTP/3 in this sense isn’t that it’s not TCP (it actually provides all the components of TCP per stream!); it’s rather that the application layer can “provide its own TCP”, which might well be more modern than the operating system’s.


Yeah, there’s a replacement for scp that uses ssh for setup and QUIC for bulk data transfer, which is much faster over high-latency paths.

https://github.com/crazyscot/qcp


That's why mosh exists, as it is purpose built for terminals over high latency / high packet loss links.

But mosh doesn't actually do any of what ssh does, let alone do it faster - it wins by changing the problem, to the vastly narrower one of "getting characters in front of human eyeballs". (Which is amazing if that's what you were trying to do - but that has nothing to do with multiple data streams...)

mosh is hard to get into. There are many subtle bugs; a random sample that I ran into is that it fails to connect when the LC_ALL variables diverge between the client and the server[0]. On top of that, development seems abandoned. Finally, when running a terminal multiplexer, the predictive system breaks the panes, which is distracting.

[0]: https://github.com/mobile-shell/mosh/issues/98


Of course it has ACKs. There are protocols without ACKs but they are exotic and HTTP3 is not one of them.

He said not waiting for ACKs.

That makes even less sense; unless we are talking about XMODEM, every protocol uses windowing to avoid getting stuck waiting for ACKs.

Of course you need to wait for ACKs at some point though, otherwise they would be useless. That's how we detect, and potentially recover from, broken links. They are a feature. And HTTP3 has that feature.

Is it better implemented than the various TCP algorithms we use underneath regular SSH? Perhaps. That remains to be seen. The use case of SSH (long lived connections with shorter lived channels) is vastly different from the short lived bursts of many connections that QUIC was intended for. My best guess is that it could go both ways, depending on the actual implementation. The devil is in the details, and there are many details here.

Should you find yourself limited by the default buffering of SSH (10+Gbit intercontinental links), that's called "long fat links" in network lingo, and is not what TCP was built for. Look at pages like this Linux Tuning for High Latency networks: https://fasterdata.es.net/host-tuning/linux/

There is also the HPN-SSH project which increases the buffers of SSH even more than what is standard. It is seldom needed anymore since both Linux and OpenSSH have improved, but can still be useful.
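
For reference, the usual knobs from that kind of guide look like this in /etc/sysctl.conf (the sysctls are real; the exact values should come from the linked page and your own bandwidth-delay product):

    # Raise the TCP window limits (min/default/max bytes) so autotuning
    # can actually fill a long fat pipe:
    net.core.rmem_max = 67108864
    net.core.wmem_max = 67108864
    net.ipv4.tcp_rmem = 4096 87380 67108864
    net.ipv4.tcp_wmem = 4096 65536 67108864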


> Is it better implemented than the various TCP algorithms we use underneath regular SSH? Perhaps. That remains to be seen.

SSH multiplexes multiple channels on the same TCP connection which results in head of line blocking issues.

> Should you find yourself limited by the default buffering of SSH (10+Gbit intercontinental links), that's called "long fat links" in network lingo, and is not what TCP was built for.

Not really, no. OpenSSH has a 2 MB window size (in the 2000s, 64K), even with just ~gigabit speeds it only takes around 10-20 ms of latency to start being limited by the BDP.


Well, you could peruse the code. Then see what it does and explain it.

Not really that relevant - anybody regularly using SSH over high latency links is using SSH+mosh already anyway.

The huge downside of mosh is it handles its own rendering and destroys the scrollback buffer. (Yes I know I can add tmux for a middle ground.)

But it's still irrelevant here; specifically called out in README:

> The keystroke latency in a running session is unchanged.


"huge downside" (completely mitigated by using tmux)

The YouTube and social media eras made everyone so damn dramatic. :/

Mosh solves a problem. tmux provides, for some, a "solution" to a design decision that can impact some users' workflows.

I guess what I'm saying here is, if you NEED mosh, then running tmux is not even a hard ask.


No it’s not completely mitigated by tmux. mosh has two main use cases (that I know of):

1. High latency, maybe even packet-dropping connections;

2. You’re roaming and don’t want to get disconnected all the time.

For 2, sure tmux is mostly okay, it’s not as versatile as the native buffer if you use a good terminal emulator but whatever. For 1, using tmux in mosh gives you an awful, high latency scrollback buffer compared to the local one you get with regular ssh. And you were specifically talking about 1.

For read-heavy, reconnectable workloads over high latency connections I definitely choose ssh over mosh or mosh+tmux and live with the keystroke latency. So saying it’s a huge downside is not an exaggeration at all.


I believe this depends on the intent of your connection! The first sentence of your last paragraph: "For read-heavy, reconnectable workloads" - A-ha!

From my stance, and where I've used mosh has been in performing quick actions on routers and servers that may have bad connections to them, or may be under DDoS, etc. "Read" is extremely limited.

So from that perspective and use case, the "huge downside" has never been a problem.


Honestly, it feels like the one being dramatic here is you. Because the one you’re replying to added “huge”, you added a whole sentence calling everyone “so damn dramatic”. But oh well.

You know what has a "huge downside"? Radiation therapy.

Not a scrollback buffer workflow issue.


If you believe that, you clearly haven't had to work with mosh in a heavily firewalled environment.

Filtering inbound UDP on one side is usually enough to break mosh, in my experience. Maybe they use better NAT traversal strategies since I last checked, but there's usually no workaround if at least one network admin involved actively blocks it.


SSH is actually really slow on high latency high bandwidth links (this is what the HPN-SSH patches fix: https://www.psc.edu/hpn-ssh-home/hpn-ssh-faq). It's very apparent if you try running rsync between two datacenters on different continents.

HTTP/3 (and hopefully this project) does not have this problem.


Sounds like a complex change to fix a security protocol but, reading the page, it seems to just increase the send buffer, which indeed makes sense for high-latency links

It also tracks with HTTP/3 and QUIC as a whole, as one of the main "selling points" has always been reduced round trips leading to faster connection setup.

If being faster at making a connection reduces latency even a little, it would mean a really big improvement for other protocols built on top of it, like rsync. If rsync reuses an active connection to stream the files and calculate changes, then the impact might be negligible.

Should be genuinely faster over many VPNs, because it avoids the "TCP inside TCP" tar pit.

openssh is generally not praised for its speed but for its security track record. i hope this thing doesn't sacrifice that for a little more speed in something that generally doesn't require more speed.

I read this and thought “who cares”?

I use ssh everywhere, maybe establish 200+ SSH sessions a day for my entire career of 20 years and never once have I thought “I wish establishing this connection was faster”


Good for you.

There are a lot of automation use cases for SSH where connection setup time is a significant impediment; if you’re making dozens or hundreds of connections to hundreds or thousands of hosts, those seconds add up.
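
Back-of-the-envelope (assuming a classic SSH setup costs on the order of five round trips for TCP handshake, key exchange, and auth; the fleet size and RTT here are made up):

    hosts = 1000        # machines to touch, one fresh connection each
    rtt = 0.080         # 80 ms round-trip time
    round_trips = 5     # TCP + KEX + auth, roughly
    print(hosts * rtt * round_trips, "seconds of setup if done serially")  # 400.0

Parallelism and ControlMaster help, but cutting handshake round trips attacks the same cost directly.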


"Slower" only if you consider yourself to be the only individual that matters. High speed trains move way more people per mile per hour. Plus train stations can be located in much more convenient locations (directly in city centers), so even though your time in the air may be less time than you would be on a train, door-to-door home-to-destination may be faster.


The best quality you can get is at odds with the best speed you can get. There are lots of people (especially with specific use cases) who will pay for the best speed they can get that is high enough quality.

