I used to use lots of port-knocking setups to hide my SSH port. That was, until I discovered Tailscale's SSH setup. Now my SSH runs over WireGuard, which is very stealthy.
Same. It's amazing not having my server hammered by malicious actors and hardening it by not even offering the ssh service on the primary network interface
Why is it so amazing? Sounds more complicated than fail2ban. I've been installing fail2ban for decades on countless servers, using decent passwords, and have never had SSH get brute-forced. It’s anecdotal, but if you’re getting blocked after three wrong attempts, the chances of a successful attack are pretty small. So, why bother with nonstandard ports or even other protocols?
For anyone else reading this, generally speaking one shouldn't use passwords for SSH in 2024. Use public key auth instead.
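For readers who want to switch, the standard sequence is roughly the following sketch (the hostname and paths are placeholders; keep a second session open and test key login before disabling passwords):

```shell
# Generate a modern ed25519 key pair locally (use a real passphrase in practice).
ssh-keygen -t ed25519 -f /tmp/id_ed25519_demo -N '' -q

# Install the public key on the server (placeholder host; needs one last
# password login):
#   ssh-copy-id -i /tmp/id_ed25519_demo.pub user@server.example.com

# Then, in the server's /etc/ssh/sshd_config, turn passwords off:
#   PasswordAuthentication no
#   KbdInteractiveAuthentication no

# The public half is safe to publish; only the private file must stay secret.
cat /tmp/id_ed25519_demo.pub
```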
> Why is it so amazing?
OpenSSH isn't invulnerable. It can have zero-day vulnerabilities. But if it isn't even listening on the public internet, that's one less attack vector.
Generally speaking, you're right, but I have servers I want to be able to access from anywhere, because I support some apps running on them. Until I set up the 1Password agent, having keys only with passwords disabled was too difficult, and yet also unnecessary.
Zero day ssh bug? I’m not NSA, how often does this happen to random servers?? Again, never have been hacked in more than 20 years. Still support some servers with ~6 year uptime.
Ah yes, the "I've never been hacked so I must be secure" argument ;)
Unfortunately, you're not convincing anyone. Amongst the security conscious, multi-year uptimes are the opposite of a brag. And it doesn't matter how you spin it, key-based auth is best practice, as is reducing your attack surface.
It seems that some of these measures are too difficult for you, and that's fine. But trying to argue that the measures are pointless is just false.
I’m not trying to convince anyone, I’m trying to understand what drives some security-focused people to make things more complicated and harder without practical justification.
So, are you NSA? How many servers have you lost to the password attack vector?
For the record, and for whatever it's worth: it is the (it seems, serious) conviction of folks here (and I concur) that the NSA is at least a reader of these threads.
Yeah, it might read like that, but it also is how I feel. If I was running a crypto farm, or if I was doing security research, I would have different levels of concerns.
But, in fact, hosting a competitive gaming website, I did experience common brute-force and other types of attacks, but fail2ban foiled them for years :)
None of the attackers were ever sophisticated enough to come up with a successful attack (that I know of :))
The point is, should every system follow all the best practices, as if they were all equally likely to be attacked?
It’s like saying that everyone should also have a Faraday-cage house and electrified fences; it is best practice, after all.
Every large- or medium-sized multi-user server disables passwords for SSH login, because they're worried about things like password stuffing - and because they know password reuse is unavoidable when you've got even a small fleet of servers.
At the same time, for most users key-based login is easy (no need to enter a password every time) and they've already got it set up, because GitHub and AWS work that way.
OK, let's assume SSH is configured to accept one 30-character random password as an escape hatch. All normal auth is done using pre-shared keys. What are the risks, from your point of view?
From my POV, the principal risk is opsec mishaps, which may lead to leaking a private key or a password alike.
One difference is that MitM attacks can capture your password, thereby giving persistent access to at least that system (more, if you reuse passwords).
With public keys, this is not possible. The worst case of credential theft from MitM would be hijacking a forwarded SSH agent, which would require a deliberate (and highly discouraged) client configuration.
I feel like syncing a password-protected private key for break-glass use would be better than syncing a password database (given the same master password, key-stretching, and syncing strategy...or even just encoding your private key in a "secure note" field instead).
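A quick illustration of that point: a private key generated with a passphrase is already encrypted at rest, so the file itself can be synced like any other secret blob (the path and passphrase below are placeholders):

```shell
# The on-disk file is an encrypted container; without the passphrase it is
# roughly as opaque as a password-manager database with comparable KDF strength.
ssh-keygen -t ed25519 -f /tmp/breakglass_key -N 'long memorable passphrase' -q
head -n 1 /tmp/breakglass_key
```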
SSH is always using public keys, even when you use password authentication. Your SSH client knows the host's key. If you're not connecting to the right host, you are informed.
I know that, but "public keys" is long enough on mobile without typing "public user-authentication keys."
Anyway, I think it is reasonable to assume that if you're using the "escape hatch" as mentioned by /u/nine_k, you may well not have your .ssh/known_hosts file on your client. In which case, public user-authentication keys minimize the blast radius of a MitM host.
Also, a compromised (but legitimate) host could still grab your password and try lateral movement (mitigated if you don't reuse your break-glass password, but you get it for free with public keys).
And yet syncing an encrypted private key is still easier and more secure than syncing (via the same mechanism e.g. Keepass) a 30-character random password.
You don't need a 30 character random password for SSH to your machine; that's a strawman.
The attackers are not cracking a password hash with GPUs; they are just connecting and guessing.
People who use passwords with SSH of course use passwords they can remember and type.
If the attackers are trying to brute-force your password by attacking the hash, that means the machine was already compromised. The password then has no value, unless you're re-using it for other machines.
The user who thinks they need a 30 character random password for SSH (if they were to use one) will of course opt for keys instead.
I see user nine_k introduced that. It's still a strawman; nobody needs 30 random characters for an SSH password (except in some circumstances in which a key would obviously be better).
Let's assume I have an uncommon user name (not root or www-data, ...) and not anything from my domain name or e-mail address or whatever, and a nine character password made of lower case characters and digits, reasonably easy to remember.
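The scenario above is easy to size with quick arithmetic: 36 symbols over 9 positions gives about 10^14 combinations, far beyond any realistic online guessing rate.

```shell
# Search space of a 9-character password over lowercase letters + digits.
echo $((36 ** 9))    # → 101559956668416 (≈ 1.0e14)
```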
Well great, so I'm addressing nine_k and his question/scenario. As I have been this entire time. And it sounds like you're _agreeing_ that a 30-character random password makes no sense, and a key is easier and better. No?
Regarding _your_ scenario, cool bro, do whatever you want. However, if you reuse that password for any other servers, you're open to lateral movement attacks, which keys mitigate.
Actually I guess that's my main argument: you can mitigate the downsides of passwords, but keys are super simple, well-supported, and require no such fussiness. Just generate it, set a passphrase, authorize it, forget it. Threats mitigated. If you want to futz about with workarounds, be my guest. I have no such desire.
Anything is better than a 30 character password, including quitting computing and just doing vegetable farming on a tiny island, completely off every grid.
BTW, that remark I made about known_hosts applies to keys. You could put your SSH client keys (I mean private ones) on some HTTPS URL, so that you could fetch them to a brand new machine (e.g. burner phone purchased abroad).
And that's back to passwords: anyone else knowing that URL could fetch those keys, and their security depends on their passphrase. So we are back to relying on the strength of a passphrase as well as faith in attackers not knowing anything about such a URL.
Oh right; the URL could be .htpasswd protected too, let's not forget. :)
Re: hosting your key, I think that's quite reasonable, again, assuming your access control + encryption is good. It's a solid break-glass solution. I would add monitoring that alerts if it is ever used, though. Then you can remediate quickly on the off-chance it is compromised. In day-to-day use I would stick with a different key that only lives on my machine.
That's access control and transport encryption. By encryption I meant the encryption of the private key itself. I would not upload a plaintext private key, especially for privileged account access, even to a server I control.
A strawman indeed. 30 characters is deliberate overkill.
So, thank you for confirming my understanding: keys are just as convenient as, or more convenient than, passwords from the ops standpoint. The weakness is only in short, guessable passwords.
Short, guessable passwords are another strawman. There are short passwords which are unguessable. Those passwords are crackable from their hash, which is different from guessable.
fail2ban is dangerous imho. First, it will only block high-frequency maliciousness. If an attacker knows to stay below the default ban frequencies, or changes endpoints often enough, they will have free rein. Second, fail2ban is a DoS risk: attackers can spoof connections from an IP they want switched off. Third, fail2ban relies on parsing textual logs. This is vulnerable to all kinds of injection attacks (there have been some CVEs to that end) where an attacker injects patterns that the fail2ban heuristics latch on to, making it wildly ban stuff.
So you should not rely on fail2ban to keep you safe from anything, and you are introducing DoS risks. Very bad tradeoff imho, making it only good as a last resort.
Like all these problems, the answer is it depends...
In general, fail2ban is often set up to indirectly whisper to a firewall API. The firewall is smart enough to enforce whitelists and custom rate-limiting traffic rules.
i.e. to survive a DoS, the server enforces a traffic profile that chokes off users violating normal rules (ratio of TCP packet types, UDP volume %, and TTL count variance per IP).
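As a sketch of what such firewall-side rules can look like (nftables syntax; the trusted prefix and the numbers are placeholders, not a recommendation):

```
table inet filter {
  chain input {
    type filter hook input priority 0; policy drop;
    ct state established,related accept
    # Whitelisted admin range is never rate-limited (so it can't be spoofed out):
    ip saddr 203.0.113.0/24 tcp dport 22 accept
    # Everyone else: at most 3 new SSH connections per minute.
    tcp dport 22 ct state new limit rate 3/minute accept
  }
}
```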
In general, for DDoS you just drop the traffic to a fixed cost CDN with a CAPTCHA to issue real users session tokens, or issue a 302 to 127.0.0.1 for everyone else hammering the site.
Getting your SSH brute-forced shouldn't be possible with or without screening, because absolutely nobody in 2024 should be using passwords with SSH to begin with. This is the most frustrating thing about fail2ban cargo culting; fail2ban hasn't made sense since the era of multiuser Unix shell accounts, which is 2 decades in the past.
You've been mugged in a foreign country and have no phone, wallet, or keys. How do you break back into your digital life? Everything's got two-factor auth and you've not got your something-you-have factor.
I'm all for almost all my servers not accepting passwords, but it's a scenario that I think about, so there's one server running SSH on a non-default port that takes a password, so I can break back in using only what's in my head (hopefully I don't get hit so hard in this mugging as to forget what I've memorized).
But what is your plan? Everything should be protected by 2FA, but you don't have your additional factor, unless you get it implanted under your skin, http://dangerousthings.com style.
Really, what are the chances if you have a decent password and 3 attempts per 24 hours? Why would I take something 0.000000001% likely and make it 0.0000000000000000000001% likely if there is an added 0.001% risk that my house burns down and I lose all my access?
I've never seen SSH attackers probe a space of user IDs in such a way that they would eventually find a user ID like Z@an4ar. In fact, they just stick to standard user IDs like root, and probe only the password spaces. If your super user account is not named root, or not available by that name via SSH, it is safe from attacks which only try that name.
sshd can be configured simply not to allow root logins. To use root, you log into some other account and then su. That other account can have a name that attackers will never try, and a decent password.
Then, if you happen to be using a log based banning system, since you know that no legitimate user would be trying the name root, you can impose an instant ban on such an IP address, with a long duration. It's really just for reducing traffic more than anything.
Regarding aliasing root, you can create an alias for the UID 0 user simply by editing your passwd and shadow files to create a duplicate entry. If the root entry appears first, then that name is still used whenever UID 0 is resolved to a user name, like in your ls -l and whatnot.
The shadow entry for root can have a star in the password field so that it cannot be used for logging in by any means; only the alternative name can be used via the other entry that has a password set up in its shadow entry.
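Concretely, the pair of entries might look like this (the alternate name and the hash are placeholders; combined with `PermitRootLogin no` in sshd_config, only the alias is usable over SSH):

```
# /etc/passwd -- 'root' listed first, so UID 0 still resolves to "root"
# in ls -l and friends; 'zq_admin7' is the hypothetical login alias.
root:x:0:0:root:/root:/bin/bash
zq_admin7:x:0:0:root alias:/root:/bin/bash

# /etc/shadow -- '*' locks the 'root' name; only the alias has a real hash.
root:*:19700:0:99999:7:::
zq_admin7:$6$placeholder$placeholderhash:19700:0:99999:7:::
```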
Addendum: I just ran some scripts to see what attackers are trying. They probe various funny user names but there doesn't appear to be any system behind it. They are all short names. The vast majority of them are nothing but lower case letters. A few have underscores and digits, as well as dashes and periods. Some are digits only. A few are using glyph characters:
!
!!!
?
#$
I suspect that the user IDs being tried are all targeting known passwords that have been obtained before. I.e. they are probing "where else on the planet has the same user ID used that same password".
The valid users they are trying are:
avahi
backup
bin
daemon
Debian-exim
foo
games
gdm
gnats
hplip
irc
libuuid
list
lp
mail
man
messagebus
news
nobody
ntp
postgres
proxy
root
saned
sshd
sshroot
statd
sync
sys
uucp
www-data
None of these allow login; they have a * in the shadow file.
You get that if you believe attackers can't break your passwords, screening SSH with "port knockers" or fail2ban isn't doing anything, right?
The whole thing is kind of moot though. For other reasons, you should just wrap all this stuff up in WireGuard and never think about it again. WireGuard is silent; you can't probe it.
Actually, banning reduces traffic less than you might think. These days most of the attackers assume they are going to be banned. You get a lot of singleton requests from IP addresses that don't show up again, or not any time soon. And if your banning system generates logs of its own, it just increases the log noise.
As a result of this HN discussion, I disabled all SSH logging, and turned off the associated banning system. I disabled the use of PAM by sshd, and set its logging level to FATAL (because the ERROR level stupidly still logs when sshd is not able to find a shadow entry for a user ID).
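For reference, the sshd_config settings that change are just these two lines (note that FATAL also suppresses legitimate error reporting, so this is a deliberate trade-off):

```
# /etc/ssh/sshd_config
UsePAM no
LogLevel FATAL
```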
I'm confident they are not getting in by guessing a password and no longer believe there is a net saving in resources by monitoring and banning.
It can be difficult to pass some PCI compliance tests if your ssh port is available to the world. OpenSSH also leaks some information about your server unless you recompile it with those options removed.
I also love tailscale's ssh option. I have been using it for a few months now. But I'm a little bit scared that if the tailscale daemon crashes, I'll lose access to my server.
You would also be locked out if you ran OpenSSH on Tailscale's autoconfigured WG interface. Set up WireGuard manually, or enable serial console login, or make sure your servers are dispensable. Tailscale (and Nebula) mostly alleviate the last case.
Same could be said for the sshd service crashing. And yeah, I suppose you could say it's got a longer track record, but I've yet to have tailscale crash on me in a few years.
Port knocking is supposed to be a last, self-made, no-dependency, cheap, cute layer of defense.
Installing external dependencies, even from someone trusted like Moxie, is counterproductive. The more systems you have, the more vulnerabilities; less is more.
I've actually been fired over this. We were building a product, and I implemented port knocking in Python. The lead said it was insecure and wanted to install an encrypted port-knocking protocol.
EDIT: Just read the readme, Moxie is saying the same thing verbatim lol, we cool
Been wanting to use WireGuard, but it seems like a lot of effort managing keys and IP addresses and routing rules, etc. Do you have resources that might help me understand the best setup?
WireGuard is extremely easy to set up. It's difficult to manage if you have hundreds of nodes or dynamic endpoints: that's what Tailscale and Netmaker help with.
OpenBSD's wg documentation is straightforward. It maps onto wireguard-tools' configuration concepts if you need to use Linux.
When Wireguard first came out I wrote some scripts for myself. Later on I used SaltStack to configure Wireguard for customers with sets of laptops in the dozens or more.
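For a sense of scale, a minimal two-peer setup is about a dozen lines. Everything below (keys, addresses, hostname) is a placeholder; real keys come from `wg genkey | tee privatekey | wg pubkey > publickey`:

```
# Server /etc/wireguard/wg0.conf
[Interface]
Address    = 10.7.0.1/24
ListenPort = 51820
PrivateKey = <server-private-key>

[Peer]
PublicKey  = <client-public-key>
AllowedIPs = 10.7.0.2/32

# Client /etc/wireguard/wg0.conf
[Interface]
Address    = 10.7.0.2/24
PrivateKey = <client-private-key>

[Peer]
PublicKey  = <server-public-key>
Endpoint   = vpn.example.com:51820
AllowedIPs = 10.7.0.1/32
```

Bring both ends up with `wg-quick up wg0`, then point sshd's ListenAddress at the 10.7.0.x address so SSH is only reachable over the tunnel.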
It also works only with iptables. And because it's from 2012, it watches the file /var/log/kern.log [0], which was a simple way to monitor for incoming packets in 2012, but this no longer works on systemd-based distributions, since their logs are binary and accessible through a utility such as journalctl.
Someone opened a PR to address this [1]. It tries to keep it simple in the spirit of the tool, but it adds another dependency (a systemd python module).
I like it overall. The code is so small and simple, it's easy to adapt and to keep small anyway, whatever distro and firewall one might end up using it with.
Security is built in layers. Is it theoretically possible for someone on the network to observe the knock sequence? Yes. Is it likely to happen in any but the most adversarial of conditions? No. And if it’s implemented in a cryptographically secure way, like fwknop, then it’s really very good.
Prevents some types of distributed slow brute-force attacks, port scans, and 99.98% of nuisance traffic on the ports. Most effective when interleaved with port-sequence-close and port-trip-wire firewall random-delayed black-hole rules. Note login time window restrictions and fail2ban should also be active.
Obfuscating your ssh traffic over SSL or Iodine tunnel traffic can punch through many sandbox networks that try to jack secure traffic.
People will argue time constrained tap sequences (think Morse code) are also easily logged with a sniffer, but in general fail2ban rules can email you as the ssh noise should be nearly nonexistent.
i.e. One can determine if a route/VPN is attacking secure traffic links, or has uncanny insight into internal security policy.
Some people post bad policies for setting up ssh, email, and web servers...
Setting up knocking should be the first step on a new server image, as many folks lock themselves out the first run (and on some occasions need to re-image the host). =3
Port knocking is a stupid concept. You're sending a password (or TOTP) in plaintext. Just send a UDP packet with the password in it, and be honest about it.
Just to prove a point, I made two port knockers using just a UDP packet, in a few lines of bash. One uses OpenBSD's signify to sign unlock requests, and the other is, on the server side, just nftables config checking for a static UDP packet, not even a binary.
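A sketch of the static-packet variant in pure nftables (not the commenter's exact rules; the port and the 8-byte secret are placeholders, the raw-payload syntax may vary by nftables version, and a static secret is of course replayable, which the signify variant addresses):

```
table inet knock {
  # Senders of the magic packet are unlocked for 30 seconds.
  set unlocked {
    type ipv4_addr
    timeout 30s
  }
  chain input {
    type filter hook input priority 0; policy accept;
    # Raw payload match: first 64 bits after the UDP header must equal the
    # secret (0x68756e7465723221 spells "hunter2!" in ASCII).
    udp dport 54321 @th,64,64 0x68756e7465723221 add @unlocked { ip saddr }
    tcp dport 22 ip saddr @unlocked accept
    tcp dport 22 drop
  }
}
```

The client side is then a single datagram, e.g. `printf 'hunter2!' | nc -u -w1 host 54321`.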
Port knocking by sending a secret to the server, in a very simple protocol (to the point of being obviously correct), is good. Once it gets too complex, the implementation is more likely to have a bug than OpenSSH.
So just send that secret in a UDP packet. Modulating SYN packets is like sending your password (or other secret) in Morse code rather than ASCII, for no reason.
Sure, many port-knocking projects are non-replayable (including the other script in the repo I linked to, using OpenBSD's Signify).
But what I'm saying is that, OTP or not, it's sent in plain text. And "port knocking" usually means this silly modulation over SYN packets.
I mean that an OTP sent in plain text is still sent in plain text. Using a series of SYN packets is no more encrypted than just sending it as a UDP packet. SYN modulation is not encryption.
Of course non-replayable is better than replayable. I'm not objecting to that. I'm objecting to modulating over SYN packets.
Why is this silly, again? It's indistinguishable from a non-modulated SYN, if that's your concern: the eavesdropper can't even distinguish such port-knocking SYN packets from any other random SYN packet that happened to be routed to the host (there is e.g. a lot of port scanning going on around the Internet). It's encrypted, so even if the eavesdropper could distinguish such a packet, they can't learn the requested port; and they can't even replay this packet, so... what's exactly wrong with it?
If the attacker is sniffing the connection anyway (if they're not, then why be sneaky in the first place?), then they'll see the SYN packets.
What exactly is the difference between "the SSH port is filtered, yet right after a SYN goes to port A, then B, then C, the SYN to port 22 is suddenly answered" and "the SSH port is filtered, yet right after a UDP packet with this content, SYN from that address are accepted"?
They're both a secret being sent in plain text, after which the SSH port is open for a bit.
Anyone who's sniffing looking for secret UDP packets is also sniffing looking for modulated SYN packets, because it's still just sniffing. They already know that "something's up", because they see the returning SYNACK. Something made the port unlock.
So yes, it's indistinguishable from a non-modulated SYN, in the same way that a UDP packet with a password is indistinguishable from an unrelated UDP packet without a password.
> the eavesdropper can't even distinguish such port-knocking SYN packets from any other random SYN packet that happened to be routed to the host (there is e.g. a lot of port scanning going on around the Internet)
But isn't the goal to get past the firewall?
> It's encrypted, so even if the eavesdropper could distinguish such a packet, they can't learn the requested port; and they can't even replay this packet, so... what's exactly wrong with it?
Again, this is a completely different question. There are two distinct aspects to port knocking:
1. Generating the "token" (often a fixed password, but can as you say be encrypted, or a one-time pad).
2. Send that "token" to the server, to open the door.
Yes, you should generate a secure token. That adds security. It should not be replayable.
But why are you sending that token using modulated SYN packets? That's like having to enter your Google account password in Morse code. It's just more inconvenient, and the secret is in the password, not the modulation. Anybody who can sniff you entering your Google account password can sniff Morse code just as well as ASCII.
And I don't buy that using modulated SYN packets makes you disappear in the background noise of port scans. It's not exactly hard to detect the pattern "after N unanswered SYNs from A to B on apparently random ports, A then connects to B on port 22, successfully". You might as well just send a UDP packet. It'd make your (apparently open) firewall WAY less of a footgun (for modulated SYN packets, the SYNs have to actually arrive).
As a workaround for vulnerabilities in your ssh implementation ("Why Is This Even Necessary") this seems timely given the recent OpenSSH flaw. I wonder if this still works after 12 years of no commits.
spiped has been another tool sometimes recommended for addressing this, and it seems better maintained.