SSH hardening guide bonus edition: Disable password login if you can, leave the algorithm settings as they are, and use an up-to-date version of OpenSSH.
OpenSSH already aggressively deprecates algorithms that are problematic. None of the algorithms enabled by default has any known security issue. But manual tweaks from a random document you read on the Internet may enable an algorithm that we later learn to be problematic.
In the same vein, protecting your SSH server with spiped does 99% of the job. (= No need to set up fail2ban, password auth is not a big deal anymore, it protects against out-of-date SSH servers and/or zero-day exploits, ...)
spiped looks like netcat with symmetric encryption. If your SSH server has password auth disabled, then all you're doing is moving the attack surface from one thing to another.
You're making a trade-off no matter which way you go. spiped probably has a smaller attack surface than sshd due to being less code, but it's also less "tried and true" than openssh. Not to mention, managing symmetric keys securely is more difficult than with asymmetric openssh keys where you generally only need to copy around the public key.
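For the curious, the spiped-in-front-of-sshd setup is only a few commands. This is a sketch; the hostname and port below are made up, and the spiped README is the canonical reference:

```shell
# One-time: generate a 256-bit shared key and copy it to both machines
dd if=/dev/urandom bs=32 count=1 of=keyfile

# On the server: decrypt connections arriving on 8022, forward to local sshd
spiped -d -s '[0.0.0.0]:8022' -t '[127.0.0.1]:22' -k keyfile

# On the client: encrypt local connections to 8022, send them to the server
spiped -e -s '[127.0.0.1]:8022' -t 'server.example.com:8022' -k keyfile

# SSH now goes through the pipe:
ssh -p 8022 user@localhost
```

The server's TCP/22 can then be firewalled off from everything except localhost, which is where the attack-surface argument comes in.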
OpenSSH is plenty secure enough to be exposed to the public internet as long as you keep it up to date and do not have it misconfigured. But if you have a strong reason to not make it public, then I feel that something like Wireguard is really a better way to go.
Is Spiped similar in-concept to a VPN?
Sort of. Not really. spiped operates at the level of individual stream connections, so you can e.g. make one end a local socket in a filesystem and use UNIX permissions to control access to it.
In fact that's exactly why I wrote it -- so I could have a set of daemons designed to communicate via local sockets and transparently (aside from performance) have them running on different systems.
Is it possible to use tarsnap's deduplication code on my own server? We're setting up an ML dataset distribution box, and I was hoping to avoid storing e.g. imagenet as a tarball + untar'd (so that nginx can serve each photo individually) + imagenet in TFDS format.
https://github.com/xolox/dedupfs was the closest I found, but it has a lot of downsides.
Has anyone made an interface to tarsnap's tarball dedup code? A python wrapper around the block dedup code would be ideal, but I doubt it exists.
(Sorry for the random question -- I was just hoping for a standalone library along the lines of tarsnap's "filesystem block database" APIs. I thought about emailing this to you instead, but I'm crossing my fingers that some random HN'er might know. I'm sort of surprised that filesystems don't make it effortless. In fact, I delayed posting this for an hour to go research whether ZFS is the actual solution -- apparently "no, not unless you have specific brands of SSDs: https://www.truenas.com/community/resources/my-experiments-i..." which rules out my non-SSD 64TB Hetzner server. But like, dropbox solved this problem a decade ago -- isn't there something similar by now?)
EDIT: How timely -- Wyng (https://news.ycombinator.com/item?id=28537761) was just submitted a few hours ago. It seems to support "Data deduplication," though I wonder if it's block-level or file-level dedup. Tarsnap's block dedup is basically flawless, so I'm keen to find something that closely matches it.
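On the off chance it helps: the core idea behind tarsnap-style block dedup (content-defined chunking plus a content-addressed store) is small enough to sketch in Python. This is a toy, not tarsnap's actual algorithm, and the rolling hash here is deliberately crude:

```python
import hashlib

def chunk(data, mask=0x3FF, min_size=64):
    """Content-defined chunking: cut wherever a rolling checksum over the
    last ~32 bytes hits a pattern, so boundaries follow the content rather
    than fixed offsets (an insertion only disturbs chunks near the edit)."""
    out, start, h = [], 0, 0
    for i, b in enumerate(data):
        h = ((h << 1) ^ b) & 0xFFFFFFFF  # toy rolling-ish hash, ~32-byte memory
        if i - start + 1 >= min_size and (h & mask) == mask:
            out.append(data[start:i + 1])
            start, h = i + 1, 0
    if start < len(data):
        out.append(data[start:])
    return out

def dedup_store(blobs):
    """Content-addressed store: each distinct chunk is kept once, and every
    blob is reduced to a 'recipe' of chunk hashes (tarsnap-style, crudely)."""
    store, recipes = {}, []
    for blob in blobs:
        recipe = []
        for c in chunk(blob):
            key = hashlib.sha256(c).hexdigest()
            store.setdefault(key, c)
            recipe.append(key)
        recipes.append(recipe)
    return store, recipes
```

Because boundaries depend on content rather than offsets, storing a tarball and a slightly edited copy of it shares most chunks. Real implementations (borg, restic, casync) use stronger rolling hashes and add encryption on top of the same shape.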
Thanks Colin! Love all the things you create :)
Yes. When spiped was released (2011) there were no really good VPN solutions out there.
Now there is WireGuard, but it wasn’t launched until 2016.
Ok. I don’t agree there. What I’ve heard from security experts is that WireGuard is vastly superior to OpenVPN.
Addendum: OpenVPN was released in 2001 and there were lots of cryptography-related systems from that era that certainly didn't age well – IMO OpenVPN is one of those examples.
Not really. spiped is more an equivalent to mutual SSL (aka "Client certs SSL"). It basically just encrypts and mutually auths individual connections.
It works at the TCP level, not the IP level.
> None of the algorithms enabled by default has any known security issue.
For the tinfoil among us, jump on the post-quantum key exchange train, there's little overhead, it still uses traditional elliptic curve crypto, best of both worlds.
The jump-off: sntrup4591761x25519-sha512@tinyssh.org
Note that the above post-quantum key exchange method was removed in OpenSSH 8.5 (released March 3, 2021) in favor of a newer one, sntrup761x25519-sha512@openssh.com.
So if you add the previous one to your server config, sshd may fail to start after upgrading to 8.5 or newer (8.7 is the most recent release).
I started using sntrup4591761x25519-sha512@tinyssh.org about two years ago, and the new sntrup761x25519-sha512@openssh.com when OpenSSH 8.5 came out. They've both worked flawlessly for me. Thank you to TinySSH and OpenSSH, and of course the cryptographers, for making this possible.
What's a post-quantum key exchange? I assume this is a real thing and not a reference to Devs or Lapsis or something?
All of today's practical public key crypto could be broken by a sufficiently powerful quantum computer (if one existed, which today it does not) thanks to a trick called Shor's algorithm.
However, there are new public key algorithms that aren't affected by Shor's algorithm. Two problems. 1. They're all worse in some way: they have huge keys, for example, or they're slow, so that sucks. 2. We don't know if they actually work, beyond the fact that Shor's algorithm doesn't break them on hypothetical quantum computers.
Even if you decide you don't care about (1) and you're very scared of the hypothetical quantum computers (after all once upon a time the Atom Bomb was also hypothetical) you need to deal with (2).
Post-quantum schemes like this for SSH generally deal with that by doing two things: they use the shiny new quantum-resistant algorithm, but they also run an old-fashioned public key algorithm alongside it, and you need to break both to break the security. Either alone won't help you.
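For reference, opting in on a modern server is one config line. A sketch, assuming OpenSSH 8.5+ on both ends; the second entry keeps a classical fallback for older clients:

```shell
# /etc/ssh/sshd_config
#   KexAlgorithms sntrup761x25519-sha512@openssh.com,curve25519-sha256
```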
Also, configure sshd to use blacklistd:
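Roughly, on FreeBSD, that's one sshd option plus a rule in blacklistd.conf. A sketch; check blacklistd.conf(5) on your release for the exact column meanings:

```shell
# /etc/ssh/sshd_config (FreeBSD's sshd):
#   UseBlacklist yes
#
# /etc/blacklistd.conf -- block an address for 24h after 3 failures:
#   [local]
#   ssh  stream  *  *  *  3  24h

# Then enable and start the daemon:
sysrc blacklistd_enable=YES
service blacklistd start
```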
Disable password login. Disable SSH login. Put the computer back in the box and put the box under your bed.
Then, using aluminum foil and unfolded crisp packets make your room into a Faraday cage. Never open the door for any reason whatsoever. In fact, remove the door altogether.
I absolutely disable sshd on machines where I never need to login remotely like my laptops. My servers of course run sshd.
Ensure lead bricks are used to build the said room.
Lead is good, but I think it'd be easy to melt through.
Better add another layer of something more temperature resistant. Defense in depth.
Is it really necessary to disable an E521 ECDSA host key? By all means, replace a P256 host key with E521, but are E521 keys truly weak enough to justify removal?
E521 is listed as safe on DJB's main evaluation site:
More specific DJB commentary: "To be fair I should mention that there's one standard NIST curve using a nice prime, namely 2^521 – 1; but the sheer size of this prime makes it much slower than NIST P-256."
I believe that OpenSSH is using the E521 provided by OpenSSL (as seen on Red Hat 7):
These appear to have been contributed by Sun Microsystems, and were designed to avoid patent infringement.
  $ openssl ecparam -list_curves
    secp256k1 : SECG curve over a 256 bit prime field
    secp384r1 : NIST/SECG curve over a 384 bit prime field
    secp521r1 : NIST/SECG curve over a 521 bit prime field
    prime256v1: X9.62/SECG curve over a 256 bit prime field
Ignoring the fact that some of the SafeCurves criteria are questionable (reasonably performant complete short Weierstrass formulae have existed for a while; indistinguishability is a completely niche feature that is hardly ever required)...
These are not the same curves. NIST P-521 is a short Weierstrass curve defined by NIST. E-521 is an Edwards curve introduced by Aranha/Barreto/Pereira/Ricardini.
NIST P-521: y^2 = x^3 - 3x + 0x51953EB9618E1C9A1F929A21A0B68540EEA2DA725B99B315F3B8B489918EF109E156193951EC7E937B1652C0BD3BB1BF073573DF883D2C34F1EF451FD46B503F00
E-521: x^2 + y^2 = 1 - 376014x^2y^2
The only thing they share is the finite field over which they're defined, GF(2^521 - 1).
Thank you for the clarification.
Does this reduce the safety of an OpenSSH ECDSA key defined at 521 bits? That large constant is not reassuring, despite DJB's direct commentary.
To the best of my current knowledge, it's at most possible that the NSA backdoored the NIST curves. I'm unaware of anyone in academia positively proving the existence thereof.
If your threat model doesn't include the NSA or other intelligence agency level state actors, ECDSA with NIST P-521 will serve you just fine.
(ECDSA is per se a questionable abuse of elliptic curves born from patent issues now long past, but it's not a real, exploitable security problem, either, if implemented correctly.)
AIUI, E-521 is not P-521.
Not everyone knows that you can use MFA with SSH. I’ve successfully used Google authenticator via PAM and YubiKey.
You can also set up SSH certificate authorities instead of using self-signed keys
Jumping on the bandwagon here, SSH also now supports FIDO/U2F. This allows for hardware security keys like Yubikeys to be used directly for auth, rather than via TOTP/HOTP codes.
The FIDO tokens have no intention of allowing you to do anything else except FIDO with their keys (in fact the cheapest ones literally couldn't if they wanted to, good) and the SSH protocol of course was not originally designed for these tokens (it's from last century!) so the result is that the OpenSSH team had to design a custom key type for this purpose.
In consequence, although this technology is excellent and I endorse choosing it especially in tandem with other FIDO usage (e.g. WebAuthn for web sites, and I believe Windows can use it to authenticate users to their desktops/ laptops) you need to understand that both sides must have the necessary feature for it to authenticate you, both your clients and any SSH servers you need to authenticate against must recognise the FIDO-specific auth method in SSH.
If you mostly administrate shiny modern *BSD or Linux boxes, they have a new enough OpenSSH, so this Just Works™. But if you've got some creaky five year old VMs or worse real servers running like RHEL 6 or something, that may be an obstacle to practically deploying this.
Good news is that this will improve over time, and e.g. GitHub did eventually learn the new key type.
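Generating such a key is a one-liner once your client is new enough (OpenSSH 8.2+). A sketch; ecdsa-sk works on any FIDO2 token, while ed25519-sk needs firmware support for it:

```shell
# Create a key pair whose private half is unusable without the token present
ssh-keygen -t ecdsa-sk -f ~/.ssh/id_ecdsa_sk

# Optionally keep a resident copy on the token itself, loadable on new machines
ssh-keygen -t ecdsa-sk -O resident -f ~/.ssh/id_ecdsa_sk_res
```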
Oh neat! Didn’t know that :)
I was just searching for someone bringing this up. Of any advice I could give someone setting up SSH it would be to use a FIDO key + short lived sessions.
I did not notice until skimming this Arch wiki page that it is now possible to require both MFA (or PAM auth in general) and key auth at the same time to authenticate. Great!
I made a mistake once setting this up and I managed to have password AND key and MFA. This was a misconfiguration but might be useful for some use-cases. So it’s good to know it’s not either/or.
The first thing I do on a new remote box is move SSH to a non-standard port other than 22. I use the same port for every remote box I have, then add that port to `.ssh/config` on my local box.
Second is to disable root login.
Third is to copy my private key over and disable password login.
3 essential steps to secure SSH.
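In config terms, the three steps above are a handful of lines. A sketch; the port number and host names here are arbitrary:

```shell
# /etc/ssh/sshd_config on the remote box:
#   Port 2222                    # step 1: non-standard port
#   PermitRootLogin no           # step 2
#   PasswordAuthentication no    # step 3 (after installing your public key!)
#   PubkeyAuthentication yes
#
# ~/.ssh/config on the local box:
#   Host mybox
#       HostName server.example.com
#       Port 2222
```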
Just use blacklistd on FreeBSD, instead of changing the port. It works with sshd, and it temporarily blocks IPs that are abusive.
Please do not copy your private key to remote machines ;) You can use the ssh-copy-id tool; it does the right thing for you.
Yeah, my mistake, I meant to write "copy the public key" :D Copying by hand is OK but ssh-copy-id is simpler. Can't edit the comment now, hmmm...
I think there are better approaches than this.
1) Set up a VPN via WireGuard and only expose that random UDP port. That way only a single UDP port is exposed and port scans become infeasible.
2) Set up 2FA via libpam-google-authenticator
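The PAM route is roughly two files, after installing the module (libpam-google-authenticator on Debian/Ubuntu) and running google-authenticator once as each user. A sketch; option names have shifted slightly across OpenSSH versions:

```shell
# /etc/pam.d/sshd -- add:
#   auth required pam_google_authenticator.so
#
# /etc/ssh/sshd_config -- require a key AND a TOTP code:
#   ChallengeResponseAuthentication yes
#   AuthenticationMethods publickey,keyboard-interactive
```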
Yes, at work we use OpenVPN and only expose VPN, HTTP, HTTPS ports to the public.
But I find VPN a bit overkill on my personal machines.
My use for it was to have a five-way residential VPN across multiple countries for obvious reasons. That wouldn't really be possible with just SSH. It also makes the shared infrastructure a lot easier to use for the rest of my family.
Also a globally accessible pihole connected to DoH which ensures somewhat global privacy.
They are basically my perfect use for my raspberry pis. Extremely low power, but perfectly capable for handling say 1080p video streams or to RDP into machines for access to cross-country resources.
What would you do if your Wireguard tunnel dies?
That's the one thing that's prevented me from actually doing this.
The same thing that happens if the SSH daemon dies, I guess?
FWIW, I’ve been using Wireguard for a while (probably ~2 years?) as an always-on VPN for multiple mobile devices, and also as a reverse tunnel to pinhole service access inside a LAN. The Wireguard config and daemon has been rock solid. The only time it’s failed is when I messed up the AllowedIPs, but that failure occurs at configuration time. It has never crashed, or stopped routing traffic correctly, or otherwise failed in a way that interrupted traffic flows.
I have 5 locations running effectively independent VPNs, each hub connected to each other for redundancy if a VPN falls over.
i.e. Each hub has 1 VPN in, or is connecting 4 ways out.
If the port forwarding or something fails inbound, then I can connect via another VPN and try and debug/diagnose what is wrong.
If all VPNs are reporting down, then I know the pi/internet is completely down. It will either restart connectivity, but I have someone there who can plug/unplug/restore the system if necessary. The same kind of problem would occur if ssh falls over or wireguard.
> What would you do if your Wireguard tunnel dies?
WireGuard tunnels are pretty robust to failure.
They can survive you changing your wifi access point and IP, for example.
SSH is typically the only thing I expose (publicly if needed) because in most environments where it is running, it is used for troubleshooting issues. If your issue is that your WireGuard peer can't connect, you are lost with that suggestion.
Just so we’re clear, this is 1 step to secure SSH, 1 step to avoid installing logrotate, and 1 step to encourage good admin practices.
Changing the port and using a non-root user for SSH don’t appreciably change the strength of the server’s security.
Changing the port is obfuscation and by itself would not enhance security, however it does preclude all the noise from the automated bots. This allows you to have better alerting on brute force attempts because all of those attempts are a human manually targeting your server. The end result is effectively a better security posture. I have servers sprinkled all over the internet and in the last 30 years or so bots have never tickled my ssh daemon.
I got a new cloud virtual machine and didn't login for 2 hours. When I did the logs showed there were about 50 attempts to login from random IP addresses.
I changed my port to a random 4 digit number. Not a single failed login attempt in 6 months.
Obviously follow good security practices too, but I like not having to rotate and filter the logs with yet another tool.
As I said: changing the port is just a means to avoid having to `apt install logrotate`
Active alerting on brute force attempts on an internet-facing SSH service is an exercise in human suffering. At best you don’t get any alerts, and at worst you get alerts that you do… what, precisely, with? Block the IP? Look up the “human” attacker and send them an email asking them to stop?
There are environments and entities for whom pattern detection on incoming connections makes sense, and those environments aren’t running internet-facing SSH.
I feel like this doesn’t actually address any of my comment.
I’m specifically saying that the act of reading SSH logs for an internet-facing server is an exercise in futility. The kinds of things that will show up in the logs (brute force attempts, generally nmapping, etc) are not credible risks to even a largely unconfigured SSH daemon (as noted elsewhere in this thread, the bar to have an above average secure SSH service is basically “apply pubkey, disable password auth, celebrate”).
The attackers that are problematic don’t look out of place in your logs: somebody who stole a valid pubkey/password, the unlikely case of an SSH zero day, etc. Those are going to be single access attempts that just work. Unless you’re literally alerting on every successful auth, the logs aren’t helping you for active alerting.
Keeping your internet-facing SSH logs is important for investigative work: once you find out that your buddy accidentally put their private key in a pastebin, you can check if somebody used it to log into your server.
obscurity increases security, doesn't it?
Relevant: If your SSH server is public, you can give its address to https://sshcheck.com/ and it will report any weak spots in your config.
Thanks, that's an interesting tool.
But geezus, it's daunting to address SSH weaknesses unless you know ssh and its configuration top to bottom. I don't! And I am not afraid to admit it. I just use ssh "as-is" on mainstream platforms, for example, whatever Amazon gives me on Lightsail Linux images or Windows 10 or whatever's on my Mac, and hope for the best.
I mean, there's 4 different groups of algorithms to think about: "Key Exchange", "Server Host Key", "Encryption" and "MAC". Each with a bunch of choices, all different, all consisting of mouthfuls of impossible to remember complicated names.
The sshcheck tool indicates that one of these is "insecure" because it may be "broken by nation states". What does that _really_ mean for a business or individual? ¯\_(ツ)_/¯ There are others which are labeled as "weak" so what does that mean? That it might someday be broken by nation-states?
I think it's still useful, however. Why wouldn't you want to have the most secure ssh connections if it's just a matter of configuration?
Ultimately, someone who uses the report from sshcheck has to decide whether it's worth it to google around, spend a solid 30 minutes or so, and figure out how to change their "out-of-the-box" ssh config to get a fully secure report from sshcheck.
If you like Wireguard's security, you can emulate it in your sshd_config:
The MAC is irrelevant, as that function is built into the AEAD ciphers, which are to be preferred (the alternative is AES-GCM).
  Ciphers chacha20-poly1305@openssh.com
  KexAlgorithms curve25519-sha256@libssh.org
  MACs hmac-sha2-512-etm@openssh.com
This will shut off a lot of legacy SSH clients. Android ConnectBot specifically needs the AES cipher; adding it causes problems for PuTTY.
Otherwise, this is the classic "best practice" site for SSH:
Exactly my finding too!
Except certain versions of the macOS (and Windows) SSH clients would also be unable to connect.
I don't know about MacOS, but Microsoft's native OpenSSH supports this configuration.
Above you can also see that the MAC is implicit with the chosen AEAD cipher.
  C:\>ssh -vv me@myDJBserver.myco.com
  OpenSSH_for_Windows_8.1p1, LibreSSL 3.0.2
  ...
  debug2: KEX algorithms: curve25519-sha256@libssh.org
  ...
  debug2: ciphers ctos: chacha20-poly1305@openssh.com
  debug2: ciphers stoc: chacha20-poly1305@openssh.com
  ...
  debug1: kex: server->client cipher: chacha20-poly1305@openssh.com MAC: <implicit> compression: none
  debug1: kex: client->server cipher: chacha20-poly1305@openssh.com MAC: <implicit> compression: none
  ...
  $
Yo, do you play Supreme Commander? I think I saw someone with your username on FAF...
No, it's just a randomly chosen name (a British new wave band from the early 80's).
Though I'll have to hunt out (or try knock together) something that we can run locally for checking internal-only/white-listed hosts (like https://testssl.sh/ for HTTPS config checking).
I tried out the tool, and all I got was a “Timeout exceeded while waiting for welcome message.”
For every public SSH server I put up, I always configure a rate limiter in front of it to help thwart SSH attacks. Evidently, it thwarts this as well.
I like it a lot ... I just don't know what to do about all my 'weak' results.
Generally they’re going to be for legacy ciphers/MACs/etc.
If you don’t need them, you can turn them off. If you’re the only one accessing your servers, you can honestly just pick a single option for each based on the highest security option that’s supported by all your client devices.
https://infosec.mozilla.org/guidelines/openssh.html is a good starting point. The lists of available options are sorted from left -> right, most optimal -> least.
Yeah, I was kinda hoping that each of the fields would point to some instructions on disabling, but that is just me being too lazy to google things.
edit: thanks for that link -- swiped their kex, ciphers and macs and didn't break anything (that I know of)
nice honeypot ;)
(joking, thanks for the suggestion)
When using the SSH protocol for running automated remote commands, you can improve security using a forced command within your authorized_keys file.
... and you can also restrict by IP address in authorized_keys ...
Ah yes, before your key type specifier, i.e. at the very beginning, use something like:
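Something along these lines; the address, command path, and key here are placeholders:

```shell
# ~/.ssh/authorized_keys -- options go before the key type:
#   from="203.0.113.5",command="/usr/local/bin/backup.sh",no-pty,no-agent-forwarding,no-port-forwarding ssh-ed25519 AAAA... backup@client
```

The forced command runs regardless of what the client asks for, and the `from=` clause rejects the key entirely from other addresses.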
I feel like an important part of "hardening" a server is to remove/disable unused services. Does anyone know if NanoBSD is actively worked on by the FreeBSD team and/or still in use? For those not aware, NanoBSD is an official build from the FreeBSD team that allows you to compile a slimmed-down FreeBSD build that is read-only yet can run any/all FreeBSD software.
I can find very little about NanoBSD other than a handful of posts from 10 years ago. It seems like a great foundation for hardening a server.
I looked into this for a project a couple of years ago (to boot VMs from minimal customized ISO) and ended up using mfsbsd instead.
That's interesting, this appears to be created by a core FreeBSD developer.
I'm not able to find a lot of documentation on it. How does it compare to NanoBSD (and why did you choose it over Nano)?
I've used Martin Matuska's mfsbsd in the past to install a system with ZFS on root. I believe now that's natively supported, but back in the day it was quite an involved thing :)
I like reading tutorials on this subject. One of my favorites, albeit six years old, is https://stribika.github.io/2015/01/04/secure-secure-shell.ht...
It generates a new RSA key but disables it in the config? Seems like a bit of mindless cut and paste.
I always heard that FreeBSD has unparalleled networking
Does it mean that it'd be worth picking FreeBSD over Linux for my C# crud app if it had to handle a lot of requests/sec? (let's ignore db for the moment)
As with all things, you would really need to benchmark the system, preferably with real load, both ways to know for sure. But that takes a lot of time, especially if you're going to put in the time to tweak both systems.
People can do amazing stuff with enough time in both FreeBSD and Linux. I honestly think most server applications wouldn't be held back by either OS. You need your application to be really lightweight and focused before the OS makes a big difference, and even then, the differences only show if you're maxing out the hardware.
I worked at WhatsApp, and enjoyed working with FreeBSD there, and clearly it worked for us. Linux in FB datacenters also worked, but the server components were a lot different so there was never an apples-to-apples comparison. I run FreeBSD on my personal servers because I enjoyed working with it at Yahoo and then WhatsApp; but my personal servers don't have any performance needs. Sure, the networking stuff is nice (and it was nice to work with in the kernel), but what I like most about FreeBSD is the lack of churn. I can look at old administrative recipes and all the commands still work. I can expect (and mostly get) that when I upgrade, everything will keep working, and maybe a little better; occasionally, a lot better.
IIRC, I saw a presentation by someone (Rick?) where, at your previous employer, you guys slimmed FreeBSD down to be unbelievably minimal, such that only 2-3 total services ran on the entire server.
Was that done for performance reason? Or for hardening reasons?
If someone wanted to do that today with FreeBSD: would you recommend it and how would you go about doing it (NanoBSD)?
> but what I like most about FreeBSD is the lack of churn
I agree completely. As I mature in this field, this becomes an ever more important characteristic of the technology I adopt. Erlang also shares this property.
There was a time when FreeBSD had much better networking performance than Linux, but that was many years ago.
Now, on supported hardware, their performance should be similar.
However, Linux has drivers for much more varied networking hardware. FreeBSD has very good support for Intel NICs, but some of the hardware from other vendors may happen to be unsupported.
FreeBSD has a few nicer kernel features for those who develop themselves a networking application, but more networking libraries useful for high performance applications are available for Linux, even if DPDK, which was mentioned in another reply, is available for both Linux and FreeBSD.
So, while I am a very satisfied FreeBSD user, I would recommend that someone with less experience use Linux, as there are more resources readily available.
On the other hand, for someone who wants to learn more about the implementation of networking applications, it can be useful to also try FreeBSD, to understand more about alternative solutions.
FreeBSD (and all BSDs for that matter) have pretty good networking stacks, aside from recent-gen WiFi support.
C# support isn't great on FreeBSD yet, so probably not.
but if we assume that it works fine?
AFAIK there's ongoing and active work for FreeBSD
That issue has been continuously open since 2015. Assuming it works, sure, but I would not hold my breath.
I don't think it would be worthwhile.
If you want better latency and throughput, you wouldn't be using the kernel network stack and instead be opting for some userspace networking stack like DPDK or onload.
Depends obviously on what the bottlenecks of your application are, your NIC and the characteristics of your hardware as well.
FreeBSD has netmap for fast userspace packet processing.
True, and the Linux kernel has zero-copy AF_XDP, which enables memory to be shared with userspace. However, low-latency networking is a lot more than just simple kernel bypass.
It's things like pinning CPU cores dedicated to networking, disabling C-states, epolling, and being able to utilize bespoke firmware interfaces designed for smartNICs. Also the application protocol, i.e. using features like TCP checksum offload and TSO.
Heck the application would also need to be adjusted for a low-latency environment via probably a custom JVM and doing things like reading data structures/variables to ensure they are in CPU cache.
Frankly I would recommend trying openonload which at least is compatible with native Linux socket programming unlike DPDK.
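A couple of the knobs mentioned above, as Linux commands. A sketch only; the core number and IRQ number are placeholders for whatever your system actually uses:

```shell
# Pin the application to a dedicated core
taskset -c 2 ./latency_sensitive_app

# Run the performance governor so cores don't clock down
cpupower frequency-set -g performance

# Steer the NIC's interrupts onto another dedicated core (IRQ 123 is made up)
echo 3 > /proc/irq/123/smp_affinity_list
```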
I loved the simple explanation :)