FreeBSD SSH Hardening

122 points
hannob 7 hours ago

SSH hardening guide bonus edition: disable password login if you can, leave the algorithm settings as they are, and use an up-to-date version of OpenSSH.

OpenSSH already aggressively deprecates algorithms that are problematic. None of the algorithms enabled by default has any known security issue. But your manual tweaks from a random document you read on the Internet may enable an algorithm that we later learn to be problematic.

acatton 6 hours ago

In the same vein, protecting your SSH server with spiped[1] does 99% of the job. (No need to set up fail2ban, password auth is no longer a big deal, it protects against out-of-date SSH servers and/or zero-day exploits, ...)
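For anyone unfamiliar, the usual setup is roughly the following (a sketch, not a drop-in recipe: hostnames, key paths, and port 8022 are placeholders; -d/-e select spiped's decrypt/encrypt modes, -s/-t the source/target addresses):

    # On the server: generate a shared secret and wrap the local sshd.
    dd if=/dev/urandom bs=32 count=1 of=/etc/spiped/ssh.key
    spiped -d -s '[0.0.0.0]:8022' -t '[127.0.0.1]:22' -k /etc/spiped/ssh.key

    # On the client: copy ssh.key over some trusted channel first, then:
    spiped -e -s '[127.0.0.1]:8022' -t '[server.example.com]:8022' -k ssh.key
    ssh -p 8022 localhost

Only port 8022 is exposed, and connections that can't prove knowledge of the shared key never reach sshd at all.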


bityard 57 minutes ago

spiped looks like netcat with symmetric encryption. If your SSH server has password auth disabled, then all you're doing is moving the attack surface from one thing to another.

You're making a trade-off no matter which way you go. spiped probably has a smaller attack surface than sshd due to being less code, but it's also less "tried and true" than openssh. Not to mention, managing symmetric keys securely is more difficult than with asymmetric openssh keys where you generally only need to copy around the public key.

OpenSSH is plenty secure enough to be exposed to the public internet as long as you keep it up to date and do not have it misconfigured. But if you have a strong reason to not make it public, then I feel that something like Wireguard is really a better way to go.

tiffanyh 5 hours ago

Is Spiped similar in-concept to a VPN?

cperciva 5 hours ago

Sort of. Not really. spiped operates at the level of individual stream connections, so you can e.g. make one end a local socket in a filesystem and use UNIX permissions to control access to it.

In fact that's exactly why I wrote it -- so I could have a set of daemons designed to communicate via local sockets and transparently (aside from performance) have them running on different systems.

sillysaurusx 38 minutes ago

Is it possible to use tarsnap's deduplication code on my own server? We're setting up an ML dataset distribution box, and I was hoping to avoid storing e.g. imagenet as a tarball + untar'd (so that nginx can serve each photo individually) + imagenet in TFDS format. was the closest I found, but it has a lot of downsides.

Has anyone made an interface to tarsnap's tarball dedup code? A python wrapper around the block dedup code would be ideal, but I doubt it exists.

(Sorry for the random question -- I was just hoping for a standalone library along the lines of tarsnap's "filesystem block database" APIs. I thought about emailing this to you instead, but I'm crossing my fingers that some random HN'er might know. I'm sort of surprised that filesystems don't make it effortless. In fact, I delayed posting this for an hour to go research whether ZFS is the actual solution -- apparently "no, not unless you have specific brands of SSDs," which rules out my non-SSD 64TB Hetzner server. But like, Dropbox solved this problem a decade ago -- isn't there something similar by now?)
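This is not tarsnap's actual code, but the core idea behind that style of block dedup (content-defined chunking plus a content-addressed block store) fits in a few lines of Python. A toy sketch; real implementations use a windowed rolling hash such as Rabin fingerprints, and also compress and encrypt the blocks:

```python
import hashlib

def chunks(data, mask=0x3FF, min_size=64):
    """Split bytes into content-defined chunks.

    Toy version: keep a running hash since the chunk start and cut a
    boundary when its low bits match `mask`, so boundaries follow the
    content rather than fixed offsets."""
    out = []
    start = 0
    h = 0
    for i, b in enumerate(data):
        h = (h * 31 + b) & 0xFFFFFFFF
        if i - start + 1 >= min_size and (h & mask) == mask:
            out.append(data[start:i + 1])
            start = i + 1
            h = 0
    if start < len(data):
        out.append(data[start:])
    return out

class DedupStore:
    """Content-addressed block store: identical chunks are stored once."""

    def __init__(self):
        self.blocks = {}

    def put(self, data):
        """Store data; return the list of chunk hashes needed to rebuild it."""
        refs = []
        for c in chunks(data):
            key = hashlib.sha256(c).hexdigest()
            self.blocks.setdefault(key, c)  # stored only if not already present
            refs.append(key)
        return refs

    def get(self, refs):
        """Reassemble the original bytes from a list of chunk hashes."""
        return b"".join(self.blocks[k] for k in refs)
```

Feeding a tar stream through something like this is, as I understand it, the rough shape of tarsnap's "filesystem block database": shared blocks between the tarball, the untar'd tree, and the TFDS copy would each be stored once.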

EDIT: How timely -- Wyng was just submitted a few hours ago. It seems to support "Data deduplication," though I wonder whether it's block-level or file-level dedup. Tarsnap's block dedup is basically flawless, so I'm keen to find something that closely matches it.

tiffanyh 5 hours ago

Thanks Colin! Love all the things you create :)

cpach 5 hours ago

Yes. When spiped was released (2011) there were no really good VPN solutions out there.

Now there is WireGuard, but it wasn’t launched until 2016.

amarshall 4 hours ago
acatton 3 hours ago

Not really. spiped is more an equivalent to mutual TLS (aka "client certs SSL"). It basically just encrypts and mutually authenticates individual connections.

It works at the TCP level, not the IP level.

BelenusMordred 5 hours ago

> None of the algorithms enabled by default has any known security issue.

For the tinfoil among us, jump on the post-quantum key exchange train, there's little overhead, it still uses traditional elliptic curve crypto, best of both worlds.

The jump-off:

Panino 1 hour ago

Note that the above post-quantum key exchange method was removed in OpenSSH 8.5 (released March 3, 2021) in favor of a newer one:

So if you add the previous one to your server config, sshd may fail to start after upgrading to 8.5 or newer (8.7 is the most recent release).

I started using about two years ago, and the new one when OpenSSH 8.5 came out. They've both worked flawlessly for me. Thank you to TinySSH and OpenSSH, and of course cryptographers, for making this possible.
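For reference, the TinySSH-originated method removed in 8.5 was sntrup4591761x25519-sha512@tinyssh.org, and its replacement is sntrup761x25519-sha512@openssh.com. Pinning the new one looks roughly like this (a sketch; requires OpenSSH 8.5 or newer on both client and server):

    # /etc/ssh/sshd_config (OpenSSH 8.5+)
    KexAlgorithms sntrup761x25519-sha512@openssh.com,curve25519-sha256

Because the scheme is hybrid, a break of the NTRU Prime part still leaves you with ordinary X25519 security.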

ilaksh 3 hours ago

What's a post-quantum key exchange? I assume this is a real thing and not a reference to Devs or Lapsis or something?

tialaramex 2 hours ago


All of today's practical public key crypto could be broken by a sufficiently powerful quantum computer - if one existed which today it does not - thanks to a trick called Shor's Algorithm.

However, there are new public key algorithms that aren't affected by Shor's algorithm. Two problems: 1. They're all worse; they have huge keys, for example, or they're slow, so that sucks. 2. We don't know if they actually work, beyond the fact that Shor's algorithm doesn't break them on hypothetical quantum computers.

Even if you decide you don't care about (1) and you're very scared of the hypothetical quantum computers (after all once upon a time the Atom Bomb was also hypothetical) you need to deal with (2).

Post-quantum schemes like this for SSH generally arrange for that by doing two things: they use the shiny new quantum-resistant algorithm, but they also use an old-fashioned public key algorithm, and you need to break both to break the security. Either alone won't help you.

drclau 3 hours ago
ipaddr 7 hours ago

Safest setup

Disable password login. Disable SSH login. Put the computer back in the box and put the box under your bed.

0xdeadb00f 6 hours ago

Then, using aluminum foil and unfolded crisp packets, make your room into a Faraday cage. Never open the door for any reason whatsoever. In fact, remove the door altogether.

lizknope 3 hours ago

I absolutely disable sshd on machines where I never need to login remotely like my laptops. My servers of course run sshd.

0xFFFE 3 hours ago

Ensure lead bricks are used to build said room.

doubled112 2 hours ago

Lead is good, but I think it'd be easy to melt through.

Better add another layer of something more temperature resistant. Defense in depth.

chasil 7 hours ago

Is it really necessary to disable an E521 ECDSA host key? By all means, replace a P256 host key with E521, but are E521 keys truly weak enough to justify removal?

E521 is listed as safe on DJB's main evaluation site:

More specific DJB commentary: "To be fair I should mention that there's one standard NIST curve using a nice prime, namely 2^521 – 1; but the sheer size of this prime makes it much slower than NIST P-256."

I believe that OpenSSH is using the E521 provided by OpenSSL (as seen on Red Hat 7):

    $ openssl ecparam -list_curves
      secp256k1 : SECG curve over a 256 bit prime field
      secp384r1 : NIST/SECG curve over a 384 bit prime field
      secp521r1 : NIST/SECG curve over a 521 bit prime field
      prime256v1: X9.62/SECG curve over a 256 bit prime field
These appear to have been contributed by Sun Microsystems, and were designed to avoid patent infringement.
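For what it's worth, generating a 521-bit ECDSA host key with stock tooling looks like this. Note that it produces a key on NIST P-521 (secp521r1); despite the similar name, that is not the Edwards curve E-521:

    # Generates an ECDSA host key on NIST P-521 (secp521r1)
    ssh-keygen -t ecdsa -b 521 -f /etc/ssh/ssh_host_ecdsa_key -N ''

The resulting public key type is ecdsa-sha2-nistp521.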

rdpintqogeogsaa 5 hours ago

Ignoring the fact that some of the SafeCurves criteria are questionable (reasonably performant complete short Weierstrass formulae have existed for a while; indistinguishability is a completely niche feature that is hardly ever required)...

These are not the same curves. NIST P-521 is a short Weierstrass curve defined by NIST. E-521 is an Edwards curve introduced by Aranha/Barreto/Pereira/Ricardini.

NIST P-521: y^2 = x^3 - 3x + 0x51953EB9618E1C9A1F929A21A0B68540EEA2DA725B99B315F3B8B489918EF109E156193951EC7E937B1652C0BD3BB1BF073573DF883D2C34F1EF451FD46B503F00

E-521: x^2 + y^2 = 1 - 376014x^2y^2

The only thing they share is the finite field over which they're defined, GF(2^521 - 1).

chasil 5 hours ago

Thank you for the clarification.

Does this reduce the safety of an OpenSSH ECDSA key defined at 521 bits? That large constant is not reassuring, despite DJB's direct commentary.

rdpintqogeogsaa 5 hours ago

To the best of my current knowledge, it's at most possible that the NSA backdoored the NIST curves. I'm unaware of anyone in academia positively proving the existence thereof.

If your threat model doesn't include the NSA or other intelligence agency level state actors, ECDSA with NIST P-521 will serve you just fine.

(ECDSA is per se a questionable abuse of elliptic curves born from patent issues now long past, but it's not a real, exploitable security problem, either, if implemented correctly.)

GoblinSlayer 10 minutes ago

AIUI, E-521 is not P-521.

beermonster 5 hours ago

Not everyone knows that you can use MFA with SSH. I’ve successfully used Google Authenticator via PAM[1] and YubiKey[2].

You can also set up SSH certificate authorities instead of using self-signed ones [3]
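The wiring for the PAM/Google Authenticator route is roughly the following (a sketch, assuming the pam_google_authenticator module is installed and each user has run the google-authenticator enrollment tool):

    # /etc/pam.d/sshd (append)
    auth required pam_google_authenticator.so

    # /etc/ssh/sshd_config
    ChallengeResponseAuthentication yes
    AuthenticationMethods publickey,keyboard-interactive:pam

With AuthenticationMethods set like this, a valid key AND a TOTP code are both required to log in.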




akerl_ 5 hours ago

Jumping on the bandwagon here, SSH also now supports FIDO/U2F. This allows for hardware security keys like Yubikeys to be used directly for auth, rather than via TOTP/HOTP codes.
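Assuming OpenSSH 8.2+ on both ends, the flow is just a new key type (the key path and host are placeholders):

    # Private half lives on the FIDO token; the token must be
    # present (and touched) for every authentication.
    ssh-keygen -t ed25519-sk -f ~/.ssh/id_ed25519_sk
    ssh-copy-id -i ~/.ssh/id_ed25519_sk.pub user@host
    ssh -i ~/.ssh/id_ed25519_sk user@host

The file on disk is only a handle to the key on the token, so copying it off the machine gains an attacker nothing.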

tialaramex 3 hours ago

The FIDO tokens have no intention of allowing you to do anything else except FIDO with their keys (in fact the cheapest ones literally couldn't if they wanted to, good) and the SSH protocol of course was not originally designed for these tokens (it's from last century!) so the result is that the OpenSSH team had to design a custom key type for this purpose.

In consequence, although this technology is excellent and I endorse choosing it, especially in tandem with other FIDO usage (e.g. WebAuthn for web sites, and I believe Windows can use it to authenticate users to their desktops/laptops), you need to understand that both sides must have the necessary feature for it to authenticate you: both your clients and any SSH servers you need to authenticate against must recognise the FIDO-specific auth method in SSH.

If you mostly administrate shiny modern *BSD or Linux boxes, they have a new enough OpenSSH, so this Just Works™. But if you've got some creaky five-year-old VMs or, worse, real servers running something like RHEL 6, that may be an obstacle to practically deploying this.

Good news is that this will improve over time, and e.g. GitHub did eventually learn the new key type.

beermonster 4 hours ago

Oh neat! Didn’t know that :)

staticassertion 1 hour ago

I was just searching for someone bringing this up. If there's any advice I could give someone setting up SSH, it would be to use a FIDO key + short-lived sessions.

mercora 4 hours ago

I did not notice until skimming this Arch wiki page that it is now possible to require both MFA (or PAM auth in general) and key auth at the same time to authenticate. Great!

beermonster 4 hours ago

I made a mistake once setting this up and managed to require password AND key AND MFA. This was a misconfiguration, but it might be useful for some use-cases. So it's good to know it's not either/or.

lamnk 7 hours ago

The first thing I do on a new remote box is to move SSH to a non-standard port other than 22. I use the same port for every remote box I have, then add that port to `.ssh/config` on the local box.

Second is to disable root login.

Third is to copy my private key over and disable password login.

3 essential steps to secure SSH.
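In config terms, those steps boil down to a few lines (port 2222 is just an arbitrary example):

    # /etc/ssh/sshd_config on the remote box
    Port 2222
    PermitRootLogin no
    PasswordAuthentication no

    # ~/.ssh/config on the local box
    Host *
        Port 2222

Remember to run `sshd -t` and keep your existing session open when reloading, in case a typo locks you out.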

drclau 3 hours ago

Just use blacklistd [0] on FreeBSD instead of changing the port. It works with sshd, and it temporarily blocks abusive IPs.
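On FreeBSD the moving parts look roughly like this (a sketch from memory; check blacklistd(8) and the Handbook before copying):

    # /etc/rc.conf
    blacklistd_enable="YES"

    # /etc/blacklistd.conf -- block for 24h after 3 failures
    [local]
    ssh  stream  *  *  *  3  24h

    # /etc/ssh/sshd_config (FreeBSD's sshd)
    UseBlacklist yes

blacklistd then inserts the blocks via your packet filter (pf/ipfw) rather than parsing logs the way fail2ban does.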


mercora 5 hours ago

Please do not copy your private key to remote machines ;) You can use the ssh-copy-id tool; it does the right thing for you.

lamnk 3 hours ago

Yeah, my mistake, I meant to write "copy the public key" :D Copying by hand is OK but ssh-copy-id is simpler. Can't edit the comment now hmmm...

kd913 6 hours ago

I think there are better approaches than this.

1) Set up a VPN via WireGuard and only expose that random UDP port. That way only a single UDP port is exposed and port-scans become infeasible.

2) Set up 2FA via libpam-google
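A minimal server-side WireGuard config for step 1 might look like this (a sketch; keys, addresses, and the port are placeholders):

    # /etc/wireguard/wg0.conf (server)
    [Interface]
    Address = 10.0.0.1/24
    ListenPort = 51820          # the single exposed UDP port
    PrivateKey = <server-private-key>

    [Peer]
    PublicKey = <client-public-key>
    AllowedIPs = 10.0.0.2/32

WireGuard silently drops packets that don't authenticate, so scanning that port reveals nothing is listening.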

lamnk 3 hours ago

Yes, at work we use OpenVPN and only expose VPN, HTTP, HTTPS ports to the public.

But I find VPN a bit overkill on my personal machines.

kd913 3 hours ago

My use for it was to have a 5-way residential VPN across multiple countries, for obvious reasons. That wouldn't really work with just ssh. It also makes the shared infrastructure a lot easier to use for the rest of my family.

Also a globally accessible pihole connected to DoH which ensures somewhat global privacy.

They are basically my perfect use for my raspberry pis. Extremely low power, but perfectly capable for handling say 1080p video streams or to RDP into machines for access to cross-country resources.

RealStickman_ 5 hours ago

What would you do if your Wireguard tunnel dies?

That's the one thing that's prevented me from actually doing this.

akerl_ 4 hours ago

The same thing that happens if the SSH daemon dies, I guess?

FWIW, I’ve been using Wireguard for a while (probably ~2 years?) as an always-on VPN for multiple mobile devices, and also as a reverse tunnel to pinhole service access inside a LAN. The Wireguard config and daemon has been rock solid. The only time it’s failed is when I messed up the AllowedIPs, but that failure occurs at configuration time. It has never crashed, or stopped routing traffic correctly, or otherwise failed in a way that interrupted traffic flows.

kd913 4 hours ago

I have 5 locations running effectively independent VPNs, each hub connected to each other for redundancy if a VPN falls over.

i.e. Each hub has 1 VPN in, or is connecting 4 ways out.

If the port forwarding or something fails inbound, then I can connect via another VPN and try and debug/diagnose what is wrong.

If all VPNs are reporting down, then I know the pi/internet is completely down. It will either restore connectivity on its own, or I have someone there who can plug/unplug/restore the system if necessary. The same kind of problem would occur if ssh or wireguard falls over.

ur-whale 33 minutes ago

> What would you do if your Wireguard tunnel dies?

WireGuard tunnels are pretty robust to failure.

They can survive you changing your wifi access point and IP, for example.

mercora 5 hours ago

ssh is typically the only thing I expose (publicly if needed) because, in most environments where it is running, it is used for troubleshooting issues. If your issue is that your wireguard peer can't connect, you are lost with that suggestion.

akerl_ 6 hours ago

Just so we’re clear, this is 1 step to secure SSH, 1 step to avoid installing logrotate, and 1 step to encourage good admin practices.

Changing the port and using a non-root user for SSH don’t appreciably change the strength of the server’s security.

LinuxBender 6 hours ago

Changing the port is obfuscation and by itself would not enhance security; however, it does preclude all the noise from the automated bots. This allows you to have better alerting on brute force attempts, because any remaining attempts are a human manually targeting your server. The end result is effectively a better security posture. I have servers sprinkled all over the internet, and in the last 30 years or so bots have never tickled my ssh daemon.

lizknope 3 hours ago

I got a new cloud virtual machine and didn't log in for 2 hours. When I did, the logs showed about 50 attempts to log in from random IP addresses.

I changed my port to a random 4 digit number. Not a single failed login attempt in 6 months.

Obviously follow good security practices too, but I like not having to rotate and filter the logs with yet another tool.

akerl_ 5 hours ago

As I said: changing the port is just a means to avoid having to `apt install logrotate`

Active alerting on brute force attempts on an internet-facing SSH service is an exercise in human suffering. At best you don’t get any alerts, and at worst you get alerts that you do… what, precisely, with? Block the IP? Look up the “human” attacker and send them an email asking them to stop?

There are environments and entities for whom pattern detection on incoming connections makes sense, and those environments aren’t running internet-facing SSH.

Saint_Genet 5 hours ago
tester756 2 hours ago

obscurity increases security, doesn't it?

lou1306 7 hours ago

Relevant: If your SSH server is public, you can give its address to and it will report any weak spots in your config.

crispyambulance 5 hours ago

Thanks, that's an interesting tool.

But geezus, it's daunting to address SSH weaknesses unless you know ssh and its configuration top to bottom. I don't! And I am not afraid to admit it. I just use ssh "as-is" on mainstream platforms, for example, whatever Amazon gives me on Lightsail linux images or Windows 10 or whatever's on my Mac, and hope for the best.

I mean, there's 4 different groups of algorithms to think about: "Key Exchange", "Server Host Key", "Encryption" and "MAC". Each with a bunch of choices, all different, all consisting of mouthfuls of impossible to remember complicated names.

The sshcheck tool indicates that one of these is "insecure" because it may be "broken by nation states". What does that _really_ mean for a business or individual? ¯\_(ツ)_/¯ There are others which are labeled as "weak" so what does that mean? That it might someday be broken by nation-states?

I think it's still useful, however. Why wouldn't you want to have the most secure ssh connections if it's just a matter of configuration?

Ultimately, someone who uses the report from sshcheck has to decide whether it's worth it to google around, spend a solid 30 minutes or so, and figure out how to change their "out-of-the-box" ssh config to get a fully secure report from sshcheck.

chasil 5 hours ago

If you like Wireguard's security, you can emulate it in your sshd_config:
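Presumably something along these lines (my assumption of the intended config, mirroring WireGuard's Curve25519 + ChaCha20-Poly1305 choices):

    # sshd_config sketch: Ed25519 host key, Curve25519 KEX, single AEAD cipher
    HostKeyAlgorithms ssh-ed25519
    KexAlgorithms curve25519-sha256,curve25519-sha256@libssh.org
    Ciphers chacha20-poly1305@openssh.com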

The MAC setting is irrelevant, as that function is built into the AEAD ciphers, which are to be preferred (the alternative is AES-GCM).

This will shut off a lot of legacy SSH clients. Android ConnectBot specifically needs the AES cipher; adding it causes problems for PuTTY.

Otherwise, this is the classic "best practice" site for SSH:

egberts1 3 hours ago

Exactly my finding too!

Except that certain versions of the macOS (and Windows) ssh clients would also be unable to connect.

chasil 3 hours ago

I don't know about MacOS, but Microsoft's native OpenSSH supports this configuration.

    C:\>ssh -vv
    OpenSSH_for_Windows_8.1p1, LibreSSL 3.0.2
    debug2: KEX algorithms:
    debug2: ciphers ctos:
    debug2: ciphers stoc:
    debug1: kex: server->client cipher: MAC: <implicit> compression: none
    debug1: kex: client->server cipher: MAC: <implicit> compression: none
Above you can also see that the MAC is implicit with the chosen AEAD cipher.

gautamcgoel 2 hours ago

Yo, do you play Supreme Commander? I think I saw someone with your username on FAF...

crispyambulance 2 hours ago

No, it's just a randomly chosen name (a British new wave band from the early 80's).

dspillett 6 hours ago


Though I'll have to hunt out (or try to knock together) something that we can run locally for checking internal-only/white-listed hosts (like for HTTPS config checking).

binkHN 5 hours ago

I tried out the tool, and all I got was a “Timeout exceeded while waiting for welcome message.”

For every public SSH server I put up, I always configure a rate limiter in front of it to help thwart SSH attacks. Evidently, it thwarts this as well.

shapefrog 4 hours ago

I like it a lot ... I just don't know what to do about all my 'weak' results.

akerl_ 4 hours ago

Generally they’re going to be for legacy ciphers/MACs/etc.

If you don’t need them, you can turn them off. If you’re the only one accessing your servers, you can honestly just pick a single option for each based on the highest security option that’s supported by all your client devices. is a good starting point. The lists of available options are sorted from left -> right, most optimal -> least.

shapefrog 4 hours ago

Yeah I was kinda hoping that each of the fields would point to some instructions on disabling - but that is just me being too lazy to google things.

edit: thanks for that link -- swiped their kex, ciphers and macs and didn't break anything (that I know of)

qersist3nce 6 hours ago

nice honeypot ;)

(joking, thanks for the suggestion)

beermonster 5 hours ago

When using the SSH protocol for running automated remote commands, you can improve security by using a forced command[1] within your authorized_keys file.
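For example, to let a key run one backup script and nothing else (the script path and the truncated key are placeholders):

    # ~/.ssh/authorized_keys on the server
    command="/usr/local/bin/run-backup.sh",no-pty,no-port-forwarding,no-agent-forwarding,no-X11-forwarding ssh-ed25519 AAAA... backup@client

Whatever command the client asks for, sshd runs the forced command instead (the original request is available to the script in SSH_ORIGINAL_COMMAND).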


rsync 4 hours ago

... and you can also restrict by IP address in authorized_keys ...

beermonster 4 hours ago

Ah yes. Before your key type specifier (i.e. at the very beginning), use something like:
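Something along these lines, where 203.0.113.0/24 stands in for your own network and the truncated key is a placeholder:

    # ~/.ssh/authorized_keys -- key only accepted from the listed network
    from="203.0.113.0/24" ssh-ed25519 AAAA... user@laptop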


tiffanyh 5 hours ago


I feel like an important part of "hardening" a server is to remove/disable unused services. Does anyone know if NanoBSD is actively worked on by the FreeBSD team and/or still in use? For those not aware, NanoBSD is an official build from the FreeBSD team that allows you to compile a slimmed-down FreeBSD build that is read-only yet can run any/all FreeBSD software.

I can find very little about NanoBSD other than a handful of posts from 10 years ago. It seems like a great foundation for hardening a server.

enduser 5 hours ago

I looked into this for a project a couple of years ago (to boot VMs from minimal customized ISO) and ended up using mfsbsd instead.

tiffanyh 5 hours ago

That's interesting, this appears to be created by a core FreeBSD developer.

I'm not able to find a lot of documentation on it. How does it compare to NanoBSD (and why did you choose it over Nano)?

robohoe 4 hours ago

I've used Martin Matuska's mfsbsd in the past to install a system with ZFS on root. I believe now that's natively supported, but back in the day it was quite an involved thing :)

samgranieri 5 hours ago

I like reading tutorials on this subject. One of my favorites, albeit six years old, is

mkj 7 hours ago

It generates a new RSA key but disables it in the config? Seems like a bit of mindless cut-and-paste.

tester756 8 hours ago

I always heard that FreeBSD has unparalleled networking

Does it mean that it'd be worth picking FreeBSD over Linux for my C# crud app if it had to handle a lot of requests/sec? (let's ignore db for the moment)

toast0 7 hours ago

As with all things, you would really need to benchmark the system, preferably with real load, both ways to know for sure. But that takes a lot of time, especially if you're going to put in the time to tweak both systems.

People can do amazing stuff with enough time in both FreeBSD and Linux. I honestly think most server applications wouldn't be held back by either OS. You need your application to be really lightweight and focused before the OS makes a big difference, and even then, the differences only show if you're maxing out the hardware.

I worked at WhatsApp, and enjoyed working with FreeBSD there, and clearly it worked for us. Linux in FB datacenters also worked, but the server components were a lot different so there was never an apples-to-apples comparison. I run FreeBSD on my personal servers because I enjoyed working with it at Yahoo and then WhatsApp; but my personal servers don't have any performance needs. Sure, the networking stuff is nice (and it was nice to work with in the kernel), but what I like most about FreeBSD is the lack of churn. I can look at old administrative recipes and all the commands still work. I can expect (and mostly get) that when I upgrade, everything will keep working, and maybe a little better; occasionally, a lot better.

tiffanyh 3 hours ago

Hi toast0

IIRC, I saw a presentation by someone (Rick?) where at your previous employer - you guys slimmed FreeBSD down to be unbelievably minimal such that only 2-3 total services ran on the entire server.

Was that done for performance reason? Or for hardening reasons?

If someone wanted to do that today with FreeBSD: would you recommend it and how would you go about doing it (NanoBSD)?

waynesonfire 4 hours ago

> but what I like most about FreeBSD is the lack of churn

I agree completely. As I mature in this field, this becomes an ever more important characteristic of the technology I adopt. Erlang also shares this property.

adrian_b 7 hours ago

There was a time when FreeBSD had much better networking performance than Linux, but that was many years ago.

Now, on supported hardware, their performance should be similar.

However, Linux has drivers for much more varied networking hardware. FreeBSD has very good support for Intel NICs, but some hardware from other vendors may not be supported.

FreeBSD has a few nicer kernel features for those who develop themselves a networking application, but more networking libraries useful for high performance applications are available for Linux, even if DPDK, which was mentioned in another reply, is available for both Linux and FreeBSD.

So, while I am a very satisfied FreeBSD user, I would recommend that someone with less experience use Linux, as there are more resources readily available.

On the other hand, for someone who wants to learn more about the implementation of networking applications, it can be useful to also try FreeBSD, to understand more about alternative solutions.

0xdeadb00f 6 hours ago

FreeBSD (and all the BSDs, for that matter) have pretty good networking stacks, aside from support for recent-gen WiFi.

loeg 3 hours ago

C# support isn't great on FreeBSD yet, so probably not.

tester756 2 hours ago

but if we assume that it works fine?

AFAIK there's ongoing and active work on FreeBSD support

loeg 23 minutes ago

That issue has been continuously open since 2015. Assuming it works, sure, but I would not hold my breath.

kd913 7 hours ago

I don't think it would be worthwhile.

If you want better latency and throughput, you wouldn't be using the kernel network stack and instead be opting for some userspace networking stack like DPDK or onload.

Depends obviously on what the bottlenecks of your application are, your NIC and the characteristics of your hardware as well.

crest 7 hours ago

FreeBSD has netmap for fast userspace packet processing.

kd913 7 hours ago

True, and the Linux kernel has zero-copy AF_XDP that enables memory to be shared with userspace. However, low-latency networking is a lot more than just simple kernel bypass.

It's things like pinning CPU cores dedicated to networking, disabling C-states, epolling, and being able to utilize bespoke firmware interfaces designed for smartnics. Also the application protocol, i.e. using features like TCP checksum offload and TSO.

Heck, the application would also need to be adjusted for a low-latency environment, probably via a custom JVM, and by doing things like touching data structures/variables ahead of time to ensure they are in CPU cache.

Frankly, I would recommend trying OpenOnload, which at least is compatible with native Linux socket programming, unlike DPDK.

almalkemqq 7 hours ago

I loved the simple explanation :)