How to correctly secure your Linux servers

Hey everyone,
As you probably know by now, I love opening small topics like this to get your opinions and experiences.

I am really interested to know which layers of security you add to your machines. On our side, we currently only work with SSH keys, but we still allow users to try to log in via SSH password and then ban them after 3 tries with Fail2ban, which gives us a fair list of banned IPs.
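For reference, the 3-try SSH ban described above maps onto a minimal Fail2ban jail like this (a sketch; paths and the time values are common defaults, adjust for your distro and Fail2ban version):

```ini
# /etc/fail2ban/jail.local -- sketch of a 3-try SSH jail
[sshd]
enabled  = true
port     = ssh
maxretry = 3
findtime = 10m
bantime  = 1h
```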

We have not yet moved to closing every port and opening only the needed ones, because I'm having a lot of trouble getting iptables to work, but that's another story.
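On the "locking yourself out with iptables" problem: a common trick is to schedule an automatic flush before applying new rules, so a mistake undoes itself. A sketch (assumes the `at` daemon is installed; the baseline rules are illustrative, not a complete policy):

```shell
# Safety net: revert to accept-all in 5 minutes unless we cancel the job.
echo "iptables -P INPUT ACCEPT; iptables -F INPUT" | at now + 5 minutes

# Minimal default-deny baseline: keep existing sessions and SSH alive.
iptables -P INPUT DROP
iptables -A INPUT -i lo -j ACCEPT
iptables -A INPUT -m conntrack --ctstate ESTABLISHED,RELATED -j ACCEPT
iptables -A INPUT -p tcp --dport 22 -j ACCEPT

# Still connected and happy with the rules? Cancel the pending flush:
atrm $(atq | awk '{print $1}')
```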

For FTPS/SFTP, if a password login is really required, we use at least a 64-character password with an uncommon username.

So I wanted to know how it is on your side: which measures do you take? And is closing ports mandatory for anything except DDoS attacks?


Reduce OS attack surface, e.g. install Alpine. Use bare/stripped down package lists. Close all ports with iptables, open only to IPs whitelisted in “KNOWN-IP” chain (automatically updated). At-rest partition encryption with dynamic key fetch on boot. Run inter-VPS traffic through private cloud. Containerize.
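The "KNOWN-IP" chain approach can be sketched roughly as follows (the chain name comes from the post above; the address is illustrative, and the automatic update would simply rewrite the chain from your source of truth):

```shell
iptables -N KNOWN-IP
iptables -A KNOWN-IP -s 198.51.100.7 -j ACCEPT   # example admin IP
iptables -A KNOWN-IP -j DROP
# Send inbound SSH (or all new traffic) through the whitelist chain:
iptables -A INPUT -p tcp --dport 22 -j KNOWN-IP
```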

I ran f2b for years, but found it to be a bit of a memory hog on a smaller VPS. When we only allow whitelisted IPs through the firewall then f2b isn’t much use anyway. For public-facing ports we geo-restrict in iptables (either whitelisted countries or blacklisted countries) and in most cases rate-limit connections via HAProxy.
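The HAProxy rate-limiting piece usually comes down to a stick-table tracking per-source connection rates. A sketch (names and thresholds are made up; tune to your traffic):

```
frontend fe_public
    bind :443
    stick-table type ip size 100k expire 30s store conn_rate(10s)
    tcp-request connection track-sc0 src
    tcp-request connection reject if { sc0_conn_rate gt 20 }
    default_backend be_app
```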


It's becoming increasingly rare for commonly used packages from the repos of major (LTS) distros to default to a state that's known to be insecure on a system that isn't shared. Single-tenant systems are much more commonly compromised through poor passwords or vulnerable web applications.

On a shared server especially, you should always think twice before installing any software, even if it's not public-facing. Investigate how it functions and, where possible, think about how unprivileged users might interact with it and what the implications could be. E.g. something like Redis bound to localhost and shared among the users could potentially allow them to attack each other by polluting or stealing the caches used by their applications.


If you don’t mind, which software are you using for this?


In-house. Poli might get angry if I stray too far from security into VPS setup, but roughly: I have scripts which take a KVM and do a ‘hot’ (in-place) repartition/reinstall so that it has a very basic 3GB ext4 Alpine boot partition. Then I put everything in LXC containers on an encrypted btrfs partition. To unlock, the script is along the lines of:

url=`grep ^url= /etc/lxc-api.conf | sed -e "s/.*=//"`
uuid=`grep ^uuid= /etc/lxc-api.conf | sed -e "s/.*=//"`
bdev=`grep ^dev= /etc/lxc-api.conf | sed -e "s/.*=//"`
pw=$( curl -fs -H "X-LXC-Id: ${uuid}" "${url}/getkey" 2>/dev/null )
if [ "$?" -ne "0" ]; then
  echo "Fatal error retrieving key from server."
  exit 1
fi
echo "${pw}" | cryptsetup open "${bdev}" crypt-root
mount /var/lib/lxc
if [ ! -d /sys/fs/cgroup/systemd ]; then
  mkdir /sys/fs/cgroup/systemd
fi
mount -t cgroup -o rw,nosuid,nodev,noexec,relatime,none,name=systemd cgroup /sys/fs/cgroup/systemd
exit 0

The actual script is a bit more complex (e.g. the ability to instruct the VPS to re-encrypt the partition with a new key), but you get the idea. The backend is a simple SQLite3 DB which stores IP, UID and LUKS password. Of course if a provider wants to clone the running/unlocked VPS then you can’t stop this. The main purpose is in case of poor hardware disposal, e.g. provider shuts down and hardware goes who-knows-where.

Thus everything is abstracted from the VPS provider - essentially we can pick up and move any container to any other VPS provider. All the communication between containers is in an encrypted private cloud, so we (generally) don’t have to worry about any changes in IP addresses that come with moving provider.

For each container we have a script that does any iptables setup unique to that container (e.g. public ports like the web server). This all gets packaged up and stored in a repository, along with incremental btrfs snapshots. If a host node goes down (or we pick up a new VPS on sale!) then we run an ‘import’ script on the new host, which pulls the LXC container from the repo and runs its setup to do any networking.

Long reply, but that’s at least a very high-level idea. It is nothing fancy, just scaled down to work on 512MB KVM VPSes.


Very interesting security layer. It looks complicated, and in the end, once the machine is turned on you can still get all the data, if I understand correctly. And it does not annoy me at all; I'm actually interested when people share their security systems. I think we all want to have the most secure environment possible.

I really need to take a machine and practice with iptables; I always do something wrong and lock myself out, even with some basic commands :joy:.

Also, which countries do you block? That looks interesting to me. I have never blocked any country, because I think everyone should be able to access their data, but I'm really interested in what you blocked.

For a VPS, whoever has access to the host node can always snapshot the RAM. There are some steps to mitigate (not eliminate) that, but they have cost/latency implications. At least for me, this is not a reasonable trade-off. I'll go as far as choosing hosts I trust in locations with acceptable laws.

However, stuff can always go wrong: a DC seizes hardware in a legal dispute; a provider previously thought to be “reliable” goes bankrupt; whatever scenario you imagine. In this case you have a disk that is not securely erased and may be re-leased, sold at auction or whatever. This is the scenario such “at rest” disk encryption is designed for. Remove the LUKS key from the SQLite3 DB and the partition won't mount. The encryption step isn't that complicated to do, so at least for me it is a good trade-off.

It depends on the service. For nameservers, http(s) or smtp, I don’t do any geo-block.

For something like a private VPN:

# grep ip_deny prefs.txt
ip_deny=am az bg by ge kg kz md ru tj tm ua uz ae af ao bf bh bi bj bw cd cf cg ci cm cv dj dz eg er et ga gh gm gn gq gw id il in io iq ir jo ke km kw lb lr ls ly ma mg ml mr mu mw mz na ne ng om pk ps qa rw sa sd sl sn so st sy sz td tg tn tr tz ug ye yt za zm zw cn kp mn mo
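One way to turn such a country list into firewall material is to load each country's CIDR blocks into an ipset and match the set from iptables. A sketch, assuming per-country `zones/<cc>.zone` files (one CIDR per line, e.g. downloaded from ipdeny.com) and a made-up `geo-deny` set name; the commands are only printed, not executed:

```shell
# Expand an ip_deny country list into ipset/iptables commands.
geo_deny_rules() {
    # $1: space-separated ISO country codes, as in prefs.txt
    echo "ipset create geo-deny hash:net -exist"
    for cc in $1; do
        # assumes zones/<cc>.zone holds one CIDR per line
        echo "for net in \$(cat zones/$cc.zone); do ipset add geo-deny \$net -exist; done"
    done
    echo "iptables -I INPUT -m set --match-set geo-deny src -j DROP"
}

# Pull the list straight out of prefs.txt, as in the post above:
geo_deny_rules "$(sed -n 's/^ip_deny=//p' prefs.txt 2>/dev/null)"
```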

For other services (like IMAP) I do a “geo-allow” for certain whitelisted countries.

But other than these services, I don’t really have any public ports. Things like ssh are restricted to ‘known IPs’.

When you posted this topic I did start writing something down; anyway, your question is quite broad, and ultimately I couldn't tell if you are approaching this from a customer's or a provider's perspective. I also can't tell if you're in charge of managing the boxes you offer as a provider.

I see a very specific approach has been posted, I’ll come to that later.

The hints to reduce OS attack surface, to use either a stripped down or an LTS distro and to check your public facing services are the most versatile ones.
You mention passwords twice in your OP:

If you're e.g. controlling the frontend used to set the passwords for these services, you may consider double-checking via an API call whether the passwords set by your users are already compromised.
I wouldn't enforce 64-character passwords; long, complex passwords using the whole ASCII table were once encouraged, but ~3 years ago NIST updated its policy, and NCSC and ENISA followed suit. If anything, you may want to explore SSO and 2FA solutions. Consider throttling/rate-limiting failed sign-ins rather than relying on outright bans. Password managers are generally encouraged.
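The "already compromised" check can be done without sending the password anywhere: haveibeenpwned's range API uses k-anonymity, so only the first five hex characters of the SHA-1 leave your box. A sketch (the function name is mine; the network call is left commented out so the snippet stays offline):

```shell
# Split a password's SHA-1 into the 5-char prefix sent to the API
# and the 35-char suffix matched locally against the response.
hibp_prefix_suffix() {
    hash=$(printf '%s' "$1" | sha1sum | awk '{print $1}' | tr 'a-f' 'A-F')
    echo "$(echo "$hash" | cut -c1-5) $(echo "$hash" | cut -c6-)"
}

# Usage sketch -- count matches for the suffix in the returned range:
# set -- $(hibp_prefix_suffix 'hunter2')
# curl -fs "https://api.pwnedpasswords.com/range/$1" | grep -c "^$2:"
```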

If you want ipsets to either DROP or throttle specific attacks, you may want to take a look at the FireHOL IP lists: the multi- and single-purpose lists are sensibly kept up to date.
You may want to consider Network- or Host-based IDS, depending on your role and your setup.

About “hardening” ssh specifically: you may want to quickly audit your setup and correct it if possible and required.
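A handful of sshd_config lines cover most of what such ssh audits flag. A sketch only; check it against your own audit output and OpenSSH version before applying (e.g. `KbdInteractiveAuthentication` is the newer name for `ChallengeResponseAuthentication`):

```
# /etc/ssh/sshd_config -- common hardening bits
PasswordAuthentication no
PermitRootLogin prohibit-password
PubkeyAuthentication yes
KbdInteractiveAuthentication no
X11Forwarding no
MaxAuthTries 3
```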
About “hardening” the OS as a whole: you may find CISOfy/Lynis a nice way to get at least some hints for your setup; you may eventually craft a specific policy for your servers if e.g. you need to meet some compliance standards.

Encryption-at-rest is always a nice thing to have, and sometimes it's required for compliance. If you're looking at FDE from a bare-metal perspective, and assuming you aren't going to encrypt /boot, an SSH dracut module or a dropbear initramfs (depending on the distro) may assist you in unlocking a remote machine. In a more complex setup you may automatically fetch a remote key at startup or, better, unlock using clevis and tang (this is a non-trivial setup); I think I may have brought this up elsewhere already when discussing more broadly about pulling keys for FDE.
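The clevis/tang route mentioned above boils down to binding the LUKS device to a tang server, after which unlock at boot is automatic while that server is reachable. A sketch, with the device path and URL as placeholders:

```
clevis luks bind -d /dev/sda2 tang '{"url": "http://tang.example.net"}'
# rebuild the initramfs so the clevis dracut module can unlock at boot
dracut -f
```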

In general, anyway, you don't just “secure your Linux server”: you first define a threat and a scenario, and then you act accordingly. So yeah, the topic is broad, to say the least.


Really interesting! Today I looked at SSH hardening and enabled it in our “quick” script, as well as FireHOL.

For FireHOL I very quickly made a tiny script with all the working lists I found, and also tried lists that did not block anything; the firehol_level1 list did block those IPs.
I will surely update it if I find another useful list, but it looks like it's working great.
GitHub - Poli-Systems/Firehol: Basic quick installer for firehol

I will look at the other things you stated tomorrow. And yeah, on my side I use Bitwarden, but checking passwords on haveibeenpwned is surely a good idea as well.

For now I mainly try to have a generic script for our machines; I still have a hardware firewall for all the different uses.

Well, you know, we're here talking about hardening, and then I read this:
sudo bash -c "$(wget -qO - --no-cache"

For FireHOL: you don't actually need to install the whole thing; you may just use the suggested ipsets as… ipsets, to then be used in conjunction with iptables. If you want to check how deep the rabbit hole goes, iprange is an additional stand-alone tool for your ipsets.