How to Protect a LAMP Server Against nf_conntrack Flood Attacks

An AWS-hosted website went offline at 02:00 this morning. It was running on a t2.nano Debian 9 instance. I was unable to log into the affected server, and a reboot was the only available course of action. Logging in and checking the logs afterwards revealed thousands of errors like this in the kernel log, from 02:00 onward:

nf_conntrack: nf_conntrack: table full, dropping packet

The cause was a denial of service attack, coming from a small number of IP addresses seemingly located in Iran. However, it was a little unusual for a couple of reasons. This article explains more about the attack vector and presents a solution to guard against future attacks (in summary: block the offending IP addresses and tune the kernel).

About nf_conntrack

nf_conntrack is part of the Linux kernel firewall, sometimes called "netfilter" or sometimes just "iptables". When running, netfilter keeps a record of the network connections presently open on the system, and of those that have recently been opened and closed. Using this information, netfilter can, where necessary, block IP addresses from making too many connections in a given time period, or filter connections based on their state.

To store information about connections being thus “tracked”, Netfilter uses the nf_conntrack table. In the case of the server above, the table filled up. New network connections to the server (including my ssh connection) could not be made, leading to the “dropping packet” message in the logs. And that’s how the attack works. It fills the nf_conntrack table – gradually, over time – until the server falls over.
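
For example, a typical stateful rule pair accepts packets that belong to connections netfilter is already tracking, while allowing brand-new connections only to chosen ports. A minimal sketch, not taken from the affected server's actual configuration:

# iptables -A INPUT -m conntrack --ctstate ESTABLISHED,RELATED -j ACCEPT
# iptables -A INPUT -p tcp --dport 443 -m conntrack --ctstate NEW -j ACCEPT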

nf_conntrack Attack

Back to the system. After the AWS instance above had been rebooted, the attack continued, which was actually helpful because it allowed the investigation to continue (the live service had by then been moved to a standby system). To see a count of the network connections currently being tracked, look at the file nf_conntrack_count:

# cat /proc/sys/net/netfilter/nf_conntrack_count
165

And again a couple of minutes later:

# cat /proc/sys/net/netfilter/nf_conntrack_count
180

180 is a high number for this server. For comparison, the same command on a healthy LAMP server, hosting the live service, gives a count of about 10.

Another file, nf_conntrack, shows information about every connection being tracked:

# tail /proc/net/nf_conntrack
 ipv4     2 tcp      6 431338 ESTABLISHED src=5.211.167.102 dst=172.31.19.247 sport=59460 dport=443 src=172.31.19.247 dst=5.211.167.102 sport=443 dport=59460 [ASSURED] mark=0 zone=0 use=2
 ipv4     2 tcp      6 429228 ESTABLISHED src=78.39.13.110 dst=172.31.19.247 sport=15897 dport=443 src=172.31.19.247 dst=78.39.13.110 sport=443 dport=15897 [ASSURED] mark=0 zone=0 use=2
 ipv4     2 tcp      6 431047 ESTABLISHED src=5.121.204.0 dst=172.31.19.247 sport=43305 dport=443 src=172.31.19.247 dst=5.121.204.0 sport=443 dport=43305 [ASSURED] mark=0 zone=0 use=2
 ipv4     2 tcp      6 431805 ESTABLISHED src=78.39.13.110 dst=172.31.19.247 sport=17439 dport=443 src=172.31.19.247 dst=78.39.13.110 sport=443 dport=17439 [ASSURED] mark=0 zone=0 use=2
 ipv4     2 tcp      6 429312 ESTABLISHED src=5.211.167.102 dst=172.31.19.247 sport=58527 dport=443 src=172.31.19.247 dst=5.211.167.102 sport=443 dport=58527 [ASSURED] mark=0 zone=0 use=2
 ipv4     2 tcp      6 431706 ESTABLISHED src=78.39.13.110 dst=172.31.19.247 sport=17412 dport=443 src=172.31.19.247 dst=78.39.13.110 sport=443 dport=17412 [ASSURED] mark=0 zone=0 use=2
 ipv4     2 tcp      6 430248 ESTABLISHED src=5.121.204.0 dst=172.31.19.247 sport=42302 dport=443 src=172.31.19.247 dst=5.121.204.0 sport=443 dport=42302 [ASSURED] mark=0 zone=0 use=2
 ipv4     2 tcp      6 430058 ESTABLISHED src=5.121.204.0 dst=172.31.19.247 sport=41819 dport=443 src=172.31.19.247 dst=5.121.204.0 sport=443 dport=41819 [ASSURED] mark=0 zone=0 use=2
 ipv4     2 tcp      6 429150 ESTABLISHED src=78.39.13.110 dst=172.31.19.247 sport=15856 dport=443 src=172.31.19.247 dst=78.39.13.110 sport=443 dport=15856 [ASSURED] mark=0 zone=0 use=2
 ipv4     2 tcp      6 429591 ESTABLISHED src=78.39.13.110 dst=172.31.19.247 sport=16146 dport=443 src=172.31.19.247 dst=78.39.13.110 sport=443 dport=16146 [ASSURED] mark=0 zone=0 use=2

Attack Source

By processing the nf_conntrack file, the originating IP addresses can be highlighted. The following command, repeated every few seconds, indicated that four remote IP addresses were responsible:

# cat /proc/net/nf_conntrack | awk -F '  *|=' '{print $8}' | sort | uniq -c | sort -rn
303 78.39.13.110
189 5.121.204.0
156 5.126.223.176
142 5.211.167.102

Every few hours, I noticed that two IP addresses would "retire" from opening connections, and another two would take their place. All of the IPs seemed to be located in Iran, though none were listed in AbuseIPDB.

Speed of Attack

What's more, new connections are being opened fairly quickly. How quickly? About 19 every minute:

# cat /proc/sys/net/netfilter/nf_conntrack_count ; sleep 60 ; cat /proc/sys/net/netfilter/nf_conntrack_count
249
268

To see the maximum number of entries the table can hold, check nf_conntrack_max:

# cat /proc/sys/net/netfilter/nf_conntrack_max
16384
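
Combining that limit with the observed fill rate gives an estimate of the time remaining. A quick back-of-the-envelope check with awk, using the count of 180 seen earlier and roughly 19 new entries per minute:

# awk -v max=16384 -v count=180 -v rate=19 'BEGIN { printf "%.0f minutes\n", (max - count) / rate }'
853 minutes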

In about 853 minutes from now, or just over 14 hours, the table will fill up and the server will again be in a helpless state, unable to accept any new network connections. SSH will again be impossible and another reboot will be required. That is obviously unacceptable, so what can be done?

Attack Vector

Although I called these things "connections" above, they aren't full network connections, i.e. sockets. They are really just occupied slots in the nf_conntrack table. That is to say, the attack is directed at the netfilter framework itself, rather than at the wider system.

The attack software works by initiating a network connection to our system, on the https port, and then not completing the TCP "three-way handshake". It is probably the usual SYN flood: the source IP sends a SYN, receives a SYN-ACK in reply, but then never sends the final ACK that would complete the setup.

Our system half-opens a socket, puts it into the SYN_RECV state, patiently waits for the ACK and then, not receiving it, closes the socket a minute or two later. No harm done, you might think. However, netfilter attempts to track every one of these connections, eventually building up thousands of entries in the nf_conntrack table and, as explained, overwhelming the firewall resource and causing the system to seize up.

Netstat Output

A netstat command issued at any time during the attack reveals about 20 or 30 connections from the currently active IPs, most of them in the SYN_RECV state. Careful observation of the output (for example with watch -n 1 'netstat -a | grep https') shows connections being initiated every few seconds, as expected.
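
To put a number on it at any instant, the half-open connections can simply be counted (the count will also include any legitimate half-open connections):

# netstat -ant | grep -c SYN_RECV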

Protecting Against the Attack

In order to prevent this type of attack from taking down the system, connections need to be removed at a faster rate than they are made; the conntrack table will then never fill up. Connections can be deleted manually, the kernel can be tuned to expire them more quickly, and the originating IP addresses can be blocked.

Delete Conntrack Entries

A quick course of action for a server under attack is to delete the existing conntrack entries. A package called “conntrack” provides the required tools. Install it as follows.

# apt-get install conntrack

Entries can be filtered according to the originating IP. I’ll use some of the attack addresses shown above. Nuke those entries:

# cat /proc/sys/net/netfilter/nf_conntrack_count
 1097
# conntrack -D --src 78.39.13.110 > /dev/null
# cat /proc/sys/net/netfilter/nf_conntrack_count
 701
# conntrack -D --src 5.121.204.0 > /dev/null
# cat /proc/sys/net/netfilter/nf_conntrack_count
 450
# conntrack -D --src 5.126.223.176 > /dev/null
# conntrack -D --src 5.211.167.102 > /dev/null
# cat /proc/sys/net/netfilter/nf_conntrack_count
 36

1097 entries have been reduced to 36. I feel better already. The attack is still continuing, though:

# cat /proc/sys/net/netfilter/nf_conntrack_count
 82

Deleting entries has bought the server more time, but the table will still overflow eventually.
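
If the table is nearly full and headroom is needed immediately, the entire table can also be flushed, at the cost of discarding the state of legitimate connections as well:

# conntrack -F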

Tune Conntrack Kernel Parameters

Many sites suggest a course of action that involves making the nf_conntrack table much bigger (it is 16384 entries by default on our t2.nano instance), perhaps extending it to hundreds of thousands of entries, and tuning other variables to, for example, decrease the tracking timeouts or even disable tracking altogether. As this is well documented elsewhere, I won't repeat all of the parameters here.

Parameter tuning is probably a good idea, depending on the nature of your server and the amount of traffic it receives. Trial and error might be required to determine the best values, and in some cases it might not be possible to completely preclude a future attack from taking down the system. And if you are filtering traffic locally (with netfilter) based on connection state, then disabling nf_conntrack is not an option.
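
For orientation only, the sketch below shows the kind of tuning involved. The file name is arbitrary and the values are purely illustrative, not recommendations; adjust them to your own server and traffic.

# Example /etc/sysctl.d/60-conntrack.conf (illustrative values only)

# Allow more tracked connections (at the cost of a little more kernel memory).
net.netfilter.nf_conntrack_max = 65536

# Give up on half-open (SYN_RECV) entries sooner than the 60 second default.
net.netfilter.nf_conntrack_tcp_timeout_syn_recv = 30

# Expire idle established entries after an hour rather than the default five days.
net.netfilter.nf_conntrack_tcp_timeout_established = 3600

Apply the file with "sysctl --system", or set individual values on the fly with "sysctl -w".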

Blocking Rogue IP Addresses

A quicker and more certain way to prevent your server from being taken offline is simply to block the offending IP addresses at your firewall (either the corporate firewall or the local netfilter). A script could be written to run every hour or so and ban IP addresses, if necessary, according to the contents of nf_conntrack or the output of netstat.

Here is a quick example Bash script.

#!/bin/bash

# Quick script to check nf_conntrack table and potentially block troublesome IPs

# IP addresses originating more than $THRESHOLD conntrack entries
# will be banned.
THRESHOLD=30


cat /proc/net/nf_conntrack | awk -F '  *|=' '{print $8}' | \
   sort | uniq -c | \
   awk -v threshold=$THRESHOLD '$1 > threshold {print $0}' | \
while read count ip
do
   # "blacklist1" is the name of a locally defined ipset.
   echo ipset add blacklist1 $ip
   ipset add blacklist1 $ip

   # Delete the conntrack entries of the offending IP, so a second ban attempt
   # is not made the next time this script is run.
   echo conntrack -D --src $ip
   conntrack -D --src $ip > /dev/null

   echo

done

Incidentally, I have written a separate guide on how to protect your server with ipset.
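
If you haven't yet created such a set, something along the following lines sets one up and wires it into netfilter. The set name matches the script above; where the DROP rule sits relative to your other rules is up to you.

# ipset create blacklist1 hash:ip
# iptables -I INPUT -m set --match-set blacklist1 src -j DROP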

Running the script:

# ./check_conntrack.sh
ipset add blacklist1 5.126.223.176
conntrack -D --src 5.126.223.176
conntrack v1.4.4 (conntrack-tools): 212 flow entries have been deleted.

ipset add blacklist1 5.211.167.102
conntrack -D --src 5.211.167.102
conntrack v1.4.4 (conntrack-tools): 119 flow entries have been deleted.

ipset add blacklist1 78.39.13.110
conntrack -D --src 78.39.13.110
conntrack v1.4.4 (conntrack-tools): 304 flow entries have been deleted.

The script has added the worst offending IP addresses to the local firewall blacklist, as well as removing 635 spurious entries from the conntrack table. The opening of new connections from those addresses stops immediately, as shown by these commands run a few hours later. Each command lists entries from the relevant IP address:

# conntrack -L --src 5.126.223.176
 conntrack v1.4.4 (conntrack-tools): 0 flow entries have been shown.
# conntrack -L --src 5.211.167.102
 conntrack v1.4.4 (conntrack-tools): 0 flow entries have been shown.
# conntrack -L --src 78.39.13.110
 conntrack v1.4.4 (conntrack-tools): 0 flow entries have been shown.

Conclusion

The nf_conntrack attack is subtle. Connections are opened at a slow rate, and after creating about 300, the individual bot node stops, while another comes online, continuing to open connections to your server. In all likelihood the bot is designed to “fly under the radar”, by avoiding high connection rates and large concentrations of source IP addresses.

For some reason, servers in AWS seem more prone to this attack than servers hosted elsewhere, perhaps because the AWS address ranges are well known to the bots responsible.

Recommendations

It is recommended that the size of the nf_conntrack table be increased, in order to extend the life of a system once an attack has started.

It is recommended to decrease the conntrack timeout values, so that slots in the nf_conntrack table will be released more quickly.

It is also recommended that offending IP addresses are regularly blocked by the firewall, using an automated process.

Care is needed, especially on a large commercial server, in order to avoid, for example, blocking your own servers or killing legitimate connections, and the value of $THRESHOLD should be set carefully. The script should also probably check the state of entries, with only TIME_WAIT state leading to a block.
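
As for the automated process itself, the script can simply be run from cron. The path and schedule below are only an example:

# Run the conntrack check every hour (e.g. in root's crontab).
0 * * * * /usr/local/sbin/check_conntrack.sh >> /var/log/check_conntrack.log 2>&1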

A combination of tuning the nf_conntrack subsystem and IP address blocking should protect any server against this kind of SYN flood attack by, in essence, removing entries from the table more quickly than they can accumulate.
