How to Protect a LAMP Server Against nf_conntrack Flood Attacks

An AWS-hosted website went offline at 02:00 this morning. It was running on a t2.nano Debian 9 instance. I was unable to log into the affected server, and a reboot was the only available course of action. Logging in and checking the logs afterwards revealed thousands of errors like this in the kernel log, from 02:00 onward:

nf_conntrack: nf_conntrack: table full, dropping packet

The cause was a denial of service attack, coming from a handful of IP addresses seemingly located in Iran. The attack was a little unusual, however, for a couple of reasons. This article explains more about the attack vector and presents a solution to guard against future attacks. (In summary: block the IP addresses and tune the kernel.)

About nf_conntrack

nf_conntrack is part of the Linux kernel firewall, sometimes called “netfilter” or sometimes just “iptables”. When running, netfilter keeps a record of the network connections presently open on the system, and of those that have recently been opened or closed. Using this information, netfilter can block IP addresses that make too many connections in a given time period, or filter connections based on their state.

To store information about the connections being “tracked” in this way, netfilter uses the nf_conntrack table. In the case of the server above, the table filled up. New network connections to the server (including my SSH connection) could not be made, leading to the “dropping packet” message in the logs. And that’s how the attack works: it fills the nf_conntrack table, gradually, over time, until the server falls over.

nf_conntrack Attack

Back to the system. After the above AWS instance had been rebooted, the attack continued, which was helpful because it allowed the investigation to continue (the live service having moved to a standby system). To see a count of the network connections being tracked, look at the file nf_conntrack_count:

# cat /proc/sys/net/netfilter/nf_conntrack_count

And again a couple of minutes later:

# cat /proc/sys/net/netfilter/nf_conntrack_count

180 is a high number. For comparison, the same command on a healthy LAMP server, hosting the live service, gives a count of 10.

Another file, nf_conntrack, shows information about every connection being tracked:

# tail /proc/net/nf_conntrack
 ipv4     2 tcp      6 431338 ESTABLISHED src= dst= sport=59460 dport=443 src= dst= sport=443 dport=59460 [ASSURED] mark=0 zone=0 use=2
 ipv4     2 tcp      6 429228 ESTABLISHED src= dst= sport=15897 dport=443 src= dst= sport=443 dport=15897 [ASSURED] mark=0 zone=0 use=2
 ipv4     2 tcp      6 431047 ESTABLISHED src= dst= sport=43305 dport=443 src= dst= sport=443 dport=43305 [ASSURED] mark=0 zone=0 use=2
 ipv4     2 tcp      6 431805 ESTABLISHED src= dst= sport=17439 dport=443 src= dst= sport=443 dport=17439 [ASSURED] mark=0 zone=0 use=2
 ipv4     2 tcp      6 429312 ESTABLISHED src= dst= sport=58527 dport=443 src= dst= sport=443 dport=58527 [ASSURED] mark=0 zone=0 use=2
 ipv4     2 tcp      6 431706 ESTABLISHED src= dst= sport=17412 dport=443 src= dst= sport=443 dport=17412 [ASSURED] mark=0 zone=0 use=2
 ipv4     2 tcp      6 430248 ESTABLISHED src= dst= sport=42302 dport=443 src= dst= sport=443 dport=42302 [ASSURED] mark=0 zone=0 use=2
 ipv4     2 tcp      6 430058 ESTABLISHED src= dst= sport=41819 dport=443 src= dst= sport=443 dport=41819 [ASSURED] mark=0 zone=0 use=2
 ipv4     2 tcp      6 429150 ESTABLISHED src= dst= sport=15856 dport=443 src= dst= sport=443 dport=15856 [ASSURED] mark=0 zone=0 use=2
 ipv4     2 tcp      6 429591 ESTABLISHED src= dst= sport=16146 dport=443 src= dst= sport=443 dport=16146 [ASSURED] mark=0 zone=0 use=2

Attack Source

By processing the nf_conntrack file, the originating IP addresses can be highlighted. The following command, repeated every few seconds, indicated that four remote IP addresses were responsible:

# cat /proc/net/nf_conntrack | awk -F '  *|=' '{print $8}' | sort | uniq -c | sort -rn
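The source addresses are withheld here, but the field split can be demonstrated on a sample conntrack line, with an RFC 5737 documentation address (192.0.2.10) standing in for a real attacker IP:

```shell
# A sample /proc/net/nf_conntrack line. 192.0.2.10 is a documentation
# address substituted for the real source IP.
line='ipv4     2 tcp      6 431338 ESTABLISHED src=192.0.2.10 dst=192.0.2.99 sport=59460 dport=443 src=192.0.2.99 dst=192.0.2.10 sport=443 dport=59460 [ASSURED] mark=0 zone=0 use=2'

# Splitting on runs of spaces or on "=" makes the source IP field 8.
echo "$line" | awk -F '  *|=' '{print $8}'    # prints 192.0.2.10
```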

Every few hours, I noticed that two IP addresses would “retire” from opening connections, and another two would take their place. All of the IPs seemed to be located in Iran, though none were listed in AbuseIPDB.

Speed of Attack

What’s more, new connections are being opened fairly quickly. How quickly? About 19 every minute:

# cat /proc/sys/net/netfilter/nf_conntrack_count ; sleep 60 ; cat /proc/sys/net/netfilter/nf_conntrack_count

To check the maximum number of allowable connections, check nf_conntrack_max:

# cat /proc/sys/net/netfilter/nf_conntrack_max

In about 853 minutes from now, or just over 14 hours, the table will fill up and the server will again be in a helpless state, unable to host any new network connections. SSH will again be impossible and another reboot will be required. Obviously unacceptable, so what can be done?
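The arithmetic behind that estimate is just the free space in the table divided by the growth rate. As a quick sketch (minutes_to_full is my own helper name; the figures are the ones measured above):

```shell
# Minutes until the table fills: (max - current count) / growth rate.
# Arguments: <nf_conntrack_max> <nf_conntrack_count> <entries per minute>
minutes_to_full() {
    echo $(( ($1 - $2) / $3 ))
}

# Default max 16384, current count ~180, ~19 new entries per minute:
minutes_to_full 16384 180 19    # prints 852
```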

Attack Vector

Although I called these things “connections” above, they aren’t full network connections, i.e. sockets. They are really just occupied slots in the nf_conntrack table. That is to say, the attack is directed at the netfilter framework itself, rather than at the wider system.

The attack software works by initiating a network connection to our system, on the HTTPS port, and then not completing the TCP “3-way handshake”. It is probably the usual SYN flood: the source IP sends a SYN, receives a SYN-ACK in reply, but then never sends the final ACK that would complete the setup.

Our system half-opens a socket, puts it into the SYN_RECV state, patiently waits for the ACK and then, not receiving it, closes the socket a minute or two later. No harm done, you might think. However, netfilter attempts to track every one of these attempts, eventually building up thousands of entries in the nf_conntrack table and, as explained, overwhelming the firewall resource and causing the system to seize up.

Netstat Output

A netstat command issued at any time during the attack reveals about 20 or 30 connections from the two IPs of interest, most in the SYN_RECV state. Careful observation of the output (for example with watch -n 1 'netstat -a | grep https') shows connections being initiated every few seconds, as expected.
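On systems where netstat is not installed, the half-open connections can be counted with ss from the iproute2 package instead (my own suggestion, not from the original investigation):

```shell
# Count incoming connections stuck half-open in SYN_RECV on port 443.
# -t: TCP only, -n: numeric addresses/ports; tail skips the header line.
ss -tn state syn-recv sport = :443 | tail -n +2 | wc -l
```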

Protecting Against the Attack

In order to prevent this type of attack from taking down the system, connections need to be removed at a faster rate than they are made. The conntrack table will then never fill up. Connections can be deleted, the kernel can be tuned to expire them more quickly, and the originating IP addresses can be blocked.

Delete Conntrack Entries

A quick course of action for a server under attack is to delete the existing conntrack entries. A package called “conntrack” provides the required tools. Install it as follows.

# apt-get install conntrack

Entries can be filtered according to the originating IP. I’ll use some of the attack addresses shown above. Nuke those entries:

# cat /proc/sys/net/netfilter/nf_conntrack_count
# conntrack -D --src > /dev/null
# cat /proc/sys/net/netfilter/nf_conntrack_count
# conntrack -D --src > /dev/null
# cat /proc/sys/net/netfilter/nf_conntrack_count
# conntrack -D --src > /dev/null
# conntrack -D --src > /dev/null
# cat /proc/sys/net/netfilter/nf_conntrack_count

1097 entries have been reduced to 36. I feel better already. The attack is still continuing, though:

# cat /proc/sys/net/netfilter/nf_conntrack_count

Deleting entries has bought the server more time, but the table will still overflow eventually.

Tune Conntrack Kernel Parameters

Many sites suggest a course of action that involves making the nf_conntrack table much bigger (it is 16384 by default on our t2.nano instance), perhaps extending to hundreds of thousands of entries, and tuning other variables to, for example, decrease the tracking timeout or even disable tracking altogether. As it is so well documented elsewhere, I won’t repeat the parameters here.

Parameter tuning is probably a good idea, depending on the nature of your server and the amount of traffic it receives. Trial and error might be required to determine the best values, and it might not be possible, in some cases, to completely preclude the possibility of future attacks taking down the system. And if you are locally filtering traffic (with netfilter) based on connection state, then disabling nf_conntrack is not an option.
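For orientation only, tuning of this kind usually means dropping a file under /etc/sysctl.d/ and applying it with sysctl --system. The file name and values below are illustrative placeholders to adapt for your own server, not recommendations:

```
# /etc/sysctl.d/90-conntrack.conf -- placeholder values, tune for your load
net.netfilter.nf_conntrack_max = 131072
net.netfilter.nf_conntrack_tcp_timeout_syn_recv = 30
net.netfilter.nf_conntrack_tcp_timeout_established = 86400
```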

Blocking Rogue IP Addresses

A quicker and more certain way to prevent your server from being taken offline is to simply block offending IP addresses on your firewall (the corporate firewall, or the local netfilter). A script could be written to run every hour or so and ban IP addresses, if necessary, according to the contents of nf_conntrack or output of netstat.

Here is a quick example Bash script.


#!/bin/bash
# Quick script to check the nf_conntrack table and potentially block troublesome IPs.

# IP addresses originating more than $THRESHOLD conntrack entries
# will be banned.
THRESHOLD=100   # example value; set to suit your traffic

cat /proc/net/nf_conntrack | awk -F '  *|=' '{print $8}' | \
   sort | uniq -c | \
   awk -v threshold=$THRESHOLD '$1 > threshold {print $0}' | \
while read count ip
do
   # "blacklist1" is the name of a locally defined ipset.
   echo ipset add blacklist1 $ip
   ipset add blacklist1 $ip

   # Delete the conntrack entries of the offending IP, so a second ban attempt
   # is not made next time this script is run.
   conntrack -D --src $ip > /dev/null
done


Incidentally, I have written a separate guide on how to protect your server with ipset.
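For completeness, the set referenced by the script can be created, and wired into netfilter, roughly as follows. This is a sketch; where the DROP rule belongs will depend on your existing ruleset:

```
# Create the set (once), then drop any packet whose source is in it.
ipset create blacklist1 hash:ip
iptables -I INPUT -m set --match-set blacklist1 src -j DROP
```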

Running it:

# ./
ipset add blacklist1
conntrack -D --src
conntrack v1.4.4 (conntrack-tools): 212 flow entries have been deleted.

ipset add blacklist1
conntrack -D --src
conntrack v1.4.4 (conntrack-tools): 119 flow entries have been deleted.

ipset add blacklist1
conntrack -D --src
conntrack v1.4.4 (conntrack-tools): 304 flow entries have been deleted.

The script has added the worst offending IP addresses to the local firewall blacklist, as well as removing 635 spurious entries from the conntrack table. The opening of new connections from those addresses stops immediately, as shown by these commands run a few hours later. Each command lists entries from the relevant IP address:

# conntrack -L --src
 conntrack v1.4.4 (conntrack-tools): 0 flow entries have been shown.
# conntrack -L --src
 conntrack v1.4.4 (conntrack-tools): 0 flow entries have been shown.
# conntrack -L --src
 conntrack v1.4.4 (conntrack-tools): 0 flow entries have been shown.


The nf_conntrack attack is subtle. Connections are opened at a slow rate, and after creating about 300, the individual bot node stops, while another comes online, continuing to open connections to your server. In all likelihood the bot is designed to “fly under the radar”, by avoiding high connection rates and large concentrations of source IP addresses.

For some reason, servers in AWS seem more prone to this kind of attack than servers hosted elsewhere, perhaps because AWS address ranges are better known to the bots responsible.


It is recommended that the size of the nf_conntrack table be increased, in order to extend the life of a system once an attack has started.

It is recommended to decrease the conntrack timeout values, so that slots in the nf_conntrack table will be released more quickly.

It is also recommended that offending IP addresses are regularly blocked by the firewall, using an automated process.

Care is needed, especially on a large commercial server, in order to avoid, for example, blocking your own servers or killing legitimate connections, and the value of $THRESHOLD should be set carefully. The script should also probably check the state of entries, with only TIME_WAIT state leading to a block.

A combination of tuning the nf_conntrack subsystem and IP address blocking should protect any server against this kind of SYN flood attack by, in essence, removing entries from the table more quickly than they can accumulate.
