DDoS = distributed denial of service attack.
server: NGINX
os: UBUNTU
The Problem
A DDoS attack is a pain in the arse. It’s basically some bot out there, or a network of bots (distributed), sending requests to your server in an attempt to overload it and make it really, really slow – possibly to the point of causing it to crash.
The Solution
Luckily, this isn’t the first time people have heard of this, nor will it be the last, so there are security measures in play.
Firewall
Most notably, we have a firewall. The main premise of a firewall is that it blocks all traffic except the stuff you want through. Ubuntu has a built-in firewall called Uncomplicated Firewall, ‘ufw’, that should already be installed and ready for you to enable. The manual for ufw is here. The ufw Community Help Wiki is here.
The community help wiki is extremely helpful for getting started, but basically, you’ll want to:
sudo ufw allow ssh <- allow ssh connections to your server
sudo ufw allow http <- allow http connections to your server
sudo ufw enable <- enable ufw itself, now that you’ve allowed the connections you’ll still need in order to access it
sudo ufw status verbose <- have a look at what settings you’ve set
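If your server also serves traffic over HTTPS, open that port too before enabling the firewall (a small addition, assuming you terminate TLS on this box), and it doesn’t hurt to confirm the default policy denies everything you haven’t explicitly allowed:
sudo ufw allow https <- allow https connections to your server
sudo ufw default deny incoming <- block everything not explicitly allowed (this is ufw’s default, but being explicit costs nothing)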
This is just smart computing, especially with a server: it ensures that all your ports are blocked so that nobody you didn’t really want in can get in.
However, this does nothing to limit how many requests a single IP can throw at your server, so we still haven’t stopped the DDoS attack.
NGINX Limit Request Module
NGINX has a built-in module called limit_req – Limit Requests – that does exactly what one needs to do to limit the effectiveness of a DoS attack. If a DoS attack is capped at so many requests per second on your server and then bottlenecked for a certain time, it becomes a very inefficient DoS attack. Each time the limit is exceeded, NGINX writes an entry to its error log, which we’ll be using later with Fail2ban.
The NGINX manual for limit_req is here, and there is a great article on EasyEngine on configuring the limit_req module, but you’re going to be opening your NGINX config file:
sudo vi /etc/nginx/nginx.conf
and adding the following inside the http{ … } block:
limit_req_zone $binary_remote_addr zone=one:10m rate=1r/s;
What you’re doing here is setting up a limit_req_zone configuration that you’ll refer to later by the zone name: zone=one. You can change it to zone=superstrictrule, or anything of your choice. $binary_remote_addr is the client’s IP address in binary form, which is what the zone keys its counters on. 10m is the size of the shared memory zone that stores the per-IP state. rate=1r/s means that this zone’s limit is 1 request per second. You’ll want to configure this zone as you see fit, particularly the rate; you can use r/s for requests per second or r/m for requests per minute, and NGINX makes clear that if you’d like a rate of less than 1r/s you must express it per minute, e.g. 30r/m, which is equivalent to 0.5r/s.
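For instance, if you wanted a much stricter zone that only allows one request every two seconds, you could define a second zone alongside the first (the superstrictrule name is just the example from above, and the rate is purely illustrative):
limit_req_zone $binary_remote_addr zone=superstrictrule:10m rate=30r/m;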
Then, inside the unique server {…} block (or a virtual server if you’ve deployed them):
limit_req zone=one burst=5 nodelay;
This sets the actual limit_req rule. It doesn’t necessarily have to be inside a server { } block; it can be inside a location { } block if you’re really getting specific. zone=one identifies which limit_req_zone this limit_req applies to. burst=5 allows up to 5 requests beyond the defined rate to be accepted before NGINX starts rejecting them, so with a 1r/s rate a client can briefly exceed that by 5 requests. nodelay means those burst requests are served immediately instead of being spaced out to match the rate; anything beyond the burst gets an error straight away.
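Putting the two directives together, the relevant parts of nginx.conf end up looking roughly like this – a minimal sketch, assuming a single plain-HTTP server block (example.com is just a placeholder); your real config will have plenty more in it:
http {
    limit_req_zone $binary_remote_addr zone=one:10m rate=1r/s;

    server {
        listen 80;
        server_name example.com;   # placeholder server name

        location / {
            limit_req zone=one burst=5 nodelay;   # excess requests beyond the burst are rejected
        }
    }
}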
Fail2Ban
I’m not quite sure of the source of the name, but Fail2ban is a pretty brilliant tool. It scans your logs and will create specific blocks based on suspicious activity. So, if the same IP is hammering your server, it’ll see it and block it. Another cool thing is that it works for any logs, be it SSH, MySQL, HTTP, etc. Within the jail.local file that you create after installation, you can activate whichever monitors you wish.
The Ubuntu Community Wiki help file on Fail2ban steps you through installation and how to enable each module, and there is also this tutorial at EasyEngine on how to configure Fail2ban for NGINX that’s really helpful, but the steps are:
sudo apt-get install fail2ban <- install Fail2ban
sudo cp /etc/fail2ban/jail.conf /etc/fail2ban/jail.local <- create a local configuration file
sudo vi /etc/fail2ban/jail.local <- edit the local configuration file and add the section below (the filter it refers to is covered just after these steps):
[nginx-req-limit]
enabled = true
filter = nginx-req-limit
action = iptables-multiport[name=ReqLimit, port="http,https", protocol=tcp]
logpath = /var/log/nginx/*error.log
findtime = 600
bantime = 7200
maxretry = 10
sudo service fail2ban restart <- restart fail2ban with the new configuration settings
(might have to do ‘sudo /etc/init.d/fail2ban restart’ instead of above)
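One thing to check before that restart: the jail above points at filter = nginx-req-limit, which has to exist in /etc/fail2ban/filter.d/ for the jail to start. Newer Fail2ban releases ship a similar stock filter (nginx-limit-req), but if yours doesn’t have one, create it yourself. This is a minimal sketch along the lines of the EasyEngine tutorial mentioned above, matching the ‘limiting requests’ lines that limit_req writes to the NGINX error log:
sudo vi /etc/fail2ban/filter.d/nginx-req-limit.conf <- create the filter the jail refers to
[Definition]
failregex = limiting requests, excess:.* by zone.*client: <HOST>
ignoreregex =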
note: the logpath above should match where your NGINX config sends its errors, according to the error_log directive. This is sometimes set in the nginx.conf file itself, or, if you have virtual servers and have declared it inside the virtual server itself, it will be in there: /etc/nginx/sites-available/mywebsite. The * in front of error.log means that any log file ending in error.log will be matched by the rule, so mywebsite.error.log will be included, and so will myotherwebsite.error.log. It is handy to have each website (each virtual server) write its own error log so that you can monitor each site much more easily.
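For example, inside a virtual server’s block you might give it its own file (the filename here just matches the mywebsite example above):
error_log /var/log/nginx/mywebsite.error.log;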
Once you have Fail2ban running, you can monitor the log file, which is actually remarkably clear about what it’s doing (compared to some very cryptic log files):
sudo vi /var/log/fail2ban.log
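If you’d rather watch it update live instead of opening it in vi, tail works too:
sudo tail -f /var/log/fail2ban.log <- follow the log as new lines are written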
If it’s not there, you might have to go looking for it, using the command:
sudo ls -la /var/log/
To browse the folders and see what’s in them (hint: you’re looking for a file named fail2ban.log).
After a while, you may want to see a report from Fail2ban to see if it’s actually working. The command:
sudo fail2ban-client status nginx-req-limit
will give you an output that will tell you how it’s working.
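The exact layout varies a little between Fail2ban versions, but the report looks roughly like this (the file name, counts, and IP here are made up for illustration):
Status for the jail: nginx-req-limit
|- filter
|  |- File list:        /var/log/nginx/mywebsite.error.log
|  |- Currently failed: 0
|  `- Total failed:     12
`- action
   |- Currently banned: 1
   |  `- IP list:       203.0.113.45
   `- Total banned:     3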
Conclusion
Well, I hope this helps some of you start your path towards higher security on your webserver. Sometimes it can feel like you’re just hoping everything goes right, and when something does go wrong you’re caught with your pants down. Truly, this solution feels a little bit like magic.
I’d love it if some of you who have done this for a while could share your configurations below so I could get a look at optimal settings, or if you’ve written your own similar blog, please do share a link. Better security always starts with becoming more widely versed in secure practices.