Installing PhantomJS on Amazon Linux / Fedora / RedHat / CentOS over SSH

OS: Amazon Linux / Fedora / RedHat / CentOS
method: SSH
program: PhantomJS
dependency: Python

What is PhantomJS

PhantomJS is what is known as a headless web browser. This means it’s a web browser like Firefox or Google Chrome, without the bulky window that you’ve become very familiar with. This is great for program testing and website testing using scripts. It greatly reduces load time and thus makes testing a breeze.

Recent news shows that Google has decided to release their own headless version of Chrome for their extensive testing they do, and Firefox has been on that bandwagon for a long time now too. Which version you pick I will not discuss here, but this tutorial is for PhantomJS on a Linux computer over SSH.

Installing PhantomJS on Linux Computer over SSH

It was incredibly frustrating for me to figure out how to install PhantomJS on my computer, since it runs Amazon Linux, a stripped-down distribution meant for running on Amazon’s EC2 cloud computing servers.

However, it uses yum, which gives us a hint that many of the tutorials for CentOS or RedHat will work here too.


Build PhantomJS

note: the PhantomJS webpage suggests that you can simply download the package and install it along with a few dependencies; however, I wasn’t able to get this working, so I defaulted to building from source.

I am assuming you have Python installed. You’ll need it. (to install it, type: sudo yum install python3.4)

Make sure that git is installed:

sudo yum install git

Directly from the Build PhantomJS page, we’ll first install the dependencies:

sudo yum -y install gcc gcc-c++ make flex bison gperf ruby \
openssl-devel freetype-devel fontconfig-devel libicu-devel sqlite-devel \
libpng-devel libjpeg-devel

Next, we’re going to clone the git repository and download the source to begin the build. Type these into your SSH window one line at a time, pressing enter at the end of each line:

git clone git://
cd phantomjs
git checkout 2.1.1
git submodule init
git submodule update


This last command will prompt you with an alert that it will take some time; type Y for yes to continue.

Once it is completed, the executable will be available under the bin subdirectory.

Linking the executable in the Bin SubDirectory

Once this is complete, you’ll still not be able to run PhantomJS unless the location of the phantomjs executable is included in the system $PATH. The $PATH is how the system knows what you mean when you type phantomjs into the command window.

The bin subdirectory will most likely be located in the folder where you built PhantomJS, which, if you weren’t running sudo for the above, will be: /home/user/phantomjs/bin/

To view the directories that are included in your system path, type:

echo $PATH

This will give you a list of locations included in the PATH, separated by a colon (:). The directory /home/user/bin is most likely already included in this path, so instead of adding a new symbolic link (ln) or adding a new directory to the path, I just copied the phantomjs executable into this bin folder.

note: keep in mind, ‘user’ here is your Linux user, so it may be another name.
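If you’d rather not copy the binary around, the PATH mechanism itself is easy to see with a throwaway example (the /tmp/demo-bin directory and hellodemo script below are made up purely for illustration); the same idea applies to adding your phantomjs bin directory:

```shell
# Create a toy command in a new directory, add that directory to
# PATH, then run the command by bare name -- exactly what happens
# when you type "phantomjs" once its directory is on the PATH.
mkdir -p /tmp/demo-bin
printf '#!/bin/sh\necho hello-from-path\n' > /tmp/demo-bin/hellodemo
chmod +x /tmp/demo-bin/hellodemo
export PATH="$PATH:/tmp/demo-bin"
hellodemo    # prints: hello-from-path
```

For PhantomJS itself the equivalent would be export PATH="$PATH:/home/user/phantomjs/bin", added to ~/.bashrc to make it permanent.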

If you run the command:

sudo ls -la /home/user/

and don’t see a /bin folder in there, you’ll have to first create one:

sudo mkdir /home/user/bin/

then you can copy the phantomjs executable to the /bin/ folder:

sudo cp -Rp /home/user/phantomjs/bin/phantomjs /home/user/bin/

make sure it’s been copied over successfully:

sudo ls -la /home/user/bin

which will hopefully show you your phantomjs file!

Test PhantomJS

No point dusting your hands off yet, who knows if it’s really worked?! Time to test.

I’ve taken this code from the PhantomJS Quick Start page; it checks how quickly you can load a page using the PhantomJS headless magic:

var page = require('webpage').create(),
    system = require('system'),
    t, address;

if (system.args.length === 1) {
    console.log('Usage: loadspeed.js <some URL>');
} else {
    t =;
    address = system.args[1];, function (status) {
        if (status !== 'success') {
            console.log('FAIL to load the address');
        } else {
            t = - t;
            console.log('Loading ' + address);
            console.log('Loading time ' + t + ' msec');

To run this, create a new file called loadspeed.js, which we’ll do without sudo permissions:

vi loadspeed.js

Then press i to enter insert mode, copy the code above, and right click in the SSH terminal to paste it all in. Then press Esc and type :wq to save the file and exit.

Now we run this little codelet, passing it a URL to time:

phantomjs loadspeed.js

So, how quickly did yours load? Mine:

(screenshot: load time for Google)

Hope this works!



Creating Multiple Instances of Tor, Giving Multiple IP Addresses

OS: Amazon Linux / Linux / CentOS / RedHat
Package: Tor (the Onion Router)

Tor is an anonymizing tool used around the world in various security scenarios. Even for the regular Joe wanting to keep their browsing relatively anonymous, it’s a great tool. If everybody made it hard for people to spy on us, there would be a lot fewer spies, so the theory goes.

Depending on what you’re doing, you might want to tell Tor to launch a few different instances. This will provide you with a few different routes, or onions (if I may), to anonymize your internet browsing even further.

This tutorial will be about doing this through command line (SSH).

First, we must make sure that Tor is installed. For installation instructions of Tor, please refer to this blog.

To check whether Tor is installed:

sudo /etc/init.d/tor status

Which should show you something like:

tor is running (pid: 2944)

Now stop Tor so that we can edit the config file:

sudo /etc/init.d/tor stop

which should show you:

Stopping tor: [ OK ]

Creating Multiple Tor Instances

To make Tor work, you first send your requests to your localhost (almost always, which re-routes each request through the defined SocksPort (you’ll learn about that in a second), then through the Onion (jumping from IP to IP to anonymize your traffic), until it finally reaches your destination.

There are instructions out there for creating multiple Tor configuration files, each started separately, but I haven’t found any benefit to doing it that way. Every such instruction I’ve come across also lacks extensive configuration details, which worries me, since security should be built into the configuration itself; not to mention the pain of having to start four (or however many you make) unique instances in the first place. So…

First, we will edit the Tor configuration file, torrc:

sudo vi /etc/tor/torrc

In this file, look for:

SocksPort 9050

and add just below this, so it looks like:

SocksPort 9050
SocksPort 9060
SocksPort 9070
SocksPort 9080

note: you can choose whatever ports you wish here, as long as you leave the very next port number free, because it will be needed as the matching ControlPort in the next part. So if you pick 9052, you can’t also pick 9053 for another SocksPort, since 9053 is needed as 9052’s ControlPort.

Then keep looking for:

ControlPort 9051

and add just below it, so it looks like:

ControlPort 9051
ControlPort 9061
ControlPort 9071
ControlPort 9081
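Taken together, the relevant part of /etc/tor/torrc pairs each SocksPort with the ControlPort one number above it (the ordering of lines in the file doesn’t matter; this is just to show the pairing):

```
SocksPort 9050
ControlPort 9051
SocksPort 9060
ControlPort 9061
SocksPort 9070
ControlPort 9071
SocksPort 9080
ControlPort 9081
```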

Type :wq to exit vi. Then start Tor again:

sudo /etc/init.d/tor start

which should give you something that looks similar to this:

(screenshot: Tor start confirmation)

From this, we can see that we have four Socks and four Control ports opened up for Tor.

Now, to test.

curl --proxy socks5h://localhost:9050

This will return an IP address. Now do this for each of the four opened ports (9050, 9060, 9070, 9080) and you should see that each one returns a different IP address. Then run the first one again; it will likely be the same as it was the first time.
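Those four checks can be scripted in one go. A sketch, assuming Tor is running locally; IPECHO is a placeholder for a plain-text “what is my IP” endpoint of your choice (the original command’s URL isn’t shown above):

```shell
# Query an IP-echo service through each Tor SOCKS port in turn.
# IPECHO is a placeholder variable -- substitute any plain-text
# "what is my IP" endpoint before running this.
for port in 9050 9060 9070 9080; do
    ip=$(curl -s --max-time 15 --proxy "socks5h://localhost:$port" "$IPECHO" || echo unreachable)
    echo "port $port -> $ip"
done
```

Each line of output should show a different exit IP if the instances are working.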

Go ahead, play with this!



Installing Tor on Linux/UNIX/CentOS/Fedora/Amazon

I had a heck of a time trying to figure out how to install Tor onto my Linux box, and finally did figure it out, so I thought I’d give a little tutorial on it.

Basically, the issue I was having was that:

sudo yum install tor

was returning the error that no package named tor existed.

Sure, you might be able to build from the source files, but in my experience the build tools weren’t installed, and trying to install them turned up more dependencies that could not be installed or found. This is the sound of spinning tires.

The first three items on the Expert Guide at Tor provide details on how to install Tor. They might change, and in the best interest of security, I’m going to say go there to find out where to get the install files.

If you’re using Fedora or CentOS, see this guide on Finding a List of Your Repos on your Operating System (OS), which will most likely be some sort of UNIX Linux distro since Windows machines don’t have repos!

The short solution is:

  • yum repolist all
    – shows you a list of all repos available on your computer
  • sudo vi /etc/yum.repos.d/epel.repo
    – opens the EPEL repo for editing, as it’s likely you have something in here that’s not enabled yet that will work
  • change the main repo to enabled=1
  • sudo yum install tor

To check whether Tor is installed:

sudo /etc/init.d/tor status

Which should show you something like:

tor is running (pid: 2944)

This means it’s running!

Head here to check out how to run multiple SOCKS for your Tor.



Finding More Repos (Repositories) to Search for Packages

On Linux, there are very handy tools to install packages for the user. Mainly, there are:

  • apt-get
  • yum

(note: I’m going to go through this guide assuming you’re using yum on a Fedora/CentOS/Linux Operating System, but similar solutions exist for each.)

These tools automatically search repos – repositories of packages, or programs, that the user may want to use – then download and install the packages for you. This means that instead of downloading the install package, locating it, double clicking on it (on a Windows Operating System (OS)), and then following the install instructions, you simply run a command such as apt-get install python, and it will download and install Python for you.

The Problem

However, some distributions of Linux don’t come with the most extensive repos enabled. My recent trouble was with Amazon Linux, but I’m sure you’ll have your own distro.

An indication that you need to add more repos to your OS would be when you go to install a package, following along the instructions from the package website, and you get an error saying no package exists.

BUT THE WEBSITE SAYS IT’S THIS EASY. I know. It’s frustrating.

The Solution

However, the solution is fairly straightforward. You could search around for all sorts of crazy repos out there, which can be a security hazard, since you might end up downloading pretty sketchy versions of software from an outdated repo… but I suggest adding the big Extra Packages for Enterprise Linux (EPEL) repository. Sounds scary, and it should: there are a lot of packages in there.

It is advised to take a look at your current repo list using:

yum repolist all

For me, this list was very short (I had two Amazon repos listed).

You can then add repos a few ways. But first, it might be smart to check whether there are repos that already exist and are just disabled, by opening:

sudo vi /etc/yum.repos.d/epel.repo

tip: to see a list of repos available to Yum, just search the dir: sudo ls -la /etc/yum.repos.d/

If this file exists, you’ll see a few additional repos that you never knew you had! Look for a line that says enabled=0, and switch it to enabled=1. I would suggest picking the one that doesn’t have debug or source in its name; those repos carry debugging-symbol packages and source packages rather than the regular binary packages you want to install.

Then if you try and install your package again and it still can’t find it to install, you might want to manually add the repo to your repo list using:

sudo yum-config-manager --add-repo

Tip: repo files end with the *.repo extension; that’s how you’ll know you’re looking at a repo file.



Fighting a DDoS Attack – Limiting Requests in NGINX

DDoS = distributed denial of service attack.

server: NGINX

The Problem

A DDoS attack is a pain in the arse. It’s basically some bot out there, or a network of bots (that’s the distributed part), sending requests to your server in an attempt to overload it and make it really, really slow – possibly to the point of causing it to crash.

The Solution

Luckily, this isn’t the first time people have heard of this, nor will it be the last, so there are security measures in play.


Most notably, we have a firewall. The main premise of a firewall is that it blocks all traffic except the stuff you want through. Ubuntu has a built-in firewall called Uncomplicated Firewall (ufw) that should already be ready for you to enable. The manual for ufw is here. The ufw Community Help Wiki is here.

The community help wiki is extremely helpful at starting this for you, but basically, you’ll want to:

ufw allow ssh <- allow ssh connections to your server
ufw allow http <- allow http connections to your server
ufw enable <- enable ufw itself now that you’ve allowed the connections you’ll still need to access it
ufw status verbose <- have a look at what settings you’ve set

This is just plain smart computing, especially with a server: it ensures that all your ports are blocked so that nobody you didn’t really want in can get in.

However, this does nothing for limiting the amount a single IP can ping your server, so we still haven’t stopped the DDoS attack.

NGINX Limit Request Module

NGINX has a built-in module called limit_req – Limit Requests – that does exactly what one needs to limit the effectiveness of a DoS attack. If a DoS attack is limited to so many requests per second on your server and is then bottlenecked for a certain time, it becomes a very inefficient attack. When this happens, a log entry is created for the event, which we’ll use later with Fail2ban.

The NGINX manual for limit_req is here, and there is a great article on EasyEngine on configuring the limit_req module, but you’re going to be opening your NGINX config file:

sudo vi /etc/nginx/nginx.conf

and adding the following inside the http{ … } block:

limit_req_zone $binary_remote_addr zone=one:10m rate=1r/s;

What you’re doing here is setting up a limit_req_zone configuration that you’ll refer to later by its zone name: zone=one. You can change it to zone=superstrictrule, or anything of your choice. $binary_remote_addr is the client’s IP address in binary form, which keeps the zone compact. 10m is the size of the shared memory zone that stores the per-IP state. rate=1r/s means this zone’s limit is 1 request per second. You’ll want to configure this zone as you see fit, particularly the rate; you can use r/s for requests per second or r/m for requests per minute, and NGINX makes clear that if you’d like a rate of less than 1r/s you must express it per minute instead: 30r/m, which is equivalent to 0.5r/s.

Then, inside the unique server {…} block (or a virtual server if you’ve deployed them):

limit_req zone=one burst=5 nodelay;

This sets the actual limit_req rule. It doesn’t necessarily have to be inside a server { } block; it can go inside a location { } block if you’re really getting specific. zone=one identifies which specific limit_req_zone this limit_req is for. burst=5 allows up to 5 requests beyond the defined rate to accumulate before NGINX starts rejecting them, and nodelay means those burst requests are served immediately rather than being spaced out to match the rate; once the burst is exhausted, excess requests get an error at once.
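Putting both directives together, the relevant skeleton of nginx.conf looks something like this (the listen port and bare location are illustrative):

```nginx
http {
    limit_req_zone $binary_remote_addr zone=one:10m rate=1r/s;

    server {
        listen 80;

        location / {
            # at most 1 request/second per IP, with a burst
            # allowance of 5; excess requests are rejected at once
            limit_req zone=one burst=5 nodelay;
        }
    }
}
```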



I’m not quite sure of the source of the name, but Fail2ban is a pretty brilliant tool. It scans your logs and creates specific blocks based on suspicious activity. So, if the same IP is hammering your server, Fail2ban will see it and block it. Another cool thing is that it works on any logs, be they SSH, MySQL, HTTP, etc. Within the jail.local file that you create upon installation, you can activate whichever monitors you wish.

The Ubuntu Community Wiki help file on Fail2ban steps you through installation and how to enable each module, and there is also this tutorial at EasyEngine on how to configure Fail2ban for NGINX that’s really helpful, but the steps are:

sudo apt-get install fail2ban <- install Fail2ban
sudo cp /etc/fail2ban/jail.conf /etc/fail2ban/jail.local <- create a local configuration file
sudo vi /etc/fail2ban/jail.local <- edit the local configuration file and add the section below:


enabled = true
filter = nginx-req-limit
action = iptables-multiport[name=ReqLimit, port="http,https", protocol=tcp]
logpath = /var/log/nginx/*error.log
findtime = 600
bantime = 7200
maxretry = 10

sudo service fail2ban restart <- restart fail2ban with the new configuration settings
(you might have to run 'sudo /etc/init.d/fail2ban restart' instead of the above)
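One gotcha: the jail above names filter = nginx-req-limit, so a matching filter file needs to exist at /etc/fail2ban/filter.d/nginx-req-limit.conf. The regex below is the commonly used one for NGINX’s “limiting requests” error-log lines (adapted from the EasyEngine tutorial mentioned above); treat it as a sketch and check it against your own error log:

```ini
[Definition]
failregex = limiting requests, excess:.* by zone.*client: <HOST>
ignoreregex =
```

The <HOST> tag is what Fail2ban substitutes to extract the offending IP from each matched line.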

note: logpath above should point to wherever your NGINX configuration writes its errors, according to the error_log directive. This may be set inside the nginx.conf file itself, or, if you have virtual servers and have declared it inside the virtual server, it will be in there: /etc/nginx/sites-available/mywebsite. The * in front of error.log means that any log file ending with error.log will be included in the rule. So mywebsite.error.log will be included, and so will myotherwebsite.error.log. It is handy to have each website (virtual server) create its own error log so that you can monitor each site much more easily.

Once you have Fail2ban running, you can monitor its log file, which is actually remarkably clear about what it’s doing (compared to some very cryptic log files):

sudo vi /var/log/fail2ban.log

If it’s not there, you might have to go looking for it, using the command:

sudo ls -la /var/log/

To browse the folders to see what’s in them (hint: you’re looking for a file named fail2ban).

After a while, you may want to see a report from Fail2ban to see if it’s actually working. The command:

fail2ban-client status nginx-req-limit

will give you an output that will tell you how it’s working.


Well, I hope this helps some of you start down the path towards higher security on your webserver. Without these measures it can feel like you’re just hoping everything goes right, and when something goes wrong you’re caught with your pants down. Truly, this solution feels a little bit like magic.

I’d love it if some of you who have done this for a while could share your configurations below so I could get a look at optimal settings, or if you’ve written your own similar blog, please do share a link. Better security starts with becoming more widely versed in secure practices.
