What Is Hacktivism?

CodeMilitant Solutions for Linux Nginx Python PHP Bash MariaDB

For anyone doing server DevOps, this is a serious problem that can consume a tremendous amount of time. This article, like all the other articles on this site, is here as a public reference to help solve problems at the server level and defeat these issues.

Wikipedia defines “Hacktivism” as:

“Hacktivism” is a controversial term with several meanings. The word was coined to characterize electronic direct action as working toward social change by combining programming skills with critical thinking.


While this ambiguous definition refers to “programming skills with critical thinking”, the reality is that hacktivism has far more to do with attempting to cripple or crash a server for political or social gain than it does with “programming skills with critical thinking”. The skill required to crash a website is ridiculously minimal. No “critical thinking” at all.

The reality is, it’s easy to crash over 99% of all websites, regardless of who hosts the site. It doesn’t matter if the site uses Cloudflare, CloudFront, AWS, Linode, Rackspace, or a private server. If someone doesn’t like a site, they can shut it down in minutes.

The purpose of this article is to show how to mitigate these hacktivist attacks with solid server configurations. The process required to do this is extensive, and it’s easy to overlook a step. This article will walk through the requirements to secure any web server.

First, let’s see how easy it is to overwhelm a server. To do so requires a small, yes small, Bash script executed from any Linux terminal (SSH) command line. Once this small script is activated, it focuses on overwhelming the programs that run a website: Apache, Nginx, PHP, MySQL, and even programs meant to enhance a site’s performance, such as Memcached. Although this could be built with any version of Bash, the script below is for Bash version 4.2+.
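Since `mapfile` only exists in Bash 4.0 and later, a quick guard at the top of any such script confirms the running shell is new enough. A minimal sketch:

```shell
#!/usr/bin/env bash
# Guard: 'mapfile' requires Bash 4+; bail out early on older shells
if ((BASH_VERSINFO[0] < 4)); then
    echo "Bash 4+ required (found $BASH_VERSION)" >&2
    exit 1
fi
echo "Bash version OK"
```

On macOS in particular, the stock `/bin/bash` is 3.2, so this guard saves confusing failures.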

Since this is a test script for determining the strength of your own web server, the point of the script is to try to overwhelm the programs that run your site. Pay close attention here: this will overwhelm your own website. The best way to do this is to use actual URLs that return a good HTTP status code (200). The reason for accessing the site’s true URLs is to confirm what the site can actually handle. Frankly, it’s easy to stop an attacker hammering away on made-up 404 URLs; these amateur attackers are easy to stop on the server side, so please read on to see how this is done.

The Bash script:

### web_hacktivism.sh
#!/usr/bin/env bash

set -x ## so we can watch the script as it executes

### Let's get the sitemap and find as many legitimate URLs as possible
mapfile -t sitemap_urls < <( curl -Lks http://somedomain.xyz/sitemap.xml | grep -oiE "http[^<]+somedomain[^<]+" )
# declare -p sitemap_urls
### Uncomment the line above to print the array built with 'mapfile'

### If the sitemap contains category links for post/page links, these can be followed using this same script, but for simplicity, let's assume the above 'mapfile' command finds all the site links.
### The script should be repetitive, but not perpetual. It should also focus on randomly accessing URLs in bulk. Sounds difficult, but it's not.

### 'revs' equals the number of loops you want to run on the website. Change this to any number.
revs=10

for ((i=0;i<revs;i++)); do  ## loop through the sitemap URLs using this for loop
    xargs -P 16 -n 4 curl -Lfks -o /dev/null -o /dev/null -o /dev/null -o /dev/null <<<"$( printf '%s\n' "${sitemap_urls[@]}" )"
    sleep $((RANDOM % 49 + 3))
done

set +x ## turn off script output
exit 0

That’s it for the script! Ignoring the inline comments, this script is fewer than ten lines! It doesn’t get any easier than this. You can probably see why so many website owners have mountains of trouble. Bash scripts like this can be found all over the internet, often passed off as “web scrapers”, so just be aware this can happen to anyone.
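The `mapfile` extraction step can be verified offline before ever pointing it at a live site. Here is a sketch against a local throwaway sitemap file; the file path and URLs are made up for illustration:

```shell
#!/usr/bin/env bash
# Build a tiny sitemap locally instead of fetching one with curl
cat > /tmp/sitemap_test.xml <<'EOF'
<urlset>
  <url><loc>http://somedomain.xyz/page-one</loc></url>
  <url><loc>http://somedomain.xyz/page-two</loc></url>
  <url><loc>http://somedomain.xyz/page-three</loc></url>
</urlset>
EOF

# Same extraction pattern as the script, minus the curl fetch
mapfile -t sitemap_urls < <(grep -oiE "http[^<]+somedomain[^<]+" /tmp/sitemap_test.xml)

echo "Found ${#sitemap_urls[@]} URLs"   # Found 3 URLs
```

Each `<loc>` entry yields one array element, which is exactly what the loop in the script iterates over.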

Continue reading to the end to see the results of all these configurations.

Going over the commands in the script, we have the following:

  • mapfile – build an array of sitemap URLs
  • curl – the command to request any website from the command line
  • xargs – execute curl in up to 16 parallel processes, each handed 4 URLs at a time
  • sleep – pause the script for a few random seconds
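The `xargs -P 16 -n 4` fan-out can be observed safely with `echo` standing in for `curl`. A harmless offline sketch:

```shell
#!/usr/bin/env bash
# Feed eight fake URLs through xargs: up to 2 parallel workers,
# each invocation receiving 4 arguments at a time
printf '/page-%s\n' 1 2 3 4 5 6 7 8 | xargs -P 2 -n 4 echo GET
```

Eight inputs split into groups of four produce exactly two `echo` invocations, i.e. two output lines (their order may vary because the workers run in parallel). Swap `echo GET` back to `curl` and the same mechanics hammer a real server.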

See how ridiculously easy this is? Just four commands with a simple loop and you can cripple any website.

Now do you see why so many website owners have so many problems?

This is also one of the top reasons server administrators are fired from their jobs. If a server admin can’t put a stop to these attacks, they’ll find themselves on the short list for a pink slip.

Before You Try It

Something to remember if you decide to try this:

  1. You will crash your site
  2. Your IP address will be logged
  3. You could get fired or worse


Effective Server Configurations

Thank you for reading this far, because here’s where it gets difficult. Any competent server DevOps engineer should be able to configure the web server to effectively combat these attacks, so let’s get into some of the processes required to make your web server nearly bulletproof.

“Nearly bulletproof”? Why can’t the server be configured to be completely bulletproof? The short answer: because it’s a computer.

The following programs, which together form a typical web server configuration known as a LEMP stack, will be used to build an effective web server.

  • Linux (L)
  • Nginx (E)
  • MySQL (M)
  • PHP (P)

Why the E? To clear up a common point of confusion: Nginx is pronounced “Engine-X”, hence the E.

The web server program, Nginx, is the only serious choice for large companies running high-traffic sites, largely because of how easily Nginx can be tuned for maximum performance.

A basic Nginx configuration that will dramatically help minimize web attacks hinges primarily on three elements:

  • Timeout
  • Cache
  • Limit IP and Domain

Here’s a good start to any Nginx configuration for a WordPress website:

### /etc/nginx/nginx.conf

load_module modules/ngx_http_geoip_module.so;
load_module modules/ngx_stream_geoip_module.so;

user  nginx;
worker_processes  auto;

error_log  /var/log/nginx/error.log warn;
pid        /var/run/nginx.pid;
worker_rlimit_nofile 65536;

events {
    worker_connections  131072;
}

http {
    include       /etc/nginx/mime.types;
    default_type  application/octet-stream;

    log_format  main  '$remote_addr - $remote_user [$time_local] "$request" '
                      '$status $body_bytes_sent "$http_referer" '
                      '"$http_user_agent" "$http_x_forwarded_for"';

    access_log  /var/log/nginx/access.log  main;

    proxy_buffering on;
    proxy_buffer_size		256k;
    proxy_buffers		4096 16k;
    proxy_busy_buffers_size	256k;
    proxy_connect_timeout       30;
    proxy_send_timeout          30;
    proxy_read_timeout          30;
    send_timeout                30;

    #Nginx security
    server_tokens off;

    sendfile		on;
    sendfile_max_chunk  1m;
    tcp_nodelay		on;
    tcp_nopush		on;
    keepalive_timeout	75s 75s;

    client_max_body_size 64m;
    client_body_buffer_size 64k;

    http2_push_preload on;

    limit_req_zone $binary_remote_addr zone=LIMITIP:10m rate=1000r/s;
    limit_req_zone $server_name zone=LIMITNAME:10m rate=200r/s;

    include /etc/nginx/conf.d/*.conf;
}

Nginx PHP-FPM configuration:

### /etc/nginx/conf.d/php-fpm.conf

# PHP-FPM FastCGI server
# network or unix domain socket configuration

upstream php-fpm {
	server unix:/run/php-fpm/www.sock;
}

Nginx domain configuration:

### /etc/nginx/conf.d/example.conf

server {
    listen 80;
    listen [::]:80;
    server_name example.com cdn.example.com www.example.com;

    access_log /var/log/nginx/example.access.log main;
    error_log /var/log/nginx/example.error.log error;

    return 301 https://example.com$request_uri;
}

server {
    listen 443 ssl http2;
    server_name example.com cdn.example.com www.example.com;
    root /var/www/html/example;
    index index.php index.htm index.html;

    limit_req zone=LIMITIP burst=500;
    limit_req zone=LIMITNAME burst=100;

    location / {
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header  X-Forwarded-For $remote_addr;
        proxy_set_header  X-Forwarded-Host $host;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_cache_key "ROOT https://$host$request_uri";
        proxy_cache $example_cache;
        proxy_cache_methods GET;
        proxy_cache_bypass $http_authorization $arg_nocache $http_cookie;
        proxy_no_cache $http_authorization $arg_nocache $http_cookie;
        proxy_cache_valid 200 30m;
        proxy_cache_valid 500 502 503 504 1m;
        proxy_cache_use_stale error timeout updating http_500 http_502 http_503 http_504;
        proxy_cache_background_update on;
        proxy_cache_lock on;
        proxy_cache_revalidate on;
    }

    real_ip_header X-Real-IP;
    real_ip_recursive on;

    access_log  /var/log/nginx/example.access.log main;
    error_log  /var/log/nginx/example.error.log error;

    ssl_certificate /etc/letsencrypt/live/example.com/fullchain.pem; # managed by Certbot
    ssl_certificate_key /etc/letsencrypt/live/example.com/privkey.pem; # managed by Certbot
    include /etc/letsencrypt/options-ssl-nginx.conf; # managed by Certbot
    ssl_dhparam /etc/letsencrypt/ssl-dhparams.pem; # managed by Certbot
    ssl_trusted_certificate /etc/letsencrypt/live/example.com/chain.pem; # managed by Certbot
}

Nginx WordPress configurations:

    ### /etc/nginx/global/wp_example.conf

    ### Nginx config for WordPress
    location / {
        try_files $uri $uri/ /index.php$is_args$args;
    }
    location /uploads {
        valid_referers none blocked server_names ~\.google\. ~\.bing\. ~\.yahoo\. ~\.facebook\. ~\.example\.;
        if ($invalid_referer) {
            return 403;
        }
    }

    location ~* ^.+\.(png|jpe?g|webp)$ {
        try_files $uri $uri$webp_suffix =404;
        expires 1y; access_log off; log_not_found off;
    }

    location ~ \.php$ {
        include fastcgi_params;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        fastcgi_intercept_errors on;
        fastcgi_pass php-fpm;
        fastcgi_index index.php;
        fastcgi_buffering on;
        fastcgi_buffers 128 32k;
        fastcgi_buffer_size 256k;
        fastcgi_connect_timeout 60;
        fastcgi_send_timeout 60;
        fastcgi_read_timeout 60;
    }

    location ~* \.(ogg|ogv|svg|svgz|eot|otf|woff2?|flv|mp4|ttf|css|rss|atom|js|gif|ico|zip|tgz|gz|rar|bz2|doc|xls|exe|ppt|tar|midi?|wav|bmp|rtf)$ {
        mp4_buffer_size		2m;
        mp4_max_buffer_size	20m;
        expires 1y; access_log off; log_not_found off;
    }

    # Necessary for Let's Encrypt domain name ownership validation
    location ~ /\.well-known/ {
        allow all;
    }

    location ~ /\. {
        deny all; access_log off; log_not_found off;
    }

    # Deny public access to wp-config.php and debug.log
    location ~* (wp-config\.php|debug\.log) {
        deny all;
    }

    # Deny access to uploads which aren’t images, videos, music, etc.
    location ~* ^/wp-content/uploads/.*\.(html|htm|shtml|php|js|swf|zip)$ {
        deny all;
    }
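The deny rules above are just regular expressions, so they can be sanity-checked from the shell before reloading Nginx. `grep`'s extended regex dialect is not identical to Nginx's PCRE, but it is close enough for patterns like these; the sample paths below are hypothetical:

```shell
#!/usr/bin/env bash
# The uploads deny pattern, with the dot before the extension escaped
pattern='^/wp-content/uploads/.*\.(html|htm|shtml|php|js|swf|zip)$'

# A PHP file dropped into uploads should match (and thus be denied)
echo "/wp-content/uploads/backdoor.php" | grep -qE "$pattern" && echo "blocked"

# A normal image upload should NOT match (and thus be served)
echo "/wp-content/uploads/photo.jpg" | grep -qE "$pattern" || echo "allowed"
```

Note that without the backslash before the final dot, `.(html|...)` would also match paths like `/uploads/xphp` variants unintentionally, which is why the escaped form is used.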

Nginx proxy cache paths, keys and zones:

    ### /etc/nginx/global/example_cache.conf

    proxy_cache_path /var/cache/nginx/example1 levels=1:2 keys_zone=CACHE1:20m inactive=10m max_size=22m use_temp_path=off;
    proxy_cache_path /var/cache/nginx/example2 levels=1:2 keys_zone=CACHE2:20m inactive=10m max_size=22m use_temp_path=off;
    proxy_cache_path /var/cache/nginx/example3 levels=1:2 keys_zone=CACHE3:20m inactive=10m max_size=22m use_temp_path=off;
    proxy_cache_path /var/cache/nginx/example4 levels=1:2 keys_zone=CACHE4:20m inactive=10m max_size=22m use_temp_path=off;

    split_clients $request_uri $example_cache {
        25% CACHE1;
        25% CACHE2;
        25% CACHE3;
        25% CACHE4;

Now that Nginx is set up, it’s important to understand those three vital elements mentioned above.

Nginx Timeout

Here are the elements dictating the timeout configurations:

    ### Proxy timeout
    proxy_connect_timeout       30;
    proxy_send_timeout          30;
    proxy_read_timeout          30;
    send_timeout                30;

    ### FastCGI timeout
    fastcgi_connect_timeout 60;
    fastcgi_send_timeout 60;
    fastcgi_read_timeout 60;

In this example domain, the timeout is set to just 30 seconds for the proxy connections, and