What Is Hacktivism?



For a server DevOps engineer, hacktivism is a serious problem that can eat up a tremendous amount of time. This article, like every other article on this site, is a public reference to help solve these problems at the server level and defeat these attacks.

Wikipedia defines “Hacktivism” as:

“Hacktivism” is a controversial term with several meanings. The word was coined to characterize electronic direct action as working toward social change by combining programming skills with critical thinking.

Wikipedia

While this ambiguous definition refers to “programming skills with critical thinking”, the reality is that hacktivism has far more to do with attempting to cripple or crash a server for political or social gain than with any “programming skills with critical thinking”. The skills required to crash a website are ridiculously basic. No “critical thinking” at all.

In truth, it’s easy to crash over 99% of all websites, regardless of who hosts them. It doesn’t matter whether the site is behind Cloudflare, CloudFront, AWS, Linode, Rackspace or a private server. If someone doesn’t like a site, they can shut it down in minutes.

The purpose of this article is to show how to minimize these hacktivists with solid server configurations. The process is extensive and it’s easy to overlook any number of steps, so this article will help focus the requirements for securing any web server.

First, let’s see how easy it is to overwhelm a server. All it takes is a small, yes small, Bash script executed from any Linux terminal (SSH) command line. Once activated, this small script focuses on overwhelming the programs that run a website: programs such as Apache, Nginx, PHP, MySQL, and even programs meant to enhance a site’s performance, such as Memcached. Although something similar could be built for almost any shell, the script below is written for Bash version 4.2+.

Since this is a test script for determining the strength of your own web server, the point is to try to overwhelm the programs that run your own site. Pay close attention here: this will overwhelm your own website. The best way to do this is to use real URLs that return a good HTTP status code (200). Hitting the site’s true URLs confirms what the site can actually handle. Frankly, it’s easy to stop an attacker who’s hammering away at made-up 404 URLs; these amateur attackers are easy to stop on the server side, so please read on to see how that’s done.

The Bash script:


### web_hacktivism.sh

#!/usr/bin/env bash
set -x ## so we can watch the script as it executes

### Let's get the sitemap and find as many legitimate URLs as possible
mapfile -t sitemap_urls < <( curl -Lks http://somedomain.xyz/sitemap.xml | grep -oiE "http[^<]+somedomain[^<]+" )
### Uncomment the next line to print the array built with 'mapfile'
# declare -p sitemap_urls

### If the sitemap contains category links for post/page links, these can be followed using this same script, but for simplicity, let's assume the above 'mapfile' command finds all the site links.
### The script should be repetitive, but not perpetual. It should also focus on randomly accessing URLs in bulk. Sounds difficult, but it's not.

revs=10
### The revs equals the number of loops you want to run on the website. Change this to any number.

for ((i=0;i<$revs;i++)); do  ## loop through the sitemap URLs using this for loop
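    ## -P 16 runs up to 16 curl processes in parallel and -n 4 hands each process 4 URLs at a time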
    xargs -P 16 -n 4 curl -Lfks -o /dev/null -o /dev/null -o /dev/null -o /dev/null <<<"$( printf '%s\n' "${sitemap_urls[@]}" )"
    sleep $((RANDOM % 49 + 3))
done

echo "SUCCESS! HAMMERED THE SITE FOR $revs LOOPS!"

set +x ## turn off script output
exit 0;

That’s it for the script! Ignoring the inline comments, the working script is barely a dozen lines. It doesn’t get any easier than this. You can probably see why so many website owners run into mountains of trouble. Bash scripts like this can be found all over the internet, often circulating as “web scrapers”, so just be aware this can happen to anyone.

Continue reading to the end to see the results of all these configurations.

Going over the commands in the script we have the following:

mapfile – build an array of sitemap URLs
curl – the command to request any website from the command line
xargs – executes curl in up to 16 parallel processes, each handed 4 URLs at a time
sleep – pause the script for a random number of seconds between loops

See how ridiculously easy this is? Just four commands with a simple loop and you can cripple any website.

Now do you see why so many website owners have so many problems?

This is also one of the top reasons server administrators are fired from their jobs. If a server admin can’t put a stop to these attacks, they’ll find themselves on the short list for a pink slip.

Before You Try It

Something to remember if you decide to try this:

  1. You will crash your site
  2. Your IP address will be logged
  3. You could get fired or worse

YOU’VE BEEN WARNED!!!

Effective Server Configurations

Thank you for reading this far, because here’s where it gets difficult. Any competent server DevOps should be able to configure the web server to effectively combat these attacks, so let’s get into some of the processes required to make your web server nearly bulletproof.

“Nearly bulletproof”? Why can’t the server be configured to be completely bulletproof? The short answer: because it’s a computer.

Using a typical web server configuration, also called a LEMP stack, the following programs will be used to build an effective web server.

  • Linux (L)
  • Nginx (E)
  • MySQL (M)
  • PHP (P)

Why the E? Just to clear up any confusion, it’s pronounced “EngineX” (Nginx), hence the E.

The web server program, Nginx, is the only serious choice for large companies seeking very high levels of online performance. The reason comes down to how easily Nginx can be tuned for maximum performance.

A basic Nginx configuration that will dramatically help minimize web attacks hinges primarily on three elements:

  • Timeout
  • Cache
  • Limit IP and Domain

Here’s a good start to any Nginx configuration for a WordPress website:


### /etc/nginx/nginx.conf

load_module modules/ngx_http_geoip_module.so;
load_module modules/ngx_stream_geoip_module.so;

user  nginx;
worker_processes  auto;

error_log  /var/log/nginx/error.log warn;
pid        /var/run/nginx.pid;
worker_rlimit_nofile 65536;

events {
    worker_connections  131072;
}

http {
    include       /etc/nginx/mime.types;
    default_type  application/octet-stream;

    log_format  main  '$remote_addr - $remote_user [$time_local] "$request" '
                      '$status $body_bytes_sent "$http_referer" '
                      '"$http_user_agent" "$http_x_forwarded_for"';

    access_log  /var/log/nginx/access.log  main;

    proxy_buffering on;
    proxy_buffer_size		256k;
    proxy_buffers		4096 16k;
    proxy_busy_buffers_size	256k;
    proxy_connect_timeout       30;
    proxy_send_timeout          30;
    proxy_read_timeout          30;
    send_timeout                30;

    #Nginx security
    server_tokens off;

    sendfile		on;
    sendfile_max_chunk  1m;
    tcp_nodelay		on;
    tcp_nopush		on;
    keepalive_timeout	75s 75s;

    client_max_body_size 64m;
    client_body_buffer_size 64k;

    http2_push_preload on;

    limit_req_zone $binary_remote_addr zone=LIMITIP:10m rate=1000r/s;
    limit_req_zone $server_name zone=LIMITNAME:10m rate=200r/s;
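    ## each 1m of zone memory holds roughly 16,000 states (per the Nginx docs), so these 10m zones
    ## can track on the order of 160,000 keys before the oldest entries are evicted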

    include /etc/nginx/conf.d/*.conf;
}

Nginx PHP-FPM configuration:


### /etc/nginx/conf.d/php-fpm.conf

# PHP-FPM FastCGI server
# network or unix domain socket configuration

upstream php-fpm {
	#server 127.0.0.1:9000;
	server unix:/run/php-fpm/www.sock;
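	# the unix socket avoids local TCP overhead; use the commented 127.0.0.1:9000 line instead if PHP-FPM runs on a separate host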
}

Nginx domain configuration:


### /etc/nginx/conf.d/example.conf

## DEFAULT EXAMPLE HTTP CONFIG
server {
    listen 80;
    listen [::]:80;
    server_name example.com cdn.example.com www.example.com;

    access_log /var/log/nginx/example.access.log main;
    error_log /var/log/nginx/example.error.log error;

    return 301 https://example.com$request_uri;
}

## EXAMPLE SSL SERVER
server {
    listen 443 ssl http2;
    server_name example.com cdn.example.com www.example.com;
    root /var/www/html/example;
    index index.php index.htm index.html;

    limit_req zone=LIMITIP burst=500;
    limit_req zone=LIMITNAME burst=100;

    location / {
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header  X-Forwarded-For $remote_addr;
        proxy_set_header  X-Forwarded-Host $host;
        proxy_set_header X-Forwarded-Proto $scheme;
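        ## NOTE: the proxy_pass directive that forwards these requests to the origin/backend is not shown
        ## in the original config; it is assumed to point at whatever upstream actually serves the site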
        proxy_cache_key "ROOT https://$host$request_uri";
        proxy_cache $example_cache;
        proxy_cache_methods GET;
        proxy_cache_bypass $http_authorization $arg_nocache $http_cookie;
        proxy_no_cache $http_authorization $arg_nocache $http_cookie;
        proxy_cache_valid 200 30m;
        proxy_cache_valid 500 502 503 504 1m;
        proxy_cache_use_stale error timeout updating http_500 http_502 http_503 http_504;
        proxy_cache_background_update on;
        proxy_cache_lock on;
        proxy_cache_revalidate on;
    }

    set_real_ip_from 172.31.25.0/24;
    real_ip_header X-Real-IP;
    real_ip_recursive on;

    access_log  /var/log/nginx/example.access.log main;
    error_log  /var/log/nginx/example.error.log error;

    ssl_certificate /etc/letsencrypt/live/example.com/fullchain.pem; # managed by Certbot
    ssl_certificate_key /etc/letsencrypt/live/example.com/privkey.pem; # managed by Certbot
    include /etc/letsencrypt/options-ssl-nginx.conf; # managed by Certbot
    ssl_dhparam /etc/letsencrypt/ssl-dhparams.pem; # managed by Certbot
    ssl_trusted_certificate /etc/letsencrypt/live/example.com/chain.pem; # managed by Certbot
}

Nginx WordPress configurations:


    ### /etc/nginx/global/wp_example.conf

    ### Nginx config for WordPress
    location / {
        try_files $uri $uri/ /index.php$is_args$args;
    }
    location /uploads {
        valid_referers none blocked server_names ~\.google\. ~\.bing\. ~\.yahoo\. ~\.facebook\. ~\.example\.;
        if ($invalid_referer) {
            return 403;
        }
    }

    ### WEBP REWRITE RULES FOR WEBP MAPPING
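    ### NOTE: the $webp_suffix variable used below is assumed to be defined elsewhere, typically via a
    ### map on $http_accept that sets it to ".webp" when the browser supports WebP; that map is not shown here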
    location ~* ^.+\.(png|jpe?g|webp)$ {
        try_files $uri $uri$webp_suffix =404;
        expires 1y; access_log off; log_not_found off;
    }

    location ~ \.php$ {
        include fastcgi_params;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        fastcgi_intercept_errors on;
        fastcgi_pass php-fpm;
        fastcgi_index index.php;
        fastcgi_buffering on;
        fastcgi_buffers 128 32k;
        fastcgi_buffer_size 256k;
        fastcgi_connect_timeout 60;
        fastcgi_send_timeout 60;
        fastcgi_read_timeout 60;
    }

    location ~* \.(ogg|ogv|svg|svgz|eot|otf|woff2?|flv|mp4|ttf|css|rss|atom|js|gif|ico|zip|tgz|gz|rar|bz2|doc|xls|exe|ppt|tar|midi?|wav|bmp|rtf)$ {
        flv;
        mp4;
        mp4_buffer_size		2m;
        mp4_max_buffer_size	20m;
        expires 1y; access_log off; log_not_found off;
    }

    # Necessary for Let's Encrypt domain name ownership validation
    location ~ /.well-known/ {
        allow all;
    }

    location ~ /\. {
        deny all; access_log off; log_not_found off;
    }

    # Deny public access to wp-config.php and debug.log
    location ~* (wp-config.php|debug.log) {
        deny all;
    }

    # Deny access to uploads which aren’t images, videos, music, etc.
    location ~* ^/wp-content/uploads/.*.(html|htm|shtml|php|js|swf|zip)$ {
        deny all;
    }

Nginx proxy cache paths, keys and zones:


    ### /etc/nginx/global/example_cache.conf

    proxy_cache_path /var/cache/nginx/example1 levels=1:2 keys_zone=CACHE1:20m inactive=10m max_size=22m use_temp_path=off;
    proxy_cache_path /var/cache/nginx/example2 levels=1:2 keys_zone=CACHE2:20m inactive=10m max_size=22m use_temp_path=off;
    proxy_cache_path /var/cache/nginx/example3 levels=1:2 keys_zone=CACHE3:20m inactive=10m max_size=22m use_temp_path=off;
    proxy_cache_path /var/cache/nginx/example4 levels=1:2 keys_zone=CACHE4:20m inactive=10m max_size=22m use_temp_path=off;

    split_clients $request_uri $example_cache {
        25% CACHE1;
        25% CACHE2;
        25% CACHE3;
        25% CACHE4;
    }

Now that Nginx is set up, it’s important to understand the three vital elements mentioned above.

Nginx Timeout

Here are the elements dictating the timeout configurations:


    ### Proxy timeout
    proxy_connect_timeout       30;
    proxy_send_timeout          30;
    proxy_read_timeout          30;
    send_timeout                30;

    ### FastCGI timeout
    fastcgi_connect_timeout 60;
    fastcgi_send_timeout 60;
    fastcgi_read_timeout 60;

In this example domain, the timeout is set to just 30 seconds for the proxy connections, and FastCGI allows only 60 seconds to process PHP scripts.

Now imagine if these timeout elements were turned up to accommodate large PHP or database requests. The server could be easily overwhelmed and server resources would be quickly exhausted.

PHP, covered later, has its own timeout limits as well, and it’s important to understand how those tie back to Nginx.

Nginx Proxy Cache

The proxy cache is the fastest of all the Nginx caches. The proxy cache stores the fully rendered web page, while the FastCGI cache only stores the cached PHP response. A hit on the proxy cache completely bypasses any PHP or database request, which makes the proxy cache the best choice for the highest performance.

Looking closely at the proxy cache configuration, the cache is divided into 4 equal zones, each handling roughly 25% of the workload. This allows multiple pages to be served from different cache directories simultaneously, achieving maximum performance for each request.


    ### /etc/nginx/global/example_cache.conf

    proxy_cache_path /var/cache/nginx/example1 levels=1:2 keys_zone=CACHE1:20m inactive=10m max_size=22m use_temp_path=off;
    proxy_cache_path /var/cache/nginx/example2 levels=1:2 keys_zone=CACHE2:20m inactive=10m max_size=22m use_temp_path=off;
    proxy_cache_path /var/cache/nginx/example3 levels=1:2 keys_zone=CACHE3:20m inactive=10m max_size=22m use_temp_path=off;
    proxy_cache_path /var/cache/nginx/example4 levels=1:2 keys_zone=CACHE4:20m inactive=10m max_size=22m use_temp_path=off;

    split_clients $request_uri $example_cache {
        25% CACHE1;
        25% CACHE2;
        25% CACHE3;
        25% CACHE4;
    }

Nginx Limits

Even if the website is small, with low traffic volume, the Nginx limit request (limit_req) should always be set.


    ### /etc/nginx/nginx.conf

    limit_req_zone $binary_remote_addr zone=LIMITIP:10m rate=1000r/s;
    limit_req_zone $server_name zone=LIMITNAME:10m rate=200r/s;

    ### /etc/nginx/conf.d/example.conf
    limit_req zone=LIMITIP burst=500;
    limit_req zone=LIMITNAME burst=100;

Notice that LIMITIP is set at least 5 times higher than LIMITNAME. One limits the request rate per client IP address; the other limits the overall request rate for the server name, in this case example.com.

Why is LIMITIP higher? Because a single client IP generates far more requests than you might expect. Every request for a web page carries with it requests for all of the page assets: the CSS, the Javascript, external resources and internal resources. In many cases, especially with dynamic websites like WordPress, the CSS and Javascript requests can run into the hundreds per page, so it’s vital to get this right. Once the rate and burst are exceeded, Nginx rejects the excess requests (with a 503 by default), which is exactly what keeps a flood from ever reaching PHP or the database.

If these limit_req configurations were flipped, the site would crash with just a modest increase in traffic. This must be correct.

It’s vital to understand that any server, especially shared hosting, has limits that can easily be exceeded. Notice the open file limit:


worker_rlimit_nofile 65536;

is limited to just 65536 open files per worker process. Divide that across multiple websites on a shared host and the limit can easily be exceeded. If just one website experiences heavy traffic, it can take down every other site on the server.

This is commonly why many websites seemingly crash for no reason. It may not be your website under attack, but you can easily suffer the consequences of a poorly configured shared host. A virtual private server (VPS) will solve many of these issues, but a dedicated physical host delivers the maximum achievable performance.

MySQL (MariaDB)

The dominant database installation today is MariaDB, a community-developed fork of the most popular database, MySQL.

Just like all the other programs in this article, the default configuration of MySQL can be easily overwhelmed by anyone, at any time. Even a small Raspberry Pi has enough power to render just about any website inoperable.

So how does a database get overwhelmed? The database contains the tables and records that a dynamic website, such as WordPress, Drupal or Joomla, needs to build its content for visitors to view.

The content of this article, the text you’re reading now, is stored in a record within a table inside a database. This article can now be retrieved by WordPress executing a SQL command.

A well-written SQL command will target and find this article with little strain on the server. Unfortunately, that is not the default strategy WordPress, or most other website frameworks, use to extract database records. The usual approach is to scan every record in a table and match the search term against a very large dataset. Knowing this, it’s easy to see how any database, combined with a poorly coded website framework, can be overwhelmed and crashed.

How does the database crash? It happens when the number of requests exceeds the server’s CPU and/or memory capacity. Each database call, also known as an object, consumes both memory and CPU when it executes, and how well that object is formed dictates how much of those resources it uses. Repeat this process thousands of times and MySQL or MariaDB will quickly crash. That crash will not only take out the database and the website, it can cripple the server itself.

This is just one more reason why shared web hosting is a bad idea if you’re going to build an e-commerce, auction or search platform for public consumption. Surprisingly, if you have a large private website, like a business-to-business (B2B) site, you can use shared hosting because the general public will not have access to it.

So what do some of these SQL commands look like?


SELECT * FROM `WPDatabase`.`wp_posts` WHERE (CONVERT(`ID` USING utf8) LIKE '%Nginx%' OR CONVERT(`post_author` USING utf8) LIKE '%Nginx%' OR CONVERT(`post_date` USING utf8) LIKE '%Nginx%' OR CONVERT(`post_date_gmt` USING utf8) LIKE '%Nginx%' OR CONVERT(`post_content` USING utf8) LIKE '%Nginx%' OR CONVERT(`post_title` USING utf8) LIKE '%Nginx%' OR CONVERT(`post_excerpt` USING utf8) LIKE '%Nginx%' OR CONVERT(`post_status` USING utf8) LIKE '%Nginx%' OR CONVERT(`comment_status` USING utf8) LIKE '%Nginx%' OR CONVERT(`ping_status` USING utf8) LIKE '%Nginx%' OR CONVERT(`post_password` USING utf8) LIKE '%Nginx%' OR CONVERT(`post_name` USING utf8) LIKE '%Nginx%' OR CONVERT(`to_ping` USING utf8) LIKE '%Nginx%' OR CONVERT(`pinged` USING utf8) LIKE '%Nginx%' OR CONVERT(`post_modified` USING utf8) LIKE '%Nginx%' OR CONVERT(`post_modified_gmt` USING utf8) LIKE '%Nginx%' OR CONVERT(`post_content_filtered` USING utf8) LIKE '%Nginx%' OR CONVERT(`post_parent` USING utf8) LIKE '%Nginx%' OR CONVERT(`guid` USING utf8) LIKE '%Nginx%' OR CONVERT(`menu_order` USING utf8) LIKE '%Nginx%' OR CONVERT(`post_type` USING utf8) LIKE '%Nginx%' OR CONVERT(`post_mime_type` USING utf8) LIKE '%Nginx%' OR CONVERT(`comment_count` USING utf8) LIKE '%Nginx%';

This is a search for any title, post, product or keyword that contains the term “Nginx”. As you can see, this is just one example of the database object that’s going to be requested. This is just a single term, but a short string of terms would make this request far more complicated for the database to process.

What this doesn’t show are the results. The results include every record that contains the term “Nginx” in any column of the wp_posts table, not just the column that holds the article content.

How database calls are built depends entirely on the website framework (WordPress, Drupal, Joomla, etc.) the site uses to serve content. To compound the issue, third-party developers often build their products to just work, not to work efficiently.

Here’s an efficient database call versus what’s posted above:


SELECT * FROM `WPDatabase`.`wp_posts` WHERE `post_content` LIKE '%Nginx%' AND `post_title` LIKE '%Nginx%';

This very small database call can be executed thousands of times an hour and the database will parse this request very quickly. The amount of CPU/memory pressure is very light and the server can quickly recover from this request.

To make this request a reality, the website framework must be properly coded to deliver efficient results. Unfortunately, this is not the case and this failure to code frameworks properly leaves 99% of all websites vulnerable to database collapse.

To mitigate this problem, and note that there is no bulletproof server-side fix here, the website code should be rebuilt to make database calls more efficient. This is not an easy solution; it requires extensive research and debugging to refactor the website code. Keep in mind that the PHP code, along with the SQL, must be refactored.
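As a sketch of what that refactoring can look like at the database layer, the statements below add a one-time full-text index to the wp_posts table from the queries above and then search it with MATCH ... AGAINST instead of a leading-wildcard LIKE. This assumes MySQL 5.6+ or MariaDB 10.0.5+ (where InnoDB supports full-text indexes), and the index name ft_title_content is made up for this example.


### one-time index creation
ALTER TABLE `WPDatabase`.`wp_posts`
    ADD FULLTEXT INDEX `ft_title_content` (`post_title`, `post_content`);

### the search now walks the index instead of converting and comparing every column of every row
SELECT `ID`, `post_title`
FROM `WPDatabase`.`wp_posts`
WHERE MATCH (`post_title`, `post_content`) AGAINST ('Nginx' IN NATURAL LANGUAGE MODE)
  AND `post_status` = 'publish';

The index is built once, when the table is altered; after that, searches read the index rather than every row, which is exactly the kind of change a framework or plugin refactor has to make.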

Why do these website frameworks make such massive database calls? Because that’s the easy solution. The reason large companies build their own custom website solutions has everything to do with serving millions, possibly billions, of visitors. No out-of-the-box solution that’s freely available to the average small business can do that.

The best solution for small business owners is to keep their websites simple. If you have 100 products to sell, then do a good job promoting those 100 products. Just remember, the more content online, the easier it is to crash the database.

For larger companies, the best solution is to focus on refactoring the code to limit what can be searched, and using database object caching like Memcached. This will dramatically speed up database calls because the request bypasses the database and goes straight to the Memcached object stored in server memory.

Keep in mind that object caching does not eliminate the database: the query still has to be built and executed at least once. Once the resulting object is stored in server memory, though, Memcached hands it back immediately instead of PHP-FPM rebuilding the command and hitting the database on every request.

If the website is sitting on an older version of MySQL or MariaDB, the best practice is to enable the query cache (Qcache). Newer versions of MySQL (8.0+) removed the query cache entirely and rely on their own internal caching, which does not need to be configured.

Add the following to /etc/my.cnf:


performance_schema=0
query_cache_type=1
query_cache_limit=131072
query_cache_size=37748736
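# query_cache_type=1 switches the query cache on, query_cache_limit (131072 bytes = 128 KB) caps the size
# of any single cached result, and query_cache_size (37748736 bytes = 36 MB) is the total memory reserved for the cache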

On smaller sites, those with fewer than 1,000 blog posts/products, it’s not necessary to enable ‘performance_schema’. Once a website reaches a level where deeper performance analysis is required, ‘performance_schema’ should be enabled and configured, which might look like this:


performance_schema=1
max_digest_length=32768
performance_schema_max_digest_length=131072
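# the digest lengths control how many bytes of each normalized SQL statement are kept for analysis (32 KB and 128 KB here)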

Keep in mind that MySQL prefers these values to be multiples of 1024 bytes.

Another important database decision is choosing the proper storage engine. The two most common are MyISAM and InnoDB. InnoDB is the default storage engine, and for the vast majority of websites it’s the correct choice. During a database call, while records are being accessed and extracted, the affected tables, rows and records must be locked against write operations so the query returns the most current data.

The InnoDB storage engine locks the rows and records during any database request. While this does require greater CPU/memory overhead, it allows the database to continue write operations on other tables during a query (read) execution.

The MyISAM storage engine locks the entire table during a query (read request) and may conflict with write operations because the entire table is locked, versus only a row and record in the InnoDB storage engine.

Knowing this, if the website can be updated during off-peak hours and the website framework does not require routine write operations, then the MyISAM storage engine is the best choice.

The takeaway from these two storage engines is that InnoDB is best for read/write operations, while MyISAM is best for read-heavy operations. MyISAM can far outperform InnoDB on reads, but using it requires the right website framework, and write operations should be properly scheduled.

The link between the website framework and its database queries depends entirely on how well the PHP code is written. PHP builds dynamic pages by building dynamic database queries. Highly skilled PHP developers are required for larger website platforms, but for most sites the open source solutions, such as WordPress, Drupal and Joomla, will do a great job.

PHP Configurations

PHP runs as a daemon (background process) under PHP-FPM, and PHP performance must be configured around how the website framework operates. For example, a standard WordPress site is primarily meant for reading blog posts and purchasing products. These are mostly “read” operations with some “write” operations, so PHP should be configured with very low timeouts and low memory.

PHP is easily the primary reason for a website crashing, and one of the first things anyone can do is reconfigure the default PHP settings to make the site behave more like a “read only” site. The default timeout and memory settings should be updated to deliver higher performance while preserving server resources. A tuned configuration might look like this:


### /etc/php.ini
max_execution_time = 47
memory_limit = 512M

The default PHP ‘max_execution_time’ is 60 seconds, which is good for most websites, but it’s also predictable. Since this is the default timeout, it’s easy for an attacker to use this predictable value to overwhelm PHP and crash the site.

Remember the ‘sleep’ command in the Bash script above? It looked like this:


sleep $((RANDOM % 49 + 3))

The random sleep is capped at just over 50 seconds (it waits between 3 and 51 seconds) because the PHP timeout is almost guaranteed to be 60 seconds. When a connection to the website is made, PHP will hold resources for that request for up to 60 seconds. If the same IP requests another URL less than 60 seconds later, the PHP connection is effectively always maintained, and server resources stay dedicated to servicing that connection.

The best solution is to ensure the PHP timeout is set to a number that no one can predict. If the site can get away with it, set the timeout as low as possible, yet high enough to execute the PHP code.

A huge share of the websites on the planet run WordPress, and if the site is typically used for reading, which is the purpose of WordPress, then the PHP timeout can be set very low, often less than 20 seconds.

What this does is allow that PHP connection to remain for no more than 20 seconds. Then it releases these resources for the next connection. The lower the timeout, the lower the ‘memory_limit’ can be set as well.

The ‘memory_limit’ must be configured based on the total number of PHP connections, and by default this is set to 128MB. This means that when some PHP code executes and occupies more than 128MB of server memory, the PHP will crash and the website will fail.

The good news about PHP is that it recovers from exceeded memory limits, and the site will come back online once the traffic load is reduced. Contrast this with a database collapse, where the database must be restarted to bring it back online.

If PHP cannot execute a request in less than 20 seconds, it doesn’t crash the server; it only fails that single connection. And because the timeout is low, it quickly recovers and can try again. This might make it seem like a low timeout is always the way to go, but that depends on the website framework. Poorly coded PHP can crash any connection, regardless of the timeout.

This is what hacktivists are seeking to exploit. Finding PHP limits is a primary means by which to overwhelm the server and temporarily disable new connections. If the PHP cannot parse the next connection, then the site will throw 502 (bad gateway) or 504 (gateway timeout) errors.

Since PHP is not built to automate connection timeouts and memory limits, it is up to the site owner to tune these settings based on site traffic.

In any PHP website framework, it’s easy to raise the ‘max_execution_time’ for the website (app) alone while keeping the server-wide timeout settings low. For example, this can be done with the following PHP call:


### /var/www/html/example/wp-config.php
### put this at the top of the file
ini_set('max_execution_time', 120);

This sets the PHP timeout limit, for this website (app) only, to 120 seconds. Under normal circumstances that is way too long and can easily leave the site open to attack, yet instead of finding developers who can refactor the PHP to make it more efficient, many site owners simply crank up the timeout to try to fix the problem.

This is not a permanent solution and is always a bad idea. I have clients who routinely crank up their PHP timeout to solve a problem, but never really address the cause of the issue.

When a server has a high PHP timeout and a low ‘memory_limit’, the result will be an immediate increase in 502 and 504 errors.

This website configuration is the perfect prey for any hacktivist predator.

PHP-FPM Configurations

The purpose of running PHP-FPM (PHP FastCGI Process Manager) is to use FastCGI (the Fast Common Gateway Interface) to process PHP code and deliver the result to Nginx. PHP-FPM is the bridge (gateway) between Nginx and PHP.

PHP-FPM has its own timeout configurations, and these affect how efficiently PHP can parse requests and deliver results to both Nginx and MySQL.

The default configurations can be found here:


### /etc/php-fpm.d/www.conf

pm = dynamic
pm.max_children = 50
pm.start_servers = 5
pm.min_spare_servers = 5
pm.max_spare_servers = 35
pm.max_requests = 0
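; '0' means workers are never recycled, so memory leaks in PHP code accumulate for the life of each process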

; The timeout for serving a single request after which the worker process will
; be killed. This option should be used when the 'max_execution_time' ini option
; does not stop script execution for some reason. A value of '0' means 'off'.
; Available units: s(econds)(default), m(inutes), h(ours), or d(ays)
; Default Value: 0
; request_terminate_timeout = 0

While this configuration is just fine for a small site, even a small bump in traffic can crash the PHP. Looking at these settings, the one setting that will cause the most trouble is the default:


pm.max_requests = 0

This ‘pm.max_requests’ parameter set to ‘0’ means ‘unlimited’. It is the default setting, and it will crash a server when even one site experiences a small increase in traffic. This single parameter is the number one cause of PHP crashes under PHP-FPM, and it is most likely why shared hosting servers running PHP-FPM let sites seemingly crash for no reason. The default is set up to fail right out of the box.

A better configuration, even for shared servers, would be:


### /etc/php-fpm.d/www.conf

pm = dynamic
pm.max_children = 128
; pm.start_servers = 5
pm.min_spare_servers = 16
pm.max_spare_servers = 48
pm.max_requests = 4096
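; each worker is now recycled after 4096 requests, which clears any slowly leaking memory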

request_terminate_timeout = 57
### make this 10 seconds longer than the PHP 'max_execution_time'
### if this is shorter than 'max_execution_time', it will cut short the time PHP is allowed to execute

Comment out the ‘pm.start_servers’ and this will allow PHP-FPM to automatically determine the number of start servers based on the min/max spare servers. The formula is:


### /etc/php-fpm.d/www.conf

Default Value: min_spare_servers + (max_spare_servers - min_spare_servers) / 2
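With the suggested values above (pm.min_spare_servers = 16, pm.max_spare_servers = 48), that works out to 16 + (48 - 16) / 2 = 32 starting workers.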

When a site is throwing large numbers of 502 and 504 errors, it’s bad PHP configuration nearly 100% of the time. This should be the first thing to review, but keep in mind that the site could also have outgrown the server. Shared hosting, for example, is nearly guaranteed to produce errors because, as the name implies, the server is shared with many other websites, and it only takes one site to crash PHP for every site on the server.

Conclusion

As this article approaches 5,000 words, it’s important to understand that this is just where it starts. These are the basics that most self-professed server admins should be able to navigate.

Now do you see why server DevOps charge over $200 per hour?

My clients experience a 4 to 10 fold increase in online sales revenue because their servers are powerhouse performers. It takes decades of experience to decipher the real issues with a server, and just because a site is throwing multiple errors doesn’t mean the basics found in this article will solve every problem.

The difference between a server administrator (admin) and server development operations (DevOps) comes down to the custom server code written to create a high performance server. As this article points out, any server can be crashed. Even billion dollar companies experience server conflicts that can elude a good server admin. While these issues can often be difficult to overcome, a server DevOps builds the software that automatically recovers a server from Distributed Denial of Service (DDoS) attacks.
