Deploying WordPress over Nginx and PHP-FPM

Welcome, random web traveler. As the title suggests, this post deals with plain, production-ready examples of Nginx configuration (plus PHP-FPM) for a WordPress site.

Before we move to the real thing, note that these examples were tested on both Debian 7 and CentOS 7. Since I don’t want to dive into setting up those servers for WordPress, I’m just giving you refined Nginx configs that you may find useful. That said, the steps for building up WordPress on Linux are pretty simple (a package-manager sketch follows the list):

– Installing Nginx from the package repository or compiling it from source;

– Installing php5, php-mysql, php-fpm and other PHP libraries if needed (like php-gd);

– Installing MySQL or MariaDB;

– And of course configuring PHP, Nginx and MySQL/MariaDB.
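For example, on Debian 7 a minimal sketch of those steps could look like this (package names are the Debian ones; CentOS 7 uses yum and slightly different names):

# Debian 7: install Nginx, PHP-FPM with common extensions, and MySQL
apt-get update
apt-get install nginx php5-fpm php5-mysql php5-gd mysql-server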

OK, let’s start with the Nginx server configuration, found in /etc/nginx/nginx.conf:

user www-data;
worker_processes 4;
pid /var/run/nginx.pid;

events {
 worker_connections 768;
 # multi_accept on;
}

http {

 ##
 # Basic Settings
 ##

 sendfile on;
 tcp_nopush on;
 tcp_nodelay on;
 keepalive_timeout 65;
 types_hash_max_size 2048;
 server_tokens off;

 client_max_body_size 100m;

 client_header_buffer_size 1k;
 large_client_header_buffers 8 8k;

 # server_names_hash_bucket_size 64;
 # server_name_in_redirect off;

 include /etc/nginx/mime.types;
 default_type application/octet-stream;

 ##
 # Logging Settings
 ##

 access_log /var/log/nginx/access.log;
 error_log /var/log/nginx/error.log;

 ##
 # SSL settings
 ##
 ssl_session_cache shared:SSL:10m;
 ssl_session_timeout 10m;

 ##
 # Gzip Settings
 ##

 gzip on;
 gzip_disable "msie6";

 # gzip_vary on;
 # gzip_proxied any;
 # gzip_comp_level 6;
 # gzip_buffers 16 8k;
 # gzip_http_version 1.1;
 # gzip_types text/plain text/css application/json application/x-javascript text/xml application/xml application/xml+rss text/javascript;

 ##
 # nginx-naxsi config
 ##
 # Uncomment it if you installed nginx-naxsi
 ##

 #include /etc/nginx/naxsi_core.rules;

 ##
 # nginx-passenger config
 ##
 # Uncomment it if you installed nginx-passenger
 ##

 #passenger_root /usr;
 #passenger_ruby /usr/bin/ruby;

 ##
 # Virtual Host Configs
 ##

 include /etc/nginx/conf.d/*.conf;
 include /etc/nginx/sites-enabled/*;
}

Pretty straightforward, right?

Next, the Nginx magic behind WordPress. This example assumes that we want the administration of WordPress to go through SSL (the HTTPS protocol). File: example.conf

server {
 ## Your website name goes here.
 server_name example.com www.example.com;
 listen 80;
 ## Your only path reference.
 root /opt/wordpress/;
 ## This should be in your http block and if it is, it's not needed here.
 index index.php;
 # port_in_redirect on;

 access_log /var/log/nginx/example_log;
 error_log /var/log/nginx/example_err warn;

 # rewrite all 403 to 404
 error_page 403 = 404;

 location = /favicon.ico {
 log_not_found off;
 access_log off;
 }

 location = /robots.txt {
 allow all;
 log_not_found off;
 access_log off;
 }

 # deny all access to .dot files
 location ~ /\. { access_log off; log_not_found off; deny all; }

 # deny access to files starting with a $, these are usually temp files
 location ~ ~$ { access_log off; log_not_found off; deny all; }

 location / {
 # This is cool because no php is touched for static content.
 # include the "?$args" part so non-default permalinks don't break when using a query string
 try_files $uri $uri/ /index.php?$args;
 }

 location ~ /wp-admin/admin-ajax\.php {
 try_files $uri =404;

 # With php5-fpm
 fastcgi_intercept_errors on;
 fastcgi_pass 127.0.0.1:9000;

 fastcgi_index index.php;
 fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
 include fastcgi_params;

 }

 # Requests to wp-admin and wp-login.php go through the HTTPS protocol
 location ~ /(wp-admin/|wp-login\.php) {
 return 301 https://$host$request_uri;
 #rewrite /wp-(admin|login) $scheme://$host$request_uri/ permanent;
 }

 location ~ \.php$ {
 try_files $uri =404;
 #NOTE: You should have "cgi.fix_pathinfo = 0;" in php.ini

 # With php5-fpm
 fastcgi_intercept_errors on;
 fastcgi_pass 127.0.0.1:9000;

 fastcgi_index index.php;
 fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
 include fastcgi_params;

 }

 location ~* \.(js|css|png|jpg|jpeg|gif|ico)$ {
 expires max;
 log_not_found off;
 }

 error_page 500 502 503 504 /50x.html;
 location = /50x.html {
 root /usr/share/nginx/www;
 }

}

server {
 listen 443 ssl;
 server_name example.com www.example.com;
 index index.php;

 root /opt/wordpress/;

 # Logs
 access_log /var/log/nginx/example_ssl_access.log;
 error_log /var/log/nginx/example_ssl_error.log info;

 ssl on;
 ssl_certificate /etc/ssl/certs/example.com.crt;
 ssl_certificate_key /etc/ssl/keys/example.com.key;

 ssl_protocols TLSv1 TLSv1.1 TLSv1.2; # SSLv3 dropped, it is broken by POODLE
 ssl_ciphers ECDH+AESGCM:DH+AESGCM:ECDH+AES256:DH+AES256:ECDH+AES128:DH+AES:ECDH+3DES:DH+3DES:RSA+AES:RSA+3DES:!aNULL:!MD5:!DSS;
 ssl_prefer_server_ciphers on;

 # Process requests to wp-admin/* and wp-login.php
 location ~ /wp-(admin|login|content|includes) {

 location ~ \.php$ {
 try_files $uri =404;
 #fastcgi_split_path_info ^(.+\.php)(/.+)$;

 # With php5-fpm
 fastcgi_intercept_errors on;
 fastcgi_pass 127.0.0.1:9000;

 fastcgi_index index.php;
 fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
 fastcgi_param HTTPS on;
 include fastcgi_params;

 }
 }

 # redirect everything else back to the non-ssl site
 location / { return 301 http://$host$request_uri; }

 # rewrite all 403 to 404
 error_page 403 = 404;

 # deny all access to .dot files
 location ~ /\. { access_log off; log_not_found off; deny all; }

 # deny access to files starting with a $, these are usually temp files
 location ~ ~$ { access_log off; log_not_found off; deny all; }

 # keep logs clean by not logging access to favicon.
 location = /favicon.ico { access_log off; log_not_found off; }

 # keep logs clean by not logging access to robots.txt
 location = /robots.txt { access_log off; log_not_found off; }

}
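After any change like this, it’s worth validating the configuration before reloading; a minimal sketch (the service command differs between distros and init systems):

# Test the configuration, then reload Nginx gracefully
nginx -t && service nginx reload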

The FastCGI params defined in Nginx (/etc/nginx/fastcgi_params):


fastcgi_param QUERY_STRING $query_string;
fastcgi_param REQUEST_METHOD $request_method;
fastcgi_param CONTENT_TYPE $content_type;
fastcgi_param CONTENT_LENGTH $content_length;

fastcgi_param SCRIPT_FILENAME $request_filename;
fastcgi_param SCRIPT_NAME $fastcgi_script_name;
fastcgi_param REQUEST_URI $request_uri;
fastcgi_param DOCUMENT_URI $document_uri;
fastcgi_param DOCUMENT_ROOT $document_root;
fastcgi_param SERVER_PROTOCOL $server_protocol;

fastcgi_param GATEWAY_INTERFACE CGI/1.1;
fastcgi_param SERVER_SOFTWARE nginx/$nginx_version;

fastcgi_param REMOTE_ADDR $remote_addr;
fastcgi_param REMOTE_PORT $remote_port;
fastcgi_param SERVER_ADDR $server_addr;
fastcgi_param SERVER_PORT $server_port;
fastcgi_param SERVER_NAME $server_name;

fastcgi_param HTTPS $https;

# PHP only, required if PHP was built with --enable-force-cgi-redirect
fastcgi_param REDIRECT_STATUS 200;

You can see that two of these params are overridden in example.conf: SCRIPT_FILENAME and, in the SSL block, HTTPS.

Blended with Nginx, we use the PHP FastCGI Process Manager, or PHP-FPM. I like to run it as a daemon, and for that we can use an init script installed at /etc/init.d/php-fpm. By the way, I borrowed it.


#!/bin/sh
### BEGIN INIT INFO
# Provides: php-fpm php5-fpm
# Required-Start: $remote_fs $network
# Required-Stop: $remote_fs $network
# Default-Start: 2 3 4 5
# Default-Stop: 0 1 6
# Short-Description: starts php5-fpm
# Description: Starts PHP5 FastCGI Process Manager Daemon
### END INIT INFO

# Author: Ondrej Sury <ondrej@debian.org>

PATH=/sbin:/usr/sbin:/bin:/usr/bin
DESC="PHP5 FastCGI Process Manager"
NAME=php5-fpm
DAEMON=/usr/sbin/$NAME
DAEMON_ARGS="--fpm-config /etc/php5/fpm/php-fpm.conf"
PIDFILE=/var/run/php5-fpm.pid
TIMEOUT=30
SCRIPTNAME=/etc/init.d/$NAME

# Exit if the package is not installed
[ -x "$DAEMON" ] || exit 0

# Read configuration variable file if it is present
[ -r /etc/default/$NAME ] && . /etc/default/$NAME

# Load the VERBOSE setting and other rcS variables
. /lib/init/vars.sh

# Define LSB log_* functions.
# Depend on lsb-base (>= 3.0-6) to ensure that this file is present.
. /lib/lsb/init-functions

#
# Function to check the correctness of the config file
#
do_check()
{
    [ "$1" != "no" ] && $DAEMON $DAEMON_ARGS -t 2>&1 | grep -v "\[ERROR\]"
    FPM_ERROR=$($DAEMON $DAEMON_ARGS -t 2>&1 | grep "\[ERROR\]")

    if [ -n "${FPM_ERROR}" ]; then
        echo "Please fix your configuration file..."
        $DAEMON $DAEMON_ARGS -t 2>&1 | grep "\[ERROR\]"
        return 1
    fi
    return 0
}

#
# Function that starts the daemon/service
#
do_start()
{
    # Return
    #  0 if daemon has been started
    #  1 if daemon was already running
    #  2 if daemon could not be started
    start-stop-daemon --start --quiet --pidfile $PIDFILE --exec $DAEMON --test > /dev/null \
        || return 1
    start-stop-daemon --start --quiet --pidfile $PIDFILE --exec $DAEMON -- \
        $DAEMON_ARGS 2>/dev/null \
        || return 2
    # Add code here, if necessary, that waits for the process to be ready
    # to handle requests from services started subsequently which depend
    # on this one. As a last resort, sleep for some time.
}

#
# Function that stops the daemon/service
#
do_stop()
{
    # Return
    #  0 if daemon has been stopped
    #  1 if daemon was already stopped
    #  2 if daemon could not be stopped
    #  other if a failure occurred
    start-stop-daemon --stop --quiet --retry=QUIT/$TIMEOUT/TERM/5/KILL/5 --pidfile $PIDFILE --name $NAME
    RETVAL="$?"
    [ "$RETVAL" = 2 ] && return 2
    # Wait for children to finish too if this is a daemon that forks
    # and if the daemon is only ever run from this initscript.
    # If the above conditions are not satisfied then add some other code
    # that waits for the process to drop all resources that could be
    # needed by services started subsequently. A last resort is to
    # sleep for some time.
    start-stop-daemon --stop --quiet --oknodo --retry=0/30/TERM/5/KILL/5 --exec $DAEMON
    [ "$?" = 2 ] && return 2
    # Many daemons don't delete their pidfiles when they exit.
    rm -f $PIDFILE
    return "$RETVAL"
}

#
# Function that sends a SIGHUP to the daemon/service
#
do_reload() {
    #
    # If the daemon can reload its configuration without
    # restarting (for example, when it is sent a SIGHUP),
    # then implement that here.
    #
    start-stop-daemon --stop --signal USR2 --quiet --pidfile $PIDFILE --name $NAME
    return 0
}

case "$1" in
start)
[ "$VERBOSE" != no ] && log_daemon_msg "Starting $DESC" "$NAME"
do_check $VERBOSE
case "$?" in
0)
do_start
case "$?" in
0|1) [ "$VERBOSE" != no ] && log_end_msg 0 ;;
2) [ "$VERBOSE" != no ] && log_end_msg 1 ;;
esac
;;
1) [ "$VERBOSE" != no ] && log_end_msg 1 ;;
esac
;;
stop)
[ "$VERBOSE" != no ] && log_daemon_msg "Stopping $DESC" "$NAME"
do_stop
case "$?" in
0|1) [ "$VERBOSE" != no ] && log_end_msg 0 ;;
2) [ "$VERBOSE" != no ] && log_end_msg 1 ;;
esac
;;
status)
status_of_proc "$DAEMON" "$NAME" && exit 0 || exit $?
;;
check)
do_check yes
;;
reload|force-reload)
log_daemon_msg "Reloading $DESC" "$NAME"
do_reload
log_end_msg $?
;;
reopen-logs)
log_daemon_msg "Reopening $DESC logs" $NAME
if start-stop-daemon --stop --signal USR1 --oknodo --quiet \
--pidfile $PIDFILE --exec $DAEMON
then
log_end_msg 0
else
log_end_msg 1
fi
;;
restart)
log_daemon_msg "Restarting $DESC" "$NAME"
do_stop
case "$?" in
0|1)
do_start
case "$?" in
0) log_end_msg 0 ;;
1) log_end_msg 1 ;; # Old process is still running
*) log_end_msg 1 ;; # Failed to start
esac
;;
*)
# Failed to stop
log_end_msg 1
;;
esac
;;
*)
echo "Usage: $SCRIPTNAME {start|stop|status|restart|reload|force-reload}" >&2
exit 1
;;
esac

:

This script assumes that we have one main php-fpm.conf file where we define the PID file, the log file, the pools we will use, and so on. Every change we make to the PHP configuration, like upload_max_filesize, can then be applied by reloading the daemon.
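A minimal sketch of such a php-fpm.conf, matching the fastcgi_pass 127.0.0.1:9000 used above (the pool sizes are assumptions; tune them for your hardware):

[global]
pid = /var/run/php5-fpm.pid
error_log = /var/log/php5-fpm.log

[www]
user = www-data
group = www-data
listen = 127.0.0.1:9000
pm = dynamic
pm.max_children = 10
pm.start_servers = 2
pm.min_spare_servers = 1
pm.max_spare_servers = 3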

That’s it, my dearest. Don’t forget that WordPress itself requires additional settings for working with SSL too. Any questions or suggestions, please write.
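For example, since only the admin goes over HTTPS here, WordPress should be told to force SSL for the dashboard; a one-line sketch for wp-config.php:

/* Force the WordPress admin and login over SSL */
define('FORCE_SSL_ADMIN', true);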

Hope I saved a little of your precious time 😉

Resolving HTTP Error 416 Requested Range not satisfiable (IIS 7.5 example)

The chaos called HTTP often surprises us with some new sweet little error that puts us in a not-so-satisfiable position. For example, the anonymous HTTP error 416.

This error is the product of a mismatch between the client (for example, a browser) and the server regarding the Accept-Ranges header. In particular, the error emerges when the client requests a bigger resource, such as a PDF file or an image, and waits for the server’s response. The initial response can be fine, but the streaming of the resource can still fail. Why?

Accept-Ranges: bytes means that the resource can be served partially. After the initial Content-Length header, the server should provide a Content-Range header (byte range and total size) in the response to every ‘partial’ request, to keep the stream consistent. If this doesn’t work right, say hello to error 416. Note that this is highly influenced by how the client is implemented, not only the server.

To check whether the server supports range requests, we can do a simple thing: request the wanted resource with a Range header and look at the status code. If 206 (Partial Content) is returned, ranges are supported. If the response code is 200, the server is ignoring ranges and should drop the Accept-Ranges header so it doesn’t fool clients.

Curl example (use -I to send a HEAD request and inspect the Accept-Ranges header): $ curl -I http://address/path/resource
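To actually exercise a range, a sketch that asks for the first 100 bytes and prints only the status code (206 means partial content works):

$ curl -s -o /dev/null -w "%{http_code}\n" -H "Range: bytes=0-99" http://address/path/resource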

Nevertheless, let’s just solve the problem. Usually I notice this error in the IIS -> Chrome or Apache -> Chrome combination, but it occurs with Firefox too.

Setting Accept-Ranges: none in IIS: Internet Information Services (IIS) Manager -> Server Node -> Sites -> Problematic Site -> HTTP Response Headers (in the IIS section) -> Add (Action) -> Name: Accept-Ranges; Value: none.

Setting Accept-Ranges: none in Apache2: enable the mod_headers module and add “Header set Accept-Ranges none” to the configuration of the host.
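On a Debian-style Apache layout, a minimal sketch of those two steps (the service name may differ on your distro):

a2enmod headers
# then put "Header set Accept-Ranges none" inside the site's <VirtualHost> block
service apache2 restart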

Restart for the changes to take effect.

Have a nice day : )

Cool usage of TimeCategory in Groovy

Groovy, the JVM-based programming language, implements a feature called Categories, originally borrowed from Objective-C. A simple explanation of this feature is the ability to add new methods to existing classes without modifying their original code, which in a way is injecting new methods through a Category class. More information can be found in the official documentation.

Rather interesting for me was playing with the TimeCategory class while writing a short and easy script for fixing some datetime columns in a database. This class offers a convenient way of manipulating dates and times.

General syntax for categories is the following:

use ( Class ) {
// Some code
}
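To see what a category does before applying it to time, here is a tiny hypothetical one (ShoutCategory and its shout() method are made up for illustration) that injects a method into String:

// A category is just a class with static methods whose first
// parameter is the type being extended
class ShoutCategory {
    static String shout(String self) {
        self.toUpperCase() + '!'
    }
}

use (ShoutCategory) {
    println 'hello'.shout() // prints HELLO!
}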

Concrete usage of TimeCategory:

use ( TimeCategory ) {
// application on numbers:
println 1.minute.from.now
println 10.hours.ago
// application on dates
def someDate = new Date()
println someDate - 3.months
}

Seems weird? Since when does Integer have months, minutes, hours, etc.? Well, it still doesn’t have any of that; those methods are dynamically added while the TimeCategory is in use.

If you are interested in how this is possible, I suggest you go through the TimeCategory API and, if possible, its source code. This forum post can also be useful for a deeper understanding of the Groovy magic.

And last but not least, an example Groovy script for your pleasure.


@GrabConfig(systemClassLoader=true)
@Grab(group='mysql', module='mysql-connector-java', version='5.1.27')

import groovy.time.TimeCategory
import java.sql.Timestamp

sql = groovy.sql.Sql.newInstance(
"jdbc:mysql://hostname:3306/DB_name?autoReconnect=true",
"user",
"password",
"com.mysql.jdbc.Driver")

def rows = [:]

// Select Data
sql.eachRow("select * from Table_Name") {
    def impDates = new ImportedDates() // A custom class found in the same package/directory as the script
    impDates.dateColumn = it.dateColumn

    if (impDates.dateColumn != null) {
        use (TimeCategory) {
            impDates.dateColumn = impDates.dateColumn - 1.day // Shift dateColumn one day back in time
        }
    }

    rows.put(it.UID, impDates) // Map the primary key to the ImportedDates object
}

// Update Data
rows.each { row ->
    ImportedDates id = row.value
    // If the value is not null, convert it to a Timestamp (we use a datetime column in the DB)
    dateColumn = null
    if (id.dateColumn) dateColumn = new Timestamp(id.dateColumn.getTime())

    // Actual update query
    sql.executeUpdate('update Table_Name set dateColumn = ? ' +
            'where UID like ?',
            [dateColumn, row.key.toString()])
}

Cheers.

Remove old backups with Bash Shell

Hello folks, I present you a bash script I wrote recently. Its purpose is deleting ‘old’ directories, marked by a time period expressed in days. The script is simple and at its base uses the find command to locate the directories we want to remove.

The general idea behind this script is a backup tree whose first level of directories is categorized by some logical distinction, with a retention period applied to the second level of directories, which are sorted by some other means, let’s say date. The script, run by a cron job, searches the second level of directories and deletes the ones older than some number of days n.

Example structure:

/mnt/backup/apps/2014-06-20/*

/mnt/backup/dbs/2014-06-1/*

/mnt/backup/logs/2014-06-1/*

If we want to exclude some of the directories in /mnt/backup, for example /mnt/backup/logs, we can do that by putting the path into the variable EXCLUDED_DIRS. Note that only directories from depth 1 can be excluded with this implementation (/mnt/backup/<> = OK, /mnt/backup/logs/<> != OK).

EXCLUDED_DIRS="/mnt/backup/logs /some/other/etc"

USAGE of the script:

<script_name> {arg1:/path/to/folder} {arg2:n days}

or

./deleteOldDirs.sh /mnt/backup 60
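If you run it from cron, a sketch of a nightly crontab entry (the path and schedule are assumptions):

# Every night at 02:30, delete second-level backup dirs older than 60 days
30 2 * * * /usr/local/bin/deleteOldDirs.sh /mnt/backup 60 >> /var/log/deleteOldDirs.log 2>&1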

 

A note on find: for checking the timestamp of a directory I use find’s -ctime option. With this argument, find looks at the inode pointing to the file/folder and uses its metadata (permissions, owner, etc.) to determine when it was last changed. On the other hand, if we want to check modification of the data in the backup itself, we can use the -mtime argument instead.

The script; beware of removing yourself.

May the force be with you!

#!/bin/bash

#
# GNRP LICENSE: Script licensed by GOD
#
# @author: Igor Ivanovski <igor at genrepsoft.net>
#
# July, 2014
#

#
# *** VARIABLES
#

BACKUP_PATH=$1 # First argument should be the path of the backup folder (don't clobber $PATH!)
DAYS=$2        # Second argument should be the max number of days we want to keep backups

GREP="/bin/grep"
MOUNT="/bin/mount"
FIND="/usr/bin/find"
RM="/bin/rm"

PATH_STATUS=0
MOUNT_STATUS=0

EXCLUDED_DIRS="/home/igor/Documents /home/igor/Programs"

#
# *** CHECKERS
#

if [ $# -eq 0 ]; then
    echo "No arguments provided"
    exit 1
fi

if [ -z "$BACKUP_PATH" ] || [ -z "$DAYS" ]; then
    echo "Not all arguments provided"
    exit 1
fi

#
# *** FUNCTIONS
#

function check_if_path_exists {

    [ ! -d "$1" ] && echo "Path is invalid" || PATH_STATUS=1
}

function check_mount {
    if [ -n "$($GREP $1 /etc/fstab)" ]; then
        echo "Mounting point exists..."
        if [ ! -n "$($MOUNT -l | $GREP $1)" ]; then
            echo "...but not mounted."
            echo "Mounting now."
            $MOUNT $1
            [ $? -eq 0 ] && { echo "Mounting was successful!"; MOUNT_STATUS=1; } || echo "Can't mount";
        fi
    else
        echo "Mounting point doesn't exist"
    fi
}

# Excludes only if the parent directory matches
function check_if_parent_dir_excluded {
    for dir in $EXCLUDED_DIRS; do
        size=${#dir}
        check=${1:0:size}
        #echo "$dir : ${#dir} == $check : ${#check}"
        if [ "$dir" == "$check" ]; then
            echo "1"
        fi
    done
}

function find_dirs {
    $FIND $BACKUP_PATH -mindepth 2 -maxdepth 2 -type d -ctime +$DAYS -print0
}

function print_dirs {
    while read -r n; do
        result=$(check_if_parent_dir_excluded "$n")
        [ -z "$result" ] && printf '%q\n' "$n" || echo "Found EXCLUDE_DIR: $n"
    done < <($FIND $BACKUP_PATH -mindepth 2 -maxdepth 2 -type d -ctime +$DAYS -print) # Use NL delimiter
}

function del_dirs {
    while read -r -d '' n; do
        printf '%q\n' "$n"
        result=$(check_if_parent_dir_excluded "$n")
        if [ -z "$result" ]; then
            echo "Directory will be deleted now..."
            $RM -rf "$n"
            [ $? -eq 0 ] && echo "Gone!" || echo "Some error occurred!"
        else
            echo "Found EXCLUDE_DIR: $n, continuing..."
        fi
    done < <($FIND $BACKUP_PATH -mindepth 2 -maxdepth 2 -type d -ctime +$DAYS -print0) # Use NUL delimiter
}

#
# *** MAIN
#

check_if_path_exists $BACKUP_PATH
[ $PATH_STATUS -eq 0 ] && check_mount $BACKUP_PATH
[ $MOUNT_STATUS -eq 1 ] && check_if_path_exists $BACKUP_PATH
[ $PATH_STATUS -eq 1 ] && del_dirs || { echo "Bye."; exit 0; }

#
# *** END
#

Creating Java keystore from existing private key and certificate

With the bunch of programs and code found on the Web these days, Code Signing Certificates are a fact and a necessity. But end users and developers are still in the process of adjusting their awareness of which programs/scripts/code are safe and junk-free and should be trusted before running them on a local machine. With this little guide I want to help people who are new to this area of problems.

Different platforms offer different ways of code signing their apps, and in this post I will focus just on Java-based systems.

Java web and desktop apps are bundled with keystore files that keep the certificate chains signed by Internet authorities. With this technique it is easy to distinguish trusted from untrusted programs, with an investment of some time and money.

Generally, the process of creating a Java keystore that can sign applications (source code) can be covered in a couple of steps involving the client and the certificate issuer (a keytool sketch of steps 1 and 2 follows the list):

  1. The client creates a keystore file and generates a private/public key pair
  2. The client exports a Certificate Signing Request (CSR) from the keys, with personal and trustworthy data
  3. The client sends the CSR to the certificate issuer and waits for approval. Normally the client is contacted during the pending time.
  4. The certificate issuer sends the client the signed certificate, and probably additional intermediate/root chain certificates that need to be included in the keystore.
  5. The client imports the certificate (probably in PKCS#7 format) into the original keystore that was used to generate the keys and the CSR, under the same alias that was used when creating the keystore.
  6. The keystore is included in Java applications and referenced by the alias in order to sign the JARs used in the apps.
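For the first two steps, a minimal keytool sketch (the alias and file names are assumptions):

# 1. Create a keystore and generate the key pair in it
keytool -genkeypair -alias myapp -keyalg RSA -keysize 2048 -keystore keystore.jks

# 2. Export a CSR for that key pair
keytool -certreq -alias myapp -file myapp.csr -keystore keystore.jks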

However, it can happen that the client receives a private key that ought to be used, without having previously created a valid keystore and generated a key pair within it. This received key was already used for generating a CSR, and the certificate request was already sent to the authority.

Well, at this point it gets confusing: what is the next step that should be taken, and is it possible to use this key for creating a new keystore? Some will say it is not possible (which seems logical, because keytool doesn’t allow it) and that you will need to create a new keystore, generate a key pair, and issue a new certificate request with the CSR exported from that key pair. That’s not true, though; there is always a way.

Let’s suppose the original request has been approved and you received a valid certificate, cert.crt. At this point you have private.key and the trusted cert.crt.

These files need to be merged and exported into PKCS#12 format with the help of OpenSSL.

openssl pkcs12 -export -in cert.crt -inkey private.key -certfile cert.crt -name <certificate(alias)_name> -out keystore.p12

Next, this newly generated keystore.p12 should be used to create a new keystore in JKS format, with the help of the keytool from the JDK.

keytool -importkeystore -srckeystore keystore.p12 -srcstoretype pkcs12 -destkeystore keystore.jks -deststoretype JKS

And that’s it, voilà! We have created a keystore in JKS format from an existing private key.
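To double-check what ended up inside, listing the keystore contents never hurts:

keytool -list -v -keystore keystore.jks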

Once we have the keystore we need, it is easy to import additional certificates if required. Example:

keytool -import -trustcacerts -alias <alias(certificate)_name> -keystore keystore.jks -file <certificate_filename>

That’s it, three commands that will make your life easier.


Cheers,

Igor

Resolving error 413 (Request Entity Too Large/Not Allowed) in IIS 7.5

Working with IIS lately has brought me a lot of trouble; however, it has also increased my in-depth knowledge of its working abilities and adaptability.

One lovely situation (beyond the minor ones) appeared after we did the transition from HTTP to HTTPS. After fixing the minor ones, everything was working smooth and groovy, except that sometimes the upload of files was broken. Then we realized it was not someTIME but someTHING, the concrete size of the uploaded file, that was causing the problem. Ha, such a common problem when setting up a web server, you say; so said I too, but this time the workaround was a little more of a pain in the *ss, if you know what I mean.

The response given when uploading was an intelligent block by the web server, resulting in error 413 – Request Entity Too Large. That doesn’t make sense: I’m uploading a file that is 100 KB, but the settings allow files up to 100 MB…

So with the help of rogue googling enabled by http://www.startpage.com, I set myself digging into overcoming the problem. One thing was clear: the trigger for this dysfunction was certainly HTTPS, since uploading worked fine over plain HTTP. That gave me a rough but brilliant idea of what HTTPS does: it encrypts, it keeps extra request data, it certainly enlarges the request payload.

Hmm, OK, first let’s check the standard file limits in IIS.

Normal setting for max upload file size:

Setting the request limits in the root web.config of the site (the default is 30 MB). This can also be set in the Internet Information Services Manager (MACHINE -> Site -> IIS -> Request Filtering -> Edit Feature Settings):

<!-- 100 MB. Value is in bytes -->
<security>
    <requestFiltering>
        <requestLimits maxAllowedContentLength="102400000" />
    </requestFiltering>
</security>

For ASP.NET there is a more specific configuration:

http://msdn.microsoft.com/en-us/library/e1f13641%28v=vs.85%29.aspx

Example:

<!-- Note: maxRequestLength is in KB, so 102400 KB = 100 MB -->
<system.web>
    <httpRuntime maxRequestLength="102400" executionTimeout="3600" />
</system.web>

For legacy ASP:

<!-- This goes under ASP; can be set with IIS Manager also 😉 -->
<limits maxRequestEntityAllowed="102400000" />

And here comes our solution:

http://www.iis.net/configreference/system.webserver/serverruntime

What we faced here is a buffer-related problem. It’s not about maxRequestEntityAllowed, since its default size is unlimited, but about how IIS handles the request. After some empirical testing we noticed that the problem occurred only with files larger than or equal to 42 KB. And what is the default value of uploadReadAheadSize? 42 KB.

Then I charged myself to change this property, so I did the following.

We need to override the default values of the section by means of <location> (this configuration is not possible through IIS Manager):

<location path="SiteName" overrideMode="Allow">
    <system.webServer>
        <asp>
            <session />
            <comPlus />
            <cache />
            <limits maxRequestEntityAllowed="102400000" />
        </asp>
        <serverRuntime enabled="true" uploadReadAheadSize="102400" />
    </system.webServer>
</location>

However, for some God-knows-what reason, this configuration didn’t change the default values, even with overriding enabled for the serverRuntime section in applicationHost.config. For an easy check there is one good friend: appcmd.

C:\Windows\System32\inetsrv\appcmd.exe list config "SiteName" -section:system.webServer/serverRuntime

And this brings us to the solution. To cut the story short:

* Enable the serverRuntime section

C:\Windows\System32\inetsrv\appcmd.exe set config "SiteName" -section:system.webServer/serverRuntime /enabled:"True" /commit:apphost

* Set the uploadReadAheadSize to 10 MB

C:\Windows\System32\inetsrv\appcmd.exe set config "SiteName" -section:system.webServer/serverRuntime /uploadReadAheadSize:"10240000" /commit:apphost

Restart if required, and that’s it.

Just as a side note: don’t forget to change uploadReadAheadSize back to something smaller and more realistic, since 10 MB is huge for a buffer, and you don’t want to be hit by the nasty bad boys with their huge payload packets.

Setting Up Context in Apache Tomcat for Serving Static Files

The intro:

So I’ve heard you want to serve static files from your Tomcat web app in a way that they won’t be deleted on WAR redeploy or Tomcat restart?

You have a solution, and that is mapping a custom Context in your Apache Tomcat server.xml.

The scenario:

You have a site that allows users to upload images that are public, shared, and not under the hood of some security filter. The most intuitive solution is to put them in some directory, i.e. ‘uploads’, but then you realize that the things in the exploded WAR are rewritten on redeploy or, if the WAR is in Tomcat’s webapps directory, on restart (you can change this behaviour).

The solution is simple: save the files to some directory outside of the WAR (something like ‘/usr/share/tomcat/uploads’) and map that directory onto a server context of your Tomcat AS (something like http://lesite:8080/uploads).

With a workaround like this you will see your uploaded cute kitty picture like this: http://lesite:8080/uploads/kitty.jpg

The implementation:

Let’s use the same examples. The mapping is done in <CATALINA_HOME>/conf/server.xml (I hope you know what and where CATALINA_HOME is).

This is the default situation on a new Tomcat install (a snippet from server.xml):

<Host name="localhost" appBase="webapps" unpackWARs="true" autoDeploy="true">

    <Valve className="org.apache.catalina.valves.AccessLogValve" directory="logs"
           prefix="localhost_access_log." suffix=".txt"
           pattern="%h %l %u %t &quot;%r&quot; %s %b" />

</Host>

But we want to change that into this:

<Host name="localhost" appBase="webapps" unpackWARs="true" autoDeploy="true">

    <Context docBase="/usr/share/tomcat/uploads" path="/uploads" />

    <Valve className="org.apache.catalina.valves.AccessLogValve" directory="logs"
           prefix="localhost_access_log." suffix=".txt"
           pattern="%h %l %u %t &quot;%r&quot; %s %b" />

</Host>

And that’s it, end of setting. Restart, code and redeploy.

The cookie:

Java snippet of simple utilization:

import java.io.File;
import java.io.IOException;
import java.nio.file.Files;
import javax.servlet.ServletException;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

public class UploadsServlet extends HttpServlet {

    @Override
    protected void doGet(HttpServletRequest request, HttpServletResponse response) throws ServletException, IOException {
        // getPathInfo() is the part of the URL after the servlet mapping, e.g. "/kitty.jpg"
        File file = new File("/usr/share/tomcat/uploads", request.getPathInfo());
        response.setHeader("Content-Type", Files.probeContentType(file.toPath()));
        response.setHeader("Content-Length", String.valueOf(file.length()));
        Files.copy(file.toPath(), response.getOutputStream());
    }

}

The conclusion:

In exact matters the need for a conclusion is deprecated. Everything should be concluded one way:

return goToTopAndReadAgain();

The hint:

Maybe you won’t be impressed, and you probably have a better solution/implementation for the scenario. However, let me give you a clue about how this can be useful in a different situation: proxying and load balancing, possibly with Nginx in front and a couple of Tomcats behind, defining new server contexts and getting a feel of that damn Superman speed. An Nginx sketch of that front follows.
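A minimal sketch, assuming two local Tomcats on ports 8080 and 8081 (addresses and ports are assumptions):

upstream tomcats {
    server 127.0.0.1:8080;
    server 127.0.0.1:8081;
}

server {
    listen 80;
    # Balance requests across the Tomcats behind
    location / {
        proxy_pass http://tomcats;
        proxy_set_header Host $host;
    }
}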

Fiuuuuuuuuuuuuuuuuu…

(salutations and thanks to a friend of mine for collaboration)