In relationship with thoughts

Writing is hard. You need to have coherent, preformed thoughts that you want to transfer to some kind of document. Thoughts are not something that is learned; their origin is yet another mystery, but at least we can say that they are a reflection of one’s mind. So in order to have coherent thoughts you need a coherent mind, achieved either through training or by being some kind of born thought genius. You can learn a lot about a person by reading or listening to them; it’s a glimpse into their mind. You can learn a lot about yourself by writing, by slowing down your thoughts. It’s calming and sexy.


Coding, on the other hand, is much easier. Not that I want to trivialize it, it’s a valuable skill, but it’s something you can master by learning, trying and practicing. Mindfulness practitioners would argue that it is the same with the mind, and they are probably right, but there is quite a difference between creating a thought, compressing an idea into language, and writing a command or an expression. For the latter you need the former. Good thing the former isn’t required to always be rational.

Both of these activities play with a similar notion, although their natures are different. We can also say that inspiration plays a major role, but how does it interact with the actions mentioned? You don’t need to be inspired to code or to write, but most of the time scripts written by an inspired mind hold a finer quality than non-inspired works. But do we need inspiration to write some random code for validating a database transaction? I don’t think so (I’m not counting a paycheck as valid inspiration).

Maybe if we started interacting with the machine the way evolution nurtured us to communicate, things would get better. At least for the code reviewers. It would be like reading articles, gauging the author’s reasoning through actual human words and sentences. Kudos to all the programming languages that try to be as prose-like as possible, but it’s not quite that.

Let’s see how I assume people approach writing or coding. I can only speak for myself, because so far I haven’t been able to read other people’s minds. So the only thing I have is some anecdotal evidence, invented by myself alone.

For example, we can start with the reasons to do it. Writing – to express something; coding – to express something. Writing – to create memory imprints that can be referenced indefinitely; coding – to create memory imprints that can be referenced indefinitely. Writing – to guide, to communicate; coding – to guide, to communicate. So many common traits, yet so much difference, so much fallacy in the logic of comparing random uncategorized attributes. Semantics are hard.

Then we can reason about why we do it. Writing a program is, most of the time, the solution to a problem. The essence of code is sourced from the need to solve a problem. Making something simpler is part of the solution; it dissolves the problem bit by bit. Writing an article is sometimes the solution to a problem, but quite often it is a problem creator too. Anyhow, there are limitless answers to why we do it.

It’s interesting to see how doing things like writing or coding can actually have the opposite effect on the mind. Doing them will actually train your mind, like keeping a diary or keeping logs of your day’s work. It is amazing to notice that inspiration and motivation sometimes work in the opposite direction. You can’t do a thing because you lack motivation. Then you start doing the thing with the largest friction there is and suddenly – boom – you are getting motivated to continue.

I’ll stop myself now, mainly because I’m losing the coherency of my thoughts. I may return to this sometime, but since I probably won’t, I’ll just break here. Writing is hard, but rewarding.

Don’t think that I will say goodbye without leaving some inspiration.

Ruby, Rails, fixtures and fails

After a long time working on top of Java, its beautiful JVM and its robust ecosystem, my professional path led me to another powerful web development system – Ruby and Rails. So how did I feel during the ‘transformation’, or did I metamorphose into a vermin?

From a programmer’s point of view, very comfortable. Ruby was/is easy to learn and grasp, especially if you have worked with Groovy in the last couple of years. On the other hand, the Rails paradigms are very close and similar to those of Grails (it’s MVC, and Grails was inspired by Rails, so), and most of my web development is based on it. However, changing the language and the framework turned out to be more challenging from another aspect. I’m talking about the tools, helpers, libraries/gems and what the community has built so far in the field of helping and improving the development process. Both Groovy & Grails and Ruby & Rails are open source, so you know what I mean.

As I dived more deeply into the problems presented, I found out about a feature that makes Rails a great tool for TDD (Test Driven Development) – fixtures. Basically, a set of YAML files that contain mock data that can be loaded during the testing process. That means tables, rows and associations represented in YAML format. Pretty cool.

But the transformation from YAML data into data stored under RDBMS principles isn’t so simple. One of its issues is foreign key references. The problem that occurred for me with fixtures is the loading order of the files and their mapping into the database. Let’s assume that we have three tables: items, users, and user_items, where the latter contains references to both the users and items tables. With fixtures we will have three different files, users.yml, items.yml, and user_items.yml, that contain the mock data (sketched below). When the test process starts, the Active Record fixture module loads these files into memory and executes the queries responsible for inserting the data into the database. And everything is cool, no problem, but what happens if user_items.yml is loaded into the DB first? Well, it will fail. Why? Because of the database’s referential constraint system, we will face the problem of non-existent foreign key values for the user and item references.
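
To make the scenario concrete, here is a rough sketch of what the three fixture files could look like (column and label names are invented for illustration; fixtures reference each other by label):

# users.yml
david:
  name: David

# items.yml
widget:
  title: Widget

# user_items.yml – references rows from the other two files by label
davids_widget:
  user: david
  item: widget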

Well, the Rails developers weren’t stupid and they thought of this. If you dive into the fixtures implementation, you’ll find that Rails invokes a method called disable_referential_integrity. That means that Rails will try to remove the constraints from the test database and just insert the data. But on most RDBMS systems the database user needs superuser privileges to execute those commands.
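
On PostgreSQL, for example, what that method does boils down to something like the following (a rough sketch; the table name is illustrative):

-- run for each table before the fixtures are inserted
ALTER TABLE user_items DISABLE TRIGGER ALL;
-- ... fixture rows get inserted here ...
ALTER TABLE user_items ENABLE TRIGGER ALL;

Disabling all triggers on a table includes the constraint triggers, and that is exactly the kind of statement most setups reserve for superusers.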

Since I stumbled upon this problem and it showed up both locally and on the CI system, I needed to find a ‘workaround’ solution. So I started thinking: if we control the loading order of the YAML files, then we control the insertion flow and, as a consequence, bypass the logical problem of referential integrity (the default loading order is roughly alphabetical, but I’m not sure). Then I started googling, reading blogs, scanning through Stack Overflow and all of those monkey attempts at solving your problem. And luckily then – eureka! I found the solution: override the fixtures method for loading YAML files and control the order of deleting the data (since that is influenced by the referential constraints too).

The solution is simple and can be found in this gist; a sketch of it follows. Extending the ActiveRecord::FixtureSet class with the purpose of overriding the method create_fixtures (but not totally reimplementing it, thanks to the help of Ruby aliases), which is responsible for the obvious – creating the fixtures – nails it. We implement this code sample in a file that is required by the tests. With UserItem.delete_all we make sure that all user items are deleted before the User and Item tables are cleared. The variable fs_names holds the names of the items/files that will be loaded and gives priority to users and items before any other. That means they will be processed and loaded before the user_items YAML file, and there won’t be any referential integrity issues.
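
In case the gist ever goes missing, here is a minimal sketch of the idea as described above (model and fixture names are assumed; treat it as a starting point rather than a drop-in):

require 'active_record/fixtures'

class ActiveRecord::FixtureSet
  class << self
    # keep a handle on the original implementation
    alias_method :original_create_fixtures, :create_fixtures

    def create_fixtures(fixtures_directory, fixture_set_names, *args)
      # delete children first, so the parent tables can be emptied safely
      UserItem.delete_all

      # load users and items before anything that references them
      fs_names = fixture_set_names.map(&:to_s)
      fs_names = %w[users items] + (fs_names - %w[users items])

      original_create_fixtures(fixtures_directory, fs_names, *args)
    end
  end
end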

Until I came to this, which is just a modified version of a solution proposed on the Stack Overflow network, I read and peeked into a lot of online resources. With this in mind, I hope someone with a similar problem will find this blog post first and save his/her time for something more productive than scraping half of the internet for a solution.

Cheers, I.



Deciphering an API documentation

JavaScript on the Web has a lot of APIs to work with. Some of them are fully supported; some are still drafts. One of the things I worked with lately is the FileSystemDirectoryReader interface of the File and Directory Entries API. It defines only one method, called readEntries, which returns an array containing some number of the directory’s entries. This is a draft API, so it is not supposed to be fully browser compatible. However, I’ve tested it on almost all the latest versions of the browsers and it works fine in each one except Safari (I think ;)).

This post will focus on one example which shows that sometimes reading API documentation can be a little tricky. The example in the documentation shows a common use of this API, where the FileSystemEntry items that we read from are passed with a (drag and) drop event in the browser. A FileSystemEntry can be either a file or a directory. What we want to do is build a file system tree of the dropped item. If the dropped item is a directory, then the item is actually a FileSystemDirectoryEntry object that defines the createReader method, which creates the FileSystemDirectoryReader object on which we will call the readEntries method.

The demo example can be tested in this fiddle. What I want you to do is drop a directory that contains more than 100 files. If you do that, you will notice that the readEntries method returns only the first 100 queued files. That is the main reason for writing this post. The description of the successCallback argument of the readEntries method is a little bit confusing. It says: “A function which is called when the directory’s contents have been retrieved. The function receives a single input parameter: an array of file system entry objects, each based on FileSystemEntry. Generally, they are either FileSystemFileEntry objects, which represent standard files, or FileSystemDirectoryEntry objects, which represent directories. If there are no files left, or you’ve already called readEntries() on this FileSystemDirectoryReader, the array is empty.”

In their example we can see the scanFiles method, which reads the items and creates HTML elements:

function scanFiles(item, container) {
        var elem = document.createElement("li");
        elem.innerHTML = item.name;
        container.appendChild(elem);

        if (item.isDirectory) {
            var directoryReader = item.createReader();
            var directoryContainer = document.createElement("ol");
            container.appendChild(directoryContainer);

            directoryReader.readEntries(function (entries) {
                entries.forEach(function (entry) {
                    scanFiles(entry, directoryContainer);
                });
            });
        }
    }

It seems that the successCallback function receives the entries partially, in batches of 0 to at most 100 items. If we use this function, we will never iterate over more than 100 items in a given directory. What we need to do is decipher this part: “If there are no files left, or you’ve already called readEntries() on this FileSystemDirectoryReader, the array is empty.”. Translated into JavaScript code, that is:

function scanFiles(item, container) {
        var elem = document.createElement("li");
        elem.innerHTML = item.name;
        container.appendChild(elem);

        if (item.isDirectory) {
            var directoryReader = item.createReader();
            var directoryContainer = document.createElement("ol");
            container.appendChild(directoryContainer);

            var fnReadEntries = (function () {
                return function (entries) {
                    entries.forEach(function (entry) {
                        scanFiles(entry, directoryContainer);
                    });
                    if (entries.length > 0) {
                        directoryReader.readEntries(fnReadEntries);
                    }
                };
            })();

            directoryReader.readEntries(fnReadEntries);
        }
    }

The change we need to apply is to check, after iterating over the entries, whether the length of the entries array is bigger than 0. If that’s the case, we should call the readEntries method again. If the entries size is zero, the iteration is finished – all the file system items have been iterated.

This fiddle has the improved version of the scanFiles method that will list all the files in the directory (more than 100), and won’t trick you.

Now we can finally conclude what the part “returns an array containing some number of the directory’s entries” meant. 🙂

Cheers

A Groovy parser for CSV files

Parsing CSV these days is pretty straightforward and not a big deal, especially when we have the handy libraries from Apache Commons (I’m talking about the Java world). In this post I will give you an example of how to use Apache Commons CSV with the magic of Groovy and its closures, so it can look and feel a little more fun, because parsing in general is a job for sad people (not kidding).

We’ll make ourselves a simple Groovy class that will hold a reference to the CSVParser file, a reference to the headers, and the current record/line of the file that we will iterate with the closure delegate set to the instance of this CSVParseUtils class.

Something like this:

import java.nio.file.Paths

import org.apache.commons.csv.CSVFormat
import org.apache.commons.csv.CSVParser

class CSVParseUtils {

    CSVParser csvFile
    def record
    def headers
    char delimiter = ','    // default field separator
    int maxLines = 1000     // default cap used by eachLine

    CSVParseUtils(String fileLocation) {
        def reader = Paths.get(fileLocation).newReader()
        CSVFormat format = CSVFormat.DEFAULT.withHeader().withDelimiter(delimiter)
        csvFile = new CSVParser(reader, format)
        def header = csvFile.headerMap.keySet().first()
        headers = header.split(delimiter as String)
    }

As we can see, it’s a constructor that takes the location of the CSV file that we want to parse, creates a default parsing format and builds a new CSVParser that holds the CSV data.

Parsing is easy, but it’s better when we can transform the data on the fly as we loop over it. For that reason we will define a method called eachLine that takes a params Map and a Closure which has access to the record/line instance and does something with it.

/**
 * List each line of the csv and execute closure
 * @param params
 * @param closure
 */
def eachLine(Map params = [:], Closure closure) {
    def max = params.max ?: maxLines
    int linesRead = 0
    def rowIterator = csvFile.iterator()
    closure.setDelegate(this)

    while (rowIterator.hasNext() && linesRead++ < max) {
        record = rowIterator.next()
        closure.call(record)
    }
}

It’s nothing special, only a simple loop that goes through the iterator and calls the closure with the record for that line as the closure argument.

How to use it?

def parser = new CSVParseUtils(fileLocation)
def result = [:]
// first 2 lines without header
parser.eachLine([max: 2]) { CSVRecord record ->
    result.put(record.recordNumber, record.values.size() > 4 ?
             record.values[0..4] : record.values[0..-1])
}

Imagine that we need only the first 2 lines and the first 5 columns, or something like that.

As you can see, this closure loop is not especially tied to CSV; it’s just a clean way to iterate through any textual file line by line and do something with it. As a matter of fact, you can use BufferedReader, which has an eachLine method too.

The source code for this whole example can be found on GitHub.

Thanks for reading.

Grails and SAPUI5 are friends

Hello reader,

instead of the planned walk through the city park and drinking some beer(s), Mother Nature’s swinging moods changed my plans and, in place of the shiny sun, gave me hard rain and a sour mouth. In a situation like that, alone and bored, I decided to bore you too and share this short text about two good friends called Grails and SAPUI5 (respect to the OpenUI5 project too). 🙂

I’ve been working hard with the Grails framework these past couple of years, and different situations have led me to different scenarios. Lately I found myself in a situation that required bringing the powerful SAP services closer to web/mobile clients. And what is better than using the open-sourced JavaScript MVC framework made by SAP, called SAPUI5 (or OpenUI5, if you prefer the open source project’s name), in conjunction with the versatile Grails Framework?

If you’re familiar with Grails, then you certainly know that the latest 3.x versions of Grails have great support for the established and pretty famous frameworks/libraries AngularJS and ReactJS, in the form of Grails app profiles and plugins. But there is no “official” support for bridging SAPUI5 and Grails, and that is the main motive for writing this blog post and sharing it with you.

A SAPUI5 app is a single-page application where all the magic is done with JS, so what we need is a single HTML file, or in this case a single GSP file. We use that file to define the paths to the SAPUI5 runtime (or SDK) resources and to initialize the main SAPUI5 application via a short piece of JavaScript. SAPUI5 is at its best when used with OData services, and that is where this software shines; however, it also has great support for working with JSON and provides us with a swift JSONModel that we can use to fill up the application data. And because we have JSON, we must have RESTful Grails controllers that will provide us with well-defined JSON.
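
If you haven’t seen a UI5 bootstrap before, here is a minimal sketch of such a GSP/HTML page (the CDN path, theme and the /api/books endpoint are my assumptions for illustration, not part of the demo project):

<!DOCTYPE html>
<html>
<head>
  <!-- Load the OpenUI5 runtime from the public CDN -->
  <script id="sap-ui-bootstrap"
          src="https://openui5.hana.ondemand.com/resources/sap-ui-core.js"
          data-sap-ui-libs="sap.m"
          data-sap-ui-theme="sap_bluecrystal"></script>
  <script>
    sap.ui.getCore().attachInit(function () {
      // A JSONModel fed straight from a RESTful Grails controller
      var oModel = new sap.ui.model.json.JSONModel("/api/books");
      // Bind the returned JSON array to a simple mobile-friendly list
      var oList = new sap.m.List({
        items: {
          path: "/",
          template: new sap.m.StandardListItem({ title: "{title}" })
        }
      });
      oList.setModel(oModel);
      oList.placeAt("content");
    });
  </script>
</head>
<body class="sapUiBody">
  <div id="content"></div>
</body>
</html>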

So the situation is pretty simple: Grails connects us with the backend via web services (or whatever else), or it provides the data on its own via GORM or something else. Then Grails transforms the data into JSON format, which is a sweet cake for SAPUI5 to consume and make look great both in web browsers and on mobile clients (smartphones, tablets etc.).

Well, this won’t be worth a penny without a working example, right? For that reason I’ve published a little demo of Grails and SAPUI5 playing together, which you can check out on GitHub. In short, we have the Spring Security Core plugin for authentication and authorization, the JSON Views plugin for making the JSON even easier, an example of how to make it work via REST-based HTTP calls if your client is a native app, and of course the SAPUI5 application itself.

Here’s the link to the repo.

Thanks for reading,

cheers.


Passing by ‘reference’ in Java

One of the first things that every Java programmer learns (or should learn) is that method parameters in Java are passed by value. That is the only truth, and there is no so-called ‘reference’ passing in Java. Every method call with parameters means that their values are copied into some chunk of memory, and the copies are then passed to the called function to be used.

What is more important, though, is the type of the parameter being passed. Generally there are two different kinds of data types: primitive (int, char, double etc.) and complex, aka objects (Object, array etc.). The thing that matters is what their ‘value’ is when they are used in parameter passing.

When we pass a parameter of a primitive type, we pass its actual value. So if we pass an integer with a value of 4, then the function will receive an integer with a value of 4 as its parameter value. However, if we pass a parameter of a complex type, let’s say some object of class Company, then the function will actually receive a pointer to the real object’s location in memory. Or, in Java terms, it will receive a copy of the value of the reference (address) to the Java object that we want to pass and use.

If in C++ we have Company *c; to get the pointer, then in Java we have Company c;. It’s pretty much the same; the difference is in how things are designed and implemented under the cover.

If we understand this, then we realize that even though there are no out parameters in Java when defining and implementing methods, we can still use this property of references to program that behaviour ourselves.

To get things clear and to imagine the picture, we should actually see the picture – I mean the code.

For example, we can use an array of size one as our data holder. Passing and mutating data this way will change the real value that we want to be changed. An example, try it:


package com.groggystuff;

/**
 * Demonstrates Java's pass-by-value semantics for primitives
 * and for references.
 *
 * @author Igor
 */
public class JavaArguments {

    /**
     * Increments a local copy; the caller's variable stays untouched.
     *
     * @param argument a primitive, passed by value
     */
    public static void mutate(int argument) {
        argument++;
    }

    /**
     * Increments the first element of the array the caller sees too,
     * because the copied value is the reference to the same array.
     *
     * @param argument an array reference whose value is copied, but it
     *                 still points to the same object
     */
    public static void mutate(int[] argument) {
        argument[0]++;
    }

    /**
     * @param args the command line arguments
     */
    public static void main(String[] args) {
        // example: pass by value
        int i = 4;
        System.out.println("Value of 'i' before mutation: " + i); // prints 4
        mutate(i);
        System.out.println("Value of 'i' after mutation: " + i);  // prints 4

        // example: pass by value too, but the value is a reference
        int[] j = new int[1];
        j[0] = 4;
        System.out.println("Value of 'j' before mutation: " + j[0]); // prints 4
        mutate(j);
        System.out.println("Value of 'j' after mutation: " + j[0]);  // prints 5
    }

}

Thanks for reading,

I hope this post will be helpful to you in solving the coding mysteries in life, or something similar.

Please comment if you feel that your comment is needed. Or comment at will, just to say hi, for example.

Custom Authentication Success Handler with Grails and Spring Security

It’s Sunday, and instead of devoting this day to our Lord I will dedicate it to the great Machine and its coding brethren. Jokes aside, this is a quick showcase of how to set up a custom Authentication Success Handler if you are working with the Grails Framework + Spring Security Core plugin.

Well, first: why would you need to alter the ‘normal’ behaviour of the handler?

The answer: let’s say you want to change the targetUrl for a specific authenticated user. Is that not enough? 🙂

HOW TO DO IT

With Spring and Java, what you need to do is implement the AuthenticationSuccessHandler interface. It has only one method to implement:

void onAuthenticationSuccess(HttpServletRequest var1, HttpServletResponse var2, Authentication var3);

With Grails and the Spring Security Core plugin we follow the same path; just the ritual is a little bit different.

The Spring Security plugin uses an AjaxAwareAuthenticationSuccessHandler bean that extends SavedRequestAwareAuthenticationSuccessHandler, and if you follow the hierarchy tree you will notice that at some point the needed interface is implemented by the upper classes. So what we need to do is just extend AjaxAwareAuthenticationSuccessHandler and define the bean in resources.groovy (or .xml).

package com.wordpress.groggystuff.grails

import grails.plugin.springsecurity.web.authentication.AjaxAwareAuthenticationSuccessHandler
import org.springframework.security.core.Authentication

import javax.servlet.ServletException
import javax.servlet.http.HttpServletRequest
import javax.servlet.http.HttpServletResponse
import javax.servlet.http.HttpSession

class GroggySuccessHandler extends AjaxAwareAuthenticationSuccessHandler {

    boolean userIsBadPerson = false

    @Override
    protected String determineTargetUrl(HttpServletRequest request, HttpServletResponse response) {

        if (userIsBadPerson) {
            logger.info("This user is very nasty. Send him to /dev/null to rot.")
            return "/dev/null"
        }
        else {
            return super.determineTargetUrl(request, response)
        }
    }

    @Override
    public void onAuthenticationSuccess(final HttpServletRequest request, final HttpServletResponse response,
                                        final Authentication authentication) throws ServletException, IOException {
        try {
            checkIfTheUserIsBadPerson(request.getSession(),authentication)
            handle(request,response,authentication)
            super.clearAuthenticationAttributes(request)
        }
        finally {
            // always remove the saved request
            requestCache.removeRequest(request, response)
        }
    }

    protected void handle(HttpServletRequest request, HttpServletResponse response, Authentication authentication)
            throws IOException, ServletException {
        String targetUrl = determineTargetUrl(request, response)

        if (response.isCommitted()) {
            logger.debug("Response has already been committed. Unable to redirect to " + targetUrl)
            return
        }

        redirectStrategy.sendRedirect(request, response, targetUrl)
    }


    private void checkIfTheUserIsBadPerson(HttpSession session, Authentication authentication){

        // do the groggy check to find if the user is a bad person
        // presume that the user is always a bad person
        userIsBadPerson = true
    }
}

When the user authenticates successfully, the onAuthenticationSuccess method is called. With this code, the determineTargetUrl method will always be invoked when the user logs in, and from there we can easily change the targetUrl that the handle method redirects to. I wrote a logical check that decides when to redirect and how to build my targetUrl in determineTargetUrl. Don’t forget to define the bean in resources.groovy. The bean id must be the same as in the plugin, otherwise this class will just be ignored.

beans = {
    // other beans
    authenticationSuccessHandler(GroggySuccessHandler) {
        /* Reusing the security configuration */
        def conf = SpringSecurityUtils.securityConfig
        /* Configuring the bean */
        requestCache = ref('requestCache')
        redirectStrategy = ref('redirectStrategy')
        defaultTargetUrl = conf.successHandler.defaultTargetUrl
        alwaysUseDefaultTargetUrl = conf.successHandler.alwaysUseDefault
        targetUrlParameter = conf.successHandler.targetUrlParameter
        ajaxSuccessUrl = conf.successHandler.ajaxSuccessUrl
        useReferer = conf.successHandler.useReferer
    }
}

And that’s it, my lads and gals (tested with Grails 2.5.0 and spring-security-core:2.0-RC4).

And a song as always.

Dumping MongoDB database

The last couple of months I have been wandering through the world of MongoDB, both as a developer and as a sysadmin. I won’t say how happy I am with it or how much it disappointed me; I will just note that you don’t know how deeply or seriously you have dived into a software product until the moment you feel the need for a backup. And that day came, and I did something about it.

I’m not so much into installing backup software that will just do things for me; I always start with the idea that I can do it with my own simple implementation that fills my needs for saving data or doing something else. Mongo has become a huge software/database monster, and there are a lot of different approaches to this question. Of course, the right way for you depends on several factors. Some of them are: the infrastructure of the Mongo deployment, the importance of the data, the quantity of the data, the performance factor etc.

For my needs, which are associated with a single instance of a Mongo database without replication or sharding, I realized that the “classical” dumping method would fulfil them.
One of the things I like about Mongo is its nicely done documentation. A lot of information about backing up MongoDB can be found on its official site:

http://docs.mongodb.org/manual/core/backups/

The documentation is like a guide showing you the different ways of backing up. Generally there are 3 different approaches: doing file system snapshots, using the mongodump command, and using the MongoDB Management Service.

For my dumping approach I wrote a simple and primitive bash shell script that you can use for local backups or as a push towards the idea of how to back up Mongo data. It is oriented around dumping databases. Mongo data can be dumped as a whole instance, a whole collection, a partial collection, or as a database. Here is the script; its product is BSON and metadata JSON files in the designated directory.

#!/bin/bash

#
# GNRP LICENSE: Script licensed by GOD
#
# @author: Igor Ivanovski <igor at genrepsoft.com>
#
# March, 2015
#

#
# Filesystem directory variables
#

# Now in format: YYYY-MM
DATEYYMM=`date +"%Y-%m"`

# Now in format: DD
DATEDD=`date +%d`

# Backup directory 
BACKUPDIR="/opt/backup/mongodumps"

# Daily backup directory
BACKUPDIRDAILY="$BACKUPDIR/$DATEYYMM/$DATEDD/"

#
# List of databases to backup
#
DBs="
admin
someDb
";

#
### Mongo Server Setup ###
#

# Don't forget to add adequate roles if you are using authentication
#use someDb db.createUser({user:"backup",pwd:"pwd",roles:["readWrite"]})
#use admin db.createUser({user:"backup",pwd:"pwd",roles:["backup"]})

# Mongo backup username  
MUSER="backup"

# Mongo backup password
MPASS="pwd"

# Mongo HOST  name
MHOST="localhost"

# Mongo PORT number 
MPORT="27017"

# Mongo dump binary 

# Check if mongodump is installed
if ! which mongodump > /dev/null 2>&1; then
    echo "No mongodump found. Exiting"
    exit 1
fi
MONGODUMP=`which mongodump`

# Starting to dump the databases one by one
if [ "X$DBs" != "X" ]; then
    for db in $DBs
    do
        echo "Backing up database $db"
        $MONGODUMP --host $MHOST --port $MPORT --username $MUSER --password $MPASS --out $BACKUPDIRDAILY --db $db
    done
    echo "All listed mongo databases dumped. Bye"
else
    echo "No databases listed to dump. Bye"
fi
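
Restoring is the mirror operation; here is a minimal mongorestore sketch that follows the same assumptions (host, credentials and paths) as the script above:

# restore one day's dump of someDb back into the server
mongorestore --host localhost --port 27017 \
    --username backup --password pwd \
    --db someDb /opt/backup/mongodumps/2015-03/15/someDb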

Thank you for reading me, here is a bonus from me for swinging 😉

Deploying WordPress over Nginx and PHP-FPM

Welcome, random web traveler. As the title suggests, this post will deal with plain, production-ready examples of an Nginx configuration (plus PHP-FPM) for a WordPress site.

Before we move to the real thing, note that these examples were tested on both Debian 7 and CentOS 7. Since I don’t want to dive into setting up these servers for WordPress, I’m just giving you refined nginx configs that may prove useful. However, the steps for building up WordPress on Linux are pretty simple:

– Installing Nginx from a package repository or compiling it from scratch;

– Installing php5, php-mysql, php-fpm and other php libraries if needed (like php-gd);

– Installing MySQL or MariaDB;

– And of course setting up PHP, nginx and MySQL/MariaDB.

OK, let’s start with the nginx server configuration, found in /etc/nginx/nginx.conf:

user www-data;
worker_processes 4;
pid /var/run/nginx.pid;

events {
 worker_connections 768;
 # multi_accept on;
}

http {

 ##
 # Basic Settings
 ##

 sendfile on;
 tcp_nopush on;
 tcp_nodelay on;
 keepalive_timeout 65;
 types_hash_max_size 2048;
 server_tokens off;

 client_max_body_size 100m;

 client_header_buffer_size 1k;
 large_client_header_buffers 8 8k;

 # server_names_hash_bucket_size 64;
 # server_name_in_redirect off;

 include /etc/nginx/mime.types;
 default_type application/octet-stream;

 ##
 # Logging Settings
 ##

 access_log /var/log/nginx/access.log;
 error_log /var/log/nginx/error.log;

 ##
 # SSL settings
 ##
 ssl_session_cache shared:SSL:10m;
 ssl_session_timeout 10m;

 ##
 # Gzip Settings
 ##

 gzip on;
 gzip_disable "msie6";

 # gzip_vary on;
 # gzip_proxied any;
 # gzip_comp_level 6;
 # gzip_buffers 16 8k;
 # gzip_http_version 1.1;
 # gzip_types text/plain text/css application/json application/x-javascript text/xml application/xml application/xml+rss text/javascript;

 ##
 # nginx-naxsi config
 ##
 # Uncomment it if you installed nginx-naxsi
 ##

 #include /etc/nginx/naxsi_core.rules;

 ##
 # nginx-passenger config
 ##
 # Uncomment it if you installed nginx-passenger
 ##

 #passenger_root /usr;
 #passenger_ruby /usr/bin/ruby;

 ##
 # Virtual Host Configs
 ##

 include /etc/nginx/conf.d/*.conf;
 include /etc/nginx/sites-enabled/*;
}

Pretty straightforward, right?

Next, the Nginx magic behind WordPress. This example assumes that we want the administration of WordPress to go through SSL (the https protocol). File: example.conf

server {
 ## Your website name goes here.
 server_name example.com www.example.com;
 listen 80;
 ## Your only path reference.
 root /opt/wordpress/;
 ## This should be in your http block and if it is, it's not needed here.
 index index.php;
 # port_in_redirect on;

 access_log /var/log/nginx/example_log;
 error_log /var/log/nginx/example_err warn;

 # rewrite all 403 to 404
 error_page 403 = 404;

 location = /favicon.ico {
 log_not_found off;
 access_log off;
 }

 location = /robots.txt {
 allow all;
 log_not_found off;
 access_log off;
 }

 # deny all access to .dot files
 location ~ /\. { access_log off; log_not_found off; deny all; }

 # deny access to files starting with a $, these are usually temp files
 location ~ ~$ { access_log off; log_not_found off; deny all; }

 location / {
 # This is cool because no php is touched for static content.
 # include the "?$args" part so non-default permalinks don't break when using query strings
 try_files $uri $uri/ /index.php?$args;
 }

 location ~ /wp-admin/admin-ajax\.php {
 try_files $uri =404;

 # With php5-fpm
 fastcgi_intercept_errors on;
 fastcgi_pass 127.0.0.1:9000;

 fastcgi_index index.php;
 fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
 include fastcgi_params;

 }

 # Request to wp-login to go through HTTPS protocol
 location ~ /(wp-admin/|wp-login\.php) {
 return 301 https://$host$request_uri;
 #rewrite /wp-(admin|login) $scheme://$host$request_uri/ permanent;
 }

 location ~ \.php$ {
 try_files $uri =404;
 #NOTE: You should have "cgi.fix_pathinfo = 0;" in php.ini

 # With php5-fpm
 fastcgi_intercept_errors on;
 fastcgi_pass 127.0.0.1:9000;

 fastcgi_index index.php;
 fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
 include fastcgi_params;

 }

 location ~* \.(js|css|png|jpg|jpeg|gif|ico)$ {
 expires max;
 log_not_found off;
 }

 error_page 500 502 503 504 /50x.html;
 location = /50x.html {
 root /usr/share/nginx/www;
 }

}

server {
 listen 443 ssl;
 server_name example.com www.example.com;
 index index.php;

 root /opt/wordpress/;

 # Logs
 access_log /var/log/nginx/example_ssl_access.log;
 error_log /var/log/nginx/example_ssl_error.log info;

 ssl on;
 ssl_certificate /etc/ssl/certs/example.com.crt;
 ssl_certificate_key /etc/ssl/keys/example.com.key;

 ssl_protocols TLSv1 TLSv1.1 TLSv1.2; # SSLv3 dropped, broken by POODLE
 ssl_ciphers ECDH+AESGCM:DH+AESGCM:ECDH+AES256:DH+AES256:ECDH+AES128:DH+AES:ECDH+3DES:DH+3DES:RSA+AES:RSA+3DES:!aNULL:!MD5:!DSS;
 ssl_prefer_server_ciphers on;

 # Process requests to wp-admin/* and wp-login.php
 location ~ /wp-(admin|login|content|includes) {

 location ~ \.php$ {
 try_files $uri =404;
 #fastcgi_split_path_info ^(.+\.php)(/.+)$;

 # With php5-fpm
 fastcgi_intercept_errors on;
 fastcgi_pass 127.0.0.1:9000;

 fastcgi_index index.php;
 fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
 fastcgi_param HTTPS on;
 include fastcgi_params;

 }
 }

 # redirect everyone back to the non-ssl page
 location / { return 301 http://$host$request_uri; }

 location ~ !^(/wp-admin/|wp-login\.php) { return 301 http://$host$request_uri; }

 # rewrite all 403 to 404
 error_page 403 = 404;

 # deny all access to .dot files
 location ~ /\. { access_log off; log_not_found off; deny all; }

 # deny access to files starting with a $, these are usually temp files
 location ~ ~$ { access_log off; log_not_found off; deny all; }

 # keep logs clean by not logging access to favicon.
 location = /favicon.ico { access_log off; log_not_found off; }

 # keep logs clean by not logging access to robots.txt
 location = /robots.txt { access_log off; log_not_found off; }

}

FastCGI params as defined in Nginx (/etc/nginx/fastcgi_params):


fastcgi_param QUERY_STRING $query_string;
fastcgi_param REQUEST_METHOD $request_method;
fastcgi_param CONTENT_TYPE $content_type;
fastcgi_param CONTENT_LENGTH $content_length;

fastcgi_param SCRIPT_FILENAME $request_filename;
fastcgi_param SCRIPT_NAME $fastcgi_script_name;
fastcgi_param REQUEST_URI $request_uri;
fastcgi_param DOCUMENT_URI $document_uri;
fastcgi_param DOCUMENT_ROOT $document_root;
fastcgi_param SERVER_PROTOCOL $server_protocol;

fastcgi_param GATEWAY_INTERFACE CGI/1.1;
fastcgi_param SERVER_SOFTWARE nginx/$nginx_version;

fastcgi_param REMOTE_ADDR $remote_addr;
fastcgi_param REMOTE_PORT $remote_port;
fastcgi_param SERVER_ADDR $server_addr;
fastcgi_param SERVER_PORT $server_port;
fastcgi_param SERVER_NAME $server_name;

fastcgi_param HTTPS $https;

# PHP only, required if PHP was built with --enable-force-cgi-redirect
fastcgi_param REDIRECT_STATUS 200;

You can see that two params (SCRIPT_FILENAME and HTTPS) are overwritten in example.conf.

Blended with Nginx, we use the PHP FastCGI Process Manager, or PHP-FPM. I like to run it as a daemon. For that we can use an init script installed at /etc/init.d/php-fpm. Btw, I borrowed it.


#!/bin/sh
### BEGIN INIT INFO
# Provides: php-fpm php5-fpm
# Required-Start: $remote_fs $network
# Required-Stop: $remote_fs $network
# Default-Start: 2 3 4 5
# Default-Stop: 0 1 6
# Short-Description: starts php5-fpm
# Description: Starts PHP5 FastCGI Process Manager Daemon
### END INIT INFO

# Author: Ondrej Sury <ondrej@debian.org>

PATH=/sbin:/usr/sbin:/bin:/usr/bin
DESC="PHP5 FastCGI Process Manager"
NAME=php5-fpm
DAEMON=/usr/sbin/$NAME
DAEMON_ARGS="--fpm-config /etc/php5/fpm/php-fpm.conf"
PIDFILE=/var/run/php5-fpm.pid
TIMEOUT=30
SCRIPTNAME=/etc/init.d/$NAME

# Exit if the package is not installed
[ -x "$DAEMON" ] || exit 0

# Read configuration variable file if it is present
[ -r /etc/default/$NAME ] && . /etc/default/$NAME

# Load the VERBOSE setting and other rcS variables
. /lib/init/vars.sh

# Define LSB log_* functions.
# Depend on lsb-base (>= 3.0-6) to ensure that this file is present.
. /lib/lsb/init-functions

#
# Function to check the correctness of the config file
#
do_check()
{
    [ "$1" != "no" ] && $DAEMON $DAEMON_ARGS -t 2>&1 | grep -v "\[ERROR\]"
    FPM_ERROR=$($DAEMON $DAEMON_ARGS -t 2>&1 | grep "\[ERROR\]")

    if [ -n "${FPM_ERROR}" ]; then
        echo "Please fix your configuration file..."
        $DAEMON $DAEMON_ARGS -t 2>&1 | grep "\[ERROR\]"
        return 1
    fi
    return 0
}

#
# Function that starts the daemon/service
#
do_start()
{
    # Return
    #  0 if daemon has been started
    #  1 if daemon was already running
    #  2 if daemon could not be started
    start-stop-daemon --start --quiet --pidfile $PIDFILE --exec $DAEMON --test > /dev/null \
        || return 1
    start-stop-daemon --start --quiet --pidfile $PIDFILE --exec $DAEMON -- \
        $DAEMON_ARGS 2>/dev/null \
        || return 2
    # Add code here, if necessary, that waits for the process to be ready
    # to handle requests from services started subsequently which depend
    # on this one. As a last resort, sleep for some time.
}

#
# Function that stops the daemon/service
#
do_stop()
{
    # Return
    #  0 if daemon has been stopped
    #  1 if daemon was already stopped
    #  2 if daemon could not be stopped
    #  other if a failure occurred
    start-stop-daemon --stop --quiet --retry=QUIT/$TIMEOUT/TERM/5/KILL/5 --pidfile $PIDFILE --name $NAME
    RETVAL="$?"
    [ "$RETVAL" = 2 ] && return 2
    # Wait for children to finish too if this is a daemon that forks
    # and if the daemon is only ever run from this initscript.
    # If the above conditions are not satisfied then add some other code
    # that waits for the process to drop all resources that could be
    # needed by services started subsequently. A last resort is to
    # sleep for some time.
    start-stop-daemon --stop --quiet --oknodo --retry=0/30/TERM/5/KILL/5 --exec $DAEMON
    [ "$?" = 2 ] && return 2
    # Many daemons don't delete their pidfiles when they exit.
    rm -f $PIDFILE
    return "$RETVAL"
}

#
# Function that sends a SIGHUP to the daemon/service
#
do_reload() {
    #
    # If the daemon can reload its configuration without
    # restarting (for example, when it is sent a SIGHUP),
    # then implement that here.
    #
    start-stop-daemon --stop --signal USR2 --quiet --pidfile $PIDFILE --name $NAME
    return 0
}

case "$1" in
    start)
        [ "$VERBOSE" != no ] && log_daemon_msg "Starting $DESC" "$NAME"
        do_check $VERBOSE
        case "$?" in
            0)
                do_start
                case "$?" in
                    0|1) [ "$VERBOSE" != no ] && log_end_msg 0 ;;
                    2) [ "$VERBOSE" != no ] && log_end_msg 1 ;;
                esac
                ;;
            1) [ "$VERBOSE" != no ] && log_end_msg 1 ;;
        esac
        ;;
    stop)
        [ "$VERBOSE" != no ] && log_daemon_msg "Stopping $DESC" "$NAME"
        do_stop
        case "$?" in
            0|1) [ "$VERBOSE" != no ] && log_end_msg 0 ;;
            2) [ "$VERBOSE" != no ] && log_end_msg 1 ;;
        esac
        ;;
    status)
        status_of_proc "$DAEMON" "$NAME" && exit 0 || exit $?
        ;;
    check)
        do_check yes
        ;;
    reload|force-reload)
        log_daemon_msg "Reloading $DESC" "$NAME"
        do_reload
        log_end_msg $?
        ;;
    reopen-logs)
        log_daemon_msg "Reopening $DESC logs" $NAME
        if start-stop-daemon --stop --signal USR1 --oknodo --quiet \
            --pidfile $PIDFILE --exec $DAEMON
        then
            log_end_msg 0
        else
            log_end_msg 1
        fi
        ;;
    restart)
        log_daemon_msg "Restarting $DESC" "$NAME"
        do_stop
        case "$?" in
            0|1)
                do_start
                case "$?" in
                    0) log_end_msg 0 ;;
                    1) log_end_msg 1 ;; # Old process is still running
                    *) log_end_msg 1 ;; # Failed to start
                esac
                ;;
            *)
                # Failed to stop
                log_end_msg 1
                ;;
        esac
        ;;
    *)
        echo "Usage: $SCRIPTNAME {start|stop|status|restart|reload|force-reload}" >&2
        exit 1
        ;;
esac

:

This script assumes that we have one main php-fpm.conf file where we define the pid file, the log file, the pools we will use, etc. Every change we make to the PHP configuration, like upload_max_filesize, can be applied by reloading this daemon.
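
For example, after changing the configuration (script name as installed above):

# validate the config, then apply it without dropping requests
/etc/init.d/php-fpm check
/etc/init.d/php-fpm reload   # do_reload sends USR2 to the master process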

That’s it, my dearest. Don’t forget that WordPress requires additional settings for working with SSL too. Any questions or suggestions, please write.

Hope I saved a little of your precious time 😉

Resolving HTTP Error 416 Requested Range not satisfiable (IIS 7.5 example)

The chaos called HTTP often surprises us with some new sweet little error that puts us in a not-so-satisfiable position. For example, the anonymous HTTP error 416.

This error is the product of a mismatch between the client (for example, a browser) and the server regarding the Accept-Ranges header. In particular, this error emerges when the client requests a bigger resource, such as a PDF file or an image, and awaits the server’s response. The initial response could be fine, but the streaming of the resource could fail. Why?

Accept-Ranges: bytes means that the request can be served partially. After the initial Content-Length parameter, the server should provide a Content-Range header (byte range) in the response to every ‘partial’ request, to keep the consistency of the stream. If this doesn’t work right, say hello to error 416. To be noted: this is highly influenced by how the client is implemented, not only the server.

To check whether the server supports ranges, we can do a little test: send a HEAD request with a Range header instead of a plain GET to the wanted resource. If a 206 response is returned, ranges are supported. If the response code is 200, the server side should change the Accept-Ranges header so as not to fool clients.

Curl example: $ curl -I -H "Range: bytes=0-99" http://address/path/resource
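
The definitive check is a GET with an explicit byte range (the resource path is illustrative); a server with working range support answers 206 Partial Content and a matching Content-Range header:

$ curl -i -H "Range: bytes=0-99" http://address/path/resource
# HTTP/1.1 206 Partial Content
# Content-Range: bytes 0-99/12345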

Nevertheless, let’s just solve the problem. I usually notice this error in the relation IIS -> Chrome or Apache -> Chrome, but it occurs in Firefox too.

Setting Accept-Ranges: none in IIS: Internet Information Services (IIS) Manager -> Server Node -> Sites -> Problematic Site -> HTTP Response Headers (in the IIS section) -> Add (Action) -> Name: Accept-Ranges; Value: none.

Setting Accept-Ranges: none in Apache2: enable the mod_headers module and add “Header set Accept-Ranges none” to the host’s configuration.

Restart for the changes to take effect.

Have a nice day : )