Ruby, Rails, fixtures and fails

After a long time working on top of Java, its beautiful JVM and robust ecosystem, my professional path led me to another powerful web development stack – Ruby and Rails. So how did I feel during the ‘transformation’, or did I metamorphose into a vermin?

From a programmer’s point of view, very comfortable. Ruby was/is easy to learn and grasp, especially if you have worked with Groovy in the last couple of years. On the other hand, Rails paradigms are very close to those of Grails (it’s MVC, and Grails was inspired by Rails), and most of my web development is based on it. However, changing the language and the framework turned out to be more challenging in another respect. I’m talking about the tools, helpers, libraries/gems and what the community has built so far to support and improve the development process. Both Groovy & Grails and Ruby & Rails are open source, so you know what I mean.

As I dived more deeply into the problems at hand, I found out about a feature that makes Rails a great tool for TDD (Test Driven Development) – fixtures. Basically, a set of YAML files containing mock data that can be loaded during the testing process. That means tables, rows and associations represented in YAML format. Pretty cool.

But the transformation from YAML data into the relational (RDBMS) way of storing data isn’t so simple. One of its issues is foreign key references. The problem I ran into with fixtures is the loading order of the files and their mapping into the database. Let’s assume that we have three tables: items, users and user_items, where the latter contains references to both the users and items tables. With fixtures we will have three different files – users.yml, items.yml and user_items.yml – that contain the mock data. When the test process starts, the Active Record fixture module loads these files into memory and executes the queries that insert the data into the database. And everything is cool, no problem – but what happens if user_items.yml is loaded into the DB first? Well, it will fail. Why? Because of the database’s referential constraint system, we will face the problem of non-existing foreign key values for the user and item references.
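For illustration, the three fixture files could look something like this (the labels and columns are invented for the example, not taken from a real schema):

```yaml
# users.yml
igor:
  name: Igor

# items.yml
book:
  title: The Metamorphosis

# user_items.yml
igor_book:
  user: igor   # resolved to the id of users(:igor) at insert time
  item: book
```

If the rows of user_items.yml hit the database before users.yml and items.yml, the foreign key values they point at simply don’t exist yet.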

Well, the Rails developers weren’t stupid and they thought of this. If you dive into the fixtures implementation, you will find that Rails invokes a method called disable_referential_integrity. That means Rails will try to lift the constraints on the test database and just insert the data. But on most RDBMS systems the database user needs superuser privileges to execute those commands.
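On PostgreSQL, for instance, this boils down to disabling and re-enabling all triggers (foreign key checks included) around the fixture load – roughly something like this sketch, not the actual adapter source:

```sql
ALTER TABLE user_items DISABLE TRIGGER ALL;  -- requires superuser
-- ... insert the fixture rows in whatever order ...
ALTER TABLE user_items ENABLE TRIGGER ALL;
```

And ALTER TABLE … DISABLE TRIGGER ALL is exactly the kind of statement an ordinary database user is not allowed to run.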

Since I stumbled upon this problem both locally and on the CI system, I needed to find a ‘workaround’ solution. I started thinking: if we control the order in which the YAML files are loaded, then we control the insertion flow and, as a consequence, bypass the referential integrity problem (the default loading order seems to be alphabetical, but I’m not sure). Then came the googling, reading blogs, scanning through Stack Overflow and all those monkey attempts to solve the problem. And luckily then – eureka! I found the solution: override the fixtures method that loads the YAML files, and also control the order in which data is deleted (since that is influenced by the referential constraints too).

The solution is simple and can be found in this gist. Extending the ActiveRecord::FixtureSet class to override the method create_fixtures (without totally reimplementing it, thanks to Ruby aliases), which is responsible for the obvious – creating the fixtures – nails it. We put this code in a file that is required by the tests. With UserItem.delete_all we make sure all user items are deleted before the User and Item tables are emptied. The variable fs_names holds the names of the fixture files to be loaded and gives priority to users and items over all others. That means they will be processed and loaded before the user_items YAML file, and there won’t be any referential integrity issue.
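The gist itself isn’t reproduced here, but the alias-and-override trick can be sketched with a toy class (all names below are stand-ins for illustration – the real code reopens Rails’ fixture class instead of FixtureLoader):

```ruby
# A toy sketch of the pattern: keep the original method under an
# alias, then redefine it with an ordering step in front.
class FixtureLoader
  # Pretend this is the framework's original implementation:
  # it loads fixture files in exactly the order it receives them.
  def create_fixtures(fixture_names)
    fixture_names.each { |name| puts "loading #{name}" }
    fixture_names
  end
end

# Reopen the class, preserve the original under an alias,
# and wrap it with our priority ordering.
class FixtureLoader
  alias_method :original_create_fixtures, :create_fixtures

  def create_fixtures(fixture_names)
    # Load users and items first so user_items never references a missing row
    prioritized, rest = fixture_names.partition { |n| %w[users items].include?(n) }
    original_create_fixtures(prioritized + rest)
  end
end
```

In the real gist the same pattern is applied to the Rails fixture class, combined with a UserItem.delete_all up front so dependent rows are removed before their parents.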

Before arriving at this – which is just a modified version of a solution proposed on Stack Overflow – I read and peeked into a lot of online resources. With this in mind, I hope someone with a similar problem will find this blog post first and save his/her time for something more productive than scraping half of the internet for a solution.

Cheers, I.

Dumping MongoDB database

For the last couple of months I have been wandering through the world of MongoDB, both as a developer and as a sysadmin. I won’t say how happy I am with it or how much it disappointed me; I will just note that you don’t know how deeply or seriously you have dived into a software product until the moment you feel the need for a backup. And that day came, and I did something about it.

I’m not much into installing backup software that just does things for me; I always start with the idea that I can cover my needs for saving data with my own simple implementation. Mongo has become a huge software/database monster and there are a lot of different approaches to this question. Of course, the right way for you depends on several factors, such as the infrastructure of the Mongo deployment, the importance of the data, the quantity of the data, the performance factor, etc.

For my needs, which involve a single instance of the Mongo database without replication or sharding, I realized that the “classical” dumping method would suffice.
One of the things I like about Mongo is its nicely done documentation. A lot of information about backing up MongoDB can be found on its official site:

The documentation is like a guide showing you the different ways of backing up. Generally there are 3 different approaches: doing file system snapshots, using the mongodump command and using the MongoDB Management Service.

For my dumping approach I wrote a simple and primitive bash shell script that you can use for local backups or as a push towards the idea of how to back up Mongo data. It is oriented around dumping databases (Mongo data can be dumped as a whole instance, a whole collection, part of a collection, or as a database). Here is the script; it produces BSON and metadata JSON files in the designated directory.


#!/bin/bash
# GNRP LICENSE: Script licensed by GOD
# @author: Igor Ivanovski <igor at>
# March, 2015

# Filesystem directory variables

# Now in format: YYYY-MM
DATEYYMM=`date +"%Y-%m"`

# Now in format: DD
DATEDD=`date +%d`

# Backup directory (example value – adjust to your filesystem)
BACKUPDIR="/var/backups/mongo"

# Daily backup directory
BACKUPDIRDAILY="$BACKUPDIR/$DATEYYMM/$DATEDD"
mkdir -p "$BACKUPDIRDAILY"

# List of databases to backup, space separated (example values)
DBs="someDb anotherDb"

### Mongo Server Setup ###

# Don't forget to add adequate roles if you are using authentication
#use someDb db.createUser({user:"backup",pwd:"pwd",roles:["readWrite"]})
#use admin db.createUser({user:"backup",pwd:"pwd",roles:["backup"]})

# Mongo backup username
MUSER="backup"

# Mongo backup password
MPASS="pwd"

# Mongo HOST name
MHOST="localhost"

# Mongo PORT number
MPORT="27017"

# Mongo dump binary
# Check if mongodump is installed
if ! which mongodump > /dev/null 2>&1; then
    echo "No mongodump found. Exiting"
    exit 1
fi
MONGODUMP=`which mongodump`

# Starting to dump databases one by one
if [ "X$DBs" != "X" ]; then
    for db in $DBs; do
        echo "Backing up database $db"
        $MONGODUMP --host $MHOST --port $MPORT --username $MUSER --password $MPASS --out $BACKUPDIRDAILY --db $db
    done
    echo "All listed mongo databases dumped. Bye"
fi

Thank you for reading, and here is a bonus from me for swinging 😉

Cool usage of TimeCategory in Groovy

Groovy, the JVM-based programming language, implements a feature called categories, originally borrowed from Objective-C. A simple explanation of this feature is the ability to add new methods to existing classes without modifying their original code – in a way, injecting new methods through a category class. More information can be found in the official documentation here.

Rather interesting for me was playing with the TimeCategory class while writing a short and easy script for fixing some datetime columns in a database. This class offers a convenient way of manipulating dates and times.

General syntax for categories is the following:

use ( Class ) {
    // Some code
}

Concrete usage of TimeCategory:

use ( TimeCategory ) {
    // application on numbers:
    println 10.hours.ago
    // application on dates
    def someDate = new Date()
    println someDate - 3.months
}

Seems weird? Since when does Integer have months, minutes, hours, etc. methods? Well, it still doesn’t have any of them; those methods are dynamically added while TimeCategory is in use.

If you are interested in how this is possible, I suggest going through the TimeCategory API and source code if possible. This forum post can also be useful for a deeper understanding of the Groovy magic.

And last but not least, an example groovy script for your pleasure.

@Grab(group='mysql', module='mysql-connector-java', version='5.1.27')

import groovy.time.TimeCategory
import java.sql.Timestamp

// Connection parameters below are placeholders – adjust them to your setup
sql = groovy.sql.Sql.newInstance(
        'jdbc:mysql://localhost:3306/someDb', 'user', 'password', 'com.mysql.jdbc.Driver')

def rows = [:]

// Select Data
sql.eachRow("select * from Table_Name") {
    def impDates = new ImportedDates() // A custom class found in the same package/directory as the script
    impDates.dateColumn = it.dateColumn

    use(TimeCategory) {
        impDates.dateColumn = impDates.dateColumn - 1.day // Shift dateColumn one day backwards in time
    }

    rows.put(it.UID, impDates) // Put primary key and ImportedDates object in the map
}

// Update Data
rows.each { row ->
    ImportedDates id = row.value
    // If the value is not null, convert it to Timestamp (we use a datetime column in the db) and execute the update
    def dateColumn = null
    if (id.dateColumn) dateColumn = new Timestamp(id.dateColumn.getTime())

    // Actual update query
    sql.executeUpdate('update Table_Name set dateColumn = ? ' +
            'where UID like ?',
            [dateColumn, row.key.toString()])
}



Setting up MySql server with utf-8 charset(s)

I can’t remember how many times I have installed a MySQL server or client on Linux machines (occasionally on Windows too) and forgotten to change the charset of the client, connection, server, etc.
Coming from a non-Latin culture and also working in such an environment, configuring MySQL servers to work properly with utf8 is always an extra step and a torment.
With that thought in mind, the purpose of this post is to give you a fast solution for setting up utf8 (or any other charset) on your servers or development machines.
I also suggest making a skeleton my.cnf file and copy-pasting it on new instances of MySQL.

Log in to the mysql console and execute the query: mysql> show variables like '%char%';

This will probably give you an output like this:

mysql> show variables like '%char%';
| Variable_name            | Value                      |
| character_set_client     | latin1                     |
| character_set_connection | latin1                     |
| character_set_database   | latin1                     |
| character_set_filesystem | binary                     |
| character_set_results    | latin1                     |
| character_set_server     | latin1                     |
| character_set_system     | utf8                       |
| character_sets_dir       | /usr/share/mysql/charsets/ |
8 rows in set (0.00 sec)


Well, it’s latin1 for the client, connection, currently selected database, results and the whole server. Filesystem and system are fine. Let’s make the changes needed to set up utf-8.

Open up /etc/my.cnf or /etc/mysql/my.cnf (depending on the distribution) with an editor (Vim, nano …). MySQL configuration is organized in blocks, grouped by function or logical part of the whole system, where we put the MySQL parameters appropriate for that part. The beginning of each block is marked with square brackets, like [mysqld].

That’s the basic stuff; the changes needed are the following.

For the character_set_client, in the [client] block we should set default-character-set=utf8:

[client]
default-character-set=utf8

The encoding and collation type of the whole server and all newly created databases are set by the following parameters (be aware of the group [mysqld]):

[mysqld]
character-set-server=utf8
collation-server=utf8_general_ci
init-connect='SET NAMES utf8'      # A string to be executed by the server for each client that connects

A change is also needed in the [mysql] group for the mysql console client:

[mysql]
default-character-set=utf8

Remember, you can always change these parameters per database; this is just the default behaviour of the server. You can also change system variables at runtime, but not everything takes effect immediately. Of course, the system variables have scopes of their own – some are global, some are not.

See the following documentation for diving in:

Those are the changes we make in the default option file. Remember, the existence of this file is recommended, not required, for MySQL to work.

Next we should restart the server and print the char variables.

This is the output:

mysql> show variables like '%char%';
| Variable_name            | Value                      |
| character_set_client     | utf8                       |
| character_set_connection | utf8                       |
| character_set_database   | utf8                       |
| character_set_filesystem | binary                     |
| character_set_results    | utf8                       |
| character_set_server     | utf8                       |
| character_set_system     | utf8                       |
| character_sets_dir       | /usr/share/mysql/charsets/ |
8 rows in set (0.00 sec)

On Windows I use the MySQL Workbench program for managing the server; it’s operable on Linux too.

Have a nice day ! o/