Ruby, Rails, fixtures and fails

After a long time working on top of Java, its beautiful JVM, and its robust ecosystem, my professional path led me to another powerful web development stack – Ruby and Rails. So how did I feel during the ‘transformation’? Did I metamorphose into a vermin?

From a programmer’s point of view, very comfortable. Ruby was/is easy to learn and grasp, especially if you have worked with Groovy in the last couple of years. On the other hand, the Rails paradigms are very close to those of Grails (it’s MVC, and it was inspired by Rails, after all), and most of my web development is based on it. However, changing the language and the framework turned out to be more challenging in another respect. I’m talking about the tools, helpers, libraries/gems, and everything the community has built so far to support and improve the development process. Both Groovy & Grails and Ruby & Rails are open source, so you know what I mean.

As I dived more deeply into the problems at hand, I found out about a feature that makes Rails a great tool for TDD (Test Driven Development) – fixtures. Basically, a set of YAML files that contain mock data that can be loaded during the testing process. That means tables, rows, and associations presented in YAML format. Pretty cool.
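
For illustration, assuming the users and items tables from the example below each have just a name column (the columns are my assumption; referencing associated fixtures by label is standard Rails behaviour), the fixture files might look like this:

# test/fixtures/users.yml
alice:
  name: Alice

# test/fixtures/items.yml
hammer:
  name: Hammer

# test/fixtures/user_items.yml
alice_hammer:
  user: alice
  item: hammer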

But the transformation from YAML data into a relational database isn’t so simple. One of its issues is foreign key references. The problem that occurred for me with fixtures is the loading order of the files and their mapping into the database. Let’s assume that we have three tables: items, users, and user_items, where the latter contains references to both the users and items tables. With fixtures we will have three different files, users.yml, items.yml, and user_items.yml, that contain the mock data. At the start of the test process the Active Record fixture module loads these files into memory and executes the queries responsible for inserting the data into the database. And everything is cool, no problem – but what happens if user_items.yml gets loaded into the DB first? Well, it will fail. Why? Because of the database referential constraint system, we face the problem of non-existing foreign key values for the user and item references.
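
For concreteness, a join table like that might be defined by a migration along these lines (a sketch, assuming Rails 4.2 or later; the table and column names mirror the example above):

class CreateUserItems < ActiveRecord::Migration
  def change
    create_table :user_items do |t|
      # foreign_key: true adds the referential constraints that
      # make out-of-order fixture loading fail
      t.references :user, foreign_key: true
      t.references :item, foreign_key: true
    end
  end
end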

Well, the Rails developers weren’t stupid, and they thought of this. If you dive into the fixtures implementation, you will find that Rails invokes a method called disable_referential_integrity. That means Rails will try to drop the constraints on the test database and just insert the data. But on most RDBMSs the database user needs superuser privileges to execute those commands.

Since I stumbled upon this problem, and it showed up both locally and on the CI system, I needed to find a ‘workaround’ solution. So I started thinking: if we control the order in which the YAML files are loaded, then we control the insertion flow and, as a consequence, bypass the logical problem of referential integrity (the default loading order appears to be alphabetical, but I’m not sure). Then I started googling, reading blogs, scanning through Stack Overflow, and all those monkey attempts at solving your problem. And then, luckily – eureka! I found the solution: override the fixtures method for loading YAML files, and control the order of deleting the data too (since that is influenced by the referential constraints as well).

The solution is simple and can be found in this gist. Extending the ActiveRecord::FixtureSet class in order to override the create_fixtures method (but not totally reimplement it, thanks to the help of Ruby aliases), which is responsible for the obvious – creating the fixtures – nails it. We put this code in a file that is required by the tests. With UserItem.delete_all we make sure all user items are deleted before the users and items tables are cleared. The variable fs_names holds the names of the fixture sets/files that will be loaded and gives priority to users and items over any others. That means they will be processed and loaded before the user_items YAML file, and there won’t be any referential integrity issue.
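
A minimal sketch of the idea (not the gist verbatim – the exact signature of create_fixtures varies between Rails versions, and UserItem comes from the example above):

require 'active_record/fixtures'

class ActiveRecord::FixtureSet
  class << self
    # keep a handle on the original implementation
    alias_method :original_create_fixtures, :create_fixtures

    def create_fixtures(fixtures_directory, fixture_set_names, *args)
      # clear the join table first, so users and items can be cleared
      # without tripping the foreign key constraints
      UserItem.delete_all
      # load users and items before everything else
      fs_names = Array(fixture_set_names).map(&:to_s)
      fs_names = (%w(users items) & fs_names) + (fs_names - %w(users items))
      original_create_fixtures(fixtures_directory, fs_names, *args)
    end
  end
end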

Before I arrived at this – which is just a modified version of a solution proposed on Stack Overflow – I read and peeked into a lot of online resources. With this in mind, I hope someone with a similar problem will find this blog post first and save his/her time for something more productive than scraping half of the internet for a solution.

Cheers, I.



A Groovy parser for CSV files

Parsing CSV these days is pretty straightforward and not a big deal, especially when we have the handy libraries from Apache Commons (I’m talking about the Java world). In this post I will give you an example of how to use Apache Commons CSV with the magic of Groovy and its closures, so it can look and feel a little more fun – because parsing, in general, is a job for sad people (not kidding).

We’ll make ourselves a simple Groovy class that holds a reference to the CSVParser file, plus a reference to the headers and to the current record/line of the file that we will iterate over, with the closure delegate set to the instance of this CSVParseUtils class.

Something like this:

import java.nio.file.Paths

import org.apache.commons.csv.CSVFormat
import org.apache.commons.csv.CSVParser

class CSVParseUtils {

    CSVParser csvFile
    def record
    def headers
    // default values assumed here – they are not part of the original snippet
    char delimiter = ','
    int maxLines = 1000

    CSVParseUtils(String fileLocation) {
        def reader = Paths.get(fileLocation).newReader()
        CSVFormat format = CSVFormat.DEFAULT.withHeader().withDelimiter(delimiter)
        csvFile = new CSVParser(reader, format)
        def header = csvFile.headerMap.keySet().first()
        headers = header.split(delimiter as String)
    }

As we can see, it’s a constructor that takes the location of the CSV file we want to parse, creates a default parsing format, and builds a new CSVParser that holds the CSV data.

As we saw, parsing is easy, but it’s better when we can transform the data on the fly as we loop over it. For that reason we will define a method called eachLine that takes a params Map and a Closure; the closure will have access to the record/line instance and do something with it.

/**
 * List each line of the csv and execute closure
 * @param params
 * @param closure
 */
def eachLine(Map params = [:], Closure closure) {
    def max = params.max ?: maxLines
    int linesRead = 0
    def rowIterator = csvFile.iterator()
    closure.setDelegate(this)

    while (rowIterator.hasNext() && linesRead++ < max) {
        record = rowIterator.next()
        closure.call(record)
    }
}

It’s nothing special, just a simple loop that walks the iterator and calls the closure with the record for that line as the closure argument.

How to use it?

def parser = new CSVParseUtils(fileLocation)
def result = [:]
// first 2 lines, not counting the header
parser.eachLine([max: 2]) { CSVRecord record ->
    // keep at most the first 5 columns of each record
    result.put(record.recordNumber, record.values.size() > 4 ?
            record.values[0..4] : record.values.toList())
}

Imagine we need only the first 2 lines and the first 5 columns, or something like that.

As you can see, this closure loop is not specifically tied to CSV; it’s just a clean way to iterate through any text file line by line and do something with it. As a matter of fact, you can use a BufferedReader, which has an eachLine method too.
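
For example, in plain Groovy, with no CSV parsing involved, just raw lines:

// fileLocation as above; prints each raw line of the file
new File(fileLocation).newReader().eachLine { String line ->
    println line
}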

The source code for this whole example can be found on github.

Thanks for reading.

Grails and SAPUI5 are friends

Hello reader,

instead of the planned walk through the city park and drinking some beer(s), mother nature’s swinging moods changed my plans and, in place of the shiny sun, gave me hard rain and a sour mouth. In a situation like that, alone and bored, I decided to bore you too and share this short text about two good friends called Grails and SAPUI5 (respect to the OpenUI5 project too). 🙂

I’ve been working hard with the Grails framework these past couple of years, and different situations have led me to different scenarios. Lately I found myself in a situation that required bringing the powerful SAP services closer to web/mobile clients. And what is better than using the open-sourced JavaScript MVC framework made by SAP, called SAPUI5 (or OpenUI5, if you prefer the open source project name), in conjunction with the versatile Grails framework?

If you’re familiar with Grails, then you certainly know that the latest 3+ versions of Grails have great support for the already established and pretty famous frameworks/libraries AngularJS and ReactJS, in the form of Grails app profiles and plugins. But there is no “official” support for bridging SAPUI5 and Grails, and that is the main motive for writing this blog post and sharing it with you.

A SAPUI5 app is a single-page application where all the magic is done in JS, so what we need is a single HTML file, or in this case a single GSP file. We use that file to define the paths to the SAPUI5 runtime (or SDK) resources and to initialize the main SAPUI5 application via a short piece of JavaScript. SAPUI5 is at its best when used with OData services – that is where this software shines – but it also has great support for working with JSON and provides us with a swift JSONModel that we can use to fill the application with data. And because we have JSON, we must have RESTful Grails controllers that will provide us with well-defined JSON.
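
As a rough sketch, that bootstrapping GSP might look something like this (the resource path and the model URL are assumptions on my part – the demo repo linked below has the real wiring):

<!DOCTYPE html>
<html>
<head>
    <!-- bootstrap the SAPUI5/OpenUI5 runtime; src points at wherever the resources are served from -->
    <script id="sap-ui-bootstrap"
            src="/ui5/resources/sap-ui-core.js"
            data-sap-ui-libs="sap.m"
            data-sap-ui-compatVersion="edge"></script>
    <script>
        sap.ui.getCore().attachInit(function () {
            // JSONModel fed by a Grails controller rendering JSON (URL assumed)
            var oModel = new sap.ui.model.json.JSONModel("/items.json");
            sap.ui.getCore().setModel(oModel);
            new sap.m.App({
                pages: [new sap.m.Page({ title: "Grails + SAPUI5" })]
            }).placeAt("content");
        });
    </script>
</head>
<body class="sapUiBody" id="content"></body>
</html>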

So the situation is pretty simple: Grails connects us to the backend via web services (or what have you), or it provides the data on its own via GORM or something else. Then Grails transforms the data into JSON format, which is a sweet cake for SAPUI5 to consume and make look great both in web browsers and on any mobile client (smartphones, tablets, etc.).
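
On the Grails side, a minimal restful controller could be as simple as this sketch (the Item domain class is an assumption for illustration):

import grails.rest.RestfulController

// serves well-defined JSON, e.g. a list of items for /item.json
class ItemController extends RestfulController<Item> {
    static responseFormats = ['json']

    ItemController() {
        super(Item)
    }
}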

Well, this wouldn’t be worth a penny without a working example, right? For that cause I’ve published a little demo of Grails and SAPUI5 playing together that you can check out on github. In short, we have the Spring Security Core plugin for authentication and authorization, the JSON Views plugin for making the JSON even easier, and also an example of how to make it work via REST-based HTTP calls if your client is a native app. And, of course, the SAPUI5 application itself.

Here’s the link to the repo.

Thanks for reading,

cheers.


Resolving error 413 (Request Entity Too Large/Not Allowed) in IIS 7.5

Working with IIS lately has brought me a lot of trouble; however, it has also increased my in-depth knowledge of its workings and adaptability.

One lovely situation (among the minor ones) appeared after we transitioned from HTTP to HTTPS. After fixing the minor ones, everything was working smooth and groovy, except that sometimes the upload of files was broken. Then we realized it was not someTIME but someTHING, namely the concrete size of the uploaded file, that was causing the problem. Hah, such a common problem when setting up a web server, you say; said me also, but this time the workaround was a little more pain in the *ss, if you know what I mean.

The response given when uploading was an intelligent block by the web server, resulting in error 413 – Request Entity Too Large. That doesn’t make sense: I’m uploading a file that is 100 KB, but the settings allow files of up to 100 MB…

So, with the help of rogue googling enabled by http://www.startpage.com, I set myself to digging into the problem. One thing was clear: the trigger for this dysfunctionality was certainly HTTPS, since the uploading worked fine over plain, non-secure HTTP. That gave me a rough but brilliant reminder of what HTTPS does: it encrypts, it keeps extra request data, it certainly enlarges the request payload.

Hmm, OK, first let’s check the standard file limits in IIS.

Normal setting for the max upload file size:

Set the request limits in the root web.config of the site (the default is 30 MB). This can also be set via the Internet Information Services Manager (MACHINE->Site->IIS->Request Filtering->Edit Feature Settings):

<!-- 100 MB. Format uses bytes -->
<security>
    <requestFiltering>
        <requestLimits maxAllowedContentLength="102400000" />
    </requestFiltering>
</security>

For ASP.NET there is a more specific configuration:

http://msdn.microsoft.com/en-us/library/e1f13641%28v=vs.85%29.aspx

Example:

<system.web>
    <!-- maxRequestLength is expressed in KB: 102400 KB = 100 MB -->
    <httpRuntime maxRequestLength="102400" executionTimeout="3600" />
</system.web>

For legacy ASP:

<!-- This goes under ASP; can be set with IIS Manager also 😉 -->
<limits maxRequestEntityAllowed="102400000" />

And here comes our solution:

http://www.iis.net/configreference/system.webserver/serverruntime

What we faced here is a buffer-related problem. It’s not about maxRequestEntityAllowed, since its default is Unlimited, but about how IIS handles the request. After some empirical testing we noticed that the problem occurred only with files larger than or equal to 42 KB. And what is the default value of uploadReadAheadSize? 42 KB.

Then I charged myself with changing this property. Since we need to do section overriding by means of <location> to change the default values (this configuration is not possible through IIS Manager), I did the following:

<location path="SiteName" overrideMode="Allow">
    <system.webServer>
        <asp>
            <session />
            <comPlus />
            <cache />
            <limits maxRequestEntityAllowed="102400000" />
        </asp>
        <serverRuntime enabled="true" uploadReadAheadSize="102400" />
    </system.webServer>
</location>

However, for some godforsaken reason, this configuration didn’t change the default values, even with overriding enabled for the serverRuntime section in applicationHost.config. For an easy check there is one good friend – appcmd:

C:\Windows\System32\inetsrv\appcmd.exe list config "SiteName" -section:system.webServer/serverRuntime

And this brings us to the solution. To cut the story short:

* Enable the serverRuntime section

C:\Windows\System32\inetsrv\appcmd.exe set config "SiteName" -section:system.webServer/ServerRuntime /enabled:"True" /commit:apphost

* Set the uploadReadAheadSize to 10MB

C:\Windows\System32\inetsrv\appcmd.exe set config "SiteName" -section:system.webServer/ServerRuntime /uploadReadAheadSize:"10485760" /commit:apphost

Restart if required, and that’s it.

Just as a side note: don’t forget to change the uploadReadAheadSize back to something smaller and more realistic afterwards, since 10 MB is huge for a buffer, and you don’t want to be hit by the nasty bad boyz with their huge payload packets.
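
For example, once things are verified, something in the 1 MB range might be saner (the exact figure is a judgment call, not a recommendation from any documentation):

C:\Windows\System32\inetsrv\appcmd.exe set config "SiteName" -section:system.webServer/ServerRuntime /uploadReadAheadSize:"1048576" /commit:apphost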

Setting Up Context in Apache Tomcat for Serving Static Files

The intro:

So, I’ve heard you want to serve static files from your Tomcat web app in a way that they won’t be deleted on WAR redeploy or Tomcat restart?

You have a solution, and that is mapping a custom Context in your Apache Tomcat server.xml.

The scenario:

You have a site that allows users to upload images that are public, shared, and not under the hood of some security filter. The most intuitive solution is to put them in some directory, e.g. ‘uploads’, but then you realize that the contents of the exploded WAR get rewritten on redeploy, or on Tomcat restart if the WAR sits in the webapps directory (you can change this behaviour).

The solution is simple: save the files to some directory outside of the WAR (something like ‘/usr/share/tomcat/uploads’) and map that directory onto a server context of your Tomcat AS (something like http://lesite:8080/uploads).

With a workaround like this you will see your uploaded cute kitty picture at a URL like http://lesite:8080/uploads/kitty.jpg

The implementation:

Let’s use the same examples. The mapping is done in <CATALINA_HOME>/conf/server.xml (I hope you know what CATALINA_HOME is and where to find it).

This is the default situation on a fresh Tomcat install (a snippet from server.xml):

<Host name="localhost" appBase="webapps" unpackWARs="true" autoDeploy="true">

    <Valve className="org.apache.catalina.valves.AccessLogValve" directory="logs"
           prefix="localhost_access_log." suffix=".txt"
           pattern="%h %l %u %t &quot;%r&quot; %s %b" />

</Host>

But we want to change that into this:

<Host name="localhost" appBase="webapps" unpackWARs="true" autoDeploy="true">

    <Context docBase="/usr/share/tomcat/uploads" path="/uploads" />

    <Valve className="org.apache.catalina.valves.AccessLogValve" directory="logs"
           prefix="localhost_access_log." suffix=".txt"
           pattern="%h %l %u %t &quot;%r&quot; %s %b" />

</Host>

And that’s it, end of setup. Restart, code and redeploy.

The cookie:

A Java snippet of a simple utilization:

import java.io.File;
import java.io.IOException;
import java.nio.file.Files;

import javax.servlet.ServletException;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

public class UploadsServlet extends HttpServlet {

    @Override
    protected void doGet(HttpServletRequest request, HttpServletResponse response)
            throws ServletException, IOException {
        // note: in real code, validate getPathInfo() first to prevent path traversal
        File file = new File("/usr/share/tomcat/uploads", request.getPathInfo());
        response.setHeader("Content-Type", Files.probeContentType(file.toPath()));
        response.setHeader("Content-Length", String.valueOf(file.length()));
        Files.copy(file.toPath(), response.getOutputStream());
    }

}

The conclusion:
In exact sciences the need for a conclusion is deprecated. Everything should be concluded in one way only:

return goToTopAndReadAgain();

The hint:

Maybe you won’t be impressed, and you probably have a better solution/implementation for the scenario. However, let me give you a clue about how this can prove useful in a different situation: proxying and load balancing, possibly with Nginx in front and a couple of Tomcats behind, defining new server contexts and getting a feel of that damn Superman speed.

Fiuuuuuuuuuuuuuuuuu…

(salutations and thanks to a friend of mine for collaboration)

It is first, so let it be groggy

Welcome to you, welcome to me: what will this blog be?

Well, it won’t be just groggy, I hope. Primarily it will be written after work is done, when I’m tired, a little restless, and maybe a little more creative.

I will bother you with technical stuff, sysadmin daily ad-hoc solutions, programming, things that interest me; maybe I will even write something about you. Personal opinions will be included as well.

So cheers.