I had this weird thing: since my update to Mavericks, it seemed my MacBook Pro did not go to sleep anymore. Well, when on power supply my computer never goes to sleep, but my display normally does.

I know that some applications, like full-screen video, make sure the screen does not go to sleep even when I am doing nothing (luckily). So I went on a venture to see if I could find which application was “hanging” on to the display.

It turns out there is actually a really easy way to see that; in your terminal, type

> pmset -g

And in my case the output looked like this:

[system] ~/work/git/on_the_spot (master) > pmset -g
Active Profiles:
Battery Power       -1
AC Power        -1*
Currently in use:
 standbydelay         4200
 standby              1
 womp                 1
 halfdim              1
 hibernatefile        /var/vm/sleepimage
 darkwakes            1
 gpuswitch            2
 networkoversleep     0
 disksleep            10
 sleep                0 (sleep prevented by backupd)
 autopoweroffdelay    14400
 hibernatemode        3
 autopoweroff         1
 ttyskeepawake        1
 displaysleep         10 (display sleep prevented by Google Chrome)
 acwake               0
 lidwake              1

The important line: display sleep prevented by Google Chrome. Awesome, even I understand that :) So now the only thing I needed to do was go through my verrrry long list of tabs and check which one keeps the display awake (I am an avid tab collector). Apparently the Screenhero homepage (Screenhero seems like a very interesting option for remote pair programming, which is why I kept it open) for some reason blocks the display from sleeping. Makes sense when it is in use, not so much on the homepage. Closed it, and shabam … fixed :)

When using SimpleCov in a very ill-covered project, I got amazingly good results: SimpleCov just did not count uncovered files. Files that were never loaded in our test suite were simply ignored. While I understand that approach, it did not feel right. I want to measure absolute progress, and I want to know how bad it really is.

So, on a mission to count all uncovered files with SimpleCov, I came across an issue in their GitHub repo. It contained/mentioned two solutions:

  • in a three-year-old comment, a solution that sets a starting baseline and merges it after the tests. Unfortunately, it did not work completely: all files were added, but with complete coverage. Doh.
  • a pull request claiming to fix it: while it did add all files, files that previously had 100% coverage no longer did.

Let’s get technical. The first solution added a baseline with the value nil for every line; the second added the value 0 for each line. The value nil is what SimpleCov uses internally for a “never” line: a line that never counts towards coverage, such as an empty line, a comment line, or the begin and end of a class definition.

A 0 is a line that is not covered (and a 1 is a line that was covered). When merging lines for files that did have coverage, I assume the baseline 0 takes precedence over the coverage-calculated nil, and so we end up with a non-covered line in the merged result. Bummer.
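
To make this concrete: the raw coverage result is simply a hash mapping each file to an array with one value per line. Something like this (the paths and files are made up, just to show the shape):

{
  "/path/to/app/models/user.rb"  => [nil, 1, 1, 0, nil],  # loaded during the tests: real coverage
  "/path/to/app/models/order.rb" => [nil, 0, 0, 0, nil]   # never loaded: what the baseline should contribute
}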

So I adapted my spec_helper.rb as follows:

if ENV["COVERAGE"]
  require 'simplecov'
  SimpleCov.start 'rails'
  SimpleCov.coverage_dir 'coverage/rspec'

  all_files = Dir['**/*.rb']
  base_result = {}
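  # build the baseline: nil for lines that never count (blank, comment, or a bare end), 0 for everything else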
  all_files.each do |file|
    absolute = File::expand_path(file)
    lines = File.readlines(absolute, :encoding => 'UTF-8')
    base_result[absolute] = lines.map do |l|
      l.strip!
      l.empty? || l =~ /^end$/ || l[0] == '#' ? nil : 0
    end
  end

  SimpleCov.at_exit do
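    # drop the files that actually got coverage; only files that were never loaded keep their baseline entries, which are merged in below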
    coverage_result = Coverage.result
    covered_files = coverage_result.keys
    covered_files.each do |covered_file|
      base_result.delete(covered_file)
    end
    merged = SimpleCov::Result.new(coverage_result).original_result.merge_resultset(base_result)
    result = SimpleCov::Result.new(merged)
    result.format!
  end
end
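
With this in place, coverage is only measured when the COVERAGE environment variable is set, so you run the suite with something like COVERAGE=true bundle exec rspec (adjust to however you run your tests).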

So, before the tests run, I create a “baseline” containing, for all possible Ruby files (I might need to filter that later), a nil if the line is empty or the “ending end” of a file, and a zero otherwise. After the tests have run, I remove the covered files from the “baseline”, and what is left is merged with the real coverage result to produce the final result.

Now maybe try to translate that into a pull request :)

I develop my website on a MacBook Pro Retina and deploy on Windows. I noticed that GeoServer on my MacBook Pro is a lot faster than the standard GeoServer install on Windows. I use GeoServer only to serve the WMS layers; the vector data is stored in PostGIS.

So I needed to tune GeoServer on Windows for optimal performance. I googled around, and found I needed to make the following changes:

  • make sure jvm runs in -server mode
  • make sure jvm is allocated enough memory
  • optimise jvm settings
  • install native JAI/ImageIO binaries
  • switch to production logging

I used the standard GeoServer binary install, so to tune the JVM settings you have to edit c:\program files\GeoServer xxx\wrapper\wrapper.conf and add the following lines:

# Java Additional Parameters
wrapper.java.additional.1=-Djetty.home=.
wrapper.java.additional.2=-DGEOSERVER_DATA_DIR="%GEOSERVER_DATA_DIR%"
wrapper.java.additional.3=-server 
wrapper.java.additional.4=-Xmx2048M -Xms2048m 
wrapper.java.additional.5=-XX:SoftRefLRUPolicyMSPerMB=36000 
wrapper.java.additional.6=-XX:MaxPermSize=128m 
wrapper.java.additional.7=-XX:NewRatio=2 

To be able to run the JVM in server mode, I had to copy $JAVA_HOME\bin\client to $JAVA_HOME\bin\server, which feels like an awesome hack.

But GeoServer does seem a lot quicker.

I read the WMS shootout benchmarks (2011), and one of the results showed that MapServer was a lot quicker on Linux than on Windows.

That got me wondering what other things I could do to improve performance.

E.g.

  • deploy GeoServer on Linux instead of Windows?
  • switch containers: Jetty vs Tomcat vs JBoss, or doesn’t that make much of a difference?
  • or how hard would it be to switch to a “quicker” WMS, e.g. Mapnik/MapServer?

Since I could not find any of these questions answered somewhere on the web (at least not easily, and I asked), I will probably be doing some benchmarks myself soon.

I am currently converting my GIS application, which needs two browser windows (one for the map, and one for the administrative data and functions), into a single-window application using the Netzke gems (which in turn rely on Sencha Ext JS).

The Netzke gems are extremely powerful, but there is a bit of a learning curve for me, since both Netzke and Ext JS are new to me. Still, Netzke makes a lot of things a lot easier. It is not really Rails-like, since it abstracts a lot away, and I am not yet completely convinced of the approach. But for my current goals it is fine.

I have a panel containing an OpenLayers map, and a toolbar with action buttons. The action buttons need to indicate state (drawing/measuring), so when clicked they have to switch to a “pressed” state and start the action.

Although I found no example of this, achieving it was pretty easy. I defined my action buttons as follows:

action :measure_line do |c|
  c.id = "measure_line"
  c.icon = :ruler
  c.text = ""
end

action :measure_area do |c|
  c.id = "measure_area"
  c.icon = :surface
  c.text = ""
end

js_configure do |c|
  c.layout = :fit
  c.mixin
end

def configure(c)
  c.desc = "Contains the openlayers map"
  c.title = "Map"
  c.tbar = [:draw_point, :draw_line, :draw_polygon, :measure_line, :measure_area, :navigation_history]
  c.body_padding = 5
end   

I am only showing the :measure_line and :measure_area actions because those are the relevant ones here. I give the buttons an explicit id, so that I can use Ext.ComponentManager.get to find them later on.

A nice thing to know is that the action handler by default receives the pressed button as its argument (yes!). This allows us to just toggle it. So your mixed-in JavaScript will look like this:

{
    onMeasureLine: function(btn) {
      Map.toggleMeasuring(btn, 'line');
    },
    onMeasureArea: function(btn) {
      Map.toggleMeasuring(btn, 'polygon');
    },
}

Because I prefer to write CoffeeScript (I have not found out how to do that for the Netzke mixins), I have a class Map in map.js.coffee containing all map-related functions. toggleMeasuring needs the button and the measurement type to activate/deactivate:

toggleMeasuring: (button, measurement_type) ->
  button.toggle()
  if 'pressed' in button.uiCls
    @measureControls[measurement_type].activate()
  else
    @measureControls[measurement_type].deactivate()

  # make sure the other measuring is automatically switched off
  for  key of @measureControls
    unless key == measurement_type
      @measureControls[key].deactivate()
      Ext.ComponentManager.get("measure_#{key}").toggle(false)

That is about the gist of it. I will be sharing more of my Netzke experience and code soon. For the moment I am still impressed :)

I am currently converting an Oracle database to PostGIS. Instead of blindly copying the data model, I am also checking which columns are actually used, and dropping those that never are.

For most tables it is pretty easy: I can do a quick visual check and then run a count on the one column that seems to be empty all the time. But we have a few tables with 30-50 columns.

For such tables there is also an easy way:

SELECT  t.column_name
FROM    user_tab_columns t
WHERE   t.nullable = 'Y'
    AND t.table_name = 'YOUR_TABLE_NAME_HERE'
    AND t.num_distinct = 0

Mind you, for this to work, your database must have gathered the statistics (if you haven’t done this before, this will also help your performance).

BEGIN
  DBMS_STATS.gather_database_stats();
END;

Yowza. Due to some divine intervention I am now responsible for upgrading three Rails sites. I started developing them in 2009, but haven’t touched them since mid-2010. The people who took over have not kept them up to date, nor (judging from their code) were they really good Ruby programmers :)

So I have the daunting task of bringing those projects into the present :)

Remember 2010, Rails 2.3.5? That seems like ages ago :) There was

  • no bundler
  • no rvm (actually, there still isn’t, since they deploy on Windows; aaaaarggh, I have been developing on Ubuntu and Mac since then)
  • vendor/plugins instead of gems
  • no asset pipeline
  • old routes

At first I couldn’t even get the correct set of gems together to get the Rails site running.

So these are the steps I took to get it running (on my Mac first):

  • create a new branch
  • use rvm to switch to Ruby 1.8.7 and a new, empty gemset
  • gem install rails (which apparently installs the correct 2.3.5 version by itself; impressed :)
  • install the rails_upgrade plugin (using script/plugin)
  • my current rake version was not compatible; luckily, if you type rake _0.8.7_ instead of plain rake, it will use that older version. Hehe.
  • to use rails_upgrade you need a working Rails site, so I had to collect all the gems in the correct versions
  • use the rails_upgrade plugin to check and upgrade routes/gems/…
    • the generated routes were correct
    • the Gemfile I had to edit manually, since the gems in use were not specified in environment.rb using config.gem (see the sketch after this list)
    • also the generated application.rb needed editing (we had added a lot of initializer code there, which should really move to an initializer!)
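
To give an idea, the hand-edited Gemfile ended up looking roughly like this; the non-Rails gems below are placeholders, just to show the shape, not the actual list:

source 'https://rubygems.org'

gem 'rails', '3.2.13'

# placeholders: one line per gem the app actually needs, pinned to a version
# that still works with Rails 3.2
gem 'mysql2'
gem 'will_paginate', '~> 3.0'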

Then I switched (with rvm) to Ruby 1.9.3 and ran bundle install. Next, I created a fresh Rails 3.2.13 project and copied over from it:

  • the scripts folder verbatim
  • the config/boot.rb, config/environment.rb and config/environments/*.rb (make sure to check and keep any changes you made)
  • the Rakefile (idem)

Then I was good to go!! We still have a lot of vendor/plugins I need to convert; one in particular needs to be converted into an engine (gem).

Things I still need to do:

  • convert vendor/plugins:
    • convert engine to gem
    • convert others to lib/plugins
  • move assets to asset pipeline

And then I should have a running web application again. Wow.

Of course, the sites have no tests at all (my bad as well: when I started with Rails I did not know about testing), so I will still have to add those: starting with Cucumber first, and adding RSpec later, as I touch the code (working from the outside in).

For my current employer I am helping to build a heavily JavaScript-based website/application. With a JavaScript-based site using a lot of Ajax, you have to make sure the user can still use the back button without breaking the experience.

So we used the HTML5 history API, tested it on Firefox and Chrome, and it worked just fine.

Of course, our first client is using IE9, and it breaks completely.

The most famous library to port HTML5 history behaviour to all browsers is history.js. Unfortunately I encountered a few very specific issues with it:

  • it uses a statechange event, which is triggered both when pushing and when popping a state, and I can’t tell which change it is. I am only interested in the pop state. This is awkward, but fixable.
  • we are building a SPARQL browser, so the URLs we build contain RDF identifiers, which are URIs. History.js just can’t handle that: it unescapes the URIs, and thus breaks the built URL and the stored state as well. This was not simply fixable at all. It was supposedly fixed in the dev branch, but even that did not work.

So I had to go looking for an alternative, with the following characteristics:

  • support the same API as the HTML5 history, or as close as possible
  • allow building URLs containing escaped URIs
  • and of course: work on IE9 and up

And luckily, I found that library: HTML5-History-API, which is an exact implementation of the history API.

The only change needed was to my popstate handler (plus including the JavaScript library, of course).

Before, it was implemented as follows (CoffeeScript):

   window.addEventListener "popstate", (event) ->
     state = window.history.state
     state = event.state
     if state
       if state.query != undefined
         update(state.query)

and now it looks like:

   window.addEventListener "popstate", (event) ->
     event = event || window.event
     state = event.state
     if state
       if state.query != undefined
         update(state.query)

And then my code just worked on IE9! Awesome :)

There are different reasons why an ActiveRecord::StaleObjectError can occur. Sometimes it is caused by the fact that Rails does not have a stable identity map yet. This means that if the same record is retrieved via different associations, your Rails process does not know they point to the same object; it keeps different copies of the same record, and that can easily cause a StaleObjectError if you attempt to change both.

Such an occurrence needs to be fixed structurally in your code.
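
For illustration, this is the kind of situation that triggers it. A minimal sketch, assuming a hypothetical Order model that uses optimistic locking (i.e. has a lock_version column); the same thing happens when the two copies are reached via different associations:

# two in-memory copies of the same row
order_a = Order.find(1)
order_b = Order.find(1)                        # a different Ruby object for the same record

order_a.update_attributes!(:status => "paid")  # saves fine, bumps lock_version
order_b.update_attributes!(:status => "sent")  # raises ActiveRecord::StaleObjectError,
                                               # because order_b still holds the old lock_version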

But sometimes you get a StaleObjectError simply because things happen at the same time. There are two ways to handle this:

  • make sure that controller actions which affect the same object are always executed sequentially. One way to achieve this is to lock the object you want to update in the database; all other processes wanting to update the same object then have to wait until the lock is released (and the lock can be acquired). This is a valid approach, but costly, intrusive (you need to explicitly add code), and possibly dangerous: while normally easy to avoid, you have to watch out for deadlocks and for locks that are never released (a sketch of this approach follows this list).
  • when a StaleObjectError occurs, just retry the request
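
That locking option would look something like the snippet below; a minimal sketch, again assuming a hypothetical Order model (with_lock opens a transaction and issues a SELECT ... FOR UPDATE):

order = Order.find(params[:id])
order.with_lock do
  # no other process can update this row until the block (and its transaction) finishes
  order.status = "paid"
  order.save!
end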

Now that second option seems valid, if only it were easy to do. Luckily there is a very easy and unobtrusive way to automatically retry a request when a StaleObjectError occurs: let’s just create a middleware that catches the exception and retries the complete request a few times.

Create a new file lib/middleware/handle_stale_object_error.rb containing the following:

module Middleware

  class HandleStaleObjectError
    RETRY_LIMIT = 3

    def initialize(app)
      @app = app
    end

    def call(env)
      retries = 0

      begin
        @app.call(env)
      rescue ActiveRecord::StaleObjectError
        raise if retries >= RETRY_LIMIT

        retries += 1
        Rails.logger.warn("HandleStaleObjectError::automatically retrying after StaleObjectError (attempt = #{retries})")

        retry
      end
    end
  end

end

Then in your config/application.rb you need to add the following lines at the bottom to activate the middleware (this requires a restart of your rails server):

    #
    # Middleware configuration
    #
    require 'middleware/handle_stale_object_error'

    config.middleware.insert_after ActiveRecord::SessionStore, Middleware::HandleStaleObjectError

Warning: this code will retry all occurrences of StaleObjectError. For some occurrences this will not help at all, so use your own judgement as to whether this middleware is something you want in your codebase. I like this approach because it is an “optimistic” one: it only adds extra processing when a StaleObjectError actually occurs, and for a non-fixable StaleObjectError it still fails as it should.

Hope this helps.

In some part of my code I ended up writing the following:

  self.count_processed ||= 0
  self.count_processed += 1

where self is some ActiveRecord model, and count_processed is an attribute of that model (and stored in the database).

What I am trying to achieve (in case it is not blatantly obvious):

  • if count_processed is not initialised, make it zero
  • increment count_processed

Imho this code is clear and readable, but I had the feeling it could be more concise/prettier. So I asked the question on our Campfire, to see if we could come up with something shorter.

It is very nice to work in a team where you can just throw out questions like these and a very useful, educational discussion unfolds. In short, we came up with the following solutions.

Solution 1: to_i

self.count_processed = self.count_processed.to_i + 1

Nifty, isn’t it? Use to_i because it handles the nil correctly (nil.to_i returns 0).

But to me this looked wrong. If I returned to this code after a few weeks or months, I would wonder why I did it this way, and did not just write self.count_processed += 1.

So while the code is correct, the intent of the code is not clear.

Solution 2: concise!

self.count_processed = (self.count_processed || 0) + 1

This is very beautiful, and the intent is also very clear: if it is not initialised, use zero, otherwise just use the value, and add 1. Awesome.

Solution 3: change the getter

An alternative solution would be to override the getter, like this:

def count_processed
  self[:count_processed] ||= 0
end

Note the notation used: we write self[:count_processed] because this fetches the value of the underlying database column. If this were a plain Ruby getter, we would write @count_processed (but that does not work for an ActiveRecord attribute).

After redefining the getter, we can just write:

self.count_processed += 1

While this will always work, does it express its intent more clearly or not? Actually, you no longer have to worry about the initialisation at all, because it is handled in the getter, and we can focus on what we really want: incrementing the counter.

I opted for this solution.

What about you?

Which version do you prefer? Do you have any alternative suggestions?

Since Rails 3.2.4, link_to_function is effectively deprecated (again). Everything keeps on working, but when running my specs I got a horseload of deprecation warnings.

I know the optimal/recommended way is to use unobtrusive JavaScript, but the quickest way to get rid of the warnings is really easy.

Just perform the following translation:

    # before
    link_to_function icon_tag('close.png'), '$(this).parent().hide()', :title => t('actions.close'), :class => 'close'

    # after
    link_to icon_tag('close.png'), '#', :onclick => '$(this).parent().hide()', :title => t('actions.close'), :class => 'close'

Dead-easy :)

In the next refactoring I will remove all onclick blocks and replace them with unobtrusive JavaScript. But for now I have got rid of the deprecation warnings :)