Blog
what did i learn today
News coordinate systems oracle spatial oracle
[ORACLE] updating coordinate systems' definition

We are in the process of migrating an old GIS system. For our new systems we use PostGIS, but this one still uses Oracle. The data spans two countries: Belgium and the Netherlands. Our system does something awful: all data is stored in RD (the Dutch coordinate system, using Oracle SRID 90112).

So how does data get into the system? Belgian data is entered as Lambert 72 (Oracle SRID 327680) and then transformed to 90112.

Our client uses a customised viewer that shows the data either in RD or Lambert 72. Now we want to switch to a more generic solution and show the data in WGS84. We are using Oracle 11, so my initial process was the following:

  • extract the Belgian data from the tables and convert it back to 327680 (SDO_CS.transform(geom, 327680))
  • set the SRID to 31370 (the correct/best SRID for Belgium -- it has the correct transformation to WGS84) as follows: update be_geoms bg set bg.geom.sdo_srid = 31370 (so without transformation)
  • for the Dutch data, just set the SRID to 28992
  • and then transform both to WGS84!

Easy! Done! Ready! However ... I was not ... The data was not positioned correctly. So I checked the definitions in MDSYS.CS_SRS for both 28992 and 31370, compared them to epsg.io, and lo and behold: both were wrong. So now I had to update them.

Updating EPSG:31370

delete from mdsys.cs_srs where srid=31370;
Insert into MDSYS.CS_SRS (CS_NAME,SRID,AUTH_SRID,AUTH_NAME,WKTEXT,CS_BOUNDS,WKTEXT3D) values ('Belge 1972 / Belgian Lambert 72',31370,31370,'IGN Brussels www.ngi.be/html-files/french/0038.html','PROJCS["Belge 1972 / Belgian Lambert 72", GEOGCS ["Belge 1972", DATUM ["Reseau National Belge 1972 (EPSG ID 6313)", SPHEROID ["International 1924 (EPSG ID 7022)", 6378388.0, 297.0], -106.869,52.2978,-103.724,0.3366,-0.457,1.8422,-1.2747], PRIMEM ["Greenwich", 0.000000], UNIT ["Decimal Degree", 0.0174532925199433]], PROJECTION ["Lambert Conformal Conic"], PARAMETER ["Latitude_Of_Origin", 90.0], PARAMETER ["Central_Meridian", 4.3674866666666667], PARAMETER ["Standard_Parallel_1", 51.1666672333333333], PARAMETER ["Standard_Parallel_2", 49.8333339], PARAMETER ["False_Easting", 150000.013], PARAMETER ["False_Northing", 5400088.438], UNIT ["Meter", 1.0]]',null,'PROJCS[
  "Belge 1972 / Belgian Lambert 72",
  GEOGCS["Belge 1972",
    DATUM["Reseau National Belge 1972",
      SPHEROID[
        "International 1924",
        6378388.0,
        297.0,
        AUTHORITY["EPSG", "7022"]],
      TOWGS84[-106.869,52.2978,-103.724,0.3366,-0.457,1.8422,-1.2747],
      AUTHORITY["EPSG", "6313"]],
    PRIMEM["Greenwich", 0.000000, AUTHORITY["EPSG","8901"]],
    UNIT["degree (supplier to define representation)", 0.0174532925199433, AUTHORITY["EPSG", "9122"]],
    AXIS["Lat", NORTH],
    AXIS["Long", EAST],
    AUTHORITY["EPSG", "4313"]],
  PROJECTION ["Lambert Conformal Conic"],
  PARAMETER ["Latitude_Of_Origin", 90.0],
  PARAMETER ["Central_Meridian", 4.3674866666666667],
  PARAMETER ["Standard_Parallel_1", 51.1666672333333333],
  PARAMETER ["Standard_Parallel_2", 49.8333339],
  PARAMETER ["False_Easting", 150000.013],
  PARAMETER ["False_Northing", 5400088.438],
  UNIT["metre", 1.0, AUTHORITY["EPSG", "9001"]],
  AXIS["X", EAST],
  AXIS["Y", NORTH],
  AUTHORITY["EPSG", "31370"]]');

... and this worked, and now my transformation for Lambert 72 is correct!

Updating EPSG:28992

... proved to be a little trickier. I assumed I could just reuse the same method as for the Belgian coordinate system (yes, I know, assume = ass-u-me).

I was unable to simply delete or update 28992, because I got an error that a child record existed: ORA-02292 with reason COORD_OPERATION_FOREIGN_SOURCE. Googling this revealed nothing at all.

So I had to dig deeper. And deeper. MDSYS.CS_SRS is actually a view which tries to update the underlying tables, and the TOWGS84 parameters, which I had to update, are stored in the SDO_DATUMS table. So after some searching, it proved to be quite easy. To update EPSG:28992, I just had to do:

update mdsys.sdo_datums set
  shift_x = 565.417,
  shift_y = 50.3319,
  shift_z = 465.552,
  rotate_x = -0.398957,
  rotate_y = 0.343988,
  rotate_z = -1.8774,
  scale_adjust = 4.0725
where datum_id = 6289;

EXECUTE SDO_CS.UPDATE_WKTS_FOR_EPSG_DATUM(6289);

My initial (naive) assumption was that the SDO_CS.UPDATE_... procedures would actually retrieve the latest EPSG definitions; unfortunately, no such luck :)

Stuff like this makes me appreciate PostGIS even more.

More ...
News routing devise rails
[rails routing] protecting a mounted engine

In a project we built, we are using que for our background jobs, and there is a very simple (but sufficient) and clean web UI called que-web, allowing us to monitor the status of the jobs online.

And normally, you just include it in your project by adding the gem, and then adding the following to your config/routes.rb :

require "que/web"
mount Que::Web => "/que"

But this is completely open and unauthenticated. We use devise, and it is really easy to limit a route to authenticated users:

require "que/web"
authenticate :user do 
  mount Que::Web => "/que"
end

At least this limits access to logged-in users. But we wanted it to be available only to admin users, so I thought I had to resort to defining my own constraint class, as follows:

class CanSeeQueConstraint
  def matches?(request)
    # determine if current user is allowed to see que
  end
end

and in the routes write it as follows

require 'can_see_que_constraint'
mount Que::Web, at: '/que', constraints: CanSeeQueConstraint.new

The problem was: how do I get to the current user in a constraint class? So I took a peek at how the authenticate block in devise works, and apparently there is an easier option: the authenticate block takes a lambda in which you can test the currently authenticated user. Woah! Just what we need. So we wrote the following to only allow our administrators to see/manage our background jobs:

authenticate :user, lambda {|u| u.roles.include?("admin") } do
  mount Que::Web, at: '/que'
end
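The lambda simply receives the signed-in user, so any predicate works there. Here is a minimal sketch of just the role check, runnable outside Rails (the `User` struct below is a stand-in for a real devise model):

```ruby
# Stand-in for a devise user model; only the roles list matters here.
User = Struct.new(:roles)

# The same kind of predicate that devise's authenticate block receives.
admin_only = lambda { |user| user.roles.include?("admin") }

admin_only.call(User.new(["admin", "editor"]))  # => true
admin_only.call(User.new(["editor"]))           # => false
```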
More ...
Technology ruby render_anywhere ruby on rails
Using render-anywhere gem with partials

Normally in rails you can only render views from a controller. But what if you want to render a view somewhere else? For instance, we wanted to generate XML files using views; Haml can describe XML just as well as plain HTML.

There is a gem called render_anywhere that allows just that. So how does this work? For example:

class Organisation < ActiveRecord::Base

  has_many :members

  include RenderAnywhere

  def to_xml
    render partial: "#{self.to_partial_path}", object: self, layout: 'my_xml_layout'
  end
end

We had a little problem when using partials though.

Normally if you type something like

= render @member

it will ask the model for its partial path (@member.to_partial_path), but somehow this always got prefixed with render_anywhere. The gem creates a dummy RenderingController in the RenderAnywhere namespace, so apparently it will look for the following view:

render_anywhere/members/member

In our case I did not want to use the render_anywhere subfolder. It took me a while to figure out how to overrule this, but in essence it is pretty simple: rails uses the namespace of the rendering controller to prefix the path. Some deep googling revealed that any controller has a method called _prefixes which lists all the view-path prefixes for that class.

We can easily verify this in the rails console:

:001 > RenderAnywhere::RenderingController._prefixes
=> ["render_anywhere/rendering"]

So if we could overrule _prefixes to just return ["rendering"] ... Mmmmmm fork the code of render_anywhere? Or ...

There is another option: render_anywhere allows you to supply your own RenderingController, and will use it instead if it is found in the context where the RenderAnywhere module is included.

So, if you write something like:

class Organisation < ActiveRecord::Base

  has_many :members

  include RenderAnywhere

  class RenderingController < RenderAnywhere::RenderingController

    def self._prefixes
      ["rendering"]
    end

  end

  def to_xml
    render partial: "#{self.to_partial_path}", object: self, layout: 'my_xml_layout'
  end
end

it will look for a view called members/member. Woot. To specify a different sub-folder you can adapt the _prefixes method as you wish :)

More ...
News schema_plus postgis ruby on rails
[rails] ignoring specific postgis view and tables in schema.rb

Developing rails websites with a geographic component, we rely heavily on PostGIS, so we use activerecord-postgis-adapter for the PostGIS support, and I always use schema_plus because it allows me to define views. Until recently I always had to use structure.sql instead of schema.rb, because the geometric columns did not dump correctly.

But for a while now activerecord-postgis-adapter has handled this correctly, so we use the schema.rb file again. Only to discover a "new" error:

ActiveRecord::StatementInvalid: PG::DependentObjectsStillExist: ERROR: cannot drop view geography_columns because extension postgis requires it
HINT: You can drop extension postgis instead.
: DROP VIEW IF EXISTS "geography_columns"

Apparently PostGIS's own views are also dumped in the schema file, and those views obviously cannot simply be re-created.

A very naive solution I kept using was to comment out those create_view lines in our schema.rb file. But apparently there is a much better solution: you can configure which tables and views the schema dumper should ignore.

So I added an initializer in config/initializers/schema_dumper.rb with the following content:

ActiveRecord::SchemaDumper.ignore_tables = [
   "geography_columns", "geometry_columns", "spatial_ref_sys", "raster_columns", "raster_overviews"
]
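As a side note: the schema dumper matches each ignore_tables entry with ===, so (at least in the Rails versions I checked) regular expressions work alongside plain strings. Here is a standalone sketch of that matching logic, with IGNORE and ignored? as illustrative names:

```ruby
# Simplified from ActiveRecord::SchemaDumper: an entry matches a table
# name via ===, which works for both strings and regexps.
IGNORE = ["spatial_ref_sys", /^geo(metry|graphy)_columns$/, /^raster_/]

def ignored?(table_name)
  IGNORE.any? { |pattern| pattern === table_name }
end

ignored?("geometry_columns")  # => true
ignored?("raster_overviews")  # => true
ignored?("visits")            # => false
```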

And now my schema.rb is correct, and simple commands like rake db:setup or rake db:test:prepare just work. Hehe.

More ...
News geoserver oracle
[geoserver] failed to look up primary key in oracle table

I have a very weird problem with my GeoServer + Oracle setup when deployed on a Windows 2012R2 server (see here), and in an attempt to solve it, I upgraded GeoServer from 2.6.3 to 2.7.1, hoping that would fix it.

Sometimes fairy tales come true, but in this case it did not help, unfortunately. Version 2.7.1 did render a lot quicker, except for one layer, which did not render at all anymore.

My style could not render, with the error The requested Style can not be used with this layer. The style specifies an attribute of <missing attribute name>. Checking the layer in GeoServer, I could see it was no longer able to determine any of the attributes of the given table.

Further investigation in the logfile revealed the following (cryptic) error:

Failure occurred while looking up the primary key with finder: org.geotools.jdbc.HeuristicPrimaryKeyFinder@24cf7139

java.sql.SQLException: Exhausted Resultset

Mmmmmm. Luckily my google-fu revealed a linked issue, and simple solution:

updating the driver from ojdbc14.jar to the newer ojdbc7.jar fixes this problem.

Hehe :)

More ...
News geoserver oracle
[geoserver] having duplicate columns in your oracle based layer

Updating GeoServer did not fix my problem: my layer still had some duplicate column names. This might not seem such a big problem: everything is drawn correctly and WMS calls work, but WFS calls gave the irritating yet predictable error ORA-00918: column ambiguously defined. Annoying.

So how does one find the column names for a table in Oracle? With a query like:

select * from dba_tab_columns where table_name = 'YOUR_TABLE_NAME';

and all of a sudden I saw the same set of column names, with some duplication. Apparently my Oracle database contains the table twice, in two different schemas. Since my user had permission to access the other schema, it seems GeoServer does not limit the query to the (specified) schema at all.

The fix then was easy: make the other schema inaccessible. In my case the second schema was for testing purposes, so I could just delete it.

More ...
News
Upgrading my local geoserver instance from 2.3.3 to 2.6.3

At my current job we make GIS websites using rails and GeoServer. I develop on Mac, and for some clients we need to deploy on Windows. One client is still using an Oracle database, while in general I prefer to work with PostGIS databases; GeoServer also offers better support for PostGIS.

So: when working locally I got a really weird phenomenon in my GeoServer: it duplicated various Oracle columns. Generally not a problem for viewing, but when using WFS I got the "column ambiguously defined" error, Oracle SQL views did not work (it went looking for metadata?), and the GeoServer SQL views were painfully slow.

But on my client's server I installed GeoServer 2.6.3 and the Oracle stuff just worked. Woot :) So I had to upgrade my ancient 2.3.3 GeoServer, which runs inside a Tomcat. Upgrading seemed easy enough: copy the old geoserver folder somewhere (you would actually only need the data folder and the web.xml, but I am lazy/extra safe like that), drop in the new war, and theoretically we should be good to go.

Except ... I got this peculiar error in my log-file

SEVERE: Error listenerStart

WTF! Thanks to some googling, I added a file logging.properties to my geoserver\WEB-INF\classes with the following content:

org.apache.catalina.core.ContainerBase.[Catalina].level = DEBUG
org.apache.catalina.core.ContainerBase.[Catalina].handlers = java.util.logging.ConsoleHandler

restarted my tomcat, and the following appeared:

SEVERE: Error configuring application listener of class org.geoserver.platform.GeoServerHttpSessionListenerProxy
java.lang.UnsupportedClassVersionError: org/geoserver/platform/GeoServerHttpSessionListenerProxy : Unsupported major.minor version 51.0 (unable to load class org.geoserver.platform.GeoServerHttpSessionListenerProxy)

Now what kind of cryptic error is that? Apparently it is a very compact way of saying this code needs Java 1.7 and I am still using Java 1.6 (I am looking at you, Apple). Updating now :)

More ...
News ruby on rails
[rails] select distinct values of a column

The simplest way to select all distinct values of a column is, somewhat unintuitively:

Visit.uniq.pluck(:project)

this runs the query select distinct project from visits and returns an array of strings. Exactly what you need, except ... I want it to be paginated. So we write:

Visit.uniq.pluck(:project).page(1)

... and that completely bombs: we now get an array of numbers?

So we try something else, and write:

Visit.select('distinct project')

which runs the right query, but returns an array of Visits with only the project attribute filled in. I can live with that. And then pagination (using the kaminari gem) again works as expected:

Visit.select('distinct project').page(params[:page])

Nice :)
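If you do want to keep the pluck approach, Kaminari can also paginate a plain array through Kaminari.paginate_array. The slicing behind that kind of pagination is simple enough to sketch in plain Ruby (paginate below is an illustrative helper, not Kaminari's API):

```ruby
# Illustrative page slicing: 1-based page number, fixed page size.
def paginate(array, page, per = 2)
  array.each_slice(per).to_a[page - 1] || []
end

projects = ["alpha", "beta", "gamma", "alpha"].uniq  # like the distinct pluck
paginate(projects, 1)  # => ["alpha", "beta"]
paginate(projects, 2)  # => ["gamma"]
```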

More ...
News javascript jquery html5
setting a html5 data attribute with jquery

Abstract

I have a single bootstrap modal which is called from different places, so the modal contains some data-* attributes I want to set before showing it. Just using the .data() method offered by jQuery does not work.

Detailed example

Suppose you have mark-up like this:

<div data-some-important-value="123">

Asking the value is quite easy:

$('[data-some-important-value]').data('some-important-value')

And, according to the documentation, setting the data on a DOM element, should be as easy as

$('[data-some-important-value]').data('some-important-value', 'new-value')

If you try this in the console, you can verify that it does not work. This is where it gets confusing (to me). Apparently jQuery's .data() existed before the HTML5 data-* attributes did, and jQuery nicely integrated them. But the data-* attributes are only read once, and never written back to the document.

To still be able to do this, use the .attr() method instead:

$('[data-some-important-value]').attr('data-some-important-value', 'new-value')

Now I only have to include one modal "template" in my HTML, and set the data-* attributes to customize the behaviour.

More ...
Technology ruby
[ruby] hate private methods? use protected instead

I started out writing something like the following:

If I could change just one thing in ruby, I would change how private works. Aaarrgghh.

For instance, I was trying to implement equality of bounding boxes in ruby. I presumed I could write something like:

class BoundingBox

  attr_accessor :xmin, :ymin, :xmax, :ymax

  def initialize(xmin, ymin, xmax, ymax)
    @xmin = xmin
    @ymin = ymin
    @xmax = xmax
    @ymax = ymax
  end

  def ==(other)
    other.class == self.class && self.state == other.state
  end
  alias_method :eql?, :==

  private

  def state
    [@xmin, @ymin, @xmax, @ymax]
  end

end

Because I am inside the scope of the class, I am allowed to access the private methods. That is how it is supposed to be. But no, ruby says something silly like:

private methods cannot be called with an explicit receiver

So you have to resort to something ludicrous like writing

def ==(other)
  other.class == self.class && self.send(:state) == other.send(:state)
end

This works, so I could have left it there. But wait: protected is exactly what I want. I had been using it completely wrong.

So in short:

  • use public for your public API of your class (obviously)
  • use protected for methods that are not part of your API, but other instances of the same class need to be able to reach (e.g. for equality checking, sorting, ...)
  • use private for methods that are only used inside the class, and only refer to the implicit receiver.

So if in the first example I change private to protected, it just works. I have been doing it wrong all along.
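A minimal runnable version of the class, trimmed down to the equality bits, confirms this:

```ruby
class BoundingBox
  def initialize(xmin, ymin, xmax, ymax)
    @xmin, @ymin, @xmax, @ymax = xmin, ymin, xmax, ymax
  end

  # == may call `state` on another instance because state is protected.
  def ==(other)
    other.class == self.class && state == other.state
  end
  alias_method :eql?, :==

  protected

  def state
    [@xmin, @ymin, @xmax, @ymax]
  end
end

BoundingBox.new(0, 0, 1, 1) == BoundingBox.new(0, 0, 1, 1)  # => true
BoundingBox.new(0, 0, 1, 1) == BoundingBox.new(0, 0, 2, 2)  # => false
```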

More ...
Technology ruby method_missing rails
[rails] store your settings in a model

Most of the time I use a config.yml to store application settings that I want to be able to change quickly between environments, servers, and deployments. But what about settings that need to be changed on the fly, by a user?

I create a small model, with three fields.

rails g model Setting name:string description:string value:string

I use a seed file to define the different settings. Settings are checked in code, but I cannot preload them, since a user might change them. So first I added a method as_hash that loads all settings, so I can use them directly as a hash, but that gets wordy quickly.

What if ... I could provide a method on the Setting class for each setting in the database? That would be really nice. This seems like a job for ... method-missing-man! Special superpower: seeing places where method_missing could be used :)

class Setting < ActiveRecord::Base

  validates_presence_of :name

  def self.as_hash
    settings = {}
    Setting.all.each do |setting|
      settings[setting.name.to_sym] = setting.value
    end
    settings
  end

  # offer all settings as methods on the class
  def self.method_missing(meth, *args, &block) #:nodoc:
    @all_settings ||= Setting.as_hash
    if @all_settings.keys.include?(meth)
      @all_settings[meth]
    else
      super
    end
  end
end

For documentation, the test:

context "convenience: define methods for all settings" do
  before do
    Setting.instance_variable_set('@all_settings', nil)
    Setting.create(name: 'my_other_test_setting', value: '123ZZZ')
  end
  it "Setting.my_other_test_setting returns the correct result" do
    Setting.my_other_test_setting.should == '123ZZZ'
  end
  it "an unexisting setting behaves as a normal missing method" do
    expect {
      Setting.this_setting_does_not_exist
    }.to raise_exception(NoMethodError)
  end
end

I love ruby :) :)
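The same pattern can be exercised in plain Ruby, with a hash standing in for the database (Settings, store and smtp_host below are illustrative names, not part of the post's model):

```ruby
class Settings
  # Stand-in for Setting.as_hash; a real app would load this from the database.
  def self.store
    @store ||= { smtp_host: "mail.example.com", retries: "3" }
  end

  # Unknown class methods are looked up in the settings hash first.
  def self.method_missing(meth, *args, &block)
    store.key?(meth) ? store[meth] : super
  end

  # Keep respond_to? truthful for the dynamically offered methods.
  def self.respond_to_missing?(meth, include_private = false)
    store.key?(meth) || super
  end
end

Settings.smtp_host  # => "mail.example.com"
```

Defining respond_to_missing? alongside method_missing keeps respond_to? truthful; the ActiveRecord version above would benefit from the same addition.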

More ...
News
[postgis] fixing missing rtpostgis-2.0

TL;DR

If you are using brew to manage your postgresql/postgis install and you suddenly cannot access any postgis functionality, with an error that rtpostgis-2.0.so cannot be found, check your json-c version. I had to do

brew switch json-c 0.10

to get it working.

The long and dirty story

I had created a new database, but for some reason I could not add a geometry column. So I was thinking: maybe, somehow, my POSTGIS extension needs to be (re)activated. And when I tried this I got the obscure error

could not load library "/usr/local/Cellar/postgresql/9.2.3/lib/rtpostgis-2.0.so": dlopen(/usr/local/Cellar/postgresql/9.2.3/lib/rtpostgis-2.0.so, 10): Library not loaded: /usr/local/opt/sqlite/lib/libsqlite3.0.8.6.dylib Referenced from: /usr/local/lib/libgdal.1.dylib

Whaaaaaattttt????

Maybe my postgis installation is somehow corrupt, so I tried brew install postgis. Wronggggg move. Suddenly I am installing postgresql 9.3.4 too?

Ok. I did not see an alternative. This of course meant I had to upgrade my database. First step: install postgresql 9.3.4 and postgis. Then I tried to follow this upgrade procedure. In short, I issued the following commands:

initdb /usr/local/var/postgres9.3 -E utf8
pg_upgrade -d /usr/local/var/postgres -D /usr/local/var/postgres9.3 -b /usr/local/Cellar/postgresql/9.2.3/bin/ -B /usr/local/Cellar/postgresql/9.3.4/bin/ -v

and that failed? I got almost the same error, but now a bit more verbose:

PG::UndefinedFile: ERROR: could not load library "/usr/local/Cellar/postgresql/9.2.3/lib/postgis-2.0.so":   
    dlopen(/usr/local/Cellar/postgresql/9.2.4/lib/postgis-2.0.so, 10): Symbol not found: _json_tokener_errors
Referenced from: /usr/local/Cellar/postgresql/9.2.4/lib/postgis-2.0.so
Expected in: /usr/local/lib/libjson.0.dylib
   in /usr/local/Cellar/postgresql/9.2.4/lib/postgis-2.0.so

What? Json? And that led me to the answer: I had to switch my json-c version:

brew switch json-c 0.10

I restarted my pg_upgrade, which now seemed to work, but it failed at the end: postgis-2.0.so and rtpostgis-2.0.so were not loadable. Sigh. These were of course compiled against the new json-c (I think?).

I switched the json-c version back to 0.11 and then started my postgres process again. This showed me that my databases were NOT upgraded.

This almost feels like DLL hell all over again.

Should I uninstall postgis, do db_upgrade and install it again? Go back to 9.2.3?

For the moment I switched back to 9.2.3 and opened an issue on the homebrew. I hope somebody can help me.

brew switch postgresql 9.2.3
brew switch json-c 0.10

Not sure what will break, because something must have needed json-c 0.11 in the first place. At least for now I am good. I hope.

[UPDATE] Nope. It only partly works now. The errors I saw were:

after brew switch json-c 0.10 :

PG::UndefinedFile: ERROR: could not load library "/usr/local/Cellar/postgresql/9.2.3/lib/rtpostgis-2.0.so": dlopen(/usr/local/Cellar/postgresql/9.2.3/lib/rtpostgis-2.0.so, 10): Library not loaded: /usr/local/lib/libjson-c.2.dylib Referenced from: /usr/local/opt/liblwgeom/lib/liblwgeom-2.1.1.dylib Reason: image not found

after brew switch json-c 0.11

PG::UndefinedFile: ERROR: could not load library "/usr/local/Cellar/postgresql/9.2.3/lib/postgis-2.0.so": dlopen(/usr/local/Cellar/postgresql/9.2.3/lib/postgis-2.0.so, 10): Symbol not found: _json_tokener_errors Referenced from: /usr/local/Cellar/postgresql/9.2.3/lib/postgis-2.0.so Expected in: /usr/local/lib/libjson.0.dylib in /usr/local/Cellar/postgresql/9.2.3/lib/postgis-2.0.so

So I was stuck. Reverting to the old version did not fix it.

I was able to rectify my situation by doing a "clean" install of postgresql92, which installed postgresql 9.2.8.

brew install postgresql92
brew link --overwrite postgresql92
brew install postgis20
launchctl unload ~/Library/LaunchAgents/homebrew.mxcl.postgresql.plist
launchctl load ~/Library/LaunchAgents/homebrew.mxcl.postgresql.plist

And now I am good. On version 9.2.8. Hehe.

conclusion

I have no clear conclusion or solution. Somehow my json-c got upgraded, which messed up my postgis installation. I guess installing the new version of postgis messed up my old version (since one file was now linked against json-c 0.11 and the other against json-c 0.10).

However I did find a clean solution: upgrading to postgresql92 and postgis20, now nicely linked against json-c 0.11.

If you encounter the same error: switch your json-c version before trying anything else, and hopefully you will be good to go (no need to upgrade).

More ...
News
[rails 4.1] when the rails command just hangs

For one of my projects I am using rails 4.1 (bleeding edge! yeah :) ) and I suddenly noticed that, after opening my laptop in the morning, my normal rails commands, like

$> rails c
$> rails g migration Bla name description some_more_fields

just ... hung, and nothing happened??? Like they were waiting for further input. Upon closer investigation, I guessed that the connection to the spring process was lost or corrupt (I move between networks a lot? maybe that could explain it).

For those unaware, as I was: spring is a Rails application preloader. It speeds up development by keeping your application running in the background, so you don't need to boot it every time you run a test, rake task or migration. Of course, when that connection is lost or corrupt, it hangs.

A simple

$> spring stop

stops the spring server, after which any rails command will restart it automatically. Fixed :)

More ...
News
[rails] using foundation 5 without installing bower

Starting with ZURB Foundation 5, they use Bower to distribute the assets, and in their "getting started" guide they propose to install bower.

I have not yet installed bower myself, but there is a really easy alternative: use rails-assets.org.

At the top of your Gemfile add a source line:

source 'https://rubygems.org'
source 'https://rails-assets.org' ## <---- add this line

and then add the gem

gem 'rails-assets-foundation'

In your application.js add

//= require foundation

And in your application.css add

*= require foundation

Done! :)

More ...
News oracle
[oracle] changing the value of a sequence

According to the oracle documentation, to change the value of a sequence you have to drop and recreate it, using the following commands:

DROP SEQUENCE table_name_seq;
CREATE SEQUENCE table_name_seq START WITH 12345;

But there are some easy ways to change the value of an existing sequence too.

If you want to increment the current value by 500, you can just use

select your_sequence_name.nextval from dual connect by level <= 500;

If you want to decrement it, you can do that as follows:

alter sequence id_sequence increment by -500;
select id_sequence.nextval from dual;
alter sequence id_sequence increment by 1;

(of course this can also be used to increment the value, but in that case the connect by level trick is easier)

More ...
News activerecord rails4 ruby on rails
[rails 4] add a reference to a table with another name

The default way in rails 4 to add foreign keys is simply

add_reference :people, :user

And this will add a column user_id to the people table, if the users table exists. I have been looking at how this is handled in the rails code, and it is really straightforward.

Note that standard rails does not really do anything for referential integrity: it creates the correct columns, and an index if so specified.

But we use the schema_plus gem, for two reasons:

  • it does handle referential constraints correctly (on the database level)
  • it allows you to specify the creation of views in a migration more cleanly

So, with schema_plus, if you write something like:

add_reference :people, :owner

and there is no owners table, this will give an error.

So instead you need to write:

add_reference :people, :owner, references: :users

This will correctly generate an owner_id which is a foreign key to users(id).

If you want to create a self-referential link, that is really easy. Just write

add_reference :people, :parent

This will create a parent_id column referencing people(id). Awesome :)

For completeness: add_reference adds a reference column to an already existing table. The same can be done when creating (or changing) a table, with the following syntax:

create_table :people do |t| 
  t.references :owner, references: :users
  t.references :parent
end

So, in short, if you were not using the schema_plus gem already, start doing so now :)

More ...
News rails4 ruby on rails
How to clean assets in rails 4

Gentle reminder, do not forget, in rails 4

rake assets:clean

seems to work, but actually does nothing. That is not entirely true: it only cleans the old assets, leaving the three most recent versions intact. So it is like a mild cleaning, a throw-away-the-garbage clean, a bring-those-old-clothes-you-never-wear-to-the-thriftstore clean.

But sometimes that does not cut it. Sometimes, don't ask me why, building my assets does not seem to work; my code is just not being picked up. Then you need to use brute-force cleaning (throw everything out). Run

rake assets:clobber

to actually clean the assets. The logic or meaning of the name is lost on me (clobber?), but this works.

More ...
Technology ruby windows ruby on rails
Run rails 2.3.18 using ruby 1.8.7 on windows server 2012

For my current job we have two rails 2.3.5 sites, of which I already successfully upgraded one to rails 4. For the other we still need to start the migration, and in the meantime we were asked to install new windows servers to run the rails servers on (let's not digress into why they chose windows; in a business environment Windows servers are still preferred, and let's be honest: at first sight they are easier to maintain and manage than *nix servers).

So whatever: I had to install the rails 2.3.5 site on a new Windows 2012 server.

This proved problematic, since the new ruby 1.8.7 comes with the new rubygems, which does not play nicely with rails 2.3.5.

So step 1: install the old ruby 1.8.7 (p302) and the old gems, and run the rails server. This worked.

But then I saw this one thing I really needed to improve. So I migrated the project to the latest 1.8.7 and rails 2.3.18, using bundler for gem dependencies. On my development box (a Macbook Pro) this worked like a charm. So then I deployed this back on the server, and the following happened:

C:/Ruby187/lib/ruby/1.8/pathname.rb:290:in `[]': no implicit conversion from nil to integer (TypeError)
    from C:/Ruby187/lib/ruby/1.8/pathname.rb:290:in `chop_basename'
    from C:/Ruby187/lib/ruby/1.8/pathname.rb:343:in `cleanpath_aggressive'
    from C:/Ruby187/lib/ruby/1.8/pathname.rb:331:in `cleanpath'
    from C:/Ruby187/lib/ruby/gems/1.8/gems/rails-2.3.18/lib/rails/rack/log_tailer.rb:9:in `initialize'
    from C:/Ruby187/lib/ruby/gems/1.8/gems/rack-1.1.6/lib/rack/builder.rb:54:in `new'
    from C:/Ruby187/lib/ruby/gems/1.8/gems/rack-1.1.6/lib/rack/builder.rb:54:in `use'
    from C:/Ruby187/lib/ruby/gems/1.8/gems/rack-1.1.6/lib/rack/builder.rb:73:in `call'
    from C:/Ruby187/lib/ruby/gems/1.8/gems/rack-1.1.6/lib/rack/builder.rb:73:in `to_app'
    from C:/Ruby187/lib/ruby/gems/1.8/gems/rack-1.1.6/lib/rack/builder.rb:71:in `inject'
    from C:/Ruby187/lib/ruby/gems/1.8/gems/rack-1.1.6/lib/rack/builder.rb:73:in `each'
    from C:/Ruby187/lib/ruby/gems/1.8/gems/rack-1.1.6/lib/rack/builder.rb:73:in `inject'
    from C:/Ruby187/lib/ruby/gems/1.8/gems/rack-1.1.6/lib/rack/builder.rb:73:in `to_app'
    from C:/Ruby187/lib/ruby/gems/1.8/gems/rails-2.3.18/lib/commands/server.rb:95
    from script/server:3:in `require'
    from script/server:3

What was going on here? Apparently rails/rack/log_tailer.rb contains the following code:

class LogTailer
  def initialize(app, log = nil)
    @app = app

    path = Pathname.new(log || "#{::File.expand_path(Rails.root)}/log/#{Rails.env}.log").cleanpath

And the cleanpath somehow crashes. I tried googling this, to no avail.

Of course:

  • rails 2.3.18: who uses that still?
  • ruby 1.8.7 is deprecated
  • and deploying on windows servers

So I was on my own. I tracked the error down to cleanpath_aggressive, which calls chop_basename recursively to remove superfluous . and .. redirections.

I am guessing the problem is that on Windows a path starts with a drive letter, like D:\ or C:\, which messes up the termination of the cleanpath_aggressive loop.
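My reading of the traceback: chop_basename ends with something like path[0, path.rindex(base)], and when File.basename and rindex disagree about the separators (as they can with Windows-style paths), rindex returns nil and String#[] raises exactly this TypeError. A minimal reproduction of just that failure mode (my own sketch, not the actual pathname.rb code):

```ruby
# String#[] with a nil length raises the TypeError from the
# traceback; that is what happens inside chop_basename when
# rindex fails to find the basename in the path string.
path = "C:/Ruby187/lib"
idx  = path.rindex("something-not-in-the-path")  # => nil
begin
  path[0, idx]
rescue TypeError => e
  puts e.class  # TypeError
end
```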

Instead of really diving in, I noticed that the path handed to cleanpath in my case did not need any cleaning, and furthermore, an uncleaned path would still work.

So I added an initializer config\cleanpath_bug_fix.rb with the following code:

if RUBY_PLATFORM =~ /mingw/
  # On Windows, Pathname#cleanpath crashes on Windows-style paths;
  # instead of fixing it thoroughly, just skip the cleaning
  class Pathname
    def cleanpath_aggressive
      self
    end
  end
end
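A quick way to convince yourself the no-op override does what you want (here without the platform guard, purely for illustration):

```ruby
require 'pathname'

class Pathname
  # same no-op as in the initializer: skip the aggressive cleaning
  def cleanpath_aggressive
    self
  end
end

p = Pathname.new("C:/app/./log/../log/test.log")
puts p.cleanpath  # the path comes back unmodified
```

cleanpath delegates to cleanpath_aggressive (unless you ask it to consider symlinks), so the override makes it a pass-through.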

Now my Rails 2.3.18 app, using Ruby 1.8.7p374, runs on Windows Server 2012 R2. Woot ;)

More ...
Technology displaysleep osx
mac osx: computer does not go to sleep anymore?

I had this weird thing: it seemed, since my update to mavericks, that my macbook pro did not go to sleep anymore. Well, when on power supply, my computer never goes to sleep, but my display does.

I know that some applications, like full-screen video, make sure the screen does not go to sleep even when I am doing nothing (luckily). So I went on a hunt to find which application was keeping the screen awake.

It turns out there is a really easy way to see that; in your terminal, type:

> pmset -g

And in my case the output looked like this:

[system] ~/work/git/on_the_spot (master) > pmset -g
Active Profiles:
Battery Power -1
AC Power -1*
Currently in use:
 standbydelay 4200
 standby 1
 womp 1
 halfdim 1
 hibernatefile /var/vm/sleepimage
 darkwakes 1
 gpuswitch 2
 networkoversleep 0
 disksleep 10
 sleep 0 (sleep prevented by backupd)
 autopoweroffdelay 14400
 hibernatemode 3
 autopoweroff 1
 ttyskeepawake 1
 displaysleep 10 (display sleep prevented by Google Chrome)
 acwake 0
 lidwake 1

The important line: display sleep prevented by Google Chrome. Awesome, even I understand that :) So now the only thing I needed to do was go through my verrrry long list of tabs and check which one was keeping the display awake (I am an avid tab-collector). Apparently the Screenhero homepage (Screenhero seems like a very interesting option for remote pair programming, which is why I kept it open) blocks the display from sleeping for some reason. That makes sense when it is in use, not so much on the homepage. Closed the tab, and shabam ... fixed :)
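If the pmset -g summary is not detailed enough, pmset can also list the individual power assertions together with the process holding each one (this is standard macOS pmset; the exact output format varies by OS version):

```shell
# show all active power assertions and their owning processes
pmset -g assertions

# or just the ones that prevent display/system sleep
pmset -g assertions | grep -i "prevent"
```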

More ...
Technology test coverage testing ruby on rails
complete coverage with SimpleCov

When using SimpleCov in a very ill-covered project, I got amazingly good results: SimpleCov just does not count files that are never loaded. So files that were never touched by our test suite were simply ignored. While I understand that approach, it did not feel right. I want to measure absolute progress, and I want to know how bad it really is.

So, on a mission to count all uncovered files with SimpleCov, I came across an issue in their GitHub repo that mentioned two solutions:

  • in a three-year-old comment, a solution that sets a starting baseline and merges it in after the tests. Unfortunately, it did not work completely: all files were added, but with complete coverage. Doh.
  • a pull request claiming to fix it: while it did add all files, files that previously had 100% coverage now no longer did.

Let's get technical. The first solution added a baseline with the value nil for every line; the second added the value 0 for each line. nil is what SimpleCov uses internally to mark a "never" line: a line that never matters, such as an empty line, a comment, or the begin/end of a class.

A 0 marks a line that is not covered (and a 1 a line that was covered). When merging lines for files that had coverage, I assume the baseline's 0 takes precedence over the coverage-calculated nil, so we end up with non-covered lines in the merged result. Bummer.
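To make the merge problem concrete, here is a toy model of the per-line states and a max-style merge (my own illustration; SimpleCov's real merge logic is more involved):

```ruby
# SimpleCov per-line states: nil = "never" line, 0 = uncovered, 1+ = covered
covered  = [nil, 1, 0, nil]  # what an actual test run produced
baseline = [0, 0, 0, 0]      # naive baseline: everything marked uncovered
# a merge that simply takes the highest known value lets the
# baseline's 0 clobber the run's nil "never" lines:
merged = covered.zip(baseline).map { |run, base| [run, base].compact.max }
p merged  # => [0, 1, 0, 0] -- the "never" lines now count as uncovered
```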

So I added my spec_helper.rb as follows:

if ENV["COVERAGE"]
  require 'simplecov'
  SimpleCov.start 'rails'
  SimpleCov.coverage_dir 'coverage/rspec'

  # build a baseline: every Ruby file in the project, each line marked
  # nil ("never" line) or 0 (relevant but uncovered)
  all_files = Dir['**/*.rb']
  base_result = {}
  all_files.each do |file|
    absolute = File::expand_path(file)
    lines = File.readlines(absolute, :encoding => 'UTF-8')
    base_result[absolute] = lines.map do |l|
      l.strip!
      # blank lines, lone `end`s and comments never count as coverable
      l.empty? || l =~ /^end$/ || l[0] == '#' ? nil : 0
    end
  end

  SimpleCov.at_exit do
    coverage_result = Coverage.result
    covered_files = coverage_result.keys
    covered_files.each do |covered_file|
      base_result.delete(covered_file)
    end
    merged = SimpleCov::Result.new(coverage_result).original_result.merge_resultset(base_result)
    result = SimpleCov::Result.new(merged)
    result.format!
  end
end

So, before the tests run, I create a "baseline" containing, for every Ruby file found (I might need to filter that list later), a nil for each line that is empty, a comment, or a lone end, and a zero otherwise. After the tests have run, I remove the covered files from the baseline, and what is left is merged into the coverage result to produce the final result.

Now maybe I should try to translate that into a pull request :)

More ...