Blog
what did i learn today

So, unfortunately we have to deploy our rails projects on servers which are managed by our clients, and that means windows servers. Luckily this is no longer a big deal, but I develop on mac and mostly deploy on linux machines (which align nicely). A new deployment on windows almost always adds some surprises, though. We deploy using ruby 2.4, somewhere in our Gemfile we use eventmachine, and on the most recent deployment I suddenly got this weird error:

Unable to load the EventMachine C extension; To use the pure-ruby reactor, require 'em/pure_ruby'

Not sure what they mean here: do I need to adapt the gem code? But luckily some googling quickly turned up a solution. Apparently the precompiled windows build of the eventmachine gem is not (yet) updated for ruby 2.4 or 2.5, and the proposed solution is to do

gem uninstall eventmachine  
gem install eventmachine --platform=ruby 

instead. This sounds great. In theory. But in practice? I have a bundled Gemfile, and after every deploy/bundle I would have to uninstall the eventmachine-1.2.7-x64-mswin32 gem again. I do have a script that I run on windows to deploy, so I could easily add

gem uninstall -aIx eventmachine 
gem install eventmachine --platform=ruby

(the -aIx removes all eventmachine instances, ignoring dependencies)
but this feels a little counter-productive (wrong?), and it did not always seem to work reliably.

So I was looking for a way to describe in my Gemfile how to install the gem for the correct platform. Unfortunately platform has a different meaning inside a Gemfile, and the ruby platform is anything but windows.
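
For reference, this is what platform does mean inside a Gemfile: it only decides whether a gem is included on a given platform, not which variant of the gem gets installed (hypothetical example):

# a platforms block selects *if* these gems are installed on the
# current platform, not *which* precompiled variant of a gem is used
platforms :mswin, :mingw, :x64_mingw do
  gem 'tzinfo-data'
end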

But then I had a moment of inspiration: why not install the gem from github, pinned to the correct version?

So in my Gemfile I wrote

gem 'eventmachine', '1.2.7', git: 'git@github.com:eventmachine/eventmachine', tag: 'v1.2.7'

This installs the required version directly from git (so the C extension is compiled locally), it works, and it does not break my deployment script/routine.

We encountered a strange error using WiceGrid: on some occasions, when paginating to the second page, we lost the filtering, but not for all columns.

WiceGrid lets you define columns which are only rendered when creating html or when exporting to csv. Specifically, in some cases we want to show some pretty html when rendering html, but just the plain text when exporting to csv. For instance:

  g.column name: 'Status', attribute: 'status', in_csv: false do |plan|
    render 'grid_status_label', plan_request: plan, history: true
  end
  g.column name: 'Status', attribute: 'status', in_html: false

When rendering html, this renders a partial called grid_status_label; when rendering csv, it just shows the status text.

However, defining the same column twice also has an effect on the filter. Either because we "exclude" one of the definitions of the column, or because the column is simply defined twice -- I am not sure. The easy way out would be to know whether we are rendering csv before defining the columns, so we do not define the column twice at all and do not confuse WiceGrid.

Luckily, we can ask the @grid if it is outputting csv. So if in your controller you write something like

@grid = initialize_grid(SomethingWithAStatus, ...) 

in the view you can just ask @grid.output_csv? to know if we are currently exporting to csv instead of rendering html.

So with that knowledge, in your view you can write

 <%= grid(@grid) do |g|  
       [ .. your other columns ..]            

       g.column name: 'Status', attribute: 'status', in_csv: false do |plan|
         render 'grid_status_label', plan_request: plan, history: true
       end
       if @grid.output_csv?
         g.column name: 'Status', attribute: 'status', in_html: false
       end
     end -%>

... and pagination while filtering on status will work!!

I really love(d) using WiceGrid, but unfortunately it is no longer actively maintained. There is a somewhat active fork, but it only works for rails 5, and I am not entirely sure what the status is there. So this is at least a fix that lets us keep using WiceGrid in our current projects for now.

Not quite sure how I would like to proceed with WiceGrid, because the code-base is really large and there are some things I do not really like: having to use erb, the dsl is sometimes a bit heavy, there is no test-coverage (there is a separate test-project, but mmmm), and the layout is pretty much fixed. On the other hand it has proven extremely easy, robust and extensible (you can define your own column-filter and render types). I will probably try to fork it or restart with something similar.

The on_the_spot gem allows inline editing of data. In general this is something I prefer over forms: I do not want to switch to a new page to edit something, I want to edit it where I see it (I understand there are some very good cases for the standard show/edit pages).

So a very long while ago I created a small gem to edit data inline. It relies on the jEditable javascript library, which is still working fine.
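
For context, using it in a view is a one-liner, roughly like this (quoting my own README from memory, so treat it as a sketch):

<%= on_the_spot_edit @user, :name %>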

But how do you style the dynamically injected form?

In my projects, I use the translation files to style the form; e.g. in on_the_spot.en.yml I write:

en:
  on_the_spot:
    ok: <button class="btn btn-primary btn-sm">Ok</button>
    cancel: <button class="btn btn-default btn-sm">Cancel</button>
    tooltip: Click to edit
    access_not_allowed: Access not allowed 

This will make sure the buttons are styled correctly. But if you try this, the input is too narrow, and everything is just squished together.

So add this little sprinkle of css to make everything look a little better:

.on_the_spot_editing {
  input, select {
    width: auto !important;
    height: 30px !important;

    margin-right: 5px !important;

    //display: block;

    padding: 6px 12px;
    font-size: 14px;
    line-height: 1.42857143;
    color: #555555;
    background-color: #fff;
    background-image: none;
    border: 1px solid #ccc;
    border-radius: 4px;

    -webkit-box-shadow: inset 0 1px 1px rgba(0, 0, 0, 0.075);
    box-shadow: inset 0 1px 1px rgba(0, 0, 0, 0.075);
    -webkit-transition: border-color ease-in-out 0.15s, box-shadow ease-in-out 0.15s;
    -o-transition: border-color ease-in-out 0.15s, box-shadow ease-in-out 0.15s;
    transition: border-color ease-in-out 0.15s, box-shadow ease-in-out 0.15s;
  }
  textarea {
    width: 80%;
  }

  .btn {
    margin: 1px !important;
  }
}

What inline editing solution are you using with rails?

I am currently contemplating switching to vue.js for javascript sprinkles like this.

If you start using FontAwesome 5 in a Turbolinks project, you will quickly notice that the icons disappear after the first turbolinks page change. So how can we fix that? I did not immediately find a reference in the FontAwesome documentation, but luckily google proved helpful and I found this issue.

Inside the issue I found the fix, which I applied, and it worked for me. I created a new file app/assets/javascripts/fix_fontawesome_reload.js with the following content:

document.addEventListener("turbolinks:before-render", function(event) {
    FontAwesome.dom.i2svg({
        node: event.data.newBody
    });
});

which was then automatically included in application.js (because I have the require_tree line).

I just read this article, titled Useful RSpec trick for testing method with arguments, which shows a nifty way to write a repetitive test-suite where you want to verify that different arguments give the correct/expected result.

The method proposed by the author looks like this:

RSpec.describe Daru::Index do
  let(:index) { described_class.new [:a, :b, :c, :d] }

  describe '#pos' do
    subject { index.method(:pos) }

    context 'by label' do
      its([:a]) { is_expected.to eq 0 }
      its([:a, :c]) { is_expected.to eq [0, 2] }
      its([:b..:d]) { is_expected.to eq [1, 2, 3] }
      # .. and so on
    end
  end
end

This looks very readable and compact! The solution makes heavy use of subject and let, and relies on its to make it work. The author then proceeds to list a few more basic/default rspec approaches, but does not mention how I generally write tests like that.

Not sure if it is more readable or not, but imho it is in general a lot less work, and it is usable for all kinds of repetitive tests.

Let's see how it looks:

RSpec.describe Daru::Index do
  let(:index) { described_class.new [:a, :b, :c, :d] }

  describe '#pos' do
    context 'by label' do 
      [ [[:a],     0        ],
        [[:a, :c], [0, 2]   ],
        [[:b..:d], [1, 2, 3]],
        # and so on
      ].each do |args, expected|
        it "returns #{expected.inspect} for #{args.inspect}" do
          expect(index.pos(*args)).to eq(expected)
        end
      end
    end
  end
end 

(note: not actually sure if this code works, but you get the gist of it I hope)

So I use ruby meta-programming to define a whole test-suite when the file is loaded, and when I need to test another input/output pair, I can just add it to the list -- no need to copy-paste.

Render bug with ul in chrome 62?

In our GIS web-application we use leaflet with the superb sidebar-v2 component to have some fold-out pages and command-icons in one place. But we suddenly encountered a bug when users/developers started upgrading to chrome 62: a very weird rendering bug. The icons were suddenly no longer centered inside the ul > li items; only the top half was visible, as if the icons were shifted down.

To get a little more technical: the sidebar is a bar filled with icons, which are actually an unordered list, and to center the icons inside the foreseen space we have, in short, something like:

ul > li { height: 40px; }
ul > li > a { 
  height: 100%; 
  line-height: 40px;
}

which is in itself a pretty standard way to vertically align "text" (icons in our case). So why is it broken now? Chrome 62 seems to add some kind of padding/margin, except that setting those explicitly has no effect.

If I removed the line-height the icons came into view correctly, but they were not centered in the foreseen space anymore.

Luckily, I was not the first to encounter this issue: the bug was already reported to google, and a fix is underway. Unfortunately (?) the bug was not severe enough to halt the release of version 62, but it will be fixed in 63.

Luckily fixing the layout is quite simple, just add

 ul > li { 
   list-style-type: none;
 }

and then it renders correctly over all platforms again.

So to recap: if your unordered lists have hidden overflow and are suddenly broken in chrome 62, this might help you as well.

I have an audit-log table with 300M+ rows, and while most messages just confirm that we performed some very repetitive action, I was thinking of an easy way to compress the data: when a month is over, instead of keeping 1000 lines per day with the message "CHECKED IT", replace them with 1 line saying "CHECKED IT [logged X times on day Y]". I wrote the code (sketched at the end of this post), and that was pretty easy. But then came the moment I had to run it, and the simplest query, checking the number of messages:

select count(*) as count
from audit_logs
where created_at < '2016-01-01' 
  and created_at > '2015-12-01'       
  and message = 'Checking for new map-requests'  
  and organization_id = 3

took 15 seconds. So then I checked my indexes: I only had an index on organization_id.

I read an article on the best way to create an index when searching on ranges: it suggested a combined index, with the column used in the equality comparison first and the range column second. So in my case I added a migration with the following addition

add_index :audit_logs, [:organization_id, :created_at]  

Rerunning the query showed an immense improvement: it now took a mere 20ms :) :) :)
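
For completeness, the compression code itself was roughly like the following (a simplified sketch: the AuditLog model, the hard-coded organization and the date handling are just for illustration):

message = 'Checking for new map-requests'
scope = AuditLog.where(organization_id: 3, message: message)
                .where(created_at: Time.zone.parse('2015-12-01')...Time.zone.parse('2016-01-01'))

# count the repetitive messages per day, then replace each day's lines
# with a single summary line
scope.group("created_at::date").count.each do |day, count|
  day = day.to_date
  AuditLog.transaction do
    scope.where(created_at: day.beginning_of_day..day.end_of_day).delete_all
    AuditLog.create!(organization_id: 3,
                     created_at: day,
                     message: "#{message} [logged #{count} times on #{day}]")
  end
end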

Before, with iPhoto, it was pretty easy to extract files: just drag and drop, and they kept the exact same date and time as when they were imported/taken. Now, with the Photos app, this is no longer the case. WTF. A dropped file gets the current date and time. While I can understand why this is the case, it is not very convenient.

Luckily there is another way to copy files out of your Photos library.

If you visit the Pictures folder, you can see a (in my case very large) file called Photos Library.photoslibrary. If you right-click it and select Show Package Contents, you can browse the individual files within the library. The images can be found in the Masters folder. The files/folders are organized by year/month/day, which might not make sense to you, but I find it very useful.

I copied the entire Masters folder to my external drive, and then every image-file retained its original timestamp (yes!).
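
If you prefer doing this from a script, something like the following preserves the timestamps as well (the paths are just examples):

require 'fileutils'

# copy the originals to the external drive, preserving modification times
FileUtils.cp_r('/Users/me/Pictures/Photos Library.photoslibrary/Masters',
               '/Volumes/external-drive/photos-backup', preserve: true)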

Now I can free some diskspace without any hesitation :)

... and then I encountered a bug in Postgis 2.0.4. Fuck. ST_GeomFromGeoJSON rounds my z-values to integers, effectively making them useless.

I have a little example demonstrating this, so I set out to submit a bug report. Unfortunately the bug-tracker requires an OSGeo account, and to get that I need to write an email to get a "mantra" (see the form). Done that.

So here is what I did (note: just random numbers):

insert into original_be_geometries(originally_type, originally_id, geom) 
values ('Test', 1, ST_setSRID(ST_GeomFromGeoJSON('{"type":"LineString","coordinates":[[1.23445,2.234455,3.33445],[4.12345,5.12345,6.56789],[7.012,8.111,9.0001]]}'), 31370) ); 

If I then do something like select st_astext(geom) from original_be_geometries where originally_type = 'Test'; I get

LINESTRING Z (1.23445 2.234455 3,4.12345 5.12345 6,7.012 8.111 9)

Fuck. If instead I use ST_GeomFromText, it does work:

insert into original_be_geometries(originally_type, originally_id, geom) 
values ('Test', 2, ST_setSRID(ST_GeomFromText('LINESTRING Z (1.23445 2.234455 3.33445, 4.12345 5.12345 6.56789, 7.012 8.111 9.0001)'), 31370) ); 

returns the expected geometry

LINESTRING Z (1.23445 2.234455 3.33445,4.12345 5.12345 6.56789,7.012 8.111 9.0001)

So I am going to switch my workflow from exporting geojson to exporting WKT, which I can then import. Now if only oracle supported 3d geometries when exporting to WKT, this would be easy :eye-roll: :le-sigh: :rolls-up-sleeves: :)

[UPDATE] My team-mate has Postgis 2.2.2, and there this just works. I did not find the bugfix in the changelog, but this is good news. Damn. Now I have to upgrade my postgresql/postgis. Using brew. OMG! Last time I lost a weekend trying to get that fixed, so I think I will still write my own sdo2wkt3d instead ;) (and upgrade later).

Let me quickly introduce WiceGrid, in case you do not know it yet: it is a super-gem that allows you to easily show a list/grid of items, with easy filtering/searching/pagination.

For rails there is, afaik, no better alternative. There are some javascript/jquery driven dynamic grids, but for me the big advantage is that with WiceGrid all the work is done server-side, which is ideal when handling large sets of data.

Since you can just render html in any column, we do, for instance, the following for our KLIP platform:

[screenshot: the plan request grid on our KLIP platform]

In the first column we show our internal identifier and the external identifier. In code this looks like this:

g.column name: 'Id', in_csv: false do |pr|
  render 'title_with_info', plan_request: pr
end

and the partial title_with_info looks like:

%h4
  = plan_request.ident
%p.text-muted.small
  = plan_request.maprequest_id

But now the problem is: when filtering, how can we automatically search both fields? WiceGrid automatically handles one field, but not two. Luckily, WiceGrid allows us to define custom filter types. What we want is:

  • we want the filter to just look like a standard string field
  • we want to build a query which will search for ident or maprequest_id.

Adding your own custom filter types is not entirely clear from the documentation; I had to take a look at the code to fully understand it. That is why I decided to write it down in detail here.

It takes three steps:

  • define a class to create the correct filter (a conditions generator)
  • define a custom filter_type inside WiceGrid, using your custom class
  • use the class in the column definition

Create Conditions Generator

Inside lib/wice/columns, add a new file called conditions_generator_column_plan_request_identifier.rb with the following content:

module Wice
  module Columns
    class ConditionsGeneratorColumnPlanRequestIdentifier < ConditionsGeneratorColumn  #:nodoc:

      def generate_conditions(table_alias, opts)   #:nodoc:
        if opts.kind_of? String
          string_fragment = opts
          negation = ''
        elsif (opts.kind_of? Hash) && opts.has_key?(:v)
          string_fragment = opts[:v]
          negation = opts[:n] == '1' ? 'NOT' : ''
        else
          Wice.log "invalid parameters for the grid string filter - must be a string: #{opts.inspect} or a Hash with keys :v and :n"
          return false
        end
        if string_fragment.empty?
          return false
        end

        table_name = @column_wrapper.alias_or_table_name(table_alias)
        op = ::Wice.get_string_matching_operators(@column_wrapper.model)
        search_value = "%#{string_fragment}%"

        [
            " #{negation}  (#{table_name}.ident #{op} ? OR #{table_name}.external_id #{op} ?)",
            search_value, search_value
        ]
      end
    end
  end
end

This class is almost entirely copied from the standard string-column conditions generator; I only generate a different condition at the end, comparing two fields with an OR. This way a row is found if either the ident or the external_id matches the search value.
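
To make this concrete: for a search value "abc" (and assuming the table is called plan_requests; on postgres the matching operator is typically ILIKE), generate_conditions returns a fragment roughly like:

[" (plan_requests.ident ILIKE ? OR plan_requests.external_id ILIKE ?)", "%abc%", "%abc%"]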

Define filter type in config

In config/wice_grid_config.rb add the following:

Wice::Defaults::ADDITIONAL_COLUMN_PROCESSORS = {
  plan_request_identifier_filter:  ['ViewColumnString', 'Wice::Columns::ConditionsGeneratorColumnPlanRequestIdentifier']
}

We just use the standard ViewColumnString, which renders a plain string filter.

Use the filter_type in the column

To enable the filter on the column, we just have to write the following:

g.column name: 'Id', attribute: 'ident', filter_type: :plan_request_identifier_filter, in_csv: false do |pr|
  render 'title_with_info', plan_request: pr
end

When you encounter SSL errors while installing gems on windows, the easiest workaround is to change your sources from https://... to http://.... But... I am an avid user/fan of rails-assets.org, and today I suddenly started getting the error on their domain.

So at first I feared that rails-assets had stopped as foreseen (in this ticket), but the site was still reachable. Actually, they switched (apparently just two days ago) to a new maintainer, which is awesome: the future of rails-assets is safe for now.

But there is no rose without a thorn: rails-assets now enforces TLS (which is actually a good thing), so it is always SSL, and gem cannot ignore SSL anymore. Doh! So I was stuck on windows.

I tried to make the gem command ignore ssl errors regardless, by creating c:\ProgramData\gemrc with the following content:

---
:ssl_verify_mode: 0 

and that partly worked: I was now able to fetch the index, but then I received the SSL error on the first gem retrieved from rails-assets, so I was still not in the clear. I had to make the SSL verification actually work!

Fortunately, after some googling this proved easier than expected! The root cause is that ruby on windows (or openssl) has no default root certificate. I found a good description of how to fix that on windows.

I used the boring/easy/manual approach, in short:

  • download the cacert.pem file from http://curl.haxx.se/ca/cacert.pem. I saved this to my ruby folder (e.g. c:\ruby21).
  • add an environment variable SSL_CERT_FILE, so ruby can pick it up. E.g. in your command prompt type set SSL_CERT_FILE=C:\ruby21\cacert.pem. To make this a permanent setting, add this to your environment variables.
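
To verify that the SSL verification now actually works (without the gemrc hack), a quick test from ruby itself:

require 'net/http'

# raises OpenSSL::SSL::SSLError if the root certificates are still not found
puts Net::HTTP.get(URI('https://rails-assets.org'))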

In a system I am helping to develop, a person can be linked to a myriad of things, including themselves, so we use a relation table PersonRelation, defined as follows:

class PersonRelation < ActiveRecord::Base
  belongs_to :person
  belongs_to :personifiable, :polymorphic => true
  belongs_to :person_relation_type
end

So a person can be linked to different "personifiable" things, and the meaning of the relation can differ (e.g. a person could be an owner or a renter -- expressed by the relation type).

In our datamodel, a person is actually a "legal person", so it can also be an organisation, and an organisation can have contacts. Logically, a contact belongs to one or more organisations, by following the association in the reverse direction.

Using rails 3 up until 4.0, we wrote the association as follows:

  has_and_belongs_to_many :contacts,
                          :join_table => "person_relations",
                          :class_name => "Person",
                          :foreign_key => "person_id",
                          :association_foreign_key => "personifiable_id",
                          :readonly => false,
                          :conditions => ["personifiable_type = ? and people.archived_at is null and person_relations.archived_at is null and person_relation_type_id=?", "Person", PersonRelationType::CONTACT],
                          :insert_sql => proc {|record| "INSERT INTO person_relations(person_id, personifiable_id, personifiable_type, person_relation_type_id, created_at, updated_at) VALUES('#{self.id}', '#{record.id}', 'Person', 6, current_timestamp, current_timestamp)" }

  has_and_belongs_to_many :organisations,
                          :join_table => "person_relations",
                          :class_name => "Person",
                          :association_foreign_key => "person_id",
                          :foreign_key => "personifiable_id",
                          :readonly => false,
                          :conditions => ["personifiable_type = ? and people.archived_at is null and person_relations.archived_at is null and person_relation_type_id=?", "Person", PersonRelationType::CONTACT],
                          :insert_sql => proc {|record| "INSERT INTO person_relations(person_id, personifiable_id, personifiable_type, person_relation_type_id, created_at, updated_at) VALUES('#{record.id}', '#{self.id}', 'Person', 6, current_timestamp, current_timestamp)" }

Pretty complicated, but it does the job :) However, we kept getting deprecation warnings, because :conditions, :insert_sql and :finder_sql are all deprecated and essentially removed in rails 4.1+. I kept postponing the upgrade because it seemed really hard to translate. But one suggestion in the error-message is to use has_many :through instead.

We already had the following statement in our Person model:

  has_many :person_relations

so for a simple linked model, we could just write

  has_many :parcels, through: :person_relations, source: :personifiable, source_type: 'Parcel'

and likewise, for our contacts, we just have to add a condition, but on PersonRelation, so we write:

  has_many :contacts,
           -> { where("person_relations.person_relation_type_id" => PersonRelationType::CONTACT)},
           through: :person_relations,
           class_name: "Person",
           source: :personifiable,
           source_type: 'Person'

I am not really happy with the explicit mention of person_relations in the condition -- this might impact chainability later on -- but I am not sure how I could handle that differently. For now this does the job really cleanly.

Now the problem is how to follow the reverse association (from the contacts to the organisations), and actually this also proves pretty simple. If I "reverse" the person-relations first, then we can use that as the through table:

  has_many :reverse_person_relations, as: :personifiable, class_name: 'PersonRelation'
  has_many :organisations,
            -> { where("person_relations.person_relation_type_id" => PersonRelationType::CONTACT)},
           through: :reverse_person_relations,
           class_name: "Person",
           source: :person
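
With those associations in place, both directions read like plain ActiveRecord (a hypothetical console session):

organisation = Person.first
organisation.contacts         # => the persons linked as contacts of this organisation
contact = organisation.contacts.first
contact.organisations         # => all organisations this person is a contact for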

At first I felt that the deprecation of :insert_sql, :delete_sql and :finder_sql would be an insurmountable hurdle, but it actually proved pretty simple to fix, and the end result is a lot easier to read. Nice :+1:

[oracle] avoiding SLOW sdo_aggr_union

There is this recurring problem we have in GIS: getting road-segments but wanting to show complete roads. The naive approach would be to do something like the following:

insert into street_geoms
select ro.rd_ro_ident, ro.rd_ro_name, ro.com_code, sdo_aggr_union(sdoaggrtype(ros.rd_ros_geometry, 0.005)) as geom
from rd_road ro, rd_ro_sec ros
where ros.rd_ro_ident = ro.rd_ro_ident
group by ro.rd_ro_ident, ro.rd_ro_name, ro.com_code;

For good measure: we have 45.000+ roads, with a total of 230.000+ road segments. So when that query started running and kept taking a long time, I started googling. Apparently there are two faster alternatives: SDO_AGGR_CONCAT_LINES and SDO_AGGR_SET_UNION. The first was really quick (it completed in minutes), but the result was completely wrong: complete segments were missing. The second might be quicker, but it was really hard to get an idea of its progress, and if it failed, everything would be lost (rolled back).

So I decided to write a little script and issue a sql statement for each single road, allowing me to track progress and giving me restartability. For each road I issued a statement like:

insert into street_geoms
select ro.rd_ro_ident, ro.rd_ro_name, ro.com_code, sdo_aggr_set_union(CAST(COLLECT(ros.rd_ros_geometry) AS mdsys.SDO_Geometry_Array),0.005) as geom
from rd_road ro, rd_ro_sec ros
where ros.rd_ro_ident = ro.rd_ro_ident
  and ro.rd_ro_ident = 1895101 
group by ro.rd_ro_ident, ro.rd_ro_name, ro.com_code;

I added some ruby code around it to track the progress and calculate the remaining time, just to have an idea. The first "large" road it stumbled upon literally took hours, and it only had to join 39 segments. A simple query showed I had 150+ roads with more segments, up to a maximum of 125 segments per road. I could not simply ignore those :) So this was not going to work either.

Why would this be so hard? I just want to throw all linestrings together into one geometry. How could I do that? Querying the geometries is really easy, so what if I joined the geometries outside of oracle? Would that be hard? No, because there is a simple solution: convert the geometries to WKT and join all the LINESTRINGs into one MULTILINESTRING. That is just simple string manipulation. I can do that ;)

I had some hiccups with this approach: handling the long strings proved a bit awkward (use CLOB instead), I had to call GC.start regularly to make sure the open cursors were released, and I had to make sure not to build a string literal that was too long (ORA-06550).

But in the end I was able to join the road-sections for the 45.000+ roads in approx 1.5h, which is not blindingly fast, but faster than one single SDO_AGGR_SET_UNION operation :) :)

For reference you can see the full code:

class StreetGeom < ActiveRecord::Base
  self.primary_key = 'rd_ro_ident'
end


def format_time(t)
  t = t.to_i
  sec = t % 60
  min  = (t / 60) % 60
  hour = t / 3600
  sprintf("% 3d:%02d:%02d", hour, min, sec)
end


def eta(count)
  if count == 0
    "ETA:  --:--:--"
  else
    elapsed = Time.now - @start_time
    # eta = elapsed * @total / count - elapsed;
    eta = (elapsed / count) * (@total - count)

    sprintf("ETA: %s", format_time(eta))
  end
end


all_roads = Road.count
geoms_to_calculate = all_roads - StreetGeom.count
@total = geoms_to_calculate

puts "Joining geometries for #{all_roads} roads [still #{geoms_to_calculate} to do]"


cntr = 1
@start_time = Time.now

done = 0


Road.order(:rd_ro_ident).each do |road|
  street_count = StreetGeom.where(rd_ro_ident: road.rd_ro_ident).count
  print "\rConverting #{cntr}/#{all_roads} [ #{eta(done)} ] "
  if street_count == 0
    print "..."
    $stdout.flush

    ## get all geometries in WKT format
    get_geoms_sql = <<-SQL
      select sdo_cs.make_2d(ros.rd_ros_geometry).get_wkt() as wkt_geom from rd_ro_sec ros where ros.rd_ro_ident = #{road.rd_ro_ident}
    SQL

    cursor = Road.connection.execute(get_geoms_sql)

    line_strings = []

    while row = cursor.fetch
      line_string = row[0].read.to_s
      # drop the leading "LINESTRING" keyword, keeping only "(x y z, ...)"
      line_strings << line_string[10..-1]
    end


    ## build one PL/SQL block; the join below splices "';\nwkt_str := wkt_str || '"
    ## between the linestrings, so the WKT is appended to wkt_str in chunks and
    ## no single string literal becomes too long (ORA-06550)
    insert_sql = <<-SQL
      DECLARE
        wkt_str clob;
      BEGIN
        wkt_str := 'MULTILINESTRING(#{line_strings.join(", ';\nwkt_str := wkt_str || '")})';
        insert into street_geoms(rd_ro_ident, name, com_code, geom)
        values (#{road.rd_ro_ident}, q'[#{road.rd_ro_name}]', '#{road.com_code}',
             sdo_util.from_wktgeometry(to_clob(wkt_str)) );
      END;
    SQL

    Road.connection.execute(insert_sql)
    done += 1
  else
    print "_"
  end

  cntr += 1

  # periodically cleanup GC so we release open cursors ...
  # to avoid ORA-1000 errors
  if (cntr % 50) == 0
    GC.start
  end
end

print "\n"
puts "\n\nDone!"

and I run this script in the rails environment as follows: rails runner lib\tasks\join_road_geometries.rb.

We are in the process of migrating an old GIS system. For our new systems we use POSTGIS, but this one still uses oracle. The data spans two countries: Belgium and the Netherlands. Our system does something awful: all data is stored in RD (the dutch coordinate system, using Oracle SRID 90112).

So how do we get data into the system? Belgian data is entered as Lambert 72 (oracle srid 327680) and then transformed to 90112.

Our client uses a customised viewer that shows the data either in RD or in Lambert 72. Now we want to switch to a more generic solution and show the data in WGS84. We are using oracle 11, so my initial process was the following:

  • extract belgian data from tables, convert back to 327680 (SDO_CS.transform(geom, 327680))
  • set the SRID to 31370 (which is the correct/best srid for belgium --it has the correct transformation to wgs84) as follows: update be_geoms bg set bg.geom.sdo_srid = 31370 (so without transformation)
  • for dutch data I just set it to 28992
  • and then I transform both to WGS84!

Easy! Done! Ready! However... I was not. The data was not positioned correctly. So I checked the definitions in MDSYS.CS_SRS for both 28992 and 31370, compared them to epsg.io, and lo and behold: both were wrong. So now I had to update them.

Updating EPSG:31370

delete from mdsys.cs_srs where srid=31370;
Insert into MDSYS.CS_SRS (CS_NAME,SRID,AUTH_SRID,AUTH_NAME,WKTEXT,CS_BOUNDS,WKTEXT3D) values ('Belge 1972 / Belgian Lambert 72',31370,31370,'IGN Brussels www.ngi.be/html-files/french/0038.html','PROJCS["Belge 1972 / Belgian Lambert 72", GEOGCS [ "Belge 1972", DATUM ["Reseau National Belge 1972 (EPSG ID 6313)", SPHEROID ["International 1924 (EPSG ID 7022)", 6378388.0, 297.0], -106.869,52.2978,-103.724,0.3366,-0.457,1.8422,-1.2747], PRIMEM [ "Greenwich", 0.000000 ], UNIT ["Decimal Degree", 0.0174532925199433]], PROJECTION ["Lambert Conformal Conic"], PARAMETER ["Latitude_Of_Origin", 90.0], PARAMETER ["Central_Meridian", 4.3674866666666667], PARAMETER ["Standard_Parallel_1", 51.1666672333333333], PARAMETER ["Standard_Parallel_2", 49.8333339], PARAMETER ["False_Easting", 150000.013], PARAMETER ["False_Northing", 5400088.438], UNIT ["Meter", 1.0]]',null,'PROJCS[
  "Belge 1972 / Belgian Lambert 72",
  GEOGCS["Belge 1972",
    DATUM["Reseau National Belge 1972",
      SPHEROID[
        "International 1924",
        6378388.0,
        297.0,
        AUTHORITY["EPSG", "7022"]],
      TOWGS84[-106.869,52.2978,-103.724,0.3366,-0.457,1.8422,-1.2747],
      AUTHORITY["EPSG", "6313"]],
    PRIMEM["Greenwich", 0.000000, AUTHORITY["EPSG","8901"]],
    UNIT["degree (supplier to define representation)", 0.0174532925199433, AUTHORITY["EPSG", "9122"]],
    AXIS["Lat", NORTH],
    AXIS["Long", EAST],
    AUTHORITY["EPSG", "4313"]],
  PROJECTION ["Lambert Conformal Conic"],
  PARAMETER ["Latitude_Of_Origin", 90.0],
  PARAMETER ["Central_Meridian", 4.3674866666666667],
  PARAMETER ["Standard_Parallel_1", 51.1666672333333333],
  PARAMETER ["Standard_Parallel_2", 49.8333339],
  PARAMETER ["False_Easting", 150000.013],
  PARAMETER ["False_Northing", 5400088.438],
  UNIT["metre", 1.0, AUTHORITY["EPSG", "9001"]],
  AXIS["X", EAST],
  AXIS["Y", NORTH],
  AUTHORITY["EPSG", "31370"]]');

... and this worked, and now my transformation for Lambert is correct!

Updating EPSG:28992

... proved to be a little trickier. I assumed I could just reuse the same method as for the belgian coordinate system (yes, I know, assume = ass-u-me).

I was unable to just delete or update 28992 because I got an error that a child record existed: ORA-02292 with reason COORD_OPERATION_FOREIGN_SOURCE. Googling this revealed nothing at all.

So I had to dig deeper. And deeper. MDSYS.CS_SRS is actually a view, which tries to update the underlying tables, and the TOWGS84 coordinates, which I had to change/update, are stored in SDO_DATUMS. After some searching, it actually proved to be quite easy. To update EPSG:28992, I just had to do:

update mdsys.sdo_datums set
  shift_x = 565.417,
  shift_y = 50.3319,
  shift_z = 465.552,
  rotate_x = -0.398957,
  rotate_y = 0.343988,
  rotate_z = -1.8774,
  scale_adjust = 4.0725
where datum_id = 6289;

EXECUTE SDO_CS.UPDATE_WKTS_FOR_EPSG_DATUM(6289);

My initial (naive) assumption was that the SDO_CS.UPDATE_... functions would actually retrieve the latest EPSG definitions; unfortunately, no such luck :) :)

Stuff like this makes me appreciate PostGIS even more.

In a project we built, we use que for our background-jobs, and there is a very simple (but sufficient) and clean web-ui called que-web, allowing us to monitor the status of the jobs online.

Normally, you just include it in your project by adding the gem, and then adding the following to your config/routes.rb:

require "que/web"
mount Que::Web => "/que"

But this is completely open and unauthenticated. We use devise, and it is really easy to limit a route to authenticated users:

require "que/web"
authenticate :user do 
  mount Que::Web => "/que"
end

At least this limits access to logged-in users. But we wanted it to be available only to admin-users. So I thought I had to resort to defining my own constraint-class, as follows:

class CanSeeQueConstraint
  def matches?(request)
    # determine if current user is allowed to see que
  end
end

and in the routes write it as follows:

require 'can_see_que_constraint'
mount Que::Web, at: '/que', constraints: CanSeeQueConstraint.new 

The problem was: how do I get to the current user in a constraint class? So I took a peek at how the authenticate block in devise works, and apparently there is an easier option: the authenticate block takes a lambda in which you can test the currently authenticated user. Woah! Just what we need. So we wrote the following to only allow our administrators to see/manage our background jobs:

authenticate :user, lambda {|u| u.roles.include?("admin") } do
  mount Que::Web, at: 'que'
end

Using render-anywhere gem with partials

Normally in rails, you can only render views from a controller. But what if you want to render a view somewhere else? For instance, we wanted to generate xml-files using views. Haml can describe xml just as well as plain html.

There is a gem called render_anywhere that allows just that. So how does this work? For example:

class Organisation < ActiveRecord::Base

  has_many :members

  include RenderAnywhere

  def to_xml
    render partial: "#{self.to_partial_path}", object: self, layout: 'my_xml_layout'
  end
end

We had a little problem when using partials, though.

Normally if you type something like

= render @member

it will ask the model for its partial path (@member.to_partial_path), but somehow this always got prefixed with render_anywhere. The gem creates a dummy RenderingController in the RenderAnywhere namespace, so apparently it will look for the following view:

render_anywhere/members/member

In our case, I did not want to use the render_anywhere subfolder. It took me a while to figure out how to overrule this, but in essence it is pretty simple: rails uses the namespace of the rendering controller to prefix the path. Some deep googling revealed that any controller has a method called _prefixes, which lists all the view-path prefixes for that class.

We can easily verify this in the rails console:

:001 > RenderAnywhere::RenderingController._prefixes
=> ["render_anywhere/rendering"]

So if we could overrule _prefixes to just return ["rendering"]... Mmmmmm, fork the code of render_anywhere? Or...

There is another option: render_anywhere allows you to supply your own RenderingController, and will use that instead if it is found in the context where the RenderAnywhere code is included.

So, if you write something like:

class Organisation < ActiveRecord::Base

  has_many :members

  include RenderAnywhere

  class RenderingController < RenderAnywhere::RenderingController

    def self._prefixes
      ["rendering"]
    end

  end

  def to_xml
    render partial: "#{self.to_partial_path}", object: self, layout: 'my_xml_layout'
  end
end

it will look for a view called members/member. Woot. To specify a different sub-folder, you can adapt the _prefixes method as you wish :)

Developing rails websites with a geographic component, we rely heavily on Postgis: we use activerecord-postgis-adapter for the Postgis support, and I always use schema_plus because it allows me to define views. Until recently, I always had to use structure.sql instead of schema.rb, because the geometric columns did not dump correctly.

But for a while now activerecord-postgis-adapter has handled this correctly, so we use the schema.rb file again. Only to discover a "new" error:

ActiveRecord::StatementInvalid: PG::DependentObjectsStillExist: ERROR:  cannot drop view geography_columns because extension postgis requires it
HINT:  You can drop extension postgis instead.
: DROP VIEW IF EXISTS "geography_columns"

Apparently the Postgis-specific views are also dumped in the schema file, and those views obviously cannot simply be re-created.

A very naive solution I kept using was to comment out those create_view lines in our schema.rb file. But there is a much better solution: you can configure which tables and views the schema dumper should ignore.

So I added an initializer in config/initializers/schema_dumper.rb with the following content:

ActiveRecord::SchemaDumper.ignore_tables = [
   "geography_columns", "geometry_columns", "spatial_ref_sys", "raster_columns", "raster_overviews"
]

And now my schema.rb is correct, and simple commands like rake db:setup or rake db:test:prepare just work. Hehe.

I have a very weird problem with my geoserver+oracle combination when deployed on a Windows 2012R2 server (see here), and in attempting to solve that, I upgraded the geoserver from 2.6.3 to 2.7.1, hoping that would fix it.

Sometimes fairy tales come true, but in this case it did not help, unfortunately. 2.7.1 did render a lot quicker, except for one layer, which did not render at all anymore.

My style could not render, giving the error The requested Style can not be used with this layer. The style specifies an attribute of <missing attribute name>. Checking the layer in geoserver, I saw it was no longer able to determine any of the attributes of the given table.

Further investigation in the logfile revealed the following (cryptic) error:

Failure occurred while looking up the primary key with finder: org.geotools.jdbc.HeuristicPrimaryKeyFinder@24cf7139

java.sql.SQLException: Exhausted Resultset

Mmmmmm. Luckily my google-fu revealed a linked issue, with a simple solution:

updating the driver from ojdbc14.jar to the newer ojdbc7.jar fixes the problem.

Hehe :)

Updating geoserver did not fix my other problem: my layer still had some duplicate columnnames. This might not seem like a big problem: everything is drawn correctly and WMS calls work, but WFS calls gave the irritating yet predictable error ORA-00918: column ambiguously defined. Annoying.

So how does one find the column-names of a table in oracle? With a query like:

select * from dba_tab_columns where table_name = 'YOUR_TABLE_NAME';

and all of a sudden I saw the same set of column-names, with some duplicates. Apparently my oracle database contains the table twice, in two different schemas. Since my user had permissions to access the other schema, it seems geoserver does not limit the query to the (specified) schema at all.

The fix was easy: make the other schema inaccessible. In my case the second schema was for testing purposes, so I could just delete it.

At my current job we make GIS websites, using rails and geoserver. I develop on mac, and for some clients we need to deploy on windows. One client is still using an Oracle database, while in general I prefer to work with postgis databases; geoserver also offers better support for postgis.

So: when working locally I hit a really weird phenomenon in my geoserver: it duplicated various oracle columns. Generally not a problem for viewing, but when using WFS I got the "column ambiguously defined" error, using Oracle SQL Views did not work (it went looking for meta data?), and the Geoserver SQL Views were painfully slow.

But on my client's server I installed Geoserver 2.6.3, and there the oracle stuff just worked. Woot :) So I had to upgrade my ancient 2.3.3 geoserver, which runs inside a tomcat. Upgrading seemed easy enough: copy the old geoserver folder somewhere (actually you only need the data folder and the web.xml, but I am lazy/extra safe like that), drop in the new war, and theoretically we should be good to go.

Except... I got this peculiar error in my log-file:

SEVERE: Error listenerStart

WTF! Thanks to some googling, I added a file logging.properties to my geoserver\WEB-INF\classes with the following content:

org.apache.catalina.core.ContainerBase.[Catalina].level = DEBUG
org.apache.catalina.core.ContainerBase.[Catalina].handlers = java.util.logging.ConsoleHandler

restarted my tomcat, and the following appeared:

SEVERE: Error configuring application listener of class org.geoserver.platform.GeoServerHttpSessionListenerProxy
java.lang.UnsupportedClassVersionError: org/geoserver/platform/GeoServerHttpSessionListenerProxy : Unsupported major.minor version 51.0 (unable to load class org.geoserver.platform.GeoServerHttpSessionListenerProxy)

Now what the hell kind of cryptic error is that? Apparently this is a very compact way of saying: this code needs java 1.7 and you are still running java 1.6 (I am looking at you, Apple). Updating now :)