Technology
[windows] Configuring ruby/git to work behind a proxy

While I develop on OS X, and by preference deploy on Unix/Linux machines, for some of our clients we have to deploy on Windows servers. An IT-department will only allow machines/OSes they can manage themselves. While there are some drawbacks to this, most of them can be worked around perfectly.

One of those "difficulties" is configuring a proxy. On Windows, configuring a proxy generally means opening Internet Explorer/Edge, editing the connection details and adding/configuring the proxy there (I find this completely baffling; didn't Microsoft face an anti-trust lawsuit because Internet Explorer was too tightly coupled to the Windows OS? anyways ... old news probably).

Git/ssh

To make sure git (using ssh) uses the proxy correctly, you have to open your $HOME/.ssh/config file, and add at the top:

 ProxyCommand /bin/connect.exe -H 10.52.111.111:80 %h %p
 
 Host github.com 
     [...snipped..]

You can also add the proxy configuration for each Host separately, but in most cases you will want to use a shared/general proxy (I guess). Obviously, replace the IP address shown with that of your own proxy.

Ruby

For ruby it is actually quite simple: you have to define the environment variable http_proxy (I define it in lower case; I think upper case should also work, but I am not entirely sure).

[add screenshot]
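
A quick way to check that Ruby actually picks up the variable (a minimal sanity check; the address is just the example proxy from above, and Net::HTTP reads http_proxy from the environment by default):

require 'net/http'

puts ENV['http_proxy']             # e.g. "http://10.52.111.111:80"
http = Net::HTTP.new('rubygems.org', 443)
puts http.proxy?                   # => true when the proxy variable is set
puts http.proxy_address            # => "10.52.111.111"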

Javascript

We also run some puppeteer scripts from ruby, and to configure the proxy there we have to hand down the proxy details when creating the browser in the puppeteer script. E.g. something like:

const browser = await puppeteer.launch({
    args: [
        '--proxy-server=10.52.111.111:80',
        '--proxy-bypass-list=localhost,10.200.*.*',
        '--no-sandbox',
        '--disable-setuid-sandbox',
        '--ignore-certificate-errors',
    ]
});
const page = await browser.newPage();
More ...
Technology ruby on rails rspec
[RSPEC] Handling the MockExpectationError "does not implement"

I was writing a test for my helper which would, if the user is authorised to perform an action, show the actual link, and otherwise show the title or nothing (depending on what is wanted).

I make use of the vigilante gem, which I should still write an article about, but in short: this gem allows storing authorisations in the database and these can be context-specific (e.g. you can be assigned different roles for different "contexts", which could translate to organisations, projects, ... whatever applies to your setup). The gem adds a helper method is_allowed_to? to determine if one has the permission to perform a certain action.

When I wanted to mock this in my helper, I assumed a simple

     expect(helper).to receive(:is_allowed_to?) { true }

would suffice. However, I got a very strange error:

RSpec::Mocks::MockExpectationError: #<#<Class:0x00007fe2dc84e010>:0x00007fe29c83dc78 ....snip some very long things ... >> does not implement: is_allowed_to?

This is actually a great feature of RSpec: it checks that the thing you want to mock/stub actually exists on the original object. In this case, however, a helper (or a view) in Rails can access a lot of helper methods which are implicitly loaded.

This behaviour can be controlled by setting RSpec::Mocks.configuration.verify_partial_doubles to false. Of course I do not want to disable this for my entire test-suite, but just locally, for a single test or a single spec file.

So in my spec I temporarily disable the check for the existence of doubles, as follows:

require 'rails_helper'

RSpec.describe MapHelper do
  before(:all) do
    RSpec::Mocks.configuration.verify_partial_doubles = false
  end

  after(:all) do
    RSpec::Mocks.configuration.verify_partial_doubles = true
  end

  it "has the correct method" do
    expect(helper.respond_to?(:map_checked_link_to)).to eq(true)
  end

  context "map_checked_link_to" do
    context "a reachable feature" do
      before do
        @pipe = FactoryBot.create(:pipe)
      end
      it "renders a normal link" do
        allow(helper).to receive(:is_allowed_to?).and_return("XXX00")
        expect(helper.map_checked_link_to("Pijpstuk 123", @pipe, {})).to eq("<a href=\"/pipes/208\">Pijpstuk 123</a>")
      end
    end
    context "a blocked feature" do
      before do
      end
      it "does not render anything by default" do
        allow(helper).to receive(:is_allowed_to?) { false }
        expect(helper.map_checked_link_to("Pijpstuk 123", "", {})).to eq("Pijpstuk 123")
      end
      it "renders the title if so specified" do
        allow(helper).to receive(:is_allowed_to?) { false }
        expect(helper.map_checked_link_to("Pijpstuk 123", "", {}, true)).to eq("Pijpstuk 123")
      end
    end
  end

end

I also stumbled upon a PR implementing a without_verifying_partial_doubles method, which takes a block and would do exactly the same. So one would be able to write

it "does something weird with mocks" do
  without_verifying_partial_doubles do
    ...
  end
end

But it did not work for me. Not sure if that is because the rspec version I am using in this project is too old, or the example I found was outdated.
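
If that method is not available to you either, an alternative I would try (a minimal sketch, using the same verify_partial_doubles flag as above and plain RSpec hooks with metadata filtering) is to toggle the flag around tagged examples from one place:

RSpec.configure do |config|
  # only examples tagged with verify_partial_doubles: false are affected
  config.around(:each, verify_partial_doubles: false) do |example|
    RSpec::Mocks.configuration.verify_partial_doubles = false
    begin
      example.run
    ensure
      RSpec::Mocks.configuration.verify_partial_doubles = true
    end
  end
end

# usage in a spec:
# it "mocks an implicit helper method", verify_partial_doubles: false do
#   allow(helper).to receive(:is_allowed_to?) { true }
#   ...
# end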

More ...
Technology ruby on rails
[rails] Using ActionCable in production when using passenger+apache2

So on the server for work we use apache2+passenger to host all our rails applications. We now have about 10-ish, not too many, not all are used heavily, all are long-living, most are still rails 4. On my own server, with a similar setup, the rails versions vary from 4 to 6.

Recently I started a new project, and of course used rails 6. And I had the awesome idea to try out stimulus-reflex, as it seems an interesting addition to the js front-end world. It really is a very interesting and different approach, and since the new/upcoming version 3.3 includes morphs, it is actually well-suited for our use-case, which almost always involves a leaflet map (so we definitely need control over which area of the page to refresh). It works! Great. I will write about that in more detail some other time.

So I had a working part of the website, enriched using stimulus/stimulus-reflex, and now I wanted to deploy it.

Only to realise ... passenger does not support ActionCable on apache2. ActionCable is one of those new tools I still need to investigate in detail, but yes, stimulus-reflex uses ActionCable to get a highly efficient channel between the browser and the server.

So now I am presented with two options:

  • we replace apache2 on our server with nginx. Frankly, this does not feel like a bad thing to do, but I am a bit hesitant since it seems like a larger operation and will cause some down-time for our other applications.
  • use passenger-standalone and make apache2 proxy to it ---ha this seems interesting!

So I had to perform a few steps:

  • configure action-cable for deployment
  • install and run passenger standalone
  • change my apache2 config for my site

Configure ActionCable

Actually, this is amazing: in development everything just works. In production, however, there are apparently a lot of options for deploying it.

I chose the simplest setup: let my rails process (and thus passenger) host the websockets, and use postgresql as the backend (I am not using redis yet).

So in my config/routes.rb I had to make sure we mount the ActionCable server

   mount ActionCable.server => '/cable'

In my config/environments/production.rb I specified our allowed domains:

  config.action_cable.allowed_request_origins = [ 'http://geotrax.be', /http:\/\/geotrax.*/ ]

And using postgresql instead of redis is actually really simple (we were already using postgresql as our db backend). In your config/cable.yml just write:

 production:
   adapter: postgresql 

Install passenger-standalone

I imagined this would be easy, but it only worked once I got all the steps right. In short:

First I added the passenger gem to the production group of my Gemfile and moved puma to :development/:test.
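
A minimal sketch of that Gemfile change (group layout as in a default Rails app):

# Gemfile (sketch): puma only for local development/test, passenger in production
group :development, :test do
  gem 'puma'
end

group :production do
  gem 'passenger'
end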

Then I created a Passengerfile.json, specifying the environment, port, user and instance_registry_dir:

{
  // Run the app in a production environment. The default value is "development".
  "environment": "production",
  // Run Passenger on the given port. In this example we use port 4000
  // and let apache2 proxy to it.
  "port": 4000,
  // Tell Passenger to daemonize into the background.
  "daemonize": true,
  // Tell Passenger to run the app as the given user. Only has effect
  // if Passenger was started with root privileges.
  "user": "geotrax",
  "instance_registry_dir": "/srv/geotrax/tmp"
}

Somehow we still had to start passenger with the correct instance-registry-dir (specifying it in the Passengerfile did not work?). So we start passenger as follows

RAILS_ENV=production bundle exec passenger start --instance-registry-dir=/srv/geotrax/tmp

Then we have a running/waiting passenger instance. Now all that remains ...

Configure apache2 correctly

This was actually the hardest part. The passenger documentation also just glosses over the subject: it is easy to write an apache site configuration that passes all http requests to passenger, but what about the websockets ... ?

To start you need the correct modules

sudo a2enmod proxy proxy_http proxy_wstunnel rewrite

(in my case this meant just adding the proxy_wstunnel mod, the others were already installed/used)

Then I edited my site.conf as follows:

<VirtualHost *:80>
    DocumentRoot /srv/geotrax/applications/geotrax/current/public

    ServerName  www.geotrax.be
    ServerAlias geotrax.be

    ErrorLog  /srv/geotrax/applications/geotrax/current/log/error.log
    CustomLog /srv/geotrax/applications/geotrax/current/log/access.log combined

    RewriteEngine on
    RewriteCond %{HTTP:Upgrade} websocket [NC]
    RewriteCond %{HTTP:Connection} upgrade [NC]
    RewriteRule /(.*) "ws://127.0.0.1:4000/$1" [P,L]

    ProxyPass / http://127.0.0.1:4000/
    ProxyPassReverse / http://127.0.0.1:4000/
    ProxyRequests Off

</VirtualHost>

Since we are serving both http and websockets from the same domain, we use the rewrite rules as explained in the mod_proxy_wstunnel documentation.

This took me a #$@!#$$%@#@ while to figure all out, I hope this helps someone to get started a little quicker (even if it is me the next time).

Next step: switch out apache2 and use nginx (but for now it is a little less urgent).

More ...
Technology bootstrap ruby on rails
[Bootstrap 4] Closing a modal programmatically

One of the UX patterns I like/prefer is to have some kind of in-place editing: instead of sending the user to a completely new page or context, you edit or change a feature in place. One of the tools I frequently use for this is opening a modal.

This modal shows a mini-form, and will post the information to the server. There are two ways to handle this: we just post HTML and let the server decide what is rendered, or ... we post using js. In rails this is easy: we add remote: true.

For instance, I have a small form to resend a mail, but I want my users to overrule the email-address if they want to. The view looks something like this:

= link_to "(Re)send Email", '#', class: 'btn btn-large btn-info', 'data-toggle' => "modal", 'data-target' => "#resend-plan-request-#{plan_request.id}-email"

.modal.resend-plan-request.fade{id: "resend-plan-request-#{plan_request.id}-email"}
  .modal-dialog{role: "document"}
    .modal-content
      .modal-header
        %h4 Re(send) email for PlanRequest #{plan_request.external_id}
        %button.close{type: "button", 'data-dismiss' => "modal", 'aria-label' => "Close"}
      .modal-body
        = form_tag resend_admin_plan_request_path(plan_request), class: 'form', method: :post, remote: true do
          .alert.alert-info.small
            This will allow either re-sending mails that clients/users claim have not received, or just for testing.

          .form-group
            = label_tag :emails, "Emails", class: 'form-label'
            .form-field-with-hint
              = text_field_tag :emails, plan_request.delivery_email, class: 'form-control', placeholder: 'Enter emails ...'
              %small.form-hint
                List the intended recipients, separate multiple emails using comma's e.g.
                %br
                email@abc.com, email2@def.com
          .form-group
            = submit_tag "Send Email", class: 'btn btn-primary', data: {disable_with: 'Sending ...'}
            = link_to 'Close', '#', class: "btn btn-secondary", 'data-dismiss' => "modal"

So for clarity: we have a small form in the modal that allows users to resend an email, and to overrule the email addresses if wanted. We post the contents of the form to the resend action of the PlanRequestsController, which does whatever is needed to send the email and provides feedback to the user. I do not want to take the user to a new page, I just want to close the modal using js. According to the documentation this is simple, just do something like $(".modal").modal('hide');

So we write this inside resend.js.haml view

:plain
  toastr.success('#{@success_message}');
  $(".modal").modal('hide');

However I noticed that this leaves a .modal-backdrop div covering the entire screen, rendering a lot (or all) of the page unusable. What seemed to be the case:

  • opening a modal using a button adds 2 .modal-backdrop divs, and closing the modal using the close-button(s) removes both
  • opening a modal using .modal('show') only adds a single .modal-backdrop, which is removed by the .modal('hide')

Problems:

  • controlling the modal using only buttons works
  • controlling the modal using only js code works
  • mixing (opening with button and closing with js) does not

I have two buttons that can close my modal: the x at the top, and a close button in the form, so I naively figured this bug (behaviour) was somehow caused by the second close button. Luckily this was easy to verify (and refute): if I remove the close button, it still adds two .modal-backdrop divs. So that was not the solution (and BTW I like to have a very visible close/cancel option).

It makes sense to close the modal in the same way it was opened, so instead of changing the way the modal is opened, I tried whether I could mimic "closing by button" by just triggering a click event on the close button(s), hoping it would work as expected:

So do something like

    $("button[data-dismiss=modal]:visible").eq(0).click();

So to explain this:

  • first select all buttons that can close my modal: $("button[data-dismiss=modal]:visible") (select only the visible ones in case you have multiple modals defined in your html)
  • the added .eq(0) selects the first element out of that set
  • and then trigger the click event

Unfortunately this does not work either. During my numerous tests it seemed to work for a while, but I was just confused. Doh. I am guessing I just do not quite understand how bootstrap actually handles this, and (while I probably should) I am not really inclined to dive into their code to see what is wrong.

So I just went with:

:plain
  toastr.success('#{@success_message}');
  $(".modal").modal('hide');
  $(".modal-backdrop").remove();

Just close the modal, and if there are any pending/lingering .modal-backdrop divs, just remove those too.

More ...
Technology optimization ruby on rails
[RAILS] optimizing a slow request

We identified a single slow request in our moderation module: retrieving the json containing the entries to be moderated, feeding our react app.

So in short our datamodel looks as follows:

class Topic < ApplicationRecord
  has_many :entries
  has_many :dynamic_attributes
end

class Entry < ApplicationRecord
  belongs_to :topic
  has_many :entry_values
end

class EntryValue < ApplicationRecord
  belongs_to :entry
  belongs_to :dynamic_attribute
end

So in short: topics have a set of (dynamic) attributes that can be entered. An entry_value is the value entered for a dynamic_attribute and those are grouped in a complete entry.

In the moderation, our moderators verify that an entry (a collection of entry-values) is appropriate, and they have the option to edit or add missing information.

So in our controller we do something like

@entries = @q.result(distinct: true).page params[:page]

We are using ransack to filter/search our entries. We know we will need the entry-values, and possibly their dynamic attributes, when building the json, so what is the best approach here: eager_load or includes? What better way to decide than to actually test it? So I temporarily changed the controller code as follows:

        preload_option = params[:pre].try(:to_i)
        if preload_option == 0
          @entries = @q.result(distinct: true).page params[:page]
        elsif preload_option == 1
          @entries = @q.result(distinct: true).eager_load(:entry_values).page params[:page]
        else
          @entries = @q.result(distinct: true).includes(:entry_values => :dynamic_attribute).page params[:page]
        end

Note:

  • I can only eager_load one level deep
  • with includes I can immediately fetch all dynamic attributes for the entry-values.

But which will prove to be more beneficial?

So then I ran a small benchmark script:

require 'benchmark'
require 'rest-client'

n = 10
Benchmark.bm do |x|
  x.report("normal ") do
    n.times { RestClient.get("http://admin.lvh.me:3013/moderation/projects/11/entries.json", {}) }
  end
  x.report("eager  ") do
    n.times { RestClient.get("http://admin.lvh.me:3013/moderation/projects/11/entries.json?pre=1", {}) }
  end
  x.report("include") do
    n.times { RestClient.get("http://admin.lvh.me:3013/moderation/projects/11/entries.json?pre=2", {}) }
  end
end

(note: for testing purposes I also disabled the need to authenticate, so I could easily fetch the json responses, time them and compare)

This first run gave me the following results:

    normal   0.016362   0.005723   0.022085 ( 35.440171)
    eager    0.010036   0.004336   0.014372 ( 28.632490)
    include  0.012550   0.004061   0.016611 ( 29.173778) 

Ok. Not the kind of improvement I had hoped for. Also nice to notice that eager_load is in this case more efficient than includes (which maybe seems a little counter-intuitive).
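
For context, a rough sketch of what each strategy does under the hood (the exact SQL of course depends on your schema): eager_load builds one big query with LEFT OUTER JOINs, while includes in this form preloads the associations with separate queries.

# eager_load: a single query joining the association
Entry.eager_load(:entry_values)
# roughly: SELECT ... FROM entries LEFT OUTER JOIN entry_values ON entry_values.entry_id = entries.id

# includes (without a where/references on the association): preloading,
# i.e. one additional query per association
Entry.includes(entry_values: :dynamic_attribute)
# roughly: SELECT ... FROM entries
#          SELECT ... FROM entry_values WHERE entry_id IN (...)
#          SELECT ... FROM dynamic_attributes WHERE id IN (...)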

I had recently changed a small part of the code: in the moderation we now also want to be able to edit fields that were not yet entered, while before we only had to retrieve the entered values (:entry_values). So I presume that maybe that is where I fucked up the performance. Before, we called entry.valid_entry_values, which looked like

  def valid_entry_values
    entry_values.sorted.select do |ev|
      da = ev.dynamic_attribute
      da.attribute_type != 'item' || (da.attribute_type == 'item' && !ev.item_content_type.nil?)
    end
  end

and I replaced it with the following, adding empty entry-values to be filled in:


  def entry_values_with_empty
    result = []
    self.topic.dynamic_attributes.each do |da|
      ee = entry_values.find_by(dynamic_attribute_id: da.id)
      if ee.nil? || (da.attribute_type == 'item' && ee.item_content_type.nil?)
        ee = entry_values.build(dynamic_attribute: da)
      end
      result << ee
    end
    # check if we have entry-values not yet in the list
    # (e.g. from another topic when the entry was moved, and add those too)
    self.entry_values.each do |entry_value|
      if result.select{|ee| ee.id == entry_value.id}.count == 0
        result << entry_value
      end
    end
   
    result
  end

So what happens if we switch back to the old valid_entry_values : how does that change performance?

I ran my small benchmark script again, and got the following results:

    normal   0.015264   0.005280   0.020544 ( 33.283069)
    eager    0.009901   0.004359   0.014260 ( 17.145350)
    include  0.013153   0.004032   0.017185 ( 17.621856)

Wow! Now the eager_load or includes really seem to pay off. Also: almost the same speed improvement. Ok.

So if we look at entry_values_with_empty more closely, the implementation is somewhat naive: for each dynamic-attribute it will attempt to find the corresponding entry-value, except ... that launches a query for each dynamic attribute, for each entry ... Mmmmmm. Let's see if we can improve this:

  def entry_values_with_empty
    result = []
    self.topic.dynamic_attributes.each do |da|
      ee = entry_values.detect{|ev| ev.dynamic_attribute_id == da.id}
      if ee.nil? || (da.attribute_type == 'item' && ee.item_content_type.nil?)
        ee = entry_values.build(dynamic_attribute: da)
      end
      result << ee
    end
    # check if we have entry-values not yet in the list
    # (e.g. from another topic when the entry was moved, and add those too)
    self.entry_values.each do |entry_value|
      if result.select{|ee| ee.id == entry_value.id}.count == 0
        result << entry_value
      end
    end

    result
  end

Notice: we only changed one line, replacing the find_by with a detect. Instead of launching a new query, this iterates over the already retrieved array of entry_values. But does this make any difference?
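
To make that difference concrete (a sketch using the models from above): find_by always builds a new SQL query, while detect is plain Enumerable and scans the records that were already loaded.

# find_by: one SELECT ... LIMIT 1 per dynamic attribute, per entry
entry.entry_values.find_by(dynamic_attribute_id: da.id)

# detect: no SQL at all, it walks the in-memory array that
# includes(:entry_values => :dynamic_attribute) already loaded
entry.entry_values.detect { |ev| ev.dynamic_attribute_id == da.id }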

Launching my small test script now returns the following:

    normal   0.016051   0.005418   0.021469 ( 16.448649)
    eager    0.009454   0.003858   0.013312 ( 22.479142)
    include  0.012419   0.003872   0.016291 ( 14.236868)

NICE! **fireworks** Not what I expected to see at all. I am a little baffled that the normal case improved that much, and that the eager_load did not improve (on the contrary). We have now found our optimal combination: improving entry_values_with_empty and adding the includes gives the best performance.

Is this what you would have expected? The bottom line remains: it helps to measure (in Dutch we say "meten is weten", which rhymes).

More ...
Technology ruby on rails rspec
[RSPEC] Cleaning up orphaned attachments when running specs

So when running the specs we also create a lot of fake attachments, but they are never cleaned up. Which is probably obvious, because we never actually destroy the models (containing the attachments); we truncate the database or roll back the transactions instead.

So I tried to find a way to 1) automatically/more easily clean up those dummy attachments, and 2) make sure it works when using parallel:specs. And this across my different projects, some of which use different gems to manage the attachments.

In one project, I am using paperclip and there I took the following approach. In the initializer config/initializers/paperclip.rb I wrote

  Paperclip::UriAdapter.register
  if Rails.env.production?
    Paperclip::Attachment.default_options.merge!(
      hash_secret: ENV.fetch('SECRET_KEY_BASE'),
      s3_protocol: :https,
      url: ':s3_domain_url',
      path: "/:class/:attachment/:id/:style/:hash.:extension",
      storage: :s3,
      s3_credentials: { .. }
    )
  elsif Rails.env.development?
    Paperclip::Attachment.default_options.merge!(
      url: "/system/:class/:attachment/:id/:style/:hash.:extension",
      hash_secret: Rails.application.credentials.secret_key_base
    )
  elsif Rails.env.test? || Rails.env.cucumber?
    Paperclip::Attachment.default_options.merge!(
      url: "/spec_#{ENV['TEST_ENV_NUMBER']}/:class/:attachment/:id/:style/:hash.:extension",
      hash_secret: Rails.application.credentials.secret_key_base
    )
  end

and then in rspec rails_helper.rb I can add the following piece of code

  config.after(:suite) do
    FileUtils.remove_dir(File.join(Rails.root, 'public', "spec_#{ENV['TEST_ENV_NUMBER']}"), true)
  end

In another project I am using CarrierWave, and there it is a little more difficult, but it amounts to the same approach. In CarrierWave we create different uploaders, and each has its own configuration. In my project I first iterate over all uploaders in my own code-base, and explicitly require one uploader from our own shared gem (shared between different projects). So we add an initializer config/initializers/carrierwave_clean_spec_attachments.rb (or whatever name you prefer) to override the paths when in test mode:

if Rails.env.test? || Rails.env.cucumber?
  Dir["#{Rails.root}/app/uploaders/*.rb"].each { |file| require file }
  require 'document_uploader'

  CarrierWave::Uploader::Base.descendants.each do |klass|
    next if klass.anonymous?
    klass.class_eval do
      def cache_dir
        "#{Rails.root}/spec/support/uploads_#{ENV['TEST_ENV_NUMBER']}/tmp"
      end

      def store_dir
        "#{Rails.root}/spec/support/uploads_#{ENV['TEST_ENV_NUMBER']}/#{model.class.to_s.underscore}/#{mounted_as}/#{model.id}"
      end
    end
  end
end

and then in my rails_helper.rb I add the following statement:

config.after(:suite) do
  FileUtils.rm_rf(Dir["#{Rails.root}/spec/support/uploads_#{ENV['TEST_ENV_NUMBER']}"])
end

How do you do this? Do you use another gem for storage/attachments, and how do you solve it there? E.g. when using ActiveStorage?
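
For ActiveStorage I imagine something along these lines would work (a sketch I have not used myself: it assumes the default Disk-based :test service, with its root made unique per parallel worker in config/storage.yml):

# config/storage.yml (sketch):
#   test:
#     service: Disk
#     root: <%= Rails.root.join("tmp/storage_#{ENV['TEST_ENV_NUMBER']}") %>
#
# and then in rails_helper.rb:
config.after(:suite) do
  FileUtils.rm_rf(Dir["#{Rails.root}/tmp/storage_#{ENV['TEST_ENV_NUMBER']}"])
end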

More ...
[rgeo] unable to convert multilinestring with zm geometry to geojson

I encountered a very specific issue when trying to convert the geometries read from shapefiles to geojson, and for some reason this failed.

I had some very simple code to inspect the shapefiles and see what I should do with them. In that process I also wanted to convert the geometries to geojson (for test), as that is the actual goal for me.

So my code iterates over a folder of shape-files, and just inspects the first element of each. This should give me an idea of what all the files contain. The code looked like this:

Dir[File.join(root_folder, '*.shp')].sort.each do |shapefile|
  puts shapefile

  RGeo::Shapefile::Reader.open(shapefile) do |file|
    puts "File contains #{file.num_records} records."
    record = file.next
    puts "First record geometry WKT  : #{record.geometry.as_text}"
    puts "             coordinates   : #{record.geometry.coordinates}"
    puts "             geometry JSON : #{RGeo::GeoJSON.encode(record.geometry)}"
    puts "             Attributes    : #{record.attributes.inspect}"
    puts 
  end
end

I got the weirdest error when trying to run this code

 gems/rgeo-0.6.0/lib/rgeo/geos/zm_feature_methods.rb:305:in `block in each': no block given (yield) (LocalJumpError)

Apparently one of the shapefiles contained a MultiLinestring with a z and m coordinate. All zero, so whyyyyy? But still: rgeo should be able to handle that?

I tracked the code in rgeo and found the following culprit in RGeo::Geos::ZMMultiLineStringMethods:

 each.map(&:coordinates)
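
The LocalJumpError makes sense once you see that: somewhere a method that yields ends up being called without a block. A minimal illustration of the error itself (not rgeo code, just plain ruby):

# a method that yields, called without a block, raises exactly this error
def each
  yield 1
end

each { |x| x }   # fine
each             # raises LocalJumpError: no block given (yield)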

Ooooops. Now how could I fix this? I am currently working on an ancient version of rails, and thus also of rgeo. I could open an issue to get it fixed, but still I would not be able to update my own version (mostly because of the activerecord-postgis-adapter).

But, thankfully, we are using ruby and we can hotfix the code (reopen the class and fix the bug!). So I added an initializer config/initializers/fix_rgeo_bug.rb with the following code:

module RGeo
  module Geos
    module ZMMultiLineStringMethods # :nodoc:

      # overwrite to fix: pass a block to each instead of calling it without one
      def coordinates
        coords = []
        each do |gm|
          coords << gm.coordinates
        end
        coords
      end

    end
  end
end

... and now my code is running smoothly!!

It is so awesome that ruby makes it possible to reopen classes, and that rails has a well-controlled loading system with an entry-point before execution where I can place my own initializers. I have used this a few times before, mainly to fix outdated gems without having to update them, or to add very specific behaviour. It is the combination of readable, open source code and being able to re-open classes to add my own behaviour or fix bugs. I am still thankful/happy every day to be working in ruby on rails :clap: :clap: :clap:

I wonder if there are other programming languages or frameworks where this is possible?

Now to open an issue on rgeo to address this bug :)

More ...
News
Unable to load the EventMachine C extension; To use the pure-ruby reactor, require 'em/pure_ruby'

So, unfortunately we have to deploy our rails projects on servers which are managed by our clients, which means those are windows servers. Luckily that is no longer a big deal, but I develop on mac and mostly deploy on linux machines (which align nicely), and a new deployment on windows almost always brings some surprises. We deploy using ruby 2.4, somewhere in our Gemfile we use eventmachine, and on the most recent deployment I suddenly got this weird error:

Unable to load the EventMachine C extension; To use the pure-ruby reactor, require 'em/pure_ruby'

Not sure what they mean here: do I need to adapt the gem code???? Luckily some googling quickly turned up a solution. Apparently the eventmachine gem is not updated correctly for ruby 2.4 or 2.5, and the proposed solution is to do

gem uninstall eventmachine  
gem install eventmachine --platform=ruby

instead. This sounds great. In theory. But in practice? I have a Gemfile managed by bundler, and after every deploy/bundle I would have to uninstall the eventmachine-1.2.7-x64-mswin32 gem. I do have a script that I run on windows to deploy, so I could easily add

gem uninstall -aIx eventmachine 
gem install eventmachine --platform=ruby

(the -aIx will remove all eventmachine instances and not care about dependencies)
but this feels a little counter-productive (wrong?) and it did not always seem to work reliably.

So I was looking for a way to describe in my Gemfile how to install the gem for the correct platform. Unfortunately platform has a different meaning inside a Gemfile, and there the ruby platform is anything but windows.
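
To sketch what I mean (this is the thing that does not work, not a solution):

# In a Gemfile, platforms only controls on which platforms a gem gets
# installed at all; it does not force `gem install --platform=ruby`.
platforms :ruby do
  # :ruby here means "C Ruby, but not on Windows", so on a Windows machine
  # this block is simply skipped, which is not what we want.
  gem 'eventmachine'
end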

But then I had an inspirational moment, why not install the gem from github, in the correct version?

So in my Gemfile I wrote

gem 'eventmachine', '1.2.7', git: 'git@github.com:eventmachine/eventmachine', tag: 'v1.2.7'

Installing the required version directly from git, which does work and does not break my deployment script/routine.

More ...
News
[wice-grid] solving error with losing filters upon paginating

We encountered this strange error using WiceGrid: on some occasions when paginating to the second page, we actually lost the filtering, but not for all columns.

WiceGrid lets you define columns which are only rendered when creating html or only when exporting to csv. For us specifically, in some cases we want to show some pretty html when rendering html, but just show the text when rendering/exporting to csv. For instance:

g.column name: 'Status', attribute: 'status', in_csv: false do |plan|
  render 'grid_status_label', plan_request: plan, history: true
end
g.column name: 'Status', attribute: 'status', in_html: false

When rendering html, it will render a partial called grid_status_label, when rendering csv it will just show the status-text.

However, when defining the same column twice, this also has an effect on the filter. Either because we "exclude" one of the definitions of the column, or because the column is defined twice, I am not sure. The easy way out would be to know whether we are rendering csv before defining the column, so we do not define it twice at all and do not confuse WiceGrid.

Luckily, we can ask the @grid if it is outputting csv. So if in your controller you write something like

@grid = initialize_grid(SomethingWithAStatus, ...)

in the view you can just ask @grid.output_csv? to know if we are currently exporting to csv instead of html.

So with that knowledge, in your view you can write

<%= grid(@grid) do |g|  
       [.. your other columns ..]            

       g.column name: 'Status', attribute: 'status', in_csv: false do |plan|
         render 'grid_status_label', plan_request: plan, history: true
       end
       if @grid.output_csv?
         g.column name: 'Status', attribute: 'status', in_html: false
       end
     end -%>

... and pagination while filtering on status will work!!

I really love(d) using WiceGrid, but unfortunately it is no longer actively maintained. There is a somewhat active branch, but it only works for rails 5, and I am not entirely sure what the status is there. So this is at least a fix so we can keep using WiceGrid in our current projects for now.

Not quite sure how I would like to proceed with WiceGrid, because the code-base is really large and there are some things I do not really like: having to use erb, the dsl is sometimes a bit heavy, there is no test-coverage (there is a separate test-project, but mmmm), and the layout is pretty much fixed. But on the other hand it has proven extremely easy, robust and extensible (define your own column-filter and render types). I will probably try to fork it or restart with something similar.

More ...
News
[rails] styling on_the_spot with bootstrap v3/v4

The on_the_spot gem allows inline editing of data. In general this is something I prefer over forms: I do not want to switch to a new page to edit something, I want to edit it where I see it (I understand there are some very good cases for the standard show/edit pages).

So a very long while ago I created a small gem to edit data inline. It relies on the jEditable javascript, which is still working.

But how do you style the dynamically injected form?

In my projects, I use the translation files as follows, e.g. in on_the_spot.en.yml I write :

en:
  on_the_spot:
    ok: <button class="btn btn-primary btn-sm">Ok</button>
    cancel: <button class="btn btn-default btn-sm">Cancel</button>
    tooltip: Click to edit
    access_not_allowed: Access not allowed

This will make sure the buttons are styled correctly. But if you try this, the input is too narrow, and everything is just squished together.

So add this little sprinkle of css to make everything look a little better:

.on_the_spot_editing {
  input, select {
    width: auto !important;
    height: 30px !important;

    margin-right: 5px !important;

    //display: block;

    padding: 6px 12px;
    font-size: 14px;
    line-height: 1.42857143;
    color: #555555;
    background-color: #fff;
    background-image: none;
    border: 1px solid #ccc;
    border-radius: 4px;

    -webkit-box-shadow: inset 0 1px 1px rgba(0, 0, 0, 0.075);
    box-shadow: inset 0 1px 1px rgba(0, 0, 0, 0.075);
    -webkit-transition: border-color ease-in-out 0.15s, box-shadow ease-in-out 0.15s;
    -o-transition: border-color ease-in-out 0.15s, box-shadow ease-in-out 0.15s;
    transition: border-color ease-in-out 0.15s, box-shadow ease-in-out 0.15s;
  }
  textarea {
    width: 80%;
  }

  .btn {
    margin: 1px !important;
  }
}

What inline editing solution are you using with rails?

I am currently contemplating to switch over to start using vue.js for javascript sprinkles like this.

More ...
News
[rails] using font-awesome-5 with turbolinks

If you started using FontAwesome-5 in a Turbolinks project, you will quickly notice the icons disappear after the first page reload. So how can we fix that? I did not immediately find a reference inside the FontAwesome documentation, but luckily google proved helpful and I found this issue

Inside the issue I found the fix, which I applied and which worked for me: I created a new file app/assets/javascripts/fix_fontawesome_reload.js with the following content

document.addEventListener("turbolinks:before-render", function(event) {
    FontAwesome.dom.i2svg({
        node: event.data.newBody
    });
});

which was then automatically included in application.js (because I have the require_tree line).

More ...
News
Alternative RSpec trick for testing methods with arguments

I just read this article, titled "Useful RSpec trick for testing method with arguments", which shows a nifty way to write a repetitive test-suite where you want to verify that different arguments give the correct/expected result.

The method proposed by the author looks like this:

RSpec.describe Daru::Index do
  let(:index) { described_class.new [:a, :b, :c, :d] }

  describe '#pos' do
    subject { index.method(:pos) }

    context 'by label' do
      its([:a]) { is_expected.to eq 0 }
      its([:a, :c]) { is_expected.to eq [0, 2] }
      its([:b..:d]) { is_expected.to eq [1, 2, 3] }
      # .. and so on

Which is looking very readable and compact! This solution makes heavy use of subject and let, and relies on its to make it work. The author then proceeds to list a few more basic/default rspec approaches, but does not list how I generally write tests like that.

Not sure if it is more readable or not, but imho it is in general a lot less work, and it is usable for all kinds of repetitive tests.

Let's see how it looks:

RSpec.describe Daru::Index do
  let(:index) { described_class.new [:a, :b, :c, :d] }

  describe '#pos' do
    context 'by label' do
      [[[:a], 0],
       [[:a, :c], [0, 2]],
       [[:b..:d], [1, 2, 3]],
       # and so on
      ].each do |args, expected|
        it "returns #{expected.inspect} for #{args.inspect}" do
          expect(index.pos(*args)).to eq(expected)
        end
      end
    end
  end
end

(note: not actually sure if this code works, but you get the gist of it I hope)

So I use ruby meta-programming to define a whole test-suite when the file is loaded, and when I need to test another input/output pair, I can just add it to the list, no need to copy-paste.

More ...
News
Render bug with ul in chrome 62?

In our GIS web-application we use leaflet with the superb sidebar-v2 component to have some fold-out pages and command-icons in one place. But we suddenly encountered a bug when users/developers started upgrading to chrome 62: a very weird rendering bug. The icons suddenly were no longer centered inside the ul > li items; only the top half was visible, as if the icons were shifted down.

To get a little more technical: the sidebar is a bar filled with icons and these are actually an unordered list, and to center the icons inside the foreseen space we have something like, in short:

ul > li { height: 40px; }
ul > li > a { 
  height: 100%; 
  line-height: 40px;
}

which is in itself a pretty standard way to align "text" (icons in our case) vertically. So why is it broken now? Chrome 62 seems to add some kind of padding/margin, except setting those explicitly has no effect.

If I removed the line-height the icons came into view correctly, but they were not centered in the foreseen space anymore.

Luckily, I was not the first to encounter this issue: the bug was already reported to google and a fix is underway. Unfortunately (?) the bug was not severe enough to actually halt the release of version 62, but it will be fixed in 63.

Luckily fixing the layout is quite simple, just add

ul > li { 
   list-style-type: none;
 }

and then it renders correctly over all platforms again.

So to recap: if your unordered lists have hidden overflows and are suddenly broken in chrome 62 this might help you as well.

More ...
Technology
[rails] improving the performance of a query involving date-ranges

I have an audit-log table with 300M+ rows, and while most messages are just confirming we performed some very repetitive action, I was thinking of an easy way to compress the data: when a month is over, instead of keeping 1000 lines per day with the message "CHECKED IT", replace them with 1 line saying we "CHECKED IT [logged X times on day Y]". Wrote the code, and that was pretty easy. But then came the moment I had to run this, and the simplest query to check the number of messages:

select count(*) as count
from audit_logs
where created_at < '2016-01-01' 
  and created_at > '2015-12-01'       
  and message = 'Checking for new map-requests'  
  and organization_id = 3

took 15 seconds. So then I checked my indexes: I had only an index on organization_id.

I read an article on the best way to create an index when searching on ranges, and it suggested a combined index with the equality column first and the range column second. So in my case I added a migration with the following addition:

add_index :audit_logs, [:organization_id, :created_at]
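
For completeness, wrapped in a full migration this could look as follows (the class name and migration version marker are my own guesses, adapt them to your app):

class AddOrganizationCreatedAtIndexToAuditLogs < ActiveRecord::Migration[5.2]
  def change
    # equality column (organization_id) first, range column (created_at) second
    add_index :audit_logs, [:organization_id, :created_at]
  end
end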

Rerunning the query showed an immense improvement, and now the query only took a mere 20ms :) :) :)

More ...
Technology
[mac osx] extracting photos from photos library

Before, with iPhoto, it was pretty easy to extract files: just drag and drop, and they would have the exact same date and time as when they were imported/taken. Now, with the Photos app, this is no longer the case. WTF. A dropped file gets the current date and time. While I can understand why this is the case, it is not very convenient.

Luckily there is another way to copy files out of your Photos library.

If you visit the Pictures folder, you can see a (in my case very large) file called Photos Library.photoslibrary. If you right click and select Show Package Contents you can browse the individual files within the library. The images can be found in the Masters folder. The files/folders are organized by year/month/day which might not make sense to you, but I find it very useful.

I copied the entire Masters folder to my external drive, and then every image-file retained its original timestamp (yes!).

Now I can free some diskspace without any hesitation :)

More ...
News postgis
[postgis] BUG: ST_geomfromgeojson rounds z values

... and then I encountered a bug in Postgis 2.0.4. Fuck. ST_geomfromgeojson rounds my z-values to integers, effectively making them useless.

I have a little example demonstrating this, so I wanted to submit a bug report. Unfortunately the bug-tracker requires an OSGeo account, and to get that I need to write an email to get a "mantra" (see form ). Done that.

So what did I do (note just random-numbers):

insert into original_be_geometries(originally_type, originally_id, geom) 
values ('Test', 1, ST_setSRID(ST_GeomFromGeoJSON('{"type":"LineString","coordinates":[[1.23445,2.234455,3.33445],[4.12345,5.12345,6.56789],[7.012,8.111,9.0001]]}'), 31370) );

Some random numbers again, but if I then do something like select st_astext(geom) from original_be_geometries where originally_type= 'Test'; I get

LINESTRING Z (1.23445 2.234455 3,4.12345 5.12345 6,7.012 8.111 9)

Fuck. Instead if I use ST_GeomFromText it does work:

insert into original_be_geometries(originally_type, originally_id, geom) 
values ('Test', 2, ST_setSRID(ST_GeomFromText('LINESTRING Z (1.23445 2.234455 3.33445, 4.12345 5.12345 6.56789, 7.012 8.111 9.0001)'), 31370) );

returns the expected geometry

LINESTRING Z (1.23445 2.234455 3.33445,4.12345 5.12345 6.56789,7.012 8.111 9.0001)

So I am going to switch my workflow from exporting geojson to exporting WKT's which I can then use to import. Now if only oracle supported 3d geometries when exporting to WKT this would be easy :eye-roll: :le-sigh: :rolls-up-sleeves: :)

[UPDATE] My team-mate has Postgis 2.2.2 and there this just works. I did not find this bugfix in the changelog, but this is good news. Damn. Now I have to upgrade my postgresql/postgis. Using brew. OMG! Last time I lost a weekend trying to get it fixed, I think I will still write my own sdo2wkt3d instead ;) (and update later).

More ...
News wice_grid ruby on rails
[wice-grid] adding custom filters for a column

Let me quickly introduce WiceGrid if you do not know it yet: it is a super-gem that allows you to easily show a list/grid of items, with easy filtering/searching/pagination.

For rails there is, afaik, no better alternative. There are some javascript/jquery driven dynamic grids, but for me the big advantage is that with WiceGrid all work is done server-side, which is ideal when handling large sets of data.

Since you can just render html in any column, we do for instance the following for our KLIP platform :

[Screenshot: the Id column of our grid, showing the internal identifier with the external identifier in smaller, muted text below it]

In our first column we show our internal identifier and the external identifier. In code this looks like this:

g.column name: 'Id', in_csv: false do |pr|
  render 'title_with_info', plan_request: pr
end

and the partial title_with_info looks like

%h4
  = plan_request.ident
%p.text-muted.small
  = plan_request.maprequest_id

But now the problem is: how can we, when filtering, automatically search both fields? WiceGrid automatically handles one field, but not both. Luckily, WiceGrid allows us to define custom filter types. What we want is:

  • we want the filter to just look like a standard string field
  • we want to build a query which will search for ident or maprequest_id.

Adding your own custom filter types is not entirely clear from the documentation; I had to take a look at the code to fully understand it. That is why I decided to write it down in detail here.

It takes three steps:

  • define a class to create the correct filter (a conditions generator)
  • define a custom filter_type inside WiceGrid, using your custom class
  • use the class in the column definition

Create Conditions Generator

Inside lib/wice/columns add a new file called conditions_generator_column_plan_request_identifier.rb and add the following content:

module Wice
  module Columns
    class ConditionsGeneratorColumnPlanRequestIdentifier < ConditionsGeneratorColumn #:nodoc:

      def generate_conditions(table_alias, opts) #:nodoc:
        if opts.kind_of? String
          string_fragment = opts
          negation = ''
        elsif (opts.kind_of? Hash) && opts.has_key?(:v)
          string_fragment = opts[:v]
          negation = opts[:n] == '1' ? 'NOT' : ''
        else
          Wice.log "invalid parameters for the grid string filter - must be a string: #{opts.inspect} or a Hash with keys :v and :n"
          return false
        end
        if string_fragment.empty?
          return false
        end

        table_name = @column_wrapper.alias_or_table_name(table_alias)
        op = ::Wice.get_string_matching_operators(@column_wrapper.model)
        search_value = "%#{string_fragment}%"

        [
            " #{negation} (#{table_name}.ident #{op} ? OR #{table_name}.external_id #{op} ?)",
            search_value, search_value
        ]
      end
    end
  end
end

This class is actually almost a copy of the standard column conditions generator, except that I generate a different condition at the end, comparing against two fields combined with an OR. This way I will find results if either the ident or the external_id matches the search-value.

Define filter type in config

In config/wice_grid_config.rb add the following:

Wice::Defaults::ADDITIONAL_COLUMN_PROCESSORS = {
  plan_request_identifier_filter: ['ViewColumnString', 'Wice::Columns::ConditionsGeneratorColumnPlanRequestIdentifier']
}

We just use the standard ViewColumnString to just show us a string filter.

Use the filter_type in the column

To enable the filter in the column, we just have to write the following:

g.column name: 'Id', attribute: 'ident', filter_type: :plan_request_identifier_filter, in_csv: false do |pr|
  render 'title_with_info', plan_request: pr
end
More ...
News rails-assets ruby ssl gem rails
[rubygems] handling SSL errors on Windows when installing gems

When you encounter SSL errors when installing gems on Windows, the easiest workaround is to change your sources from https://... to http://.... But ... I am an avid user/fan of rails-assets.org and today I suddenly started getting the error on their domain.

So at first I feared that rails-assets had stopped as foreseen (in this ticket), but the site was still reachable, and actually it switched (I believe just two days ago) to a new maintainer, which is awesome: the future of rails-assets is safe for now.

But there is no rose without a thorn: rails-assets now enforces TLS (which is actually a good thing), so it is always SSL, and gem cannot ignore SSL anymore. Doh! So I was stuck on windows.

I tried to make the gem command ignore ssl errors regardless, by creating c:\ProgramData\gemrc with the following content:

---
:ssl_verify_mode: 0

and that partly worked: I was now able to fetch the index, but now I received the SSL error on the first gem retrieved from rails-assets, so I was still not in the clear. I had to make sure the SSL verification actually worked!

Fortunately, after some googling this proved easier than expected! The root cause is that ruby on windows (or openssl) has no default root certificate. I found a good description of how to fix that on windows.

I used the boring/easy/manual approach, in short:

  • download the cacert.pem file from http://curl.haxx.se/ca/cacert.pem. I saved this to my ruby folder (e.g. c:\ruby21).
  • add an environment variable SSL_CERT_FILE, so ruby can pick it up. E.g. in your command prompt type set SSL_CERT_FILE=C:\ruby21\cacert.pem. To make this a permanent setting, add this to your environment variables.
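
A quick way to check that the setting is picked up (paths are just the examples from above):

# run in irb: show the CA bundle OpenSSL was compiled with,
# and the override we just configured
require 'openssl'

puts OpenSSL::X509::DEFAULT_CERT_FILE
puts ENV['SSL_CERT_FILE']    # e.g. "C:\ruby21\cacert.pem"
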
More ...
News
Handling deprecated :insert_sql and :conditions for habtm

In a system I am helping to develop, a person can be linked to a myriad of things, including other people, so we use a relation table PersonRelation, defined as follows:

class PersonRelation < ActiveRecord::Base
  belongs_to :person
  belongs_to :personifiable, :polymorphic => true
  belongs_to :person_relation_type
end

So a person can be linked to different "personifiable" things, and sometimes the meaning of the relation can change (e.g. a person could be an owner or a renter, expressed by the relation type).

In our datamodel, a person is actually a "legal person", so it could also be an organisation, and an organisation can have contacts. Logically, a contact belongs to one or more organisations by following the association in the reverse direction.

When using rails 3+ up until 4.0 we wrote the association as follows:

has_and_belongs_to_many :contacts,
                          :join_table => "person_relations",
                          :class_name => "Person",
                          :foreign_key => "person_id",
                          :association_foreign_key => "personifiable_id",
                          :readonly => false,
                          :conditions => ["personifiable_type = ? and people.archived_at is null and person_relations.archived_at is null and person_relation_type_id=?", "Person", PersonRelationType::CONTACT],
                          :insert_sql => proc {|record| "INSERT INTO person_relations(person_id, personifiable_id, personifiable_type, person_relation_type_id, created_at, updated_at) VALUES('#{self.id}', '#{record.id}', 'Person', 6, current_timestamp, current_timestamp)" }

  has_and_belongs_to_many :organisations,
                          :join_table => "person_relations",
                          :class_name => "Person",
                          :association_foreign_key => "person_id",
                          :foreign_key => "personifiable_id",
                          :readonly => false,
                          :conditions => ["personifiable_type = ? and people.archived_at is null and person_relations.archived_at is null and person_relation_type_id=?", "Person", PersonRelationType::CONTACT],
                          :insert_sql => proc {|record| "INSERT INTO person_relations(person_id, personifiable_id, personifiable_type, person_relation_type_id, created_at, updated_at) VALUES('#{record.id}', '#{self.id}', 'Person', 6, current_timestamp, current_timestamp)" }

Pretty complicated, but it does the job :) Now we keep getting deprecation warnings because :conditions, :insert_sql and :finder_sql are all deprecated and essentially removed in rails 4.1+. I kept postponing this because it seemed really hard to translate. But one suggestion in the error-message is to use has_many :through instead.

We already had the following statement in our Person model:

has_many :person_relations

so for a simple linked model, we could just write

has_many :parcels, through: :person_relations, source: :personifiable, source_type: 'Parcel'

and likewise, for our contacts, we just have to add a condition, but on PersonRelation, so we write:

has_many :contacts,
           -> { where("person_relations.person_relation_type_id" => PersonRelationType::CONTACT)},
           through: :person_relations,
           class_name: "Person",
           source: :personifiable,
           source_type: 'Person'

I am not really happy with the explicit mention of person_relations in the condition, as this might impact chainability later on, but I am not sure how I could handle that differently. For now this does the job really cleanly.

Now the problem is how to follow the reverse association (from the contacts to the organisations), and actually this also proves pretty simple. If I "reverse" the person-relations first, then we can use that as the through table:

has_many :reverse_person_relations, as: :personifiable, class_name: 'PersonRelation'
  has_many :organisations,
            -> { where("person_relations.person_relation_type_id" => PersonRelationType::CONTACT)},
           through: :reverse_person_relations,
           class_name: "Person",
           source: :person

At first I felt that the deprecation of :insert_sql, :delete_sql and :finder_sql would be an insurmountable hurdle, but actually it proved pretty simple to fix, and the result is a lot easier and even more readable. Nice :+1:

More ...
News ruby gis oracle ruby on rails
[oracle] avoiding SLOW sdo_aggr_union

There is this recurring problem we have in GIS: getting road-segments and wanting to show complete roads. The naive approach would be to do something like the following:

insert into street_geoms
select ro.rd_ro_ident, ro.rd_ro_name, ro.com_code, sdo_aggr_union(sdoaggrtype(ros.rd_ros_geometry, 0.005)) as geom
from rd_road ro, rd_ro_sec ros
where ros.rd_ro_ident = ro.rd_ro_ident
group by ro.rd_ro_ident, ro.rd_ro_name, ro.com_code;

For good measure: we have 45.000+ roads, with a total of 230.000+ road segments. So when that query started running and kept taking a long time, I started googling. Apparently there are two faster alternatives: SDO_AGGR_CONCAT_LINES and SDO_AGGR_SET_UNION. While the first was really quick and completed in minutes, the result was completely wrong (complete segments were missing). The second might be quicker, but it was really hard to get an idea of its progress, and if it failed, everything would be lost (rolled back).

So I decided to write a little script and issue a sql statement for each single road, allowing me to track progress and adding restartability. For each road I issued a statement like:

insert into street_geoms
select ro.rd_ro_ident, ro.rd_ro_name, ro.com_code, sdo_aggr_set_union(CAST(COLLECT(ros.rd_ros_geometry) AS mdsys.SDO_Geometry_Array),0.005) as geom
from rd_road ro, rd_ro_sec ros
where ros.rd_ro_ident = ro.rd_ro_ident
  and ro.rd_ro_ident = 1895101 
group by ro.rd_ro_ident, ro.rd_ro_name, ro.com_code;

I added some ruby code around it, to track the progress and calculate the remaining time, just to have an idea. The first "large" road it stumbled upon literally took hours, and it only had to join 39 segments. A simple query showed I had 150+ roads with more segments, with a maximum of 125 segments per road in the database. I could not just simply ignore them :) So this was not going to work either.

Why would this be so hard? I just wanted to throw all linestrings together into one geometry. How could I do that? Querying the geometries was really easy, so what if I joined the geometries outside of oracle? Would that be hard? Actually there is a simple solution: convert the geometries to WKT, and join all the LINESTRINGs into one MULTILINESTRING. That is just simple string manipulation. I can do that ;)

I had some hiccups with this approach: handling the long strings proved a bit awkward (use a CLOB instead), I had to regularly call GC.start to make sure the open cursors were released, and I had to make sure not to build a string literal which was too long (ORA-06550).

But in the end I was able to join the road-sections for the 45.000+ roads in approx 1.5h, which is not blindingly fast, but faster than a single SDO_AGGR_SET_UNION operation :) :)

For reference you can see the full code:

class StreetGeom < ActiveRecord::Base
  self.primary_key = 'rd_ro_ident'
end

def format_time (t)
  t = t.to_i
  sec = t % 60
  min = (t / 60) % 60
  hour = t / 3600
  sprintf("% 3d:%02d:%02d", hour, min, sec)
end

def eta(count)
  if count == 0
    "ETA: --:--:--"
  else
    elapsed = Time.now - @start_time
    # eta = elapsed * @total / count - elapsed;
    eta = (elapsed / count) * (@total - count)

    sprintf("ETA: %s", format_time(eta))
  end
end

all_roads = Road.count
geoms_to_calculate = all_roads - StreetGeom.count
@total = geoms_to_calculate

puts "Joining geometries for #{all_roads} roads [still #{geoms_to_calculate} to do]"

cntr = 1
@start_time = Time.now

done = 0

Road.order(:rd_ro_ident).each do |road|
  street_count = StreetGeom.where(rd_ro_ident: road.rd_ro_ident).count
  print "\rConverting #{cntr}/#{all_roads} [#{eta(done)}] "
  if street_count == 0
    print "..."
    $stdout.flush

    ## get all geometries in WKT format
    get_geoms_sql = <<-SQL
      select sdo_cs.make_2d(ros.rd_ros_geometry).get_wkt() as wkt_geom from rd_ro_sec ros where ros.rd_ro_ident = #{road.rd_ro_ident}
    SQL

    cursor = Road.connection.execute(get_geoms_sql)

    line_strings = []

    while row = cursor.fetch
      line_string = row[0].read.to_s
      line_strings << line_string[10..-1]
    end

    insert_sql = <<-SQL
      DECLARE
        wkt_str clob;
      BEGIN
        wkt_str := 'MULTILINESTRING(#{line_strings.join(", ';\nwkt_str := wkt_str || '")})';
        insert into street_geoms(rd_ro_ident, name, com_code, geom)
        values (#{road.rd_ro_ident}, q'[#{road.rd_ro_name}]', '#{road.com_code}',
             sdo_util.from_wktgeometry(to_clob(wkt_str)) );
      END;
    SQL

    Road.connection.execute(insert_sql)
    done += 1
  else
    print "_"
  end

  cntr += 1

  # periodically cleanup GC so we release open cursors ...
  # to avoid ORA-1000 errors
  if (cntr % 50) == 0
    GC.start
  end
end

print "\n"
puts "\n\nDone!"

and I run this script in the rails environment as follows: rails runner lib\tasks\join_road_geometries.rb.

More ...