Single Table Inheritance for legacy databases

Single table inheritance is a pretty common practice among Rails developers. It lets you separate a single object into distinct concepts while keeping the underlying data in one table. For example, you could separate the logic for an admin and a customer even though the data for the two types of users is essentially the same. The catch is that Rails defaults this distinction to a database column named type that stores the class name as a string. My legacy PHP framework, on the other hand, stored it in a column named typeID with a single-character code representing the class.

self.inheritance_column = :typeID

This tells the User model to use the typeID column as the single table inheritance column.

  ADMIN_TYPE = 'A'
  CLIENT_USER_TYPE = 'C'
  SUPER_ADMIN_TYPE = 'S'

  TYPES_MAP = {
    ADMIN_TYPE => User::Admin,
    CLIENT_USER_TYPE => User::ClientUser,
    SUPER_ADMIN_TYPE => User::SuperAdmin
  }

  def self.find_sti_class(type_name)
    TYPES_MAP[type_name] || super
  end

  def self.sti_name
    TYPES_MAP.invert[self]
  end

This tells the User model which single-character codes map to which classes in the Rails application.

The only hiccup I ran into was that the subclasses themselves also tried to act as STI base classes when instantiated and saved. So, I added the following to each subclass to disable that.

self.inheritance_column = :_type_disabled
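Concretely, the read path then works like this (a quick sketch; the id and the stored data are hypothetical):

user = User.find(42)   # a row whose typeID column holds 'A'
user.class             # => User::Admin, resolved through find_sti_class
user.is_a?(User)       # => true, so shared User behavior still applies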

Where I got the idea

Ransack

Dashboard: main project

This week I had the unique task of building a fully interactive dashboard: a system that not only displays a large amount of analytics from all over the site, but does it in a way that lets admins drill into the information and get immediate data updates.

The first problem was handling all the different types of data on different pages without redundant code. I handled this using Rails partial views under a dashboard namespace. So, if I wanted a table of user data I would render the app/views/dashboard/users/_table.html.haml partial. I chose to use Rails remote forms to handle the AJAX loading.
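As a rough sketch of the pattern (the controller below is illustrative; the names and details are my assumptions, not the app's actual code):

# app/controllers/dashboard/users_controller.rb
class Dashboard::UsersController < ApplicationController
  def index
    @users = User.all
    respond_to do |format|
      format.html
      # Remote (AJAX) requests render index.js, which re-renders
      # app/views/dashboard/users/_table with the fresh @users.
      format.js
    end
  end
end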

Ransack

At this point I had a dashboard that I could click into to see the data on another page, and the code stayed clean because it was being reused. The big problem was that, based on feedback, we needed pages that could be drilled into on the same page too: an admin could click on a user, and all other dashboard data on that page would reflect objects relating to just that user, and so on for the rest of the dashboard objects. This was a big issue in my mind because it could mean a potentially limitless number of parameters passed back and forth between a huge number of partials. I found a Ruby gem that was perfect for this kind of filtering called Ransack. There was a great RailsCasts episode on Ransack, but I found it lacking answers for my specific problems.

I’d like to delve into these issues. The first was that my dashboard was working off a legacy MySQL database built under a PHP framework. Ransack works on single attributes of an object. The users controller action is essentially this simple:

@search = User.ransack(params[:q])
@users = @search.result(distinct: true)

The ransack call takes the q parameter, which stores all the Ransack filters. The next obvious question is how the user interface interacts with this. I used three kinds of interaction: buttons that acted as toggles to turn some filters on and others off, form fields for ‘LIKE’ searches, and sortable table columns.

The toggle links were probably the easiest step. The only real challenge was clearing the other filters out of Ransack. I didn’t see anything in the documentation about clearing other filters, so I accomplished it by simply overwriting the q param in the link.

= link_to 'Toggle Link', dashboard_users_path(q: user_query_params(false, false, false)), remote: true

The helper method I’m referencing would look something like this.

def user_query_params(param1, param2, param3)
  (params[:q] || {}).merge(filter1: param1, filter2: param2, filter3: param3)
end

Custom form searches

Ransack provides a search_form_for helper that takes a search object. The [:dashboard, search] array references the namespaced @search variable from the users controller. I’m using a custom search here because I’d like to use my own model scope.

= search_form_for [:dashboard, search], remote: true do |f|
  = f.search_field :filter_by_name

The scope on the model would look something like this.

scope :filter\_by\_name, -> (name) { where("CONCAT(firstName, ' ', lastName) LIKE ?", "%#{name}%") }

def self.ransackable_scopes(auth_object = nil)
  %i(filter_by_name)
end

ransackable_scopes defines all the scopes a user can access through Ransack. This can be refined by user role, but I didn’t need that functionality. Normally, custom form searches reference something like f.search_field :name_cont, which searches for users whose name attribute contains the input text. Ransack has other predicates for equals, less than, and so on.
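A quick console sketch of the difference between the whitelisted scope and the built-in predicates (the data is hypothetical):

User.ransack(filter_by_name: 'smith').result   # custom scope, allowed via ransackable_scopes
User.ransack(firstName_cont: 'smith').result   # built-in "contains" predicate on a column
User.ransack(firstName_eq: 'John').result      # built-in "equals" predicate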

Column sorts

Column sorts are very elegant in Ransack.

%th= sort_link [:dashboard, search], :name, 'Name', { default_order: :desc }, { remote: true, method: :get }

This header puts a sort_link on the Name column. The first issue here is that Name in the system is actually a method that returns a concatenated field, not an attribute. It turns out that Ransack has a feature called ransackers that define custom SQL for exactly this situation.

ransacker :name do |parent|
  Arel.sql('CONCAT(users.firstName, " ", users.lastName)')
end
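With that in place, both sorting and predicates on :name run against the concatenated expression. Roughly, in the console (the exact generated SQL will vary):

User.ransack(name_cont: 'smith').result.to_sql
# => something along the lines of:
#    SELECT users.* FROM users
#    WHERE CONCAT(users.firstName, " ", users.lastName) LIKE '%smith%'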

Setting a default sort order on the users controller would look something like this.

@search = User.ransack(params[:q])
@search.sorts = ["name asc"] if @search.sorts.empty?
@users = @search.result(distinct: true)

I’m setting the default sort to name ascending unless another sort is already set.

Chef - Installing Postgres - Load Balanced Servers

I want to talk about provisioning a Postgres server for a load-balanced production Rails application. This is probably a topic everyone will come across: you have a live Rails application, it starts becoming a bigger deal, and you need to spin up more servers and load balance them. In that case, you need one server to host the database and all the other application servers to talk to it. No big deal, right? Well, I’m using Chef to provision my servers, and I got stuck on a very simple thing. Okay, I provisioned my servers, but how do I get the password for the database server so my application can talk to it? Seems simple enough, but where is that password?

The Postgres password is stored in the node’s attribute data. To view and edit it with knife:

knife node edit SERVER-NAME

So simple, but I just didn’t know where it was. I thought I would make note, so no more time is wasted.
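For the community postgresql cookbook I was using, I believe the generated password ends up under the postgresql attributes, roughly like this (treat the exact attribute path as an assumption; it can vary by cookbook version):

# visible under "postgresql" in the knife node edit output,
# or readable from a recipe as:
node['postgresql']['password']['postgres']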

A couple of other notes about connecting to a Postgres server. After Chef provisioning, when I ran the cap deployment, I ran into this error:

rake aborted!
DEBUG [] 	PG::ConnectionBad: could not connect to server: Connection refused
DEBUG [] 		Is the server running on host "IP_ADDRESS" and accepting
DEBUG [] 		TCP/IP connections on port 5432?

So my first thoughts turned to: is my server configured to accept TCP/IP connections? What port is my Postgres server running on? I read an article on TCP/IP configuration with Postgres, but when I added tcpip_socket = true, it raised errors. I then found an article on setting up your Postgres server to listen outside of localhost. That seemed to do the trick.

Naturally, my next thought was: awesome, this works, but can I automate it for next time? So I went to the documentation and, of course, it’s right there in front of me. Adding these two lines does exactly what I described above, which means I am now fully automated.

node.default['postgresql']['config']['listen_addresses'] = '*'
node.default['postgresql']['pg_hba'] = [{:type => 'host', :db => 'all', :user => 'postgres', :addr => 'IPADDRESS/32', :method => 'md5'}]

Chef - Installing Postgres - No Make file

I had to reformat my Mac because of a bad installation of VirtualBox. I’m now using Vagrant 1.5.4 and VirtualBox 4.3.6, and I’m just working through all the little bug nuances. Here’s one of them I thought I would mention.

[2014-06-19T13:34:28-05:00] WARN: Failed to properly build pg gem. Forcing properly linking and retrying (omnibus fix)
  * execute[generate pg gem Makefile] action run
    - execute /opt/chef/embedded/bin/ruby extconf.rb

  * execute[make pg gem lib] action run
================================================================================
Error executing action `run` on resource 'execute[make pg gem lib]'
================================================================================


Errno::ENOENT
-------------
No such file or directory - make


Cookbook Trace:
---------------
/var/chef/cache/cookbooks/postgresql/recipes/ruby.rb:112:in `rescue in rescue in from_file'
/var/chef/cache/cookbooks/postgresql/recipes/ruby.rb:63:in `rescue in from_file'
/var/chef/cache/cookbooks/postgresql/recipes/ruby.rb:24:in `from_file'
/var/chef/cache/cookbooks/slice/recipes/default.rb:44:in `from_file'


Resource Declaration:
---------------------
# In /var/chef/cache/cookbooks/postgresql/recipes/ruby.rb

107:     lib_maker = execute 'make pg gem lib' do
108:       command 'make'
109:       cwd ext_dir
110:       action :nothing
111:     end
112:     lib_maker.run_action(:run)



Compiled Resource:
------------------
# Declared in /var/chef/cache/cookbooks/postgresql/recipes/ruby.rb:107:in `rescue in rescue in from_file'

execute("make pg gem lib") do
  action [:nothing]
  retries 0
  retry_delay 2
  command "make"
  backup 5
  cwd "/opt/chef/embedded/lib/ruby/gems/1.9.1/gems/pg-0.17.1/ext"
  returns 0
  cookbook_name "postgresql"
  recipe_name "ruby"
end




================================================================================
Recipe Compile Error in /var/chef/cache/cookbooks/slice/recipes/default.rb
================================================================================


Errno::ENOENT
-------------
execute[make pg gem lib] (postgresql::ruby line 107) had an error: Errno::ENOENT: No such file or directory - make


Cookbook Trace:
---------------
  /var/chef/cache/cookbooks/postgresql/recipes/ruby.rb:112:in `rescue in rescue in from_file'
  /var/chef/cache/cookbooks/postgresql/recipes/ruby.rb:63:in `rescue in from_file'
  /var/chef/cache/cookbooks/postgresql/recipes/ruby.rb:24:in `from_file'
  /var/chef/cache/cookbooks/slice/recipes/default.rb:44:in `from_file'


Relevant File Content:
----------------------
/var/chef/cache/cookbooks/postgresql/recipes/ruby.rb:

105:      lib_builder.run_action(:run)
106:  
107:      lib_maker = execute 'make pg gem lib' do
108:        command 'make'
109:        cwd ext_dir
110:        action :nothing
111:      end
112>>     lib_maker.run_action(:run)
113:  
114:      lib_installer = execute 'install pg gem lib' do
115:        command 'make install'
116:        cwd ext_dir
117:        action :nothing
118:      end
119:      lib_installer.run_action(:run)
120:  
121:      spec_installer = execute 'install pg spec' do



[2014-06-19T13:34:35-05:00] ERROR: Running exception handlers
[2014-06-19T13:34:35-05:00] ERROR: Exception handlers complete
[2014-06-19T13:34:35-05:00] FATAL: Stacktrace dumped to /var/chef/cache/chef-stacktrace.out
Chef Client failed. 2 resources updated
[2014-06-19T13:34:36-05:00] FATAL: Chef::Exceptions::ChildConvergeError: Chef run process exited unsuccessfully (exit code 1)

The first thing I found was that make is part of the build-essential package set. So I include that cookbook before the postgresql recipe with:

include_recipe "build-essential"

Chef - Server Provisioning Software

I’m trying to run Capistrano to deploy a code base. When I run it, the Postgres connection fails during db:migrate with the following:

rake aborted!
DEBUG [] 	PG::ConnectionBad: FATAL:  no pg_hba.conf entry for host "0.0.0.0", user "postgres", database "slice", SSL on
DEBUG [] 	FATAL:  no pg_hba.conf entry for host "0.0.0.0", user "postgres", database "slice", SSL off
DEBUG [] 	/home/deploy/slice/shared/bundle/ruby/2.1.0/gems/activerecord-4.0.3/lib/active_record/connection_adapters/postgresql_adapter.rb:831:in `initialize'
DEBUG [] 	/home/deploy/slice/shared/bundle/ruby/2.1.0/gems/activerecord-4.0.3/lib/active_record/connection_adapters/postgresql_adapter.rb:831:in `new'
DEBUG [] 	/home/deploy/slice/shared/bundle/ruby/2.1.0/gems/activerecord-4.0.3/lib/active_record/connection_adapters/postgresql_adapter.rb:831:in `connect'
DEBUG [] 	/home/deploy/slice/shared/bundle/ruby/2.1.0/gems/activerecord-4.0.3/lib/active_record/connection_adapters/postgresql_adapter.rb:548:in `initialize'
DEBUG [] 	/home/deploy/slice/shared/bundle/ruby/2.1.0/gems/activerecord-4.0.3/lib/active_record/connection_adapters/postgresql_adapter.rb:41:in `new'
DEBUG [] 	/home/deploy/slice/shared/bundle/ruby/2.1.0/gems/activerecord-4.0.3/lib/active_record/connection_adapters/postgresql_adapter.rb:41:in `postgresql_connection'
DEBUG [] 	/home/deploy/slice/shared/bundle/ruby/2.1.0/gems/activerecord-4.0.3/lib/active_record/connection_adapters/abstract/connection_pool.rb:440:in `new_connection'
DEBUG [] 	/home/deploy/slice/shared/bundle/ruby/2.1.0/gems/activerecord-4.0.3/lib/active_record/connection_adapters/abstract/connection_pool.rb:450:in `checkout_new_connection'
DEBUG [] 	/home/deploy/slice/shared/bundle/ruby/2.1.0/gems/activerecord-4.0.3/lib/active_record/connection_adapters/abstract/connection_pool.rb:421:in `acquire_connection'
DEBUG [] 	/home/deploy/slice/shared/bundle/ruby/2.1.0/gems/activerecord-4.0.3/lib/active_record/connection_adapters/abstract/connection_pool.rb:356:in `block in checkout'
DEBUG [] 	/home/deploy/slice/shared/bundle/ruby/2.1.0/gems/activerecord-4.0.3/lib/active_record/connection_adapters/abstract/connection_pool.rb:355:in `checkout'
DEBUG [] 	/home/deploy/slice/shared/bundle/ruby/2.1.0/gems/activerecord-4.0.3/lib/active_record/connection_adapters/abstract/connection_pool.rb:265:in `block in connection'
DEBUG [] 	/home/deploy/slice/shared/bundle/ruby/2.1.0/gems/activerecord-4.0.3/lib/active_record/connection_adapters/abstract/connection_pool.rb:264:in `connection'
DEBUG [] 	/home/deploy/slice/shared/bundle/ruby/2.1.0/gems/activerecord-4.0.3/lib/active_record/connection_adapters/abstract/connection_pool.rb:546:in `retrieve_connection'
DEBUG [] 	/home/deploy/slice/shared/bundle/ruby/2.1.0/gems/activerecord-4.0.3/lib/active_record/connection_handling.rb:79:in `retrieve_connection'
DEBUG [] 	/home/deploy/slice/shared/bundle/ruby/2.1.0/gems/activerecord-4.0.3/lib/active_record/connection_handling.rb:53:in `connection'
DEBUG [] 	/home/deploy/slice/shared/bundle/ruby/2.1.0/gems/activerecord-4.0.3/lib/active_record/migration.rb:863:in `initialize'
DEBUG [] 	/home/deploy/slice/shared/bundle/ruby/2.1.0/gems/activerecord-4.0.3/lib/active_record/migration.rb:764:in `new'
DEBUG [] 	/home/deploy/slice/shared/bundle/ruby/2.1.0/gems/activerecord-4.0.3/lib/active_record/migration.rb:764:in `up'
DEBUG [] 	/home/deploy/slice/shared/bundle/ruby/2.1.0/gems/activerecord-4.0.3/lib/active_record/migration.rb:742:in `migrate'
DEBUG [] 	/home/deploy/slice/shared/bundle/ruby/2.1.0/gems/activerecord-4.0.3/lib/active_record/railties/databases.rake:42:in `block (2 levels) in <top (required)>'
DEBUG [] 	Tasks: TOP => db:migrate
DEBUG [] 	(See full trace by running task with --trace)
cap aborted!
SSHKit::Command::Failed: rake exit status: 1
rake stdout: Nothing written
rake stderr: rake aborted!
PG::ConnectionBad: FATAL:  no pg_hba.conf entry for host "0.0.0.0", user "postgres", database "slice", SSL on
FATAL:  no pg_hba.conf entry for host "0.0.0.0", user "postgres", database "slice", SSL off

The pg_hba.conf file is generated when Postgres is installed. The only problem is that I don’t want Postgres installed on that server. I want the application to communicate with another server that hosts the Postgres database, so that multiple application servers can access the same database. Since I’m not installing Postgres, the config file is not being created. I could run gem install pg --without-pg_config, but then I wouldn’t be able to run Capistrano. The pg_hba file is described as a configuration file for client authentication. At first I thought this file needed to be created on the application side, but the application is actually relying on this file on the Postgres server. So, this file needs to live on the Postgres server at /etc/postgresql/VERSION/main/pg_hba.conf

# PostgreSQL Client Authentication Configuration File
# ===================================================
#
# Refer to the "Client Authentication" section in the PostgreSQL
# documentation for a complete description of this file.  A short
# synopsis follows.
#
# This file controls: which hosts are allowed to connect, how clients
# are authenticated, which PostgreSQL user names they can use, which
# databases they can access.  Records take one of these forms:
#
# local      DATABASE  USER  METHOD  [OPTIONS]
# host       DATABASE  USER  ADDRESS  METHOD  [OPTIONS]
# hostssl    DATABASE  USER  ADDRESS  METHOD  [OPTIONS]
# hostnossl  DATABASE  USER  ADDRESS  METHOD  [OPTIONS]
#
# (The uppercase items must be replaced by actual values.)
#
# The first field is the connection type: "local" is a Unix-domain
# socket, "host" is either a plain or SSL-encrypted TCP/IP socket,
# "hostssl" is an SSL-encrypted TCP/IP socket, and "hostnossl" is a
# plain TCP/IP socket.
#
# DATABASE can be "all", "sameuser", "samerole", "replication", a
# database name, or a comma-separated list thereof. The "all"
# keyword does not match "replication". Access to replication
# must be enabled in a separate record (see example below).
#
# USER can be "all", a user name, a group name prefixed with "+", or a
# comma-separated list thereof.  In both the DATABASE and USER fields
# you can also write a file name prefixed with "@" to include names
# from a separate file.
#
# ADDRESS specifies the set of hosts the record matches.  It can be a
# host name, or it is made up of an IP address and a CIDR mask that is
# an integer (between 0 and 32 (IPv4) or 128 (IPv6) inclusive) that
# specifies the number of significant bits in the mask.  A host name
# that starts with a dot (.) matches a suffix of the actual host name.
# Alternatively, you can write an IP address and netmask in separate
# columns to specify the set of hosts.  Instead of a CIDR-address, you
# can write "samehost" to match any of the server's own IP addresses,
# or "samenet" to match any address in any subnet that the server is
# directly connected to.
#
# METHOD can be "trust", "reject", "md5", "password", "gss", "sspi",
# "krb5", "ident", "peer", "pam", "ldap", "radius" or "cert".  Note that
# "password" sends passwords in clear text; "md5" is preferred since
# it sends encrypted passwords.
#
# OPTIONS are a set of options for the authentication in the format
# NAME=VALUE.  The available options depend on the different
# authentication methods -- refer to the "Client Authentication"
# section in the documentation for a list of which options are
# available for which authentication methods.
#
# Database and user names containing spaces, commas, quotes and other
# special characters must be quoted.  Quoting one of the keywords
# "all", "sameuser", "samerole" or "replication" makes the name lose
# its special character, and just match a database or username with
# that name.
#
# This file is read on server startup and when the postmaster receives
# a SIGHUP signal.  If you edit the file on a running system, you have
# to SIGHUP the postmaster for the changes to take effect.  You can
# use "pg_ctl reload" to do that.

# Put your actual configuration here
# ----------------------------------
#
# If you want to allow non-local connections, you need to add more
# "host" records.  In that case you will also need to make PostgreSQL
# listen on a non-local interface via the listen_addresses
# configuration parameter, or via the -i or -h command line switches.




# DO NOT DISABLE!
# If you change this first entry you will need to make sure that the
# database superuser can access the database using some other method.
# Noninteractive access to all databases is required during automatic
# maintenance (custom daily cronjobs, replication, and similar tasks).
#
# Database administrative login by Unix domain socket
local   all             postgres                                peer

# TYPE  DATABASE        USER            ADDRESS                 METHOD

# "local" is for Unix domain socket connections only
local   all             all                                     peer
# IPv4 local connections:
host    all             all             APPLICATION_IP/32            md5
# IPv6 local connections:
host    all             all             ::1/128                 md5
# Allow replication connections from localhost, by a user with the
# replication privilege.
#local   replication     postgres                                peer
#host    replication     postgres        127.0.0.1/32            md5
#host    replication     postgres        ::1/128                 md5

The important line is the one with APPLICATION_IP, which tells your Postgres server where it will accept traffic from. Most of the other lines are just comments. Here is another article on customizing that configuration file.

Chef - Server Provisioning Software

So, I got to the point where I wanted a clean slate, so I reformatted and started my Mac over with a clean install. I’m setting up things that I haven’t configured in a long time. It’s good for my memory, though.

Problem #1: I’m running

bundle install

on Ruby version 1.9.2 for a chef repository. It seems to have a problem installing the nokogiri gem. Nokogiri is an HTML, XML, SAX, and Reader parser.

bundle install
Fetching gem metadata from https://rubygems.org/.......
Fetching additional metadata from https://rubygems.org/..
Using rake 10.3.1
Using builder 3.2.2
Using gyoku 1.1.1
Using mini_portile 0.5.3
sh: -c: line 0: unexpected EOF while looking for matching `"'
sh: -c: line 1: syntax error: unexpected end of file

Gem::Ext::BuildError: ERROR: Failed to build gem native extension.

    /Users/mahcloud/.rvm/rubies/ruby-1.9.2-p320/bin/ruby extconf.rb "--with-xml2-include=/usr/local/Cellar/libxml2/2.7.8/include/libxml2 --with-xml2-lib=/usr/local/Cellar/libxml2/2.7.8/lib --with-xslt-dir=/usr/local/Cellar

extconf failed, exit code 2

Gem files will remain installed in /Users/mahcloud/.rvm/gems/ruby-1.9.2-p320@1kb-chef/gems/nokogiri-1.6.2 for inspection.
Results logged to /Users/mahcloud/.rvm/gems/ruby-1.9.2-p320@1kb-chef/extensions/x86_64-darwin-13/1.9.1/nokogiri-1.6.2/gem_make.out
An error occurred while installing nokogiri (1.6.2), and Bundler cannot continue.
Make sure that `gem install nokogiri -v '1.6.2'` succeeds before bundling.

I take the advice and run gem install nokogiri -v '1.6.2', which fails because it can’t find the libiconv native library.

gem install nokogiri -v '1.6.2'
Building native extensions.  This could take a while...
Building nokogiri using packaged libraries.
ERROR:  Error installing nokogiri:
	ERROR: Failed to build gem native extension.

    /Users/mahcloud/.rvm/rubies/ruby-1.9.2-p320/bin/ruby extconf.rb
Building nokogiri using packaged libraries.
-----
libiconv is missing.  please visit http://nokogiri.org/tutorials/installing_nokogiri.html for help with installing dependencies.
-----
*** extconf.rb failed ***
Could not create Makefile due to some reason, probably lack of
necessary libraries and/or headers.  Check the mkmf.log file for more
details.  You may need configuration options.

Provided configuration options:
	--with-opt-dir
	--with-opt-include
	--without-opt-include=${opt-dir}/include
	--with-opt-lib
	--without-opt-lib=${opt-dir}/lib
	--with-make-prog
	--without-make-prog
	--srcdir=.
	--curdir
	--ruby=/Users/mahcloud/.rvm/rubies/ruby-1.9.2-p320/bin/ruby
	--help
	--clean
	--use-system-libraries
	--enable-static
	--disable-static
	--with-zlib-dir
	--without-zlib-dir
	--with-zlib-include
	--without-zlib-include=${zlib-dir}/include
	--with-zlib-lib
	--without-zlib-lib=${zlib-dir}/lib
	--enable-cross-build
	--disable-cross-build

extconf failed, exit code 1

Gem files will remain installed in /Users/mahcloud/.rvm/gems/ruby-1.9.2-p320@1kb-chef/gems/nokogiri-1.6.2 for inspection.
Results logged to /Users/mahcloud/.rvm/gems/ruby-1.9.2-p320@1kb-chef/extensions/x86_64-darwin-13/1.9.1/nokogiri-1.6.2/gem_make.out

libiconv is a library that converts string encodings to and from Unicode. libiconv comes standard on my Mac, but for some reason nokogiri thinks it is missing.

So we need to install libiconv again and tell nokogiri where libiconv is. Options for installing libiconv include MacPorts and Homebrew; I installed it with Homebrew and then ran:

gem install nokogiri -- --with-iconv-dir=/usr/local/Cellar/libiconv/1.14/

This installed successfully, and it fixed the problem in my development environment for my site, but the chef repository still didn’t recognize that nokogiri was installed.

Next, I ran

xcode-select --install

This installs the command-line developer tools for Mac, which include libiconv. After that, installing nokogiri as normal took care of the problem.

Chef - Server Provisioning Software

Okay, today I’m wearing my chef loves bacon hat. Today’s goal is to get started with Chef for provisioning load balanced API servers.

I’ll be starting out with this RailsCasts video. Well, that video is on chef-solo and I’ll be using Chef Server, so it’s not as much help as I was hoping. Next I went through this tutorial, but it just flat out didn’t work, so I stopped halfway through. My boss suggested this video series, but it is beyond boring and overly complex. We’ll be using Opscode to host our Chef server.

Ruby God - Process Management

So I’m back after a few months.

The project I was working on that needed a custom install was not received well. Originally everyone liked it, but when customers who didn’t understand it started to complain, management just wanted it shut down. Bad data is better than no data at all, I guess.

Well, I’m back and on to a new project that needs Rails deployment. This time it is a reporting tool. In order not to slow down the live orders/quotes tables with reporting, I’m building a separate database of order/quote data that should be populated identically to the live database. I’m using RabbitMQ to transfer a hash of the data when an order or quote is changed. This way it doesn’t slow down the transaction or the database on the live site. The hash is put in a queue by the RabbitMQ exchange. Our reporting application will then consume that data and save the calculations to a Postgres database.

Now that we have all the background story covered: I’ve got the RoR application deploying to production and to a beta staging environment. When I deploy to these boxes, I need a zero-downtime switch to the new code. Since the reporting server is constantly consuming data from RabbitMQ, it would be very bad if it missed any of the queued items. I’ll be using god to keep the RabbitMQ listener up and running, even during a code deployment. So, the line I need to add to my Capistrano deployment is:

sudo /var/god/.rvm/bin/god restart APP-NAME
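Wrapped in a Capistrano 3 task, that might look roughly like this (a sketch; the task name and the hook point are my assumptions):

# config/deploy.rb
namespace :god do
  desc 'Restart the RabbitMQ listener managed by god'
  task :restart do
    on roles(:app) do
      execute :sudo, '/var/god/.rvm/bin/god', 'restart', 'APP-NAME'
    end
  end
end

after 'deploy:publishing', 'god:restart'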

I’m having a little problem here.

Command: /usr/bin/env sudo
DEBUG [] 	usage: sudo -h | -K | -k | -V
DEBUG [] 	usage: sudo -v [-g group] [-h host] [-p prompt] [-u user]
DEBUG [] 	usage: sudo -l [-g group] [-h host] [-p prompt] [-U user] [-u user]
DEBUG [] 	  
DEBUG [] 	          [command]
DEBUG [] 	usage: sudo [-r role] [-t type] [-C num] [-g group] [-h host] [-p
DEBUG [] 	  
DEBUG [] 	          prompt] [-u user] [VAR=value] [-i|-s] [<command>]
DEBUG [] 	usage: sudo -e [-r role] [-t type] [-C num] [-g group] [-h host] [-p
DEBUG [] 	    
DEBUG [] 	        prompt] [-u user] file ...
cap aborted!
SSHKit::Command::Failed: sudo exit status: 1
sudo stdout: Nothing written
sudo stderr: usage: sudo -h | -K | -k | -V
/gems/sshkit-1.4.0/lib/sshkit/command.rb:98:in `exit_status='

First thought: this exact command does work on the server when entered manually.

Next, what is SSHKit? It is the gem Capistrano uses to make SSH connections to all the different environments I will be deploying to.

Exit status 1 just means there was an error, with no more explanation than that. So I’m a little stuck here; bash doesn’t give any specific error code. I would guess it has something to do with using sudo without a password over an SSH connection that uses an SSH key for authentication. I asked my boss if he had any suggestions, and he asked: does the deploy user have password-less sudo rights? It also looks like there is something called pty and tty. So what are they and what’s the difference? I found articles on TTY, PTY, and Teletype. I was still very confused about why I would need a virtual terminal for deploying, and, if Capistrano needs a virtual terminal to run commands, how it did anything without one. Maybe it was already using one and I can just set it to a different one.

Okay, so I still feel lost enough not to know where to go. Since, pty doesn’t sound harmful at all, I guess I will just turn it on and see what happens.

Turning on pty with set :pty, true or set :tty, true in config/deploy.rb file gave the same results.

Okay, I think I was on the wrong track with the pty thing. I had to add my user as a passwordless sudo user for deploying. That took care of the problem.

First rails deploy

I’ve been working on learning how to deploy a Rails app without the help of Heroku. You might ask why I would want to do that. That’s a really good question that I ask myself every time I can’t figure things out for hours on end. I won’t answer that question now; I’ll leave that for the last post I make on RailsDeploy.

My teacher is the author of Fearless Rails Deployment. If you preorder his book, he’ll let you view the draft so far. I don’t want to give away anything that is in his book, so I’m just gonna skim over what I’ve learned.

I deployed using Capistrano. This command creates the Capistrano files:

cap install

To use Capistrano you’ll need to be able to authenticate over SSH with an SSH key instead of a password. Here is a great tutorial on how to do that.

First, I needed to install Unicorn as my Rails server. This part was a lot easier than I thought it was going to be. After adding the gem and installing it, I just made a Unicorn config file and ran Unicorn.

vim /etc/unicorn/mahcloud.rb
pid '/home/root/mahcloud/tmp/unicorn.pid'
working_directory '/home/root/mahcloud/'

The config file has a little more to it than that.
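For reference, here is a fuller sketch of what such a config might contain (the two paths above are from my setup; the worker count, socket, and log paths below are assumptions):

# /etc/unicorn/mahcloud.rb
working_directory '/home/root/mahcloud/'
pid '/home/root/mahcloud/tmp/unicorn.pid'
stderr_path '/home/root/mahcloud/log/unicorn.stderr.log'
stdout_path '/home/root/mahcloud/log/unicorn.stdout.log'
worker_processes 2
listen '/home/root/mahcloud/tmp/unicorn.sock', backlog: 64
timeout 30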

Javascript Canvas Game Pokemon

Wouldn’t it be more fun if we were dealing with Pokemon instead of an Ogre?

[Gallery of the 18 Pokemon sprite images]

These are the image dimensions for the pokemon.

var pokemon_dimensions = new Array(
	new Array(27, 32),
	new Array(27, 32),
	new Array(29, 32),
	new Array(27, 32),
	new Array(21, 32),
	new Array(32, 24),
	new Array(31, 32),
	new Array(32, 30),
	new Array(30, 32),
	new Array(27, 32),
	new Array(26, 32),
	new Array(28, 32),
	new Array(30, 32),
	new Array(32, 24),
	new Array(32, 30),
	new Array(24, 32),
	new Array(32, 26),
	new Array(18, 32)
);

Add this to your spawnMonster function.

pokemon = Math.floor(Math.random() * 18) + 1;            // pick a random sprite number, 1 through 18
monsterImage.src = "images/pokemon/" + pokemon + ".png";
monster.width = pokemon_dimensions[pokemon - 1][0];      // the dimensions array is zero-indexed
monster.height = pokemon_dimensions[pokemon - 1][1];

Let’s rename some methods. monstersSlain should be pokemonCaught. slayMonster should be catchPokemon.

You can download this example.
