While converting my s3cache to a Rails plugin, I could not figure out why, when I installed the plugin, it created this directory structure:

vendor
  plugins
    svn
      s3cache

I did not want it to create the svn directory; I wanted s3cache to sit directly under plugins.

I finally figured it out. Your svn repository for your plugin needs to have a plugins directory. So your repository might look something like this:

svn
  branches
  tags
  trunk
    plugins
      s3cache


Below are the step-by-step instructions I used to set up my EC2 instance for my soon-to-launch Rails app (Quizical.net). It uses Rails with LiteSpeed as the server. At the end of the install, it makes heavy use of my Capistrano and EC2.rake tasks to install my app.

Warning: I'm not an expert at setting up a Linux box, so caveat emptor. That is also why I documented everything I did, so I could go back and do it again if I had to.

Set Up the EC2 Tools

For most of this section I followed the Amazon Getting Started Guide, but I also used these other sources (1, 2).

First we have to set up the EC2 tools on our local computer (Mac OS X).

  • Download the Command Line Tools from Amazon
  • Unzip it to the directory of your choice. I put it in: /Documents/Projects/ec2/api/
  • Copy the below into /etc/profile:


export JAVA_HOME=/Library/Java/Home
export EC2_HOME=~/Documents/Projects/ec2/api/
export PATH=$PATH:$EC2_HOME/bin
export EC2_PRIVATE_KEY=~/Documents/Projects/ec2/auth/pk-4IMZKCL2QEK2FDPLWCKICJNOTNUNWT24.pem
export EC2_CERT=~/Documents/Projects/ec2/auth/cert-4IMZKCL2QEK2FDPLWCKICJNOTNUNWT24.pem

Now we need to generate the keypair. From the Amazon “Getting Started Guide”:

You will be running an instance of a public AMI. Since it has no password you will need a public/private keypair to login to the instance. One half of this keypair will be embedded into your instance, allowing you to login securely without a password using the other half of the keypair.

# ~/Documents/Projects/ec2/api/bin/ec2-add-keypair rails-server

Which will generate something like….
KEYPAIR rails-server a8:20:2a:ad:c0:16:b8:20:ff:45:43:7e:54:8c:55:ce:43:36:32:d1
-----BEGIN RSA PRIVATE KEY-----
MIIEogIBAAKCAQEAjVwZnRILPoTPSpij4+lLq7ByP8QGMkJOq50Z9Hf3+HOw+6v7MihrZaeprTz68+Lyi9O3P2MGrEFJmgEmvpIdmjpS+vfGlPd+g7BgvFMej+hiXONJZISxG6XbmnbmE1oaxblPgIR2
tMZ6sdwZ3xJt2+Pped8eqDcuYm4TCHZhZM9Qv3sCycoJ1fFAr5d3EjGijNTHfrWBcDA=
-----END RSA PRIVATE KEY-----

Copy everything between (and including) the "-----BEGIN RSA PRIVATE KEY-----" and "-----END RSA PRIVATE KEY-----" lines and paste it into a text file named 'id_rsa-rails-server'. I saved my key in a directory called: /Documents/Projects/ec2/auth/

Next we need to change the permissions on this file so it is readable and writable only by you.

cd Projects/ec2/auth/
chmod 600 id_rsa-rails-server

Let's find an image to start with. AMIs (Amazon Machine Images) are what Amazon calls the disk images. These are the basic server configurations. We'll start with one and customize it to our needs.

cd ~/Documents/Projects/ec2/api/bin/

./ec2-describe-images -a

That will generate a list of all public images plus any images you have previously saved yourself. The '-a' parameter instructs it to return all public images as well as your private images. If you leave off the -a, it will only return your private images.

I chose Marcin’s Fedora Core 6 Lite install (ami-78b15411).

Run the instance

./ec2-run-instances ami-78b15411 -k rails-server

The -k rails-server parameter is the name of the keypair we created earlier.

This will take a few minutes to commission. You can keep checking with:

./ec2-describe-instances

It will tell you whether the instance is still pending, or return its public DNS name when it is ready. The address will look something like:

domu-12-31-33-00-01-9f.usma1.compute.amazonaws.com

Log in!

Now, just like a regular remote linux box we can log in with:

ssh -i /Documents/Projects/ec2/auth/id_rsa-rails-server root@domu-12-31-33-00-01-9f.usma1.compute.amazonaws.com

Now we can start customizing our image.

Add users and groups

(The reference I used for linux users and groups is here)
Create a group:
groupadd www

Add yourself as a user:
useradd -g www steveodom
passwd steveodom
Changing password for user steveodom.
New UNIX password:
Retype new UNIX password:
passwd: all authentication tokens updated successfully.

Repeat the above to add the user lsws. I run the LiteSpeed server under this username and restrict its permissions.

I created a directory for myself (mkdir /home/steveodom/). This is where I'll store my Rails app. Set the permissions on /home/steveodom to allow members of the www group to access it.
chmod g+rwx /home/steveodom

Now add myself (and optionally the www group) to the sudoers file: use visudo and add steveodom ALL = ALL to the end.

Install the packages I need:

1. yum install wget tar zip fileutils sudo make gcc
2. yum install ruby ruby-libs ruby-mode ruby-rdoc ruby-irb ruby-ri ruby-docs ruby-devel rsync ruby-mysql.i386

Install Mysql (Source)
yum install mysql mysql-devel mysql-server mysql-admin

Instruct mysql to start on reboot:
/sbin/chkconfig mysqld on


Install Subversion

yum install subversion
export SVN_EDITOR=vi

Amazon Tools:
wget http://s3.amazonaws.com/ec2-downloads/ec2-ami-tools.noarch.rpm
rpm -i ec2-ami-tools.noarch.rpm

Ruby Gems:
wget http://rubyforge.org/frs/download.php/11289/rubygems-0.9.0.tgz
tar zxvf rubygems-0.9.0.tgz
cd rubygems-0.9.0
sudo ruby setup.rb

Let’s clean up a little
cd ..
rm -rf ruby*
rm ec2-ami-tools.noarch.rpm

*change to user steveodom*

Rails:
sudo gem install rails

Install the lsapi gem needed for litespeed:
sudo gem install ruby-lsapi
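
For reference, the ruby-lsapi gem drives Rails through a small accept-and-dispatch loop. The LiteSpeed Rails setup (including the EasyRailsWithSuExec template used below) normally wires this up for you, so treat this as background rather than a required step; a minimal dispatcher looks roughly like this:

#!/usr/bin/env ruby
# public/dispatch.lsapi -- rough sketch of the loop ruby-lsapi runs in each worker
require 'lsapi'
require File.dirname(__FILE__) + '/../config/environment'
require 'dispatcher'

# LiteSpeed hands each incoming request to this loop and Rails dispatches it
while LSAPI.accept != nil
  Dispatcher.dispatch
end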

LiteSpeed

I chose the LiteSpeed server over Mongrel. I'll post the reasons why another day.

I followed the instructions here:
1. wget http://litespeedtech.com/packages/2.2/std/lsws-2.2.6-std-i386-linux.tar.gz
2. tar xzf lsws-2.2.6-std-i386-linux.tar.gz
3. cd lsws-2.2.6
4. sudo ./install.sh

I went through the installation wizard and selected the default ports. I set it up to run as user lsws and group lsws, which have no special privileges.

Note: to start LiteSpeed: /opt/lsws/bin/lswsctrl start (it also accepts stop and restart).

This screencast is also very useful for setting up LiteSpeed to serve Rails.

Note: Since LiteSpeed is running as lsws, I had to give the lsws user permission to access my /home/steveodom/quizical directory.

I did it with:
sudo /usr/sbin/usermod -a -G steveodom lsws (where steveodom is the name of the group)

I checked the permissions by doing:
sudo -u nobody ls -la /home/steveodom/ [should get permission denied]
sudo -u lsws ls -la /home/steveodom/ [should show you the directory listing]

MySQL Setup (Source)

Set a password for root!
mysql -u root -p

You will be prompted for a password, and as the password is currently empty, simply press the enter key.

Change the password by typing the following command:
SET PASSWORD FOR root@localhost=PASSWORD('newPassword');

Delete the user accounts that have no usernames or passwords (these are insecure accounts and should be removed):
use mysql;
delete from user where user='';
delete from user where host='localhost.localdomain';

Create a new mysql account…
GRANT ALL PRIVILEGES ON *.* TO 'steveodom'@'%' IDENTIFIED BY 'xxxxxxx' WITH GRANT OPTION;
FLUSH PRIVILEGES;

...and allow it to connect remotely:
grant all privileges on *.* to steveodom@66.90.167.160 IDENTIFIED BY 'xxxxxxx';
exit;

Now let’s open up the port to allow remote access to mysql:
(back on your home machine)
ec2-authorize default -p 3306 (3306 is the MySQL port)

At this point, before adding my Rails app, I bundled and registered this instance. I used my Capistrano recipes and EC2.rake tasks. With those, bundling, uploading to S3, and registering is as simple as typing (from the local machine):
cap complete_bundle

Update the Server using Capistrano:
This part uses my Capistrano recipes and EC2.rake tasks. You must run the command below from your local machine. It will patch the server, check out the latest code, run the migrations, etc.
cap initial_install

Configure the App in Litespeed

Go to the admin GUI for LiteSpeed by pasting the URL for your new instance (see above) into your browser and appending ':7080' to the end. Example: http://domu-12-31-33-00-01-F8.usma1.compute.amazonaws.com:7080

For this part it is helpful to watch Bob Silva’s screencast.

  • delete the existing virtual host and its listener
  • click on EasyRailsWithSuExec
  • name your virtual host (I named mine 'quizical')
  • for the domain I put '*'
  • for the virtual host root I put '/home/steveodom/quizical/current'
  • instantiate it
  • restart
  • add your listener

Once it's restarted, you need to go to the quizical virtual host and change the location to /$VH_ROOT/current (Capistrano deploys into a releases directory and points a 'current' symlink at the live release, which is why the document root ends in /current).
To do that, click on the Context tab -> find the rails line -> click edit and change the location box to /$VH_ROOT/current.

You also have to put LiteSpeed in development mode if that is the Rails environment you want to run.

You should now have a running Rails app. To check it, go to your URL with :8088 appended to the end (or whatever port you chose during the LiteSpeed setup).


I am using EC2 to host my soon-to-launch Quizical.net application. It's great.

I use Capistrano to manage these EC2 instances. With the tasks below, I have automated many sequences of EC2 commands into simple rake tasks.

For example, to launch an instance, I can type…

rake ec2:run id=ami-61a54008

…and a minute or two later I have a new instance running.
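
That rake task is just a thin Ruby wrapper around the same command-line tools covered in the setup post above. A simplified sketch of what a task like it could look like (not my exact task; it assumes the EC2 tools are on your PATH and reuses the rails-server keypair from earlier):

# lib/tasks/ec2.rake -- simplified sketch of a run task
namespace :ec2 do
  desc "Launch an instance: rake ec2:run id=ami-xxxxxxxx"
  task :run do
    ami = ENV['id'] or raise "usage: rake ec2:run id=ami-xxxxxxxx"
    # shell out to Amazon's command-line tools
    sh "ec2-run-instances #{ami} -k rails-server"
  end
end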

Then to install my rails app, I type…

cap initial_install

…which

  • patches this instance with things I need;
  • starts my LiteSpeed web server;
  • installs my app from subversion;
  • creates my databases;
  • writes my database.yml;
  • runs my migrations;
  • imports my database from S3;
  • restarts my server.

So in a few minutes, I’ve got my app running on a newly commissioned server! Awesome.
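
Pulling those steps together, initial_install is essentially a composition of smaller tasks. Here is a rough sketch of its shape in deploy.rb; only patch_server, create_database, write_database_yaml, and import_db are tasks I describe below, and the other names are placeholders standing in for the corresponding steps:

# deploy.rb -- simplified sketch of how the steps chain together
task :initial_install do
  patch_server          # yum packages, gems, and other server tweaks
  start_server          # bring up litespeed
  deploy                # standard capistrano checkout from subversion
  create_database       # create the databases for this app
  write_database_yaml   # write config/database.yml on the server
  migrate               # run the rails migrations
  import_db             # pull the latest database dump down from S3
  restart_server        # restart litespeed so it picks everything up
end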

I used to dread bundling my instances (that is, saving an instance with all its changes so I can re-use it later). I'd have to look up how to do it in the API, paste in my secret keys, wait until bundling finished, then upload it, then register it. It took a while. Now I can bundle, upload, and register with one command:

rake ec2:complete_bundle

I’ll include the files I use to do this here. There are three files used.

  1. aws.yml – this is where I store all of the data needed by Amazon's web services for EC2 and S3, such as my access and secret keys. This file goes in your config directory. Since I use this data in multiple places (ec2.rake, deploy.rb, and my s3_cache library), I keep it in this central location (the sketch after this list shows how the other files read it). It looks like:
    aws_access_key: 'XXXXXXXXXXX'
    aws_secret_access_key: 'x+XXXXXXXXXXXXXXXXXX'
    aws_account: '84441XXXXXXX'
    image_bucket: "steveodom_ec2_images"
    ec2_id_rsa: '~/Documents/Projects/ec2/auth/id_rsa-rails-server'
    ec2_keypair_name: "rails-server"
    primary_instance_url: 'domU-12-31-34-00-00-6A.usma2.compute.amazonaws.com'
  2. ec2.rake – this file goes in the lib/tasks directory of your application. It contains these tasks:
    • images – lists out all public EC2 images and my own images
    • run – runs the image specified. Example: rake ec2:run id=ami-61a54008
    • instances – shows what instances are running. Example: rake ec2:instances
    • bundle_image – bundles my current image
    • upload_image – uploads my current image to s3 (the bucket is specified in aws.yml)
    • register_image – registers my image at amazon.
    • complete_bundle – combines bundle, upload and register
    • terminate – terminates a running instance. Example: rake ec2:terminate id=ami-61a54008
    • login – logs in to my instance. (this I use all the time)
  3. deploy.rb – this is my Capistrano deploy file. It calls tasks from ec2.rake as well as tasks from Adam Green's s3.rake library, so you will need to have Adam's s3.rake and ec2.rake in your lib/tasks folder. Some of my tasks here are:
    1. patch_server – Any time you change something on an EC2 instance, your changes are gone the next time you run that instance unless you re-bundle and register the change with EC2. I put all my changes in this script, so the next time I run an instance I can call this task and it gets my server the way I want it. If this task starts getting too long, I'll re-bundle and register my image.
    2. create_database – If I bundle an image that already has databases in it, it does not leave me much flexibility if I want to use that image for another web app. So I use this task to create my databases after the instance is already running.
    3. write_database_yaml
    4. backup_db – uses Adam’s S3.rake library.
    5. import_db – uses Adam’s S3.rake library.

    Next I start bundling tasks together, such as initial_install, which I run right after launching my instance. This task patches the instance with the latest changes, starts my server, then sets up my Rails application (creates the databases, writes database.yml, runs all my migrations, imports the latest version of my database from S3, then restarts).
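
To give a feel for how the three files fit together (this is the sketch referenced in the aws.yml item above), here is roughly what the ec2:complete_bundle chain does: the rake tasks read the keys out of config/aws.yml and shell out to Amazon's bundling tools. This is a simplification; my real tasks run the bundle and upload steps on the instance itself over ssh, since that is where the volume lives, and the pk.pem/cert.pem paths below are just wherever you copied your keys onto the instance:

# lib/tasks/ec2.rake -- simplified sketch of the bundling chain
require 'yaml'

AWS = YAML.load_file('config/aws.yml')

namespace :ec2 do
  task :bundle_image do
    # create an image of the running volume, written under /mnt
    sh "ec2-bundle-vol -d /mnt -u #{AWS['aws_account']} -k /mnt/pk.pem -c /mnt/cert.pem"
  end

  task :upload_image do
    # push the bundle up to the S3 bucket named in aws.yml
    sh "ec2-upload-bundle -b #{AWS['image_bucket']} -m /mnt/image.manifest.xml " +
       "-a #{AWS['aws_access_key']} -s #{AWS['aws_secret_access_key']}"
  end

  task :register_image do
    # tell EC2 about the new image so it shows up in ec2-describe-images
    sh "ec2-register #{AWS['image_bucket']}/image.manifest.xml"
  end

  desc "Bundle, upload, and register in one step"
  task :complete_bundle => [:bundle_image, :upload_image, :register_image]
end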

I’ve found Capistrano with these tasks to be perfect for managing my EC2 instances. I hope that others can find them useful too and add to them.



I've written a plugin that uses S3 as a cache store for your Rails app. Basically, it allows you to cache content at S3 instead of on your own server, continuing the theme of using S3 as an asset server.

The Amazon Web Services blog links to a site using S3 to serve up their static content. The encouraging thing was that informal tests showed no noticeable difference between serving it locally and serving it from S3. Very nice.

I noticed the same thing using my soon-to-be-released S3Cache plugin. As I mentioned in a comment on that story, I'm looking for a few good testers to try out this plugin. S3Cache creates a new fragment cache file store using Amazon's Simple Storage Service (S3). Instead of your fragment caches being stored at, say, "#{RAILS_ROOT}/cache", your cache can now live in one of your S3 buckets.
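
For context, Rails lets you point the fragment store anywhere from your environment config. The first line below is the stock file-based store; the commented-out second line is only an illustration of where an S3-backed store would plug in, not the plugin's actual configuration:

# config/environment.rb
# stock Rails fragment store, written to the local filesystem:
ActionController::Base.fragment_cache_store = :file_store, "#{RAILS_ROOT}/cache"

# an S3-backed store would plug into the same hook (illustrative only):
# ActionController::Base.fragment_cache_store = S3Cache::Store.new('your-bucket')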

S3Cache also extends the ActionView helpers by adding a new cache helper called "cache_xml". This caches XML and RSS files at S3. Your RSS feed can then be accessed at something like http://steveodom.s3.amazonaws.com/feeds/my_feed.rss.

My main goal in writing this plugin was to offload my XML and RSS feeds to S3. This way my server is never touched when these files are accessed. But I found that I could also extend it to the cached elements of my site without page performance being affected. So in effect most of my application can be served from S3, yet it is still database-driven and dynamic.

So if anyone is interested in testing this plugin before I release it into the wild, let me know and I'll email you a copy. Preferably you'll already be using fragment caching on your site.


Wow. I thought this was an incredibly insightful post on how big deals are being done. It sure looks like Google is on its way to becoming the next Microsoft.

> The second request was to pile some lawsuits on competitors to slow
> them down and lock in Youtube’s position. As Google looked at it they
> bought a 6 month exclusive on widespread video copyright infringement.
> Universal obliged and sued two capable Youtube clones Bolt and
> Grouper. This has several effects. First, it puts enormous pressure on
> all the other video sites to clamp down on the laissez-faire content
> posting that is prevalent. If Google is agreeing to remove
> unauthorized content they want the rest of the industry doing the same
> thing. Secondly it shuts off the flow of venture capital investments
> into video firms. Without capital these firms can’t build the data
> centers and pay for the bandwidth required for these upside down
> businesses.


So I've just started experimenting with Amazon's S3 service, and boy is it easy. My plan was to use S3 as a media server to serve up my Flash widgets and JavaScript files for my upcoming Quizical.net project. To do that, I…

  • created an S3 account.
  • Downloaded the S3 Organizer plugin for Firefox.
  • Restarted firefox and opened up S3 Organizer.
  • Created a "bucket" (basically a new folder). Doing this is just like creating a folder with an FTP client.
  • After my bucket was created (I called it “quizical”), I uploaded the image, javascript, and stylesheet directories of my rails app.
  • Using S3 Organizer, I set my permissions to public for these directories, by right-clicking on the file name and setting public in the dialog box and checking the “Apply to sub-folders” box.
  • In production.rb in config/environments, I revised this line to point to my asset server (the snippet after this list shows the effect on the asset helpers):
    config.action_controller.asset_host = "http://quizical.s3.amazonaws.com"
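
With asset_host set, the standard Rails asset helpers build their URLs against the S3 bucket automatically, so the views do not change at all. For example (logo.png is just a stand-in filename):

# in any view, the normal helpers now point at S3:
image_tag "logo.png"
# emits an img tag with src "http://quizical.s3.amazonaws.com/images/logo.png"

stylesheet_link_tag "application"
# emits a link tag pointing at "http://quizical.s3.amazonaws.com/stylesheets/application.css"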

Now all of my public files are being served from S3, a true asset server. So simple.

The only thing that threw me off for a second was the terminology. The bucket concept I got, but the URL structure is listed as http://s3.amazonaws.com/bucket/key, and I kept putting in my access key for the /key part. It turns out the key is what Amazon calls the filename (path) of the object, so a file uploaded as logo.png to the quizical bucket lives at http://s3.amazonaws.com/quizical/logo.png.


I made a presentation to the Austin on Rails group last night on my experience over the last month refactoring Trivionomy.com to REST. I've attached my presentation here.

I will add more text to this post later summarizing my talk.

steve-odom-how-i-wanted-an-api-and-got-clearner-code.pdf