Capistrano & EC2 Sitting in a Tree, K I S S I N G


I am using EC2 to host my soon-to-launch application. It’s great.

I use Capistrano to manage these EC2 instances. With these tasks, I have automated many common sequences of EC2 commands into simple rake tasks.

For example, to launch an instance, I can type…

rake ec2:run id=ami-61a54008

…and a minute or two later I have a new instance running.
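
The actual ec2:run task is in the attached ec2.rake, but as a rough sketch of the idea, here is a hypothetical helper that builds the shell command such a task might execute. It assumes Amazon’s EC2 API command-line tools (ec2-run-instances) are installed; the helper name and the keypair handling are my own illustration, not the post’s real code.

```ruby
# Illustrative only: build the shell command an ec2:run rake task might
# run. Assumes Amazon's EC2 API tools are on the PATH; the helper name
# is hypothetical.
def ec2_run_command(ami_id, keypair_name)
  unless ami_id =~ /\Aami-\w+\z/
    raise ArgumentError, "usage: rake ec2:run id=ami-xxxxxxxx"
  end
  "ec2-run-instances #{ami_id} -k #{keypair_name}"
end

# Inside the rake task body this would be something like:
#   sh ec2_run_command(ENV['id'], 'rails-server')
```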

Then to install my rails app, I type…

cap initial_install


This one command:

  • patches the instance with things I need;
  • starts my LiteSpeed web server;
  • installs my app from Subversion;
  • creates my databases;
  • writes my database.yml;
  • runs my migrations;
  • imports my database from S3;
  • restarts my server.
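
The deploy.rb attached to this post has the real version, but the shape of a task like initial_install can be sketched with a toy stand-in for the Capistrano 1.x task DSL. The step names below are taken from the list above; the stand-in itself is purely illustrative, and in a real deploy.rb each step would run commands on the remote instance.

```ruby
# Toy stand-in for Capistrano's `task` DSL, just to show how an
# initial_install task chains the steps listed above.
TASKS = {}
def task(name, &block); TASKS[name] = block; end
def invoke(name); TASKS[name].call; end

RAN = []
steps = [:patch_server, :start_server, :install_app, :create_database,
         :write_database_yaml, :migrate, :import_db, :restart_server]
steps.each { |s| task(s) { RAN << s } }  # real tasks would shell out to the instance

task(:initial_install) { steps.each { |s| invoke(s) } }

invoke(:initial_install)
```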
So in a few minutes, I’ve got my app running on a newly commissioned server! Awesome.

I used to dread bundling my instances (that is, saving an instance with all its changes so I can re-use it later). I’d have to look up how to do it in the API, paste in my secret keys, wait until bundling finished, then upload the bundle, then register it. It took a while. Now I can bundle, upload, and register with one command:

rake ec2:complete_bundle
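
The real complete_bundle task lives in the attached ec2.rake; as a hedged sketch, the three steps map onto Amazon’s AMI and API tools roughly like this. The flags, key/cert paths, and image name below are illustrative placeholders, not the post’s actual values.

```ruby
# Illustrative: the three shell commands a complete_bundle task might
# chain together, built from the values in aws.yml. Key/cert paths and
# the image name are placeholders.
def complete_bundle_commands(conf, image_name)
  bucket = conf['image_bucket']
  [
    "ec2-bundle-vol -d /mnt -k /mnt/pk.pem -c /mnt/cert.pem " \
    "-u #{conf['aws_account']} -p #{image_name}",
    "ec2-upload-bundle -b #{bucket} -m /mnt/#{image_name}.manifest.xml " \
    "-a #{conf['aws_access_key']} -s #{conf['aws_secret_access_key']}",
    "ec2-register #{bucket}/#{image_name}.manifest.xml"
  ]
end
```

Each command would then be run in sequence (for example with `sh`), so one rake invocation carries the image all the way from bundle to registered AMI.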

I’ll include the files I use to do this here. There are three.

  1. aws.yml – this is where I store all of the data needed by Amazon’s web services for EC2 and S3, such as my access and secret keys. This file goes in your config directory. Since I use this data in multiple places (ec2.rake, deploy.rb, and my s3_cache library), I keep it in this central location. It looks like:

     aws_access_key: 'XXXXXXXXXXX'
     aws_secret_access_key: 'x+XXXXXXXXXXXXXXXXXX'
     aws_account: '84441XXXXXXX'
     image_bucket: "steveodom_ec2_images"
     ec2_id_rsa: '~/Documents/Projects/ec2/auth/id_rsa-rails-server'
     ec2_keypair_name: "rails-server"
     primary_instance_url: ''
  2. ec2.rake – this file goes in lib/tasks directory of your application. It contains these tasks:
    • images – lists out all public EC2 images and my own images
    • run – runs the image specified. Example: rake ec2:run id=ami-61a54008
    • instances – shows what instances are running. Example: rake ec2:instances
    • bundle_image – bundles my current image
    • upload_image – uploads my current image to s3 (the bucket is specified in aws.yml)
    • register_image – registers my image with Amazon.
    • complete_bundle – combines bundle, upload and register
    • terminate – terminates a running instance. Example: rake ec2:terminate id=i-xxxxxxxx
    • login – logs in to my instance. (this I use all the time)
  3. deploy.rb – this is my Capistrano deploy file. It calls tasks from ec2.rake as well as tasks from Adam Green’s s3.rake library, so you will need both s3.rake and ec2.rake in your lib/tasks folder. Some of my tasks here are:
    1. patch_server – Any time you change something on an EC2 instance, the change is lost the next time you launch that image unless you re-bundle and register it at EC2. I put all my changes in this script, so the next time I run an instance I can call this task and it gets my server the way I want it. If this task starts getting too long, I re-bundle and register my image.
    2. create_database – If I bundle an image that already has databases in it, that doesn’t leave me the flexibility to use the image for another web app. So I use this task to create my databases after the instance is already running.
    3. write_database_yaml
    4. backup_db – uses Adam’s S3.rake library.
    5. import_db – uses Adam’s S3.rake library.

    Next I start bundling tasks together, such as initial_install, which I run right after launching my instance. It patches the instance with the latest changes, starts my server, then sets up my Rails application (creates databases, writes database.yml, runs all my migrations, imports the latest version of my database from S3, then restarts).
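
As an example of the simpler deploy.rb tasks, write_database_yaml can be little more than serializing a hash. This sketch uses placeholder credentials, not the post’s real values; in Capistrano the result would then be pushed onto the instance as config/database.yml.

```ruby
require 'yaml'

# Illustrative sketch of a write_database_yaml helper. Database name,
# user, and password are placeholders.
def database_yaml(db_name, user, password)
  { 'production' => {
      'adapter'  => 'mysql',
      'database' => db_name,
      'username' => user,
      'password' => password,
      'host'     => 'localhost' } }.to_yaml
end
```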

I’ve found Capistrano with these tasks to be perfect for managing my EC2 instances. I hope that others can find them useful too and add to them.


51 Responses to “Capistrano & EC2 Sitting in a Tree, K I S S I N G”

  1. Very cool! You beat me to this as we’re looking at ec2 for our rails project as well. Good luck with rollout!

  2. 2 Bree

    Gee.. What ever happened to BASH!? ACK!

  3. Very cool. I’ll have to try this. Are you running Quizical entirely on EC2?

  4. Hi Sean,

    Yep, it’s entirely on EC2. It’s been a breeze so far. I didn’t do anything fancy on the database side to deal with impermanence; I just do a backup to S3 hourly.

  5. Bree, I guess you could do most of it in bash and alias a lot of stuff, but this keeps everything inside my Rails project rather than in my bash_profile. Plus, I’d rather work in Ruby!

  6. How much do you pay for EC2?
    I mean, why did you choose EC2 instead of a VPS?
    I did some math myself and went cheaper by not using S3 (for storage) and using TextDrive or private servers.

  7. Hi Piku,

    EC2 is $72 a month plus modest bandwidth charges. It’s more expensive than a VPS, true. What I like about EC2 is the flexibility. I can commission and decommission servers at will. I can create a staging server with my app on it in just a few minutes, test my latest release on that server, then, if all is well, terminate the staging server and have only paid $0.15 or so. I can add new servers when I need the capacity (though this reason is still a pipe dream).

    And I think the impermanence fear about EC2 makes you think more about backups and redundancies.

  8. 8 Rahsun

    Dude this rocks!

  9. 9 Cxc

    Since you do an S3 backup every hour, do you ever worry about your EC2 image going down somewhere between backups? Am I missing something, or when you talk about redundancies are you running multiple images? Any light you can shed on the impermanence of EC2 would be greatly appreciated, as it’s one reason I’m a bit reluctant to go full steam into EC2/S3 as a solution right now.

  10. We have also been experimenting with EC2. How do you plan on handling the MySQL server? If an EC2 instance goes down, so goes all its data!

  11. 11 Cxc

    That was exactly what I was trying to say.


  12. Yep, impermanence is the primary obstacle for many deploying on EC2. There’s something about the ‘virtual’ aspect of it that puts the spotlight on impermanence more than dedicated solutions do. I did not think as much about backups and failures on a dedicated box as I do with EC2, though my dedicated box was probably just as likely to fail. I think that is actually a benefit of EC2: the emotional triggers of ‘virtual’ force you to design for failure. Mine is not an ecommerce site where, if my instance goes down, I’ll be losing orders – or medical records – or even photos. I can get away with hourly backups, which I will start moving closer together as traffic builds. If traffic builds I’ll probably set up a master/slave database on multiple instances. This project also holds some interest:

    There has also been speculation that Amazon is going to come out with a virtual database solution – the potential third leg of their infrastructure on demand solutions.

  13. Nice!

    What are you doing about dynamic IPs for EC2 instances and DNS? The only solution I know of is having a simple hosted box that acts as a gateway to your cluster of EC2 instances.

  14. Wow. Jesse from overstimulate. The original EC2 experimenter!

    I’m using DNSMadeEasy. I created two address records there and pointed them to my EC2 instance IP address.

    DNSmadeeasy is cheap too. I think I pay $30 a year.

  15. I saw this link on the front page of delicious. Congrats!

  16. Nice.
    That’s $72 for EC2 without bandwidth and storage? I mean, that’s an extra $0.15/GB/month?
    I personally can’t justify using EC2 for web servers. Testing I do at home or on devel machines (maybe I’m lucky that I have access to some servers).
    I thought about using S3 for storage as SmugMug does, but I concluded that it’s expensive too. A private server with 4×500GB HDDs is $450/mo and I don’t pay for the bandwidth to access the data.

  17. I think those of us who have chosen to use EC2 and S3 have thought about it pretty hard, Piku. We get bills and we know what the others charge. The flexibility of EC2 is just great, though. I am willing to pay for that.

  18. Steve, have you thought about creating and making publicly available an image geared specifically towards running Rails apps, stripping out some of the intermediate steps for people who want to get up and running with Rails as fast as possible?

  19. Donnacha,

    Good idea. I will try to put something together.

  20. Piku,

    If you use EC2 for hosting and S3 for storage, then bandwidth between the two services is free and really fast (since they are most likely in the same datacenter). You only pay for bandwidth to and from your users.

  21. 21 Meekish

    How do you handle multiple EC2 instances each having its own MySQL database? Won’t sessions be lost if a user is bounced to a different instance between requests? It would seem to me that you would need a database server instance running, and each application server instance would access that same database server… or am I missing something?

  22. First of all nice work! I was glad to see this when we decided on ec2/s3.

    I’ve made some small changes – not strictly necessary, but they add some structure to the yml file. I’m not sure about adding dev, test and prod sections, but this will do for now:

    aws: &aws
      access_key: xxxxxxxxxxxxxx
      secret_access_key: xxxxxx+xxxxxx
      account: xxxxxxxx
      local_path: tmp/s3
      xyz_bucket: xyz.domain
      xyz_bucket: xyz.domain
      xyz_bucket: xyz.domain

    YamlReader is a quick reader written by Matt Johnson from Default, which allows sections of a yml file to be read:

    module Default
      module YamlReader

        def self.read_file( file, options = {} )
          contents = YAML.load_file File.join( RAILS_ROOT, "config", file )
          unless options.empty?
            if options.key? :restrict_to
              contents = contents.values_at( options[:restrict_to].to_s ).first
            end
          end
          contents
        end
      end
    end

  23. Ok i forgot my code tags.

    Here is the aws.yml

    aws: &aws
      access_key: xxxxxxx
      secret_access_key: xxxx+xxxx
      account: xxxxxx
      local_path: tmp/s3
      xyz_bucket: xyz.domain
      xyz_bucket: xyz.domain
      xyz_bucket: xyz.domain

    YamlReader call in ec2.rake

    @@ec2_conf ||= Default::YamlReader.read_file 'aws.yml', :restrict_to => 'ec2'

    YamlReader module in lib/default

    module Default
      module YamlReader

        def self.read_file( file, options = {} )
          contents = YAML.load_file File.join( RAILS_ROOT, "config", file )
          unless options.empty?
            if options.key? :restrict_to
              contents = contents.values_at( options[:restrict_to].to_s ).first
            end
          end
          contents
        end
      end
    end

  24. Adam,

    Thanks for this. It provides a bit more structure. I think I’m going to turn all of this into a gem. I’ll turn it into a project and see if you’d like to contribute.

  25. 25 vanilla_bean

    Hi Steve,

    Could you tell us a little bit about how you create and tweak your AMIs? I use windows as my development machine and test my deployments on virtualized Ubuntu boxes that are running under vmware. I’m at the stage now where I’d like to try to deploy to EC2 and play with clustering a bit, but I’m wondering what’s the best way to convert my virtualized Ubuntu machines into an EC2 AMI? Any tips?

  26. Hi Vanilla Bean,

    I take it that using one of the public Ubuntu-based AMIs would not work for you, such as the Ubuntu 6.10 image or the Ubuntu Feisty image.

    You could modify one of those and rebundle and register. That’s the easiest method.

    For creating an image from scratch, here is a post that I found helpful:

    Good luck.


  27. 27 vanilla_bean

    Thanks for your reply Steve,

    Maybe I’m just thinking about this wrong. The Elastic Compute Cloud Walkthrough post was very informative, and similar to what I found in the “EC2 Developers Guide”. Both guides explain clearly how to build a new AMI either based on an existing image or entirely from scratch, which is great. The next step according to these guides would be to upload, register and run these instances in EC2. OK, also great – if you plan to use EC2 as your development and staging environment. I could do that, I suppose. The thing is, as someone pretty new to the *nix admin world, I’ve just got used to creating, cloning and tweaking my *nix staging deployments in vmware. It seems so easy to create my own virtual networks of machines, with everything running on my notebook and my development server. Once I’m satisfied that my deployment is working as I want on my virtualized network, what I’d really like is a way to convert my vmware images into AMIs ready to be loaded and run on EC2.

    I guess what I was asking originally was whether you and others who are using EC2 basically use EC2 as your development and staging platform. Do you fire up one of your customized AMIs on EC2, make some changes, maybe via Capistrano, run some tests, then persist it back to S3 when the development day is done? Or do you develop with most everything running locally, and only deploy to EC2 when you’re ready to release your current iteration?

  28. Vanilla Bean,

    I develop locally and use EC2 for staging and production.

  29. 29 greg1205

    Great post and questions/responses. I’m new to both Rails and EC2 but would like to do similar things.

    Do you have recommendations on a prebuilt EC2 image for Rails that includes MySQL? I’m trying to do this as easily as possible. I’m doing all my development on Windows as well.


  30. Greg, you might want to check out my Elastic Rails plugin – it might help you.

  31. 31 Larry


    Good work on what you have built. We currently run our entire web site on EC2/S3. We have about a million downloads a month and growing. We had 16TB of data sent out from EC2 last month.

    We do the same thing as you, except we do mysql dumps to S3 every 2 hours. The main reason I am commenting is to warn you away from s3infinidisk. We purchased 2 copies of s3dfs (infinidisk) in the spring and the support was horrible. I gave the owner many bug reports and he always said fixes were coming, but then nothing. Eventually he just stopped responding to me. $2K down the drain.

    We have done price comparisons on what we are saving not using a co-lo and I have to say we are doing great.

    Thanks for keeping the good word out on this great service!


  32. Larry, thanks for the heads up and the nice comments. Great looking site.

  33. It looks like your ec2.rake deploy.rb etc links are all giving access denied errors.

    Hopefully I can find another copy out there on the web somewhere, because I’ve got some AMIs to boot!

    Thanks for putting this together, it looks like exactly what I need.

  34. Hi Chris,

    I haven’t done much on this in a while and don’t have plans to continue supporting it. I’d suggest trying some of the other plugins and gems. Here’s a new one:

    Others include capazon and Paul Dowman’s project:

    Hope that helps.


  35. 35 Matt

    What do you guys think of this?…

    As you may know, Amazon AWS’s standard “Amazon EC2 Rails-All-in-one-trial” template is not good… yep, really. Let me show you how to fix it by hand 🙂

    Let’s start. We have a useful EC2 image based on CentOS 5.2 – a great enterprise Linux for a Rails application. But Ruby 1.8.5 is on board, and it’s deeply out-of-date… Ruby 1.8.7 is recommended for use with Rails; 1.8.6, 1.8.5, 1.8.4 and 1.8.2 are still usable too, but version 1.8.3 is not.

  37. I have just started playing with EC2 recently, and the way I am handling the impermanence issue is by creating an EBS persistent data volume. The cost of storing data this way actually seems to be a bit cheaper than S3, at $0.10 per GB-month vs. $0.15 for S3.

    I then mount the EBS block onto /vol on my instance and create an ext3 filesystem on it.

    I moved my Apache document root, sites-enabled and sites-available directories to /vol/www, where they live alongside my site directories. All I needed to change in apache2.conf was the Include statement for sites-enabled, from /etc/apache2/sites-enabled to /vol/www/sites-enabled.

    I then moved my PostgreSQL data directory to /vol/data, updating my postgresql.conf file to reflect the new data directory location. I am almost certain that doing this with MySQL is very similar.

    Way easier than I thought it would be!

    By the way I am using the new Ubuntu Server Edition for EC2 beta.
