Heroku versus Cloud Foundry from an Educator's Perspective

There are already several useful comparisons between Heroku and Cloud Foundry out there. I am going to add a brief educator’s perspective.

Why APaaS in the first place? To have a common reference platform for students to deploy to from day one. This eliminates the common last-minute deployment problems.

My specific requirement is to support the technology stacks I have chosen for pedagogical reasons for my web application and web services courses. (In both courses, we pay considerable attention to architecture.)

  • Scala
  • Play framework for web apps
  • spray for RESTful web services

As a platform, Cloud Foundry is open-source, while Heroku is closed-source. Fair enough. But what are the relevant differences from my perspective?

  • With Heroku, your application needs a main class as its entry point. You structure your application using Foreman and push the source to the cloud using git. Building, staging, and deployment then take place in the cloud. If these steps work on your local machine using, say, Maven or sbt, they are very likely to work the same way on Heroku.
  • With Cloud Foundry, your application must be packaged as a war to run on Tomcat 6, in the case of Java or Scala. The build process takes place on the local machine, you push a war to the cloud, and hard-coded server staging logic is invoked there. The problem is that you are stuck with the default blocking Java connector. Because spray requires the non-blocking (async) connector, this means that you cannot deploy spray to CloudFoundry.com for now. If you know a workaround, please let me know!
  • Both have various useful add-ons, such as relational and NoSQL databases, message queues, etc.
  • Heroku has a free tier, while Cloud Foundry has a free trial and its final pricing structure has not been announced yet.
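For the Heroku side of the comparison, the entry-point convention boils down to a one-line Procfile read by Foreman. A minimal sketch for an sbt-built Play app (the start-script path is an assumption and depends on your build setup):

```
web: target/start -Dhttp.port=${PORT}
```

Foreman runs the same file locally with `foreman start`, which is why local and cloud behavior tend to match.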

For these reasons, I ended up choosing Heroku for this spring semester. I’m quite optimistic that it’s going to be fun.

How to back up a Heroku production database to staging

It’s right there in the docs but I didn’t notice it until recently:

heroku pgbackups:restore DATABASE `heroku pgbackups:url --remote production` --remote staging

Boom! It transfers the production Postgres database to staging.

It’s much faster than db:pull, then db:push, which is what I used to do (like a sucker).


Setup (one time): add git remotes for both apps and enable the pgbackups add-on on each:

git remote add staging git@heroku.com:my-staging-app.git
git remote add production git@heroku.com:my-production-app.git
heroku addons:add pgbackups --remote staging
heroku addons:add pgbackups --remote production

Create a database backup at any time:

heroku pgbackups:capture --remote production

View backups:

heroku pgbackups --remote production

Destroy a backup:

heroku pgbackups:destroy b003 --remote production

Errors uploading images from ASIHTTPRequest in iPhone to Rails 3 application hosted on Heroku

We just went through a painful debugging session related to erratic timeouts and "Bad content body" errors when uploading images from an iPhone application to a Rails 3 application hosted on Heroku.

The iPhone application uses ASIHTTPRequest. Checking the Google Groups, we found that the issues are related to persistent connections; the fix that worked for us was:

[request setShouldAttemptPersistentConnection:NO];

Ruby on Rails 3.1, PostgreSQL and Heroku

My entire afternoon was devoted to using a PostgreSQL database instead of a SQLite one on my sample app (from the Rails 3 tutorial). I *must* write down how on earth you do this; it’s shamelessly hard for someone unaccustomed to DB handling. Installing it using MacPorts is pretty straightforward, but there are some necessary terminal incantations; it’s not just “sudo port install postgresql83”.

I did find (and, maybe, understand) how to properly put an app on heroku.com. With Rails 3, this is painful because of the asset pipeline: you need to tweak some configuration, and I also needed to downgrade sprockets (to 2.0.2).
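For reference, a sketch of the usual shape of those tweaks; this is not necessarily exactly what I did, but the pg/sqlite3 split, the sprockets pin, and the precompile setting are the commonly recommended bits:

```ruby
# Gemfile: Heroku needs pg; keep sqlite3 for local development
group :development, :test do
  gem 'sqlite3'
end
group :production do
  gem 'pg'
end
gem 'sprockets', '2.0.2'  # pinned; newer versions broke asset precompilation for me

# config/application.rb: don't initialize the full app when precompiling assets
config.assets.initialize_on_precompile = false
```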

The almost-finished sample app is hosted under strong-wind-7904.herokuapp.com.

Instructions for migrating from shared database to dedicated database on Heroku

Before you start the migration please be aware of these facts:

  • Heroku doesn’t migrate the data from the shared database to the dedicated database automatically.
  • Heroku doesn’t assign the dedicated database as the primary database automatically.
  • Heroku doesn’t remove the shared database even after you complete the migration. You have to remove it manually; otherwise you will continue to incur charges (if any) for the shared database.


  • Add the dedicated database to your application:

    $ heroku addons:add heroku-postgresql:ronin
    -----> Adding heroku-postgresql to sushi... done, v69 ($200/mo)
           Attached as HEROKU_POSTGRESQL_RED
    -----> The database should be available in 5-15 minutes
           Use `heroku pg:wait` to track status
  • Track the progress of the database creation:

    $ heroku pg:wait 
    Waiting for database HEROKU_POSTGRESQL_RED... done       
  • Get the connection credentials of the newly created database

    $ heroku pg:info
    Plan         Ronin
    Status       available
    Data Size    5.1 MB
    Tables       0
    PG Version   9.0.5
    Created      2011-10-10 17:59 UTC
    Conn Info    "host=ec2-107-22-255-345.compute-1.amazonaws.com 
                 port=5432 dbname=dk2sf5va7dsalklkdsj3rd 
                 user=uy5gymvzvyast6ee sslmode=require 
  • Verification step 1 (VS1): note down the count of users in the current database

    $ heroku console
    >> User.count
    => 1024
    >> exit
  • Enable maintenance mode. This will prevent any further changes to the shared database.

    $ heroku maintenance:on 
  • Create a backup of the shared database. Note down the backup id.

    $ heroku pgbackups:capture --expire 
    SHARED_DATABASE (DATABASE_URL)  ----backup--->  b136
    Capturing... done
    Storing... done
  • Verification step 2 (VS2): check the count of users in the dedicated database. The database should throw an error at this step, since it is still empty.

    $ psql "host=ec2-107-22-255-345.compute-1.amazonaws.com port=5432 dbname=dk2sf5va7dsalklkdsj3rd user=uy5gymvzvyast6ee sslmode=require password=pssajjhjhfa2yzx1q8g48dp1mo3a"
    => select count(*) from users; 
    ERROR:  relation "users" does not exist
    LINE 1: select count(*) from users;  
    => \q
  • Seed the dedicated database from the backup

    $ heroku pgbackups:restore HEROKU_POSTGRESQL_RED b136 --confirm
    Retrieving... done
    Restoring... done
  • Verification step 3 (VS3): the count of users in the dedicated database should match the count returned in step VS1

    $ psql "host=ec2-107-22-255-345.compute-1.amazonaws.com port=5432 dbname=dk2sf5va7dsalklkdsj3rd user=uy5gymvzvyast6ee sslmode=require password=pssajjhjhfa2yzx1q8g48dp1mo3a"
    => select count(*) from users; 
     count
    -------
      1024
    (1 row)
    => \q
  • Make the dedicated database the primary database for your application

    heroku pg:promote HEROKU_POSTGRESQL_RED
  • Verification step 4 (VS4): the count of users in the heroku console should match the count returned in step VS1

    $ heroku console 
    >> User.count
    => 1024
    >> exit
  • Activate the application

    heroku maintenance:off --account thinkspeed
  • Deactivate the paid shared database plan

Deploying Rails 3.1.0 to Heroku

I just finished setting up Ruby on Rails 3.1.0 on Heroku and wanted to share some small tips.

Heroku Cedar

I’ll cut right to the chase. When you deploy Ruby on Rails 3.1.0 to Heroku, you’ll want to specify the Cedar stack; if you try to deploy to Heroku normally, you’ll get an error. To specify a stack, simply do it as follows (this should look relatively familiar if you’ve deployed to Heroku before):

heroku create --stack cedar

Oh my, so easy? Yeah.

Asset Pipeline

This is a bit trickier. Honestly, I didn’t follow this guide at all, but I’ll refer you to it because they certainly know more than I do and will likely keep it more up to date than this post: Asset Pipeline Rails 3.1.0 on Heroku.

SSH and different accounts in Heroku

In Heroku, if you have two different user accounts, you’ll need to use two different ssh keys to access applications in each account. The following commands can be used to switch between ssh keys quickly.

Check your currently loaded keys:

ssh-add -L

Remove a key from the agent with:

ssh-add -d <optional_key_file>

Add a new key (the one associated with the other account’s email address):

ssh-add /path/to/your/private/key/file
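If you switch between accounts often, host aliases in ~/.ssh/config avoid juggling the agent altogether. A sketch; the alias names and key paths below are assumptions:

```
# ~/.ssh/config: one alias per Heroku account, each with its own key
Host heroku-work
    HostName heroku.com
    IdentityFile ~/.ssh/id_rsa_work
    IdentitiesOnly yes

Host heroku-personal
    HostName heroku.com
    IdentityFile ~/.ssh/id_rsa_personal
    IdentitiesOnly yes
```

Point each app’s git remote at the matching alias (e.g. git@heroku-work:my-app.git) and the right key is used automatically.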

I found another related link today (23/11/2011).

Trying out Heroku Scheduler

Heroku’s Cron add-on is free only for daily runs; hourly runs cost $3/month. But a new add-on, Heroku Scheduler, has been released: it lets you choose between every 10 minutes, hourly, and daily, and it’s free. I had to try it, so I decided to write an app that tracks prices on Amazon.

The documentation shows examples of running Rake tasks and Ruby scripts, so I decided to write a Rake task. Here it is:

# -*- coding: utf-8 -*-

require 'httpclient'

base_uri = 'https://xxxx-xxxx-9999.heroku.com'

desc "This task is called by the Heroku scheduler add-on"
task :update_price do
  puts "Updating price..."
  client = HTTPClient.new
  client.set_auth("#{base_uri}/amazon/update_price", 'xxxxx', 'xxxxxxxx')
  res = client.get("#{base_uri}/amazon/update_price")
  puts res.status
  puts "done."
end

It just hits a URL. Since the endpoint is behind Basic authentication, I use httpclient. It works for now, but I think it needs various cleanup, which I plan to do along with the main app.

Setting up a custom sub-domain for the staging environment on Heroku

Let us say you have configured a custom domain, http://www.foobar.com, for the production environment of your application on Heroku. You followed the instructions here to create a staging environment. Now you want to expose the staging environment on http://staging.foobar.com. I had exactly this problem today, and I didn’t find any step-by-step instructions for addressing it.


  • The production and staging Heroku environments are called foobar-production and foobar-staging. Both environments have the custom_domains add-on
  • The foobar-production environment has the Zerigo add-on. The A (foobar.com) and CNAME (www.foobar.com) records are configured using the Zerigo add-on

Add the staging sub domain to point to the staging environment

    $ heroku domains:add staging.foobar.com --app foobar-staging 

Configure the Zerigo add-on to point to the staging environment.

Clone the CNAME record for www.foobar.com (it points to proxy.heroku.com) and change the sub-domain name to staging. Wait about 10 minutes for the changes to propagate, then access the staging environment at http://staging.foobar.com.
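Once both records are in place, the relevant zone entries look roughly like this (domain values assumed; only the staging CNAME is new):

```
www.foobar.com.      CNAME   proxy.heroku.com.
staging.foobar.com.  CNAME   proxy.heroku.com.
```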

Review of the DreamForce event in Munich

I attended the DreamForce event in Munich on the 27th of October, which was billed as the biggest cloud event in Europe. I am not sure about that, but it was a very worthwhile experience.

My main intention was to evaluate how the well-known backend applications from SalesForce, e.g. for CRM, can be connected with applications running on the Heroku PaaS infrastructure, which SalesForce acquired. (The background is that I am currently looking into business ideas that could form the basis of a startup.)

First of all, there was a keynote by Hans-Dietrich Genscher, a former German minister of foreign affairs (among the other offices he held). Initially I was not completely sure what he could tell this audience, but it was a very focused and motivating keynote with some humorous side notes.

Regarding my main intention: at the moment there seems to be no integration between the Heroku PaaS and the SalesForce apps. The answer I got, every time, was that you can make use of the (very good) SalesForce API from, e.g., a Heroku application.

Especially from a startup perspective, I think there would be good added value in being able to focus on the design and functionality of your customer-facing services while the business processes (mainly CRM and service, in this context) are managed by a mature application in the backend.

When I used my demo account on SalesForce, I was very fond of the Chatter functionality: perhaps easiest explained as a social-networking-enabled message bus for enterprises, which brings employees, business objects, and business processes together in a nice Web 2.0 way. It is especially interesting because it has already left the announcement stage and is available now. (Surprisingly, most participants said in a short survey that they had not seen or used Chatter so far.)

From a technical point of view, I very much liked the developers’ corner, which had technical talks throughout the day: not only from SalesForce people, but also from consultants who bring their own experience to the table (e.g. the people from Tquila).

A friend had said on Twitter that, at least last year, he saw the event as a marketing-only event. I have to admit that this is true of most of the talks (they are just good at marketing), but I also got good ideas on how to look into the SalesForce user group: all in all, a well-spent day.


- The people from TQuila (@tquiladotcom) have just opened their offices in Munich and are exploring demand and options for a Force.com user/developer group here.

pg_config executable not found.

First of all, you have to install Fink on your MacBook; it will help you install PostgreSQL and, after that, psycopg2. If you get this error, run fink install postgresql83-dev and then easy_install psycopg2 or pip install psycopg2:

fink install postgresql83-dev
fink install postgresql83-dev-dev
export PATH=$PATH:/sw/bin/
pip install psycopg2

Sharing experiments and projects

At Danger Cove we do a mighty lot of experimenting, which usually involves code. Not too long ago these experiments were doomed to stay hidden on our hard drives, waiting to become obsolete and known only to those who worked on them.

Platforms like Heroku and GitHub have changed this, and we love them for it. We will share any code we write that we feel might be useful to someone else (or interesting in any other way) on GitHub, and deploy a working demo on Heroku. Have a look at the projects that we recently put up.

I could add a pretty lengthy explanation of what cloud platforms are all about and why they’re awesome, but the truth is that they are very easy to get into and it’s way more fun to just see how they work for yourself:

Foursquare API, and SSL, and OAuth. Oh, my!

Continuing our deeper dive into the nuts and bolts of how we built #mom, let’s walk through how we used the foursquare API. We wanted the foursquare platform to: (1) handle user authentication, (2) supply general user data, (3) retrieve user checkins, and (4) promote the service by displaying #mom in the friend checkin feed.

Much like Twilio, our first step was to register our application with foursquare. They call this process “register a new consumer.” It made sense, after a while, since as developers we are “consuming” foursquare data. Foursquare also asks you for a website and callback URL. We hadn’t settled on the hashtagmom.com domain yet (my original idea was theresafe.ly), but luckily it doesn’t matter what you put into this field. The callback URL is more important and not well documented. We had issues with invalid redirect uri errors when specifying the full callback path (http://www.hashtagmom.com/path/to/callback) for our app. If instead you just give foursquare the hostname (http://www.hashtagmom.com/), they will allow you to specify any path under that host in your OAuth2 request, so that seems like the way to go. The foursquare API was also easier to work with locally than Twilio, since OAuth2 uses a browser redirect that works with localhost, so you don’t need a localtunnel set up. Please read our previous Twilio post if this distinction is unclear.

With the goal of connecting to foursquare and retrieving data from a simple standalone Ruby program, I Googled “foursquare gem.” Unfortunately the first result, in my version of Google, is an out-of-date gem. I futzed with that until I stumbled upon a more recent gem called, wait for it, foursquare2. Foursquare’s libraries page points to the latest gem, so it’s always a good tip to just follow their official links rather than my brain-ingrained let’s-Google-it approach. There is also a foursquare-api gem, but I haven’t used it.

Ok, so we were able to query foursquare and retrieve data from endpoints that only require “userless” access permission. These are endpoints like getting information about venues, tips, and lists. To access checkin details, user data, or many of the other interesting bits available in the platform, you need to act “as” the user: you need a user to authenticate your application with foursquare and explicitly grant permission to query the platform as if you were that user.

The following is a bit of an aside, not specifically related to any particular platform, but necessary to understand. You’ve likely connected to Facebook before. You’re on SomeGreatSite.com, they have a connect-to-Facebook button, you click it, and now you’re redirected to a Facebook page that reads, “Hi, would you like SomeGreatSite to have access to x, y, z Facebook permissions?” Yup, hit yes and you’re redirected back to SomeGreatSite.com. This flow is brought to you by OAuth2: check out this technical description and foursquare’s overview. From the developer’s perspective: you have a user on your domain and want to query an API on that user’s behalf. Rather than ask for and store a username and password, you direct the user to the service and wait to hear back whether you have permission or not. If you do get permission, you get a special token that is specific to that user. (Remember going to the arcade as a kid and getting special tokens that only fit into the video game machines?) Now you can query the other service with this token and act as the user. The developer does not have to see or store the user’s authentication details. If the user wants to revoke your site’s access to the platform, they can just tell the platform to stop accepting that token. Change of password, no problem. Badaboom.

We’ve shipped the user off to foursquare, but does foursquare know how to return the user back to us? How do we know which user they’ve sent back? The callback URL! You identify your application to foursquare with your client ID on the redirect call, and foursquare calls your application back at your predefined URL. We can test this locally, since a localhost callback URL will work. Just set up a route in Rails that handles the URL call: in our case this was /foursquare_callback, and we handle this call in our Users controller. Foursquare returns the user’s access token in the callback URL, which you can parse from the params hash. The next problem: which user are they referring to? We had a few options: (1) passing some user ID to foursquare and parsing it back out of the callback, but we weren’t sure that would work (not to mention it seemed like a bad idea to pass a third party an app-specific, user-specific ID); (2) matching some foursquare user data against our user information, but we had already removed the email address from the sign-up process (and this also felt wrong at a gut level); (3) storing an ID in the browser cookie for that user’s session.

Rails easily allows you to store a session ID in the user’s browser’s cookie and access per-session data in your controllers. We bank on the fact that the callback URL will occur within the same session. Rails queries the user’s session from the cookie and then we look up which user had that session ID. We also store the OAuth2 access token (a string) in the user object and use it anytime we need to query foursquare on the user’s behalf.
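The session-based lookup above can be sketched as follows; this is a minimal sketch with plain Ruby objects standing in for the Rails session and the users table, and all names here are assumptions:

```ruby
# Hypothetical sketch: the OAuth2 callback arrives in the same browser
# session that started the flow, so the session ID identifies the user.
class CallbackResolver
  # users_by_session: hash-like store mapping session IDs to user records
  def initialize(users_by_session)
    @users_by_session = users_by_session
  end

  # Attach the access token from the callback to the remembered user,
  # so later API calls can act on that user's behalf.
  def resolve(session_id, access_token)
    user = @users_by_session[session_id]
    user[:access_token] = access_token if user
    user
  end
end
```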

It’s a winding road to get started with user authentication. You have to play by the platform’s rules. Once you have an access token, you’re ready to query!

We want the foursquare checkins as they happen. When someone includes “#mom” in his or her shout (foursquare’s name for the 140-character message attached to a checkin), we want to kick off a process that parses the message, looks up the user’s settings for call or text, and tells Twilio to go. Foursquare offers two ways to get checkins for a user: pull and real-time push. During development, we used the pull API to access the latest checkins and manually scheduled when to refresh this data. We’d briefly store the checkins, then cycle through them looking for “#mom” and call the next step. This helped us manually test the service, and we could even generate fake data within our database, saving us an embarrassing number of public #mom checkins (though I generated like 20 of them in production the night we got it all working). We figured we could get push working later and guarantee that our data flow worked without it. Push also requires SSL, which as far as I know you can’t do in local testing.

Handling both push and pull calls from foursquare was made simple by abstracting checkin processing into a Checkin model. This model knows how to send out calls or texts based on the information contained in a checkin, and doesn’t care how that checkin got created.

Pulling data from foursquare is more straightforward than receiving pushes. One tricky part is that foursquare doesn’t take care of de-duplicating the checkins they send you, so you have to make sure not to process the same checkin twice. Here’s our pull code, which fits nicely in the User model:
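A minimal sketch of what such a pull method might look like, written as plain Ruby with the API client and the checkin store injected; the `user_checkins` call mirrors the foursquare2 gem’s API, and the `Checkin`-style helpers (`exists?`, `create_from_foursquare`) are assumptions:

```ruby
class User
  # Pull the user's recent checkins and store only the ones we haven't
  # seen: foursquare does not de-duplicate for us, so we key on checkin ID.
  def pull_checkins(client, checkin_store)
    client.user_checkins('self').each do |checkin|
      next if checkin_store.exists?(checkin[:id])   # already processed
      checkin_store.create_from_foursquare(checkin) # triggers #mom handling
    end
  end
end
```

In the real app the store would be the Checkin model and the client a Foursquare2::Client built from the user’s stored access token.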

With this code, we can dump a User’s latest unprocessed checkins into our database, where we can process them manually. We considered using the delayed_job framework to pull and process checkins in the background, but eventually decided to take a shot at hooking into the push API.

For push, foursquare posts to your servers whenever an authorized user has checked in to a venue. The complicating issue that kept us from starting with push is similar to the problem we ran into with Twilio: it’s a pain to use these webhook APIs from a local development environment. Foursquare’s API was even more complicated in this regard, as they only allow pushing of data over SSL. This ruled out the localtunnel solution from Twilio, and we never succeeded in pushing data to our local servers. It would be awesome if foursquare could allow non-SSL connections for testing (maybe restricted to only getting your personal checkins pushed). We would also be happy with a solution similar to what Stripe does with their webhooks framework, where they send you a non-sensitive event ID that you can then use to make a secure request to their API from your server.

Regardless, we needed to get SSL up and running in production even if it wouldn’t work for development. We’re using Heroku for our hosting, and they offer free SSL via their *.herokuapp.com certificate. This SSL isn’t useful for user-facing pages, since we’d have to redirect people away from www.hashtagmom.com to the scary-looking hashtagmom.herokuapp.com, but it is perfect for giving foursquare a secure path to post to. Once we had a valid SSL-enabled path for foursquare, processing the push data itself is straightforward:
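A sketch of how that push handling might look; foursquare’s real-time API POSTs a `checkin` parameter containing the checkin as JSON, and here the same hypothetical checkin store from the pull path does the actual processing:

```ruby
require 'json'

# Hypothetical sketch: parse the pushed checkin and hand it to the same
# de-duplicating store used for pulled data, so push and pull share one path.
class PushHandler
  def initialize(checkin_store)
    @store = checkin_store
  end

  # params is the request parameter hash from the webhook POST.
  def handle(params)
    checkin = JSON.parse(params['checkin'])
    return if @store.exists?(checkin['id']) # pushed twice? process once
    @store.create_from_foursquare(checkin)
  end
end
```

In Rails this would live in a controller action behind the SSL-only *.herokuapp.com route that foursquare posts to.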

Getting push working was a huge win for us.  We’ll process a checkin within seconds of it being made.

Like many things, the first time you get this all set up is much harder than the second attempt. This workflow touches many different technology stacks (HTTP, third-party APIs, local development, databases, browser sessions, security, user permissions, authentication through OAuth2, and a few others), so it’s not easy to just sit down and do it. I highly recommend working with a friend so you can catch each other’s mistakes and talk through what each part of your program intends to do before diving into a Google or StackOverflow-athon of article reading and reckless GitHub copy-and-pasting: it won’t work. Writing about your efforts afterwards clarifies the process as well; it feels good!

If you have any questions, would like to see more code snippets, or have other topics you’d be interested in reading about, please let us know.