coffeescript

The immediate benefit of CoffeeScript

All languages have their pros and cons. Some of these deciding factors are more superficial than others. Often the superficial benefits are what strike you first about a language. CoffeeScript, for example, has a really tight, sexy and expressive syntax that lets you be consistently concise. Take this comparison of a pointless jQuery plugin:

Both do the same thing, but CoffeeScript definitely looks better right? Check it out over on GitHub. More to come on CoffeeScript sooooooon :)

vimeo

Wynn Netherland: Accelerating Titanium Development with CoffeeScript, Compass, and Sass

GolfStatus has been fortunate to have Wynn involved in the development of our platform. During his recent presentation at CODESTRONG in San Francisco, California, Wynn used GolfStatus to highlight some of the techniques he uses to write clean, concise code.

Introducing CrowdSync

Crowd-what?

This year, Nick and I wrote a little CoffeeScript program along with some HTML, CSS and a bit of C# for good measure. If you visit a web page that this code is running on, it makes your phone blink a randomly assigned color at specific times throughout a song performance. Your phone becomes synchronized with many other phones blinking the same or different colors all in time with a musical track which is also being played by a live band. We call it CrowdSync.

The Challenge

In August this past summer, Kim Vehon (from Central’s worship team) came into Nick’s and my office. She showed us this video of cell phones blinking a single color and emitting a tone to create music. It was a pretty cool idea, but we weren’t too sure where she really wanted to take it. I think initially she wanted it as part of a worship service element: phones somehow incorporated into the set, blinking with the music.

Initially, this sounded kind of boring to Nick and me. We thought, “What if we could make everybody’s phone blink like this, rather than just a few phones on stage?” An interactive worship experience sounded much cooler than a fancy set piece. So that’s where this whole thing began. And it just snowballed from there, becoming bigger and cooler as we got closer and closer to Christmas.

Mr. Negativity

In the project’s earliest stages I had convinced myself that there was no way we were going to pull this thing off. Early on, I was Mr. Negative Nancy about the whole thing. The core problem we were trying to solve was what we geeks call non-trivial. That is to say, we had a mountain of a problem to climb: we needed to make everybody’s phone blink at the same time. Not just roughly the same time. We needed them to blink at precisely the same time, with millisecond precision.

If you’ve never thought about this, here are just a few of the variables we found ourselves facing:

  • Inconsistencies in how time is kept on each device.
  • Time differences between carriers and even between devices on the same carrier.
  • Different operating systems and mobile browsers and the limitations they imposed.
  • Network latency between the phone and the server.
  • The complete lack of control afforded to mobile browsers by the client OS.
  • A great myriad of things that can happen whilst dealing with a phone (i.e. - What happens if somebody gets a phone call, or a text, or their phone goes to sleep?).
  • What’s going to happen when a thousand people hit our web server all at once?

So you see, being the pragmatic geek that I am, I naturally thought there was no way we could pull this off. Boy was I wrong!

The Solution… Round 1

A couple days later, Nick went into a kind of fit of inspiration that would ultimately prove me wrong. I’ve been thankful for that every day since. :)

We initially talked about using something like Node.js with Socket.io to send push notifications to clients telling them when to “play”. We decided fairly quickly that this approach would be too “chatty”. We needed something that would be fairly quiet on the network to help solve some of our network scalability challenges.

Nick came up with the idea to start from a known time source, the server’s time. We wrote a little C# web service with 2 methods. One method returns the current time, and the other returns a designated start time based on a campus ID passed to it. We would then interact with these services on the client via AJAX.

This solved the time inconsistency problem. But how do we deal with network latency? We ended up prototyping out a little bit of CoffeeScript that would poll the server several times asking for the current time. We then took the server’s time and compared it to the client’s time. We did this over and over trying to get the smallest variance between the client and server possible. Approximately 20 tries seemed to be the magic number of times to poll the server. This allowed us to kind of synchronize our watches with the server and get each of our clients doing things at the same time. Since the client time becomes relative to the server’s time, in theory, this technique could even work across time zones.
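
To make that concrete, here is a minimal sketch of the clock-sync idea in CoffeeScript. It is not the actual CrowdSync code; the /time endpoint, its response shape, and keeping the lowest-round-trip sample are all just one reasonable way to fill in the details.

syncWithServer = (tries, done) ->
  best = null                                  # the sample with the lowest round trip wins
  attempt = ->
    sent = Date.now()
    $.getJSON '/time', (data) ->
      received = Date.now()
      roundTrip = received - sent
      offset = data.serverTime + roundTrip / 2 - received
      best = {roundTrip, offset} if not best? or roundTrip < best.roundTrip
      if --tries > 0 then attempt() else done best.offset
  attempt()

syncWithServer 20, (offset) ->
  # serverNow() approximates the server's clock on this client
  window.serverNow = -> Date.now() + offset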

OK, so we had the time problem down. Now we needed to know when to “play” our “note”. So we set up a timer on the client that would check the current time against the known start time for our “song”. We needed this to be very responsive, so we settled on doing this check 100 times a second. So like a little kid with a full bladder on a road trip, our little app asks "Are we there yet?" incessantly until it’s time to play the note it’s been assigned.
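
A stripped-down version of that ticker might look like the snippet below, assuming the serverNow() helper from the sketch above, a startTime fetched from the server (both in epoch milliseconds), and a hypothetical playNote():

# check roughly 100 times a second whether the start time has arrived
ticker = setInterval ->
  if serverNow() >= startTime
    clearInterval ticker
    playNote()
, 10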

It took us about a full day to get a working prototype. And at the end of that day, we had something that would take an array of JavaScript object literals and play each note. We had all of our IT team’s phones in our office all blinking in perfect unison. It was brilliant.

More Problems… More Solutions…

Now that we had a working prototype that solved our core set of challenges, we needed to address another core piece of the puzzle. How will we get the musical arrangement into a format our app could understand?

After some pestering, we were finally given a MIDI file exported from an MP3. It was the piano arrangement that would be played during the service, played one note at a time. Thankfully, Nick found a MIDI parser written in C# that was able to decode the MIDI file and convert it into an array of relative timestamps. So we could convert the timing metadata stored in the MIDI track into an array of JavaScript object literals. The server determined the start time of the performance, and each note had a timestamp associated with it that was relative to the start of the track.

We then took this data and converted it to our “adjusted” time on the phones. We could add this relative time to the start time of the song and then our phone clients would know when to play their note during the course of the performance.
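
In sketch form, that conversion is just a mapping step. The {time, color} shape here is illustrative, not the real data format; each relative timestamp becomes an absolute time on the server's clock, and the ticker compares serverNow() against playAt:

schedule = ({playAt: startTime + note.time, color: note.color} for note in notes)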

Our next challenge was getting the light boards and band to fall in step with our blinking lights. Our lighting genius, David Empy, suggested we use QLab to fire the lights and the click track for the band. We ended up using QLab for the lights only and created a control panel that synced up the QLab Mac’s time in much the same way that the phones became synced. This control panel allowed us to set start times for the performance at one of our campuses.

To synchronize the worship band with our app, we decided to fire the click track ourselves using WebKit’s Audio API *. After working through a few performance issues, we settled on a method of pre-loading and running the audio in a way that would fire at exactly the right time. We were then able to use an offset to account for any lead-in time on the click track for the band. Not super straightforward, but it worked really well.
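
Here is a rough sketch of that kind of scheduling, not our exact code: clickUrl, startTime, serverNow() and leadInSeconds are stand-ins, and older WebKit builds used noteOn/noteGrainOn where newer ones use start.

context = new (window.AudioContext or window.webkitAudioContext)()
request = new XMLHttpRequest()
request.open 'GET', clickUrl, true
request.responseType = 'arraybuffer'
request.onload = ->
  context.decodeAudioData request.response, (buffer) ->
    source = context.createBufferSource()
    source.buffer = buffer
    source.connect context.destination
    # start when the performance starts, skipping the click track's lead-in
    delaySeconds = (startTime - serverNow()) / 1000
    source.start context.currentTime + delaySeconds, leadInSeconds
request.send()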

Creating the song visualization

This was probably the funnest part of the whole process for me. About a month and a half ago, Nick and I found out we were going to be responsible for creating a visualization to go on the side screens. This first started out as a grid of rectangles. Each rectangle represented a phone in the grid. This allowed us to create some pretty neat looking patterns, but it was ultimately limiting. Our worship team didn’t think it was “Christmasy” enough (they were right).

So we went back to the drawing board and started playing around a little bit with the HTML5 canvas element in conjunction with various pieces of the audio API. We ended up programmatically drawing the Luminous graphic on the canvas and animating it with our CoffeeScript.

I enjoyed this part the most. Nick ended up coming up with the math that would plot each point of light on the tree (using some geometry that I don’t think either of us had used since high school). We assigned each light a random color and filled it with a radial gradient.

We even worked on adding little subtle touches like fading the lights out rather than shutting them off abruptly, moving the center point of the radial gradient for each light around and adding a bit of flickering here and there to make the tree look more organic.
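
For flavor, drawing a single light looks roughly like this (illustrative names, not the production code): a radial gradient dot on the canvas whose alpha you animate downward instead of just switching it off.

drawLight = (ctx, x, y, radius, color, alpha) ->
  gradient = ctx.createRadialGradient x, y, 0, x, y, radius
  gradient.addColorStop 0, color
  gradient.addColorStop 1, 'rgba(0, 0, 0, 0)'  # fade to transparent at the edge
  ctx.save()
  ctx.globalAlpha = alpha                      # lower this each frame to fade the light out
  ctx.fillStyle = gradient
  ctx.fillRect x - radius, y - radius, radius * 2, radius * 2
  ctx.restore()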

Gotchas

Initially, syncing with an NTP server (like time.apple.com or time.windows.com) sounded like a no-brainer. We learned fairly deep into the project, however, that it can throw a wrench in things pretty quickly. If all your clients are dependent on time, what happens if the server’s time drifts?

Q: What happens when the server receives an NTP update and adjusts its time accordingly?

A: It starts serving up a new time to clients that’s different from the original time. This is not good.

We also ran into this same problem on our QLab Mac that would fire the lights, and once again on our iMacs that ran the visualization on the side screens in the house and fired the click track for the musicians. Yep. Still not good.

It initially looked like “gremlins in the system”, but we noticed that once we disabled NTP syncing and just had the clients keep time locally, our results became much more consistent. Since each client could adjust its start time based on the delta calculated off the server’s time, we could still maintain our synchronization between devices without relying on NTP.

Future iterations

Currently, all of our performance start times are accessed via an ASP.NET web service and stored as Lookups in our Arena ChMS database. We’re going to be breaking this dependency in the next major release. We prototyped it out using Arena since it offered us many conveniences, but we’d like to make it more portable and not reliant on a proprietary ChMS product.

We’ve got plans to re-write the back end of the system using Node.js and Express to offer a RESTful API for accessing the relevant information based on campus. We’ll likely store the data in a NoSQL database (MongoDB is looking pretty good), or perhaps something like MySQL if we end up needing a more traditional relational database engine.

I’ve got some other ideas that are perhaps a bit too abstract to go into here, but we’re actually looking to make CrowdSync more of a real-time app. We’ve got some pretty exciting ideas that we’re looking forward to exploring that should make the system better and more stable in future releases.

Conclusion (TL;DR)

Even though I was convinced in the earliest days that we couldn’t pull it off, we did it! We successfully synchronized many devices to blink colors at exactly the same time. Not only did they blink their assigned colors at exactly the right moments, in time with each other, but we were able to make them do it in time with music being played by a live band.

It was a huge success and we are super blessed to be a part of such an awesome team that could pull something like this off.

Keep an eye on the GitHub repo we’ve set up. We’ll be releasing the source code in the next few days. We’re also working on a video to more easily demonstrate what the app is capable of doing. As soon as we’ve finished work on it, Nick and I will make sure it gets included in the README on GitHub (and on our blogs too).

Pop over to Nick’s blog and give his post a read too. 

* I actually liked Mozilla’s HTML5 Audio API better, but Google’s V8 JavaScript engine was just too fast for us to drop Chrome completely for this project. When you’re ‘ticking’ every 10 milliseconds and painting a <canvas> or manipulating DOM elements, you really need to eke out all the performance you can get.

Fix IE issue with CoffeeScript / JavaScript map

IE 7 does not like some array functions (well, it does not implement them), like map.

It throws syntax errors.

If you use CoffeeScript, you can use a list comprehension instead:

    # ids = $(this).serializeArray().map (elem) -> {id: elem.value, name: elem.name}

    ids = ({id: elem.value, name: elem.name} for elem in $(this).serializeArray() )

which compiles to JavaScript like this:

ids = (function() {
  var _i, _len, _ref, _results;
  _ref = $(this).serializeArray();
  _results = [];
  for (_i = 0, _len = _ref.length; _i < _len; _i++) {
    elem = _ref[_i];
    _results.push({ id: elem.value, name: elem.name });
  }
  return _results;
}).call(this);

Vim Macros and You

Ever get the urge to update a ton of files? I know I do. For example, I recently changed several hundred CoffeeScript files from the syntax of

MyGreatClass = Backbone.Model.extend(
  defaults:
    awesome: true
)

to

class @MyGreatClass extends Backbone.Model
  defaults:
    awesome: true

How long did it take me? A couple of minutes. Here’s how.

Ack (or Grep)

I like ack. To find all the files I need to edit, I’d write something like this:

ack '^[^\s].*\=.*\.extend\($' app/assets/javascripts -l

This finds everything that doesn’t start with a space, has an equals sign, and has .extend( at the end of the line.

Opening a list of files in vim

After looking over the results of ack and ensuring that everything that matched is what I want to edit, I’ll open those files in vim.

vim $(ack '^[^\s].*\=.*\.extend\($' app/assets/javascripts -l)

Vim Macros

If you’ve been using vim and haven’t taken advantage of macros (especially if you’re editing a lot of files in a similar fashion), you’re missing out. Open up vim and type :help q to get the nitty-gritty; I’ll summarize here.

To start recording a macro, press (in normal mode) q and then a letter or number. This will record a macro to whatever register you chose (via the letter or number).

Once you’re recording a macro, anything you type will be recorded to that macro so that it can be replayed. What I would type to change these files to the new format would be:

qqgg0iclass @<esc>f=cwextends<esc>2f.DGdd:wnq

Whoa, brain overload. Let’s break it down:

qq             # records the macro to the q buffer
gg             # first line in file
0              # first character in line
iclass @<esc>  # inserts class @ at the cursor and returns to normal mode
f=             # finds the first equal after the cursor
cwextends<esc> # changes the word (=) and moves to insert mode, adds extends, and returns to normal mode
2f.            # finds the second period after the cursor
D              # deletes the remainder of the line
G              # moves to the end of the file
dd             # deletes the line
:wn            # writes the file and moves to the next file in the list
q              # stops recording

This should be fairly straightforward; the only thing I really want to point out is the :wn. The n in that command moves to the next file in the list of files you opened with vim. This is one half of what makes editing all these files really fast.

Replaying a Macro

Now that you have your macro, it’s time to replay it. To replay a macro, press (in normal mode) @q (assuming you stored your macro in the q register). If you run that macro, it’ll run against the current file, write the file, and move to the next. Since vim supports prefixing many commands with a number (for the number of times to repeat the command), running 100@q will run that macro on the first one hundred files I’ve opened with vim. Typically, this should be all you need to batch-edit, but if there are more files, just run that command again (or start with a higher number). If there are no more files to edit, vim will let you know.

Ben also mentioned recursive macros (my mind was blown) by adding @q right before the last q (which will run the q macro before stopping recording). Just make sure your q register is empty! This would allow you to run your macro once, without specifying the number of times to run it, because vim will run the macro until it’s out of files. Fancy!
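
For reference, the recursive version of the macro above would be recorded like this (the same keystrokes, with @q slipped in before the final q, and assuming the q register starts out empty):

qqgg0iclass @<esc>f=cwextends<esc>2f.DGdd:wn@qq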

Vim for fun and profit

Want to kick ass at vim? Pick up a copy of Vim for Rails Developers and become blazing-fast! If you want to hang out with fellow vim users to swap awesome tips like this, be sure to head to the Boston Vim Meetup!

So, you want to use require() in the browser...

You’ve written a bunch of JavaScript or CoffeeScript running on Node, which has helped you organize your code into modules, develop a test suite in any of numerous styles, experiment in a decent REPL, and take advantage of useful libraries.

Now you want to deploy that code to the final frontier—the web browser.

You have a couple options. A couple dozen, actually. Here are just the libraries that were easy to find, in order of GitHub popularity. The bars represent their number of watchers:

As you can see, quite a few developers have approached this problem. There are several ways to interpret this:

  • Maybe it’s an easy problem to solve, since so many people have opted to roll their own.
  • Maybe it’s hard to solve, since so many people evidently decided that the others got it wrong.
  • Maybe each person has legitimately different requirements.
  • Maybe it’s just “opinionated”, like web frameworks.

Downloading code on-demand: related, but different.

Admittedly, several of these libraries are trying to solve a related, but different problem: downloading the required modules on-demand (usually asynchronously via AJAX).

While an interesting problem to tackle, I think using this scheme in production is a bit misguided. Compiling your entire application into a single JavaScript file shouldn’t be a problem for anyone. Whose minified and compressed code is bigger than the size of a few JPEGs? (Hint: not Facebook’s, Grooveshark’s, or Pandora’s.) At most you should need a tiny loader script that grabs a couple of large, self-contained blobs of code when the page loads.

Loading dozens of small modules on demand is just going to result in more HTTP requests and worse compression. The simpler alternative is to already have the modules available, and just run them and return their exports on-demand. Thus, you have one file and no asynchrony.
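
To make that concrete, here is a bare-bones sketch of the shape most of these tools share. It is illustrative only, not any particular library’s API: a build step wraps every module body in a registration call and concatenates them into one file, and a tiny require runs each body lazily the first time it is asked for.

modules = {}
cache = {}

# the bundler emits one define(...) per module, all concatenated into a single file
define = (name, factory) -> modules[name] = factory

require = (name) ->
  return cache[name].exports if cache[name]?
  throw new Error("Cannot find module '#{name}'") unless modules[name]?
  module = cache[name] = {exports: {}}
  factory = modules[name]
  factory module.exports, module, require
  module.exports

define 'greeter', (exports, module, require) ->
  exports.hello = (name) -> "Hello, #{name}!"

console.log require('greeter').hello('browser')   # "Hello, browser!"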

Deploying the same code to production that you run in development: the actual problem.

The larger and more compelling use-case that many of these libraries tackle is running the same code in the browser that you wrote for Node. The foremost obstacle here just happens to be the organization of code into CommonJS-like modules. Even if you avoid using Node’s core modules and globals like process and __filename, the use of require for your own modules still must be addressed.

That’s where these libraries come in.

I happen to have implemented yet another one of these libraries for my own purposes, so I’m familiar with the problem. In my next blog post, I’ll explain what requirements the most popular libraries satisfy, describe where they fall short, and demonstrate some improvements we could make to truly call them production-ready.

Discuss this post on Hacker News.

2011-04-23 in JS: jQuery 1.6, IE10, and what "modern JavaScript" means

Quite a lot packed into this week (partly because this update is late).
The "jQuery Introduction for Non-Programmers" slides are well put together, so they are worth a look even if you are not interested in jQuery. Keywords like MVC are showing up more and more, which tells me JavaScript applications of that scale are on the rise, and I feel we are at the stage of experimenting to find out what works best. The Modern JavaScript post touches on exactly that kind of discussion, which makes it an interesting read.

jQuery Conference 2011 Bay Area Videos
http://addyosmani.com/blog/jqcon-bayarea-videos-2011/

Videos from jQuery Conference 2011.
jQuery 1.6 was also discussed.

jQuery: » jQuery 1.6 Beta 1 Released
http://blog.jquery.com/2011/04/15/jquery-16-beta-1-released/

jQuery 1.6 Beta 1 has been released.

ノンプログラマーのためのjQuery入門 (jQuery Introduction for Non-Programmers)
http://www.slideshare.net/hayatomizuno/jquery-7665168

Slides explaining jQuery for beginners.
Very clear and easy to follow; recommended.

steps to phantasien(2011-04-17)
http://steps.dodgson.org/?date=20110417

A look at how Readability works, by rewriting it in CoffeeScript; it finds the article body from the DOM tree by trial and error.
Also an evaluation of PhantomJS as an execution environment.

JS11
http://js11.org/

A meta-language for JavaScript.

Modern JavaScript - rmurphey
http://blog.rebeccamurphey.com/modern-javascript

A response to "Is JavaScript the New Perl?"
http://www.dagolden.com/index.php/1446/is-javascript-the-new-perl/
It asks what modern JavaScript should look like in the first place, and covers the modules and async loaders that are being hotly debated right now: RequireJS, jQuery, NIH.
Worth a read if you know both Perl and JavaScript.

Native HTML5: First IE10 Platform Preview Available for Download - IEBlog - Site Home - MSDN Blogs
http://blogs.msdn.com/b/ie/archive/2011/04/12/native-html5-first-ie10-platform-preview-available-for-download.aspx

About IE10.
Covers CSS3 Multi-column Layout, CSS3 Grid Layout, CSS3 Flexible Box Layout, CSS3 Gradients, ES5 Strict Mode, CSS3 Transitions and CSS3 3D Transforms.

Opera: Opera 11.10 for Windows changelog
http://www.opera.com/docs/changelogs/windows/1110/

Opera now supports WebP.
Firefox, by contrast, is not supporting WebP for now because the format is considered half-baked in its current state.
- Bug-org 600919 Implement WebP image support - WebStudio
- Jeff Muizelaar: WebP

WebStorm & PhpStorm Blog » Blog Archive » 50% OFF personal WebStorm licenses
http://blogs.jetbrains.com/webide/2011/04/50-off-on-personal-webstorm-licenses/

WebStorm, an excellent IDE for HTML + CSS + JavaScript, is having a half-price sale.
Other JetBrains IDEs are 30-50% off as well.
- WebStorm & PhpStorm Blog » Blog Archive » Easter Sale from JetBrains
- A post about IntelliJ, AppCode (CIDR) and the others (WebStorm/PhpStorm/PyCharm/RubyMine)

eBay Open Source
https://www.ebayopensource.org/index.php/VJET/HomePage

An Eclipse-based JavaScript IDE.
You can add support for more libraries by creating VJET JavaScript Type Libraries; it currently supports the well-known libraries and Node.

Taberareloo + upload from cache - 枕を欹てて聴く
http://d.hatena.ne.jp/Constellation/20110411/1302456745

Various HTML5 APIs used from a Chrome Extension:
XHR Level 2 FormData, the FileSystem API, BlobBuilder (arraybuffer), and converting data to binary.

The Lessons - Hack The WebGL (WebGL study group)
https://sites.google.com/site/hackthewebgl/learning-webglhon-yaku/the-lessons

A WebGL tutorial: a translation of Learning WebGL.

maccman/ichabod - GitHub
https://github.com/maccman/ichabod

Headless testing that works with Jasmine and QUnit. It runs from the command line using WebKit, and also lets you call Ruby methods from JavaScript.

Firmin, a JavaScript animation library using CSS transforms and transitions
http://extralogical.net/projects/firmin/

A JavaScript animation library that uses CSS transforms and transitions.

Convert XML to JSON with JavaScript
http://davidwalsh.name/convert-xml-json

Converting XML to JSON with JavaScript (the DOM API).

The JavaScript Comma Operator | JavaScript, JavaScript
http://javascriptweblog.wordpress.com/2011/04/04/the-javascript-comma-operator/

About the comma operator.
@cou929 has put together a summarized Japanese translation:
- JavaScript のコンマ演算子 - フリーフォーム フリークアウト

SproutCore Blog - SproutCore 1.5 Release Candidate 1 Released
http://blog.sproutcore.com/post/4280548884/sproutcore-1-5-release-candidate-1-released

SproutCore 1.5 RC1 has been released.

kbjr/Events.js at master - GitHub
https://github.com/kbjr/Events.js

An event handler library.

CommunityJS
http://communityjs.org/

A directory of JavaScript communities (user groups) around the world.

Home - jQuery Mobile 1.0a4.1 Japanese Reference
http://dev.screw-axis.com/doc/jquery_mobile/

A Japanese translation of the jQuery Mobile reference.

Books

The Node Beginner Book
http://nodebeginner.org/

An e-book-style introduction to Node.js.

O’Reilly Japan - JavaScriptクックブック (JavaScript Cookbook)
http://www.oreilly.co.jp/books/9784873114941/

Released April 22, 2011.
A broad-but-shallow kind of book.
My notes from reading the original English edition:
http://efcl.info/adiary/JavaScript%20Cookbook%E3%81%AE%E8%A8%98%E9%8C%B2

Amazon: HTML5基礎 (HTML5 Basics): WINGSプロジェクト 片渕彼富, 山田祥寛
http://www.amazon.co.jp/o/ASIN/4839937931/book042-22/ref=nosim

An HTML5 book.

O’Reilly Japan - 入門 HTML5 (Introducing HTML5)
http://www.oreilly.co.jp/books/9784873114828/

Released April 22, 2011.
Translation supervised by 矢倉眞隆.

The Pragmatic Bookshelf | CoffeeScript
http://pragprog.com/titles/tbcoffee/coffeescript

On sale June 15, 2011.
A book about CoffeeScript.

O’Reilly: Stateful JavaScript Applications
http://jswebapps.heroku.com/

The page for the O’Reilly book Stateful JavaScript Applications.
It focuses on rich applications, covering MVC, WebSockets, Node and more.

Coding with Less online with Tinkerbin

I just recently came across a website called Tinkerbin. "Tinkerbin lets you play around with HTML, JavaScript and CSS without creating files or uploading to servers."

I’m pretty sure you’ve heard of JSFiddle, a free online code editor where you can write your code and then save it. JSFiddle has become popular with developers and designers for showing off their code so they can get help… or just to show off.
What makes Tinkerbin different is that it supports more than just the standard HTML, CSS and JavaScript. It also supports HAML, SASS, CoffeeScript and even Less!
Yep, now you can open this online editor, write your Less code, save it, let it run and then paste it into your project without having to worry about compiling it; the site does that for you. Best of all, the whole thing is free.
The downside is that there’s no mobile web app (well, JSFiddle doesn’t have one either). Anyway, head over to Tinkerbin and bookmark the site as a development tool for later use; it’s definitely going in as one of mine.

How to start with Backbone.js: A simple skeleton app

Written by Miha Rebernik

Outline

  1. Philosophy
  2. Backstory
  3. Tools
  4. Using the skeleton
  5. Useful resources


TL;DR

It took me some time to get an optimal code/directory layout for Backbone.js apps.

Because I think this is a major pain for beginners, I prepared a well commented sample skeleton app.

Get it from GitHub while it’s hot; pull requests and feedback are welcome.

Prerequisites

You need solid knowledge of JavaScript and familiarity with Backbone, Ruby, HAML, SASS and CoffeeScript to find this writeup useful.

Also, I’m developing on a Mac and have not tested this on other platforms, although I don’t see any reason why it shouldn’t work.

1. Philosophy

Part of the success of Rails was its conventions and predefined directory tree. While it looks overwhelming and maybe annoying to a beginner at first, it soon becomes liberating. With experience things fall into place, and soon you feel like every tiny bit of code has its dedicated home.

Backbone, being nimble, does not prescribe any particular code or directory structure. Until I read enough material and settled on this particular layout, I was feeling very confused and disoriented.

This skeleton app was extracted from a production app and then extensively annotated, to explain certain decisions and choices.

2. Backstory

When I first started to play with Backbone I was already heavily entrenched in the Ruby and Rails world. So naturally I thought, yeah, MVC, I know that. It turned out to be a bit farther from the truth than I wanted or cared to admit.

Disclaimer: I rarely developed pure client-side software, though I was using JavaScript extensively to make things faster and more responsive.

Thing is that MVC on the server-side is quite a bit different from MVC on the client. It has something to do with wiping the state clean each time you reload the page. Statelessness.

The client on the other hand is stateful and thus keeps all your bad practices in memory until they start to slow things down and eventually stop working.


This was the biggest client-side project for me so far, building out the Dubjoy editor for dubbing online video.

The hardest part of learning to develop MVC on the client and using Backbone was seeing the big picture. Seeing where all the little parts fit in and how this all works together in the grand scale.

So for the most part, my journey with Backbone consisted of finding out best practices for file and code organization, setting up the environment and directory structures.

Using backbone.js as a library was “easy”. (Not really, but this isn’t what this article’s about.)

One of the biggest mistakes I was making when starting out was trying to use Backbone constructs for everything.

Backbone is intentionally kept simple, because it’s supposed to be a complement to your own JavaScript. So just create your own App class, and populate it with the stuff and initialization your app really needs.
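
For example (the names here are made up, not part of the skeleton), something this small is often enough:

class App
  constructor: ->
    @views = {}
    @collections = {}

  start: ->
    # wire up whatever your app actually needs; TodoCollection and MainView
    # stand in for your own Backbone classes
    @collections.todos = new TodoCollection()
    @views.main = new MainView(collection: @collections.todos)
    Backbone.history.start()

$ -> (window.app = new App()).start()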

3. Tools

So a good workflow needs good tools. Here I’ll describe the tools that I found indispensable when developing in Backbone.


CoffeeScript, HAML and SASS

Because I resent cruft and redundancy, I’m a big fan of abstraction languages. Whenever I can, I opt for HAML, SASS and CoffeeScript.

The brevity they bring is paramount to me.

HAML Coffee

In Backbone, you usually need a template engine. Templates provide the markup for views. There are a lot of solutions for this, but because I like to be consistent, the best choice was to use HAML.

Fortunately, there’s a library for this: haml-coffee, which enables you to use HAML intertwined with snippets of CoffeeScript.
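
Just to show the flavor, a tiny haml-coffee template might look like this (the markup and variables are made up): plain HAML structure, with CoffeeScript in the evaluated parts and the render context reached through @.

%h1= @title
%ul
  - for item in @items
    %li= item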


Guard

To be able to use these languages seamlessly, you need some sort of on-demand compiler. It turns out a Ruby gem called Guard does exactly this.

Guard is extremely flexible. It watches for file system changes and then does something to the files that changed.

Jammit


Jammit is an asset packaging library. It concatenates and compresses CSS and JavaScript. It’s easy to use, but needs a configuration file that defines which files to work on.

Sinatra with Isolate

Backbone apps are static files and you can run them directly off your hard drive. But to get proper paths, and maybe even some kind of API, we need a server.


Sinatra, a mini Ruby web framework, forms the base of the server. This enables some quick server-side magic as well as making an API for persistence.

To make this part as easy as possible to use, I packaged the server with Isolate, a small Ruby library for sand-boxing, which is like a mini-Bundler. When launching the server with rake server for the first time, it will check and auto-install its dependencies. It just works.

4. Using the skeleton

Getting started with a new app using my skeleton is trivial. It uses Ruby in several critical places, so be sure you have a working installation of Ruby, preferably of the 1.9 kind.

All of the files, directories and their meanings are described in more detail in the README file of the skeleton.

A working example of this app is available online. This way you can check if the console output is the same on your local setup and here.

Start by cloning my backbone-skeleton repo.

$ git clone https://github.com/mihar/backbone-skeleton.git my-new-backbone-app

Then use the bundle command that comes with Ruby Bundler to install the necessary dependencies for guard. Guard will compile our HAML, SASS and CoffeeScript to their native counterparts.

$ cd my-new-backbone-app
$ bundle

Once Bundler completes the installation, we can try starting Guard, to immediately start watching files for changes.

$ bundle exec guard

While leaving Guard running, go to another terminal and let’s fire up the simple, bundled Ruby web server that we’ll use for development. The server will install all of its dependencies by itself.

$ rake server

[1/1] Isolating sinatra (>= 0).
Fetching: rack-1.4.1.gem (100%)
Fetching: rack-protection-1.2.0.gem (100%)
Fetching: tilt-1.3.3.gem (100%)
Fetching: sinatra-1.3.3.gem (100%)
[2012-12-05 18:17:05] INFO  WEBrick 1.3.1
[2012-12-05 18:17:05] INFO  ruby 1.9.3 (2012-02-16) [x86_64-darwin11.3.0]
[2012-12-05 18:17:05] INFO  WEBrick::HTTPServer#start: pid=39675 port=9292

Now our server is listening on http://localhost:9292, so go ahead, and open that.

If you see “Skeleton closet”, everything is go.

Go check out the JavaScript console for more information.

5. Resources

The missing CDN

They host all the libraries, including Backbone and underscore.

Backbone Peepcode tutorials

Peepcode has been my friend since my Ruby and Rails days. They produce high-quality screencasts on a variety of topics.

They have a series of 3 videos on Backbone, going from the basics to some pretty advanced stuff.

Be prepared to shell out $12 per video, though.

Derick Bailey backbone posts

I’ve learned so much about the correct ways to do things on Derick’s blog. He’s a seasoned Backbone developer that has overcome many problems and written up on the progress. Wealth of resources.

Derick Bailey’s 4 part screencast

Haven’t seen this one yet, but if I’m judging by his blog, this should be very worth the money. 4 videos, $12 a pop.

Backbone Patterns

Documented patterns extracted from building many Backbone apps.

Organizing Backbone apps with modules

Article exploring similar problems of code/directory structure and organizations.

Backbone Boilerplate

A much bigger project with a similar goal as mine.

Backbone for absolute beginners

Building a Web Client like Pros

Updates at the bottom

In this post, I will guide you with some tips and tricks on how to build a web application on top of your own API.

So, let’s say you have an Awesome Web Startup and you want to build it the modern way. What I mean is: you have an API and a user interface, a “JavaScript application”, that communicates with the API. We can find dozens of websites that use this technique, for example Twitter, Foursquare, Quora and many more.

Let’s say you have myawesomeapp.com as the front end for your users. Once they sign in, they are granted access to the API. You then want to make Ajax requests to a server other than the domain that hosts the client application, e.g. api.myawesomeapp.com. This leads lots of newbie startups to set up the API server on the same domain as the client application (myawesomeapp.com/api) to avoid the limitations of the XMLHttpRequest object in web browsers. The problem is caused by the browsers’ Same Origin Policy, which prevents a page from accessing data from another server. I won’t go into the Same Origin Policy itself; you can read about it here.

The Authentication

Leading startups like Twitter use their own public API in their client applications, and in most cases the API is hosted on a different server than the application. I will take Twitter as a case study of how they make requests to their API server, api.twitter.com, and the kind of authentication they adopt for communication between twitter.com and api.twitter.com. (Externally, Twitter uses OAuth to authenticate users for its API.)

The easy way to do this is to grant authentication by sharing the same cookies as the front-end application. Once the user is signed in and authenticated to the client application, the API server is accessible to them via the client.

The Twitter way

For each user that signs up for Twitter, the system generates a unique token (if you think of Twitter as an OAuth client, this is the access_token for the internal OAuth application). This token is saved in the cookies under the key auth_token, and it is used to grant the user access to the API.

Granting users access to the API can be done in several ways: via cookies as Twitter does, by sending the token as a parameter in the requested URL, or by sending the token in a header. Since this token is randomly generated and unique across all users, it is a hard-to-guess string. So on the API side, we find the user with this unique token and give her access to the API.

How I do it

I use Rails for building my JavaScript client and Grape for the API. In the Rails application I use plataformatec/devise for authentication, which is built on top of hassox/warden, so in the API I can access the user via env["warden"].user.

We also share the cookies of the Rails app with the API:

# in config/initializers/session_store.rb
App::Application.config.session_store :cookie_store, key: '_app_session', :domain => :all

In the API’s config.ru

# THE_LONG_SECRET is from config/initializers/secret_token.rb in the rails app
use Rack::Session::Cookie, key: "_app_session", secret: "THE_LONG_SECRET", domain: ".domain.ltd", httponly: true

So now the cookies are shared by the two apps: the api and the client.

Once the user has signed in, authentication can be checked through env["warden"]. But we want to focus now on how to authenticate them through the authentication_token.

Devise has a great feature that generates a unique token for each user; just add :token_authenticatable to the Devise modules list in your model (User):

devise :database_authenticatable, :registerable,
         :recoverable, :rememberable, :trackable, :validatable, :token_authenticatable

Also make sure that Devise ensures each user has an authentication_token by adding before_save :ensure_authentication_token to the model.

You can also add the user’s authentication_token to the cookies under the key auth_token with a Warden callback (and another callback to remove it from the cookies after signing out):

Warden::Manager.after_set_user do |user, auth, opts|
  auth.cookies["auth_token"] = {
    value: user.authentication_token,
    domain: :all,
    httponly: true
  }
end

Warden::Manager.before_logout do |user, auth, opts|
  auth.cookies.delete("auth_token", {domain: :all})
end

In the next snippet of code, I show how we authenticate users to the API. As you can see, we grant authentication via Warden, and via the authentication_token in cookies, in headers, or as a parameter.

module ApiAuth
  # Finds auth_token in the HTTP_COOKIE.
  # Loops through all cookies to find it.
  # Returns the value of auth_token when it finds it.
  # Otherwise returns nil.
  def auth_token_with_xml_http_req_via_cookies
    if env["HTTP_COOKIE"]
      env["HTTP_COOKIE"].split(/; /).each do |cookie|
        k, v = cookie.split(/\=/, 2)
        if k == "auth_token" and env["HTTP_X_REQUESTED_WITH"] == "XMLHttpRequest" # We check also if the request was made by an Ajax request
          return v.to_s
        end
      end
    end
    return
  end

  def auth_token_via_headers
    if env["HTTP_X_AUTH_TOKEN"]
      return env["HTTP_X_AUTH_TOKEN"].to_s
    end
    return
  end

  def warden
    env["warden"]
  end

  def authenticated
    if warden.authenticated?
      return true
    elsif auth_token_with_xml_http_req_via_cookies and User.where(:authentication_token => auth_token_with_xml_http_req_via_cookies).first
      return true
    elsif auth_token_via_headers and User.where(:authentication_token => auth_token_via_headers).first
      return true
    elsif params[:auth_token] and User.where(:authentication_token => params[:auth_token]).first
      return true
    else
      error!({ code: 401, message: "Unauthorized." }, 401)
    end
  end

  def current_user
    warden.user ||
      User.where(:authentication_token => auth_token_with_xml_http_req_via_cookies).first ||
        User.where(:authentication_token => auth_token_via_headers).first ||
          User.where(:authentication_token => params[:auth_token]).first
  end

  def authenticated_user
    authenticated
    error!({ code: 401, message: "Unauthorized." }, 401) unless current_user
  end
end

So, in the API you can check that the user is authenticated with the authenticated_user method, and you can access the user object with current_user.

Client Application and the Ajax Requests to the API

Now, the beautiful part: you are on the client side, myawesomeapp.com, and want to make Ajax requests to the API at api.myawesomeapp.com, which returns JSON objects.

I’m using jQuery as my JavaScript library, and here is a simple GET request:

$.get("http://api.myawesomeapp.com/1/me")

Oops!

XMLHttpRequest cannot load http://api.myawesomeapp.com/1/me. Origin http://myawesomeapp.com is not allowed by Access-Control-Allow-Origin.

Solution

There are lots of techniques to work around this browser limitation (mentioned at the top of the post); you can read about them here.

The best working solution is to set up an iframe that is served by the API domain and make requests to the API through it. I read a lot of the JavaScript in the source code of Twitter and Foursquare and extracted a great piece of code to solve this.

First, set up a static HTML page, receiver.html, that is served by the API server (api.myawesomeapp.com/receiver.html) and contains this HTML code:

<html>
  <head><meta http-equiv="Cache-Control" content="public, max-age=31556926" /></head>
  <body><script>document.domain='myawesomeapp.com'</script></body>
</html>

Second, create an iframeManager in the client app (extracted from foursquare.com). Here’s the CoffeeScript snippet of the iframe manager (JavaScript version: https://gist.github.com/1555068):

iframeManager =
  xhrCallback: null
  iframeLoading: !1
  loadQueue: []

  addLoadCallback: (a)->
    if iframeManager.isLoaded() then a() else iframeManager.loadQueue.push(a)

  runLoadCallbacks: ->
    a() for a in iframeManager.loadQueue

  isLoaded: ->
    null != iframeManager.xhrCallback

  buildIframe: (apiServer) ->
    if not iframeManager.iframeLoading
      iframeManager.iframeLoading = !0
      a = document.createElement("div")
      b = "#{apiServer}receiver.html?parent=#{encodeURIComponent(window.location.href)}"
      a.innerHTML = """<iframe onload="window._tempIframeCallback()" id="receiver_iframe" tabindex="-1" role="presentation" style="position:absolute;top:-9999px;" src="#{b}"></iframe>"""
      c = a.firstChild
      window._tempIframeCallback = ->
        delete window._tempIframeCallback
        # use the XHR constructor from inside the iframe so requests originate from the API domain
        iframeManager.xhrCallback =
          if window.XMLHttpRequest and ("file:" isnt window.location.protocol or not window.ActiveXObject)
            -> new c.contentWindow.XMLHttpRequest()
          else
            ->
              try
                new c.contentWindow.ActiveXObject("Microsoft.XMLHTTP")
        iframeManager.runLoadCallbacks()
      document.body.appendChild(c)

window.iframeManager = iframeManager

And here’s a snippet of the Api class through which we will create requests to the API (JavaScript version: https://gist.github.com/1555071).

At the bottom of the code you have to set the API domain and the API version: window.api = new Api "http://api.myawesomeapp.com", 1

class Api
  constructor: (apiServer, apiVersion) ->
    @iframeManager = iframeManager
    @apiServer = apiServer
    @apiVersion = apiVersion

  ajax: (req) ->
    if @iframeManager.isLoaded()
      req.xhr = @iframeManager.xhrCallback
      req.crossDomain = not 1
      req.dataType = "json"
      $.ajax req
    else
      self = @
      @iframeManager.addLoadCallback ->
        self.ajax req
      @iframeManager.buildIframe(@apiServer)

  request: (options) ->
    options.type = options.type or "GET"
    options.url = if options.url then "#{@apiServer}#{@apiVersion}#{options.url}"
    options.data = options.data or {}
    options.headers = options.headers or {}
    @ajax options

window.api = new Api "API_DOMAIN", API_VERSION # notice to change API_DOMAIN and API_VERSION

So now you have access to the api variable throughout the client application. To make requests to the API you use the api.request() function.

Here are some examples of how to make requests:

api.request({url: "/me"}) // this makes a GET request to http://api.myawesomeapp.com/1/me

You can use the jQuery $.ajax methods done, fail and always:

api.request({url: "/me"}).done(function() { alert("success"); }).fail(function() { alert("error"); }).always(function() { alert("complete"); });

We grant authentication for these requests with the authentication_token that we saved in the cookies.

If you have not authenticated via cookies you can use the header:

api.request({url: "/me", headers: {"x-auth-token": "xhg7t37vy3rvFS4jt4"}})

or by parameter:

api.request({url: "/me", data: {"auth_token": "xhg7t37vy3rvFS4jt4"}})

for POST requests:

api.request({url: "/articles", type: "POST", data: {...}})

For PUT and DELETE requests we handle it the Rails way: make a POST request with a _method parameter for the request type, and don’t forget to add use Rack::MethodOverride to config.ru in the API application.

api.request({url: "/articles/123", type: "POST", data: {_method: "put", …}})

api.request({url: "/articles/123", type: "POST", data: {_method: "delete"}})

Conclusion

I read this tweet yesterday, retweeted by a friend: Samer Abu Khait.

not having an API in 2012 is like not having a website in 1998.

— Brent Grinna (@brentgrinna)

January 2, 2012

Now you know how to handle authentication and cross-domain requests. It’s time to craft your application the professional way!

Update

1- Don’t forget to put this snippet at the top of your client application:

    <script type="text/javascript" charset="utf-8">
      document.domain = 'myawesomeapp.com';
      <!-- OR document.domain = '<%= request.host_with_port %>'; -->
    </script>

2- (Recommended step) You can drop Warden in your application and use only authentication via the cookie token. I also recommend using the Rack::CommonCookies middleware in case you don’t want to set the domain manually (better for testing locally):

# THE_LONG_SECRET is from config/initializers/secret_token.rb in the rails app
use Rack::Session::Cookie, key: "_app_session", secret: "THE_LONG_SECRET", httponly: true
use Rack::CommonCookies
# don't forget to install the `rack-contrib` gem and require "rack/contrib/common_cookies" at the top of the API's config.ru

CoffeeScript JavaScript Fast Fibonacci

With some extra time on my hands, I figured now was a good time to dive into Node.js and JavaScript, or more specifically CoffeeScript. To grease the wheels, here is a gist of a fast Fibonacci sequence generator based on the work of Robin Houston.

A few TODOs for later include:

  1. Big Number support
  2. Newtonian algorithm optimization
  3. Binet’s algorithm (for benchmarking against)
  4. Burnikel / Ziegler optimizations
  5. Better divmod? Bit shifting?
  6. Make into CommonJS module for Node.JS

http://gist.github.com/1032685
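
If you just want the shape of the algorithm without clicking through, here is a minimal fast-doubling sketch in CoffeeScript (plain Numbers only, so it overflows long before the big-number TODO above is addressed); it is a sketch of the technique, not the gist itself:

# Fast doubling: F(2k) = F(k) * (2*F(k+1) - F(k)) and F(2k+1) = F(k)^2 + F(k+1)^2.
# fibPair returns [F(n), F(n+1)], so each recursive call halves n.
fibPair = (n) ->
  return [0, 1] if n is 0
  [a, b] = fibPair Math.floor(n / 2)
  c = a * (2 * b - a)
  d = a * a + b * b
  if n % 2 is 0 then [c, d] else [d, c + d]

fib = (n) -> fibPair(n)[0]

console.log fib 50   # 12586269025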

When Textmate Can't Find Coffeescript

When you try to use the TextMate CoffeeScript bundle you may get the error: coffee: command not found. As the CoffeeScript bundle’s README points out, TextMate doesn’t inherit your regular PATH. You need to go into TextMate’s preferences (Preferences > Advanced tab > PATH variable) and add the path to CoffeeScript.

You need to figure out where CoffeeScript is with the command-line command which coffee (which was /usr/local/bin/coffee for me) and make sure it is part of TextMate’s PATH variable (which, for me, meant adding :/usr/local/bin to the end).

Carefree CoffeeScript Auto-compiler

EDIT:

Please refer to the newest post on coffeemaker instead.



Like all Jedi need to craft their own lightsabers… all CoffeeScript developers need to code their own autocompilers! *lol*

So here it goes: are you looking for some way to watch over your CoffeeScript working directory tree from the moment you log in, and automatically compile everything to JavaScript on each file save/change or new file creation? Well, look no further, because it is right here:

#!/bin/bash

while true
do
    coffee -cw "$(cd "$(dirname "$0")" && pwd)"/* &
    sleep 300
    kill $!
done

Or grab it from my gist, since Tumblr has a nasty habit of messing with quotation marks in articles, which isn’t very friendly to code.

Kinda surprised that CoffeeScript didn’t walk the extra mile with the --watch option here, because literally anyone who doesn’t treat CoffeeScript as a fancy toy but as a real tool will be needing this badly.

How to Use It
  1. Download the script (“coffeemaker”).

  2. Put coffeemaker at the root of your working directory.
    For example: ~/project/coffeemaker

  3. Add coffeemaker into your Startup Applications; make sure absolute path is used:




  4. Well, that’s it! From now on this tiny script will execute automatically upon login, recompile your CoffeeScript automatically upon every file save however deep it is inside your directory tree, and re-watch your whole directory every 5 minutes to extend its grip to any new CoffeeScript file you might have just created. Speaking of automagic ;)

About coffeemaker
  • This tiny script fixes two of the major flaws of coffee --watch: new CoffeeScript files aren’t being watched, and it requires manual initiation and then goes on to occupy one of your terminal tabs.

  • Compiled JavaScript files are placed in the same directory as their CoffeeScript source files, which is the default behavior of coffee.

  • All it basically does is start the coffee command in a subshell, wait 5 minutes, terminate the process and start a new one. You can modify the value to make it wait longer or shorter, but I think 5 minutes is about right.

Be sure to leave me a comment if you think there are ways to further improve it, thanks! =D

How to directly upload files to Amazon S3 from your client side web app

Written by Rok Krulec

Why do you need this?

You don’t want your heavyweight data to travel 2 legs, from client to server to S3, incurring the cost of IO and clogging the pipe twice.

Instead, you want to ask your server to give your client one-time permission to upload your data directly to S3. The process is still 2-legged, but the heavyweight data travels only on 1 leg.

On Amazon S3 this is implemented with CORS (Cross-Origin Resource Sharing).


We use this at Dubjoy, where customers upload their huge video files to S3 for translation and voice-over, and we don’t want our Heroku server to have anything to do with heavy-weight video files.

Steps to implement this

  1. Set up the Amazon S3 bucket CORS configuration
  2. Implement the client-side JavaScript (CoffeeScript, JavaScript)
  3. Implement server-side upload request signing (Ruby/Sinatra, trivial to do in any other language)

1. Amazon S3 bucket CORS configuration

Set this in the AWS S3 management console. Right-click on the desired bucket and select Properties. Below, on the Permissions tab, click Edit CORS configuration, paste the XML below and click Save.

<?xml version="1.0" encoding="UTF-8"?>
<CORSConfiguration xmlns="http://s3.amazonaws.com/doc/2006-03-01/">
    <CORSRule>
        <AllowedOrigin>*</AllowedOrigin>
        <AllowedMethod>GET</AllowedMethod>
        <MaxAgeSeconds>3000</MaxAgeSeconds>
        <AllowedHeader>Authorization</AllowedHeader>
    </CORSRule>
    <CORSRule>
        <AllowedOrigin>*</AllowedOrigin>
        <AllowedMethod>PUT</AllowedMethod>
        <MaxAgeSeconds>3000</MaxAgeSeconds>
        <AllowedHeader>Content-Type</AllowedHeader>
        <AllowedHeader>x-amz-acl</AllowedHeader>
        <AllowedHeader>origin</AllowedHeader>
    </CORSRule>
</CORSConfiguration>

2. Your web app

Head to the GitHub repo with the CoffeeScript and JavaScript class files to include. In your app, do the following:

HAML
%input#files{ :type => 'file', :name => 'files[]' }
or HTML for the chevron-lovers
<input id='files' type='file' name='files[]' />
CoffeeScript
s3upload = s3upload ? new S3Upload
  file_dom_selector: '#files'
  s3_sign_put_url: '/signS3put'
  onProgress: (percent, message) ->
    console.log 'Upload progress: ', percent, message # Use this for live upload progress bars
  onFinishS3Put: (public_url) ->
    console.log 'Upload finished: ', public_url # Get the URL of the uploaded file
  onError: (status) ->
    console.log 'Upload error: ', status
or JavaScript for the brace-lovers
var s3upload = s3upload != null ? s3upload : new S3Upload({
  file_dom_selector: '#files',
  s3_sign_put_url: '/signS3put',
  onProgress: function(percent, message) { // Use this for live upload progress bars
    console.log('Upload progress: ', percent, message);
  },
  onFinishS3Put: function(public_url) { // Get the URL of the uploaded file
    console.log('Upload finished: ', public_url);
  },
  onError: function(status) {
    console.log('Upload error: ', status);
  }
});

Be sure to set the right DOM selector, file_dom_selector, for the file input tag (#files in our case). s3_sign_put_url is an endpoint on your server where you will sign the S3 PUT requests.

3. Server-side request signing

Be sure to set S3_BUCKET_NAME, S3_SECRET_KEY and S3_ACCESS_KEY. Create a bucket and get the keys under the Security Credentials menu in the AWS management console.

S3_BUCKET_NAME = 'CREATE_A_BUCKET_AND_SET_THE_NAME_HERE'
S3_SECRET_KEY = 'GET_THIS_IN_AWS_CONSOLE'
S3_ACCESS_KEY = 'GET_THIS_IN_AWS_CONSOLE'
S3_URL = 'http://s3.amazonaws.com/' # base URL used when building the signed request below

get '/signS3put' do
  objectName = params[:s3_object_name]
  mimeType = params['s3_object_type']
  expires = Time.now.to_i + 100 # PUT request to S3 must start within 100 seconds

  amzHeaders = "x-amz-acl:public-read" # set the public read permission on the uploaded file
  stringToSign = "PUT\n\n#{mimeType}\n#{expires}\n#{amzHeaders}\n/#{S3_BUCKET_NAME}/#{objectName}";
  sig = CGI::escape(Base64.strict_encode64(OpenSSL::HMAC.digest('sha1', S3_SECRET_KEY, stringToSign)))

  {
    signed_request: CGI::escape("#{S3_URL}#{S3_BUCKET_NAME}/#{objectName}?AWSAccessKeyId=#{S3_ACCESS_KEY}&Expires=#{expires}&Signature=#{sig}"),
    url: "http://s3.amazonaws.com/#{S3_BUCKET_NAME}/#{objectName}"
  }.to_json
end

Resources

The code is based on these resources, but has been put into an easy-to-use CoffeeScript/JavaScript class.

You can learn more about CORS here

Rok Krulec / @tantadruj