Last night I was reading the paper Haskell for the Cloud [pdf], co-written by the super kick-ass Haskell guru Simon Peyton Jones.
Two paragraphs caught my eye, and I wanted to share them with you:
We use the term “cloud” to mean a large number of processors with separate memories that are connected by a network and have independent failure modes. We don’t believe that shared-memory concurrency is appropriate for programming the cloud. This is because an effective programming model must be accompanied by a cost model. In a distributed memory system, the most significant cost, in both energy and time, is data movement. A programmer trying to reduce these costs needs a model in which they are explicit, not one that denies that data movement is even taking place— which is exactly the premise of a simulated shared memory.
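That point about explicit data movement is easy to see in code. Here is a tiny single-machine sketch of mine in plain GHC Haskell (using Control.Concurrent.Chan from base, not the messaging API the paper itself proposes): the spawned worker shares no state with its caller, so the only way data moves is through an explicit writeChan/readChan pair, visible right there in the program text.

```haskell
import Control.Concurrent (forkIO)
import Control.Concurrent.Chan (newChan, readChan, writeChan)

-- roundTrip hands a number to a spawned worker and waits for its reply.
-- There is no shared mutable state between the two threads; every piece
-- of data that crosses over does so via an explicit channel operation.
roundTrip :: Int -> IO Int
roundTrip n = do
  request  <- newChan                    -- caller -> worker
  response <- newChan                    -- worker -> caller
  _ <- forkIO $ do
    x <- readChan request                -- data movement, spelled out
    writeChan response (x * 2)           -- and again on the way back
  writeChan request n
  readChan response

main :: IO ()
main = roundTrip 21 >>= print            -- prints 42
```

In a true distributed setting those channels would cross machine boundaries, which is exactly the paper's argument: the expensive operations, the sends and receives, are the ones you can point at in the source.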
In many ways, failure is the defining issue of distributed computation. In a network of hundreds of computers, some of them are likely to fail during the course of an extended computation; if our only recourse were to restart the computation from the beginning, the likelihood of it ever completing would become ever smaller as the system scales up. A programming system for the cloud must therefore be able to tolerate partial failure. Here again, Erlang has a solution that has stood the test of time; we don’t innovate in this area, but adopt Erlang’s solution.
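Erlang's trick here is monitoring: a crashed process doesn't silently take the system down, its death is delivered as an ordinary value to whoever is watching it, and that watcher can decide to restart the work. A minimal single-machine analogue in plain Haskell (my own sketch with forkFinally from base, not code from the paper):

```haskell
import Control.Concurrent (forkFinally)
import Control.Concurrent.MVar (newEmptyMVar, putMVar, takeMVar)
import Control.Exception (SomeException, evaluate)

-- supervise runs an action in its own thread; instead of letting a crash
-- propagate, the exception comes back as a Left value -- the Erlang-style
-- "monitor" idea, where failure is just another message to react to.
supervise :: IO a -> IO (Either SomeException a)
supervise action = do
  done <- newEmptyMVar
  _ <- forkFinally action (putMVar done)
  takeMVar done

main :: IO ()
main = do
  r <- supervise (evaluate (div 1 0 :: Int))   -- worker blows up
  case r of
    Left _  -> putStrLn "worker failed; supervisor can restart it"
    Right v -> print v
```

In the distributed setting the supervisor would live on a different machine than the worker, so a whole node can die and the computation still carries on, which is the partial-failure tolerance the paragraph above is asking for.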
This is a really interesting paper if you are a functional programming lover.