Rediscovering a cloud of the future
———————————–

Today’s computing clouds, often advertised as elastic, are in fact rather rigid. Compared to the stiffness of iron, they achieve the elasticity of wood. Rubber-like clouds are still on the drawing boards.

Let us have a peek at the sketch of the future cloud we, the Erlang on Xen creators, have on our drawing board.

There is a list of statements in the center. The top three say:

  • Smaller, cheap-to-create, OS-less instances provisioned on demand
  • Reduced cloud stack, sharing infrastructure with user applications
  • System administrators not necessary

There are a few more items on the list. Some are trivial, some, we believe, are too valuable to spoil. So I am clipping the rest, including the one that mentions ‘robotics’.


OS-less instances are what Erlang on Xen is all about. Such instances start so fast that nothing needs to be pre-started. When your running application wants to use, for example, a message queue, one of the following happens:

  • no message queue is running — start one, then use it
  • a message queue is available — use it
  • the message queue is busy — spawn a copy and use it

Note that instances are only spawned by other instances, just as Unix processes are forked from existing processes, on an as-needed basis.
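The three cases above amount to a tiny acquire-or-spawn loop. Here is a minimal sketch of the idea in Python; `ServiceRegistry`, `Instance`, and the `spawn` callback are hypothetical names for illustration, not Erlang on Xen’s actual API.

```python
import threading
from dataclasses import dataclass

@dataclass
class Instance:
    name: str
    busy: bool = False

class ServiceRegistry:
    """On-demand acquisition: start on first use, reuse an idle copy, clone a busy one."""
    def __init__(self, spawn):
        self._spawn = spawn            # callback that boots a fresh instance
        self._instances = []
        self._lock = threading.Lock()

    def acquire(self, name):
        with self._lock:
            # Case "available": a copy of the service is running and idle -- use it.
            for inst in self._instances:
                if inst.name == name and not inst.busy:
                    inst.busy = True
                    return inst
            # Cases "not started" and "busy": nothing running, or every
            # copy is in use -- spawn a fresh instance and use it.
            inst = self._spawn(name)
            inst.busy = True
            self._instances.append(inst)
            return inst

    def release(self, inst):
        with self._lock:
            inst.busy = False
```

A caller asking for a `message_queue` either gets an idle copy back or triggers a spawn; releasing an instance makes it reusable by the next caller.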

The startling outcome of on-demand provisioning is that an application that does no useful work consumes no resources. Ten physical servers may now host a million client applications. A single Facebook-scale infrastructure may host a Facebook-like application for each human on Earth.

How is that for efficiency?


In the year 3000, everything will be instant… but the cloud stack will still take, like, nine f**king seconds. Our instance-per-request demo hints that it does not have to be this way.

Why not apply the on-demand provisioning principle to the components of the cloud stack themselves? After all, the stack must be elastic too.


The bottom of the drawing board is all about making clouds a welcoming home for databases. Databases need a finer grain of control over their instances. First, a database may request that two instances never share a single physical node. These instances may contain replicas of an in-memory database, and hosting them side by side defeats their very purpose. Second, a database may stumble upon a query that requires scanning almost all of its data scattered over several physical disks. The best strategy for such a query is to spawn instances on the nodes that have these disks attached and skim the data locally, without shoveling everything through the network.

I imagine a cloud provider may charge extra for such ‘separate nodes’ and ‘disk node only’ instances. It may do so precisely because they are so valuable for performant cloud databases.
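A scheduler honoring such requests could look roughly like this. This is a sketch under assumed names (`PlacementSpec`, `place`); no real provider exposes exactly this API.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass(frozen=True)
class PlacementSpec:
    # 'Separate nodes': instances sharing this group name must never
    # land on the same physical node.
    anti_affinity_group: Optional[str] = None
    # 'Disk node only': run only on the node that has this disk attached.
    pin_to_disk: Optional[str] = None

def place(spec, nodes, occupancy):
    """Pick a physical node that honors the spec, or None if impossible.

    `nodes` is a list of {"name": ..., "disks": set(...)} records;
    `occupancy` maps node names to the anti-affinity groups already placed there.
    """
    for node in nodes:
        if spec.pin_to_disk and spec.pin_to_disk not in node["disks"]:
            continue
        groups = occupancy.setdefault(node["name"], set())
        if spec.anti_affinity_group and spec.anti_affinity_group in groups:
            continue
        if spec.anti_affinity_group:
            groups.add(spec.anti_affinity_group)
        return node["name"]
    return None
```

Two replicas launched with the same `anti_affinity_group` end up on different nodes, and a third request fails rather than violating the constraint, which is exactly the behavior an in-memory replica set needs.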


Virtualization features prominently throughout our vision. The same goes for OpenFlow-aware network switches. Everything else is taken with a grain of salt: is it there to replicate the homely computing world of the 90s, or does it truly help to weave the fabric of the future cloud?


On the left of the drawing board is a mockup of the cloud’s GUI. Frankly speaking, it resembles a Visual Studio-style IDE more than anything else. The bulk of it is devoted to editing source code. It also allows selection of the services and components the cloud application needs. The trick is that all these services are ephemeral. None of them exists. They get created when first used.

The dark machinery behind the IDE bakes instance images and deploys them to the cloud the moment the user clicks the ‘Run’ button. The running application can be paused, variable values inspected, breakpoints set. All the usual debugging stuff is possible.

The remarkable observation is that there is no separate ‘administrator’ GUI, and no mention of Chef or Puppet. Instances are provisioned and configured by the application code. Monitoring is done by a logging/monitoring component added to the application. What other tasks justify a separate interface for an administrator? While you are inventing such a task, we move on without one.
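To make the ‘no administrator’ idea concrete, here is a toy sketch of an application provisioning and wiring its own components at boot. `Cloud`, `spawn`, and `configure` are invented names standing in for whatever provisioning API the cloud would expose; there is no Chef or Puppet recipe anywhere.

```python
from dataclasses import dataclass, field
from itertools import count

_addr = count(1)

@dataclass
class Component:
    name: str
    address: str = field(default_factory=lambda: "10.0.0.%d" % next(_addr))
    config: dict = field(default_factory=dict)

    def configure(self, **kw):
        self.config.update(kw)

class Cloud:
    """Stand-in for a provisioning API; a real one would boot OS-less instances."""
    def spawn(self, name, **params):
        return Component(name)

def boot_application(cloud):
    # The application provisions and wires its own components in code;
    # no separate administrator tooling is involved.
    mq = cloud.spawn("message_queue", memory_mb=32)
    log = cloud.spawn("log_collector", memory_mb=16)
    web = cloud.spawn("web_frontend", memory_mb=64)
    web.configure(queue=mq.address, logger=log.address)
    return web
```

The configuration lives in version-controlled application code, so the same `boot_application` call reproduces the deployment anywhere.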

