Modern computing systems seem to encourage disregarding memory constraints. Nobody cares that a "Hello, world" program is 200K or that a Linux kernel is more than 10M in size. After all, we have tens of gigabytes of memory.
Yet memory consumption affects many aspects of overall system performance. A large program does not fit in the processor cache, cannot be loaded quickly over the network, and slows down the whole system by putting pressure on swap. Megabytes wasted in software translate into inefficient use of cloud infrastructure. Sometimes it is simply impossible to pack in enough virtual servers to meet the burgeoning needs of the software.
Eliminating the operating system layer makes the software stack leaner. A typical Ling VM image we use today is 6-7M. This is an unstripped development version with a lot of instrumentation code. It may be further slimmed down to 4-5M. Suddenly, an image fits within a single memory page.
A memory map of a running Ling VM instance is roughly as follows:
The rest is for the loaded code and Erlang processes.
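The exact split varies with the workload; from inside a running node it can be inspected with the standard erlang:memory/0 BIF. The module below is a minimal sketch using plain Erlang/OTP calls only, nothing Ling-specific; the module name memcheck is made up for illustration:

```erlang
%% memcheck.erl -- minimal sketch: print the VM's own memory breakdown
%% using only the standard erlang:memory/0 BIF (nothing Ling-specific).
-module(memcheck).
-export([report/0]).

report() ->
    %% erlang:memory/0 returns {Type, Bytes} pairs: total, processes,
    %% system, atom, binary, code, ets, ...
    lists:foreach(
        fun({Type, Bytes}) ->
            io:format("~p: ~p KB~n", [Type, Bytes div 1024])
        end,
        erlang:memory()).
```

The processes and code categories reported this way correspond to the "loaded code and Erlang processes" portion mentioned above.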
We ran a quick check of how Ling VM behaves when started inside a memory-constrained instance.
First of all, it does not crash. It keeps behaving nicely.
No special optimization was performed – it is the very same image you can get from the build service.
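For reference, constraining the memory given to a Ling instance is just a matter of lowering the memory setting in the Xen domain configuration. The sketch below assumes the standard xl toolstack; the domain name, image path, and sizes are made-up examples:

```
# ling-tiny.cfg -- hypothetical xl domain configuration for a
# memory-constrained Ling VM instance (paths and sizes are examples)
name   = "ling-tiny"
kernel = "/root/images/example.img"  # image obtained from the build service
memory = 16                          # megabytes of RAM for the guest
vcpus  = 1
```

The domain is then started with xl create ling-tiny.cfg; lowering the memory line reproduces the memory-constrained setup described above.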
Compare this to 400M for a typical (stripped-down) Linux image. In modern cloud environments, VMs become the new "processes", and any memory savings are multiplied by thousands of running instances. In addition, the modest size of the images guarantees faster startup and simplifies image propagation. All of this may have a profound impact on the implementation of cloud services.