Docker

With series D funding of $95M, a pre-money valuation of close to $1B, and just about every PaaS provider and Enterprise OS vendor either already supporting the container format or planning to do so, it comes as little surprise that Docker will be the blueprint upon which industry leaders create standards around container format and runtime.

While Linux containers have been around since 2006, and some other OS containers for longer still, it wasn't until Docker that containers became a technology household name. Docker took the approach of making its container format and runtime easier to use than any of the alternatives available at the time. In part it achieved this by taking an image-based approach to containers: each container is launched from a versioned, layered image rather than assembled by hand. Docker also made containers highly scriptable and easy for DevOps groups to work with. Easy-to-consume containers were just what organisations were looking for to increase DevOps speed and simplify deployment across platforms. During the past two years, as Docker's image format and container runtime have emerged as the industry standard, steady improvements have been made to security, networking, availability and management. These areas are particularly important to Enterprise organisations looking for the next opportunity for greater efficiency, having already benefited from OS virtualisation in private and then public cloud platforms.
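To illustrate the image-based, scriptable workflow, a minimal Dockerfile might look like the following sketch; the image name, file paths and application are assumptions made purely for the sake of the example:

```dockerfile
# Hypothetical example: package a simple Python web app as an image
FROM ubuntu:14.04
RUN apt-get update && apt-get install -y python
COPY app.py /opt/app/app.py
EXPOSE 8000
CMD ["python", "/opt/app/app.py"]
```

Building with `docker build -t myorg/webapp .` and launching with `docker run -d -p 8000:8000 myorg/webapp` gives a repeatable, scripted deployment of the same image on any Docker host.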

Following Docker's early success, it became clear that manually managing containers simply wasn't going to work. If organisations were to realise the full potential of application containerisation, then a single Docker host would need to provide a manageable way to run several Docker containers and make each of the services they provide easily discoverable. Furthermore, a single node would only take us so far: we would need to run a cluster of Docker hosts, ensure that new containers were deployed to hosts with available resources, and handle failure and service routing. A swathe of Docker orchestration tools appeared on the market, but it wasn't until June 2014, when Google open-sourced Kubernetes, that a potential front-runner emerged.

Kubernetes is an open source orchestration system for Docker containers. It handles scheduling of containers onto nodes in a compute cluster and actively manages workloads to ensure that their actual state matches the state the user has specified. It uses the concepts of labels and Pods to group containers into logical units that represent applications. Kubernetes draws upon Google's extensive experience running containers at, well, Google scale.
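As a sketch of how labels and Pods fit together, a minimal Pod manifest might look like this; the names, labels and image are hypothetical:

```yaml
# Hypothetical Pod manifest: two labels group this Pod into an application tier
apiVersion: v1
kind: Pod
metadata:
  name: web-frontend
  labels:
    app: guestbook      # which application this Pod belongs to
    tier: frontend      # which tier within that application
spec:
  containers:
  - name: frontend
    image: myorg/frontend:1.0   # hypothetical image
    ports:
    - containerPort: 80
```

Other Kubernetes objects (services, replication controllers) then select Pods by these labels rather than by naming individual containers.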

A running Kubernetes cluster consists of node agents (kubelets) and master components (API server, scheduler, etc.), on top of a distributed storage solution. Each kubelet manages Pods (co-located groups of containers running with a shared context), with persistent master state being stored in an instance of etcd.

Each Kubernetes node also runs a simple network proxy (kube-proxy). This proxy watches the Kubernetes master for the addition and removal of service and endpoint objects. For each service it opens a random port on the local node; any connections made to that port are proxied to one of the corresponding backend Pods.
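To make the proxying concrete, a minimal service definition might look like the sketch below (the service name and selector labels are assumptions); kube-proxy forwards connections arriving for this service to one of the Pods whose labels match the selector:

```yaml
# Hypothetical service: load-balances across all Pods labelled tier=frontend
apiVersion: v1
kind: Service
metadata:
  name: frontend
spec:
  selector:
    tier: frontend   # any Pod carrying this label becomes a backend
  ports:
  - port: 80         # port the service listens on
    targetPort: 80   # port on the backend containers
```

Because routing is driven by label selection, Pods can come and go (or be rescheduled to other nodes) without clients needing to know.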

Using Kubernetes is quite simple. After installing Kubernetes, we create a Pod or replication controller and then a service, i.e. a named load balancer that proxies traffic to one or more containers. The service gives us a stable, deterministic way to route traffic to the backing Pods, wherever in the cluster they happen to be scheduled.
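A replication controller for the workflow just described might look like the following sketch; the names, labels and image are hypothetical:

```yaml
# Hypothetical replication controller: keeps three frontend Pods running
apiVersion: v1
kind: ReplicationController
metadata:
  name: frontend
spec:
  replicas: 3          # desired number of Pod copies
  selector:
    tier: frontend     # Pods this controller is responsible for
  template:            # template used to create replacement Pods
    metadata:
      labels:
        tier: frontend
    spec:
      containers:
      - name: frontend
        image: myorg/frontend:1.0   # hypothetical image
        ports:
        - containerPort: 80
```

Created with `kubectl create -f frontend-rc.yaml`, the controller restarts or reschedules Pods as needed so that three replicas are always running; a service selecting `tier: frontend` then load-balances across them.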

Just as many industry players quickly decided to provide support for Docker, Kubernetes has seen similarly strong adoption, with support in Red Hat's OpenShift and Microsoft Azure. Furthermore, Google and VMware are working together to bring Kubernetes' Pod-based networking model to Open vSwitch, enabling multi-cloud integration.

So does the future of container orchestration belong firmly with Kubernetes? Maybe not. Docker is no longer just a container standard, but an open platform for developers and system administrators to build, ship and run applications. With the container standard having come of age, it was time to increase the focus on orchestration. At the end of 2014, Docker announced Docker Machine, Swarm and Compose, a trilogy of tools each covering a different aspect of the lifecycle of distributed apps. Docker made the wise decision to implement each with a “batteries included, but removable” approach which, thanks to their orchestration APIs, means they can be swapped out for alternative implementations from ecosystem partners designed for particular use cases. This pluggable approach shows strong leadership on the future direction of how Docker is used, while enabling the next round of orchestration innovation and giving users choice.
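To give a flavour of Compose, a small multi-container application can be described in a single file; the service names and ports below are assumptions, in the flat format used by early Compose releases:

```yaml
# Hypothetical docker-compose.yml: a web app linked to a Redis backend
web:
  build: .            # build the image from the Dockerfile in this directory
  ports:
    - "8000:8000"     # expose the app on the host
  links:
    - redis           # make the redis container reachable by name
redis:
  image: redis        # off-the-shelf image from the registry
```

A single `docker-compose up -d` then builds, creates and starts every container in the file, which is exactly the kind of lifecycle coverage the trilogy of tools is aiming at.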

If Docker proves as successful at making orchestration tools easy to use as it was with Docker itself, they may surpass Kubernetes as the orchestration tooling of choice. However, given Google's vast experience managing large clusters, as well as already strong early adoption by Enterprise management tools, Kubernetes is currently in the driving seat.