Why "Docker vs. VMs" Is the Wrong Way to Think


Michael Churchman
April 27, 2018

"Docker vs. virtual machines: Which should you use?"

Maybe that question has been floating around in the back of your mind. Maybe you've heard people in (or on the fringe of) DevOps bring it up. It certainly sounds like a question that needs to be addressed and answered, doesn't it?

But… stop. Maybe it's not a good question at all. Maybe, in fact, it's a non-question. Let's take a closer look and see what it really is.

We'll start with a much more basic question: What are the important differences (the ones that really matter) between virtual machines and Docker containers (or containers in general)?

Virtual Virtuality

A virtual machine is just what the name implies. It virtualizes an entire computer, right down to the hardware level. Within that virtualized environment, you can install an operating system which is completely different from that of the host system. The OS and the software running under it interact entirely with the virtual machine, unless you explicitly give them access to specified host system resources and files.

The resources within the virtual machine are typically emulated or otherwise sandboxed, and are generally not shared with the host system. This means that the VM effectively provides the entire environment required by the OS and software.


Virtual Upside

The advantages of this kind of complete virtualization are significant:

  • Control. Virtualization generally allows a high degree of control over the environment in which the operating system and applications run. You can specify virtualized hardware settings, install the OS of your choice, and optimize it for your application.
  • Portability. From the outside, a typical virtual machine consists of a relatively large disk image (a virtual disk file, or sometimes a raw physical disk) containing the OS and virtualized file system, and a much smaller file containing the settings required by the hypervisor (the host-system program that manages the virtual machine). These files can easily be transferred to and used on other host systems running the same (or a compatible) hypervisor application (see the sketch after this list).
  • Security. Virtual machines are by their nature highly insulated from the host system. Unless the VM is specifically configured to have direct access to the host file system, any malware attacks or unauthorized access attempts are likely to remain contained within the VM.
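
To make the portability point above a bit more tangible, here's a minimal sketch in Python. It creates the two artifacts just described: a large disk image (using qemu-img, a real tool, though the names and sizes here are invented) and a small settings file (a generic stand-in, since the real format varies by hypervisor).

```python
# Sketch: the two files that make a typical VM portable.
# Requires qemu-img on PATH; all names and sizes are illustrative.
import subprocess

# 1. The large file: a virtual disk image that will hold the
#    guest OS and its entire file system.
subprocess.run(
    ["qemu-img", "create", "-f", "qcow2", "guest-disk.qcow2", "20G"],
    check=True,
)

# 2. The small file: the hypervisor's settings for the machine.
#    Real formats are hypervisor-specific; this is a stand-in.
with open("guest-settings.cfg", "w") as f:
    f.write("memory=4096\ncpus=2\ndisk=guest-disk.qcow2\n")

# Copy both files to another host running a compatible hypervisor,
# and the same VM can run there.
```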


Virtual Downside

Virtual machines have some potentially important drawbacks as well:

  • Size. Because a VM is a virtualization of a complete system, down to the hardware, it contains all of the basic features of that system, even if they aren't used. This has a cost, in terms of reserved storage, memory requirements, and other size-related factors.
  • Spin-up time. The size and internal complexity of VMs can also result in a significant time lag between the request for a new instance of a VM and the completion of its deployment.
  • Vulnerability. Because a VM reproduces all of the basic features of a complete system, it may present malicious intruders with a broad set of known attack surfaces within the scope of the virtual machine itself, at least partly offsetting the increased security produced by virtualization.

Docker and Containers

Docker, on the other hand, like containers in general, represents a different approach to the challenge of deploying multiple, virtualized instances of an application and its environment. A container doesn't attempt to virtualize an entire machine. Instead, it provides a clearly defined environment (a "namespace") with distinct boundaries, along with resources and processes that operate within those boundaries.

What goes on within the container is confined to the container; data enters and leaves a container through limited and well-defined channels. Unlike virtual machines, containers can make direct use of specified host system resources. In general, only those resources which are required by the application running inside the container are included in the container deployment.
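
Here's one way to picture those boundaries and channels: a minimal sketch using the Docker SDK for Python (docker-py). The nginx image and the port numbers are illustrative choices, nothing more.

```python
# Minimal sketch with the Docker SDK for Python (pip install docker).
# The container's processes and file system stay inside its own
# namespace; the single port mapping below is the only channel we
# deliberately open to the host.
import docker

client = docker.from_env()  # connect to the local Docker daemon

web = client.containers.run(
    "nginx:alpine",          # illustrative image choice
    detach=True,
    ports={"80/tcp": 8080},  # container port 80 -> host port 8080
)

print(web.name, web.status)

# Tear it down: everything inside the container disappears with it.
web.stop()
web.remove()
```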


Container Upside

As you might expect, containers have some major advantages in comparison with virtual machines:

  • Size. A container includes only those resources required by the application, giving it a much smaller footprint than a virtual machine configured to run the same application. It generally requires less memory and storage, and has a much faster spin-up time.
  • Maintenance. Because containers typically operate on the microservice level (see below), upgrades and bug fixes generally do not require the entire application to be reconfigured and redeployed. They can instead be applied to only those containers which are affected by the changes.
  • Flexibility. With microservice architecture, only those containers required for specific tasks need to be deployed at any given time; both the number and types of containers deployed can be based entirely on current demand. Combined with the small footprint of a typical container, this allows lean and highly focused deployments (see the sketch after this list).
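
As a hedged illustration of that demand-driven flexibility, here's a sketch using the Docker SDK for Python. The myorg/worker image, the role=worker label, and the one-line scaling rule are all hypothetical.

```python
# Sketch: scale a pool of worker containers to match demand.
# Image name, label, and scaling policy are hypothetical examples.
import docker

client = docker.from_env()

def scale_workers(desired: int) -> None:
    """Start or stop worker containers until `desired` are running."""
    workers = client.containers.list(filters={"label": "role=worker"})

    # Demand grew: launch additional workers.
    for _ in range(desired - len(workers)):
        client.containers.run(
            "myorg/worker:latest",      # hypothetical image
            detach=True,
            labels={"role": "worker"},  # so we can find them later
        )

    # Demand shrank: retire the surplus.
    for container in workers[desired:]:
        container.stop()
        container.remove()

scale_workers(desired=3)  # e.g., current queue depth calls for 3
```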


Container Downside

Container architecture has some disadvantages as well:

  • Security. Since containers make considerable use of host system resources and can interact directly with the host system, they present greater opportunities for malicious access to both the host system and the container. Docker and other container management systems generally include proactive security measures designed to address these vulnerabilities.
  • Portability. Since containers depend heavily on host system resources, they are not as portable or as system-agnostic as virtual machines.

Virtual Apples and Container Oranges

There is another kind of difference between virtual machines and containers, however, one that is in many ways even more important than the functional and design issues we've looked at so far: how they are used.


Virtual Machines for Monoliths

Since VMs duplicate the kind of hardware/OS environments that have been in use for decades, they are well-suited for deploying monolithic applications designed for such environments. Virtual deployment allows you to run existing, well-established software in an environment that is secure, portable, and tailored for the requirements of the application. Moving an application to a VM typically requires minimal redesign or reconfiguration, since the VM provides a duplicate of its native environment.

VMs also lend themselves well to long-term deployment, and to the features that prolonged use requires, such as persistent storage. Hypervisors can typically manage both storage within the virtual file system and storage in designated locations within the host file system.


Containers for Microservices

Containers, on the other hand, are optimized for a very different and much more recently developed form of architecture: microservice-based design. With microservice architecture, the overall application is broken down into individual services at a fairly low level (user inputs, database read/write, etc.), and these services are deployed in individual containers. For new applications, this calls for highly non-traditional design; for existing applications, it means wholesale refactoring and radical redesign, which can demand a significant outlay of time and money.
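
To give a feel for the granularity involved, here's a toy, self-contained example of a single-responsibility service, written with nothing but Python's standard library. The port, the URL scheme, and the in-memory "database" are invented stand-ins.

```python
# A toy single-responsibility microservice: it answers one kind of
# read request and does nothing else. Port, paths, and data are
# hypothetical stand-ins.
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

FAKE_DB = {"42": {"name": "widget", "stock": 7}}  # stand-in data

class ReadOnlyHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # Expects paths like /items/42; this service only reads.
        key = self.path.rstrip("/").split("/")[-1]
        record = FAKE_DB.get(key)
        body = json.dumps(record or {"error": "not found"}).encode()
        self.send_response(200 if record else 404)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(body)

if __name__ == "__main__":
    # One small process per service; a container would run just this.
    HTTPServer(("0.0.0.0", 8000), ReadOnlyHandler).serve_forever()
```

In a containerized deployment, that one small process would be the container's entire job.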

Containers are also much better suited for short-term deployment; an individual instance of a container only needs to exist for as long as the service it contains is required by a process or set of processes. Containers were not originally designed for use with persistent storage, and although it is now available in many container systems, containers are often used for tasks that do not require such storage.
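
Here's a hedged sketch of that storage distinction, again using the Docker SDK for Python; the alpine image and the app-data volume name are illustrative assumptions.

```python
# Sketch: ephemeral vs. persistent container storage.
# Image and volume names are illustrative.
import docker

client = docker.from_env()

# Ephemeral: anything written to the container's own file system
# is gone once the container is removed.
client.containers.run("alpine", "sh -c 'date > /tmp/out'", remove=True)

# Persistent: a named volume outlives the container that wrote to it.
client.containers.run(
    "alpine",
    "sh -c 'date >> /data/log'",
    volumes={"app-data": {"bind": "/data", "mode": "rw"}},
    remove=True,
)
```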


What Is the Real Choice?

So, which is the best choice? Should you stick with the tried-and-true, and run monolithic applications in VMs, accepting the increased overhead and spin-up time, or should you opt for the speed and flexibility of containers, writing off the cost of refactoring as a necessary expense?

The good news is that you don't need to make that choice, and it is probably not the choice you should be making. Since VMs and containers are optimized for different kinds of use, it makes more sense to see them not as competing, but as serving parallel, or even complementary, purposes.


Working in Parallel

If you have monolithic applications that you do not want or need to refactor into microservices, you can deploy them in VMs, making use of the considerable advantages that such virtualization offers. At the same time, you can deploy new, microservice-based applications (or older ones that you do decide to refactor) in containers, making good use of the strengths of Docker and other container deployment systems. From the point of view of end users or other actors interacting with the applications, the internal architecture and the operating environment of an application are likely to be irrelevant. From the development and operations point of view, you will be getting the best of both worlds.


Working Together

Yes, VMs and containers can work together; their combination, in fact, is a key structural element in many cloud-based deployments. Virtual machines are ideal for hosting container systems in a cloud environment. They can provide the OS- and machine-level features required for container management, along with the high degree of encapsulation and overall security of the virtual machine environment. This kind of deployment also gives the container system maximum portability, since the VMs that host it can be moved between cloud providers, or even to bare metal, with little or no modification.
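
As a small, hypothetical sketch of that pattern: if a VM exposes the Docker Engine API (any real deployment would secure it with TLS or an SSH tunnel), a client outside the VM can manage the containers running inside it. The address below is invented for illustration.

```python
# Sketch: manage containers that live entirely inside a VM by
# pointing a Docker client at that VM's daemon.
# "10.0.0.5" is a hypothetical address; secure the API in practice.
import docker

vm_client = docker.DockerClient(base_url="tcp://10.0.0.5:2375")

for container in vm_client.containers.list():
    print(container.name, container.image.tags)
```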

Containers vs. virtual machines? No. That's not the story. The real story is containers and virtual machines, working side by side and working together.


About the Author

Michael Churchman started as a scriptwriter, editor, and producer during the anything-goes early years of the game industry. He spent much of the ’90s in the high-pressure bundled software industry, where the move from waterfall to faster release was well under way, and near-continuous release cycles and automated deployment were already de facto standards. During that time he developed a semi-automated system for managing localization in over fifteen languages. For the past ten years, he has been involved in the analysis of software development processes and related engineering management issues.

