Three things I wish Kubernetes could do

Wercker is a Docker-Native CI/CD Automation platform for Kubernetes & Microservice Deployments

Brien Posey
May 4, 2018

Over the last several years, Kubernetes has become an indispensable tool for organizations that use containers on a large scale. As great as Kubernetes is, however, there are some things that could make it even better.

This article is something of a wish list for the future of Kubernetes. I’m not sure that every item on my wish list is totally realistic, but that’s OK. Innovation always starts with an idea.


Mix and Match Container Hosts

The first item on my wish list is the ability to mix and match container hosts. Now I know what you are thinking… There are already projects in existence that are designed to bring support for heterogeneous hardware platforms into a Kubernetes cluster. After all, datacenter hardware evolves over time, and Kubernetes needs to support that evolution. As such, there are host resource discovery solutions that allow Kubernetes to determine a host server’s hardware capabilities. Even so, that isn’t the type of mix and match that I am talking about.

Forget about containers for a moment, and think about virtual machines. It has become increasingly common for the virtual datacenter to contain a wide variety of resources. A single datacenter, for example, might include both VMware and Microsoft Hyper-V hosts. Likewise, there is often diversity among the virtual machines themselves. A single virtualization host might, for example, run both Windows and Linux virtual machines.

As more and more workloads become containerized, it is going to become increasingly important to support a diverse collection of platforms. There are already ways to run Kubernetes on Linux, on Windows, and even on macOS (at least for development purposes). What would be cool is if Kubernetes clusters became completely platform-agnostic. Such an environment might look at a pod, discover that it needs to run on Windows, and schedule that pod onto a Windows host. It might look at another pod, discover that it needs to run on Linux, and schedule that pod onto a Linux host.
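To be fair, Kubernetes does already offer a building block for this kind of OS-aware placement: node labels and nodeSelectors. A hedged sketch of what that looks like today (the exact label key varies by Kubernetes version — newer releases use kubernetes.io/os, while older ones used beta.kubernetes.io/os; the image name here is just an illustrative Windows container image):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: iis-example
spec:
  # Tell the scheduler this pod may only land on Windows nodes.
  # Older clusters may use the label key beta.kubernetes.io/os instead.
  nodeSelector:
    kubernetes.io/os: windows
  containers:
  - name: iis
    image: mcr.microsoft.com/windows/servercore/iis
```

With a manifest like this, the scheduler filters out every non-Windows node before placing the pod. My wish is for this to happen automatically, without the operator having to annotate each pod by hand.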


Support Container Hosts and VM Hosts

My second wish list item, which I’m not sure is completely realistic, is to extend the Kubernetes engine to support virtual machines in addition to containers. As I’m sure you know, Kubernetes schedules pods to run on hosts that have resources available. There are server virtualization solutions that do essentially the same thing for virtual machines. 

I realize that right now the world is slowly moving away from virtual machines in favor of containers. However, virtual machines are so pervasive that this migration is not going to occur overnight. Even after most workloads have been migrated to containers, there will still be some applications that run in virtual machines, whether because of the cost or difficulty of migrating those resources or simply because someone prefers virtual machines.

The point is that containers and virtual machines are going to coexist for the foreseeable future. So why not have an orchestration tool that can simultaneously handle both? Think of it as a single-pane-of-glass management solution for both your container and your VM environments.
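To make the idea concrete, here is a purely hypothetical manifest showing what a virtual machine might look like as a first-class Kubernetes object. The kind, API group, and fields below are invented for illustration — this is not a real Kubernetes API — though projects such as KubeVirt have begun exploring a similar direction:

```yaml
# Hypothetical resource type -- not a real Kubernetes API.
# Every field name below is an illustrative assumption.
apiVersion: example.io/v1alpha1
kind: VirtualMachine
metadata:
  name: legacy-erp
spec:
  cpu: 4          # vCPUs requested from the host
  memoryGi: 16    # RAM in GiB
  disk:
    image: legacy-erp-disk.qcow2
```

If the scheduler could treat an object like this the same way it treats a pod, the same cluster could place containers and VMs side by side using one set of policies.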


Perform Dynamic Host Scaling and Consolidation

One more wish list item for Kubernetes is dynamic bare metal host scaling and consolidation. This is something that has existed for a long time in the world of virtual machines, but it does not seem to exist for Kubernetes (or if it does, at least I haven't seen it yet).

The basic idea behind this one is pretty simple. Many organizations use fewer hardware resources at night and on the weekends due to lower demand. Because of this, some hypervisor vendors offer solutions for consolidating VMs onto a smaller number of hosts on a scheduled basis. After doing so, the management layer is able to automatically power down the unused host servers. This helps to reduce power- and cooling-related costs, while also saving wear and tear on the hardware.

Similarly, such tools can be configured to power up additional hosts in anticipation of demand spikes. Once those additional hosts are online, load balancers can distribute the existing VMs across the available hosts, while orchestration engines might bring additional VM instances online to cope with the demand. 

My wish for Kubernetes is, therefore, to have bare metal hardware support that allows hosts to be powered on and off in response to an anticipated demand for resources.
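As a thought experiment, the consolidation half of this wish boils down to a bin-packing problem: squeeze the running workloads onto as few hosts as possible, then power down whatever is left empty. Here is a minimal Python sketch of that logic using a first-fit-decreasing heuristic. Everything here — the function name, the units, the heuristic itself — is my own assumption for illustration, not anything Kubernetes actually provides:

```python
# Minimal sketch of consolidation logic: first-fit-decreasing bin packing.
# Host capacities and pod requests are in arbitrary CPU units. All names
# are illustrative assumptions, not a real Kubernetes API.

def plan_consolidation(hosts, pods):
    """Pack pods onto as few hosts as possible.

    Returns (placements, idle): a {host: [pod names]} mapping, plus the
    list of hosts left empty -- the candidates to power down.
    """
    placements = {host: [] for host in hosts}
    free = dict(hosts)  # remaining capacity per host
    # Place the biggest pods first so large workloads aren't stranded.
    for name, request in sorted(pods.items(), key=lambda p: -p[1]):
        for host in hosts:
            if free[host] >= request:
                placements[host].append(name)
                free[host] -= request
                break
        else:
            raise RuntimeError(f"no host can fit pod {name!r}")
    idle = [h for h, placed in placements.items() if not placed]
    return placements, idle

hosts = {"node-a": 8, "node-b": 8, "node-c": 8}
pods = {"web": 3, "db": 4, "cache": 2, "batch": 5}
placements, idle = plan_consolidation(hosts, pods)
print(idle)  # node-c ends up empty and could be powered down
```

A real implementation would also have to drain the pods gracefully, respect affinity rules, and leave headroom for demand spikes, but the core decision — which hosts can safely go dark — is this simple packing step.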


About the Author

Brien Posey is a Microsoft MVP with over two decades of IT experience. Prior to becoming a freelance tech author, Posey was CIO for a national chain of hospitals and healthcare facilities. He also served as a network engineer for the United States Department of Defense at Fort Knox, and has worked as a network administrator for some of the largest insurance companies in America. In addition to Posey’s continued work in IT, he is in his third year of training as a commercial scientist-astronaut candidate.


Like Wercker?

We’re hiring! Check out the careers page for open positions in Amsterdam, London and San Francisco.

As usual, if you want to stay in the loop, follow us on Twitter @wercker or hop on our public Slack channel. If it’s your first time using Wercker, be sure to tweet out your #greenbuilds, and we’ll send you some swag!

Topics: Tutorials