Kubernetes is a cornerstone piece of open source technology in the container management and orchestration space. It includes all of the pieces necessary to provide a true highly available infrastructure for container-based services running on multiple public clouds, private clouds, or even bare metal.
In some ways, Kubernetes is a different beast from traditional software tools. For example, it takes a different approach to networking as compared to a standard Docker or Docker Swarm installation.
Below, I explain how Kubernetes networking works and what you need to know to create and manage networks effectively in a Kubernetes environment.
Introduction to Container Networking
This article assumes you know what containers are and why you would want to use them. If not, Oracle has a nice introduction on its community site to give you a head start.
To start, containers have two basic types of networks: host-based networking and bridged networking.
Host-based networking was once the default type, and is still commonly used for small Docker installations on desktops and small test environments. Host-based networking uses port forwarding to forward requests from the host's IP to the IP stack inside the container. So port 8080 on the host would forward to port 80 inside the container. This allows every container to be identical, but has a lot of management overhead around tracking ports for containers across multiple hosts.
Bridged networking gives every container its own IP address so it can connect to the network directly. This simplifies things greatly: you only need to track the container-to-IP mapping, not a combination of IP and port.
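As a concrete illustration, here is how the two models look with the plain Docker CLI (the container and network names below are just examples):

```shell
# Host-style port forwarding: port 8080 on the host forwards to
# port 80 inside the container.
docker run -d --name web1 -p 8080:80 nginx

# Bridged networking with a user-defined bridge: each container
# attached to the bridge gets its own IP address on it.
docker network create app-net
docker run -d --name web2 --network app-net nginx

# Inspect the bridge to see each container's IP address.
docker network inspect app-net
```

With the bridged approach, web2 is reachable directly at its own IP, so there is no host-port bookkeeping to manage across hosts.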
How Kubernetes Networking is Different
Let’s set the ground rules by stating that Kubernetes has some requirements for any network implementation that it runs on:
- All containers can communicate with all other containers without NAT.
- All nodes can communicate with all containers (and vice versa) without NAT.
- The IP that a container sees itself as is the same IP that others see it as.
In actuality, Kubernetes doesn’t manage individual containers; it manages pods. A pod is one or more containers that together make up a service offering. A Java app using Spring Boot with no hard dependencies could be a single-container pod. If the app also requires a container running an NGINX reverse proxy for SSL termination and a container running MariaDB, those three containers would make up a pod. The idea is that if any member of the pod fails, Kubernetes shuts the pod down and, based on the rules you put in place, replaces the entire pod with a new one. Kubernetes really shines at scale, when multiple copies of each pod run across multiple hosts to handle the client traffic or meet the availability requirements you define.
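The multi-container example above could be sketched as a single pod manifest like the following (all names and images here are illustrative, not from the article):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: java-app          # hypothetical pod name
  labels:
    app: java-app
spec:
  containers:
  - name: app             # the Spring Boot service
    image: example/java-app:1.0   # illustrative image
    ports:
    - containerPort: 8080
  - name: proxy           # NGINX reverse proxy for SSL termination
    image: nginx
    ports:
    - containerPort: 443
  - name: db              # MariaDB backing store
    image: mariadb
```

All containers in a pod share a single network namespace and IP address, so they can reach each other over localhost, which is what makes this tight grouping work.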
To expose all the instances of a pod to the outside world, or to distribute traffic evenly from internal clients, the pods are grouped together as a service, which Kubernetes load-balances and manages.
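A service that load-balances across all pods carrying a given label might look like this minimal sketch (assuming pods labeled `app: java-app`, an illustrative label):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: java-app
spec:
  type: LoadBalancer      # exposes the service externally; ClusterIP is the internal-only default
  selector:
    app: java-app         # traffic is spread across every pod with this label
  ports:
  - port: 443
    targetPort: 443
```

Clients talk to the service's stable IP and name; Kubernetes keeps the set of backing pod IPs up to date as pods come and go.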
Advanced Networking Add-ons for Kubernetes
Kubernetes is happy to hand over control of network management to a third-party network implementation. That implementation needs to plug into the Container Network Interface (CNI) plugin framework that Kubernetes supports.
By leveraging this framework, replacing the entire underlying networking infrastructure can be as straightforward as running a single command. Each of the drop-in replacements offers its own benefits and drawbacks.
Popular drop-in replacements include:
- kubenet backed by Kubernetes - A basic networking plugin that only handles container networking at the host level. It is often used when running on a cloud provider that will handle routing the IPs to the host (for example, when using Google Compute Engine (GCE) to map individual subnets to the VMs running on the platform).
- Contrail backed by Juniper - The commercial offering of OpenContrail, an open networking platform built around established networking standards that can integrate across multiple cloud platforms and container orchestration engines.
- Flannel backed by CoreOS - A simple networking plugin for Kubernetes that satisfies all the core principles which Kubernetes outlines, including routing between nodes.
- Open vSwitch - Originally developed by Nicira and used as the base for VMware NSX, Open vSwitch is a full-featured, software-defined networking platform. It is more complicated than the other options and is backed by several large technology firms, including VMware and HPE.
- Project Calico backed by Tigera - From Kubernetes.io: Calico provides a highly scalable networking and network policy solution for connecting Kubernetes pods based on the same IP networking principles as the Internet. Calico can be deployed without encapsulation or overlays to provide high-performance, high-scale data center networking. Calico also provides fine-grained, intent-based network security policy for Kubernetes pods via its distributed firewall. Calico can also be run in policy enforcement mode in conjunction with other networking solutions such as Flannel (aka canal), or native GCE networking.
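As an example of how lightweight swapping in a CNI implementation can be, Flannel can be dropped in with a single kubectl command against its published manifest (the URL below is the install manifest Flannel has documented; check the project for the current location):

```shell
# Apply Flannel's manifest to an existing cluster; this deploys the
# flannel DaemonSet and its configuration onto every node.
kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
```

Most of the other plugins listed above follow the same pattern: a single manifest (or a short install script) that deploys the networking components cluster-wide.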
Kubernetes, founded at Google, and its ecosystem have been running large-scale infrastructures for years. That experience has translated into a unique approach to handling networking between containers and the external services that rely on them.
About the Author
Vince Power is a Solution Architect who has a focus on cloud adoption and technology implementations using open source-based technologies. He has extensive experience with core computing and networking (IaaS), identity and access management (IAM), application platforms (PaaS), and continuous delivery.
We’re hiring! Check out the careers page for open positions in Amsterdam, London and San Francisco.
As usual, if you want to stay in the loop, follow us on Twitter @wercker or hop on our public Slack channel. If it’s your first time using Wercker, be sure to tweet out your #greenbuilds, and we’ll send you some swag!