Containers are an older technology than most people think.
LXC containers date back to 2008, and self-contained application environments could be found even earlier in FreeBSD jails, Solaris Zones and similar isolation features in Linux and Windows. However, it was Docker’s meteoric rise over the course of 2013 that launched containers into the mainstream consciousness of the developer community.
While the interim years have been exciting, it was in late 2015 that a breakthrough for large enterprises moving containers into production began to take shape. Let’s recap.
Late last year Goldman Sachs announced it was shifting 90% of its codebase to containers. While not typically seen as a software company, Goldman Sachs employs approximately 10,000 people in its technology division. Compare this with Facebook’s organisation-wide headcount of 13,000, and it’s easy to see this as a significant step forward for container technology. Meanwhile, just weeks ago the global networking provider Cisco announced its acquisition of ContainerX.
Along with containers taking off came the need for a way to run them. In the last few years we’ve seen significant effort put into schedulers and orchestration platforms capable of running containers at scale, providing networking between them and managing workloads. Kubernetes, a popular scheduler, is an open source solution that came out of Google and is based on its internal scheduling system, Borg. The recent 1.3 release added many features specifically targeted at enterprise users, such as hybrid deployments, improved scalability and cross-cluster federated services.
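To give a sense of how such a scheduler consumes work, here is a minimal Kubernetes Deployment manifest that declares a desired state and leaves placement and scaling to the scheduler. The image name, labels and replica count are illustrative, and the manifest uses current API syntax rather than the version contemporary with the 1.3 release:

```yaml
# Hypothetical example: ask the Kubernetes scheduler to keep
# three replicas of an image running somewhere in the cluster.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: example-api
spec:
  replicas: 3            # desired number of running containers
  selector:
    matchLabels:
      app: example-api
  template:
    metadata:
      labels:
        app: example-api
    spec:
      containers:
      - name: example-api
        image: registry.example.com/example-api:1.0.0
        ports:
        - containerPort: 8080
```

The key design point is that the manifest is declarative: you state how many replicas you want, and the scheduler continuously reconciles the cluster towards that state.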
What are the trends behind these moves? Hardware manufacturers are finding it increasingly difficult to generate progress and profit from hardware innovation alone, so they’re looking to move up the stack to software services as a new moneymaker. Containers look set to be the mechanism that delivers this change. But how?
Deployments: fit for purpose?
The current deployment model is what we like to call a push-pull model (where the old model was push-oriented). We push container images into a registry (e.g. Docker Hub), then notify our scheduler through its API that a new image is available; the scheduler pulls the new image and schedules the new containers, scaling up to the desired number of replicas. However, this workflow is not ideal, as it assumes that we build a container once and then push it everywhere. There is a better way, particularly for enterprise companies with dozens of teams all working on different aspects of the same project.
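The push-pull flow above can be sketched as a handful of commands. The registry host, image name, tag and deployment name are all hypothetical, and Kubernetes stands in for whatever scheduler you run:

```shell
# Push side: build the image once and push it into a registry.
docker build -t registry.example.com/team/api:1.4.2 .
docker push registry.example.com/team/api:1.4.2

# Pull side: tell the scheduler a new image is available; it pulls
# the image onto its nodes and scales to the desired replica count.
kubectl set image deployment/api api=registry.example.com/team/api:1.4.2
kubectl scale deployment/api --replicas=3
```

Note that the same image built in the first step is what ends up everywhere, which is exactly the "build once, push everywhere" assumption the text calls out.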
Pipelines: the way forward
As enterprise applications grow more and more complicated, the simple world of build and deploy pipelines is no longer sufficient. We need new ways to describe these processes, which have become fragmented as teams focus on delivering microservices.
Each microservice should have its own suite of tests, and when testing each new component, you have to rebuild and retest all dependent components and their API contracts. What you actually end up building is one distributed application for development consisting of multiple build pipelines.
We’re working towards this: with Wercker you can build custom workflows between your build and deploy pipelines. Building and deploying applications can no longer happen inside a single pipeline: this is the headache containers are well suited to solving, but tools are required to make these development pipelines as modular as possible.
Developing containers through these pipelines is very different from the way containers are currently created. Right now we’re building ‘fat’ containers: containers that include everything used during the build, such as build-time dependencies, test dependencies and testing frameworks. You don’t need your build dependencies when deploying a container, so why include them? Pipelines enable you to build up your containers programmatically in different phases for different purposes, such as creating dev containers to share with your team or production-ready containers for deployment.
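One concrete way to keep build and test dependencies out of the shipped image is a multi-stage Dockerfile, a Docker feature added after pipelines of this kind first appeared but which captures the same phased idea. The service, paths and base-image versions below are illustrative:

```dockerfile
# Hypothetical Go service: stage 1 is the "fat" build container
# with the compiler and test tooling included.
FROM golang:1.21 AS build
WORKDIR /src
COPY . .
RUN go test ./...                               # run the test suite
RUN CGO_ENABLED=0 go build -o /out/api ./cmd/api

# Stage 2: slim production container holding only the binary.
FROM alpine:3.19
COPY --from=build /out/api /usr/local/bin/api
ENTRYPOINT ["/usr/local/bin/api"]
```

Only the final stage is shipped, so the compiler, sources and testing frameworks never reach production; a dev-oriented pipeline could instead publish the first stage for the team to share.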
Virtual Private Pipelines
Right now we’re working on an exciting new feature called Virtual Private Pipelines, which we think will help alleviate some of these concerns. This product will allow customers to use network-isolated pipelines for application development on a pay-per-concurrency basis, with their own dedicated internal queuing system for their teams.
What can we expect from the coming months in the world of containerised development? As more enterprise-level investment pours into the container and microservices space, expect the likes of Google, Microsoft, Amazon and most recently Oracle to compete more intensively for market share.
At the same time, these developments make the work of companies that increase developer velocity ever more important: with more infrastructure being containerised and running in the cloud, services that quickly and easily link container orchestration platforms with code repositories such as GitHub matter more than ever.
If you’d like to sign up to our early access program for VPP just click here.