We've known containers were going to be big since 2011, back when we were executing user jobs in LXC containers.
Docker’s rise in 2014 reinforced our beliefs as we moved towards a build system for Docker containers. We introduced that Docker-based system last April.
2015 was an exciting year for containers. With the hype surrounding Docker, the various container initiatives being started, and the release of Kubernetes, the container landscape was starting to take shape. 2016 will be even more interesting, with the hype giving way to more serious discussions and more scalable implementations.
Hand in hand with the container hype train, we saw increased discussion around microservices: containers are a perfect tool for running them. With the release of the wercker dev command, Wercker lets users combine the best of both worlds, allowing microservices to be developed inside containers in an automatable fashion.
With the release of Kubernetes (or k8s, as the cool kids call it) we were introduced to the concept of container-aware “schedulers”. These schedulers forced us to rethink our deployment model, to the point that a linear model was no longer sufficient. Our unit of deployment was no longer code but containers, and simply pushing a container somewhere is not a viable strategy (not for implementations at scale, anyway).
The current deployment model is what we like to call a push-pull deployment model (where the old model was push-oriented). We push container images to a registry (e.g. DockerHub), then notify our scheduler (through an API) that a new image is available, after which the scheduler pulls this new image and schedules (and scales up) the new containers accordingly. But this workflow is still not ideal: it assumes we build the container once and then push it everywhere. There’s a better way, as I’ll explain in a bit.
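As a sketch, the notify half of the push-pull flow might look like the following. The scheduler endpoint, payload shape, and image names are all hypothetical — real schedulers such as Kubernetes expose their own deployment APIs — and a stand-in HTTP server plays the scheduler so the example runs end to end:

```go
package main

import (
	"bytes"
	"encoding/json"
	"fmt"
	"net/http"
	"net/http/httptest"
)

// buildNotification creates a (hypothetical) JSON payload telling the
// scheduler which new image:tag it should pull.
func buildNotification(image, tag string) []byte {
	payload, _ := json.Marshal(map[string]string{"image": image, "tag": tag})
	return payload
}

// notifyScheduler POSTs the payload to the scheduler's API; the scheduler
// is then responsible for pulling the image and rolling out containers.
func notifyScheduler(endpoint, image, tag string) error {
	resp, err := http.Post(endpoint, "application/json",
		bytes.NewReader(buildNotification(image, tag)))
	if err != nil {
		return err
	}
	defer resp.Body.Close()
	if resp.StatusCode != http.StatusOK {
		return fmt.Errorf("scheduler returned %s", resp.Status)
	}
	return nil
}

func main() {
	// Stand-in for a real scheduler API, so the sketch is self-contained.
	scheduler := httptest.NewServer(http.HandlerFunc(
		func(w http.ResponseWriter, r *http.Request) {
			w.WriteHeader(http.StatusOK)
		}))
	defer scheduler.Close()

	// Step 1 (push) happens out of band: `docker push myorg/api:v2`.
	// Step 2 (notify): tell the scheduler the new tag exists; it pulls
	// the image and schedules the new containers on its own.
	if err := notifyScheduler(scheduler.URL, "myorg/api", "v2"); err != nil {
		fmt.Println("notify failed:", err)
		return
	}
	fmt.Println("scheduler notified; it will pull myorg/api:v2")
}
```

The point of the split is that the deploy pipeline never talks to the hosts running your containers — it only talks to the registry and the scheduler's API.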
Most importantly, we have to realize that our existing concepts of “build” and “deploy” pipelines no longer add up in a world where apps get more and more complex - we need new verbs to describe our deployment process. As applications become more complex and more microservice-oriented, your build pipelines become fragmented as well. Each microservice has its own suite of tests (preferably), and when testing a new component, you’d like to re-build and re-test every microservice that this component depends on. What you’re doing is building one distributed application consisting of multiple build pipelines.
We’re working towards exactly that, with a feature we call “Workflows” that goes beyond builds and deploys. These Workflows, as their name implies, have no pre-defined goal of building or deploying your app, because building & deploying complex applications can no longer be defined in a single pipeline. That’s why these Workflows will be chainable, giving you the control to make your pipelines as modular as you want.
Building containers through pipelines differs significantly from how we build our containers now. Right now we’re building fat containers: containers with an extensive runtime included to build the app (e.g. OS + python + pip + virtualenv + …), plus test dependencies and testing frameworks. You don’t need your build dependencies included when deploying a container, so why include them? This is even more true for a compiled language such as Go, where you really only need to statically compile and deploy the binary; you don’t need to include anything else. Actively distinguishing between build and deploy containers provides huge benefits, in that you’re building slimmer, cleaner containers that are more easily deployed and maintained.
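For a Go application, the entire deploy artifact can be a single static binary. A minimal sketch (the build flags and Dockerfile lines in the comments are illustrative, not a prescribed setup):

```go
// main.go — the whole application; after a static build, the binary is
// the only thing the deploy container needs to carry.
package main

import (
	"fmt"
	"runtime"
)

func greeting() string {
	return fmt.Sprintf("hello from a slim container (built with %s)", runtime.Version())
}

func main() {
	fmt.Println(greeting())
}

// Build container: full toolchain, test frameworks, build dependencies.
//   CGO_ENABLED=0 go build -o app .   # static binary, no libc required
//   go test ./...
//
// Deploy container: just the binary, nothing else.
//   FROM scratch
//   COPY app /app
//   ENTRYPOINT ["/app"]
```

The build container can be hundreds of megabytes; the deploy container is barely larger than the binary itself.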
Another challenge we’re facing is how to develop microservices in a container-driven environment. While tools such as Docker Compose can create container topologies both locally and remotely, they’re more ops tools than dev tools.
At Wercker, we created a CLI tool that automates your development environment. It lets you execute code inside containers from the get-go, including all the services your application depends on (e.g. MongoDB, Redis, or maybe one of your own microservices). This helps you achieve dev/prod parity, because you’re running all your components inside containers, as you would in your production environment.
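As a sketch, a wercker.yml for such a setup might look like this — the box, service images, and step contents here are illustrative, not prescriptive:

```yaml
# Base container your code runs in during dev and build.
box: golang

# Dependent services, started as linked containers alongside your app.
services:
  - mongo
  - redis

# Pipeline used by `wercker dev`: watches your code and re-runs it.
dev:
  steps:
    - internal/watch:
        code: go run main.go
        reload: true

# The same pipeline definition runs locally and in the cloud.
build:
  steps:
    - script:
        name: run tests
        code: go test ./...
```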
The best part about this CLI is that it’s the exact same tool we use to run your builds in the cloud. That means you can run builds locally, before you commit your changes to Git, simulating an exact run as you would see on wercker. Actually, the real best part about it is that it’s open source :-).
Get in the Workflows Beta
If you’re building complex applications and deploying them in containers to registries and schedulers, the Workflows feature will significantly improve the way you automate your development workflows. We’re looking for advanced users who are willing to help us beta test this, so if you’re interested, I’d love to hear about your use case. Drop me an e-mail at email@example.com describing your current workflow!
Wrapping up - we’re hiring!
Are you as psyched about automating all the things as we are? Think containers are changing the world? Then check out our brand new careers page! We’d love to chat with you :-)