It’s been over five years since Marc Andreessen wrote his much-publicized essay on “why software is eating the world”.
If all companies are software companies, then your competitive advantage and business agility are determined by developer velocity and the productivity of your development team.
By extension, developer tooling that enables teams to move faster, deliver new features, and increase their product release cadence is paramount to achieving competitive advantage and reducing opportunity cost.
In the last couple of years, we’ve seen a new model for how software is built and delivered, called microservices, whereby a large monolithic application is broken down into small, separate pieces, each with its own responsibility. How do microservices relate to developer productivity, and what are some of the drivers for their adoption?
Drivers for microservice adoption
At Wercker we believe there are two key drivers for microservices:
- The first is the need for speed, which is mostly a business driver. We went from weekly, to daily, to hourly releases because we want to iterate on our product quickly. We need a rapid feedback cycle, for instance from our customers, and we also want to be able to adapt quickly to our ever-changing environment. Fast iteration and rapid feedback cycles are essential to achieving business agility and remaining competitive, and small teams working on small individual pieces can be more efficient than many developers working on one large monolithic application.
- The second driver is team size and communication: once we grow beyond a certain team size, it makes sense to split up our team and its responsibilities. At some point, throwing more developers at a problem breaks down; communication overhead grows, developers step on each other’s toes, and productivity drops.
The underlying concepts behind microservices are not new, so why is the timing right for them now?
Key ingredients for microservices
Two key building blocks are now in place for microservices to take off. On the one hand, we have the rise of software containers, spearheaded by Docker: a way to package up your application using OS-level virtualization. Docker provides a standard format to encapsulate code, alongside resource constraints, networking, and storage. This standard format makes containers portable and allows them to be run in development environments as well as on production clouds.
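As a sketch of what this packaging looks like, a minimal Dockerfile might resemble the following (the binary name, base image tag, and port are all hypothetical):

```dockerfile
# Start from a small base image
FROM alpine:3.4

# Copy the compiled service binary into the image
COPY ./my-service /usr/local/bin/my-service

# Document the port the service listens on (hypothetical)
EXPOSE 8080

# Run the service when the container starts
CMD ["/usr/local/bin/my-service"]
```

The same image runs unchanged on a developer laptop and on a production cluster, which is what makes the format portable.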
Along with containers taking off came the need for a way to run them. In the last few years, we’ve seen significant effort put into schedulers and orchestration platforms that are capable of running containers at scale, providing networking between them, and managing workloads. Kubernetes, a popular scheduler, is an open-source solution that came out of Google and is based on its internal scheduling system, Borg. The recent 1.3 update added many features specifically targeted at enterprise users, such as hybrid deployments, improved scalability, and cross-cluster federated services.
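To make the scheduler’s role concrete, here is a minimal sketch of a Kubernetes Deployment from that era (the service name and image are hypothetical; the API group matches Kubernetes 1.3):

```yaml
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: auth-service            # hypothetical service name
spec:
  replicas: 3                   # the scheduler keeps three copies running
  template:
    metadata:
      labels:
        app: auth-service
    spec:
      containers:
      - name: auth-service
        image: registry.example.com/auth-service:1.0   # hypothetical image
        ports:
        - containerPort: 8080
```

You declare the desired state (three replicas of this container) and the scheduler decides where they run and restarts them when they fail.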
As with any new level of developer abstraction, microservices come with their own set of challenges:
- More than one process: you need a way to run the different services that you and your colleagues created in order to test API contracts. Developers should be aware of how the different processes get deployed and what the entire topology is.
- More than one codebase: your various services will most likely live in different repositories, and you and the other teams need to communicate about changes to them.
- More than one configuration: you will have multiple configurations across various services and environments.
- More than one environment: your services will operate in different environments ranging from production clusters to developer laptops. How do you create parity across these different environments?
Microservices and Wercker
The Wercker platform itself consists of many different services that run your automation pipelines every day, so we’ve been thinking a lot about how to build and deploy microservices, thus eating our own dogfood.
Every automation pipeline within Wercker runs inside a container, and every artifact from a pipeline can be a container (or any other type of output, be it a tarball or .deb package). Any service that you might need, such as a database or queue, will get spun up alongside your main pipeline as a separate container. This means that you run integration tests against actual services as opposed to mocking them.
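As a sketch, a wercker.yml that spins up real backing services next to the build pipeline might look like this (the box, service tags, and test command are illustrative):

```yaml
box: golang:1.6            # the container the pipeline itself runs in
services:
  - redis                  # spun up as a separate linked container
  - id: postgres
    tag: "9.5"
build:
  steps:
    - script:
        name: run integration tests
        code: go test ./...   # tests talk to the real redis/postgres containers
```

Because the services are ordinary container images, the same definitions work for any dependency you can package as a container.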
More importantly, as you can use any container as a service, you can spin up the services that your colleagues have been building as well. Say, for instance, you are building an API that needs an authentication backend built by a different team; on Wercker this authentication service would be spun up next to your own pipeline. This allows you to test API contracts across different services and codebases.
As you can fan out pipelines on the Wercker platform, you can create containerized services for different purposes. For development, for instance, you might want a version of your service that has debugging dependencies inside of it, allowing for easy local development. Just create a separate pipeline based on the base image and push it to a registry.
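A sketch of such a fan-out, where one pipeline compiles the service and a second installs debug tooling and pushes the result to a registry (the repository name and credential variables are hypothetical; internal/docker-push is Wercker’s built-in step for pushing a pipeline container):

```yaml
build:
  steps:
    - script:
        name: compile
        code: make build
dev-image:
  steps:
    - script:
        name: add debugging dependencies
        code: apt-get update && apt-get install -y gdb strace
    - internal/docker-push:          # pushes the pipeline container to a registry
        username: $DOCKER_USERNAME   # hypothetical registry credentials
        password: $DOCKER_PASSWORD
        repository: example/my-service
        tag: dev
```

The dev-image pipeline is wired to run after build in the Wercker workflow, so the debug variant always derives from the same base.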
For service discovery and configuration storage, you can use an etcd container, just as you would in production, creating dev-prod parity between your different environments.
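For example, the same etcd image you run in production could be declared as a pipeline service (the image id and tag here are illustrative):

```yaml
services:
  - id: quay.io/coreos/etcd
    tag: v2.3.7
```

Your code then reads its configuration from etcd the same way in development pipelines as it does on the production cluster.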
In short, we think that microservices are here to stay, but better tools and best practices are needed to support building and delivering multi-service applications. At Wercker, we’re trying to push forward battle-tested solutions to these challenges. Keep an eye on this blog for further posts on building and delivering applications for the modern cloud.
Earn some stickers!
As usual, if you want to stay in the loop, follow us on Twitter @wercker or hop on our public Slack channel. If it’s your first time using wercker, be sure to tweet out your #greenbuilds and we’ll send you some swag!