Kubernetes cluster with Docker containers

Wercker is a Docker-Native CI/CD Automation platform for Kubernetes & Microservice Deployments

Benno van den Berg
September 6, 2016

This article is going to show you how to automate getting your code out of your git repository and into a Kubernetes cluster with Wercker. 


This happens in three easy steps:

  • Pipeline one tests and compiles the code
  • Pipeline two creates a Docker image
  • Pipeline three instructs the Kubernetes cluster to run the application


Pipeline one - Build

In our first pipeline we want to turn our source code into a binary. Our current preferred language is Go, which lets us statically compile our application into a single binary that runs without any dependencies.

Before compiling, however, we first need to make sure that the code is “good”. We check this using the following tools: go vet, golint, and go test.

go vet is a command that ships with the go binary, and it is a simple checker for static errors. It is our first line of defence against mistakes like calling Printf with too few or too many arguments. Next is golint, a separate application that lints Go source code and is concerned with coding style: things like comments and consistent naming are checked by golint.
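As a small illustration, here is the kind of mistake go vet catches before the code even runs (the describeDeploy function is a made-up example, not from the article’s code base):

```go
package main

import "fmt"

// describeDeploy builds a log line. A format/argument mismatch here,
// such as the commented line below, is exactly what `go vet` flags:
//
//	fmt.Sprintf("deploying %s to %s", name) // vet: format reads 2 args, call has 1
func describeDeploy(name string) string {
	return fmt.Sprintf("deploying %s", name)
}

func main() {
	fmt.Println(describeDeploy("example"))
}
```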

go vet and golint make sure that the code is consistent, but they do not guarantee that it is correct. That is why we write tests and run them on every build. We also capture a coverage report, but we do not require a certain level of test coverage. The reason for not requiring 100% code coverage is that some statements are very difficult to reproduce in a test, and insisting on them would just give a false sense of security.

After we think that the code is “good”, we compile it. We disable CGO in this build (more about the reason in the next pipeline). We also set some variables at build time. This allows us to inject values like the git commit, compile time, and version information without having to change any source files.
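On the Go side, build-time injection only needs package-level string variables that the -ldflags "-X …" flags overwrite. A minimal sketch (using package main and illustrative defaults; the article’s build injects into the github.com/wercker/example package path instead):

```go
package main

import "fmt"

// These defaults are replaced at build time, e.g.:
//   go build -ldflags "-X main.GitCommit=$WERCKER_GIT_COMMIT -X main.Compiled=$(date +%s)"
var (
	GitCommit = "unknown"
	Compiled  = "0"
)

// versionString exposes the injected build information at run time.
func versionString() string {
	return fmt.Sprintf("commit=%s compiled=%s", GitCommit, Compiled)
}

func main() {
	fmt.Println(versionString())
}
```

Run without ldflags the defaults are printed; in CI, Wercker’s environment variables fill in the real values.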

To make sure that our pipelines only contain the necessary files, we copy the binary and the deployment code to the $WERCKER_OUTPUT_DIR. This will ensure that only these files will be stored for the next pipeline.

box: golang
build:
  base-path: /go/src/github.com/wercker/example
  steps:
    - script:
        name: install dependencies
        code: |
          go get -u github.com/kardianos/govendor
          govendor sync

    - script:
        name: go vet
        code: govendor vet +local

    - golint:
        exclude: vendor

    - script:
        name: go test
        code: govendor test +local

    - script:
        name: go build
        code: >
          CGO_ENABLED=0
          go build
          -ldflags="-s -X github.com/wercker/example.GitCommit=$WERCKER_GIT_COMMIT -X github.com/wercker/example.PatchVersion=$(( ($(date +%s) - $(date --date=20150101 +%s) )/(60*60*24) )) -X github.com/wercker/example.Compiled=$(date +%s)"
          -installsuffix cgo
          -o $WERCKER_OUTPUT_DIR/example

    - script:
        name: forward deployment scripts
        code: cp -r deployment $WERCKER_OUTPUT_DIR/deployment

Pipeline two - Docker push

Now that we have a binary, we start the second pipeline with the content of the previous one. At run time we want to use the smallest possible image, so this pipeline uses a different base image. For most of our services we use Alpine. We do not need any other dependencies, but we do install the root CA certificates (should the service need to communicate with an SSL-enabled endpoint). Then we add a non-root user to the container, and make sure that the service runs as this user during execution.

The resulting image is pushed to a Docker registry, Quay.io in our case. We tag this image in a predictable, but also unique, format: the branch name combined with the commit hash. This format allows us to uniquely address any Docker image. We do not need a more human-friendly format, since Wercker is responsible for all deployments.

push:
  box:
    id: alpine
    cmd: /bin/sh
  steps:
    - script:
        name: install apk packages
        code: |
          echo "@edge http://dl-cdn.alpinelinux.org/alpine/edge/main" >> /etc/apk/repositories
          apk update && apk add ca-certificates

    - script:
        name: add example user
        code: adduser example -D -u 1234

    - script:
        name: prepare
        code: mv ./example /example

    - script:
        name: forward deployment scripts
        code: cp -r deployment $WERCKER_OUTPUT_DIR/deployment

    - internal/docker-push:
        repository: quay.io/wercker/example
        registry: https://quay.io
        username: $DOCKER_USERNAME
        password: $DOCKER_PASSWORD
        tag: $WERCKER_GIT_BRANCH-$WERCKER_GIT_COMMIT
        entrypoint: /example
        ports: "6002"
        user: 1234
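The branch-plus-commit tag format described above can be sketched as a tiny helper (imageTag is a hypothetical function; the real tag is assembled by the docker-push step from Wercker’s environment variables):

```go
package main

import "fmt"

// imageTag builds the predictable-but-unique image reference used for
// deployments: repository, branch name, and commit hash.
func imageTag(branch, commit string) string {
	return fmt.Sprintf("quay.io/wercker/example:%s-%s", branch, commit)
}

func main() {
	fmt.Println(imageTag("master", "abc123"))
}
```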

Pipeline three - Kubernetes deploy

We run the final pipeline manually. It instructs Kubernetes to create or update our application, along with any related Kubernetes objects such as the service, ingress, etc.

First we template all our Kubernetes configurations with environment variables. These environment variables are all defined on app.wercker.com. This allows us to tweak a deployment without having to commit any code, and it allows us to deploy to different environments (staging, production, etc). Other variables, such as the git commit and git branch, are defined by Wercker itself. The following is an example of a Kubernetes deployment configuration template:

apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: example
  labels:
    branch: ${WERCKER_GIT_BRANCH}
    commit: ${WERCKER_GIT_COMMIT}
spec:
  replicas: ${TPL_REPLICAS:-1}
  selector:
    matchLabels:
      app: example
  template:
    metadata:
      labels:
        app: example
        branch: ${WERCKER_GIT_BRANCH}
        commit: ${WERCKER_GIT_COMMIT}
    spec:
      imagePullSecrets:
        - name: quay-readonly
      containers:
        - name: server
          image: quay.io/wercker/example:${WERCKER_GIT_BRANCH}-${WERCKER_GIT_COMMIT}
          args: []
          ports:
            - name: server
              containerPort: 6002
              protocol: TCP
          securityContext:
            readOnlyRootFilesystem: true
            runAsNonRoot: true

The environment variables starting with WERCKER_ are defined by Wercker itself, and the ones starting with TPL_ are custom environment variables defined on app.wercker.com. One thing to note is that we also add the branch and commit as labels on the deployment. Though this is not required, it helps when debugging issues. We also enable some security settings in this configuration file, such as ensuring that the service runs as a non-root user and mounting the file system read-only.
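To make the substitution concrete, here is a rough Go sketch of the ${VAR} and ${VAR:-default} expansion that the bash-template step performs (the real step uses bash itself; renderTemplate and its regex are illustrative, not the step’s implementation):

```go
package main

import (
	"fmt"
	"regexp"
)

// placeholder matches ${NAME} and ${NAME:-default} occurrences.
var placeholder = regexp.MustCompile(`\$\{([A-Za-z_][A-Za-z0-9_]*)(?::-([^}]*))?\}`)

// renderTemplate replaces each placeholder with the value from env,
// falling back to the default when the variable is unset or empty
// (matching bash's ${VAR:-default} semantics).
func renderTemplate(in string, env map[string]string) string {
	return placeholder.ReplaceAllStringFunc(in, func(m string) string {
		parts := placeholder.FindStringSubmatch(m)
		if v, ok := env[parts[1]]; ok && v != "" {
			return v
		}
		return parts[2] // default value, empty string if none was given
	})
}

func main() {
	tpl := "replicas: ${TPL_REPLICAS:-1}\ncommit: ${WERCKER_GIT_COMMIT}"
	env := map[string]string{"WERCKER_GIT_COMMIT": "abc123"}
	fmt.Println(renderTemplate(tpl, env))
}
```

With TPL_REPLICAS unset, the replica count falls back to 1 while the Wercker-provided commit is filled in, which is exactly how the same template serves staging and production.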

After the templating is finished we combine all the .yml files, and then use the kubectl step to apply any changes to the deployment, secrets, ingress, etc. These are applied to the Kubernetes cluster, which starts a rolling update of the deployment and updates any other entities.

deploy:
  steps:
    - bash-template:
        cwd: deployment

    - script:
        name: merge kubernetes files
        cwd: deployment
        code: |
          rm *.template.yml
          cat *.yml > example.yml

    - kubectl:
        name: deploy to kubernetes
        cwd: deployment
        server: $KUBERNETES_MASTER
        token: $KUBERNETES_TOKEN
        insecure-skip-tls-verify: true
        command: apply -f example.yml


By using Wercker we’re able to automate our deployment pipeline to Kubernetes using Quay.io as our registry. It also allows us to deploy to multiple environments, without having to redefine our deployment.

Earn some stickers!

As usual, if you want to stay in the loop follow us on Twitter @wercker or hop on our public Slack channel. If it’s your first time using wercker, be sure to tweet out your #greenbuilds and we’ll send you some swag!


Topics: Product, Kubernetes, Integrations, Steps