Deploying a Microservice to Kubernetes with Wercker

This tutorial will show you how to take an example microservice from source code to running on a Kubernetes cluster using Wercker.

Aaron Rice
November 8, 2016

I’m going to assume that you’re running OS X, and that you already understand the basics of Kubernetes. If not, you may want to check out the Kubernetes Getting Started documentation before going any further.

I'm also going to assume that you have a running Kubernetes cluster somewhere. There are a few options for this:

  1. Follow the Kubernetes tutorial on setting up a cluster on AWS using the excellent kops tool.
  2. Follow our GKE with GCR tutorial to configure a cluster running on Google Cloud Platform.
  3. Go all in and set up a cluster from scratch.

You'll also need access to a Docker API-compliant registry. I used the Docker Hub.

The microservice you’ll be deploying is called Get-IP, which is a simple Python Bottle application - the same application from one of our other tutorials. The application is invoked by running:

bin/get_ip <listen_hostname> <listen_port>

If you hit the listen_hostname and listen_port in a web browser you'll be greeted with your client IP in a JSON string:

{"client_ip": "127.0.0.1"}

Running bin/tests takes you through the following functional tests:

  • Mock the client IP and test that it is returned
  • Check that the HTTP response code is 200
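The actual service is a Bottle app, but its behaviour is simple enough to sketch with just the standard library. The code below is a hypothetical stand-in (the names and structure are mine, not the repo's), and the last few lines mirror the mocked-client-IP and status-code checks that bin/tests performs:

```python
import json

def get_ip_app(environ, start_response):
    """Hypothetical stdlib-only stand-in for the Bottle handler:
    reply with the caller's IP address in a JSON string."""
    body = json.dumps({"client_ip": environ.get("REMOTE_ADDR", "")})
    start_response("200 OK", [("Content-Type", "application/json")])
    return [body.encode("utf-8")]

# Mock the client IP and check the response, as bin/tests does:
status = []
response = get_ip_app({"REMOTE_ADDR": "127.0.0.1"},
                      lambda s, h: status.append(s))
print(status[0])                    # 200 OK
print(response[0].decode("utf-8"))  # {"client_ip": "127.0.0.1"}
```

Since this is a plain WSGI callable, serving it for real would be a one-liner with wsgiref.simple_server.make_server("0.0.0.0", 8080, get_ip_app) - the moral equivalent of bin/get_ip 0.0.0.0 8080.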

Configuring a Wercker pipeline

Let's take a deeper look into our application repository:

[23:29:29] riceo:wercker-kubernetes-demo git:(master) $ tree .
.
├── README.md
├── bin
│ ├── activate_env
│ ├── get_ip
│ └── tests
├── get_ip
│ ├── __init__.py
│ ├── get_ip.py
│ └── tests.py
├── kubernetes_deployment.yml.template
├── kubernetes_service.yml.template
├── requirements.pip
└── wercker.yml

For anyone who's written a Python application before, some of the above tree should look familiar. Our application source code lives in get_ip/, and the bin directory contains some bash scripts to start the application.

activate_env will create a Python virtualenv, then use the requirements.pip file to download the external dependencies required to launch the application.

bin/tests will run activate_env, followed by the integration tests defined in get_ip/tests.py.

bin/get_ip will run activate_env, then launch our application if provided with a listen host and listen port.

There are three files in this repo that you may not be familiar with:

  1. kubernetes_deployment.yml.template
  2. kubernetes_service.yml.template
  3. wercker.yml

Let's start with...

wercker.yml

The wercker.yml file in the project repo defines pipelines and steps your Wercker pipelines will run to build, test, and deploy the application to Kubernetes.

Each pipeline name in wercker.yml will map to a pipeline name on Wercker.

To configure the application on Wercker, you first need to fork the demo repo to your GitHub or Bitbucket account, then create an account at app.wercker.com. Finally, create a new application on the Wercker web interface by selecting the "Create" dropdown and choosing "application":

Creating a new Wercker application

 

Follow the application setup flow, and provide the name of the repo you forked into your GitHub or Bitbucket account. You should end up on a page like this:

This page will prompt you to create a wercker.yml file for your application, but since the demo application repository already has one you don't need to worry about this.

The “Workflows” tab will show the default Workflow and Pipeline that Wercker creates with a new application: a simple workflow containing a build pipeline, which will be triggered by a Git push.

Once a push is detected, Wercker will clone the repo, read the wercker.yml file, then begin running the steps defined within the build pipeline.

From wercker.yml in the application repository:

build:
    box: python:2.7
    steps:

    # Check that our application's tests are passing. Since this is a python
    # application, our entry script will also install the application's dependencies
    # with Virtualenv
    - script:
        name: Nose tests
        code: bin/tests

    # Take our tested application revision and its dependencies, bake it in to a
    # Docker image, and push to Docker Hub.
    - internal/docker-push:
        entrypoint: bin/get_ip
        cmd: 0.0.0.0 8080
        working-dir: $WERCKER_ROOT
        tag: $WERCKER_GIT_COMMIT
        ports: "8080"
        username: $DOCKERHUB_USERNAME
        password: $DOCKERHUB_PASSWORD
        repository: $DOCKERHUB_REPO    

The build pipeline defined in wercker.yml will run the application’s tests using bin/tests in the application repo.

If the tests pass, the next step will be to package up everything within the working directory (which would be our test-passing application revision and its dependencies), build a Docker image from it, then push that image to the Docker Hub registry.

You may be wondering what those environment variables are doing in the yml file. Well, some are Wercker built-in variables that get passed to the pipeline build steps (the WERCKER_ variables), and some will be defined by you in Wercker’s application config:

From the official Wercker docs on environment variables, the built-in variables are:

  • $WERCKER_ROOT: The location of the cloned code
  • $WERCKER_GIT_COMMIT: The git commit hash

The custom variables that you need to define on Wercker are:

  • DOCKERHUB_USERNAME: Your Docker Hub username
  • DOCKERHUB_PASSWORD: Your Docker Hub password
  • DOCKERHUB_REPO: The repository to push your image to on the Docker Hub. This might look like "riceo/wercker-kubernetes-demo"

To set these environment variables on Wercker, head to your application page, select the “Environment” tab, then add each variable and its corresponding value. Don't forget to protect your passwords, which stops the values from being displayed in builds and on the environment variable tab.


Triggering your first build

Now that you have created a Wercker application and provided the required credentials for Docker Hub, it's time to start your first build.

This would normally happen when you push a change to git, but since you don’t have any new code you need to go back to the “Runs” tab, then select “trigger a build now” at the bottom of the page.

This should take you to the run, and show you the output of some Wercker prep steps followed by the steps defined in our wercker.yml file. If all went well, you should see:

Wercker build Wercked!

get code is a Wercker built-in step that checks out the attached application source revision to the run’s working directory, $WERCKER_ROOT. Next, the setup environment and wercker-init steps will read the wercker.yml file from this directory and export all required environment variables.

From this point onwards in the run, you can compare the screenshot above with the build pipeline section of wercker.yml shown earlier.

The build pipeline should have reported that the “Nose tests” step was successful. If you look at the “Nose tests” step in our wercker.yml file you’ll see that it’s a custom script step that runs “bin/tests”.

Since our source code was checked out to $WERCKER_ROOT, we know that bin/tests would have run our integration tests. If our tests were to fail (i.e. the test script exits with a non-zero exit code), the Wercker pipeline would also have failed and no further steps would have been run.

In this case, our tests should have passed, so the internal Docker push step would have read the Docker Hub credentials from the environment variables you set above, then run a Docker build of all the files in $WERCKER_ROOT, which we now know contained all the dependencies required to run our app along with a test-passing revision of the source.

Once the Docker build completed, the Docker image would have been pushed to Docker Hub (along with some extra parameters such as the Docker entrypoint, and ports to expose).
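Wercker's internal/docker-push works by snapshotting the pipeline container rather than executing a Dockerfile, but the image it pushes is roughly what you'd get from a Dockerfile like the one below. This is an approximation for intuition only; the in-container path is my assumption, not something the step guarantees:

```dockerfile
# Hypothetical approximation of what internal/docker-push produces.
FROM python:2.7              # the pipeline's box
COPY . /pipeline/source      # the contents of $WERCKER_ROOT
WORKDIR /pipeline/source
EXPOSE 8080                  # ports: "8080"
ENTRYPOINT ["bin/get_ip"]    # entrypoint: bin/get_ip
CMD ["0.0.0.0", "8080"]      # cmd: 0.0.0.0 8080
```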

This is great, but what you really want is this image running somewhere as a container, which leads me to...

Deploying your image to Kubernetes with Wercker

Take a look at the other section of our wercker.yml file: the pipeline entitled deploy-to-kubernetes. It starts with:

deploy-to-kubernetes:
    box: python:2.7
    steps:

    # https://github.com/wercker/step-bash-template
    # This Wercker step will look for files in our repo with a .template extension.
    # It will expand any environment variables in those files, then remove the
    # template extension.
    - bash-template

The first step in the deploy-to-kubernetes pipeline is a really cool Wercker step called bash-template, which looks for any file in the current directory with a “.template” extension. It then expands any environment variables found within those files, and drops the “.template” extension from the file when done.

Remember our app repository file list from a few sections above? Well, the other two files we haven’t gone over yet are template files that will be affected by our bash-template build step:

1. kubernetes_deployment.yml.template
2. kubernetes_service.yml.template

Let's take a look at kubernetes_deployment.yml.template:

# This template file will have its environment variables expanded
# and the .template extension removed by the bash-template Wercker step.
# See wercker.yml.

apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: get-ip
  labels:
    commit: ${WERCKER_GIT_COMMIT}
spec:
  replicas: 1
  selector:
    matchLabels:
      app: get-ip
  template:
    metadata:
      labels:
        app: get-ip
        commit: ${WERCKER_GIT_COMMIT}
    spec:
      containers:
      - name: get-ip-server
        image: ${DOCKERHUB_REPO}:${WERCKER_GIT_COMMIT}
        ports:
        - name: get-ip-server
          containerPort: 8080
          protocol: TCP
    

This file contains a Kubernetes deployment definition. It will tell Kubernetes to create a deployment consisting of a container running the image built in the build pipeline, which should now be sitting on the Docker Hub.

The $WERCKER_GIT_COMMIT environment variable will be replaced with the commit hash of this run’s revision of the code, which is also the Docker image tag for this run’s image.

The other template file, kubernetes_service.yml.template is the Kubernetes service definition for our application.

Back in wercker.yml, the next step creates a directory to put the expanded Kubernetes yml files in:

# The step above should leave us with Kubernetes service and deployment
# yml files. We'll create a directory to move them to.
- script:
    name: Prepare Kubernetes files
    code: |
      mkdir $WERCKER_OUTPUT_DIR/kubernetes
      mv kubernetes_*.yml $WERCKER_OUTPUT_DIR/kubernetes

The final step of the pipeline will use the kubectl step to authenticate and deploy the application to your Kubernetes cluster!

    - kubectl:
        name: deploy to kubernetes
        server: $KUBERNETES_MASTER
        username: $KUBERNETES_USERNAME
        password: $KUBERNETES_PASSWORD
        insecure-skip-tls-verify: true
        command: apply -f $WERCKER_OUTPUT_DIR/kubernetes/    

In order for the kubectl step to connect to your Kubernetes cluster, you’ll need to pass it some extra variables, which you should define on the “Environment” tab on Wercker:

  • KUBERNETES_MASTER: Your Kubernetes master API endpoint
  • KUBERNETES_USERNAME: Your Kubernetes API username
  • KUBERNETES_PASSWORD: Your Kubernetes API password

In theory the deploy-to-kubernetes pipeline should now be able to interact with Kubernetes! But how do you trigger this pipeline?

You could trigger it manually once the build pipeline completes, but for the sake of achieving an automated build, test, and deploy process, let's modify your workflow to trigger the deploy-to-kubernetes pipeline after a successful run of the build pipeline!

To do this, go to the “Workflows” tab for your project on the Wercker control panel, then:

  1. Select “Add new pipeline”
  2. Give your pipeline a name. For simplicity’s sake I called mine deploy-to-kubernetes
  3. Enter deploy-to-kubernetes in to the “YML pipeline name” field to map this pipeline to the pipeline of the same name in wercker.yml
  4. Create the pipeline with the default hook type
  5. Click on the “Workflows” tab again to take you back to the main Workflows page.
  6. Click the blue plus icon after the build step under the Workflow editor
  7. In the popup box, leave the branches box filled with an asterisk, then select deploy-to-kubernetes in the “Execute pipeline” box.


This will configure your Wercker application to start a deploy-to-kubernetes pipeline run after a successful build pipeline run! Now let's give it a test:

  1. Select the “Runs” tab for your application
  2. Click on the only run listed, which should be a green build pipeline.
  3. On the right side of the page there should be a dropdown box labeled “Actions”. Click it.
  4. Select deploy-to-kubernetes

What you’re doing here is triggering a deploy-to-kubernetes pipeline run off of your last successful build pipeline run.


Once this is done you can confirm that your container is running on Kubernetes by running the following locally:

[22:52:47] riceo:~ $ kubectl get pods
NAME                      READY     STATUS    RESTARTS   AGE
get-ip-3658735603-b9lmw   1/1       Running   0          20s

If this run is successful you will have just deployed your application from source to Kubernetes!

Accessing the application from the Internet

So, how do you get to your application from the Internet? Well, this depends on where you deployed your Kubernetes cluster.

If you are running your cluster on a supported cloud platform such as GCP or AWS, a load balancer will already be created for you! How? 

Let's take another look at your Kubernetes service definition file, which is in the application repo as kubernetes_service.yml.template:

# This template file will have its environment variables expanded
# and the .template extension removed by the bash-template Wercker step.
# See wercker.yml.

apiVersion: v1
kind: Service
metadata:
  name: get-ip
  labels:
    app: get-ip
    commit: ${WERCKER_GIT_COMMIT}
spec:
  ports:
  - port: 80
    targetPort: 8080
  selector:
    app: get-ip
    commit: ${WERCKER_GIT_COMMIT}
  type: LoadBalancer

The very last line of this file is type: LoadBalancer, which will prompt the supported cloud provider plugin to asynchronously provision a load balancer pointing to the pods your application is running in. The Kubernetes documentation explains this in more detail.

You can see the current IP address of the load balancer by describing the Kubernetes service with kubectl:

[20:55:23] riceo:wercker-kubernetes-demo git:(master) $ kubectl describe service get-ip
Name:                   get-ip
Namespace:              default
Labels:                 app=get-ip
                        commit=1da23783dbcf975bd1ee36d86e71777bcfcf56ce
Selector:               app=get-ip,commit=1da23783dbcf975bd1ee36d86e71777bcfcf56ce
Type:                   LoadBalancer
IP:                     10.59.245.13
LoadBalancer Ingress:   104.199.55.23
Port:                   <unset> 80/TCP
NodePort:               <unset> 30497/TCP
Endpoints:              10.56.0.6:8080
Session Affinity:       None
Events:

If you hit the LoadBalancer Ingress IP reported in the output of the above command in your browser, you should see your application, built, tested, and deployed by Wercker to Kubernetes - accessible from the public internet!

If your Kubernetes cluster isn't running on a LoadBalancer-supported cloud provider, then the "LoadBalancer" service type will be ignored, and it's up to you to decide how to provide ingress to your application. There are a couple of options available, all of which require a few minor changes to the Kubernetes service definition yml. The Kubernetes documentation explains the process for configuring these.

Conclusion

So there we have it! You should have just taken your application from source to deployed on Kubernetes via Wercker!

We’ll be expanding on more advanced topics such as using the Wercker CLI and parallel testing with Wercker in future posts. We already have a version of this tutorial dedicated to deploying on GKE. If you have any questions feel free to reach out to Wercker on Twitter, or come have a chat in our public Wercker Slack channel.

 

Topics: Kubernetes, Containers, Tutorials