Deploying a Microservice to GKE with GCR

Wercker is a Docker-Native CI/CD Automation platform for Kubernetes & Microservice Deployments

Aaron Rice
October 20, 2016

This tutorial will show you how to take an example microservice from source code to a Kubernetes cluster using Wercker, Google Container Engine (GKE) and Google Container Registry (GCR).


I’m going to assume that you’re running OS X, and that you already understand the basics of Kubernetes. If not, you may want to check out the Kubernetes Getting Started documentation before going any further.

The microservice you’ll be deploying is called Get-IP: a simple Python Bottle application that can be found on GitHub at https://github.com/riceo/wercker-gke-demo, and is invoked with:

bin/get_ip <listen_hostname> <listen_port>

The above command will start the application. Hitting the listening host/port with a web browser will return you… wait for it…

{"client_ip": "127.0.0.1"}

… Your client IP address, in a JSON string!

We’ve also got bin/tests, which will run through some functional tests of Get-IP. Currently the tests check that the client’s IP is in the response and that it returns an HTTP 200 response.
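If you want to give it a spin locally before involving any CI, something like the following should work, assuming you have Python 2.7 and virtualenv available (a quick sketch, not part of the repo’s documented workflow):

# Start the app on localhost; the script sets up its Python dependencies first
bin/get_ip 127.0.0.1 8080

# In another terminal, hit it and check the JSON response
curl http://127.0.0.1:8080/
# {"client_ip": "127.0.0.1"}

# Run the functional tests
bin/tests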

Step One: Prep

Google Container Engine is Google’s hosted Kubernetes offering, part of Google Cloud Platform. If you don’t already have an account, head over to https://cloud.google.com/ and take advantage of their free trial which - as of writing - is $300 of free credit for 60 days.

Once you’ve signed up you’ll be taken to the Cloud Platform control panel. The first thing you need to do here is create a project. I called mine “Wercker Tutorials”. You’ll be given a project ID, which you’ll need to take note of for later.

Create a Project - Google Cloud Platform

Creating a project can take around 30 seconds, so keep an eye on the notifications tab to the right of the page, then switch to your new project once it has been created.

Next you’ll need to download the Google Cloud SDK to interact with Google Cloud Platform from your local machine. Head over to the official tutorial on configuring the Cloud SDK, and run through the download and initialisation steps.
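If you’d rather see the commands than click through, the initialisation boils down to something like this (a rough sketch; the interactive gcloud init flow may vary slightly between SDK versions):

# Authenticate and choose your default project/zone interactively
gcloud init

# Or point the SDK at your project explicitly, using the project ID from earlier
gcloud config set project <PROJECT-ID>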

You should have been taken through (a rather awesome) OAuth authentication flow which will let you interact with Google Cloud Platform’s APIs. Since you’re going to be using Google Container Engine (GKE) - Google’s hosted version of Kubernetes - you will also need a tool to interact with your Kubernetes cluster. The gcloud tool provides a version of kubectl, which can be downloaded with the following command:

gcloud components install kubectl

Step Two: Configuring Google Container Engine (GKE)

Let’s jump straight into launching a new GKE cluster!

All of the big players have abstracted the somewhat lengthy process of launching a Kubernetes cluster from scratch into a few commands. Google lets you achieve this either via the gcloud SDK or from the web interface.

I’ll show you how to do both here, but you’ll only need to pick one method. Personally, I prefer the command line option.

Provisioning a GKE cluster from the web interface

Hit the menu button on the top left of the Google Cloud Platform control panel, then select “Container Engine”. If this is a fresh account you may be greeted with an invitation to “Enable billing”. Nobody wants that invitation, but you’ll need to accept it to get any further. Don’t worry, the $300 free credit will be more than enough for this tutorial! Also, there’s no fee for managing GKE clusters under 5 nodes, so you’ll just be paying for the underlying compute instances.

Select “Create a container cluster”, and give your Container Engine cluster a name. I used “prod” for this tutorial.

Creating a GKE cluster in the control panel

Select the GCP zone you want your cluster to be launched in; I used europe-west1-d. In the above picture you’ll see that I used “small” tier instances and asked for a cluster containing three nodes.

You can now hit the “Create” button to create your GKE cluster!

Provision your cluster using the Google SDK from your CLI:

If you followed the above steps, you can ignore this section. Otherwise, the following gcloud SDK command will also launch your cluster, with the same specifications as above:

gcloud container clusters create prod --zone europe-west1-d --machine-type=n1-standard-2 --num-nodes=3

Whether you choose to run the above command or follow the web interface flow, you’ll need to wait a few minutes for the cluster to be created. Your reward for waiting will be a fully functioning GKE Kubernetes cluster!

[21:14:32] riceo:~ $ gcloud container clusters create prod --zone europe-west1-d --machine-type=n1-standard-2 --num-nodes=3
Creating cluster prod...done.
Created [https://container.googleapis.com/v1/projects/wercker-tutorials/zones/europe-west1-d/clusters/prod].
kubeconfig entry generated for prod.
NAME  ZONE            MASTER_VERSION  MASTER_IP   MACHINE_TYPE   NODE_VERSION  NUM_NODES  STATUS
prod  europe-west1-d  1.3.6           <REDACTED>  n1-standard-2  1.3.6         3          RUNNING

One benefit of using the gcloud SDK tool to create your cluster is that kubectl will have been automagically configured to connect to your GKE cluster. If you used the web control panel to launch your cluster, you’ll need to run a few commands to have gcloud configure kubectl for you:

gcloud config set compute/zone europe-west1-d
gcloud container clusters get-credentials prod

Let’s make sure kubectl is connected to your GKE cluster by asking it to show you the Kubernetes nodes available:

kubectl get nodes

kubectl works!

If the result looks something like the above, you’ll be looking at the nodes in the Kubernetes cluster you just launched from scratch in only a few minutes…

How awesome is that?!

Step Three: Configuring a Google Service Account

You’ll need a Google Service Account (not to be confused with a Kubernetes Service account) for Wercker to interact with GCR and GKE. To create one, head to the GCP web control panel:

  1. Hit the menu button on the top left
  2. Go to “IAM & Admin”
  3. Select “Service Accounts”
  4. Select “Create”

Give your account a name and a role. I called mine “wercker” and gave it “owner” privileges for the project, but you may wish to look into granular IAM permissions as further reading.

Furnish a new private key, choosing the JSON key type, and download it. You’ll be passing it to Wercker later on.
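If you’d rather do this from the command line, recent versions of the gcloud SDK can create the service account, bind the role, and generate the key for you. Something along these lines should work (a sketch - substitute your own project ID, and note that the exact IAM commands depend on your SDK version):

# Create the service account
gcloud iam service-accounts create wercker --display-name "wercker"

# Grant it the (very broad) owner role on the project
gcloud projects add-iam-policy-binding <PROJECT-ID> \
    --member serviceAccount:wercker@<PROJECT-ID>.iam.gserviceaccount.com \
    --role roles/owner

# Generate and download a JSON private key for it
gcloud iam service-accounts keys create wercker-key.json \
    --iam-account wercker@<PROJECT-ID>.iam.gserviceaccount.com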

Step Four: Google Container Registry

This step doesn’t actually require any action, but it seems like a nice place to explain GCR. GCR is incredibly simple to interact with, and you only pay for Google Cloud Storage plus network egress, so it’s also very cost effective.

Google exposes a handful of GCR domains, one per region. The full list can be found in the GCR documentation, but I suggest using eu.gcr.io for the purpose of this tutorial. Once authenticated, the process for pulling and pushing images is simple:

docker push eu.gcr.io/<PROJECT-ID>/<IMAGE-NAME>:<IMAGE-TAG>
docker pull eu.gcr.io/<PROJECT-ID>/<IMAGE-NAME>:<IMAGE-TAG>

<PROJECT-ID> refers to the Google Cloud Platform project id that you set up in step one of this tutorial.
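If you want to push or pull from your own machine, you’ll also need to authenticate Docker against GCR first. One way to do that with the service account key from step three looks roughly like this (a sketch; _json_key is GCR’s special username for key-file authentication):

# Log Docker in to the EU GCR endpoint using the service account JSON key
docker login -u _json_key -p "$(cat /path-to-downloaded-file.json)" https://eu.gcr.io

# Then push and pull as shown above
docker push eu.gcr.io/<PROJECT-ID>/<IMAGE-NAME>:<IMAGE-TAG>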

Step Five: Configuring a Wercker pipeline!

OK, this is where things get interesting. You should now have a working three-node GKE Kubernetes cluster, access to the Google Container Registry, and an application that’s just itching to be deployed!

Let’s take another look at the application repo:

https://github.com/riceo/wercker-gke-demo

[23:29:29] riceo:wercker-gke-demo git:(master) $ tree .
.
├── README.md
├── bin
│ ├── activate_env
│ ├── get_ip
│ └── tests
├── get_ip
│ ├── __init__.py
│ ├── get_ip.py
│ └── tests.py
├── kubernetes_deployment.yml.template
├── kubernetes_service.yml.template
├── requirements.pip
└── wercker.yml

For anyone who’s written a Python application before, some of the above tree should look familiar. Our application source code lives in get_ip/, and the bin directory contains some bash scripts to start the application.

activate_env will create a Python virtualenv then use the requirements.pip file to download the external dependencies required to launch the get_ip application.

bin/tests will run activate_env, then run the integration tests defined in get_ip/tests.py.

bin/get_ip will run activate_env then launch our application if provided with a listen host and listen port.

There are three files in this repo that you may not be familiar with:

  1. kubernetes_deployment.yml.template
  2. kubernetes_service.yml.template
  3. wercker.yml

Let’s start with…

Wercker[.yml]

The wercker.yml file in the project repo defines the steps your Wercker pipelines will run to build, test, and deploy the application to your GKE cluster.

Each pipeline name in wercker.yml will map to a pipeline name on Wercker. To configure the application on Wercker, you first need to fork the demo repo into your GitHub or Bitbucket account, then create an account at app.wercker.com.

Once that’s done you’ll need to create the application on Wercker.

Creating a new Wercker application

Give the application a name, then follow the awesome setup flow, passing it your forked demo repo. You should end up on a page that looks a bit like this:

Wercker new application

Wercker will ask you to create a wercker.yml file, but since our repo already has one you don’t need to worry about that.

By default Wercker will create a Workflow that contains a build pipeline, which you can see on the “Workflows” tab. This workflow will be triggered by a Github/Bitbucket push.

When Wercker detects a push, it’ll clone the repo, read the wercker.yml file, then begin running the steps defined within the build pipeline:

From wercker.yml in our tutorial repo:

build:
    box: python:2.7
    steps:

    # Check that our application's tests are passing. Since this is a python
    # application, our entry script will also install the application's dependencies
    # with Virtualenv
    - script:
        name: Nose tests
        code: bin/tests

    # Take our tested application revision and its dependencies, bake it in to a
    # Docker image, and push to GCR.
    - internal/docker-push:
        entrypoint: bin/get_ip
        cmd: 0.0.0.0 8080
        working-dir: $WERCKER_ROOT
        tag: $WERCKER_GIT_COMMIT
        ports: "8080"
        username: _json_key
        password: $GCP_KEY_JSON
        repository: $GCR_TAG
        registry: $GCR_HOST/v2

The build pipeline defined in wercker.yml will run the application’s tests using bin/tests in the application repo.

If the tests pass, the next step will be to package up everything within the working directory (which would be our test-passing application revision and its dependencies), build a Docker image from it, then push that image to GCR.

You may be wondering what those environment variables are doing in the yml file. Well, some are Wercker built-in variables that get passed to the pipeline build steps (the WERCKER_ variables), and some will be defined by you in Wercker’s application config:

From the official Wercker docs on environment variables, the built-in variables are:

$WERCKER_ROOT - The location of the cloned code
$WERCKER_GIT_COMMIT - The git commit hash

The custom variables that you need to define on Wercker are:

$GCP_KEY_JSON - The environment variable containing the contents of the Google Cloud JSON key file used to authenticate against Google Cloud Platform products. This is the same JSON file that you downloaded in step three of this tutorial.
$GCR_TAG - The full GCR image name (registry domain, project ID, and image name) to push to.
$GCR_HOST - The GCR hostname for the Docker registry.

To set these environment variables on Wercker, head to your application page and select the “Environment” tab.

Setting a settable

GCP_KEY_JSON:

IMPORTANT! You need to remove any newlines from the JSON file before pasting it into the Wercker control panel:

tr -d '\n' < /path-to-downloaded-file.json

If you miss the above step all push/pull attempts made to GCR will fail and it’ll make you re-evaluate every decision in your life that led up to the moment of the 100th failure. Trust me on that one.
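Since we’re assuming OS X, you can pipe the stripped output straight to your clipboard rather than copying it from the terminal by hand:

tr -d '\n' < /path-to-downloaded-file.json | pbcopy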

GCR_TAG:

This should be the GCR format listed in step four.

<GCR DOMAIN>/<GCP-PROJECT-ID>/<CONTAINER-NAME>.

e.g:

eu.gcr.io/wercker-tutorials-146915/get-ip

GCR_HOST:

The GCR base URI:

https://<GCR DOMAIN>

e.g:

https://eu.gcr.io

The environment variables configured on Wercker

Step Six: Triggering your first build

Now that you have created a Wercker application and given it access to your GKE cluster and GCR, you can manually trigger the build pipeline in Wercker to make sure everything is working.

This would normally happen when you push a change to git, but since you don’t have any new code, you need to go back to the “Runs” tab and select “trigger a build now”.

This should take you to the Run page and show you the output of some Wercker prep steps, followed by the steps defined in our wercker.yml file. If all went well, you should see:

Wercker build Wercked!

get code is a Wercker built-in step that checks out the attached application source revision to the run’s working directory, $WERCKER_ROOT. Next, the setup environment and wercker-init built-in steps will read the wercker.yml file from this directory and export all environment variables defined.

From this point onwards in the run, you can compare the screenshot above with the build pipeline section of wercker.yml in our application repo:

build:
    box: python:2.7
    steps:

    # Check that our application's tests are passing. Since this is a python
    # application, our entry script will also install the application's dependencies
    # with Virtualenv
    - script:
        name: Nose tests
        code: bin/tests

    # Take our tested application revision and its dependencies, bake it in to a
    # Docker image, and push to GCR.
    - internal/docker-push:
        entrypoint: bin/get_ip
        cmd: 0.0.0.0 8080
        working-dir: $WERCKER_ROOT
        tag: $WERCKER_GIT_COMMIT
        ports: "8080"
        username: _json_key
        password: $GCP_KEY_JSON
        repository: $GCR_TAG
        registry: $GCR_HOST

The build pipeline should have reported that the “Nose tests” step was successful. If you look at the “Nose tests” step in our wercker.yml file you’ll see that it’s a custom script step that runs bin/tests. Since our source code was checked out to $WERCKER_ROOT, we know that bin/tests would have run our integration tests. If our tests were to fail (i.e. the test script exits with a non-zero exit code), the Wercker pipeline would have also failed and no further steps would have been run.

In this case, our tests should have passed, so the internal Docker push step would have read the contents of $GCP_KEY_JSON from the environment variables you set above, then run a Docker build of all the files in $WERCKER_ROOT, which we now know contained all the dependencies required to run our app, along with a test-passing revision of the source.

Once the Docker build completed, the Docker image would have been pushed to GCR (along with some extra parameters such as the Docker entrypoint and ports to expose) using the service account credentials defined in the $GCP_KEY_JSON environment variable.
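Conceptually, what the internal/docker-push step does is something like the following; this is only an illustration of the flow rather than what Wercker literally runs under the hood (the container ID, image name, and key path are placeholders):

# Snapshot the pipeline container, which already holds the checked-out,
# test-passing code and its virtualenv, as a tagged image
docker commit <pipeline-container-id> eu.gcr.io/<PROJECT-ID>/get-ip:<git-commit-sha>

# Authenticate against GCR with the service account key...
docker login -u _json_key -p "$(cat /path-to-downloaded-file.json)" https://eu.gcr.io

# ...and push the image
docker push eu.gcr.io/<PROJECT-ID>/get-ip:<git-commit-sha>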

This is great, but what you really want is this image running somewhere as a container, which leads me to:

Step Seven: Deploying your image to Kubernetes with Wercker

Take a look at the other section of our wercker.yml file: the pipeline entitled deploy-to-kubernetes. It starts with:

deploy-to-kubernetes:
    box: python:2.7
    steps:

    # https://github.com/wercker/step-bash-template
    # This Wercker step will look for files in our repo with a .template extension.
    # It will expand any environment variables in those files, then remove the
    # template extension.
    - bash-template

The first step in the deploy-to-kubernetes pipeline is a really cool Wercker step called bash-template which looks for any file in the current directory with a “.template” extension. It then expands any environment variables found within those files, and drops the “.template” extension when done.
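If you’re curious about what that effectively does, a hand-rolled equivalent using envsubst from the gettext package would look roughly like this (a sketch for illustration only, not the actual step implementation):

# Expand environment variables in every *.template file and write the
# result to a file without the .template suffix
for f in *.template; do
    envsubst < "$f" > "${f%.template}"
done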

Remember our app repository file list from a few sections above? Well, the other two files we haven’t gone over yet are template files that will be affected by our bash-template build step:

1. kubernetes_deployment.yml.template
2. kubernetes_service.yml.template

Take a look at kubernetes_deployment.yml.template:

# This template file will have its environment variables expanded
# and the .template extension removed by the bash-template Wercker step.
# See wercker.yml.

apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: get-ip
  labels:
    commit: ${WERCKER_GIT_COMMIT}
spec:
  replicas: 1
  selector:
    matchLabels:
      app: get-ip
  template:
    metadata:
      labels:
        app: get-ip
        commit: ${WERCKER_GIT_COMMIT}
    spec:
      containers:
      - name: get-ip-server
        image: ${GCR_TAG}:${WERCKER_GIT_COMMIT}
        ports:
        - name: get-ip-server
          containerPort: 8080
          protocol: TCP

This file contains a Kubernetes deployment definition. It will tell Kubernetes to create a deployment consisting of a container running the image built in the build pipeline, which should now be sitting in GCR. The ${WERCKER_GIT_COMMIT} environment variable will be replaced with the commit hash of this run’s revision of the code, which is also the Docker image tag for this run’s image.

The other template file, kubernetes_service.yml.template, is the Kubernetes service definition for our application.

Back in wercker.yml, the next step creates a directory to put the expanded Kubernetes yml files in:

# The step above should leave us with a Kubernetes service and deployment yml files.
# We'll create a directory to move them to.
- script:
    name: Prepare Kubernetes files
    code: |
      mkdir $WERCKER_OUTPUT_DIR/kubernetes
      mv kubernetes_*.yml $WERCKER_OUTPUT_DIR/kubernetes

The final step of the pipeline will use the kubectl step to authenticate and deploy the application to your GKE Kubernetes cluster!

# Since we're using GKE, we'll use a fork of the kubectl step that supports
# GKE service account authentication. We need to pass some GKE specific configuration
# to ensure we can authenticate, then point kubectl at the directory containing our
# Kubernetes configuration.

# `apply` is a good command to use here, as it'll create Kubernetes entities if they are missing.
- riceo/kubectl:
    name: deploy to kubernetes
    server: $KUBERNETES_MASTER
    gcloud-key-json: $GCP_KEY_JSON
    gke-cluster-name: $GKE_CLUSTER_NAME
    gke-cluster-zone: $GKE_CLUSTER_ZONE
    gke-cluster-project: $GKE_CLUSTER_PROJECT
    command: apply -f $WERCKER_OUTPUT_DIR/kubernetes/

You may notice that the kubectl step defined here is actually riceo/kubectl. This is a great example of how Wercker allows you to use both built-in steps (provided by Wercker) and user-submitted steps. I created a fork of the Wercker kubectl step to add Google Container Engine service account authentication support, and published it on the Wercker step registry for everyone to use.

In order for the riceo/kubectl step to connect to your GKE cluster, you’ll need to pass it the Google service account JSON from earlier, along with some extra environment variables set on the Wercker web interface:

GKE_CLUSTER_NAME - The cluster name you decided on when you created your GKE cluster
GKE_CLUSTER_ZONE - The availability zone you put your GKE cluster in
GKE_CLUSTER_PROJECT - The name of the project (remember to use the hyphenated project ID) you placed your GKE cluster in
KUBERNETES_MASTER - Your Kubernetes master node IP. You can get this from `kubectl config view` under the cluster.server parameter; see the example below
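For example, one way to pull the master endpoint out of the kubeconfig entry that gcloud generated for you (the jsonpath variant assumes a reasonably recent kubectl):

# Show the kubectl config and look for the cluster "server" entry
kubectl config view | grep server

# Or extract it directly for the current context
kubectl config view --minify -o jsonpath='{.clusters[0].cluster.server}'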

Extra Wercker environment variables

In theory the deploy-to-kubernetes pipeline should now be able to interact with GKE! But how do you trigger this pipeline?

You could trigger it manually once the build pipeline completes, but for the sake of achieving an automated build, test, and deploy process, let’s modify your workflow to trigger the deploy-to-kubernetes pipeline after a successful run of the build pipeline!

To do this, go to the “Workflows” tab for your project on the Wercker control panel, then:

  1. Select “Add new pipeline”
  2. Give your pipeline a name. For simplicity’s sake I called mine deploy-to-kubernetes
  3. Enter deploy-to-kubernetes in to the “YML pipeline name” field to map this pipeline to the pipeline of the same name in wercker.yml
  4. Create the pipeline with the default hook type
  5. Click on the “Workflows” tab again to take you back to the main Workflows page.
  6. Click the blue plus icon after the build step under the Workflow editor
  7. In the popup box, leave the branches box filled with an asterisk, then select deploy-to-kubernetes in the “Execute pipeline” box.

Chaining workflow pipelines gke

This will configure your Wercker application to start a “deploy-to-kubernetes pipeline” run after a successful build pipeline run! Time to test!

  1. Select the “Runs” tab for your application
  2. Click on the only run listed, which should be a green build pipeline.
  3. On the right side of the page there should be a dropdown box labeled “Actions”. Click it.
  4. Select deploy-to-kubernetes

What you’re doing here is triggering a deploy-to-kubernetes pipeline off of your last successful build pipeline run.

Triggering the deploy-to-kubernetes pipeline from the Actions dropdown

Once this is done you can confirm that your container is running on Kubernetes by running the following locally:

[22:52:47] riceo:~ $ kubectl get pods
NAME                      READY     STATUS    RESTARTS   AGE
get-ip-3658735603-b9lmw   1/1       Running   0          20s

If this run is successful you will have just deployed your application from source to Kubernetes!
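A couple of other kubectl commands are handy for checking on the deployment itself, for example:

# Show the deployment created from kubernetes_deployment.yml
kubectl get deployments

# Dig into its state, including which image tag was rolled out
kubectl describe deployment get-ip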

Step Eight: Accessing our application from the Internet

So, how do you get to your application from the Internet? Well, let’s go back a bit and take a look at the Kubernetes service template file in our application repo:

# This template file will have its environment variables expanded
# and the .template extension removed by the bash-template Wercker step.
# See wercker.yml.

apiVersion: v1
kind: Service
metadata:
  name: get-ip
  labels:
    app: get-ip
    commit: ${WERCKER_GIT_COMMIT}
spec:
  ports:
  - port: 80
    targetPort: 8080
  selector:
    app: get-ip
    commit: ${WERCKER_GIT_COMMIT}
  type: LoadBalancer

The very last line of this file is type: LoadBalancer, which uses the Google Cloud Platform Kubernetes cloud provider to provision a GCP load balancer instance pointing at our application’s pod. This all happens automatically when Kubernetes provisions the service for you, and if the pod needs to be moved to another node, Kubernetes will keep the load balancer routing traffic to it automatically!

You can see the current IP address of the load balancer by describing the Kubernetes service with kubectl:

[20:55:23] riceo:wercker-gke-demo git:(master) $ kubectl describe service get-ip
Name:                   get-ip
Namespace:              default
Labels:                 app=get-ip
                        commit=1da23783dbcf975bd1ee36d86e71777bcfcf56ce
Selector:               app=get-ip,commit=1da23783dbcf975bd1ee36d86e71777bcfcf56ce
Type:                   LoadBalancer
IP:                     10.59.245.13
LoadBalancer Ingress:   104.199.55.23
Port:                   <unset> 80/TCP
NodePort:               <unset> 30497/TCP
Endpoints:              10.56.0.6:8080
Session Affinity:       None
Events:

If you hit the LoadBalancer Ingress IP reported in the output of the above command in your browser, you should see your application, built, tested, and deployed by Wercker to GKE - accessible from the public internet!

Ingress working
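Or, if you prefer the command line, curl the load balancer directly, substituting your own LoadBalancer Ingress IP (the response shown here is illustrative):

curl http://104.199.55.23/
# {"client_ip": "<your public IP>"}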

WE DID IT

Conclusion

So there we have it! You just built and configured a Kubernetes cluster via GKE, configured a Docker image registry via GCR, and built a Wercker workflow for building, testing, and deploying an application to Kubernetes whenever a commit is pushed to Github!

We’ll be expanding on more advanced topics such as using the Wercker CLI and parallel testing on Wercker in future posts. If you have any questions in the meantime feel free to reach out to Wercker on Twitter, or come have a chat in our public Wercker Slack channel!

 

Topics: Product, Kubernetes, Containers, Tutorials