How to create a production Kubernetes cluster with full CI/CD in 5 steps

Wercker is a Docker-Native CI/CD Automation platform for Kubernetes & Microservice Deployments

Garland Kan
February 14, 2017

This tutorial will show you how to create a production quality Kubernetes cluster!

We'll also show you how to set up a sample application that builds and deploys on every push to the GitHub master branch.


Creating the Kubernetes cluster

Getting the Kops tool

The list of the latest downloads for kops can be found here:

$ chmod +x kops-linux-amd64
$ sudo mv kops-linux-amd64 /usr/local/bin/kops

Confirm that it is properly installed by typing kops. You should get a similar output:




Creating a Key Pair

Please note: you will need to create your own ssh public key and import it to the Key Pairs section under EC2 in the Amazon Web Console.

Before we create our kube cluster we will need to create our SSH key pair needed for access:

  1. Log into your AWS web console.
  2. Select the EC2 option.
  3. Under the Network & Security option select Key Pairs.

 Network & Security option_Feb_2017.png

  4. Select Create Key Pair.

Create Key Pair_Feb_2017.png

  5. Give your Key Pair a name and click Create.

Create key pair_Feb_2017.png

The private key file is automatically downloaded by your browser. The filename is the name you specified as the name of your key pair, and the filename extension is .pem. Save the private key file in a safe place!

Important: This is the only chance for you to save the private key file. You'll need to provide the name of your key pair when you launch an instance and the corresponding private key each time you connect to the instance.

  6. If you use an SSH client on a Mac or Linux computer to connect to your Linux instance, use the following command to set the permissions of your private key file so that only you can read it:

$ chmod 400 my-key-pair.pem 
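If you want to confirm the permission change took effect, you can inspect the file mode. This is a sketch with a placeholder file name; `stat -c` is the GNU/Linux form (on macOS, use `stat -f '%Lp'` instead):

```shell
# Placeholder key file; substitute the .pem your browser downloaded.
touch /tmp/my-key-pair.pem
chmod 400 /tmp/my-key-pair.pem
stat -c '%a' /tmp/my-key-pair.pem   # prints: 400
```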

Creating the Kube cluster

Please Note: Before we can create our kube cluster we will have to:

  • Create a Route 53 domain for our cluster.
  • Create an S3 bucket to store our cluster's state with the following name: kop-store-<your_name>.

Instructions for accomplishing both can be found at steps 2 and 3 at the link below:

Export the various env variables:

export AWS_DEFAULT_REGION=us-east-1
export KOPS_STATE_STORE=s3://kop-store-<your_name>
export NAME=demo.k8s.<your_valid_domain_name>
Example of a valid domain name is:

Create the cluster configs:

kops create cluster --zones us-east-1b ${NAME}

Get cluster

The below command will show you your newly created cluster:

$ kops get cluster
NAME                                CLOUD   ZONES
demo.k8s.<your_valid_domain_name>   aws     us-east-1b

Edit the Cluster Configs:

kops edit cluster ${NAME} 

Get the current instance groups:

$ kops get instancegroups --name ${NAME}

current instance groups.png

This output shows you that it will create 4 nodes altogether: 3 master nodes for redundancy and 1 minion (worker) node.

Build the Cluster

This will start launching: subnets, ec2 instances, and start creating your cluster:

kops update cluster ${NAME} --yes

 subnets ec2 instances.png


Once you build the cluster the kube config file with the authorization credentials for access will be in your home directory:


bash logout.png 

kube demo_Feb_2017.png



Getting kubectl tool

Download the kubectl command line tool:

$ curl -LO$(curl -s

Make the kubectl binary executable and move it to your PATH:

$ chmod +x kubectl
$ sudo mv ./kubectl /usr/local/bin/kubectl

After downloading let's enable shell autocompletion:

$ echo "source <(kubectl completion bash)" >> ~/.bashrc

Once you've moved the kubectl binary to /usr/local/bin/, confirm that it is properly installed by typing the command below. You should get output similar to the screen capture below:

$ kubectl


Connection to the Kube API with the Kubectl tool

The kubectl tool uses a kube config file (the default location is ~/.kube/config) to connect to and communicate with our kube cluster. The file that kops generates points to the public IP address of the cluster.

We have set the security groups within AWS so that the public IP of the kube API cannot be reached by default, because we want to severely limit access to these servers. That means you have to explicitly open up access to the kube API.

At this point we will need to add our public IP address to the inbound rule for our master instance in the Security Group of our AWS console.

  • Log into your AWS Console and select EC2.

  • Select the master instance and click on its Security Group option.


Image 2017-02-14 at 3.50.21 pm.png


  • Select the Inbound tab then Edit.


Image 2017-02-14 at 3.58.19 pm.png


  • Choose HTTPS for the type and select My IP then save.


My IP_Feb_2017.png


  • Before we leave the Security Group section, let us repeat the steps above and whitelist the IP addresses below. We will need these IPs in a later step so that the CI/CD service can reach our Kubernetes cluster.

After you've whitelisted your IP, verify connectivity by checking with the nslookup command:

$ nslookup

Please Note: This can take up to 15 minutes for the DNS records to update with a response.


Code snip_Feb_2017.png


General Kubernetes Usage

Listing Pods

You can tell the command line which namespace you want to look at.

Another interesting namespace is the kube-system namespace. This is a namespace that is automatically created for you, and it runs the Kubernetes system pods.

kubectl --namespace kube-system get pods -o wide

Get pods_Feb_2017.png

Also notice the -o wide at the end of the command. It is optional, but it adds extra information, such as which node each pod is running on.


You have a Kubernetes cluster, now what?

We now need to launch some stuff into the cluster and we will use the below repo as an example:

First clone the folder to your system:

$ git clone
$ cd kubernetes-ci-cd

Create the files needed with the kubectl create -f command for each of the yaml files below:

$ kubectl --namespace default create -f ./kubernetes/ingress/

This will create all of the resources in the `ingress` directory.

Launching Ingress

The Kubernetes Ingress is an Nginx pod that reads from the Kubernetes API to create its own configuration so that it can dynamically route inbound traffic to the cluster.

The topology is as follows:

Internet <--> ELB <--> Ingress Nginx Pod <--> Application Pods

The ingress is created after the creation of the default-backend.yaml and rc.yaml files.
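For orientation, here is roughly what one of those manifests can look like. This is a sketch, not the repo's actual file: the host is a placeholder, and `extensions/v1beta1` was the Ingress API group in use at the time.

```yaml
# Sketch of an Ingress resource routing a hostname to the sample app.
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: webapp
  namespace: default
spec:
  rules:
  - host: webapp.example.com        # placeholder hostname
    http:
      paths:
      - path: /
        backend:
          serviceName: webapp       # placeholder service name
          servicePort: 80
```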

Verify the creation of the ingress (nginx) with:

$ kubectl get pods
NAME                                       READY     STATUS    RESTARTS   AGE
ingress-controller-2171626210-2bg8c        1/1       Running   0          4m
ingress-default-backend-2249275363-8f0tw   1/1       Running   0          4m


Build and Deploy With Wercker

You can follow this step-by-step guide on developing, building and deploying a sample app with Wercker:

For our purposes we will walk through adding your app to Wercker:


Wercker create org_Feb_2017.png


  • Select your Git provider along with the app we will deploy, and lastly click Use selected repo.


Wercker choose a repo_Feb_2017.png


  • Choose Wercker will checkout the code without using an SSH key.


Wercker config_Feb_2017.png


  • Check the box and select Finish!


Wercker all done_Feb_2017.png


Triggering your first build

  • Make a change to one of the files in the getting-started-golang folder from the command line then run the below commands:

$ git commit -am 'wercker build time!'
$ git push origin master

Finally, navigate to your app page, where you will see that a new build has been triggered!


Wercker build time_Feb_2017.png


You can also see the commits from wercker in the GitHub repo:

Wercker getting started_Feb_2017.png

Configuring continuous deployments

We will add 2 actions to our wercker.yml file.

1) Push the built docker container to our repository.  You can also push it to any other Docker repositories.

a) Our definition for the “push docker” action:

Configure GUI for this new pipeline:

  • Go to the “Workflow” tab and click the “Add new pipeline” button.
  • Name: docker-push.
  • YML pipeline name: docker-push.
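For reference, the `docker-push` pipeline in `wercker.yml` might look roughly like this. It is a sketch using Wercker's `internal/docker-push` step; the repository name is a placeholder, and `$QUAY_USERNAME`/`$QUAY_PASSWORD` come from the Environment tab:

```yaml
# Sketch only -- adjust the repository to your own Quay.io namespace.
docker-push:
  steps:
    - internal/docker-push:
        username: $QUAY_USERNAME
        password: $QUAY_PASSWORD
        registry: https://quay.io
        repository: quay.io/<your_org>/getting-started-golang   # placeholder
        tag: $WERCKER_GIT_COMMIT
```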


Kubernetes CICD.png


Configure GUI with our credentials:

  • Go to the “Environment” tab.
  • Add a variable with key: QUAY_USERNAME.
    • This is a username with permission to your repository.

  • Add a variable with key: QUAY_PASSWORD.

    • This is the password for the user.


Wercker Environment tab_Feb_2017.png

2) Deploy the built container onto our Kubernetes cluster when there is a push into the `master` branch.

a) Our definition for the “deploy” action:

Configure GUI for this new pipeline:

  • Go to the “Workflow” tab and click the “Add new pipeline” button.
  • Name: deploy.
  • YML pipeline name: deploy.
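For reference, the `deploy` pipeline might look roughly like the following sketch. It assumes Wercker's `kubectl` step and uses the `KUBE_*` environment variables configured in the Environment tab; the manifest path is a placeholder:

```yaml
# Sketch only -- the manifest path and step options are assumptions.
deploy:
  steps:
    - kubectl:
        server: $KUBE_ENDPOINT
        username: $KUBE_USERNAME
        password: $KUBE_PASSWORD
        insecure-skip-tls-verify: true
        command: apply --namespace $KUBE_NAMESPACE -f kubernetes/webapp/
```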


Wercker workflows tab deploy_Feb_2017.png

Configure GUI with our Kubernetes information:

  • Go to the “Environment” tab.

  • Add a variable with key: KUBE_ENDPOINT.

    • This is the URL to your Kubernetes API server. Remember that we whitelisted IPs for access to this above.

    • The `kops` tool created a Kubernetes configuration file for us when we created the cluster. It is located in: `~/.kube/config`.

    • Run the command to output the entire config:

      • cat ~/.kube/config.

    • This will be the URL of the `server` key.

  • Add a variable with key: KUBE_USERNAME.

    • This is a username that Kubernetes uses for authentication.

    • You can find this in the `~/.kube/config` file.

    • There is a `username` key. This is the user. Unless you have changed it, the default username is: admin.

  • Add a variable with key: KUBE_PASSWORD.

    • This is the password associated with the username.

    • This is the `password` key associated with the username.

  • Add a variable with key: KUBE_NAMESPACE.

    • Kubernetes allows you to segment off the cluster into what they call “namespaces”.  For this tutorial we will use the default namespace called “default”.

    • The value for this is: default.
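As a quick way to pull those values out of the kubeconfig, a grep/awk sketch like the one below works. The sample file here is illustrative; point the commands at your real `~/.kube/config` instead:

```shell
# Create an illustrative kubeconfig (your real one is ~/.kube/config).
cat > /tmp/sample-kube-config <<'EOF'
apiVersion: v1
clusters:
- cluster:
    server: https://api.demo.k8s.example.com
  name: demo
users:
- name: demo
  user:
    username: admin
    password: secret
EOF

# Extract the values used for KUBE_ENDPOINT and KUBE_USERNAME.
KUBE_ENDPOINT=$(grep 'server:' /tmp/sample-kube-config | awk '{print $2}')
KUBE_USERNAME=$(grep 'username:' /tmp/sample-kube-config | awk '{print $2}')
echo "$KUBE_ENDPOINT"   # https://api.demo.k8s.example.com
echo "$KUBE_USERNAME"   # admin
```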


Wercker envirnoment tab pusher_Feb_2017.png


 3) Creating the Pipeline. This strings together our “actions” above into the sequence we want, which is:

build -> push docker container -> deploy to our cluster if it is the master branch.

Configure GUI:

  • Click on the “Workflow” tab.
  • Click on the “+” on the right of the “build” icon.

Image 2017-02-14 at 4.46.32 pm.png


  • In the “Select pipeline” dropbox select “docker-push”. 

    • This tells our pipeline to run the docker-push action after a successful build.


Wercker pipeline docker push_Feb_2017.png


  • Click on the “+” on the right of the “docker-push” action.

    • We will add the deploy next.
  • This time we only want to invoke this action when it happens on the “master” branch.  

Wercker pipeline docker finish_Feb_2017.png


We now have a “Workflow” that looks like this. Notice the “master” under deploy, which denotes that this step only happens for the “master” branch.


Wercker pipeline master_Feb_2017.png


We have finished configuring everything needed to deploy into our Kubernetes cluster.

To invoke it, we just have to make a small change in the GitHub repository and then push the code into master. You can do something as simple as changing or adding a line and then pushing it in. You will get a pipeline run like this one:


Wercker adding godir_Feb_2017.png

You can check your Kubernetes cluster to see that this application has been deployed:

$ kubectl --namespace default get pods -o wide
NAME                                       READY     STATUS    RESTARTS   AGE   NODE
ingress-controller-2171626210-5khg3        1/1       Running   0          13d   ip-172-30-93-224.ec2.internal
ingress-default-backend-2249275363-1vnt6   1/1       Running   0          13d   ip-172-30-93-224.ec2.internal
webapp-3028151003-5lb8f                    1/1       Running   0          30m   ip-172-30-145-246.ec2.internal

Check that the Kubernetes ingress has been deployed:

$ kubectl  --namespace default get ing
NAME      HOSTS                               ADDRESS         PORTS     AGE
webapp   80        49m

We can get the AWS ELB that Kubernetes created for us:

$ kubectl  --namespace default describe svc ingress-lb
Name: ingress-lb
Namespace: default
Labels: <none>
Selector: app=ingress-controller
Type: LoadBalancer
LoadBalancer Ingress:
Port: http 80/TCP
NodePort: http 32660/TCP
Port: https 443/TCP
NodePort: https 30841/TCP
Session Affinity: None
No events.

We can add a DNS CNAME for our Ingress pointing to the AWS ELB address to reach it directly. Without the CNAME, we can reach it via curl by adding a host header.

The Kubernetes ingress, which is running Nginx, is doing virtual host routing:

$ curl -H "HOST:"
"{'cities':'San Francisco, Amsterdam, Berlin, New York','Tokyo'}"


This tutorial walked you through setting up a Kubernetes cluster in a production configuration, and then through setting up Wercker to automatically build and deploy our application.


Like Wercker?

We’re hiring! Check out the careers page for open positions in Amsterdam, London and San Francisco.

As usual, if you want to stay in the loop follow us on twitter @wercker or hop on our public slack channel. If it’s your first time using Wercker, be sure to tweet out your #greenbuilds, and we’ll send you some swag!                             

Topics: Product, Kubernetes, Containers