Deploy Docker Containers to a Digital Ocean Server | Guide

Wercker is a Docker-Native CI/CD Automation platform for Kubernetes & Microservice Deployments

David Griver
February 25, 2016

The goal of this guide is to get you comfortable automating your deployments to a Digital Ocean server using the wercker platform. 


What is Wercker? In short, it’s a platform that, among other things, lets you automate the deployment and containerization of applications. If you don’t understand exactly what that means, don’t sweat it; everything will be explained as we go.

For the purpose of this tutorial I’m going to assume you already have your application source files committed to GitHub/Bitbucket. If not, do that first and come back.

The plan

When you trigger a git push, your application will be containerized by wercker and pushed to the Docker Hub registry (think of it as the GitHub of containers). Once it’s pushed, your Digital Ocean server will download the new version of your container, remove the old one, and run the new one, all without you lifting a finger. Sound good? Let’s go!

Step 1 - Creating accounts for Docker Hub and wercker.

If you don’t have both a wercker and a Docker Hub account, now would be the time to get that out of the way.

Step 2 - Setting up your wercker application

First, create a new wercker application and link it to your GitHub/Bitbucket repository.

To enable wercker to communicate with both the Docker Hub registry and your Digital Ocean server, you are going to add some environment variables and an SSH key.

Under application settings > ‘Environment variables’, add two new variables called ‘DOCKER_USERNAME’ and ‘DOCKER_PASSWORD’. Don’t forget to set ‘DOCKER_PASSWORD’ as protected!


Head over to the SSH keys section and generate a new key pair with the name ‘digitalocean’. Copy the public key now, as you’re going to need it when you set up your Digital Ocean server.
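(If you’d rather generate the pair on your own machine and paste it into wercker instead, the equivalent is a single ssh-keygen call; the file name ‘digitalocean’ below just mirrors the key name used in this guide.)

```shell
# Generate an RSA key pair named "digitalocean" with no passphrase;
# the .pub file is what you'll paste into Digital Ocean later.
ssh-keygen -t rsa -b 4096 -f ./digitalocean -N "" -C "wercker-deploy"
cat ./digitalocean.pub
```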

Head over to the targets section and click ‘add deploy target’ > ‘custom deploy’. Give it a name like ‘Production’ and set it to auto-deploy on the ‘master’ branch (this isn’t strictly necessary, but it saves you the hassle of manually deploying your application after wercker builds it).

Once your deploy target has been created, you’re going to add the SSH key you just generated. Click ‘add new variable’, name it ‘DIGITAL_OCEAN’, choose ‘SSH key pair’, and select ‘digitalocean’.


Step 3 - Setting up your Digital Ocean Server

Digital Ocean makes setting up a server with Docker a piece of cake. Simply go to ‘Create Droplet’ > ‘One-click apps’ and choose Docker.

Before you click create, under ‘Add your SSH keys’ click ‘new’ and paste in the public key you copied during the previous step.

Create your droplet and let Digital Ocean do its thing.

Step 4 - Setting up your wercker.yml

The wercker.yml file contains sets of instructions that wercker uses to build and deploy your application as a container. We are going to call each set of instructions a pipeline, and each instruction a step. The pipeline we are going to concentrate on is, of course, the deploy pipeline.
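As a rough sketch (assuming the Node.js base image used in this guide), the overall shape of the file is a base image plus one section per pipeline; the placeholder step here is illustrative, not part of the final file:

```yaml
# Rough skeleton of a wercker.yml: one base image plus two pipelines,
# each made up of steps.
box: node
build:
  steps:
    - npm-install
deploy:
  steps:
    - script:
        name: placeholder step
        code: echo "deploy steps go here"
```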

Create a wercker.yml file in the root of your project and open it with your editor of choice.

The first line we’re going to add is a base image that wercker will use to create our container. If you are creating a Node.js app, that would be:

box: node

If you are creating a python app you might use ‘box: python’ or if you would like a specific version ‘box: python:2.7.11’.

The image you specify will be pulled from Docker Hub’s list of official Docker images.

Now we are ready to define our two pipelines, build & deploy. Our build pipeline is going to be pretty simple:

build:
  # The steps that will be executed on build
  steps:
    # A step that executes the `npm install` command
    - npm-install

As you can see, we only have one step right now. This is where you would add testing, linting, and anything else you might fancy.
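For instance, a build pipeline that also runs your test suite might look like this (this assumes your package.json defines a test script; the extra step is a suggestion, not part of the final file):

```yaml
build:
  steps:
    - npm-install
    # hypothetical extra step: run the project's test suite
    - script:
        name: run tests
        code: npm test
```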

Let’s take a look at what we want our first few steps to look like:

deploy:
  steps:
    - npm-install
    - script:
        name: install supervisor
        code: npm install -g supervisor
    - internal/docker-push:
        username: $DOCKER_USERNAME
        password: $DOCKER_PASSWORD
        repository: <username>/<app-name>
        ports: "<port exposed by app>"
        cmd: /bin/bash -c "cd /pipeline/source && supervisor --watch ./src src/server.js"

First we run npm-install; after that we install supervisor to make sure our app restarts if it crashes.

The third step is where it starts getting interesting. This is a special, internal step that you can use to push containers to Docker Hub. Under repository, replace <username> and <app-name> with your Docker Hub username and a name for your application. Also replace <port exposed by app> with whatever port your application’s server exposes.

Under cmd we first change directory to /pipeline/source (where wercker stores our app inside the container) and then run our supervisor command to get the app going. This cmd is not run now; it is being ‘baked’ into the container, which means Docker will run it for us when we start the container later down our pipeline.

The next two steps we’re going to add are:

- add-ssh-key:
    keyname: DIGITAL_OCEAN
- add-to-known_hosts:
    hostname: <ip address>

First we add the SSH key we defined for our deploy target so that we can access our droplet. Then we add our server to the known hosts; replace <ip address> with your droplet’s IP address.

Next up we have:

- script:
    name: pull latest image
    code: ssh root@<server ip> docker pull <username>/<app-name>:latest
- script:
    name: stop running container
    code: ssh root@<server ip> docker stop <app-name> || echo 'failed to stop running container'
- script:
    name: remove stopped container
    code: ssh root@<server ip> docker rm <app-name> || echo 'failed to remove stopped container'
- script:
    name: remove image behind stopped container
    code: ssh root@<server ip> docker rmi <username>/<app-name>:current || echo 'failed to remove image behind stopped container'
- script:
    name: tag newly pulled image
    code: ssh root@<server ip> docker tag <username>/<app-name>:latest <username>/<app-name>:current
- script:
    name: run new container
    code: ssh root@<server ip> docker run -d -p 8080:<port exposed by app> --name <app-name> <username>/<app-name>:current

This is where the magic happens. Don’t worry, it’s way simpler than it looks.

All the commands start with ‘ssh root@<server ip>’ because they are executed on our server. Don’t forget to replace <server ip>, <username>, <app-name> and <port exposed by app>!

  • First we pull the latest version of our container (the one we just pushed).
  • Then we stop the currently running container.
  • Now we remove the container we just stopped.
  • The next step is mainly for cleanup: when we pull a Docker container we are actually pulling an image that Docker uses to create the container. These can take up a lot of space, so this step removes the old image.
  • Now we tag the newly pulled image as the current image so that we can identify it for cleanup next time.
  • Finally we run our new container. Notice that we map port 8080 to the port exposed by our application. The -d flag signifies that we want to detach the container’s process after running it (so that it runs in the background), and the --name flag gives our container a name so that we can easily stop/remove/restart it.
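Put together, the remote commands amount to the script below. This is only a dry-run sketch that prints each command instead of executing it; the remote() helper, IP address, username, app name and port are placeholder examples, not values taken from wercker.

```shell
#!/bin/sh
# Dry-run sketch of the deploy steps: print each remote command instead
# of executing it. All values below are example placeholders.
SERVER_IP="203.0.113.10"
DOCKER_USER="myuser"
APP_NAME="myapp"
APP_PORT="3000"

# In a real deploy you would drop the echo so the command runs over ssh.
remote() { echo "ssh root@${SERVER_IP} $*"; }

remote docker pull "${DOCKER_USER}/${APP_NAME}:latest"
remote docker stop "${APP_NAME}"
remote docker rm "${APP_NAME}"
remote docker rmi "${DOCKER_USER}/${APP_NAME}:current"
remote docker tag "${DOCKER_USER}/${APP_NAME}:latest" "${DOCKER_USER}/${APP_NAME}:current"
remote docker run -d -p "8080:${APP_PORT}" --name "${APP_NAME}" "${DOCKER_USER}/${APP_NAME}:current"
```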

That’s it! :)

Commit your wercker.yml and push to your master branch, then watch your project page as wercker first builds and then deploys your application to Digital Ocean.

 

Topics: Product, Tutorials, Integrations