There's a certain warm and fuzzy feeling to be felt when working on well-tested code.
Your reward for spending precious development time writing tests comes in the form of discovering edge cases and bugs well before your complex web application or microservice gets to a production environment.
However, complex applications usually come with complex test flows, which can slow down your development if not handled with care. Since one of the key benefits of test-driven development is the speed of feedback, I'd like to show you a few ways Virtual Private Pipelines can help keep your development agile with tests.
Let's take a look at a demo application that we'd like to test and build using Wercker. Name My Cat is a simple website built in Python using Flask that helps prospective cat owners name their new pet. It simply picks a random cat name from a pool of potential names held in a Postgres database.
The user is able to submit a cat name to the pool of names using a simple form which fires a POST request, performs some very simple server side validation, then returns a success or failure message.
Each test is self-contained, and checks for things like:
- Ensuring a user isn't able to add an empty cat name
- Checking that a name is added to the database if successfully submitted
- Making sure a random name is pulled from the database on the homepage
- Correct HTTP response codes are returned to the user
- The user sees the correct messages in request responses
- Code coverage of tests
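The demo's real tests live in tests.py and exercise the Flask app end to end; purely as an illustration of the self-contained style, a minimal unittest for the empty-name rule might look like this (the `validate_cat_name` helper is hypothetical, not the demo's actual code):

```python
import unittest


def validate_cat_name(name):
    """Hypothetical stand-in for the demo's server-side validation:
    reject empty or whitespace-only names, accept anything else."""
    return bool(name and name.strip())


class ValidationTests(unittest.TestCase):
    def test_empty_name_is_rejected(self):
        self.assertFalse(validate_cat_name(""))
        self.assertFalse(validate_cat_name("   "))

    def test_valid_name_is_accepted(self):
        self.assertTrue(validate_cat_name("Whiskers"))


if __name__ == "__main__":
    # exit=False so the script doesn't sys.exit() after the tests run
    unittest.main(exit=False, verbosity=2)
```

Because each test sets up everything it needs, any subset of them can run in any order, which is what makes splitting them across pipelines (as we'll do later) safe.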
I suggest forking the demo application repo into your own GitHub account and creating a corresponding Wercker Web application for it within your own Wercker account. This should allow you to follow along and replicate the behaviour described in the rest of this post.
Whilst developing locally I've been using virtualenv and pip to manage application dependencies. You can test the application locally by cloning your fork of the repo, and:
Creating and activating a virtualenv
virtualenv env && . env/bin/activate
Installing the application dependencies defined within requirements.txt into the virtualenv
pip install -r requirements.txt
Exporting the required application environment variables. If you're trying this locally you'll need a Postgres server to connect to.
```shell
export FLASK_APP=namemycat.py
export PG_DB="namemycat"
export PG_USER="namemycat"
export PG_PASS="password"
```
Initialising the database
Starting the application!
```shell
[11:40:21] riceo:namemycat git:(master*) $ flask run
 * Serving Flask app "namemycat.namemycat"
 * Running on http://127.0.0.1:5000/ (Press CTRL+C to quit)
```
Hitting localhost on the above port in your browser should now show the application:
You could also run the application tests by running:
python tests.py -v
Or better yet, you could use the Wercker CLI to run your tests without having to set up Postgres, virtualenv, or install any Python dependencies on your local machine by running:
wercker build --pipeline test
This command will run through the "test" pipeline I defined in the application wercker.yml file, which would usually be run as part of the Wercker Web workflow, but the Wercker CLI allows us to test this locally.
Let's break down what our application's wercker.yml file looks like.
First, the file defines a Docker container our pipelines will use as a base - in this case, we're using Docker's official Python image, along with a Postgres service. Every Wercker Pipeline that runs for this application will now have an instance of Postgres available to it, built from the public Postgres Docker image. This is perfect for our application since it won't run without one.
```yaml
box: python:2.7
services:
  - id: postgres
    env:
      POSTGRES_PASSWORD: password
      POSTGRES_USER: namemycat
```
Our first actual pipeline is called `build` and is intended to be triggered when Wercker detects a Git push. In this case, the name `build` isn't exactly relevant since we're not actually building anything, but it's the default pipeline Wercker Web creates when adding a new application. We could change the name if we wanted to.
The sole purpose of this pipeline for our application is to check that the Postgres service is running and listening for connections. I took inspiration from the script mentioned in the Linking Services documentation and adapted it to work with the tools that the stripped down Python 2.7 Docker container ships with. Hint: It ships with Python.
The Linking Services documentation also explains the environment variables that are referenced by this step.
```yaml
build:
  steps:
    - script:
        name: "Wait for Postgres to come up"
        code: |
          while ! $(python -c "import socket; soc=socket.socket(); soc.connect(('$POSTGRES_PORT_5432_TCP_ADDR', $POSTGRES_PORT_5432_TCP_PORT))"); do sleep 3; done
```
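If you prefer a pure-Python take on that wait loop, a standalone helper along these lines would do the same job (a hypothetical sketch, not part of the demo repo; in the pipeline you'd pass it the `$POSTGRES_PORT_5432_TCP_ADDR` and `$POSTGRES_PORT_5432_TCP_PORT` values):

```python
import socket
import time


def wait_for_port(host, port, retry_delay=3, max_attempts=20):
    """Block until a TCP connection to host:port succeeds, retrying
    every `retry_delay` seconds. Returns True once the service is
    accepting connections, or False if we give up."""
    for _ in range(max_attempts):
        try:
            # A successful connect means the service is listening.
            with socket.create_connection((host, port), timeout=2):
                return True
        except OSError:
            time.sleep(retry_delay)
    return False
```

Giving up after a bounded number of attempts (rather than looping forever, as the shell one-liner does) lets a misconfigured service fail the pipeline instead of hanging it.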
The `test` pipeline will connect to our Wercker Postgres service and run through the various Integration, Functional, and code coverage tests defined for the application in tests.py:
```yaml
test:
  steps:
    - pip-install
    - script:
        name: "Set environment variables"
        code: |
          export PG_HOST=$POSTGRES_PORT_5432_TCP_ADDR
          export PG_PORT=$POSTGRES_PORT_5432_TCP_PORT
          export PG_USER=$POSTGRES_ENV_POSTGRES_USER
          export PG_PASS=$POSTGRES_ENV_POSTGRES_PASSWORD
          export FLASK_APP=namemycat.py
          export FLASK_DEBUG=0
    - script:
        name: Run Integration tests
        code: |
          python tests.py -v IntegrationTests
    - script:
        name: Run Functional tests
        code: |
          python tests.py -v FunctionalTests
    - script:
        name: Check code coverage
        code: |
          coverage run --include namemycat.py tests.py
          coverage report
```
The final Pipeline is used to deploy our tested, dependency-packed Docker image containing the latest revision of our code to the Container scheduler of our choice. I've left this as a stub since it's not the point of this post and is a big enough topic to have its own. We wrote one!
```yaml
deploy:
  steps:
    - script:
        code: |
          echo "This is where we would build and push our docker image to a registry"
    - script:
        code: |
          echo "This is where we would tell Kubernetes to deploy our container"
```
So far I've mentioned that the `build` pipeline is triggered whenever a Git push occurs, but what about the other pipelines? Well, these are configured on the Workflows tab of Wercker Web for our application.
The process for creating a Workflow is:
- Head to the Workflows tab of your application
- Select "Add new pipeline"
- Give the pipeline a name. This can be anything since it's for display purposes only, but I tend to use the pipeline name from wercker.yml for the sake of clarity.
- Define the Pipeline name as it is in our wercker.yml file for this pipeline, e.g "build", or "test"
- Leave the radio button set to "Default"
- Hit "Create"
- Select the Workflows tab again to go back to the Workflow page
Repeat steps 1-7 for each pipeline defined in the application's wercker.yml file (i.e. `test` and `deploy`). Then, to chain them together:
- Select the blue "+" button after the build pipeline within the Workflow editor
- Select the next pipeline you want to run in the "Execute pipeline" drop-down.
Repeat the above until the Workflow editor looks like this:
At this point, everything should be configured to run a serial workflow. To see this in action, head to the "runs" tab of your application and select the "trigger a build now" link.
You should see the Workflow run through each pipeline in the order that you defined in the above steps. It should take a few minutes, but...
... You should end up with a green run! Your application was just taken from source to a known good revision, with passing tests run against an external Postgres service, in around 50 lines of YML config!
One thing you may have noticed during your first green build run is that two steps in the pipeline took substantially longer to run than the rest.
In this case, I have artificially made our tests run longer than they need to in tests.py, but in reality there are plenty of situations where a complex application may take minutes or hours to fully run through its tests.
We can decrease the time it takes by running our tests in parallel with Wercker Workflows. All it takes is two small changes:
- Re-structure our wercker.yml to split the `test` pipeline into multiple pipelines
- Update the Workflow on Wercker Web to run the pipelines at the same time.
First, let's look at a modified version of our wercker.yml, which can be found in the `parallel` branch of the demo Git repo:
```yaml
box: python:2.7
services:
  - id: postgres
    env:
      POSTGRES_PASSWORD: password
      POSTGRES_USER: namemycat

build:
  steps:
    - script:
        name: "Wait for Postgres to come up"
        code: |
          while ! $(python -c "import socket; soc=socket.socket(); soc.connect(('$POSTGRES_PORT_5432_TCP_ADDR', $POSTGRES_PORT_5432_TCP_PORT))"); do sleep 3; done

test1:
  steps:
    - pip-install
    - script:
        name: "Set environment variables"
        code: |
          export PG_HOST=$POSTGRES_PORT_5432_TCP_ADDR
          export PG_PORT=$POSTGRES_PORT_5432_TCP_PORT
          export PG_USER=$POSTGRES_ENV_POSTGRES_USER
          export PG_PASS=$POSTGRES_ENV_POSTGRES_PASSWORD
          export FLASK_APP=namemycat.py
          export FLASK_DEBUG=0
    - script:
        name: Run Integration tests
        code: |
          python tests.py -v IntegrationTests
    - script:
        name: Run Functional tests
        code: |
          python tests.py -v FunctionalTests

test2:
  steps:
    - pip-install
    - script:
        name: "Set environment variables"
        code: |
          export PG_HOST=$POSTGRES_PORT_5432_TCP_ADDR
          export PG_PORT=$POSTGRES_PORT_5432_TCP_PORT
          export PG_USER=$POSTGRES_ENV_POSTGRES_USER
          export PG_PASS=$POSTGRES_ENV_POSTGRES_PASSWORD
          export FLASK_APP=namemycat.py
          export FLASK_DEBUG=0
    - script:
        name: Check code coverage
        code: |
          coverage run --include namemycat.py tests.py
          coverage report

deploy:
  steps:
    - script:
        code: |
          echo "This is where we would build and push our docker image to a registry"
    - script:
        code: |
          echo "This is where we would tell Kubernetes to deploy our container"
```
You'll notice that it's mostly the same, except I've split the two longest running steps into their own pipelines:
To see this happen in your local fork, I suggest copying or merging the contents of wercker.yml on the `parallel` branch of the demo application repo into your fork's `master` branch and pushing the change.
Now that your wercker.yml has been updated, you'll need to update Wercker Web so it is aware of the changes, and modify the Workflow so that the test pipelines run in parallel. From the Workflows tab on your application, do the following:
- Rename the existing test pipeline to `test1`, and change the YML pipeline it points at to `test1`
- Create a new pipeline called `test2`, pointing at the YML pipeline `test2`
- Add the `test2` pipeline to the Workflow via the Workflow editor by selecting the blue "+" button after the `build` pipeline
- Delete the `deploy` pipeline from the end of the `test1` pipeline, since it would currently be triggered when `test1` ends, even though `test2` might not be finished. More on that later.
Next, head back to the Runs tab and select the last green build pipeline. On the top right side of the page there will be an "actions" drop-down. Select "Execute this pipeline again" from the drop-down to trigger a new run.
This time around, you'll see the pipelines fork after the `build` pipeline has finished executing:
Since we moved the two longest steps from running serially to running in parallel, we have effectively halved our test time!
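To put numbers on that claim (made-up durations, for illustration only): a serial workflow pays the sum of the two pipelines' durations, while a parallel workflow only pays for the slowest branch:

```python
# Hypothetical per-pipeline durations in seconds (illustrative only).
test1_duration = 240   # Integration + Functional tests
test2_duration = 250   # coverage run

# Serial: the pipelines run one after another.
serial_total = test1_duration + test2_duration

# Parallel: both start together; the workflow finishes with the slower one.
parallel_total = max(test1_duration, test2_duration)

print(serial_total)    # 490
print(parallel_total)  # 250
```

The closer in duration your split pipelines are, the closer the speed-up gets to an exact halving; a lopsided split only saves you the shorter branch's time.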
If you have a Wercker community account you will be limited to two concurrent pipeline runs. You can split this up as two applications running one pipeline each at the same time, or one application running two pipelines at the same time, as we are doing in this demo.
If you go over your concurrency allocation by, say, adding a third parallel test pipeline, the extra pipeline will enter a "waiting to start" state until one of your concurrency slots is freed up.
Our Virtual Private Pipeline accounts start with a concurrency of 3, and extra concurrency slots can be added at any time, so serious parallel testers may wish to consider upgrading.
You may be wondering what happened to your automatic "deploy" pipeline when you switched from a serial Workflow to a parallel Workflow. Well, since you now have multiple test pipelines running at the same time your application can't move on to the deploy step until they all complete successfully, or you'd risk deploying a partially tested revision of your application.
We're currently working on native functionality that will wait for completion of all parallel pipelines before moving on to further pipelines in a Workflow. In the meantime, there are a couple of workarounds I'd like to go over:
As a VPP customer, you will have a private work queue with at least three concurrency slots. If you have one slot spare, you could use the wait-github-statuses step from the Wercker Steps registry, written and released as an open-source step by Jason Hoos of MaestroHealth. It takes advantage of Wercker pipelines reporting their status back to GitHub, i.e:
Wercker automatically reports Pipelines that are being run to GitHub if the "report to SCM" tickbox is checked on a given Pipeline.
The wait-github-statuses step waits until all Pipelines are marked as "passing" on GitHub before executing the next Pipeline in your Wercker Workflow.
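The core decision the step makes can be sketched as a small function (an illustration of the idea, not the step's actual source; the payload shape loosely follows GitHub's commit status API, where each status carries a `context` and a `state`):

```python
def all_contexts_passing(statuses, required_contexts):
    """Return True only if every required context appears in the
    commit's statuses with state == "success"."""
    states = {s["context"]: s["state"] for s in statuses}
    return all(states.get(ctx) == "success" for ctx in required_contexts)


required = ["wercker/build", "wercker/test1", "wercker/test2"]

# test2 still running: don't trigger the next pipeline yet.
in_progress = [
    {"context": "wercker/build", "state": "success"},
    {"context": "wercker/test1", "state": "success"},
    {"context": "wercker/test2", "state": "pending"},
]

# Everything green: safe to proceed.
done = [
    {"context": "wercker/build", "state": "success"},
    {"context": "wercker/test1", "state": "success"},
    {"context": "wercker/test2", "state": "success"},
]

print(all_contexts_passing(in_progress, required))  # False
print(all_contexts_passing(done, required))         # True
```

In practice the step polls GitHub repeatedly until this condition holds or a timeout expires, which is why the configuration below takes a list of contexts and a timeout.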
For example, your wercker.yml may look something like:
```yaml
status-wait:
  steps:
    - maestrohealthcaretechnologies/wait-github-statuses:
        status_contexts: wercker/build,wercker/test1,wercker/test2
        timeout: 10
```
With a corresponding Wercker Workflow:

When the `build` pipeline run finishes (triggered by a Git commit), the `test1`, `test2`, and `status-wait` pipelines will begin. The `test1` and `test2` pipelines will run the application tests defined in wercker.yml at their own pace, whilst the `status-wait` pipeline will have the wait-github-statuses step start watching the GitHub commit for status updates.

Once the `test1` and `test2` pipelines have both finished successfully, Wercker will mark them as "passing" statuses on the GitHub commit. The wait-github-statuses step will see this, and trigger the next pipeline in the Workflow.
If you're not a VPP customer yet, you will be limited to two concurrent runs, meaning that whilst you will be able to run two parallel pipelines, you won't be able to take advantage of the wait-github-statuses approach described above. You may also notice a slight delay in the running of pipelines, as Community work queues are shared and have finite capacity.
As an alternative, you could use after-steps to report each pipeline finishing successfully to your favourite collaboration tool, such as [Slack](https://app.wercker.com/applications/54d4a6c742494161430000f5/tab/details/), then manually trigger your deploy pipeline.
I hope this post has given you a good idea of how powerful Wercker can be for speeding up feedback by running tests in series and in parallel. We'll be touching on further tips to speed up your development and testing cycles in future posts, but please do get in touch on Twitter or our public Slack community if you have any questions or comments. We're always keen to hear how our users use Wercker!
Why not join our early access club? We'll invite you to try our beta products and treat you nicely.

As usual, if you want to stay in the loop, follow us on Twitter @wercker or hop on our public Slack channel. If it's your first time using Wercker, be sure to tweet out your #greenbuilds, and we'll send you some swag!