How to run scheduled jobs on a Kubernetes cluster

Wercker is a Docker-Native CI/CD Automation platform for Kubernetes & Microservice Deployments

Benno van den berg
August 1, 2016

At Wercker we need to run scheduled jobs on our Kubernetes cluster, but Kubernetes’ answer to cron (ScheduledJobs) isn’t quite production ready yet. So in the meantime we set out to create a simple cron daemon to periodically run our jobs for us.

Editor’s note: Kubernetes has since made its answer to cron production ready. Read more about it here. Wercker is proud to have provided this stopgap to the community :)

Overview

The ScheduledJobs functionality is on the Kubernetes roadmap, but we needed it now, so in anticipation we created Cronetes, which takes Kubernetes Jobs and launches them in a Kubernetes cluster. When ScheduledJobs is finished, migrating over should be easy because Cronetes uses the same Job objects.
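The core idea can be sketched in a few lines of Python (an illustration of the approach, not Cronetes’ actual Go source; the timestamp suffix for unique Job names is an assumption to avoid name collisions between runs):

```python
# Sketch of the cron-daemon idea: on each scheduled tick, derive a Job from
# the configured template and submit it to the Kubernetes API.
import copy
import time

def make_job_instance(job_template, now=None):
    """Derive a uniquely named Job from the configured template, so that
    repeated launches don't collide on metadata.name."""
    now = now if now is not None else int(time.time())
    job = copy.deepcopy(job_template)  # leave the template itself untouched
    job["metadata"]["name"] = "%s-%d" % (job["metadata"]["name"], now)
    return job

# Same shape as the configured Job in the example below
template = {
    "apiVersion": "batch/v1",
    "kind": "Job",
    "metadata": {"name": "sendemails"},
}

print(make_job_instance(template, now=1470009600)["metadata"]["name"])
# sendemails-1470009600
```

Each tick would then POST the derived object to the cluster’s `batch/v1` Jobs endpoint.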

To see all the possibilities Cronetes has to offer, check out the repository.

Here’s how it works:

In the following tutorial we’re going to use Wercker to build a new Docker image, which extends the official Cronetes image by adding a custom configuration to it.

First you need a repository on either GitHub or Bitbucket; then add an application for this repository on app.wercker.com. This repository will host the configuration and the wercker.yml.

Cronetes needs a configuration to run, which consists of a schedule and a Kubernetes Job. Here is an example configuration, followed by an example wercker.yml:

# config.yml
# Every minute
- schedule: 0 * * * * *
  job:
    apiVersion: batch/v1
    kind: Job
    metadata:
      name: sendemails
    spec:
      template:
        metadata:
          name: sendemails
        spec:
          restartPolicy: Never
          containers:
          - name: job
            image: debian
# wercker.yml
box: debian

# build creates a config file. Currently it is a static file, but could be dynamic
build:
  steps:
    - script:
        name: forward config files
        code: cp config.yml $WERCKER_OUTPUT_DIR

# push-quay takes cronetes as the base container, adds the config and pushes it to quay
push-quay:
  box:
    id: quay.io/wercker/cronetes
    registry: https://quay.io
    # Overriding the entrypoint: see https://github.com/wercker/wercker/issues/218
    entrypoint: /bin/sh -c
    cmd: /bin/sh
  steps:
    - script:
        name: install apk packages
        code: |
          echo "@edge http://dl-cdn.alpinelinux.org/alpine/edge/main" >> /etc/apk/repositories
          apk update && apk add ca-certificates

    - script:
        name: forward config files
        code: mv config.yml /

    - internal/docker-push:
        repository: quay.io/wercker/custom-cronetes
        registry: https://quay.io
        username: $DOCKER_USERNAME
        password: $DOCKER_PASSWORD
        tag: $WERCKER_GIT_BRANCH-$WERCKER_GIT_COMMIT
        entrypoint: /cronetes

This wercker.yml consists of two pipelines: build and push-quay. The build pipeline in the example simply copies a static file, but it could also generate the configuration using templating. The push-quay pipeline uses the Cronetes base image, adds the configuration to it, and then pushes the resulting image to a private registry.
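A dynamic build step could, for instance, render config.yml with sed (a hypothetical sketch; the `__JOB_IMAGE__` placeholder and the `JOB_IMAGE` variable are assumptions, not part of the actual pipeline above):

```shell
# hypothetical dynamic build step: substitute the job's image at build time
JOB_IMAGE=debian
sed "s|__JOB_IMAGE__|${JOB_IMAGE}|" <<'EOF' > config.yml
- schedule: 0 * * * * *
  job:
    apiVersion: batch/v1
    kind: Job
    metadata:
      name: sendemails
    spec:
      template:
        spec:
          restartPolicy: Never
          containers:
          - name: job
            image: __JOB_IMAGE__
EOF
grep 'image:' config.yml
```

The rendered config.yml would then be forwarded via `$WERCKER_OUTPUT_DIR` exactly as the static file is in the pipeline above.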

For the push-quay pipeline you need to create a pipeline on app.wercker.com. Add any private environment variables, such as the Docker registry username and password, if necessary. Once the run for the build pipeline is finished, you can deploy that run to the push-quay pipeline.

Now you can start using this Docker image. The easiest way to do this is by hosting it in Kubernetes with a Deployment:

apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: custom-cronetes
spec:
  replicas: 1
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1
      maxSurge: 0
  template:
    metadata:
      labels:
        app: custom-cronetes
    spec:
      containers:
      - name: custom-cronetes
        image: quay.io/wercker/custom-cronetes:${WERCKER_GIT_BRANCH}-${WERCKER_GIT_COMMIT}
        args: [
          "--kube-in-cluster",
          "cron",
          "--config=/config.yml",
        ]

You need to change the image in the previous example to your own custom Cronetes image. After adding this Deployment to your own Kubernetes cluster you’ll have Cronetes running with your own configuration.

If you need to add or remove jobs, you simply update the configuration and wait for a new Docker image to be built. Then you can instruct Kubernetes to use that new image.
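Concretely, pointing the Deployment at the freshly built tag might look like this (the branch and commit values are hypothetical examples; the kubectl line needs access to your cluster, so it is shown commented out):

```shell
# hypothetical tag values, as produced by the Wercker run that built the image
WERCKER_GIT_BRANCH=master
WERCKER_GIT_COMMIT=abc1234
IMAGE="quay.io/wercker/custom-cronetes:${WERCKER_GIT_BRANCH}-${WERCKER_GIT_COMMIT}"

# roll the Deployment to the new image (requires kubectl configured for your cluster):
# kubectl set image deployment/custom-cronetes custom-cronetes="$IMAGE"
echo "$IMAGE"
```

Because the Deployment uses a RollingUpdate strategy, Kubernetes replaces the running Cronetes pod with one using the new image.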

See our repository for more examples or open an issue.


Topics: Product, Containers, Tutorials, Integrations