Creating Jenkins pipelines with Ansible, part 2

The job-dsl and Pipeline plugins

This is a continuation of the previous post. For each project there are only two things we want to do.

  • Check out the source code.
  • Run the pipeline in the Jenkinsfile at the root of the repository.

This can be accomplished with the job-dsl plugin. It takes a definition of Jenkins jobs we want, and creates or updates them, as necessary. Somewhat confusingly, job-dsl itself needs to be run from a Jenkins job, known as a seed job.

The only reason for this job to exist is so that job-dsl can run; every other job should be created with job-dsl. Yes, it's a bit mind-bending, but it gets easier if you think of it not as a job but as a script which you run from Jenkins.
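To make the idea concrete, here's roughly what a minimal job-dsl script could look like — the job name and shell command are made up, and running the seed job would create or update this job:

```groovy
// A minimal job-dsl script: defines one freestyle job with a single
// shell step. The seed job creates or updates it on every run.
job('example') {
    steps {
        shell('echo hello')
    }
}
```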

In our case, every other job will look the same. They'll all be Pipeline jobs which run a Jenkinsfile after checking out a git repository from the same git host.

{% for repository in jenkins_git_repositories %}
pipelineJob('{{ repository }}') {
  definition {
    cpsScm {
      scm {
        git {
          remote {
            url('{{ jenkins_git_user }}@{{ jenkins_git_host }}:' +
                '{{ jenkins_git_path }}/{{ repository }}.git')
          }
        }
      }
    }
  }
}
{% endfor %}


First we need to check if the seed job has already been created.

- name: Get list of jobs
  uri: url="http://localhost:8080/api/json?tree=jobs[name]" return_content=yes
  register: jobs

- name: Check if seed job exists
  set_fact:
    seed_exists: "{{ seed_name in jobs.json.jobs|map(attribute='name')|list }}"

We'll create the seed job if it doesn't exist. If it exists, we'll update its configuration (omitted here, but you can see how in the source) to ensure that it is what it should be. When we run the seed job it will remove jobs for repositories that we've removed from the list and create jobs for repositories that we've added.
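The omitted update could look roughly like this — the URL is an assumption based on the Jenkins API for replacing a job's configuration, and the real version is in the linked source:

```yaml
# Sketch only: POSTing to /job/<name>/config.xml replaces the job's
# configuration with the rendered seed job template.
- name: Update seed job
  uri:
    url: "http://localhost:8080/job/{{ seed_name }}/config.xml"
    method: POST
    HEADER_Content-Type: application/xml
    body: "{{ lookup('template', jenkins_seed_template) }}"
  register: jenkins_seed_updated
  when: seed_exists
```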

- name: Create seed job
  uri:
    url: "http://localhost:8080/createItem?name={{ seed_name }}"
    method: POST
    HEADER_Content-Type: application/xml
    body: "{{ lookup('template', jenkins_seed_template) }}"
  register: jenkins_seed_updated
  when: not seed_exists

- name: Run seed job
  uri:
    url: "http://localhost:8080/job/{{ seed_name }}/build"
    method: POST
    status_code: 201
  when: jenkins_seed_updated|success

This assumes all your git repositories are on the same host and under the same path, with Jenkins using the same username for all of them. You can change the template to take a list of dictionaries which include relevant settings if you need more flexibility, but otherwise all you need to do is list your repositories when applying the role.
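If you do need that flexibility, the variable could become a list of dictionaries along these lines — the key names and hosts here are made up:

```yaml
# Sketch: per-repository settings instead of shared variables.
jenkins_git_repositories:
  - name: website
    host: git.example.com
    user: git
    path: git
  - name: ansible-roles
    host: github.com
    user: git
    path: myorg
```

The template would then loop over these entries and read `repository.host` and friends instead of the shared `jenkins_git_*` variables.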

- hosts: localhost
  roles:
    - role: jenkins-pipeline
      jenkins_admin_pass: use-a-vault-variable-for-this
      jenkins_ssh_private_key: jenkins-id_rsa
      jenkins_git_user: git
      jenkins_git_host: git.example.com  # placeholder for your git host
      jenkins_git_path: git
      jenkins_git_repositories:
        - my-website                     # one line per repository

When you create a new repository you only need to add a single line to that list and run Ansible after you've committed your Jenkinsfile.

A Jenkinsfile for generating static websites

This site is built using Nikola, a static website generator written in Python, and its plugin to use Org mode for GNU Emacs. The repository contains little more than some .org files, which make up the posts, and some configuration and CSS styling. We'll add a Jenkinsfile to

  • Create a Python virtual environment.
  • Install a fixed version of Nikola in this virtual environment.
  • Build the site using Nikola, which outputs a directory containing the site.
  • Create a .tar.gz archive containing the output.
  • Upload the archive to a server, for permanent storage.
  • Download and unpack the archive to an internal "staging" web server.
  • Do the same deployment to production if staging looks fine.

Pipeline scripts are written in Groovy, but Jenkins restricts a lot of things, including certain substring operations. I seem to be better off avoiding Groovy and using the shell to do as much as possible (even though, by the way, you can't get the shell's output back into Groovy in any reasonable way). I'd usually put scripts in separate files, but to keep everything in the Jenkinsfile I'll write the scripts here as strings.

First we use node to say that we want to run on some build machine. It's possible to restrict the node to be of a certain type, like a 64-bit Linux server or a 32-bit Windows server, but there's no need for that with just a single Jenkins server doing everything. Each stage defines a separate step in our pipeline.

node {
  stage 'Checkout source'
  checkout scm
  def artifact = "\$(date +%Y-%m-%d)-\$(git rev-parse --short HEAD).tar.gz"

We'll assume that the packages required by Nikola have already been installed, but we need to set up a Python virtual environment and install Nikola and the Org mode plugin. We also copy a configuration file for the plugin, included in the git repository, to the right location.

Finally, we combine the current date with the short git hash to create a unique artifact name. We include all output from Nikola, which lives in the output directory.

  stage 'Build site'
  sh """pyvenv nikola
        . nikola/bin/activate
        pip3 install wheel==0.29.0 Nikola==7.7.12 webassets==0.11.1
        nikola plugin -i orgmode
        cp init.el plugins/orgmode/init.el
        nikola build
        cd output && tar czf ../${artifact} *"""

Uploading the artifact is easy with scp, but it requires that Jenkins is allowed to log in and that the destination host is in known_hosts.
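Populating known_hosts can itself be automated with Ansible's known_hosts module — a sketch, where the host name is a placeholder for your artifact server:

```yaml
# Hypothetical host; adds the artifact server's key to Jenkins' known_hosts
- name: Add artifact server to known_hosts
  known_hosts:
    path: "~jenkins/.ssh/known_hosts"
    name: storage.example.com
    key: "{{ lookup('pipe', 'ssh-keyscan -H storage.example.com') }}"
```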

  stage 'Upload artifact'
  // "storage" is a placeholder for the artifact server's SSH host
  sh "ssh storage mkdir -p artifacts/"
  sh "scp ${artifact} storage:artifacts/"

Before deploying to production we use input to ask for human approval.

  stage 'Deploy to staging'
  // "web" and "storage" are placeholder SSH hosts; appending "-staging"
  // below changes only the directory the archive is extracted into
  def deployCommand = """
ssh web scp storage:artifacts/${artifact} /tmp
ssh web tar xf /tmp/${artifact} --group www-data -C /usr/share/nginx/www"""
  sh "${deployCommand}-staging"

  stage 'Deploy to production'
  input 'Deploy to production?'
  sh "${deployCommand}"
  // "web" is a placeholder SSH host
  sh "ssh web rm /tmp/${artifact}"

Here's what the pipeline looks like. Concurrent builds get separate workspaces (Jenkins' name for build directories). When that happens the "Build site" stage takes a bit longer, since the Python dependencies need to be downloaded again. All stages share the same workspace, but for this Jenkinsfile it would not be an issue if each stage ran on a different node, since the only file shared between stages is the artifact archive, which is uploaded to a shared server.

[Image: Jenkins pipeline stage view]

You can do a lot of interesting things with parallel builds and so on with Pipeline but the point I'm trying to make is that, despite the number of Ansible lines in these posts, it's easy to get started with Jenkins. You can make use of it for your own projects, whether they are public or private, once you have Jenkins installed and configured.

In my opinion it makes a lot of sense to ignore all the plugins and legacy of Jenkins and use it as a simple and solid automation tool for building, testing, and deploying things. Even static websites.

Creating Jenkins pipelines with Ansible, part 1


Infrastructure as code, continuous integration (CI), continuous delivery (CD), and version control (also known as source control management, or SCM) are good things. In this post and the next we'll automate the deployment of a system to do CI/CD in what I consider a proper way, where the definition of your build and deployment pipelines live alongside your code in git repositories on some server you have SSH access to.

We'll use Jenkins for CI/CD and Ansible to install it. Jenkins has been around for more than a decade, created before anyone had ever uttered the word "devops". Since it's old, Jenkins has a lot of features, most of them provided as plugins. The core itself isn't bad: it's stable and relatively free from bugs because it has been in use by a lot of people for such a long time. All those years have also created a graveyard of plugins, yet some plugins are essential if you want to use Jenkins "properly". Want a timestamp for your builds? There's a plugin for that. Actually, there's more than one plugin for that.

Jenkins comes from a time when people thought it was great to have easy to use web interfaces to configure their build systems. The web interface gets in the way of infrastructure as code, where configuration is version controlled and changed by editing text files, not by clicking buttons.

Pipeline is a "new" feature of Jenkins 2.0, but it is based on a plugin which used to be known as Workflow. Using Pipeline we can describe how to test, build and deploy our project using text. That description is typically saved in a file named Jenkinsfile. We'll also use the job-dsl plugin to create the Pipeline jobs, but everything else will be done with shell scripts and kept as simple as possible.


An Ansible role for Jenkins

We'll create an Ansible role which avoids plugins as much as possible and doesn't use the web interface for configuration. It will use the Jenkins REST API to install plugins and change configuration.

To get all the details you can read the source on GitHub. We'll skip the boring parts, omit some details, and get straight to the interesting bits of the Ansible role.

To disable the setup wizard we need to pass jenkins.install.runSetupWizard=false to Jenkins. Setup wizards may make it easier to get started, but they get in the way of infrastructure as code.

- name: Set Jenkins JAVA_ARGS
  lineinfile:
    dest: "{{ jenkins_defaults_file }}"
    insertbefore: "^JENKINS_ARGS.*"
    line: "JAVA_ARGS=\"-Djava.awt.headless=true -Djenkins.install.runSetupWizard=false\""
  register: jenkins_defaults

Next, we need to create an admin password. Jenkins supports different password hashing algorithms, and we'll use SHA256 with the salt set to "jenkins". You may want to modify the salt to something unique, or look into stronger hashes, if you're feeling exposed.

- name: Create Jenkins admin password hash
  shell: echo -n "{{ jenkins_pass }}{jenkins}" | sha256sum - | awk '{ print $1; }'
  register: jenkins_pass_hash

We use jenkins_pass_hash.stdout in the admin-config.xml.j2 Jinja2 template to set the password for the admin user, setting force=no when creating the admin user's config file. Jenkins unfortunately saves some other information about the user, such as last login time, in this file, and we don't want to overwrite that every run. Consequently, our role will fail if the user changes the admin password through Jenkins. If that happens, you can delete users/admin/config.xml and let Ansible recreate it, or change the password variable used for this role.

- name: Create admin user directory
  file:
    path: "~jenkins/users/admin"
    owner: jenkins
    group: jenkins
    mode: 0755
    state: directory
    recurse: yes

- name: Create admin
  template: src=admin-config.xml.j2 dest="~jenkins/users/admin/config.xml" force=no
  register: jenkins_admin_config

- name: Create config
  copy: src=config.xml dest="~jenkins/config.xml"
  register: jenkins_config

register is used in the last two tasks so that we can restart Jenkins if the configuration was changed, which we can act on with the |changed filter. Restarts are typically done in handlers, which run once at the end of a play. We can't wait that long, since the following commands will only work once the updated password is active, and that requires a restart of Jenkins.

- name: Restart Jenkins if necessary
  service: name=jenkins state=restarted
  when: jenkins_defaults|changed or jenkins_admin_config|changed or jenkins_config|changed

- name: Wait for Jenkins to become available
  wait_for: port=8080

Here's the most complicated part. The Jenkins API uses a "crumb" to prevent Cross Site Request Forgery (CSRF) exploits. In order to use the API we need to retrieve this crumb. In addition, while we did use wait_for to wait for Jenkins start, it may still be initializing. We use until, retries and delay to get around that issue.

- name: Get Jenkins crumb
  uri:
    user: admin
    password: "{{ jenkins_admin_pass }}"
    force_basic_auth: yes
    url: "http://localhost:8080/crumbIssuer/api/json"
    return_content: yes
  register: crumb_token
  until: crumb_token.content.find('Please wait while Jenkins is getting ready') == -1
  retries: 10
  delay: 5

- name: Set crumb token
  set_fact:
    crumb: "{{ crumb_token.json.crumbRequestField }}={{ crumb_token.json.crumb }}"

Now we're ready to use the REST API to install some plugins we need to enable our automation based on checking out projects and letting their Jenkinsfile do the rest. We'll make a POST request to install each plugin, regardless of whether or not it has been installed already. You can take a look at the git repository for a more verbose solution which checks the list of installed plugins and only makes POSTs for plugins that are not installed.

The only plugins we need are git, job-dsl, workflow-aggregator, and workflow-cps.
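In Ansible that list is just a variable; the entries are the plugin IDs used by the install task:

```yaml
jenkins_plugins:
  - git
  - job-dsl
  - workflow-aggregator
  - workflow-cps
```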

- name: Install plugins
  uri:
    user: admin
    password: "{{ jenkins_admin_pass }}"
    force_basic_auth: yes
    url: "http://localhost:8080/pluginManager/install?plugin.{{ item }}.default=on&{{ crumb }}"
    method: POST
    status_code: [200, 302]
  with_items: "{{ jenkins_plugins }}"

We need to wait for Jenkins to finish installing the plugins. Some plugins require Jenkins to be restarted, so we need to look out for that as well. Plugins that are being installed have installStatus set to Pending. We'll give Jenkins up to 10 minutes to finish installing plugins, checking if it's done every 10 seconds.

Every time we use the API we need to specify credentials and the crumb, as above, but we'll omit those details from here on.

- name: Wait for plugins to be installed
  uri:
    url: "http://localhost:8080/updateCenter/installStatus/api/json?{{ crumb }}"
    return_content: yes
  register: plugin_status
  until: "'Pending' not in plugin_status.json.data.jobs|map(attribute='installStatus')|list"
  retries: 60
  delay: 10

- name: Check if we need to restart Jenkins to activate plugins
  uri:
    url: "http://localhost:8080/updateCenter/api/json?tree=restartRequiredForCompletion&{{ crumb }}"
    return_content: yes
  register: jenkins_restart_required

- name: Restart Jenkins to activate new plugins
  service: name=jenkins state=restarted
  when: jenkins_restart_required.json.restartRequiredForCompletion|bool

- name: Wait for Jenkins to become available
  wait_for: port=8080

You can now log in to Jenkins at http://localhost:8080 as admin with the password you chose. You'll find that while it's nice and clean, it doesn't actually do anything yet. The plugins we installed enable us to check out code from git repositories and run build pipelines described in the Jenkinsfile at the root of the repository. We'll make that happen in the next post.