Jenkins Configuration-as-Code on Kubernetes

Guangyao Xie
Chartboost Engineering
6 min read · Mar 8, 2023


In this post, we share our experience running Jenkins on Kubernetes in production and Jenkins Configuration-as-Code (JCasC) in practice.

Over the years we have managed several generations of Jenkins deployments. While Jenkins is a flexible tool that can do almost anything, it has been a bit of a pain to manage:

  • Plugin and config management was inconsistent and hard to reproduce
  • It’s difficult to achieve high availability with Jenkins (it’s not designed that way)
  • It can be challenging to scale the Jenkins worker pools

As part of our infrastructure migration to Google Cloud Platform, we decided to rework our Jenkins deployment. We introduced JCasC, a Jenkins container image with pre-baked plugins, and the revamped Jenkins Helm chart.

This is our experience solving the first problem: managing the Jenkins server and its plugins and configurations.

Immutable infrastructure

Immutability is good. It’s predictable, versioned, and consistent. Chartboost has been practicing immutable infrastructure for years. With every code release, our CI system bakes application artifacts into cloud images or container images. Application deployments are essentially versioned by GCE instance groups, AWS ASGs, or Kubernetes Deployment revisions.

For Jenkins, imagine packing everything into a binary including plugins and config files — it would be so much easier to test, deploy, and roll back in the event of any issues.

A less stateful Jenkins

Note: To enforce plugins from container images, Helm values must be set correctly.

Step 1. Build a customized Jenkins image with your favorite plugins!

You can easily extend the Jenkins container image with plugins of your choice. In the Dockerfile:

FROM jenkins/jenkins:2.332.3-lts
RUN jenkins-plugin-cli --verbose --plugins \
    allure-jenkins-plugin:2.30.2 \
    ansicolor:1.0.1 \
    antisamy-markup-formatter:2.7 \
    aws-credentials:191.vcb_f183ce58b_9 \
    blueocean:1.25.3

jenkins-plugin-cli resolves plugin dependencies and errors out on unresolvable or conflicting plugin versions.

Tip: Run the image locally or on CI to verify plugin compatibility.
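A quick local smoke test might look like the following (the image name and tag are placeholder examples matching the values used later in this post):

```shell
# Build the customized Jenkins image from the Dockerfile above
docker build -t gcr.io/my-project/jenkins:2.332.3-lts-customized .

# List the plugins baked into the image to verify that all
# versions and dependencies resolved as expected
docker run --rm gcr.io/my-project/jenkins:2.332.3-lts-customized \
    jenkins-plugin-cli --list
```

The build fails fast if jenkins-plugin-cli cannot resolve a plugin version, so wiring these two commands into CI catches plugin conflicts before anything reaches a cluster.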

Step 2. Get started with Helm Chart

Here we utilize Terraform’s Helm provider to install the chart. Helm CLI or other tools also work just fine.

resource "helm_release" "jenkins" {
  name       = "jenkins"
  namespace  = "jenkins-staging"
  repository = "https://charts.jenkins.io"
  chart      = "jenkins"

  values = [
    file("values.yaml") # Helm values file
  ]
}

Inside the values.yaml Helm values file:

controller:
  adminUser: admin
  adminPassword: <example-passwd> # you should use other auth methods
  image: gcr.io/my-project/jenkins
  tag: 2.332.3-lts-customized
  jenkinsAdminEmail: jenkins@example.com
  ingress:
    enabled: true
    apiVersion: "networking.k8s.io/v1"
    labels: {}
  initializeOnce: false
  overwritePlugins: true
  overwritePluginsFromImage: true
  JCasC:
    authorizationStrategy: |-
      globalMatrix:
        permissions:
          - "Overall/Administer:example-user"

Important: The config below ensures that the container image is the source of truth for plugins. When the Jenkins server container starts, it comes up with exactly the plugins and versions you installed into the container image. This allows plugin testing in non-production environments and ensures that what you deploy is what you verified, eliminating much of the risk when rolling out plugin upgrades.

  initializeOnce: false
  installPlugins: false
  overwritePlugins: true
  overwritePluginsFromImage: true

These values define the init container’s behavior. Every time the Jenkins Pod starts, the init container first wipes the existing plugins in the Jenkins home directory and then copies the plugins from the image into it.

Step 3. Handling secrets

Now you have a basic working environment. Immediately you will notice the new challenge of handling secrets in JCasC, since many plugins require secrets to interact with external service providers. You definitely don’t want to store them as plain-text values in the repo.

Let’s break down secrets into two use cases:

Secrets used by JCasC

These are the secrets for server and plugin configurations. JCasC allows us to pass secrets as variables. For example:

authentication:
  username: jenkins@example.com
  password: ${jenkins-gmail-password}

${jenkins-gmail-password} here is a variable that references a key in the secret store; how it resolves depends on the credential provider.

We highly recommend a read-through of Handling Secrets in JCasC’s official docs. For instance, this example setup uses HashiCorp Vault as its credential provider.

To authenticate with HashiCorp Vault, pass these environment variables to the Jenkins server Pod. Here Jenkins uses an AppRole (created in HashiCorp Vault) to authenticate against the Vault URL myvault.example.com and fetch values under myvaultprefix/jenkins/secrets and myvaultprefix/jenkins/files.

- name: CASC_VAULT_APPROLE
  valueFrom:
    secretKeyRef:
      name: jenkins-vault-auth
      key: approle_id
- name: CASC_VAULT_APPROLE_SECRET
  valueFrom:
    secretKeyRef:
      name: jenkins-vault-auth
      key: approle_secret_id
- name: CASC_VAULT_PATHS
  value: myvaultprefix/jenkins/secrets,myvaultprefix/jenkins/files
- name: CASC_VAULT_URL
  value: myvault.example.com
- name: CASC_VAULT_MOUNT
  value: approle

Note: AppRole secrets shouldn’t be passed as plain-text environment variables; reference them from a Kubernetes Secret via secretKeyRef as above.

In the case of ${jenkins-gmail-password}, you will want to add a key jenkins-gmail-password with its value to the HashiCorp Vault path myvaultprefix/jenkins/secrets.
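Assuming a KV secrets engine mounted at myvaultprefix, adding that key with the Vault CLI might look like this (the path matches CASC_VAULT_PATHS above; the value is illustrative):

```shell
# Write the Gmail password under the path JCasC reads from.
# JCasC resolves ${jenkins-gmail-password} to this key's value.
vault kv put myvaultprefix/jenkins/secrets \
    jenkins-gmail-password='example-app-password'
```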

Jenkins Credentials — used in jobs

Following the principle of JCasC, we don’t save secrets through the UI to Jenkins credentials, but define Jenkins credentials items in JCasC with secret variables.

credentials: |-
  credentials:
    system:
      domainCredentials:
        - credentials:
            - usernamePassword:
                description: Example username password
                id: example-user
                password: ${example-user-password}
                scope: GLOBAL
                username: ${example-user-username}

Note 1: The config is a text blob to accommodate the Jenkins Helm chart setup.

Note 2: After changing the secret value in the underlying secret store (HashiCorp Vault in this case), we need to reload JCasC in <jenkins-url>/configuration-as-code/ or restart Jenkins to pick up the new value. Some credentials plugins (e.g., HashiCorp Vault plugin) provide new secret types that allow reading from the secret store directly and that can avoid the issue of “cached” secret values.
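If you set the CASC_RELOAD_TOKEN environment variable on the controller, the reload can also be triggered remotely, which is handy after rotating a secret (the URL below is a placeholder):

```shell
# Trigger a JCasC reload without restarting Jenkins.
# Requires the CASC_RELOAD_TOKEN env var to be set on the controller.
curl -X POST \
    "https://jenkins.example.com/reload-configuration-as-code/?casc-reload-token=${CASC_RELOAD_TOKEN}"
```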

Step 4. Extending JCasC

The first thing you’ll want to get right is the auth configs. In this example, the server leverages the google-login plugin.

controller:
  # .. other configs

  JCasC:
    securityRealm: |-
      googleOAuth2:
        clientId: "xxxxxxxx.apps.googleusercontent.com"
        clientSecret: "${google_oauth_client_secret}"
    authorizationStrategy: |-
      globalMatrix:
        permissions:
          - "Overall/Administer:firstname.lastname@example.com"
          - "Overall/Read:authenticated"
    configScripts:
      credentials: |-
        credentials:
          system:
            domainCredentials:
              - credentials:
                  - usernamePassword:
                      id: "exampleuser-creds-id"
                      username: "exampleuser"
                      password: "${key1}"
                      description: "Sample credentials of exampleuser"
                      scope: GLOBAL
      unclassified: |-
        unclassified:
          mailer:
            smtpHost: smtp.gmail.com
            smtpPort: 465
            useSsl: true
            authentication:
              username: jenkins@example.com
              password: ${jenkins-gmail-password}

JCasC supports defining server, agent, and plugin configurations. In our experience, all the plugins we use are compatible with JCasC, which eliminates the need to change configuration through the Jenkins UI.

Here are a few recommendations for adding JCasC config for a new plugin:

Look up schema

Not all plugins provide good documentation. Fortunately, Jenkins automatically generates documentation for the JCasC schema.

<jenkins-hostname>/configuration-as-code/reference and <jenkins-hostname>/configuration-as-code/schema are the best places to look these up (and which data type to use).

Test in staging environments

The Jenkins Helm chart and JCasC make it easy to maintain a staging environment that has a similar setup as production. We can leverage it for testing risky changes.

Use Jenkins UI to verify changes

Quick and easy.

Troubleshoot with Jenkins server log

The Jenkins server log (container log) is useful for detecting JCasC-related problems that may occur when applying it to the server.

Scale with Kubernetes

Though not the focus of this post, it’s worth mentioning that this setup is also very scalable. The official Jenkins Helm chart leverages the Kubernetes plugin to launch Kubernetes Pods as agents. With customized agent images, the agent pools can be highly customizable and scalable. For instance, we can run agents on autoscaled Kubernetes node pools that resize automatically based on job load. We can define dynamic PVCs of arbitrary sizes for specific agents so they can process large files.

agent:
  connectTimeout: 200
  containerCap: 32 # number of agents/workers of the same type that can run concurrently
  # command and args have to be the jnlp defaults in order to start the agent
  podTemplates:
    default: |
      - activeDeadlineSeconds: '0'
        containers:
          - alwaysPullImage: 'false'
            envVars:
              - envVar:
                  key: JENKINS_URL
                  value: http://jenkins:8080
            image: myregistry/inbound-agent:v123
            name: jnlp
            privileged: 'false'
            resourceLimitCpu: '4'
            resourceLimitMemory: 10Gi
            resourceRequestCpu: 500m
            resourceRequestMemory: 1024Mi
            ttyEnabled: 'false'
            workingDir: /home/jenkins/agent
        idleMinutes: '0'
        instanceCap: '10'
        label: default-agent
        name: default
        workspaceVolume:
          emptyDirWorkspaceVolume:
            memory: 'false'
        yamlMergeStrategy: override
    agent-2: |
      # define the next agent template..

Note: If you use Jenkinsfile, you may also specify Pod templates at a per-job level.
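For instance, with the Kubernetes plugin a declarative Jenkinsfile can carry its own inline Pod template; the container image and label below are hypothetical examples, not part of our setup:

```groovy
// Declarative pipeline with a per-job Pod template (Kubernetes plugin).
pipeline {
  agent {
    kubernetes {
      yaml '''
        apiVersion: v1
        kind: Pod
        spec:
          containers:
            - name: golang
              image: golang:1.19
              command: ["sleep"]
              args: ["infinity"]
      '''
    }
  }
  stages {
    stage('Build') {
      steps {
        // Run build steps inside the custom container, not the jnlp one
        container('golang') {
          sh 'go version'
        }
      }
    }
  }
}
```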

Conclusion

By implementing JCasC and customized Jenkins images, we have built a system that more engineers can contribute to. This has made changes faster to roll out while maintaining the stability of our Jenkins systems.

Jenkins is one of many tools the Chartboost DevOps team manages. It has been a team effort that would not have been possible without the engineers working on it: Artem Chekunov, Abhinav Damarapati, and many others who contributed to the project.
