Tekton - OpenShift Pipelines
Table of Contents
- Table of Contents
- Prerequisites
- Requirements
- Step 1: Git Repository and Project
- Step 2: Creating a namespace
- Step 3: Access Token and Secret
- Step 4: PersistentVolumeClaim
- Step 5: Service, Route and Deployment
- Step 6: .dockerconfigjson Secret
- Step 7: Create the Tasks for the Pipeline
- Step 8: Pipeline
- Step 9: TriggerBinding and TriggerTemplate
- Step 10: Event listener and webhook
- Filtering with Interceptors
- Alternative for Webhooks: Setting up a CronJob
- Pipeline with restrictive permissions
Prerequisites
Before beginning with this guide, you should check that the Red Hat OpenShift Pipelines operator is installed on the cluster. If it is not, install it or ask a cluster admin to install it from the Operator Hub, then proceed to the next steps. Installing the OpenShift CLI (oc) on your system and authenticating it against the cluster is also recommended for this guide, but all steps can also be performed through the OpenShift Dashboard.
For testing the pipeline, you can install the Tekton CLI (tkn) on your system, but you can use the Dashboard for this as well. If you have authenticated using the OpenShift CLI, the Tekton CLI should automatically be authenticated for the cluster as well.
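As a quick, hedged check from a terminal (the exact name of the operator's ClusterServiceVersion may differ on your cluster), you can verify the prerequisites like this:
$ oc whoami                                          # OpenShift CLI is authenticated
$ oc get csv -n openshift-operators | grep pipelines # Pipelines operator is installed
$ tkn version                                        # Tekton CLI is available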
Requirements
This guide will take you through the creation of a basic Tekton CI/CD pipeline using a simple Node.js application as an example but can relatively easily be adjusted for applications written in Java, Python, Go, C# (.NET) and more. The parts needed are as follows:
- A GitHub/GitLab repository containing the project code with a valid Dockerfile present
- A namespace to house the pipeline and the application
- A Secret to contain the access token to the repository
- A PersistentVolumeClaim for building the application
- A Service, a Route and a Deployment to serve the application, if necessary
- A .dockerconfigjson type Secret to pull and push images to the registry
- A set of Tasks to perform on the pulled code
- A Pipeline that outlines the Tasks to run, and the order they'll run in
- A TriggerTemplate and a TriggerBinding to trigger a pipeline run when new code is pushed into the repository
- An EventListener to receive webhook triggers from the repository
Step 1: Git Repository and Project
For the purposes of this guide, we'll set up a simple Node.js application, but with a proper selection of a base image (the FROM instruction in the Dockerfile), it is possible to adapt this guide to applications developed in various frameworks.
Let's start by creating and navigating to an empty directory for our project:
$ mkdir node-pipeline
$ cd node-pipeline
Then initialize the directory both for git and Node.js:
$ git init
Initialized empty Git repository in /home/sample/node-pipeline
$ npm init
...
package name: (node-pipeline)
version: (1.0.0)
description: A simple example for Node.js in Tekton pipeline
entry point: (index.js)
test command: echo "No test specified" && exit 1
git repository: https://version.helsinki.fi/sample/node-pipeline
keywords: tekton, node.js
author: Sample
license: (ISC)
type: (commonjs)
...
We can then add the index.js file to the directory.
index.js
const http = require('http');
const port = process.env.PORT || 8080;
const server = http.createServer((req, res) => {
res.writeHead(200, {'Content-Type': 'text/plain'});
res.end('Hello, World!\n');
});
server.listen(port, () => {
console.log(`Server running at http://localhost:${port}`);
});
Run the install command:
$ npm install
Then make the Dockerfile.
Dockerfile
FROM node:18
WORKDIR /app
COPY . .
CMD ["node", "index.js"]
Then commit the created files.
$ git add .
$ git commit -m "First commit"
You can then push the repository to a developer platform of your choice (e.g. GitLab).
Step 2: Creating a namespace
Your project will need a namespace where you will build the pipeline. If you already have a namespace, you can skip this step.
For the purposes of this guide, we will call the namespace ocp-pipeline and use the OpenShift CLI (oc) to create it.
$ oc create namespace ocp-pipeline
namespace/ocp-pipeline created
$ oc project ocp-pipeline
Now using project "ocp-pipeline" on server "https://api...".
If you are not a self-provisioner and do not have a namespace to use, you will have to ask someone with the necessary permissions to create the namespace for you.
Step 3: Access Token and Secret
The pipeline ServiceAccount that is added to all namespaces when the Red Hat OpenShift Pipelines operator is installed may need an access token to pull the code from your repository. You can skip this step if your repository can be pulled from without restrictions. Otherwise, you will need to create an access token for your repository with at least read_repository scope (or equivalent, e.g. read-only to Contents on GitHub) and then add it into a secret. Access tokens can often be created through the settings of your remote repository or your user settings.
Note that on self-managed GitLab, such as https://version.helsinki.fi, the access token has to have at least the Reporter role to pull private repositories!
We will use YAML files to create the required objects on the cluster end. Note the annotation here. It must start with tekton.dev/git- or Tekton will ignore it!
secret.yaml
apiVersion: v1
metadata:
name: git-access-token
annotations:
tekton.dev/git-0: https://version.helsinki.fi # The domain for your repository
stringData:
username: your-username # Can be anything for GitLab
password: paste-access-token-here
type: kubernetes.io/basic-auth
And apply it:
$ oc apply -f secret.yaml
And then link it to the pipeline ServiceAccount for it to use:
$ oc secrets link pipeline git-access-token
Note that if you set the access token to expire, you will need to update this secret you created, or the pipeline may be unable to pull code!
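As a quick sanity check (and whenever you rotate the token), you can verify the link and re-apply the updated file; a minimal sketch:
$ oc describe sa pipeline   # git-access-token should appear under Mountable secrets
$ oc apply -f secret.yaml   # re-apply after pasting in a new token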
Step 4: PersistentVolumeClaim
While it is possible to create a new volume claim for each deployment, for space efficiency, it is recommended to keep a persistent volume claim available. We can use a YAML and oc to create the PersistentVolumeClaim.
pvc.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: pipeline-pvc
spec:
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 1Gi # Number and unit, e.g. Ki, Mi, Gi
And then apply it to the cluster:
$ oc apply -f pvc.yaml
Step 5: Service, Route and Deployment
To enable access to the application (if meant to be accessible), you should create a Service, a Route and a Deployment.
Service
First create the Service. You can change most of these to suit the naming convention of your application.
service.yaml
apiVersion: v1
kind: Service
metadata:
name: hello-world
# Having labels is not necessary but helps with categorizing
labels:
app: hello-world
spec:
selector:
app: hello-world # Target pods with the label app=hello-world
ports:
- name: web
protocol: TCP
port: 8080 # The port your app runs on
targetPort: 8080
And apply it to the cluster:
$ oc apply -f service.yaml
Route
And then create the route. Again, you can change much of this for the naming convention of your application.
route.yaml
apiVersion: route.openshift.io/v1
kind: Route
metadata:
name: hello-world
# Having labels is not necessary but helps with categorizing
labels:
app: hello-world
spec:
port:
targetPort: web
to:
kind: Service
name: hello-world # The name of your Service
tls:
termination: edge
And apply it to the cluster:
$ oc apply -f route.yaml
Deployment
Finally, we'll create the Deployment to create the pods to serve our applications. Note the value of the image here. Its format is image-registry.openshift-image-registry.svc:5000/<namespace-name>/<image-name>:<image-version/tag>, <namespace-name> meaning the name of the namespace/project where your application will reside (in this case, the ocp-pipeline namespace we created), and <image-name> and <image-version/tag> being determined by you. We'll use hello-world:latest here.
deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
name: hello-world
# Having labels is not necessary but helps with categorizing
labels:
app: hello-world
spec:
replicas: 1
selector:
matchLabels:
app: hello-world # Manages pods with the label app=hello-world
template:
metadata:
labels:
app: hello-world # Gives the created pods the label app=hello-world
spec:
containers:
- name: app
# Make sure this matches your namespace!
image: image-registry.openshift-image-registry.svc:5000/ocp-pipeline/hello-world:latest
imagePullPolicy: Always
ports:
- containerPort: 8080 # Same as defined in Service
Apply to cluster:
$ oc apply -f deployment.yaml
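Note that the hello-world:latest image does not exist in the internal registry yet, so the pod created by this Deployment will stay in an ImagePullBackOff state until the first successful pipeline run pushes the image. You can keep an eye on it with:
$ oc get pods -l app=hello-world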
Step 6: .dockerconfigjson Secret
This Secret is a .dockerconfigjson type Secret that Buildpacks (added as part of the Tasks) can use for pulling and pushing images to the internal image registry.
Create a YAML for an empty secret.
registry-secret.yaml
apiVersion: v1
kind: Secret
metadata:
name: registry-auth-secret
stringData:
.dockerconfigjson: "{}"
type: kubernetes.io/dockerconfigjson
And apply it:
$ oc apply -f registry-secret.yaml
Finally, link it to the pipeline ServiceAccount so it can pull and push images:
$ oc secrets link pipeline registry-auth-secret --for=pull,mount
Step 7: Create the Tasks for the Pipeline
The Tasks are what will run when the Pipeline is activated. They consist of individual steps that have to be completed for the Pipeline to proceed. Each step has its own image under which it will run, so make sure you choose or create an image that contains the necessary software you want to use (e.g. OpenShift CLI oc).
Workspaces should also be noted. Workspaces are similar to Volumes, providing the Tasks access to parts of the filesystem. Tasks that do not need to read or write files (such as Task 4 here) do not need workspaces. You can refer to the path for a workspace with $(workspaces.<workspace-name>.path). For more information about workspaces, you can check the Tekton documentation.
Similarly to workspace paths, you can refer to any parameters you created for a Task with $(params.param-name).
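As a minimal sketch of both references (a hypothetical Task, not one used later in this guide), a step can combine a param and a workspace path like this:
apiVersion: tekton.dev/v1
kind: Task
metadata:
  name: show-references # hypothetical example Task
spec:
  params:
    - name: GREETING
      type: string
      default: hello
  workspaces:
    - name: source
  steps:
    - name: print
      image: busybox
      script: |
        #!/bin/sh
        # Tekton substitutes $(params.GREETING) and $(workspaces.source.path) before the step runs
        echo "$(params.GREETING) from $(workspaces.source.path)"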
Task 1: Convert .dockercfg
Let's start with a Task to convert an existing .dockercfg authentication secret to the .dockerconfigjson secret we created in the last step.
convert-dockercfg.yaml
apiVersion: tekton.dev/v1
kind: Task
metadata:
name: convert-dockercfg
spec:
params:
- description: The name of the .dockercfg secret
name: DOCKERCFG_SECRET_NAME
type: string
- description: The name of the .dockerconfigjson secret to create/modify
name: DOCKERCONFIGJSON_SECRET_NAME
type: string
default: registry-auth-secret # If the param is not defined in the Pipeline, this value will be used
steps:
- image: 'image-registry.openshift-image-registry.svc:5000/openshift/tools' # This image contains both oc and jq required for this step
name: convert-to-dockerconfigjson
script: |
#!/bin/sh
echo "> Converting .dockercfg to .dockerconfigjson"
# Get the .dockercfg Secret, extract its data and save it into a file named config.json
oc get secret $(params.DOCKERCFG_SECRET_NAME) -o json \
| jq -r '.data[".dockercfg"]' \
| base64 -d \
| jq '{auths: .}' > $(workspaces.output.path)/config.json
echo "> Applying .dockerconfigjson Secret to cluster"
# Dry-run create a .dockerconfigjson Secret for the YAML and apply to cluster
oc create secret generic $(params.DOCKERCONFIGJSON_SECRET_NAME) \
--type=kubernetes.io/dockerconfigjson \
--from-file=.dockerconfigjson=$(workspaces.output.path)/config.json \
--dry-run=client -o yaml | oc apply -f -
workspaces:
- name: output
Apply to cluster:
$ oc apply -f convert-dockercfg.yaml
Task 2: Install and Test
Next, we'll create the Task to install the necessary dependencies for our application (of which there are none for this example Node.js application) and run any tests that the application might have. If you are using something other than Node.js for your application, you will need to change the image and script for each step. Refer to the Dockerfile you created earlier for a fitting image.
install-and-test.yaml
apiVersion: tekton.dev/v1
kind: Task
metadata:
name: npm-install-and-test
spec:
steps:
- image: 'node:18' # Same as in Dockerfile
name: install
script: |
#!/bin/sh
cd $(workspaces.source.path)
npm install
- image: 'node:18'
name: test
script: |
#!/bin/sh
cd $(workspaces.source.path)
npm test || echo "No tests, skipping."
workspaces:
- name: source
Apply to cluster:
$ oc apply -f install-and-test.yaml
Task 3: Buildpacks
You can use the ready-made Buildpacks Task from the tektoncd catalog and apply it directly to the cluster with the command found within the README in a version directory (note that kubectl and oc are interchangeable in a lot of situations). With the latest version at the time of writing this guide, the command has the form:
$ oc apply -f https://raw.githubusercontent.com/tektoncd/catalog/main/task/buildpacks/<version>/buildpacks.yaml
This should work as is.
Task 4: Update deployment image
This Task will update the deployment image serving the application and restart it so the new version of the application becomes available. Note the lack of a workspace as the Task does not require reading or writing anything in the filesystem.
update-deployment.yaml
apiVersion: tekton.dev/v1
kind: Task
metadata:
name: update-deployment-image
spec:
params:
- name: DEPLOYMENT_NAME
type: string
- name: CONTAINER_NAME
type: string
- name: IMAGE
type: string
steps:
- image: 'quay.io/openshift/origin-cli:4.13' # This image has oc, which is required for this Task
name: update-image
script: |
#!/bin/sh
echo "> Updating deployment $(params.DEPLOYMENT_NAME)"
oc set image deployment/$(params.DEPLOYMENT_NAME) $(params.CONTAINER_NAME)=$(params.IMAGE)
echo "> Restarting rollout"
oc rollout restart deployment/$(params.DEPLOYMENT_NAME)
And apply it:
$ oc apply -f update-deployment.yaml
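At this point, all four Tasks (convert-dockercfg, npm-install-and-test, buildpacks and update-deployment-image) should exist in the namespace. You can verify this with either of the following:
$ tkn task list
$ oc get tasks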
Step 8: Pipeline
With the Tasks created, it is time to gather them into a Pipeline. Due to the length of the YAML file, we will go through the creation of it step by step. Scroll to the end of this step to see the full YAML.
Start of YAML
The start of the file has one thing worth noting: params. The params defined here are what's given to the pipeline when it runs. In the case of this example, the pipeline will be given a Git repository URL to pull the code from, and the image URL to push the image to. You can refer to the params in the tasks with $(params.param-name).
apiVersion: tekton.dev/v1
kind: Pipeline
metadata:
name: nodejs-pipeline
spec:
params:
- description: git repo url to clone
name: repo-url
type: string
- description: the url to the application image
name: image-url
type: string
Task: clone-repo
We'll start adding the tasks to the Pipeline. Note that tasks go under spec in the YAML file!
We'll start with a cluster resolver task to clone the repository. Cluster resolvers come with the Red Hat OpenShift Pipelines operator and are referred to a bit differently than user-created Tasks. You can view these Tasks and their details on the tektoncd catalog on GitHub. If for whatever reason you cannot access cluster resolvers, you can also use the raw links from the catalog to apply these to your namespace similarly to the Buildpacks Task. Note that in this case, you will refer to this Task the same way as any other in the Pipeline!
We will use the git-clone Task here. At least the URL param is required. REVISION is included here to showcase its existence (you can use it to pull, for example, a specific branch of the repository), but as it defaults to "main", it is unnecessary in this case.
- name: clone-repo
params:
- name: URL
value: $(params.repo-url)
- name: REVISION
value: main
taskRef:
params:
- name: kind
value: task
- name: name
value: git-clone
- name: namespace
value: openshift-pipelines # This is the default but may vary!
resolver: cluster
workspaces:
- name: output # Since there is a workspace called output in the Task, it has to be defined here
workspace: shared-workspace # This will come to play later
Task: install-and-test
Next, we'll run the Task that installs the necessary dependencies and runs any tests we may have. Something to note here is runAfter. The Task will only run after the Tasks listed under runAfter have completed. If any of the Tasks listed fail during a run, this Task will not run.
- name: install-and-test
  runAfter: # Will only run after all listed tasks complete
- clone-repo
taskRef:
kind: Task
name: npm-install-and-test # Make sure this matches the name you gave to the Task
workspaces:
- name: source
workspace: shared-workspace
Task: convert-dockercfg
Now, we'll add the Task to convert the .dockercfg Secret to the .dockerconfigjson secret we created. The name of the created .dockerconfigjson Secret goes in the DOCKERCONFIGJSON_SECRET_NAME param, though due to setting a default, it is technically unnecessary here. To get the name of the .dockercfg secret for the DOCKERCFG_SECRET_NAME param, we can use oc and grep:
$ oc get secrets | grep pipeline-dockercfg
The .dockercfg Secret should be named pipeline-dockercfg-<random characters>. In this example, pipeline-dockercfg-ftwrt.
- name: convert-dockercfg
  runAfter:
- clone-repo
params:
- name: DOCKERCFG_SECRET_NAME
value: pipeline-dockercfg-ftwrt # The .dockercfg Secret name
- name: DOCKERCONFIGJSON_SECRET_NAME
value: registry-auth-secret # Make sure this matches the secret you created
taskRef:
kind: Task
name: convert-dockercfg
workspaces:
- name: output
workspace: conf-workspace # Note the different workspace!
Task: build-image
This task will use the Task we applied from GitHub. You can check the name, parameters, workspaces and other info from the Task it created ($ oc describe task buildpacks or Search -> Resources -> Task -> Select "buildpacks" in the Dashboard). Check the GitHub page under your version for a compatible builder image for your application. If your specific programming language or framework is not listed, you may need to use a builder other than Buildpacks, or search for a third-party builder image for your needs. Looking up "Cloud Native Buildpacks <programming language name>" may yield some results. Optionally, refer to the documentation to build your own image.
- name: build-image
  params:
- name: APP_IMAGE
value: $(params.image-url)
- name: BUILDER_IMAGE # Check for compatible images on the Tekton Hub page
value: 'paketobuildpacks/builder:base'
runAfter:
- install-and-test
- convert-dockercfg
taskRef:
kind: Task
name: buildpacks
workspaces:
- name: source
workspace: shared-workspace
Task: update-image
This Task will update your Deployment. Note that this Task does not require workspaces due to only making changes on the cluster.
- name: update-image
  params:
- name: DEPLOYMENT_NAME
value: hello-world # The name you gave to the Deployment
- name: CONTAINER_NAME
value: app # The name you gave to the container inside the Deployment
- name: IMAGE
value: $(params.image-url):latest # Use the same version/tag as in the Deployment
runAfter:
- build-image
taskRef:
kind: Task
name: update-deployment-image
Finishing up
To finish up the YAML, we'll define the workspaces the Pipeline will use. In this case, we use two. Note that these workspaces are separate, so if you define one of them as the workspace used by a given Task, the filesystem changes made in that Task are not reflected in the other. In this guide, we define a separate workspace conf-workspace for the convert-dockercfg Task, as the other Tasks do not need to access the config.json file. While the Task would still work under the same shared-workspace as the others, this is done to showcase the possibility and to avoid possible conflicts.
workspaces:
  - name: conf-workspace
  - name: shared-workspace
Full YAML
To sum up, here is the full YAML created during this step.
pipeline.yaml
apiVersion: tekton.dev/v1
kind: Pipeline
metadata:
name: nodejs-pipeline
spec:
params:
- description: git repo url to clone
name: repo-url
type: string
- description: the url to the application image
name: image-url
type: string
tasks:
- name: clone-repo
params:
- name: URL
value: $(params.repo-url)
- name: REVISION
value: main
- name: DELETE_EXISTING
value: 'true'
taskRef:
params:
- name: kind
value: task
- name: name
value: git-clone
- name: namespace
value: openshift-pipelines
resolver: cluster
workspaces:
- name: output
workspace: shared-workspace
- name: install-and-test
runAfter:
- clone-repo
taskRef:
kind: Task
name: npm-install-and-test
workspaces:
- name: source
workspace: shared-workspace
- name: convert-dockercfg
runAfter:
- clone-repo
params:
- name: DOCKERCFG_SECRET_NAME
value: pipeline-dockercfg-ftwrt
- name: DOCKERCONFIGJSON_SECRET_NAME
value: registry-auth-secret
taskRef:
kind: Task
name: convert-dockercfg
workspaces:
- name: output
workspace: conf-workspace
- name: build-image
params:
- name: APP_IMAGE
value: $(params.image-url)
- name: BUILDER_IMAGE
value: 'paketobuildpacks/builder:base'
runAfter:
- install-and-test
- convert-dockercfg
taskRef:
kind: Task
name: buildpacks
workspaces:
- name: source
workspace: shared-workspace
- name: update-image
params:
- name: DEPLOYMENT_NAME
value: hello-world
- name: CONTAINER_NAME
value: app
- name: IMAGE
value: $(params.image-url):latest
runAfter:
- build-image
taskRef:
kind: Task
name: update-deployment-image
workspaces:
- name: conf-workspace
- name: shared-workspace
We can apply it to the cluster:
$ oc apply -f pipeline.yaml
Testing the Pipeline
If you want to test the pipeline at this point, you can use either the Pipelines page on the OpenShift Dashboard (press the three dots on the right side of your Pipeline and press Start) or the Tekton CLI, which will prompt you for the parameters and workspaces interactively (a non-interactive example with flags is shown after the list below):
$ tkn pipeline start nodejs-pipeline
Whichever method you use, use the following values:
- repo-url: The URL of your project remote repository
- image-url: The image URL you defined for the Deployment, without the version/tag
- conf-workspace
- Name: conf-workspace
- Sub-path: (leave empty, not applicable for Dashboard)
- Workspace Type: emptyDir/Empty Directory (will be covered in the next step)
- Type of emptyDir: (leave empty, not applicable for Dashboard)
- shared-workspace
- Name: shared-workspace
- Sub-path: (leave empty, unless you're using a sub-path for your project, not applicable for Dashboard)
- Workspace Type: pvc/PersistentVolumeClaim
- Claim Name: The name of the PersistentVolumeClaim you created
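If you prefer not to answer the interactive prompts, the same values can be passed to the Tekton CLI as flags. A hedged example, substituting your own repository URL:
$ tkn pipeline start nodejs-pipeline \
    --param repo-url=https://version.helsinki.fi/sample/node-pipeline.git \
    --param image-url=image-registry.openshift-image-registry.svc:5000/ocp-pipeline/hello-world \
    --workspace name=conf-workspace,emptyDir="" \
    --workspace name=shared-workspace,claimName=pipeline-pvc \
    --showlog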
If you used Tekton CLI, you will be given a command to run to follow the progress of the PipelineRun. Similarly, if you start the Pipeline from the Dashboard, you will be taken to the PipelineRun and you can view its progress through the Logs tab.
If everything works correctly, all the Tasks should complete. If not, you'll know which Task fails and the logs will tell you why to help with debugging. The git-clone task may fail due to incorrect URL or incorrectly set up access token (if required). Building the image may fail if your ServiceAccount cannot gain access to the image repository. Check that you have formatted the image URL correctly and that the .dockerconfigjson secret is linked to the pipeline ServiceAccount! If the issues relate to file system permissions with errors like Permission Denied, see Pipeline with restrictive permissions before returning to the next step.
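A few commands that help with debugging a failed run (assuming the names used in this guide):
$ tkn pipelinerun list            # list recent runs and their status
$ tkn pipelinerun logs --last -f  # follow the logs of the most recent run
$ oc get taskruns                 # inspect the individual TaskRuns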
Note that if the app is not visible through its Route (which you can find with $ oc get route <name of your route>), you may need to restart the rollout with $ oc rollout restart deployment/<name of your deployment>.
Step 9: TriggerBinding and TriggerTemplate
The TriggerBinding and TriggerTemplate can start a Pipeline run when triggered.
Important to note before proceeding through steps 9 and 10: If your cluster only works inside a specific network, requiring a VPN or proxy to use outside of it, you cannot trigger the pipeline from a repository hosted outside of that network! https://version.helsinki.fi/ should work for clusters hosted at University of Helsinki. If it doesn't work or you absolutely want to use a specific out-of-network platform to host your code, see Alternative for Webhooks: Setting up a CronJob for a workaround.
TriggerBinding
The TriggerBinding can take fields from an event payload and convert them into parameters for the TriggerTemplate to use. Let's create one! The important thing to note here is the value of the params: you need to know the structure of the payload in order to use it. You can usually reach the payload documentation through the webhook creation page on the developer platform of your choice; both GitLab and GitHub document their webhook event payloads in detail.
trigger-binding.yaml
apiVersion: triggers.tekton.dev/v1beta1
kind: TriggerBinding
metadata:
name: gitlab-binding
spec:
params:
- name: repo-url
value: $(body.repository.git_http_url) # In the payload body, find 'repository' and 'git_http_url' under that
Then apply to cluster:
$ oc apply -f trigger-binding.yaml
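If your repository is hosted on GitHub instead of GitLab, the push payload exposes the clone URL under a different field. A hedged equivalent would be:
github-binding.yaml
apiVersion: triggers.tekton.dev/v1beta1
kind: TriggerBinding
metadata:
  name: github-binding
spec:
  params:
    - name: repo-url
      value: $(body.repository.clone_url) # GitHub push payloads carry the HTTPS clone URL here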
TriggerTemplate
Then create the TriggerTemplate to run the Pipeline whenever triggered. Set the image-url param to what you set in the Deployment, without the version/tag. Also note that we defined conf-workspace to use an emptyDir volume. An emptyDir volume will only hold data for the duration of a given Task. Since the other Tasks do not need to access the files created by the convert-dockercfg Task, an emptyDir is suitable here.
trigger-template.yaml
apiVersion: triggers.tekton.dev/v1beta1
kind: TriggerTemplate
metadata:
name: nodejs-trigger-template
spec:
params:
- name: repo-url # Param from the TriggerBinding
- name: image-url
resourcetemplates:
- apiVersion: tekton.dev/v1
kind: PipelineRun
metadata:
generateName: nodejs-pipeline-run- # The prefix for your PipelineRun name
spec:
pipelineRef:
name: nodejs-pipeline # The name of the Pipeline you created
params:
- name: repo-url
value: $(tt.params.repo-url) # You can use tt.params to make the TriggerTemplate reusable
- name: image-url
value: image-registry.openshift-image-registry.svc:5000/ocp-pipeline/hello-world # Set in the Deployment, without version/tag
# Note that if you did the setup for restrictive permissions, you should include the subdirectory here as well, if you are not using the default name
workspaces:
- name: shared-workspace
persistentVolumeClaim:
claimName: pipeline-pvc # The name of the PersistentVolumeClaim you created
- name: conf-workspace
emptyDir: {} # An emptyDir volume only holds data for the duration of the Task
Apply this to the cluster:
$ oc apply -f trigger-template.yaml
Step 10: Event listener and webhook
To finish up, we'll create an EventListener whose Trigger combines the TriggerBinding and TriggerTemplate, and a webhook in our remote repository that sends a payload to the EventListener.
EventListener
The EventListener ties together the TriggerBinding and TriggerTemplate created in the previous step. Let's create a YAML for it:
event-listener.yaml
apiVersion: triggers.tekton.dev/v1beta1
kind: EventListener
metadata:
name: nodejs-listener
spec:
serviceAccountName: pipeline
triggers:
- name: gitlab-trigger
bindings:
- ref: gitlab-binding # The TriggerBinding you created
template:
ref: nodejs-trigger-template # The TriggerTemplate you created
And apply it to the cluster:
$ oc apply -f event-listener.yaml
After applying, expose the Service that was created for the EventListener. Note the prefix el-!
$ oc expose service el-nodejs-listener
You should now be able to get a URL to use for your webhook. The URL you want is under HOST/PORT when you run the command below. Note the prefix again!
$ oc get route el-nodejs-listener
Copy the URL for the webhook.
Webhook
With the event listener created and waiting for traffic, it is time to create a webhook in your remote repository. On GitLab, you can navigate to the Project Settings (by pressing the three vertical dots) and then open the Webhooks tab. On GitHub, you can open the Settings tab at the top of your repository and then navigate to the Webhooks tab. When creating the webhook, make sure (if applicable) that you choose JSON as its type, as that is the format TriggerBindings expect. For the URL, input http://<the route you got above>. Make the webhook trigger on push events. On GitLab, you can even restrict the webhook to specific branches, but you can also do the filtering on the EventListener's end (see Filtering with Interceptors).
Test the pipeline
To test the Pipeline, you can either push code to your remote repository or, on GitLab, press the Test button to send a push event payload. If you then see a new run on the Pipelines page of the Dashboard, the trigger works! You can open the PipelineRun and go to the Logs tab to see how it progresses; if any errors occur, the Logs tab will help you debug.
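You can also test the EventListener directly from a terminal. A minimal sketch, assuming the route host you copied earlier and the payload field used by the TriggerBinding (at this point, before interceptors are added, any compatible payload will start a run):
$ curl -X POST "http://<el-route-host>" \
    -H 'Content-Type: application/json' \
    -H 'X-Gitlab-Event: Push Hook' \
    -d '{"ref": "refs/heads/main", "repository": {"git_http_url": "https://version.helsinki.fi/sample/node-pipeline.git"}}'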
Filtering with Interceptors
While the pipeline should work and trigger correctly in its current state, you may notice that it will trigger even when you push on a branch other than main (which we set the clone-repo Task to always use). It even triggers if you send any compatible payload to the EventListener, not just ones through the webhook! This may allow a malicious actor to send and run any software on the cluster! To remedy these issues, you will want to use interceptors in your EventListener. Interceptors process the payload before the TriggerBindings, allowing you to filter out undesired traffic.
Secret tokens
Both GitLab and GitHub allow you to specify a secret token to send with your payload. This string allows the recipient of the payload (in this case, our EventListener) to verify the payload originates from the developer platform. Let's first create a hard-to-guess string to use as our token. You can use anything you want to create the string. As an example, we will use OpenSSL here.
$ openssl rand -base64 24
UQucha5S045/xu0YwQX3GFBAgCF61yKj
We can then create a new Secret to use the random string.
webhook-secret.yaml
apiVersion: v1
metadata:
name: webhook-token
stringData:
token: UQucha5S045/xu0YwQX3GFBAgCF61yKj
type: Opaque
And apply it to the cluster:
$ oc apply -f webhook-secret.yaml
Interceptors
We can then add the interceptors' definitions under our trigger in event-listener.yaml. The CEL interceptor deserves a separate mention: this type of interceptor lets you use the CEL expression language to filter based on fields in the payload. Here, we split the field ref (whose value in this case is refs/heads/main) to get just the branch name.
event-listener.yaml
triggers:
- name: gitlab-trigger
bindings:
- ref: gitlab-binding
template:
ref: nodejs-trigger-template
interceptors:
- ref:
name: gitlab # Change this to 'github' if your repository is there instead
params:
- name: secretRef
value:
secretName: webhook-token # The name of the Secret you created
secretKey: token # Whatever you named it under stringData
- name: eventTypes
value:
- Push Hook # Use 'push' for GitHub
- ref:
name: cel
params:
- name: filter
# Get just the branch name from body.ref
value: >-
body.ref.split('/')[2] == 'main'
You can read more about interceptors and their many possible uses for payloads of different developer platforms in the Tekton documentation.
You can apply the changes to the EventListener the same way you created it:
$ oc apply -f event-listener.yaml
Editing your webhooks
With the OpenShift side configured, we can go ahead and add the token we created to our webhook. Both GitLab and GitHub allow us to edit preexisting webhooks through their respective Webhooks pages. The only thing you should need to change is adding the secret token.
You can then test the filtering system by, for example, creating a tokenless webhook and sending a payload through that. If everything works correctly, your Pipeline should not start without supplying the correct token.
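If you tested with curl earlier, the same request should now be rejected unless it carries the correct token and event type headers; a hedged example of an accepted request:
$ curl -X POST "http://<el-route-host>" \
    -H 'Content-Type: application/json' \
    -H 'X-Gitlab-Event: Push Hook' \
    -H 'X-Gitlab-Token: UQucha5S045/xu0YwQX3GFBAgCF61yKj' \
    -d '{"ref": "refs/heads/main", "repository": {"git_http_url": "https://version.helsinki.fi/sample/node-pipeline.git"}}'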
Alternative for Webhooks: Setting up a CronJob
If you cannot have your pipeline trigger through a webhook, another option is to have a CronJob periodically check for changes in your repository, triggering the pipeline whenever changes are detected.
Commit hash ConfigMap
We'll start by creating a ConfigMap to store the hash of the latest commit so the CronJob has something to compare to the current hash. We can use oc here.
$ oc create configmap git-last-commit --from-literal=hash=initial
This creates a ConfigMap named git-last-commit containing the key hash with the initial value of initial.
PipelineRun template ConfigMap
We'll create a template for a PipelineRun that the CronJob will replicate. Save the file locally:
nodejs-pipeline-run-template.yaml
apiVersion: tekton.dev/v1
kind: PipelineRun
metadata:
generateName: nodejs-pipeline-run- # Prefix for the PipelineRuns created from this template
spec:
pipelineRef:
name: nodejs-pipeline # The name of your Pipeline
params:
- name: repo-url
value: https://github.com/your-org/your-repo.git # Use the HTTP url of your repo
- name: image-url
value: image-registry.openshift-image-registry.svc:5000/ocp-pipeline/hello-world # What you defined for the Deployment, no version/tag
# If you did the setup for restrictive permissions, you should include the subdirectory here as well if you are not using the default name
workspaces:
- name: shared-workspace
persistentVolumeClaim:
claimName: pipeline-pvc # The PVC you created
- name: conf-workspace
emptyDir: {}
And save it into a ConfigMap:
$ oc create configmap nodejs-pipeline-run-template \
    --from-file=nodejs-pipeline-run.yaml=nodejs-pipeline-run-template.yaml
This will create a ConfigMap named nodejs-pipeline-run-template containing the key nodejs-pipeline-run.yaml, with the value being the contents of the nodejs-pipeline-run-template.yaml file we created.
CronJob
Finally, we'll create a CronJob that will create PipelineRuns periodically when new changes are detected in the repository.
We can create the CronJob using a YAML.
cronjob.yaml
apiVersion: batch/v1
kind: CronJob
metadata:
name: check-git-changes
spec:
schedule: "*/5 * * * *" # Run every 5 minutes
jobTemplate:
spec:
template:
spec:
serviceAccountName: pipeline
securityContext:
seccompProfile:
type: RuntimeDefault
containers:
- name: git-checker
image: image-registry.openshift-image-registry.svc:5000/openshift/tools # Contains oc and git
command:
- /bin/bash
- -c
- |
set -e
# NOTE! These lines are not necessary if your repo needs no authentication to pull from!
mkdir -p /tmp/git-creds
USER=$(cat /auth/username)
PASS=$(cat /auth/password)
# Change the domain here if your code is not on GitHub!
echo "https://${USER}:${PASS}@github.com" > /tmp/git-creds/.git-credentials
git config --file=/tmp/git-creds/.gitconfig credential.helper 'store --file=/tmp/git-creds/.git-credentials'
export GIT_CONFIG_GLOBAL=/tmp/git-creds/.gitconfig
# /END NOTE
# Enter the details of your repo
REPO_URL="https://github.com/your-org/your-repo.git"
BRANCH="main"
# Get the current commit hash from the repo
HASH=$(git ls-remote $REPO_URL $BRANCH | cut -f1)
# Get the hash from the ConfigMap
OLD_HASH=$(oc get configmap git-last-commit -o=jsonpath='{.data.hash}')
if [ "$HASH" != "$OLD_HASH" ]; then
echo "New commit $HASH detected at $REPO_URL! Updating hash and triggering pipeline..."
# Add new hash to ConfigMap
oc patch configmap git-last-commit --type merge -p "{\"data\":{\"hash\":\"$HASH\"}}"
# Load template from mounted volume
oc create -f /templates/nodejs-pipeline-run.yaml
else
echo "No changes in $REPO_URL."
fi
securityContext:
runAsNonRoot: true
allowPrivilegeEscalation: false
capabilities:
drop:
- ALL
volumeMounts: # Used to load the template YAML and the Git credentials Secret
- name: pipeline-template
mountPath: /templates
# Mounting git-auth is unnecessary if your repo does not require authentication
- name: git-auth
mountPath: /auth
readOnly: true
restartPolicy: OnFailure
volumes:
- name: pipeline-template
configMap:
name: nodejs-pipeline-run-template # The created ConfigMap template
# Not necessary if authentication is not required
- name: git-auth
secret:
secretName: git-access-token # The secret containing the access token for your repo
And apply it to the cluster:
$ oc apply -f cronjob.yaml
The CronJob will now check the repository every 5 minutes and compare the current hash to the hash on file, creating a new PipelineRun whenever changes are detected.
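If you do not want to wait for the schedule while testing, you can trigger a one-off run of the CronJob manually (the job name here is arbitrary):
$ oc create job check-git-changes-manual --from=cronjob/check-git-changes
$ oc logs job/check-git-changes-manual -f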
Pipeline with restrictive permissions
Sometimes you may need to run a pipeline inside a cluster or use a file system with more strict access control. If that's the case, you will need additional tasks and settings for the pipeline and its tasks. This section will walk you through what to do differently to work around more restrictive permissions.
Using a subdirectory
To make sure your Tasks have full access to the files from your Git repository, you will need to create a subdirectory so you have full control of the access to that directory.
Task: make-subdir
This task will create a subdirectory as the user 65532 (nonroot), which is chosen due to it being what the git-clone cluster resolver task uses.
make-subdir.yaml
apiVersion: tekton.dev/v1
kind: Task
metadata:
name: make-subdir
spec:
params:
- default: app
description: The name of the subdirectory to create
name: DIRECTORY_NAME
type: string
steps:
- name: make-directory
image: busybox
# Define which user to run the step as
securityContext:
runAsUser: 65532
runAsNonRoot: true
script: |
echo "> Creating subdirectory $(workspaces.output.path)/$(params.DIRECTORY_NAME)"
mkdir -p $(workspaces.output.path)/$(params.DIRECTORY_NAME)
workspaces:
- name: output
Apply to cluster:
$ oc apply -f make-subdir.yaml
You now have a new Task that will create a subdirectory (by default named app) inside the mounted PersistentVolumeClaim. It runs as user 65532, meaning the subdirectory is fully writable for that user.
We can now add the task to the Pipeline. Start by defining a new param for the Pipeline for ease of access to the subdirectory in the other Tasks:
pipeline.yaml
spec:
params:
- description: git repo url to clone
name: repo-url
type: string
- description: the url to the application image
name: image-url
type: string
# Add this!
- description: the name of the subdirectory
name: subdir-name
default: app
type: string
...
We can then add the new Task to the Pipeline.
pipeline.yaml
tasks:
- name: make-subdir
params:
# Task param name
- name: DIRECTORY_NAME
# The value of the param in the Pipeline
value: $(params.subdir-name)
taskRef:
kind: Task
name: make-subdir
workspaces:
- name: output
workspace: shared-workspace
...
Make sure git-clone runs only after this:
pipeline.yaml
- name: clone-repo
params:
- name: URL
value: $(params.repo-url)
- name: REVISION
value: main
- name: DELETE_EXISTING
value: 'true'
# Add this!
runAfter:
- make-subdir
...
Using the subdirectory
Additionally, you should make all Tasks in the Pipeline use the newly created subdirectory, the only exceptions being convert-dockercfg (because it uses a different workspace), update-image (because it does not use workspaces) and build-image (shown later). Add this to the workspaces definition of the affected Tasks in the Pipeline:
pipeline.yaml
workspaces:
- name: source # May also be called "output"
# Add this!
subPath: $(params.subdir-name)
workspace: shared-workspace
...
Setting Tasks to run as a certain user
The git-clone cluster resolver task leaves its files with limited permissions, causing difficulties for the other Tasks. Luckily, as seen in the make-subdir Task, we can define which user each step in a Task runs as. As stated, git-clone runs as user 65532 by default, so we can set the steps inside npm-install-and-test to run as this user as well.
Task: npm-install-and-test
install-and-test.yaml
apiVersion: tekton.dev/v1
kind: Task
metadata:
name: npm-install-and-test
spec:
steps:
- image: 'node:18'
name: install
# Add this to both steps
securityContext:
runAsUser: 65532
runAsNonRoot: true
script: |
#!/bin/sh
cd $(workspaces.source.path)
npm install
- image: 'node:18'
name: test
# Here, too
securityContext:
runAsUser: 65532
runAsNonRoot: true
script: |
#!/bin/sh
cd $(workspaces.source.path)
npm test || echo "No tests, skipping."
workspaces:
- name: source
Once reconfigured with
$ oc apply -f install-and-test.yaml
the Task will run as user 65532, making it possible to access the files created by the git-clone cluster resolver task.
Modifying buildpacks
The default behavior of the buildpacks Task may cause issues inside the mounted volume due to restrictions set on commands like chown. You will need to edit the YAML locally to make it function with more restrictive policies. Let's download the file locally, for example with curl (using the same catalog URL you applied the Task from):
$ curl -LO https://raw.githubusercontent.com/tektoncd/catalog/main/task/buildpacks/<version>/buildpacks.yaml
We can then open the file in a text or code editor of choice and modify it. The modifications, in order, are as follows:
Don't chown inside the workspace
Since the first step, prepare, runs as user 0 (root) by default, it is fine to let it modify the permissions of /tekton/home and /layers. However, since the file system of the mounted workspace is more restrictive, chown will be stopped regardless of whether the step claims to run as root. Since we will use the correct user in the next step anyway, chown is unnecessary and the workspace can be removed from the script.
Before:
buildpacks.yaml
for path in "/tekton/home" "/layers" "$(workspaces.source.path)"; do
echo "> Setting permissions on '$path'..."
chown -R "$(params.USER_ID):$(params.GROUP_ID)" "$path"
...
After:
buildpacks.yaml
for path in "/tekton/home" "/layers"; do
echo "> Setting permissions on '$path'..."
chown -R "$(params.USER_ID):$(params.GROUP_ID)" "$path"
...
"$(workspaces.source.path)" has been removed from the for loop.
Stop using the root directory to store authorization files
Since we will have the create step run as non-root, we cannot have it use the root user's home directory to store the authorization secrets for the image registry. This can be avoided by setting the environment variable HOME for this step:
buildpacks.yaml
- name: create
image: $(params.BUILDER_IMAGE)
imagePullPolicy: Always
command: ["/cnb/lifecycle/creator"]
env:
- name: DOCKER_CONFIG
value: $(workspaces.dockerconfig.path)
# Add this!
- name: HOME
value: /tekton/home
...
Since the prepare step will be chowning the /tekton/home directory for the user we'll be running this step as, it's okay to have the step store the authorization files there.
Run as user:group 65532:65532
By default, the create step runs as 1000:1000. However, this user will not be able to access the files created by the git-clone cluster resolver task. The default has to be changed.
Before:
buildpacks.yaml
securityContext:
runAsUser: 1000
runAsGroup: 1000
...
After:
buildpacks.yaml
securityContext:
runAsUser: 65532
runAsGroup: 65532
runAsNonRoot: true
...
We can then reconfigure the buildpacks Task with the modified one:
$ oc apply -f buildpacks.yaml
Modifying buildpacks in the Pipeline
With the buildpacks Task modified, we need to make some additions to the definition of the Task inside the Pipeline, too. We will be adding SOURCE_SUBPATH, USER_ID and GROUP_ID to the params.
pipeline.yaml
- name: build-image
params:
- name: APP_IMAGE
value: $(params.image-url)
- name: BUILDER_IMAGE
value: 'paketobuildpacks/builder:base'
# Add the next three
- name: SOURCE_SUBPATH
value: $(params.subdir-name)
- name: USER_ID
value: '65532'
- name: GROUP_ID
value: '65532'
...
SOURCE_SUBPATH is the reason why we did not have to define subPath under workspaces for this Task. Buildpacks will look in the directory marked in the param when building the image.
With the Pipeline modified, we can reconfigure it on the cluster:
$ oc apply -f pipeline.yaml
With the changes made, you can test the pipeline again, fixing any errors you may face, and then continue to step 9.