This article demonstrates a simple way to pass information from one task to another in a Tekton pipeline. It provides all the steps needed for a test use case, including the YAML files. It starts with the basics, such as creating a project and a persistent volume claim (PVC), before building the tasks and the pipeline. Finally, it includes the YAML files for the corresponding task and pipeline runs. You can extend this example to your own use cases.
I tested this example with Red Hat OpenShift Pipelines version 1.7.0 and higher, running on OpenShift version 4.10 and higher. Before you begin, confirm that the Red Hat OpenShift Pipelines operator is installed.
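As a quick check, you can list the installed operators from the command line. This is a minimal sketch; the exact ClusterServiceVersion name varies by release:

# Confirm the OpenShift Pipelines operator is installed (CSV name varies by release)
oc get csv -n openshift-operators | grep -i pipelines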
Create a test project
kind: Project
apiVersion: project.openshift.io/v1
metadata:
  name: test
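To create the project, save the YAML above to a file and apply it. The file name here is an example; note that creating a Project object directly requires cluster-admin rights, and oc new-project test achieves the same result for regular users:

# Create the test project from the YAML definition
oc apply -f project.yaml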
Before you create the tasks and the pipeline, confirm that you have a PVC associated with the project where you are creating them.
Create a PVC named test
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: test
  namespace: test
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi
  storageClassName: gp2
  volumeMode: Filesystem
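The gp2 storage class is typical of AWS-backed clusters; substitute a storage class available in your environment. Apply the claim and verify it (the file name is an example):

# Create the PVC and check its status (it may stay Pending until first use,
# depending on the storage class volume binding mode)
oc apply -f pvc.yaml
oc get pvc test -n test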
In the first task, add a workspace like the one in this spec. In this case, the workspace is named source, and the second task uses the same name. Capture the data you need to pass as results and redirect that information to a file, in this example ee_data.json, which you read in the second task.
apiVersion: tekton.dev/v1beta1
kind: Task
metadata:
  name: task1
spec:
  description: >-
    Add execution environment to automation controller
  workspaces:
    - name: source
  results:
    - name: data
      description: ID to be passed to next task
  steps:
    - name: task1
      image: quay.io/rshah/jq
      workingDir: $(workspaces.source.path)
      resources: {}
      script: |
        #!/usr/bin/env bash
        data="This is the output from task 1"
        # Write the data to a file on the shared workspace so task2 can read it
        printf "%s" "${data}" > ee_data.json
        AC_EE_ID=$(cat ee_data.json)
        printf "%s" "${AC_EE_ID}"
In task 2, you can reference ee_data.json as shown below:
apiVersion: tekton.dev/v1beta1
kind: Task
metadata:
  name: task2
spec:
  workspaces:
    - name: source
  steps:
    - name: task2
      image: quay.io/rshah/jq
      workingDir: $(workspaces.source.path)
      resources: {}
      script: |
        #!/usr/bin/env bash
        # Read the file that task1 wrote to the shared workspace
        AC_EE_ID=$(cat ee_data.json)
        printf "%s" "${AC_EE_ID}"
When you run task1 and task2 in a pipeline, both tasks should print the same output.
Create a pipeline
apiVersion: tekton.dev/v1beta1
kind: Pipeline
metadata:
  name: value-pass-pipeline
spec:
  workspaces:
    - name: source
  params:
    - description: Verify the TLS on the registry endpoint (for push/pull to a non-TLS registry)
      name: TLSVERIFY
      type: string
      default: "false"
    - description: Dummy parameter for task1
      name: task1
      type: string
      default: "task1"
    - description: Dummy parameter for task2
      name: task2
      type: string
      default: "task2"
  tasks:
    - name: task1
      taskRef:
        kind: Task
        name: task1
      workspaces:
        - name: source
          workspace: source
    - name: task2
      taskRef:
        kind: Task
        name: task2
      runAfter:
        - task1
      workspaces:
        - name: source
          workspace: source
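Apply the pipeline and start it with the tkn CLI, binding the source workspace to the PVC. This is a sketch; the file name is an example, and --use-param-defaults accepts the dummy parameter defaults without prompting:

oc apply -f pipeline.yaml -n test
tkn pipeline start value-pass-pipeline \
  --workspace name=source,claimName=test \
  --use-param-defaults -n test --showlog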
When you run the pipeline, both tasks print the same output:
STEP-TASK1 This is the output from task 1
STEP-TASK2 This is the output from task 1
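You can also retrieve the output of a completed run afterward. The run name below matches the PipelineRun shown in the next section:

tkn pipelinerun logs test-0ij91k -n test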
Task and pipeline runs
apiVersion: tekton.dev/v1beta1
kind: TaskRun
metadata:
  name: test-0ij91k-task1
  namespace: test
spec:
  resources: {}
  serviceAccountName: pipeline
  taskRef:
    kind: Task
    name: task1
  timeout: 59m59.989014151s
  workspaces:
    - name: source
      persistentVolumeClaim:
        claimName: test
---
apiVersion: tekton.dev/v1beta1
kind: TaskRun
metadata:
  name: test-0ij91k-task2
  namespace: test
spec:
  resources: {}
  serviceAccountName: pipeline
  taskRef:
    kind: Task
    name: task2
  timeout: 59m59.989014151s
  workspaces:
    - name: source
      persistentVolumeClaim:
        claimName: test
---
apiVersion: tekton.dev/v1beta1
kind: PipelineRun
metadata:
  name: test-0ij91k
  namespace: test
spec:
  pipelineRef:
    name: value-pass-pipeline
  serviceAccountName: pipeline
  timeout: 1h0m0s
  workspaces:
    - name: source
      persistentVolumeClaim:
        claimName: test
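Instead of starting the pipeline with tkn, you can create the PipelineRun directly; the controller then creates the TaskRuns for you (the file name is an example):

oc create -f pipelinerun.yaml -n test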
This simple example provides a basic understanding of how to pass output from one task as input to another. You can extend it to your own use cases.
About the author
Ritesh Shah is a Principal Architect with the Red Hat Portfolio Technology Platform team. He focuses on creating and using next-generation platforms, including artificial intelligence/machine learning (AI/ML) workloads, application modernization and deployment, disaster recovery and business continuity, as well as software-defined data storage.
Ritesh is an advocate for open source technologies and products, focusing on modern platform architecture and design for critical business needs. He is passionate about next-generation platforms and how application teams, including data scientists, can use open source technologies to their advantage. Ritesh has extensive experience working with enterprises and helping them succeed with open source technologies.