Jenkins X on RedHat OpenShift 3.11¶
Why Jenkins X on RedHat OpenShift 3.11? Well, not everyone can use public cloud solutions.
So, to help out those who run OpenShift 3.11 and want to leverage Jenkins X, read along.
Note
This guide was written in early March 2020, using jx version `2.0.1212` and OpenShift version `v3.11.170`.
The OpenShift cluster used here is installed on GCP in a minimal fashion, so some shortcuts are taken. For example, there's only one user, the Cluster Admin. This isn't likely in a production cluster, but it is a start.
Pre-requisites¶
- jx binary
- kubectl 1.16.x or lower
- Helm v2
- a running OpenShift 3.11 cluster
- with cluster admin access - if you don't have that, take a look at this guide
- GitHub account
If you're like me, you likely manage your packages via a package manager such as Homebrew or Chocolatey. This means you might run newer versions of Helm and kubectl and need to downgrade them. See below for how!
Caution
If you run this in an on-premises solution or otherwise cannot contact GitHub, you have to use Lighthouse for managing the webhooks.
As of March 2020, the support for Bitbucket Server is missing some features; read here what you can do about that. Meanwhile, we suggest you use either GitHub Enterprise or GitLab as alternatives with better support.
Temporarily Set Helm v2¶
Download a Helm v2 release from Helm's GitHub Releases page.
Place the binary somewhere, for example `$HOME/Resource/helm2`. Then prepend that location to your `PATH`, so Helm v2 is found before any other Helm binary.
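For example, on macOS or Linux (assuming the binary lives in `$HOME/Resource/helm2`):

```bash
# put the Helm v2 directory in front of the existing PATH
export PATH="$HOME/Resource/helm2:$PATH"
```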
Ensure you're now running helm 2 by the command below:
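```bash
helm version --client
```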
It should show this:
Client: &version.Version{SemVer:"v2.16.1", GitCommit:"bbdfe5e7803a12bbdf97e94cd847859890cf4050", GitTreeState:"clean"}
Downgrade Kubectl¶
Downgrade kubectl (it needs to be lower than 1.17):
curl -LO https://storage.googleapis.com/kubernetes-release/release/v1.16.7/bin/darwin/amd64/kubectl
chmod +x ./kubectl
sudo mv ./kubectl /usr/local/bin/kubectl
To confirm your kubectl version is as expected, run the command below:
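```bash
kubectl version --client
```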
The output should be as follows:
Client Version: version.Info{Major:"1", Minor:"16", GitVersion:"v1.16.7", GitCommit:"be3d344ed06bff7a4fc60656200a93c74f31f9a4", GitTreeState:"clean", BuildDate:"2020-02-11T19:34:02Z", GoVersion:"go1.13.6", Compiler:"gc", Platform:"darwin/amd64"}
Install Via Boot¶
The current (as of March 2020) recommended way of installing Jenkins X is via `jx boot`.
Boot Configuration¶
- `provider: kubernetes`: normally this is set to your cloud provider. In order to stay close to Kubernetes itself, and thus OpenShift, we set this to `kubernetes`.
- `registry: docker.io`: if you're on a public cloud vendor, `jx boot` creates a Docker registry for you (GCR on GCP, ECR on AWS, and so on). In this example we leverage Docker Hub (`docker.io`). This should be indicative for any self-hosted registry as well!
- `dockerRegistryOrg: caladreas`: when the Docker registry owner - in my case, `caladreas` - is different from the git repository owner, you have to specify it via `dockerRegistryOrg`.
- `secretStorage: local`: the recommended approach is to use the HashiCorp Vault integration, but that isn't supported on OpenShift.
- `webhook: prow`: this uses Prow for webhook management, which in March 2020 is the best option to use with GitHub. If you want to use Bitbucket, read my guide on jx with Lighthouse & Bitbucket.
jx-requirements.yaml
autoUpdate:
enabled: false
bootConfigURL: https://github.com/jenkins-x/jenkins-x-boot-config.git
cluster:
clusterName: rhos11
environmentGitOwner: <GitHub User>
gitKind: github
gitName: github
gitServer: https://github.com
namespace: jx
provider: kubernetes
registry: docker.io
dockerRegistryOrg: caladreas
environments:
- ingress:
domain: openshift.kearos.net
externalDNS: false
ignoreLoadBalancer: true
namespaceSubDomain: -jx.
key: dev
repository: environment-rhos11-dev
- ingress:
domain: "staging.openshift.kearos.net"
namespaceSubDomain: ""
key: staging
repository: env-rhos311-staging
- ingress:
domain: "openshift.kearos.net"
namespaceSubDomain: ""
key: production
repository: env-rhos311-prod
gitops: true
ingress:
domain: openshift.example.com
externalDNS: false
ignoreLoadBalancer: true
namespaceSubDomain: -jx.
kaniko: true
repository: nexus
secretStorage: local
versionStream:
ref: v1.0.361
url: https://github.com/jenkins-x/jenkins-x-versions.git
webhook: prow
Jx Boot¶
Go to a directory where you want to clone the development environment repository.
Create the initial configuration file, `jx-requirements.yml`, and run the initial `jx boot` iteration.
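Assuming the file was saved as `jx-requirements.yml` in the current working directory, that first iteration is simply:

```bash
# jx boot picks up jx-requirements.yml from the current directory
jx boot
```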
It will ask you if you want to clone the `jenkins-x-boot-config` repository.
Say yes, and it will clone the configuration repository and start the `jx boot` pipeline. This first run will fail, because not all values are copied from your `jx-requirements.yml` into the newly cloned repository.
To resolve this, go into the newly cloned repository and replace the values in its `jx-requirements.yml` with your configuration. Once done, restart the installation.
Failed to install certmanager¶
Jenkins X will fail to install Certmanager, because it relies on newer Kubernetes API components than are available in OpenShift 3.11. The `11` in 3.11 refers to Kubernetes `1.11`; Certmanager requires `1.12`+.
To disable the installation of certmanager, we edit the `jenkins-x.yml`, which is the pipeline executed by `jx boot`.
We have to remove the step that tries to install certmanager, `install-cert-manager-crds`. The block of code we have to remove is as follows:
- args:
- apply
- --wait
- --validate=false
- -f
- https://raw.githubusercontent.com/jetstack/cert-manager/release-0.11/deploy/manifests/00-crds.yaml
command: kubectl
dir: /workspace/source
env:
- name: DEPLOY_NAMESPACE
value: cert-manager
name: install-cert-manager-crds
Once done, we can run `jx boot` again.
Pipeline Runner Failure¶
For me, the `pipeline runner` deployment failed, which in turn failed the `jx boot` process when it validated whether everything came up.
error: unable to clone version dir: unable to create temp dir for version stream: mkdir /tmp/jx-version-repo-083799486: permission denied
A solution is to add a volume to the `pipelinerunner` deployment, which mounts an `emptyDir`1 at `/tmp`.
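A minimal sketch of the relevant deployment fragment (the volume name is an assumption for illustration):

```yaml
spec:
  template:
    spec:
      containers:
        - name: pipelinerunner
          volumeMounts:
            - name: tmp            # make /tmp writable for the version stream clone
              mountPath: /tmp
      volumes:
        - name: tmp
          emptyDir: {}             # ephemeral scratch space, see footnote 1
```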
Once the pipelinerunner pod is running, rerun the `jx boot` installation.
It should now succeed with `cluster ok`.
Create Quickstart¶
To validate that Jenkins X works as it should, the first step is to create a `quickstart`23.
For simplicity, let's stick to a Go (lang) project, created as shown below.
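A sketch of the command, assuming the standard `golang-http` quickstart from the default quickstart catalog:

```bash
# filter the quickstart list down to the Go HTTP example
jx create quickstart --filter golang-http
```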
This creates a new repository based on the quickstart for Go (lang)4 and the build pack for Go (lang)5.
I ran into two issues:
- Tekton is not mounting my Docker registry credentials, so the Kaniko build fails with `401: not authenticated`
- the expose controller6 uses Ingress resources by default, but refuses to create those on OpenShift7
Once the issues below are solved, the application is running in the staging environment. You can view the applications in your cluster as follows:
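```bash
jx get applications
```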
Which should look something like this:
Missing Docker Credentials¶
I used Docker Hub as my Docker registry, but this applies to any other self-hosted Docker registry as well.
We have to do the following:
- create a `docker-registry` secret in Kubernetes, with the credentials for our Docker registry (Docker Hub or otherwise)
- mount this secret in a location where Kaniko picks it up
kubectl create secret docker-registry kaniko-secret --docker-username=<username> --docker-password=<password> --docker-email=<email-address>
Mount the Docker Hub secret as JSON in classic Kaniko style:
pipelineConfig:
env:
- name: DOCKER_CONFIG
value: /root/.docker/
pipelines:
overrides:
- pipeline: release
stage: build
name: container-build
volumes:
- name: kaniko-secret
secret:
defaultMode: 420
secretName: kaniko-secret
items:
- key: .dockerconfigjson
path: config.json
containerOptions:
volumeMounts:
- mountPath: /root/.docker
name: kaniko-secret
This should be enough. But if Kaniko still runs into a `401 unauthenticated` error, you have to change the ConfigMap for the builder PodTemplate. For example, if you use Go with the `go` build pack, your build container will use the `jenkins-x-pod-template-go` config map. This contains some environment variables related to Docker. If you still have issues, remove these environment variables.
> those pod template configmaps are to support traditional jenkins servers so we don't really need much from them anymore with tekton, though if we delete them things fail for now so need to keep them, but just try and remove all the DOCKER related stuff from the configmap
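To strip those variables, you can edit the ConfigMap in place (assuming the default `jx` namespace):

```bash
# opens the pod template config map in your editor; remove the DOCKER_* entries
kubectl edit configmap jenkins-x-pod-template-go -n jx
```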
Expose Controller Options¶
When the build succeeds, Jenkins X makes a PullRequest to your environment repository. By default, the first one is `Staging`, which will automatically promote10 and run the application.
This fails, because the default setting of the staging environment is to expose the applications via the `Expose Controller` with an `Ingress` resource. Currently (March 2020), the Expose Controller assumes OpenShift cannot handle Ingress resources7.
So there are two options here:
- configure the Expose Controller to use a `Route` to expose an application11
- customize the Expose Controller to only issue a warning when using `exposer: Ingress` in an OpenShift environment
If you choose option one, change the value of `exposer` from `Ingress` to `Route` in the `env/values.yaml`.
env/values.yaml
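A minimal sketch of the relevant fragment, assuming the default environment chart layout:

```yaml
expose:
  config:
    exposer: Route   # was: Ingress
```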
If you choose option two, fork the Expose Controller repository and change the line that stops it from creating Ingress resources7.
As can be seen here: https://github.com/jenkins-x/exposecontroller/blob/master/exposestrategy/ingress.go#L48
if t == openShift {
return nil, errors.New("ingress strategy is not supported on OpenShift, please use Route strategy")
}
And then take the following steps:
- make a new local build
- create and push a Docker image to a registry accessible in the cluster
- create a new Helm package: in the `charts` directory, execute `helm package exposecontroller`
- upload the Helm chart somewhere, for example, a GitHub repository
- update the `env/requirements.yaml` to use your Helm chart instead of the Jenkins X one for the Expose Controller
For example:
env/requirements.yaml
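A hedged sketch, with placeholder repository URL and version; the `expose` alias follows the default environment chart layout:

```yaml
dependencies:
  - alias: expose
    name: exposecontroller
    # point this at wherever you published your custom chart
    repository: https://raw.githubusercontent.com/<your-user>/<your-charts-repo>/master/
    version: <your-version>
```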
Promote To Production¶
To promote an application to the Production environment, we have to instruct Jenkins X to do it for us10.
For example:
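Here, the application name (`jx-go`, from this guide) and the version number are assumptions for illustration:

```bash
jx promote jx-go --version 0.0.1 --env production
```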
Aside from the Expose Controller issues, there's nothing else to be done.
Just be sure to make those changes in your production environment repository.
Preview Environments¶
The only thing required to generate a Preview Environment in Jenkins X12 is to create a PullRequest to the `master` branch from another branch.
Whether the preview environment succeeds depends on two things.
One, does the Jenkins X service account - `tekton-bot` - have enough permissions to create the namespace unique to the pull request (the default naming scheme is `jx-<user>-<app>-pr-<prNumber>`)?
Two, because each Preview Environment has its own Expose Controller13, the Expose Controller needs to be configured to either use `Route` or accept creating `Ingress` when on OpenShift.
If all is done, you can retrieve the current preview environments as follows:
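```bash
jx get previews
```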
Which should yield something like this:
PULL REQUEST NAMESPACE APPLICATION
https://bitbucket.openshift.kearos.net/jx/jx-go/pull/1 jx-jx-jx-go-pr-1 http://jx-go.jx-jx-jx-go-pr-1.openshift.example.com
Errata¶
Registry Owner Mismatch¶
It can happen that the Docker registry owner is not the same for every application. If this is the case, you will have to apply a workaround to the application after it is imported into Jenkins X (via `jx import` or `jx create quickstart`).
In order to resolve the mismatch between the Docker registry owner of the default Jenkins X installation and the application's owner, we need to change two things in our Jenkins X pipeline (`jenkins-x.yml`)8.
- add an override for the Docker registry owner in the `jenkins-x.yml`, the pipeline of your application
- add an override for the `container-build` step of the `build` stage, for both the `release` and `pullrequest` pipelines
Overriding the pipeline is done by specifying the stage to override under `pipelineConfig.overrides`89.
When you set `dockerRegistryOwner`, it overrides the value generated elsewhere.
The only exception is the destination the image gets uploaded to via `Kaniko` - hence the explicit `--destination` in the overrides below.
The end result will look like this.
jenkins-x.yml
dockerRegistryOwner: caladreas
buildPack: go
pipelineConfig:
overrides:
- pipeline: release
stage: build
name: container-build
steps:
- name: container-build
dir: /workspace/source
image: gcr.io/kaniko-project/executor:9912ccbf8d22bbafbf971124600fbb0b13b9cbd6
command: /kaniko/executor
args:
- --cache=true
- --cache-dir=/workspace
- --context=/workspace/source
- --dockerfile=/workspace/source/Dockerfile
- --destination=docker.io/caladreas/jx-go-rhos311-1:${inputs.params.version}
- --cache-repo=docker.io/todo/cache
- --skip-tls-verify-registry=docker.io
- --verbosity=debug
- pipeline: pullrequest
stage: build
name: container-build
steps:
- name: container-build
dir: /workspace/source
image: gcr.io/kaniko-project/executor:9912ccbf8d22bbafbf971124600fbb0b13b9cbd6
command: /kaniko/executor
args:
- --cache=true
- --cache-dir=/workspace
- --context=/workspace/source
- --dockerfile=/workspace/source/Dockerfile
- --destination=docker.io/caladreas/jx-go-rhos311-1:${inputs.params.version}
- --cache-repo=docker.io/todo/cache
- --skip-tls-verify-registry=docker.io
- --verbosity=debug
References¶
- https://kubernetes.io/docs/concepts/storage/volumes/#emptydir
- https://jenkins-x.io/docs/getting-started/first-project/create-quickstart/
- https://github.com/jenkins-x-buildpacks/jenkins-x-kubernetes/tree/master/packs
- https://jenkins-x.io/docs/concepts/technology/#whats-is-exposecontroller
- https://github.com/jenkins-x/exposecontroller/blob/master/exposestrategy/ingress.go#L48
- https://jenkins-x.io/docs/reference/pipeline-syntax-reference/
- https://technologyconversations.com/2019/06/30/overriding-pipelines-stages-and-steps-and-implementing-loops-in-jenkins-x-pipelines/
- https://github.com/jenkins-x/exposecontroller#exposer-types
- https://jenkins-x.io/docs/getting-started/build-test-preview/#generating-a-preview-environment