Database Connection And Secrets¶
In most cases, connecting to a database from your application requires three pieces of information.
- Database URL
- Username
- Password
Now, you can argue whether the fourth would be the schema, or whether that is included in the URL.
Either way, our application is currently missing this information. If you remember, we've set our URL to `jdbc:mysql://127.0.0.1:3306/fruits`, and our username and password to `${GOOGLE_SQL_USER}` and `${GOOGLE_SQL_PASS}` respectively.
Before we configure these secrets, let's manage our database and its schema in a way that fits Jenkins X.
Code Start¶
If you do not have a working version after the previous chapter, you can find the complete working code in the branch 04-jx-import-finish.
Manage Database Via Flyway¶
One part of CI/CD that is often forgotten is managing your database (schema) updates. There are several options available, such as Liquibase and Flyway.
As I've used Flyway in the past with good results, we use Flyway for this guide, but I'm certain you can achieve the same results whichever route you take. You can also choose to ignore database schema management, and leave it to Hibernate or Spring Data to update the schema based on your classes.
Important
If you do use Flyway or Liquibase to manage the database schema, you have to re-create the database.
The moment to do this is when the updated application lands in the Staging environment.
Quarkus Integration¶
Quarkus has excellent support for Flyway, and the Quarkus Flyway guide explains all the details. To use Flyway with our Quarkus application, we take the following steps:
- add the `quarkus-flyway` dependency to the `pom.xml`
- update the `application.properties` file in `src/main/resources`
- update our Fruit entity, `src/main/java/.../Fruit.java`
- create the initial schema definition
- ensure the find all call returns a sorted list
pom.xml
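The `pom.xml` content is not reproduced here; the change amounts to adding the Flyway extension (and, if you don't have it yet, the MySQL JDBC driver). A sketch, assuming versions are managed by the Quarkus BOM:

```xml
<!-- Flyway support for Quarkus -->
<dependency>
    <groupId>io.quarkus</groupId>
    <artifactId>quarkus-flyway</artifactId>
</dependency>
<!-- MySQL JDBC driver, if not already present -->
<dependency>
    <groupId>io.quarkus</groupId>
    <artifactId>quarkus-jdbc-mysql</artifactId>
</dependency>
```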
src/main/resources/application.properties
The highlighted lines are the changes.
quarkus.datasource.db-kind=mysql
quarkus.datasource.jdbc.url=jdbc:mysql://127.0.0.1:3306/fruits
quarkus.datasource.jdbc.max-size=8
quarkus.datasource.jdbc.min-size=2
quarkus.datasource.username=${GOOGLE_SQL_USER}
quarkus.datasource.password=${GOOGLE_SQL_PASS}
quarkus.flyway.migrate-at-start=true
quarkus.flyway.baseline-on-migrate=true
quarkus.hibernate-orm.database.generation=none
quarkus.log.level=INFO
quarkus.log.category."org.hibernate".level=INFO
To ensure we don't run into any trouble with MySQL locally or within Google Cloud, it is best to explicitly set the table name to exclusively lower case.
Secondly, instead of letting Hibernate take care of the id of our entities, we can rely on MySQL doing this for us. So we set the generation strategy to `IDENTITY`.
src/main/java/.../Fruit.java
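The entity source is not reproduced in this extract; a minimal sketch of what those two changes look like in a JPA entity (the field names match the schema below, everything else about your entity stays as it was):

```java
import javax.persistence.Entity;
import javax.persistence.GeneratedValue;
import javax.persistence.GenerationType;
import javax.persistence.Id;
import javax.persistence.Table;

// Explicitly lower case table name, to avoid case-sensitivity
// differences between local MySQL and CloudSQL
@Entity
@Table(name = "fruit")
public class Fruit {

    // Let MySQL generate the id (AUTO_INCREMENT) instead of Hibernate
    @Id
    @GeneratedValue(strategy = GenerationType.IDENTITY)
    public Long id;

    public String name;
    public String color;
}
```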
Schema definitions¶
Flyway uses SQL files for managing the schema - and data - of your database. By default, it looks in the `src/main/resources/db/migration` folder.
Flyway runs the files in order and uses the filename for determining the schema version and update title. We create two files:
- `V1.0.0__initial.sql`: houses our Fruit table
- `V1.0.1__test_data.sql`: has our initial data entries, which we can then also use for our Integration Tests!
V1.0.0__initial.sql
The comments were generated by CloudSQL's export; if you use CloudSQL, I recommend leaving them intact. If you use something else, it is probably best to remove them.
/*!40101 SET @OLD_CHARACTER_SET_CLIENT=@@CHARACTER_SET_CLIENT */;
/*!40101 SET @OLD_CHARACTER_SET_RESULTS=@@CHARACTER_SET_RESULTS */;
/*!40101 SET @OLD_COLLATION_CONNECTION=@@COLLATION_CONNECTION */;
/*!40101 SET NAMES utf8 */;
/*!40103 SET @OLD_TIME_ZONE=@@TIME_ZONE */;
/*!40103 SET TIME_ZONE='+00:00' */;
/*!40014 SET @OLD_UNIQUE_CHECKS=@@UNIQUE_CHECKS, UNIQUE_CHECKS=0 */;
/*!40014 SET @OLD_FOREIGN_KEY_CHECKS=@@FOREIGN_KEY_CHECKS, FOREIGN_KEY_CHECKS=0 */;
/*!40101 SET @OLD_SQL_MODE=@@SQL_MODE, SQL_MODE='NO_AUTO_VALUE_ON_ZERO' */;
/*!40111 SET @OLD_SQL_NOTES=@@SQL_NOTES, SQL_NOTES=0 */;
--
-- Current Database: `fruits`
--
--
-- CREATE DATABASE /*!32312 IF NOT EXISTS*/ `fruits` /*!40100 DEFAULT CHARACTER SET utf8 */;
--
-- USE `fruits`;
--
-- Table structure for table `Fruit`
--
DROP TABLE IF EXISTS `fruit`;
/*!40101 SET @saved_cs_client = @@character_set_client */;
/*!40101 SET character_set_client = utf8 */;
CREATE TABLE `fruit` (
`id` bigint(20) NOT NULL AUTO_INCREMENT,
`color` varchar(255) DEFAULT NULL,
`name` varchar(255) DEFAULT NULL,
PRIMARY KEY (`id`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8;
/*!40101 SET character_set_client = @saved_cs_client */;
CREATE UNIQUE INDEX `fruit_name_color` ON `fruit` (`name`, `color`);
V1.0.1__test_data.sql
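The contents of this file are not reproduced in this extract; a minimal sketch, with illustrative rows (use whatever data suits your Integration Tests):

```sql
INSERT INTO fruit (name, color) VALUES ('Apple', 'Green');
INSERT INTO fruit (name, color) VALUES ('Banana', 'Yellow');
INSERT INTO fruit (name, color) VALUES ('Cherry', 'Red');
```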
Unit Test Configuration¶
To ensure our unit tests keep working, we have to make some changes to our tests' `application.properties` file.
The mandatory lines are highlighted; the last three are for debug logging. These give you more information on what Flyway is doing, which is almost indispensable when you run into issues. They can safely be left out, but I thought it best to show you how to set up that type of logging, just in case.
src/test/resources/application.properties
quarkus.datasource.url=jdbc:h2:tcp://localhost/mem:fruits;MODE=MySQL;DB_CLOSE_ON_EXIT=FALSE;DB_CLOSE_DELAY=-1;DATABASE_TO_UPPER=false;
quarkus.datasource.driver=org.h2.Driver
quarkus.hibernate-orm.log.sql=true
quarkus.hibernate-orm.database.default-schema=FRUITS
quarkus.flyway.migrate-at-start=true
quarkus.flyway.schemas=FRUITS
quarkus.log.console.level=DEBUG
quarkus.log.category."org.flywaydb.core".level=DEBUG
quarkus.log.category."io.quarkus.flyway".level=DEBUG
CloudSQL Proxy Container¶
But first, we have one more piece of configuration to do for our CloudSQL database. If you do not use the CloudSQL database, you can skip this step, but make sure to replace the JDBC URL appropriately.
Google takes its security seriously, and thus doesn't allow you to access its CloudSQL databases from everywhere. Our Kubernetes cluster cannot directly access CloudSQL, but it can use a proxy, provided we connect with a Google Cloud Service Account - not to be confused with a Kubernetes Service Account.
We deal with the secrets later; first we have to add this proxy container configuration. To add the container, we update the `deployment.yaml` in our `charts/Name-Of-Your-Application/templates` folder. `containers` is a list, so we add the `cloudsql-proxy` container as an additional list item.
      - name: cloudsql-proxy
        image: gcr.io/cloudsql-docker/gce-proxy:1.16
        command: ["/cloud_sql_proxy",
                  "-instances={{.Values.secrets.sql_connection}}=tcp:3306",
                  "-credential_file=/secrets/cloudsql/credentials.json"]
If you look carefully, you can see we already reference a secret: `{{.Values.secrets.sql_connection}}`. We come back to this later. For further clarification on how the containers section of our `deployment.yaml` should look, expand the example below.
charts/Your-Application-Name/templates/deployment.yaml snippet
    spec:
      containers:
      - name: cloudsql-proxy
        image: gcr.io/cloudsql-docker/gce-proxy:1.16
        command: ["/cloud_sql_proxy",
                  "-instances={{.Values.secrets.sql_connection}}=tcp:3306",
                  "-credential_file=/secrets/cloudsql/credentials.json"]
      - name: {{ .Chart.Name }}
        envFrom:
        - secretRef:
            name: {{ template "fullname" . }}-sql-secret
        - secretRef:
            name: {{ template "fullname" . }}-sentry-dsn
        image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"
        imagePullPolicy: {{ .Values.image.pullPolicy }}
Jenkins X and Secrets¶
If you have followed the prerequisites, you have a Jenkins X installation with Hashicorp Vault, where Jenkins X is configured to manage its secrets in Vault.
This allows Jenkins X to retrieve secrets from Vault for you, and inject them where you need them. We will explore the different ways you can do so, in order to give our application enough information to connect to the MySQL database.
Jenkins X can deal with secrets in several ways, and we will use most of them going forward.
- configure a Kubernetes Secret in the Chart templates; not really Jenkins X, but remember you have this option as well
- inject a secret from Vault as an environment variable into the container; this is a global secret, in the sense that the Vault URI is always the same, for every environment
- inject a secret via a `values.yaml` variable; you can - and should - change this depending on the environment. This is an environment secret, as our Jenkins X environment will do the replacement from Vault URI to value
- use a container in our pipeline that has both the Jenkins X binary (`jx`) and the Vault CLI, through which we can then interact with Vault's API
Caution
When injecting variables directly as environment variables, they will show up in Kubernetes manifests.
This might leak the secrets further than you intend. You can, however, use this as a shortcut to centralized configuration management - forgoing something such as Consul or Spring Cloud Config Server.
When using the option to read the secret into a `values.yaml` variable, you can use this variable in a template. This means you can create a Kubernetes Secret manifest in your templates folder, and have Jenkins X populate its value from Vault.
Create Secrets¶
Let's configure the secrets in our application. We start by setting up variables and placeholders in the `values.yaml` of our Chart, after which we configure the secrets in our Helm templates and enter their values in Vault.
Important
To access Vault, you can use the Vault CLI - see how to install it here - or its UI.
Either way, to get the Vault configuration of your Jenkins X cluster, use the `get vault-config` command:
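A sketch of the invocation (the printed values below are placeholders for your cluster's actual address and token):

```shell
jx get vault-config
# export VAULT_ADDR=https://vault.example.com
# export VAULT_TOKEN=<your-token>
```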
It prints out the URL and the connection token.
Every secret we create is assumed to be in the `secrets` KV engine. Jenkins X makes the same assumption, and omits this prefix in the Vault secret URI.
Configure Values.yaml¶
Depending on the secret, we either want it to be the same everywhere, or unique per environment. That partly depends on your setup: do you have a different database for every environment that isn't Production?
In my case, the user of my database is always the same, so I inject it as an environment variable. We do this by adding `key: value` pairs to the `env:` property.
The other pieces of information are both more sensitive and environment specific. So we create a new property called `secrets`, and fill in empty (placeholder) values:
- sql_password: the password for our database
- sql_connection: the connection information the CloudSQL Proxy container will use
- sqlsa: the Google Cloud Service Account (JSON) key for validating the database connection request
values.yaml
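The `values.yaml` content is not reproduced here; a sketch of the relevant part (the keys your generated chart already has stay as they are; `root` is an illustrative username):

```yaml
env:
  GOOGLE_SQL_USER: root

secrets:
  sql_password: ""
  sql_connection: ""
  sqlsa: ""
```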
Google CloudSQL Connection URL¶
Access Vault and create a new secret under `secrets/Name-Of-Your-Application`. In these examples, we use `quarkus-fruits` as our application name.
Each secret in Vault is a set of Key/Value pairs.
For the CloudSQL Connection URL, we use the key `INSTANCE_CONNECTION_NAME`, as this is how CloudSQL names this information.
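With the Vault CLI, creating this secret could look like the following (the instance connection name is an illustrative placeholder; in the UI you achieve the same by creating the key under `secrets/quarkus-fruits`):

```shell
vault kv put secrets/quarkus-fruits \
  INSTANCE_CONNECTION_NAME="my-project:europe-west1:my-cloudsql-instance"
```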
Google CloudSQL Service Account Key¶
If everything went all right, you have created this Service Account earlier. If not, please revisit the How To Connect To The Database paragraph from the CloudSQL page.
As the secret is a JSON file, it is best to Base64 encode the contents before adding it as a secret value in Vault.
On Linux or macOS, this should be sufficient:
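Assuming your Service Account key file is called `credentials.json`:

```shell
# encode the key file, stripping newlines from the output
base64 credentials.json | tr -d '\n'
# on macOS, the equivalent is: base64 -i credentials.json
```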
As key, we use `SA`.
Create Secret Manifest¶
This is very sensitive information, so we follow the best practice of mounting the secret. To do so, we have to create it as a Secret in Kubernetes. We create a new Kubernetes Secret manifest in the `charts/Your-Application-Name/templates` folder with the name `sql-sa-secret.yaml`, so that Jenkins X will create it for us.
templates/sql-sa-secret.yaml
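The manifest is not reproduced in this extract; a sketch consistent with the volume definition used later. The secret name must match the `secretName` the deployment references, and `sqlsa` already holds the Base64 encoded key, so no extra encoding is needed here:

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: {{ template "fullname" . }}-sql-sa
type: Opaque
data:
  credentials.json: {{ .Values.secrets.sqlsa }}
```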
Update Deployment¶
We have to make two changes: create a volume, and a volumeMount to ensure the `cloudsql-proxy` container has access to the Service Account Key.
templates/deployment.yaml
      - name: cloudsql-proxy
        image: gcr.io/cloudsql-docker/gce-proxy:1.16
        command: ["/cloud_sql_proxy",
                  "-instances={{.Values.secrets.sql_connection}}=tcp:3306",
                  "-credential_file=/secrets/cloudsql/credentials.json"]
        volumeMounts:
        - name: cloudsql-instance-credentials
          mountPath: /secrets/cloudsql
          readOnly: true
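The volume itself is defined at the Pod level, alongside `containers:`; it reads the Kubernetes Secret we created from the Service Account key:

```yaml
      volumes:
      - name: cloudsql-instance-credentials
        secret:
          secretName: {{ template "fullname" . }}-sql-sa
```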
Caution
Kubernetes' YAML manifest files can be difficult to get right, especially the indentation.
If you have Helm installed locally, you can use its linting feature to see if everything is all right.
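For example, run the linter from the repository root (replace the chart folder name with your application's):

```shell
helm lint charts/quarkus-fruits
```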
Google CloudSQL Password¶
Add the CloudSQL password to your secret in Vault (in my case, `secrets/quarkus-fruits`) and use the key `GOOGLE_SQL_PASS`.
Create Secret Manifest¶
We again make a Kubernetes Secret manifest, this time by the name of `sql-secret.yaml`. Please note that, as we did not Base64 encode our password, we have to do this in our manifest. But no worries, Helm templating can do this for us, via `| b64enc`.
templates/sql-secret.yaml
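A sketch of the manifest, consistent with the secret name the deployment references and using `b64enc` as described:

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: {{ template "fullname" . }}-sql-secret
type: Opaque
data:
  GOOGLE_SQL_PASS: {{ .Values.secrets.sql_password | b64enc }}
```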
Update Deployment¶
Instead of mounting the secret, we inject it as an environment variable. Unlike the username, which was added as an environment variable directly, we let Kubernetes take care of the injection via `envFrom`. This ensures the secret does not show up in the manifest (when you do a `kubectl get pod -o yaml`, for example), and saves us the hassle of reading the property from a file.
When using `envFrom`, there are several options for where the value comes from. In our case, it comes from a secret, so we use `secretRef` with the name of our `sql-secret` secret manifest.
Important
The `deployment.yaml` is set up in such a way that it expects us to declare `envFrom` in the `values.yaml`. Unfortunately, in our case that doesn't work, because we don't know the name of the secret up front. It is derived from the name of our Helm chart's installation, and it is best to keep this templated.
To solve this, we have to remove the line `{{ toYaml .Values.envFrom | indent 10 }}`.
templates/deployment.yaml
      - name: {{ .Chart.Name }}
        image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"
        imagePullPolicy: {{ .Values.image.pullPolicy }}
        env:
{{- range $pkey, $pval := .Values.env }}
        - name: {{ $pkey }}
          value: {{ quote $pval }}
{{- end }}
        envFrom:
        - secretRef:
            name: {{ template "fullname" . }}-sql-secret
{{ toYaml .Values.envFrom | indent 10 }} ## LINE TO REMOVE
Google CloudSQL Username¶
Add the CloudSQL username to your secret in Vault (in my case, `secrets/quarkus-fruits`) and use the key `GOOGLE_SQL_USER`.
Info
If you haven't created a separate user - which is probably best - the user would be `root`.
If you create a new user, ensure the password set in `GOOGLE_SQL_PASS` matches the user in `GOOGLE_SQL_USER`.
For the username, this is all we have to do, as we directly inject this variable as an environment variable.
Configure Jenkins X Environments¶
For each Jenkins X environment that your application is going to land in, such as `jx-staging` and `jx-production`, we have to enable Vault support.
We do this by making a change in the environment's `jx-requirements.yml` file in the root of the environment's repository. This file might not exist; if so, create it.
If you're not sure where the repository of your environment is, you can retrieve this information via the `jx` CLI.
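For example, the following lists your environments along with their source repositories:

```shell
jx get environments
```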
To enable support for Vault, we add `secretStorage: vault` to the file. The file will look like this:
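A minimal version of the file:

```yaml
secretStorage: vault
```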
Update Staging Environment¶
Once you have created the secret and the `INSTANCE_CONNECTION_NAME` key with its secret value - which should be the instance connection name from your CloudSQL database - we can add this information to our Jenkins X environment(s), such as Staging. Again, here we assume the application's name is `quarkus-fruits`.
env/values.yaml
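The environment's `env/values.yaml` content is not reproduced here; a sketch, assuming the `vault:<path>:<key>` URI scheme Jenkins X uses for Vault lookups (double-check the exact syntax against your Jenkins X version):

```yaml
quarkus-fruits:
  secrets:
    sql_connection: vault:quarkus-fruits:INSTANCE_CONNECTION_NAME
```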
For this secret, this is all we need to do, as our CloudSQL Proxy container uses this value directly in the `deployment.yaml`.
Summary of Changes Made To Application¶
- created two new templates in the folder `charts/Name-Of-Your-Application/templates`:
    - `sql-secret.yaml`
    - `sql-sa-secret.yaml`
- updated the `charts/Name-Of-Your-Application/templates/deployment.yaml`:
    - added a second container, for the CloudSQL Proxy
    - added a volume and volumeMount to the CloudSQL Proxy container
    - added environment injection from a secret for the password
- updated `charts/Name-Of-Your-Application/values.yaml` with placeholders for the secrets
templates/deployment.yaml
Your deployment should now look like this:
{{- if .Values.knativeDeploy }}
{{- else }}
apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ template "fullname" . }}
  labels:
    draft: {{ default "draft-app" .Values.draft }}
    chart: "{{ .Chart.Name }}-{{ .Chart.Version | replace "+" "_" }}"
spec:
  selector:
    matchLabels:
      app: {{ template "fullname" . }}
{{- if .Values.hpa.enabled }}
{{- else }}
  replicas: {{ .Values.replicaCount }}
{{- end }}
  template:
    metadata:
      labels:
        draft: {{ default "draft-app" .Values.draft }}
        app: {{ template "fullname" . }}
{{- if .Values.podAnnotations }}
      annotations:
{{ toYaml .Values.podAnnotations | indent 8 }}
{{- end }}
    spec:
      containers:
      - name: cloudsql-proxy
        image: gcr.io/cloudsql-docker/gce-proxy:1.16
        command: ["/cloud_sql_proxy",
                  "-instances={{.Values.secrets.sql_connection}}=tcp:3306",
                  "-credential_file=/secrets/cloudsql/credentials.json"]
        volumeMounts:
        - name: cloudsql-instance-credentials
          mountPath: /secrets/cloudsql
          readOnly: true
      - name: {{ .Chart.Name }}
        image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"
        imagePullPolicy: {{ .Values.image.pullPolicy }}
        env:
{{- range $pkey, $pval := .Values.env }}
        - name: {{ $pkey }}
          value: {{ quote $pval }}
{{- end }}
        envFrom:
        - secretRef:
            name: {{ template "fullname" . }}-sql-secret
        ports:
        - containerPort: {{ .Values.service.internalPort }}
        livenessProbe:
          httpGet:
            path: {{ .Values.livenessProbePath }}
            port: {{ .Values.service.internalPort }}
          initialDelaySeconds: {{ .Values.livenessProbe.initialDelaySeconds }}
          periodSeconds: {{ .Values.livenessProbe.periodSeconds }}
          successThreshold: {{ .Values.livenessProbe.successThreshold }}
          timeoutSeconds: {{ .Values.livenessProbe.timeoutSeconds }}
        readinessProbe:
          httpGet:
            path: {{ .Values.readinessProbePath }}
            port: {{ .Values.service.internalPort }}
          periodSeconds: {{ .Values.readinessProbe.periodSeconds }}
          successThreshold: {{ .Values.readinessProbe.successThreshold }}
          timeoutSeconds: {{ .Values.readinessProbe.timeoutSeconds }}
        resources:
{{ toYaml .Values.resources | indent 12 }}
      terminationGracePeriodSeconds: {{ .Values.terminationGracePeriodSeconds }}
      volumes:
      - name: cloudsql-instance-credentials
        secret:
          secretName: {{ template "fullname" . }}-sql-sa
{{- end }}
Summary of Changes Made To Staging Environment¶
- created a new file, called `jx-requirements.yml`, at the root
- updated the `env/values.yaml` with values for our application