HashiCorp Vault (Vault) is a popular open source tool for secrets management that codifies many of the best practices around secrets management, including time-based access controls, the principle of least privilege, encryption, dynamic credentials, and much more. Google Kubernetes Engine (GKE) is Google's hosted, managed Kubernetes offering. This codelab combines these two tools in a two-part series:

  1. Running Vault as a service on GKE
  2. Connecting to Vault from other services in GKE

We will use the following architecture to run and connect to Vault on Kubernetes:

What you'll learn

  1. How to run HashiCorp Vault as a service on GKE
  2. How to connect to Vault from other services running in GKE

Setup

In an Incognito window or separate browser, visit console.cloud.google.com and log in with the provided credentials. This is a temporary account, and you will only have access to it for this one lab.

Start Cloud Shell

While Google Cloud can be operated remotely from your laptop, in this codelab you will be using Google Cloud Shell, a command line environment running in the Cloud.

From the GCP Console click the Cloud Shell icon on the top right toolbar:

It should only take a few moments to provision and connect to the environment. When it is finished, you should see something like this:

This virtual machine is loaded with all the development tools you'll need. It offers a persistent 5GB home directory and runs on Google Cloud, greatly enhancing network performance and authentication. All of your work in this lab can be done with just a browser.

Before deploying Vault in production, first install Vault locally. This gives you the vault CLI, which you will later use to interact with the cluster.

You could browse to the Vault website, but this section will teach you how to download, verify, and install Vault securely. Even though Vault is downloaded over a TLS connection, it may still be possible for a skilled attacker to compromise the underlying storage system or network transport. For that reason, in addition to serving the binaries over TLS, HashiCorp also signs the checksums of each release with their private key. Thus, to verify the integrity of a download, we must:

  1. Import and trust HashiCorp's GPG public key
  2. Download the Vault binary
  3. Download the Vault checksums
  4. Download the Vault checksum signature
  5. Verify the signature of the checksum against HashiCorp's GPG key
  6. Verify the checksums of the binary against the file

This way, even if an attacker were able to compromise the network transport and underlying storage component, they wouldn't be able to sign the checksums with HashiCorp's GPG key. If this operation is successful, we have an extremely high degree of confidence that the software is untainted.
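As a quick illustration, step 6 (matching the binary against the checksums file) uses standard tooling. Here is a self-contained sketch with a stand-in file; in the real flow, the SHA256SUMS file is downloaded from releases.hashicorp.com and its GPG signature is verified first:

```shell
# Create a stand-in "binary" and a checksums file for it, then verify.
printf 'not a real binary' > vault_demo
sha256sum vault_demo > SHA256SUMS_demo

# Prints "vault_demo: OK" on a match; exits non-zero (and prints FAILED)
# if the file has been altered.
sha256sum -c SHA256SUMS_demo
```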

Since that process can be tedious, we will leverage a Docker container to do it for us. Execute the following command to install Vault locally. We install Vault into $HOME/bin because that will persist between restarts on Cloud Shell.

$ docker run -v $HOME/bin:/software sethvargo/hashicorp-installer vault 1.2.2
$ sudo chown -R $(whoami):$(whoami) $HOME/bin/

Add the bin to our path:

$ export PATH=$HOME/bin:$PATH

Finally, optionally, explore the Vault CLI help. Most Vault commands will not work because there is no Vault server running. Do not start a Vault server yet.

$ vault -h

Vault itself is not a storage mechanism. Instead, it has a pluggable storage system for persisting data at rest. This lab uses Google Cloud Storage as the Vault storage backend because of its high performance, low cost, and high availability support. There are many other options for storage backends, each with their own trade-offs.
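For reference, pointing Vault at a GCS bucket is a small piece of server configuration. A rough sketch of the relevant stanza is below; the bucket name is illustrative, and the Kubernetes spec applied later in this lab supplies the real configuration:

```hcl
# Illustrative fragment of a Vault server config using the GCS backend.
storage "gcs" {
  bucket     = "my-project-vault-storage"
  ha_enabled = "true"
}
```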

This lab uses the Google Cloud Storage HashiCorp Vault storage backend, which means we need to create a storage bucket in which Vault can read/write/update information. To do that, use the gsutil command:

$ gsutil mb "gs://${GOOGLE_CLOUD_PROJECT}-vault-storage"

In order to make our deployed Vault cluster highly available, we need to leverage automatic unsealing through Google Cloud KMS. By default, a new Vault server starts in an uninitialized state, meaning it's waiting for a human operator to execute commands and configure it. The vault-init service automates the process of configuring and unsealing the Vault cluster.

Because this process is automated, the initial root token and initial unseal keys must be persisted somewhere at rest. Additionally, we do not want to persist those values in plaintext, since then anyone with access to the bucket could become an administrator in Vault. The vault-init service uses Google Cloud KMS to encrypt the initial root token and unseal keys before storing them in Google Cloud Storage.
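Conceptually, vault-init encrypts before it stores. The following local sketch mimics the idea with an OpenSSL symmetric key standing in for the Cloud KMS key; vault-init actually calls the Cloud KMS API, and nothing here is its real implementation:

```shell
# "local-demo-key" stands in for the KMS crypto key; the printf string
# stands in for the initial root token.
KEY="local-demo-key"
printf 's.EXAMPLE-ROOT-TOKEN' \
  | openssl enc -aes-256-cbc -pbkdf2 -pass "pass:${KEY}" -base64 > root-token.enc

# The bucket would hold only ciphertext; recovering the token requires the key:
openssl enc -d -aes-256-cbc -pbkdf2 -pass "pass:${KEY}" -base64 < root-token.enc
```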

We need to create the KMS key that the vault-init service will use to encrypt/decrypt these secret values. It is important to note that Google Cloud KMS is only used by the vault-init service, not Vault itself.

First, enable the Google Cloud KMS API:

$ gcloud services enable \
    cloudapis.googleapis.com \
    cloudkms.googleapis.com \
    cloudresourcemanager.googleapis.com \
    cloudshell.googleapis.com \
    container.googleapis.com \
    containerregistry.googleapis.com

Next, create a crypto key ring for Vault and a crypto key for the vault-init service. In a later section, we will create IAM permissions which allow encryption and decryption from this crypto key.

$ gcloud kms keyrings create vault \
    --location us-east1

$ gcloud kms keys create vault-init \
    --location us-east1 \
    --keyring vault \
    --purpose encryption

Both Vault and the vault-init service need the ability to communicate to Google Cloud Platform APIs. Following the principle of least privilege, we want to give these services the most minimal amount of permissions possible. Similarly, we want the ability to change or revoke the permissions in the future without a full re-deploy of the service. This is where Google Cloud IAM and service accounts are useful.

A service account is a special type of Google account that belongs to your application or a virtual machine, instead of to an individual end user. Your application assumes the identity of the service account to call Google APIs, so that the users aren't directly involved. A service account has zero or more service account keys, which are used to authenticate to Google.

In this case, our service account needs the following permissions:

  - Read, write, and delete objects in the Cloud Storage bucket
  - Encrypt and decrypt data using the vault-init crypto key

In Google IAM, those permissions translate to:

  - roles/storage.objectAdmin and roles/storage.legacyBucketReader on the bucket
  - roles/cloudkms.cryptoKeyEncrypterDecrypter on the crypto key

First, let's create the service account. It's important to note that, even after we create the service account, it has no permissions.

$ export SERVICE_ACCOUNT="vault-server@${GOOGLE_CLOUD_PROJECT}.iam.gserviceaccount.com"
$ gcloud iam service-accounts create vault-server \
    --display-name "vault service account"

Next, grant the service account full access to all objects in the storage bucket:

$ gsutil iam ch \
    "serviceAccount:${SERVICE_ACCOUNT}:objectAdmin" \
    "serviceAccount:${SERVICE_ACCOUNT}:legacyBucketReader" \
    "gs://${GOOGLE_CLOUD_PROJECT}-vault-storage"

Lastly, grant the service account the ability to encrypt and decrypt data from the crypto key:

$ gcloud kms keys add-iam-policy-binding vault-init \
    --location us-east1 \
    --keyring vault \
    --member "serviceAccount:${SERVICE_ACCOUNT}" \
    --role roles/cloudkms.cryptoKeyEncrypterDecrypter

Next we need to create the Kubernetes (GKE) cluster which will run Vault. It is recommended that you run Vault in a dedicated namespace or (even better) a dedicated cluster and a dedicated project. Vault will then act as a "service" with an IP/DNS entry that other projects and services query.

To get started, enable the GKE container API on GCP:

$ gcloud services enable container.googleapis.com

Since the process for creating a cluster can take some time, start executing it now, and then continue reading to learn more about what is happening in the background:

$ export SERVICE_ACCOUNT="vault-server@${GOOGLE_CLOUD_PROJECT}.iam.gserviceaccount.com"

$ gcloud container clusters create vault \
    --cluster-version 1.14 \
    --enable-autorepair \
    --enable-autoupgrade \
    --enable-ip-alias \
    --machine-type n1-standard-2 \
    --node-version 1.14 \
    --num-nodes 1 \
    --region us-east1 \
    --scopes cloud-platform \
    --service-account "${SERVICE_ACCOUNT}"

Many of these option values come from the GKE Cluster Hardening guide, and several are actually the defaults.

Next, we need to create an entrypoint for the cluster. In the case of Vault, since it's an HTTP API, it will need to be accessible at an IP or DNS address. For simplicity, we will use an IP address.

Allocate a new public IP address:

$ gcloud compute addresses create vault --region "us-east1"

This is arguably the most complex and nuanced piece of this codelab - generating Vault's certificate authority (CA) and server certificates for TLS. Vault can run without TLS, but this is highly discouraged.

First, some workspace setup:

$ export LB_IP="$(gcloud compute addresses describe vault --region us-east1 --format 'value(address)')"
$ export DIR="$(pwd)/tls"
$ mkdir -p $DIR

Next, create the OpenSSL configuration file:

$ cat > "${DIR}/openssl.cnf" << EOF
[req]
default_bits = 2048
encrypt_key  = no
default_md   = sha256
prompt       = no
utf8         = yes

distinguished_name = req_distinguished_name
req_extensions     = v3_req

[req_distinguished_name]
C  = US
ST = California
L  = The Cloud
O  = Demo
CN = vault

[v3_req]
basicConstraints     = CA:FALSE
subjectKeyIdentifier = hash
keyUsage             = digitalSignature, keyEncipherment
extendedKeyUsage     = clientAuth, serverAuth
subjectAltName       = @alt_names

[alt_names]
IP.1  = ${LB_IP}
DNS.1 = vault.default.svc.cluster.local
EOF

Generate Vault's certificate and certificate signing request (CSR):

$ openssl genrsa -out "${DIR}/vault.key" 2048

$ openssl req \
    -new -key "${DIR}/vault.key" \
    -out "${DIR}/vault.csr" \
    -config "${DIR}/openssl.cnf"

Create a Certificate Authority (CA):

$ openssl req \
    -new \
    -newkey rsa:2048 \
    -days 120 \
    -nodes \
    -x509 \
    -subj "/C=US/ST=California/L=The Cloud/O=Vault CA" \
    -keyout "${DIR}/ca.key" \
    -out "${DIR}/ca.crt"

Sign the CSR with the CA:

$ openssl x509 \
    -req \
    -days 120 \
    -in "${DIR}/vault.csr" \
    -CA "${DIR}/ca.crt" \
    -CAkey "${DIR}/ca.key" \
    -CAcreateserial \
    -extensions v3_req \
    -extfile "${DIR}/openssl.cnf" \
    -out "${DIR}/vault.crt"

Finally, combine the CA and Vault certificate (this is the format Vault expects):

$ cat "${DIR}/vault.crt" "${DIR}/ca.crt" > "${DIR}/vault-combined.crt"
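If you want to double-check the signing flow itself, the sequence above can be replayed end-to-end in a throwaway temp directory and checked with openssl verify. This is a self-contained sanity check, separate from your real tls/ files:

```shell
# Replay the CA + CSR + signing steps in a temp dir and verify the chain.
TMP="$(mktemp -d)"
openssl genrsa -out "${TMP}/vault.key" 2048
openssl req -new -key "${TMP}/vault.key" -out "${TMP}/vault.csr" \
    -subj "/C=US/ST=California/L=The Cloud/O=Demo/CN=vault"
openssl req -new -newkey rsa:2048 -days 120 -nodes -x509 \
    -subj "/C=US/ST=California/L=The Cloud/O=Vault CA" \
    -keyout "${TMP}/ca.key" -out "${TMP}/ca.crt"
openssl x509 -req -days 120 -in "${TMP}/vault.csr" \
    -CA "${TMP}/ca.crt" -CAkey "${TMP}/ca.key" -CAcreateserial \
    -out "${TMP}/vault.crt"

# Prints "<path>/vault.crt: OK" when the certificate chains to the CA.
openssl verify -CAfile "${TMP}/ca.crt" "${TMP}/vault.crt"
```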

At this point, you should have the following files in tls/:

  - ca.crt (the CA certificate)
  - ca.key (the CA private key)
  - ca.srl (the CA serial number file)
  - openssl.cnf (the OpenSSL configuration)
  - vault.key (the Vault private key)
  - vault.csr (the Vault certificate signing request)
  - vault.crt (the signed Vault certificate)
  - vault-combined.crt (the Vault certificate plus the CA certificate)
In the next section, we will use these values as Kubernetes secrets so that Vault can access them when it runs.

Next we create the configmap and secrets to store data for our pods. Vault, at boot, will retrieve this information for its configuration.

The insecure data such as the Google Cloud Storage bucket name and IP address are placed in a Kubernetes configmap:

$ export LB_IP="$(gcloud compute addresses describe vault --region us-east1 --format 'value(address)')"

$ kubectl create configmap vault \
    --from-literal "load_balancer_address=${LB_IP}" \
    --from-literal "gcs_bucket_name=${GOOGLE_CLOUD_PROJECT}-vault-storage" \
    --from-literal "kms_project=${GOOGLE_CLOUD_PROJECT}" \
    --from-literal "kms_region=us-east1" \
    --from-literal "kms_key_ring=vault" \
    --from-literal "kms_crypto_key=vault-init"

The secure data like the TLS certificates are put in a Kubernetes secret:

$ kubectl create secret generic vault-tls \
    --from-file "$(pwd)/tls/ca.crt" \
    --from-file "vault.crt=$(pwd)/tls/vault-combined.crt" \
    --from-file "vault.key=$(pwd)/tls/vault.key"

At this point, we have fulfilled all the prerequisite steps - we are ready to run Vault on Google Kubernetes Engine.

We will deploy Vault as a StatefulSet on Kubernetes. Even though Vault itself is not stateful (remember, we are using Google Cloud Storage for persistent state), Kubernetes stateful sets provide some other benefits for our deployment:

  1. It guarantees exactly one service starts at a time. This is required by the vault-init sidecar service.
  2. It gives us consistent naming for referencing the Vault servers (which is nice for a codelab).

In our deployment, Vault will automatically be initialized and unsealed via the vault-init service.

First, apply the Kubernetes configuration file for Vault, then we'll take a look at what's happening under the hood.

$ kubectl apply -f https://raw.githubusercontent.com/sethvargo/vault-kubernetes-workshop/master/k8s/vault.yaml

There are a few things to note in the specification:

Vault Kubernetes Spec

apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: vault
  labels:
    app: vault

# ...

Verify the pods are running:

$ kubectl get pods

NAME      READY     STATUS    RESTARTS   AGE
vault-0   2/2       Running   0          37s
vault-1   2/2       Running   0          27s
vault-2   2/2       Running   0          15s

Even though Vault is running, it will not be publicly accessible yet.

As mentioned in the previous section, even though Vault is running, it is not available. That's because we have not mapped the public IP address allocated earlier to the cluster. To do this, we need to create a LoadBalancer service in Kubernetes.

To create the service, run the following:

$ export LB_IP="$(gcloud compute addresses describe vault --region us-east1 --format 'value(address)')"

$ kubectl apply -f - <<EOF
apiVersion: v1
kind: Service
metadata:
  name: vault
  labels:
    app: vault
spec:
  type: LoadBalancer
  loadBalancerIP: ${LB_IP}
  externalTrafficPolicy: Local
  selector:
    app: vault
  ports:
  - name: vault-port
    port: 443
    targetPort: 8200
    protocol: TCP
EOF
Verify the service is ready:

$ kubectl get service

NAME         TYPE           CLUSTER-IP   EXTERNAL-IP     PORT(S)         AGE
kubernetes   ClusterIP      10.0.0.1     <none>          443/TCP         1h
vault        LoadBalancer   10.0.0.99    <your LB_IP>    443:30499/TCP   3m

Congratulations! You now have a publicly accessible, highly available Vault cluster on Google Kubernetes Engine!

At this point, you have successfully completed the first part of the exercise - deploying Vault on GKE. The architecture looks like this:

The HashiCorp Vault servers are running in high availability mode on GKE. They are storing their data in Google Cloud Storage, and they are auto-unsealed with keys encrypted by Google Cloud KMS. All the nodes sit behind a load balancer.

Now that the cluster is up-and-running, we can connect to it from our Cloud Shell instance. The Vault CLI can be configured using environment variables to reduce typing on each command. The CLI needs to be configured with:

  - the address of the Vault server
  - the CA certificate used to verify the TLS connection
  - the token used to authenticate requests

These values correspond to the VAULT_ADDR, VAULT_CACERT, and VAULT_TOKEN environment variables, respectively.

Set VAULT_ADDR to the IP of the load balancer. In this configuration, we always talk to the load balancer, never to a Vault server directly:

$ export LB_IP="$(gcloud compute addresses describe vault --region us-east1 --format 'value(address)')"

$ export VAULT_ADDR="https://${LB_IP}:443"

Set VAULT_CACERT to the path to the CA certificate on disk:

$ export VAULT_CACERT="$(pwd)/tls/ca.crt"

Set VAULT_TOKEN to the decrypted root token:

$ export VAULT_TOKEN="$(gsutil cat "gs://${GOOGLE_CLOUD_PROJECT}-vault-storage/root-token.enc" | \
  base64 --decode | \
  gcloud kms decrypt \
    --location us-east1 \
    --keyring vault \
    --key vault-init \
    --ciphertext-file - \
    --plaintext-file -)"

Finally, verify the setup is correct and functional:

$ vault status

Key             Value
---             -----
Seal Type       shamir
Sealed          false
Total Shares    5
Threshold       3
Version         1.1.3
Cluster Name    vault-cluster-63ea8c7f
Cluster ID      06ec3c8b-86a7-d649-1140-feeaa33b6f14
HA Enabled      true
HA Cluster
HA Mode         active

Congratulations! You now have a best-practices Vault cluster running on Google Kubernetes Engine.

Now that Vault is up and running, you can use it to store secret information. For example, you can enable the kv secrets engine:

$ vault secrets enable kv

Then you can read and write data from Vault's generic key-value store:

$ vault kv put kv/myapp/config \
    username="appuser" \
    password="s3cr3t-example"

Then read that data back out:

$ vault kv get kv/myapp/config

The Vault CLI is actually just a very thin HTTP wrapper. The Vault server is an HTTP API, so information is also accessible via anything that can make an HTTP request:

$ curl \
    --silent \
    --fail \
    --cacert "$(pwd)/tls/ca.crt" \
    --header "x-vault-token:${VAULT_TOKEN}" \
    "${VAULT_ADDR}/v1/kv/myapp/config" \
    | jq .

The Key/Value secrets engine is just one of many secrets engines in Vault.

You've successfully stored and retrieved data from Vault!

As mentioned above, generally you want to run Vault in a dedicated Kubernetes cluster or at least a dedicated namespace with tightly controlled RBAC permissions. To follow this best practice, create another Kubernetes cluster which will host our applications.

$ gcloud container clusters create my-apps \
    --cluster-version 1.14 \
    --enable-cloud-logging \
    --enable-cloud-monitoring \
    --enable-ip-alias \
    --no-enable-basic-auth \
    --no-issue-client-certificate \
    --machine-type n1-standard-1 \
    --num-nodes 1 \
    --region us-east1

There are a few things to point out:

There is no requirement that our Vault servers run under Kubernetes (they could be running on dedicated VMs or as a managed service). It is a best practice to treat the Vault server cluster as a "service" through which other applications and services request credentials. As such, moving forward, the Vault cluster will be treated simply as an IP address. We will not leverage Kubernetes for "discovering" the Vault cluster.

To put it another way, completely forget that Vault is running in Kubernetes. If it helps, think that Vault is running in a PaaS instead.

In our cluster, services will authenticate to Vault using the Kubernetes auth method. In this model, services present their JWT token to Vault as part of an authentication request. Vault takes that signed JWT token and, using the token reviewer API, verifies the token is authenticated. If the authentication is successful, Vault generates a token and maps a series of configured policies onto the token which is returned to the caller.
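Under the hood, that authentication request is a single HTTP call. Here is a sketch of the payload a pod would POST to Vault's login endpoint; the JWT value is a placeholder for the token mounted into the pod, and "myapp-role" is a role name we configure later in this lab:

```shell
# Placeholder for the token mounted at
# /var/run/secrets/kubernetes.io/serviceaccount/token inside a pod.
JWT="header.payload.signature"

# Build the JSON body for the Kubernetes auth login endpoint.
PAYLOAD="$(printf '{"jwt": "%s", "role": "myapp-role"}' "${JWT}")"
echo "${PAYLOAD}"

# A pod would then call:
#   curl --request POST --data "${PAYLOAD}" "${VAULT_ADDR}/v1/auth/kubernetes/login"
```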

First, create the Kubernetes service account:

$ kubectl create serviceaccount vault-auth

Next, grant that service account the ability to access the TokenReviewer API via RBAC:

$ kubectl apply -f - <<EOH
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: role-tokenreview-binding
  namespace: default
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:auth-delegator
subjects:
- kind: ServiceAccount
  name: vault-auth
  namespace: default
EOH

At this point, we have two Kubernetes clusters - one for running Vault and one for running our applications and services. However, our applications and services need a way to authenticate to Vault. The easiest way to do this is via the Vault Kubernetes Auth Method.

In this auth method, pods or services present their signed JWT token to Vault. Vault verifies the JWT token using the Token Reviewer API, and, if successful, Vault returns a token to the requestor. This process requires Vault to be able to talk to the Token Reviewer API in our cluster, which is where the service account with RBAC permissions is important from the previous steps. Visually, the process looks like this:

This process is tedious, but it is easily automated. It's important to note that this operation only needs to be done once per cluster, and it's typically done in advance by a security or infrastructure team.

First, collect some metadata as environment variables:

$ export LB_IP="$(gcloud compute addresses describe vault --region us-east1 --format 'value(address)')"
$ export CLUSTER_NAME="gke_${GOOGLE_CLOUD_PROJECT}_us-east1_my-apps"
$ export SECRET_NAME="$(kubectl get serviceaccount vault-auth \
    -o go-template='{{ (index .secrets 0).name }}')"
$ export TR_ACCOUNT_TOKEN="$(kubectl get secret ${SECRET_NAME} \
    -o go-template='{{ .data.token }}' | base64 --decode)"
$ export K8S_HOST="$(kubectl config view --raw \
    -o go-template="{{ range .clusters }}{{ if eq .name \"${CLUSTER_NAME}\" }}{{ index .cluster \"server\" }}{{ end }}{{ end }}")"
$ export K8S_CACERT="$(kubectl config view --raw \
    -o go-template="{{ range .clusters }}{{ if eq .name \"${CLUSTER_NAME}\" }}{{ index .cluster \"certificate-authority-data\" }}{{ end }}{{ end }}" | base64 --decode)"

Next, enable the Kubernetes auth method on Vault:

$ vault auth enable kubernetes

Configure Vault to talk to the my-apps Kubernetes cluster with the service account created earlier.

$ vault write auth/kubernetes/config \
    kubernetes_host="${K8S_HOST}" \
    kubernetes_ca_cert="${K8S_CACERT}" \
    token_reviewer_jwt="${TR_ACCOUNT_TOKEN}"

Create a configmap to store the address of the Vault server. This is how pods and services will talk to Vault. This could also be registered as an external service for service discovery, but that is not covered here.

$ kubectl create configmap vault \
    --from-literal "vault_addr=https://${LB_IP}"

Lastly, create a Kubernetes secret to hold the Certificate Authority. This will be used by all pods and services talking to Vault to verify its TLS connection.

$ kubectl create secret generic vault-tls \
    --from-file "$(pwd)/tls/ca.crt"

Congratulations! Vault is now configured to talk to the my-apps Kubernetes cluster. Apps and services will be able to authenticate using the Vault Kubernetes Auth Method to access Vault secrets.

At this point, our pods and services can authenticate to Vault, but their authentication will not have any authorization. That's because in Vault, everything is deny by default. Even though the pod/service successfully authenticated, we haven't configured Vault to assign policies or permissions to the authentication.

Permissions are assigned based on the JWT token's namespace and service account name. For example, tokens from the "default" service account in the "default" namespace might be granted one set of Vault policies, while tokens from a "web" service account in a "frontend" namespace might be granted another.

In all these examples, the service account name, namespace name, and Vault policies are designed and configured by users. Typically this is done by your security teams and infrastructure teams in collaboration.

First, let's create a Vault policy named myapp-rw that grants read and write permission on the data we put in the generic KV secrets engine earlier:

$ vault policy write myapp-rw - <<EOH
path "kv/myapp/*" {
  capabilities = ["create", "read", "update", "delete", "list"]
}
EOH

When a user is assigned this policy, they will have the ability to perform CRUD operations on our key. As you can see, it's possible to restrict or expand permissions as much as you desire.

It is also possible to create policies that map to non-existent resources. For example, at this moment, there's nothing in Vault at the path database/creds/readonly. You can still create a policy:

$ vault policy write database-ro - <<EOH
path "database/creds/readonly" {
  capabilities = ["read"]
}
EOH

Now that policies exist, we need to map these policies to the Kubernetes authentication we enabled in the previous step.

$ vault write auth/kubernetes/role/myapp-role \
    bound_service_account_names=default \
    bound_service_account_namespaces=default \
    policies=default,myapp-rw,database-ro

This configures a role in Vault named myapp-role which assigns the myapp-rw and database-ro policies to any tokens that match the binding criteria. When any valid JWT token from the "default" service account in the "default" namespace authenticates to Vault, it will be given a Vault token with those policies and permissions attached.

At this point, we are ready to deploy an application that requests data from Vault. This is one of the most common techniques for injecting Vault secrets into an application.

First, apply and inspect the Kubernetes spec for the application:

$ kubectl apply -f https://raw.githubusercontent.com/sethvargo/vault-kubernetes-workshop/master/k8s/kv-sidecar.yaml

KV Sidecar Spec

apiVersion: apps/v1
kind: Deployment
metadata:
  name: kv-sidecar
  labels:
    app: kv-sidecar
spec:
  replicas: 1
  selector:
    matchLabels:
      app: kv-sidecar

# ...

Inspect that the app is running:

$ kubectl get pod

NAME                          READY     STATUS    RESTARTS   AGE
kv-sidecar-5bd77d5b97-clvjg   2/2       Running   0          1m

Finally, show that the container is correctly authenticating and pulling data from Vault by inspecting its logs:

$ kubectl logs -l app=kv-sidecar -c app

Great! You've successfully deployed an application that authenticated and retrieved information from Vault! While all access to that username/password is audited and can be revoked early, it's shared among all instances of this application. What we really want is for Vault to have the ability to generate dynamic credentials, on the fly.

In order to showcase dynamic credentials, create a CloudSQL instance. This process can take a few minutes to complete.

$ gcloud sql instances create my-instance \
    --database-version MYSQL_5_7 \
    --tier db-f1-micro \
    --region us-east1

Once finished, set the root user password. Don't worry - Vault will automatically rotate this password once configured.

$ gcloud sql users set-password root \
    --host % \
    --instance my-instance \
    --password my-password

At this point, we have a managed MySQL database with one user root:my-password. Yes, that is an insecure password. In a future step, we will instruct Vault to rotate that root password such that even we do not know its value!

Now that we have a database, let's configure Vault to dynamically generate users in that database.

First, enable the database secrets engine in Vault:

$ vault secrets enable database

Next, provide Vault with the connection details to talk to CloudSQL (MySQL):

$ export INSTANCE_IP="$(gcloud sql instances describe my-instance --format 'value(ipAddresses[0].ipAddress)')"
$ vault write database/config/my-cloudsql-db \
    plugin_name=mysql-database-plugin \
    connection_url="{{username}}:{{password}}@tcp(${INSTANCE_IP}:3306)/" \
    allowed_roles="readonly" \
    username="root" \
    password="my-password"

Ask Vault to fully manage the root user by rotating the given credentials. After this operation, only Vault will know the root credentials for the database.

$ vault write -f database/rotate-root/my-cloudsql-db

Finally, create a role for Vault to create users. Since Vault doesn't know what permissions or stored procedures you may have, you give Vault the SQL directly to create your user. This gives you full control and allows you to codify (capture as code) existing manual user creation processes.

$ vault write database/roles/readonly \
    db_name=my-cloudsql-db \
    creation_statements="CREATE USER '{{name}}'@'%' IDENTIFIED BY '{{password}}'; GRANT SELECT ON *.* TO '{{name}}'@'%';" \
    default_ttl="1h"
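To make the templating concrete, here is a sketch of the substitution Vault performs internally when issuing a credential; the generated name and password below are made up:

```shell
# The template from creation_statements; Vault fills {{name}} and
# {{password}} with unique generated values per lease.
TEMPLATE="CREATE USER '{{name}}'@'%' IDENTIFIED BY '{{password}}';"
GENERATED_NAME="v-root-readonly-2l18rauYWAOaBqLY"
GENERATED_PASS="A1a-examplepassword"

# Render the SQL that Vault would execute against the database.
echo "${TEMPLATE}" | sed \
    -e "s/{{name}}/${GENERATED_NAME}/g" \
    -e "s/{{password}}/${GENERATED_PASS}/g"
```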

At this point, Vault is configured to generate dynamic users against our MySQL CloudSQL instance. To verify, ask Vault to generate a new credential:

$ vault read database/creds/readonly

Each time you run this command, you'll get back a unique username and password:

$ vault read database/creds/readonly

Verify the users were actually created with gcloud:

$ gcloud sql users list --instance my-instance

NAME                              HOST
mysql.sys                         localhost
root                              %
v-root-readonly-2l18rauYWAOaBqLY  %
v-root-readonly-wNFrmj9LaLUNYM7R  %

Vault is now successfully generating dynamic credentials. After 1 hour has passed, these credentials will be revoked. Optionally, you can revoke them earlier:

$ vault lease revoke -prefix database/creds/readonly

And then verify that the users are gone:

$ gcloud sql users list --instance my-instance

NAME       HOST
mysql.sys  localhost
root       %

We ran these operations as a human. However, applications and services can run these operations too. Recall that all the vault * CLI commands are thin HTTP API wrappers. That means our Kubernetes services can request their own database credentials.

The process for the dynamic app is the same as the process for the static app with one exception: since these credentials can change (they have a lifetime and can be revoked early), our sidecar Consul Template process needs to have the ability to inform the application of changes. One of the most common ways to inform applications of changes is via UNIX signals, but on Kubernetes, pods are all scheduled in isolation... or are they?

To enable our sidecar to signal our main application when a secret changes, we need to leverage a Kubernetes alpha feature – shared process namespaces. When enabled, our Consul Template sidecar and main application will live in the same process namespace, allowing them to signal each other.
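The signaling mechanism itself is plain UNIX: with a shared process namespace, the sidecar can see the app's PID and send it a signal. A minimal local sketch of the pattern, with two processes standing in for the app and the Consul Template sidecar:

```shell
# The "app" (this shell) reloads its configuration on SIGHUP.
trap 'echo "app: reloading configuration"' HUP

# The "sidecar" (background job) rotates the secret, then signals the app.
echo "password-v1" > /tmp/demo-secret
( sleep 1; echo "password-v2" > /tmp/demo-secret; kill -HUP $$ ) &

sleep 2   # the app keeps running while the rotation happens
wait
cat /tmp/demo-secret
```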

First, apply and inspect the Kubernetes spec for the application:

$ kubectl apply -f https://raw.githubusercontent.com/sethvargo/vault-kubernetes-workshop/master/k8s/db-sidecar.yaml

DB Sidecar Spec

apiVersion: v1
kind: Pod
metadata:
  name: db-sidecar
spec:
  shareProcessNamespace: true
# ...

  - name: consul-template
    image: registry.hub.docker.com/sethvargo/consul-template:0.19.5.dev-alpine
    imagePullPolicy: Always
    securityContext:
      capabilities:
        add: ['SYS_PTRACE']

# ...

Inspect that the app is running:

$ kubectl get pod

NAME                          READY     STATUS        RESTARTS   AGE
db-sidecar-6b9f9cd58f-pvcph   2/2       Running       0          11s
kv-sidecar-6b9f9cd58f-9z6tt   2/2       Running       0          1h

Finally, show that the container is correctly authenticating and pulling data from Vault by inspecting its logs:

$ kubectl logs -l app=db-sidecar -c app

Now scale the application to show that each instance will create its own unique credential:

$ kubectl scale deployment db-sidecar --replicas=3
$ gcloud sql users list --instance my-instance

NAME                              HOST
mysql.sys                         localhost
root                              %
v-kubernetes-readonly-4QDPnqj18R  %
v-kubernetes-readonly-b7E1eMWoiS  %
v-kubernetes-readonly-e8EdztdS68  %
v-kubernetes-readonly-fJ0H0BnW4Y  %
v-kubernetes-readonly-fYL2DCVajW  %
v-kubernetes-readonly-vbPpDLKybR  %

Congratulations! You've successfully retrieved dynamic credentials from Vault in Kubernetes.

You learned how to run and connect to HashiCorp Vault on Google Kubernetes Engine.

Clean up

If you are done exploring, please consider deleting your project.

Learn More


This work is licensed under a Creative Commons Attribution 2.0 Generic License.