This codelab is focused on Continuous Delivery (CD) to Kubernetes using Spinnaker. We will trigger pipelines in Spinnaker that deploy a pair of sample services to Kubernetes. Along the way, you'll see how canary deployments work, how code and configuration changes can trigger deployments, and how promotions between environments happen.

Most of the infrastructure and Spinnaker setup has been automated for you, so you can focus on how delivery of your code works, from source to production.

Required APIs

First, make sure that the following APIs are enabled in your project:

  1. Cloud Pub/Sub API
  2. Cloud Build API
  3. Kubernetes Engine API
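
If you prefer the command line, these can also be enabled with a single gcloud command, run from Cloud Shell (which we open next). The service names below are the standard ones for these APIs:

gcloud services enable pubsub.googleapis.com cloudbuild.googleapis.com container.googleapis.com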

Open Cloud Shell

Once you've enabled the above APIs, open Cloud Shell by clicking on the Cloud Shell icon in the top right of the Cloud Console, as shown below.

This provisions a temporary Compute Engine instance with a bash shell in which we will run all subsequent commands.

Deploy Spinnaker

Run the following commands to provision Spinnaker:

gsutil cp gs://gke-spinnaker-codelab/install.tgz . 

tar -xvf install.tgz

./setup.sh

The setup script can take a while to run (between 25 and 30 minutes). Once it completes, it will prompt you to continue.

Connect to Spinnaker

Run the following command:

./connect.sh

This will forward Spinnaker's UI server to port 8080 on the Cloud Shell instance. To connect to this instance, click on "Preview on port 8080" in the top right of the Cloud Shell panel as shown below:

This opens a new tab pointing at the Spinnaker UI.
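
For reference, connect.sh is provided as part of the automated setup; under a typical Halyard-based install, an equivalent manual port-forward might look like the following. The spinnaker namespace and spin-deck service name here are assumptions, not taken from this codelab's scripts:

kubectl port-forward -n spinnaker svc/spin-deck 8080:9000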

Demo Application

As part of the automated setup, we have already deployed a copy of the sample application and configured some Spinnaker pipelines to manage its lifecycle. Let's navigate to this application by clicking on the "Applications" tab:

Next, navigate to the demo application:

The Clusters Tab

The Clusters tab in Spinnaker aggregates information about running Kubernetes workloads in your cluster. The Clusters tab should already be selected after navigating to the demo application.

Notice we have two services running: frontend and backend. These have been deployed to two environments: production and staging. Try clicking on one of the green pods as shown here:

You should see a details panel that gives high-level information about the pod's status, as well as an "Actions" dropdown and a link to the logs.
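
If you'd like to cross-check what Spinnaker shows against the cluster itself, you can list the same pods from Cloud Shell (this assumes the two environments map to namespaces named staging and production):

kubectl get pods -n staging

kubectl get pods -n production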

The "Pipelines" Tab

While the Clusters tab gives you ad-hoc actions to perform, and information about the state of your running applications, the Pipelines tab lets you configure repeatable, automated processes to update your running code.

Select the Pipelines tab as shown below:

We have configured three pipelines to run in order:

  1. Deploy to Staging: This deploys your Kubernetes resources and built Docker images to the staging environment, and runs an integration test against the running backend service when it is ready to receive traffic.
  2. Deploy Simple Canary to Production: This takes the Kubernetes resources and Docker images from staging and deploys them to receive a small fraction of traffic in the production environment.
  3. Promote Canary to Production: This gives you a chance to validate your canary in production and, if you want, promote it to receive all production traffic. Once this completes, the canary is deleted.

To get a sense of what these pipelines do before we automatically trigger them, select the Configure dropdown, and pick the Deploy to Staging pipeline:

You should see the pipeline overview, and a few configurable stages:

Application Structure

The sample application has a frontend and a backend. They communicate like this:

Every time a user makes a request, the frontend serves some static content along with some information about the backend that served the request. Both the frontend and backend are managed by Deployments, have multiple replicas (Pods), and are fronted by a load balancer (Service). We'll use this to demonstrate how to update these two services independently using shared pipelines in Spinnaker.
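
As a rough sketch of that structure, you can inspect the Deployments and their load-balanced Services from Cloud Shell (the production namespace name is assumed):

kubectl get deployments,services -n production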

Return to the Cloud Shell and Examine the Frontend Service

Return to the tab with the running Cloud Shell, open the ~/services/frontend/ folder, and inspect its contents:

cd ~/services/frontend/

ls

Feel free to open the various files and folders to get a sense of what this service does.

Because this service is already deployed in your cluster, we can take a look at what it is currently configured to serve. Grab the ingress IP address using:

./get-ingress.sh

This prints out an IP address in the form NNN.NNN.NNN.NNN. Copy the address, open a new tab in your browser, and navigate to it. You should see something similar to:
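
The get-ingress.sh script is provided for you; a roughly equivalent way to read a Service's external IP with kubectl would be the following (the frontend service name and production namespace are assumptions):

kubectl get svc frontend -n production -o jsonpath='{.status.loadBalancer.ingress[0].ip}'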

Update and Rebuild the Frontend

In the ~/services/frontend/ folder, edit content/index.html. We recommend changing style="background-color:blue" to style="background-color:white" to make the update easy to see, as in the example below:

<!DOCTYPE html>
<html>
  <body style="background-color:white">
    <h2>Hello, world!</h2>
    <p>Message from the backend:</p>
    <p>{{.Message}}</p>
    <p>{{.Feature}}</p>
  </body>
</html>

Save your change, then submit a Cloud Build for the frontend service:

cd ~/services/frontend/

./build.sh

This will run for a few minutes in Cloud Build and push an updated image to your project's Container Registry. This event will kick off the Deploy to Staging pipeline.
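
While build.sh is provided for you, a build like this is typically a single gcloud command; a minimal sketch, assuming an image named frontend (the image name is not taken from the script):

gcloud builds submit --tag gcr.io/$GOOGLE_CLOUD_PROJECT/frontend .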

Monitor the "Deploy to Staging" Pipeline

When the build you kicked off completes, the Deploy to Staging pipeline automatically starts running. Return to the Spinnaker window and open the Pipelines tab as before:

Once the last pipeline stage turns orange, click on the "Person" icon and "Continue" to approve and complete the pipeline. If desired, you can follow the custom instructions shown on the stage.

Notice that on the Clusters tab, the frontend service in staging has been updated and now points to the digest of the image you built above.
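
You can confirm the same thing from Cloud Shell by reading the image reference off the staging Deployment (the deployment and namespace names are assumptions):

kubectl get deployment frontend -n staging -o jsonpath='{.spec.template.spec.containers[0].image}'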

Monitor "Deploy Simple Canary to Production" Pipeline

Once you approve and the Deploy to Staging pipeline completes, the Deploy Simple Canary to Production pipeline automatically deploys a canary to production. You can quickly check that it was deployed on the Clusters tab as shown here:

At this point, let's see if the canary is in effect. Use the ~/services/frontend/get-ingress.sh script to get the IP address of your frontend production service, and open it in a new tab. If you refresh the page frequently, the color should alternate between white and blue:

Canary

Baseline
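
You can also watch the traffic split from Cloud Shell with a quick loop, substituting the production frontend's IP address for NNN.NNN.NNN.NNN:

for i in $(seq 1 20); do curl -s http://NNN.NNN.NNN.NNN/ | grep background-color; done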

At the end of this pipeline, you will be asked to promote the canary. If all seems well, accept the Manual Judgment as you did in the prior pipeline.

Monitor "Promote Canary to Production" Pipeline

After the Deploy Simple Canary to Production pipeline completes, the Promote Canary to Production pipeline promotes the canary to receive all production traffic.

Your code from staging is now running in production, and the canaries are gone, as we can see back on the Clusters tab.

Use the same ~/services/frontend/get-ingress.sh script from above to get the IP address of your frontend production service, and verify in a new tab that the background is now always white.

Edit the Frontend ConfigMap

Just as we made a code change to build a new Docker image for the frontend service, we will now update the ConfigMap that the frontend service loads to enable a feature on our application's landing page.

Open the ~/services/manifests/ directory and edit the frontend.yml file. In the kind: ConfigMap block, add a FEATURE: "A new feature!" entry under data:, as shown here:

# ... other manifests
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: frontend-config
  namespace: '${ namespace }'
data:
  BACKEND_ENDPOINT: 'http://backend.${ namespace }'
  FEATURE: "A new feature!"
---
# other manifests ... 

Save the file, then run the ./update-frontend.sh script in the ~/services/manifests/ directory. Uploading the updated manifest automatically pushes a message to Spinnaker via Pub/Sub.
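
The exact contents of update-frontend.sh are part of the provided setup, but conceptually it copies the edited manifest to a storage location whose notifications reach Spinnaker over Pub/Sub. A purely illustrative sketch (the bucket name here is hypothetical):

gsutil cp frontend.yml gs://${GOOGLE_CLOUD_PROJECT}-spinnaker-manifests/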

Check the "Deploy to Staging" Pipeline

Back on the Pipelines tab, the Deploy to Staging pipeline should be running. Click on the first green bar (it might be blue if the pipeline is still running) to see details of how the frontend service was deployed:

Select the Deploy Status tab in the details panel as shown:

In particular, notice that this deployment has assigned a version to the ConfigMap that we edited, giving it the name frontend-config-v001. In the prior deployment, we deployed frontend-config-v000. (You can verify this by selecting the prior pipeline execution.) This was done automatically by Spinnaker to ensure that only this deployment of your frontend is affected by this config change.
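
You can see both versions of the ConfigMap side by side from Cloud Shell (the staging namespace name is assumed):

kubectl get configmaps -n staging | grep frontend-config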

Push the Change to Production

Just as when we updated the frontend Docker image, allow the three pipelines (Deploy to Staging, Deploy Simple Canary to Production, and Promote Canary to Production) to finish running, accepting the Manual Judgments along the way.

When completed, the frontend cluster should look something like this:

We can use the same ~/services/frontend/get-ingress.sh script to get the endpoint where our feature flag update will be visible.

What if something goes wrong?

By now you've noticed that it takes roughly 10 minutes to get a code change from source to production. If we were running more thorough integration tests, if our services took longer to start, or if the canary needed to run for longer, it would take more time still. In a worst-case scenario, we'd want an escape hatch to roll back as quickly as possible.

Let's create a pipeline that codifies the rollback policy and allows you to roll back production at the click of a button.

Create a new pipeline

On the Pipelines tab, select "Create"

Call the pipeline Rollback Production, and hit "Create"

Create a "Undo Rollout (Manifest)" stage

Add a stage by clicking "Add Stage"

Select "Undo Rollout (Manifest)" as the Type.

Fill in the following fields under "Undo rollout (Manifest) Configuration" as shown:

Stage Name: Rollback the Frontend
Account: my-kubernetes-account
Namespace: production
Kind: deployment
Name: frontend-primary

Under "Execution options", select "halt this branch and fail the pipeline once other branches complete".

Important: Hit Save Pipeline in the bottom-right corner of your screen.
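
For context, the Undo Rollout (Manifest) stage rolls the named workload back to its previous revision, much like a manual rollback from Cloud Shell would (the deployment and namespace are taken from the fields above):

kubectl rollout undo deployment/frontend-primary -n production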

Copy the existing stage

Now that we've configured the rollback policy for the frontend, let's do the same for the backend using a shortcut. Select "Copy an Existing Stage":

Search for "Rollback the Frontend", and select the stage we just configured:

Edit the name to "Rollback the Backend", and delete the dependency on "Rollback the Frontend" using the trash can icon next to the "Depends On" field:

When completed, the execution graph should look like this:

Finally, change the Name field to backend-primary:

Important: Hit Save Pipeline in the bottom-right corner of your screen.

Rollback Production

Run your pipeline manually using the "Start manual execution" button in the Pipelines tab

Delete all resources created by this codelab

Run the ~/cleanup.sh script in the home directory to delete the resources created by this codelab.