This lab shows you how to set up a continuous delivery pipeline for Google Kubernetes Engine (GKE) using Google Cloud Build. The sections below walk you through provisioning a cluster, configuring build triggers, and promoting a change from a development branch to canary and then to production.

Step 1

Activate Google Cloud Shell

From the GCP Console click the Cloud Shell icon on the top right toolbar:

Then click "Start Cloud Shell":

It should only take a few moments to provision and connect to the environment:

This virtual machine is loaded with all the development tools you'll need. It offers a persistent 5GB home directory and runs on Google Cloud, greatly enhancing network performance and authentication. Much, if not all, of your work in this lab can be done with just a browser or a Chromebook.

Once connected to the cloud shell, you should see that you are already authenticated and that the project is already set to your PROJECT_ID.

Run the following command in the cloud shell to confirm that you are authenticated:

gcloud auth list

Command output

Credentialed accounts:
 - <myaccount>@<mydomain>.com (active)

Run the following command to confirm that gcloud knows about your project:

gcloud config list project

Command output

[core]
project = <PROJECT_ID>

If the project is not set correctly, you can set it with this command:

gcloud config set project <PROJECT_ID>

Command output

Updated property [core/project].

Step 2

Set up some variables

export PROJECT=$(gcloud info --format='value(config.project)')
export ZONE=us-central1-b
export CLUSTER=gke-deploy-cluster

Store values in gcloud config

gcloud config set project $PROJECT
gcloud config set compute/zone $ZONE

Run the following commands to confirm the project and zone you just set. When you create resources with gcloud, this is the project and zone they are created in.

gcloud config list project
gcloud config list compute/zone

Step 3

Make sure the following APIs are enabled by running these commands in Cloud Shell:

gcloud services enable container.googleapis.com --async
gcloud services enable containerregistry.googleapis.com --async
gcloud services enable cloudbuild.googleapis.com --async
gcloud services enable sourcerepo.googleapis.com --async
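Because --async returns as soon as each request is accepted, the services may still be turning on in the background. If you want to confirm they are all enabled before continuing, a quick grep of the enabled-services list is one way to check:

# Optional check: all four APIs should appear in the enabled services list.
gcloud services list --enabled | grep -E 'container|cloudbuild|sourcerepo'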

Step 4

Run the following command to get the sample code.

git clone https://github.com/GoogleCloudPlatform/container-builder-workshop.git

Step 5

Create your Kubernetes Engine cluster. The extra scopes let the cluster's nodes access Cloud Source Repositories and read and write Cloud Storage, where your container images are stored.

cd container-builder-workshop
gcloud container clusters create ${CLUSTER} \
    --project=${PROJECT} \
    --zone=${ZONE} \
    --scopes "https://www.googleapis.com/auth/projecthosting,storage-rw"

Creating the cluster may take a few minutes.
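Before moving on, you may want to confirm that the cluster is up and that kubectl can reach it (cluster creation also fetches credentials for you). A quick check along these lines should be enough:

# Optional check: the cluster should show RUNNING and the nodes should be Ready.
gcloud container clusters list
kubectl get nodes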

Step 6

Grant the Cloud Build service account rights to deploy to your cluster

export PROJECT_NUMBER="$(gcloud projects describe \
    $(gcloud config get-value core/project -q) --format='get(projectNumber)')"

gcloud projects add-iam-policy-binding ${PROJECT} \
    --member=serviceAccount:${PROJECT_NUMBER}@cloudbuild.gserviceaccount.com \
    --role=roles/container.developer
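To confirm the binding took effect, you can inspect the project's IAM policy for the Cloud Build service account. The filter below is just one way to narrow the output:

# Optional check: list the roles granted to the Cloud Build service account.
gcloud projects get-iam-policy ${PROJECT} \
    --flatten="bindings[].members" \
    --filter="bindings.members:${PROJECT_NUMBER}@cloudbuild.gserviceaccount.com" \
    --format="value(bindings.role)"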

You'll deploy the sample application, gceme, in your continuous deployment pipeline. The application is written in Go and is located in the root directory of the repository. When you run the gceme binary on a Compute Engine instance, the app displays the instance's metadata in an info card.

The application mimics a microservice by supporting two operation modes: in backend mode it serves the instance metadata as JSON, and in frontend mode it queries the backend service and renders the resulting data in the user interface.

You will deploy the application into two different environments:

Production: the live environment that serves your user traffic.

Canary: a smaller-capacity environment that receives only a portion of your user traffic (about 20%, given the replica counts below). Use it to validate a new release with live traffic before rolling it out to all users.

Step 1

Create the Kubernetes namespace to logically isolate the deployment.

kubectl create ns production

Step 2

Create the production and canary deployments and services using the kubectl apply commands.

kubectl apply -f kubernetes/deployments/prod -n production
kubectl apply -f kubernetes/deployments/canary -n production
kubectl apply -f kubernetes/services -n production
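To see what was just created, you can list the deployments and services in the production namespace:

# Optional check: both production and canary deployments plus the services should be listed.
kubectl get deployments,services -n production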

Step 3

Scale up the production environment frontends. By default, only one replica of the frontend is deployed. Use the kubectl scale command to ensure that you have at least 4 replicas running at all times.

kubectl scale deployment gceme-frontend-production -n production --replicas 4

Step 4

Confirm that you have 5 pods running for the frontend: 4 for production traffic and 1 for canary releases. This means that changes to your canary release will only affect 1 out of 5 (20%) of users. You should also have 2 pods for the backend: 1 for production and 1 for canary.

kubectl get pods -n production -l app=gceme -l role=frontend
kubectl get pods -n production -l app=gceme -l role=backend

Step 5

Retrieve the external IP for the production services.

kubectl get service gceme-frontend -n production
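If the EXTERNAL-IP column shows <pending>, the load balancer is still being provisioned; one option is to watch the service until the IP appears:

# Optional: watch the service until an external IP is assigned (Ctrl-C to stop).
kubectl get service gceme-frontend -n production -w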

Step 6

Store the frontend service load balancer IP in an environment variable for use later.

export FRONTEND_SERVICE_IP=$(kubectl get -o jsonpath="{.status.loadBalancer.ingress[0].ip}"  --namespace=production services gceme-frontend)

Step 7

Confirm that both services are working by opening the frontend external IP address in your browser.

Step 8

Check the version output of the service by hitting the /version path. It should read 1.0.0.

curl http://$FRONTEND_SERVICE_IP/version

In this section, you create a copy of the gceme sample app and push it to Cloud Source Repositories.

Step 1

Create a repository named default in Cloud Source Repositories, then initialize the working directory as a Git repository and add the new repository as a remote.

gcloud alpha source repos create default
git init
git config credential.helper gcloud.sh
git remote add gcp https://source.developers.google.com/p/$PROJECT/r/default
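If you want to double-check the remote before pushing, listing the configured remotes is a quick sanity check:

# Optional: confirm the gcp remote points at your Cloud Source Repository.
git remote -v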

Step 2

Set the username and email address for your Git commits. Replace [EMAIL_ADDRESS] with your Git email address. Replace [USERNAME] with your Git username.

git config --global user.email "[EMAIL_ADDRESS]"
git config --global user.name "[USERNAME]"

Step 3

Add, commit, and push the files.

git add .
git commit -m "Initial commit"
git push gcp master

Step 1

Set up a build trigger to watch for changes to any branch except master.

Branches

cat <<EOF > branch-build-trigger.json
{
  "triggerTemplate": {
    "projectId": "${PROJECT}",
    "repoName": "default",
    "branchName": "[^(?!.*master)].*"
  },
  "description": "branch",
  "substitutions": {
    "_CLOUDSDK_COMPUTE_ZONE": "${ZONE}",
    "_CLOUDSDK_CONTAINER_CLUSTER": "${CLUSTER}"
  },
  "filename": "builder/cloudbuild-dev.yaml"
}
EOF

curl -X POST \
    https://cloudbuild.googleapis.com/v1/projects/${PROJECT}/triggers \
    -H "Content-Type: application/json" \
    -H "Authorization: Bearer $(gcloud config config-helper --format='value(credential.access_token)')" \
    --data-binary @branch-build-trigger.json

Step 2

Set up a build trigger to watch for changes to only the master branch.

Master

cat <<EOF > master-build-trigger.json
{
  "triggerTemplate": {
    "projectId": "${PROJECT}",
    "repoName": "default",
    "branchName": "master"
  },
  "description": "master",
  "substitutions": {
    "_CLOUDSDK_COMPUTE_ZONE": "${ZONE}",
    "_CLOUDSDK_CONTAINER_CLUSTER": "${CLUSTER}"
  },
  "filename": "builder/cloudbuild-canary.yaml"
}
EOF


curl -X POST \
    https://cloudbuild.googleapis.com/v1/projects/${PROJECT}/triggers \
    -H "Content-Type: application/json" \
    -H "Authorization: Bearer $(gcloud config config-helper --format='value(credential.access_token)')" \
    --data-binary @master-build-trigger.json

Step 3

Set up a build trigger to execute when a tag is pushed to the repository.

Tags

cat <<EOF > tag-build-trigger.json
{
  "triggerTemplate": {
    "projectId": "${PROJECT}",
    "repoName": "default",
    "tagName": ".*"
  },
  "description": "tag",
  "substitutions": {
    "_CLOUDSDK_COMPUTE_ZONE": "${ZONE}",
    "_CLOUDSDK_CONTAINER_CLUSTER": "${CLUSTER}"
  },
  "filename": "builder/cloudbuild-prod.yaml"
}
EOF


curl -X POST \
    https://cloudbuild.googleapis.com/v1/projects/${PROJECT}/triggers \
    -H "Content-Type: application/json" \
    -H "Authorization: Bearer $(gcloud config config-helper --format='value(credential.access_token)')" \
    --data-binary @tag-build-trigger.json

Review the triggers you have set up on the Build Triggers page.
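If you prefer to verify from Cloud Shell instead, you can list the triggers with a GET against the same API endpoint you used to create them:

# Optional: list the three triggers you just created.
curl -H "Authorization: Bearer $(gcloud config config-helper --format='value(credential.access_token)')" \
    https://cloudbuild.googleapis.com/v1/projects/${PROJECT}/triggers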

Development branches are a set of environments your developers use to test their code changes before submitting them for integration into the live site. These environments are scaled-down versions of your application, but need to be deployed using the same mechanisms as the live environment.

Create a development branch

To create a development environment from a feature branch, push the branch to the Git server and let Cloud Build deploy your environment.

Create a development branch; you'll push it to the Git server after making your changes below.

git checkout -b new-feature

Modify the site

To demonstrate changing the application, you will change the gceme cards from blue to orange.

Step 1

Open html.go and replace the two instances of blue with orange.

Step 2

Open main.go and change the version number from 1.0.0 to 2.0.0. After your edit, the version line should read:

const version string = "2.0.0"
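If you prefer to make both edits from the Cloud Shell command line instead of an editor, in-place sed edits along these lines should work, assuming the literal strings blue and 1.0.0 appear only where you intend to change them:

# Optional shortcut: apply both edits with sed, then review the diff.
sed -i 's/blue/orange/g' html.go
sed -i 's/1\.0\.0/2.0.0/' main.go
git diff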

Kick off deployment

Step 1

Commit and push your changes. This will kick off a build of your development environment.

git add html.go main.go
git commit -m "Version 2.0.0"
git push gcp new-feature

Step 2

After the change is pushed to the Git repository, navigate to the Build History page, where you can see that a build has started for the new-feature branch.

Click into the build to review the details of the job
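If you prefer the command line to the console, recent builds and their status can also be listed from Cloud Shell (depending on your gcloud version, the command group may be gcloud builds or gcloud container builds):

# Optional: show the most recent builds and their status.
gcloud builds list --limit=5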

Step 3

Once the build completes, verify that your application is accessible. It should respond with 2.0.0, the version that is now running.

Retrieve the external IP for the development environment's frontend service.

kubectl get service gceme-frontend -n new-feature

export FRONTEND_SERVICE_IP=$(kubectl get -o jsonpath="{.status.loadBalancer.ingress[0].ip}" --namespace=new-feature services gceme-frontend)

curl http://$FRONTEND_SERVICE_IP/version
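If the export above came back empty and the curl failed, the development environment's load balancer is probably still being provisioned. A small wait loop like this sketch keeps retrying until the IP is assigned:

# Re-query the service until the external IP is populated.
until [ -n "$FRONTEND_SERVICE_IP" ]; do
  sleep 5
  export FRONTEND_SERVICE_IP=$(kubectl get -o jsonpath="{.status.loadBalancer.ingress[0].ip}" --namespace=new-feature services gceme-frontend)
done
echo "Frontend IP: $FRONTEND_SERVICE_IP"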

Now that you have verified that your app is running your latest code in the development environment, deploy that code to the canary environment.

Step 1

Merge the new-feature branch into master and push it to the Git server. The master trigger then builds and deploys the canary environment.

git checkout master
git merge new-feature
git push gcp master

Again, after you've pushed to the Git repository, navigate to the Build History page, where you can see that a build has started for the master branch.

Click into the build to review the details of the job

Step 2

Once complete, you can check the service URL to ensure that some of the traffic is being served by your new version. You should see about 1 in 5 requests returning version 2.0.0.

export FRONTEND_SERVICE_IP=$(kubectl get -o jsonpath="{.status.loadBalancer.ingress[0].ip}" --namespace=production services gceme-frontend)
while true; do curl http://$FRONTEND_SERVICE_IP/version; sleep 1;  done

You can stop this command by pressing Ctrl-C.
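If you'd rather quantify the split than eyeball it, sampling a fixed number of requests and counting the versions works too (this assumes the /version endpoint returns just the version string):

# Sample 50 requests and count how many hit each version; expect roughly a 4:1 split.
for i in $(seq 1 50); do curl -s http://$FRONTEND_SERVICE_IP/version; echo; done | sort | uniq -c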

Now that your canary release was successful and you haven't heard any customer complaints, you can deploy to the rest of your production fleet.

Step 1

Tag the release as v2.0.0 and push the tag to the Git server. The tag trigger then builds and deploys the new version to the rest of production.

git tag v2.0.0
git push gcp v2.0.0

Review the job on the Build History page, where you can see that a build has started for the v2.0.0 tag.

Click into the build to review the details of the job

Step 2

Once complete, you can check the service URL to ensure that all of the traffic is being served by your new version, 2.0.0. You can also navigate to the site using your browser to see your orange cards.

export FRONTEND_SERVICE_IP=$(kubectl get -o jsonpath="{.status.loadBalancer.ingress[0].ip}" --namespace=production services gceme-frontend)
while true; do curl http://$FRONTEND_SERVICE_IP/version; sleep 1;  done

You can stop this command by pressing Ctrl-C.