This lab shows you how to set up a CI/CD (continuous integration and continuous deployment) pipeline for GKE.

Step 1

Activate Google Cloud Shell

From the GCP Console click the Cloud Shell icon on the top right toolbar:

Then click "Start Cloud Shell":

It should only take a few moments to provision and connect to the environment:

This virtual machine is loaded with all the development tools you'll need. It offers a persistent 5GB home directory, and runs on the Google Cloud, greatly enhancing network performance and authentication. Much, if not all, of your work in this lab can be done with simply a browser or your Google Chromebook.

Once connected to the cloud shell, you should see that you are already authenticated and that the project is already set to your PROJECT_ID.

Run the following command in the cloud shell to confirm that you are authenticated:

gcloud auth list

Command output

Credentialed accounts:
 - <myaccount>@<mydomain>.com (active)
Run the following command to confirm that the project is set to your PROJECT_ID:

gcloud config list project

Command output

[core]
project = <PROJECT_ID>

If the project is not set correctly, you can set it with this command:

gcloud config set project <PROJECT_ID>

Command output

Updated property [core/project].

Step 2

Set up some variables

export PROJECT=$(gcloud info --format='value(config.project)')
export ZONE=europe-west1-b
export CLUSTER=gke-deploy-cluster

Store values in gcloud config

gcloud config set project $PROJECT
gcloud config set compute/zone $ZONE

Run the following commands to see your configured project and zone. When you create resources using gcloud, this is the project and zone where they are created.

gcloud config list project
gcloud config list compute/zone

Step 3

Make sure the following APIs are enabled in your project: Kubernetes Engine, Container Registry, Cloud Build, and Cloud Source Repositories.

Run this command to enable them. It may take 1-2 minutes on a new project.

gcloud services enable container.googleapis.com \
  containerregistry.googleapis.com \
  cloudbuild.googleapis.com \
  sourcerepo.googleapis.com
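
To confirm the services are enabled, you can list them and filter for the ones above (an optional check):

gcloud services list --enabled | grep -E 'container|cloudbuild|sourcerepo'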

Step 4

Run the following command to get the sample code.

git clone https://github.com/GoogleCloudPlatform/container-builder-workshop.git

Step 5

Create your Kubernetes cluster (3 nodes by default).

cd container-builder-workshop
gcloud container clusters create ${CLUSTER} \
    --project=${PROJECT} \
    --zone=${ZONE} \
    --scopes "https://www.googleapis.com/auth/projecthosting,storage-rw"

Creating the cluster may take a few minutes.
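
When it completes, you can confirm that the cluster is up and that kubectl is already pointed at it (gcloud container clusters create fetches kubectl credentials automatically):

gcloud container clusters list
kubectl get nodes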

Step 6

Give Cloud Build rights to your cluster. The following commands grant the role container.developer to the Cloud Build service account, giving Cloud Build access to Kubernetes API objects inside the GKE clusters in this project.

export PROJECT_NUMBER="$(gcloud projects describe \
    $(gcloud config get-value core/project -q) --format='get(projectNumber)')"

gcloud projects add-iam-policy-binding ${PROJECT} \
    --member=serviceAccount:${PROJECT_NUMBER}@cloudbuild.gserviceaccount.com \
    --role=roles/container.developer
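
To verify the binding, you can filter the project's IAM policy for the role (an optional check):

gcloud projects get-iam-policy ${PROJECT} \
    --flatten="bindings[].members" \
    --filter="bindings.role:roles/container.developer" \
    --format="value(bindings.members)"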

You'll deploy the sample application, gceme, in your continuous deployment pipeline. The application is written in Go and is located in the root directory of the repository.

The app displays the Compute Engine instance metadata of the node it runs on in an info card:

gceme info card

The application mimics a microservice by supporting two operation modes: in backend mode it serves the instance metadata as JSON, and in frontend mode it queries a backend gceme instance and renders the result in the info card.
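
As an illustration, the two modes can be invoked roughly like this (the flag names here are assumptions based on the sample Kubernetes manifests under kubernetes/deployments; check those files for the exact invocation):

# Backend mode: serve the instance metadata as JSON on port 8080
# (assumed flags; see kubernetes/deployments for the real args).
./gceme -port=8080

# Frontend mode: query the backend above and render the info card on port 80.
./gceme -frontend=true -backend-service=http://localhost:8080 -port=80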

You will first deploy the application into two different environments, production and canary:

Step 1

Create the Kubernetes namespace to logically isolate the deployment.

kubectl create ns production

Step 2

Create the production and canary deployments and services using the kubectl apply commands.

kubectl apply -f kubernetes/deployments/prod -n production
kubectl apply -f kubernetes/deployments/canary -n production
kubectl apply -f kubernetes/services -n production

Step 3

Scale up the production environment frontends. By default, only one replica of the frontend is deployed. Use the kubectl scale command to ensure that you have at least 4 replicas running at all times.

kubectl scale deployment gceme-frontend-production -n production --replicas 4

Step 4

Confirm that you have 5 pods running for the frontend: 4 for production traffic and 1 for canary releases. This means that changes to your canary release will only affect 1 out of 5 users (20%). Run this command:

kubectl get pods -n production -l app=gceme -l role=frontend

The result should be like this (it may take a few seconds for all of them to have Status=Running):

You should also have 2 pods for the backend: 1 for production and 1 for canary. Run this command:

kubectl get pods -n production -l app=gceme -l role=backend

The result should be like this:

Step 5

Retrieve the external IP for the production services.

kubectl get service gceme-frontend -n production

The result should be like this:

Step 6

Store the frontend service load balancer IP in an environment variable for use later.

export FRONTEND_SERVICE_IP=$(kubectl get -o jsonpath="{.status.loadBalancer.ingress[0].ip}"  --namespace=production services gceme-frontend)
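
Confirm the variable is populated; it should print the same external IP you retrieved above:

echo $FRONTEND_SERVICE_IP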

Step 7

Confirm that both services are working by opening the frontend external IP address in your browser.

Step 8

Check the version output of the service by hitting the /version path with curl. It should return 1.0.0.

curl http://$FRONTEND_SERVICE_IP/version

You can also reach the service in your browser. The URL is in this format:

echo http://$FRONTEND_SERVICE_IP/

Navigate to the URL in your browser:

The goal of this section is to push the code to Google Cloud Source Repository.

Step 1

Create a copy of the gceme sample app and push it to Cloud Source Repositories. Make sure you are still in the directory you cloned earlier.

cd ~/container-builder-workshop

Step 2

Create a Cloud Source Repository named default, initialize the container-builder-workshop directory as a Git repository, and add the new repository as a remote.

gcloud source repos create default
git init
git config credential.helper gcloud.sh
git remote add gcp https://source.developers.google.com/p/$PROJECT/r/default

Step 3

Set the username and email address for your Git commits. Replace [EMAIL_ADDRESS] with your Git email address and [USERNAME] with your Git username, or run the following commands to derive the values from your project credentials.

GIT_EMAIL_ADDRESS=$(gcloud auth list --format='value(account)') && echo $GIT_EMAIL_ADDRESS
git config --global user.email "$GIT_EMAIL_ADDRESS"
GIT_USERNAME=$(whoami) && echo $GIT_USERNAME
git config --global user.name "$GIT_USERNAME"

Step 4

Add, commit, and push the files to the Cloud Source Repository in your project.

git add .
git commit -m "Initial commit"
git push gcp master

Cloud Build allows you to create pipelines as part of your build steps to automate deployments. You can start with a trigger from the Source Repository, then define the steps you want to perform as part of build, test, publish, and deploy.
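
The pipeline steps themselves live in YAML files checked into the repository (builder/cloudbuild-dev.yaml, builder/cloudbuild-canary.yaml, and builder/cloudbuild-prod.yaml, referenced by the triggers below). As a rough sketch of the shape of such a file (illustrative only; these steps are assumptions, and the real files ship with the workshop repo):

steps:
# Build the container image from the repository root.
- name: 'gcr.io/cloud-builders/docker'
  args: ['build', '-t', 'gcr.io/$PROJECT_ID/gceme:$SHORT_SHA', '.']
# Deploy the manifests to the cluster named by the trigger substitutions.
- name: 'gcr.io/cloud-builders/kubectl'
  args: ['apply', '-f', 'kubernetes/deployments/dev']
  env:
  - 'CLOUDSDK_COMPUTE_ZONE=${_CLOUDSDK_COMPUTE_ZONE}'
  - 'CLOUDSDK_CONTAINER_CLUSTER=${_CLOUDSDK_CONTAINER_CLUSTER}'
# Push the built image to Container Registry when the build succeeds.
images:
- 'gcr.io/$PROJECT_ID/gceme:$SHORT_SHA'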

We will be setting up 3 different automated triggers as part of this lab.

Each one of them looks like this:

Trigger 1

Set up a build trigger to watch for changes to any branches except master. The application will be deployed to the new-feature namespace.

Branches

Run the following block of commands:

cat <<EOF > branch-build-trigger.json
{
  "triggerTemplate": {
    "projectId": "${PROJECT}",
    "repoName": "default",
    "branchName": "[^(?!.*master)].*"
  },
  "description": "branch",
  "substitutions": {
    "_CLOUDSDK_COMPUTE_ZONE": "${ZONE}",
    "_CLOUDSDK_CONTAINER_CLUSTER": "${CLUSTER}"
  },
  "filename": "builder/cloudbuild-dev.yaml"
}
EOF

curl -X POST \
    https://cloudbuild.googleapis.com/v1/projects/${PROJECT}/triggers \
    -H "Content-Type: application/json" \
    -H "Authorization: Bearer $(gcloud config config-helper --format='value(credential.access_token)')" \
    --data-binary @branch-build-trigger.json

Trigger 2

Set up a build trigger to watch for changes to only the master branch.

Master

Run the following block of commands:

cat <<EOF > master-build-trigger.json
{
  "triggerTemplate": {
    "projectId": "${PROJECT}",
    "repoName": "default",
    "branchName": "master"
  },
  "description": "master",
  "substitutions": {
    "_CLOUDSDK_COMPUTE_ZONE": "${ZONE}",
    "_CLOUDSDK_CONTAINER_CLUSTER": "${CLUSTER}"
  },
  "filename": "builder/cloudbuild-canary.yaml"
}
EOF


curl -X POST \
    https://cloudbuild.googleapis.com/v1/projects/${PROJECT}/triggers \
    -H "Content-Type: application/json" \
    -H "Authorization: Bearer $(gcloud config config-helper --format='value(credential.access_token)')" \
    --data-binary @master-build-trigger.json

Trigger 3

Set up a build trigger to execute when a tag is pushed to the repository.

Tags

Run the following block of commands:

cat <<EOF > tag-build-trigger.json
{
  "triggerTemplate": {
    "projectId": "${PROJECT}",
    "repoName": "default",
    "tagName": ".*"
  },
  "description": "tag",
  "substitutions": {
    "_CLOUDSDK_COMPUTE_ZONE": "${ZONE}",
    "_CLOUDSDK_CONTAINER_CLUSTER": "${CLUSTER}"
  },
  "filename": "builder/cloudbuild-prod.yaml"
}
EOF


curl -X POST \
    https://cloudbuild.googleapis.com/v1/projects/${PROJECT}/triggers \
    -H "Content-Type: application/json" \
    -H "Authorization: Bearer $(gcloud config config-helper --format='value(credential.access_token)')" \
    --data-binary @tag-build-trigger.json
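
If you prefer the command line, you can confirm all three triggers with the same API's list endpoint:

curl -s https://cloudbuild.googleapis.com/v1/projects/${PROJECT}/triggers \
    -H "Authorization: Bearer $(gcloud config config-helper --format='value(credential.access_token)')"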

Review that the triggers are set up on the Build Triggers page (Tools -> Cloud Build -> Triggers). You should see branch, master, and tag.

Development branches are a set of environments your developers use to test their code changes before submitting them for integration into the live site. These environments are scaled-down versions of your application, but need to be deployed using the same mechanisms as the live environment.

Create a development branch

To create a development environment from a feature branch, you can push the branch to the Git server and let Cloud Build deploy your environment.

Create a development branch and push it to the Git server.

git checkout -b new-feature

Modify the site

To demonstrate changing the application, you will change the gceme cards from blue to orange.

Step 1

In the file html.go, replace the two instances of blue with orange.

Check the values before the change:

grep "card " html.go

Result:

Perform the replacement using the sed command:

sed -i 's/ blue/ orange/' html.go

Verify the values after the change:

grep "card " html.go

Result should show "orange":

Step 2

In the file main.go, change the version number from 1.0.0 to 2.0.0.

sed -i 's/1.0.0/2.0.0/' main.go

Verify the values after the change:

grep "const version string" main.go

Result should show 2.0.0:

Kick off deployment

Step 1

Commit and push your changes. This will kick off a build of your development environment.

git add html.go main.go
git commit -m "Version 2.0.0"
git push gcp new-feature

Step 2

After the change is pushed to the Git repository, navigate to the Build History page (Tools -> Cloud Build -> History), where you can see that a build started for the new-feature branch.

Click into the build to review the details of the job triggered by the push to the new-feature branch.

Our build trigger in this lab publishes the container images to Container Registry. Once the build is complete, you can go to the Container Registry (Tools -> Container Registry) to see the images.

Step 3

Once that completes, verify that your application is accessible. You first have to retrieve this environment's external IP address.

kubectl get service gceme-frontend -n new-feature

Result should be something like this. Wait until you see a value for EXTERNAL-IP:

Then use curl to retrieve the version number.

export FRONTEND_SERVICE_IP=$(kubectl get -o jsonpath="{.status.loadBalancer.ingress[0].ip}" --namespace=new-feature services gceme-frontend)

curl http://$FRONTEND_SERVICE_IP/version

You should see it respond with 2.0.0, which is the version that is now running.

Now you can navigate to the EXTERNAL-IP in your browser. Go to this URL:

echo http://$FRONTEND_SERVICE_IP/

It should show the result of this new-feature push: the card color should now be orange (it was blue in the previous version).

Now that you have verified that your app is running your latest code in the development environment, deploy that code to the canary environment.

Step 1

Merge the new-feature branch into master to have a production-ready release for canary.

git checkout master
git merge new-feature
git push gcp master

Again, after you've pushed to the Git repository, navigate to the Build History page, where you can see that a build started for the master branch.

Click into the build to review the details of the job. There should be a new build triggered by the push to the master branch.

Step 2

Once the build for the trigger named "master" is complete (which may take 1-2 minutes), you can check the service URL to ensure that some of the traffic is being served by your new version. Since the canary is 1 of the 5 frontend pods, you should see about 1 in 5 requests returning version 2.0.0.

export FRONTEND_SERVICE_IP=$(kubectl get -o jsonpath="{.status.loadBalancer.ingress[0].ip}" --namespace=production services gceme-frontend)
while true; do curl http://$FRONTEND_SERVICE_IP/version; sleep 1;  done

You can stop this command by pressing Ctrl-C.
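
To quantify the split, here is a small sketch that samples 20 requests and tallies the versions (it assumes /version returns just the version string, as above):

for i in $(seq 1 20); do curl -s http://$FRONTEND_SERVICE_IP/version; echo; done | sort | uniq -c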

Now that your canary release was successful and you haven't heard any customer complaints, you can deploy to the rest of your production fleet.

Step 1

Tag the release as v2.0.0 and push the tag to the Git server.

git tag v2.0.0
git push gcp v2.0.0

Review the job on the Build History page, where you can see that your build started for the v2.0.0 tag.

Click into the build to review the details of the job triggered by the push of the v2.0.0 tag.

Step 2

Once complete, you can check the service URL to ensure that all of the traffic is being served by your new version, 2.0.0. You can also navigate to the site using your browser to see your orange cards.

export FRONTEND_SERVICE_IP=$(kubectl get -o jsonpath="{.status.loadBalancer.ingress[0].ip}" --namespace=production services gceme-frontend)
while true; do curl http://$FRONTEND_SERVICE_IP/version; sleep 1;  done

Now you can navigate to the EXTERNAL-IP in your browser. Go to this URL:

echo http://$FRONTEND_SERVICE_IP/

Step 3

You can stop this command by pressing Ctrl-C.

To clean up the resources on your project (so that you have enough quota for the rest of the event), remove what this lab created:
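
A minimal cleanup sketch, removing the cluster and the source repository created in this lab (both deletions are irreversible):

gcloud container clusters delete ${CLUSTER} --zone=${ZONE} --quiet
gcloud source repos delete default --quiet

The three build triggers can be removed from the Build Triggers page (Tools -> Cloud Build -> Triggers).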

Congratulations! You have created a simple CI/CD pipeline. Based on code pushes, the pipeline can continuously build and deploy to development, canary, or production environments.

Enterprise CI/CD pipelines tend to be more complex. For example, a full pipeline may include more robust continuous deployment tooling such as Spinnaker. Istio and Stackdriver can help with the management and operations aspects, providing feedback for optimizing your apps.

Suggested Next Steps: