Google Container Engine makes it easy to run Docker containers in the cloud. It uses Kubernetes, an open source container orchestration system, to ensure that your cluster is running exactly the way you want it to at all times.

Follow along with this lab to learn how to launch a container on Google Container Engine.

What you'll learn

How to create a Google Container Engine cluster
How to build a Docker container image and publish it to Google Container Registry
How to deploy your container with Kubernetes and expose it to external traffic

What you'll need

A Google Account
A web browser

Self-paced environment setup

If you don't already have a Google Account (Gmail or Google Apps), you must create one. Sign in to the Google Cloud Platform Console (console.cloud.google.com) and create a new project:

Remember the project ID: it must be a name that is unique across all Google Cloud projects, so a name that someone else has already taken will not work for you. It will be referred to later in this codelab as PROJECT_ID.

Next, you'll need to enable billing in the Cloud Platform Console in order to use Google Cloud resources like Compute Engine and Container Engine.

Running through this codelab shouldn't cost you more than a few dollars, but it could be more if you decide to use more resources or if you leave them running (see the Cleanup section near the end of this document).

New users of Google Cloud Platform are eligible for a $300 free trial.

In this section you'll create a Google Container Engine cluster.

Log in to Google Cloud Console

Navigate to the Google Cloud Console at https://console.cloud.google.com in another browser tab or window. Use the login credentials given to you by the lab proctor.

Set Up Project Prerequisites

Enable APIs

Search for "Google Compute Engine" in the search box. Click on "Google Compute Engine" in the results list that appears.

Now click "Enable".

Set Compute Zone

Launch Cloud Shell by clicking on the terminal icon in the top toolbar.

Cloud Shell is a browser-based terminal connected to a virtual machine that has the Google Cloud Platform tools installed, along with some additional tools (like editors and compilers) that are handy when you are developing or debugging your cloud application.

We'll be using the gcloud command to create the cluster. First, though, we need to set the compute zone so that the virtual machines in our cluster are created in the zone we want. We can do this using gcloud config set compute/zone. Enter the following in Cloud Shell.

gcloud config set compute/zone us-central1-f
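
If you want to confirm the setting took effect, you can list it (the exact output format may vary between gcloud versions):

gcloud config list compute/zone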

Create a New Cluster

You can create a new container cluster with the gcloud command like this:

gcloud container clusters create hello-world

This command creates a new cluster called "hello-world" with three nodes (VMs). You can configure this command with additional flags to change the number of nodes, the default permissions, and other variables. See the documentation for more details.
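
For example, you could make the node count and machine type explicit instead of relying on the defaults (the values shown here are simply the defaults used above):

gcloud container clusters create hello-world --num-nodes 3 --machine-type n1-standard-1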

Launching the cluster may take a bit of time but once it is up you should see output in Cloud Shell that looks like this:

NAME         ZONE           MASTER_VERSION  MASTER_IP        MACHINE_TYPE   NODE_VERSION  NUM_NODES  STATUS
hello-world  us-central1-f  1.4.6           104.197.119.168  n1-standard-1  1.4.6         3          RUNNING
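
gcloud also configures the kubectl command-line tool to talk to the new cluster for you, so you can verify that its three nodes are up and ready:

kubectl get nodes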

The next step is to build and publish a container image that contains your code. We will be using Docker to build the image, and Google Container Registry to securely publish it.

Set your project ID

You will be using the Google Cloud Project ID in many of the commands in this lab. The Project ID is conveniently stored in an environment variable in Cloud Shell. You can see it here:

echo $DEVSHELL_PROJECT_ID
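
If the variable is empty for some reason (for example, if you are running these commands outside of Cloud Shell), you can set it yourself; the value below is just a placeholder for your own project ID:

export DEVSHELL_PROJECT_ID=your-project-id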

Get the sample code

git clone https://github.com/GoogleCloudPlatform/nodejs-docs-samples.git
cd nodejs-docs-samples/containerengine/hello-world/

Build the container

Docker containers are built using a Dockerfile. The sample code provides a basic Dockerfile that we can use. Here are the contents of the file, annotated with what each instruction does:

# Build on the official Node.js 4 base image
FROM node:4
# The application listens on port 8080
EXPOSE 8080
# Copy the application code into the image
COPY server.js .
# Start the server when the container runs
CMD node server.js

To build the container, run the following command:

docker build -t gcr.io/$DEVSHELL_PROJECT_ID/hello-node:1.0 .

This will build a Docker container image and store it locally.
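
If you'd like to try the image before publishing it, you can run it locally in Cloud Shell, send it a request, and then stop it (this sketch assumes port 8080 is free and names the test container hello-node-test):

docker run -d -p 8080:8080 --name hello-node-test gcr.io/$DEVSHELL_PROJECT_ID/hello-node:1.0
curl http://localhost:8080
docker stop hello-node-test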

Publish the container

In order for Kubernetes to access your image, you need to store it in a container registry.

Run the following command to publish your container image:

gcloud docker -- push gcr.io/$DEVSHELL_PROJECT_ID/hello-node:1.0

Now that we have a cluster running and our application built, it is time to deploy it.

Create Your Deployment

A deployment is a core component of Kubernetes that makes sure your application is always running. A deployment schedules and manages a set of pods on the cluster. A pod is a group of one or more containers that "travel together": they are always scheduled onto the same node and share resources such as the network. For this example we only have one container in our pod.

Typically, you would create a YAML file with the configuration for the deployment. In this example, we are going to skip that step and instead create the deployment directly on the command line.
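
For reference, the YAML file you are skipping might look roughly like this sketch (assuming a Kubernetes 1.4 cluster, where Deployments are served from the extensions/v1beta1 API; the file name is hypothetical and the fields mirror the kubectl run command below):

cat <<EOF > hello-node-deployment.yaml    # hypothetical file name
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: hello-node
spec:
  replicas: 1
  template:
    metadata:
      labels:
        run: hello-node
    spec:
      containers:
      - name: hello-node
        image: gcr.io/$DEVSHELL_PROJECT_ID/hello-node:1.0
        ports:
        - containerPort: 8080
EOF
# You would then create it with: kubectl create -f hello-node-deployment.yaml
# (instead of the kubectl run command below)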

Create the pod using kubectl

kubectl run hello-node --image=gcr.io/$DEVSHELL_PROJECT_ID/hello-node:1.0 --port=8080

This command starts a single pod running your container image on one of the nodes in the cluster.

You can see the deployment you created using kubectl.

kubectl get deployments

You should get back a result that looks something like:

NAME          DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
hello-node    1         1         1            1           30s
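
If you want more detail than the summary table, such as the rollout events and the pod template, you can describe the deployment:

kubectl describe deployment hello-node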

You can see the pod running using kubectl as well.

kubectl get pods

You should get back a result that looks something like:

NAME                            READY     STATUS    RESTARTS   AGE
hello-node-3375482827-7hs3q     1/1       Running   0          1m
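
To see the output of the application itself, fetch the container's logs (substitute the pod name from your own listing):

kubectl logs hello-node-3375482827-7hs3q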

Allow External Traffic

By default a pod is only accessible to other machines inside the cluster. In order to reach the Node.js container that was created from outside the cluster, it needs to be exposed as a service.

Typically, you would create a YAML file with the configuration for the service. In this example, we are going to skip that step and instead create the service directly on the command line.
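
As with the deployment, here is a sketch of roughly what the skipped service file could look like (the selector assumes the run=hello-node label that kubectl run applied to the pods above; the file name is hypothetical):

cat <<EOF > hello-node-service.yaml    # hypothetical file name
apiVersion: v1
kind: Service
metadata:
  name: hello-node
spec:
  type: LoadBalancer
  selector:
    run: hello-node
  ports:
  - port: 80
    targetPort: 8080
EOF
# You would then create it with: kubectl create -f hello-node-service.yaml
# (instead of the kubectl expose command below)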

Expose the deployment with the kubectl expose command.

kubectl expose deployment hello-node --name=hello-node --type=LoadBalancer --port=80 --target-port=8080

kubectl expose creates a service, the forwarding rules for the load balancer, and the firewall rules that allow external traffic to be sent to the pod. The --type=LoadBalancer flag creates a Google Cloud Network Load Balancer that will accept external traffic.

To get the IP address for your service, run the following command:

kubectl get svc hello-node

You should get back a result that looks something like:

NAME         CLUSTER-IP    EXTERNAL-IP       PORT(S)   AGE
hello-node   10.3.247.85   104.198.151.208   80/TCP    8m
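
The EXTERNAL-IP column may show <pending> for a minute or two while the Google Cloud load balancer is being provisioned; just re-run the command until an IP appears. Once it does, you can also test the service straight from Cloud Shell (substitute your own external IP):

curl http://104.198.151.208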

Verify the Deployment

Open a new browser window or tab and navigate to the external IP address from the previous step. You should see the sample code up and running!
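
Cleanup

When you are finished with the lab, you can delete what you created so that it stops incurring charges. Deleting the service first removes the load balancer, and deleting the cluster removes all of its nodes (gcloud will ask you to confirm):

kubectl delete service hello-node
gcloud container clusters delete hello-world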

Google Container Engine and Kubernetes provide a powerful and flexible way to run containers on Google Cloud Platform. Kubernetes can also be used on your own hardware or on other cloud providers.

This example used only a single container, but it is simple to set up pods with multiple containers, or to run multiple instances of a single container, as well.

What we've covered

Creating a Kubernetes cluster with Google Container Engine
Building a Docker container image and publishing it to Google Container Registry
Deploying the image to the cluster with kubectl and exposing it to external traffic with a load balancer

Next Steps