What you need

To complete this lab, you need:

Internet access

Access to a supported Internet browser

What you do

Configure a GKE cluster with network load balancing, then deploy a replicated nginx service and expose it behind an HTTP load balancer using the Kubernetes ingress extension.

What you learn

How GKE's integrated network (Layer 3) load balancing works, and how to integrate a service with the HTTP/HTTPS (Layer 7) load balancer provided by GCE.

Google Container Engine (GKE) includes integrated support for Layer 3 network load balancing. To enable network load balancing, all you need to do is include the field type: LoadBalancer in your service configuration file. GKE will set up and connect the network load balancer to your service.
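
For reference only, a minimal service manifest using this field might look like the following sketch (you won't create this file in this lab; the metadata name is a placeholder):

apiVersion: v1
kind: Service
metadata:
  name: nginx              # placeholder name
spec:
  type: LoadBalancer       # requests a network load balancer from GCE
  selector:
    run: nginx             # matches pods created by kubectl run nginx
  ports:
  - port: 80               # port the load balancer listens on
    targetPort: 80         # port the nginx container serves on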

However, if you want more advanced Layer 7 load balancing features, including HTTPS load balancing, cross-region load balancing, or content-based load balancing, you need to integrate your service with the HTTP/HTTPS load balancer provided by Google Compute Engine (GCE).

In the first part of this lab you will configure a cluster with network load balancing.

In the second part of this lab you will configure a replicated nginx service. Then you will use a Kubernetes extension, called ingress, to expose the service behind an HTTP load balancer.

Set some environment variables and gcloud default values. This practice improves consistency and reduces the chance of typing errors.

Step 1

Set gcloud default values.

$ gcloud config set project [PROJECT_ID]

$ gcloud config set compute/zone us-central1-c

$ gcloud config set compute/region us-central1

$ gcloud config list 
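
The output should look something like the following; your account and project values will differ:

[compute]
region = us-central1
zone = us-central1-c
[core]
account = [YOUR_ACCOUNT]
project = [PROJECT_ID]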

Step 2

Set environment variables.

$ export CLUSTER_NAME="httploadbalancer"

$ export ZONE="us-central1-c"

$ export REGION="us-central1"
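
You can confirm that the variables are set by echoing them:

$ echo $CLUSTER_NAME $ZONE $REGION
httploadbalancer us-central1-c us-central1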

Step 1: Create a Kubernetes cluster using Google Container Engine.

$ gcloud container clusters create networklb --num-nodes 3

This will take several minutes to run.

It will create Google Compute Engine instances, and configure each instance as a Kubernetes node.

These instances don't include the Kubernetes Master node. In Google Container Engine, the Kubernetes Master node is a managed service.

You can see the newly created instances in the Google Compute Engine > VM Instances page.
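
You can also list them from Cloud Shell. The three node instances should appear with names beginning with gke-:

$ gcloud compute instances list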

Step 1: Deploy nginx into the Kubernetes cluster. This requires two commands: kubectl run and kubectl expose.

Deploy nginx:

$ kubectl run nginx --image=nginx --replicas=3

deployment "nginx" created

This creates a deployment that spins up 3 pods; each pod runs the nginx container.

Step 2: Verify that the pods are running.

You can see the status of deployment by running:

$ kubectl get pods -o wide

NAME          READY     STATUS    RESTARTS   AGE       NODE
nginx-fffsc   1/1       Running   0          1m        gke-demo-2-43558313-node-sgve
nginx-nk1ok   1/1       Running   0          1m        gke-demo-2-43558313-node-hswk
nginx-x86ck   1/1       Running   0          1m        gke-demo-2-43558313-node-wskh

You can see that each nginx pod is now running on a different node (virtual machine).

Once all pods have the Running status, you can then expose the nginx cluster as an external service.

Step 3: Expose the nginx cluster as an external service.

$ kubectl expose deployment nginx --port=80 --target-port=80 \
--type=LoadBalancer

service "nginx" exposed

This command will create a network load balancer to load balance traffic to the three nginx instances.

Step 4: Find the network load balancer address:

$ kubectl get service nginx

NAME      CLUSTER_IP      EXTERNAL_IP      PORT(S)   SELECTOR    AGE
nginx     10.X.X.X        X.X.X.X          80/TCP    run=nginx   1m

It may take several minutes to see the value of EXTERNAL_IP. If you don't see it the first time with the above command, retry every minute or so until the value of EXTERNAL_IP is displayed.

You can then visit http://EXTERNAL_IP/ to see nginx being served through the network load balancer.
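
If you prefer the command line, one approach (a sketch using kubectl's jsonpath output; SERVICE_IP is just a convenience variable) is:

$ export SERVICE_IP=$(kubectl get service nginx \
  -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
$ curl http://$SERVICE_IP/

You should see the HTML of the nginx welcome page.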

Undeploy nginx before moving on to the second part of the lab.

Step 1: Delete the service:

$ kubectl delete service nginx

service "nginx" deleted

Step 2: Delete the deployment. This also deletes the pods (all of the nginx instances):

$ kubectl delete deployment nginx

deployment "nginx" deleted

Step 3: Delete the cluster.

$ gcloud container clusters delete networklb

Create the Kubernetes cluster in GKE.

Step 1

$ gcloud container clusters create $CLUSTER_NAME --zone $ZONE
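
If kubectl in your Cloud Shell session is not already configured to talk to the new cluster, you can fetch credentials for it:

$ gcloud container clusters get-credentials $CLUSTER_NAME --zone $ZONE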

Step 1

Create a pod with a single nginx server.

The following command creates an instance of the nginx image serving on port 80.

$ kubectl run nginx --image=nginx --port=80

Step 2

Create a Kubernetes service of type NodePort that exposes the nginx pod on each node in the cluster.

$ kubectl expose deployment nginx --target-port=80 --type=NodePort
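
To verify the service and see which port was allocated on the nodes, run:

$ kubectl get service nginx

The PORT(S) column shows the node port mapping, for example 80:3XXXX/TCP; the exact node port is assigned from a range and will vary.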

Ingress is an extension to the Kubernetes API that encapsulates a collection of rules for routing external traffic to Kubernetes endpoints. On Google Container Engine, ingress is implemented with a Google Cloud Load Balancer.

You will need a configuration file that defines an ingress object and configures it to direct traffic to your nginx server.

Step 1

Using your favorite editor in Cloud Shell, create the configuration file for ingress. Call it: basic-ingress.yaml

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: basic-ingress
spec:
  backend:
    serviceName: nginx
    servicePort: 80

EXAMPLE:

In the following example, Tomcat is running on 8080 and nginx on 80.

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: example-ingress
spec:
  backend:
    serviceName: default-handler
    servicePort: 80
  rules:
  - host: my.app.com
    http:
      paths:
      - path: /tomcat
        backend:
          serviceName: tomcat
          servicePort: 8080
      - path: /nginx
        backend:
          serviceName: nginx
          servicePort: 80

Step 2

Use the following command to create the ingress.

$ kubectl create -f basic-ingress.yaml

Ingress will create the HTTP load balancing resources in GCE and connect them to the deployment. It will take a few minutes for the backend systems to pass health checks and begin serving traffic.

Step 3

Use the following command to monitor the progress.

Wait until all three servers are identified, then press [CTRL]-[C] to exit.

$ kubectl get ingress basic-ingress --watch

Step 4

Now that the service is operational, you can check its status with the kubectl describe command. It gives you a list of the HTTP load balancing resources, the backend systems, and their health status.

$ kubectl describe ingress basic-ingress

Step 5

You can use the kubectl get ingress command to identify the external IP address of the load balancer. Use curl or browse to the address to verify that nginx is being served through the load balancer.

$ kubectl get ingress basic-ingress
$ curl [IP Address]
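
As a convenience, the two steps can be combined into one command (a sketch using kubectl's jsonpath output; the address field is only populated once the load balancer is ready):

$ curl http://$(kubectl get ingress basic-ingress \
  -o jsonpath='{.status.loadBalancer.ingress[0].ip}')/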

Step 1

Delete the ingress object.

$ kubectl delete -f basic-ingress.yaml

Step 2

Shut down and delete nginx.

$ kubectl delete deployment nginx

Step 3

Delete the cluster.

$ gcloud container clusters delete $CLUSTER_NAME

©Google, Inc. or its affiliates. All rights reserved. Do not distribute.