To complete this lab, you need:
Access to a supported Internet browser:
Google Container Engine (GKE) includes integrated support for network load balancing. To enable network load balancing, all you need to do is include the field
type: LoadBalancer in your service configuration file. GKE will set up and connect the network load balancer to your service.
However, if you want more advanced load balancing features, including HTTPS load balancing, cross-region load balancing, or content-based load balancing, you need to integrate your service with the HTTP/HTTPS load balancer provided by Google Compute Engine (GCE).
In the first part of this lab you will configure a cluster with network load balancing.
In the second part of this lab you will configure a replicated nginx service. Then you will use a Kubernetes extension, called ingress, to expose the service behind an HTTP load balancer.
Set some environment variables and gcloud default values. This practice improves consistency and reduces the chance of typing errors.
Set gcloud default values.
$ gcloud config set project [PROJECT_ID]
$ gcloud config set compute/zone us-central1-c
$ gcloud config set compute/region us-central1
$ gcloud config list
Set environment variables.
$ export CLUSTER_NAME="httploadbalancer"
$ export ZONE="us-central1-c"
$ export REGION="us-central1"
Step 1: Create a Kubernetes cluster using Google Container Engine.
$ gcloud container clusters create networklb --num-nodes 3
This will take several minutes to run.
It will create Google Compute Engine instances, and configure each instance as a Kubernetes node.
These instances don't include the Kubernetes Master node. In Google Container Engine, the Kubernetes Master node is a managed service.
You can see the newly created instances in the Google Compute Engine > VM Instances page.
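You can also list the same node VMs from Cloud Shell with gcloud. The name filter below is an assumption based on GKE's usual practice of prefixing node instance names with gke- followed by the cluster name:

```shell
# List the Compute Engine VMs acting as Kubernetes nodes.
# GKE typically names node instances gke-<cluster>-<pool>-<hash>.
gcloud compute instances list --filter="name ~ gke-networklb"
```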
Step 1: Deploy nginx into the Kubernetes cluster. This requires two commands.
$ kubectl run nginx --image=nginx --replicas=3
deployment "nginx" created
This will create a deployment that spins up 3 pods, each running the nginx image.
Step 2: Verify that the pods are running.
You can see the status of deployment by running:
$ kubectl get pods -owide
NAME          READY   STATUS    RESTARTS   AGE   NODE
nginx-fffsc   1/1     Running   0          1m    gke-demo-2-43558313-node-sgve
nginx-nk1ok   1/1     Running   0          1m    gke-demo-2-43558313-node-hswk
nginx-x86ck   1/1     Running   0          1m    gke-demo-2-43558313-node-wskh
You can see that each nginx pod is now running on a different node (virtual machine).
Once all pods have the Running status, you can then expose the nginx cluster as an external service.
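Rather than re-running kubectl get pods by hand, recent kubectl versions can block until the pods are ready. This sketch assumes the run=nginx label that kubectl run applies to the pods it creates (visible in the SELECTOR column of kubectl get service output):

```shell
# Wait (up to 2 minutes) for every nginx pod to report Ready.
# kubectl run labels the pods it creates with run=<name>.
kubectl wait --for=condition=Ready pod -l run=nginx --timeout=120s
```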
Step 3: Expose the nginx cluster as an external service.
$ kubectl expose deployment nginx --port=80 --target-port=80 \
    --type=LoadBalancer
service "nginx" exposed
This command will create a network load balancer to load balance traffic to the three nginx pods.
Step 4: Find the network load balancer address:
$ kubectl get service nginx
NAME    CLUSTER_IP   EXTERNAL_IP   PORT(S)   SELECTOR    AGE
nginx   10.X.X.X     X.X.X.X       80/TCP    run=nginx   1m
It may take several minutes to see the value of EXTERNAL_IP. If you don't see it the first time with the above command, retry every minute or so until the value of EXTERNAL_IP is displayed.
You can then visit http://EXTERNAL_IP/ to see the server being served through network load balancing.
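The manual retry can also be scripted. The sketch below polls the Service's status field for the load balancer IP (using the standard Kubernetes Service status layout) and then fetches the page:

```shell
# Poll until the network load balancer publishes an external IP.
EXTERNAL_IP=""
while [ -z "$EXTERNAL_IP" ]; do
  sleep 10
  EXTERNAL_IP=$(kubectl get service nginx \
    -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
done
# Fetch the nginx default page through the load balancer.
curl -s "http://${EXTERNAL_IP}/"
```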
Now clean up nginx before moving on to deploy a full stack application.
Step 1: Delete the service:
$ kubectl delete service nginx
service "nginx" deleted
Step 2: Delete the deployment. This will subsequently delete the pods (all of the nginx instances) as well:
$ kubectl delete deployment nginx
deployment "nginx" deleted
Step 3: Delete the cluster.
$ gcloud container clusters delete networklb
Create the Kubernetes cluster in GKE.
$ gcloud container clusters create $CLUSTER_NAME --zone $ZONE
Create a pod with a single nginx container. The following command creates an instance of the nginx image serving on port 80.
$ kubectl run nginx --image=nginx --port=80
Create a Container Engine service that exposes the nginx Pod on each Node in the cluster.
$ kubectl expose deployment nginx --target-port=80 --type=NodePort
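A NodePort service exposes the pods' port 80 on a high-numbered port (30000-32767 by default) on every node, and it is that node port the GCE HTTP load balancer will target. You can read back the assigned port with a jsonpath query:

```shell
# Print the node port Kubernetes assigned to the nginx service.
kubectl get service nginx -o jsonpath='{.spec.ports[0].nodePort}'
```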
Ingress is an extension to the Kubernetes API that encapsulates a collection of rules for routing external traffic to Kubernetes endpoints. On Google Container Engine, ingress is implemented with a Google Cloud Load Balancer.
You will need a configuration file that defines an ingress object and configures it to direct traffic to your nginx server.
Using your favorite editor in Cloud Shell, create the configuration file for ingress. Call it basic-ingress.yaml.
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: basic-ingress
spec:
  backend:
    serviceName: nginx
    servicePort: 80
In the following example, Tomcat is running on 8080 and nginx on 80.
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: example-ingress
spec:
  backend:
    serviceName: default-handler
    servicePort: 80
  rules:
  - host: my.app.com
    http:
      paths:
      - path: /tomcat
        backend:
          serviceName: tomcat
          servicePort: 8080
      - path: /nginx
        backend:
          serviceName: nginx
          servicePort: 80
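Once an ingress with rules like the one above is serving, the routing can be exercised with curl by supplying the Host header from the rule. Here my.app.com and the [INGRESS_IP] placeholder are illustrative values from the example, not part of this lab:

```shell
# Requests are matched first on Host, then on path prefix.
curl -H "Host: my.app.com" http://[INGRESS_IP]/tomcat   # routed to tomcat:8080
curl -H "Host: my.app.com" http://[INGRESS_IP]/nginx    # routed to nginx:80
```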
Use the following command to create the ingress.
$ kubectl create -f basic-ingress.yaml
Ingress will create the HTTP load balancing resources in GCE and connect them to the deployment. It will take a few minutes for the backend systems to pass health checks and begin serving traffic.
Use the following command to monitor the progress.
$ kubectl get ingress basic-ingress --watch
Wait until all three servers are identified before moving on.
Now that the service is operational, you can check on its status with the kubectl describe command. It will give you a list of the HTTP load balancing resources, the backend systems, and their health status.
$ kubectl describe ingress basic-ingress
You can use the kubectl get ingress command to identify the external IP address of the load balancer. Use curl or browse to the address to verify that nginx is being served through the load balancer.
$ kubectl get ingress basic-ingress
$ curl [IP Address]
Delete the ingress object.
$ kubectl delete -f basic-ingress.yaml
Shut down and delete nginx.
$ kubectl delete deployment nginx
Delete the cluster.
$ gcloud container clusters delete $CLUSTER_NAME
©Google, Inc. or its affiliates. All rights reserved. Do not distribute.