1. Introduction
Welcome to the advanced load balancing optimizations codelab!
In this codelab, you will learn how to configure advanced load balancing options for the global external application load balancer. Before you start, it is recommended that you first review the Cloud Load Balancing overview (https://cloud.google.com/load-balancing/docs/load-balancing-overview).
Figure 1. The workflow of picking a destination end point with the global external application load balancer.
Codelab topology and use cases
Figure 2. HTTP Load Balancer Routing Topology
During this codelab you will set up two managed instance groups and create a global external HTTPS load balancer. The load balancer will use several features from the list of advanced capabilities that the Envoy-based load balancer supports. Once deployed, you will generate some simulated load and verify that the configurations you set are working as expected.
What you'll learn
- How to configure a ServiceLbPolicy to fine-tune your load balancer.
What you'll need
- Knowledge of external HTTPS load balancing. The first half of this codelab is quite similar to the External HTTPS LB with Advanced Traffic Management (Envoy) codelab (https://codelabs.developers.google.com/codelabs/externalhttplb-adv). It is recommended to go through that codelab first.
2. Before you begin
Inside Cloud Shell, make sure that your project ID is set:
gcloud config list project
gcloud config set project [YOUR-PROJECT-NAME]
prodproject=YOUR-PROJECT-NAME
echo $prodproject
Enable APIs
Enable all necessary services
gcloud services enable compute.googleapis.com
gcloud services enable logging.googleapis.com
gcloud services enable monitoring.googleapis.com
gcloud services enable networkservices.googleapis.com
3. Create the VPC network
Create a VPC network
From Cloud Shell
gcloud compute networks create httplbs --subnet-mode=auto
Output
Created [https://www.googleapis.com/compute/v1/projects/PROJECT_ID/global/networks/httplbs].
NAME     SUBNET_MODE  BGP_ROUTING_MODE  IPV4_RANGE  GATEWAY_IPV4
httplbs  AUTO         REGIONAL
Create VPC firewall rules
After creating the VPC, you will create a firewall rule. This rule allows all IPs to reach the external IP of the test application's website on port 80 for HTTP traffic.
From Cloud Shell
gcloud compute firewall-rules create httplb-allow-http-rule \
  --allow tcp:80 \
  --network httplbs \
  --source-ranges 0.0.0.0/0 \
  --priority 700
Output
Creating firewall...working..Created [https://www.googleapis.com/compute/v1/projects/PROJECT_ID/global/firewalls/httplb-allow-http-rule].
Creating firewall...done.
NAME                    NETWORK  DIRECTION  PRIORITY  ALLOW   DENY  DISABLED
httplb-allow-http-rule  httplbs  INGRESS    700       tcp:80        False
Later in this codelab, we will tweak the health of the backend VMs, so we will also create a firewall rule that allows SSH.
From Cloud Shell
gcloud compute firewall-rules create fw-allow-ssh \
  --network=httplbs \
  --action=allow \
  --direction=ingress \
  --target-tags=allow-ssh \
  --rules=tcp:22
Output
Creating firewall...working..Created [https://www.googleapis.com/compute/v1/projects/PROJECT_ID/global/firewalls/fw-allow-ssh].
Creating firewall...done.
NAME          NETWORK  DIRECTION  PRIORITY  ALLOW   DENY  DISABLED
fw-allow-ssh  httplbs  INGRESS    1000      tcp:22        False
4. Set up the Managed Instance Groups
You need to set up Managed Instance Groups, which provide the backend resources used by the HTTP Load Balancer. First, we will create an Instance Template that defines the configuration for the VMs to be created. Next, we will create a Managed Instance Group in each zone that references the Instance Template.
Managed Instance Groups can be zonal or regional in scope. For this lab exercise we will be creating zonal Managed Instance Groups.
In this section, you can see a pre-created startup script that will be referenced upon instance creation. This startup script installs and enables web server capabilities which we will use to simulate a web application. Feel free to explore this script.
Create the Instance Templates
The first step is to create an instance template.
From Cloud Shell
gcloud compute instance-templates create test-template \
  --network=httplbs \
  --tags=allow-ssh,http-server \
  --image-family=debian-9 \
  --image-project=debian-cloud \
  --metadata=startup-script='#! /bin/bash
apt-get update
apt-get install apache2 -y
a2ensite default-ssl
a2enmod ssl
vm_hostname="$(curl -H "Metadata-Flavor:Google" \
http://169.254.169.254/computeMetadata/v1/instance/name)"
echo "Page served from: $vm_hostname" | \
tee /var/www/html/index.html
systemctl restart apache2'
Output
NAME           MACHINE_TYPE   PREEMPTIBLE  CREATION_TIMESTAMP
test-template  n1-standard-1               2021-11-09T09:24:35.275-08:00
You can now verify that the instance template was created successfully with the following gcloud command:
From Cloud Shell
gcloud compute instance-templates list
Output
NAME           MACHINE_TYPE   PREEMPTIBLE  CREATION_TIMESTAMP
test-template  n1-standard-1               2021-11-09T09:24:35.275-08:00
Create the Instance Groups
We now must create managed instance groups from the instance template we created earlier.
From Cloud Shell
gcloud compute instance-groups managed create us-east1-a-mig \
  --size=1 \
  --template=test-template \
  --zone=us-east1-a
Output
Created [https://www.googleapis.com/compute/v1/projects/PROJECT_ID/zones/us-east1-a/instanceGroupManagers/us-east1-a-mig].
NAME            LOCATION    SCOPE  BASE_INSTANCE_NAME  SIZE  TARGET_SIZE  INSTANCE_TEMPLATE  AUTOSCALED
us-east1-a-mig  us-east1-a  zone   us-east1-a-mig      0     1            test-template      no
From Cloud Shell
gcloud compute instance-groups managed create us-east1-b-mig \
  --size=5 \
  --template=test-template \
  --zone=us-east1-b
Output
Created [https://www.googleapis.com/compute/v1/projects/PROJECT_ID/zones/us-east1-b/instanceGroupManagers/us-east1-b-mig].
NAME            LOCATION    SCOPE  BASE_INSTANCE_NAME  SIZE  TARGET_SIZE  INSTANCE_TEMPLATE  AUTOSCALED
us-east1-b-mig  us-east1-b  zone   us-east1-b-mig      0     5            test-template      no
We can verify our instance groups were successfully created with the following gcloud command:
From Cloud Shell
gcloud compute instance-groups list
Output
NAME            LOCATION    SCOPE  NETWORK  MANAGED  INSTANCES
us-east1-a-mig  us-east1-a  zone   httplbs  Yes      1
us-east1-b-mig  us-east1-b  zone   httplbs  Yes      5
Verify Web Server Functionality
Each instance is configured to run an Apache web server whose startup script writes a simple page that renders something like below:
Page served from: us-east1-a-mig-ww2h
To ensure your web servers are functioning correctly, navigate to Compute Engine -> VM instances. Ensure that your new instances (e.g. us-east1-a-mig-xxx) have been created according to their instance group definitions.
Now, make a web request in your browser to ensure that the web server is running (this may take a minute to start). On the VM instances page under Compute Engine, select an instance created by your instance group and click its External (public) IP.
Or, in your browser, navigate to http://<IP_Address>
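Alternatively, you can make the same check from Cloud Shell. The sketch below is optional and not part of the original lab steps; <INSTANCE_EXTERNAL_IP> is a placeholder for the external IP shown in the instances list.
gcloud compute instances list
curl http://<INSTANCE_EXTERNAL_IP>
If the web server is up, the curl command returns a line like "Page served from: us-east1-a-mig-xxxx".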
5. Set up the Load Balancer
Create Health Check
First we must create a basic health check to ensure that our services are up and running. We will create only a basic health check here; many more advanced customizations are available, and an illustrative example is sketched after the output below.
From Cloud Shell
gcloud compute health-checks create http http-basic-check \
  --port 80
Output
Created [https://www.googleapis.com/compute/v1/projects/PROJECT_ID/global/healthChecks/http-basic-check].
NAME              PROTOCOL
http-basic-check  HTTP
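For reference only, a more customized health check might look like the following sketch. The name http-tuned-check and the timing values are illustrative assumptions and are not used anywhere in this codelab.
gcloud compute health-checks create http http-tuned-check \
  --port 80 \
  --request-path=/ \
  --check-interval=10s \
  --timeout=5s \
  --healthy-threshold=2 \
  --unhealthy-threshold=3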
Reserve External IP Address
For this step you will need to reserve a globally available static IP address that will later be attached to the Load Balancer.
From Cloud Shell
gcloud compute addresses create lb-ipv4-2 \
  --ip-version=IPV4 \
  --global
Output
Created [https://www.googleapis.com/compute/v1/projects/PROJECT_ID/global/addresses/lb-ipv4-2].
Make sure to note the IP Address that was reserved.
gcloud compute addresses describe lb-ipv4-2 \
  --format="get(address)" \
  --global
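Optionally, you can capture the address in a shell variable for later steps. LB_IP is a hypothetical variable name used only in this codelab's optional examples.
LB_IP=$(gcloud compute addresses describe lb-ipv4-2 --format="get(address)" --global)
echo $LB_IP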
Create Backend Services
Now we must create a backend service for the managed instance groups we created earlier.
From Cloud Shell
gcloud compute backend-services create east-backend-service \
  --load-balancing-scheme=EXTERNAL_MANAGED \
  --protocol=HTTP \
  --port-name=http \
  --health-checks=http-basic-check \
  --global
Output
Created [https://www.googleapis.com/compute/v1/projects/PROJECT_ID/global/backendServices/east-backend-service].
NAME                  BACKENDS  PROTOCOL
east-backend-service            HTTP
Add MIGs to Backend Services
Now that we have created the backend service, we must add the Managed Instance Groups we created earlier to it.
From Cloud Shell
gcloud compute backend-services add-backend east-backend-service \
  --instance-group us-east1-a-mig \
  --instance-group-zone us-east1-a \
  --global
From Cloud Shell
gcloud compute backend-services add-backend east-backend-service \
  --instance-group us-east1-b-mig \
  --instance-group-zone us-east1-b \
  --global
You can verify that the backends have been added by running the following command.
From Cloud Shell
gcloud compute backend-services list
Output
NAME                  BACKENDS                                                                           PROTOCOL
east-backend-service  us-east1-a/instanceGroups/us-east1-a-mig,us-east1-b/instanceGroups/us-east1-b-mig  HTTP
Create URL Map
Now we will create a URL map.
From Cloud Shell
gcloud compute url-maps create web-map-http \
  --default-service=east-backend-service \
  --global
Output
Created [https://www.googleapis.com/compute/v1/projects/PROJECT_ID/global/urlMaps/web-map-http].
NAME          DEFAULT_SERVICE
web-map-http  backendServices/east-backend-service
Create HTTP Frontend
The final step in creating the load balancer is to create the frontend. This will map the IP address you reserved earlier to the load balancer URL map you created.
From Cloud Shell
gcloud compute target-http-proxies create http-lb-proxy-adv \
  --url-map=web-map-http
Output
Created [https://www.googleapis.com/compute/v1/projects/PROJECT_ID/global/targetHttpProxies/http-lb-proxy-adv].
NAME               URL_MAP
http-lb-proxy-adv  web-map-http
Next you need to create a global forwarding rule which will map the IP address reserved earlier to the HTTP proxy.
From Cloud Shell
gcloud compute forwarding-rules create http-content-rule \
  --load-balancing-scheme EXTERNAL_MANAGED \
  --address=lb-ipv4-2 \
  --global \
  --target-http-proxy=http-lb-proxy-adv \
  --ports=80
At this point, you can confirm that the load balancer is working with the IP address you noted down earlier.
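For example, a quick check from Cloud Shell should return a page served by one of the backends. This uses the hypothetical LB_IP variable from the earlier optional step; you can substitute the reserved address directly. It can take a few minutes for the forwarding rule to start serving traffic.
curl http://$LB_IP
A successful response looks like "Page served from: us-east1-a-mig-xxxx".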
6. Verify that the Load Balancer is Working
To verify that load balancing is working, you need to generate some load. To do this, you will create a new VM that simulates load.
Create Siege-vm
Now you will create the siege-vm, which you will use to generate load.
From Cloud Shell
gcloud compute instances create siege-vm \
  --network=httplbs \
  --zone=us-east1-a \
  --machine-type=e2-medium \
  --tags=allow-ssh,http-server \
  --metadata=startup-script='sudo apt-get -y install siege'
Output
Created [https://www.googleapis.com/compute/v1/projects/PROJECT_ID/zones/us-east1-a/instances/siege-vm].
NAME      ZONE        MACHINE_TYPE  PREEMPTIBLE  INTERNAL_IP  EXTERNAL_IP   STATUS
siege-vm  us-east1-a  e2-medium                  10.132.0.15  34.143.20.68  RUNNING
Next, SSH into the VM you created. Once it is up, click SSH to launch a terminal and connect.
Once connected, run the following command to generate load, replacing <LOAD_BALANCER_IP> with the IP address that you reserved earlier for the external HTTP load balancer.
From the siege-vm SSH terminal
siege -c 20 http://<LOAD_BALANCER_IP>
Output
New configuration template added to /home/cloudcurriculumdeveloper/.siege
Run siege -C to view the current settings in that file
Check Load Distribution
Now that siege is running, it is time to check that traffic is being distributed equally across the two managed instance groups.
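One way to check the distribution, sketched here as an optional addition rather than an official lab step, is to sample the load balancer from Cloud Shell and count which instance served each response (LB_IP is the hypothetical variable from earlier):
for i in $(seq 1 100); do curl -s http://$LB_IP; done | sort | uniq -c
With both MIGs healthy, you should see responses from instances in both us-east1-a-mig and us-east1-b-mig.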
Stop the Siege
Now that you have verified that traffic is being distributed across both backends, it is time to stop the siege. To do so, return to the SSH terminal of siege-vm and press CTRL+C to stop siege.
7. Configure Service LB Policy
Create a Service LB Policy
Now that the basic setup is done, we will create a Service LB Policy and try out the advanced features. As an example, we will configure the service to use some advanced load balancing settings. In this example, we are just going to create a policy that exercises the auto capacity drain feature, but feel free to try other features out.
From Cloud Shell
gcloud beta network-services service-lb-policies create http-policy \
  --auto-capacity-drain \
  --location=global
We can verify our policy was successfully created with the following gcloud command:
From Cloud Shell
gcloud beta network-services service-lb-policies list --location=global
Output
NAME
http-policy
Attach Service LB Policy to backend service
We will now attach the new policy to the existing backend service created above.
From Cloud Shell
gcloud beta compute backend-services update east-backend-service \
  --service-lb-policy=http-policy \
  --global
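You can optionally confirm the attachment by describing the backend service and looking for a reference to http-policy in the output; the exact field name (assumed here to be serviceLbPolicy, based on the flag name) may differ.
gcloud beta compute backend-services describe east-backend-service --global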
8. Tweak Backend Health
At this point, the new service LB policy has been applied to your backend service, so technically you could jump directly to cleanup. But as part of the codelab, we will also make some additional tweaks to show how the new policy works.
The auto capacity drain feature automatically removes a backend MIG from the load balancer when the fraction of healthy instances in that MIG drops below a threshold (25%). To test this feature, we are going to SSH into the VMs in us-east1-b-mig and make them unhealthy. With the 25% threshold, you will need to SSH into four of the five VMs and shut down the Apache server.
To do so, pick four VMs in us-east1-b-mig and SSH into each of them by clicking SSH to launch a terminal and connect. Then run the following command on each.
sudo apachectl stop
At this point, the auto capacity drain feature will be triggered and us-east1-b-mig will not receive new requests.
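Before re-running the load test, you can optionally confirm that those instances are now failing their health checks:
gcloud compute backend-services get-health east-backend-service --global
Once the health check interval elapses, the four instances in us-east1-b-mig on which you stopped Apache should report an UNHEALTHY health state.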
9. Verify that the Auto Capacity Drain Feature is Working
Restart the Siege
To verify the new feature, we will reuse the siege-vm. SSH into the siege-vm you created earlier: on the VM instances page, click SSH to launch a terminal and connect.
Once connected, run the following command to generate load, replacing <LOAD_BALANCER_IP> with the IP address that you reserved earlier for the external HTTP load balancer.
From the siege-vm SSH terminal
siege -c 20 http://<LOAD_BALANCER_IP>
Output
New configuration template added to /home/cloudcurriculumdeveloper/.siege
Run siege -C to view the current settings in that file
At this point, you will notice that all requests are sent to us-east1-a-mig.
Stop the Siege
Now that you have demonstrated that the auto capacity drain feature is working, it is time to stop the siege. To do so, return to the SSH terminal of siege-vm and press CTRL+C to stop siege.
10. Cleanup steps
Now that we are finished with the lab environment, it is time to tear it down. Please run the following commands to delete the test environment.
From Cloud Shell
gcloud compute instances delete siege-vm --zone=us-east1-a
gcloud compute forwarding-rules delete http-content-rule --global
gcloud compute target-http-proxies delete http-lb-proxy-adv
gcloud compute url-maps delete web-map-http
gcloud compute backend-services delete east-backend-service --global
gcloud compute addresses delete lb-ipv4-2 --global
gcloud compute health-checks delete http-basic-check
gcloud beta network-services service-lb-policies delete http-policy --location=global
gcloud compute instance-groups managed delete us-east1-a-mig --zone=us-east1-a
gcloud compute instance-groups managed delete us-east1-b-mig --zone=us-east1-b
gcloud compute instance-templates delete test-template
gcloud compute firewall-rules delete httplb-allow-http-rule
gcloud compute firewall-rules delete fw-allow-ssh
gcloud compute networks delete httplbs
11. Congratulations!
Congratulations on completing the codelab!
What we've covered
- Creating an external application load balancer with a service LB policy.
- Configuring your backend service with the auto capacity drain feature.
Next steps
- Try out other features provided by the service LB policy.