1. Introduction
Welcome to the External HTTPS LB with Advanced Traffic Management (Envoy) Codelab!
The latest version of the External HTTP(S) Load Balancer with Advanced Traffic Management supports all the features of the existing Classic Global External HTTP(S) Load Balancer, plus an ever-growing list of advanced traffic management capabilities. Some of these capabilities are new to our load balancers, and some enhance existing capabilities. A partial list of these capabilities includes:
- Weighted Traffic Splitting
- Request Mirroring
- Outlier Detection
- Request Retries
- Fault Injection
- Additional Backend Session Affinity options
- Additional Header Transformation options
- Cross-Origin Resource Sharing (CORS)
- New Load Balancing Algorithms
What you'll learn
- How to set up a Managed Instance Group and the associated VPC and firewall rules
- How to use advanced traffic management features of the new load balancer
- How to validate that the advanced traffic management features are working as intended.
What you'll need
- Basic Networking and knowledge of HTTP
- Basic Unix/Linux command line knowledge
Codelab topology & use-case
Figure 1 - HTTP Load Balancer Routing Topology
During this codelab you will set up three managed instance groups, one each in us-east1, us-west1, and us-central1, and create a global external HTTPS load balancer. The load balancer will use several features from the list of advanced capabilities that the Envoy-based load balancer supports. Once everything is deployed, you will generate simulated load and verify that the configurations you set are working as intended.
2. Setup and Requirements
Self-paced environment setup
- Sign in to the Google Cloud Console and create a new project or reuse an existing one. If you don't already have a Gmail or Google Workspace account, you must create one.
- The Project name is the display name for this project's participants. It is a character string not used by Google APIs, and you can update it at any time.
- The Project ID must be unique across all Google Cloud projects and is immutable (it cannot be changed after it has been set). The Cloud Console auto-generates a unique string; usually you don't care what it is. In most codelabs you'll need to reference the Project ID (typically identified as PROJECT_ID), so if you don't like the generated one, you can generate another random one, or you can try your own and see if it's available. It is then "frozen" after the project is created.
- There is a third value, a Project Number, which some APIs use. Learn more about all three of these values in the documentation.
- Next, you'll need to enable billing in the Cloud Console in order to use Cloud resources/APIs. Running through this codelab shouldn't cost much, if anything at all. To shut down resources so you don't incur billing beyond this tutorial, follow any "clean-up" instructions found at the end of the codelab. New users of Google Cloud are eligible for the $300 USD Free Trial program.
Start Cloud Shell
While Google Cloud can be operated remotely from your laptop, in this codelab you will be using Google Cloud Shell, a command line environment running in the Cloud.
From the GCP Console click the Cloud Shell icon on the top right toolbar:
It should only take a few moments to provision and connect to the environment. When it is finished, you should see something like this:
This virtual machine is loaded with all the development tools you'll need. It offers a persistent 5GB home directory and runs on Google Cloud, greatly enhancing network performance and authentication. All of your work in this lab can be done with just a browser.
Before you begin
Inside Cloud Shell, make sure that your project ID is set:
gcloud config list project
gcloud config set project [YOUR-PROJECT-NAME]
PROJECT_ID=[YOUR-PROJECT-NAME]
echo $PROJECT_ID
Enable APIs
Enable all necessary services
gcloud services enable compute.googleapis.com
gcloud services enable logging.googleapis.com
gcloud services enable monitoring.googleapis.com
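Optionally, you can confirm the services are enabled before moving on; the filter below is just one way to check.
From Cloud Shell
# List enabled services and filter for the three APIs used in this lab
gcloud services list --enabled | grep -E "compute|logging|monitoring"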
3. Create the VPC network
Create a VPC network
From Cloud Shell
gcloud compute networks create httplbs --subnet-mode=auto
Output
Created [https://www.googleapis.com/compute/v1/projects/PROJECT_ID/global/networks/httplbs].
NAME: httplbs
SUBNET_MODE: AUTO
BGP_ROUTING_MODE: REGIONAL
IPV4_RANGE:
GATEWAY_IPV4:
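Because the network was created in auto subnet mode, a subnet is generated in every region. If you want a quick sanity check (optional), you can list them:
From Cloud Shell
# Show the auto-created subnets for the httplbs network
gcloud compute networks subnets list \
  --network=httplbs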
Create VPC firewall rules
After creating the VPC, you will now create a firewall rule. This firewall rule allows all source IPs to reach the test application's external IP on port 80 for HTTP traffic.
From Cloud Shell
gcloud compute firewall-rules create httplb-allow-http-rule \
  --allow tcp:80 \
  --network httplbs \
  --source-ranges 0.0.0.0/0 \
  --priority 700
Output
Creating firewall...working..Created [https://www.googleapis.com/compute/v1/projects/PROJECT_ID/global/firewalls/httplb-allow-http-rule].
Creating firewall...done.
NAME: httplb-allow-http-rule
NETWORK: httplbs
DIRECTION: INGRESS
PRIORITY: 700
ALLOW: tcp:80
DENY:
DISABLED: False
4. Set up the Managed Instance Groups
You need to set up Managed Instance Groups, which provide the backend resources used by the HTTP Load Balancer. First we will create Instance Templates that define the configuration for the VMs created in each region. Next, for a backend in each region, we will create a Managed Instance Group that references an Instance Template.
Managed Instance groups can be Zonal or Regional in scope. For this lab exercise we will be creating three regional Managed Instance Groups, one in us-east1, one in us-west1, and one in us-central1.
In this section, you can see a pre-created startup script that will be referenced upon instance creation. This startup script installs and enables web server capabilities which we will use to simulate a web application. Feel free to explore this script.
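For readability, here is the same startup script as a standalone shell script with comments added; it is functionally identical to the inline version embedded in the gcloud commands below.
#! /bin/bash
# Install Apache and enable the default SSL site and module
apt-get update
apt-get install apache2 -y
a2ensite default-ssl
a2enmod ssl
# Look up this VM's name from the metadata server
vm_hostname="$(curl -H "Metadata-Flavor:Google" \
  http://169.254.169.254/computeMetadata/v1/instance/name)"
# Serve a page that identifies which VM handled the request
echo "Page served from: $vm_hostname" | \
  tee /var/www/html/index.html
systemctl restart apache2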
Create the East, West, and Central Instance Templates
The first step is to create the us-east1 instance template.
From Cloud Shell
gcloud compute instance-templates create us-east1-template \
  --region=us-east1 \
  --network=httplbs \
  --tags=http-server \
  --image-family=debian-9 \
  --image-project=debian-cloud \
  --metadata=startup-script='#! /bin/bash
apt-get update
apt-get install apache2 -y
a2ensite default-ssl
a2enmod ssl
vm_hostname="$(curl -H "Metadata-Flavor:Google" \
http://169.254.169.254/computeMetadata/v1/instance/name)"
echo "Page served from: $vm_hostname" | \
tee /var/www/html/index.html
systemctl restart apache2'
Output
Created [https://www.googleapis.com/compute/v1/projects/PROJECT_ID/global/instanceTemplates/us-east1-template].
NAME: us-east1-template
MACHINE_TYPE: n1-standard-1
PREEMPTIBLE:
CREATION_TIMESTAMP: 2021-11-11T11:02:37.511-08:00
The next step is to create the us-west1 instance template.
From Cloud Shell
gcloud compute instance-templates create us-west1-template \
  --region=us-west1 \
  --network=httplbs \
  --tags=http-server \
  --image-family=debian-9 \
  --image-project=debian-cloud \
  --metadata=startup-script='#! /bin/bash
apt-get update
apt-get install apache2 -y
a2ensite default-ssl
a2enmod ssl
vm_hostname="$(curl -H "Metadata-Flavor:Google" \
http://169.254.169.254/computeMetadata/v1/instance/name)"
echo "Page served from: $vm_hostname" | \
tee /var/www/html/index.html
systemctl restart apache2'
Output
Created [https://www.googleapis.com/compute/v1/projects/PROJECT_ID/global/instanceTemplates/us-west1-template].
NAME: us-west1-template
MACHINE_TYPE: n1-standard-1
PREEMPTIBLE:
CREATION_TIMESTAMP: 2021-11-11T11:03:08.577-08:00
The final step is to create the us-central1 instance template.
From Cloud Shell
gcloud compute instance-templates create us-central1-template \
  --region=us-central1 \
  --network=httplbs \
  --tags=http-server \
  --image-family=debian-9 \
  --image-project=debian-cloud \
  --metadata=startup-script='#! /bin/bash
apt-get update
apt-get install apache2 -y
a2ensite default-ssl
a2enmod ssl
vm_hostname="$(curl -H "Metadata-Flavor:Google" \
http://169.254.169.254/computeMetadata/v1/instance/name)"
echo "Page served from: $vm_hostname" | \
tee /var/www/html/index.html
systemctl restart apache2'
Output
Created [https://www.googleapis.com/compute/v1/projects/PROJECT_ID/global/instanceTemplates/us-central1-template].
NAME: us-central1-template
MACHINE_TYPE: n1-standard-1
PREEMPTIBLE:
CREATION_TIMESTAMP: 2021-11-11T11:03:44.179-08:00
You can now verify that the instance templates were created successfully with the following gcloud command:
From Cloud Shell
gcloud compute instance-templates list
Output
NAME                  MACHINE_TYPE   PREEMPTIBLE  CREATION_TIMESTAMP
us-central1-template  n1-standard-1               2021-11-09T09:25:37.263-08:00
us-east1-template     n1-standard-1               2021-11-09T09:24:35.275-08:00
us-west1-template     n1-standard-1               2021-11-09T09:25:08.016-08:00
Create the East, West, and Central Managed Instance Groups
We now create a managed instance group from each of the instance templates we created earlier.
From Cloud Shell
gcloud compute instance-groups managed create us-east1-mig \
  --base-instance-name=us-east1-mig \
  --size=1 \
  --template=us-east1-template \
  --zone=us-east1-b
Output
Created [https://www.googleapis.com/compute/v1/projects/PROJECT_ID/zones/us-east1-b/instanceGroupManagers/us-east1-mig].
NAME: us-east1-mig
LOCATION: us-east1-b
SCOPE: zone
BASE_INSTANCE_NAME: us-east1-mig
SIZE: 0
TARGET_SIZE: 1
INSTANCE_TEMPLATE: us-east1-template
AUTOSCALED: no
From Cloud Shell
gcloud compute instance-groups managed create us-west1-mig \
  --base-instance-name=us-west1-mig \
  --size=1 \
  --template=us-west1-template \
  --zone=us-west1-a
Output
Created [https://www.googleapis.com/compute/v1/projects/PROJECT_ID/zones/us-west1-a/instanceGroupManagers/us-west1-mig].
NAME: us-west1-mig
LOCATION: us-west1-a
SCOPE: zone
BASE_INSTANCE_NAME: us-west1-mig
SIZE: 0
TARGET_SIZE: 1
INSTANCE_TEMPLATE: us-west1-template
AUTOSCALED: no
From Cloud Shell
gcloud compute instance-groups managed create us-central1-mig \
  --base-instance-name=us-central1-mig \
  --size=1 \
  --template=us-central1-template \
  --zone=us-central1-a
Output
Created [https://www.googleapis.com/compute/v1/projects/PROJECT_ID/zones/us-central1-a/instanceGroupManagers/us-central1-mig].
NAME: us-central1-mig
LOCATION: us-central1-a
SCOPE: zone
BASE_INSTANCE_NAME: us-central1-mig
SIZE: 0
TARGET_SIZE: 1
INSTANCE_TEMPLATE: us-central1-template
AUTOSCALED: no
We can verify our instance groups were successfully created with the following gcloud command:
From Cloud Shell
gcloud compute instance-groups list
Output
NAME             LOCATION     SCOPE  NETWORK  MANAGED  INSTANCES
us-central1-mig  us-central1  zone   httplbs  Yes      1
us-west1-mig     us-west1     zone   httplbs  Yes      1
us-east1-mig     us-east1     zone   httplbs  Yes      1
Verify Web Server Functionality
Each instance is configured to run an Apache web server that serves a simple page showing the name of the VM that handled the request.
To ensure your web servers are functioning correctly, navigate to Compute Engine -> VM instances. Ensure that your new instances (e.g. us-east1-mig-xxx) have been created according to their instance group definitions.
Now, make a web request to an instance in your browser to ensure the web server is running (it may take a minute for the instances to finish starting). On the VM instances page under Compute Engine, select an instance created by your instance group and click its External (public) IP.
Or, in your browser, navigate to http://<IP_Address>
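If you prefer the command line, a quick curl from Cloud Shell works just as well; substitute the external IP shown for the instance in place of the placeholder. You should see a response similar to "Page served from: us-east1-mig-xxxx".
From Cloud Shell
# Replace [INSTANCE_EXTERNAL_IP] with the external IP of one of your instances
curl http://[INSTANCE_EXTERNAL_IP]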
5. Set up the Load Balancer
Create Health Check
First we must create a basic health check to ensure that our services are up and running. We will create only a basic health check here; many more advanced customizations are available.
From Cloud Shell
gcloud compute health-checks create http http-basic-check \
  --port 80
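As an illustration of the more advanced customizations mentioned above, a health check with explicit timing and thresholds might look like the sketch below. The name and flag values here are assumptions for illustration only and are not used later in this lab.
# Hypothetical example only: a health check with tuned interval, timeout, and thresholds
gcloud compute health-checks create http http-custom-check \
  --port 80 \
  --request-path=/ \
  --check-interval=10s \
  --timeout=5s \
  --healthy-threshold=2 \
  --unhealthy-threshold=3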
Reserve External IP Address
For this step you will need to reserve a globally available static IP address that will later be attached to the Load Balancer.
From Cloud Shell
gcloud compute addresses create lb-ipv4-2 \
  --ip-version=IPV4 \
  --global
Make sure to note the IP Address that was reserved.
gcloud compute addresses describe lb-ipv4-2 \
  --format="get(address)" \
  --global
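A convenient optional way to keep the address handy for later steps is to capture it in a shell variable in Cloud Shell; LB_IP is simply a name chosen for this lab.
From Cloud Shell
# Store the reserved address in a variable for later use
LB_IP=$(gcloud compute addresses describe lb-ipv4-2 \
  --format="get(address)" \
  --global)
echo $LB_IP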
Create Backend Services
Now we must create a backend service for each of the managed instance groups we created earlier: one each for East, West, and Central.
Creating a backend service for the East managed instance group.
From Cloud Shell
gcloud beta compute backend-services create east-backend-service \
  --load-balancing-scheme=EXTERNAL_MANAGED \
  --protocol=HTTP \
  --port-name=http \
  --health-checks=http-basic-check \
  --global
Creating a backend service for the West managed instance group.
From Cloud Shell
gcloud beta compute backend-services create west-backend-service \
  --load-balancing-scheme=EXTERNAL_MANAGED \
  --protocol=HTTP \
  --port-name=http \
  --health-checks=http-basic-check \
  --global
Creating a backend service for the Central managed instance group.
From Cloud Shell
gcloud beta compute backend-services create central-backend-service \
  --load-balancing-scheme=EXTERNAL_MANAGED \
  --protocol=HTTP \
  --port-name=http \
  --health-checks=http-basic-check \
  --global
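Optionally, you can confirm that all three backend services exist before attaching backends to them:
From Cloud Shell
# List the backend services created so far
gcloud compute backend-services list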
Add MIGs to Backend Services
Now that we have created a backend service for each application cluster, we must add the Managed Instance Groups we created earlier to the appropriate backend service.
Add East MIG to backend service.
From Cloud Shell
gcloud beta compute backend-services add-backend east-backend-service \
  --balancing-mode='UTILIZATION' \
  --instance-group=us-east1-mig \
  --instance-group-zone=us-east1-b \
  --global
Add West MIG to backend service.
From Cloud Shell
gcloud beta compute backend-services add-backend west-backend-service \
  --balancing-mode='UTILIZATION' \
  --instance-group=us-west1-mig \
  --instance-group-zone=us-west1-a \
  --global
Add Central MIG to backend service.
From Cloud Shell
gcloud beta compute backend-services add-backend central-backend-service \
  --balancing-mode='UTILIZATION' \
  --instance-group=us-central1-mig \
  --instance-group-zone=us-central1-a \
  --global
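If you want to confirm that each MIG was attached to the right backend service, a describe call shows the configured backends (optional; shown here for east-backend-service only):
From Cloud Shell
# The group: lines should reference the us-east1-mig instance group
gcloud beta compute backend-services describe east-backend-service \
  --global | grep group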
Create URL Map
The URL map is where the advanced traffic management features for this lab live. We must create a .yaml file that contains the configuration. Within the .yaml file we have created a prefix match on /roundrobbin, so only traffic matching /roundrobbin will be affected by these configurations. We have specified that 50% of traffic should go to the east-backend-service and 50% should go to the west-backend-service. We have also added a response header test: value, which will be present on all responses. Finally, we have specified that all traffic should be mirrored to the central-backend-service; the traffic is duplicated and sent there for testing purposes only.
Save the example as a .yaml file on your machine.
defaultService: https://www.googleapis.com/compute/v1/projects/[project_id]/global/backendServices/east-backend-service
kind: compute#urlMap
name: web-map-http
hostRules:
- hosts:
  - '*'
  pathMatcher: matcher1
pathMatchers:
- defaultService: https://www.googleapis.com/compute/v1/projects/[project_id]/global/backendServices/east-backend-service
  name: matcher1
  routeRules:
  - matchRules:
    - prefixMatch: /roundrobbin
    priority: 2
    headerAction:
      responseHeadersToAdd:
      - headerName: test
        headerValue: value
        replace: True
    routeAction:
      weightedBackendServices:
      - backendService: https://www.googleapis.com/compute/v1/projects/[project_id]/global/backendServices/east-backend-service
        weight: 50
      - backendService: https://www.googleapis.com/compute/v1/projects/[project_id]/global/backendServices/west-backend-service
        weight: 50
      retryPolicy:
        retryConditions: ['502', '504']
        numRetries: 3
        perTryTimeout:
          seconds: 1
          nanos: 50
      requestMirrorPolicy:
        backendService: https://www.googleapis.com/compute/v1/projects/[project_id]/global/backendServices/central-backend-service
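If you choose to save the file in Cloud Shell instead (for example as lbconfig.yaml, a filename assumed here for illustration), you can substitute your real project ID for the [project_id] placeholders with a quick sed, using the PROJECT_ID variable you set earlier:
From Cloud Shell
# Replace every [project_id] placeholder in the file with your project ID
sed -i "s/\[project_id\]/$PROJECT_ID/g" lbconfig.yaml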
Create the URL map by importing the .yaml file from your machine. Note that the source path will differ depending on where you saved the file.
From Cloud Shell
gcloud beta compute url-maps import web-map-http \
  --source /Users/[USERNAME]/Documents/Codelab/lbconfig.yaml \
  --global
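You can optionally confirm that the route rules were imported as expected by describing the URL map; the output should include the weightedBackendServices, retryPolicy, and requestMirrorPolicy sections from the .yaml file.
From Cloud Shell
gcloud beta compute url-maps describe web-map-http --global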
Create HTTP Frontend
The final step in creating the load balancer is to create the frontend. This will map the IP address you reserved earlier to the load balancer URL map you created.
From Cloud Shell
gcloud beta compute target-http-proxies create http-lb-proxy-adv \
  --url-map=web-map-http
Next you need to create a global forwarding rule which will map the IP address reserved earlier to the HTTP proxy.
From Cloud Shell
gcloud beta compute forwarding-rules create http-content-rule \
  --load-balancing-scheme EXTERNAL_MANAGED \
  --address=lb-ipv4-2 \
  --global \
  --target-http-proxy=http-lb-proxy-adv \
  --ports=80
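As a quick check, you can confirm that the forwarding rule picked up the reserved address; this should print the same IP you noted earlier.
From Cloud Shell
# Print the IP address attached to the forwarding rule
gcloud compute forwarding-rules describe http-content-rule \
  --global \
  --format="get(IPAddress)"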
6. Verify that the Advanced Traffic Features are Working
To verify that the traffic-splitting feature you implemented is working, you need to generate some load. To do this, you will create a new VM that simulates load.
Create Allow SSH Firewall Rule
Before you can SSH into the VM that will generate the traffic, you need to create a firewall rule that allows SSH traffic to the VM.
From Cloud Shell
gcloud compute firewall-rules create fw-allow-ssh \
  --network=httplbs \
  --action=allow \
  --direction=ingress \
  --target-tags=allow-ssh \
  --rules=tcp:22
Output
NAME          NETWORK  DIRECTION  PRIORITY  ALLOW   DENY  DISABLED
fw-allow-ssh  httplbs  INGRESS    1000      tcp:22        False
Create Siege-vm
Now you will create the siege-vm that you will use to generate load.
From Cloud Shell
gcloud compute instances create siege-vm \
  --network=httplbs \
  --zone=us-east4-c \
  --machine-type=e2-medium \
  --tags=allow-ssh,http-server \
  --metadata=startup-script='sudo apt-get -y install siege'
Output
NAME      ZONE        MACHINE_TYPE  INTERNAL_IP  EXTERNAL_IP    STATUS
siege-vm  us-east4-c  e2-medium     10.150.0.3   34.85.218.119  RUNNING
Next, SSH into the VM you created. Once it is running, click SSH next to siege-vm to launch a terminal and connect.
Once connected, run the following command to generate load, substituting the IP address that you reserved earlier for the external HTTP load balancer.
From the siege-vm SSH terminal
siege -c 250 http://[LB_IP_ADDRESS]/roundrobbin
Output
New configuration template added to /home/cloudcurriculumdeveloper/.siege
Run siege -C to view the current settings in that file
[alert] Zip encoding disabled; siege requires zlib support to enable it: No such file or directory
** SIEGE 4.0.2
** Preparing 250 concurrent users for battle.
The server is now under siege...
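If siege is not yet installed when you connect (the startup script may still be running), or you simply want a lighter-weight alternative, a plain curl loop run from the siege-vm generates enough traffic to observe the split; [LB_IP_ADDRESS] is the address you reserved earlier.
# Send a steady stream of requests to the /roundrobbin path
while true; do
  curl -s -o /dev/null -w "%{http_code}\n" http://[LB_IP_ADDRESS]/roundrobbin
  sleep 0.2
done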
Check Load Distribution
Now that the siege is running, it is time to check that traffic is being distributed equally to the east and west managed instance groups. You can also check that traffic mirroring is working and that traffic is being sent to the central managed instance group.
In the Cloud Console, on the Navigation menu, click Network Services > Load balancing. Click Backends. Click the advanced menu as seen in the screenshot below.
Navigate to the Backend services tab and select the east-backend-service.
You will be able to see the traffic splitting to this MIG in real time. Take note of the rate; you will compare it to the west-backend-service in a moment.
Similarly, navigate to the west-backend-service. You should see traffic flowing to this service as well, at a rate similar to what you saw for the east-backend-service, since you configured a 50/50 round-robin split of traffic.
To check that the traffic mirroring policy you created is working, check the utilization of the central-backend-service managed instance group. To do so, navigate to Compute Engine > Instance groups, select us-central1-mig, and then open the Monitoring tab.
You will see the charts populate, demonstrating that traffic has been mirrored to this managed instance group.
Stop the Siege
Now that you have demonstrated that the advanced traffic splitting is working, it is time to stop the siege. To do so, return to the siege-vm SSH terminal and press CTRL+C to stop the siege.
Validate Response Header being Sent
Before you clean up, you can quickly validate that the appropriate response header is being sent by the HTTP load balancer. You configured it to add a header named test with the value value. Running the following curl command from Cloud Shell, with the load balancer IP substituted, will show the expected response.
From Cloud Shell
curl -svo /dev/null http://[LB_IP_ADDRESS]/roundrobbin
Output
*   Trying [LB_IP_ADDRESS]...
* TCP_NODELAY set
* Connected to [LB_IP_ADDRESS] ([LB_IP_ADDRESS]) port 80 (#0)
> GET /roundrobbin HTTP/1.1
> Host: [LB_IP_ADDRESS]
> User-Agent: curl/7.64.1
> Accept: */*
>
< HTTP/1.1 404 Not Found
< date: Wed, 10 Nov 2021 17:05:27 GMT
< server: envoy
< Content-Length: 273
< content-type: text/html; charset=iso-8859-1
< via: 1.1 google
< test: value
<
{ [273 bytes data]
* Connection #0 to host [LB_IP_ADDRESS] left intact
* Closing connection 0
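If you only want the header and not the full trace, a one-liner like the following (run from Cloud Shell, with the same IP substituted) filters the response headers down to the test header:
# Dump response headers and keep only the test header
curl -s -D - -o /dev/null http://[LB_IP_ADDRESS]/roundrobbin | grep -i "^test:"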
7. Lab Clean up
Now that we are finished with the lab environment, it is time to tear it down. Please run the following commands to delete the test environment.
From Cloud Shell
gcloud compute instances delete siege-vm --zone=us-east4-c
gcloud beta compute forwarding-rules delete http-content-rule --global
gcloud beta compute target-http-proxies delete http-lb-proxy-adv
gcloud beta compute url-maps delete web-map-http
gcloud beta compute backend-services delete east-backend-service --global
gcloud beta compute backend-services delete west-backend-service --global
gcloud beta compute backend-services delete central-backend-service --global
gcloud compute addresses delete lb-ipv4-2 --global
gcloud compute health-checks delete http-basic-check
gcloud compute instance-groups managed delete us-east1-mig --zone us-east1-b
gcloud compute instance-groups managed delete us-west1-mig --zone us-west1-a
gcloud compute instance-groups managed delete us-central1-mig --zone us-central1-a
gcloud compute instance-templates delete us-east1-template
gcloud compute instance-templates delete us-west1-template
gcloud compute instance-templates delete us-central1-template
gcloud compute firewall-rules delete httplb-allow-http-rule
gcloud compute firewall-rules delete fw-allow-ssh
gcloud compute networks delete httplbs
8. Congratulations!
You have completed the External HTTPS LB with Advanced Traffic Management (Envoy) Codelab!
What we've covered
- How to set up a Managed Instance Group and the associated VPC and firewall rules
- How to use advanced traffic management features of the new load balancer
- How to validate that the advanced traffic management features are working as intended.
Next steps
- Try some of the other advanced routing features, such as URL rewriting, adding CORS headers, and many more.