Transitioning a network load balancer from target pools to regional backend services

1. Introduction

This guide provides instructions for transitioning an existing network load balancer from a target pool backend to a regional backend service.

What you'll learn

  • Understand the benefits of regional backend services
  • Create a network load balancer with target pools
  • Perform target pool validation
  • Create a regional backend service using unmanaged instance groups
  • Perform target pool to backend service migration
  • Perform backend services validation

What you'll need

  • Experience with load balancers

2. Regional backend services for Network Load Balancing overview

With Network Load Balancing, Google Cloud customers have a powerful tool for distributing external traffic among virtual machines in a Google Cloud region. In order to make it easier for our customers to manage incoming traffic and to control how the load balancer behaves, we recently added support for backend services to Network Load Balancing. This provides improved scale, velocity, performance and resiliency to our customers in their deployment—all in an easy to manage way.

We now support backend services with Network Load Balancing—a significant enhancement over the prior approach, target pools. A backend service defines how our load balancers distribute incoming traffic to attached backends and provides fine-grained control for how the load balancer behaves.

3. Regional backend services benefits

Choosing a regional backend service for your load balancer brings a number of advantages to your environment.


Out of the gate, regional backend services provide:

  • High-fidelity health checking with unified health checking - With regional backend services, you can take full advantage of load balancing health check features, freeing yourself from the constraints of legacy HTTP health checks. TCP health checks with support for custom request and response strings, as well as HTTPS health checks, were a common request from Network Load Balancing customers, often for compliance reasons.
  • Better resiliency with failover groups - With failover groups, you can designate one instance group as primary and another as secondary, and fail over traffic when the health of the instances in the active group drops below a certain threshold. For more control over the failover mechanism, you can use an agent such as keepalived or pacemaker and expose a healthy or failing health check based on state changes of the backend instance.
  • Scalability and high availability with managed instance groups - Regional backend services support managed instance groups as backends. You can specify a template for your backend virtual machine instances and leverage autoscaling based on CPU utilization or other monitoring metrics.

In addition to the above, you can take advantage of connection draining for connection-oriented protocols (TCP) and faster programming times for large deployments.
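This codelab uses unmanaged instance groups, but as an illustration of the managed instance group benefit described above, an autoscaled backend might be set up as follows. The names www-template and www-mig are hypothetical and not part of this codelab's steps:

```shell
# Create an instance template describing the backend VMs (hypothetical names)
gcloud compute instance-templates create www-template \
    --machine-type e2-small \
    --image-family debian-11 \
    --image-project debian-cloud

# Create a managed instance group from the template
gcloud compute instance-groups managed create www-mig \
    --template www-template --size 2 --zone us-central1-a

# Autoscale between 2 and 5 instances based on CPU utilization
gcloud compute instance-groups managed set-autoscaling www-mig \
    --zone us-central1-a --min-num-replicas 2 --max-num-replicas 5 \
    --target-cpu-utilization 0.8
```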

Codelab network topology

This guide provides instructions for transitioning an existing network load balancer from a target pool backend to a regional backend service.

Moving to a regional backend service allows you to take advantage of features such as non-legacy health checks (for TCP, SSL, HTTP, HTTPS, and HTTP/2), managed instance groups, connection draining, and failover policy.

This guide walks you through transitioning the following sample target pool-based network load balancer to use a regional backend service instead.


Before: Network Load Balancing with a target pool

Your resulting backend service-based network load balancer deployment will look like this.


After: Network Load Balancing with a regional backend service

This example assumes you have a traditional target pool-based network load balancer with two instances in zone us-central1-a and two instances in zone us-central1-c.

The high-level steps required for such a transition are:

  1. Group your target pool instances into instance groups. Backend services only work with managed or unmanaged instance groups. Note that while there is no limit on the number of instances that can be placed into a single target pool, instance groups do have a maximum size. If your target pool has more than this maximum number of instances, you'll need to split its backends across multiple instance groups. If your existing deployment includes a backup target pool, create a separate instance group for those instances. This instance group will be configured as a failover group.
  2. Create a regional backend service. If your deployment includes a backup target pool, you will need to specify a failover ratio while creating the backend service. This should match the failover ratio previously configured for the target pool deployment.
  3. Add instance groups (created previously) to the backend service. If your deployment includes a backup target pool, mark the corresponding failover instance group with the --failover flag when adding it to the backend service.
  4. Configure a forwarding rule that points to the new backend service. You have two options here:
  • (Recommended) Update the existing forwarding rule to point to the backend service. OR
  • Create a new forwarding rule that points to the backend service. This requires you to create a new IP address for the load balancer's frontend. Then modify your DNS settings to seamlessly transition from the old target pool-based load balancer's IP address to the new IP address.
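For reference, the two forwarding-rule options can be sketched with gcloud as follows. The names www-rule and my-backend-service match the resources used later in this codelab; network-lb-ip-2 and www-rule-new are hypothetical names for the second option:

```shell
# Option 1 (recommended): repoint the existing forwarding rule at the backend service
gcloud compute forwarding-rules set-target www-rule \
    --region us-central1 --backend-service my-backend-service

# Option 2: reserve a new IP and create a new forwarding rule, then update DNS
gcloud compute addresses create network-lb-ip-2 --region us-central1
gcloud compute forwarding-rules create www-rule-new \
    --region us-central1 --ports 80 --address network-lb-ip-2 \
    --load-balancing-scheme external --backend-service my-backend-service
```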

Self-paced environment setup

  1. Sign in to Cloud Console and create a new project or reuse an existing one. If you don't already have a Gmail or Google Workspace account, you must create one.


Remember the project ID, a unique name across all Google Cloud projects (the name above has already been taken and will not work for you, sorry!). It will be referred to later in this codelab as PROJECT_ID.

  2. Next, you'll need to enable billing in Cloud Console in order to use Google Cloud resources.

Running through this codelab shouldn't cost much, if anything at all. Be sure to follow any instructions in the "Cleaning up" section, which advises how to shut down resources so you don't incur billing beyond this tutorial. New users of Google Cloud are eligible for the $300 USD Free Trial program.

Start Cloud Shell

While Google Cloud can be operated remotely from your laptop, in this codelab you will be using Google Cloud Shell, a command line environment running in the Cloud.

From the GCP Console, click the Cloud Shell icon on the top right toolbar.


It should only take a few moments to provision and connect to the environment.


This virtual machine is loaded with all the development tools you'll need. It offers a persistent 5GB home directory, and runs on Google Cloud, greatly enhancing network performance and authentication. All of your work in this lab can be done with just a browser.

Log in to Cloud Shell and set your project ID

gcloud config list project
gcloud config set project [YOUR-PROJECT-ID]

Set a shell variable for your project ID:

projectid=YOUR-PROJECT-ID
echo $projectid

4. Create VPC network

VPC Network

From Cloud Shell

gcloud compute networks create network-lb --subnet-mode custom

Create Subnet

From Cloud Shell

gcloud compute networks subnets create network-lb-subnet \
        --network network-lb --range 10.0.0.0/24 --region us-central1

Create Firewall Rules

From Cloud Shell

gcloud compute --project=$projectid firewall-rules create www-firewall-network-lb \
    --direction=INGRESS --priority=1000 --network=network-lb \
    --action=ALLOW --rules=tcp:80 --source-ranges=0.0.0.0/0 \
    --target-tags=network-lb-tag

Create unmanaged instances

Create two instances in each of two zones, us-central1-a and us-central1-c.

From Cloud Shell create instance 1

gcloud compute instances create www1 \
--subnet network-lb-subnet \
--image-family debian-9 \
--image-project debian-cloud \
--zone us-central1-a \
--tags network-lb-tag \
--metadata startup-script="#! /bin/bash
sudo apt-get update
sudo apt-get install apache2 -y
sudo service apache2 restart
echo '<!doctype html><html><body><h1>www1</h1></body></html>' | tee /var/www/html/index.html"

From Cloud Shell create instance 2

gcloud compute instances create www2 \
--subnet network-lb-subnet \
--image-family debian-9 \
--image-project debian-cloud \
--zone us-central1-a \
--tags network-lb-tag \
--metadata startup-script="#! /bin/bash
sudo apt-get update
sudo apt-get install apache2 -y
sudo service apache2 restart 
echo '<!doctype html><html><body><h1>www2</h1></body></html>' | tee /var/www/html/index.html"

From Cloud Shell create instance 3

gcloud compute instances create www3 \
--subnet network-lb-subnet \
--image-family debian-9 \
--image-project debian-cloud \
--zone us-central1-c \
--tags network-lb-tag \
--metadata startup-script="#! /bin/bash
sudo apt-get update 
sudo apt-get install apache2 -y 
sudo service apache2 restart 
echo '<!doctype html><html><body><h1>www3</h1></body></html>' | tee /var/www/html/index.html"

From Cloud Shell create instance 4

gcloud compute instances create www4 \
--subnet network-lb-subnet \
--image-family debian-9 \
--image-project debian-cloud \
--zone us-central1-c \
--tags network-lb-tag \
--metadata startup-script="#! /bin/bash
sudo apt-get update 
sudo apt-get install apache2 -y 
sudo service apache2 restart
echo '<!doctype html><html><body><h1>www4</h1></body></html>' | tee /var/www/html/index.html"
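Optionally, verify that all four instances are up before proceeding; a quick check from Cloud Shell:

```shell
# List the www* instances with zone, status, and internal IP
gcloud compute instances list --filter="name~^www" \
    --format="table(name,zone,status,networkInterfaces[0].networkIP)"
```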


Create a static external IP address for your load balancer

From Cloud Shell

gcloud compute addresses create network-lb-ip-1 \
    --region us-central1

Add a legacy HTTP health check resource

From Cloud Shell

gcloud compute http-health-checks create basic-check

5. Create forwarding rule and target pool

Create a target pool

gcloud compute target-pools create www-pool \
            --region us-central1 --http-health-check basic-check

Add your instances to the target pool, us-central1-a

gcloud compute target-pools add-instances www-pool \
--instances www1,www2 \
--instances-zone us-central1-a

Add your instances to the target pool, us-central1-c

gcloud compute target-pools add-instances www-pool \
--instances www3,www4 \
--instances-zone us-central1-c
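Optionally, once the legacy health check has had a minute or two to probe the instances, you can confirm their health from Cloud Shell:

```shell
# Show per-instance health for the target pool's attached HTTP health check
gcloud compute target-pools get-health www-pool --region us-central1
```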

Add a forwarding rule

gcloud compute forwarding-rules create www-rule \
--region us-central1 \
--ports 80 \
--address network-lb-ip-1 \
--target-pool www-pool

Validate target pool functionality

Identify the frontend IP address by selecting Load Balancers → Frontends (www-rule).

Use the curl command from your workstation terminal to access the external IP address and observe load balancing across the four target instances. Close the terminal once validated.

while true; do curl -m1 IP_ADDRESS; done
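If you prefer not to copy the address from the console, the frontend IP can also be fetched with gcloud and used directly in the loop:

```shell
# Look up the forwarding rule's external IP and curl it in a loop
IP_ADDRESS=$(gcloud compute forwarding-rules describe www-rule \
    --region us-central1 --format='value(IPAddress)')
while true; do curl -m1 $IP_ADDRESS; done
```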

6. Transition the network load balancer from target pool to backend service

Create a unified health check for your backend service

gcloud compute health-checks create tcp my-tcp-health-check \
    --region us-central1 --port 80

Create an instance group in us-central1-a from the target pool's existing instances

gcloud compute --project=$projectid instance-groups unmanaged create www-instance-group-central1a --zone=us-central1-a

gcloud compute --project=$projectid instance-groups unmanaged add-instances www-instance-group-central1a --zone=us-central1-a --instances=www1,www2

Create an instance group in us-central1-c from the target pool's existing instances

gcloud compute --project=$projectid instance-groups unmanaged create www-instance-group-central1c --zone=us-central1-c

gcloud compute --project=$projectid instance-groups unmanaged add-instances www-instance-group-central1c --zone=us-central1-c --instances=www3,www4

Create a backend service and associate it with the newly created health check

gcloud compute backend-services create my-backend-service --region us-central1 --health-checks my-tcp-health-check --health-checks-region us-central1 --load-balancing-scheme external

Configure your backend service and add the instance groups

gcloud compute backend-services add-backend my-backend-service --instance-group www-instance-group-central1a --instance-group-zone us-central1-a --region us-central1

gcloud compute backend-services add-backend my-backend-service --instance-group www-instance-group-central1c --instance-group-zone us-central1-c --region us-central1
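Optionally, confirm from Cloud Shell that all four backends pass the new TCP health check before switching traffic over:

```shell
# Show per-instance health for each backend instance group
gcloud compute backend-services get-health my-backend-service \
    --region us-central1
```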

Update the existing forwarding rule to support backend services

Note the forwarding rule name 'www-rule' and its associated IP address by performing the following:

Select Load Balancers → Frontends

Also, note the four instances in the target pool:

Select Load Balancers → Select 'www-pool'

Route traffic to backend services by updating the existing forwarding rule

gcloud compute forwarding-rules set-target www-rule \
    --region us-central1 --backend-service my-backend-service

Verify that the load balancer 'www-pool' is no longer configured with the frontend 'www-rule':

Select Load Balancers → www-pool


Validate that the frontend forwarding rule is now associated with the backend service 'my-backend-service':

Select Load Balancers → Frontends

Note that the rule name 'www-rule' and its IP address are retained, and the load balancer 'my-backend-service' is now in use.
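The same check can be made from Cloud Shell by describing the forwarding rule; its backendService field should now reference my-backend-service:

```shell
# Confirm the forwarding rule's IP and its new backend service target
gcloud compute forwarding-rules describe www-rule \
    --region us-central1 --format='value(IPAddress,backendService)'
```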

Use the curl command from your workstation terminal to access the external IP address and observe load balancing across the newly associated backend service. Close the terminal once validated.

while true; do curl -m1 IP_ADDRESS; done

7. Cleaning up

gcloud compute forwarding-rules delete www-rule --region=us-central1 --quiet
 
gcloud compute backend-services delete my-backend-service --region us-central1 --quiet
 
gcloud compute target-pools delete www-pool --region us-central1 --quiet
 
gcloud compute addresses delete network-lb-ip-1 --region us-central1 --quiet

gcloud compute firewall-rules delete www-firewall-network-lb --quiet
 
gcloud compute instances delete www4 --zone us-central1-c --quiet
 
gcloud compute instances delete www3 --zone us-central1-c --quiet
 
gcloud compute instances delete www2 --zone us-central1-a --quiet

gcloud compute instances delete www1 --zone us-central1-a --quiet
 
gcloud compute instance-groups unmanaged delete www-instance-group-central1a --zone us-central1-a --quiet

gcloud compute instance-groups unmanaged delete www-instance-group-central1c --zone us-central1-c --quiet

gcloud compute health-checks delete my-tcp-health-check --region us-central1 --quiet

gcloud compute http-health-checks delete basic-check --quiet

gcloud compute networks subnets delete network-lb-subnet --region us-central1 --quiet

gcloud compute networks delete network-lb --quiet

8. Congratulations!

Congratulations on completing the codelab.

What we've covered

  • Understand the benefits of regional backend services
  • Create a network load balancer with target pools
  • Perform target pool validation
  • Create a regional backend service using unmanaged instance groups
  • Perform target pool to backend service migration
  • Perform backend services validation