Using an External HTTP(S) Hybrid Load Balancer to Reach a Network Endpoint Group

1. Introduction

A hybrid strategy is a pragmatic way to adapt to changing market demands and incrementally modernize your applications. Hybrid support for Google Cloud external and internal HTTP(S) load balancers extends Cloud Load Balancing to backends residing on-premises and in other clouds, and is a key enabler for your hybrid strategy. This might be a temporary arrangement to enable migration to a modern cloud-based solution, or a permanent fixture of your organization's IT infrastructure.

In this lab, you will learn how to create a Network Endpoint Group (NEG) using two virtual machines, reachable through an external HTTP(S) global load balancer. Although the NEG in this lab is within GCP, the same procedure is used to communicate with public or on-premises resources that have IP reachability.

What you'll learn

  • Create a custom VPC
  • Create two virtual machines (VMs) used as a Network Endpoint Group (NEG)
  • Create a Hybrid Load Balancer, backend service and associated health-checks
  • Create a firewall rule that allows access to the Load Balancer
  • Create a Cloud Router and Cloud NAT to allow package updates from the internet
  • Validate Network Endpoint Group reachability

What you'll need

  • Knowledge of load balancers

Self-paced environment setup

  1. Sign in to Cloud Console and create a new project or reuse an existing one. If you don't already have a Gmail or Google Workspace account, you must create one.

  • The Project Name is your personal identifier for this project. As long as you follow its naming conventions, you can use anything you want and can update it at any time.
  • The Project ID must be unique across all Google Cloud projects and is immutable (cannot be changed once set). The Cloud console auto-generates a unique string; usually you don't care what it is. In most codelabs, you'll need to reference the Project ID (and it is typically identified as PROJECT_ID), so if you don't like it, generate another random one, or, you can try your own and see if it's available. Then it's "frozen" once the project is created.
  2. Next, you'll need to enable billing in the Cloud Console in order to use Google Cloud resources.

Running through this codelab shouldn't cost much, if anything at all. Be sure to follow any instructions in the "Cleaning up" section, which advises you how to shut down resources so you don't incur billing beyond this tutorial. New users of Google Cloud are eligible for the $300 USD Free Trial program.

Start Cloud Shell

While Google Cloud can be operated remotely from your laptop, in this codelab you will be using Google Cloud Shell, a command line environment running in the Cloud.

From the GCP Console click the Cloud Shell icon on the top right toolbar:

It should only take a few moments to provision and connect to the environment. When it is finished, you should see something like this:

This virtual machine is loaded with all the development tools you'll need. It offers a persistent 5 GB home directory and runs on Google Cloud, greatly enhancing network performance and authentication. All of your work in this lab can be done with just a browser.

2. Before you begin

Inside Cloud Shell, make sure that your project ID is set

gcloud config list project
gcloud config set project [YOUR-PROJECT-ID]

Set a variable with your project ID:
projectid=YOUR-PROJECT-ID
echo $projectid
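
As a quick sanity check, the value should follow the project ID format: 6–30 characters of lowercase letters, digits, and hyphens, starting with a letter and not ending with a hyphen. The regex below is an illustrative approximation (not an official validator), and the project ID shown is a placeholder:

```shell
# Illustrative check only: GCP project IDs are 6-30 characters, start
# with a lowercase letter, may contain lowercase letters, digits, and
# hyphens, and cannot end with a hyphen.
projectid="my-demo-project"   # placeholder value, not a real project
if [[ "$projectid" =~ ^[a-z][a-z0-9-]{4,28}[a-z0-9]$ ]]; then
  echo "project id format looks valid"
else
  echo "project id format looks invalid"
fi
```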

3. Create a new custom mode VPC network

In this task, you will create a Virtual Private Cloud (VPC), the foundation of the network.

VPC Network

From Cloud Shell

gcloud compute networks create hybrid-network-lb --subnet-mode custom

Create Subnet

From Cloud Shell

gcloud compute networks subnets create network-endpoint-group-subnet --network hybrid-network-lb --range 192.168.10.0/24 --region us-west1
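
As a sizing note, the /24 range gives this subnet 256 addresses, and Google Cloud reserves four addresses in every primary subnet range (network, default gateway, second-to-last, and broadcast). A quick back-of-the-envelope:

```shell
# 192.168.10.0/24: 2^(32-24) = 256 total addresses; GCP reserves 4
# per primary subnet range, leaving 252 assignable to instances.
prefix=24
total=$(( 2 ** (32 - prefix) ))
usable=$(( total - 4 ))
echo "$usable usable addresses"
```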

Create Cloud NAT instance

Although not a requirement for hybrid networking, the compute instances require internet connectivity to download applications and updates.

In this task, you will create a Cloud Router and Cloud NAT gateway that allow internet connectivity for the VM instances.

Create Cloud Router

From Cloud Shell

gcloud compute routers create crnat --network hybrid-network-lb --region us-west1

Create Cloud NAT

From Cloud Shell

gcloud compute routers nats create cloudnat --router=crnat --auto-allocate-nat-external-ips --nat-all-subnet-ip-ranges --enable-logging --region us-west1

4. Create two VM instances

In this task, you will create two VM instances running Apache. Later in the lab, these VM instances will become the endpoints of a Network Endpoint Group (NEG).

From Cloud Shell create the first on-prem instance, on-prem-neg-1

gcloud compute instances create on-prem-neg-1 \
    --zone=us-west1-a \
    --tags=allow-health-check \
    --image-family=debian-9 \
    --image-project=debian-cloud \
    --subnet=network-endpoint-group-subnet --no-address \
    --metadata=startup-script='#! /bin/bash
apt-get update
apt-get install apache2 -y
a2ensite default-ssl
a2enmod ssl
vm_hostname="$(curl -H "Metadata-Flavor:Google" \
http://169.254.169.254/computeMetadata/v1/instance/name)"
filter="{print \$NF}"
vm_zone="$(curl -H "Metadata-Flavor:Google" \
http://169.254.169.254/computeMetadata/v1/instance/zone \
| awk -F/ "${filter}")"
echo "Page on $vm_hostname in $vm_zone" | \
tee /var/www/html/index.html
systemctl restart apache2'

From Cloud Shell create the second on-prem instance, on-prem-neg-2

gcloud compute instances create on-prem-neg-2 \
    --zone=us-west1-a \
    --tags=allow-health-check \
    --image-family=debian-9 \
    --image-project=debian-cloud \
    --subnet=network-endpoint-group-subnet --no-address \
    --metadata=startup-script='#! /bin/bash
apt-get update
apt-get install apache2 -y
a2ensite default-ssl
a2enmod ssl
vm_hostname="$(curl -H "Metadata-Flavor:Google" \
http://169.254.169.254/computeMetadata/v1/instance/name)"
filter="{print \$NF}"
vm_zone="$(curl -H "Metadata-Flavor:Google" \
http://169.254.169.254/computeMetadata/v1/instance/zone \
| awk -F/ "${filter}")"
echo "Page on $vm_hostname in $vm_zone" | \
tee /var/www/html/index.html
systemctl restart apache2'
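
The startup script above derives the VM's zone from the metadata server's full path (e.g. `projects/PROJECT_NUMBER/zones/us-west1-a`) by keeping only the last `/`-separated field; the `\$NF` is escaped so it is not expanded when the script is assembled. The awk filter can be exercised locally with a sample path (the project number below is made up):

```shell
# The metadata server returns the zone as a path; awk -F/ '{print $NF}'
# keeps only the final component. 123456 is a made-up project number.
filter='{print $NF}'
echo "projects/123456/zones/us-west1-a" | awk -F/ "${filter}"
```

This prints `us-west1-a`, which is what ends up on the instance's index page.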

5. Create NEGs containing your on-premises endpoints

First, create two NEGs, named on-prem-neg-1 and on-prem-neg-2. You will also specify that, for routing and load-balancing purposes, the load balancer should consider these endpoints to be in the us-west1-a GCP zone. We recommend that the configured zone correspond to a zone in the region of the Interconnect attachment or VPN gateway, since proximity-based measurements are used for load balancing.

From Cloud Shell create on-prem-neg-1

gcloud compute network-endpoint-groups create on-prem-neg-1 \
    --network-endpoint-type NON_GCP_PRIVATE_IP_PORT \
    --zone "us-west1-a" \
    --network hybrid-network-lb

From Cloud Shell create on-prem-neg-2

gcloud compute network-endpoint-groups create on-prem-neg-2 \
    --network-endpoint-type NON_GCP_PRIVATE_IP_PORT \
    --zone "us-west1-a" \
    --network hybrid-network-lb

In this codelab, the network endpoint is a GCE instance running Apache in GCP. Alternatively, you can specify an on-premises or internet endpoint as your network endpoint.

From Cloud Shell identify the GCE IP addresses

gcloud compute instances list | grep -i on-prem

Associate each network endpoint group with the corresponding GCE instance IP address identified in the previous step, for both on-prem-neg-1 & on-prem-neg-2.

From Cloud Shell associate on-prem-neg-1, update x.x.x.x with your identified IP

gcloud compute network-endpoint-groups update on-prem-neg-1 \
    --zone="us-west1-a" \
    --add-endpoint="ip=x.x.x.x,port=80"

From Cloud Shell associate on-prem-neg-2, update x.x.x.x with your identified IP

gcloud compute network-endpoint-groups update on-prem-neg-2 \
    --zone="us-west1-a" \
    --add-endpoint="ip=x.x.x.x,port=80"

6. Create the http health-check, backend service & firewall

In this step, you will create a global backend service named on-prem-backend-service. This backend service defines how your data plane will send traffic to your NEG.

First, create a health check named on-prem-health-check to monitor the health of any endpoints belonging to this NEG (that is, your on-premises endpoint).

From Cloud Shell

gcloud compute health-checks create http on-prem-health-check

Create a backend service called on-prem-backend-service and associate it with the health check.

From Cloud Shell

gcloud compute backend-services create on-prem-backend-service \
    --global \
    --load-balancing-scheme=EXTERNAL \
    --health-checks on-prem-health-check

Health checks for the external HTTP(S) load balancer originate from the 130.211.0.0/22 and 35.191.0.0/16 ranges; therefore, a firewall rule is required to allow this traffic to reach your backends.

From Cloud Shell

gcloud compute firewall-rules create fw-allow-health-check \
    --network=hybrid-network-lb \
    --action=allow \
    --direction=ingress \
    --source-ranges=130.211.0.0/22,35.191.0.0/16 \
    --target-tags=allow-health-check \
    --rules=tcp:80
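
To see concretely what those two source ranges match, here is a small illustrative bash helper (not part of the lab) that checks whether a probe's source IP falls inside a CIDR range, the same kind of matching the fw-allow-health-check rule performs:

```shell
# Illustrative helper: does an IPv4 address fall inside a CIDR range?
# This mirrors the source-range match done by fw-allow-health-check.
ip_to_int() {
  local IFS=. a b c d
  read -r a b c d <<< "$1"
  echo $(( (a << 24) | (b << 16) | (c << 8) | d ))
}
in_cidr() {
  local ip=$1 cidr=$2
  local net=${cidr%/*} bits=${cidr#*/}
  local mask=$(( (0xFFFFFFFF << (32 - bits)) & 0xFFFFFFFF ))
  [ $(( $(ip_to_int "$ip") & mask )) -eq $(( $(ip_to_int "$net") & mask )) ]
}
in_cidr 35.191.0.1  35.191.0.0/16 && echo "35.191.0.1 would be allowed"
in_cidr 203.0.113.7 35.191.0.0/16 || echo "203.0.113.7 would be blocked"
```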

7. Associate the NEG and backend service

Add the on-prem-neg-1 NEG to this backend service

From Cloud Shell

gcloud compute backend-services add-backend on-prem-backend-service \
    --global \
    --network-endpoint-group on-prem-neg-1 \
    --network-endpoint-group-zone us-west1-a \
    --balancing-mode RATE \
    --max-rate-per-endpoint 5

Add the on-prem-neg-2 NEG to this backend service

From Cloud Shell

gcloud compute backend-services add-backend on-prem-backend-service \
    --global \
    --network-endpoint-group on-prem-neg-2 \
    --network-endpoint-group-zone us-west1-a \
    --balancing-mode RATE \
    --max-rate-per-endpoint 5
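
With --balancing-mode RATE, each backend advertises a target capacity of max-rate-per-endpoint multiplied by its endpoint count; this target guides traffic distribution rather than acting as a hard cap. A quick back-of-the-envelope for this lab's configuration:

```shell
# Each NEG holds one endpoint, and both backends were added with
# --max-rate-per-endpoint 5, so the backend service's total target
# rate across the two NEGs is:
negs=2
endpoints_per_neg=1
max_rate_per_endpoint=5
total_target_rps=$(( negs * endpoints_per_neg * max_rate_per_endpoint ))
echo "target capacity: ${total_target_rps} requests/second"
```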

Reserve a static IPv4 address used to access your network endpoints

From Cloud Shell

gcloud compute addresses create hybrid-lb-ip --project=$projectid --global

We are done with the CLI configuration. Let's finish the configuration from Cloud Console.

8. Create the external HTTP load balancer & associate the backend service

From the Cloud Console navigate to Load Balancing and select Create load balancer

Identify HTTP(S) load balancing and click "Start configuration"

Select "From Internet to my VMs", which allows public access to your VMs

Provide "xlb" as the name of the load balancer, select the previously created backend service "on-prem-backend-service", then click "ok"

Select Frontend configuration, set the name to "xlb-fe", and select the previously created static IPv4 address

Select "Review and finalize", confirm the settings, and select create

Backend health validation

From the Cloud Console ensure the backend for "xlb" is healthy (shown green)

9. Validate NEG is reachable from the internet

Recall that the external static IP address used while creating the load balancer is now the frontend IP of your network endpoints. Let's validate the IP address before executing our final test.

From Cloud Shell

gcloud compute forwarding-rules describe xlb-fe --global | grep -i IPAddress:

Output from Cloud Shell (your IP address will differ)

$ gcloud compute forwarding-rules describe xlb-fe --global | grep -i IPAddress:
IPAddress: 34.96.103.132

Using the global load balancer frontend IP address, you can access the network endpoint backends. Note that in this codelab the endpoints are GCE instances; in a true hybrid deployment they would be, for example, on-premises endpoints.

From your local workstation, launch a terminal and perform a curl against the load balancer frontend IP address. Observe the 200 OK and the page details consisting of the NEG instance name and zone.

myworkstation$ curl -v 34.96.103.132
* Trying 34.96.103.132...
* TCP_NODELAY set
* Connected to 34.96.103.132 (34.96.103.132) port 80 (#0)
> GET / HTTP/1.1
> Host: 34.96.103.132
> User-Agent: curl/7.64.1
> Accept: */*
>
< HTTP/1.1 200 OK
< Date: Tue, 10 Aug 2021 01:21:54 GMT
< Server: Apache/2.4.25 (Debian)
< Last-Modified: Tue, 10 Aug 2021 00:35:41 GMT
< ETag: "24-5c929ae7384f4"
< Accept-Ranges: bytes
< Content-Length: 36
< Content-Type: text/html
< Via: 1.1 google
<
Page on on-prem-neg-2 in us-west1-a
* Connection #0 to host 34.96.103.132 left intact
* Closing connection 0

Congratulations, you have successfully deployed an L7 hybrid load balancer with NEGs!

Congratulations for completing the codelab!

What we've covered

  • Create a custom VPC
  • Create two virtual machines (VMs) used as a Network Endpoint Group (NEG)
  • Create a Hybrid Load Balancer, backend service and associated health-checks
  • Create a firewall rule that allows access to the Load Balancer
  • Validate Network Endpoint Group reachability

10. Cleanup steps

From the Cloud Console UI, navigate to Network Services → Load Balancing, tick the 'xlb' load balancer, and select delete. When prompted, also tick 'on-prem-backend-service' & 'on-prem-health-check', then select delete

From the Cloud Console UI, navigate to Compute Engine → Network Endpoint Groups, tick 'on-prem-neg-1' & 'on-prem-neg-2', then select delete

From cloud shell delete lab components

gcloud compute routers nats delete cloudnat --router=crnat --region us-west1 --quiet

gcloud compute routers delete crnat  --region us-west1 --quiet

gcloud compute instances delete on-prem-neg-1 --zone=us-west1-a --quiet

gcloud compute instances delete on-prem-neg-2 --zone=us-west1-a --quiet

gcloud compute firewall-rules delete fw-allow-health-check --quiet

gcloud compute networks subnets delete network-endpoint-group-subnet --region=us-west1 --quiet

gcloud compute networks delete hybrid-network-lb --quiet

gcloud compute addresses delete hybrid-lb-ip --global --quiet