Connect to on-prem services over Hybrid Networking using Private Service Connect and Hybrid NEG TCP Proxy

1. Introduction

An internal regional TCP proxy load balancer with hybrid connectivity lets you make a service that is hosted in on-premises or other cloud environments available to clients in your VPC network.

If you want to make the hybrid service available in other VPC networks, you can use Private Service Connect to publish the service. By placing a service attachment in front of your internal regional TCP proxy load balancer, you can let clients in other VPC networks reach the hybrid services running in on-premises or other cloud environments.

What you'll build

In this codelab, you're going to build an internal TCP Proxy load balancer with Hybrid Connectivity to an on-premise service using a Network Endpoint Group. Clients in the Consumer VPC will then be able to communicate with the on-premise service.


What you'll learn

  • How to create a TCP Proxy ILB with Hybrid NEG backend service
  • How to establish a Private Service Connect Producer (Service Attachment) and Consumer (Forwarding Rule)
  • How to test and validate consumer to producer service communication

What you'll need

  • Established hybrid networking, e.g. HA VPN, Interconnect, SD-WAN
  • Google Cloud Project

Establish hybrid connectivity

Your Google Cloud and on-premises or other cloud environments must be connected through hybrid connectivity, using either Cloud Interconnect VLAN attachments or Cloud VPN tunnels with Cloud Router. We recommend you use a high availability connection.

A Cloud Router enabled with Global dynamic routing learns about the specific endpoint via BGP and programs it into your Google Cloud VPC network. Regional dynamic routing is not supported. Static routes are also not supported.
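
If your VPC is still set to regional dynamic routing, you can switch it to global. The following is a minimal sketch; it assumes your hybrid connectivity terminates in the producer-vpc network used later in this codelab.

# Sketch: assumes hybrid connectivity terminates in producer-vpc
gcloud compute networks update producer-vpc --bgp-routing-mode=global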

The Google Cloud VPC network that you use to configure either Cloud Interconnect or Cloud VPN is the same network you use to configure the hybrid load balancing deployment. Ensure that your VPC network's subnet CIDR ranges do not conflict with your remote CIDR ranges. When IP addresses overlap, subnet routes are prioritized over remote connectivity.

For instructions, see the Cloud Interconnect and Cloud VPN documentation.

Custom Route Advertisements

The subnets below require custom advertisements from the Cloud Router to the on-premise network so that on-premise firewall rules can be updated accordingly; an example Cloud Router update follows the table.

Subnet                          Description
172.16.0.0/23                   TCP Proxy subnet used to communicate directly with the on-premise service
130.211.0.0/22, 35.191.0.0/16   Google Cloud health check ranges
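
As a sketch, the custom advertisements could be configured on the Cloud Router as follows. The router name on-prem-router is a placeholder; substitute the Cloud Router attached to your hybrid connectivity.

# Sketch: on-prem-router is a placeholder for your Cloud Router name
gcloud compute routers update on-prem-router \
    --region=us-central1 \
    --advertisement-mode=custom \
    --set-advertisement-groups=all_subnets \
    --set-advertisement-ranges=172.16.0.0/23,130.211.0.0/22,35.191.0.0/16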

2. Before you begin

Update the project to support the codelab

This codelab uses $variables to simplify the gcloud configuration in Cloud Shell.

Inside Cloud Shell perform the following

gcloud config list project
gcloud config set project [YOUR-PROJECT-NAME]
psclab=YOUR-PROJECT-NAME
echo $psclab

3. Producer Setup

Create the Producer VPC

Inside Cloud Shell perform the following

gcloud compute networks create producer-vpc --project=$psclab --subnet-mode=custom

Create the Producer subnets

Inside Cloud Shell perform the following

gcloud compute networks subnets create subnet-201 --project=$psclab --range=10.10.1.0/24 --network=producer-vpc --region=us-central1

Create the TCP Proxy subnets

Proxy allocation is at the VPC level, not the load balancer level. You must create one proxy-only subnet in each region of a virtual network (VPC) in which you use Envoy-based load balancers. If you deploy multiple load balancers in the same region and same VPC network, they share the same proxy-only subnet for load balancing.

  1. A client makes a connection to the IP address and port of the load balancer's forwarding rule.
  2. Each proxy listens on the IP address and port specified by the corresponding load balancer's forwarding rule. One of the proxies receives and terminates the client's network connection.
  3. The proxy establishes a connection to the appropriate backend VM or endpoint in a NEG, as determined by the load balancer's URL map and backend services.

You must create proxy-only subnets regardless of whether your network is auto-mode or custom. A proxy-only subnet must provide 64 or more IP addresses. This corresponds to a prefix length of /26 or shorter. The recommended subnet size is /23 (512 proxy-only addresses).

Inside Cloud Shell perform the following

gcloud compute networks subnets create proxy-subnet-us-central \
  --purpose=REGIONAL_MANAGED_PROXY \
  --role=ACTIVE \
  --region=us-central1 \
  --network=producer-vpc \
  --range=172.16.0.0/23

Create the Private Service Connect NAT subnets

Create one or more dedicated subnets to use with Private Service Connect. If you're using the Google Cloud console to publish a service, you can create the subnets during that procedure. Create the subnet in the same region as the service's load balancer. You can't convert a regular subnet to a Private Service Connect subnet.

Inside Cloud Shell perform the following

gcloud compute networks subnets create psc-nat-subnet --network=producer-vpc --region=us-central1 --range=100.100.10.0/24 --purpose=PRIVATE_SERVICE_CONNECT

Create the Producer Firewall Rules

Configure firewall rules to allow traffic between the Private Service Connect endpoints and the service attachment. In this codelab, create an ingress firewall rule allowing the NAT subnet 100.100.10.0/24 access to the Private Service Connect service attachment (internal load balancer).

Inside Cloud Shell perform the following

gcloud compute --project=$psclab firewall-rules create allow-to-ingress-nat-subnet --direction=INGRESS --priority=1000 --network=producer-vpc --action=ALLOW --rules=all --source-ranges=100.100.10.0/24

Inside Cloud Shell create the fw-allow-health-check rule to allow the Google Cloud health checks to reach the on-premise service (backend service) on TCP port 80

gcloud compute firewall-rules create fw-allow-health-check \
    --network=producer-vpc \
    --action=allow \
    --direction=ingress \
    --source-ranges=130.211.0.0/22,35.191.0.0/16 \
    --rules=tcp:80

Create an ingress firewall rule allowing traffic from the TCP Proxy subnet (172.16.0.0/23) on port 80

gcloud compute firewall-rules create fw-allow-proxy-only-subnet \
    --network=producer-vpc \
    --action=allow \
    --direction=ingress \
    --source-ranges=172.16.0.0/23 \
    --rules=tcp:80

Set up the hybrid connectivity NEG

When creating the NEG, use a ZONE that minimizes the geographic distance between Google Cloud and your on-premises or other cloud environment. For example, if you are hosting a service in an on-premises environment in Frankfurt, Germany, you can specify the europe-west3-a Google Cloud zone when you create the NEG.

Moreover, if you're using Cloud Interconnect, the ZONE used to create the NEG should be in the same region where the Cloud Interconnect attachment was configured.

For the available regions and zones, see the Compute Engine documentation: Available regions and zones.

Inside Cloud Shell create a hybrid connectivity NEG using the gcloud compute network-endpoint-groups create command

gcloud compute network-endpoint-groups create on-prem-service-neg \
    --network-endpoint-type=NON_GCP_PRIVATE_IP_PORT \
    --zone=us-central1-a \
    --network=producer-vpc

Inside Cloud Shell add the on-premises IP:Port endpoint to the hybrid NEG.

gcloud compute network-endpoint-groups update on-prem-service-neg \
    --zone=us-central1-a \
    --add-endpoint="ip=192.168.1.5,port=80"

Configure the load balancer

In the following steps you will configure the load balancer (forwarding rule) and associate it with the network endpoint group.

Inside Cloud Shell create the regional health check used for the on-premise service

gcloud compute health-checks create tcp on-prem-service-hc \
    --region=us-central1 \
    --use-serving-port

Inside Cloud Shell create the backend service for the on-premise backend

gcloud compute backend-services create on-premise-service-backend \
   --load-balancing-scheme=INTERNAL_MANAGED \
   --protocol=TCP \
   --region=us-central1 \
   --health-checks=on-prem-service-hc \
   --health-checks-region=us-central1

Inside Cloud Shell add the hybrid NEG backend to the backend service. For --max-connections, enter the maximum number of concurrent connections that the backend should handle (100 in this codelab).

gcloud compute backend-services add-backend on-premise-service-backend \
   --network-endpoint-group=on-prem-service-neg \
   --network-endpoint-group-zone=us-central1-a \
   --region=us-central1 \
   --balancing-mode=CONNECTION \
   --max-connections=100

Inside Cloud Shell create the Target Proxy

gcloud compute target-tcp-proxies create on-premise-svc-tcpproxy \
   --backend-service=on-premise-service-backend \
   --region=us-central1

Inside Cloud Shell create the forwarding rule (ILB)

Create the forwarding rule using the gcloud compute forwarding-rules create command.

The forwarding rule only forwards packets with a matching destination port, which can be a single port number from 1-65535; this codelab uses port 80.

gcloud compute forwarding-rules create tcp-ilb-psc \
   --load-balancing-scheme=INTERNAL_MANAGED \
   --network=producer-vpc \
   --subnet=subnet-201 \
   --ports=80 \
   --region=us-central1 \
   --target-tcp-proxy=on-premise-svc-tcpproxy \
   --target-tcp-proxy-region=us-central1

Obtain the IP Address of the internal load balancer

gcloud compute forwarding-rules describe tcp-ilb-psc --region=us-central1 | grep -i IPAddress:

Example output:
gcloud compute forwarding-rules describe tcp-ilb-psc --region=us-central1 | grep -i IPAddress:
IPAddress: 10.10.1.2

4. Validate the load balancer

From Cloud Console navigate to Network Services → Load Balancing → Load Balancers. Note that the 1 NEG shows 'Green', indicating a successful health check to the on-premise service.
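
As an optional check, you can also query backend health from Cloud Shell using the backend service created earlier:

gcloud compute backend-services get-health on-premise-service-backend \
    --region=us-central1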


Selecting 'on-premise-service-backend' displays the frontend IP Address


5. View the learned routes from on-premise

Navigate to VPC Network → Routes. Note, the learned on-premise service subnet 192.168.1.0/27
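
Alternatively, the routes learned over BGP can be viewed from Cloud Shell; on-prem-router is a placeholder for your own Cloud Router name.

# Sketch: on-prem-router is a placeholder for your Cloud Router name
gcloud compute routers get-status on-prem-router --region=us-central1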


6. Validate connectivity to the on-premise service

From the Producer VPC we will create a VM to test connectivity to the on-premise service; after that, the Service Attachment is next for configuration.

Inside Cloud Shell create the test instance in the producer vpc

gcloud compute instances create test-box-us-central1 \
    --zone=us-central1-a \
    --image-family=debian-10 \
    --image-project=debian-cloud \
    --subnet=subnet-201 \
    --no-address

To allow IAP to connect to your VM instances, create a firewall rule that:

  • Applies to all VM instances that you want to be accessible by using IAP.
  • Allows ingress traffic from the IP range 35.235.240.0/20. This range contains all IP addresses that IAP uses for TCP forwarding.

Inside Cloud Shell create the IAP firewall rule in the producer vpc

gcloud compute firewall-rules create ssh-iap \
    --network producer-vpc \
    --allow tcp:22 \
    --source-ranges=35.235.240.0/20

Log into test-box-us-central1 using IAP in Cloud Shell to validate connectivity to the on-premise service by performing a curl against the load balancer IP Address. Retry if there is a timeout.

gcloud compute ssh test-box-us-central1 --project=$psclab --zone=us-central1-a --tunnel-through-iap

Perform a curl validating connectivity to the on-premise service. Once validated exit from the VM returning to the Cloud Shell prompt. Replace the Internal Load Balancer IP based on your output identified in steps 3 and 4.

deepakmichael@test-box-us-central1:~$ curl -v 10.10.1.2
* Expire in 0 ms for 6 (transfer 0x55b9a6b2f0f0)
*   Trying 10.10.1.2...
* TCP_NODELAY set
* Expire in 200 ms for 4 (transfer 0x55b9a6b2f0f0)
* Connected to 10.10.1.2 (10.10.1.2) port 80 (#0)
> GET / HTTP/1.1
> Host: 10.10.1.2
> User-Agent: curl/7.64.0
> Accept: */*
> 
< HTTP/1.1 200 OK
< Content-Type: text/html; charset=utf-8
< Accept-Ranges: bytes
< ETag: "3380914763"
< Last-Modified: Mon, 05 Dec 2022 15:10:56 GMT
< Expires: Mon, 05 Dec 2022 15:42:38 GMT
< Cache-Control: max-age=0
< Content-Length: 37
< Date: Mon, 05 Dec 2022 15:42:38 GMT
< Server: lighttpd/1.4.53
< 
Welcome to my on-premise service!!

7. Create the Private Service Connect Service Attachment

In the following steps we will create the Service Attachment. Once it is paired with a Consumer Endpoint, access to the on-premise service is achieved without the need for VPC peering.

Create the Service Attachment

Inside Cloud Shell create the Service Attachment

gcloud compute service-attachments create service-1 --region=us-central1 --producer-forwarding-rule=tcp-ilb-psc --connection-preference=ACCEPT_AUTOMATIC --nat-subnets=psc-nat-subnet

Optional: If you're using a Shared VPC, create the Service Attachment in the Service Project

gcloud compute service-attachments create service-1 --region=us-central1 --producer-forwarding-rule=tcp-ilb-psc --connection-preference=ACCEPT_AUTOMATIC --nat-subnets=projects/<hostproject>/regions/<region>/subnetworks/<natsubnet>

Validate the TCP service attachment

gcloud compute service-attachments describe service-1 --region us-central1

8. Optional: Navigate to Network Services → Private Service Connect to view the newly established Service Attachment


Selecting Service-1 provides greater detail, including the Service Attachment URI used by the consumer to establish a Private Service Connection. Take note of the URI since it will be used in a later step.


Service Attachment Details: projects/<projectname>/regions/us-central1/serviceAttachments/service-1
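
If you prefer the CLI, the attachment's self link can also be retrieved from Cloud Shell; gcloud should also accept this full self link for --target-service-attachment in the later consumer step.

gcloud compute service-attachments describe service-1 \
    --region=us-central1 \
    --format="value(selfLink)"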

9. Consumer Setup

Create the Consumer VPC

Inside Cloud Shell perform the following

gcloud compute networks create consumer-vpc --project=$psclab --subnet-mode=custom

Create the Consumer subnets

Inside Cloud Shell create the GCE subnet

gcloud compute networks subnets create subnet-101 --project=$psclab --range=10.100.1.0/24 --network=consumer-vpc --region=us-central1

Inside Cloud Shell create the Consumer Endpoint Subnet

gcloud compute networks subnets create subnet-102 --project=$psclab --range=10.100.2.0/24 --network=consumer-vpc --region=us-central1

Create the Consumer Endpoint (forwarding rule)

Inside Cloud Shell create the static IP Address that will be used as a Consumer Endpoint

gcloud compute addresses create psc-consumer-ip-1 --region=us-central1 --subnet=subnet-102 --addresses 10.100.2.10

Let's use the previously generated Service Attachment URI to create the Consumer Endpoint

Inside Cloud Shell create the Consumer Endpoint

gcloud compute forwarding-rules create psc-consumer-1 --region=us-central1 --network=consumer-vpc --address=psc-consumer-ip-1 --target-service-attachment=projects/$psclab/regions/us-central1/serviceAttachments/service-1

10. Validate Consumer Private Service Connect - Consumer VPC

From the Consumer VPC verify a successful Private Service Connection by navigating to Network Services → Private Service Connect → Connected Endpoints. Note the established psc-consumer-1 connection and corresponding IP Address we previously created.
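
The connection status can also be checked from Cloud Shell; it should report ACCEPTED once the endpoint is connected.

gcloud compute forwarding-rules describe psc-consumer-1 \
    --region=us-central1 \
    --format="value(pscConnectionStatus)"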


Selecting psc-consumer-1 provides additional details, including the Service Attachment URI


11. Validate Consumer Private Service Connect - Producer VPC

From the Producer VPC verify a successful Private Service Connection by navigating to Network Services → Private Service Connect → Published Services. Note that the published service-1 connection now indicates 1 forwarding rule (connection endpoint).
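
From Cloud Shell, the producer-side view of connected endpoints is also available:

gcloud compute service-attachments describe service-1 \
    --region=us-central1 \
    --format="yaml(connectedEndpoints)"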


12. Create a Private DNS Zone & A Record

Create the Private DNS Zone mapped to the PSC Connection Endpoint allowing seamless access to the Producer from any host within the VPC.

From Cloud Shell

gcloud dns --project=$psclab managed-zones create codelab-zone --description="" --dns-name="codelab.net." --visibility="private" --networks="consumer-vpc"

gcloud dns --project=$psclab record-sets create service1.codelab.net. --zone="codelab-zone" --type="A" --ttl="300" --rrdatas="10.100.2.10"
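
Optionally, verify that the zone and record were created:

gcloud dns record-sets list --zone=codelab-zone --project=$psclab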

13. Validate Consumer access to the Producer's service using a VM

From the Consumer VPC we will create a VM to test connectivity to the on-premise service by accessing the consumer endpoint service1.codelab.net

Inside Cloud Shell create the test instance in the consumer vpc

gcloud compute instances create consumer-vm \
    --zone=us-central1-a \
    --image-family=debian-10 \
    --image-project=debian-cloud \
    --subnet=subnet-101 \
    --no-address

To allow IAP to connect to your VM instances, create a firewall rule that:

  • Applies to all VM instances that you want to be accessible by using IAP.
  • Allows ingress traffic from the IP range 35.235.240.0/20. This range contains all IP addresses that IAP uses for TCP forwarding.

Inside Cloud Shell create the IAP firewall rule in the consumer vpc

gcloud compute firewall-rules create ssh-iap-consumer \
    --network consumer-vpc \
    --allow tcp:22 \
    --source-ranges=35.235.240.0/20

Log into consumer-vm using IAP in Cloud Shell to validate connectivity to the on-premise service by performing a curl against the DNS FQDN service1.codelab.net. Retry if there is a timeout.

gcloud compute ssh consumer-vm --project=$psclab --zone=us-central1-a --tunnel-through-iap

Perform a curl validating connectivity to the on-premise service. Once validated exit from the VM returning to the Cloud Shell prompt

From consumer-vm perform a curl

$ curl -v service1.codelab.net
*   Trying 10.100.2.10...
* TCP_NODELAY set
* Expire in 200 ms for 4 (transfer 0x5650fc3390f0)
* Connected to service1.codelab.net (10.100.2.10) port 80 (#0)
> GET / HTTP/1.1
> Host: service1.codelab.net
> User-Agent: curl/7.64.0
> Accept: */*
> 
< HTTP/1.1 200 OK
< Content-Type: text/html; charset=utf-8
< Accept-Ranges: bytes
< ETag: "3380914763"
< Last-Modified: Mon, 05 Dec 2022 15:10:56 GMT
< Expires: Mon, 05 Dec 2022 15:15:41 GMT
< Cache-Control: max-age=0
< Content-Length: 37
< Date: Mon, 05 Dec 2022 15:15:41 GMT
< Server: lighttpd/1.4.53
< 
Welcome to my on-premise service!!

Provided below is an example capture from the on-premise service. Note that the source IP address 172.16.0.2 is from the TCP Proxy subnet range 172.16.0.0/23.


14. Producer Clean up

Delete Producer components

Inside Cloud Shell delete the producer components

gcloud compute instances delete test-box-us-central1 --zone=us-central1-a --quiet

gcloud compute service-attachments delete service-1 --region=us-central1 --quiet 

gcloud compute forwarding-rules delete tcp-ilb-psc --region=us-central1 --quiet

gcloud compute target-tcp-proxies delete on-premise-svc-tcpproxy --region=us-central1 --quiet

gcloud compute backend-services delete on-premise-service-backend --region=us-central1 --quiet

gcloud compute network-endpoint-groups delete on-prem-service-neg --zone=us-central1-a --quiet

gcloud compute networks subnets delete psc-nat-subnet subnet-201 proxy-subnet-us-central --region=us-central1 --quiet

gcloud compute firewall-rules delete ssh-iap fw-allow-proxy-only-subnet allow-to-ingress-nat-subnet fw-allow-health-check --quiet

gcloud compute health-checks delete on-prem-service-hc --region=us-central1 --quiet

gcloud compute networks delete producer-vpc --quiet

15. Consumer Clean up

Delete Consumer components

Inside Cloud Shell delete the consumer components

gcloud compute instances delete consumer-vm --zone=us-central1-a --quiet

gcloud compute forwarding-rules delete psc-consumer-1 --region=us-central1 --quiet

gcloud compute addresses delete psc-consumer-ip-1 --region=us-central1 --quiet

gcloud compute networks subnets delete subnet-101 subnet-102 --region=us-central1 --quiet

gcloud compute firewall-rules delete ssh-iap-consumer --quiet

gcloud dns record-sets delete service1.codelab.net --type=A --zone=codelab-zone --quiet

gcloud dns managed-zones delete codelab-zone --quiet 

gcloud compute networks delete consumer-vpc --quiet 

16. Congratulations

Congratulations, you've successfully configured and validated Private Service Connect with TCP Proxy.

You created the producer infrastructure, and you added a service attachment in the producer VPC pointing to an on-premise service. You learned how to create a consumer endpoint in the Consumer VPC that allowed connectivity to the on-premise service.

What's next?

Check out some of these codelabs...

Further reading & Videos

Reference docs