Private Service Connect with automatic DNS configuration

1. Introduction

Private Service Connect with automatic DNS configuration uses Service Directory and Cloud DNS to automatically create DNS records that are programmed with the consumer Private Service Connect endpoint IP addresses.

What you'll build

In this codelab, you're going to build a comprehensive Private Service Connect architecture that demonstrates the use of automatic DNS, as shown in Figure 1.

Automatic DNS is made possible by the following:

  1. The producer originates automatic DNS by supplying an owned public domain with the ‘--domain-names' flag when creating the Private Service Connect service attachment.
  2. The consumer defines an endpoint name.
  3. Automatic DNS creates both a DNS zone, goog-psc-default-us-central1, and a DNS name, cosmopup.net, in addition to a Service Directory entry consisting of the consumer endpoint name.

The benefit of automatic DNS is illustrated in step (4) of Figure 1, where an end user can communicate with the consumer endpoint through DNS using the FQDN stargazer.cosmopup.net.
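
For orientation, the two commands that drive automatic DNS are sketched below. Both are built step by step later in this codelab, so treat this as a preview rather than something to run now.

# Producer: publish the service attachment with an owned public domain
gcloud compute service-attachments create published-service \
    --region=us-central1 \
    --producer-forwarding-rule=l7-ilb-forwarding-rule \
    --connection-preference=ACCEPT_AUTOMATIC \
    --nat-subnets=psc-nat-subnet \
    --domain-names=cosmopup.net.

# Consumer: the endpoint (forwarding rule) name becomes the DNS hostname
gcloud compute forwarding-rules create stargazer \
    --region=us-central1 \
    --network=consumer-vpc \
    --address=psc-consumer-ip-1 \
    --target-service-attachment=projects/$projectname/regions/us-central1/serviceAttachments/published-service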

Figure 1


What you'll learn

  • How to create an internal HTTP(S) load balancer
  • How to create a service attachment with automatic DNS
  • How to establish a Private Service Connect Producer service
  • How to access a consumer endpoint using automatic DNS

What you'll need

  • Google Cloud Project
  • A public domain that you own

2. Before you begin

Update the project to support the codelab

This codelab makes use of $variables to aid gcloud command execution in Cloud Shell.

Inside Cloud Shell, perform the following:

gcloud config list project
gcloud config set project [YOUR-PROJECT-NAME]
projectname=YOUR-PROJECT-NAME
echo $projectname

3. Producer Setup

Create the producer VPC

Inside Cloud Shell, perform the following:

gcloud compute networks create producer-vpc --project=$projectname --subnet-mode=custom

Create the producer subnets

Inside Cloud Shell, perform the following:

gcloud compute networks subnets create gce-subnet --project=$projectname --range=172.16.20.0/28 --network=producer-vpc --region=us-central1

Inside Cloud Shell, perform the following:

gcloud compute networks subnets create load-balancer-subnet --project=$projectname --range=172.16.10.0/28 --network=producer-vpc --region=us-central1

Reserve an IP address for the internal load balancer

Inside Cloud Shell, perform the following:

gcloud compute addresses create lb-ip \
    --region=us-central1 \
    --subnet=load-balancer-subnet \
    --purpose=GCE_ENDPOINT

View the allocated IP address

Use the compute addresses describe command to view the allocated IP address

gcloud compute addresses describe lb-ip  --region=us-central1 | grep address:

Create the regional proxy subnets

Proxy allocation is at the VPC network level, not the load balancer level. You must create one proxy-only subnet in each region of a virtual network (VPC) in which you use Envoy-based load balancers. If you deploy multiple load balancers in the same region and same VPC network, they share the same proxy-only subnet for load balancing.

  1. A client makes a connection to the IP address and port of the load balancer's forwarding rule.
  2. Each proxy listens on the IP address and port specified by the corresponding load balancer's forwarding rule. One of the proxies receives and terminates the client's network connection.
  3. The proxy establishes a connection to the appropriate backend VM determined by the load balancer's URL map and backend services.

You must create proxy-only subnets regardless of whether your VPC network is auto mode or custom mode. A proxy-only subnet must provide 64 or more IP addresses. This corresponds to a prefix length of /26 or shorter. The recommended subnet size is /23 (512 proxy-only addresses).

Inside Cloud Shell, perform the following:

gcloud compute networks subnets create proxy-subnet-us-central \
  --purpose=REGIONAL_MANAGED_PROXY \
  --role=ACTIVE \
  --region=us-central1 \
  --network=producer-vpc \
  --range=172.16.0.0/23

Create the Private Service Connect NAT subnets

Create one or more dedicated subnets to use with Private Service Connect. If you're using the Google Cloud console to publish a service, you can create the subnets during that procedure. Create the subnet in the same region as the service's load balancer. You can't convert a regular subnet to a Private Service Connect subnet.

Inside Cloud Shell, perform the following:

gcloud compute networks subnets create psc-nat-subnet \
    --project $projectname \
    --network producer-vpc \
    --region us-central1 \
    --range 100.100.10.0/24 \
    --purpose PRIVATE_SERVICE_CONNECT

Create the producer firewall rules

Configure firewall rules to allow traffic between the Private Service Connect NAT subnet and the ILB proxy-only subnet.

Inside Cloud Shell, perform the following:

gcloud compute --project=$projectname firewall-rules create allow-to-ingress-nat-subnet --direction=INGRESS --priority=1000 --network=producer-vpc --action=ALLOW --rules=all --source-ranges=100.100.10.0/24

Inside Cloud Shell, create the fw-allow-health-check firewall rule to allow the Google Cloud health checks to reach the producer service (backend service) on TCP port 80.

gcloud compute firewall-rules create fw-allow-health-check \
    --network=producer-vpc \
    --action=allow \
    --direction=ingress \
    --source-ranges=130.211.0.0/22,35.191.0.0/16 \
    --rules=tcp:80

Create an ingress allow firewall rule for the proxy-only subnet to allow the load balancer to communicate with backend instances on TCP port 80.

gcloud compute firewall-rules create fw-allow-proxy-only-subnet \
    --network=producer-vpc \
    --action=allow \
    --direction=ingress \
    --source-ranges=172.16.0.0/23 \
    --rules=tcp:80
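
Optionally, confirm that all three firewall rules are attached to the producer VPC network. The filter below is a convenience and matches on the network name.

gcloud compute firewall-rules list --filter="network:producer-vpc"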

Cloud Router and NAT configuration

Cloud NAT is used in the codelab for software package installation since the VM instance does not have an external IP address.

Inside Cloud Shell, create the cloud router.

gcloud compute routers create cloud-router-for-nat --network producer-vpc --region us-central1

Inside Cloud Shell, create the NAT gateway.

gcloud compute routers nats create cloud-nat-us-central1 --router=cloud-router-for-nat --auto-allocate-nat-external-ips --nat-all-subnet-ip-ranges --region us-central1

Instance group configuration

In the following section, you'll create the Compute Engine instance & unmanaged instance group. In later steps the instance group will be used as the load balancer backend service.

Inside Cloud Shell, create the Compute Engine instance that will serve as the backend for the producer service.

gcloud compute instances create app-server-1 \
    --project=$projectname \
    --machine-type=e2-micro \
    --image-family debian-10 \
    --no-address \
    --image-project debian-cloud \
    --zone us-central1-a \
    --subnet=gce-subnet \
    --metadata startup-script="#! /bin/bash
      sudo apt-get update
      sudo apt-get install apache2 -y
      sudo service apache2 restart
      echo 'Welcome to App-Server-1 !!' | tee /var/www/html/index.html"

Inside Cloud Shell, create the unmanaged instance group.

gcloud compute instance-groups unmanaged create psc-instance-group --zone=us-central1-a

gcloud compute instance-groups unmanaged set-named-ports psc-instance-group --project=$projectname --zone=us-central1-a --named-ports=http:80

gcloud compute instance-groups unmanaged add-instances psc-instance-group --zone=us-central1-a --instances=app-server-1
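
Optionally, confirm that app-server-1 is a member of the instance group before attaching it to the load balancer.

gcloud compute instance-groups unmanaged list-instances psc-instance-group --zone=us-central1-a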

Configure the load balancer

In the following steps, you will configure the internal HTTP load balancer that will be published as a service attachment in a later step.

Inside Cloud Shell, create the regional health-check.

gcloud compute health-checks create http http-health-check \
    --region=us-central1 \
    --use-serving-port

Inside Cloud Shell, create the backend service.

gcloud compute backend-services create l7-ilb-backend-service \
    --load-balancing-scheme=INTERNAL_MANAGED \
    --protocol=HTTP \
    --health-checks=http-health-check \
    --health-checks-region=us-central1 \
    --region=us-central1

Inside Cloud Shell, add backends to the backend service.

gcloud compute backend-services add-backend l7-ilb-backend-service \
  --balancing-mode=UTILIZATION \
  --instance-group=psc-instance-group \
  --instance-group-zone=us-central1-a \
  --region=us-central1

Inside Cloud Shell, create the URL map to route incoming requests to the backend service.

gcloud compute url-maps create l7-ilb-map \
    --default-service l7-ilb-backend-service \
    --region=us-central1

Create the HTTP target proxy.

gcloud compute target-http-proxies create l7-ilb-proxy \
    --url-map=l7-ilb-map \
    --url-map-region=us-central1 \
    --region=us-central1

Create a forwarding rule to route incoming requests to the proxy. Don't use the proxy-only subnet to create the forwarding rule.

gcloud compute forwarding-rules create l7-ilb-forwarding-rule \
    --load-balancing-scheme=INTERNAL_MANAGED \
    --network=producer-vpc \
    --subnet=load-balancer-subnet \
    --address=lb-ip \
    --ports=80 \
    --region=us-central1 \
    --target-http-proxy=l7-ilb-proxy \
    --target-http-proxy-region=us-central1

4. Validate the load balancer

From Cloud Console, navigate to Network Services → Load Balancing → Load Balancers. Note the successful health check to the backend service.


Selecting ‘l7-ilb-map' yields the frontend IP address, which should match the IP address you grepped in an earlier step, and identifies the backend service.
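
If you prefer to validate from Cloud Shell, the backend health can also be checked with gcloud; app-server-1 should report HEALTHY once the health check has passed.

gcloud compute backend-services get-health l7-ilb-backend-service --region=us-central1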


5. Create the Private Service Connect service attachment

Create the service attachment

Inside Cloud Shell, create the service attachment. Make sure to add the ‘.' at the end of the domain name.

gcloud compute service-attachments create published-service --region=us-central1 --producer-forwarding-rule=l7-ilb-forwarding-rule --connection-preference=ACCEPT_AUTOMATIC --nat-subnets=psc-nat-subnet --domain-names=cosmopup.net.

Optional: If using a shared VPC, create the service attachment in the service project.

gcloud compute service-attachments create published-service --region=us-central1 --producer-forwarding-rule=l7-ilb-forwarding-rule --connection-preference=ACCEPT_AUTOMATIC --nat-subnets=projects/<hostproject>/regions/us-central1/subnetworks/psc-nat-subnet --domain-names=cosmopup.net.

Navigate to Network Services → Private Service Connect to view the newly established service attachment.


Selecting published-service provides greater detail, including the service attachment URI used by the consumer to establish a Private Service Connection & the domain name.


Service attachment Details:

projects/<project name>/regions/us-central1/serviceAttachments/published-service
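
The same details, including the domain name, can also be retrieved from Cloud Shell by describing the service attachment.

gcloud compute service-attachments describe published-service --region=us-central1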

6. Consumer Setup

Enable consumer APIs

Inside Cloud Shell, perform the following:

gcloud services enable dns.googleapis.com
gcloud services enable servicedirectory.googleapis.com

Create the consumer VPC network

Inside Cloud Shell, perform the following:

gcloud compute networks create consumer-vpc --project=$projectname --subnet-mode=custom

Create the consumer subnets

Inside Cloud Shell, create the subnet for the test instance.

gcloud compute networks subnets create db1-subnet --project=$projectname --range=10.20.0.0/28 --network=consumer-vpc --region=us-central1

Inside Cloud Shell, create a subnet for the consumer endpoint.

gcloud compute networks subnets create consumer-ep-subnet --project=$projectname --range=10.10.0.0/28 --network=consumer-vpc --region=us-central1

Create the consumer endpoint (forwarding rule)

Inside Cloud Shell, create the static IP address that will be used for the consumer endpoint.

gcloud compute addresses create psc-consumer-ip-1 --region=us-central1 --subnet=consumer-ep-subnet --addresses 10.10.0.10

We use the previously generated service attachment URI to create the consumer endpoint.

Inside Cloud Shell, create the consumer endpoint.

gcloud compute forwarding-rules create stargazer --region=us-central1 --network=consumer-vpc --address=psc-consumer-ip-1 --target-service-attachment=projects/$projectname/regions/us-central1/serviceAttachments/published-service

7. Validate the connection in the consumer's VPC network

From the consumer VPC network, verify a successful Private Service Connection by navigating to Network Services → Private Service Connect → Connected Endpoints. Note the established stargazer connection and the corresponding IP address we previously created.


Selecting the stargazer endpoint provides greater detail, including the service attachment URI.

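Optionally, the same connection details are available from Cloud Shell by describing the consumer forwarding rule; look for a pscConnectionStatus of ACCEPTED.

gcloud compute forwarding-rules describe stargazer --region=us-central1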

8. Validate the connection in the producer's VPC network

From the producer's VPC network, verify a successful Private Service Connection by navigating to Network Services → Private Service Connect → Published Services. Note that the published service connection now indicates 1 forwarding rule (connection endpoint).

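Optionally, the connected consumer endpoint can also be viewed from Cloud Shell; the connectedEndpoints field of the service attachment should show one entry in the ACCEPTED state.

gcloud compute service-attachments describe published-service --region=us-central1 --format="yaml(connectedEndpoints)"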

9. Validate the automatic DNS configuration

Let's evaluate the DNS and Service Directory configuration.

Cloud DNS configuration

Navigate to Network Services → Cloud DNS → Zones. The zone goog-psc-default-us-central1 and the DNS name cosmopup.net. are generated automatically.

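Optionally, the automatically created zone and its records can also be listed from Cloud Shell. The zone name below is the one generated by automatic DNS; use the name shown in your console if it differs.

gcloud dns managed-zones list

gcloud dns record-sets list --zone=goog-psc-default-us-central1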

View the DNS and Service Directory configuration

Selecting the zone name allows us to see how Service Directory is integrated with Cloud DNS.


Service Directory configuration

Navigate to Network Services → Service Directory.

Recall the consumer endpoint name ‘stargazer'? It is programmed automatically in Service Directory, allowing us to reach the consumer endpoint by using the FQDN stargazer.goog-psc-default-us-central1.

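Optionally, the Service Directory configuration can also be listed from Cloud Shell. The namespace name below assumes the automatically generated namespace shown in the console; substitute the name in your project if it differs.

gcloud service-directory namespaces list --location=us-central1

gcloud service-directory services list --namespace=goog-psc-default-us-central1 --location=us-central1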

10. Validate consumer access to the producer's service

From the consumer's VPC network, we will create a VM to test connectivity to the published service by accessing the consumer endpoint stargazer.cosmopup.net.

Inside Cloud Shell, create the test instance in the consumer VPC.

gcloud compute instances create db1 \
    --zone=us-central1-a \
    --image-family=debian-10 \
    --image-project=debian-cloud \
    --subnet=db1-subnet \
    --no-address

To allow IAP to connect to your VM instances, create a firewall rule that:

  • Applies to all VM instances that you want to be accessible by using IAP.
  • Allows ingress traffic from the IP range 35.235.240.0/20. This range contains all IP addresses that IAP uses for TCP forwarding.

Inside Cloud Shell, create the IAP firewall rule.

gcloud compute firewall-rules create ssh-iap-consumer \
    --network consumer-vpc \
    --allow tcp:22 \
    --source-ranges=35.235.240.0/20

Log into db1 using IAP in Cloud Shell to validate connectivity to the producer service by performing a curl. Retry if there is a timeout.

gcloud compute ssh db1 --project=$projectname --zone=us-central1-a --tunnel-through-iap
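
Optionally, before performing the curl, confirm from within the VM that the auto-generated DNS record resolves to the consumer endpoint IP address (10.10.0.10). getent is used here because it is available on the base Debian image without installing additional packages.

getent hosts stargazer.cosmopup.net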

From within the VM, perform a curl against your custom domain, for example stargazer.[custom-domain.com]. In the output below, a curl is performed against stargazer.cosmopup.net to validate connectivity to the producer service.

user@db1:~$ curl -v stargazer.cosmopup.net
*   Trying 10.10.0.10...
* TCP_NODELAY set
* Expire in 200 ms for 4 (transfer 0x55d3aa8190f0)
* Connected to stargazer.cosmopup.net (10.10.0.10) port 80 (#0)
> GET / HTTP/1.1
> Host: stargazer.cosmopup.net
> User-Agent: curl/7.64.0
> Accept: */*
> 
< HTTP/1.1 200 OK
< date: Thu, 22 Dec 2022 00:16:25 GMT
< server: Apache/2.4.38 (Debian)
< last-modified: Wed, 21 Dec 2022 20:26:32 GMT
< etag: "1b-5f05c5e43a083"
< accept-ranges: bytes
< content-length: 27
< content-type: text/html
< via: 1.1 google
< 
Welcome to App-Server-1 !!

Once validated, exit from the VM to return to the Cloud Shell prompt and start the clean-up tasks.

11. Clean up

From Cloud Shell, delete codelab components.

gcloud compute forwarding-rules delete stargazer --region=us-central1 --quiet

gcloud compute instances delete db1 --zone=us-central1-a --quiet 

gcloud compute addresses delete psc-consumer-ip-1 --region=us-central1 --quiet 

gcloud compute networks subnets delete consumer-ep-subnet db1-subnet --region=us-central1 --quiet 

gcloud compute firewall-rules delete ssh-iap-consumer --quiet 

gcloud compute networks delete consumer-vpc --quiet 

gcloud compute service-attachments delete published-service --region=us-central1 --quiet 

gcloud compute forwarding-rules delete l7-ilb-forwarding-rule --region=us-central1 --quiet 

gcloud compute target-http-proxies delete l7-ilb-proxy --region=us-central1 --quiet 
 
gcloud compute url-maps delete l7-ilb-map --region=us-central1 --quiet 
 
gcloud compute backend-services delete l7-ilb-backend-service --region=us-central1 --quiet
 
gcloud compute instance-groups unmanaged delete psc-instance-group --zone=us-central1-a --quiet
 
gcloud compute instances delete app-server-1 --zone=us-central1-a --quiet 
 
gcloud compute firewall-rules delete allow-to-ingress-nat-subnet fw-allow-health-check fw-allow-proxy-only-subnet --quiet 
 
gcloud compute addresses delete lb-ip --region=us-central1 --quiet 
 
gcloud compute networks subnets delete gce-subnet load-balancer-subnet psc-nat-subnet proxy-subnet-us-central --region=us-central1 --quiet 
 
gcloud compute routers delete cloud-router-for-nat --region=us-central1 --quiet 
 
gcloud compute networks delete producer-vpc --quiet 

12. Congratulations

Congratulations, you've successfully configured and validated a Private Service Connect endpoint with automatic DNS configuration.

You created the producer infrastructure, and you added a service attachment with public domain registration. You learned how to create a consumer endpoint in the consumer VPC network that allowed connectivity to the producer service using auto-generated DNS.

Cosmopup thinks codelabs are awesome!!


What's next?

Check out some of these codelabs...

Further reading & Videos

Reference docs