Private Service Connect Interface Managed Services

1. Introduction

A Private Service Connect interface is a resource that lets a producer Virtual Private Cloud (VPC) network initiate connections to various destinations in a consumer VPC network. Producer and consumer networks can be in different projects and organizations.

If a network attachment accepts a connection from a Private Service Connect interface, Google Cloud allocates the interface an IP address from a consumer subnet that's specified by the network attachment. The consumer and producer networks are connected and can communicate by using internal IP addresses.

A connection between a network attachment and a Private Service Connect interface is similar to the connection between a Private Service Connect endpoint and a service attachment, but it has two key differences:

  • A network attachment lets a producer network initiate connections to a consumer network (managed service egress), while an endpoint lets a consumer network initiate connections to a producer network (managed service ingress).
  • A Private Service Connect interface connection is transitive. This means that a producer network can communicate with other networks that are connected to the consumer network.

What you'll build

You'll create a single psc-network-attachment in the consumer VPC, resulting in two PSC interfaces that serve as backends to the L4 internal load balancer. From the producer VPC, tiger will send a curl to cosmo in the backend-vpc. In the producer VPC, you will create a static route for destination 192.168.20.0/28 with the internal load balancer as the next hop, which uses the backends and their PSC interfaces to route traffic to cosmo. See Figure 1 for an overview.

The same approach can be used with Google managed services that are VPC peered to the customer VPC when using Private Service Access.

Figure 1

What you'll learn

  • How to create a network attachment
  • How a producer can use a network attachment to create PSC interfaces as backends
  • How to establish communication from the producer to the consumer using an ILB as the next hop
  • How to allow access from the producer VM (tiger) to the consumer VM (cosmo) over VPC peering

What you'll need

  • A Google Cloud project with Owner permissions

2. Before you begin

Update the project to support the tutorial

This tutorial uses $variables to aid gcloud configuration in Cloud Shell.

Inside Cloud Shell, perform the following:

gcloud config list project
gcloud config set project [YOUR-PROJECT-NAME]
projectid=YOUR-PROJECT-NAME
echo $projectid
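
If the Compute Engine API is not yet enabled in your project, enable it before proceeding (optional if it is already enabled):

gcloud services enable compute.googleapis.com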

3. Consumer Setup

Create the Consumer VPC

Inside Cloud Shell, perform the following:

gcloud compute networks create consumer-vpc --project=$projectid --subnet-mode=custom

Create the Private Service Connect Network Attachment subnet

Inside Cloud Shell, perform the following:

gcloud compute networks subnets create intf-subnet --project=$projectid --range=192.168.10.0/28 --network=consumer-vpc --region=us-central1

Create the backend VPC

Inside Cloud Shell, perform the following:

gcloud compute networks create backend-vpc --project=$projectid --subnet-mode=custom

Create the backend VPC subnets

Inside Cloud Shell, perform the following:

gcloud compute networks subnets create cosmo-subnet-1 --project=$projectid --range=192.168.20.0/28 --network=backend-vpc --region=us-central1

Create the backend-vpc firewall rules

In Cloud Shell, create an ingress rule that allows traffic from the psc-network-attachment subnet to cosmo.

gcloud compute firewall-rules create allow-ingress-to-cosmo \
    --network=backend-vpc \
    --action=ALLOW \
    --rules=ALL \
    --direction=INGRESS \
    --priority=1000 \
    --source-ranges="192.168.10.0/28" \
    --destination-ranges="192.168.20.0/28" \
    --enable-logging

Cloud Router and NAT configuration

Cloud NAT is used in the tutorial for software package installation because the VM instance does not have a public IP address. Cloud NAT enables VMs with private IP addresses to access the internet.

Inside Cloud Shell, create the cloud router.

gcloud compute routers create cloud-router-for-nat --network backend-vpc --region us-central1

Inside Cloud Shell, create the NAT gateway.

gcloud compute routers nats create cloud-nat-us-central1 --router=cloud-router-for-nat --auto-allocate-nat-external-ips --nat-all-subnet-ip-ranges --region us-central1

4. Enable IAP

To allow IAP to connect to your VM instances, create a firewall rule that:

  • Applies to all VM instances that you want to be accessible by using IAP.
  • Allows ingress traffic from the IP range 35.235.240.0/20. This range contains all IP addresses that IAP uses for TCP forwarding.

Inside Cloud Shell, create the IAP firewall rule.

gcloud compute firewall-rules create ssh-iap-consumer \
    --network backend-vpc \
    --allow tcp:22 \
    --source-ranges=35.235.240.0/20

5. Create consumer VM instances

Inside Cloud Shell, create the consumer VM instance, cosmo.

gcloud compute instances create cosmo \
    --project=$projectid \
    --machine-type=e2-micro \
    --image-family debian-11 \
    --no-address \
    --image-project debian-cloud \
    --zone us-central1-a \
    --subnet=cosmo-subnet-1 \
    --metadata startup-script="#! /bin/bash
      sudo apt-get update
      sudo apt-get install tcpdump -y
      sudo apt-get install apache2 -y
      sudo service apache2 restart
      echo \"Welcome to cosmo's backend server !!\" | tee /var/www/html/index.html"

Obtain and store the IP address of the instance:

Inside Cloud Shell, perform a describe against the cosmo VM instance.

gcloud compute instances describe cosmo --zone=us-central1-a | grep networkIP:
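
Optionally, store the address in a shell variable for reuse later in the tutorial (a quick sketch; the variable name cosmo_ip is our choice):

cosmo_ip=$(gcloud compute instances describe cosmo --zone=us-central1-a --format="value(networkInterfaces[0].networkIP)")
echo $cosmo_ip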

6. Private Service Connect network attachment

Network attachments are regional resources that represent the consumer side of a Private Service Connect interface. You associate a single subnet with a network attachment, and the producer assigns IPs to the Private Service Connect interface from that subnet. The subnet must be in the same region as the network attachment. A network attachment must be in the same region as the producer service.

Create the network attachment

Inside Cloud Shell, create the network attachment.

gcloud compute network-attachments create psc-network-attachment \
    --region=us-central1 \
    --connection-preference=ACCEPT_MANUAL \
    --producer-accept-list=$projectid \
    --subnets=intf-subnet

List the network attachments

Inside Cloud Shell, list the network attachment.

gcloud compute network-attachments list

Describe the network attachments

Inside Cloud Shell, describe the network attachment.

gcloud compute network-attachments describe psc-network-attachment --region=us-central1

Make note of the psc-network-attachment URI, which the producer will use when creating the Private Service Connect interfaces. Example below:

user$ gcloud compute network-attachments describe psc-network-attachment --region=us-central1
connectionPreference: ACCEPT_MANUAL
creationTimestamp: '2023-06-07T11:27:33.116-07:00'
fingerprint: 8SDsvG6TfYQ=
id: '5014253525248340730'
kind: compute#networkAttachment
name: psc-network-attachment
network: https://www.googleapis.com/compute/v1/projects/$projectid/global/networks/consumer-vpc
producerAcceptLists:
- $projectid
region: https://www.googleapis.com/compute/v1/projects/$projectid/regions/us-central1
selfLink: https://www.googleapis.com/compute/v1/projects/$projectid/regions/us-central1/networkAttachments/psc-network-attachment
subnetworks:
- https://www.googleapis.com/compute/v1/projects/$projectid/regions/us-central1/subnetworks/intf-subnet
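
Optionally, capture the URI in a shell variable so you can substitute it when creating the PSC interfaces in section 10 (a sketch; the variable name psc_uri is our choice):

psc_uri=$(gcloud compute network-attachments describe psc-network-attachment --region=us-central1 --format="value(selfLink)")
echo $psc_uri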

7. Establish VPC peering between the consumer and backend vpc

You will create a VPC peering connection between the consumer and backend VPCs. This replicates how Google establishes connectivity to customer VPCs for managed services, in addition to cross-organization peering for network connectivity. VPC peering must be configured from each VPC.

Consumer VPC to backend VPC peering

Create the VPC peering connection from the consumer to the backend VPC.

Inside Cloud Shell, perform the following:

gcloud compute networks peerings create consumer-to-backend-vpc \
    --network=consumer-vpc \
    --peer-project=$projectid \
    --peer-network=backend-vpc \
    --stack-type=IPV4_ONLY

Create the VPC peering connection from the backend to the consumer VPC.

Inside Cloud Shell, perform the following:

gcloud compute networks peerings create backend-to-consumer-vpc \
    --network=backend-vpc \
    --peer-project=$projectid \
    --peer-network=consumer-vpc \
    --stack-type=IPV4_ONLY

Validate VPC peering state details

Inside Cloud Shell, verify that VPC peering is in the ACTIVE and Connected state.

gcloud compute networks peerings list

Example:

user@cloudshell$ gcloud compute networks peerings list
NAME: backend-to-consumer-vpc
NETWORK: backend-vpc
PEER_PROJECT: $projectid
PEER_NETWORK: consumer-vpc
STACK_TYPE: IPV4_ONLY
PEER_MTU: 
IMPORT_CUSTOM_ROUTES: False
EXPORT_CUSTOM_ROUTES: False
STATE: ACTIVE
STATE_DETAILS: [2023-06-07T11:42:27.634-07:00]: Connected.

NAME: consumer-to-backend-vpc
NETWORK: consumer-vpc
PEER_PROJECT: $projectid
PEER_NETWORK: backend-vpc
STACK_TYPE: IPV4_ONLY
PEER_MTU: 
IMPORT_CUSTOM_ROUTES: False
EXPORT_CUSTOM_ROUTES: False
STATE: ACTIVE
STATE_DETAILS: [2023-06-07T11:42:27.634-07:00]: Connected.

8. Producer Setup

Create the producer VPC

Inside Cloud Shell, perform the following:

gcloud compute networks create producer-vpc --project=$projectid --subnet-mode=custom

Create the producer subnets

Inside Cloud Shell, create the subnet used for vNIC0 of the PSC interfaces.

gcloud compute networks subnets create prod-subnet --project=$projectid --range=10.20.1.0/28 --network=producer-vpc --region=us-central1

Inside Cloud Shell, create the subnet used for the instance tiger.

gcloud compute networks subnets create prod-subnet-2 --project=$projectid --range=10.30.1.0/28 --network=producer-vpc --region=us-central1

Inside Cloud Shell, create the subnet used for the internal load balancer.

gcloud compute networks subnets create prod-subnet-3 --project=$projectid --range=172.16.10.0/28 --network=producer-vpc --region=us-central1

Cloud Router and NAT configuration

Cloud NAT is used in the tutorial for software package installation because the VM instance does not have a public IP address. Cloud NAT enables VMs with private IP addresses to access the internet.

Inside Cloud Shell, create the cloud router.

gcloud compute routers create cloud-router-for-nat-producer --network producer-vpc --region us-central1

Inside Cloud Shell, create the NAT gateway.

gcloud compute routers nats create cloud-nat-us-central1-producer --router=cloud-router-for-nat-producer --auto-allocate-nat-external-ips --nat-all-subnet-ip-ranges --region us-central1

Enable IAP

To allow IAP to connect to your VM instances, create a firewall rule that:

  • Applies to all VM instances that you want to be accessible by using IAP.
  • Allows ingress traffic from the IP range 35.235.240.0/20. This range contains all IP addresses that IAP uses for TCP forwarding.

Inside Cloud Shell, create the IAP firewall rule.

gcloud compute firewall-rules create ssh-iap-producer \
    --network producer-vpc \
    --allow tcp:22 \
    --source-ranges=35.235.240.0/20

Create producer VM instances

Inside Cloud Shell, create the producer VM instance, tiger.

gcloud compute instances create tiger \
    --project=$projectid \
    --machine-type=e2-micro \
    --image-family debian-11 \
    --no-address \
    --image-project debian-cloud \
    --zone us-central1-a \
    --subnet=prod-subnet-2 \
    --metadata startup-script="#! /bin/bash
      sudo apt-get update
      sudo apt-get install tcpdump -y"

9. Create producer firewall rules

In the producer VPC, create an ingress firewall rule that allows communication from prod-subnet-2 to all instances in the producer-vpc.

Inside Cloud Shell, create the producer firewall rule.

gcloud compute --project=$projectid firewall-rules create allow-tiger-ingress --direction=INGRESS --priority=1000 --network=producer-vpc --action=ALLOW --rules=all --source-ranges=10.30.1.0/28 --enable-logging

10. Create the Private Service Connect Interface

A Private Service Connect interface is a resource that lets a producer Virtual Private Cloud (VPC) network initiate connections to various destinations in a consumer VPC network. Producer and consumer networks can be in different projects and organizations.

If a network attachment accepts a connection from a Private Service Connect interface, Google Cloud allocates the interface an IP address from a consumer subnet that's specified by the network attachment. The consumer and producer networks are connected and can communicate by using internal IP addresses.

In this tutorial, you will create two instances with the Private Service Connect network attachment; they will serve as the backends for the internal load balancer.

Inside Cloud Shell, create the Private Service Connect interface (rabbit) and insert the previously identified psc-network-attachment URI from the network attachment describe output.

gcloud compute instances create rabbit --zone us-central1-a --machine-type=f1-micro --can-ip-forward --network-interface subnet=prod-subnet,network=producer-vpc,no-address --network-interface network-attachment=https://www.googleapis.com/compute/v1/projects/$projectid/regions/us-central1/networkAttachments/psc-network-attachment --metadata startup-script="#! /bin/bash
      sudo apt-get update
      sudo apt-get install tcpdump -y
      sudo apt-get install apache2 -y
      sudo service apache2 restart"

Inside Cloud Shell, create the Private Service Connect interface (fox) and insert the previously identified psc-network-attachment URI from the network attachment describe output.

gcloud compute instances create fox --zone us-central1-a --machine-type=f1-micro --can-ip-forward --network-interface subnet=prod-subnet,network=producer-vpc,no-address --network-interface network-attachment=https://www.googleapis.com/compute/v1/projects/$projectid/regions/us-central1/networkAttachments/psc-network-attachment --metadata startup-script="#! /bin/bash
      sudo apt-get update
      sudo apt-get install tcpdump -y
      sudo apt-get install apache2 -y
      sudo service apache2 restart"

Multi-NIC validation

Validate that each PSC interface instance is configured with the appropriate IP addresses. vNIC0 will use the producer's prod-subnet (10.20.1.0/28) and vNIC1 will use the consumer's intf-subnet (192.168.10.0/28).

gcloud compute instances describe rabbit --zone=us-central1-a | grep networkIP:

gcloud compute instances describe fox --zone=us-central1-a | grep networkIP:

Example:

user$ gcloud compute instances describe rabbit --zone=us-central1-a | grep networkIP:
  networkIP: 10.20.1.2
  networkIP: 192.168.10.2

user$ gcloud compute instances describe fox --zone=us-central1-a | grep networkIP:
  networkIP: 10.20.1.3
  networkIP: 192.168.10.3

11. Create and add rabbit and fox to an unmanaged instance group

In the following section, you will create an unmanaged instance group consisting of the PSC interface instances rabbit and fox.

Inside Cloud Shell, create the unmanaged instance group.

gcloud compute instance-groups unmanaged create psc-interface-instances-ig --project=$projectid --zone=us-central1-a

Inside Cloud Shell, add instances fox and rabbit to the instance group.

gcloud compute instance-groups unmanaged add-instances psc-interface-instances-ig --project=$projectid --zone=us-central1-a --instances=fox,rabbit
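
Optionally, confirm that both instances are members of the group (a sketch):

gcloud compute instance-groups unmanaged list-instances psc-interface-instances-ig --project=$projectid --zone=us-central1-a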

12. Create the health check, backend service, forwarding rule & firewall

Inside Cloud Shell, create the backend health check.

gcloud compute health-checks create http hc-http-80 --port=80

Inside Cloud Shell, create the backend service.

gcloud compute backend-services create psc-interface-backend --load-balancing-scheme=internal --protocol=tcp --region=us-central1 --health-checks=hc-http-80
gcloud compute backend-services add-backend psc-interface-backend --region=us-central1 --instance-group=psc-interface-instances-ig --instance-group-zone=us-central1-a

Inside Cloud Shell, create the forwarding rule.

gcloud compute forwarding-rules create psc-ilb --region=us-central1 --load-balancing-scheme=internal --network=producer-vpc --subnet=prod-subnet-3 --address=172.16.10.10 --ip-protocol=TCP --ports=all --backend-service=psc-interface-backend --backend-service-region=us-central1

From Cloud Shell, create a firewall rule to enable backend health checks.

gcloud compute firewall-rules create ilb-health-checks --allow tcp:80,tcp:443 --network producer-vpc --source-ranges 130.211.0.0/22,35.191.0.0/16 
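
Optionally, once the health-check firewall rule is in place, verify that both backends report as healthy (a sketch; health status can take a minute or two to converge after the rule is created):

gcloud compute backend-services get-health psc-interface-backend --region=us-central1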

13. Create Linux iptables rules for the PSC interface - rabbit

From the PSC interface instance, configure Linux iptables rules to allow producer communication to the consumer subnets.

Find the guest OS name of your Private Service Connect interface

To configure routing, you need to know the guest OS name of your Private Service Connect interface, which is different from the interface's name in Google Cloud.

Log into the psc-interface VM, rabbit, using IAP in Cloud Shell.

gcloud compute ssh rabbit --project=$projectid --zone=us-central1-a --tunnel-through-iap

From the rabbit SSH session, obtain the IP addresses of the psc-interface instance.

ip a

Example:

user@rabbit:~$ ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
2: ens4: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1460 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 42:01:0a:14:01:02 brd ff:ff:ff:ff:ff:ff
    altname enp0s4
    inet 10.20.1.2/32 brd 10.20.1.2 scope global dynamic ens4
       valid_lft 59396sec preferred_lft 59396sec
    inet6 fe80::4001:aff:fe14:102/64 scope link 
       valid_lft forever preferred_lft forever
3: ens5: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1460 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 42:01:c0:a8:0a:02 brd ff:ff:ff:ff:ff:ff
    altname enp0s5
    inet 192.168.10.2/32 brd 192.168.10.2 scope global dynamic ens5
       valid_lft 66782sec preferred_lft 66782sec
    inet6 fe80::4001:c0ff:fea8:a02/64 scope link 
       valid_lft forever preferred_lft forever
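
If the instance has several interfaces, you can optionally map vNIC1 to its guest OS name by matching the MAC address reported by the metadata server (a sketch run on rabbit; the variable name vnic1_mac is our choice):

vnic1_mac=$(curl -s -H "Metadata-Flavor: Google" http://metadata.google.internal/computeMetadata/v1/instance/network-interfaces/1/mac)
ip -o link | grep -i "$vnic1_mac"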

Find the gateway IP of your PSC interface

In the list of network interfaces, find and store the interface name that is associated with your Private Service Connect interface's IP address, for example ens5 (vNIC1).

To configure routing, you need to know the IP address of your Private Service Connect interface's default gateway.

From the rabbit SSH session, query the metadata server for interface index 1, since the PSC interface is associated with vNIC1.

curl http://metadata.google.internal/computeMetadata/v1/instance/network-interfaces/1/gateway -H "Metadata-Flavor: Google" && echo

The example produces the default gateway 192.168.10.1:

user@rabbit:~$ curl http://metadata.google.internal/computeMetadata/v1/instance/network-interfaces/1/gateway -H "Metadata-Flavor: Google" && echo
192.168.10.1

Add routes for consumer subnets

You must add a route to your Private Service Connect interface's default gateway for each consumer subnet that connects to your Private Service Connect interface. This ensures that traffic that is bound for the consumer network egresses from the Private Service Connect interface.

Validate route table

From rabbit, validate the current routes.

ip route show

Example.

user@rabbit:~$ ip route show
default via 10.20.1.1 dev ens4 
10.20.1.0/28 via 10.20.1.1 dev ens4 
10.20.1.1 dev ens4 scope link 
192.168.10.0/28 via 192.168.10.1 dev ens5 
192.168.10.1 dev ens5 scope link 

From rabbit, add the route to cosmo-subnet-1.

sudo ip route add 192.168.20.0/28 via 192.168.10.1 dev ens5

Validate route table

From rabbit, validate the newly added route.

ip route show

Example.

user@rabbit:~$ ip route show
default via 10.20.1.1 dev ens4 
10.20.1.0/28 via 10.20.1.1 dev ens4 
10.20.1.1 dev ens4 scope link 
192.168.10.0/28 via 192.168.10.1 dev ens5 
192.168.10.1 dev ens5 scope link 
192.168.20.0/28 via 192.168.10.1 dev ens5 

Create iptables rules

From rabbit, validate the current iptables NAT rules.

sudo iptables -t nat -L -n -v

Example:

user@rabbit:~$ sudo iptables -t nat -L -n -v
Chain PREROUTING (policy ACCEPT 0 packets, 0 bytes)
 pkts bytes target     prot opt in     out     source               destination         

Chain INPUT (policy ACCEPT 0 packets, 0 bytes)
 pkts bytes target     prot opt in     out     source               destination         

Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes)
 pkts bytes target     prot opt in     out     source               destination         

Chain POSTROUTING (policy ACCEPT 0 packets, 0 bytes)
 pkts bytes target     prot opt in     out     source               destination  

From rabbit, update iptables and enable IP forwarding.

sudo iptables -t nat -A POSTROUTING -o ens5 -j MASQUERADE
sudo sysctl net.ipv4.ip_forward=1
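
Note that the iptables rule and sysctl setting above do not persist across reboots. Persistence isn't required for this tutorial, but if you want it, one option on Debian 11 is a sysctl drop-in file plus the iptables-persistent package (a sketch; the file name is our choice):

echo 'net.ipv4.ip_forward=1' | sudo tee /etc/sysctl.d/99-ip-forward.conf
sudo DEBIAN_FRONTEND=noninteractive apt-get install -y iptables-persistent
sudo netfilter-persistent save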

From rabbit, validate the updated iptables rules.

sudo iptables -t nat -L -n -v

Example:

user@rabbit:~$ sudo iptables -t nat -L -n -v
Chain PREROUTING (policy ACCEPT 0 packets, 0 bytes)
 pkts bytes target     prot opt in     out     source               destination         

Chain INPUT (policy ACCEPT 0 packets, 0 bytes)
 pkts bytes target     prot opt in     out     source               destination         

Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes)
 pkts bytes target     prot opt in     out     source               destination         

Chain POSTROUTING (policy ACCEPT 0 packets, 0 bytes)
 pkts bytes target     prot opt in     out     source               destination         
    0     0 MASQUERADE  all  --  *      ens5    0.0.0.0/0            0.0.0.0/0
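
At this point rabbit has a route to cosmo-subnet-1 and masquerades traffic out of ens5, so you can optionally test the data path directly before involving the load balancer. From the rabbit SSH session, curl cosmo's IP address (192.168.20.2 in the earlier example output; substitute your own):

curl -v 192.168.20.2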

14. Create Linux iptables rules for the PSC interface - fox

From the PSC interface instance, configure Linux iptables rules to allow producer communication to the consumer subnets.

Find the guest OS name of your Private Service Connect interface

To configure routing, you need to know the guest OS name of your Private Service Connect interface, which is different from the interface's name in Google Cloud.

Open a new Cloud Shell tab and update your project settings.

Inside Cloud Shell, perform the following:

gcloud config list project
gcloud config set project [YOUR-PROJECT-NAME]
projectid=YOUR-PROJECT-NAME
echo $projectid

Log into the psc-interface VM, fox, using IAP in Cloud Shell.

gcloud compute ssh fox --project=$projectid --zone=us-central1-a --tunnel-through-iap

From the fox SSH session, obtain the IP addresses of the psc-interface instance.

ip a

Example:

user@fox:~$ ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
2: ens4: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1460 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 42:01:0a:14:01:03 brd ff:ff:ff:ff:ff:ff
    altname enp0s4
    inet 10.20.1.3/32 brd 10.20.1.3 scope global dynamic ens4
       valid_lft 65601sec preferred_lft 65601sec
    inet6 fe80::4001:aff:fe14:103/64 scope link 
       valid_lft forever preferred_lft forever
3: ens5: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1460 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 42:01:c0:a8:0a:03 brd ff:ff:ff:ff:ff:ff
    altname enp0s5
    inet 192.168.10.3/32 brd 192.168.10.3 scope global dynamic ens5
       valid_lft 63910sec preferred_lft 63910sec
    inet6 fe80::4001:c0ff:fea8:a03/64 scope link 
       valid_lft forever preferred_lft forever

Find the gateway IP of your PSC interface

In the list of network interfaces, find and store the interface name that is associated with your Private Service Connect interface's IP address, for example ens5 (vNIC1).

To configure routing, you need to know the IP address of your Private Service Connect interface's default gateway.

From the fox SSH session, query the metadata server for interface index 1, since the PSC interface is associated with vNIC1.

curl http://metadata.google.internal/computeMetadata/v1/instance/network-interfaces/1/gateway -H "Metadata-Flavor: Google" && echo

The example produces the default gateway 192.168.10.1:

user@fox:~$ curl http://metadata.google.internal/computeMetadata/v1/instance/network-interfaces/1/gateway -H "Metadata-Flavor: Google" && echo
192.168.10.1

Add routes for consumer subnets

You must add a route to your Private Service Connect interface's default gateway for each consumer subnet that connects to your Private Service Connect interface. This ensures that traffic that is bound for the consumer network egresses from the Private Service Connect interface.

Validate route table

From fox, validate the current routes.

ip route show

Example.

user@fox:~$ ip route show
default via 10.20.1.1 dev ens4 
10.20.1.0/28 via 10.20.1.1 dev ens4 
10.20.1.1 dev ens4 scope link 
192.168.10.0/28 via 192.168.10.1 dev ens5 
192.168.10.1 dev ens5 scope link 

From fox, add the route to cosmo-subnet-1.

sudo ip route add 192.168.20.0/28 via 192.168.10.1 dev ens5

Validate route table

From fox, validate the newly added route.

ip route show

Example.

user@fox:~$ ip route show
default via 10.20.1.1 dev ens4 
10.20.1.0/28 via 10.20.1.1 dev ens4 
10.20.1.1 dev ens4 scope link 
192.168.10.0/28 via 192.168.10.1 dev ens5 
192.168.10.1 dev ens5 scope link 
192.168.20.0/28 via 192.168.10.1 dev ens5

Create iptables rules

From fox, validate the current iptables NAT rules.

sudo iptables -t nat -L -n -v

Example:

user@fox:~$ sudo iptables -t nat -L -n -v
Chain PREROUTING (policy ACCEPT 0 packets, 0 bytes)
 pkts bytes target     prot opt in     out     source               destination         

Chain INPUT (policy ACCEPT 0 packets, 0 bytes)
 pkts bytes target     prot opt in     out     source               destination         

Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes)
 pkts bytes target     prot opt in     out     source               destination         

Chain POSTROUTING (policy ACCEPT 0 packets, 0 bytes)
 pkts bytes target     prot opt in     out     source               destination         

From fox, update iptables and enable IP forwarding.

sudo iptables -t nat -A POSTROUTING -o ens5 -j MASQUERADE
sudo sysctl net.ipv4.ip_forward=1

From fox, validate the updated iptables rules.

sudo iptables -t nat -L -n -v

Example:

user@fox:~$ sudo iptables -t nat -L -n -v
Chain PREROUTING (policy ACCEPT 0 packets, 0 bytes)
 pkts bytes target     prot opt in     out     source               destination         

Chain INPUT (policy ACCEPT 0 packets, 0 bytes)
 pkts bytes target     prot opt in     out     source               destination         

Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes)
 pkts bytes target     prot opt in     out     source               destination         

Chain POSTROUTING (policy ACCEPT 0 packets, 0 bytes)
 pkts bytes target     prot opt in     out     source               destination         
    0     0 MASQUERADE  all  --  *      ens5    0.0.0.0/0            0.0.0.0/0  

15. Update route table

In the producer-vpc, create a static route to the consumer's subnet 192.168.20.0/28 with the internal load balancer as the next hop. Once created, any packet within the producer-vpc destined for 192.168.20.0/28 will be directed to the internal load balancer.

Open a new Cloud Shell tab and update your project settings.

Inside Cloud Shell, perform the following:

gcloud config list project
gcloud config set project [YOUR-PROJECT-NAME]
projectid=YOUR-PROJECT-NAME
echo $projectid

In Cloud Shell update the producer-vpc route table with a static route.

gcloud beta compute routes create producer-to-cosmo-subnet-1 --project=$projectid --network=producer-vpc --priority=1000 --destination-range=192.168.20.0/28 --next-hop-ilb=psc-ilb --next-hop-ilb-region=us-central1
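
Optionally, verify the route and its next hop (a sketch):

gcloud compute routes describe producer-to-cosmo-subnet-1 --project=$projectid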

16. Validate successful connectivity from tiger to cosmo

Curl validation

Let's confirm that the producer VM instance, tiger, can communicate with the consumer instance, cosmo, by performing a curl.

Open a new Cloud Shell tab and update your project settings.

Inside Cloud Shell, perform the following:

gcloud config list project
gcloud config set project [YOUR-PROJECT-NAME]
projectid=YOUR-PROJECT-NAME
echo $projectid

Log into the tiger instance using IAP in Cloud Shell.

gcloud compute ssh tiger --project=$projectid --zone=us-central1-a --tunnel-through-iap

From the tiger instance, perform a curl against cosmo's IP address identified earlier in the tutorial.

curl -v <cosmo's IP Address>

Example:

user@tiger:~$ curl -v 192.168.20.2
*   Trying 192.168.20.2:80...
* Connected to 192.168.20.2 (192.168.20.2) port 80 (#0)
> GET / HTTP/1.1
> Host: 192.168.20.2
> User-Agent: curl/7.74.0
> Accept: */*
> 
* Mark bundle as not supporting multiuse
< HTTP/1.1 200 OK
< Date: Fri, 09 Jun 2023 03:49:42 GMT
< Server: Apache/2.4.56 (Debian)
< Last-Modified: Fri, 09 Jun 2023 03:28:37 GMT
< ETag: "27-5fda9f6ea060e"
< Accept-Ranges: bytes
< Content-Length: 39
< Content-Type: text/html
< 
Welcome to cosmo's backend server !!
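
Optionally, you can observe the request arriving on cosmo. Because the PSC interfaces masquerade traffic to their vNIC1 addresses, cosmo sees a source address from the psc-network-attachment subnet. In a separate IAP SSH session to cosmo, run tcpdump (a sketch), then repeat the curl from tiger:

sudo tcpdump -i any -n src net 192.168.10.0/28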

Congratulations!! You have successfully validated connectivity from the producer-vpc to the backend-vpc by performing a curl command.

17. Clean up

From Cloud Shell, delete tutorial components.

gcloud compute instances delete cosmo --zone=us-central1-a --quiet

gcloud compute instances delete rabbit --zone=us-central1-a --quiet

gcloud compute instances delete fox --zone=us-central1-a --quiet

gcloud compute instances delete tiger --zone=us-central1-a --quiet

gcloud compute network-attachments delete psc-network-attachment --region=us-central1 --quiet

gcloud compute firewall-rules delete allow-ingress-to-cosmo allow-tiger-ingress ilb-health-checks ssh-iap-consumer ssh-iap-producer --quiet

gcloud beta compute routes delete producer-to-cosmo-subnet-1 --quiet 

gcloud compute forwarding-rules delete psc-ilb --region=us-central1 --quiet
gcloud compute backend-services delete psc-interface-backend --region=us-central1 --quiet
gcloud compute instance-groups unmanaged delete psc-interface-instances-ig --zone=us-central1-a --quiet
gcloud compute health-checks delete hc-http-80 --quiet
gcloud compute networks subnets delete cosmo-subnet-1 prod-subnet prod-subnet-2 prod-subnet-3 intf-subnet --region=us-central1 --quiet

gcloud compute routers delete cloud-router-for-nat --region=us-central1 --quiet

gcloud compute routers delete cloud-router-for-nat-producer --region=us-central1 --quiet

gcloud compute networks delete consumer-vpc --quiet

gcloud compute networks delete producer-vpc --quiet

gcloud compute networks delete backend-vpc --quiet

18. Congratulations

Congratulations, you've successfully configured a Private Service Connect interface and validated consumer and producer connectivity over VPC peering.

You created the consumer infrastructure and added a network attachment that allowed the producer to create a multi-NIC VM to bridge consumer and producer communication. You learned how a PSC interface can be used to communicate with 1P/3P services over VPC peering by using an internal load balancer and a static route in the producer's VPC.

Cosmopup thinks tutorials are awesome!!

