1. Introduction
With Private Service Connect, service producers can expose services in a VPC environment through a Service Attachment and allow consumers in another VPC environment to access those services via a Private Service Connect endpoint. Sometimes these producer services are designed as clusters of VMs, with each VM exposing the same services on identical port numbers. Previously, these service designs required either multiple Private Service Connect endpoints to be deployed on the consumer side, or the use of IP forwarding on the producer side to make sure the correct producer VM was targeted.
Private Service Connect can now natively target the correct destination using Port Mapping. In this lab, you'll learn about the use cases where this feature is required and how to deploy a Port Mapping NEG into a Private Service Connect workload.
What you'll learn
- Private Service Connect Port Mapping use cases
- Key Benefits of PSC Port Mapping
- Network requirements
- Create a Private Service Connect producer service using port mapping
- Create a Private Service Connect endpoint
- Make calls through a Private Service Connect endpoint to a producer service
What you'll need
- Google Cloud Project with Owner permissions
2. Private Service Connect Port Mapping use cases
The Port Mapping feature makes use of a Port Mapping NEG (Network Endpoint Group) that is specific to PSC use cases.
The most common types of producers that can benefit from using Port Mapping are NoSQL database producers and Kafka producers. However, any producer requiring a cluster of VMs exposing the same services on identical ports with specific VM mapping requirements can use this feature.
The producer defines the mapping between a client port and a producer VM + destination port. The producer then needs to share this information with the consumer. The consumer uses the predefined ports to uniquely identify which producer VM + destination port they need to reach. The port used by the consumer is different from the port used by the producer.
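For example (the values are illustrative, and happen to match the mapping deployed later in this codelab), a producer running a two-VM cluster could publish:
- Client port 1001 → VM 1, port 1000
- Client port 1002 → VM 1, port 2000
- Client port 1003 → VM 2, port 1000
- Client port 1004 → VM 2, port 2000
A consumer connecting to the PSC endpoint IP on port 1003 is translated to VM 2 on port 1000.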
Key benefits of PSC Port Mapping
- Simple: Producers deploy PSC components with a port mapping, and consumers deploy a PSC endpoint. PSC handles network address translation automatically.
- Cost-effective: It requires no additional PSC resources or producer VM CPU cycles. Pricing is the same as for other types of PSC deployments.
- High-performance: Port mapping offers the same line-rate throughput and low latency as other PSC modes.
- Scalable and IP-efficient: One IP address from the consumer VPC can access up to 1,000 producer VMs and 1,000 port mappings.
3. Network requirements
- Port Mapping requires the use of an Internal Network Passthrough Load Balancer as the producer load balancer.
- Only PSC endpoints can be used with Port Mapping (not PSC backends or PSC interfaces).
- Port mapping NEGs are regional constructs.
- Port mapping NEGs can only be used across a PSC connection. They will not work if the client VM calls the producer load balancer forwarding rule directly. This is reflected in the way the producer service is tested in this codelab.
- The PSC endpoint and the producer service stack must be in different VPCs.
4. Codelab topology
In the producer VPC, two VMs will be created, each running two web servers: one on port 1000 and one on port 2000. We will test each service before setting up the Portmap NEG, Internal Network Passthrough Load Balancer, and Service Attachment.
In the consumer VPC, we will set up a PSC endpoint and test connectivity to the producer service from a client VM.
5. Setup and Requirements
Self-paced environment setup
- Sign in to the Google Cloud Console and create a new project or reuse an existing one. If you don't already have a Gmail or Google Workspace account, you must create one.
- The Project name is the display name for this project's participants. It is a character string not used by Google APIs. You can always update it.
- The Project ID is unique across all Google Cloud projects and is immutable (cannot be changed after it has been set). The Cloud Console auto-generates a unique string; usually you don't care what it is. In most codelabs, you'll need to reference your Project ID (typically identified as PROJECT_ID). If you don't like the generated ID, you may generate another random one. Alternatively, you can try your own and see if it's available. It can't be changed after this step and remains for the duration of the project.
- For your information, there is a third value, a Project Number, which some APIs use. Learn more about all three of these values in the documentation.
- Next, you'll need to enable billing in the Cloud Console to use Cloud resources/APIs. Running through this codelab won't cost much, if anything at all. To shut down resources to avoid incurring billing beyond this tutorial, you can delete the resources you created or delete the project. New Google Cloud users are eligible for the $300 USD Free Trial program.
Start Cloud Shell
While Google Cloud can be operated remotely from your laptop, in this codelab you will be using Google Cloud Shell, a command line environment running in the Cloud.
From the Google Cloud Console, click the Cloud Shell icon on the top right toolbar.
It should only take a few moments to provision and connect to the environment.
This virtual machine is loaded with all the development tools you'll need. It offers a persistent 5GB home directory, and runs on Google Cloud, greatly enhancing network performance and authentication. All of your work in this codelab can be done within a browser. You do not need to install anything.
6. Before you begin
Enable APIs
Inside Cloud Shell, make sure that your project ID is set, and configure the project, region, and zone variables
gcloud config list project
gcloud config set project [YOUR-PROJECT-ID]
project=[YOUR-PROJECT-ID]
region=us-central1
zone=us-central1-a
echo $project
echo $region
echo $zone
Enable all necessary services
gcloud services enable compute.googleapis.com
7. Create Producer VPC Network
VPC Network
From Cloud Shell
gcloud compute networks create producer-vpc --subnet-mode custom
Create Subnets
From Cloud Shell
gcloud compute networks subnets create producer-service-subnet --network producer-vpc --range 10.0.0.0/24 --region $region --enable-private-ip-google-access
gcloud compute networks subnets create psc-nat-subnet --network producer-vpc --range 10.100.100.0/24 --region $region --purpose=PRIVATE_SERVICE_CONNECT
The PSC subnet will be associated with the PSC Service Attachment for the purpose of Network Address Translation. For production use cases, this subnet needs to be sized appropriately to support the amount of inbound traffic from all attached PSC endpoints. See PSC NAT subnet sizing documentation for more information.
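If a NAT subnet later turns out to be undersized, subnets can generally be expanded in place rather than recreated. A minimal sketch (the /23 prefix length is just an example, and whether in-place expansion applies to a subnet created with --purpose=PRIVATE_SERVICE_CONNECT may depend on current platform support):
gcloud compute networks subnets expand-ip-range psc-nat-subnet --region=$region --prefix-length=23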
Create Network Firewall Policy and Firewall Rules
From Cloud Shell
gcloud compute network-firewall-policies create producer-vpc-policy --global
gcloud compute network-firewall-policies associations create --firewall-policy producer-vpc-policy --network producer-vpc --name network-producer-vpc --global-firewall-policy
To allow IAP to connect to your VM instances, create a firewall rule that:
- Applies to all VM instances that you want to be accessible by using IAP.
- Allows ingress traffic from the IP range 35.235.240.0/20. This range contains all IP addresses that IAP uses for TCP forwarding.
From Cloud Shell
gcloud compute network-firewall-policies rules create 1000 --action ALLOW --firewall-policy producer-vpc-policy --description "SSH with IAP" --direction INGRESS --src-ip-ranges 35.235.240.0/20 --layer4-configs tcp:22 --global-firewall-policy
The following firewall rule allows traffic on TCP ports 1000-2000 from the PSC subnet to all instances in the network. In a production environment, this firewall rule should be limited to only the instances associated with the specific producer service.
From Cloud Shell
gcloud compute network-firewall-policies rules create 2000 --action ALLOW --firewall-policy producer-vpc-policy --description "allow traffic from PSC NAT subnet" --direction INGRESS --src-ip-ranges 10.100.100.0/24 --layer4-configs tcp:1000-2000 --global-firewall-policy
The following firewall rule allows all traffic within the services subnet on TCP ports 1000-2000. This rule will be used to test that our producer service is working appropriately.
From Cloud Shell
gcloud compute network-firewall-policies rules create 2001 --action ALLOW --firewall-policy producer-vpc-policy --description "allow traffic within the service subnet" --direction INGRESS --src-ip-ranges 10.0.0.0/24 --layer4-configs tcp:1000-2000 --global-firewall-policy
Create and Configure Producer VMs
Create VMs
From Cloud Shell
gcloud compute instances create portmap-vm1 --zone=$zone --subnet=producer-service-subnet --no-address
gcloud compute instances create portmap-vm2 --zone=$zone --subnet=producer-service-subnet --no-address
gcloud compute instances create test-client-vm --zone=$zone --subnet=producer-service-subnet --no-address
In the following section, you'll start HTTP servers on ports 1000 and 2000 on each producer VM.
Configure VMs
From Cloud Shell
gcloud compute ssh --zone $zone "portmap-vm1" --tunnel-through-iap --project $project
In Cloud Shell from portmap-vm1 session
mkdir 1000
cd 1000
echo "portmap-vm1 1000" > index.html
sudo python3 -m http.server 1000 &
cd ..
mkdir 2000
cd 2000
echo "portmap-vm1 2000" > index.html
sudo python3 -m http.server 2000 &
Open a new Cloud Shell Window
Start by resetting the variables. In Cloud Shell
project=[YOUR-PROJECT-ID]
region=us-central1
zone=us-central1-a
echo $project
echo $region
echo $zone
gcloud compute ssh --zone $zone "portmap-vm2" --tunnel-through-iap --project $project
In Cloud Shell from portmap-vm2 session
mkdir 1000
cd 1000
echo "portmap-vm2 1000" > index.html
sudo python3 -m http.server 1000 &
cd ..
mkdir 2000
cd 2000
echo "portmap-vm2 2000" > index.html
sudo python3 -m http.server 2000 &
8. Test Producer Service
First, we need to obtain the IP addresses of the portmap instances. Take note of both of these IP addresses.
Open a new Cloud Shell Window
Start by resetting the variables. In Cloud Shell
project=[YOUR-PROJECT-ID]
region=us-central1
zone=us-central1-a
echo $project
echo $region
echo $zone
gcloud compute instances describe portmap-vm1 --format='get(networkInterfaces[0].networkIP)' --zone $zone
gcloud compute instances describe portmap-vm2 --format='get(networkInterfaces[0].networkIP)' --zone $zone
Log into the test instance. In Cloud Shell
gcloud compute ssh --zone $zone "test-client-vm" --tunnel-through-iap --project $project curl [portmap-vm1 IP]:1000
Expected output
portmap-vm1 1000
In Cloud Shell
curl [portmap-vm1 IP]:2000
Expected output
portmap-vm1 2000
In Cloud Shell
curl [portmap-vm2 IP]:1000
Expected output
portmap-vm2 1000
In Cloud Shell
curl [portmap-vm2 IP]:2000
Expected output
portmap-vm2 2000
Exit from test-client-vm
9. Create Producer Service with Portmap NEG
Create Load Balancer Components
From Cloud Shell
gcloud compute network-endpoint-groups create portmap-neg --region=$region --network=producer-vpc --subnet=producer-service-subnet --network-endpoint-type=GCE_VM_IP_PORTMAP
Add endpoints to the Portmap NEG to create the mapping from client port to producer port. The producer creates this mapping and will have its own method to communicate this information to consumers. The specific port mapping is not shared through PSC.
In Cloud Shell
gcloud compute network-endpoint-groups update portmap-neg --region=$region \
--add-endpoint=client-destination-port=1001,instance=projects/$project/zones/$zone/instances/portmap-vm1,port=1000 \
--add-endpoint=client-destination-port=1002,instance=projects/$project/zones/$zone/instances/portmap-vm1,port=2000 \
--add-endpoint=client-destination-port=1003,instance=projects/$project/zones/$zone/instances/portmap-vm2,port=1000 \
--add-endpoint=client-destination-port=1004,instance=projects/$project/zones/$zone/instances/portmap-vm2,port=2000
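You can optionally verify that the mapping was recorded as expected by listing the NEG's endpoints:
gcloud compute network-endpoint-groups list-network-endpoints portmap-neg --region=$region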
Complete the load balancer build-out.
In Cloud Shell
gcloud compute backend-services create portmap-bes --load-balancing-scheme=internal --region=$region --network=producer-vpc
gcloud compute backend-services add-backend portmap-bes --region=$region --network-endpoint-group=portmap-neg --network-endpoint-group-region=$region
gcloud compute forwarding-rules create portmap-fr --load-balancing-scheme=INTERNAL --network=producer-vpc --subnet=producer-service-subnet --ports=ALL --region=$region --backend-service=portmap-bes
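Optionally, before creating the Service Attachment, you can confirm that the NEG is attached to the backend service (the describe output format may vary by API version):
gcloud compute backend-services describe portmap-bes --region=$region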
Create Service Attachment
From Cloud Shell
gcloud compute service-attachments create portmap-service-attachment --region=$region --producer-forwarding-rule=portmap-fr --connection-preference=ACCEPT_AUTOMATIC --nat-subnets=psc-nat-subnet
Next, retrieve and note the Service Attachment URI to configure the PSC endpoint in the consumer environment.
In Cloud Shell
gcloud compute service-attachments describe portmap-service-attachment --region=$region
Example Expected Output
connectionPreference: ACCEPT_AUTOMATIC
creationTimestamp: '2024-07-19T10:02:29.432-07:00'
description: ''
enableProxyProtocol: false
fingerprint: LI8D6JNQsLA=
id: '6207474793859982026'
kind: compute#serviceAttachment
name: portmap-service-attachment
natSubnets:
- https://www.googleapis.com/compute/v1/projects/$project/regions/$region/subnetworks/psc-nat-subnet
pscServiceAttachmentId:
  high: '94288091358954472'
  low: '6207474793859982026'
reconcileConnections: false
region: https://www.googleapis.com/compute/v1/projects/$project/regions/$region
selfLink: https://www.googleapis.com/compute/v1/projects/$project/regions/$region/serviceAttachments/portmap-service-attachment
targetService: https://www.googleapis.com/compute/v1/projects/$project/regions/$region/forwardingRules/portmap-fr
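Rather than copying the URI from the full output, you can optionally capture the selfLink into a shell variable (the variable name here is just an example) for use in the consumer steps:
attachment_uri=$(gcloud compute service-attachments describe portmap-service-attachment --region=$region --format='value(selfLink)')
echo $attachment_uri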
10. Create Consumer VPC network
VPC Network
From Cloud Shell
gcloud compute networks create consumer-vpc --subnet-mode custom
Create Subnet
From Cloud Shell
gcloud compute networks subnets create consumer-client-subnet --network consumer-vpc --range=10.0.0.0/24 --region $region --enable-private-ip-google-access
Create Network Firewall Policy and Firewall Rules
From Cloud Shell
gcloud compute network-firewall-policies create consumer-vpc-policy --global
gcloud compute network-firewall-policies associations create --firewall-policy consumer-vpc-policy --network consumer-vpc --name network-consumer-vpc --global-firewall-policy
gcloud compute network-firewall-policies rules create 1000 --action ALLOW --firewall-policy consumer-vpc-policy --description "SSH with IAP" --direction INGRESS --src-ip-ranges 35.235.240.0/20 --layer4-configs tcp:22 --global-firewall-policy
Only SSH access via IAP is needed in the consumer network.
11. Create VM, PSC Endpoint and Test Connectivity
At this point, there should be three Cloud Shell windows open: one with an open session on portmap-vm1, one with an open session on portmap-vm2, and one as your working session.
Create Test VM
From Cloud Shell
gcloud compute instances create consumer-client-vm --zone $zone --subnet=consumer-client-subnet --no-address
Create PSC Endpoint
From Cloud Shell
gcloud compute addresses create psc-endpoint-ip --region=$region --subnet=consumer-client-subnet --addresses 10.0.0.10
gcloud compute forwarding-rules create psc-portmap-endpoint --region=$region --network=consumer-vpc --address=psc-endpoint-ip --target-service-attachment=[SERVICE ATTACHMENT URI]
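Before testing, you can optionally confirm that the endpoint's connection to the Service Attachment was accepted (the expected value is ACCEPTED, since the producer uses ACCEPT_AUTOMATIC):
gcloud compute forwarding-rules describe psc-portmap-endpoint --region=$region --format='value(pscConnectionStatus)'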
Test Connectivity
From Cloud Shell
gcloud compute ssh --zone $zone "consumer-client-vm" --tunnel-through-iap --project $project curl 10.0.0.10:1001
Expected Output
portmap-vm1 1000
From Cloud Shell
curl 10.0.0.10:1002
Expected Output
portmap-vm1 2000
From Cloud Shell
curl 10.0.0.10:1003
Expected Output
portmap-vm2 1000
From Cloud Shell
curl 10.0.0.10:1004
Expected Output
portmap-vm2 2000
12. Cleanup steps
Exit from VM instance (all windows)
exit
From a single Cloud Shell terminal, delete the lab components
gcloud compute forwarding-rules delete psc-portmap-endpoint --region=$region -q
gcloud compute addresses delete psc-endpoint-ip --region=$region -q
gcloud compute instances delete consumer-client-vm --zone=$zone -q
gcloud compute network-firewall-policies rules delete 1000 --firewall-policy=consumer-vpc-policy --global-firewall-policy -q
gcloud compute network-firewall-policies associations delete --firewall-policy=consumer-vpc-policy --name=network-consumer-vpc --global-firewall-policy -q
gcloud compute network-firewall-policies delete consumer-vpc-policy --global -q
gcloud compute networks subnets delete consumer-client-subnet --region=$region -q
gcloud compute networks delete consumer-vpc -q
gcloud compute service-attachments delete portmap-service-attachment --region=$region -q
gcloud compute forwarding-rules delete portmap-fr --region=$region -q
gcloud compute backend-services delete portmap-bes --region=$region -q
gcloud compute network-endpoint-groups delete portmap-neg --region=$region -q
gcloud compute instances delete test-client-vm --zone=$zone -q
gcloud compute instances delete portmap-vm2 --zone=$zone -q
gcloud compute instances delete portmap-vm1 --zone=$zone -q
gcloud compute network-firewall-policies rules delete 2001 --firewall-policy producer-vpc-policy --global-firewall-policy -q
gcloud compute network-firewall-policies rules delete 2000 --firewall-policy producer-vpc-policy --global-firewall-policy -q
gcloud compute network-firewall-policies rules delete 1000 --firewall-policy producer-vpc-policy --global-firewall-policy -q
gcloud compute network-firewall-policies associations delete --firewall-policy=producer-vpc-policy --name=network-producer-vpc --global-firewall-policy -q
gcloud compute network-firewall-policies delete producer-vpc-policy --global -q
gcloud compute networks subnets delete psc-nat-subnet --region $region -q
gcloud compute networks subnets delete producer-service-subnet --region $region -q
gcloud compute networks delete producer-vpc -q
13. Congratulations!
Congratulations for completing the codelab.
What we've covered
- Private Service Connect Port Mapping use cases
- Key Benefits of PSC Port Mapping
- Network requirements
- Create a Private Service Connect producer service using port mapping
- Create a Private Service Connect endpoint
- Make calls through a Private Service Connect endpoint to a producer service