Private Service Connect 66

About this codelab
40 minutes
Last updated August 28, 2024
Written by Deepak Michael

Private Service Connect revolutionizes how organizations consume services within the Google Cloud ecosystem, providing full support for IPv6 addressing alongside IPv4. It combines enhanced security, simplified connectivity, improved performance, and centralized management, making it an ideal solution for businesses seeking a robust, reliable, and efficient service consumption model that is ready for the future of networking. Whether you're building a hybrid cloud, sharing services across your organization, or accessing third-party services, PSC offers a seamless and secure pathway to harness the full potential of Google Cloud while embracing the benefits of IPv6.

What you'll learn

  • Key benefits of PSC 66
  • Private Service Connect 66 supported translation
  • Dual Stack ULA overview
  • Network requirements
  • Create a Private Service Connect producer service
  • Create a Private Service Connect endpoint
  • Establish connectivity to the Private Service Connect endpoint from a dual-stack VM

What you'll need

  • Google Cloud Project with Owner permissions

2. What you'll build

You'll establish a Producer network to deploy an Apache web server as a published service via Private Service Connect (PSC). Once published, you'll perform the following action to validate access to the Producer service:

  • From a dual-stack GCE instance in the Consumer VPC, target the IPv6 PSC endpoint to reach the Producer service.

Key benefits of PSC 66

  • Seamless Integration: PSC seamlessly integrates with VPC networks configured for IPv6, allowing you to leverage the benefits of IPv6 addressing for your service connections.
  • Dual-Stack Support: PSC supports dual-stack configurations, enabling simultaneous use of IPv4 and IPv6 within the same VPC, providing flexibility and future-proofing your network.
  • Simplified Transition: PSC simplifies the transition to IPv6 by allowing you to gradually adopt IPv6 alongside your existing IPv4 infrastructure.
  • Producer Support: The Producer must adopt a dual-stack configuration, resulting in an IPv6-only Consumer PSC endpoint.

3. Private Service Connect 64 & 66 supported translation

Consumer considerations

The IP version of the endpoint can be either IPv4 or IPv6, but not both. Consumers can use an IPv4 address if the address's subnet is single-stack. Consumers can use an IPv4 or IPv6 address if the address's subnet is dual-stack. Consumers can connect both IPv4 and IPv6 endpoints to the same service attachment, which can be helpful for migrating services to IPv6.
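As a sketch of the migration pattern mentioned above, the same service attachment could be targeted by one IPv4 and one IPv6 endpoint. All resource names and the attachment URI below are illustrative, not part of this codelab:

```shell
# Hypothetical migration aid: an IPv4 and an IPv6 endpoint (forwarding rule)
# both targeting the same service attachment.
gcloud compute forwarding-rules create example-endpoint-v4 \
    --region=us-central1 \
    --network=example-consumer-vpc \
    --address=example-ipv4-address \
    --target-service-attachment=projects/example-project/regions/us-central1/serviceAttachments/example-attachment

gcloud compute forwarding-rules create example-endpoint-v6 \
    --region=us-central1 \
    --network=example-consumer-vpc \
    --address=example-ipv6-address \
    --target-service-attachment=projects/example-project/regions/us-central1/serviceAttachments/example-attachment
```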

Producer considerations

The IP version of the producer forwarding rule determines the IP version of the service attachment and traffic that egresses the service attachment. The IP version of the service attachment can be either IPv4 or IPv6, but not both. Producers can use an IPv4 address if the address's subnet is single-stack. Producers can use an IPv4 or IPv6 address if the address's subnet is dual-stack.

The IP version of the producer forwarding rule's IP address must be compatible with the stack type of the service attachment's NAT subnet.

  • If the producer forwarding rule is IPv4, the NAT subnet can be single-stack or dual-stack.
  • If the producer forwarding rule is IPv6, the NAT subnet must be dual-stack.
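To make the compatibility rule concrete, the NAT subnet created later in this codelab is dual-stack, which satisfies both cases. A minimal sketch (the subnet and network names here are illustrative):

```shell
# A dual-stack PSC NAT subnet can serve either an IPv4 or an IPv6
# producer forwarding rule; a single-stack IPv4 subnet serves IPv4 only.
gcloud compute networks subnets create example-psc-nat-subnet \
    --network=example-producer-vpc \
    --range=172.16.10.0/28 \
    --region=us-central1 \
    --purpose=PRIVATE_SERVICE_CONNECT \
    --stack-type=IPV4_IPV6 \
    --ipv6-access-type=INTERNAL
```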

The following combinations are possible for supported configurations:

  • IPv4 endpoint to IPv4 service attachment
  • IPv6 endpoint to IPv6 service attachment
  • IPv6 endpoint to IPv4 service attachment. In this configuration, Private Service Connect automatically translates between the two IP versions.

The following is not supported:

Private Service Connect doesn't support connecting an IPv4 endpoint with an IPv6 service attachment. In this case, the endpoint creation fails with the following error message:

Private Service Connect forwarding rule with an IPv4 address cannot target an IPv6 service attachment.

4. Dual Stack ULA overview

Google Cloud supports the creation of ULA private IPv6 subnets and VMs. RFC 4193 defines an IPv6 addressing scheme for local communication, ideal for intra-VPC communication. ULA addresses are not globally routable, so your VMs are completely isolated from the internet, providing RFC 1918-like behavior using IPv6. Google Cloud allows the creation of /48 VPC network ULA prefixes so that all your /64 IPv6 ULA subnets are assigned from that VPC network range.

Similar to the globally unique external IPv6 addresses supported by Google Cloud, each ULA IPv6-enabled subnet receives a /64 subnet from the /48 VPC network ULA range, and each VM is assigned a /96 range from that subnet.

RFC 4193 defines IPv6 address space in the range fc00::/7. ULA addresses can be allocated and used freely inside private networks and sites. Google Cloud assigns all ULA addresses from the fd20::/20 range. These addresses are routable only within the scope of VPCs and are not routable on the global IPv6 internet.

ULA addresses assigned by Google Cloud are guaranteed to be unique across all VPC networks. Google Cloud ensures that no two VPC networks are assigned the same ULA prefix. This removes the issue of overlapping ranges in VPC networks.

You can either let Google Cloud auto-assign a /48 prefix to your network or choose a specific /48 IPv6 prefix. If the prefix you specify is already assigned to another VPC network or to your on-premises network, choose another range.
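The address hierarchy described above can be illustrated with simple string composition. The /48 below is an example value only; each VPC network receives its own unique prefix from fd20::/20:

```shell
# Example ULA hierarchy: VPC /48 -> subnet /64 -> VM /96.
VPC_PREFIX="fd20:b4a:ea9f"            # the /48 assigned to the VPC network
SUBNET_PREFIX="${VPC_PREFIX}:2"       # one /64 subnet carved from the /48
VM_RANGE="${SUBNET_PREFIX}:0:0::/96"  # a /96 range assigned to a VM's interface
echo "VPC /48:    ${VPC_PREFIX}::/48"
echo "Subnet /64: ${SUBNET_PREFIX}::/64"
echo "VM /96:     ${VM_RANGE}"
```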

5. Network requirements

Below is the breakdown of network requirements for the Consumer and Producer networks:

Consumer Network (all components deployed in us-central1)

  • VPC: Dual-stack networking requires a custom mode VPC with ULA enabled
  • PSC Endpoint: IPv6 PSC endpoint used to access the Producer service
  • Subnet(s): Dual-stack
  • GCE: Dual-stack

Producer Network (all components deployed in us-central1)

  • VPC: Dual-stack networking requires a custom mode VPC with ULA enabled
  • PSC NAT Subnet: Dual-stack. Packets from the consumer VPC network are translated using source NAT (SNAT) so that their original source IP addresses are converted to source IP addresses from the NAT subnet in the producer's VPC network.
  • PSC Forwarding Rule: Dual-stack. Internal passthrough Network Load Balancer.
  • Health Check: An ingress rule, applicable to the instances being load balanced, that allows traffic from the Google Cloud health checking systems (2600:2d00:1:b029::/64).
  • Backend Service: A backend service acts as a bridge between your load balancer and your backend resources. In this tutorial, the backend service is associated with the unmanaged instance group.
  • Unmanaged Instance Group: Supports VMs that require individual configuration or tuning. Does not support autoscaling.

6. Codelab topology


7. Setup and Requirements

Self-paced environment setup

  1. Sign in to the Google Cloud Console and create a new project or reuse an existing one. If you don't already have a Gmail or Google Workspace account, you must create one.


  • The Project name is the display name for this project's participants. It is a character string not used by Google APIs. You can always update it.
  • The Project ID is unique across all Google Cloud projects and is immutable (cannot be changed after it has been set). The Cloud Console auto-generates a unique string; usually you don't care what it is. In most codelabs, you'll need to reference your Project ID (typically identified as PROJECT_ID). If you don't like the generated ID, you might generate another random one. Alternatively, you can try your own, and see if it's available. It can't be changed after this step and remains for the duration of the project.
  • For your information, there is a third value, a Project Number, which some APIs use. Learn more about all three of these values in the documentation.
  2. Next, you'll need to enable billing in the Cloud Console to use Cloud resources/APIs. Running through this codelab won't cost much, if anything at all. To shut down resources to avoid incurring billing beyond this tutorial, you can delete the resources you created or delete the project. New Google Cloud users are eligible for the $300 USD Free Trial program.

Start Cloud Shell

While Google Cloud can be operated remotely from your laptop, in this codelab you will be using Google Cloud Shell, a command line environment running in the Cloud.

From the Google Cloud Console, click the Cloud Shell icon on the top right toolbar:


It should only take a few moments to provision and connect to the environment. When it is finished, you should see something like this:


This virtual machine is loaded with all the development tools you'll need. It offers a persistent 5GB home directory, and runs on Google Cloud, greatly enhancing network performance and authentication. All of your work in this codelab can be done within a browser. You do not need to install anything.

8. Before you begin

Enable APIs

Inside Cloud Shell, make sure that your project ID is set, and configure the project and region variables:

gcloud config list project
gcloud config set project [YOUR-PROJECT-ID]
project=[YOUR-PROJECT-ID]
region=us-central1
echo $project
echo $region

Enable all necessary services:

gcloud services enable compute.googleapis.com

9. Create Producer VPC Network

VPC Network

Inside Cloud Shell, perform the following:

gcloud compute networks create producer-vpc --subnet-mode custom --enable-ula-internal-ipv6

Google allocates a globally unique /48 ULA prefix to the producer VPC. To view the allocation, perform the following:

In Cloud Console, navigate to:

VPC Networks


Create Subnets

The PSC subnet will be associated with the PSC Service Attachment for the purpose of Network Address Translation. For production use cases, this subnet needs to be sized appropriately to support the amount of inbound traffic from all attached PSC endpoints. See PSC NAT subnet sizing documentation for more information.

Inside Cloud Shell, create the PSC NAT Subnet:

gcloud compute networks subnets create producer-nat-dual-stack-subnet --network producer-vpc --range 172.16.10.0/28 --region $region --purpose=PRIVATE_SERVICE_CONNECT --stack-type=IPV4_IPV6 --ipv6-access-type=INTERNAL

Obtain and note the producer-nat-dual-stack-subnet IPv6 prefix; you'll use it in a later step to create an ingress firewall rule that allows the PSC NAT subnet access to the load balancer backend.

Inside Cloud Shell, obtain the PSC NAT subnet's IPv6 prefix:

gcloud compute networks subnets describe producer-nat-dual-stack-subnet --region=us-central1 | grep -i internalIpv6Prefix:

Expected outcome:

user@cloudshell$ gcloud compute networks subnets describe producer-nat-dual-stack-subnet --region=us-central1 | grep -i internalIpv6Prefix:
internalIpv6Prefix: fd20:b4a:ea9f:2:0:0:0:0/64
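Alternatively, gcloud can extract the field directly (avoiding the grep) and store it in a shell variable for the firewall rule step later in this codelab. The variable name here is illustrative:

```shell
# Capture the subnet's internal IPv6 prefix with a --format value() query.
PSC_NAT_V6=$(gcloud compute networks subnets describe producer-nat-dual-stack-subnet \
    --region=us-central1 \
    --format="value(internalIpv6Prefix)")
echo $PSC_NAT_V6
```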

Inside Cloud Shell, create the producer forwarding rule subnet:

gcloud compute networks subnets create producer-dual-stack-fr-subnet --network producer-vpc --range 172.16.20.0/28 --region $region --enable-private-ip-google-access --stack-type=IPV4_IPV6 --ipv6-access-type=INTERNAL

Inside Cloud Shell, create the producer vm subnet:

gcloud compute networks subnets create producer-dual-stack-vm-subnet --network producer-vpc --range 172.16.30.0/28 --region $region --enable-private-ip-google-access --stack-type=IPV4_IPV6 --ipv6-access-type=INTERNAL

Create the Public NAT gateway

The producer-vm requires internet access to download Apache; however, the GCE instance does not have an external IP address, so Cloud NAT provides internet egress for the package download.

Inside Cloud Shell, create the Cloud Router:

gcloud compute routers create producer-cloud-router --network producer-vpc --region us-central1

Inside Cloud Shell, create the Cloud NAT gateway enabling internet egress:

gcloud compute routers nats create producer-nat-gw --router=producer-cloud-router --auto-allocate-nat-external-ips --nat-all-subnet-ip-ranges --region us-central1

Create Network Firewall Policy and Firewall Rules

Inside Cloud Shell, perform the following:

gcloud compute network-firewall-policies create producer-vpc-policy --global

gcloud compute network-firewall-policies associations create --firewall-policy producer-vpc-policy --network producer-vpc --name producer-vpc --global-firewall-policy

To allow IAP to connect to your VM instances, create a firewall rule that:

  • Applies to all VM instances that you want to be accessible by using IAP.
  • Allows ingress traffic from the IP range 35.235.240.0/20. This range contains all IP addresses that IAP uses for TCP forwarding.

Inside Cloud Shell, perform the following:

gcloud compute network-firewall-policies rules create 1000 --action ALLOW --firewall-policy producer-vpc-policy --description "SSH with IAP" --direction INGRESS --src-ip-ranges 35.235.240.0/20 --layer4-configs tcp:22  --global-firewall-policy

The following firewall rule allows traffic from the health-check probe range to all instances in the network. In a production environment, this firewall rule should be limited to only the instances associated with the specific producer service.

Inside Cloud Shell, perform the following:

gcloud compute network-firewall-policies rules create 2000 --action ALLOW --firewall-policy producer-vpc-policy --description "allow traffic from health check probe range" --direction INGRESS --src-ip-ranges 2600:2d00:1:b029::/64 --layer4-configs tcp:80 --global-firewall-policy

The following firewall rule allows traffic from the PSC NAT Subnet range to all instances in the network. In a production environment, this firewall rule should be limited to only the instances associated with the specific producer service.

In the following firewall rule, replace <insert-your-psc-nat-ipv6-subnet> with the IPv6 PSC NAT subnet prefix obtained earlier in the codelab.

Inside Cloud Shell, perform the following:

gcloud compute network-firewall-policies rules create 2001 --action ALLOW --firewall-policy producer-vpc-policy --description "allow traffic from PSC NAT subnet" --direction INGRESS --src-ip-ranges <insert-your-psc-nat-ipv6-subnet> --global-firewall-policy --layer4-configs=tcp

Create the Producer VM

Inside Cloud Shell, create the producer-vm apache web server:

gcloud compute instances create producer-vm \
    --project=$project \
    --machine-type=e2-micro \
    --image-family debian-12 \
    --no-address \
    --image-project debian-cloud \
    --zone us-central1-a \
    --subnet=producer-dual-stack-vm-subnet \
    --stack-type=IPV4_IPV6 \
    --metadata startup-script="#! /bin/bash
      sudo apt-get update
      sudo apt-get install apache2 -y
      sudo service apache2 restart
      echo 'Welcome to Producer-VM !!' | tee /var/www/html/index.html"

Inside Cloud Shell, create the unmanaged instance group containing the producer-vm instance, and create the health check:

gcloud compute instance-groups unmanaged create producer-instance-group --zone=us-central1-a

gcloud compute instance-groups unmanaged add-instances producer-instance-group  --zone=us-central1-a --instances=producer-vm

gcloud compute health-checks create http hc-http-80 --port=80

10. Create Producer Service

Create Load Balancer Components

Inside Cloud Shell, perform the following:

gcloud compute backend-services create producer-backend-svc --load-balancing-scheme=internal --protocol=tcp --region=us-central1 --health-checks=hc-http-80

gcloud compute backend-services add-backend producer-backend-svc --region=us-central1 --instance-group=producer-instance-group --instance-group-zone=us-central1-a

Allocate an IPv6 address for the producer forwarding rule (internal network load balancer).

In Cloud Shell, perform the following:

gcloud compute addresses create producer-fr-ipv6-address \
    --region=us-central1 \
    --subnet=producer-dual-stack-fr-subnet \
    --ip-version=IPV6

Next, create a forwarding rule (internal network load balancer) that uses the predefined IPv6 address producer-fr-ipv6-address and is associated with the backend service producer-backend-svc.

In Cloud Shell, perform the following:

gcloud compute forwarding-rules create producer-fr --region=us-central1 --load-balancing-scheme=internal --network=producer-vpc --subnet=producer-dual-stack-fr-subnet --address=producer-fr-ipv6-address --ip-protocol=TCP --ports=all --backend-service=producer-backend-svc --backend-service-region=us-central1 --ip-version=IPV6

Create Service Attachment

Inside Cloud Shell, create the Service Attachment:

gcloud compute service-attachments create ipv6-producer-svc-attachment --region=$region --producer-forwarding-rule=producer-fr --connection-preference=ACCEPT_AUTOMATIC --nat-subnets=producer-nat-dual-stack-subnet

Next, obtain and note the service attachment's selfLink URI (the portion starting with projects); you'll use it to configure the PSC endpoint in the consumer environment.

selfLink: projects/<your-project-id>/regions/us-central1/serviceAttachments/ipv6-producer-svc-attachment

Inside Cloud Shell, perform the following:

gcloud compute service-attachments describe ipv6-producer-svc-attachment --region=$region

Example Expected Output

connectionPreference: ACCEPT_AUTOMATIC
creationTimestamp: '2024-08-27T05:59:17.188-07:00'
description: ''
enableProxyProtocol: false
fingerprint: EaultrFOzc4=
id: '8752850315312657226'
kind: compute#serviceAttachment
name: ipv6-producer-svc-attachment
natSubnets:
- https://www.googleapis.com/compute/v1/projects/projectid/regions/us-central1/subnetworks/producer-nat-dual-stack-subnet
pscServiceAttachmentId:
  high: '1053877600257000'
  low: '8752850315312657226'
reconcileConnections: false
region: https://www.googleapis.com/compute/v1/projects/projectid/regions/us-central1
selfLink: https://www.googleapis.com/compute/v1/projects/projectid/regions/us-central1/serviceAttachments/ipv6-producer-svc-attachment
targetService: https://www.googleapis.com/compute/v1/projects/projectid/regions/us-central1/forwardingRules/producer-fr
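If you prefer to capture the URI programmatically rather than copying it from the describe output, a --format query like the following should work. The variable name is illustrative:

```shell
# Store the service attachment selfLink for the consumer endpoint step.
SVC_ATTACHMENT_URI=$(gcloud compute service-attachments describe ipv6-producer-svc-attachment \
    --region=$region \
    --format="value(selfLink)")
echo $SVC_ATTACHMENT_URI
```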

In Cloud Console, navigate to:

Network Services → Private Service Connect → Published Services


11. Create Consumer VPC network

VPC Network

Inside Cloud Shell, create the Consumer VPC with IPv6 ULA enabled:

gcloud compute networks create consumer-vpc \
    --subnet-mode=custom \
    --enable-ula-internal-ipv6

Google allocates a globally unique /48 ULA prefix to the consumer VPC. To view the allocation, perform the following:

In Cloud Console, navigate to:

VPC Networks


Create Subnet

Inside Cloud Shell, create the dual-stack GCE subnet:

gcloud compute networks subnets create consumer-dual-stack-subnet --network consumer-vpc --range=192.168.20.0/28 --stack-type=IPV4_IPV6 --ipv6-access-type=INTERNAL --region $region --enable-private-ip-google-access

Inside Cloud Shell, create the dual-stack PSC endpoint subnet:

gcloud compute networks subnets create psc-dual-stack-endpoint-subnet --network consumer-vpc --range=192.168.21.0/28 --stack-type=IPV4_IPV6 --ipv6-access-type=INTERNAL --region $region --enable-private-ip-google-access

Create Network Firewall Policy and Firewall Rules

Inside Cloud Shell, perform the following:

gcloud compute network-firewall-policies create consumer-vpc-policy --global

gcloud compute network-firewall-policies associations create --firewall-policy consumer-vpc-policy --network consumer-vpc --name consumer-vpc --global-firewall-policy

gcloud compute network-firewall-policies rules create 1000 --action ALLOW --firewall-policy consumer-vpc-policy --description "SSH with IAP" --direction INGRESS --src-ip-ranges 35.235.240.0/20 --layer4-configs tcp:22  --global-firewall-policy

Only SSH access via IAP is needed for the consumer network.

12. Create VM, PSC endpoint and test dual-stack connectivity

Create Test dual-stack VM

Inside Cloud Shell, create the dual-stack GCE instance in the dual-stack subnet:

gcloud compute instances create consumer-vm-ipv4-ipv6 --zone=us-central1-a --subnet=consumer-dual-stack-subnet --no-address --stack-type=IPV4_IPV6

Create PSC endpoint static IPv6 address

Inside Cloud Shell, create a static IPv6 address for the PSC endpoint:

gcloud compute addresses create psc-ipv6-endpoint-ip --region=$region --subnet=psc-dual-stack-endpoint-subnet --ip-version=IPV6

Obtain the PSC endpoint static IPv6 address

Inside Cloud Shell, obtain the PSC IPv6 address that you'll use to reach the Producer service:

gcloud compute addresses describe psc-ipv6-endpoint-ip --region=us-central1 | grep -i address:

Example output:

user@cloudshell$ gcloud compute addresses describe psc-ipv6-endpoint-ip --region=us-central1 | grep -i address:
address: 'fd20:799:4ea3:1::'

Create the IPv6 PSC endpoint

Inside Cloud Shell, create the PSC endpoint, replacing [SERVICE ATTACHMENT URI] with the URI you captured when creating the service attachment:

gcloud compute forwarding-rules create psc-ipv6-endpoint --region=$region --network=consumer-vpc --address=psc-ipv6-endpoint-ip --target-service-attachment=[SERVICE ATTACHMENT URI]

Validate the PSC endpoint

Let's confirm that the Producer has accepted the PSC endpoint. In Cloud Console, navigate to:

Network Services → Private Service Connect → Connected Endpoints


Test Connectivity

Inside Cloud Shell, ssh into dual-stack GCE instance, consumer-vm-ipv4-ipv6.

gcloud compute ssh --zone us-central1-a "consumer-vm-ipv4-ipv6" --tunnel-through-iap --project $project

Now that you're logged in to the dual-stack GCE instance, curl the PSC endpoint, psc-ipv6-endpoint, using the IPv6 address identified in the previous step. Note that the IPv6 literal must be enclosed in square brackets.

curl -6 http://[insert-your-ipv6-psc-endpoint]

Expected output:

user@consumer-vm-ipv4-ipv6$ curl -6 http://[fd20:799:4ea3:1::]
Welcome to Producer-VM !!

From the consumer-vm-ipv4-ipv6 GCE instance, log out by running exit, which returns you to Cloud Shell.

exit

Expected output:

user@consumer-vm-ipv4-ipv6:~$ exit
logout
Connection to compute.715101668351438678 closed.

13. Cleanup steps

From a single Cloud Shell terminal, delete the lab components:

gcloud compute forwarding-rules delete psc-ipv6-endpoint --region=us-central1 -q

gcloud compute instances delete consumer-vm-ipv4-ipv6 --zone=us-central1-a -q

gcloud compute network-firewall-policies rules delete 1000 --firewall-policy=consumer-vpc-policy --global-firewall-policy -q

gcloud compute network-firewall-policies associations delete --firewall-policy=consumer-vpc-policy  --name=consumer-vpc --global-firewall-policy -q

gcloud compute network-firewall-policies delete consumer-vpc-policy --global -q

gcloud compute addresses delete psc-ipv6-endpoint-ip --region=us-central1 -q

gcloud compute networks subnets delete consumer-dual-stack-subnet psc-dual-stack-endpoint-subnet --region=us-central1 -q

gcloud compute networks delete consumer-vpc -q

gcloud compute service-attachments delete ipv6-producer-svc-attachment --region=us-central1 -q

gcloud compute forwarding-rules delete producer-fr --region=us-central1 -q

gcloud compute backend-services delete producer-backend-svc --region=us-central1 -q

gcloud compute health-checks delete hc-http-80 -q

gcloud compute network-firewall-policies rules delete 2001 --firewall-policy producer-vpc-policy --global-firewall-policy -q

gcloud compute network-firewall-policies rules delete 2000 --firewall-policy producer-vpc-policy --global-firewall-policy -q

gcloud compute network-firewall-policies rules delete 1000 --firewall-policy producer-vpc-policy --global-firewall-policy -q

gcloud compute network-firewall-policies associations delete --firewall-policy=producer-vpc-policy  --name=producer-vpc --global-firewall-policy -q

gcloud compute network-firewall-policies delete producer-vpc-policy --global -q

gcloud compute instance-groups unmanaged delete producer-instance-group --zone=us-central1-a -q

gcloud compute instances delete producer-vm --zone=us-central1-a -q

gcloud compute routers nats delete producer-nat-gw --router=producer-cloud-router --router-region=us-central1 -q

gcloud compute routers delete producer-cloud-router --region=us-central1 -q

gcloud compute addresses delete producer-fr-ipv6-address --region=us-central1 -q

gcloud compute networks subnets delete producer-dual-stack-fr-subnet  producer-dual-stack-vm-subnet producer-nat-dual-stack-subnet --region=us-central1 -q

gcloud compute networks delete producer-vpc -q

14. Congratulations

Congratulations, you've successfully configured and validated Private Service Connect 66.

You created the producer infrastructure and learned how to create an IPv6 PSC endpoint in the consumer VPC network that enabled connectivity to the IPv6 Producer service.

Cosmopup thinks codelabs are awesome!!


What's next?

Check out some of these codelabs...

Further reading & Videos

Reference docs