Agentspace to Zonal NEG self-hosted database

1. Introduction

In this codelab, you will deploy an internal TCP proxy load balancer and a Zonal Network Endpoint Group (NEG), published as a PSC Producer service. The NEG will consist of one or more compute instances in GCP self-hosting a database, e.g. Jira, Confluence, or SharePoint.

Private Service Connect is a capability of Google Cloud networking that allows Consumers to access managed services privately from inside their VPC network. Similarly, it allows managed service Producers to host these services in their own VPC or Cross-Cloud network, offering a private connection to their Consumers. For example, when you use Private Service Connect to publish a Zonal NEG, you are the service Producer, and Agentspace is the service Consumer.

What you'll learn

  • Network requirements for Agentspace
  • Agentspace networking best practices
  • Create a Private Service Connect Producer service

What you'll need

  • Google Cloud Project with Owner permissions

2. What you'll build

You'll establish a Producer network, agentspace-psc-demo, to deploy an internal TCP proxy load balancer and a Zonal NEG published as a service via Private Service Connect (PSC).

3. Network requirements

Below is a breakdown of the network requirements for the Producer network; the Consumer in this codelab is Agentspace.

Components

  • VPC (agentspace-psc-demo): Custom mode VPC.
  • PSC NAT Subnet: Packets from the Consumer VPC network are translated using source NAT (SNAT) so that their original source IP addresses are converted to source IP addresses from the NAT subnet in the Producer's VPC network. PSC NAT supports a /29 subnet per Service Attachment.
  • PSC forwarding rule subnet: Used to allocate an IP address for the Regional Internal TCP Proxy Load Balancer. The forwarding rule subnet is considered a regular subnet.
  • NEG Subnet: Used to allocate an IP address for the Network Endpoint Group from a regular subnet.
  • Proxy Only Subnet: Each of the load balancer's proxies is assigned an internal IP address. Packets sent from a proxy to a backend VM or network endpoint group have a source IP address from the proxy-only subnet. A /23 subnet is recommended, although the minimum, /26, is supported. One regional proxy-only subnet is required per region.
  • Backend Service: A backend service acts as a bridge between your load balancer and your backend resources. In this tutorial, the backend service is associated with the Zonal NEG.
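As a quick check on the subnet sizes above, the number of addresses in a CIDR block is 2^(32 - prefix length). A minimal shell sketch:

```shell
# Addresses available in a CIDR block = 2^(32 - prefix_len).
cidr_size() { echo $(( 2 ** (32 - $1) )); }

cidr_size 29   # PSC NAT subnet per Service Attachment: 8 addresses
cidr_size 26   # minimum proxy-only subnet: 64 addresses
cidr_size 23   # recommended proxy-only subnet: 512 addresses
```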

4. Best Practices

  • Zonal NEGs support one or more zonal GCE instances based on GCE_VM_IP_PORT endpoints.
  • Enable global access on the Producer forwarding rule before creating the Service Attachment.
  • Enable global access when creating the Agentspace endpoint.
  • The Internal TCP Proxy Load Balancer also supports managed and unmanaged instance groups.
  • Existing Google Cloud TCP Proxy or Passthrough load balancers can be exposed as a Producer service.
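If a Producer forwarding rule already exists without global access, it can be updated in place rather than recreated. A sketch (producer-zonal-neg-fr is the forwarding rule created later in this codelab):

```shell
# Enable global access on an existing regional forwarding rule.
gcloud compute forwarding-rules update producer-zonal-neg-fr \
    --region=$region \
    --allow-global-access
```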

5. Codelab topology

(Topology diagram: Producer VPC agentspace-psc-demo with an internal TCP proxy load balancer and Zonal NEG backends, published via a PSC Service Attachment to the Agentspace Consumer.)

6. Setup and Requirements

Self-paced environment setup

  1. Sign in to the Google Cloud Console and create a new project or reuse an existing one. If you don't already have a Gmail or Google Workspace account, you must create one.


  • The Project name is the display name for this project's participants. It is a character string not used by Google APIs. You can always update it.
  • The Project ID is unique across all Google Cloud projects and is immutable (cannot be changed after it has been set). The Cloud Console auto-generates a unique string; usually you don't care what it is. In most codelabs, you'll need to reference your Project ID (typically identified as PROJECT_ID). If you don't like the generated ID, you might generate another random one. Alternatively, you can try your own, and see if it's available. It can't be changed after this step and remains for the duration of the project.
  • For your information, there is a third value, a Project Number, which some APIs use. Learn more about all three of these values in the documentation.
  2. Next, you'll need to enable billing in the Cloud Console to use Cloud resources/APIs. Running through this codelab won't cost much, if anything at all. To shut down resources and avoid incurring billing beyond this tutorial, you can delete the resources you created or delete the project. New Google Cloud users are eligible for the $300 USD Free Trial program.

Start Cloud Shell

While Google Cloud can be operated remotely from your laptop, in this codelab you will be using Google Cloud Shell, a command line environment running in the Cloud.

From the Google Cloud Console, click the Cloud Shell icon on the top right toolbar.

It should only take a few moments to provision and connect to the environment.

This virtual machine is loaded with all the development tools you'll need. It offers a persistent 5GB home directory, and runs on Google Cloud, greatly enhancing network performance and authentication. All of your work in this codelab can be done within a browser. You do not need to install anything.

7. Before you begin

Enable APIs

Inside Cloud Shell, make sure that your project ID is set, and define variables for your project, region, and zones:

gcloud config list project
gcloud config set project [YOUR-PROJECT-ID]
project=[YOUR-PROJECT-ID]
region=[YOUR-REGION]
zone1a=[YOUR-ZONE1a]
zone1b=[YOUR-ZONE1b]
echo $project
echo $region
echo $zone1a
echo $zone1b
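For example, with the reference architecture's region and zones (the project ID below is hypothetical; substitute your own values):

```shell
project=my-psc-project   # hypothetical project ID; use your own
region=us-central1
zone1a=us-central1-a
zone1b=us-central1-b
echo "$region $zone1a $zone1b"
```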

Enable all necessary services:

gcloud services enable compute.googleapis.com

8. Create Producer VPC Network

VPC Network

Inside Cloud Shell, perform the following:

gcloud compute networks create agentspace-psc-demo --subnet-mode custom

Create Subnets

The PSC subnet will be associated with the PSC Service Attachment for the purpose of Network Address Translation.

Inside Cloud Shell, create the PSC NAT Subnet:

gcloud compute networks subnets create producer-psc-nat-subnet --network agentspace-psc-demo --range 172.16.10.0/28 --region $region --purpose=PRIVATE_SERVICE_CONNECT

Inside Cloud Shell, create the Producer forwarding rule subnet:

gcloud compute networks subnets create producer-psc-fr-subnet --network agentspace-psc-demo --range 172.16.20.0/28 --region $region --enable-private-ip-google-access

Inside Cloud Shell, create the Network Endpoint Group subnet:

gcloud compute networks subnets create neg-subnet --network agentspace-psc-demo --range 172.16.30.0/28 --region $region --enable-private-ip-google-access

Inside Cloud Shell, create the Producer regional proxy-only subnet:

gcloud compute networks subnets create $region-proxy-only-subnet \
  --purpose=REGIONAL_MANAGED_PROXY \
  --role=ACTIVE \
  --region=$region \
  --network=agentspace-psc-demo \
  --range=10.10.10.0/24

Reserve the load balancer's IP address

Inside Cloud Shell, reserve an internal IP address for the load balancer:

gcloud compute addresses create zonal-neg-lb-ip \
  --region=$region \
  --subnet=producer-psc-fr-subnet

Inside Cloud Shell, view the reserved IP address:

gcloud compute addresses describe zonal-neg-lb-ip \
  --region=$region | grep -i address:

Example output:

gcloud compute addresses describe zonal-neg-lb-ip --region=$region | grep -i address:
address: 172.16.20.2

Set up the Zonal NEG

In the following section, you will create a Zonal Network Endpoint Group that contains one or more of the following endpoint types:

  • The primary internal IPv4 address of a VM network interface
  • The primary internal IPv4 address of a VM network interface plus a destination port number
  • An internal IPv4 address from the alias IP address range assigned to a VM network interface
  • An internal IPv4 address from the alias IP address range assigned to a VM network interface plus a destination port number

The network interface containing the GCE_VM_IP_PORT endpoint must be in the NEG's subnet. When you omit a port number from a GCE_VM_IP_PORT endpoint, Google Cloud uses the NEG's default port number for the endpoint.
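As an alternative to the instance= form used below, an endpoint can also be pinned to a specific alias IP and port. A sketch (the alias IP 100.100.10.4 is hypothetical and not part of the reference architecture):

```shell
# Add an endpoint using an explicit alias IP and port on the instance's NIC.
gcloud compute network-endpoint-groups update us-central-zonal-neg-1a \
    --zone=$zone1a \
    --add-endpoint="instance=database-us-central1-a,ip=100.100.10.4,port=443"
```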

In the reference architecture, the GCE instances associated with the Zonal NEGs consist of the following:

  • database-us-central1-a | us-central1-a | IP: 100.100.10.2 | Port: 443
  • database-us-central1-b | us-central1-b | IP: 100.100.10.3 | Port: 443
  • Subnet name: database-subnet-1

Create the Zonal NEG for zone1a

In the following section, you'll create a network endpoint group in the zone, e.g. us-central1-a, specifying the subnet name used to create the GCE instance. In the reference architecture, the subnet name is database-subnet-1.

Inside Cloud Shell, create a Zonal NEG:

gcloud compute network-endpoint-groups create us-central-zonal-neg-1a \
    --zone=$zone1a \
    --network=agentspace-psc-demo \
    --subnet=database-subnet-1 \
    --default-port=443

Inside Cloud Shell, update the Zonal NEG with the IP:Port of the GCE instance deployed in zone1a. In the reference architecture, this is 100.100.10.2, port 443, in zone us-central1-a:

gcloud compute network-endpoint-groups update us-central-zonal-neg-1a --zone=$zone1a --add-endpoint instance=database-us-central1-a,port=443

Create the Zonal NEG for zone1b

In the following section, you'll create a network endpoint group in the zone, e.g. us-central1-b, specifying the subnet name used to create the GCE instance. In the reference architecture, the subnet name is database-subnet-1.

Inside Cloud Shell, create a Zonal NEG:

gcloud compute network-endpoint-groups create us-central-zonal-neg-1b \
    --zone=$zone1b \
    --network=agentspace-psc-demo \
    --subnet=database-subnet-1 \
    --default-port=443

Inside Cloud Shell, update the Zonal NEG with the IP:Port of the GCE instance deployed in zone1b. In the reference architecture, this is 100.100.10.3, port 443, in zone us-central1-b:

gcloud compute network-endpoint-groups update us-central-zonal-neg-1b --zone=$zone1b --add-endpoint instance=database-us-central1-b,port=443

Create a regional health check

Inside Cloud Shell, create a health check that probes the database port, 443:

gcloud compute health-checks create tcp zonal-443-healthcheck \
    --region=$region \
    --port=443

Create Network Firewall Policy and Firewall Rules

Inside Cloud Shell, perform the following:

gcloud compute network-firewall-policies create agentspace-psc-demo-policy --global

gcloud compute network-firewall-policies associations create --firewall-policy agentspace-psc-demo-policy --network agentspace-psc-demo --name agentspace-psc-demo --global-firewall-policy

The following firewall rule allows traffic from the PSC NAT Subnet range to all instances in the network.

Inside Cloud Shell, perform the following:

gcloud compute network-firewall-policies rules create 2001 --action ALLOW --firewall-policy agentspace-psc-demo-policy --description "allow traffic from PSC NAT subnet to GCE" --direction INGRESS --src-ip-ranges 172.16.10.0/28 --global-firewall-policy --layer4-configs=tcp

The following firewall rule allows traffic from the Google Cloud health check probe ranges to all instances in the network. Note that the health check port and application port must match.

Inside Cloud Shell, perform the following:

gcloud compute network-firewall-policies rules create 2002 --action ALLOW --firewall-policy agentspace-psc-demo-policy --description "allow internal probe health check range to GCE" --direction INGRESS --src-ip-ranges 35.191.0.0/16,130.211.0.0/22 --global-firewall-policy --layer4-configs=tcp:443

The following firewall rule allows traffic from the proxy-only subnet range to all instances in the network. Note that the allowed port must match the application port.

Inside Cloud Shell, perform the following:

gcloud compute network-firewall-policies rules create 2003 --action ALLOW --firewall-policy agentspace-psc-demo-policy --description "allow proxy-only subnet range to GCE" --direction INGRESS --src-ip-ranges 10.10.10.0/24 --global-firewall-policy --layer4-configs=tcp:443

9. Create Producer Service

Create Load Balancer Components

Inside Cloud Shell, create a backend service:

gcloud compute backend-services create producer-backend-svc \
   --region=$region \
   --load-balancing-scheme=INTERNAL_MANAGED \
   --protocol=TCP \
   --health-checks=zonal-443-healthcheck \
   --health-checks-region=$region

Inside Cloud Shell, associate the Zonal NEG, us-central-zonal-neg-1a, to the backend service:

gcloud compute backend-services add-backend producer-backend-svc \
   --network-endpoint-group=us-central-zonal-neg-1a  \
   --network-endpoint-group-zone=$zone1a \
   --balancing-mode=CONNECTION \
   --max-connections-per-endpoint=100 \
   --region=$region

Inside Cloud Shell, associate the Zonal NEG, us-central-zonal-neg-1b, to the backend service:

gcloud compute backend-services add-backend producer-backend-svc \
   --network-endpoint-group=us-central-zonal-neg-1b  \
   --network-endpoint-group-zone=$zone1b \
   --balancing-mode=CONNECTION \
   --max-connections-per-endpoint=100 \
   --region=$region

In Cloud Shell, create a target TCP proxy to route requests to your backend service:

gcloud compute target-tcp-proxies create producer-lb-tcp-proxy \
      --backend-service=producer-backend-svc  \
      --region=$region

Next, create a forwarding rule (the internal TCP proxy load balancer's frontend) with global access enabled.

In Cloud Shell, perform the following:

gcloud compute forwarding-rules create producer-zonal-neg-fr \
     --load-balancing-scheme=INTERNAL_MANAGED \
     --network-tier=PREMIUM \
     --network=agentspace-psc-demo \
     --subnet=producer-psc-fr-subnet \
     --address=zonal-neg-lb-ip \
     --target-tcp-proxy=producer-lb-tcp-proxy \
     --target-tcp-proxy-region=$region \
     --region=$region \
     --allow-global-access \
     --ports=443

Validate Backend Health

Validate the health (green status) of the backend service and its associated compute instances using the Cloud Console. Navigate to the following:

Network Services → Load Balancing → producer-backend-svc
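Alternatively, backend health can be checked from Cloud Shell. A sketch using gcloud's get-health command:

```shell
# Report the health status of each endpoint behind the backend service.
gcloud compute backend-services get-health producer-backend-svc \
    --region=$region
```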


Create Service Attachment

To publish a service, you must create a Private Service Connect service attachment. You can publish the service with either automatic approval or explicit approval.

  • To publish the service and automatically allow any Consumer to connect to it, follow the instructions at Publish a service with automatic approval.
  • To publish the service with explicit Consumer approval, in the service attachment connection settings, select Accept connections for selected projects and leave the Accepted projects field blank.
  • After you generate the service attachment, Consumer endpoints that request access to the Producer service will initially enter a pending state. To authorize the connection, the Producer must then accept the project from which the Consumer endpoint request originated.

Inside Cloud Shell, create the Service Attachment, zonal-database1-svc-attachment, with automatic approval:

gcloud compute service-attachments create zonal-database1-svc-attachment --region=$region --producer-forwarding-rule=producer-zonal-neg-fr --connection-preference=ACCEPT_AUTOMATIC --nat-subnets=producer-psc-nat-subnet

Next, obtain and note the Service Attachment URI listed in the selfLink (the path starting with projects/); you will use it to configure the PSC endpoint in Agentspace.

selfLink: projects/<your-project-id>/regions/<your-region>/serviceAttachments/zonal-database1-svc-attachment

Inside Cloud Shell, perform the following:

gcloud compute service-attachments describe zonal-database1-svc-attachment --region=$region

Example Expected Output:

connectionPreference: ACCEPT_AUTOMATIC
creationTimestamp: '2025-07-12T16:00:22.429-07:00'
description: ''
enableProxyProtocol: false
fingerprint: zOpeRQnPWSc=
id: '1784245893044590569'
kind: compute#serviceAttachment
name: zonal-database1-svc-attachment
natSubnets:
- https://www.googleapis.com/compute/v1/projects/$project-svc4/regions/us-central1/subnetworks/producer-psc-nat-subnet
pscServiceAttachmentId:
  high: '119824781489996776'
  low: '1784245893044590569'
reconcileConnections: false
region: https://www.googleapis.com/compute/v1/projects/$project-svc4/regions/us-central1
selfLink: https://www.googleapis.com/compute/v1/projects/$project-svc4/regions/us-central1/serviceAttachments/zonal-database1-svc-attachment
targetService: https://www.googleapis.com/compute/v1/projects/$project-svc4/regions/us-central1/forwardingRules/producer-zonal-neg-fr
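The relative URI Agentspace expects can be trimmed from the full selfLink URL with plain shell. A minimal sketch, using a hypothetical selfLink value:

```shell
# Hypothetical selfLink value copied from the describe output above.
selfLink="https://www.googleapis.com/compute/v1/projects/my-psc-project/regions/us-central1/serviceAttachments/zonal-database1-svc-attachment"

# Strip everything up to and including the first "/projects/", then re-prefix.
uri="projects/${selfLink#*/projects/}"
echo "$uri"
```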

In Cloud Console, navigate to:

Network Services → Private Service Connect → Published Services


10. Establish a PSC Endpoint Connection in Agentspace

Associate the Producer's Service Attachment URI with Agentspace, ensuring global access is selected when creating the endpoint.

To finalize private networking, consult the Agentspace third-party data sources documentation for further instructions.

Validate the PSC endpoint in Cloud Console

To confirm a successful PSC connection between Agentspace (the Consumer) and the Producer, verify that the Agentspace tenant project is linked to the Producer service. This can be found under 'Connected Projects'. The tenant project ID is randomly assigned but will always end with 'tp'.

From the Cloud Console, you can validate the PSC connection. Navigate to:

Network Services → Private Service Connect → Published Services, then select the service, zonal-database1-svc-attachment.

11. Clean up

From a single Cloud Shell terminal, delete the lab components:

gcloud compute service-attachments delete zonal-database1-svc-attachment --region=$region -q

gcloud compute forwarding-rules delete producer-zonal-neg-fr --region=$region -q

gcloud compute target-tcp-proxies delete producer-lb-tcp-proxy --region=$region -q

gcloud compute backend-services delete producer-backend-svc --region=$region -q

gcloud compute network-firewall-policies rules delete 2001 --firewall-policy agentspace-psc-demo-policy --global-firewall-policy -q

gcloud compute network-firewall-policies rules delete 2002 --firewall-policy agentspace-psc-demo-policy --global-firewall-policy -q

gcloud compute network-firewall-policies rules delete 2003 --firewall-policy agentspace-psc-demo-policy --global-firewall-policy -q

gcloud compute network-firewall-policies associations delete --firewall-policy=agentspace-psc-demo-policy  --name=agentspace-psc-demo --global-firewall-policy -q

gcloud compute network-firewall-policies delete agentspace-psc-demo-policy --global -q

gcloud compute network-endpoint-groups delete us-central-zonal-neg-1a --zone=$zone1a -q

gcloud compute network-endpoint-groups delete us-central-zonal-neg-1b --zone=$zone1b -q

gcloud compute addresses delete zonal-neg-lb-ip --region=$region -q

gcloud compute networks subnets delete $region-proxy-only-subnet --region=$region -q

gcloud compute networks subnets delete producer-psc-nat-subnet --region=$region -q

gcloud compute networks subnets delete producer-psc-fr-subnet --region=$region -q

gcloud compute networks subnets delete neg-subnet --region=$region -q

gcloud compute health-checks delete zonal-443-healthcheck --region=$region -q

gcloud compute networks delete agentspace-psc-demo -q

12. Congratulations

Congratulations, you've successfully configured and published a Producer service with Private Service Connect.

You created the Producer infrastructure, learned how to create a Zonal NEG and a Producer service, and associated the Service Attachment with Agentspace.

Cosmopup thinks codelabs are awesome!!


What's next?

Check out some of these codelabs...

Further reading & Videos

Reference docs