1. Introduction
In this codelab you will establish a southbound connection to an on-premises PostgreSQL database over HA VPN using an internal TCP proxy load balancer and a hybrid network endpoint group, invoked from Looker PSC as a Service Consumer.
Private Service Connect is a capability of Google Cloud networking that allows consumers to access managed services privately from inside their VPC network. Similarly, it allows managed service producers to host these services in their own separate VPC networks and offer a private connection to their consumers. For example, when you use Private Service Connect to access Looker, you are the service consumer, and Google is the service producer, as highlighted in Figure 1.
Figure 1.
Southbound access, also known as reverse PSC, enables the Consumer to create a Published Service as a Producer, allowing Looker to access endpoints on-premises, in a VPC, in managed services, and in hybrid environments. Southbound connections can be deployed in any region, irrespective of where Looker PSC is deployed, as highlighted in Figure 2.
Figure 2.
What you'll learn
- Network requirements
- Create a Private Service Connect producer service
- Create a Private Service Connect endpoint in Looker
- Establish connectivity to the on-premises postgres database from Looker using a Test Connection
What you'll need
- Google Cloud Project with Owner permissions
- Existing Looker PSC Instance
2. What you'll build
You'll establish a Producer network, looker-psc-demo, to deploy an internal TCP proxy load balancer and a Hybrid NEG published as a service via Private Service Connect (PSC). To demonstrate an on-premises database, you will deploy an on-prem-demo VPC connected to the looker-psc-demo VPC using HA VPN.
You'll perform the following actions to validate access to the Producer service:
- Create a PSC Endpoint in Looker associated with the Producer Service Attachment
- Use the Looker Console to perform a connection validation to the on-premises postgres database
3. Network requirements
Below is the breakdown of network requirements for the Producer network; the consumer in this codelab is the Looker PSC instance.
Components | Description
--- | ---
VPC (looker-psc-demo) | Custom mode VPC
VPC (on-prem-demo) | Custom mode VPC
PSC NAT Subnet | Packets from the consumer VPC network are translated using source NAT (SNAT) so that their original source IP addresses are converted to source IP addresses from the NAT subnet in the producer's VPC network.
PSC forwarding rule subnet | Used to allocate an IP address for the regional internal TCP proxy load balancer
PSC NEG Subnet | Used to allocate an IP address for the network endpoint group
Proxy-only subnet | Each of the load balancer's proxies is assigned an internal IP address. Packets sent from a proxy to a backend VM or endpoint have a source IP address from the proxy-only subnet.
Hybrid NEG | On-premises and other cloud services are treated like any other Cloud Load Balancing backend. The key difference is that you use a hybrid connectivity NEG to configure the endpoints of these backends. The endpoints must be valid IP:port combinations that your load balancer can reach by using hybrid connectivity products such as Cloud VPN or Cloud Interconnect.
Backend Service | A backend service acts as a bridge between your load balancer and your backend resources. In this tutorial, the backend service is associated with the Hybrid NEG.
Cloud Router | Cloud NAT and HA VPN each depend on a Cloud Router; HA VPN uses it to exchange routes with the peer network over BGP.
HA VPN | HA VPN between Google Cloud VPC networks. In this topology, you can connect two Google Cloud VPC networks by using an HA VPN gateway in each network. The VPC networks can be in the same region or multiple regions.
Cloud NAT | Used by the on-prem-demo VPC for internet egress
4. Codelab topology
5. Setup and Requirements
Self-paced environment setup
- Sign in to the Google Cloud Console and create a new project or reuse an existing one. If you don't already have a Gmail or Google Workspace account, you must create one.
- The Project name is the display name for this project's participants. It is a character string not used by Google APIs. You can always update it.
- The Project ID is unique across all Google Cloud projects and is immutable (it cannot be changed after it has been set). The Cloud Console auto-generates a unique string; usually you don't care what it is. In most codelabs, you'll need to reference your Project ID (typically identified as PROJECT_ID). If you don't like the generated ID, you can generate another random one, or try your own and see if it's available. It can't be changed after this step and remains for the duration of the project.
- For your information, there is a third value, a Project Number, which some APIs use. Learn more about all three of these values in the documentation.
- Next, you'll need to enable billing in the Cloud Console to use Cloud resources/APIs. Running through this codelab won't cost much, if anything at all. To shut down resources to avoid incurring billing beyond this tutorial, you can delete the resources you created or delete the project. New Google Cloud users are eligible for the $300 USD Free Trial program.
Start Cloud Shell
While Google Cloud can be operated remotely from your laptop, in this codelab you will be using Google Cloud Shell, a command line environment running in the Cloud.
From the Google Cloud Console, click the Cloud Shell icon on the top right toolbar:
It should only take a few moments to provision and connect to the environment. When it is finished, you should see something like this:
This virtual machine is loaded with all the development tools you'll need. It offers a persistent 5GB home directory, and runs on Google Cloud, greatly enhancing network performance and authentication. All of your work in this codelab can be done within a browser. You do not need to install anything.
6. Before you begin
Enable APIs
Inside Cloud Shell, make sure that your project ID is set, and configure the helper variables:
gcloud config list project
gcloud config set project [YOUR-PROJECT-ID]
project=[YOUR-PROJECT-ID]
region=[YOUR-REGION]
zone=[YOUR-ZONE]
echo $project
echo $region
Enable all necessary services:
gcloud services enable compute.googleapis.com
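The gcloud looker commands used later in this codelab also depend on the Looker API. It is typically already enabled in a project hosting a Looker (Google Cloud core) instance; if not, enable it as well (looker.googleapis.com is the assumed service name):
gcloud services enable looker.googleapis.com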
7. Create Producer VPC Network
VPC Network
Inside Cloud Shell, perform the following:
gcloud compute networks create looker-psc-demo --subnet-mode custom
Create Subnets
The PSC subnet will be associated with the PSC Service Attachment for the purpose of Network Address Translation.
Inside Cloud Shell, create the PSC NAT Subnet:
gcloud compute networks subnets create producer-psc-nat-subnet --network looker-psc-demo --range 172.16.10.0/28 --region $region --purpose=PRIVATE_SERVICE_CONNECT
Inside Cloud Shell, create the producer forwarding rule subnet:
gcloud compute networks subnets create producer-psc-fr-subnet --network looker-psc-demo --range 172.16.20.0/28 --region $region --enable-private-ip-google-access
Inside Cloud Shell, create the producer regional proxy only subnet:
gcloud compute networks subnets create $region-proxy-only-subnet \
--purpose=REGIONAL_MANAGED_PROXY \
--role=ACTIVE \
--region=$region \
--network=looker-psc-demo \
--range=10.10.10.0/24
Reserve the load balancer's IP address
Inside Cloud Shell, reserve an internal IP address for the load balancer:
gcloud compute addresses create hybrid-neg-lb-ip \
--region=$region \
--subnet=producer-psc-fr-subnet
Inside Cloud Shell, view the reserved IP address:
gcloud compute addresses describe hybrid-neg-lb-ip \
--region=$region | grep -i address:
Example output:
gcloud compute addresses describe hybrid-neg-lb-ip --region=$region | grep -i address:
address: 172.16.20.2
Set up the Hybrid NEG
Create a Hybrid NEG, and set the --network-endpoint-type to NON_GCP_PRIVATE_IP_PORT.
Inside Cloud Shell, create a Hybrid NEG used to access the on-prem database:
gcloud compute network-endpoint-groups create on-prem-hybrid-neg \
--network-endpoint-type=NON_GCP_PRIVATE_IP_PORT \
--network=looker-psc-demo \
--zone=$zone
Inside Cloud Shell, update the Hybrid NEG with the IP:port of the on-prem database (192.168.10.4, port 5432), which you will create in a later step of this tutorial:
gcloud compute network-endpoint-groups update on-prem-hybrid-neg \
--add-endpoint=ip=192.168.10.4,port=5432 \
--zone=$zone
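Optionally, confirm that the endpoint was recorded; this verification step is not part of the original flow:
gcloud compute network-endpoint-groups list-network-endpoints on-prem-hybrid-neg --zone=$zone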
Create a regional health check
Inside Cloud Shell, create a health-check that probes the on-prem database port, 5432:
gcloud compute health-checks create tcp on-prem-5432-healthcheck \
--region=$region \
--port=5432
Create Network Firewall Policy and Firewall Rules
Inside Cloud Shell, perform the following:
gcloud compute network-firewall-policies create looker-psc-demo-policy --global
gcloud compute network-firewall-policies associations create --firewall-policy looker-psc-demo-policy --network looker-psc-demo --name looker-psc-demo --global-firewall-policy
The following firewall rule allows traffic from the PSC NAT Subnet range to all instances in the network.
Inside Cloud Shell, perform the following:
gcloud compute network-firewall-policies rules create 2001 --action ALLOW --firewall-policy looker-psc-demo-policy --description "allow traffic from PSC NAT subnet" --direction INGRESS --src-ip-ranges 172.16.10.0/28 --global-firewall-policy --layer4-configs=tcp
8. Create Producer Service
Create Load Balancer Components
Inside Cloud Shell, create a backend service:
gcloud compute backend-services create producer-backend-svc --region=$region --load-balancing-scheme=INTERNAL_MANAGED --protocol=TCP --health-checks=on-prem-5432-healthcheck --health-checks-region=$region
Inside Cloud Shell, add the Hybrid NEG backend to the backend service:
gcloud compute backend-services add-backend producer-backend-svc --network-endpoint-group=on-prem-hybrid-neg --network-endpoint-group-zone=$zone --balancing-mode=CONNECTION --max-connections=100 --region=$region
In Cloud Shell, create a target TCP proxy to route requests to your backend service:
gcloud compute target-tcp-proxies create producer-lb-tcp-proxy \
--backend-service=producer-backend-svc \
--region=$region
Next, create a forwarding rule, the frontend of the internal TCP proxy load balancer.
In Cloud Shell, perform the following:
gcloud compute forwarding-rules create producer-hybrid-neg-fr \
--load-balancing-scheme=INTERNAL_MANAGED \
--network-tier=PREMIUM \
--network=looker-psc-demo \
--subnet=producer-psc-fr-subnet \
--address=hybrid-neg-lb-ip \
--target-tcp-proxy=producer-lb-tcp-proxy \
--target-tcp-proxy-region=$region \
--region=$region \
--ports=5432
Create Service Attachment
Inside Cloud Shell, create the Service Attachment, onpremdatabase1-svc-attachment:
gcloud compute service-attachments create onpremdatabase1-svc-attachment --region=$region --producer-forwarding-rule=producer-hybrid-neg-fr --connection-preference=ACCEPT_AUTOMATIC --nat-subnets=producer-psc-nat-subnet
Next, obtain and note the Service Attachment URI from the selfLink output, the portion starting with projects/; you will use it to configure the PSC endpoint in Looker.
selfLink: projects/<your-project-id>/regions/<your-region>/serviceAttachments/onpremdatabase1-svc-attachment
Inside Cloud Shell, perform the following:
gcloud compute service-attachments describe onpremdatabase1-svc-attachment --region=$region
Example Expected Output:
connectionPreference: ACCEPT_AUTOMATIC
creationTimestamp: '2024-09-01T16:07:51.600-07:00'
description: ''
enableProxyProtocol: false
fingerprint: cFt9rERR1iE=
id: '2549689544315850024'
kind: compute#serviceAttachment
name: onpremdatabase1-svc-attachment
natSubnets:
- https://www.googleapis.com/compute/v1/projects/$project/regions/$region/subnetworks/producer-psc-nat-subnet
pscServiceAttachmentId:
high: '19348441121424360'
low: '2549689544315850024'
reconcileConnections: false
region: https://www.googleapis.com/compute/v1/projects/$project/regions/$region
selfLink: https://www.googleapis.com/compute/v1/projects/$project/regions/$region/serviceAttachments/onpremdatabase1-svc-attachment
targetService: https://www.googleapis.com/compute/v1/projects/$project/regions/$region/forwardingRules/producer-hybrid-neg-fr
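If you prefer to capture the Service Attachment URI programmatically instead of copying it from the output above, a minimal sketch (the sed expression trims the URL prefix ahead of projects/):
attachment_uri=$(gcloud compute service-attachments describe onpremdatabase1-svc-attachment --region=$region --format="value(selfLink)" | sed 's|.*/projects/|projects/|')
echo $attachment_uri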
In Cloud Console, navigate to:
Network Services → Private Service Connect → Published Services
9. Establish a PSC Endpoint Connection in Looker
In the following section, you will associate the Producer Service Attachment with Looker Core PSC by using the --psc-service-attachment flag in Cloud Shell for a single domain.
Inside Cloud Shell, create the PSC association by updating the following parameters to match your environment:
- INSTANCE_NAME: The name of your Looker (Google Cloud core) instance.
- DOMAIN_1: onprem.database1.com
- SERVICE_ATTACHMENT_URI_1: The URI captured when creating the Service Attachment, onpremdatabase1-svc-attachment
- REGION: The region in which your Looker (Google Cloud core) instance is hosted.
Inside Cloud Shell, perform the following:
gcloud looker instances update INSTANCE_NAME \
--psc-service-attachment domain=DOMAIN_1,attachment=SERVICE_ATTACHMENT_URI_1 \
--region=REGION
Example:
gcloud looker instances update looker-psc-instance --psc-service-attachment domain=onprem.database1.com,attachment=projects/$project/regions/$region/serviceAttachments/onpremdatabase1-svc-attachment --region=$region
Inside Cloud Shell, validate that the serviceAttachments connectionStatus is "ACCEPTED"; update the command with your Looker PSC instance name:
gcloud looker instances describe [INSTANCE_NAME] --region=$region --format=json
Example:
gcloud looker instances describe looker-psc-instance --region=$region --format=json
Example output:
{
"adminSettings": {},
"createTime": "2024-08-23T00:00:45.339063195Z",
"customDomain": {
"domain": "looker.cosmopup.com",
"state": "AVAILABLE"
},
"encryptionConfig": {},
"lookerVersion": "24.14.18",
"name": "projects/$project/locations/$region/instances/looker-psc-instance",
"platformEdition": "LOOKER_CORE_ENTERPRISE_ANNUAL",
"pscConfig": {
"allowedVpcs": [
"projects/$project/global/networks/looker-psc-demo",
"projects/$project/global/networks/looker-shared-vpc"
],
"lookerServiceAttachmentUri": "projects/t7ec792caf2a609d1-tp/regions/$region/serviceAttachments/looker-psc-f51982e2-ac0d-48b1-91bb-88656971c183",
"serviceAttachments": [
{
"connectionStatus": "ACCEPTED",
"localFqdn": "onprem.database1.com",
"targetServiceAttachmentUri": "projects/$project/regions/$region/serviceAttachments/onpremdatabase1-svc-attachment"
}
]
},
"pscEnabled": true,
"state": "ACTIVE",
"updateTime": "2024-09-01T23:15:07.426372901Z"
}
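To check just the connection status without scanning the full JSON, you can use a gcloud projection; a sketch assuming a single service attachment on the instance:
gcloud looker instances describe looker-psc-instance --region=$region --format="value(pscConfig.serviceAttachments[0].connectionStatus)"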
Validate the PSC endpoint in Cloud Console
From Cloud Console, you can validate the PSC connection.
In Cloud Console, navigate to:
Looker → Looker Instance → Details
Create the on-prem VPC Network
VPC Network
Inside Cloud Shell, perform the following:
gcloud compute networks create on-prem-demo --project=$project --subnet-mode=custom
Create the PostgreSQL database subnet
Inside Cloud Shell, perform the following:
gcloud compute networks subnets create database-subnet --project=$project --range=192.168.10.0/28 --network=on-prem-demo --region=$region
Inside Cloud Shell, reserve the internal IPv4 address 192.168.10.4, used for onprem.database1.com:
gcloud compute addresses create on-prem-database1-ip --region=$region --subnet=database-subnet --addresses 192.168.10.4
Create the Cloud Router for the on-prem-demo VPC
Cloud NAT is used in the tutorial for software package installation because the VM instance does not have an external IP address.
Inside Cloud Shell, create the Cloud Router used with Cloud NAT & HA-VPN:
gcloud compute routers create on-prem-cr \
--region=$region \
--network=on-prem-demo \
--asn=65002
Inside Cloud Shell, create the NAT gateway:
gcloud compute routers nats create on-prem-nat --router=on-prem-cr --auto-allocate-nat-external-ips --nat-all-subnet-ip-ranges --region $region
Create the database test instance
Create a postgres-database instance that will be used to test and validate connectivity to Looker.
Inside Cloud Shell, create the instance:
gcloud compute instances create postgres-database \
--project=$project \
--zone=$zone \
--machine-type=e2-medium \
--subnet=database-subnet \
--no-address \
--private-network-ip 192.168.10.4 \
--image-family debian-12 \
--image-project debian-cloud \
--metadata startup-script="#! /bin/bash
sudo apt-get update
sudo apt-get -y install postgresql postgresql-client postgresql-contrib"
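Optionally, verify that the instance received the reserved address, 192.168.10.4 (not part of the original flow):
gcloud compute instances describe postgres-database --zone=$zone --format="value(networkInterfaces[0].networkIP)"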
Create Network Firewall Policy and Firewall Rules
Inside Cloud Shell, perform the following:
gcloud compute network-firewall-policies create on-prem-demo-policy --global
gcloud compute network-firewall-policies associations create --firewall-policy on-prem-demo-policy --network on-prem-demo --name on-prem-demo --global-firewall-policy
To allow IAP to connect to your VM instances, create a firewall rule that:
- Applies to all VM instances that you want to be accessible by using IAP.
- Allows ingress traffic from the IP range 35.235.240.0/20. This range contains all IP addresses that IAP uses for TCP forwarding.
Inside Cloud Shell, perform the following:
gcloud compute network-firewall-policies rules create 1000 --action ALLOW --firewall-policy on-prem-demo-policy --description "SSH with IAP" --direction INGRESS --src-ip-ranges 35.235.240.0/20 --layer4-configs tcp:22 --global-firewall-policy
The following firewall rule allows traffic from the proxy-only subnet range to all instances in the network.
Inside Cloud Shell, perform the following:
gcloud compute network-firewall-policies rules create 2001 --action ALLOW --firewall-policy on-prem-demo-policy --description "allow traffic from proxy only subnet" --direction INGRESS --src-ip-ranges 10.10.10.0/24 --global-firewall-policy --layer4-configs=tcp
10. Hybrid connectivity
In the following section, you will create a Cloud Router that enables you to dynamically exchange routes between your Virtual Private Cloud (VPC) and peer network by using Border Gateway Protocol (BGP).
Cloud Router can set up a BGP session over a Cloud VPN tunnel to connect your networks. It automatically learns new subnet IP address ranges and announces them to your peer network.
In the following steps you will deploy HA VPN between the looker-psc-demo VPC and on-prem-demo VPC to demonstrate Hybrid NEG connectivity to onprem.database1.com.
Create the HA VPN GW for the looker-psc-demo
When each gateway is created, two external IPv4 addresses are automatically allocated, one for each gateway interface.
Inside Cloud Shell, create the HA VPN GW:
gcloud compute vpn-gateways create looker-psc-demo-vpn-gw \
--network=looker-psc-demo \
--region=$region
Create the HA VPN GW for the on-prem-demo
When each gateway is created, two external IPv4 addresses are automatically allocated, one for each gateway interface.
Inside Cloud Shell, create the HA VPN GW:
gcloud compute vpn-gateways create on-prem-vpn-gw \
--network=on-prem-demo \
--region=$region
Validate HA VPN GW creation
Using the console, navigate to HYBRID CONNECTIVITY → VPN → CLOUD VPN GATEWAYS.
Create the Cloud Router for the looker-psc-demo
Inside Cloud Shell, create the Cloud Router:
gcloud compute routers create looker-psc-demo-cr \
--region=$region \
--network=looker-psc-demo \
--asn=65001
Create the VPN tunnels for looker-psc-demo
You will create two VPN tunnels on each HA VPN gateway.
Create VPN tunnel0
Inside Cloud Shell, create tunnel0:
gcloud compute vpn-tunnels create looker-psc-demo-tunnel0 \
--peer-gcp-gateway on-prem-vpn-gw \
--region $region \
--ike-version 2 \
--shared-secret ZzTLxKL8fmRykwNDfCvEFIjmlYLhMucH \
--router looker-psc-demo-cr \
--vpn-gateway looker-psc-demo-vpn-gw \
--interface 0
Create VPN tunnel1
Inside Cloud Shell, create tunnel1:
gcloud compute vpn-tunnels create looker-psc-demo-tunnel1 \
--peer-gcp-gateway on-prem-vpn-gw \
--region $region \
--ike-version 2 \
--shared-secret bcyPaboPl8fSkXRmvONGJzWTrc6tRqY5 \
--router looker-psc-demo-cr \
--vpn-gateway looker-psc-demo-vpn-gw \
--interface 1
Create the VPN tunnels for on-prem-demo
You will create two VPN tunnels on each HA VPN gateway.
Create VPN tunnel0
Inside Cloud Shell, create tunnel0:
gcloud compute vpn-tunnels create on-prem-tunnel0 \
--peer-gcp-gateway looker-psc-demo-vpn-gw \
--region $region \
--ike-version 2 \
--shared-secret ZzTLxKL8fmRykwNDfCvEFIjmlYLhMucH \
--router on-prem-cr \
--vpn-gateway on-prem-vpn-gw \
--interface 0
Create VPN tunnel1
Inside Cloud Shell, create tunnel1:
gcloud compute vpn-tunnels create on-prem-tunnel1 \
--peer-gcp-gateway looker-psc-demo-vpn-gw \
--region $region \
--ike-version 2 \
--shared-secret bcyPaboPl8fSkXRmvONGJzWTrc6tRqY5 \
--router on-prem-cr \
--vpn-gateway on-prem-vpn-gw \
--interface 1
Validate VPN tunnel creation
Using the console, navigate to HYBRID CONNECTIVITY → VPN → CLOUD VPN TUNNELS.
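As a CLI alternative, you can inspect a tunnel's status from Cloud Shell; detailedStatus typically reports "Tunnel is up and running" once IKE negotiation completes:
gcloud compute vpn-tunnels describe looker-psc-demo-tunnel0 --region=$region --format="value(detailedStatus)"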
11. Establish BGP neighbors
Create a BGP interface and peering for looker-psc-demo
Inside Cloud Shell, create the BGP interface:
gcloud compute routers add-interface looker-psc-demo-cr \
--interface-name if-tunnel0-to-onprem \
--ip-address 169.254.1.1 \
--mask-length 30 \
--vpn-tunnel looker-psc-demo-tunnel0 \
--region $region
Inside Cloud Shell, create the BGP peer:
gcloud compute routers add-bgp-peer looker-psc-demo-cr \
--peer-name bgp-on-premises-tunnel0 \
--interface if-tunnel0-to-onprem \
--peer-ip-address 169.254.1.2 \
--peer-asn 65002 \
--region $region
Inside Cloud Shell, create the BGP interface:
gcloud compute routers add-interface looker-psc-demo-cr \
--interface-name if-tunnel1-to-onprem \
--ip-address 169.254.2.1 \
--mask-length 30 \
--vpn-tunnel looker-psc-demo-tunnel1 \
--region $region
Inside Cloud Shell, create the BGP peer:
gcloud compute routers add-bgp-peer looker-psc-demo-cr \
--peer-name bgp-on-premises-tunnel1 \
--interface if-tunnel1-to-onprem \
--peer-ip-address 169.254.2.2 \
--peer-asn 65002 \
--region $region
Create a BGP interface and peering for on-prem-demo
Inside Cloud Shell, create the BGP interface:
gcloud compute routers add-interface on-prem-cr \
--interface-name if-tunnel0-to-looker-psc-demo \
--ip-address 169.254.1.2 \
--mask-length 30 \
--vpn-tunnel on-prem-tunnel0 \
--region $region
Inside Cloud Shell, create the BGP peer:
gcloud compute routers add-bgp-peer on-prem-cr \
--peer-name bgp-looker-psc-demo-tunnel0 \
--interface if-tunnel0-to-looker-psc-demo \
--peer-ip-address 169.254.1.1 \
--peer-asn 65001 \
--region $region
Inside Cloud Shell, create the BGP interface:
gcloud compute routers add-interface on-prem-cr \
--interface-name if-tunnel1-to-looker-psc-demo \
--ip-address 169.254.2.2 \
--mask-length 30 \
--vpn-tunnel on-prem-tunnel1 \
--region $region
Inside Cloud Shell, create the BGP peer:
gcloud compute routers add-bgp-peer on-prem-cr \
--peer-name bgp-looker-psc-demo-tunnel1 \
--interface if-tunnel1-to-looker-psc-demo \
--peer-ip-address 169.254.2.1 \
--peer-asn 65001 \
--region $region
Navigate to HYBRID CONNECTIVITY → VPN to view the VPN tunnel details.
Validate looker-psc-demo learned routes over HA VPN
Now that the HA VPN tunnels and BGP sessions are established, the default behavior of the Cloud Router is to advertise subnet routes. View looker-psc-demo learned routes.
Using the console, navigate to VPC network → VPC networks → looker-psc-demo → ROUTES → REGION → VIEW
Observe that looker-psc-demo has learned the database-subnet 192.168.10.0/28 from the on-prem-demo VPC.
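You can also validate the learned routes and BGP session state from Cloud Shell; get-status lists the BGP peers and the best routes the router has learned (repeat with on-prem-cr for the reverse direction):
gcloud compute routers get-status looker-psc-demo-cr --region=$region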
Validate that on-prem-demo VPC learned routes over HA VPN
Since the default behavior of the Cloud Router is to advertise all subnets, the proxy-only subnet is advertised over BGP. The load balancer's proxies use source addresses from the proxy-only subnet when communicating with the server onprem.database1.com.
Using the console, navigate to VPC network → VPC networks → on-prem-demo → ROUTES → REGION → VIEW
Observe that on-prem-demo has learned the proxy-only subnet 10.10.10.0/24 from the looker-psc-demo VPC.
12. Looker postgres-database creation
In the following section, you will SSH into the postgres-database VM using Cloud Shell.
Inside Cloud Shell, SSH to the postgres-database instance:
gcloud compute ssh --zone "$zone" "postgres-database" --project "$project"
Inside the OS, identify and note the IP address (ens4) of the postgres-database instance:
ip a
Example:
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host noprefixroute
valid_lft forever preferred_lft forever
2: ens4: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1460 qdisc mq state UP group default qlen 1000
link/ether 42:01:c0:a8:0a:04 brd ff:ff:ff:ff:ff:ff
altname enp0s4
inet 192.168.10.4/32 metric 100 scope global dynamic ens4
valid_lft 66779sec preferred_lft 66779sec
inet6 fe80::4001:c0ff:fea8:a04/64 scope link
valid_lft forever preferred_lft forever
Inside the OS, log into postgresql:
sudo -u postgres psql postgres
Inside the OS, at the psql prompt, initiate the password change:
\password postgres
Inside the OS, set the password to postgres (enter the same password twice):
postgres
Example:
user@postgres-database:~$ sudo -u postgres psql postgres
\password postgres
psql (15.8 (Debian 15.8-0+deb12u1))
Type "help" for help.
postgres=# \password postgres
Enter new password for user "postgres":
Enter it again:
Inside the OS, exit postgres:
\q
Example:
postgres=# \q
user@postgres-database:~$
In the following section, add your postgres-database instance IP (192.168.10.4) and the proxy-only subnet (10.10.10.0/24) to the pg_hba.conf file under IPv4 local connections:
sudo nano /etc/postgresql/15/main/pg_hba.conf
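The two entries to add under # IPv4 local connections should look like the following; a sketch assuming scram-sha-256, the PostgreSQL 15 default authentication method (md5 also works):
host    all    all    192.168.10.4/32    scram-sha-256
host    all    all    10.10.10.0/24      scram-sha-256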
Next, edit postgresql.conf to uncomment the listen_addresses directive and set it to '*' so PostgreSQL listens on all IP addresses:
sudo nano /etc/postgresql/15/main/postgresql.conf
Before:
#listen_addresses = 'localhost'
After:
listen_addresses = '*'
Inside the OS, restart the postgresql service:
sudo service postgresql restart
Inside the OS, validate the postgresql status as active:
sudo service postgresql status
Example:
user@postgres-database:/$ sudo service postgresql status
● postgresql.service - PostgreSQL RDBMS
Loaded: loaded (/lib/systemd/system/postgresql.service; enabled; preset: enabled)
Active: active (exited) since Mon 2024-09-02 12:10:10 UTC; 1min 46s ago
Process: 20486 ExecStart=/bin/true (code=exited, status=0/SUCCESS)
Main PID: 20486 (code=exited, status=0/SUCCESS)
CPU: 2ms
Sep 02 12:10:10 postgres-database systemd[1]: Starting postgresql.service - PostgreSQL RDBMS...
Sep 02 12:10:10 postgres-database systemd[1]: Finished postgresql.service - PostgreSQL RDBMS.
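Optionally, confirm that PostgreSQL is listening on the database IP and port after the restart; ss and pg_isready ship with the packages installed earlier (a supplementary check, not part of the original flow):
sudo ss -tlnp | grep 5432
pg_isready -h 192.168.10.4 -p 5432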
13. Create the postgres database
In the following section, you will create a PostgreSQL database named postgres_looker and a schema looker_schema, used to validate Looker-to-on-premises connectivity.
Inside the OS, log into postgres:
sudo -u postgres psql postgres
Inside the OS, create the database:
create database postgres_looker;
Inside the OS, list the databases:
\l
Inside the OS, create the user postgres_looker with the password postgreslooker:
create user postgres_looker with password 'postgreslooker';
Inside the OS, connect to the database:
\c postgres_looker;
Inside the OS, create the schema looker_schema and a test table:
create schema looker_schema;
create table looker_schema.test(firstname CHAR(15), lastname CHAR(20));
Inside the OS, exit postgres:
\q
Inside the OS, exit the VM, returning you to the Cloud Shell prompt:
exit
Example:
user@postgres-database:/$ sudo -u postgres psql postgres
psql (15.8 (Debian 15.8-0+deb12u1))
Type "help" for help.
postgres=# create database postgres_looker;
CREATE DATABASE
postgres=# \l
List of databases
Name | Owner | Encoding | Collate | Ctype | ICU Locale | Locale Provider | Access privileges
-----------------+----------+----------+---------+---------+------------+-----------------+-----------------------
postgres | postgres | UTF8 | C.UTF-8 | C.UTF-8 | | libc |
postgres_looker | postgres | UTF8 | C.UTF-8 | C.UTF-8 | | libc |
template0 | postgres | UTF8 | C.UTF-8 | C.UTF-8 | | libc | =c/postgres +
| | | | | | | postgres=CTc/postgres
template1 | postgres | UTF8 | C.UTF-8 | C.UTF-8 | | libc | =c/postgres +
| | | | | | | postgres=CTc/postgres
(4 rows)
postgres=# create user postgres_looker with password 'postgreslooker';
CREATE ROLE
postgres=# \c postgres_looker;
You are now connected to database "postgres_looker" as user "postgres".
postgres_looker=# create schema looker_schema;
create table looker_schema.test(firstname CHAR(15), lastname CHAR(20));
exit
CREATE SCHEMA
CREATE TABLE
postgres_looker-# \q
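Optionally, you can log back in and seed a row to confirm that the table accepts writes; a hypothetical sanity check, not required for the Looker connection test:
sudo -u postgres psql -d postgres_looker
insert into looker_schema.test(firstname, lastname) values ('Cosmo', 'Pup');
select * from looker_schema.test;
\q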
14. Integrate Looker with the postgres-database
In the following section you will use Looker Console to create a Database connection to the on-premises postgres-database instance.
Navigate to ADMIN → DATABASE → CONNECTIONS → Select ADD CONNECTION
Fill out the connection details, then select CONNECT.
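Based on the values created earlier in this codelab, the connection details should be as follows (field names may vary slightly by Looker version):
- Dialect: PostgreSQL
- Host: onprem.database1.com
- Port: 5432
- Database: postgres_looker
- Schema: looker_schema
- Username: postgres_looker
- Password: postgreslooker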
The connection is now configured
15. Validate Looker connectivity
In the following section, you will learn how to validate Looker connectivity to the postgres-database in the on-prem-demo VPC using the Looker 'Test' action and tcpdump.
From Cloud Shell, log into the postgres-database if the session has timed out.
Inside Cloud Shell, perform the following:
gcloud config list project
gcloud config set project [YOUR-PROJECT-ID]
project=[YOUR-PROJECT-ID]
region=[YOUR-REGION]
zone=[YOUR-ZONE]
echo $project
echo $region
gcloud compute ssh --zone "$zone" "postgres-database" --project "$project"
From the OS, run tcpdump with a filter for the proxy-only subnet, 10.10.10.0/24:
sudo tcpdump -i any net 10.10.10.0/24 -nn
Navigate to the Data Connection ADMIN → DATABASE → CONNECTIONS → postgres-database → Test
Once Test is selected, Looker connects to the postgres-database, as indicated below:
Clean up
From a single Cloud Shell terminal, delete the lab components:
gcloud compute service-attachments delete onpremdatabase1-svc-attachment --region=$region -q
gcloud compute forwarding-rules delete producer-hybrid-neg-fr --region=$region -q
gcloud compute target-tcp-proxies delete producer-lb-tcp-proxy --region=$region -q
gcloud compute backend-services delete producer-backend-svc --region=$region -q
gcloud compute network-firewall-policies rules delete 2001 --firewall-policy looker-psc-demo-policy --global-firewall-policy -q
gcloud compute network-firewall-policies associations delete --firewall-policy=looker-psc-demo-policy --name=looker-psc-demo --global-firewall-policy -q
gcloud compute network-firewall-policies delete looker-psc-demo-policy --global -q
gcloud compute routers nats delete on-prem-nat --router=on-prem-cr --router-region=$region -q
gcloud compute network-endpoint-groups delete on-prem-hybrid-neg --zone=$zone -q
gcloud compute addresses delete hybrid-neg-lb-ip --region=$region -q
gcloud compute vpn-tunnels delete looker-psc-demo-tunnel0 looker-psc-demo-tunnel1 on-prem-tunnel0 on-prem-tunnel1 --region=$region -q
gcloud compute vpn-gateways delete looker-psc-demo-vpn-gw on-prem-vpn-gw --region=$region -q
gcloud compute routers delete looker-psc-demo-cr on-prem-cr --region=$region -q
gcloud compute instances delete postgres-database --zone=$zone -q
gcloud compute addresses delete on-prem-database1-ip --region=$region -q
gcloud compute networks subnets delete database-subnet --region=$region -q
gcloud compute network-firewall-policies rules delete 2001 --firewall-policy on-prem-demo-policy --global-firewall-policy -q
gcloud compute network-firewall-policies rules delete 1000 --firewall-policy on-prem-demo-policy --global-firewall-policy -q
gcloud compute network-firewall-policies associations delete --firewall-policy=on-prem-demo-policy --name=on-prem-demo --global-firewall-policy -q
gcloud compute networks subnets delete $region-proxy-only-subnet --region=$region -q
gcloud compute networks subnets delete producer-psc-nat-subnet --region=$region -q
gcloud compute networks subnets delete producer-psc-fr-subnet --region=$region -q
gcloud compute networks delete on-prem-demo -q
gcloud compute networks delete looker-psc-demo -q
16. Congratulations
Congratulations, you've successfully configured and validated connectivity to the on-premises database over HA VPN using the Looker Console, powered by Private Service Connect.
You created the Producer infrastructure and learned how to create a Hybrid NEG, a Producer service, and a Looker PSC endpoint that enabled connectivity to the Producer service.
Cosmopup thinks codelabs are awesome!!
What's next?
Check out some of these codelabs...
- Using Private Service Connect to publish and consume services
- Connect to on-prem services over Hybrid Networking using Private Service Connect and an internal TCP Proxy load balancer
- Access to all published Private Service Connect codelabs