1. Introduction
Sometimes, workloads can't be moved from an Oracle database to AlloyDB for various reasons. In such cases, if we want to make the data available in AlloyDB for reporting or further processing, we can leverage Oracle FDW (Foreign Data Wrapper). Oracle FDW enables querying data in a remote Oracle database and presents the remote data through foreign tables, making it appear as if it resides within AlloyDB.
In this codelab you will learn how to use Oracle FDW to connect your AlloyDB database to an Oracle database deployed in a separate network and reached over a VPN connection.
AlloyDB to Oracle over VPN
The diagram above shows the AlloyDB cluster and the GCE instance instance-1 on the left, deployed in one VPC with the default network, and the GCE instance ora-xe-01 deployed in a different VPC with the network name $PROJECT_ID-vpc-02. The two VPCs are connected using Cloud VPN, with routes established that allow the Oracle and AlloyDB instances to communicate with each other.
You can get more information about the Oracle FDW extension here.
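At a high level, configuring oracle_fdw always follows the same pattern, shown in the illustrative sketch below; the server name ora_sample, the connection string, and the credentials are placeholders, and the exact commands used in this lab come later in the codelab.
CREATE EXTENSION oracle_fdw;
CREATE SERVER ora_sample FOREIGN DATA WRAPPER oracle_fdw OPTIONS (dbserver '<oracle-host>:1521/<service-name>');
CREATE USER MAPPING FOR postgres SERVER ora_sample OPTIONS (user '<oracle-user>', password '<oracle-password>');
IMPORT FOREIGN SCHEMA "HR" FROM SERVER ora_sample INTO public;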
Prerequisites
- A basic understanding of the Google Cloud Console
- Basic skills with the command line interface and Cloud Shell
- Basic knowledge of PostgreSQL and Oracle databases
What you'll learn
- How to deploy an AlloyDB cluster
- How to connect to AlloyDB
- How to configure and deploy a sample Oracle database
- How to set up a VPN between two VPC networks
- How to configure the Oracle FDW extension
What you'll need
- A Google Cloud Account and Google Cloud Project
- A web browser such as Chrome
2. Setup and Requirements
Self-paced environment setup
- Sign-in to the Google Cloud Console and create a new project or reuse an existing one. If you don't already have a Gmail or Google Workspace account, you must create one.
- The Project name is the display name for this project's participants. It is a character string not used by Google APIs. You can always update it.
- The Project ID is unique across all Google Cloud projects and is immutable (cannot be changed after it has been set). The Cloud Console auto-generates a unique string; usually you don't care what it is. In most codelabs, you'll need to reference your Project ID (typically identified as PROJECT_ID). If you don't like the generated ID, you can generate another random one. Alternatively, you can try your own and see if it's available. It can't be changed after this step and remains for the duration of the project.
- For your information, there is a third value, a Project Number, which some APIs use. Learn more about all three of these values in the documentation.
- Next, you'll need to enable billing in the Cloud Console to use Cloud resources/APIs. Running through this codelab won't cost much, if anything at all. To shut down resources to avoid incurring billing beyond this tutorial, you can delete the resources you created or delete the project. New Google Cloud users are eligible for the $300 USD Free Trial program.
Start Cloud Shell
While Google Cloud can be operated remotely from your laptop, in this codelab you will be using Google Cloud Shell, a command line environment running in the Cloud.
From the Google Cloud Console, click the Cloud Shell icon on the top right toolbar:
It should only take a few moments to provision and connect to the environment. When it is finished, you should see something like this:
This virtual machine is loaded with all the development tools you'll need. It offers a persistent 5GB home directory, and runs on Google Cloud, greatly enhancing network performance and authentication. All of your work in this codelab can be done within a browser. You do not need to install anything.
3. Before you begin
Enable API
Inside Cloud Shell, make sure that your project ID is set up:
gcloud config set project [YOUR-PROJECT-ID]
PROJECT_ID=$(gcloud config get-value project)
Enable all necessary services:
gcloud services enable alloydb.googleapis.com \
compute.googleapis.com \
cloudresourcemanager.googleapis.com \
servicenetworking.googleapis.com \
vpcaccess.googleapis.com
Expected console output:
student@cloudshell:~ (gleb-test-short-004)$ gcloud services enable alloydb.googleapis.com \ compute.googleapis.com \ cloudresourcemanager.googleapis.com \ servicenetworking.googleapis.com \ vpcaccess.googleapis.com Operation "operations/acf.p2-404051529011-664c71ad-cb2b-4ab4-86c1-1f3157d70ba1" finished successfully.
Configure your default region to us-central1 or any other region you prefer. In this lab we are going to use the us-central1 region.
gcloud config set compute/region us-central1
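Optionally, you can verify the active project and region configuration before moving on:
gcloud config get-value project
gcloud config get-value compute/region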
4. Deploy AlloyDB Cluster
Before creating an AlloyDB cluster, we need to allocate a private IP range in our VPC to be used by the future AlloyDB instance. After that, we can create the cluster and instance.
Create private IP range
We need to configure Private Service Access in our VPC for AlloyDB. The assumption here is that the project has the "default" VPC network and it is going to be used for all actions.
Create the private IP range:
gcloud compute addresses create psa-range \
--global \
--purpose=VPC_PEERING \
--prefix-length=16 \
--description="VPC private service access" \
--network=default
Create private connection using the allocated IP range:
gcloud services vpc-peerings connect \
--service=servicenetworking.googleapis.com \
--ranges=psa-range \
--network=default
Expected console output:
student@cloudshell:~ (test-project-402417)$ gcloud compute addresses create psa-range \ --global \ --purpose=VPC_PEERING \ --prefix-length=16 \ --description="VPC private service access" \ --network=default Created [https://www.googleapis.com/compute/v1/projects/test-project-402417/global/addresses/psa-range]. student@cloudshell:~ (test-project-402417)$ gcloud services vpc-peerings connect \ --service=servicenetworking.googleapis.com \ --ranges=psa-range \ --network=default Operation "operations/pssn.p24-4470404856-595e209f-19b7-4669-8a71-cbd45de8ba66" finished successfully. student@cloudshell:~ (test-project-402417)$
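Optionally, verify that the private service connection has been created for the default network:
gcloud services vpc-peerings list --network=default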
Create AlloyDB Cluster
Create an AlloyDB cluster in the default region:
export PGPASSWORD=`openssl rand -hex 12`
export REGION=us-central1
export ADBCLUSTER=alloydb-aip-01
gcloud alloydb clusters create $ADBCLUSTER \
--password=$PGPASSWORD \
--network=default \
--region=$REGION
Expected console output:
student@cloudshell:~ (test-project-402417)$ export PGPASSWORD=`openssl rand -base64 12` export REGION=us-central1 export ADBCLUSTER=alloydb-aip-01 gcloud alloydb clusters create $ADBCLUSTER \ --password=$PGPASSWORD \ --network=default \ --region=$REGION Operation ID: operation-1697655441138-6080235852277-9e7f04f5-2012fce4 Creating cluster...done.
Note the PostgreSQL password for future use:
echo $PGPASSWORD
Expected console output:
student@cloudshell:~ (test-project-402417)$ echo $PGPASSWORD bbefbfde7601985b0dee5723
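Optionally, check that the cluster has been created and is ready before adding an instance (output columns may vary slightly depending on your gcloud version):
gcloud alloydb clusters list --region=$REGION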
Create AlloyDB Primary Instance
Create an AlloyDB primary instance for our cluster:
export REGION=us-central1
gcloud alloydb instances create $ADBCLUSTER-pr \
--instance-type=PRIMARY \
--cpu-count=2 \
--region=$REGION \
--cluster=$ADBCLUSTER
Expected console output:
student@cloudshell:~ (test-project-402417)$ gcloud alloydb instances create $ADBCLUSTER-pr \ --instance-type=PRIMARY \ --cpu-count=2 \ --region=$REGION \ --availability-type ZONAL \ --cluster=$ADBCLUSTER Operation ID: operation-1697659203545-6080315c6e8ee-391805db-25852721 Creating instance...done.
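Optionally, verify that the primary instance reaches the READY state; provisioning can take several minutes:
gcloud alloydb instances list --cluster=$ADBCLUSTER --region=$REGION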
5. Connect to AlloyDB
To work with AlloyDB and run commands such as creating a database or enabling an extension, we need a development environment. It can be any standard tool that works with PostgreSQL. In our case we are going to use the standard PostgreSQL client installed on a Linux box.
AlloyDB is deployed using a private-only connection, so we need a VM with PostgreSQL client installed to work with the database.
Deploy GCE VM
Create a GCE VM in the same region and VPC as the AlloyDB cluster.
In Cloud Shell execute:
export ZONE=us-central1-a
gcloud compute instances create instance-1 \
--zone=$ZONE \
--scopes=https://www.googleapis.com/auth/cloud-platform
Expected console output:
student@cloudshell:~ (test-project-402417)$ export ZONE=us-central1-a student@cloudshell:~ (test-project-402417)$ gcloud compute instances create instance-1 \ --zone=$ZONE \ --scopes=https://www.googleapis.com/auth/cloud-platform Created [https://www.googleapis.com/compute/v1/projects/test-project-402417/zones/us-central1-a/instances/instance-1]. NAME: instance-1 ZONE: us-central1-a MACHINE_TYPE: n1-standard-1 PREEMPTIBLE: INTERNAL_IP: 10.128.0.2 EXTERNAL_IP: 34.71.192.233 STATUS: RUNNING
Install Postgres Client
Install the PostgreSQL client software on the deployed VM.
Connect to the VM:
gcloud compute ssh instance-1 --zone=us-central1-a
Expected console output:
student@cloudshell:~ (test-project-402417)$ gcloud compute ssh instance-1 --zone=us-central1-a Updating project ssh metadata...working..Updated [https://www.googleapis.com/compute/v1/projects/test-project-402417]. Updating project ssh metadata...done. Waiting for SSH key to propagate. Warning: Permanently added 'compute.5110295539541121102' (ECDSA) to the list of known hosts. Linux instance-1 5.10.0-26-cloud-amd64 #1 SMP Debian 5.10.197-1 (2023-09-29) x86_64 The programs included with the Debian GNU/Linux system are free software; the exact distribution terms for each program are described in the individual files in /usr/share/doc/*/copyright. Debian GNU/Linux comes with ABSOLUTELY NO WARRANTY, to the extent permitted by applicable law. student@instance-1:~$
Install the software by running the following commands inside the VM:
sudo apt-get update
sudo apt-get install --yes postgresql-client
Expected console output:
student@instance-1:~$ sudo apt-get update sudo apt-get install --yes postgresql-client Get:1 https://packages.cloud.google.com/apt google-compute-engine-bullseye-stable InRelease [5146 B] Get:2 https://packages.cloud.google.com/apt cloud-sdk-bullseye InRelease [6406 B] Hit:3 https://deb.debian.org/debian bullseye InRelease Get:4 https://deb.debian.org/debian-security bullseye-security InRelease [48.4 kB] Get:5 https://packages.cloud.google.com/apt google-compute-engine-bullseye-stable/main amd64 Packages [1930 B] Get:6 https://deb.debian.org/debian bullseye-updates InRelease [44.1 kB] Get:7 https://deb.debian.org/debian bullseye-backports InRelease [49.0 kB] ...redacted... update-alternatives: using /usr/share/postgresql/13/man/man1/psql.1.gz to provide /usr/share/man/man1/psql.1.gz (psql.1.gz) in auto mode Setting up postgresql-client (13+225) ... Processing triggers for man-db (2.9.4-2) ... Processing triggers for libc-bin (2.31-13+deb11u7) ...
Connect to the Instance
Connect to the primary instance from the VM using psql.
Stay in the same Cloud Shell tab with the open SSH session to your instance-1 VM.
Use the noted AlloyDB password (PGPASSWORD) value and the AlloyDB cluster ID to connect to AlloyDB from the GCE VM:
export PGPASSWORD=<Noted password>
ADBCLUSTER=<your AlloyDB cluster name>
REGION=us-central1
INSTANCE_IP=$(gcloud alloydb instances describe $ADBCLUSTER-pr --cluster=$ADBCLUSTER --region=$REGION --format="value(ipAddress)")
psql "host=$INSTANCE_IP user=postgres sslmode=require"
Expected console output:
student@instance-1:~$ export PGPASSWORD=CQhOi5OygD4ps6ty student@instance-1:~$ ADBCLUSTER=alloydb-aip-01 student@instance-1:~$ REGION=us-central1 student@instance-1:~$ INSTANCE_IP=$(gcloud alloydb instances describe $ADBCLUSTER-pr --cluster=$ADBCLUSTER --region=$REGION --format="value(ipAddress)") gleb@instance-1:~$ psql "host=$INSTANCE_IP user=postgres sslmode=require" psql (13.13 (Debian 13.13-0+deb11u1), server 14.7) WARNING: psql major version 13, server major version 14. Some psql features might not work. SSL connection (protocol: TLSv1.3, cipher: TLS_AES_256_GCM_SHA384, bits: 256, compression: off) Type "help" for help. postgres=>
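Optionally, run a quick test query in the psql session to confirm the connection is healthy before moving on:
select version();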
6. Create a Sample Oracle database
Our sample Oracle database will be deployed in a separate VPC to reflect scenarios where it is deployed either on-premises or in any other environment separated from the AlloyDB cluster's VPC.
Create a Second VPC
In an open Cloud Shell tab or a command-line terminal with the Google Cloud SDK, execute:
PROJECT_ID=$(gcloud config get-value project)
REGION=us-central1
gcloud compute networks create $PROJECT_ID-vpc-02 --project=$PROJECT_ID --description=Custom\ VPC\ for\ $PROJECT_ID\ project --subnet-mode=custom --mtu=1460 --bgp-routing-mode=regional
gcloud compute networks subnets create $PROJECT_ID-vpc-02-$REGION --project=$PROJECT_ID --range=10.110.0.0/24 --stack-type=IPV4_ONLY --network=$PROJECT_ID-vpc-02 --region=$REGION
Expected console output:
student@cloudshell:~ PROJECT_ID=$(gcloud config get-value project) REGION=us-central1 gcloud compute networks create $PROJECT_ID-vpc-02 --project=$PROJECT_ID --description=Custom\ VPC\ for\ $PROJECT_ID\ project --subnet-mode=custom --mtu=1460 --bgp-routing-mode=regional gcloud compute networks subnets create $PROJECT_ID-vpc-02-$REGION --project=$PROJECT_ID --range=10.110.0.0/24 --stack-type=IPV4_ONLY --network=$PROJECT_ID-vpc-02 --region=$REGION Your active configuration is: [cloudshell-3726] Created [https://www.googleapis.com/compute/v1/projects/test-project-402417/global/networks/test-project-402417-vpc-02]. NAME: test-project-402417-vpc-02 SUBNET_MODE: CUSTOM BGP_ROUTING_MODE: REGIONAL IPV4_RANGE: GATEWAY_IPV4: Instances on this network will not be reachable until firewall rules are created. As an example, you can allow all internal traffic between instances as well as SSH, RDP, and ICMP by running: $ gcloud compute firewall-rules create <FIREWALL_NAME> --network test-project-402417-vpc-02 --allow tcp,udp,icmp --source-ranges <IP_RANGE> $ gcloud compute firewall-rules create <FIREWALL_NAME> --network test-project-402417-vpc-02 --allow tcp:22,tcp:3389,icmp Created [https://www.googleapis.com/compute/v1/projects/test-project-402417/regions/us-central1/subnetworks/test-project-402417-vpc-02-us-central1]. NAME: test-project-402417-vpc-02-us-central1 REGION: us-central1 NETWORK: test-project-402417-vpc-02 RANGE: 10.110.0.0/24 STACK_TYPE: IPV4_ONLY IPV6_ACCESS_TYPE:
Configure minimal firewall rules in the second VPC for basic diagnostics and SSH connections.
gcloud compute firewall-rules create $PROJECT_ID-vpc-02-allow-icmp --project=$PROJECT_ID --network=$PROJECT_ID-vpc-02 --description=Allows\ ICMP\ connections\ from\ any\ source\ to\ any\ instance\ on\ the\ network. --direction=INGRESS --priority=65534 --source-ranges=0.0.0.0/0 --action=ALLOW --rules=icmp
gcloud compute firewall-rules create $PROJECT_ID-vpc-02-allow-ssh --project=$PROJECT_ID --network=$PROJECT_ID-vpc-02 --description=Allows\ TCP\ connections\ from\ any\ source\ to\ any\ instance\ on\ the\ network\ using\ port\ 22. --direction=INGRESS --priority=65534 --source-ranges=0.0.0.0/0 --action=ALLOW --rules=tcp:22
Expected console output:
student@cloudshell:~ gcloud compute firewall-rules create $PROJECT_ID-vpc-02-allow-icmp --project=$PROJECT_ID --network=$PROJECT_ID-vpc-02 --description=Allows\ ICMP\ connections\ from\ any\ source\ to\ any\ instance\ on\ the\ network. --direction=INGRESS --priority=65534 --source-ranges=0.0.0.0/0 --action=ALLOW --rules=icmp gcloud compute firewall-rules create $PROJECT_ID-vpc-02-allow-ssh --project=$PROJECT_ID --network=$PROJECT_ID-vpc-02 --description=Allows\ TCP\ connections\ from\ any\ source\ to\ any\ instance\ on\ the\ network\ using\ port\ 22. --direction=INGRESS --priority=65534 --source-ranges=0.0.0.0/0 --action=ALLOW --rules=tcp:22 Creating firewall...working..Created [https://www.googleapis.com/compute/v1/projects/test-project-402417/global/firewalls/test-project-402417-vpc-02-allow-icmp]. Creating firewall...done. NAME: test-project-402417-vpc-02-allow-icmp NETWORK: test-project-402417-vpc-02 DIRECTION: INGRESS PRIORITY: 65534 ALLOW: icmp DENY: DISABLED: False Creating firewall...working..Created [https://www.googleapis.com/compute/v1/projects/test-project-402417/global/firewalls/test-project-402417-vpc-02-allow-ssh]. Creating firewall...done. NAME: test-project-402417-vpc-02-allow-ssh NETWORK: test-project-402417-vpc-02 DIRECTION: INGRESS PRIORITY: 65534 ALLOW: tcp:22 DENY: DISABLED: False
Configure firewall rules in the default and the second VPC to allow connections from the internal subnets to the database ports.
ALLOYDB_NET_IP_RANGES=$(gcloud compute networks subnets list --network=default --filter="region:us-central1" --format="value(ipCidrRange)"),$(gcloud compute addresses list --filter="name:psa-range AND network:default" --format="value(address_range())")
gcloud compute firewall-rules create default-allow-postgres --project=$PROJECT_ID --network=default --description=Allows\ Postgres\ connections\ from\ local\ networks\ to\ postgres\ instance\ on\ the\ network\ using\ port\ 5432. --direction=INGRESS --priority=65534 --source-ranges=10.110.0.0/24 --action=ALLOW --rules=tcp:5432
gcloud compute firewall-rules create $PROJECT_ID-vpc-02-allow-oracle --project=$PROJECT_ID --network=$PROJECT_ID-vpc-02 --description=Allows\ Oracle\ connections\ from\ local\ networks\ to\ Oracle\ instance\ on\ the\ network\ using\ port\ 1521. --direction=INGRESS --priority=65534 --source-ranges=$ALLOYDB_NET_IP_RANGES --action=ALLOW --rules=tcp:1521
Expected console output:
student@cloudshell:~ ALLOYDB_NET_IP_RANGES=$(gcloud compute networks subnets list --network=default --filter="region:us-central1" --format="value(ipCidrRange)"),$(gcloud compute addresses list --filter="name:psa-range AND network:default" --format="value(address_range())") gcloud compute firewall-rules create default-allow-postgres --project=$PROJECT_ID --network=default --description=Allows\ Postgres\ connections\ from\ local\ networks\ to\ postgres\ instance\ on\ the\ network\ using\ port\ 5432. --direction=INGRESS --priority=65534 --source-ranges=10.110.0.0/24 --action=ALLOW --rules=tcp:5432 gcloud compute firewall-rules create $PROJECT_ID-vpc-02-allow-oracle --project=$PROJECT_ID --network=$PROJECT_ID-vpc-02 --description=Allows\ Oracle\ connections\ from\ local\ networks\ to\ Oracle\ instance\ on\ the\ network\ using\ port\ 1521. --direction=INGRESS --priority=65534 --source-ranges=$ALLOYDB_NET_IP_RANGES --action=ALLOW --rules=tcp:1521 Creating firewall...working..Created [https://www.googleapis.com/compute/v1/projects/test-project-402417/global/firewalls/default-allow-postgres]. Creating firewall...done. NAME: default-allow-postgres NETWORK: default DIRECTION: INGRESS PRIORITY: 65534 ALLOW: tcp:5432 DENY: DISABLED: False Creating firewall...working..Created [https://www.googleapis.com/compute/v1/projects/test-project-402417/global/firewalls/test-project-402417-vpc-02-allow-oracle]. Creating firewall...done. NAME: test-project-402417-vpc-02-allow-oracle NETWORK: test-project-402417-vpc-02 DIRECTION: INGRESS PRIORITY: 65534 ALLOW: tcp:1521 DENY: DISABLED: False
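Optionally, list the firewall rules to confirm that the Postgres and Oracle rules were created in their respective networks (the filter expression is just one way to narrow the list):
gcloud compute firewall-rules list --filter="name~allow-postgres OR name~allow-oracle"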
Deploy GCE VM in the second VPC
Deploy a GCE VM using the second VPC subnet for our sample Oracle environment. This compute instance is going to be used as our test Oracle environment: the Oracle XE database binaries will be installed here, and the instance will serve as both the server and the client for Oracle-specific tasks.
Using the same Cloud Shell tab or terminal, run the command to create the VM:
SERVER_NAME=ora-xe-01
PROJECT_ID=$(gcloud config get-value project)
REGION=us-central1
ZONE=us-central1-a
MACHINE_TYPE=e2-standard-2
SUBNET=$PROJECT_ID-vpc-02-$REGION
DISK_SIZE=50
gcloud compute instances create $SERVER_NAME --project=$PROJECT_ID --zone=$ZONE --machine-type=$MACHINE_TYPE --network-interface=subnet=$SUBNET --create-disk=auto-delete=yes,boot=yes,size=$DISK_SIZE,image=projects/debian-cloud/global/images/$(gcloud compute images list --filter="family=debian-11 AND family!=debian-11-arm64" \
--format="value(name)"),type=pd-ssd
Expected console output:
student@cloudshell:~ SERVER_NAME=ora-xe-01 PROJECT_ID=$(gcloud config get-value project) REGION=us-central1 ZONE=us-central1-a MACHINE_TYPE=e2-standard-2 SUBNET=$PROJECT_ID-vpc-02-$REGION DISK_SIZE=50 gcloud compute instances create $SERVER_NAME --project=$PROJECT_ID --zone=$ZONE --machine-type=$MACHINE_TYPE --network-interface=subnet=$SUBNET --create-disk=auto-delete=yes,boot=yes,size=$DISK_SIZE,image=projects/debian-cloud/global/images/$(gcloud compute images list --filter="family=debian-11 AND family!=debian-11-arm64" \ --format="value(name)"),type=pd-ssd Your active configuration is: [cloudshell-3726] Created [https://www.googleapis.com/compute/v1/projects/test-project-402417/zones/us-central1-a/instances/ora-xe-01]. WARNING: Some requests generated warnings: - Disk size: '50 GB' is larger than image size: '10 GB'. You might need to resize the root repartition manually if the operating system does not support automatic resizing. See https://cloud.google.com/compute/docs/disks/add-persistent-disk#resize_pd for details. NAME: ora-xe-01 ZONE: us-central1-a MACHINE_TYPE: e2-standard-2 PREEMPTIBLE: INTERNAL_IP: 10.110.0.2 EXTERNAL_IP: 34.121.117.216 STATUS: RUNNING
Create Oracle XE
Oracle Express Edition (XE) will be used for the tests. Please be aware that usage of the Oracle XE container is subject to the terms of the Oracle Free Use Terms and Conditions license.
Connect to the created VM using ssh:
gcloud compute ssh $SERVER_NAME --zone=$ZONE
In the SSH session, install the packages required for a Docker-based deployment of Oracle XE. You can read about other deployment options in the Oracle documentation.
sudo apt-get update
sudo apt-get -y install ca-certificates curl gnupg lsb-release
sudo mkdir -m 0755 -p /etc/apt/keyrings
curl -fsSL https://download.docker.com/linux/debian/gpg | sudo gpg --dearmor -o /etc/apt/keyrings/docker.gpg
echo "deb [arch=$(dpkg --print-architecture) signed-by=/etc/apt/keyrings/docker.gpg] https://download.docker.com/linux/debian $(lsb_release -cs) stable" | sudo tee /etc/apt/sources.list.d/docker.list > /dev/null
sudo apt-get update
sudo apt-get -y install docker-ce docker-ce-cli containerd.io docker-buildx-plugin docker-compose-plugin
sudo usermod -aG docker $USER
Expected console output (redacted):
student@ora-xe-01:~$ sudo apt-get update sudo apt-get -y install ca-certificates curl gnupg lsb-release sudo mkdir -m 0755 -p /etc/apt/keyrings curl -fsSL https://download.docker.com/linux/debian/gpg | sudo gpg --dearmor -o /etc/apt/keyrings/docker.gpg echo "deb [arch=$(dpkg --print-architecture) signed-by=/etc/apt/keyrings/docker.gpg] https://download.docker.com/linux/debian $(lsb_release -cs) stable" | sudo tee /etc/apt/sources.list.d/docker.list > /dev/null sudo apt-get update sudo apt-get -y install docker-ce docker-ce-cli containerd.io docker-buildx-plugin docker-compose-plugin sudo usermod -aG docker $USER Get:1 https://packages.cloud.google.com/apt google-compute-engine-bullseye-stable InRelease [5146 B] Get:2 https://packages.cloud.google.com/apt cloud-sdk-bullseye InRelease [6406 B] ... Setting up git (1:2.30.2-1+deb11u2) ... Processing triggers for man-db (2.9.4-2) ... Processing triggers for libc-bin (2.31-13+deb11u7) ... student@ora-xe-01:~$
Reconnect to the VM by logging out and connecting again.
exit
gcloud compute ssh $SERVER_NAME --zone=$ZONE
Expected console output:
student@ora-xe-01:~$ exit logout Connection to 34.132.87.73 closed. student@cloudshell:~ (test-project-002-410214)$ gcloud compute ssh $SERVER_NAME --zone=$ZONE Linux ora-xe-01 5.10.0-26-cloud-amd64 #1 SMP Debian 5.10.197-1 (2023-09-29) x86_64 The programs included with the Debian GNU/Linux system are free software; the exact distribution terms for each program are described in the individual files in /usr/share/doc/*/copyright. Debian GNU/Linux comes with ABSOLUTELY NO WARRANTY, to the extent permitted by applicable law. Last login: Thu Jan 4 14:51:25 2024 from 34.73.112.191 student@ora-xe-01:~$
In the new SSH session to ora-xe-01, execute:
ORACLE_PWD=`openssl rand -hex 12`
echo $ORACLE_PWD
docker run -d --name oracle-xe -p 1521:1521 -e ORACLE_PWD=$ORACLE_PWD container-registry.oracle.com/database/express:21.3.0-xe
Expected console output:
student@ora-xe-01:~$ ORACLE_PWD=`openssl rand -hex 12` echo $ORACLE_PWD docker run -d --name oracle-xe -p 1521:1521 -e ORACLE_PWD=$ORACLE_PWD container-registry.oracle.com/database/express:21.3.0-xe e36e191b6c298ce6e02a614c Unable to find image 'container-registry.oracle.com/database/express:21.3.0-xe' locally 21.3.0-xe: Pulling from database/express 2318ff572021: Pull complete c6250726c822: Pull complete 33ac5ea7f7dd: Pull complete 753e0fae7e64: Pull complete Digest: sha256:dcf137aab02d5644aaf9299aae736e4429f9bfdf860676ff398a1458ab8d23f2 Status: Downloaded newer image for container-registry.oracle.com/database/express:21.3.0-xe 9f74e2d37bf49b01785338495e02d79c55c92e4ddd409eddcf45e754fd9044d9
Record your ORACLE_PWD value; it will be used later to connect to the database as the SYSTEM and SYS users.
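The Oracle XE container takes a few minutes to initialize the database on the first start. Before moving on, you can check that the container is running and watch its logs; the exact wording of the readiness message may vary between image versions:
docker ps
docker logs --tail 20 oracle-xe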
Install Sample Schema
Create a sample dataset in the installed database using the standard HR sample schema.
In the same SSH session:
git clone https://github.com/oracle-samples/db-sample-schemas.git
curl -O https://download.oracle.com/otn_software/java/sqldeveloper/sqlcl-latest.zip
sudo apt-get install -y unzip openjdk-17-jdk
unzip sqlcl-latest.zip
echo 'export PATH=$HOME/sqlcl/bin:$PATH' >>.bashrc
exec $SHELL
Expected console output:
student@ora-xe-01:~$ git clone https://github.com/oracle-samples/db-sample-schemas.git curl -O https://download.oracle.com/otn_software/java/sqldeveloper/sqlcl-latest.zip sudo apt-get install -y unzip openjdk-17-jdk unzip sqlcl-latest.zip echo 'export PATH=$HOME/sqlcl/bin:$PATH' >>.bashrc exec $SHELL Cloning into 'db-sample-schemas'... remote: Enumerating objects: 624, done. remote: Counting objects: 100% (104/104), done. ... inflating: sqlcl/lib/sshd-contrib.jar inflating: sqlcl/lib/sshd-putty.jar inflating: sqlcl/lib/dbtools-sqlcl-distribution.jar student@ora-xe-01:~$
In the same SSH session:
ORACLE_PWD=<your noted password from the previous step>
cd db-sample-schemas/human_resources/
sql system/$ORACLE_PWD@localhost/XEPDB1 @hr_install.sql
Enter a password for the HR schema (any strong password will suffice) and accept the default tablespaces suggested in the prompts.
Expected console output:
student@ora-xe-01:~$ ORACLE_PWD=e36e191b6c298ce6e02a614c student@ora-xe-01:~$ cd db-sample-schemas/human_resources/ sql system/$ORACLE_PWD@localhost/XEPDB1 @hr_install.sql SQLcl: Release 23.3 Production on Wed Jan 03 14:38:14 2024 Copyright (c) 1982, 2024, Oracle. All rights reserved. Last Successful login time: Wed Jan 03 2024 14:38:17 +00:00 Connected to: Oracle Database 21c Express Edition Release 21.0.0.0.0 - Production Version 21.3.0.0.0 Thank you for installing the Oracle Human Resources Sample Schema. This installation script will automatically exit your database session at the end of the installation or if any error is encountered. The entire installation will be logged into the 'hr_install.log' log file. Enter a password for the user HR: ************************ Enter a tablespace for HR [USERS]: Do you want to overwrite the schema, if it already exists? [YES|no]: ****** Creating REGIONS table .... Table REGIONS created. ... Table provided actual ______________ ___________ _________ regions 5 5 countries 25 25 departments 27 27 locations 23 23 employees 107 107 jobs 19 19 job_history 10 10 Thank you! ___________________________________________________________ The installation of the sample schema is now finished. Please check the installation verification output above. You will now be disconnected from the database. Thank you for using Oracle Database! Disconnected from Oracle Database 21c Express Edition Release 21.0.0.0.0 - Production Version 21.3.0.0.0
Create User for Oracle FDW
For simplicity we will create a user named POSTGRES, the same as the default PostgreSQL user.
Connect to the Oracle database as user SYS:
sql sys/$ORACLE_PWD@localhost/XEPDB1 as sysdba
In the SQL session as user SYS, execute the following (replace POSTGRES_ORA_PWD with your own password for the user):
create user postgres identified by POSTGRES_ORA_PWD;
And grant necessary privileges to the user:
grant connect, select any table to postgres;
grant select on v_$SQL_PLAN to postgres;
grant select on v_$SQL to postgres;
Expected console output:
student@ora-xe-01:~/db-sample-schemas/human_resources$ sql sys/$ORACLE_PWD@localhost/XEPDB1 as sysdba SQLcl: Release 23.3 Production on Wed Jan 03 14:42:37 2024 Copyright (c) 1982, 2024, Oracle. All rights reserved. Connected to: Oracle Database 21c Express Edition Release 21.0.0.0.0 - Production Version 21.3.0.0.0 SQL> create user postgres identified by VeryStrongPassword0011##; User POSTGRES created. SQL> grant connect, select any table to postgres; Grant succeeded. SQL> grant select on v_$SQL_PLAN to postgres; Grant succeeded. SQL> grant select on v_$SQL to postgres; Grant succeeded. SQL>
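Optionally, verify that the new POSTGRES user can connect and read the HR data before moving on. Replace POSTGRES_ORA_PWD with the password you chose, and type exit to leave the SQL session:
sql postgres/POSTGRES_ORA_PWD@localhost/XEPDB1
select count(*) from hr.employees;
exit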
7. Create VPN between VPC
The resources deployed in each VPC can see only the internal IP addresses of that particular network and cannot connect to internal resources in the other VPC. Google Cloud VPN creates a bridge between the two VPCs and allows our VM with the Oracle database and the AlloyDB instance to connect to each other. The VPN connection includes several components such as Cloud Routers, gateways, VPN tunnels, and BGP sessions. Here we go through the creation and configuration of all the components required for a highly available VPN between the two VPCs. You can read more about Google Cloud VPN in the documentation.
Create Routers and Gateways
Create Cloud Routers and VPN gateways in both VPCs.
In Cloud Shell or a terminal with the Cloud SDK, execute:
PROJECT_ID=$(gcloud config get-value project)
REGION=us-central1
gcloud compute routers create $PROJECT_ID-vpc-01-router --project=$PROJECT_ID --region=$REGION --network=default --asn=64520 --advertisement-mode=custom --set-advertisement-groups=all_subnets
gcloud compute routers create $PROJECT_ID-vpc-02-router --project=$PROJECT_ID --region=$REGION --network=$PROJECT_ID-vpc-02 --asn=64521 --advertisement-mode=custom --set-advertisement-groups=all_subnets
Expected console output:
student@cloudshell:~ (test-project-402417)$ PROJECT_ID=$(gcloud config get-value project) REGION=us-central1 gcloud compute routers create $PROJECT_ID-vpc-01-router --project=$PROJECT_ID --region=$REGION --network=default --asn=64520 --advertisement-mode=custom --set-advertisement-groups=all_subnets gcloud compute routers create $PROJECT_ID-vpc-02-router --project=$PROJECT_ID --region=$REGION --network=$PROJECT_ID-vpc-02 --asn=64521 --advertisement-mode=custom --set-advertisement-groups=all_subnets Your active configuration is: [cloudshell-18870] Creating router [test-project-402417-vpc-01-router]...done. NAME: test-project-402417-vpc-01-router REGION: us-central1 NETWORK: default Creating router [test-project-402417-vpc-02-router]...done. NAME: test-project-402417-vpc-02-router REGION: us-central1 NETWORK: test-project-402417-vpc-02
Create VPN gateways in both VPCs:
PROJECT_ID=$(gcloud config get-value project)
gcloud compute vpn-gateways create $PROJECT_ID-vpc-01-vpn-gtw --project=$PROJECT_ID --region=$REGION --network=default
gcloud compute vpn-gateways create $PROJECT_ID-vpc-02-vpn-gtw --project=$PROJECT_ID --region=$REGION --network=$PROJECT_ID-vpc-02
Expected console output:
student@cloudshell:~ (test-project-402417)$ gcloud compute vpn-gateways create $PROJECT_ID-vpc-01-vpn-gtw --project=$PROJECT_ID --region=$REGION --network=default gcloud compute vpn-gateways create $PROJECT_ID-vpc-02-vpn-gtw --project=$PROJECT_ID --region=$REGION --network=$PROJECT_ID-vpc-02 Creating VPN Gateway...done. NAME: test-project-402417-vpc-01-vpn-gtw INTERFACE0: 35.242.106.28 INTERFACE1: 35.220.86.122 INTERFACE0_IPV6: INTERFACE1_IPV6: NETWORK: default REGION: us-central1 Creating VPN Gateway...done. NAME: test-project-402417-vpc-02-vpn-gtw INTERFACE0: 35.242.116.197 INTERFACE1: 35.220.68.53 INTERFACE0_IPV6: INTERFACE1_IPV6: NETWORK: test-project-402417-vpc-02 REGION: us-central1 student@cloudshell:~ (test-project-402417)$
Create VPN tunnels
We will be creating an HA configuration with a pair of VPN tunnels for each VPC.
In Cloud Shell or a terminal with the Cloud SDK, execute:
PROJECT_ID=$(gcloud config get-value project)
SHARED_SECRET=`openssl rand -base64 24`
gcloud compute vpn-tunnels create $PROJECT_ID-vpc-01-vpn-tunnel-01 --shared-secret=$SHARED_SECRET --peer-gcp-gateway=$PROJECT_ID-vpc-02-vpn-gtw --ike-version=2 --router=$PROJECT_ID-vpc-01-router --vpn-gateway=$PROJECT_ID-vpc-01-vpn-gtw --project=$PROJECT_ID --region=$REGION --interface=0
gcloud compute vpn-tunnels create $PROJECT_ID-vpc-01-vpn-tunnel-02 --shared-secret=$SHARED_SECRET --peer-gcp-gateway=$PROJECT_ID-vpc-02-vpn-gtw --ike-version=2 --router=$PROJECT_ID-vpc-01-router --vpn-gateway=$PROJECT_ID-vpc-01-vpn-gtw --project=$PROJECT_ID --region=$REGION --interface=1
gcloud compute vpn-tunnels create $PROJECT_ID-vpc-02-vpn-tunnel-01 --shared-secret=$SHARED_SECRET --peer-gcp-gateway=$PROJECT_ID-vpc-01-vpn-gtw --ike-version=2 --router=$PROJECT_ID-vpc-02-router --vpn-gateway=$PROJECT_ID-vpc-02-vpn-gtw --project=$PROJECT_ID --region=$REGION --interface=0
gcloud compute vpn-tunnels create $PROJECT_ID-vpc-02-vpn-tunnel-02 --shared-secret=$SHARED_SECRET --peer-gcp-gateway=$PROJECT_ID-vpc-01-vpn-gtw --ike-version=2 --router=$PROJECT_ID-vpc-02-router --vpn-gateway=$PROJECT_ID-vpc-02-vpn-gtw --project=$PROJECT_ID --region=$REGION --interface=1
Expected console output:
student@cloudshell:~ (test-project-402417)$ SHARED_SECRET=`openssl rand -base64 24` gcloud compute vpn-tunnels create $PROJECT_ID-vpc-01-vpn-tunnel-01 --shared-secret=$SHARED_SECRET --peer-gcp-gateway=$PROJECT_ID-vpc-02-vpn-gtw --ike-version=2 --router=$PROJECT_ID-vpc-01-router --vpn-gateway=$PROJECT_ID-vpc-01-vpn-gtw --project=$PROJECT_ID --region=$REGION --interface=0 gcloud compute vpn-tunnels create $PROJECT_ID-vpc-01-vpn-tunnel-02 --shared-secret=$SHARED_SECRET --peer-gcp-gateway=$PROJECT_ID-vpc-02-vpn-gtw --ike-version=2 --router=$PROJECT_ID-vpc-01-router --vpn-gateway=$PROJECT_ID-vpc-01-vpn-gtw --project=$PROJECT_ID --region=$REGION --interface=1 gcloud compute vpn-tunnels create $PROJECT_ID-vpc-02-vpn-tunnel-01 --shared-secret=$SHARED_SECRET --peer-gcp-gateway=$PROJECT_ID-vpc-01-vpn-gtw --ike-version=2 --router=$PROJECT_ID-vpc-02-router --vpn-gateway=$PROJECT_ID-vpc-02-vpn-gtw --project=$PROJECT_ID --region=$REGION --interface=0 gcloud compute vpn-tunnels create $PROJECT_ID-vpc-02-vpn-tunnel-02 --shared-secret=$SHARED_SECRET --peer-gcp-gateway=$PROJECT_ID-vpc-01-vpn-gtw --ike-version=2 --router=$PROJECT_ID-vpc-02-router --vpn-gateway=$PROJECT_ID-vpc-02-vpn-gtw --project=$PROJECT_ID --region=$REGION --interface=1 Creating VPN tunnel...done. NAME: test-project-402417-vpc-01-vpn-tunnel-01 REGION: us-central1 GATEWAY: test-project-402417-vpc-01-vpn-gtw VPN_INTERFACE: 0 PEER_ADDRESS: 35.242.116.197 Creating VPN tunnel...done. NAME: test-project-402417-vpc-01-vpn-tunnel-02 REGION: us-central1 GATEWAY: test-project-402417-vpc-01-vpn-gtw VPN_INTERFACE: 1 PEER_ADDRESS: 35.220.68.53 Creating VPN tunnel...done. NAME: test-project-402417-vpc-02-vpn-tunnel-01 REGION: us-central1 GATEWAY: test-project-402417-vpc-02-vpn-gtw VPN_INTERFACE: 0 PEER_ADDRESS: 35.242.106.28 Creating VPN tunnel...done. NAME: test-project-402417-vpc-02-vpn-tunnel-02 REGION: us-central1 GATEWAY: test-project-402417-vpc-02-vpn-gtw VPN_INTERFACE: 1 PEER_ADDRESS: 35.220.86.122
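Optionally, check the tunnel status; it usually takes a minute or two for all four tunnels to report that they are established:
gcloud compute vpn-tunnels list --region=$REGION
gcloud compute vpn-tunnels describe $PROJECT_ID-vpc-01-vpn-tunnel-01 --region=$REGION --format="value(status,detailedStatus)"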
Create BGP sessions
Border Gateway Protocol (BGP) sessions enable dynamic routing by exchanging information about available routes between the VPCs.
Create the first pair of BGP peers on the first (default) network. The IP addresses of these peers will be used for the BGP configuration created in the second VPC.
In Cloud Shell or a terminal with the Cloud SDK, execute:
PROJECT_ID=$(gcloud config get-value project)
gcloud compute routers add-interface $PROJECT_ID-vpc-01-router --interface-name=$PROJECT_ID-vpc-01-router-bgp-if-0 --vpn-tunnel=$PROJECT_ID-vpc-01-vpn-tunnel-01 --region=$REGION
gcloud compute routers add-bgp-peer $PROJECT_ID-vpc-01-router --peer-name=$PROJECT_ID-vpc-01-vpn-tunnel-01-bgp --interface=$PROJECT_ID-vpc-01-router-bgp-if-0 --peer-asn=64521 --region=$REGION
gcloud compute routers add-interface $PROJECT_ID-vpc-01-router --interface-name=$PROJECT_ID-vpc-01-router-bgp-if-1 --vpn-tunnel=$PROJECT_ID-vpc-01-vpn-tunnel-02 --region=$REGION
gcloud compute routers add-bgp-peer $PROJECT_ID-vpc-01-router --peer-name=$PROJECT_ID-vpc-01-vpn-tunnel-02-bgp --interface=$PROJECT_ID-vpc-01-router-bgp-if-1 --peer-asn=64521 --region=$REGION
Expected console output:
student@cloudshell:~ (test-project-402417)$ gcloud compute routers add-interface $PROJECT_ID-vpc-01-router --interface-name=$PROJECT_ID-vpc-01-router-bgp-if-0 --vpn-tunnel=$PROJECT_ID-vpc-01-vpn-tunnel-01 --region=$REGION gcloud compute routers add-bgp-peer $PROJECT_ID-vpc-01-router --peer-name=$PROJECT_ID-vpc-01-vpn-tunnel-01-bgp --interface=$PROJECT_ID-vpc-01-router-bgp-if-0 --peer-asn=64521 --region=$REGION gcloud compute routers add-interface $PROJECT_ID-vpc-01-router --interface-name=$PROJECT_ID-vpc-01-router-bgp-if-1 --vpn-tunnel=$PROJECT_ID-vpc-01-vpn-tunnel-02 --region=$REGION gcloud compute routers add-bgp-peer $PROJECT_ID-vpc-01-router --peer-name=$PROJECT_ID-vpc-01-vpn-tunnel-02-bgp --interface=$PROJECT_ID-vpc-01-router-bgp-if-1 --peer-asn=64521 --region=$REGION Updated [https://www.googleapis.com/compute/v1/projects/test-project-402417/regions/us-central1/routers/test-project-402417-vpc-01-router]. Creating peer [test-project-402417-vpc-01-vpn-tunnel-01-bgp] in router [test-project-402417-vpc-01-router]...done. Updated [https://www.googleapis.com/compute/v1/projects/test-project-402417/regions/us-central1/routers/test-project-402417-vpc-01-router]. Creating peer [test-project-402417-vpc-01-vpn-tunnel-02-bgp] in router [test-project-402417-vpc-01-router]...done. student@cloudshell:~ (test-project-402417)$
Add the BGP peers in the second VPC:
REGION=us-central1
PROJECT_ID=$(gcloud config get-value project)
gcloud compute routers add-interface $PROJECT_ID-vpc-02-router --interface-name=$PROJECT_ID-vpc-02-router-bgp-if-0 --vpn-tunnel=$PROJECT_ID-vpc-02-vpn-tunnel-01 --ip-address=$(gcloud compute routers describe $PROJECT_ID-vpc-01-router --format="value(bgpPeers[0].peerIpAddress)" --region=$REGION) --mask-length=30 --region=$REGION
gcloud compute routers add-bgp-peer $PROJECT_ID-vpc-02-router --peer-name=$PROJECT_ID-vpc-02-vpn-tunnel-01-bgp --interface=$PROJECT_ID-vpc-02-router-bgp-if-0 --peer-ip-address=$(gcloud compute routers describe $PROJECT_ID-vpc-01-router --format="value(bgpPeers[0].ipAddress)" --region=$REGION) --peer-asn=64520 --region=$REGION
gcloud compute routers add-interface $PROJECT_ID-vpc-02-router --interface-name=$PROJECT_ID-vpc-02-router-bgp-if-1 --vpn-tunnel=$PROJECT_ID-vpc-02-vpn-tunnel-02 --ip-address=$(gcloud compute routers describe $PROJECT_ID-vpc-01-router --format="value(bgpPeers[1].peerIpAddress)" --region=$REGION) --mask-length=30 --region=$REGION
gcloud compute routers add-bgp-peer $PROJECT_ID-vpc-02-router --peer-name=$PROJECT_ID-vpc-02-vpn-tunnel-02-bgp --interface=$PROJECT_ID-vpc-02-router-bgp-if-1 --peer-ip-address=$(gcloud compute routers describe $PROJECT_ID-vpc-01-router --format="value(bgpPeers[1].ipAddress)" --region=$REGION) --peer-asn=64520 --region=$REGION
Expected console output:
student@cloudshell:~ (test-project-402417)$ REGION=us-central1 PROJECT_ID=$(gcloud config get-value project) gcloud compute routers add-interface $PROJECT_ID-vpc-02-router --interface-name=$PROJECT_ID-vpc-02-router-bgp-if-0 --vpn-tunnel=$PROJECT_ID-vpc-02-vpn-tunnel-01 --ip-address=$(gcloud compute routers describe $PROJECT_ID-vpc-01-router --format="value(bgpPeers[0].peerIpAddress)" --region=$REGION) --mask-length=30 --region=$REGION gcloud compute routers add-bgp-peer $PROJECT_ID-vpc-02-router --peer-name=$PROJECT_ID-vpc-02-vpn-tunnel-01-bgp --interface=$PROJECT_ID-vpc-01-router-bgp-if-0 --peer-ip-address=$(gcloud compute routers describe $PROJECT_ID-vpc-01-router --format="value(bgpPeers[0].ipAddress)" --region=$REGION) --peer-asn=64520 --region=$REGION gcloud compute routers add-interface $PROJECT_ID-vpc-02-router --interface-name=$PROJECT_ID-vpc-02-router-bgp-if-1 --vpn-tunnel=$PROJECT_ID-vpc-02-vpn-tunnel-02 --ip-address=$(gcloud compute routers describe $PROJECT_ID-vpc-01-router --format="value(bgpPeers[1].peerIpAddress)" --region=$REGION) --mask-length=30 --region=$REGION gcloud compute routers add-bgp-peer $PROJECT_ID-vpc-02-router --peer-name=$PROJECT_ID-vpc-02-vpn-tunnel-02-bgp --interface=$PROJECT_ID-vpc-01-router-bgp-if-1 --peer-ip-address=$(gcloud compute routers describe $PROJECT_ID-vpc-01-router --format="value(bgpPeers[1].ipAddress)" --region=$REGION) --peer-asn=64520 --region=$REGION Your active configuration is: [cloudshell-18870] Updated [https://www.googleapis.com/compute/v1/projects/test-project-402417/regions/us-central1/routers/test-project-402417-vpc-02-router]. Creating peer [test-project-402417-vpc-02-vpn-tunnel-01-bgp] in router [test-project-402417-vpc-02-router]...done. Updated [https://www.googleapis.com/compute/v1/projects/test-project-402417/regions/us-central1/routers/test-project-402417-vpc-02-router]. Creating peer [test-project-402417-vpc-02-vpn-tunnel-02-bgp] in router [test-project-402417-vpc-02-router]...done.
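Optionally, check the BGP session state on both routers; in the get-status output, look for bgpPeerStatus entries reporting the sessions as established (status UP):
gcloud compute routers get-status $PROJECT_ID-vpc-01-router --region=$REGION
gcloud compute routers get-status $PROJECT_ID-vpc-02-router --region=$REGION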
Add the custom routes to BGP
Now we need to advertise our private service IP range over BGP.
In Cloud Shell execute:
PROJECT_ID=$(gcloud config get-value project)
gcloud compute routers update $PROJECT_ID-vpc-01-router --add-advertisement-ranges=$(gcloud compute addresses list --filter="name:psa-range AND network:default" --format="value(address_range())") --region=$REGION
gcloud compute networks peerings update servicenetworking-googleapis-com --network=default --import-custom-routes --export-custom-routes
Expected console output (redacted):
PROJECT_ID=$(gcloud config get-value project) gcloud compute routers update $PROJECT_ID-vpc-01-router --add-advertisement-ranges=$(gcloud compute addresses list --filter="name:psa-range AND network:default" --format="value(address_range())") --region=$REGION gcloud compute networks peerings update servicenetworking-googleapis-com --network=default --import-custom-routes --export-custom-routes ...
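To confirm that routes are now exchanged between the two networks, you can ping the internal IP of the Oracle VM from the instance-1 VM. This is an optional check run from an SSH session to instance-1; it relies on the ICMP rule created earlier in the second VPC, and it can take a minute or two for the BGP routes to propagate:
ping -c 3 $(gcloud compute instances list --filter="name=(ora-xe-01)" --format="value(networkInterfaces[0].networkIP)")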
8. Configure Oracle FDW in AlloyDB
Now we can create a test database and configure the Oracle FDW extension. We are going to use the instance-1 VM created earlier and the noted password for the AlloyDB cluster.
Create Database
Connect to the instance-1 GCE VM:
ZONE=us-central1-a
gcloud compute ssh instance-1 --zone=$ZONE
In the SSH session execute:
export PGPASSWORD=<Noted password>
REGION=us-central1
ADBCLUSTER=alloydb-aip-01
INSTANCE_IP=$(gcloud alloydb instances describe $ADBCLUSTER-pr --cluster=$ADBCLUSTER --region=$REGION --format="value(ipAddress)")
psql "host=$INSTANCE_IP user=postgres" -c "CREATE DATABASE quickstart_db"
Expected console output:
student@instance-1:~$ export PGPASSWORD=6dd7fKHnMId8RM97 student@instance-1:~$ REGION=us-central1 ADBCLUSTER=alloydb-aip-01 INSTANCE_IP=$(gcloud alloydb instances describe $ADBCLUSTER-pr --cluster=$ADBCLUSTER --region=$REGION --format="value(ipAddress)") psql "host=$INSTANCE_IP user=postgres" -c "CREATE DATABASE quickstart_db" CREATE DATABASE student@instance-1:~$
Configure Oracle FDW Extension
Enable the Oracle FDW extension in the newly created database.
In the same SSH session execute:
psql -h $INSTANCE_IP -U postgres -d quickstart_db -c "create extension if not exists oracle_fdw cascade"
Expected console output:
student@instance-1:~$ psql "host=$INSTANCE_IP user=postgres dbname=quickstart_db" -c "create extension if not exists oracle_fdw cascade" CREATE EXTENSION student@instance-1:~$
Configure Oracle FDW
We continue to work in the same SSH session, configuring Oracle FDW to work with the HR sample schema in the Oracle database.
Create the FDW server configuration using the internal IP address of our Oracle XE VM in the second VPC.
In the same SSH session to instance-1, execute:
ORACLE_SERVER_IP=$(gcloud compute instances list --filter="name=(ora-xe-01)" --format="value(networkInterfaces[0].networkIP)")
psql -h $INSTANCE_IP -U postgres -d quickstart_db -c "create server ora_xe foreign data wrapper oracle_fdw options (dbserver '$ORACLE_SERVER_IP:1521/xepdb1')"
Expected console output:
student@instance-1:~$ ORACLE_SERVER_IP=$(gcloud compute instances list --filter="name=(ora-xe-01)" --format="value(networkInterfaces[0].networkIP)") psql -h $INSTANCE_IP -U postgres -d quickstart_db -c "create server ora_xe foreign data wrapper oracle_fdw options (dbserver '$ORACLE_SERVER_IP:1521/xepdb1')" CREATE SERVER
Grant usage on the created server to the postgres user.
psql -h $INSTANCE_IP -U postgres -d quickstart_db -c "grant usage on foreign server ora_xe to postgres"
Expected console output:
student@instance-1:~$ psql -h $INSTANCE_IP -U postgres -d quickstart_db -c "grant usage on foreign server ora_xe to postgres" GRANT
Create a mapping between the PostgreSQL user and the Oracle database user, using the password we set for the user in the Oracle database. Use the noted password for the Oracle user POSTGRES created earlier (POSTGRES_ORA_PWD).
POSTGRES_ORA_PWD=<your password for the oracle user postgres>
psql -h $INSTANCE_IP -U postgres -d quickstart_db -c "CREATE USER MAPPING FOR postgres SERVER ora_xe OPTIONS ( USER 'postgres', PASSWORD '$POSTGRES_ORA_PWD')"
Expected console output:
student@instance-1:~$ POSTGRES_ORA_PWD=VeryStrongPassword0011## student@instance-1:~$ psql -h $INSTANCE_IP -U postgres -d quickstart_db -c "CREATE USER MAPPING FOR postgres SERVER ora_xe OPTIONS ( USER 'postgres', PASSWORD '$POSTGRES_ORA_PWD')" CREATE USER MAPPING
9. Use the Oracle FDW with HR schema
Now we can use our Oracle FDW with data in the Oracle database.
Import Oracle HR Schema definitions
We continue to work on our PostgreSQL client VM, using psql to talk to AlloyDB.
In the VM SSH session execute:
psql -h $INSTANCE_IP -U postgres -d quickstart_db -c "create schema ora_exe_hr";
psql -h $INSTANCE_IP -U postgres -d quickstart_db -c "import foreign schema \"HR\" from server ora_xe into ora_exe_hr";
Expected output (redacted):
student@instance-1:~$ psql -h $INSTANCE_IP -U postgres -d quickstart_db -c "create schema ora_exe_hr"; psql -h $INSTANCE_IP -U postgres -d quickstart_db -c "import foreign schema \"HR\" from server ora_xe into ora_exe_hr"; CREATE SCHEMA IMPORT FOREIGN SCHEMA student@instance-1:~$
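Optionally, list the foreign tables created by the import to confirm the HR tables are visible (\det is the psql command for listing foreign tables):
psql -h $INSTANCE_IP -U postgres -d quickstart_db -c "\det ora_exe_hr.*"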
Test the Oracle FDW
Let's run a sample SQL statement.
Connect to the database. In the VM SSH session execute:
psql -h $INSTANCE_IP -U postgres -d quickstart_db
In the psql session, run a SELECT statement:
select * from ora_exe_hr.countries limit 5;
Expected output:
student@instance-1:~$ psql -h $INSTANCE_IP -U postgres -d quickstart_db psql (13.13 (Debian 13.13-0+deb11u1), server 14.7) WARNING: psql major version 13, server major version 14. Some psql features might not work. SSL connection (protocol: TLSv1.3, cipher: TLS_AES_256_GCM_SHA384, bits: 256, compression: off) Type "help" for help. quickstart_db=> select * from ora_exe_hr.countries limit 5; country_id | country_name | region_id ------------+--------------+----------- AR | Argentina | 20 AU | Australia | 40 BE | Belgium | 10 BR | Brazil | 20 CA | Canada | 20 (5 rows) quickstart_db=>
The query returns rows from the Oracle database, which means our configuration works correctly.
You can experiment with the rest of the tables.
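For example, the imported foreign tables can be joined like regular tables. A sample query, run in the same psql session, lists a few employees with their department names:
select e.first_name, e.last_name, d.department_name
from ora_exe_hr.employees e
join ora_exe_hr.departments d on d.department_id = e.department_id
limit 5;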
10. Clean up environment
Now that all tasks are completed, we can clean up our environment by destroying the components to prevent unnecessary billing.
Destroy the AlloyDB instances and cluster when you are done with the lab.
Delete AlloyDB cluster and all instances
The cluster is destroyed with the --force option, which also deletes all the instances belonging to the cluster.
In Cloud Shell, define the project and environment variables again if you've been disconnected and the previous settings are lost:
gcloud config set project <your project id>
export REGION=us-central1
export ADBCLUSTER=alloydb-aip-01
export PROJECT_ID=$(gcloud config get-value project)
Delete the cluster:
gcloud alloydb clusters delete $ADBCLUSTER --region=$REGION --force
Expected console output:
student@cloudshell:~ (test-project-001-402417)$ gcloud alloydb clusters delete $ADBCLUSTER --region=$REGION --force All of the cluster data will be lost when the cluster is deleted. Do you want to continue (Y/n)? Y Operation ID: operation-1697820178429-6082890a0b570-4a72f7e4-4c5df36f Deleting cluster...done.
Delete AlloyDB Backups
Delete all AlloyDB backups for the cluster:
for i in $(gcloud alloydb backups list --filter="CLUSTER_NAME: projects/$PROJECT_ID/locations/$REGION/clusters/$ADBCLUSTER" --format="value(name)" --sort-by=~createTime) ; do gcloud alloydb backups delete $(basename $i) --region $REGION --quiet; done
Expected console output:
student@cloudshell:~ (test-project-001-402417)$ for i in $(gcloud alloydb backups list --filter="CLUSTER_NAME: projects/$PROJECT_ID/locations/$REGION/clusters/$ADBCLUSTER" --format="value(name)" --sort-by=~createTime) ; do gcloud alloydb backups delete $(basename $i) --region $REGION --quiet; done Operation ID: operation-1697826266108-60829fb7b5258-7f99dc0b-99f3c35f Deleting backup...done.
Now we can destroy our VMs.
Delete GCE VM
In Cloud Shell execute:
export GCEVM=instance-1
export ZONE=us-central1-a
gcloud compute instances delete $GCEVM \
--zone=$ZONE \
--quiet
Expected console output:
student@cloudshell:~ (test-project-001-402417)$ export GCEVM=instance-1 export ZONE=us-central1-a gcloud compute instances delete $GCEVM \ --zone=$ZONE \ --quiet Deleted
Delete VM with Oracle XE
In Cloud Shell execute:
SERVER_NAME=ora-xe-01
ZONE=us-central1-a
gcloud compute instances delete $SERVER_NAME \
--zone=$ZONE \
--quiet
Expected console output:
student@cloudshell:~ (test-project-402417)$ SERVER_NAME=ora-xe-01 ZONE=us-central1-a gcloud compute instances delete $SERVER_NAME \ --zone=$ZONE \ --quiet Deleted [https://www.googleapis.com/compute/v1/projects/test-project-402417/zones/us-central1-a/instances/ora-xe-01]. student@cloudshell:~ (test-project-402417)$
Delete VPN
In Cloud Shell execute:
REGION=us-central1
PROJECT_ID=$(gcloud config get-value project)
gcloud compute vpn-tunnels delete $PROJECT_ID-vpc-01-vpn-tunnel-01 --region=$REGION --quiet
gcloud compute vpn-tunnels delete $PROJECT_ID-vpc-01-vpn-tunnel-02 --region=$REGION --quiet
gcloud compute vpn-tunnels delete $PROJECT_ID-vpc-02-vpn-tunnel-01 --region=$REGION --quiet
gcloud compute vpn-tunnels delete $PROJECT_ID-vpc-02-vpn-tunnel-02 --region=$REGION --quiet
gcloud compute routers delete $PROJECT_ID-vpc-02-router --region=$REGION --quiet
gcloud compute routers delete $PROJECT_ID-vpc-01-router --region=$REGION --quiet
gcloud compute vpn-gateways delete $PROJECT_ID-vpc-01-vpn-gtw --region=$REGION --quiet
gcloud compute vpn-gateways delete $PROJECT_ID-vpc-02-vpn-gtw --region=$REGION --quiet
Expected console output:
student@cloudshell:~ (test-project-402417)$ REGION=us-central1 PROJECT_ID=$(gcloud config get-value project) gcloud compute vpn-tunnels delete $PROJECT_ID-vpc-01-vpn-tunnel-01 --region=$REGION --quiet gcloud compute vpn-tunnels delete $PROJECT_ID-vpc-01-vpn-tunnel-02 --region=$REGION --quiet gcloud compute vpn-tunnels delete $PROJECT_ID-vpc-02-vpn-tunnel-01 --region=$REGION --quiet gcloud compute vpn-tunnels delete $PROJECT_ID-vpc-02-vpn-tunnel-02 --region=$REGION --quiet gcloud compute routers delete $PROJECT_ID-vpc-02-router --region=$REGION --quiet gcloud compute routers delete $PROJECT_ID-vpc-01-router --region=$REGION --quiet gcloud compute vpn-gateways delete $PROJECT_ID-vpc-01-vpn-gtw --region=$REGION --quiet gcloud compute vpn-gateways delete $PROJECT_ID-vpc-02-vpn-gtw --region=$REGION --quiet Your active configuration is: [cloudshell-18870] Deleting VPN tunnel...done. Deleting VPN tunnel...done. Deleting VPN tunnel...done. Deleting VPN tunnel...done. Deleted [https://www.googleapis.com/compute/v1/projects/test-project-402417/regions/us-central1/routers/test-project-402417-vpc-02-router]. Deleted [https://www.googleapis.com/compute/v1/projects/test-project-402417/regions/us-central1/routers/test-project-402417-vpc-01-router]. Deleting VPN Gateway...done. Deleting VPN Gateway...done. student@cloudshell:~ (test-project-402417)$
Delete Second VPC
In Cloud Shell execute:
REGION=us-central1
PROJECT_ID=$(gcloud config get-value project)
gcloud compute firewall-rules delete $PROJECT_ID-vpc-02-allow-icmp --quiet
gcloud compute firewall-rules delete $PROJECT_ID-vpc-02-allow-ssh --quiet
gcloud compute firewall-rules delete default-allow-postgres --quiet
gcloud compute firewall-rules delete $PROJECT_ID-vpc-02-allow-oracle --quiet
gcloud compute networks subnets delete $PROJECT_ID-vpc-02-$REGION --region=$REGION --quiet
gcloud compute networks delete $PROJECT_ID-vpc-02 --quiet
Expected console output:
student@cloudshell:~ (test-project-402417)$ gcloud compute firewall-rules delete $PROJECT_ID-vpc-02-allow-icmp --quiet gcloud compute firewall-rules delete $PROJECT_ID-vpc-02-allow-ssh --quiet gcloud compute firewall-rules delete default-allow-postgres --quiet gcloud compute firewall-rules delete $PROJECT_ID-vpc-02-allow-oracle --quiet gcloud compute networks subnets delete $PROJECT_ID-vpc-02-$REGION --region=$REGION --quiet gcloud compute networks delete $PROJECT_ID-vpc-02 --quiet Deleted [https://www.googleapis.com/compute/v1/projects/test-project-402417/global/firewalls/test-project-402417-vpc-02-allow-icmp]. Deleted [https://www.googleapis.com/compute/v1/projects/test-project-402417/global/firewalls/test-project-402417-vpc-02-allow-ssh]. Deleted [https://www.googleapis.com/compute/v1/projects/test-project-402417/global/firewalls/default-allow-postgres]. Deleted [https://www.googleapis.com/compute/v1/projects/test-project-402417/global/firewalls/test-project-402417-vpc-02-allow-oracle]. gcloud compute networks subnets delete $PROJECT_ID-vpc-02-$REGION --region=$REGION --quiet gcloud compute networks delete $PROJECT_ID-vpc-02 --quiet
11. Congratulations
Congratulations on completing the codelab.
What we've covered
- How to deploy an AlloyDB cluster
- How to connect to AlloyDB
- How to configure and deploy a sample Oracle database
- How to set up a VPN between two VPC networks
- How to configure the Oracle FDW extension
12. Survey