1. Introduction
Sometimes, for various reasons, it is not possible to move workloads from an Oracle database to AlloyDB. In those cases, if we want the data to be available in AlloyDB for reporting or further processing, we can use Oracle FDW (Foreign Data Wrapper). Oracle FDW lets you query data in a remote Oracle database and expose it through views, making it look as if it lived inside AlloyDB.
In this codelab you will learn how to use Oracle FDW to connect your AlloyDB database to an Oracle database deployed in a separate network, using a VPN.
AlloyDB to Oracle over VPN (diagram)
The diagram above shows the AlloyDB cluster and the GCE instance instance-1 on the left, deployed in a VPC using the default network, and the GCE instance ora-xe-01 deployed in a different VPC whose network is named $PROJECT_ID-vpc-02. The first and second VPCs are connected using Cloud VPN, with routes established so that the AlloyDB and Oracle instances can communicate with each other.
For more information about the Oracle FDW extension, see the oracle_fdw documentation.
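At a high level, oracle_fdw follows the standard PostgreSQL foreign data wrapper workflow: enable the extension, define a foreign server, map a local user to an Oracle account, and then import or define foreign tables. The sketch below is purely illustrative; the host, service name, schema, and credentials are placeholders, and the codelab performs each of these steps later with real values.
-- Illustrative oracle_fdw workflow; all names below are placeholders, not values used in this codelab.
CREATE EXTENSION IF NOT EXISTS oracle_fdw;
-- Foreign server pointing at the remote Oracle listener.
CREATE SERVER ora_example FOREIGN DATA WRAPPER oracle_fdw
  OPTIONS (dbserver 'oracle-host:1521/SERVICE_NAME');
-- Map the local PostgreSQL user to an Oracle account.
CREATE USER MAPPING FOR postgres SERVER ora_example
  OPTIONS (user 'oracle_user', password 'oracle_password');
-- Import a whole remote schema as foreign tables (individual foreign tables can also be created).
IMPORT FOREIGN SCHEMA "HR" FROM SERVER ora_example INTO public;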
Prerequisites
- Basic understanding of the Google Cloud console
- Basic skills with the command-line interface and Cloud Shell
- Basic knowledge of PostgreSQL and Oracle databases
What you'll learn
- How to deploy an AlloyDB cluster
- How to connect to AlloyDB
- How to configure and deploy a sample Oracle database
- How to set up a VPN between two VPC networks
- How to configure the Oracle FDW extension
What you'll need
- A Google Cloud account and project
- A web browser, such as Chrome
2. Setup and requirements
Self-paced environment setup
- Sign in to the Google Cloud Console and create a new project or reuse an existing one. If you don't already have a Gmail or Google Workspace account, you must create one.
- The Project name is the display name for this project's participants. It is a character string not used by Google APIs, and you can update it at any time.
- The Project ID must be unique across all Google Cloud projects and cannot be changed after it has been set. The Cloud console auto-generates a unique string; usually you don't care what it is. In most codelabs you will need to reference the Project ID, typically identified as PROJECT_ID. If you don't like the generated ID, you can generate another random one, or you can try your own and check whether it is available. It cannot be changed after this step and remains for the duration of the project.
- For your information, there is a third value, a Project Number, which some APIs use. Learn more about all three of these values in the documentation.
- Next, you will need to enable billing in the Cloud console to use Cloud resources/APIs. Running through this codelab won't cost much, if anything at all. To shut down resources and avoid incurring billing beyond this tutorial, you can delete the resources you created or delete the whole project. New Google Cloud users are eligible for the US$300 free trial program.
Start Cloud Shell
While Google Cloud can be operated remotely from your laptop, in this codelab you will use Google Cloud Shell, a command-line environment running in the Cloud.
In the Google Cloud Console, click the Cloud Shell icon on the top-right toolbar:
It should take only a few moments to provision and connect to the environment. When it is finished, you should see something like this:
This virtual machine is loaded with all the development tools you need. It offers a persistent 5 GB home directory and runs in Google Cloud, which greatly improves network performance and authentication. All of your work in this codelab can be done in a browser; you do not need to install anything.
3. Before you begin
Enable the APIs
In Cloud Shell, make sure your project ID is set:
gcloud config set project [YOUR-PROJECT-ID]
PROJECT_ID=$(gcloud config get-value project)
Enable all the required services:
gcloud services enable alloydb.googleapis.com \
compute.googleapis.com \
cloudresourcemanager.googleapis.com \
servicenetworking.googleapis.com \
vpcaccess.googleapis.com
Expected console output:
student@cloudshell:~ (gleb-test-short-004)$ gcloud services enable alloydb.googleapis.com \ compute.googleapis.com \ cloudresourcemanager.googleapis.com \ servicenetworking.googleapis.com \ vpcaccess.googleapis.com Operation "operations/acf.p2-404051529011-664c71ad-cb2b-4ab4-86c1-1f3157d70ba1" finished successfully.
Set your default region to us-central1, or any other region you prefer. This lab uses the us-central1 region.
gcloud config set compute/region us-central1
4. Deploy an AlloyDB cluster
Before creating an AlloyDB cluster, we need to allocate a private IP range in the VPC to be used by the future AlloyDB instance. After that we will create the cluster and the instance.
Create a private IP range
Configure private services access in the VPC for AlloyDB. We assume the project has the "default" VPC network and that it will be used for all actions.
Create the private IP range:
gcloud compute addresses create psa-range \
--global \
--purpose=VPC_PEERING \
--prefix-length=16 \
--description="VPC private service access" \
--network=default
Create a private connection using the allocated IP range:
gcloud services vpc-peerings connect \
--service=servicenetworking.googleapis.com \
--ranges=psa-range \
--network=default
Expected console output:
student@cloudshell:~ (test-project-402417)$ gcloud compute addresses create psa-range \ --global \ --purpose=VPC_PEERING \ --prefix-length=16 \ --description="VPC private service access" \ --network=default Created [https://www.googleapis.com/compute/v1/projects/test-project-402417/global/addresses/psa-range]. student@cloudshell:~ (test-project-402417)$ gcloud services vpc-peerings connect \ --service=servicenetworking.googleapis.com \ --ranges=psa-range \ --network=default Operation "operations/pssn.p24-4470404856-595e209f-19b7-4669-8a71-cbd45de8ba66" finished successfully. student@cloudshell:~ (test-project-402417)$
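Optionally, before moving on, you can confirm that the private services access peering was created. This check is an addition to the original steps; it simply lists the peerings on the default network:
gcloud services vpc-peerings list --network=default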
Create an AlloyDB cluster
Create an AlloyDB cluster in the default region:
export PGPASSWORD=`openssl rand -hex 12`
export REGION=us-central1
export ADBCLUSTER=alloydb-aip-01
gcloud alloydb clusters create $ADBCLUSTER \
--password=$PGPASSWORD \
--network=default \
--region=$REGION
Expected console output:
student@cloudshell:~ (test-project-402417)$ export PGPASSWORD=`openssl rand -base64 12` export REGION=us-central1 export ADBCLUSTER=alloydb-aip-01 gcloud alloydb clusters create $ADBCLUSTER \ --password=$PGPASSWORD \ --network=default \ --region=$REGION Operation ID: operation-1697655441138-6080235852277-9e7f04f5-2012fce4 Creating cluster...done.
Note the PostgreSQL password for future use:
echo $PGPASSWORD
Expected console output:
student@cloudshell:~ (test-project-402417)$ echo $PGPASSWORD bbefbfde7601985b0dee5723
Create an AlloyDB primary instance
Create an AlloyDB primary instance for the cluster:
export REGION=us-central1
gcloud alloydb instances create $ADBCLUSTER-pr \
--instance-type=PRIMARY \
--cpu-count=2 \
--region=$REGION \
--cluster=$ADBCLUSTER
Expected console output:
student@cloudshell:~ (test-project-402417)$ gcloud alloydb instances create $ADBCLUSTER-pr \ --instance-type=PRIMARY \ --cpu-count=2 \ --region=$REGION \ --availability-type ZONAL \ --cluster=$ADBCLUSTER Operation ID: operation-1697659203545-6080315c6e8ee-391805db-25852721 Creating instance...done.
5. Connect to AlloyDB
To work with AlloyDB and run commands such as creating a database or enabling an extension, we need a development environment. It can be any standard tool that works with PostgreSQL; in our case we will use the standard PostgreSQL client installed on a Linux box.
AlloyDB is deployed with a private-only connection, so we need a VM with the PostgreSQL client installed to work with the database.
Deploy a GCE VM
Create a GCE VM in the same region and VPC as the AlloyDB cluster.
In Cloud Shell, execute:
export ZONE=us-central1-a
gcloud compute instances create instance-1 \
--zone=$ZONE \
--scopes=https://www.googleapis.com/auth/cloud-platform
Expected console output:
student@cloudshell:~ (test-project-402417)$ export ZONE=us-central1-a student@cloudshell:~ (test-project-402417)$ gcloud compute instances create instance-1 \ --zone=$ZONE \ --scopes=https://www.googleapis.com/auth/cloud-platform Created [https://www.googleapis.com/compute/v1/projects/test-project-402417/zones/us-central1-a/instances/instance-1]. NAME: instance-1 ZONE: us-central1-a MACHINE_TYPE: n1-standard-1 PREEMPTIBLE: INTERNAL_IP: 10.128.0.2 EXTERNAL_IP: 34.71.192.233 STATUS: RUNNING
Install the Postgres client
Install the PostgreSQL client software on the deployed VM.
Connect to the VM:
gcloud compute ssh instance-1 --zone=us-central1-a
Expected console output:
student@cloudshell:~ (test-project-402417)$ gcloud compute ssh instance-1 --zone=us-central1-a Updating project ssh metadata...working..Updated [https://www.googleapis.com/compute/v1/projects/test-project-402417]. Updating project ssh metadata...done. Waiting for SSH key to propagate. Warning: Permanently added 'compute.5110295539541121102' (ECDSA) to the list of known hosts. Linux instance-1 5.10.0-26-cloud-amd64 #1 SMP Debian 5.10.197-1 (2023-09-29) x86_64 The programs included with the Debian GNU/Linux system are free software; the exact distribution terms for each program are described in the individual files in /usr/share/doc/*/copyright. Debian GNU/Linux comes with ABSOLUTELY NO WARRANTY, to the extent permitted by applicable law. student@instance-1:~$
Install the software by running the following commands inside the VM:
sudo apt-get update
sudo apt-get install --yes postgresql-client
Expected console output:
student@instance-1:~$ sudo apt-get update sudo apt-get install --yes postgresql-client Get:1 https://packages.cloud.google.com/apt google-compute-engine-bullseye-stable InRelease [5146 B] Get:2 https://packages.cloud.google.com/apt cloud-sdk-bullseye InRelease [6406 B] Hit:3 https://deb.debian.org/debian bullseye InRelease Get:4 https://deb.debian.org/debian-security bullseye-security InRelease [48.4 kB] Get:5 https://packages.cloud.google.com/apt google-compute-engine-bullseye-stable/main amd64 Packages [1930 B] Get:6 https://deb.debian.org/debian bullseye-updates InRelease [44.1 kB] Get:7 https://deb.debian.org/debian bullseye-backports InRelease [49.0 kB] ...redacted... update-alternatives: using /usr/share/postgresql/13/man/man1/psql.1.gz to provide /usr/share/man/man1/psql.1.gz (psql.1.gz) in auto mode Setting up postgresql-client (13+225) ... Processing triggers for man-db (2.9.4-2) ... Processing triggers for libc-bin (2.31-13+deb11u7) ...
Connect to the instance
Connect to the primary instance from the VM using psql.
Continue in the same Cloud Shell tab with the SSH session open to your instance-1 VM.
Use the noted AlloyDB password (PGPASSWORD) and the AlloyDB cluster ID to connect to AlloyDB from the GCE VM:
export PGPASSWORD=<Noted password>
ADBCLUSTER=<your AlloyDB cluster name>
REGION=us-central1
INSTANCE_IP=$(gcloud alloydb instances describe $ADBCLUSTER-pr --cluster=$ADBCLUSTER --region=$REGION --format="value(ipAddress)")
psql "host=$INSTANCE_IP user=postgres sslmode=require"
Expected console output:
student@instance-1:~$ export PGPASSWORD=CQhOi5OygD4ps6ty student@instance-1:~$ ADBCLUSTER=alloydb-aip-01 student@instance-1:~$ REGION=us-central1 student@instance-1:~$ INSTANCE_IP=$(gcloud alloydb instances describe $ADBCLUSTER-pr --cluster=$ADBCLUSTER --region=$REGION --format="value(ipAddress)") gleb@instance-1:~$ psql "host=$INSTANCE_IP user=postgres sslmode=require" psql (13.13 (Debian 13.13-0+deb11u1), server 14.7) WARNING: psql major version 13, server major version 14. Some psql features might not work. SSL connection (protocol: TLSv1.3, cipher: TLS_AES_256_GCM_SHA384, bits: 256, compression: off) Type "help" for help. postgres=>
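As an optional sanity check (not part of the original steps), you can confirm from the psql prompt that the server responds and that the oracle_fdw extension is available for installation in AlloyDB:
SELECT version();
SELECT name, default_version FROM pg_available_extensions WHERE name = 'oracle_fdw';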
6. Create a sample Oracle database
Our sample Oracle database will be deployed in a separate VPC, to reflect scenarios where it runs on-premises or in any other environment separate from the AlloyDB cluster's VPC.
Create a second VPC
Use an open Cloud Shell tab or a command-line terminal with the Google Cloud SDK to execute:
PROJECT_ID=$(gcloud config get-value project)
REGION=us-central1
gcloud compute networks create $PROJECT_ID-vpc-02 --project=$PROJECT_ID --description=Custom\ VPC\ for\ $PROJECT_ID\ project --subnet-mode=custom --mtu=1460 --bgp-routing-mode=regional
gcloud compute networks subnets create $PROJECT_ID-vpc-02-$REGION --project=$PROJECT_ID --range=10.110.0.0/24 --stack-type=IPV4_ONLY --network=$PROJECT_ID-vpc-02 --region=$REGION
Expected console output:
student@cloudshell:~ PROJECT_ID=$(gcloud config get-value project) REGION=us-central1 gcloud compute networks create $PROJECT_ID-vpc-02 --project=$PROJECT_ID --description=Custom\ VPC\ for\ $PROJECT_ID\ project --subnet-mode=custom --mtu=1460 --bgp-routing-mode=regional gcloud compute networks subnets create $PROJECT_ID-vpc-02-$REGION --project=$PROJECT_ID --range=10.110.0.0/24 --stack-type=IPV4_ONLY --network=$PROJECT_ID-vpc-02 --region=$REGION Your active configuration is: [cloudshell-3726] Created [https://www.googleapis.com/compute/v1/projects/test-project-402417/global/networks/test-project-402417-vpc-02]. NAME: test-project-402417-vpc-02 SUBNET_MODE: CUSTOM BGP_ROUTING_MODE: REGIONAL IPV4_RANGE: GATEWAY_IPV4: Instances on this network will not be reachable until firewall rules are created. As an example, you can allow all internal traffic between instances as well as SSH, RDP, and ICMP by running: $ gcloud compute firewall-rules create <FIREWALL_NAME> --network test-project-402417-vpc-02 --allow tcp,udp,icmp --source-ranges <IP_RANGE> $ gcloud compute firewall-rules create <FIREWALL_NAME> --network test-project-402417-vpc-02 --allow tcp:22,tcp:3389,icmp Created [https://www.googleapis.com/compute/v1/projects/test-project-402417/regions/us-central1/subnetworks/test-project-402417-vpc-02-us-central1]. NAME: test-project-402417-vpc-02-us-central1 REGION: us-central1 NETWORK: test-project-402417-vpc-02 RANGE: 10.110.0.0/24 STACK_TYPE: IPV4_ONLY IPV6_ACCESS_TYPE:
Configure minimal firewall rules in the second VPC for basic diagnostics and SSH connections:
gcloud compute firewall-rules create $PROJECT_ID-vpc-02-allow-icmp --project=$PROJECT_ID --network=$PROJECT_ID-vpc-02 --description=Allows\ ICMP\ connections\ from\ any\ source\ to\ any\ instance\ on\ the\ network. --direction=INGRESS --priority=65534 --source-ranges=0.0.0.0/0 --action=ALLOW --rules=icmp
gcloud compute firewall-rules create $PROJECT_ID-vpc-02-allow-ssh --project=$PROJECT_ID --network=$PROJECT_ID-vpc-02 --description=Allows\ TCP\ connections\ from\ any\ source\ to\ any\ instance\ on\ the\ network\ using\ port\ 22. --direction=INGRESS --priority=65534 --source-ranges=0.0.0.0/0 --action=ALLOW --rules=tcp:22
Expected console output:
student@cloudshell:~ gcloud compute firewall-rules create $PROJECT_ID-vpc-02-allow-icmp --project=$PROJECT_ID --network=$PROJECT_ID-vpc-02 --description=Allows\ ICMP\ connections\ from\ any\ source\ to\ any\ instance\ on\ the\ network. --direction=INGRESS --priority=65534 --source-ranges=0.0.0.0/0 --action=ALLOW --rules=icmp gcloud compute firewall-rules create $PROJECT_ID-vpc-02-allow-ssh --project=$PROJECT_ID --network=$PROJECT_ID-vpc-02 --description=Allows\ TCP\ connections\ from\ any\ source\ to\ any\ instance\ on\ the\ network\ using\ port\ 22. --direction=INGRESS --priority=65534 --source-ranges=0.0.0.0/0 --action=ALLOW --rules=tcp:22 Creating firewall...working..Created [https://www.googleapis.com/compute/v1/projects/test-project-402417/global/firewalls/test-project-402417-vpc-02-allow-icmp]. Creating firewall...done. NAME: test-project-402417-vpc-02-allow-icmp NETWORK: test-project-402417-vpc-02 DIRECTION: INGRESS PRIORITY: 65534 ALLOW: icmp DENY: DISABLED: False Creating firewall...working..Created [https://www.googleapis.com/compute/v1/projects/test-project-402417/global/firewalls/test-project-402417-vpc-02-allow-ssh]. Creating firewall...done. NAME: test-project-402417-vpc-02-allow-ssh NETWORK: test-project-402417-vpc-02 DIRECTION: INGRESS PRIORITY: 65534 ALLOW: tcp:22 DENY: DISABLED: False
Configure firewall rules in the default VPC and in the second VPC to allow connections between the internal subnets on the database ports:
ALLOYDB_NET_IP_RANGES=$(gcloud compute networks subnets list --network=default --filter="region:us-central1" --format="value(ipCidrRange)"),$(gcloud compute addresses list --filter="name:psa-range AND network:default" --format="value(address_range())")
gcloud compute firewall-rules create default-allow-postgres --project=$PROJECT_ID --network=default --description=Allows\ Postgres\ connections\ from\ local\ networks\ to\ postgres\ instance\ on\ the\ network\ using\ port\ 5432. --direction=INGRESS --priority=65534 --source-ranges=10.110.0.0/24 --action=ALLOW --rules=tcp:5432
gcloud compute firewall-rules create $PROJECT_ID-vpc-02-allow-oracle --project=$PROJECT_ID --network=$PROJECT_ID-vpc-02 --description=Allows\ Oracle\ connections\ from\ local\ networks\ to\ Oracle\ instance\ on\ the\ network\ using\ port\ 1521. --direction=INGRESS --priority=65534 --source-ranges=$ALLOYDB_NET_IP_RANGES --action=ALLOW --rules=tcp:1521
Expected console output:
student@cloudshell:~ ALLOYDB_NET_IP_RANGES=$(gcloud compute networks subnets list --network=default --filter="region:us-central1" --format="value(ipCidrRange)"),$(gcloud compute addresses list --filter="name:psa-range AND network:default" --format="value(address_range())") gcloud compute firewall-rules create default-allow-postgres --project=$PROJECT_ID --network=default --description=Allows\ Postgres\ connections\ from\ local\ networks\ to\ postgres\ instance\ on\ the\ network\ using\ port\ 5432. --direction=INGRESS --priority=65534 --source-ranges=10.110.0.0/24 --action=ALLOW --rules=tcp:5432 gcloud compute firewall-rules create $PROJECT_ID-vpc-02-allow-oracle --project=$PROJECT_ID --network=$PROJECT_ID-vpc-02 --description=Allows\ Oracle\ connections\ from\ local\ networks\ to\ Oracle\ instance\ on\ the\ network\ using\ port\ 1521. --direction=INGRESS --priority=65534 --source-ranges=$ALLOYDB_NET_IP_RANGES --action=ALLOW --rules=tcp:1521 Creating firewall...working..Created [https://www.googleapis.com/compute/v1/projects/test-project-402417/global/firewalls/default-allow-postgres]. Creating firewall...done. NAME: default-allow-postgres NETWORK: default DIRECTION: INGRESS PRIORITY: 65534 ALLOW: tcp:5432 DENY: DISABLED: False Creating firewall...working..Created [https://www.googleapis.com/compute/v1/projects/test-project-402417/global/firewalls/test-project-402417-vpc-02-allow-oracle]. Creating firewall...done. NAME: test-project-402417-vpc-02-allow-oracle NETWORK: test-project-402417-vpc-02 DIRECTION: INGRESS PRIORITY: 65534 ALLOW: tcp:1521 DENY: DISABLED: False
Deploy a GCE VM in the second VPC
Deploy a GCE VM in the second VPC's subnet for the sample Oracle environment. This compute instance will be used as our Oracle test environment: the Oracle XE database binaries will be installed on it, and the instance will act both as a server and as a client for Oracle-specific tasks.
Using the same terminal or Cloud Shell, run the command to create the VM:
SERVER_NAME=ora-xe-01
PROJECT_ID=$(gcloud config get-value project)
REGION=us-central1
ZONE=us-central1-a
MACHINE_TYPE=e2-standard-2
SUBNET=$PROJECT_ID-vpc-02-$REGION
DISK_SIZE=50
gcloud compute instances create $SERVER_NAME --project=$PROJECT_ID --zone=$ZONE --machine-type=$MACHINE_TYPE --network-interface=subnet=$SUBNET --create-disk=auto-delete=yes,boot=yes,size=$DISK_SIZE,image=projects/debian-cloud/global/images/$(gcloud compute images list --filter="family=debian-11 AND family!=debian-11-arm64" \
--format="value(name)"),type=pd-ssd
Expected console output:
student@cloudshell:~ SERVER_NAME=ora-xe-01 PROJECT_ID=$(gcloud config get-value project) REGION=us-central1 ZONE=us-central1-a MACHINE_TYPE=e2-standard-2 SUBNET=$PROJECT_ID-vpc-02-$REGION DISK_SIZE=50 gcloud compute instances create $SERVER_NAME --project=$PROJECT_ID --zone=$ZONE --machine-type=$MACHINE_TYPE --network-interface=subnet=$SUBNET --create-disk=auto-delete=yes,boot=yes,size=$DISK_SIZE,image=projects/debian-cloud/global/images/$(gcloud compute images list --filter="family=debian-11 AND family!=debian-11-arm64" \ --format="value(name)"),type=pd-ssd Your active configuration is: [cloudshell-3726] Created [https://www.googleapis.com/compute/v1/projects/test-project-402417/zones/us-central1-a/instances/ora-xe-01]. WARNING: Some requests generated warnings: - Disk size: '50 GB' is larger than image size: '10 GB'. You might need to resize the root repartition manually if the operating system does not support automatic resizing. See https://cloud.google.com/compute/docs/disks/add-persistent-disk#resize_pd for details. NAME: ora-xe-01 ZONE: us-central1-a MACHINE_TYPE: e2-standard-2 PREEMPTIBLE: INTERNAL_IP: 10.110.0.2 EXTERNAL_IP: 34.121.117.216 STATUS: RUNNING
Create the Oracle XE database
Oracle Express Edition (XE) will be used for the tests. Use of the Oracle XE container is subject to the Oracle Free Use Terms and Conditions license.
Connect to the created VM using SSH:
gcloud compute ssh $SERVER_NAME --zone=$ZONE
In the SSH session, install the packages required for the Docker-based Oracle XE deployment. You can read about other deployment options in the Oracle documentation.
sudo apt-get update
sudo apt-get -y install ca-certificates curl gnupg lsb-release
sudo mkdir -m 0755 -p /etc/apt/keyrings
curl -fsSL https://download.docker.com/linux/debian/gpg | sudo gpg --dearmor -o /etc/apt/keyrings/docker.gpg
echo "deb [arch=$(dpkg --print-architecture) signed-by=/etc/apt/keyrings/docker.gpg] https://download.docker.com/linux/debian $(lsb_release -cs) stable" | sudo tee /etc/apt/sources.list.d/docker.list > /dev/null
sudo apt-get update
sudo apt-get -y install docker-ce docker-ce-cli containerd.io docker-buildx-plugin docker-compose-plugin
sudo usermod -aG docker $USER
Expected console output (redacted):
student@ora-xe-01:~$ sudo apt-get update sudo apt-get -y install ca-certificates curl gnupg lsb-release sudo mkdir -m 0755 -p /etc/apt/keyrings curl -fsSL https://download.docker.com/linux/debian/gpg | sudo gpg --dearmor -o /etc/apt/keyrings/docker.gpg echo "deb [arch=$(dpkg --print-architecture) signed-by=/etc/apt/keyrings/docker.gpg] https://download.docker.com/linux/debian $(lsb_release -cs) stable" | sudo tee /etc/apt/sources.list.d/docker.list > /dev/null sudo apt-get update sudo apt-get -y install docker-ce docker-ce-cli containerd.io docker-buildx-plugin docker-compose-plugin sudo usermod -aG docker $USER Get:1 https://packages.cloud.google.com/apt google-compute-engine-bullseye-stable InRelease [5146 B] Get:2 https://packages.cloud.google.com/apt cloud-sdk-bullseye InRelease [6406 B] ... Setting up git (1:2.30.2-1+deb11u2) ... Processing triggers for man-db (2.9.4-2) ... Processing triggers for libc-bin (2.31-13+deb11u7) ... student@ora-xe-01:~$
Disconnect and reconnect to the VM:
exit
gcloud compute ssh $SERVER_NAME --zone=$ZONE
Expected console output:
student@ora-xe-01:~$ exit logout Connection to 34.132.87.73 closed. student@cloudshell:~ (test-project-002-410214)$ gcloud compute ssh $SERVER_NAME --zone=$ZONE Linux ora-xe-01 5.10.0-26-cloud-amd64 #1 SMP Debian 5.10.197-1 (2023-09-29) x86_64 The programs included with the Debian GNU/Linux system are free software; the exact distribution terms for each program are described in the individual files in /usr/share/doc/*/copyright. Debian GNU/Linux comes with ABSOLUTELY NO WARRANTY, to the extent permitted by applicable law. Last login: Thu Jan 4 14:51:25 2024 from 34.73.112.191 student@ora-xe-01:~$
In the new SSH session to ora-xe-01, execute:
ORACLE_PWD=`openssl rand -hex 12`
echo $ORACLE_PWD
docker run -d --name oracle-xe -p 1521:1521 -e ORACLE_PWD=$ORACLE_PWD container-registry.oracle.com/database/express:21.3.0-xe
Expected console output:
student@ora-xe-01:~$ ORACLE_PWD=`openssl rand -hex 12` echo $ORACLE_PWD docker run -d --name oracle-xe -p 1521:1521 -e ORACLE_PWD=$ORACLE_PWD container-registry.oracle.com/database/express:21.3.0-xe e36e191b6c298ce6e02a614c Unable to find image 'container-registry.oracle.com/database/express:21.3.0-xe' locally 21.3.0-xe: Pulling from database/express 2318ff572021: Pull complete c6250726c822: Pull complete 33ac5ea7f7dd: Pull complete 753e0fae7e64: Pull complete Digest: sha256:dcf137aab02d5644aaf9299aae736e4429f9bfdf860676ff398a1458ab8d23f2 Status: Downloaded newer image for container-registry.oracle.com/database/express:21.3.0-xe 9f74e2d37bf49b01785338495e02d79c55c92e4ddd409eddcf45e754fd9044d9
Record the ORACLE_PWD value; it will be used later to connect to the database as the system and sys users.
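The database inside the container takes a few minutes to initialize. As an optional extra step (not in the original instructions), you can follow the container logs until startup finishes; Oracle's container images typically report in the log when the database is ready:
# Follow the container logs; press Ctrl+C once initialization has completed.
docker logs -f oracle-xe
# Confirm the container is still running.
docker ps --filter name=oracle-xe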
Install the sample schema
Create a sample dataset in the installed database using the standard HR sample schema.
In the same SSH session:
git clone https://github.com/oracle-samples/db-sample-schemas.git
curl -O https://download.oracle.com/otn_software/java/sqldeveloper/sqlcl-latest.zip
sudo apt-get install -y unzip openjdk-17-jdk
unzip sqlcl-latest.zip
echo 'export PATH=$HOME/sqlcl/bin:$PATH' >>.bashrc
exec $SHELL
Expected console output:
student@ora-xe-01:~$ git clone https://github.com/oracle-samples/db-sample-schemas.git curl -O https://download.oracle.com/otn_software/java/sqldeveloper/sqlcl-latest.zip sudo apt-get install -y unzip openjdk-17-jdk unzip sqlcl-latest.zip echo 'export PATH=$HOME/sqlcl/bin:$PATH' >>.bashrc exec $SHELL Cloning into 'db-sample-schemas'... remote: Enumerating objects: 624, done. remote: Counting objects: 100% (104/104), done. ... inflating: sqlcl/lib/sshd-contrib.jar inflating: sqlcl/lib/sshd-putty.jar inflating: sqlcl/lib/dbtools-sqlcl-distribution.jar student@ora-xe-01:~$
In the same SSH session:
ORACLE_PWD=<your noted password from the previous step>
cd db-sample-schemas/human_resources/
sql system/$ORACLE_PWD@localhost/XEPDB1 @hr_install.sql
Enter a password for the HR schema (any strong password will do) and accept the suggested default tablespaces at the prompts.
Expected console output:
student@ora-xe-01:~$ ORACLE_PWD=e36e191b6c298ce6e02a614c student@ora-xe-01:~$ cd db-sample-schemas/human_resources/ sql system/$ORACLE_PWD@localhost/XEPDB1 @hr_install.sql SQLcl: Release 23.3 Production on Wed Jan 03 14:38:14 2024 Copyright (c) 1982, 2024, Oracle. All rights reserved. Last Successful login time: Wed Jan 03 2024 14:38:17 +00:00 Connected to: Oracle Database 21c Express Edition Release 21.0.0.0.0 - Production Version 21.3.0.0.0 Thank you for installing the Oracle Human Resources Sample Schema. This installation script will automatically exit your database session at the end of the installation or if any error is encountered. The entire installation will be logged into the 'hr_install.log' log file. Enter a password for the user HR: ************************ Enter a tablespace for HR [USERS]: Do you want to overwrite the schema, if it already exists? [YES|no]: ****** Creating REGIONS table .... Table REGIONS created. ... Table provided actual ______________ ___________ _________ regions 5 5 countries 25 25 departments 27 27 locations 23 23 employees 107 107 jobs 19 19 job_history 10 10 Thank you! ___________________________________________________________ The installation of the sample schema is now finished. Please check the installation verification output above. You will now be disconnected from the database. Thank you for using Oracle Database! Disconnected from Oracle Database 21c Express Edition Release 21.0.0.0.0 - Production Version 21.3.0.0.0
Create a user for Oracle FDW
For simplicity we are going to create a user named postgres, matching the default PostgreSQL user.
Connect to the Oracle database as the SYS user:
sql sys/$ORACLE_PWD@localhost/XEPDB1 as sysdba
In the SQL session as the SYS user, execute the following (replace POSTGRES_ORA_PWD with your own password for the user):
create user postgres identified by POSTGRES_ORA_PWD;
And grant the required privileges to the user:
grant connect, select any table to postgres;
grant select on v_$SQL_PLAN to postgres;
grant select on v_$SQL to postgres;
Expected console output:
student@ora-xe-01:~/db-sample-schemas/human_resources$ sql sys/$ORACLE_PWD@localhost/XEPDB1 as sysdba SQLcl: Release 23.3 Production on Wed Jan 03 14:42:37 2024 Copyright (c) 1982, 2024, Oracle. All rights reserved. Connected to: Oracle Database 21c Express Edition Release 21.0.0.0.0 - Production Version 21.3.0.0.0 SQL> create user postgres identified by VeryStrongPassword0011##; User POSTGRES created. SQL> grant connect, select any table to postgres; Grant succeeded. SQL> grant select on v_$SQL_PLAN to postgres; Grant succeeded. SQL> grant select on v_$SQL to postgres; Grant succeeded. SQL>
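Optionally, you can verify the new account before moving on; this check is an addition to the original steps. Exit the SYS session (exit), then, from the shell on ora-xe-01, connect as the Oracle user postgres (replace POSTGRES_ORA_PWD with the password you chose) and count rows in one of the HR tables; the result should match the 107 employees reported during the schema installation:
sql postgres/POSTGRES_ORA_PWD@localhost/XEPDB1 <<EOF
select count(*) from hr.employees;
exit
EOF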
7. Create a VPN between the VPCs
Resources deployed in a VPC can reach internal IP addresses only within that specific network and cannot connect to internal resources in another VPC. Google Cloud VPN creates a bridge between the two VPCs and lets the AlloyDB instance connect to our VM running the Oracle database. A VPN connection includes several components, such as Cloud Routers, gateways, VPN tunnels, and BGP sessions. Here we create and configure all the components required for a highly available VPN between the two VPCs. You can read more about Google Cloud VPN in the documentation.
Create routers and gateways
Create Cloud Routers and VPN gateways in both VPCs.
In Cloud Shell or a terminal with the Cloud SDK, execute:
PROJECT_ID=$(gcloud config get-value project)
REGION=us-central1
gcloud compute routers create $PROJECT_ID-vpc-01-router --project=$PROJECT_ID --region=$REGION --network=default --asn=64520 --advertisement-mode=custom --set-advertisement-groups=all_subnets
gcloud compute routers create $PROJECT_ID-vpc-02-router --project=$PROJECT_ID --region=$REGION --network=$PROJECT_ID-vpc-02 --asn=64521 --advertisement-mode=custom --set-advertisement-groups=all_subnets
Expected console output:
student@cloudshell:~ (test-project-402417)$ PROJECT_ID=$(gcloud config get-value project) REGION=us-central1 gcloud compute routers create $PROJECT_ID-vpc-01-router --project=$PROJECT_ID --region=$REGION --network=default --asn=64520 --advertisement-mode=custom --set-advertisement-groups=all_subnets gcloud compute routers create $PROJECT_ID-vpc-02-router --project=$PROJECT_ID --region=$REGION --network=$PROJECT_ID-vpc-02 --asn=64521 --advertisement-mode=custom --set-advertisement-groups=all_subnets Your active configuration is: [cloudshell-18870] Creating router [test-project-402417-vpc-01-router]...done. NAME: test-project-402417-vpc-01-router REGION: us-central1 NETWORK: default Creating router [test-project-402417-vpc-02-router]...done. NAME: test-project-402417-vpc-02-router REGION: us-central1 NETWORK: test-project-402417-vpc-02
Create VPN gateways in both VPCs:
PROJECT_ID=$(gcloud config get-value project)
gcloud compute vpn-gateways create $PROJECT_ID-vpc-01-vpn-gtw --project=$PROJECT_ID --region=$REGION --network=default
gcloud compute vpn-gateways create $PROJECT_ID-vpc-02-vpn-gtw --project=$PROJECT_ID --region=$REGION --network=$PROJECT_ID-vpc-02
Expected console output:
student@cloudshell:~ (test-project-402417)$ gcloud compute vpn-gateways create $PROJECT_ID-vpc-01-vpn-gtw --project=$PROJECT_ID --region=$REGION --network=default gcloud compute vpn-gateways create $PROJECT_ID-vpc-02-vpn-gtw --project=$PROJECT_ID --region=$REGION --network=$PROJECT_ID-vpc-02 Creating VPN Gateway...done. NAME: test-project-402417-vpc-01-vpn-gtw INTERFACE0: 35.242.106.28 INTERFACE1: 35.220.86.122 INTERFACE0_IPV6: INTERFACE1_IPV6: NETWORK: default REGION: us-central1 Creating VPN Gateway...done. NAME: test-project-402417-vpc-02-vpn-gtw INTERFACE0: 35.242.116.197 INTERFACE1: 35.220.68.53 INTERFACE0_IPV6: INTERFACE1_IPV6: NETWORK: test-project-402417-vpc-02 REGION: us-central1 student@cloudshell:~ (test-project-402417)$
Create VPN tunnels
We are going to create a highly available configuration with a pair of VPN tunnels for each VPC.
In Cloud Shell or a terminal with the SDK, execute:
PROJECT_ID=$(gcloud config get-value project)
SHARED_SECRET=`openssl rand -base64 24`
gcloud compute vpn-tunnels create $PROJECT_ID-vpc-01-vpn-tunnel-01 --shared-secret=$SHARED_SECRET --peer-gcp-gateway=$PROJECT_ID-vpc-02-vpn-gtw --ike-version=2 --router=$PROJECT_ID-vpc-01-router --vpn-gateway=$PROJECT_ID-vpc-01-vpn-gtw --project=$PROJECT_ID --region=$REGION --interface=0
gcloud compute vpn-tunnels create $PROJECT_ID-vpc-01-vpn-tunnel-02 --shared-secret=$SHARED_SECRET --peer-gcp-gateway=$PROJECT_ID-vpc-02-vpn-gtw --ike-version=2 --router=$PROJECT_ID-vpc-01-router --vpn-gateway=$PROJECT_ID-vpc-01-vpn-gtw --project=$PROJECT_ID --region=$REGION --interface=1
gcloud compute vpn-tunnels create $PROJECT_ID-vpc-02-vpn-tunnel-01 --shared-secret=$SHARED_SECRET --peer-gcp-gateway=$PROJECT_ID-vpc-01-vpn-gtw --ike-version=2 --router=$PROJECT_ID-vpc-02-router --vpn-gateway=$PROJECT_ID-vpc-02-vpn-gtw --project=$PROJECT_ID --region=$REGION --interface=0
gcloud compute vpn-tunnels create $PROJECT_ID-vpc-02-vpn-tunnel-02 --shared-secret=$SHARED_SECRET --peer-gcp-gateway=$PROJECT_ID-vpc-01-vpn-gtw --ike-version=2 --router=$PROJECT_ID-vpc-02-router --vpn-gateway=$PROJECT_ID-vpc-02-vpn-gtw --project=$PROJECT_ID --region=$REGION --interface=1
Expected console output:
student@cloudshell:~ (test-project-402417)$ SHARED_SECRET=`openssl rand -base64 24` gcloud compute vpn-tunnels create $PROJECT_ID-vpc-01-vpn-tunnel-01 --shared-secret=$SHARED_SECRET --peer-gcp-gateway=$PROJECT_ID-vpc-02-vpn-gtw --ike-version=2 --router=$PROJECT_ID-vpc-01-router --vpn-gateway=$PROJECT_ID-vpc-01-vpn-gtw --project=$PROJECT_ID --region=$REGION --interface=0 gcloud compute vpn-tunnels create $PROJECT_ID-vpc-01-vpn-tunnel-02 --shared-secret=$SHARED_SECRET --peer-gcp-gateway=$PROJECT_ID-vpc-02-vpn-gtw --ike-version=2 --router=$PROJECT_ID-vpc-01-router --vpn-gateway=$PROJECT_ID-vpc-01-vpn-gtw --project=$PROJECT_ID --region=$REGION --interface=1 gcloud compute vpn-tunnels create $PROJECT_ID-vpc-02-vpn-tunnel-01 --shared-secret=$SHARED_SECRET --peer-gcp-gateway=$PROJECT_ID-vpc-01-vpn-gtw --ike-version=2 --router=$PROJECT_ID-vpc-02-router --vpn-gateway=$PROJECT_ID-vpc-02-vpn-gtw --project=$PROJECT_ID --region=$REGION --interface=0 gcloud compute vpn-tunnels create $PROJECT_ID-vpc-02-vpn-tunnel-02 --shared-secret=$SHARED_SECRET --peer-gcp-gateway=$PROJECT_ID-vpc-01-vpn-gtw --ike-version=2 --router=$PROJECT_ID-vpc-02-router --vpn-gateway=$PROJECT_ID-vpc-02-vpn-gtw --project=$PROJECT_ID --region=$REGION --interface=1 Creating VPN tunnel...done. NAME: test-project-402417-vpc-01-vpn-tunnel-01 REGION: us-central1 GATEWAY: test-project-402417-vpc-01-vpn-gtw VPN_INTERFACE: 0 PEER_ADDRESS: 35.242.116.197 Creating VPN tunnel...done. NAME: test-project-402417-vpc-01-vpn-tunnel-02 REGION: us-central1 GATEWAY: test-project-402417-vpc-01-vpn-gtw VPN_INTERFACE: 1 PEER_ADDRESS: 35.220.68.53 Creating VPN tunnel...done. NAME: test-project-402417-vpc-02-vpn-tunnel-01 REGION: us-central1 GATEWAY: test-project-402417-vpc-02-vpn-gtw VPN_INTERFACE: 0 PEER_ADDRESS: 35.242.106.28 Creating VPN tunnel...done. NAME: test-project-402417-vpc-02-vpn-tunnel-02 REGION: us-central1 GATEWAY: test-project-402417-vpc-02-vpn-gtw VPN_INTERFACE: 1 PEER_ADDRESS: 35.220.86.122
Create BGP sessions
Border Gateway Protocol (BGP) sessions provide dynamic routing by exchanging information about the available routes between the VPCs.
Create the first pair of BGP peers in the first (default) network. The IP addresses of these peers will be used for the BGP configuration created in the second VPC.
In Cloud Shell or a terminal with the SDK, execute:
PROJECT_ID=$(gcloud config get-value project)
gcloud compute routers add-interface $PROJECT_ID-vpc-01-router --interface-name=$PROJECT_ID-vpc-01-router-bgp-if-0 --vpn-tunnel=$PROJECT_ID-vpc-01-vpn-tunnel-01 --region=$REGION
gcloud compute routers add-bgp-peer $PROJECT_ID-vpc-01-router --peer-name=$PROJECT_ID-vpc-01-vpn-tunnel-01-bgp --interface=$PROJECT_ID-vpc-01-router-bgp-if-0 --peer-asn=64521 --region=$REGION
gcloud compute routers add-interface $PROJECT_ID-vpc-01-router --interface-name=$PROJECT_ID-vpc-01-router-bgp-if-1 --vpn-tunnel=$PROJECT_ID-vpc-01-vpn-tunnel-02 --region=$REGION
gcloud compute routers add-bgp-peer $PROJECT_ID-vpc-01-router --peer-name=$PROJECT_ID-vpc-01-vpn-tunnel-02-bgp --interface=$PROJECT_ID-vpc-01-router-bgp-if-1 --peer-asn=64521 --region=$REGION
Expected console output:
student@cloudshell:~ (test-project-402417)$ gcloud compute routers add-interface $PROJECT_ID-vpc-01-router --interface-name=$PROJECT_ID-vpc-01-router-bgp-if-0 --vpn-tunnel=$PROJECT_ID-vpc-01-vpn-tunnel-01 --region=$REGION gcloud compute routers add-bgp-peer $PROJECT_ID-vpc-01-router --peer-name=$PROJECT_ID-vpc-01-vpn-tunnel-01-bgp --interface=$PROJECT_ID-vpc-01-router-bgp-if-0 --peer-asn=64521 --region=$REGION gcloud compute routers add-interface $PROJECT_ID-vpc-01-router --interface-name=$PROJECT_ID-vpc-01-router-bgp-if-1 --vpn-tunnel=$PROJECT_ID-vpc-01-vpn-tunnel-02 --region=$REGION gcloud compute routers add-bgp-peer $PROJECT_ID-vpc-01-router --peer-name=$PROJECT_ID-vpc-01-vpn-tunnel-02-bgp --interface=$PROJECT_ID-vpc-01-router-bgp-if-1 --peer-asn=64521 --region=$REGION Updated [https://www.googleapis.com/compute/v1/projects/test-project-402417/regions/us-central1/routers/test-project-402417-vpc-01-router]. Creating peer [test-project-402417-vpc-01-vpn-tunnel-01-bgp] in router [test-project-402417-vpc-01-router]...done. Updated [https://www.googleapis.com/compute/v1/projects/test-project-402417/regions/us-central1/routers/test-project-402417-vpc-01-router]. Creating peer [test-project-402417-vpc-01-vpn-tunnel-02-bgp] in router [test-project-402417-vpc-01-router]...done. student@cloudshell:~ (test-project-402417)$
Add the BGP interfaces and peers in the second VPC:
REGION=us-central1
PROJECT_ID=$(gcloud config get-value project)
gcloud compute routers add-interface $PROJECT_ID-vpc-02-router --interface-name=$PROJECT_ID-vpc-02-router-bgp-if-0 --vpn-tunnel=$PROJECT_ID-vpc-02-vpn-tunnel-01 --ip-address=$(gcloud compute routers describe $PROJECT_ID-vpc-01-router --format="value(bgpPeers[0].peerIpAddress)" --region=$REGION) --mask-length=30 --region=$REGION
gcloud compute routers add-bgp-peer $PROJECT_ID-vpc-02-router --peer-name=$PROJECT_ID-vpc-02-vpn-tunnel-01-bgp --interface=$PROJECT_ID-vpc-02-router-bgp-if-0 --peer-ip-address=$(gcloud compute routers describe $PROJECT_ID-vpc-01-router --format="value(bgpPeers[0].ipAddress)" --region=$REGION) --peer-asn=64520 --region=$REGION
gcloud compute routers add-interface $PROJECT_ID-vpc-02-router --interface-name=$PROJECT_ID-vpc-02-router-bgp-if-1 --vpn-tunnel=$PROJECT_ID-vpc-02-vpn-tunnel-02 --ip-address=$(gcloud compute routers describe $PROJECT_ID-vpc-01-router --format="value(bgpPeers[1].peerIpAddress)" --region=$REGION) --mask-length=30 --region=$REGION
gcloud compute routers add-bgp-peer $PROJECT_ID-vpc-02-router --peer-name=$PROJECT_ID-vpc-02-vpn-tunnel-02-bgp --interface=$PROJECT_ID-vpc-02-router-bgp-if-1 --peer-ip-address=$(gcloud compute routers describe $PROJECT_ID-vpc-01-router --format="value(bgpPeers[1].ipAddress)" --region=$REGION) --peer-asn=64520 --region=$REGION
Expected console output:
student@cloudshell:~ (test-project-402417)$ REGION=us-central1 PROJECT_ID=$(gcloud config get-value project) gcloud compute routers add-interface $PROJECT_ID-vpc-02-router --interface-name=$PROJECT_ID-vpc-02-router-bgp-if-0 --vpn-tunnel=$PROJECT_ID-vpc-02-vpn-tunnel-01 --ip-address=$(gcloud compute routers describe $PROJECT_ID-vpc-01-router --format="value(bgpPeers[0].peerIpAddress)" --region=$REGION) --mask-length=30 --region=$REGION gcloud compute routers add-bgp-peer $PROJECT_ID-vpc-02-router --peer-name=$PROJECT_ID-vpc-02-vpn-tunnel-01-bgp --interface=$PROJECT_ID-vpc-01-router-bgp-if-0 --peer-ip-address=$(gcloud compute routers describe $PROJECT_ID-vpc-01-router --format="value(bgpPeers[0].ipAddress)" --region=$REGION) --peer-asn=64520 --region=$REGION gcloud compute routers add-interface $PROJECT_ID-vpc-02-router --interface-name=$PROJECT_ID-vpc-02-router-bgp-if-1 --vpn-tunnel=$PROJECT_ID-vpc-02-vpn-tunnel-02 --ip-address=$(gcloud compute routers describe $PROJECT_ID-vpc-01-router --format="value(bgpPeers[1].peerIpAddress)" --region=$REGION) --mask-length=30 --region=$REGION gcloud compute routers add-bgp-peer $PROJECT_ID-vpc-02-router --peer-name=$PROJECT_ID-vpc-02-vpn-tunnel-02-bgp --interface=$PROJECT_ID-vpc-01-router-bgp-if-1 --peer-ip-address=$(gcloud compute routers describe $PROJECT_ID-vpc-01-router --format="value(bgpPeers[1].ipAddress)" --region=$REGION) --peer-asn=64520 --region=$REGION Your active configuration is: [cloudshell-18870] Updated [https://www.googleapis.com/compute/v1/projects/test-project-402417/regions/us-central1/routers/test-project-402417-vpc-02-router]. Creating peer [test-project-402417-vpc-02-vpn-tunnel-01-bgp] in router [test-project-402417-vpc-02-router]...done. Updated [https://www.googleapis.com/compute/v1/projects/test-project-402417/regions/us-central1/routers/test-project-402417-vpc-02-router]. Creating peer [test-project-402417-vpc-02-vpn-tunnel-02-bgp] in router [test-project-402417-vpc-02-router]...done.
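Optionally (this check is not part of the original steps), you can confirm that the BGP sessions come up before adding custom routes; it can take a minute or two for the peers to reach the Established state:
REGION=us-central1
PROJECT_ID=$(gcloud config get-value project)
gcloud compute routers get-status $PROJECT_ID-vpc-01-router --region=$REGION --format="yaml(result.bgpPeerStatus)"
gcloud compute routers get-status $PROJECT_ID-vpc-02-router --region=$REGION --format="yaml(result.bgpPeerStatus)"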
Add the custom routes to BGP
Now we need to advertise the private services access IP range to BGP so that the AlloyDB instances can reach the second VPC.
In Cloud Shell, execute:
PROJECT_ID=$(gcloud config get-value project)
gcloud compute routers update $PROJECT_ID-vpc-01-router --add-advertisement-ranges=$(gcloud compute addresses list --filter="name:psa-range AND network:default" --format="value(address_range())") --region=$REGION
gcloud compute networks peerings update servicenetworking-googleapis-com --network=default --import-custom-routes --export-custom-routes
Expected console output (redacted):
PROJECT_ID=$(gcloud config get-value project) gcloud compute routers update $PROJECT_ID-vpc-01-router --add-advertisement-ranges=$(gcloud compute addresses list --filter="name:psa-range AND network:default" --format="value(address_range())") --region=$REGION gcloud compute networks peerings update servicenetworking-googleapis-com --network=default --import-custom-routes --export-custom-routes ...
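Before configuring the FDW, you can optionally verify network reachability over the VPN. This test is an addition to the original steps and assumes you open (or still have) an SSH session to the instance-1 VM (gcloud compute ssh instance-1 --zone=us-central1-a); it probes the Oracle listener port from the first VPC using bash's built-in /dev/tcp:
ORACLE_SERVER_IP=$(gcloud compute instances list --filter="name=(ora-xe-01)" --format="value(networkInterfaces[0].networkIP)")
timeout 5 bash -c "</dev/tcp/$ORACLE_SERVER_IP/1521" && echo "Oracle listener reachable" || echo "No connection - check the VPN and firewall rules"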
8. Configure Oracle FDW in AlloyDB
Now we can create a test database and configure the Oracle FDW extension. We will use the instance-1 VM created earlier and the password we noted for the AlloyDB cluster.
Create a database
Connect to the instance-1 GCE VM:
ZONE=us-central1-a
gcloud compute ssh instance-1 --zone=$ZONE
In the SSH session, execute:
export PGPASSWORD=<Noted password>
REGION=us-central1
ADBCLUSTER=alloydb-aip-01
INSTANCE_IP=$(gcloud alloydb instances describe $ADBCLUSTER-pr --cluster=$ADBCLUSTER --region=$REGION --format="value(ipAddress)")
psql "host=$INSTANCE_IP user=postgres" -c "CREATE DATABASE quickstart_db"
Expected console output:
student@instance-1:~$ export PGPASSWORD=6dd7fKHnMId8RM97 student@instance-1:~$ REGION=us-central1 ADBCLUSTER=alloydb-aip-01 INSTANCE_IP=$(gcloud alloydb instances describe $ADBCLUSTER-pr --cluster=$ADBCLUSTER --region=$REGION --format="value(ipAddress)") psql "host=$INSTANCE_IP user=postgres" -c "CREATE DATABASE quickstart_db" CREATE DATABASE student@instance-1:~$
Configure the Oracle FDW extension
Enable the Oracle FDW extension in the newly created database.
In the same SSH session, execute:
psql -h $INSTANCE_IP -U postgres -d quickstart_db -c "create extension if not exists oracle_fdw cascade"
Expected console output:
student@instance-1:~$ psql "host=$INSTANCE_IP user=postgres dbname=quickstart_db" -c "create extension if not exists oracle_fdw cascade" CREATE EXTENSION student@instance-1:~$
Configure Oracle FDW
Still in the same SSH session, configure Oracle FDW to work with the HR sample schema in the Oracle database.
Create the FDW server configuration using the internal IP address of our Oracle XE VM in the second VPC.
In the same SSH session to instance-1, execute:
ORACLE_SERVER_IP=$(gcloud compute instances list --filter="name=(ora-xe-01)" --format="value(networkInterfaces[0].networkIP)")
psql -h $INSTANCE_IP -U postgres -d quickstart_db -c "create server ora_xe foreign data wrapper oracle_fdw options (dbserver '$ORACLE_SERVER_IP:1521/xepdb1')"
Expected console output:
student@instance-1:~$ ORACLE_SERVER_IP=$(gcloud compute instances list --filter="name=(ora-xe-01)" --format="value(networkInterfaces[0].networkIP)") psql -h $INSTANCE_IP -U postgres -d quickstart_db -c "create server ora_xe foreign data wrapper oracle_fdw options (dbserver '$ORACLE_SERVER_IP:1521/xepdb1')" CREATE SERVER
Grant usage on the created server to the postgres user:
psql -h $INSTANCE_IP -U postgres -d quickstart_db -c "grant usage on foreign server ora_xe to postgres"
Expected console output:
student@instance-1:~$ psql -h $INSTANCE_IP -U postgres -d quickstart_db -c "grant usage on foreign server ora_xe to postgres" GRANT
Create a mapping between the PostgreSQL user and the Oracle database user, using the password we configured for the user in the Oracle database. Use the password you noted for the Oracle user POSTGRES created earlier (POSTGRES_ORA_PWD):
POSTGRES_ORA_PWD=<your password for the oracle user postgres>
psql -h $INSTANCE_IP -U postgres -d quickstart_db -c "CREATE USER MAPPING FOR postgres SERVER ora_xe OPTIONS ( USER 'postgres', PASSWORD '$POSTGRES_ORA_PWD')"
Expected console output:
student@instance-1:~$ POSTGRES_ORA_PWD=VeryStrongPassword0011## student@instance-1:~$ psql -h $INSTANCE_IP -U postgres -d quickstart_db -c "CREATE USER MAPPING FOR postgres SERVER ora_xe OPTIONS ( USER 'postgres', PASSWORD '$POSTGRES_ORA_PWD')" CREATE USER MAPPING
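Optionally (an extra check, not in the original steps), you can review the foreign server and user mapping you just created using the psql meta-commands for FDW objects:
psql -h $INSTANCE_IP -U postgres -d quickstart_db -c "\des+"
psql -h $INSTANCE_IP -U postgres -d quickstart_db -c "\deu+"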
9. Use Oracle FDW with the HR schema
Now we can use Oracle FDW to work with the data in the Oracle database.
Import the Oracle HR schema definitions
We continue working on our Postgres client VM, using psql to work with AlloyDB.
In the SSH session on the VM, execute:
psql -h $INSTANCE_IP -U postgres -d quickstart_db -c "create schema ora_exe_hr";
psql -h $INSTANCE_IP -U postgres -d quickstart_db -c "import foreign schema \"HR\" from server ora_xe into ora_exe_hr";
Expected output (redacted):
student@instance-1:~$ psql -h $INSTANCE_IP -U postgres -d quickstart_db -c "create schema ora_exe_hr"; psql -h $INSTANCE_IP -U postgres -d quickstart_db -c "import foreign schema \"HR\" from server ora_xe into ora_exe_hr"; CREATE SCHEMA IMPORT FOREIGN SCHEMA student@instance-1:~$
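You can optionally list the foreign tables that the import created (an extra verification step, not part of the original instructions):
psql -h $INSTANCE_IP -U postgres -d quickstart_db -c "\det ora_exe_hr.*"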
Test Oracle FDW
Let's run a sample SQL statement.
Connect to the database. In the SSH session on the VM, execute:
psql -h $INSTANCE_IP -U postgres -d quickstart_db
In the psql session, run a select statement:
select * from ora_exe_hr.countries limit 5;
Expected output:
student@instance-1:~$ psql -h $INSTANCE_IP -U postgres -d quickstart_db psql (13.13 (Debian 13.13-0+deb11u1), server 14.7) WARNING: psql major version 13, server major version 14. Some psql features might not work. SSL connection (protocol: TLSv1.3, cipher: TLS_AES_256_GCM_SHA384, bits: 256, compression: off) Type "help" for help. quickstart_db=> select * from ora_exe_hr.countries limit 5; country_id | country_name | region_id ------------+--------------+----------- AR | Argentina | 20 AU | Australia | 40 BE | Belgium | 10 BR | Brazil | 20 CA | Canada | 20 (5 rows) quickstart_db=>
The query returns rows from the remote Oracle database, which confirms that our configuration works correctly.
You can experiment with the rest of the tables.
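As a further illustration (an extra example, not part of the original steps), joins across the imported foreign tables also work; oracle_fdw fetches the data from the remote HR schema, and performance depends on how much of the query can be pushed down to Oracle:
select d.department_name, count(*) as employees
from ora_exe_hr.employees e
join ora_exe_hr.departments d on d.department_id = e.department_id
group by d.department_name
order by employees desc
limit 5;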
10. Clean up the environment
Now that all tasks are complete, we can clean up the environment by destroying the components to avoid unnecessary charges.
Destroy the AlloyDB instances and cluster when you are done with the lab.
Delete the AlloyDB cluster and all its instances
The cluster is destroyed with the force option, which also deletes all instances belonging to it.
In Cloud Shell, define the project and the environment variables again if you were disconnected and the previous settings were lost:
gcloud config set project <your project id>
export REGION=us-central1
export ADBCLUSTER=alloydb-aip-01
export PROJECT_ID=$(gcloud config get-value project)
Delete the cluster:
gcloud alloydb clusters delete $ADBCLUSTER --region=$REGION --force
Expected console output:
student@cloudshell:~ (test-project-001-402417)$ gcloud alloydb clusters delete $ADBCLUSTER --region=$REGION --force All of the cluster data will be lost when the cluster is deleted. Do you want to continue (Y/n)? Y Operation ID: operation-1697820178429-6082890a0b570-4a72f7e4-4c5df36f Deleting cluster...done.
Delete AlloyDB backups
Delete all AlloyDB backups of the cluster:
for i in $(gcloud alloydb backups list --filter="CLUSTER_NAME: projects/$PROJECT_ID/locations/$REGION/clusters/$ADBCLUSTER" --format="value(name)" --sort-by=~createTime) ; do gcloud alloydb backups delete $(basename $i) --region $REGION --quiet; done
Expected console output:
student@cloudshell:~ (test-project-001-402417)$ for i in $(gcloud alloydb backups list --filter="CLUSTER_NAME: projects/$PROJECT_ID/locations/$REGION/clusters/$ADBCLUSTER" --format="value(name)" --sort-by=~createTime) ; do gcloud alloydb backups delete $(basename $i) --region $REGION --quiet; done Operation ID: operation-1697826266108-60829fb7b5258-7f99dc0b-99f3c35f Deleting backup...done.
Now you can destroy the VMs.
Delete the GCE VM
In Cloud Shell, execute:
export GCEVM=instance-1
export ZONE=us-central1-a
gcloud compute instances delete $GCEVM \
--zone=$ZONE \
--quiet
Expected console output:
student@cloudshell:~ (test-project-001-402417)$ export GCEVM=instance-1 export ZONE=us-central1-a gcloud compute instances delete $GCEVM \ --zone=$ZONE \ --quiet Deleted
Delete the VM with Oracle XE
In Cloud Shell, execute:
SERVER_NAME=ora-xe-01
ZONE=us-central1-a
gcloud compute instances delete $SERVER_NAME \
--zone=$ZONE \
--quiet
Expected console output:
student@cloudshell:~ (test-project-402417)$ SERVER_NAME=ora-xe-01 ZONE=us-central1-a gcloud compute instances delete $SERVER_NAME \ --zone=$ZONE \ --quiet Deleted [https://www.googleapis.com/compute/v1/projects/test-project-402417/zones/us-central1-a/instances/ora-xe-01]. student@cloudshell:~ (test-project-402417)$
Delete the VPN
In Cloud Shell, execute:
REGION=us-central1
PROJECT_ID=$(gcloud config get-value project)
gcloud compute vpn-tunnels delete $PROJECT_ID-vpc-01-vpn-tunnel-01 --region=$REGION --quiet
gcloud compute vpn-tunnels delete $PROJECT_ID-vpc-01-vpn-tunnel-02 --region=$REGION --quiet
gcloud compute vpn-tunnels delete $PROJECT_ID-vpc-02-vpn-tunnel-01 --region=$REGION --quiet
gcloud compute vpn-tunnels delete $PROJECT_ID-vpc-02-vpn-tunnel-02 --region=$REGION --quiet
gcloud compute routers delete $PROJECT_ID-vpc-02-router --region=$REGION --quiet
gcloud compute routers delete $PROJECT_ID-vpc-01-router --region=$REGION --quiet
gcloud compute vpn-gateways delete $PROJECT_ID-vpc-01-vpn-gtw --region=$REGION --quiet
gcloud compute vpn-gateways delete $PROJECT_ID-vpc-02-vpn-gtw --region=$REGION --quiet
Expected console output:
student@cloudshell:~ (test-project-402417)$ REGION=us-central1 PROJECT_ID=$(gcloud config get-value project) gcloud compute vpn-tunnels delete $PROJECT_ID-vpc-01-vpn-tunnel-01 --region=$REGION --quiet gcloud compute vpn-tunnels delete $PROJECT_ID-vpc-01-vpn-tunnel-02 --region=$REGION --quiet gcloud compute vpn-tunnels delete $PROJECT_ID-vpc-02-vpn-tunnel-01 --region=$REGION --quiet gcloud compute vpn-tunnels delete $PROJECT_ID-vpc-02-vpn-tunnel-02 --region=$REGION --quiet gcloud compute routers delete $PROJECT_ID-vpc-02-router --region=$REGION --quiet gcloud compute routers delete $PROJECT_ID-vpc-01-router --region=$REGION --quiet gcloud compute vpn-gateways delete $PROJECT_ID-vpc-01-vpn-gtw --region=$REGION --quiet gcloud compute vpn-gateways delete $PROJECT_ID-vpc-02-vpn-gtw --region=$REGION --quiet Your active configuration is: [cloudshell-18870] Deleting VPN tunnel...done. Deleting VPN tunnel...done. Deleting VPN tunnel...done. Deleting VPN tunnel...done. Deleted [https://www.googleapis.com/compute/v1/projects/test-project-402417/regions/us-central1/routers/test-project-402417-vpc-02-router]. Deleted [https://www.googleapis.com/compute/v1/projects/test-project-402417/regions/us-central1/routers/test-project-402417-vpc-01-router]. Deleting VPN Gateway...done. Deleting VPN Gateway...done. student@cloudshell:~ (test-project-402417)$
Delete the second VPC
In Cloud Shell, execute:
REGION=us-central1
PROJECT_ID=$(gcloud config get-value project)
gcloud compute firewall-rules delete $PROJECT_ID-vpc-02-allow-icmp --quiet
gcloud compute firewall-rules delete $PROJECT_ID-vpc-02-allow-ssh --quiet
gcloud compute firewall-rules delete default-allow-postgres --quiet
gcloud compute firewall-rules delete $PROJECT_ID-vpc-02-allow-oracle --quiet
gcloud compute networks subnets delete $PROJECT_ID-vpc-02-$REGION --region=$REGION --quiet
gcloud compute networks delete $PROJECT_ID-vpc-02 --quiet
Expected console output:
student@cloudshell:~ (test-project-402417)$ gcloud compute firewall-rules delete $PROJECT_ID-vpc-02-allow-icmp --quiet gcloud compute firewall-rules delete $PROJECT_ID-vpc-02-allow-ssh --quiet gcloud compute firewall-rules delete default-allow-postgres --quiet gcloud compute firewall-rules delete $PROJECT_ID-vpc-02-allow-oracle --quiet gcloud compute networks subnets delete $PROJECT_ID-vpc-02-$REGION --region=$REGION --quiet gcloud compute networks delete $PROJECT_ID-vpc-02 --quiet Deleted [https://www.googleapis.com/compute/v1/projects/test-project-402417/global/firewalls/test-project-402417-vpc-02-allow-icmp]. Deleted [https://www.googleapis.com/compute/v1/projects/test-project-402417/global/firewalls/test-project-402417-vpc-02-allow-ssh]. Deleted [https://www.googleapis.com/compute/v1/projects/test-project-402417/global/firewalls/default-allow-postgres]. Deleted [https://www.googleapis.com/compute/v1/projects/test-project-402417/global/firewalls/test-project-402417-vpc-02-allow-oracle]. gcloud compute networks subnets delete $PROJECT_ID-vpc-02-$REGION --region=$REGION --quiet gcloud compute networks delete $PROJECT_ID-vpc-02 --quiet
11. Congratulations
Congratulations on completing the codelab!
What we've covered
- How to deploy an AlloyDB cluster
- How to connect to AlloyDB
- How to configure and deploy a sample Oracle database
- How to set up a VPN between two VPC networks
- How to configure the Oracle FDW extension
12. Survey