Access Google APIs from on-premises hosts using IPv6 addresses

1. Introduction

Private Google Access for on-premises hosts provides a way for on-premises systems to connect to Google APIs and services by routing traffic through a Cloud VPN tunnel or a VLAN attachment for Cloud Interconnect. Private Google Access for on-premises hosts is an alternative to connecting to Google APIs and services over the internet.

Private Google Access for on-premises hosts requires that you direct requests for Google APIs to virtual IP addresses (VIPs). For IPv6, the following IP addresses are used:

  • For private.googleapis.com: 2600:2d00:0002:2000::/64
  • For restricted.googleapis.com: 2600:2d00:0002:1000::/64

The VIP that you choose determines which services you can access. In this codelab, we will use private.googleapis.com. For more information, see Domain options.

This codelab describes how to enable Private Google Access for on-premises hosts that use IPv6 addresses. You will set up a VPC network called on-premises-vpc to represent an on-premises environment. In a real deployment, the on-premises-vpc would not exist; instead, you would use hybrid networking to your on-premises data center or another cloud provider.

What you'll build

In this codelab, you're going to build an end-to-end IPv6 network that demonstrates on-premises access to the Cloud Storage API by using a CNAME that maps *.googleapis.com to the private.googleapis.com IPv6 address range 2600:2d00:0002:2000::/64, as illustrated in Figure 1.

Figure 1


What you'll learn

  • How to create a dual stack VPC network
  • How to create HA VPN with IPv6
  • How to update DNS to access Private Google Access
  • How to establish and validate Private Google Access connectivity

What you'll need

  • Google Cloud Project

2. Before you begin

Update the project to support the codelab

This codelab uses $variables to aid gcloud command configuration in Cloud Shell.

Inside Cloud Shell, perform the following:

gcloud config list project
gcloud config set project [YOUR-PROJECT-NAME]
projectname=YOUR-PROJECT-NAME
echo $projectname

3. Create the transit-vpc


Create the transit VPC network

Inside Cloud Shell, perform the following:

gcloud compute networks create transit-vpc --project=$projectname --subnet-mode=custom --mtu=1460 --enable-ula-internal-ipv6 --bgp-routing-mode=regional
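Optionally, confirm that ULA internal IPv6 was enabled and inspect the allocated range. The `--format` field names below are an assumption based on the network resource's API fields; if they differ in your gcloud version, omit `--format` and inspect the full output.

```shell
# Verify that ULA internal IPv6 is enabled on the transit-vpc and
# inspect the allocated internal IPv6 range (field names assumed).
gcloud compute networks describe transit-vpc \
    --project=$projectname \
    --format="value(enableUlaInternalIpv6,internalIpv6Range)"
```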

4. Create the on-premises network


This VPC network represents an on-premises environment.

Create the on-premises VPC network

Inside Cloud Shell, perform the following:

gcloud compute networks create on-premises-vpc --project=$projectname --subnet-mode=custom --mtu=1460 --enable-ula-internal-ipv6 --bgp-routing-mode=regional

Create the subnet

Inside Cloud Shell, perform the following:

gcloud compute networks subnets create on-premises-subnet1-us-central1 --project=$projectname --range=172.16.10.0/27 --stack-type=IPV4_IPV6 --ipv6-access-type=INTERNAL --network=on-premises-vpc --region=us-central1
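Optionally, verify the subnet's dual-stack configuration. The `--format` field names are assumptions based on the subnet API fields; drop `--format` to see the full resource.

```shell
# Confirm the subnet is dual stack with internal IPv6 access
# and inspect the assigned internal IPv6 prefix (field names assumed).
gcloud compute networks subnets describe on-premises-subnet1-us-central1 \
    --project=$projectname \
    --region=us-central1 \
    --format="value(stackType,ipv6AccessType,internalIpv6Prefix)"
```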

5. Create HA VPN for the transit-vpc and on-premises-vpc

Create the HA VPN GW for the transit-vpc


When each gateway is created, two external IPv4 addresses are automatically allocated, one for each gateway interface. Note down these IP addresses to use later on in the configuration steps.

Inside Cloud Shell, create the HA VPN GW with stack type IPV4_IPV6.

gcloud compute vpn-gateways create transit-vpc-vpngw \
   --network=transit-vpc \
   --region=us-central1 \
   --stack-type=IPV4_IPV6

Create the HA VPN GW for the on-premises-vpc

Inside Cloud Shell, create the HA VPN GW with stack type IPV4_IPV6.

gcloud compute vpn-gateways create on-premises-vpc-vpngw \
   --network=on-premises-vpc \
   --region=us-central1 \
   --stack-type=IPV4_IPV6
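To note down the automatically allocated external IPv4 addresses mentioned earlier, you can describe each gateway from the CLI (the `vpnInterfaces` field name is an assumption based on the VPN gateway API; drop `--format` to see the full resource):

```shell
# Print the two interface IP addresses of each HA VPN gateway
# (field name assumed).
for gw in transit-vpc-vpngw on-premises-vpc-vpngw; do
  echo "$gw:"
  gcloud compute vpn-gateways describe $gw \
      --region=us-central1 \
      --format="flattened(vpnInterfaces[].ipAddress)"
done
```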

Validate HA VPN GW creation

Using the console, navigate to Hybrid Connectivity → VPN → CLOUD VPN GATEWAYS.


Create the Cloud Router for the transit-vpc

Inside Cloud Shell, create the Cloud Router in us-central1:

gcloud compute routers create transit-vpc-cr-us-central1 \
   --region=us-central1 \
   --network=transit-vpc \
   --asn=65001

Create the Cloud Router for the on-premises-vpc

Inside Cloud Shell, create the Cloud Router in us-central1:

gcloud compute routers create on-premises-vpc-cr-us-central1 \
   --region=us-central1 \
   --network=on-premises-vpc \
   --asn=65002

Create the VPN tunnels for transit-vpc

You will create two VPN tunnels on each HA VPN gateway.

Create VPN tunnel0

Inside Cloud Shell, create tunnel0:

gcloud compute vpn-tunnels create transit-vpc-tunnel0 \
    --peer-gcp-gateway on-premises-vpc-vpngw \
    --region us-central1 \
    --ike-version 2 \
    --shared-secret [ZzTLxKL8fmRykwNDfCvEFIjmlYLhMucH] \
    --router transit-vpc-cr-us-central1 \
    --vpn-gateway transit-vpc-vpngw \
    --interface 0

Create VPN tunnel1

Inside Cloud Shell, create tunnel1:

gcloud compute vpn-tunnels create transit-vpc-tunnel1 \
    --peer-gcp-gateway on-premises-vpc-vpngw \
    --region us-central1 \
    --ike-version 2 \
    --shared-secret [bcyPaboPl8fSkXRmvONGJzWTrc6tRqY5] \
    --router transit-vpc-cr-us-central1 \
    --vpn-gateway transit-vpc-vpngw \
    --interface 1

Create the VPN tunnels for on-premises-vpc

You will create two VPN tunnels on each HA VPN gateway.

Create VPN tunnel0

Inside Cloud Shell, create tunnel0:

gcloud compute vpn-tunnels create on-premises-tunnel0 \
    --peer-gcp-gateway transit-vpc-vpngw \
    --region us-central1 \
    --ike-version 2 \
    --shared-secret [ZzTLxKL8fmRykwNDfCvEFIjmlYLhMucH] \
    --router on-premises-vpc-cr-us-central1 \
    --vpn-gateway on-premises-vpc-vpngw \
    --interface 0

Create VPN tunnel1

Inside Cloud Shell, create tunnel1:

gcloud compute vpn-tunnels create on-premises-tunnel1 \
    --peer-gcp-gateway transit-vpc-vpngw \
    --region us-central1 \
    --ike-version 2 \
    --shared-secret [bcyPaboPl8fSkXRmvONGJzWTrc6tRqY5] \
    --router on-premises-vpc-cr-us-central1 \
    --vpn-gateway on-premises-vpc-vpngw \
    --interface 1
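As a CLI alternative to the console check, you can list the tunnels and their status. Once both sides are configured, each tunnel should report an established state (the `detailedStatus` field name is an assumption based on the VPN tunnel API).

```shell
# List all four VPN tunnels in us-central1 with their status.
gcloud compute vpn-tunnels list \
    --filter="region:us-central1" \
    --format="table(name,status,detailedStatus)"
```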

Validate VPN tunnel creation

Using the console, navigate to Hybrid Connectivity → VPN → CLOUD VPN TUNNELS.


Create BGP sessions

In this section, you configure Cloud Router interfaces and BGP peers.

When creating VPN tunnels that allow IPv6 traffic, specify --enable-ipv6 when you run the add-bgp-peer command.

Create a BGP interface and peering for transit-vpc

Inside Cloud Shell, create the BGP interface:

gcloud compute routers add-interface transit-vpc-cr-us-central1 \
    --interface-name if-tunnel1-to-onpremise \
    --ip-address 169.254.1.1 \
    --mask-length 30 \
    --vpn-tunnel transit-vpc-tunnel0 \
    --region us-central1

Inside Cloud Shell, create the BGP peer:

gcloud compute routers add-bgp-peer transit-vpc-cr-us-central1 \
    --peer-name bgp-on-premises-tunnel0 \
    --interface if-tunnel1-to-onpremise \
    --peer-ip-address 169.254.1.2 \
    --peer-asn 65002 \
    --region us-central1 \
    --enable-ipv6 \
    --ipv6-nexthop-address 2600:2d00:0:3:0:0:0:1 \
    --peer-ipv6-nexthop-address 2600:2d00:0:3:0:0:0:2

Inside Cloud Shell, create the BGP interface:

gcloud compute routers add-interface transit-vpc-cr-us-central1 \
    --interface-name if-tunnel2-to-onpremise \
    --ip-address 169.254.2.1 \
    --mask-length 30 \
    --vpn-tunnel transit-vpc-tunnel1 \
    --region us-central1

Inside Cloud Shell, create the BGP peer:

gcloud compute routers add-bgp-peer transit-vpc-cr-us-central1 \
    --peer-name bgp-on-premises-tunnel2 \
    --interface if-tunnel2-to-onpremise \
    --peer-ip-address 169.254.2.2 \
    --peer-asn 65002 \
    --region us-central1 \
    --enable-ipv6 \
    --ipv6-nexthop-address 2600:2d00:0:3:0:0:0:11 \
    --peer-ipv6-nexthop-address 2600:2d00:0:3:0:0:0:12

Create a BGP interface and peering for on-premises-vpc

Inside Cloud Shell, create the BGP interface:

gcloud compute routers add-interface on-premises-vpc-cr-us-central1 \
    --interface-name if-tunnel1-to-hub-vpc \
    --ip-address 169.254.1.2 \
    --mask-length 30 \
    --vpn-tunnel on-premises-tunnel0 \
    --region us-central1

Inside Cloud Shell, create the BGP peer:

gcloud compute routers add-bgp-peer on-premises-vpc-cr-us-central1 \
    --peer-name bgp-transit-vpc-tunnel0 \
    --interface if-tunnel1-to-hub-vpc \
    --peer-ip-address 169.254.1.1 \
    --peer-asn 65001 \
    --region us-central1 \
    --enable-ipv6 \
    --ipv6-nexthop-address 2600:2d00:0:3:0:0:0:2 \
    --peer-ipv6-nexthop-address 2600:2d00:0:3:0:0:0:1

Inside Cloud Shell, create the BGP interface:

gcloud compute routers add-interface on-premises-vpc-cr-us-central1 \
    --interface-name if-tunnel2-to-hub-vpc \
    --ip-address 169.254.2.2 \
    --mask-length 30 \
    --vpn-tunnel on-premises-tunnel1 \
    --region us-central1

Inside Cloud Shell, create the BGP peer:

gcloud compute routers add-bgp-peer on-premises-vpc-cr-us-central1 \
    --peer-name bgp-transit-vpc-tunnel1 \
    --interface if-tunnel2-to-hub-vpc \
    --peer-ip-address 169.254.2.1 \
    --peer-asn 65001 \
    --region us-central1 \
    --enable-ipv6 \
    --ipv6-nexthop-address 2600:2d00:0:3:0:0:0:12 \
    --peer-ipv6-nexthop-address 2600:2d00:0:3:0:0:0:11
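You can also verify the BGP sessions from the CLI with router get-status; each peer should eventually report an Established state. The `result.bgpPeerStatus` field path is an assumption based on the router status API; drop `--format` to see the full status.

```shell
# Check BGP peer name and state on both Cloud Routers.
for cr in transit-vpc-cr-us-central1 on-premises-vpc-cr-us-central1; do
  echo "$cr:"
  gcloud compute routers get-status $cr \
      --region=us-central1 \
      --format="flattened(result.bgpPeerStatus[].name,result.bgpPeerStatus[].state)"
done
```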

Navigate to Hybrid Connectivity → VPN to view the VPN tunnel details.


Validate that transit-vpc is learning IPv4 and IPv6 routes over HA VPN

Now that the HA VPN tunnels and BGP sessions are established, the transit-vpc learns routes from the on-premises-vpc. Using the console, navigate to VPC network → VPC networks → transit-vpc → ROUTES.

Observe the learned IPv4 and IPv6 dynamic routes.

Validate that on-premises-vpc is not learning routes over HA VPN

The transit-vpc does not have a subnet, so the Cloud Router does not advertise any subnets to the on-premises-vpc. Using the console, navigate to VPC network → VPC networks → on-premises-vpc → ROUTES.
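As a CLI alternative, the best routes learned by each Cloud Router can be dumped with get-status (the `result.bestRoutes` field path is an assumption based on the router status API):

```shell
# The transit-vpc router should list the on-premises IPv4 and IPv6
# prefixes; the on-premises router should list none at this point.
for cr in transit-vpc-cr-us-central1 on-premises-vpc-cr-us-central1; do
  echo "$cr:"
  gcloud compute routers get-status $cr \
      --region=us-central1 \
      --format="flattened(result.bestRoutes[].destRange)"
done
```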

6. Advertise the IPv6 private.googleapis.com VIP

To access Google APIs through Private Google Access from on-premises, you need to create a custom route advertisement from the transit-vpc. The IPv6 range 2600:2d00:0002:2000::/64 will be advertised to the on-premises environment and, after local DNS is updated, used by workloads to access Google APIs such as Cloud Storage, BigQuery, and Bigtable.

In this codelab, you'll enable API access to most Google APIs and services regardless of whether they are supported by VPC Service Controls.

From the console navigate to Hybrid Connectivity → Cloud Routers → transit-vpc-cr-us-central1, then select EDIT.


In the Advertised routes section, select the option Create custom routes, add the IP range 2600:2d00:0002:2000::/64 for the private.googleapis.com VIP, select DONE, and then click SAVE.

Validate that the on-premises-vpc is learning IPv6 routes

Now that the IPv6 private.googleapis.com VIP is advertised from the transit-vpc, the on-premises-vpc has learned IPv6 dynamic routes for the VIP. Using the console, navigate to VPC network → VPC networks → on-premises-vpc → ROUTES.

Observe the IPv6 routes advertised from the transit-vpc.

7. Establish communication to Google APIs using Private Google Access

In the following section, you will access and validate connectivity to Cloud Storage by using the IPv6 private.googleapis.com VIP. To do so, perform the following actions in the on-premises-vpc:

  • Create an ingress firewall rule to allow SSH access through Identity-Aware Proxy (IAP).
  • Create a Cloud Router and Cloud NAT to download tcpdump and dnsutils.
  • Create a private Cloud DNS zone for googleapis.com.
  • Create a Cloud Storage bucket.

Create the IAP firewall rule

To allow IAP to connect to your VM instances, create a firewall rule that:

  • Applies to all VM instances that you want to be accessible by using IAP.
  • Allows ingress traffic from the IP range 35.235.240.0/20. This range contains all IP addresses that IAP uses for TCP forwarding.

Inside Cloud Shell, create the IAP firewall rule.

gcloud compute firewall-rules create ssh-iap-on-premises-vpc \
    --network on-premises-vpc \
    --allow tcp:22 \
    --source-ranges=35.235.240.0/20

Cloud Router and NAT configuration

Cloud NAT is used in the codelab for software package installation because the VM instance does not have an external IP address.

Inside Cloud Shell, create the Cloud Router.

gcloud compute routers create on-premises-cr-us-central1-nat --network on-premises-vpc --region us-central1

Inside Cloud Shell, create the NAT gateway.

gcloud compute routers nats create on-premises-nat-us-central1 --router=on-premises-cr-us-central1-nat --auto-allocate-nat-external-ips --nat-all-subnet-ip-ranges --region us-central1
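Optionally, confirm the NAT gateway configuration before moving on:

```shell
# Describe the NAT config attached to the Cloud Router.
gcloud compute routers nats describe on-premises-nat-us-central1 \
    --router=on-premises-cr-us-central1-nat \
    --region=us-central1
```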

Create a test instance, on-premises-testbox

Create a test instance that will be used to test and validate connectivity to the IPv6 private.googleapis.com VIP.

Inside Cloud Shell, create the instance.

gcloud compute instances create on-premises-testbox \
    --project=$projectname \
    --machine-type=e2-micro \
    --stack-type=IPV4_IPV6 \
    --image-family debian-10 \
    --no-address \
    --image-project debian-cloud \
    --zone us-central1-a \
    --subnet=on-premises-subnet1-us-central1 \
    --metadata startup-script="#! /bin/bash
      sudo apt-get update
      sudo apt-get install tcpdump -y
      sudo apt-get install dnsutils -y"
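Optionally, verify that the instance came up dual stack with an internal IPv6 address. The `--format` field paths are assumptions based on the instance API; drop `--format` to inspect the full resource.

```shell
# Print the stack type and internal IPv6 address of the testbox
# (field paths assumed).
gcloud compute instances describe on-premises-testbox \
    --zone=us-central1-a \
    --format="value(networkInterfaces[0].stackType,networkInterfaces[0].ipv6Address)"
```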

Create the Cloud DNS private zone

We will use Cloud DNS to create a private zone and records for *.googleapis.com. The required steps follow.

Inside Cloud Shell, create a private DNS zone v6-googleapis.com.

gcloud dns --project=$projectname managed-zones create v6-googleapis --description="" --dns-name="googleapis.com." --visibility="private" --networks="on-premises-vpc"

Inside Cloud Shell, create the AAAA record for private.googleapis.com. pointing to the IPv6 address 2600:2d00:0002:2000::.

gcloud dns --project=$projectname record-sets create private.googleapis.com. --zone="v6-googleapis" --type="AAAA" --ttl="300" --rrdatas="2600:2d00:0002:2000::"

Inside Cloud Shell, create a CNAME for *.googleapis.com to point to private.googleapis.com.

gcloud dns --project=$projectname record-sets create "*.googleapis.com." --zone="v6-googleapis" --type="CNAME" --ttl="300" --rrdatas="private.googleapis.com."
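You can also validate the zone contents from the CLI; the output should include the AAAA and CNAME records created above:

```shell
# List all record sets in the private zone.
gcloud dns record-sets list --zone="v6-googleapis" --project=$projectname
```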

Validate the Cloud DNS private zone

Navigate to Network services → Cloud DNS → v6-googleapis.


Create the Cloud Storage bucket

Inside Cloud Shell, create a Cloud Storage bucket. Replace bucket_name with a globally unique name of your choice; try another name if it is already in use.

gsutil mb -l us-central1 -b on gs://bucket_name
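Optionally, upload a small test object so the later gsutil listing returns content (the object name test.txt is arbitrary; replace bucket_name as above):

```shell
# Copy a one-line object into the bucket from stdin.
echo "codelab ipv6 test" | gsutil cp - gs://bucket_name/test.txt
```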

8. Access and validate Google APIs using IPv6 addresses

In the following section, you will SSH into the test instance from two Cloud Shell terminals. The first terminal is used to validate the IPv6 lookup by using tcpdump, while the second is used to access the Cloud Storage bucket.

Inside Cloud Shell, SSH to the test instance on-premises-testbox.

gcloud compute ssh --zone "us-central1-a" "on-premises-testbox" --project "$projectname"

Inside Cloud Shell terminal one, start tcpdump and monitor port 53 for DNS traffic.

sudo tcpdump -nn -i ens4 port 53

Example below.

user@on-premises-testbox:~$ sudo tcpdump -nn -i ens4 port 53

Open a new Cloud Shell terminal by selecting the "+". Once the new tab is open, update the project name variable:

gcloud config list project
gcloud config set project [YOUR-PROJECT-NAME]
projectname=YOUR-PROJECT-NAME
echo $projectname

Inside Cloud Shell terminal two, SSH to the test instance on-premises-testbox.

gcloud compute ssh --zone "us-central1-a" "on-premises-testbox" --project "$projectname"

Perform a dig to validate DNS lookup

Inside Cloud Shell terminal two, perform a dig against storage.googleapis.com.

dig AAAA storage.googleapis.com

Inspect the ANSWER SECTION: the private DNS zone resolves storage.googleapis.com as a CNAME to private.googleapis.com, which returns AAAA 2600:2d00:2:2000::, as in the example below:

user@on-premises-testbox:~$ dig AAAA storage.googleapis.com

; <<>> DiG 9.11.5-P4-5.1+deb10u8-Debian <<>> AAAA storage.googleapis.com
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 2782
;; flags: qr rd ra; QUERY: 1, ANSWER: 2, AUTHORITY: 0, ADDITIONAL: 1

;; OPT PSEUDOSECTION:
; EDNS: version: 0, flags:; udp: 512
;; QUESTION SECTION:
;storage.googleapis.com.                IN      AAAA

;; ANSWER SECTION:
storage.googleapis.com. 300     IN      CNAME   private.googleapis.com.
private.googleapis.com. 300     IN      AAAA    2600:2d00:2:2000::

;; Query time: 9 msec
;; SERVER: 169.254.169.254#53(169.254.169.254)
;; WHEN: Mon Feb 20 01:56:33 UTC 2023
;; MSG SIZE  rcvd: 101

Inside Cloud Shell terminal one, inspect the tcpdump that further confirms DNS resolution to AAAA 2600:2d00:2:2000::.

user@on-premises-testbox:~$ sudo tcpdump -nn -i ens4 port 53
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on ens4, link-type EN10MB (Ethernet), capture size 262144 bytes
01:56:33.473208 IP 172.16.10.3.41476 > 169.254.169.254.53: 2782+ [1au] AAAA? storage.googleapis.com. (63)
01:56:33.482580 IP 169.254.169.254.53 > 172.16.10.3.41476: 2782 2/0/1 CNAME private.googleapis.com., AAAA 2600:2d00:2:2000:: (101)

Based on the dig and tcpdump we can conclude that DNS resolution to storage.googleapis.com is achieved through 2600:2d00:2:2000::, the IPv6 address for private.googleapis.com.
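As an additional check, you can force an IPv6 connection to the VIP with curl; receiving any HTTP status confirms IPv6 reachability of storage.googleapis.com through the tunnel (curl is assumed to be available on the Debian image; install it with apt-get if missing).

```shell
# Force IPv6 and print only the HTTP status code returned by
# storage.googleapis.com (an unauthenticated request still proves
# network reachability, whatever status is returned).
curl -6 -s -o /dev/null -w "%{http_code}\n" https://storage.googleapis.com/
```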

Perform a gsutil list to validate access to Cloud Storage

Inside Cloud Shell terminal two, list the contents of the previously created storage bucket using gsutil. Change bucket_name to the bucket that you created earlier.

gsutil -d ls gs://bucket_name

Example using the Cloud Storage bucket codelab-ipv6; inspect the debug output indicating storage.googleapis.com and HTTP/1.1 200 OK.

user@on-premises-testbox:~$ gsutil -d ls gs://codelab-ipv6
***************************** WARNING *****************************
*** You are running gsutil with debug output enabled.
*** Be aware that debug output includes authentication credentials.
*** Make sure to remove the value of the Authorization header for
*** each HTTP request printed to the console prior to posting to
*** a public medium such as a forum post or Stack Overflow.
***************************** WARNING *****************************
gsutil version: 5.19
checksum: 49a18b9e15560adbc187bab09c51b5fd (OK)
boto version: 2.49.0
python version: 3.9.16 (main, Jan 10 2023, 02:29:25) [Clang 12.0.1 ]
OS: Linux 4.19.0-23-cloud-amd64
multiprocessing available: True
using cloud sdk: True
pass cloud sdk credentials to gsutil: True
config path(s): /etc/boto.cfg
gsutil path: /usr/lib/google-cloud-sdk/bin/gsutil
compiled crcmod: True
installed via package manager: False
editable install: False
shim enabled: False
Command being run: /usr/lib/google-cloud-sdk/platform/gsutil/gsutil -o GSUtil:default_project_id=myprojectid -o GoogleCompute:service_account=default -d ls gs://codelab-ipv6
config_file_list: ['/etc/boto.cfg']
config: [('working_dir', '/mnt/pyami'), ('debug', '0'), ('https_validate_certificates', 'true'), ('working_dir', '/mnt/pyami'), ('debug', '0'), ('default_project_id', 'myproject'), ('default_api_version', '2')]
DEBUG 0220 02:01:14.713012 multiprocess_file_storage.py] Read credential file
INFO 0220 02:01:14.714742 base_api.py] Calling method storage.objects.list with StorageObjectsListRequest: <StorageObjectsListRequest
 bucket: 'codelab-ipv6'
 delimiter: '/'
 maxResults: 1000
 projection: ProjectionValueValuesEnum(noAcl, 1)
 versions: False>
INFO 0220 02:01:14.715939 base_api.py] Making http GET to https://storage.googleapis.com/storage/v1/b/codelab-ipv6/o?alt=json&fields=prefixes%2Citems%2Fname%2CnextPageToken&delimiter=%2F&maxResults=1000&projection=noAcl&versions=False
INFO 0220 02:01:14.716369 base_api.py] Headers: {'accept': 'application/json',
 'accept-encoding': 'gzip, deflate',
 'content-length': '0',
 'user-agent': 'apitools Python/3.9.16 gsutil/5.19 (linux) analytics/disabled '
               'interactive/True command/ls google-cloud-sdk/416.0.0'}
INFO 0220 02:01:14.716875 base_api.py] Body: (none)
connect: (storage.googleapis.com, 443)
send: b'GET /storage/v1/b/codelab-ipv6/o?alt=json&fields=prefixes%2Citems%2Fname%2CnextPageToken&delimiter=%2F&maxResults=1000&projection=noAcl&versions=False HTTP/1.1\r\nHost: storage.googleapis.com\r\ncontent-length: 0\r\nuser-agent: apitools Python/3.9.16 gsutil/5.19 (linux) analytics/disabled
<SNIP>
reply: 'HTTP/1.1 200 OK\r\n'
header: X-GUploader-UploadID: ADPycdvunHlbN1WQBxDr_LefzLaH_HY1bBH22X7IxX9sF1G2Yo_7-nhYwjxUf6N7AF9Zg_JDwPxYtuNJiFutfd6qauEfohYPs7mE
header: Content-Type: application/json; charset=UTF-8
header: Date: Mon, 20 Feb 2023 02:01:14 GMT
header: Vary: Origin
header: Vary: X-Origin
header: Cache-Control: private, max-age=0, must-revalidate, no-transform
header: Expires: Mon, 20 Feb 2023 02:01:14 GMT
header: Content-Length: 3
header: Server: UploadServer
INFO 0220 02:01:14.803286 base_api.py] Response of type Objects: <Objects
 items: []
 prefixes: []>
user@on-premises-testbox:~$ 

Inside Cloud Shell terminal one, inspect the tcpdump that further confirms DNS resolution to AAAA 2600:2d00:2:2000::.

user@on-premises-testbox:~$ sudo tcpdump -nn -i ens4 port 53
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on ens4, link-type EN10MB (Ethernet), capture size 262144 bytes
02:01:14.725000 IP 172.16.10.3.48792 > 169.254.169.254.53: 7056+ A? storage.googleapis.com. (40)
02:01:14.725106 IP 172.16.10.3.48792 > 169.254.169.254.53: 50841+ AAAA? storage.googleapis.com. (40)
02:01:14.732516 IP 169.254.169.254.53 > 172.16.10.3.48792: 50841 2/0/0 CNAME private.googleapis.com., AAAA 2600:2d00:2:2000:: (90)

Exit from the on-premises-testbox instance operating system, returning to the Cloud Shell prompt.

9. Cleanup

Inside Cloud Shell perform the following:

gcloud compute vpn-tunnels delete transit-vpc-tunnel0 transit-vpc-tunnel1 --region=us-central1 --quiet

gcloud compute vpn-tunnels delete on-premises-tunnel0 on-premises-tunnel1 --region=us-central1 --quiet

gcloud compute vpn-gateways delete on-premises-vpc-vpngw transit-vpc-vpngw --region=us-central1 --quiet

gcloud compute routers delete transit-vpc-cr-us-central1 on-premises-vpc-cr-us-central1 on-premises-cr-us-central1-nat --region=us-central1 --quiet

gcloud compute instances delete on-premises-testbox --zone=us-central1-a --quiet

gcloud compute networks subnets delete on-premises-subnet1-us-central1 --region=us-central1 --quiet

gcloud compute firewall-rules delete ssh-iap-on-premises-vpc --quiet

gcloud compute networks delete on-premises-vpc --quiet

gcloud compute networks delete transit-vpc --quiet

gsutil rb gs://bucket_name

gcloud dns record-sets delete "*.googleapis.com." \
    --type=CNAME \
    --zone=v6-googleapis

gcloud dns record-sets delete private.googleapis.com. \
    --type=AAAA \
    --zone=v6-googleapis

gcloud dns managed-zones delete v6-googleapis

10. Congratulations

Congratulations, you've successfully configured and validated Private Google Access with IPv6.

You created the transit and on-premises infrastructure, and created a private DNS zone enabling resolution for Google API domains over IPv6. You learned how to test and validate IPv6 access using dig and Cloud Storage.

Cosmopup thinks codelabs are awesome!!


What's next?

Check out some of these codelabs...

Further reading & Videos

Reference docs