1. Introduction
Static custom routes influence the default routing behavior in a VPC. IPv6 custom routes now support new next-hop attributes: next-hop-gateway, next-hop-instance, and next-hop-address. This codelab describes how to use IPv6 custom routes with these new next-hop options, using two VPCs connected by a multi-NIC VM instance. You will also demonstrate mixing ULA and GUA addressing, and use the new custom route capability to provide the client VPC reachability over the public internet.
What you'll learn
- How to create an IPv6 custom route with a next-hop-instance next-hop.
- How to create an IPv6 custom route with a next-hop-gateway next-hop.
- How to create an IPv6 custom route with a next-hop-address next-hop.
What you'll need
- Google Cloud Project
2. Before you begin
Update the project to support the codelab
This codelab uses shell $variables to simplify running the gcloud commands in Cloud Shell.
Inside Cloud Shell, perform the following:
gcloud config list project
gcloud config set project [YOUR-PROJECT-NAME]
export projectname=$(gcloud config list --format="value(core.project)")
Overall Lab Architecture
To demonstrate both types of custom route next-hops, you will create 3 VPCs: A client VPC that uses GUA addressing, a server VPC that uses ULA addressing and a second server VPC that uses GUA addressing.
For the client VPC to access the ULA server, you will utilize a custom route using both next-hop-instance and next-hop-address pointing at a multi-NIC gateway instance. To provide access to the GUA server (after deleting the default ::/0 route), you will utilize a custom route with next-hop-gateway pointing at the Default Internet Gateway to provide routing over the internet.
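The three next-hop flavors used in this lab all share the same `gcloud compute routes create` shape. As a preview (a sketch only, with placeholder values; the concrete commands appear in the later sections):

```shell
# Sketch: the three IPv6 custom-route next-hop flavors used later in this lab.
# ROUTE_NAME, DEST_V6_PREFIX, NETWORK, and IPV6_ADDRESS_OF_GATEWAY_NIC are placeholders.

# 1) next-hop-instance: forward to a (multi-NIC) VM by name and zone
gcloud compute routes create ROUTE_NAME \
  --network=NETWORK --destination-range=DEST_V6_PREFIX \
  --next-hop-instance=gateway-instance --next-hop-instance-zone=us-central1-a

# 2) next-hop-address: forward to a VM by one of its IPv6 addresses
gcloud compute routes create ROUTE_NAME \
  --network=NETWORK --destination-range=DEST_V6_PREFIX \
  --next-hop-address=IPV6_ADDRESS_OF_GATEWAY_NIC

# 3) next-hop-gateway: egress via the default internet gateway
gcloud compute routes create ROUTE_NAME \
  --network=NETWORK --destination-range=DEST_V6_PREFIX \
  --next-hop-gateway=default-internet-gateway
```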
3. Client VPC Setup
Create the Client VPC
Inside Cloud Shell, perform the following:
gcloud compute networks create client-vpc \
--project=$projectname \
--subnet-mode=custom \
--mtu=1500 --bgp-routing-mode=regional
Create the Client subnet
Inside Cloud Shell, perform the following:
gcloud compute networks subnets create client-subnet \
--network=client-vpc \
--project=$projectname \
--range=192.168.1.0/24 \
--stack-type=IPV4_IPV6 \
--ipv6-access-type=external \
--region=us-central1
Record the assigned GUA subnet in an environment variable using this command
export client_subnet=$(gcloud compute networks subnets \
describe client-subnet \
--project $projectname \
--format="value(externalIpv6Prefix)" \
--region us-central1)
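Optionally, echo the variable to confirm a prefix was captured (the exact prefix is assigned by Google and will differ in your project):

```shell
# Confirm the subnet's external IPv6 prefix was captured; the value is
# project-specific, e.g. something of the form 2600:1900:xxxx:xxxx::/64
echo $client_subnet
```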
Launch client instance
Inside Cloud Shell, perform the following:
gcloud compute instances create client-instance \
--subnet client-subnet \
--stack-type IPV4_IPV6 \
--zone us-central1-a \
--project=$projectname
Add firewall rule to allow client VPC traffic
Inside Cloud Shell, perform the following:
gcloud compute firewall-rules create allow-gateway-client \
--direction=INGRESS --priority=1000 \
--network=client-vpc --action=ALLOW \
--rules=tcp --source-ranges=$client_subnet \
--project=$projectname
Add firewall rule to allow IAP for the client instance
Inside Cloud Shell, perform the following:
gcloud compute firewall-rules create allow-iap-client \
--direction=INGRESS --priority=1000 \
--network=client-vpc --action=ALLOW \
--rules=tcp:22 --source-ranges=35.235.240.0/20 \
--project=$projectname
Confirm SSH access into the client instance
Inside Cloud Shell, log into the client-instance:
gcloud compute ssh client-instance \
--project=$projectname \
--zone=us-central1-a \
--tunnel-through-iap
If successful, you'll see a terminal window from the client instance. Exit from the SSH session to continue on with the codelab.
4. ULA Server VPC Setup
Create the ULA server VPC
Inside Cloud Shell, perform the following:
gcloud compute networks create server-vpc1 \
--project=$projectname \
--subnet-mode=custom --mtu=1500 \
--bgp-routing-mode=regional \
--enable-ula-internal-ipv6
Create the ULA server subnets
Inside Cloud Shell, perform the following:
gcloud compute networks subnets create server-subnet1 \
--network=server-vpc1 \
--project=$projectname \
--range=192.168.0.0/24 \
--stack-type=IPV4_IPV6 \
--ipv6-access-type=internal \
--region=us-central1
Record the assigned ULA subnet in an environment variable using this command
export server_subnet1=$(gcloud compute networks subnets \
describe server-subnet1 \
--project $projectname \
--format="value(internalIpv6Prefix)" \
--region us-central1)
Launch server VM with a ULA internal IPv6 address
Inside Cloud Shell, perform the following:
gcloud compute instances create server-instance1 \
--subnet server-subnet1 \
--stack-type IPV4_IPV6 \
--zone us-central1-a \
--project=$projectname
Add firewall rule to allow access to the server from client
Inside Cloud Shell, perform the following:
gcloud compute firewall-rules create allow-client-server1 \
--direction=INGRESS --priority=1000 \
--network=server-vpc1 --action=ALLOW \
--rules=tcp --source-ranges=$client_subnet \
--project=$projectname
Add firewall rule to allow IAP
Inside Cloud Shell, perform the following:
gcloud compute firewall-rules create allow-iap-server1 \
--direction=INGRESS --priority=1000 \
--network=server-vpc1 --action=ALLOW \
--rules=tcp:22 \
--source-ranges=35.235.240.0/20 \
--project=$projectname
Install Apache in ULA server instance
Inside Cloud Shell, log into server-instance1:
gcloud compute ssh server-instance1 \
--project=$projectname \
--zone=us-central1-a \
--tunnel-through-iap
Inside the Server VM shell, run the following command
sudo apt update && sudo apt -y install apache2
Verify that Apache is running
sudo systemctl status apache2
Overwrite the default web page
echo '<!doctype html><html><body><h1>Hello World! From Server1!</h1></body></html>' | sudo tee /var/www/html/index.html
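Optionally, while still inside the server VM, you can confirm Apache serves the new page locally before exiting (a quick sanity check, not part of the original steps):

```shell
# Fetch the page over the loopback address; expect the Hello World HTML back
curl -s http://localhost/
```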
Exit from the SSH session to continue on with the codelab.
5. GUA Server VPC Setup
Create the GUA server VPC
Inside Cloud Shell, perform the following:
gcloud compute networks create server-vpc2 \
--project=$projectname \
--subnet-mode=custom --mtu=1500 \
--bgp-routing-mode=regional
Create the GUA server subnets
Inside Cloud Shell, perform the following:
gcloud compute networks subnets create server-subnet2 \
--network=server-vpc2 \
--project=$projectname \
--range=192.168.0.0/24 \
--stack-type=IPV4_IPV6 \
--ipv6-access-type=external \
--region=us-central1
Record the assigned GUA subnet in an environment variable using this command
export server_subnet2=$(gcloud compute networks subnets \
describe server-subnet2 \
--project $projectname \
--format="value(externalIpv6Prefix)" \
--region us-central1)
Launch server VM with a GUA IPv6 address
Inside Cloud Shell, perform the following:
gcloud compute instances create server-instance2 \
--subnet server-subnet2 \
--stack-type IPV4_IPV6 \
--zone us-central1-a \
--project=$projectname
Add firewall rule to allow access within the subnet
Inside Cloud Shell, perform the following:
gcloud compute firewall-rules create allow-client-server2 \
--direction=INGRESS \
--priority=1000 \
--network=server-vpc2 \
--action=ALLOW \
--rules=tcp --source-ranges=$client_subnet \
--project=$projectname
Add firewall rule to allow IAP
Inside Cloud Shell, perform the following:
gcloud compute firewall-rules create allow-iap-server2 \
--direction=INGRESS \
--priority=1000 \
--network=server-vpc2 \
--action=ALLOW \
--rules=tcp:22 \
--source-ranges=35.235.240.0/20 \
--project=$projectname
Confirm SSH access into the GUA server instance and install Apache
Inside Cloud Shell, log into server-instance2:
gcloud compute ssh server-instance2 \
--project=$projectname \
--zone=us-central1-a \
--tunnel-through-iap
Inside the Server VM shell, run the following command
sudo apt update && sudo apt -y install apache2
Verify that Apache is running
sudo systemctl status apache2
Overwrite the default web page
echo '<!doctype html><html><body><h1>Hello World! From Server2!</h1></body></html>' | sudo tee /var/www/html/index.html
Exit from the SSH session to continue on with the codelab.
6. Create Gateway Instance
Delete the Client VPC's default route
In preparation for redirecting IPv6 ULA traffic to the multi-NIC instance, and to disable internet egress routing, delete the default ::/0 route pointing at the default internet gateway.
Inside Cloud Shell, perform the following:
export client_defroutename=$(gcloud compute routes list \
--project $projectname \
--format='value(name)' \
--filter="network:client-vpc AND destRange~'::/0'")
gcloud compute routes delete $client_defroutename \
--project $projectname \
--quiet
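To confirm the deletion, you can list the remaining routes for the client VPC; no ::/0 entry should remain (a verification sketch):

```shell
# List routes in client-vpc; the default IPv6 route should be gone
gcloud compute routes list \
  --project $projectname \
  --filter="network:client-vpc"
```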
Launch gateway multi-NIC VM
Inside Cloud Shell, perform the following:
gcloud compute instances create gateway-instance \
--project=$projectname \
--zone=us-central1-a \
--network-interface=stack-type=IPV4_IPV6,subnet=client-subnet,no-address \
--network-interface=stack-type=IPV4_IPV6,subnet=server-subnet1,no-address \
--can-ip-forward
Configure gateway instance
Inside Cloud Shell, log into the gateway instance (it might take a few minutes to SSH successfully while the instance is booting up):
gcloud compute ssh gateway-instance \
--project=$projectname \
--zone=us-central1-a \
--tunnel-through-iap
Inside the gateway VM shell, run the following commands to enable IPv6 forwarding and to keep accepting Router Advertisements while forwarding is enabled (accept_ra = 2):
sudo sysctl -w net.ipv6.conf.ens4.accept_ra=2
sudo sysctl -w net.ipv6.conf.ens5.accept_ra=2
sudo sysctl -w net.ipv6.conf.ens4.accept_ra_defrtr=1
sudo sysctl -w net.ipv6.conf.all.forwarding=1
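Note that sysctl settings applied this way do not survive a reboot. If you want them to persist (optional for this lab, since the instance is short-lived), one approach is to drop them into a sysctl.d file:

```shell
# Optional: persist the forwarding/RA settings across reboots
sudo tee /etc/sysctl.d/99-ipv6-gateway.conf <<'EOF'
net.ipv6.conf.ens4.accept_ra=2
net.ipv6.conf.ens5.accept_ra=2
net.ipv6.conf.ens4.accept_ra_defrtr=1
net.ipv6.conf.all.forwarding=1
EOF
sudo sysctl --system
```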
Verify the IPv6 routing table on the instance
ip -6 route show
Sample output showing both ULA and GUA subnet routes, with the default route pointing at the GUA interface.
::1 dev lo proto kernel metric 256 pref medium
2600:1900:4000:7a7f:0:1:: dev ens4 proto kernel metric 256 expires 83903sec pref medium
2600:1900:4000:7a7f::/65 via fe80::4001:c0ff:fea8:101 dev ens4 proto ra metric 1024 expires 88sec pref medium
fd20:3df:8d5c::1:0:0 dev ens5 proto kernel metric 256 expires 83904sec pref medium
fd20:3df:8d5c::/64 via fe80::4001:c0ff:fea8:1 dev ens5 proto ra metric 1024 expires 84sec pref medium
fe80::/64 dev ens5 proto kernel metric 256 pref medium
fe80::/64 dev ens4 proto kernel metric 256 pref medium
default via fe80::4001:c0ff:fea8:101 dev ens4 proto ra metric 1024 expires 88sec pref medium
Exit from the SSH session to continue on with the codelab.
7. Create and test routes to gateway instance (using instance's name)
In this section, you will add routes to both the client and server VPCs by using the gateway instance name as the next-hop.
Make note of server addresses
Inside Cloud Shell, perform the following:
gcloud compute instances list \
--project $projectname \
--filter="name~server-instance" \
--format='value[separator=","](name,networkInterfaces[0].ipv6Address,networkInterfaces[0].ipv6AccessConfigs[0].externalIpv6)'
This should output both server instance names and their IPv6 prefixes. Sample output
server-instance1,fd20:3df:8d5c:0:0:0:0:0,
server-instance2,,2600:1900:4000:71fd:0:0:0:0
Make note of both addresses, as you will use them later in curl commands from the client instance. Unfortunately, environment variables cannot easily be used to store these, as they don't transfer over SSH sessions.
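As an alternative to interactive SSH sessions (a convenience sketch, not required by the lab), you can run the curl commands non-interactively from Cloud Shell, which lets you substitute the noted values directly:

```shell
# Example: run curl on the client instance without an interactive session.
# SERVER1_ULA_ADDRESS is a placeholder for the address noted above.
gcloud compute ssh client-instance \
  --project=$projectname --zone=us-central1-a --tunnel-through-iap \
  --command="curl -m 5.0 -g -6 'http://[SERVER1_ULA_ADDRESS]:80/'"
```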
Run curl command from client to ULA server instance
To see the behavior before adding any new routes, run a curl command from the client instance towards server-instance1.
Inside Cloud Shell, log into the client-instance:
gcloud compute ssh client-instance \
--project=$projectname \
--zone=us-central1-a \
--tunnel-through-iap
Inside the client instance, perform a curl using the ULA IPv6 address of server-instance1 (the command sets a short 5-second timeout to avoid curl waiting too long):
curl -m 5.0 -g -6 'http://[ULA-ipv6-address-of-server1]:80/'
This curl command should time out because the client VPC doesn't have a route towards the server VPC yet.
Let's try to fix that! Exit from the SSH session for now.
Add custom route in client VPC
The client VPC is missing a route towards the ULA prefix, so let's add it now.
Inside Cloud Shell, perform the following:
gcloud compute routes create client-to-server1-route \
--project=$projectname \
--destination-range=$server_subnet1 \
--network=client-vpc \
--next-hop-instance=gateway-instance \
--next-hop-instance-zone=us-central1-a
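You can verify the route was created with the expected next-hop (a verification sketch):

```shell
# Inspect the new route; nextHopInstance should reference gateway-instance
gcloud compute routes describe client-to-server1-route \
  --project=$projectname \
  --format="value(destRange,nextHopInstance)"
```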
SSH back to the client instance:
gcloud compute ssh client-instance \
--project=$projectname \
--zone=us-central1-a \
--tunnel-through-iap
Inside the client instance, attempt the curl to the server instance again. (the command sets a short timeout of 5s to avoid curl waiting for too long)
curl -m 5.0 -g -6 'http://[ULA-ipv6-address-of-server1]:80/'
This curl command still times out because the server1 VPC doesn't have a route back towards the client VPC through the gateway instance yet.
Exit from the SSH session to continue on with the codelab.
Add custom route in ULA Server VPC
Inside Cloud Shell, perform the following:
gcloud compute routes create server1-to-client-route \
--project=$projectname \
--destination-range=$client_subnet \
--network=server-vpc1 \
--next-hop-instance=gateway-instance \
--next-hop-instance-zone=us-central1-a
SSH back to the client instance:
gcloud compute ssh client-instance \
--project=$projectname \
--zone=us-central1-a \
--tunnel-through-iap
Inside the client instance, attempt the curl to the server instance one more time.
curl -m 5.0 -g -6 'http://[ULA-ipv6-address-of-server1]:80/'
This curl command now succeeds, showing that you have end-to-end reachability from the client instance to the ULA server instance. This connectivity is only possible through the use of IPv6 custom routes with next-hop-instance next-hops.
Sample Output
<user id>@client-instance:~$ curl -m 5.0 -g -6 'http://[fd20:3df:8d5c:0:0:0:0:0]:80/'
<!doctype html><html><body><h1>Hello World! From Server1!</h1></body></html>
Exit from the SSH session to continue on with the codelab.
8. Create and test routes to gateway instance (using instance's address)
In this section, you will add routes to both the client and server VPCs, using the gateway instance's IPv6 addresses as the next-hop.
Delete previous routes
Let's restore the environment to before adding any custom routes by deleting the custom routes that use the instance name.
Inside Cloud Shell, perform the following:
gcloud compute routes delete client-to-server1-route --quiet --project=$projectname
gcloud compute routes delete server1-to-client-route --quiet --project=$projectname
Run curl command from client to ULA server instance
To confirm that the previous routes have been deleted successfully, run a curl command from the client instance towards the server-instance1.
Inside Cloud Shell, log into the client-instance:
gcloud compute ssh client-instance \
--project=$projectname \
--zone=us-central1-a \
--tunnel-through-iap
Inside the client instance, perform a curl using the ULA IPv6 address of server-instance1 (the command sets a short 5-second timeout to avoid curl waiting too long):
curl -m 5.0 -g -6 'http://[ULA-ipv6-address-of-server1]:80/'
This curl command should time out because the client VPC no longer has a route towards the server VPC.
Get gateway instance IPv6 addresses
We will need to get the gateway instance's IPv6 addresses before we can write routes that use next-hop-address.
Inside Cloud Shell, perform the following:
export gateway_ula_address=$(gcloud compute instances \
describe gateway-instance \
--project $projectname \
--format='value(networkInterfaces[1].ipv6Address)')
export gateway_gua_address=$(gcloud compute instances \
describe gateway-instance \
--project $projectname \
--format='value(networkInterfaces[0].ipv6AccessConfigs[0].externalIpv6)')
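Optionally, echo both variables to confirm the addresses were captured (values are assigned by Google and will differ in your project):

```shell
# The ULA address comes from nic1 (server-subnet1), the GUA from nic0 (client-subnet)
echo "ULA: $gateway_ula_address"
echo "GUA: $gateway_gua_address"
```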
Add custom route in client VPC
We can now re-add the route in the client VPC towards the ULA prefix, but this time using the gateway's GUA address as the next-hop.
Inside Cloud Shell, perform the following:
gcloud compute routes create client-to-server1-route \
--project=$projectname \
--destination-range=$server_subnet1 \
--network=client-vpc \
--next-hop-address=$gateway_gua_address
SSH back to the client instance:
gcloud compute ssh client-instance \
--project=$projectname \
--zone=us-central1-a \
--tunnel-through-iap
Inside the client instance, attempt the curl to the server instance again.
curl -m 5.0 -g -6 'http://[ULA-ipv6-address-of-server1]:80/'
As expected, this curl command still times out because the server1 VPC doesn't have a route back towards the client VPC through the gateway instance yet.
Exit from the SSH session to continue on with the codelab.
Add custom route in ULA Server VPC
Inside Cloud Shell, perform the following:
gcloud compute routes create server1-to-client-route \
--project=$projectname \
--destination-range=$client_subnet \
--network=server-vpc1 \
--next-hop-address=$gateway_ula_address
SSH back to the client instance:
gcloud compute ssh client-instance \
--project=$projectname \
--zone=us-central1-a \
--tunnel-through-iap
Inside the client instance, attempt the curl to the server instance one more time.
curl -m 5.0 -g -6 'http://[ULA-ipv6-address-of-server1]:80/'
This curl command now succeeds, showing that you have end-to-end reachability from the client instance to the ULA server instance. This connectivity is only possible through the use of IPv6 custom routes with next-hop-address next-hops.
Sample Output
<user id>@client-instance:~$ curl -m 5.0 -g -6 'http://[fd20:3df:8d5c:0:0:0:0:0]:80/'
<!doctype html><html><body><h1>Hello World! From Server1!</h1></body></html>
Exit from the SSH session to continue on with the codelab.
9. Create and test route to internet gateway
While you have this lab setup, let's also test the functionality of the new next-hop property: next-hop-gateway.
Run curl command from client to GUA server instance
To see the behavior before adding any new routes, run a curl command from the client instance towards server2's IP address.
Inside Cloud Shell, log into the client instance:
gcloud compute ssh client-instance \
--project=$projectname \
--zone=us-central1-a \
--tunnel-through-iap
Inside the client instance, perform a curl towards the IPv6 endpoint
curl -m 5.0 -g -6 'http://[GUA-ipv6-address-of-server2]:80/'
This curl command should time out because the client VPC only has its own subnet route and a route to server1's VPC. To reach server2 VPC's GUA range, you need a custom route that uses the default internet gateway.
Exit from the SSH session to continue on with the codelab.
Add custom gateway route in client VPC
Inside Cloud Shell, perform the following:
gcloud compute routes create client-to-server2-route \
--project=$projectname \
--destination-range=$server_subnet2 \
--network=client-vpc \
--next-hop-gateway=default-internet-gateway
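As before, you can confirm the route's next-hop (a verification sketch):

```shell
# nextHopGateway should reference default-internet-gateway
gcloud compute routes describe client-to-server2-route \
  --project=$projectname \
  --format="value(destRange,nextHopGateway)"
```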
SSH back to the client instance:
gcloud compute ssh client-instance \
--project=$projectname \
--zone=us-central1-a \
--tunnel-through-iap
Inside the client instance, repeat the same curl
curl -m 5.0 -g -6 'http://[GUA-ipv6-address-of-server2]:80/'
This curl command should now succeed in returning the custom hello message, indicating that you successfully reached the other server's IPv6 address through the default internet gateway.
Sample output:
<user id>@client-instance:~$ curl -m 5.0 -g -6 'http://[2600:1900:4000:71fd:0:0:0:0]:80/'
<!doctype html><html><body><h1>Hello World! From Server2!</h1></body></html>
Exit from the SSH session to go through the clean up section of the lab.
10. Clean up
Clean up instances
Inside Cloud Shell, perform the following:
gcloud compute instances delete client-instance --zone us-central1-a --quiet --project=$projectname
gcloud compute instances delete server-instance1 --zone us-central1-a --quiet --project=$projectname
gcloud compute instances delete server-instance2 --zone us-central1-a --quiet --project=$projectname
gcloud compute instances delete gateway-instance --zone us-central1-a --quiet --project=$projectname
Clean up subnets
Inside Cloud Shell, perform the following:
gcloud compute networks subnets delete client-subnet --region=us-central1 --quiet --project=$projectname
gcloud compute networks subnets delete server-subnet1 --region=us-central1 --quiet --project=$projectname
gcloud compute networks subnets delete server-subnet2 --region=us-central1 --quiet --project=$projectname
Clean up firewall rules
Inside Cloud Shell, perform the following:
gcloud compute firewall-rules delete allow-iap-client --quiet --project=$projectname
gcloud compute firewall-rules delete allow-iap-server1 --quiet --project=$projectname
gcloud compute firewall-rules delete allow-iap-server2 --quiet --project=$projectname
gcloud compute firewall-rules delete allow-gateway-client --quiet --project=$projectname
gcloud compute firewall-rules delete allow-client-server1 --quiet --project=$projectname
gcloud compute firewall-rules delete allow-client-server2 --quiet --project=$projectname
Clean up custom routes
Inside Cloud Shell, perform the following:
gcloud compute routes delete client-to-server1-route --quiet --project=$projectname
gcloud compute routes delete client-to-server2-route --quiet --project=$projectname
gcloud compute routes delete server1-to-client-route --quiet --project=$projectname
Clean up VPCs
Inside Cloud Shell, perform the following:
gcloud compute networks delete client-vpc --quiet --project=$projectname
gcloud compute networks delete server-vpc1 --quiet --project=$projectname
gcloud compute networks delete server-vpc2 --quiet --project=$projectname
11. Congratulations
You have successfully used static custom IPv6 routes with next-hops set to next-hop-gateway, next-hop-instance, and next-hop-address. You also validated end-to-end IPv6 communication using those routes.
What's next?
Check out some of these codelabs...