About this codelab
1. Overview
Dynamic Port Allocation (DPA) is a new feature in Cloud NAT. With DPA enabled, Cloud NAT dynamically scales up/down port allocations for instances depending on their need. DPA is configured with minimum and maximum port limits so that it never scales down ports below the minimum, or scales up beyond the maximum. This allows some instances behind NAT gateways to dynamically scale up their connection count without having to allocate more ports to all instances behind Cloud NAT.
Without DPA, all instances behind Cloud NAT are allocated the same number of ports regardless of usage, as defined by the minPortsPerVm parameter.
For more information, please review the NAT DPA documentation.
What you'll learn
- How to set up a Cloud NAT gateway in preparation for DPA.
- How to inspect port allocations without DPA.
- How to enable and configure DPA for a NAT gateway.
- How to observe the effects of DPA by running parallel egress connections.
- How to add NAT rules to a NAT Gateway with DPA enabled.
- How to see the behavior of DPA with Rules by running egress connections to multiple destinations.
What you'll need
- Basic knowledge of Google Compute Engine
- Basic networking and TCP/IP knowledge
- Basic Unix/Linux command line knowledge
- It is helpful to have completed a tour of networking in Google Cloud such as the Networking in Google Cloud lab.
- A Google Cloud project with 'Alpha Access' enabled.
- Understanding of Cloud NAT basics.
2. Using Google Cloud Console and Cloud Shell
To interact with GCP, we will use both the Google Cloud Console and Cloud Shell throughout this lab.
Google Cloud Console
The Cloud Console can be reached at https://console.cloud.google.com.
Self-paced environment setup
- Sign-in to the Google Cloud Console and create a new project or reuse an existing one. If you don't already have a Gmail or Google Workspace account, you must create one.
- The Project name is the display name for this project's participants. It is a character string not used by Google APIs, and you can update it at any time.
- The Project ID must be unique across all Google Cloud projects and is immutable (cannot be changed after it has been set). The Cloud Console auto-generates a unique string; usually you don't care what it is. In most codelabs, you'll need to reference the Project ID (it is typically identified as PROJECT_ID), so if you don't like it, generate another random one, or try your own and see if it's available. Then it's "frozen" after the project is created.
- There is a third value, a Project Number, which some APIs use. Learn more about all three of these values in the documentation.
- Next, you'll need to enable billing in the Cloud Console in order to use Cloud resources/APIs. Running through this codelab shouldn't cost much, if anything at all. To shut down resources so you don't incur billing beyond this tutorial, follow any "clean-up" instructions found at the end of the codelab. New users of Google Cloud are eligible for the $300 USD Free Trial program.
Start Cloud Shell
While Google Cloud can be operated remotely from your laptop, in this codelab you will be using Google Cloud Shell, a command line environment running in the Cloud.
From the GCP Console click the Cloud Shell icon on the top right toolbar:
It should only take a few moments to provision and connect to the environment. When it is finished, you should see something like this:
This virtual machine is loaded with all the development tools you'll need. It offers a persistent 5GB home directory, and runs on Google Cloud, greatly enhancing network performance and authentication. All of your work in this lab can be done with just a browser.
3. Lab Setup
For this lab, you will use a Project, and create two VPCs with a subnet in each. You will reserve external IP addresses and then create and configure a Cloud NAT gateway (with a Cloud Router), two producer instances as well as two consumer instances. After validating the default Cloud NAT behavior, you will enable Dynamic Port Allocation and validate its behavior. Finally, you will also configure NAT rules and observe the interaction between DPA and NAT Rules.
Networking architecture overview:
4. Reserve External IP Addresses
Let's reserve all external IP addresses to be used in this lab. This will help you write all relevant NAT and firewall rules in both the consumer and producer VPCs.
From Cloud Shell:
gcloud compute addresses create nat-address-1 nat-address-2 \
    producer-address-1 producer-address-2 --region us-east4
Output:
Created [https://www.googleapis.com/compute/v1/projects/<Project ID>/regions/us-east4/addresses/nat-address-1].
Created [https://www.googleapis.com/compute/v1/projects/<Project ID>/regions/us-east4/addresses/nat-address-2].
Created [https://www.googleapis.com/compute/v1/projects/<Project ID>/regions/us-east4/addresses/producer-address-1].
Created [https://www.googleapis.com/compute/v1/projects/<Project ID>/regions/us-east4/addresses/producer-address-2].
Populate the IP addresses that were reserved as environment variables.
export natip1=`gcloud compute addresses list --filter name:nat-address-1 --format="get(address)"`
export natip2=`gcloud compute addresses list --filter name:nat-address-2 --format="get(address)"`
export producerip1=`gcloud compute addresses list --filter name:producer-address-1 --format="get(address)"`
export producerip2=`gcloud compute addresses list --filter name:producer-address-2 --format="get(address)"`
No output is expected, but let's confirm that the addresses were populated properly by printing the values of all environment variables.
env | egrep '^(nat|producer)ip[12]'
Output:
producerip1=<Actual Producer IP 1>
producerip2=<Actual Producer IP 2>
natip1=<NAT IP 1>
natip2=<NAT IP 2>
5. Producer VPC and Instances Setup
We will now create the resources for the producer service. The instances running in the producer VPC will offer an internet-facing service using the two public IPs "producer-address-1" and "producer-address-2".
First let's create the VPC. From Cloud Shell:
gcloud compute networks create producer-vpc --subnet-mode custom
Output:
Created [https://www.googleapis.com/compute/v1/projects/<Project-ID>/global/networks/producer-vpc].
NAME          SUBNET_MODE  BGP_ROUTING_MODE  IPV4_RANGE  GATEWAY_IPV4
producer-vpc  CUSTOM       REGIONAL
Instances on this network will not be reachable until firewall rules are created. As an example, you can allow all internal traffic between instances as well as SSH, RDP, and ICMP by running:
$ gcloud compute firewall-rules create <FIREWALL_NAME> --network producer-vpc --allow tcp,udp,icmp --source-ranges <IP_RANGE>
$ gcloud compute firewall-rules create <FIREWALL_NAME> --network producer-vpc --allow tcp:22,tcp:3389,icmp
Next, let's create the subnet in us-east4. From Cloud Shell:
gcloud compute networks subnets create prod-net-e4 \
    --network producer-vpc --range 10.0.0.0/24 --region us-east4
Output:
Created [https://www.googleapis.com/compute/v1/projects/<Project ID>/regions/us-east4/subnetworks/prod-net-e4].
NAME         REGION    NETWORK       RANGE        STACK_TYPE  IPV6_ACCESS_TYPE  IPV6_CIDR_RANGE  EXTERNAL_IPV6_CIDR_RANGE
prod-net-e4  us-east4  producer-vpc  10.0.0.0/24  IPV4_ONLY
Next, let's create a VPC firewall rule to allow the NAT IP addresses to reach the producer instances on port 80.
From Cloud Shell:
gcloud compute firewall-rules create producer-allow-80 \
    --network producer-vpc --allow tcp:80 \
    --source-ranges $natip1,$natip2
Output:
Creating firewall...done.
Created [https://www.googleapis.com/compute/v1/projects/<Project ID>/global/firewalls/producer-allow-80].
NAME               NETWORK       DIRECTION  PRIORITY  ALLOW   DENY  DISABLED
producer-allow-80  producer-vpc  INGRESS    1000      tcp:80        False
The next step is to create the two producer instances.
The producer instances will run a simple nginx proxy deployment.
To quickly provision the instances with all required software, we will create the instances with a start-up script that installs nginx using the Debian APT package manager.
To be able to write NAT rules, we will provision each instance with a different reserved IP address.
Create the first instance. From Cloud Shell:
gcloud compute instances create producer-instance-1 \
    --zone=us-east4-a --machine-type=e2-medium \
    --network-interface=address=producer-address-1,network-tier=PREMIUM,subnet=prod-net-e4 \
    --metadata startup-script="#! /bin/bash
sudo apt update
sudo apt install -y nginx
mkdir /var/www/html/nginx/
cat <<EOF > /var/www/html/nginx/index.html
<html><body><h1>This is producer instance 1</h1>
</body></html>
EOF"
Output:
Created [https://www.googleapis.com/compute/v1/projects/<Project ID>/zones/us-east4-a/instances/producer-instance-1].
NAME                 ZONE        MACHINE_TYPE  PREEMPTIBLE  INTERNAL_IP  EXTERNAL_IP     STATUS
producer-instance-1  us-east4-a  e2-medium                  10.0.0.2     <Producer IP1>  RUNNING
Then create the second instance. From Cloud Shell:
gcloud compute instances create producer-instance-2 \
    --zone=us-east4-a --machine-type=e2-medium \
    --network-interface=address=producer-address-2,network-tier=PREMIUM,subnet=prod-net-e4 \
    --metadata startup-script="#! /bin/bash
sudo apt update
sudo apt install -y nginx
mkdir /var/www/html/nginx/
cat <<EOF > /var/www/html/nginx/index.html
<html><body><h1>This is producer instance 2</h1>
</body></html>
EOF"
Output:
Created [https://www.googleapis.com/compute/v1/projects/<Project ID>/zones/us-east4-a/instances/producer-instance-2].
NAME                 ZONE        MACHINE_TYPE  PREEMPTIBLE  INTERNAL_IP  EXTERNAL_IP     STATUS
producer-instance-2  us-east4-a  e2-medium                  10.0.0.3     <Producer IP2>  RUNNING
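The startup script runs asynchronously after boot, so nginx may take a minute or two to come up. If you want to confirm it finished, one option (a sketch; the exact log lines vary by image version) is to check the instance's serial console output from Cloud Shell:

```shell
# Look for startup-script activity in the boot log. Debian images log
# startup-script progress to the serial console.
gcloud compute instances get-serial-port-output producer-instance-1 \
    --zone=us-east4-a | grep -i startup-script | tail -n 5
```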
6. Setup Consumer VPC, Cloud NAT and Instances
Now that you have created the producer service, it's time to create the consumer VPC and its Cloud NAT gateway.
After creating the VPC and subnet, we will add a simple ingress firewall rule to allow the IAP TCP forwarding source IP range. This will allow us to SSH to the consumer instances directly using gcloud.
We will then create a simple Cloud NAT gateway in manual allocation mode, with the reserved address "nat-address-1" associated with it. In subsequent parts of the codelab, we will update the gateway's configuration to enable Dynamic Port Allocation and, later, add custom rules.
First let's create the VPC. From Cloud Shell:
gcloud compute networks create consumer-vpc --subnet-mode custom
Output:
Created [https://www.googleapis.com/compute/v1/projects/<Project ID>/global/networks/consumer-vpc].
NAME          SUBNET_MODE  BGP_ROUTING_MODE  IPV4_RANGE  GATEWAY_IPV4
consumer-vpc  CUSTOM       REGIONAL
Instances on this network will not be reachable until firewall rules are created. As an example, you can allow all internal traffic between instances as well as SSH, RDP, and ICMP by running:
$ gcloud compute firewall-rules create <FIREWALL_NAME> --network consumer-vpc --allow tcp,udp,icmp --source-ranges <IP_RANGE>
$ gcloud compute firewall-rules create <FIREWALL_NAME> --network consumer-vpc --allow tcp:22,tcp:3389,icmp
Next, let's create a subnet in us-east4. From Cloud Shell:
gcloud compute networks subnets create cons-net-e4 \
    --network consumer-vpc --range 10.0.0.0/24 --region us-east4
Output:
Created [https://www.googleapis.com/compute/v1/projects/<Project ID>/regions/us-east4/subnetworks/cons-net-e4].
NAME         REGION    NETWORK       RANGE        STACK_TYPE  IPV6_ACCESS_TYPE  IPV6_CIDR_RANGE  EXTERNAL_IPV6_CIDR_RANGE
cons-net-e4  us-east4  consumer-vpc  10.0.0.0/24  IPV4_ONLY
Next, let's create a VPC firewall rule to allow the IAP source IP range to reach the consumer instances on port 22.
Run the following from Cloud Shell:
gcloud compute firewall-rules create consumer-allow-iap \
    --network consumer-vpc --allow tcp:22 \
    --source-ranges 35.235.240.0/20
Output:
Creating firewall...done.
Created [https://www.googleapis.com/compute/v1/projects/<Project-ID>/global/firewalls/consumer-allow-iap].
NAME                NETWORK       DIRECTION  PRIORITY  ALLOW   DENY  DISABLED
consumer-allow-iap  consumer-vpc  INGRESS    1000      tcp:22        False
Before creating a NAT gateway, we first need to create a Cloud Router (we use a private ASN, but it is irrelevant for this lab's activities). From Cloud Shell:
gcloud compute routers create consumer-cr \
    --region=us-east4 --network=consumer-vpc \
    --asn=65501
Output:
Creating router [consumer-cr]...done.
NAME         REGION    NETWORK
consumer-cr  us-east4  consumer-vpc
Then create the NAT gateway instance. From Cloud Shell:
gcloud compute routers nats create consumer-nat-gw \
    --router=consumer-cr \
    --router-region=us-east4 \
    --nat-all-subnet-ip-ranges \
    --nat-external-ip-pool=nat-address-1
Output:
Creating NAT [consumer-nat-gw] in router [consumer-cr]...done.
Note that, by default, the Cloud NAT gateway is created with minPortsPerVm set to 64.
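You can confirm the gateway's current settings at any time by describing it (a quick check; the fields shown depend on what has been explicitly configured):

```shell
# Show the NAT gateway configuration, including port allocation settings.
gcloud compute routers nats describe consumer-nat-gw \
    --router=consumer-cr --region=us-east4
```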
Create the consumer test instances. We store the reserved producer IPs as instance metadata so we can refer to them from within the instances later. From Cloud Shell:
gcloud compute instances create consumer-instance-1 --zone=us-east4-a \
    --machine-type=e2-medium --network-interface=subnet=cons-net-e4,no-address \
    --metadata=producer-service-ip1=$producerip1,producer-service-ip2=$producerip2

gcloud compute instances create consumer-instance-2 --zone=us-east4-a \
    --machine-type=e2-medium --network-interface=subnet=cons-net-e4,no-address \
    --metadata=producer-service-ip1=$producerip1,producer-service-ip2=$producerip2
Output:
Created [https://www.googleapis.com/compute/v1/projects/<Project ID>/zones/us-east4-a/instances/consumer-instance-1].
NAME                 ZONE        MACHINE_TYPE  PREEMPTIBLE  INTERNAL_IP  EXTERNAL_IP  STATUS
consumer-instance-1  us-east4-a  e2-medium                  10.0.0.2                  RUNNING
Created [https://www.googleapis.com/compute/v1/projects/<Project ID>/zones/us-east4-a/instances/consumer-instance-2].
NAME                 ZONE        MACHINE_TYPE  PREEMPTIBLE  INTERNAL_IP  EXTERNAL_IP  STATUS
consumer-instance-2  us-east4-a  e2-medium                  10.0.0.3                  RUNNING
7. Verify default Cloud NAT behavior
At this point, the consumer instances use the default Cloud NAT behavior which uses the same reserved IP "nat-address-1" for communicating with all external addresses. Cloud NAT also doesn't have DPA enabled yet.
Let's validate what ports Cloud NAT has allocated to our consumer instances by running the following command:
gcloud compute routers get-nat-mapping-info consumer-cr --region=us-east4
Sample output:
---
instanceName: consumer-instance-1
interfaceNatMappings:
- natIpPortRanges:
  - <NAT Address IP1>:1024-1055
  numTotalDrainNatPorts: 0
  numTotalNatPorts: 32
  sourceAliasIpRange: ''
  sourceVirtualIp: 10.0.0.2
- natIpPortRanges:
  - <NAT Address IP1>:32768-32799
  numTotalDrainNatPorts: 0
  numTotalNatPorts: 32
  sourceAliasIpRange: ''
  sourceVirtualIp: 10.0.0.2
---
instanceName: consumer-instance-2
interfaceNatMappings:
- natIpPortRanges:
  - <NAT Address IP1>:1056-1087
  numTotalDrainNatPorts: 0
  numTotalNatPorts: 32
  sourceAliasIpRange: ''
  sourceVirtualIp: 10.0.0.3
- natIpPortRanges:
  - <NAT Address IP1>:32800-32831
  numTotalDrainNatPorts: 0
  numTotalNatPorts: 32
  sourceAliasIpRange: ''
  sourceVirtualIp: 10.0.0.3
As you can see from the above output, Cloud NAT has allocated 64 ports per instance from the same external IP nat-address-1.
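As a quick sanity check on the mapping above, each of the two ranges spans 32 ports; together they account for the 64-port default (local arithmetic only, no gcloud required):

```shell
# Two ranges per VM by default: 1024-1055 and 32768-32799, each 32 ports wide.
range1=$(( 1055 - 1024 + 1 ))
range2=$(( 32799 - 32768 + 1 ))
echo $(( range1 + range2 ))   # prints 64
```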
Let's validate how many connections we can open in parallel before enabling DPA.
SSH into the first consumer instance. From Cloud Shell:
gcloud compute ssh consumer-instance-1 --zone=us-east4-a
You should now be in the instance shell.
Sample output (truncated for brevity):
External IP address was not found; defaulting to using IAP tunneling.
...
...
<username>@consumer-instance-1:~$
From within the consumer instance, let's first fetch both producer IPs and populate them as environment variables:
export producerip1=`curl -s "http://metadata.google.internal/computeMetadata/v1/instance/attributes/producer-service-ip1" -H "Metadata-Flavor: Google"`
export producerip2=`curl -s "http://metadata.google.internal/computeMetadata/v1/instance/attributes/producer-service-ip2" -H "Metadata-Flavor: Google"`
Then curl both producer instances to make sure we can reach them successfully.
<username>@consumer-instance-1:~$ curl http://$producerip1/nginx/
<html><body><h1>This is producer instance 1</h1>
</body></html>
<username>@consumer-instance-1:~$ curl http://$producerip2/nginx/
<html><body><h1>This is producer instance 2</h1>
</body></html>
Now let's try and create many parallel connections to one of the producer instances by running curl through a loop. Recall that Cloud NAT does not allow re-use of closed sockets for 2 minutes. Hence, as long as we can loop through all connection attempts within 2 minutes, we are able to simulate parallel connections this way.
Run the following command in the instance SSH session
while true; do
  for i in {1..64}; do
    curl -s -o /dev/null --connect-timeout 5 http://$producerip1/nginx/
    if [ $? -ne 0 ]; then
      echo -e "\nConnection # $i failed"
    else
      echo -en "\rConnection # $i successful"
    fi
  done
  echo -e "\nLoop Done, Sleeping for 150s"
  sleep 150
done
You should be able to open 64 parallel connections successfully, and the script should print the following:
Connection # 64 successful
Loop Done, Sleeping for 150s
Connection # 64 successful
Loop Done, Sleeping for 150s
To see that we cannot go beyond 64 parallel connections, first wait 2 minutes to allow all old sockets to clear. Then tweak the same one-liner to the following and re-run it:
while true; do
  for i in {1..70}; do
    curl -s -o /dev/null --connect-timeout 5 http://$producerip1/nginx/
    if [ $? -ne 0 ]; then
      echo -e "\nConnection # $i failed"
    else
      echo -en "\rConnection # $i successful"
    fi
  done
  echo -e "\nLoop Done, Sleeping for 150s"
  sleep 150
done
You should now expect the following output:
Connection # 64 successful
Connection # 65 failed
Connection # 66 failed
Connection # 67 failed
Connection # 68 failed
Connection # 69 failed
Connection # 70 failed
Loop Done, Sleeping for 150s
This indicates that while the first 64 connections succeeded, the remaining 6 connections failed due to unavailability of ports.
Let's do something about that: exit the SSH shell and enable DPA in the following section.
8. Enable DPA and validate its behavior
Run the following gcloud command, which enables DPA, sets the minimum port allocation per VM to 64, and the maximum to 1024:
gcloud alpha compute routers nats update consumer-nat-gw --router=consumer-cr \
    --region=us-east4 --min-ports-per-vm=64 --max-ports-per-vm=1024 \
    --enable-dynamic-port-allocation
Which outputs the following
Updating nat [consumer-nat-gw] in router [consumer-cr]...done.
Now let's re-run get-nat-mapping-info to confirm that both instances still have only 64 ports allocated:
gcloud compute routers get-nat-mapping-info consumer-cr --region=us-east4
Sample output (truncated for brevity)
---
instanceName: consumer-instance-1
...
  - <NAT Address IP1>:1024-1055
  numTotalNatPorts: 32
...
- natIpPortRanges:
  - <NAT Address IP1>:32768-32799
  numTotalNatPorts: 32
...
---
instanceName: consumer-instance-2
...
  - <NAT Address IP1>:1056-1087
  numTotalNatPorts: 32
...
  - <NAT Address IP1>:32800-32831
  numTotalNatPorts: 32
...
Not much has changed in terms of port allocations since the instance is not actively using any ports yet.
Let's SSH back into the instance:
gcloud compute ssh consumer-instance-1 --zone=us-east4-a
Re-export the producer IP environment variables.
export producerip1=`curl -s "http://metadata.google.internal/computeMetadata/v1/instance/attributes/producer-service-ip1" -H "Metadata-Flavor: Google"` export producerip2=`curl -s "http://metadata.google.internal/computeMetadata/v1/instance/attributes/producer-service-ip2" -H "Metadata-Flavor: Google"`
And re-run the earlier loop to simulate parallel connections:
while true; do
  for i in {1..70}; do
    curl -s -o /dev/null --connect-timeout 5 http://$producerip1/nginx/
    if [ $? -ne 0 ]; then
      echo -e "\nConnection # $i failed"
    else
      echo -en "\rConnection # $i successful"
    fi
  done
  echo -e "\nLoop Done, Sleeping for 150s"
  sleep 150
done
We should now see the following output
Connection # 64 successful
Connection # 65 failed
Connection # 66 failed
Connection # 70 successful
Loop Done, Sleeping for 150s
So what happened here? Cloud NAT ramps up port allocation when port usage increases, but that takes some time to be programmed throughout the networking layer. Hence, we see 1-3 connection timeouts before the remaining connection attempts succeed.
We have specified an aggressive timeout for curl (5 seconds) but applications with longer timeouts should be able to complete connections successfully while DPA is increasing port allocations.
This ramp-up behavior can be seen more clearly when we run the loop for 1024 connection attempts, like so:
while true; do
  for i in {1..1024}; do
    curl -s -o /dev/null --connect-timeout 5 http://$producerip1/nginx/
    if [ $? -ne 0 ]; then
      echo -e "\nConnection # $i failed"
    else
      echo -en "\rConnection # $i successful"
    fi
  done
  echo -e "\nLoop Done, Sleeping for 150s"
  sleep 150
done
We now expect to see the following output
Connection # 64 successful
Connection # 65 failed
Connection # 66 failed
Connection # 129 successful
Connection # 130 failed
Connection # 131 failed
Connection # 258 successful
Connection # 259 failed
Connection # 260 failed
Connection # 515 successful
Connection # 516 failed
Connection # 1024 successful
Loop Done, Sleeping for 150s
Because Cloud NAT allocates ports in powers of 2, essentially doubling allocations in each step, we see the connection timeouts highlighted around the powers of 2 between 64 and 1024.
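Based on the doubling behavior described above, the allocation steps between the configured minimum (64) and maximum (1024) can be listed with simple local arithmetic:

```shell
# Expected DPA allocation steps, doubling from the minimum to the maximum.
p=64
while [ "$p" -le 1024 ]; do
  echo "$p"
  p=$(( p * 2 ))
done
# prints 64, 128, 256, 512, 1024 on separate lines
```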
Since we set maxPortsPerVm to 1024, we don't expect to be able to go beyond 1024 connections. We can test that by re-running the curl loop with a count higher than 1024 (after waiting 2 minutes to reset stale ports).
while true; do
  for i in {1..1035}; do
    curl -s -o /dev/null --connect-timeout 5 http://$producerip1/nginx/
    if [ $? -ne 0 ]; then
      echo -e "\nConnection # $i failed"
    else
      echo -en "\rConnection # $i successful"
    fi
  done
  echo -e "\nLoop Done, Sleeping for 150s"
  sleep 150
done
And as expected, the output shows that connections beyond 1024 start to fail:
<truncated output>
...
Connection # 1028 successful
Connection # 1029 failed
Connection # 1030 failed
Connection # 1031 failed
Connection # 1032 failed
Connection # 1033 failed
Connection # 1034 failed
Connection # 1035 failed
...
Loop Done, Sleeping for 150s
By setting maxPortsPerVm to 1024, we have instructed Cloud NAT to never scale port allocations beyond 1024 per VM.
If we exit the SSH session and re-run get-nat-mapping-info quickly enough, we can see the extra ports allocated:
gcloud compute routers get-nat-mapping-info consumer-cr --region=us-east4
And observe the following output
---
instanceName: consumer-instance-1
interfaceNatMappings:
- natIpPortRanges:
  - <NAT Address IP1>:1024-1055
  - <NAT Address IP1>:1088-1119
  - <NAT Address IP1>:1152-1215
  - <NAT Address IP1>:1280-1407
  - <NAT Address IP1>:1536-1791
  numTotalDrainNatPorts: 0
  numTotalNatPorts: 512
  sourceAliasIpRange: ''
  sourceVirtualIp: 10.0.0.2
- natIpPortRanges:
  - <NAT Address IP1>:32768-32799
  - <NAT Address IP1>:32832-32863
  - <NAT Address IP1>:32896-32959
  - <NAT Address IP1>:33024-33151
  - <NAT Address IP1>:33536-33791
  numTotalDrainNatPorts: 0
  numTotalNatPorts: 512
  sourceAliasIpRange: ''
  sourceVirtualIp: 10.0.0.2
---
instanceName: consumer-instance-2
interfaceNatMappings:
- natIpPortRanges:
  - <NAT Address IP1>:1056-1087
  numTotalDrainNatPorts: 0
  numTotalNatPorts: 32
  sourceAliasIpRange: ''
  sourceVirtualIp: 10.0.0.3
- natIpPortRanges:
  - <NAT Address IP1>:32800-32831
  numTotalDrainNatPorts: 0
  numTotalNatPorts: 32
  sourceAliasIpRange: ''
  sourceVirtualIp: 10.0.0.3
Notice how consumer-instance-1 has 1024 ports allocated, but consumer-instance-2 has only 64 ports allocated. This was not easily possible before DPA, and it highlights exactly the power of DPA for Cloud NAT.
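You can check the totals in the mapping above with quick arithmetic: the five ranges in each interfaceNatMapping sum to 512 ports, and the two mappings together reach the 1024-port maximum:

```shell
# Range widths from the mapping: 32, 32, 64, 128 and 256 ports.
per_mapping=$(( 32 + 32 + 64 + 128 + 256 ))
echo "$per_mapping"            # ports per interfaceNatMapping: 512
echo $(( per_mapping * 2 ))    # total across both mappings: 1024
```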
If you wait 2 minutes before re-running the get-nat-mapping-info command, you will notice that consumer-instance-1 is back at its minimum of just 64 allocated ports. This illustrates DPA's ability not only to increase port allocations, but also to release them when not in use, freeing them up for other instances behind the same NAT gateway.
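To watch the deallocation happen, you can poll the per-instance port totals from Cloud Shell (a simple sketch; interrupt it with Ctrl+C when done):

```shell
# Poll the allocated port counts every 30 seconds to observe DPA scaling down.
while true; do
  date
  gcloud compute routers get-nat-mapping-info consumer-cr --region=us-east4 \
      | grep -E 'instanceName|numTotalNatPorts'
  sleep 30
done
```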
9. Test Cloud NAT Rules with DPA
We have also recently released NAT rules functionality for Cloud NAT, allowing customers to write rules that use specific NAT IPs for certain external destinations. For more information, please refer to the NAT Rules documentation page.
In this exercise, we observe the interaction between DPA and NAT Rules. Let's first define a NAT rule to use nat-address-2 when accessing producer-address-2.
Run the following gcloud command to create the NAT rule:
gcloud alpha compute routers nats rules create 100 \
    --match='destination.ip == "'$producerip2'"' \
    --source-nat-active-ips=nat-address-2 --nat=consumer-nat-gw \
    --router=consumer-cr --router-region=us-east4
You should expect the following output
Updating nat [consumer-nat-gw] in router [consumer-cr]...done.
Now let's re-run get-nat-mapping-info to see the effect of the new NAT rule:
gcloud alpha compute routers get-nat-mapping-info consumer-cr --region=us-east4
Which should output the following
---
instanceName: consumer-instance-1
interfaceNatMappings:
- natIpPortRanges:
  - <NAT Address IP1>:1024-1055
  numTotalDrainNatPorts: 0
  numTotalNatPorts: 32
  ruleMappings:
  - natIpPortRanges:
    - <NAT Address IP2>:1024-1055
    numTotalDrainNatPorts: 0
    numTotalNatPorts: 32
    ruleNumber: 100
  sourceAliasIpRange: ''
  sourceVirtualIp: 10.0.0.2
- natIpPortRanges:
  - <NAT Address IP1>:32768-32799
  numTotalDrainNatPorts: 0
  numTotalNatPorts: 32
  ruleMappings:
  - natIpPortRanges:
    - <NAT Address IP2>:32768-32799
    numTotalDrainNatPorts: 0
    numTotalNatPorts: 32
    ruleNumber: 100
  sourceAliasIpRange: ''
  sourceVirtualIp: 10.0.0.2
Notice that we now have extra ports allocated (also 64, the specified minimum) specifically for nat-address-2 under the ruleMappings hierarchy.
So what happens if an instance opens many connections to the destination specified by the NAT rule? Let's find out.
Let's SSH back into the instance:
gcloud compute ssh consumer-instance-1 --zone=us-east4-a
Re-export the producer IP environment variables.
export producerip1=`curl -s "http://metadata.google.internal/computeMetadata/v1/instance/attributes/producer-service-ip1" -H "Metadata-Flavor: Google"`
export producerip2=`curl -s "http://metadata.google.internal/computeMetadata/v1/instance/attributes/producer-service-ip2" -H "Metadata-Flavor: Google"`
And now let's re-run the curl loop, this time against producerip2:
while true; do
  for i in {1..1024}; do
    curl -s -o /dev/null --connect-timeout 5 http://$producerip2/nginx/
    if [ $? -ne 0 ]; then
      echo -e "\nConnection # $i failed"
    else
      echo -en "\rConnection # $i successful"
    fi
  done
  echo -e "\nLoop Done, Sleeping for 150s"
  sleep 150
done
You should expect an output similar to the following
Connection # 64 successful
Connection # 65 failed
Connection # 66 failed
Connection # 129 successful
Connection # 130 failed
Connection # 131 failed
Connection # 258 successful
Connection # 259 failed
Connection # 260 failed
Connection # 515 successful
Connection # 516 failed
Connection # 1024 successful
Loop Done, Sleeping for 150s
This basically mirrors the previous test. Let's exit the instance's SSH session and look at the NAT mappings again.
gcloud alpha compute routers get-nat-mapping-info consumer-cr --region=us-east4
Which should output the following
---
instanceName: consumer-instance-1
interfaceNatMappings:
- natIpPortRanges:
  - <NAT Address IP1>:1024-1055
  numTotalDrainNatPorts: 0
  numTotalNatPorts: 32
  ruleMappings:
  - natIpPortRanges:
    - <NAT Address IP2>:1024-1055
    - <NAT Address IP2>:1088-1119
    - <NAT Address IP2>:1152-1215
    - <NAT Address IP2>:1280-1407
    - <NAT Address IP2>:1536-1791
    numTotalDrainNatPorts: 0
    numTotalNatPorts: 512
    ruleNumber: 100
  sourceAliasIpRange: ''
  sourceVirtualIp: 10.0.0.2
- natIpPortRanges:
  - <NAT Address IP1>:32768-32799
  numTotalDrainNatPorts: 0
  numTotalNatPorts: 32
  ruleMappings:
  - natIpPortRanges:
    - <NAT Address IP2>:32768-32799
    - <NAT Address IP2>:32832-32863
    - <NAT Address IP2>:32896-32959
    - <NAT Address IP2>:33024-33151
    - <NAT Address IP2>:33280-33535
    numTotalDrainNatPorts: 0
    numTotalNatPorts: 512
    ruleNumber: 100
  sourceAliasIpRange: ''
  sourceVirtualIp: 10.0.0.2
---
instanceName: consumer-instance-2
interfaceNatMappings:
- natIpPortRanges:
  - <NAT Address IP1>:1056-1087
  numTotalDrainNatPorts: 0
  numTotalNatPorts: 32
  ruleMappings:
  - natIpPortRanges:
    - <NAT Address IP2>:1056-1087
    numTotalDrainNatPorts: 0
    numTotalNatPorts: 32
    ruleNumber: 100
  sourceAliasIpRange: ''
  sourceVirtualIp: 10.0.0.3
- natIpPortRanges:
  - <NAT Address IP1>:32800-32831
  numTotalDrainNatPorts: 0
  numTotalNatPorts: 32
  ruleMappings:
  - natIpPortRanges:
    - <NAT Address IP2>:32800-32831
    numTotalDrainNatPorts: 0
    numTotalNatPorts: 32
    ruleNumber: 100
  sourceAliasIpRange: ''
  sourceVirtualIp: 10.0.0.3
As you can observe above, consumer-instance-1's default NAT IP (the IP for nat-address-1) still has only 64 ports allocated, but the NAT rule's IP (the IP for nat-address-2) has 1024 ports allocated. Meanwhile, consumer-instance-2 kept its default allocation of 64 ports for all NAT IPs.
As an exercise, you can test the reverse case: let Cloud NAT deallocate all extra ports, then run the curl loop against producerip1 and observe the effects on the output of get-nat-mapping-info.
10. Cleanup Steps
To avoid recurring charges, you should delete all resources associated with this codelab.
First delete all instances.
From Cloud Shell:
gcloud compute instances delete consumer-instance-1 consumer-instance-2 \
    producer-instance-1 producer-instance-2 \
    --zone us-east4-a --quiet
Expected output:
Deleted [https://www.googleapis.com/compute/v1/projects/<Project Id>/zones/us-east4-a/instances/consumer-instance-1].
Deleted [https://www.googleapis.com/compute/v1/projects/<Project Id>/zones/us-east4-a/instances/consumer-instance-2].
Deleted [https://www.googleapis.com/compute/v1/projects/<Project Id>/zones/us-east4-a/instances/producer-instance-1].
Deleted [https://www.googleapis.com/compute/v1/projects/<Project Id>/zones/us-east4-a/instances/producer-instance-2].
Next, delete the Cloud Router. From Cloud Shell:
gcloud compute routers delete consumer-cr \
    --region us-east4 --quiet
You should expect the following output:
Deleted [https://www.googleapis.com/compute/v1/projects/<Project ID>/regions/us-east4/routers/consumer-cr].
Release all external IP addresses. From Cloud Shell:
gcloud compute addresses delete nat-address-1 \
    nat-address-2 producer-address-1 \
    producer-address-2 --region us-east4 --quiet
You should expect the following output:
Deleted [https://www.googleapis.com/compute/v1/projects/<Project ID>/regions/us-east4/addresses/nat-address-1].
Deleted [https://www.googleapis.com/compute/v1/projects/<Project ID>/regions/us-east4/addresses/nat-address-2].
Deleted [https://www.googleapis.com/compute/v1/projects/<Project ID>/regions/us-east4/addresses/producer-address-1].
Deleted [https://www.googleapis.com/compute/v1/projects/<Project ID>/regions/us-east4/addresses/producer-address-2].
Delete VPC firewall rules. From Cloud Shell:
gcloud compute firewall-rules delete consumer-allow-iap \
    producer-allow-80 --quiet
You should expect the following output:
Deleted [https://www.googleapis.com/compute/v1/projects/<Project ID>/global/firewalls/consumer-allow-iap].
Deleted [https://www.googleapis.com/compute/v1/projects/<Project ID>/global/firewalls/producer-allow-80].
Delete subnets. From Cloud Shell:
gcloud compute networks subnets delete cons-net-e4 \
    prod-net-e4 --region=us-east4 --quiet
You should expect the following output:
Deleted [https://www.googleapis.com/compute/v1/projects/<Project ID>/regions/us-east4/subnetworks/cons-net-e4].
Deleted [https://www.googleapis.com/compute/v1/projects/<Project ID>/regions/us-east4/subnetworks/prod-net-e4].
Finally, let's delete the VPCs. From Cloud Shell:
gcloud compute networks delete consumer-vpc \
    producer-vpc --quiet
You should expect the following output:
Deleted [https://www.googleapis.com/compute/v1/projects/<Project ID>/global/networks/consumer-vpc].
Deleted [https://www.googleapis.com/compute/v1/projects/<Project ID>/global/networks/producer-vpc].
11. Congratulations!
You have completed the Cloud NAT DPA Lab!
What you covered
- How to set up a Cloud NAT gateway in preparation for DPA.
- How to inspect port allocations without DPA.
- How to enable and configure DPA for a NAT gateway.
- How to observe the effects of DPA by running parallel egress connections.
- How to add NAT rules to a NAT Gateway with DPA enabled.
- How to see the behavior of DPA with Rules by running egress connections to multiple destinations.
Next Steps
- Browse our Dynamic Port Allocation documentation page
- Experiment with tweaking NAT timeouts, and port allocation values with your application.
- Learn more about Networking on Google Cloud Platform
©Google, Inc. or its affiliates. All rights reserved. Do not distribute.