1. Introduction
Policy-Based Routes
Policy-based routes let you choose a next hop based on more than a packet's destination IP address. You can match traffic by protocol and source IP address as well. Matching traffic is redirected to an Internal Network load balancer. This can help you insert appliances such as firewalls into the path of network traffic.
When you create a policy-based route, you select which resources can have their traffic processed by the route. The route can apply to the following:
- The entire network: all virtual machine (VM) instances, VPN gateways, and Interconnects
- Selected VM instances: using network tags, the route applies only to tagged VMs in the VPC
- Interconnect region: All traffic entering the VPC network by way of VLAN attachments for the region
The next hop of a policy-based route must be a valid Internal Network load balancer that is in the same VPC network as the policy-based route.
Internal Network load balancers use symmetric hashing by default, so traffic can reach the same appliance on the outgoing and return paths without configuring source NAT.
Policy-based routes have a higher priority than other route types except for special return paths.
If two policy-based routes have the same priority, Google Cloud uses a deterministic, internal algorithm to select a single policy-based route, ignoring other routes with the same priority. Policy-based routes do not use longest-prefix matching and only select the highest priority route.
You can create a single rule for one-way traffic or multiple rules to handle bidirectional traffic.
To use policy-based routes with Cloud Interconnect, the route must be applied to all Cloud Interconnect connections in an entire region. Policy-based routes cannot be applied to an individual Cloud Interconnect connection only.
The VM instances that receive traffic from a policy-based route must have IP forwarding enabled.
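In this codelab, the fw instance is created with the --can-ip-forward flag. As a quick check of this setting on any existing instance (shown here for the fw instance and the zone variable used later in this codelab):

# Prints True when IP forwarding is enabled
gcloud compute instances describe fw --zone=$zone \
    --format="value(canIpForward)"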
Considerations with PBR
Special configuration is necessary to use policy-based routes in certain scenarios, for example with GKE, Private Service Connect (PSC), or Private Google Access (PGA) and private services access (PSA).
See the policy-based routes documentation for details on using PBR with GKE and for the general PBR limitations.
What you'll learn
- How to configure policy-based routes
What you'll need
- Knowledge of deploying instances and configuring networking components
- VPC Firewall configuration knowledge
2. Test Environment
This codelab uses a single VPC. It contains two client VMs, clienta and clientb, that send packets to a server VM. Using PBR and filters, we force traffic from clienta through a fourth VM, fw, for firewall enforcement, while clientb's traffic goes directly to the server. The diagram below illustrates the path.
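As a rough text sketch of the two paths (using the IP addresses assigned later in this codelab):

clienta (10.10.10.10) --> fw (10.10.10.75, iptables filter) --> server (10.10.10.200)
clientb (10.10.10.11) ----------------------------------------> server (10.10.10.200)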
Technically, the PBR path also includes an internal Network load balancer (the PBR next hop); it has been omitted above for simplicity.
3. Before you begin
This codelab requires a single project.
From cloudshell:
export project_id=`gcloud config list --format="value(core.project)"`
export region=us-central1
export zone=us-central1-a
export prefix=codelab-pbr
4. Enable APIs
If you haven't already, enable the APIs for the products used in this codelab.
From cloudshell:
gcloud services enable compute.googleapis.com
gcloud services enable networkconnectivity.googleapis.com
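Optionally, confirm both APIs are enabled with a simple grep over the enabled-services list:

gcloud services list --enabled | grep -E "compute|networkconnectivity"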
5. Create VPC network and subnet
VPC Network
Create codelab-pbr-vpc VPC:
From cloudshell:
gcloud compute networks create $prefix-vpc --subnet-mode=custom
Subnet
Create the subnet in the selected region:
From cloudshell:
gcloud compute networks subnets create $prefix-vpc-subnet \
    --range=10.10.10.0/24 --network=$prefix-vpc --region=${region}
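Optionally, verify the subnet range before moving on:

gcloud compute networks subnets describe $prefix-vpc-subnet \
    --region=$region --format="value(ipCidrRange)"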
6. Create Firewall Rules
To allow IAP to connect to your VM instances, create a firewall rule that:
- Applies to all VM instances that you want to be accessible by using IAP.
- Allows ingress traffic from the IP range 35.235.240.0/20. This range contains all IP addresses that IAP uses for TCP forwarding.
From cloudshell:
gcloud compute firewall-rules create $prefix-allow-iap-proxy \
    --direction=INGRESS \
    --priority=1000 \
    --network=$prefix-vpc \
    --action=ALLOW \
    --rules=tcp:22 \
    --source-ranges=35.235.240.0/20
To allow HTTP (TCP 80) and ICMP to the server, create a firewall rule that:
- Applies to resources with network tag "server"
- Allows ingress from all sources
From cloudshell:
gcloud compute firewall-rules create $prefix-allow-http-icmp \
    --direction=INGRESS \
    --priority=1000 \
    --network=$prefix-vpc \
    --action=ALLOW \
    --rules=tcp:80,icmp \
    --source-ranges=0.0.0.0/0 \
    --target-tags=server
To allow the fw instance to receive forwarded packets, create a firewall rule that:
- Applies to resources with network tag "fw"
- Allows ingress from 10.10.10.0/24 sources
From cloudshell:
gcloud compute firewall-rules create $prefix-fw-allow-ingress \
    --direction=INGRESS \
    --priority=1000 \
    --network=$prefix-vpc \
    --action=ALLOW \
    --rules=all \
    --source-ranges=10.10.10.0/24 \
    --target-tags=fw
To allow health check probes, create a firewall rule that:
- Applies to resources with the network tag "fw"
- Allows ingress from health check ranges
From cloudshell:
gcloud compute firewall-rules create $prefix-allow-hc-ingress \ --direction=INGRESS \ --priority=1000 \ --network=$prefix-vpc \ --action=ALLOW \ --rules=tcp:80 \ --source-ranges=130.211.0.0/22,35.191.0.0/16 \ --target-tags=fw
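Optionally, list the rules to confirm all four were created in the codelab VPC:

gcloud compute firewall-rules list --filter="network:$prefix-vpc"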
7. Create Cloud Router & Cloud NAT
Because the VMs in this codelab have no external IP addresses, they need Cloud NAT to download software packages from the internet.
Create Cloud Router
From cloudshell:
gcloud compute routers create ${prefix}-cr \
    --region=${region} \
    --network=${prefix}-vpc
Create Cloud NAT Gateway
From cloudshell:
gcloud compute routers nats create ${prefix}-nat-gw-${region} \
    --router=${prefix}-cr \
    --router-region=${region} \
    --auto-allocate-nat-external-ips \
    --nat-all-subnet-ip-ranges
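Optionally, confirm the NAT gateway configuration:

gcloud compute routers nats describe ${prefix}-nat-gw-${region} \
    --router=${prefix}-cr --router-region=${region}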
8. Create Compute Instances
Create the compute instances clienta, clientb, fw, and server:
From cloudshell:
gcloud compute instances create clienta \
    --subnet=$prefix-vpc-subnet \
    --no-address \
    --private-network-ip=10.10.10.10 \
    --zone $zone \
    --tags client \
    --metadata startup-script='#! /bin/bash
apt-get update'
From cloudshell:
gcloud compute instances create clientb \
    --subnet=$prefix-vpc-subnet \
    --no-address \
    --private-network-ip=10.10.10.11 \
    --zone $zone \
    --tags client \
    --metadata startup-script='#! /bin/bash
apt-get update'
From cloudshell:
gcloud compute instances create server \
    --subnet=$prefix-vpc-subnet \
    --no-address \
    --private-network-ip=10.10.10.200 \
    --zone $zone \
    --tags server \
    --metadata startup-script='#! /bin/bash
sudo su
apt-get update
apt-get -y install tcpdump
apt-get -y install nginx
cat > /var/www/html/index.html << EOF
<html><body><p>Server</p></body></html>
EOF'
From cloudshell:
gcloud compute instances create fw \
    --subnet=$prefix-vpc-subnet \
    --can-ip-forward \
    --no-address \
    --private-network-ip=10.10.10.75 \
    --zone $zone \
    --tags fw \
    --metadata startup-script='#! /bin/bash
apt-get update
sudo apt-get -y install tcpdump
sudo apt-get -y install nginx
sudo sysctl -w net.ipv4.ip_forward=1
sudo iptables -I FORWARD -d 10.10.10.200 -j REJECT'
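Optionally, confirm the four instances and their internal IP addresses:

gcloud compute instances list --zones=$zone \
    --format="table(name,networkInterfaces[0].networkIP,status)"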
9. Test Connectivity without PBR
SSH into the client VMs we just created and verify connectivity from both clients to the server.
From cloudshell1, log in to clienta:
gcloud compute ssh clienta --zone=$zone --tunnel-through-iap
Run the following commands:
ping 10.10.10.200 -c 5
curl 10.10.10.200/index.html
The pings and curl requests should be successful.
Output:
root@clienta:~$ ping 10.10.10.200 -c 5
PING 10.10.10.200 (10.10.10.200) 56(84) bytes of data.
64 bytes from 10.10.10.200: icmp_seq=1 ttl=64 time=1.346 ms
64 bytes from 10.10.10.200: icmp_seq=2 ttl=64 time=0.249 ms
64 bytes from 10.10.10.200: icmp_seq=3 ttl=64 time=0.305 ms
64 bytes from 10.10.10.200: icmp_seq=4 ttl=64 time=0.329 ms
64 bytes from 10.10.10.200: icmp_seq=5 ttl=64 time=0.240 ms
root@clienta:~$ curl 10.10.10.200/index.html
<html><body><p>Server</p></body></html>
Open an additional cloudshell tab by clicking the +.
From cloudshell2 set variables for use:
export project_id=`gcloud config list --format="value(core.project)"`
export region=us-central1
export zone=us-central1-a
export prefix=codelab-pbr
From cloudshell2 SSH to clientb:
gcloud compute ssh clientb --zone=$zone --tunnel-through-iap
Run the following commands:
ping 10.10.10.200 -c 5
curl 10.10.10.200/index.html
The pings and curl requests should be successful.
Output:
root@clientb:~$ ping 10.10.10.200 -c 5
PING 10.10.10.200 (10.10.10.200) 56(84) bytes of data.
64 bytes from 10.10.10.200: icmp_seq=1 ttl=64 time=1.346 ms
64 bytes from 10.10.10.200: icmp_seq=2 ttl=64 time=0.249 ms
64 bytes from 10.10.10.200: icmp_seq=3 ttl=64 time=0.305 ms
64 bytes from 10.10.10.200: icmp_seq=4 ttl=64 time=0.329 ms
64 bytes from 10.10.10.200: icmp_seq=5 ttl=64 time=0.240 ms
root@clientb:~$ curl 10.10.10.200/index.html
<html><body><p>Server</p></body></html>
Now exit the VM terminal and head back to cloudshell.
10. Create an Instance Group
Create an unmanaged instance group for your fw VM.
From cloudshell:
gcloud compute instance-groups unmanaged create pbr-uig --zone=$zone
Add the fw instance to the unmanaged instance group.
From cloudshell:
gcloud compute instance-groups unmanaged add-instances pbr-uig --instances=fw --zone=$zone
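Optionally, confirm that fw is now a member of the group:

gcloud compute instance-groups unmanaged list-instances pbr-uig --zone=$zone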
11. Create a health check
Create a health check for the backend service. We will use a simple TCP port 80 health check.
From cloudshell:
gcloud compute health-checks create tcp $prefix-hc-tcp-80 --region=$region --port 80
12. Create a backend service
Create a backend service to attach to the forwarding rule.
From cloudshell:
gcloud compute backend-services create be-pbr \
    --load-balancing-scheme=internal \
    --protocol=tcp \
    --region=$region \
    --health-checks=$prefix-hc-tcp-80 \
    --health-checks-region=$region
Now add the instance group to the backend service.
From cloudshell:
gcloud compute backend-services add-backend be-pbr \
    --region=$region \
    --instance-group=pbr-uig \
    --instance-group-zone=$zone
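Before continuing, you can confirm the fw backend reports HEALTHY; the nginx installed by its startup script is what answers the TCP 80 probe:

gcloud compute backend-services get-health be-pbr --region=$region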
13. Create a forwarding rule
From cloudshell:
gcloud compute forwarding-rules create fr-pbr \
    --region=$region \
    --load-balancing-scheme=internal \
    --network=$prefix-vpc \
    --subnet=$prefix-vpc-subnet \
    --ip-protocol=TCP \
    --ports=ALL \
    --backend-service=be-pbr \
    --backend-service-region=$region \
    --address=10.10.10.25 \
    --network-tier=PREMIUM
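Optionally, confirm the forwarding rule took the expected VIP:

gcloud compute forwarding-rules describe fr-pbr \
    --region=$region --format="value(IPAddress)"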
14. Create PBR Rule
This PBR rule applies to VMs tagged client. It routes all IPv4 traffic to the forwarding rule at 10.10.10.25 if the source IP is 10.10.10.10/32 (clienta's address) and the destination IP is in 10.10.10.0/24.
This means clienta's traffic will match the PBR rule, but clientb's will not.
From cloudshell:
gcloud network-connectivity policy-based-routes create pbr-client \
    --network=projects/$project_id/global/networks/$prefix-vpc \
    --next-hop-ilb-ip=10.10.10.25 \
    --source-range=10.10.10.10/32 \
    --destination-range=10.10.10.0/24 \
    --protocol-version=IPv4 \
    --priority=1000 \
    --tags=client
This PBR rule applies to VMs tagged server. It routes all IPv4 traffic to the forwarding rule at 10.10.10.25 if the source IP is 10.10.10.200/32 and the destination IP is 10.10.10.10/32, so return traffic from the server to clienta also passes through the firewall.
From cloudshell:
gcloud network-connectivity policy-based-routes create pbr-server \
    --network=projects/$project_id/global/networks/$prefix-vpc \
    --next-hop-ilb-ip=10.10.10.25 \
    --source-range=10.10.10.200/32 \
    --destination-range=10.10.10.10/32 \
    --protocol-version=IPv4 \
    --priority=2000 \
    --tags=server
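Optionally, list the policy-based routes to confirm both rules and their priorities:

gcloud network-connectivity policy-based-routes list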
15. Test Connectivity with PBR
We will now verify PBR functionality. The fw instance is configured with iptables to reject requests destined for the server. If PBR is working, the requests from clienta that previously succeeded will now fail, while clientb's requests still succeed.
SSH into the clienta VM and run the same tests.
From cloudshell1:
gcloud compute ssh clienta --zone=$zone --tunnel-through-iap
Run the following commands:
ping 10.10.10.200 -c 5
curl 10.10.10.200/index.html
Output:
root@clienta:~$ ping 10.10.10.200 -c 5
PING 10.10.10.200 (10.10.10.200) 56(84) bytes of data.
From 10.10.10.75 icmp_seq=1 Destination Port Unreachable
From 10.10.10.75 icmp_seq=2 Destination Port Unreachable
From 10.10.10.75 icmp_seq=3 Destination Port Unreachable
From 10.10.10.75 icmp_seq=4 Destination Port Unreachable
From 10.10.10.75 icmp_seq=5 Destination Port Unreachable
root@clienta:~$ curl 10.10.10.200/index.html
curl: (7) Failed to connect to 10.10.10.200 port 80: Connection refused
Since the requests failed, we can confirm that PBR is actively routing traffic for clienta to the fw instance which was configured to block this traffic.
SSH into clientb and run the same connectivity test.
From cloudshell2:
gcloud compute ssh clientb --zone=$zone --tunnel-through-iap
Run the following commands:
ping 10.10.10.200 -c 5
curl 10.10.10.200/index.html
Output:
root@clientb:~$ ping 10.10.10.200 -c 5
PING 10.10.10.200 (10.10.10.200) 56(84) bytes of data.
64 bytes from 10.10.10.200: icmp_seq=1 ttl=63 time=0.361 ms
64 bytes from 10.10.10.200: icmp_seq=2 ttl=63 time=0.475 ms
64 bytes from 10.10.10.200: icmp_seq=3 ttl=63 time=0.379 ms
root@clientb:~$ curl 10.10.10.200
<html><body><p>Server</p></body></html>
As you can see, requests from clientb to server are successful. This is because the requests do not match a PBR rule for the source IP.
16. [Optional] Validating with captures on firewall
In this optional section, you have the opportunity to validate PBR by taking packet captures on the firewall VM.
You should still have an SSH connection in cloudshell1 and cloudshell2 to clienta and clientb.
Open an additional cloudshell tab by clicking the +.
From cloudshell3, set variables:
export project_id=`gcloud config list --format="value(core.project)"`
export region=us-central1
export zone=us-central1-a
export prefix=codelab-pbr
SSH into fw:
gcloud compute ssh fw --zone=$zone --tunnel-through-iap
Run the following command on fw (cloudshell3):
sudo tcpdump -i any icmp -nn
From clienta (cloudshell1) run the ping test:
ping 10.10.10.200 -c 5
From clientb (cloudshell2) run the ping test:
ping 10.10.10.200 -c 5
Output on fw (cloudshell 3):
root@fw:~$ sudo tcpdump -i any icmp -nn
tcpdump: data link type LINUX_SLL2
tcpdump: verbose output suppressed, use -v[v]... for full protocol decode
listening on any, link-type LINUX_SLL2 (Linux cooked v2), snapshot length 262144 bytes
17:07:42.215297 ens4 In IP 10.10.10.10 > 10.10.10.200: ICMP echo request, id 25362, seq 1, length 64
17:07:42.215338 ens4 Out IP 10.10.10.75 > 10.10.10.10: ICMP 10.10.10.200 protocol 1 port 51064 unreachable, length 92
17:07:43.216122 ens4 In IP 10.10.10.10 > 10.10.10.200: ICMP echo request, id 25362, seq 2, length 64
17:07:43.216158 ens4 Out IP 10.10.10.75 > 10.10.10.10: ICMP 10.10.10.200 protocol 1 port 30835 unreachable, length 92
17:07:44.219064 ens4 In IP 10.10.10.10 > 10.10.10.200: ICMP echo request, id 25362, seq 3, length 64
17:07:44.219101 ens4 Out IP 10.10.10.75 > 10.10.10.10: ICMP 10.10.10.200 protocol 1 port 2407 unreachable, length 92
You will not see any packets from clientb (10.10.10.11) in the tcpdump output, since the PBR rule does not apply to clientb.
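As an optional variation, you can also capture TCP port 80 on fw and rerun the curl tests from both clients; only clienta's requests should appear in the capture:

sudo tcpdump -i any tcp port 80 -nn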
Exit the VMs and return to Cloud Shell to clean up resources.
17. Cleanup steps
From Cloud Shell, remove the PBR rule, forwarding rule, backend service, health check, instance group, compute instances, NAT, Cloud Router, and firewall rules.
gcloud -q network-connectivity policy-based-routes delete pbr-client
gcloud -q network-connectivity policy-based-routes delete pbr-server
gcloud -q compute forwarding-rules delete fr-pbr --region=$region
gcloud -q compute backend-services delete be-pbr --region=$region
gcloud -q compute health-checks delete $prefix-hc-tcp-80 --region=$region
gcloud -q compute instance-groups unmanaged delete pbr-uig --zone=$zone
gcloud -q compute instances delete clienta --zone=$zone
gcloud -q compute instances delete clientb --zone=$zone
gcloud -q compute instances delete server --zone=$zone
gcloud -q compute instances delete fw --zone=$zone
gcloud -q compute routers nats delete ${prefix}-nat-gw-${region} \
    --router=$prefix-cr --router-region=$region
gcloud -q compute routers delete $prefix-cr --region=$region
gcloud -q compute firewall-rules delete $prefix-allow-iap-proxy
gcloud -q compute firewall-rules delete $prefix-allow-http-icmp
gcloud -q compute firewall-rules delete $prefix-fw-allow-ingress
gcloud -q compute firewall-rules delete $prefix-allow-hc-ingress
Remove the subnet and VPCs:
gcloud -q compute networks subnets delete $prefix-vpc-subnet \
    --region $region
gcloud -q compute networks delete $prefix-vpc
18. Congratulations!
Congratulations for completing the codelab.
What we've covered
- Policy-based routes