1. Introduction
Overview
In this lab, you will explore some of the features of Network Connectivity Center.
Network Connectivity Center (NCC) is a hub-and-spoke control plane model for network connectivity management in Google Cloud. The hub resource provides a centralized connectivity management model to connect spokes. NCC currently supports the following network resources as spokes:
- VLAN attachments
- Router Appliances
- HA VPN
This codelab requires the flexiWAN SaaS SD-WAN solution, which simplifies WAN deployment and management. flexiWAN is an open source SD-WAN and SASE solution.
What you'll build
In this codelab, you'll build a hub and spoke SD-WAN topology to simulate remote branch sites that will traverse Google's backbone network for site to cloud and site to site communication.
- You'll deploy a pair of GCE VMs configured with the flexiWAN SD-WAN agent in the hub VPC; these represent headends for inbound and outbound traffic to GCP.
- Deploy two remote flexiWAN SD-WAN routers to represent two different branch site VPCs
- For data path testing, you'll configure three GCE VMs to simulate on-prem clients and a server hosted on GCP
What you'll learn
- Using NCC to interconnect remote branch offices using an open source software-defined WAN solution
- Hands-on experience with an open source software-defined WAN solution
What you'll need
- Knowledge of GCP VPC networks
- Knowledge of Cloud Router and BGP routing
- This codelab requires 6 VPCs. Check your Networks quota and request additional networks if required.
2. Objectives
- Setup the GCP Environment
- Deploy flexiWAN Edge instances in GCP
- Establish an NCC Hub and register the flexiWAN Edge NVAs as spokes
- Configure and manage flexiWAN instances using flexiManage
- Configure BGP route exchange between the workload VPC and the flexiWAN NVA
- Create a remote site simulating a customer remote branch or a data center
- Establish an IPsec tunnel between the remote site and the NVA
- Verify the appliances deployed successfully
- Validate site to cloud data transfer
- Validate site to site data transfer
- Clean up used resources
This tutorial requires the creation of a free flexiManage account to authenticate, onboard and manage flexiEdge instances.
Before you begin
Using Google Cloud Console and Cloud Shell
To interact with GCP, we will use both the Google Cloud Console and Cloud Shell throughout this lab.
Google Cloud Console
The Cloud Console can be reached at https://console.cloud.google.com.
Set up the following items in Google Cloud to make it easier to configure Network Connectivity Center:
In the Google Cloud Console, on the project selector page, select or create a Google Cloud project.
Launch Cloud Shell. This codelab uses $variables to simplify the gcloud configuration steps in Cloud Shell.
gcloud config list project
gcloud config set project [YOUR-PROJECT-NAME]
projectname=[YOUR-PROJECT-NAME]
echo $projectname
IAM Roles
NCC requires IAM roles to access specific APIs. Be sure to configure your user with the NCC IAM roles as required.
Role Name | Description | Permissions |
networkconnectivity.networkAdmin | Allows network administrators to manage hubs and spokes. | networkconnectivity.hubs.*, networkconnectivity.spokes.* |
networkconnectivity.networkSpokeManager | Allows adding and managing spokes in a hub. Intended for Shared VPC, where the host project owns the hub but admins in other projects can add spokes for their attachments to the hub. | networkconnectivity.spokes.* |
networkconnectivity.networkUser, networkconnectivity.networkViewer | Allows network users to view different attributes of hubs and spokes. | networkconnectivity.hubs.get, networkconnectivity.hubs.list, networkconnectivity.spokes.get, networkconnectivity.spokes.list, networkconnectivity.spokes.aggregatedList |
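As an example, you can grant one of these roles from Cloud Shell. This is a sketch that assumes the role names in the table above (prefixed with roles/) and a placeholder user account:
gcloud projects add-iam-policy-binding $projectname \
--member="user:[YOUR-EMAIL]" \
--role="roles/networkconnectivity.networkAdmin"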
3. Setup the Network Lab Environment
Overview
In this section, we'll deploy the VPC networks and firewall rules.
Simulate the On-Prem Branch Site Networks
This VPC network contains subnets for on-premises VM instances.
Create the on-premises site networks and subnets:
gcloud compute networks create site1-vpc \
--subnet-mode custom
gcloud compute networks create site2-vpc \
--subnet-mode custom
gcloud compute networks create s1-inside-vpc \
--subnet-mode custom
gcloud compute networks create s2-inside-vpc \
--subnet-mode custom
gcloud compute networks subnets create site1-subnet \
--network site1-vpc \
--range 10.10.0.0/24 \
--region us-central1
gcloud compute networks subnets create site2-subnet \
--network site2-vpc \
--range 10.20.0.0/24 \
--region us-east4
gcloud compute networks subnets create s1-inside-subnet \
--network s1-inside-vpc \
--range 10.10.1.0/24 \
--region us-central1
gcloud compute networks subnets create s2-inside-subnet \
--network s2-inside-vpc \
--range 10.20.1.0/24 \
--region us-east4
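Optionally, confirm the four site networks and their subnets were created; the regex filters below are illustrative:
gcloud compute networks list --filter="name~'site|inside'"
gcloud compute networks subnets list --filter="name~'site|inside'"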
Create site1-vpc firewall rules to allow:
- SSH, internal, IAP
- ESP, UDP/500, UDP/4500
- 10.0.0.0/8 range
- 192.168.0.0/16 range
gcloud compute firewall-rules create site1-ssh \
--network site1-vpc \
--allow tcp:22
gcloud compute firewall-rules create site1-internal \
--network site1-vpc \
--allow all \
--source-ranges 10.0.0.0/8
gcloud compute firewall-rules create site1-cloud \
--network site1-vpc \
--allow all \
--source-ranges 192.168.0.0/16
gcloud compute firewall-rules create site1-vpn \
--network site1-vpc \
--allow esp,udp:500,udp:4500 \
--target-tags router
gcloud compute firewall-rules create site1-iap \
--network site1-vpc --allow tcp:22 --source-ranges=35.235.240.0/20
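Optionally, list the rules applied to site1-vpc to confirm they were created as expected:
gcloud compute firewall-rules list --filter="network:site1-vpc"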
Create site2-vpc firewall rules to allow:
- SSH, internal, IAP
- 10.0.0.0/8 range
- 192.168.0.0/16 range
gcloud compute firewall-rules create site2-ssh \
--network site2-vpc \
--allow tcp:22
gcloud compute firewall-rules create site2-internal \
--network site2-vpc \
--allow all \
--source-ranges 10.0.0.0/8
gcloud compute firewall-rules create site2-cloud \
--network site2-vpc \
--allow all \
--source-ranges 192.168.0.0/16
gcloud compute firewall-rules create site2-vpn \
--network site2-vpc \
--allow esp,udp:500,udp:4500 \
--target-tags router
gcloud compute firewall-rules create site2-iap \
--network site2-vpc --allow tcp:22 --source-ranges=35.235.240.0/20
Create s1-inside-vpc firewall rules to allow:
- SSH, internal, IAP
- 10.0.0.0/8 range
- 192.168.0.0/16 range
gcloud compute firewall-rules create s1-inside-ssh \
--network s1-inside-vpc \
--allow tcp:22
gcloud compute firewall-rules create s1-inside-internal \
--network s1-inside-vpc \
--allow all \
--source-ranges 10.0.0.0/8
gcloud compute firewall-rules create s1-inside-cloud \
--network s1-inside-vpc \
--allow all \
--source-ranges 192.168.0.0/16
gcloud compute firewall-rules create s1-inside-iap \
--network s1-inside-vpc --allow tcp:22 --source-ranges=35.235.240.0/20
Create s2-inside-vpc firewall rules to allow:
- SSH, internal, IAP
- 10.0.0.0/8 range
- 192.168.0.0/16 range
gcloud compute firewall-rules create s2-inside-ssh \
--network s2-inside-vpc \
--allow tcp:22
gcloud compute firewall-rules create s2-inside-internal \
--network s2-inside-vpc \
--allow all \
--source-ranges 10.0.0.0/8
gcloud compute firewall-rules create s2-inside-cloud \
--network s2-inside-vpc \
--allow all \
--source-ranges 192.168.0.0/16
gcloud compute firewall-rules create s2-inside-iap \
--network s2-inside-vpc --allow tcp:22 --source-ranges=35.235.240.0/20
For testing purposes, create the s1-vm and s2-vm instances
gcloud compute instances create s1-vm \
--zone=us-central1-a \
--machine-type=e2-micro \
--network-interface subnet=s1-inside-subnet,private-network-ip=10.10.1.3,no-address
gcloud compute instances create s2-vm \
--zone=us-east4-b \
--machine-type=e2-micro \
--network-interface subnet=s2-inside-subnet,private-network-ip=10.20.1.3,no-address
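Optionally, verify that both test instances are up and received the expected internal IPs:
gcloud compute instances list --filter="name~'s1-vm|s2-vm'"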
Simulate GCP Cloud Network Environment
To enable cross-region site-to-site traffic through the hub-vpc network and the spokes, you must enable global routing in the hub-vpc network. Read more in NCC route exchange.
- Create the hub-vpc network and subnets:
gcloud compute networks create hub-vpc \
--subnet-mode custom \
--bgp-routing-mode=global
gcloud compute networks subnets create hub-subnet1 \
--network hub-vpc \
--range 10.1.0.0/24 \
--region us-central1
gcloud compute networks subnets create hub-subnet2 \
--network hub-vpc \
--range 10.2.0.0/24 \
--region us-east4
- Create the workload-vpc network and subnets:
gcloud compute networks create workload-vpc \
--subnet-mode custom \
--bgp-routing-mode=global
gcloud compute networks subnets create workload-subnet1 \
--network workload-vpc \
--range 192.168.235.0/24 \
--region us-central1
gcloud compute networks subnets create workload-subnet2 \
--network workload-vpc \
--range 192.168.236.0/24 \
--region us-east4
- Create Hub-VPC firewall rules to allow:
- SSH, IAP
- ESP, UDP/500, UDP/4500
- internal 192.168.0.0/16 range
gcloud compute firewall-rules create hub-ssh \
--network hub-vpc \
--allow tcp:22
gcloud compute firewall-rules create hub-vpn \
--network hub-vpc \
--allow esp,udp:500,udp:4500 \
--target-tags router
gcloud compute firewall-rules create hub-internal \
--network hub-vpc \
--allow all \
--source-ranges 192.168.0.0/16
gcloud compute firewall-rules create hub-iap \
--network hub-vpc --allow tcp:22 --source-ranges=35.235.240.0/20
- Create Workload-VPC firewall rules to allow:
- SSH
- internal 192.168.0.0/16 range (which covers TCP port 179 required for the BGP session from cloud router to the router appliance)
gcloud compute firewall-rules create workload-ssh \
--network workload-vpc \
--allow tcp:22
gcloud compute firewall-rules create workload-internal \
--network workload-vpc \
--allow all \
--source-ranges 192.168.0.0/16
gcloud compute --project=$projectname firewall-rules create allow-from-site-1-2 \
--direction=INGRESS \
--priority=1000 \
--network=workload-vpc \
--action=ALLOW \
--rules=all \
--source-ranges=10.10.1.0/24,10.20.1.0/24
gcloud compute firewall-rules create workload-onprem \
--network workload-vpc \
--allow all \
--source-ranges 10.0.0.0/8
gcloud compute firewall-rules create workload-iap \
--network workload-vpc --allow tcp:22 --source-ranges=35.235.240.0/20
- Enable Cloud NAT in the workload-vpc to allow workload1-vm to download packages, by creating a Cloud Router and NAT gateway:
gcloud compute routers create cloud-router-usc-central-1-nat \
--network workload-vpc \
--region us-central1
gcloud compute routers nats create cloudnat-us-central1 \
--router=cloud-router-usc-central-1-nat \
--auto-allocate-nat-external-ips \
--nat-all-subnet-ip-ranges \
--region us-central1
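Optionally, confirm the NAT gateway configuration:
gcloud compute routers nats describe cloudnat-us-central1 \
--router=cloud-router-usc-central-1-nat \
--region us-central1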
- Create the workload1-vm instance in us-central1-a in the workload-vpc. You will use this host to verify site to cloud connectivity:
gcloud compute instances create workload1-vm \
--project=$projectname \
--machine-type=e2-micro \
--image-family debian-10 \
--image-project debian-cloud \
--zone us-central1-a \
--private-network-ip 192.168.235.3 \
--no-address \
--subnet=workload-subnet1 \
--metadata startup-script="#! /bin/bash
sudo apt-get update
sudo apt-get install apache2 -y
sudo service apache2 restart
echo 'Welcome to Workload VM1 !!' | tee /var/www/html/index.html"
4. Setup On Prem Appliances for SD-WAN
Create the On-Prem VM for SDWAN (Appliances)
In the following section, we will create the site1-nva and site2-nva router appliances, which act as on-premises routers.
Create Instances
Create the site1-router appliance named site1-nva:
gcloud compute instances create site1-nva \
--zone=us-central1-a \
--machine-type=e2-medium \
--network-interface subnet=site1-subnet \
--network-interface subnet=s1-inside-subnet,no-address \
--create-disk=auto-delete=yes,boot=yes,device-name=flex-gcp-nva-1,image=projects/ubuntu-os-cloud/global/images/ubuntu-1804-bionic-v20220901,mode=rw,size=20,type=projects/$projectname/zones/us-central1-a/diskTypes/pd-balanced \
--no-shielded-secure-boot \
--shielded-vtpm \
--shielded-integrity-monitoring \
--reservation-affinity=any \
--can-ip-forward
Create the site2-router appliance named site2-nva
gcloud compute instances create site2-nva \
--zone=us-east4-b \
--machine-type=e2-medium \
--network-interface subnet=site2-subnet \
--network-interface subnet=s2-inside-subnet,no-address \
--create-disk=auto-delete=yes,boot=yes,device-name=flex-gcp-nva-1,image=projects/ubuntu-os-cloud/global/images/ubuntu-1804-bionic-v20220901,mode=rw,size=20,type=projects/$projectname/zones/us-east4-b/diskTypes/pd-balanced \
--no-shielded-secure-boot \
--shielded-vtpm \
--shielded-integrity-monitoring \
--reservation-affinity=any \
--can-ip-forward
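IP forwarding (--can-ip-forward) is what allows these NVAs to route traffic between their interfaces. Optionally, confirm it is enabled on both appliances:
gcloud compute instances describe site1-nva --zone=us-central1-a --format="value(canIpForward)"
gcloud compute instances describe site2-nva --zone=us-east4-b --format="value(canIpForward)"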
5. Install flexiWAN on site1-nva
Open an SSH connection to site1-nva. If the connection times out, try again:
gcloud compute ssh site1-nva --zone=us-central1-a
Install flexiWAN on site1-nva
sudo su
sudo curl -sL https://deb.flexiwan.com/setup | sudo bash -
apt install flexiwan-router -y
Prepare the VM for flexiWAN control plane registration.
After flexiWAN installation is complete, run the fwsystem_checker command to check your system configuration. This command checks the system requirements and helps to fix configuration errors in your system.
- Select option 2 for quick and silent configuration, then exit with 0.
- Do not close the Cloud Shell window.
root@site-1-nva-1:/home/user# fwsystem_checker
<output snipped>
[0] - quit and use fixed parameters
1 - check system configuration
2 - configure system silently
3 - configure system interactively
4 - restore system checker settings to default
------------------------------------------------
Choose: 2
<output snipped>
[0] - quit and use fixed parameters
1 - check system configuration
2 - configure system silently
3 - configure system interactively
4 - restore system checker settings to default
------------------------------------------------
Choose: 0
Please wait..
Done.
=== system checker ended ====
Leave the session open for the following steps
6. Register site1-nva with SD-WAN controller
These steps complete provisioning of the flexiWAN NVA, which is administered from the flexiManage Console. Be sure the flexiWAN organization is set up before moving forward.
Authenticate the newly deployed flexiWAN NVA with flexiManage using a security token by logging into the flexiManage Account. The same token may be reused across all router appliances.
Select Inventory → Tokens, create a token & select copy
Return to the Cloud Shell session (site1-nva) and paste the token into the file /etc/flexiwan/agent/token.txt as follows:
nano /etc/flexiwan/agent/token.txt
#Paste the generated token obtained from flexiManage
#Exit the session with CTRL+X, select Y to save, then press Enter
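Alternatively, if you prefer a one-liner over nano, you can write the token non-interactively; [YOUR-TOKEN] is a placeholder for the token copied from flexiManage:
echo "[YOUR-TOKEN]" > /etc/flexiwan/agent/token.txt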
Activate the Site Routers on the flexiManage Console
Login to the flexiManage Console to activate site1-nva on the controller
On the left panel, select Inventory → Devices and click the "Unknown" device
Enter the hostname of the site1-nva and Approve the device by sliding the dial to the right.
Select "Interfaces" Tab
Find the "Assigned" Column and click "No" and change the setting to "Yes"
Select Firewall Tab and click the "+" sign to add an inbound firewall rule
Select the WAN interface to apply the ssh rule as described below
Click "Update Device"
Start site1-nva from the flexiWAN controller. Return to Inventory → Devices → site1-nva and select "Start Device"
Status - Syncing
Status - Synced
The warning indicator is viewable under Troubleshoot → Notifications. Once viewed, select all, then mark as read.
7. Install flexiWAN on site2-nva
Open a new tab and create a Cloud Shell session, then set the $variables again for the gcloud configuration:
gcloud config list project
gcloud config set project [YOUR-PROJECT-NAME]
projectname=[YOUR-PROJECT-NAME]
echo $projectname
Open an SSH connection to site2-nva. If the connection times out, try again:
gcloud compute ssh site2-nva --zone=us-east4-b
Install flexiWAN on site2-nva
sudo su
sudo curl -sL https://deb.flexiwan.com/setup | sudo bash -
apt install flexiwan-router -y
Prepare the VM for flexiWAN control plane registration.
After flexiWAN installation is complete, run the fwsystem_checker command to check your system configuration. This command checks the system requirements and helps to fix configuration errors in your system.
- Select option 2 for quick and silent configuration, then exit with 0.
- Do not close the Cloud Shell window.
root@site2-nva:/home/user# fwsystem_checker
<output snipped>
[0] - quit and use fixed parameters
1 - check system configuration
2 - configure system silently
3 - configure system interactively
4 - restore system checker settings to default
------------------------------------------------
Choose: 2
<output snipped>
[0] - quit and use fixed parameters
1 - check system configuration
2 - configure system silently
3 - configure system interactively
4 - restore system checker settings to default
------------------------------------------------
Choose: 0
Please wait..
Done.
=== system checker ended ====
8. Register site2-nva with SD-WAN Controller
These steps complete provisioning of the flexiWAN NVA, which is administered from the flexiManage Console. Be sure the flexiWAN organization is set up before moving forward.
Authenticate the newly deployed flexiWAN NVA with flexiManage using a security token by logging into the flexiManage Account. The same token may be reused across all router appliances.
Select Inventory → Tokens, create a token & select copy
Return to the Cloud Shell session (site2-nva) and paste the token into the file /etc/flexiwan/agent/token.txt as follows:
nano /etc/flexiwan/agent/token.txt
#Paste the generated token obtained from flexiManage
#Exit the session with CTRL+X, select Y to save, then press Enter
Activate the Site Routers from the flexiManage Console
Login to the flexiManage Console to activate site2-nva on the controller
On the left panel, select Inventory → Devices and click the "Unknown" device
Enter the hostname of the site2-nva and Approve the device by sliding the dial to the right.
Select "Interfaces" Tab
Find the "Assigned" Column and click "No" and change the setting to "Yes"
Select Firewall Tab and click the "+" sign to add an inbound firewall rule. Select the WAN interface to apply the ssh rule as described below
Click "Update Device"
Start site2-nva from the flexiWAN controller. Return to Inventory → Devices → site2-nva and select "Start Device"
Status - Syncing
Status - Synced
The warning indicator is viewable under Troubleshoot → Notifications. Once viewed, select all, then mark as read.
9. Setup Hub SDWAN Appliances
In the following section, you will create and register the hub routers (hub-r1 and hub-r2) with the flexiWAN controller, as previously done with the site routers.
Open a new tab and create a Cloud Shell session, then set the $variables again for the gcloud configuration:
gcloud config list project
gcloud config set project [YOUR-PROJECT-NAME]
projectname=[YOUR-PROJECT-NAME]
echo $projectname
Create Hub NVA Instances
Create the hub-r1 appliance:
gcloud compute instances create hub-r1 \
--zone=us-central1-a \
--machine-type=e2-medium \
--network-interface subnet=hub-subnet1 \
--network-interface subnet=workload-subnet1,no-address \
--can-ip-forward \
--create-disk=auto-delete=yes,boot=yes,device-name=flex-gcp-nva-1,image=projects/ubuntu-os-cloud/global/images/ubuntu-1804-bionic-v20220901,mode=rw,size=20,type=projects/$projectname/zones/us-central1-a/diskTypes/pd-balanced \
--no-shielded-secure-boot \
--shielded-vtpm \
--shielded-integrity-monitoring \
--reservation-affinity=any
Create the hub-r2 appliance:
gcloud compute instances create hub-r2 \
--zone=us-east4-b \
--machine-type=e2-medium \
--network-interface subnet=hub-subnet2 \
--network-interface subnet=workload-subnet2,no-address \
--can-ip-forward \
--create-disk=auto-delete=yes,boot=yes,device-name=flex-gcp-nva-1,image=projects/ubuntu-os-cloud/global/images/ubuntu-1804-bionic-v20220901,mode=rw,size=20,type=projects/$projectname/zones/us-east4-b/diskTypes/pd-balanced \
--no-shielded-secure-boot \
--shielded-vtpm \
--shielded-integrity-monitoring \
--reservation-affinity=any
10. Install flexiWAN on Hub Instances for hub-r1
Open an SSH connection to hub-r1:
gcloud compute ssh hub-r1 --zone=us-central1-a
Install the flexiWAN agent on hub-r1:
sudo su
sudo curl -sL https://deb.flexiwan.com/setup | sudo bash -
apt install flexiwan-router -y
Prepare the hub-r1 VM for flexiWAN registration.
After flexiWAN installation is complete, run the fwsystem_checker command to check your system configuration. This command checks the system requirements and helps to fix configuration errors in your system.
root@hub-r1:/home/user# fwsystem_checker
- Select option 2 for quick and silent configuration, then exit with 0.
- Do not close the Cloud Shell window.
11. Register hub-r1 on the flexiManage controller
Authenticate the newly deployed flexiWAN NVA with flexiManage using a security token by logging into the flexiManage Account.
- Select Inventory → Tokens and copy the token
Return to the Cloud Shell session (hub-r1) and paste the token into the file /etc/flexiwan/agent/token.txt as follows:
nano /etc/flexiwan/agent/token.txt
#Paste the generated token obtained from flexiManage
#Exit the session with CTRL+X, select Y to save, then press Enter
12. Install flexiWAN on Hub Instances for hub-r2
Open an SSH connection to hub-r2:
gcloud compute ssh hub-r2 --zone=us-east4-b
Install the flexiWAN agent on hub-r2:
sudo su
sudo curl -sL https://deb.flexiwan.com/setup | sudo bash -
apt install flexiwan-router -y
Prepare the hub-r2 VM for flexiWAN registration.
After flexiWAN installation is complete, run the fwsystem_checker command to check your system configuration. This command checks the system requirements and helps to fix configuration errors in your system.
root@hub-r2:/home/user# fwsystem_checker
- Select option 2 for quick and silent configuration, then exit with 0.
- Do not close the Cloud Shell window.
13. Register hub-r2 on the flexiManage controller
Authenticate the newly deployed flexiWAN NVA with flexiManage using a security token by logging into the flexiManage Account.
- Select Inventory → Tokens and copy the token
Return to the Cloud Shell session (hub-r2) and paste the token into the file /etc/flexiwan/agent/token.txt as follows:
nano /etc/flexiwan/agent/token.txt
#Paste the generated token obtained from flexiManage
#Exit the session with CTRL+X, select Y to save, then press Enter
Activate Hub router hub-r1 on the flexiManage Console
Login to the flexiManage Console
- Navigate to Inventory → Devices
- Note that the hostnames for hub-r1 and hub-r2 both appear as "Unknown"
Select the Unknown device with the HostName hub-r1
- Enter the hostname of the hub-r1
- Approve the device by sliding the dial to the right.
Select the Interfaces Tab
- Find the "Assigned" Column
- Next to the interface row, click on "No" to change the setting to "Yes"
Select the Firewall Tab
- Click "+" to Add Inbound firewall rule
- Select the WAN interface to inherit the rule
- Allow SSH port 22 with TCP protocol
- Click "Update Device"
Start the hub-r1 appliance for SD-WAN from flexiWAN's controller
- Return to Inventory → Devices → hub-r1
Select "Start Device"
- Wait for the sync to complete and note the "running" status
Activate Hub router hub-r2 on the flexiManage Console
Select the Unknown device with the HostName hub-r2
- Enter the hostname of the hub-r2
- Approve the device by sliding the dial to the right.
Select the Interfaces Tab
- Find the "Assigned" Column
- Next to the interface row, Click on "No" to change the setting to "Yes"
Select the Firewall Tab
- Click "+" to Add Inbound firewall rule
- Select the WAN interface to inherit the rule
- Allow SSH port 22 with TCP protocol
- Click Add Rule
- Click "Update Device"
Start the hub-r2 appliance for SD-WAN from flexiWAN's controller
- Return to Inventory → Devices → hub-r2, select "Start Device"
- Wait for the sync to complete and note the "running" status
14. Network Connectivity Center on GCP Hub
Enable API Services
Enable the Network Connectivity API if it is not yet enabled:
gcloud services enable networkconnectivity.googleapis.com
Create the NCC Hub
gcloud network-connectivity hubs create ncc-hub
Create request issued for: [ncc-hub]
Waiting for operation [projects/user-3p-dev/locations/global/operations/operation-1668793629598-5edc24b7ee3ce-dd4c765b-5ca79556] to complete...done.
Created hub [ncc-hub]
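Optionally, describe the hub to confirm it was created and is active:
gcloud network-connectivity hubs describe ncc-hub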
Configure both router appliances as NCC spokes
Find the URI and IP address for both hub-r1 and hub-r2 and note the output; you'll need this information in the next step.
Be sure to note the IP addresses (192.168.x.x) of the hub-r1 and hub-r2 instances.
gcloud compute instances describe hub-r1 \
--zone=us-central1-a \
--format="value(selfLink.scope(projects))"
gcloud compute instances describe hub-r1 --zone=us-central1-a | grep "networkIP"
gcloud compute instances describe hub-r2 \
--zone=us-east4-b \
--format="value(selfLink.scope(projects))"
gcloud compute instances describe hub-r2 --zone=us-east4-b | grep "networkIP"
Add hub-r1's vNIC networkIP (192.168.x.x) as a spoke and enable site to site data transfer:
gcloud network-connectivity spokes linked-router-appliances create s2s-wrk-cr1 \
--hub=ncc-hub \
--router-appliance=instance="https://www.googleapis.com/compute/projects/$projectname/zones/us-central1-a/instances/hub-r1",ip=192.168.235.4 \
--region=us-central1 \
--site-to-site-data-transfer
Add hub-r2's vNIC networkIP (192.168.x.x) as a spoke and enable site to site data transfer:
gcloud network-connectivity spokes linked-router-appliances create s2s-wrk-cr2 \
--hub=ncc-hub \
--router-appliance=instance=/projects/$projectname/zones/us-east4-b/instances/hub-r2,ip=192.168.236.101 \
--region=us-east4 \
--site-to-site-data-transfer
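Optionally, list the spokes in each region to confirm both router appliances are attached to ncc-hub:
gcloud network-connectivity spokes list --region=us-central1
gcloud network-connectivity spokes list --region=us-east4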
Configure Cloud Router to establish BGP with Hub-R1
In the following step, create the Cloud Router and announce the workload VPC subnet 192.168.235.0/24.
Create the Cloud Router in us-central1 that will establish BGP with hub-r1:
gcloud compute routers create wrk-cr1 \
--region=us-central1 \
--network=workload-vpc \
--asn=65002 \
--set-advertisement-groups=all_subnets \
--advertisement-mode=custom
Configuring the router appliances as NCC spokes enables the Cloud Router to negotiate BGP on virtual interfaces.
Create two interfaces on the Cloud Router that will exchange BGP messages with hub-r1.
The IP addresses are selected from the workload subnet and can be changed if required.
gcloud compute routers add-interface wrk-cr1 \
--region=us-central1 \
--subnetwork=workload-subnet1 \
--interface-name=int0 \
--ip-address=192.168.235.101
gcloud compute routers add-interface wrk-cr1 \
--region=us-central1 \
--subnetwork=workload-subnet1 \
--interface-name=int1 \
--ip-address=192.168.235.102 \
--redundant-interface=int0
Configure the Cloud Router interfaces to establish BGP with hub-r1's vNIC-1, updating the peer-ip-address with hub-r1's networkIP. Note that the same IP address is used for int0 and int1.
gcloud compute routers add-bgp-peer wrk-cr1 \
--peer-name=hub-cr1-bgp-peer-0 \
--interface=int0 \
--peer-ip-address=192.168.235.4 \
--peer-asn=64111 \
--instance=hub-r1 \
--instance-zone=us-central1-a \
--region=us-central1
gcloud compute routers add-bgp-peer wrk-cr1 \
--peer-name=hub-cr1-bgp-peer-1 \
--interface=int1 \
--peer-ip-address=192.168.235.4 \
--peer-asn=64111 \
--instance=hub-r1 \
--instance-zone=us-central1-a \
--region=us-central1
Verify the BGP state. At this point in the codelab, BGP is in the "Connect" state because the network router appliance has not been configured for BGP.
gcloud compute routers get-status wrk-cr1 --region=us-central1
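To focus on just the BGP session state, you can filter the output with grep, mirroring the grep pattern used earlier in this lab:
gcloud compute routers get-status wrk-cr1 --region=us-central1 | grep state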
Configure Cloud Router wrk-cr2 to establish BGP with Hub-R2
In the following step, create the Cloud Router and announce the workload VPC subnet 192.168.236.0/24.
Create the Cloud Router in us-east4 that will establish BGP with hub-r2:
gcloud compute routers create wrk-cr2 \
--region=us-east4 \
--network=workload-vpc \
--asn=65002 \
--set-advertisement-groups=all_subnets \
--advertisement-mode=custom
Create a pair of interfaces on the Cloud Router that will exchange BGP messages with hub-r2. The IP addresses are selected from the workload subnet and can be changed if required.
gcloud compute routers add-interface wrk-cr2 \
--region=us-east4 \
--subnetwork=workload-subnet2 \
--interface-name=int0 \
--ip-address=192.168.236.5
gcloud compute routers add-interface wrk-cr2 \
--region=us-east4 \
--subnetwork=workload-subnet2 \
--interface-name=int1 \
--ip-address=192.168.236.6 \
--redundant-interface=int0
Configure the Cloud Router interfaces to establish BGP with hub-r2's vNIC-1, updating the peer-ip-address with hub-r2's networkIP. Note that the same IP address is used for int0 and int1.
gcloud compute routers add-bgp-peer wrk-cr2 \
--peer-name=hub-cr2-bgp-peer-0 \
--interface=int0 \
--peer-ip-address=192.168.236.101 \
--peer-asn=64112 \
--instance=hub-r2 \
--instance-zone=us-east4-b \
--region=us-east4
gcloud compute routers add-bgp-peer wrk-cr2 \
--peer-name=hub-cr2-bgp-peer-1 \
--interface=int1 \
--peer-ip-address=192.168.236.101 \
--peer-asn=64112 \
--instance=hub-r2 \
--instance-zone=us-east4-b \
--region=us-east4
Verify the BGP state. At this point in the codelab, BGP is in the "Connect" state because the network router appliance has not been configured for BGP.
gcloud compute routers get-status wrk-cr2 --region=us-east4
15. Configure Hub router appliances for BGP
Configure hub-r1 for BGP
Be sure to login to the flexiManage Console
Navigate to Inventory → Devices → hub-r1 and select the device with the HostName:hub-r1
- Click on the "Routing" tab
- Click on the "BGP Configuration"
- Disable "Redistribute OSPF Routes"
- Configure hub-r1 for BGP with these parameters and Click "Save"
Select "Interfaces" tab, locate the LAN interface, find the column "Routing"
- Click "none" to open up menu to select BGP as the routing protocol
- At the top of the page, click "update device"
Configure hub-r2 for BGP
Be sure to login to the flexiManage Console
Navigate to Inventory → Devices → hub-r2, select the device with the HostName:hub-r2
- Click on the "Routing" tab
- Click on the "BGP Configuration"
- Disable "Redistribute OSPF Routes"
- Configure hub-r2 for BGP with these parameters and click "Save"
Select "Interfaces" tab, locate the LAN interface, find the column "Routing"
- Click "none" to open up a drop down menu to select BGP as the routing protocol
- At the top of the page, click "update device"
Select "routing" tab
- Confirm that hub-r2 has learned a BGP route from wrk-cr2
16. BGP Route Exchange between Router Appliances
Establish local ASN for remote sites
Configure a local BGP ASN for site1-nva and site2-nva. Once configured, we will establish an IPsec tunnel between the remote sites and the hub routers.
Select the device with the HostName:site1-nva
- Click on the "Routing" tab
- Click on the "BGP Configuration"
- Disable "Redistribute OSPF Routes"
- Enable BGP
- Local ASN 7269 → Save
- Update Device
- Interfaces Tab → LAN → Routing → BGP
- Update Device
Select the device with the HostName:site2-nva
- Click on the "Routing" tab
- Click on the "BGP Configuration"
- Disable "Redistribute OSPF Routes"
- Enable BGP
- Local ASN 7270 → Save
- Update Device
- Interfaces Tab → LAN → Routing → BGP
- Update Device
Configure VPN tunnels Between Site and Hub Appliances
Be sure to login to the flexiManage Console
- Navigate to Inventory → Devices
- Select the box next to the hostnames of site1-nva and hub-r1 to build a VPN tunnel between this pair of NVAs
- Click Actions → Create Tunnels and configure the following
- Select Create Tunnels
- Remove the check marks from site1-nva and hub-r1
Repeat the steps to create a tunnel between site2-nva and hub-r2 by selecting the appropriate parameters
Verify the pair of tunnels is established between each pair of NVAs.
- On the left side panel, select "Inventory", click "Tunnels", and locate the status column
Verify that "site1-nva" learned routes to the subnet 192.168.235.0/24 and 192.168.236.0/24
- Select Inventory → Devices → site1-nva and click the "Routing" tab
In the example output below, flexiWAN automatically created the tunnel using the host IP address 10.100.0.6
17. Verify Data Path Connectivity
Verify site to cloud connectivity from on prem
Referring to the diagram, verify the data path between s1-vm and workload1-vm.
Configure VPC Static routes for Site to Cloud
The on-premises Site1-VPC and Site2-VPC simulate an on-premises datacenter network.
Both site1-nva and site2-nva router appliances use VPN connectivity to reach the hub network.
For the site to cloud use case, create static routes to the 192.168.0.0/16 destination using the router appliance as the next hop to reach networks in the GCP cloud network.
On s1-inside-vpc, create a static route for cloud destination (192.168.0.0/16):
gcloud compute routes create site1-subnet-route \
--network=s1-inside-vpc \
--destination-range=192.168.0.0/16 \
--next-hop-instance=site1-nva \
--next-hop-instance-zone=us-central1-a
On s2-inside-vpc, create a static route for cloud destination (192.168.0.0/16):
gcloud compute routes create site2-subnet-route \
--network=s2-inside-vpc \
--destination-range=192.168.0.0/16 \
--next-hop-instance=site2-nva \
--next-hop-instance-zone=us-east4-b
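Optionally, list the static routes to confirm both next hops point at the NVAs; the name filter below is illustrative:
gcloud compute routes list --filter="name~'subnet-route'"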
In Cloud Shell, look up the IP address of workload1-vm. You'll need it to test connectivity from s1-vm.
gcloud compute instances describe workload1-vm --zone=us-central1-a | grep "networkIP"
Open an SSH connection to s1-vm. If the connection times out, try again:
gcloud compute ssh s1-vm --zone=us-central1-a
SSH to "s1-vm" and use the "curl" command to establish a TCP session to workload1-VM ip address.
s1-vm:~$ curl 192.168.235.3 -vv
*   Trying 192.168.235.3:80...
* Connected to 192.168.235.3 (192.168.235.3) port 80 (#0)
> GET / HTTP/1.1
> Host: 192.168.235.3
> User-Agent: curl/7.74.0
> Accept: */*
>
* Mark bundle as not supporting multiuse
< HTTP/1.1 200 OK
< Date: Wed, 07 Dec 2022 15:12:08 GMT
< Server: Apache/2.4.54 (Debian)
< Last-Modified: Tue, 06 Dec 2022 00:57:46 GMT
< ETag: "1f-5ef1e4acfa1d9"
< Accept-Ranges: bytes
< Content-Length: 31
< Content-Type: text/html
<
Page served from: workload1-vm
* Connection #0 to host 192.168.235.3 left intact
Verify Site to Site Connectivity
Referring to the diagram, verify the data path between s1-vm and s2-vm.
Configure VPC Static routes for Site to Site
To route site-to-site traffic between Site 1 and Site 2 over GCP's global network, you'll create static routes to the remote site subnet destinations using the on-prem router appliance as the next hop.
In a later step, the workload VPC will be configured with NCC to support site to site data transfer.
On s1-inside-vpc, create a static route to reach site2-subnet (10.20.1.0/24):
gcloud compute routes create site1-sn1-route \
--network=s1-inside-vpc \
--destination-range=10.20.1.0/24 \
--next-hop-instance=site1-nva \
--next-hop-instance-zone=us-central1-a
On s2-inside-vpc, create a static route to reach site1-subnet (10.10.1.0/24):
gcloud compute routes create site2-sn1-route \
--network=s2-inside-vpc \
--destination-range=10.10.1.0/24 \
--next-hop-instance=site2-nva \
--next-hop-instance-zone=us-east4-b
In Cloud Shell, look up the IP address of s2-vm. You'll need it to test connectivity from s1-vm.
gcloud compute instances describe s2-vm --zone=us-east4-b | grep networkIP
Open an SSH connection to s1-vm. If the connection times out, try again:
gcloud compute ssh s1-vm --zone=us-central1-a
SSH to "s1-vm" and "ping" the ip address of "s2-vm."
s1-vm:~$ ping 10.20.1.3
PING 10.20.1.3 (10.20.1.3) 56(84) bytes of data.
64 bytes from 10.20.1.3: icmp_seq=1 ttl=60 time=99.1 ms
64 bytes from 10.20.1.3: icmp_seq=2 ttl=60 time=94.3 ms
64 bytes from 10.20.1.3: icmp_seq=3 ttl=60 time=92.4 ms
64 bytes from 10.20.1.3: icmp_seq=4 ttl=60 time=90.9 ms
64 bytes from 10.20.1.3: icmp_seq=5 ttl=60 time=89.7 ms
18. Clean Up
Log in to Cloud Shell and delete the VM instances in the hub and branch site networks:
#on prem instances
gcloud compute instances delete s1-vm --zone=us-central1-a --quiet
gcloud compute instances delete s2-vm --zone=us-east4-b --quiet
#delete on prem firewall rules
gcloud compute firewall-rules delete site1-ssh --quiet
gcloud compute firewall-rules delete site1-internal --quiet
gcloud compute firewall-rules delete site1-cloud --quiet
gcloud compute firewall-rules delete site1-vpn --quiet
gcloud compute firewall-rules delete site1-iap --quiet
gcloud compute firewall-rules delete site2-ssh --quiet
gcloud compute firewall-rules delete site2-internal --quiet
gcloud compute firewall-rules delete site2-cloud --quiet
gcloud compute firewall-rules delete site2-vpn --quiet
gcloud compute firewall-rules delete site2-iap --quiet
gcloud compute firewall-rules delete allow-from-site-1-2 --quiet
gcloud compute firewall-rules delete s1-inside-cloud s1-inside-iap s1-inside-internal s1-inside-ssh s2-inside-cloud s2-inside-iap s2-inside-internal s2-inside-ssh --quiet
#delete ncc spokes
gcloud network-connectivity spokes delete s2s-wrk-cr1 --region us-central1 --quiet
gcloud network-connectivity spokes delete s2s-wrk-cr2 --region us-east4 --quiet
#delete ncc hub
gcloud network-connectivity hubs delete ncc-hub --quiet
#delete the cloud router
gcloud compute routers delete wrk-cr1 --region=us-central1 --quiet
gcloud compute routers delete wrk-cr2 --region=us-east4 --quiet
#delete the instances
gcloud compute instances delete hub-r1 --zone=us-central1-a --quiet
gcloud compute instances delete hub-r2 --zone=us-east4-b --quiet
gcloud compute instances delete workload1-vm --zone=us-central1-a --quiet
gcloud compute instances delete site1-nva --zone=us-central1-a --quiet
gcloud compute instances delete site2-nva --zone=us-east4-b --quiet
#delete subnets
gcloud compute networks subnets delete hub-subnet1 s1-inside-subnet site1-subnet workload-subnet1 --region=us-central1 --quiet
gcloud compute networks subnets delete hub-subnet2 s2-inside-subnet site2-subnet workload-subnet2 --region=us-east4 --quiet
#delete hub firewall rule
gcloud compute firewall-rules delete hub-ssh --quiet
gcloud compute firewall-rules delete hub-vpn --quiet
gcloud compute firewall-rules delete hub-internal --quiet
gcloud compute firewall-rules delete hub-iap --quiet
gcloud compute firewall-rules delete workload-ssh --quiet
gcloud compute firewall-rules delete workload-internal --quiet
gcloud compute firewall-rules delete workload-onprem --quiet
gcloud compute firewall-rules delete workload-iap --quiet
#delete vpcs
gcloud compute networks delete hub-vpc s1-inside-vpc s2-inside-vpc site1-vpc site2-vpc workload-vpc --quiet
19. Congratulations!
You have completed the Network Connectivity Center Lab!
What you covered
- Configured Software Defined WAN integration for NCC site to cloud
- Configured Software Defined WAN integration for NCC site to site
Next Steps
- Network Connectivity Center Overview
- Network Connectivity Center Documentation
- flexiWAN resources
- flexiWAN GitLab Repo
©Google, LLC or its affiliates. All rights reserved. Do not distribute.