1. Introduction
Overview
In this lab, you will explore some of the features of Network Connectivity Center.
Network Connectivity Center (NCC) is a hub-and-spoke control plane model for network connectivity management in Google Cloud. The hub resource provides a centralized connectivity management model to connect spokes. NCC currently supports the following network resources as spokes:
- VLAN attachments
- Router Appliances
- HA VPN
This codelab requires the flexiWAN SaaS SD-WAN solution, which simplifies WAN deployment and management.
What you'll build
In this codelab, you'll build a hub and spoke SD-WAN topology to simulate remote branch sites that will traverse Google's backbone network for site to cloud communication.
- You'll deploy a pair of GCE VMs running the flexiWAN SD-WAN agent in the hub VPC that represent headends for traffic inbound to and outbound from GCP.
- Deploy two remote flexiWAN SD-WAN routers to represent two different branch site VPCs.
- For data path testing, you'll configure three GCE VMs to simulate on-prem clients and a server hosted on GCP.
What you'll learn
- Using NCC to interconnect remote branch offices with an open-source software-defined WAN solution
- Hands-on experience with an open-source software-defined WAN solution
What you'll need
- Knowledge of GCP VPC network
- Knowledge of Cloud Router and BGP routing
2. Objectives
- Setup the GCP Environment
- Deploy flexiWAN Edge instances in GCP
- Establish an NCC Hub with the flexiWAN Edge NVA as a spoke
- Configure and manage flexiWAN instances using flexiManage
- Configure BGP route exchange between vpc-app-svcs and the flexiWAN NVA
- Create a remote site simulating a customer remote branch or a data center
- Establish an IPsec tunnel between the remote site and the NVA
- Verify the appliances deployed successfully
- Validate site to cloud data transfer
- Clean up used resources
This tutorial requires the creation of a free flexiManage account to authenticate, onboard and manage flexiEdge instances.
Before you begin
Using Google Cloud Console and Cloud Shell
To interact with GCP, we will use both the Google Cloud Console and Cloud Shell throughout this lab.
Google Cloud Console
The Cloud Console can be reached at https://console.cloud.google.com.
Set up the following items in Google Cloud to make it easier to configure Network Connectivity Center:
In the Google Cloud Console, on the project selector page, select or create a Google Cloud project.
Launch Cloud Shell. This codelab uses $variables to simplify gcloud configuration in Cloud Shell.
gcloud config list project
gcloud config set project [YOUR-PROJECT-NAME]
projectname=[YOUR-PROJECT-NAME]
echo $projectname
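Optionally, you can also set a default region and zone so that later commands need fewer flags. This codelab keeps the --region and --zone flags explicit, so the following is purely a convenience:
gcloud config set compute/region us-central1
gcloud config set compute/zone us-central1-a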
IAM Roles
NCC requires IAM roles to access specific APIs. Be sure to configure your user with the NCC IAM roles as required.
Role Name | Description | Permissions |
networkconnectivity.networkAdmin | Allows network administrators to manage hubs and spokes. | networkconnectivity.hubs.*, networkconnectivity.spokes.* |
networkconnectivity.networkSpokeManager | Allows adding and managing spokes in a hub. Intended for Shared VPC, where the host project owns the hub but admins in other projects can add spokes for their attachments to the hub. | networkconnectivity.spokes.* |
networkconnectivity.networkUser, networkconnectivity.networkViewer | Allows network users to view different attributes of hubs and spokes. | networkconnectivity.hubs.get, networkconnectivity.hubs.list, networkconnectivity.spokes.get, networkconnectivity.spokes.list, networkconnectivity.spokes.aggregatedList |
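As an illustrative sketch only, a project owner could grant one of the roles listed above with a command along these lines (the member email is a placeholder; substitute your own account and the role that applies):
gcloud projects add-iam-policy-binding $projectname \
--member="user:your-user@example.com" \
--role="roles/networkconnectivity.networkAdmin"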
3. Setup the Network Lab Environment
Overview
In this section, we'll deploy the VPC networks and firewall rules.
Simulate the On-Prem Branch Site Networks
This VPC network contains subnets for on-premises VM instances.
Create the on-premises site networks and subnets:
gcloud compute networks create site1-vpc \
--subnet-mode custom
gcloud compute networks create s1-inside-vpc \
--subnet-mode custom
gcloud compute networks subnets create site1-subnet \
--network site1-vpc \
--range 10.10.0.0/24 \
--region us-central1
gcloud compute networks subnets create s1-inside-subnet \
--network s1-inside-vpc \
--range 10.10.1.0/24 \
--region us-central1
Create site1-vpc firewall rules to allow:
- SSH, internal, IAP
- ESP, UDP/500, UDP/4500
- 10.0.0.0/8 range
- 192.168.0.0/16 range
gcloud compute firewall-rules create site1-ssh \
--network site1-vpc \
--allow tcp:22
gcloud compute firewall-rules create site1-internal \
--network site1-vpc \
--allow all \
--source-ranges 10.0.0.0/8
gcloud compute firewall-rules create site1-cloud \
--network site1-vpc \
--allow all \
--source-ranges 192.168.0.0/16
gcloud compute firewall-rules create site1-vpn \
--network site1-vpc \
--allow esp,udp:500,udp:4500 \
--target-tags router
gcloud compute firewall-rules create site1-iap \
--network site1-vpc --allow tcp:22 --source-ranges=35.235.240.0/20
Create s1-inside-vpc firewall rules to allow:
- SSH, internal, IAP
- 10.0.0.0/8 range
- 192.168.0.0/16 range
gcloud compute firewall-rules create s1-inside-ssh \
--network s1-inside-vpc \
--allow tcp:22
gcloud compute firewall-rules create s1-inside-internal \
--network s1-inside-vpc \
--allow all \
--source-ranges 10.0.0.0/8
gcloud compute firewall-rules create s1-inside-cloud \
--network s1-inside-vpc \
--allow all \
--source-ranges 192.168.0.0/16
gcloud compute firewall-rules create s1-inside-iap \
--network s1-inside-vpc --allow tcp:22 --source-ranges=35.235.240.0/20
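As a quick sanity check, you can list the rules created in each network, for example:
gcloud compute firewall-rules list --filter="network:site1-vpc"
gcloud compute firewall-rules list --filter="network:s1-inside-vpc"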
For testing purposes, create the s1-vm instance:
gcloud compute instances create s1-vm \
--zone=us-central1-a \
--machine-type=e2-micro \
--network-interface subnet=s1-inside-subnet,private-network-ip=10.10.1.3,no-address
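Optionally, confirm the VM received the expected internal address (and no external IP), for example:
gcloud compute instances describe s1-vm \
--zone=us-central1-a \
--format="value(networkInterfaces[0].networkIP)"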
Simulate GCP Cloud Network Environment
To enable cross-region site-to-site traffic through the hub-vpc network and the spokes, you must enable global routing in the hub-vpc network. Read more in NCC route exchange.
- Create the hub-vpc network and subnets:
gcloud compute networks create hub-vpc \
--subnet-mode custom \
--bgp-routing-mode=global
gcloud compute networks subnets create hub-subnet1 \
--network hub-vpc \
--range 10.1.0.0/24 \
--region us-central1
gcloud compute networks subnets create hub-subnet2 \
--network hub-vpc \
--range 10.2.0.0/24 \
--region us-east4
- Create the workload-vpc network and subnets:
gcloud compute networks create workload-vpc \
--subnet-mode custom \
--bgp-routing-mode=global
gcloud compute networks subnets create workload-subnet1 \
--network workload-vpc \
--range 192.168.235.0/24 \
--region us-central1
- Create Hub-VPC firewall rules to allow:
- SSH, IAP
- ESP, UDP/500, UDP/4500
- internal traffic from the 192.168.0.0/16 range
gcloud compute firewall-rules create hub-ssh \
--network hub-vpc \
--allow tcp:22
gcloud compute firewall-rules create hub-vpn \
--network hub-vpc \
--allow esp,udp:500,udp:4500 \
--target-tags router
gcloud compute firewall-rules create hub-internal \
--network hub-vpc \
--allow all \
--source-ranges 192.168.0.0/16
gcloud compute firewall-rules create hub-iap \
--network hub-vpc --allow tcp:22 --source-ranges=35.235.240.0/20
- Create Workload-VPC firewall rules to allow:
- SSH, IAP
- internal 192.168.0.0/16 range (which covers TCP port 179 required for the BGP session from the Cloud Router to the router appliance)
- on-prem 10.0.0.0/8 range
gcloud compute firewall-rules create workload-ssh \
--network workload-vpc \
--allow tcp:22
gcloud compute firewall-rules create workload-internal \
--network workload-vpc \
--allow all \
--source-ranges 192.168.0.0/16
gcloud compute firewall-rules create workload-onprem \
--network workload-vpc \
--allow all \
--source-ranges 10.0.0.0/8
gcloud compute firewall-rules create workload-iap \
--network workload-vpc --allow tcp:22 --source-ranges=35.235.240.0/20
- Enable Cloud NAT in the workload VPC so that workload1-vm can download packages, by creating a Cloud Router and a NAT gateway:
gcloud compute routers create cloud-router-usc-central-1-nat \
--network workload-vpc \
--region us-central1
gcloud compute routers nats create cloudnat-us-central1 \
--router=cloud-router-usc-central-1-nat \
--auto-allocate-nat-external-ips \
--nat-all-subnet-ip-ranges \
--region us-central1
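If you want to confirm the NAT gateway is attached to the Cloud Router, a quick check looks like this:
gcloud compute routers nats list \
--router=cloud-router-usc-central-1-nat \
--region=us-central1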
- Create the workload1-vm instance in "us-central1-a" in the workload VPC. You will use this host to verify site to cloud connectivity:
gcloud compute instances create workload1-vm \
--project=$projectname \
--machine-type=e2-micro \
--image-family debian-10 \
--image-project debian-cloud \
--zone us-central1-a \
--private-network-ip 192.168.235.3 \
--no-address \
--subnet=workload-subnet1 \
--metadata startup-script="#! /bin/bash
sudo apt-get update
sudo apt-get install apache2 -y
sudo service apache2 restart
echo 'Welcome to Workload VM1 !!' | tee /var/www/html/index.html"
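After the VM boots and the startup script finishes (this can take a minute or two), you can optionally verify that Apache is serving the page by running curl on the VM over an IAP tunnel:
gcloud compute ssh workload1-vm \
--zone=us-central1-a \
--tunnel-through-iap \
--command="curl -s localhost"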
4. Setup On Prem Appliances for SD-WAN
Create the On-Prem VM for SD-WAN (Appliance)
In the following section, we will create site1-nva, which acts as the on-premises router.
Create Instances
Create the site1 router appliance named site1-nva:
gcloud compute instances create site1-nva \
--zone=us-central1-a \
--machine-type=e2-medium \
--network-interface subnet=site1-subnet \
--network-interface subnet=s1-inside-subnet,no-address \
--create-disk=auto-delete=yes,boot=yes,device-name=flex-gcp-nva-1,image=projects/ubuntu-os-cloud/global/images/ubuntu-1804-bionic-v20220901,mode=rw,size=20,type=projects/$projectname/zones/us-central1-a/diskTypes/pd-balanced \
--no-shielded-secure-boot \
--shielded-vtpm \
--shielded-integrity-monitoring \
--reservation-affinity=any \
--can-ip-forward
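Before installing the SD-WAN agent, you can optionally confirm that IP forwarding is enabled and that both interfaces received addresses:
gcloud compute instances describe site1-nva \
--zone=us-central1-a \
--format="value(canIpForward)"
gcloud compute instances describe site1-nva --zone=us-central1-a | grep "networkIP"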
5. Install flexiWAN on site1-nva
Open an SSH connection to site1-nva. If the connection times out, try again:
gcloud compute ssh site1-nva --zone=us-central1-a
Install flexiWAN on site1-nva
sudo su
sudo curl -sL https://deb.flexiwan.com/setup | sudo bash -
apt install flexiwan-router -y
Prepare the VM for flexiWAN control plane registration.
After the flexiWAN installation is complete, run the fwsystem_checker command to prepare the VM for flexiWAN operation. This command checks the system requirements and helps fix configuration errors in your system.
- Select option 2 for quick and silent configuration, then exit with option 0.
- Do not close the cloud shell window.
root@site-1-nva-1:/home/user# fwsystem_checker
<output snipped>
[0] - quit and use fixed parameters
 1  - check system configuration
 2  - configure system silently
 3  - configure system interactively
 4  - restore system checker settings to default
------------------------------------------------
Choose: 2
<output snipped>
[0] - quit and use fixed parameters
 1  - check system configuration
 2  - configure system silently
 3  - configure system interactively
 4  - restore system checker settings to default
------------------------------------------------
Choose: 0
Please wait..
Done.
=== system checker ended ====
Leave the session open for the following steps
6. Register site1-nva with SD-WAN controller
These steps complete provisioning of the flexiWAN NVA, which is administered from the flexiManage Console. Be sure the flexiWAN organization is set up before moving forward.
Authenticate the newly deployed flexiWAN NVA with flexiManage using a security token by logging into the flexiManage Account. The same token may be reused across all router appliances.
Select Inventory → Tokens, create a token & select copy
Return to the Cloud Shell (site1-nva) and paste the token into the file /etc/flexiwan/agent/token.txt as follows:
nano /etc/flexiwan/agent/token.txt
#Paste the generated token obtained from flexiManage
#Exit session with CTRL+X and Select Y to save then enter
Activate the Site Routers on the flexiManage Console
Login to the flexiManage Console to activate site1-nva on the controller
On the left panel, Select Inventory → Devices, click the "Unknown" device
Enter the hostname of the site1-nva and Approve the device by sliding the dial to the right.
Select "Interfaces" Tab
Find the "Assigned" Column and click "No" and change the setting to "Yes"
Select Firewall Tab and click the "+" sign to add an inbound firewall rule
Select the WAN interface to apply the SSH rule, as described below
Click "Update Device"
Start site1-nva from the flexiWAN controller. Return to Inventory → Devices → site1-nva and select "Start Device".
The device status transitions from "Syncing" to "Synced".
The warning indicator is viewable under Troubleshoot → Notifications. Once viewed, select all then mark as read
7. Setup Hub SDWAN Appliances
In the following section you will create and register the hub router (hub-r1) with the flexiWAN controller, as previously done with the site routers.
Open a new tab and create a Cloud Shell session, then set the $variables to simplify gcloud configuration:
gcloud config list project
gcloud config set project [YOUR-PROJECT-NAME]
projectname=[YOUR-PROJECT-NAME]
echo $projectname
Create Hub NVA Instances
Create the hub-r1 appliance:
gcloud compute instances create hub-r1 \
--zone=us-central1-a \
--machine-type=e2-medium \
--network-interface subnet=hub-subnet1 \
--network-interface subnet=workload-subnet1,no-address \
--can-ip-forward \
--create-disk=auto-delete=yes,boot=yes,device-name=flex-gcp-nva-1,image=projects/ubuntu-os-cloud/global/images/ubuntu-1804-bionic-v20220901,mode=rw,size=20,type=projects/$projectname/zones/us-central1-a/diskTypes/pd-balanced \
--no-shielded-secure-boot \
--shielded-vtpm \
--shielded-integrity-monitoring \
--reservation-affinity=any
8. Install flexiWAN on Hub Instances for hub-r1
Open an SSH connection to hub-r1:
gcloud compute ssh hub-r1 --zone=us-central1-a
Install the flexiWAN agent on hub-r1:
sudo su
sudo curl -sL https://deb.flexiwan.com/setup | sudo bash -
apt install flexiwan-router -y
Prepare the hub-r1 VM for flexiWAN registration.
After the flexiWAN installation is complete, run the fwsystem_checker command to prepare the VM for flexiWAN operation. This command checks the system requirements and helps fix configuration errors in your system.
root@hub-r1:/home/user# fwsystem_checker
- Select option 2 for quick and silent configuration, then exit with option 0.
- Do not close the cloud shell window.
9. Register hub-r1 on the flexiManage controller
Authenticate the newly deployed flexiWAN NVA with flexiManage using a security token by logging into the flexiManage Account.
- Select Inventory → Tokens and copy the token
Return to the Cloud Shell (hub-r1) and paste the token into the file /etc/flexiwan/agent/token.txt as follows:
nano /etc/flexiwan/agent/token.txt
#Paste the generated token obtained from flexiManage
#Exit session with CTRL+X and Select Y to save then enter
Activate Hub routers hub-r1 on the flexiManage Console
Login to the flexiManage Console
- Navigate to Inventory → Devices
- Find the newly registered device; its hostname appears as "Unknown"
Select the "Unknown" device
- Enter the hostname hub-r1
- Approve the device by sliding the dial to the right.
Select the Interfaces Tab
- Find the "Assigned" Column
- Next to the interface row, click on "No" to change the setting to "Yes"
Select the Firewall Tab
- Click "+" to Add Inbound firewall rule
- Select the WAN interface to inherit the rule
- Allow SSH port 22 with TCP protocol
- Click "Update Device"
Start the hub-r1 appliance for SD-WAN from flexiWAN's controller
- Return to Inventory → Devices → hub-r1 and select "Start Device"
- Wait for the sync to complete and note the "running" status
10. Network Connectivity Center on GCP Hub
Enable API Services
Enable the Network Connectivity API if it is not yet enabled:
gcloud services enable networkconnectivity.googleapis.com
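You can optionally confirm the API is enabled:
gcloud services list --enabled | grep networkconnectivity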
Create the NCC Hub
gcloud network-connectivity hubs create ncc-hub
Create request issued for: [ncc-hub]
Waiting for operation [projects/user-3p-dev/locations/global/operations/operation-1668793629598-5edc24b7ee3ce-dd4c765b-5ca79556] to complete...done.
Created hub [ncc-hub]
Configure the router appliance as an NCC spoke
Find the URI and IP address of hub-r1 and note the output. You'll need this information in the next step.
Be sure to note the IP address of the hub-r1 instance.
gcloud compute instances describe hub-r1 \
--zone=us-central1-a \
--format="value(selfLink.scope(projects))"
gcloud compute instances describe hub-r1 --zone=us-central1-a | grep "networkIP"
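As an optional sketch, you can capture both values in shell variables instead of copying them by hand. The second command assumes the workload-facing address is on the instance's second interface, as configured earlier:
hub_r1_uri=$(gcloud compute instances describe hub-r1 \
--zone=us-central1-a \
--format="value(selfLink)")
hub_r1_ip=$(gcloud compute instances describe hub-r1 \
--zone=us-central1-a \
--format="value(networkInterfaces[1].networkIP)")
echo $hub_r1_uri $hub_r1_ip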
Add hub-r1's vNIC networkIP as a spoke. Site-to-site data transfer is disabled by default; the --site-to-site-data-transfer flag in the following command enables it.
gcloud network-connectivity spokes linked-router-appliances create s2c-wrk-cr1 \
--hub=ncc-hub \
--router-appliance=instance="https://www.googleapis.com/compute/v1/projects/$projectname/zones/us-central1-a/instances/hub-r1",ip=192.168.235.4 \
--region=us-central1 \
--site-to-site-data-transfer
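To confirm the spoke was created and is associated with the hub, describe it:
gcloud network-connectivity spokes describe s2c-wrk-cr1 --region=us-central1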
Configure Cloud Router to establish BGP with Hub-R1
In the following steps, create the Cloud Router and announce the workload VPC subnet 192.168.235.0/24.
Create the Cloud Router in us-central1 that will establish BGP with hub-r1:
gcloud compute routers create wrk-cr1 \
--region=us-central1 \
--network=workload-vpc \
--asn=65002 \
--set-advertisement-groups=all_subnets
Configuring the router appliance as an NCC spoke enables the Cloud Router to negotiate BGP on virtual interfaces.
Create two interfaces on the Cloud Router that will exchange BGP messages with hub-r1.
The IP addresses are selected from the workload subnet and can be changed if required.
gcloud compute routers add-interface wrk-cr1 \
--region=us-central1 \
--subnetwork=workload-subnet1 \
--interface-name=int0 \
--ip-address=192.168.235.101
gcloud compute routers add-interface wrk-cr1 \
--region=us-central1 \
--subnetwork=workload-subnet1 \
--interface-name=int1 \
--ip-address=192.168.235.102 \
--redundant-interface=int0
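Optionally, describe the Cloud Router to confirm both interfaces exist before adding the BGP peers:
gcloud compute routers describe wrk-cr1 --region=us-central1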
Configure the Cloud Router interfaces to establish BGP with hub-r1's vNIC-1, updating the peer-ip-address with the IP address of hub-r1's networkIP. Note that the same IP address is used for int0 and int1.
gcloud compute routers add-bgp-peer wrk-cr1 \
--peer-name=hub-cr1-bgp-peer-0 \
--interface=int0 \
--peer-ip-address=192.168.235.4 \
--peer-asn=64111 \
--instance=hub-r1 \
--instance-zone=us-central1-a \
--region=us-central1
gcloud compute routers add-bgp-peer wrk-cr1 \
--peer-name=hub-cr1-bgp-peer-1 \
--interface=int1 \
--peer-ip-address=192.168.235.4 \
--peer-asn=64111 \
--instance=hub-r1 \
--instance-zone=us-central1-a \
--region=us-central1
Verify the BGP state. At this point in the codelab, BGP is in the "Connect" state because the router appliance has not yet been configured for BGP.
gcloud compute routers get-status wrk-cr1 --region=us-central1
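If you only want the peer state fields, one rough way to trim the output is to grep for them; the exact layout of the status output may vary by gcloud version:
gcloud compute routers get-status wrk-cr1 --region=us-central1 | grep -E "name|state"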
11. Configure Hub router appliances for BGP
Configure hub-r1 for BGP
Be sure to login to the flexiManage Console
Navigate to Inventory → Devices → hub-r1 and select the device with the HostName:hub-r1
- Click on the "Routing" tab
- Click on the "BGP Configuration"
- Disable "Redistribute OSPF Routes"
- Configure hub-r1 for BGP with these parameters and Click "Save"
Select "Interfaces" tab, locate the LAN interface, find the column "Routing"
- Click "none" to open up menu to select BGP as the routing protocol
- At the top of the page, click "update device"
12. BGP Route Exchange between Router Appliances
Establish local ASN for remote sites
Configure a local BGP ASN for site1-nva. Once configured, we will establish an IPsec tunnel between the remote sites and the hub routers.
Select the device with the HostName:site1-nva
- Click on the "Routing" tab
- Click on the "BGP Configuration"
- Disable "Redistribute OSPF Routes"
- Local ASN 7269 → Save
- Update Device
- Interfaces Tab → Routing → BGP
- Update Device
Configure VPN tunnels Between Site1 and Hub1 Appliances
Be sure to login to the flexiManage Console
- Navigate to Inventory → Devices
- Select the box next to the hostnames of site1-nva and hub-r1 to build a VPN tunnel between this pair of NVAs
- Click Actions → Create Tunnels and configure the following
- Select Create Tunnels
Verify that "site1-nva" learned routes to the subnet 192.168.235.0/24 and 192.168.236.0/24
- Select Inventory → Devices → site1-nva and click the "Routing" tab
In the example output below, flexiWAN automatically created the tunnel using the host IP address 10.100.0.6
13. Verify Data Path Connectivity
Verify site to cloud connectivity from on prem
Referring to the diagram, verify the data path between s1-vm and workload1-vm.
Configure VPC Static routes for Site to Cloud
The on-premises site1-vpc simulates an on-premises data center network.
The site1-nva router appliance uses VPN connectivity to reach the hub network.
For the site to cloud use case, create static routes to the 192.168.0.0/16 destination that use the router appliance as the next hop to reach networks in the GCP cloud network.
On s1-inside-vpc, create a static route for cloud destination (192.168.0.0/16):
gcloud compute routes create site1-subnet-route \
--network=s1-inside-vpc \
--destination-range=192.168.0.0/16 \
--next-hop-instance=site1-nva \
--next-hop-instance-zone=us-central1-a
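You can confirm the static route exists and points at the NVA:
gcloud compute routes describe site1-subnet-route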
In Cloud Shell, look up the IP address of workload1-vm. You'll need this to test connectivity from s1-vm.
gcloud compute instances describe workload1-vm --zone=us-central1-a | grep "networkIP"
SSH to "s1-vm" and use the "curl" command to establish a TCP session to workload1-VM ip address.
s1-vm:~$ curl 192.168.235.3 -vv
*   Trying 192.168.235.3:80...
* Connected to 192.168.235.3 (192.168.235.3) port 80 (#0)
> GET / HTTP/1.1
> Host: 192.168.235.3
> User-Agent: curl/7.74.0
> Accept: */*
>
* Mark bundle as not supporting multiuse
< HTTP/1.1 200 OK
< Date: Wed, 07 Dec 2022 15:12:08 GMT
< Server: Apache/2.4.54 (Debian)
< Last-Modified: Tue, 06 Dec 2022 00:57:46 GMT
< ETag: "1f-5ef1e4acfa1d9"
< Accept-Ranges: bytes
< Content-Length: 31
< Content-Type: text/html
<
Page served from: workload1-vm
* Connection #0 to host 192.168.235.3 left intact
14. Clean Up
Delete the On Prem resources
Log in to Cloud Shell and delete the instances, firewall rules, subnets, and VPCs of the branch site networks
#delete on prem instances
gcloud compute instances delete s1-vm --zone=us-central1-a --quiet
gcloud compute instances delete site1-nva --zone=us-central1-a --quiet
#delete on prem firewall rules
gcloud compute firewall-rules delete site1-ssh --quiet
gcloud compute firewall-rules delete site1-internal --quiet
gcloud compute firewall-rules delete site1-cloud --quiet
gcloud compute firewall-rules delete site1-vpn --quiet
gcloud compute firewall-rules delete site1-iap --quiet
#delete on prem routes and subnets
gcloud compute routes delete site1-subnet-route --quiet
gcloud compute networks subnets delete site1-subnet --region us-central1 --quiet
gcloud compute networks subnets delete s1-inside-subnet --region us-central1 --quiet
#delete on prem vpcs
gcloud compute networks delete site1-vpc --quiet
gcloud compute networks delete s1-inside-vpc --quiet
Delete the Cloud Hub resources
Log in to Cloud Shell and delete the NCC resources, instances, firewall rules, subnets, and VPCs of the hub and workload networks
#delete ncc spokes
gcloud network-connectivity spokes delete s2c-wrk-cr1 --region us-central1 --quiet
#delete ncc hub
gcloud network-connectivity hubs delete ncc-hub --quiet
#delete hub and workload instances
gcloud compute instances delete hub-r1 --zone=us-central1-a --quiet
gcloud compute instances delete workload1-vm --zone=us-central1-a --quiet
#delete hub firewall rules
gcloud compute firewall-rules delete hub-ssh --quiet
gcloud compute firewall-rules delete hub-vpn --quiet
gcloud compute firewall-rules delete hub-internal --quiet
gcloud compute firewall-rules delete hub-iap --quiet
#delete workload firewall rules
gcloud compute firewall-rules delete workload-ssh --quiet
gcloud compute firewall-rules delete workload-internal --quiet
gcloud compute firewall-rules delete workload-onprem --quiet
gcloud compute firewall-rules delete workload-iap --quiet
#delete cloud routers
gcloud compute routers delete wrk-cr1 --region us-central1 --quiet
gcloud compute routers delete cloud-router-usc-central-1-nat --region us-central1 --quiet
#delete hub and workload subnets
gcloud compute networks subnets delete workload-subnet1 --region us-central1 --quiet
gcloud compute networks subnets delete hub-subnet1 --region us-central1 --quiet
gcloud compute networks subnets delete hub-subnet2 --region us-east4 --quiet
#delete hub and workload vpcs
gcloud compute networks delete workload-vpc --quiet
gcloud compute networks delete hub-vpc --quiet
15. Congratulations!
You have completed the Network Connectivity Center Lab!
What you covered
- Configured Software Defined WAN integration for NCC site to cloud
Next Steps
- Network Connectivity Center Overview
- Network Connectivity Center Documentation
- flexiWAN resources
- flexiWAN GitLab Repo
©Google, LLC or its affiliates. All rights reserved. Do not distribute.