CodeLab: Dynamic Route Exchange with NCC

About this codelab

35 minutes
Last updated November 18, 2024
Written by Eric Yu, Oswaldo Costa

In this lab, you'll explore how Network Connectivity Center (NCC) can establish on-premises connectivity at scale through its support for VPC spokes and dynamic route exchange. Defining a VPC as a VPC spoke lets you connect it to multiple other VPC networks through the NCC hub. To establish connectivity with an on-premises network, you can attach Router appliance virtual NICs, HA VPN tunnels, or Interconnect VLAN attachments to the same NCC hub that holds the VPC spokes.

The hub resource provides a centralized connectivity management model to interconnect spokes.

What you'll build

In this codelab, you'll build a logical hub-and-spoke topology with the NCC hub that implements hybrid connectivity between an on-premises network and a workload VPC.

c06021c6aaa47682.png

What you'll learn

  • Distinguish between a Workload VPC and Routing VPC
  • NCC Integration of VPC spoke and Hybrid Spoke

What you'll need

  • Knowledge of GCP VPC network
  • Knowledge of Cloud Router and BGP routing
  • Google Cloud Project
  • Check your Networks quota and request additional networks if required (screenshot below):

6bc606cb34bce7e8.png

Objectives

  • Set up the GCP Environment
  • Configure Network Connectivity Center with VPC as spoke
  • Configure Network Connectivity Center with HA-VPN tunnels as a hybrid spoke
  • Validate Data Path
  • Explore NCC serviceability features
  • Clean up used resources

Before you begin

Google Cloud Console and Cloud Shell

To interact with GCP, we will use both the Google Cloud Console and Cloud Shell throughout this lab.

NCC Hub Project Google Cloud Console

The Cloud Console can be reached at https://console.cloud.google.com.

Set up the following items in Google Cloud to make it easier to configure Network Connectivity Center:

In the Google Cloud Console, on the project selector page, select or create a Google Cloud project.

Launch Cloud Shell. This codelab uses shell variables ($variables) to simplify the gcloud configuration in Cloud Shell.

gcloud auth list
gcloud config list project
gcloud config set project [YOUR-PROJECT-NAME]
projectname=[YOUR-PROJECT-NAME]
echo $projectname
region="us-central1"
zone="us-central1-a"
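Because the rest of the lab substitutes these variables into gcloud commands, a missing value produces confusing errors. The helper below is an illustrative sketch (not part of the lab's required commands) that fails fast if any variable is unset or empty:

```shell
# Helper (illustrative): abort early if any variable used by later
# gcloud commands is unset or empty.
require_vars() {
  missing=0
  for name in "$@"; do
    eval "value=\${$name}"
    if [ -z "${value}" ]; then
      echo "ERROR: \$${name} is not set" >&2
      missing=1
    fi
  done
  return "${missing}"
}

# Example: require_vars projectname region zone
```

Run it before each major section; a non-zero exit status means at least one variable still needs to be set.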

IAM Roles

NCC requires IAM roles to access specific APIs. Be sure to configure your user with the NCC IAM roles as required.

networkconnectivity.networkAdmin - Allows network administrators to manage hubs and spokes.
Permissions: networkconnectivity.hubs.*, networkconnectivity.spokes.*

networkconnectivity.networkSpokeManager - Allows adding and managing spokes in a hub. Intended for Shared VPC setups where the host project owns the hub, but admins in other projects can add spokes for their attachments to the hub.
Permissions: networkconnectivity.spokes.**

networkconnectivity.networkUser / networkconnectivity.networkViewer - Allows network users to view different attributes of hubs and spokes.
Permissions: networkconnectivity.hubs.get, networkconnectivity.hubs.list, networkconnectivity.spokes.get, networkconnectivity.spokes.list, networkconnectivity.spokes.aggregatedList
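As a sketch, a role can be granted with gcloud projects add-iam-policy-binding. The project ID, user email, and role name below are placeholders for illustration; verify the exact NCC role names available in your organization before granting:

```shell
# Hypothetical example -- replace project, member, and role with your own.
project_id="my-ncc-project"              # placeholder
member="user:net-admin@example.com"      # placeholder

gcloud projects add-iam-policy-binding "${project_id}" \
  --member="${member}" \
  --role="roles/networkconnectivity.hubAdmin"
```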

2. Set up the Network Environment

Overview

In this section, we'll deploy three VPC networks and firewall rules in a single project. The logical diagram illustrates the network environment that will be set up in this step. For the purposes of this codelab, a VPC is used to simulate an on-premises network.

6c8baa1bf0676379.png

Key Concept 1

Google Cloud's global VPC provides data-path connectivity between 44+ GCP regions. Cloud Router, a regional service, dynamically advertises subnets and propagates learned routes either in the region where the router is configured or throughout the entire VPC network. Whether a Cloud Router propagates routes regionally or globally depends on the VPC's dynamic routing mode: regional or global.

In this section, we will start by configuring each VPC with regional routing mode. For the rest of this codelab:

  • "Routing VPC" refers to a VPC that is NOT configured as an NCC VPC spoke.
  • "Workload VPC" refers to a VPC that is configured as an NCC VPC spoke.
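If you later want a Cloud Router's learned routes to propagate across every region of a VPC, the network's dynamic routing mode can be switched to global. A sketch, using the routing VPC name from this lab (verify the flag in your gcloud version):

```shell
# Switch a VPC's dynamic routing mode from regional (the default used in
# this lab) to global, so learned routes propagate across all regions.
gcloud compute networks update routing-vpc \
  --bgp-routing-mode=global
```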

Create the workload VPC and a Subnet

The workload VPC contains the subnet where you'll install a GCE VM for data-path validation.

vpc_spoke_network_name="workload-vpc"
vpc_spoke_subnet_name="workload-subnet"
vpc_spoke_subnet_ip_range="10.0.1.0/24"
vpc_spoke_name="workload-vpc-spoke"
region="us-central1"
zone="us-central1-a"

gcloud compute networks create "${vpc_spoke_network_name}" \
--subnet-mode=custom 

gcloud compute networks subnets create "${vpc_spoke_subnet_name}" \
--network="${vpc_spoke_network_name}" \
--range="${vpc_spoke_subnet_ip_range}" \
--region="${region}"

Create the routing VPC and a subnet

NCC supports all valid IPv4 subnet ranges except privately used public IP addresses.

routing_vpc_network_name="routing-vpc"
routing_vpc_subnet_name="routing-vpc-subnet"
routing_vpc_subnet_range="10.0.2.0/24"

gcloud compute networks create "${routing_vpc_network_name}" \
--subnet-mode=custom

gcloud compute networks subnets create "${routing_vpc_subnet_name}" \
--region="${region}" \
--network="${routing_vpc_network_name}" \
--range="${routing_vpc_subnet_range}"
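Per the note above, privately used public IP addresses are not supported, so RFC 1918 ranges like the 10.0.x.0/24 subnets in this lab are always safe choices. As an illustration only (not part of the lab's required commands), a rough shell check that an address falls inside the RFC 1918 private ranges (CIDR mask handling omitted for brevity):

```shell
# Rough check: is a dotted-quad address inside the RFC 1918 private ranges
# (10.0.0.0/8, 172.16.0.0/12, 192.168.0.0/16)?
is_rfc1918() {
  ip="$1"
  o1="${ip%%.*}"
  o2="$(echo "${ip}" | cut -d. -f2)"
  if [ "${o1}" = "10" ]; then return 0; fi
  if [ "${o1}" = "192" ] && [ "${o2}" = "168" ]; then return 0; fi
  if [ "${o1}" = "172" ] && [ "${o2}" -ge 16 ] && [ "${o2}" -le 31 ]; then return 0; fi
  return 1
}
```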

Create the On-prem VPC and a subnet

NCC supports all valid IPv4 subnet ranges except privately used public IP addresses.

on_prem_network_name="on-prem-net-vpc"
on_prem_subnet_name="on-prem-subnet"
on_prem_subnet_range="10.0.3.0/24"

gcloud compute networks create "${on_prem_network_name}" \
--subnet-mode=custom

gcloud compute networks subnets create "${on_prem_subnet_name}" \
--region="${region}" \
--network="${on_prem_network_name}" \
--range="${on_prem_subnet_range}"

Configure Workload VPC Firewall Rules

workload_vpc_firewall_name="workload-protocol-fw-vpc"
workload_port_firewall_name="workload-port-firewall-vpc"

gcloud compute firewall-rules create "${workload_vpc_firewall_name}" \
--network=${vpc_spoke_network_name} \
--allow="tcp,udp,icmp"

gcloud compute firewall-rules create "${workload_port_firewall_name}" \
--network=${vpc_spoke_network_name} \
--allow="tcp:22,tcp:3389,tcp:11180,icmp"

Configure Routing VPC Firewall Rules

routing_vpc_fw_name="routing-vpc-protocol-fw"
routing_vpc_port_fw_name="routing-vpc--port-fw"

gcloud compute firewall-rules create "${routing_vpc_fw_name}" \
--network=${routing_vpc_network_name} \
--allow="tcp,udp,icmp"

gcloud compute firewall-rules create "${routing_vpc_port_fw_name}" \
--network=${routing_vpc_network_name} \
--allow="tcp:22,tcp:3389,tcp:11180,icmp"

Configure On-Prem VPC Firewall Rules

prem_protocol_fw_name="onprem-vpc-protocol-firewall"
prem_port_firewall_name="onprem-vpc-port-firewall-prem"

gcloud compute firewall-rules create "${prem_protocol_fw_name}" \
--network=${on_prem_network_name} \
--allow="tcp,udp,icmp"

gcloud compute firewall-rules create "${prem_port_firewall_name}" \
--network=${on_prem_network_name} \
--allow="tcp:22,tcp:3389,tcp:11180,icmp"

Configure a GCE VM in Each VPC

You'll need temporary internet access to install packages on "vm1-vpc-workload."

Create three virtual machines; each VM will be assigned to one of the VPCs created earlier.

gcloud compute instances create vm1-vpc-workload \
--zone us-central1-a \
--subnet="${vpc_spoke_subnet_name}" \
--metadata=startup-script='#!/bin/bash
  apt-get update
  apt-get install apache2 -y
  apt-get install tcpdump -y
  service apache2 restart
  echo "
<h3>Web Server: www-vm1</h3>" | tee /var/www/html/index.html'


gcloud compute instances create vm2-vpc-routing \
--zone us-central1-a \
--subnet="${routing_vpc_subnet_name}" \
--no-address 

gcloud compute instances create vm3-onprem \
--zone us-central1-a \
--subnet="${on_prem_subnet_name}" \
--no-address 

3. Set up Hybrid Connectivity

In this section, we'll configure an HA VPN tunnel to connect the on-prem and routing VPC networks.

ad64a1dee6dc74c9.png

Configure a Cloud Router with BGP in the routing VPC

routing_vpc_router_name="routing-vpc-cr"
routing_vpc_router_asn=64525

gcloud compute routers create "${routing_vpc_router_name}" \
--region="${region}" \
--network="${routing_vpc_network_name}" \
--asn="${routing_vpc_router_asn}"

Configure a Cloud Router with BGP in the On-Prem VPC

on_prem_router_name="on-prem-router"
on_prem_router_asn=64526

gcloud compute routers create "${on_prem_router_name}" \
--region="${region}" \
--network="${on_prem_network_name}" \
--asn="${on_prem_router_asn}"

Configure a VPN Gateway in the routing VPC

routing_vpn_gateway_name="routing-vpc-vpn-gateway"

gcloud compute vpn-gateways create "${routing_vpn_gateway_name}" \
--region="${region}" \
--network="${routing_vpc_network_name}"

Configure a VPN Gateway in the On-Prem VPC

on_prem_gateway_name="on-prem-vpn-gateway"

gcloud compute vpn-gateways create "${on_prem_gateway_name}" \
--region="${region}" \
--network="${on_prem_network_name}"

Configure a VPN tunnel in the routing VPC and on-prem VPC

secret_key=$(openssl rand -base64 24)
routing_vpc_tunnel_name="routing-vpc-tunnel"
on_prem_tunnel_name="on-prem-tunnel"

gcloud compute vpn-tunnels create "${routing_vpc_tunnel_name}" \
--vpn-gateway="${routing_vpn_gateway_name}" \
--peer-gcp-gateway="${on_prem_gateway_name}" \
--router="${routing_vpc_router_name}" \
--region="${region}" \
--interface=0 \
--shared-secret="${secret_key}"

gcloud compute vpn-tunnels create "${on_prem_tunnel_name}" \
--vpn-gateway="${on_prem_gateway_name}" \
--peer-gcp-gateway="${routing_vpn_gateway_name}" \
--router="${on_prem_router_name}" \
--region="${region}" \
--interface=0 \
--shared-secret="${secret_key}"

Create BGP sessions to peer the routing VPC and on-prem Cloud Routers

interface_hub_name="if-hub-to-prem"
hub_router_ip="169.254.1.1"

gcloud compute routers add-interface "${routing_vpc_router_name}" \
--interface-name="${interface_hub_name}" \
--ip-address="${hub_router_ip}" \
--mask-length=30 \
--vpn-tunnel="${routing_vpc_tunnel_name}" \
--region="${region}"

bgp_hub_name="bgp-hub-to-prem"
prem_router_ip="169.254.1.2"
gcloud compute routers add-bgp-peer "${routing_vpc_router_name}" \
--peer-name="${bgp_hub_name}" \
--peer-ip-address="${prem_router_ip}" \
--interface="${interface_hub_name}" \
--peer-asn="${on_prem_router_asn}" \
--region="${region}"

interface_prem_name="if-prem-to-hub"
gcloud compute routers add-interface "${on_prem_router_name}" \
--interface-name="${interface_prem_name}" \
--ip-address="${prem_router_ip}" \
--mask-length=30 \
--vpn-tunnel="${on_prem_tunnel_name}" \
--region="${region}"

bgp_prem_name="bgp-prem-to-hub"
gcloud compute routers add-bgp-peer "${on_prem_router_name}" \
--peer-name="${bgp_prem_name}" \
--peer-ip-address="${hub_router_ip}" \
--interface="${interface_prem_name}" \
--peer-asn="${routing_vpc_router_asn}" \
--region="${region}"
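The two BGP interfaces above (169.254.1.1 and 169.254.1.2 with --mask-length=30) must land in the same /30 link-local subnet, which has exactly two usable host addresses. A small sanity sketch (illustrative only, not part of the lab's required commands):

```shell
# Convert a dotted-quad IPv4 address to an integer.
ip_to_int() {
  local IFS=.
  set -- $1
  echo $(( ($1 << 24) + ($2 << 16) + ($3 << 8) + $4 ))
}

# Check that two addresses share the same /30 network
# (a /30 leaves only the last 2 bits as host bits).
same_slash30() {
  mask=$(( 0xFFFFFFFF - 3 ))
  a=$(ip_to_int "$1")
  b=$(ip_to_int "$2")
  [ $(( a & mask )) -eq $(( b & mask )) ]
}

# Example: same_slash30 169.254.1.1 169.254.1.2  # succeeds
```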

By default, NCC Hub subnets are not announced to hybrid spokes. Next, configure the Cloud Routers to announce the NCC subnet routes to the on-premises network.

gcloud compute routers update "${routing_vpc_router_name}" \
--advertisement-mode custom \
--set-advertisement-groups=all_subnets \
--set-advertisement-ranges="${vpc_spoke_subnet_ip_range}" \
--region="${region}"
gcloud compute routers update "${on_prem_router_name}" \
--advertisement-mode custom \
--set-advertisement-groups=all_subnets \
--region="${region}"

Update the on-prem Cloud Router's BGP peering configuration to announce prefixes with a MED value of "111." In a later section, we'll observe how NCC handles BGP MED values.

on_prem_router_name="on-prem-router"
bgp_prem_name="bgp-prem-to-hub"

gcloud compute routers update-bgp-peer "${on_prem_router_name}" \
--peer-name="${bgp_prem_name}" \
--advertised-route-priority="111" \
--region="${region}"
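For intuition: advertised-route-priority maps to BGP MED, and between two otherwise-equal paths the LOWER value wins. A conceptual one-liner (not a gcloud command) capturing that tie-break:

```shell
# Conceptual sketch: given two MED / advertised-route-priority values for
# otherwise-equal BGP paths, print the one BGP would prefer (the lower).
preferred_med() {
  if [ "$1" -le "$2" ]; then echo "$1"; else echo "$2"; fi
}
```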

Check the status of the routing VPC tunnel

gcloud compute vpn-tunnels describe routing-vpc-tunnel \
--region=us-central1 \
--format='flattened(status,detailedStatus)'
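If you want to script this check, the status field can be parsed out of the describe output. The sketch below runs against a captured sample string rather than calling gcloud; "ESTABLISHED" indicates the tunnel is up:

```shell
# Illustrative only: parse the status field from captured describe output.
sample_output='status: ESTABLISHED
detailedStatus: Tunnel is up and running.'

tunnel_status="$(printf '%s\n' "${sample_output}" | awk '/^status:/ {print $2}')"
echo "${tunnel_status}"
```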

Check the status of the routing VPC Cloud Router

Use the gcloud command to list the routing VPC Cloud Router's BGP learned routes.

gcloud compute routers get-status routing-vpc-cr \
--region=us-central1

4. Network Connectivity Center Hub

Overview

In this section, we'll configure an NCC hub using gcloud commands. The NCC hub will serve as the control plane responsible for building the routing configuration between spokes.

715e7803d5c09569.png

Enable API Services

Enable the Network Connectivity API if it is not yet enabled:

gcloud services enable networkconnectivity.googleapis.com

Create NCC Hub

Create an NCC hub using the gcloud command:

hub_name="mesh-hub"
gcloud network-connectivity hubs create "${hub_name}"

Example output

Create request issued for: [mesh-hub]
Waiting for operation [projects/ncc/locations/global/operations/operation-1719930559145-61c448a0426e4-2d18c8dd-7107edbe] to complete...done.               
Created hub [mesh-hub].

Describe the newly created NCC Hub. Note the name and associated path.

gcloud network-connectivity hubs describe mesh-hub
createTime: '2024-07-02T14:29:19.260054897Z'
exportPsc: false
name: projects/ncc/locations/global/hubs/mesh-hub
policyMode: PRESET
presetTopology: MESH
routeTables:
- projects/ncc/locations/global/hubs/mesh-hub/routeTables/default
state: ACTIVE
uniqueId: 08f9ae88-f76f-432b-92b2-357a85fc83aa
updateTime: '2024-07-02T14:29:32.583206925Z'

NCC Hub introduces a routing table that defines the control plane for creating data connectivity. Find the name of the NCC Hub's routing table:

gcloud network-connectivity hubs route-tables list --hub=mesh-hub
NAME     HUB       DESCRIPTION
default  mesh-hub

Find the URI of the NCC default route table.

gcloud network-connectivity hubs route-tables describe default --hub=mesh-hub
createTime: '2024-07-02T14:29:22.340190411Z'
name: projects/ncc/locations/global/hubs/mesh-hub/routeTables/default
state: ACTIVE
uid: fa2af78b-d416-41aa-b442-b8ebdf84f799

List the contents of the NCC Hub's default routing table. Note: the route table will be empty until NCC hybrid spokes or VPC spokes are defined.

gcloud network-connectivity hubs route-tables routes list --hub=mesh-hub --route_table=default

The NCC Hub's route table should be empty.

5. NCC with Hybrid and VPC Spokes

Overview

In this section, you'll configure two NCC spokes using gcloud commands. One spoke will be a VPC spoke and the second will be a hybrid (VPN) spoke.

647c835a25a9ceb4.png

Configure the Workload VPC as an NCC Spoke

Configure the workload VPC as an NCC spoke and assign it to the NCC hub created earlier. NCC spoke API calls require a location to be specified. The "--global" flag allows the user to avoid specifying a full URI path when configuring a new NCC spoke.

vpc_spoke_name="workload-vpc-spoke"
vpc_spoke_network_name="workload-vpc"

gcloud network-connectivity spokes linked-vpc-network create "${vpc_spoke_name}" \
--hub="${hub_name}" \
--vpc-network="${vpc_spoke_network_name}" \
--global
Create request issued for: [workload-vpc-spoke]
Waiting for operation [projects/ncc/locations/global/operations/operation-1719931097138-61c44aa15463f-90de22c7-40c10e6b] to complete...done.               
Created spoke [workload-vpc-spoke].
createTime: '2024-07-02T14:38:17.315200822Z'
group: projects/ncc/locations/global/hubs/mesh-hub/groups/default
hub: projects/ncc/locations/global/hubs/mesh-hub
linkedVpcNetwork:
  uri: https://www.googleapis.com/compute/v1/projects/ncc/global/networks/workload-vpc
name: projects/ncc/locations/global/spokes/workload-vpc-spoke
spokeType: VPC_NETWORK
state: ACTIVE
uniqueId: 33e50612-9b62-4ec7-be6c-962077fd47dc
updateTime: '2024-07-02T14:38:44.196850231Z'

Configure the VPN tunnel in the Routing VPC as a hybrid spoke

Use this gcloud command to configure the VPN tunnel as a hybrid spoke joining mesh-hub.

vpn_spoke_name="hybrid-spoke"
routing_vpc_tunnel_name="routing-vpc-tunnel"
region="us-central1"
hub_name="mesh-hub"

gcloud network-connectivity spokes linked-vpn-tunnels create "${vpn_spoke_name}" \
--region="${region}" \
--hub="${hub_name}" \
--vpn-tunnels="${routing_vpc_tunnel_name}"

Sample Output

Create request issued for: [hybrid-spoke]
Waiting for operation [projects/ncc/locations/us-central1/operations/operation-1719932916561-61c45168774be-0a06ae03-88192175] to complete...done.          
Created spoke [hybrid-spoke].

Verify mesh-hub's spoke configuration

Use the gcloud command to list the spokes attached to mesh-hub.

gcloud network-connectivity hubs list-spokes mesh-hub 

Analyze the mesh-hub's default routing table

Use the gcloud command to list the contents of the NCC Hub's default routing table.

gcloud network-connectivity hubs route-tables routes list --hub=mesh-hub \
--route_table=default

When using dynamic route exchange with NCC hybrid spokes, Cloud Router-learned prefixes, along with their BGP MED values, are propagated across NCC spokes.

Use the gcloud command to view the priority value of "111."

gcloud network-connectivity hubs route-tables routes list \
--hub=mesh-hub \
--route_table=default \
--effective-location=us-central1 \
--filter=10.0.3.0/24

6. Verify the data path

In this step, we'll validate the data path between the NCC hybrid and VPC spokes.

f266a4a762333161.png

Use the output from this gcloud command to log on to the on-prem VM.

gcloud compute instances list --filter="name=vm3-onprem"

Log on to the VM instance residing in the on-prem network.

gcloud compute ssh vm3-onprem --zone=us-central1-a

On vm3-onprem's terminal, use the curl command to establish a web session to the VM hosted in workload-vpc.

curl 10.0.1.2 -v
*   Trying 10.0.1.2:80...
* Connected to 10.0.1.2 (10.0.1.2) port 80 (#0)
> GET / HTTP/1.1
> Host: 10.0.1.2
> User-Agent: curl/7.74.0
> Accept: */*
> 
* Mark bundle as not supporting multiuse
< HTTP/1.1 200 OK
< Date: Wed, 03 Jul 2024 15:41:34 GMT
< Server: Apache/2.4.59 (Debian)
< Last-Modified: Mon, 01 Jul 2024 20:36:16 GMT
< ETag: "1e-61c358c8272ba"
< Accept-Ranges: bytes
< Content-Length: 30
< Content-Type: text/html
< 

<h3>Web Server: www-vm1</h3>
* Connection #0 to host 10.0.1.2 left intact
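Right after spokes are created, routes can take a short while to propagate, so a single curl may fail even though the configuration is correct. A small retry helper (illustrative, not part of the lab's required commands) that polls until the endpoint answers:

```shell
# Retry helper: poll an HTTP endpoint until it answers or attempts run out.
wait_for_http() {
  url="$1"
  tries="${2:-5}"
  i=1
  while [ "${i}" -le "${tries}" ]; do
    # -m 2 caps each attempt at 2 seconds; -s/-o discard the body.
    if curl -s -m 2 -o /dev/null "${url}"; then
      return 0
    fi
    sleep 1
    i=$((i + 1))
  done
  return 1
}

# Usage on vm3-onprem: wait_for_http http://10.0.1.2/ 10
```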

7. Clean Up

Log in to Cloud Shell and delete the GCP resources.

Delete NCC spokes

gcloud network-connectivity spokes delete workload-vpc-spoke --global \
--quiet

gcloud network-connectivity spokes delete hybrid-spoke \
--quiet \
--region us-central1

Delete NCC Hub

gcloud network-connectivity hubs delete mesh-hub --quiet

Delete Firewall Rules

gcloud compute firewall-rules delete onprem-vpc-port-firewall-prem onprem-vpc-protocol-firewall routing-vpc--port-fw routing-vpc-protocol-fw workload-port-firewall-vpc workload-protocol-fw-vpc --quiet

Delete HA-VPN Tunnel

gcloud compute vpn-tunnels delete on-prem-tunnel \
--region=us-central1 \
--quiet 

gcloud compute vpn-tunnels delete routing-vpc-tunnel \
--region=us-central1 \
--quiet 

Delete VPN-Gateway

gcloud compute vpn-gateways delete on-prem-vpn-gateway \
--region=us-central1 --quiet

gcloud compute vpn-gateways delete routing-vpc-vpn-gateway \
--region us-central1 --quiet

Delete Cloud Router

gcloud compute routers delete routing-vpc-cr --region us-central1 --quiet

gcloud compute routers delete on-prem-router --region us-central1 --quiet

Delete GCE Instances

gcloud compute instances delete vm1-vpc-workload \
--zone=us-central1-a \
--quiet


gcloud compute instances delete vm2-vpc-routing \
--zone=us-central1-a \
--quiet

gcloud compute instances delete vm3-onprem \
--zone=us-central1-a \
--quiet

Delete VPC Subnets

gcloud compute networks subnets delete workload-subnet --region us-central1 --quiet

gcloud compute networks subnets delete on-prem-subnet --region us-central1 --quiet

gcloud compute networks subnets delete routing-vpc-subnet --region us-central1 --quiet

Delete VPC(s)

gcloud compute networks delete on-prem-net-vpc workload-vpc routing-vpc \
--quiet

8. Congratulations!

You have completed the Dynamic Route Exchange Network Connectivity Center Lab!

What you covered

  • Dynamic Route Exchange with Network Connectivity Center

Next Steps

©Google, LLC or its affiliates. All rights reserved. Do not distribute.