What you'll learn

What you'll need

To interact with GCP, we will use both the Google Cloud Console and Cloud Shell throughout this lab.

Google Cloud Console

The Cloud Console can be reached at https://console.cloud.google.com.

Cloudnet19 Training environment setup

In this lab, you will use your @google.com identity and a test @gcpnetworking.training identity to interact with a pre-provisioned project in the gcpnetworking.training Organization. IAM and billing have already been configured for you.

Click on the project selector dropdown at the top of the page:

Select the gcpnetworking.training Org in the project selector drop down.

You should see a project available to you in the format vpcuser##project. (If you do not see a project, please let a trainer know).

Click OPEN to navigate to your reserved project.

Google Cloud Shell

Google Cloud Shell is a Debian-based virtual machine pre-loaded with all the development tools you'll need, and it can be provisioned on demand from the Cloud Console. This means that all you need for this lab is a browser. Yes, it works on a Chromebook!

Activate Google Cloud Shell

From the GCP Console, click the Cloud Shell icon on the top right toolbar:

Then click "Start Cloud Shell":

It should only take a few moments to provision and connect to the environment:

This virtual machine is loaded with all the development tools you'll need. It offers a persistent 5GB home directory and runs on Google Cloud, greatly enhancing network performance and authentication. Much, if not all, of your work in this lab can be done with just a browser.

Once connected to Cloud Shell, you should see that you are already authenticated and that the project is already set to your PROJECT_ID.

Run the following command in Cloud Shell to confirm that you are authenticated:

gcloud auth list

Command output

Credentialed accounts:
 - <myaccount>@<mydomain>.com (active)
Run the following command to confirm that gcloud is set to your reserved project:

gcloud config list project

Command output

[core]
project = <PROJECT_ID>

If the project is not set correctly, you can set it with this command:

gcloud config set project <PROJECT_ID>

Command output

Updated property [core/project].

Looking for your PROJECT_ID? It's the ID you used in the setup steps, and you can find it at any time on the console Home dashboard.

This lab is broken into three different sections:

Shared VPC (xx minutes)

Cloud NAT (xx minutes)

VPC Peering Custom Routes (xx minutes)

[Add]

The Shared VPC portion of the lab will leverage an existing host project, with your vpcuser##project serving as the service project. The host project has the following characteristics:

Your vpcuser##project has no local VPC. The topology for the Shared VPC section of the lab is shown in the following figure.

<>

<>

From the Cloud Console GUI, switch to the project named hostproject, as shown below:

From the navigation column on the left, select:

NETWORKING > VPC network > Shared VPC

In the main console window, select the Attached projects tab, as shown below:

This tab shows the list of service projects currently attached to the host project.

To attach your vpcuser##project as a service project, click the Attach projects button.

In the Attach projects screen, type the name of your vpcuser##project in the Filter by project number or ID search field.

Check the box next to vpcuser##project in the filter results.

Note: Although not used in this lab, the Kubernetes Engine access option is required when you plan to deploy GKE clusters in a service project.

Under Shared mode, ensure that the Individual subnets (subnet-level permissions) option is selected. This option allows you to share specific subnets in the hostnet VPC with your service project.

Under Subnets to share find and select the vpcuser##-subnet1 subnet. This is the first subnet you will share with your service project.

The full configuration should resemble the following figure:

Click Save to attach your service project and share the first subnet.
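If you prefer the command line, the attachment and subnet sharing can also be done with gcloud. The following is a minimal sketch, not a lab step; the host project ID, subnet region, and member email are placeholders, and it assumes you hold Shared VPC Admin rights on the host project:

```
# Attach the service project to the Shared VPC host project
gcloud compute shared-vpc associated-projects add vpcuser##project \
    --host-project <HOST_PROJECT_ID>

# Share an individual subnet by granting compute.networkUser on it
gcloud compute networks subnets add-iam-policy-binding vpcuser##-subnet1 \
    --project <HOST_PROJECT_ID> \
    --region <SUBNET_REGION> \
    --member "user:<SERVICE_PROJECT_ADMIN_EMAIL>" \
    --role "roles/compute.networkUser"
```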

Still within Cloud Console, switch to your vpcuser##project, as shown below:

From the navigation column on the left, select VPC networks.

In the main window, select the Networks shared to my project tab, as shown below:

Notice that all of the subnets from the host project are visible. Why?

Because you are still using your @google.com account, which holds broad permissions in the host project itself, you can see every subnet in the hostnet VPC, not just the subnets shared with your service project.

Now you'll switch to an IAM account that only has administrative privileges in the vpcuser##project service project.

In the top right-hand corner of the Cloud Console GUI, click the user icon associated with your google.com account. In the menu that appears, click Add account, as shown below:

Log in using the vpcuser##@gcpnetworking.training account.

If necessary, navigate to:

NETWORKING > VPC network > Shared VPC

On the Networks shared with this project screen:

Note that the only subnet that is visible is vpcuser##-subnet1. This is the typical experience of an administrator in a service project where only select subnets from the host project are shared.
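From the service project side, you can cross-check which shared subnets your account can actually use. A minimal sketch, assuming you run it as the vpcuser##@gcpnetworking.training account:

```
# List subnets (including shared ones) usable from the service project
gcloud compute networks subnets list-usable --project vpcuser##project
```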

Now you'll add a new admin user to the service project.

From the left-hand navigation menu in the Cloud Console GUI, select:

IAM & admin > IAM

Click Add to add a new IAM account.

Add the @gmail.com user noted in the spreadsheet for your lab project.

Click Save.
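The equivalent IAM binding can also be added from Cloud Shell. A sketch only; the email is the one from your lab spreadsheet, and the role shown is an assumption, so use whichever role the spreadsheet specifies:

```
# Grant the @gmail.com user a role in the service project
# (roles/editor is an assumed example, not the lab-mandated role)
gcloud projects add-iam-policy-binding vpcuser##project \
    --member "user:<USER>@gmail.com" \
    --role "roles/editor"
```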

In the top right-hand corner of the Cloud Console GUI, click the user icon associated with the active account. In the menu that appears, click Add account.

Log in using the credentials for the @gmail.com account.

From the left-hand navigation menu, navigate to:

NETWORKING > VPC network > Shared VPC

Notice that the Networks shared with this project list doesn't appear. Why?

The @gmail.com account was granted a role only in the service project; it has not been granted the compute.networkUser role on any of the host project's shared subnets, so no shared networks are visible to it.

Switch back to the tab with your google.com account.

<steps to add new user with subnet permissions>
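As a hedged sketch of what those steps amount to in gcloud, the new user needs the compute.networkUser role on the shared subnet in the host project; the project ID and region are placeholders:

```
# Allow the @gmail.com user to use the shared subnet (run against the host project)
gcloud compute networks subnets add-iam-policy-binding vpcuser##-subnet1 \
    --project <HOST_PROJECT_ID> \
    --region <SUBNET_REGION> \
    --member "user:<USER>@gmail.com" \
    --role "roles/compute.networkUser"
```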

Finally, you'll start sharing a new subnet with the existing vpcuser##project service project.

From the Cloud Console tab with your google.com account, switch back to the hostproject project and navigate to:

NETWORKING > VPC network > Shared VPC

<steps to share a new subnet>
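In gcloud terms, sharing another subnet is the same subnet-level grant applied to the new subnet, which you can then verify. A sketch, with the second subnet name assumed to follow the lab's naming pattern:

```
# Share a second subnet with a service project user (names are assumptions)
gcloud compute networks subnets add-iam-policy-binding vpcuser##-subnet2 \
    --project <HOST_PROJECT_ID> --region <SUBNET_REGION> \
    --member "user:vpcuser##@gcpnetworking.training" \
    --role "roles/compute.networkUser"

# Confirm who can use the subnet
gcloud compute networks subnets get-iam-policy vpcuser##-subnet2 \
    --project <HOST_PROJECT_ID> --region <SUBNET_REGION>
```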

<duration>

From your Cloud Shell session, execute the following commands to clean up the Shared VPC lab environment:

Delete the VM instances:
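A sketch of the instance cleanup; the instance names and zones are placeholders, since they depend on what you created earlier in this section:

```
# List, then delete, the VM instances created for this section
gcloud compute instances list
gcloud compute instances delete <INSTANCE_NAME> --zone <ZONE> --quiet
```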

<overview>

The Cloud NAT portion of the lab will leverage a single VPC in your vpcuser##project. The base configuration pre-created for you by Terraform has the following characteristics:

The topology for the Cloud NAT section of the lab is shown in the following figure.

<>

<>

<

Steps:

Follow the instructions below (or the README file) to install and run the appropriate Terraform script.

From within Cloud Shell, clone the git repository that contains the lab Terraform modules:

```

git clone https://github.com/kaysal/training.git

```

Sample output:

```

Cloning into 'training'...

remote: Enumerating objects: 220, done.

remote: Counting objects: 100% (220/220), done.

remote: Compressing objects: 100% (153/153), done.

remote: Total 859 (delta 134), reused 138 (delta 62), pack-reused 639

Receiving objects: 100% (859/859), 113.81 KiB | 0 bytes/s, done.

Resolving deltas: 100% (505/505), done.

```

Change to the directory of the cloned repository:

```

cd training/codelab19/

```

Install Terraform (if it is not already installed):

```

./terraform-install.sh

```

Then run `source ~/.bashrc` so your shell picks up any environment changes made by the install script.

The `init.sh` script lets you select a given lab and then configures Terraform with the project ID of the project where Cloud Shell was launched. Make sure you are still in the `training/codelab19/` directory and execute the following command:

```

./init.sh

```

A list of available lab templates is displayed, for example:

```

List of Labs
-----------------------
1) CDN          5) L4_ILB       9) GKE
2) VPC_Peering  6) L7_ILB      10) Traffic_Director
3) HA_VPN       7) NAT
4) DNS          8) Security

Select a Lab template number [Press CRTL+C to exit]:

```

To complete the initial setup for this section of the lab, select the `NAT` option and press `Enter`.

Confirm your selection, then wait for Terraform to finish creating the resources.

<

From Cloud Shell, verify the three subnets were created:

```

gcloud compute networks subnets list

```

Sample output:

```

NAME              REGION       NETWORK   RANGE
vpc-demo-subnet3  us-east1     vpc-demo  10.3.1.0/24
vpc-demo-subnet1  us-central1  vpc-demo  10.1.1.0/24
vpc-demo-subnet2  us-central1  vpc-demo  10.2.1.0/24

```

Verify the VM instances that were created in each subnet:

```

gcloud compute instances list

```

Sample output:

```

NAME          ZONE           MACHINE_TYPE  PREEMPTIBLE  INTERNAL_IP  EXTERNAL_IP  STATUS
vpc-demo-vm1  us-central1-a  f1-micro                   10.1.1.2                  RUNNING
vpc-demo-vm2  us-central1-a  f1-micro                   10.2.1.2                  RUNNING
vpc-demo-vm3  us-east1-b     f1-micro                   10.3.1.2                  RUNNING

```

Connect to the instance in us-east1 using Cloud IAP:

```

gcloud beta compute ssh vpc-demo-vm3 --tunnel-through-iap

```

From the shell of the instance, attempt to install the `dnsutils` package from the Internet:

```

sudo apt-get install -y dnsutils

```

This step should fail: without an external IP address or a NAT gateway, the instance cannot reach the package repositories, and the command hangs with output similar to the following:

```

$ sudo apt-get update

0% [Connecting to prod.debian.map.fastly.net (151.101.184.204)] [Connecting to prod.debian.map.fastly.net (151.101.184.204)] [Connecting to packages.cloud.google.com (172.217.212.139)]

```

Exit the hung process using CTRL+C.

From within the Cloud Console GUI, navigate to:

NETWORKING > Network services > Cloud NAT

The following screen should appear:

Click Get started to configure your first NAT gateway.

Populate the following options:

Gateway name: <give the gateway a unique name>

VPC network: Select the vpc-demo network.

Region: Select us-east1.

Cloud Router: Select Create new router.

On the Create a router screen, populate the Name field with a unique name for the Cloud Router. All of the other fields are pre-populated for you.

Click Create.

The completed configuration resembles the following:

Click Create to create the NAT gateway in region us-east1.

Verify the Cloud NAT gateway status is `Running`, as shown in the following screen:
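You can also check the gateway from Cloud Shell. A sketch, assuming the gateway and router names you chose in the console:

```
# Describe the NAT gateway configuration held by its Cloud Router
gcloud compute routers nats describe <NAT_GATEWAY_NAME> \
    --router <ROUTER_NAME> --router-region us-east1
```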

Return to the VM instance shell in Cloud Shell and execute the same package installation command:

```

sudo apt-get install -y dnsutils

```

The installation should now succeed, with output similar to the following:

```

Selecting previously unselected package libxml2:amd64.

Preparing to unpack .../03-libxml2_2.9.4+dfsg1-2.2+deb9u2_amd64.deb ...

Unpacking libxml2:amd64 (2.9.4+dfsg1-2.2+deb9u2) ...

Selecting previously unselected package libisc160:amd64.

Preparing to unpack .../04-libisc160_1%3a9.10.3.dfsg.P4-12.3+deb9u4_amd64.deb ...

Unpacking libisc160:amd64 (1:9.10.3.dfsg.P4-12.3+deb9u4) ...

Selecting previously unselected package libdns162:amd64.

Preparing to unpack .../05-libdns162_1%3a9.10.3.dfsg.P4-12.3+deb9u4_amd64.deb ...

Unpacking libdns162:amd64 (1:9.10.3.dfsg.P4-12.3+deb9u4) ...

Selecting previously unselected package libisccc140:amd64.

Preparing to unpack .../06-libisccc140_1%3a9.10.3.dfsg.P4-12.3+deb9u4_amd64.deb ...

Unpacking libisccc140:amd64 (1:9.10.3.dfsg.P4-12.3+deb9u4) ...

Selecting previously unselected package libisccfg140:amd64.

Preparing to unpack .../07-libisccfg140_1%3a9.10.3.dfsg.P4-12.3+deb9u4_amd64.deb ...

Unpacking libisccfg140:amd64 (1:9.10.3.dfsg.P4-12.3+deb9u4) ...

Selecting previously unselected package libbind9-140:amd64.

Preparing to unpack .../08-libbind9-140_1%3a9.10.3.dfsg.P4-12.3+deb9u4_amd64.deb ...

Unpacking libbind9-140:amd64 (1:9.10.3.dfsg.P4-12.3+deb9u4) ...

Selecting previously unselected package liblwres141:amd64.

Preparing to unpack .../09-liblwres141_1%3a9.10.3.dfsg.P4-12.3+deb9u4_amd64.deb ...

Unpacking liblwres141:amd64 (1:9.10.3.dfsg.P4-12.3+deb9u4) ...

Selecting previously unselected package bind9-host.

Preparing to unpack .../10-bind9-host_1%3a9.10.3.dfsg.P4-12.3+deb9u4_amd64.deb ...

Unpacking bind9-host (1:9.10.3.dfsg.P4-12.3+deb9u4) ...

Selecting previously unselected package dnsutils.

Preparing to unpack .../11-dnsutils_1%3a9.10.3.dfsg.P4-12.3+deb9u4_amd64.deb ...

Unpacking dnsutils (1:9.10.3.dfsg.P4-12.3+deb9u4) ...

Selecting previously unselected package geoip-database.

Preparing to unpack .../12-geoip-database_20170512-1_all.deb ...

Unpacking geoip-database (20170512-1) ...

Selecting previously unselected package xml-core.

Preparing to unpack .../13-xml-core_0.17_all.deb ...

Unpacking xml-core (0.17) ...

Setting up geoip-database (20170512-1) ...

Setting up sgml-base (1.29) ...

Setting up libgeoip1:amd64 (1.6.9-4) ...

Setting up libicu57:amd64 (57.1-6+deb9u2) ...

Setting up libxml2:amd64 (2.9.4+dfsg1-2.2+deb9u2) ...

Processing triggers for libc-bin (2.24-11+deb9u4) ...

Setting up liblwres141:amd64 (1:9.10.3.dfsg.P4-12.3+deb9u4) ...

Processing triggers for man-db (2.7.6.1-2) ...

Setting up xml-core (0.17) ...

Setting up libisc160:amd64 (1:9.10.3.dfsg.P4-12.3+deb9u4) ...

Setting up libisccc140:amd64 (1:9.10.3.dfsg.P4-12.3+deb9u4) ...

Setting up libdns162:amd64 (1:9.10.3.dfsg.P4-12.3+deb9u4) ...

Setting up libisccfg140:amd64 (1:9.10.3.dfsg.P4-12.3+deb9u4) ...

Setting up libbind9-140:amd64 (1:9.10.3.dfsg.P4-12.3+deb9u4) ...

Setting up bind9-host (1:9.10.3.dfsg.P4-12.3+deb9u4) ...

Setting up dnsutils (1:9.10.3.dfsg.P4-12.3+deb9u4) ...

Processing triggers for libc-bin (2.24-11+deb9u4) ...

Processing triggers for sgml-base (1.29) ...

```

Next, verify the IP address you're using to connect to the Internet:

```

dig TXT +short o-o.myaddr.l.google.com @ns1.google.com

```

Sample output:

```

$ dig TXT +short o-o.myaddr.l.google.com @ns1.google.com

"34.74.47.198"

```

Exit from the VM instance by typing `exit`.

You can confirm this is the public IP address used by the Cloud NAT gateway using the following `gcloud` command:

```

gcloud compute routers get-nat-mapping-info NAME --region us-east1

```

where NAME is the name of the Cloud Router you created when configuring the Cloud NAT gateway. You can view the list of configured routers using the command:

```

gcloud compute routers list

```

The following sample output confirms that the public IP address used by the VM instance above is configured as part of the Cloud NAT gateway:

```

---
instanceName: vpc-demo-vm3
interfaceNatMappings:
- natIpPortRanges:
  - 35.196.114.103:1024-1055
  numTotalNatPorts: 32
  sourceAliasIpRange: ''
  sourceVirtualIp: 10.3.1.2
- natIpPortRanges:
  - 34.74.47.198:1024-1055
  numTotalNatPorts: 32
  sourceAliasIpRange: ''
  sourceVirtualIp: 10.3.1.2

```

Next, you'll configure a second Cloud NAT gateway in the us-central1 region. But this time, you'll manually specify the public IP address for the gateway to use for NAT. You'll also use `gcloud` to configure the gateway this time, instead of Cloud Console.

First, reserve a public IP address in the us-central1 region:

```

gcloud compute addresses create us-central1-nat-ip1 --region us-central1

```

Verify that the public IP address was created using the following command:

```

gcloud compute addresses list

```

Sample output:

```

NAME                                    ADDRESS/RANGE   TYPE  PURPOSE   NETWORK  REGION       SUBNET  STATUS
nat-auto-ip-4090550-3-1556908362781268  35.196.114.103        NAT_AUTO           us-east1             RESERVED
nat-auto-ip-4090550-9-1556908374417536  34.74.47.198          NAT_AUTO           us-east1             RESERVED
us-central1-nat-ip1                     35.239.113.14                            us-central1          RESERVED

```

Create a Cloud Router in region us-central1 to hold the NAT gateway configuration:

```

gcloud compute routers create us-central1-router1 \
  --network vpc-demo \
  --region us-central1

```

Now add the NAT configuration:

```

gcloud compute routers nats create us-central1-ngw \
  --router-region us-central1 \
  --router us-central1-router1 \
  --nat-all-subnet-ip-ranges \
  --nat-external-ip-pool=us-central1-nat-ip1

```

Connect to the first instance in us-central1 using Cloud IAP:

```

gcloud beta compute ssh vpc-demo-vm1 --tunnel-through-iap

```

Verify the public IP address the instance presents to the Internet:

```

dig TXT +short o-o.myaddr.l.google.com @ns1.google.com

```

Sample output:

```

$ dig TXT +short o-o.myaddr.l.google.com @ns1.google.com

"35.239.113.14"

```

Exit from the instance shell by typing `exit`.

Connect to the second instance in us-central1 using Cloud IAP:

```

gcloud beta compute ssh vpc-demo-vm2 --tunnel-through-iap

```

Verify the public IP address the instance presents to the Internet:

```

dig TXT +short o-o.myaddr.l.google.com @ns1.google.com

```

Sample output:

```

$ dig TXT +short o-o.myaddr.l.google.com @ns1.google.com

"35.239.113.14"

```

<duration>

From your Cloud Shell session, execute the following commands to clean up the Cloud NAT lab environment. The us-east1 gateway and router names are placeholders; substitute the names you chose in the console:

```
# Delete the us-central1 NAT gateway and Cloud Router created with gcloud
gcloud compute routers nats delete us-central1-ngw \
  --router us-central1-router1 --router-region us-central1
gcloud compute routers delete us-central1-router1 --region us-central1

# Delete the us-east1 NAT gateway and Cloud Router created in the console
gcloud compute routers nats delete <NAT_GATEWAY_NAME> \
  --router <ROUTER_NAME> --router-region us-east1
gcloud compute routers delete <ROUTER_NAME> --region us-east1

# Release the reserved static IP address
gcloud compute addresses delete us-central1-nat-ip1 --region us-central1

# Remove the remaining Terraform-created resources
./remove.sh
```

<overview>

<

Follow the instructions below (or the README file) to install and run the appropriate Terraform script.

From within Cloud Shell, clone the git repository that contains the lab Terraform modules:

```

git clone https://github.com/kaysal/training.git

```

Sample output:

```

Cloning into 'training'...

remote: Enumerating objects: 220, done.

remote: Counting objects: 100% (220/220), done.

remote: Compressing objects: 100% (153/153), done.

remote: Total 859 (delta 134), reused 138 (delta 62), pack-reused 639

Receiving objects: 100% (859/859), 113.81 KiB | 0 bytes/s, done.

Resolving deltas: 100% (505/505), done.

```

Change to the directory of the cloned repository:

```

cd training/codelab19/

```

Install Terraform (if it is not already installed):

```

./terraform-install.sh

```

Then run `source ~/.bashrc` so your shell picks up any environment changes made by the install script.

The `init.sh` script lets you select a given lab and then configures Terraform with the project ID of the project where Cloud Shell was launched. Make sure you are still in the `training/codelab19/` directory and execute the following command:

```

./init.sh

```

A list of available lab templates is displayed, for example:

```

List of Labs
-----------------------
1) CDN          5) L4_ILB       9) GKE
2) VPC_Peering  6) L7_ILB      10) Traffic_Director
3) HA_VPN       7) NAT
4) DNS          8) Security

Select a Lab template number [Press CRTL+C to exit]:

```

To complete the initial setup for this section of the lab, select the `VPC_Peering` option and press `Enter`.

Wait for Terraform to finish creating the resources.

From Cloud Shell, verify the three VPC networks were created:

```

gcloud compute networks list

```

Sample output:

```

NAME        SUBNET_MODE  BGP_ROUTING_MODE  IPV4_RANGE  GATEWAY_IPV4
vpc-demo    CUSTOM       REGIONAL
vpc-demo-2  CUSTOM       GLOBAL
vpc-onprem  CUSTOM       GLOBAL

```

From within the Cloud Console GUI, navigate to:

NETWORKING > VPC network > Routes

The following screen should appear:

Type the following into the Filter resources search bar and press `Enter`:

```

network: vpc-demo-2

```

The displayed routes are now limited to the vpc-demo-2 VPC, as shown below:
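The same filtered view is available from Cloud Shell, for example:

```
# List only the routes belonging to the vpc-demo-2 network
gcloud compute routes list --filter="network:vpc-demo-2"
```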

From the navigation column on the left, select VPC network peering.

On the screen that appears, click Create connection.

On the Create peering connection screen that appears, click Continue.

Populate the following options:

Name: A unique name for this direction of the peering (e.g. demo2-to-demo).

Your VPC network: Select vpc-demo-2 (to match the example name).

Peered VPC network: Select the In project vpcuser##project radio button.

VPC network name: Select vpc-demo, the opposite VPC.

Click Create.

While this first peering connection is created, you'll create the corresponding connection from the other direction.

On the top of the screen, click CREATE PEERING CONNECTION.

On the Create peering connection screen that appears, click Continue.

Populate the following options:

Name: A unique name for the reverse direction (e.g. demo-to-demo2).

Your VPC network: Select vpc-demo.

Peered VPC network: Select the In project vpcuser##project radio button.

VPC network name: Select vpc-demo-2, the opposite VPC.

Click Create.
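Alternatively, both directions of the peering can be created from Cloud Shell. A sketch using the example names above; note that older gcloud releases may also require the --auto-create-routes flag:

```
# Create the two one-way peering connections
gcloud compute networks peerings create demo2-to-demo \
    --network vpc-demo-2 --peer-network vpc-demo

gcloud compute networks peerings create demo-to-demo2 \
    --network vpc-demo --peer-network vpc-demo-2
```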

The vpc-demo and vpc-demo-2 VPCs are now peered. The VPC Network Peering screen should show a status of `Connected` for both peers, as shown in the example below:

Return to the VPC Routes view by selecting Routes on the navigation bar on the left.

Type the following into the Filter resources search bar and press `Enter`:

`network: vpc-demo-2`

The displayed routes are now limited to the vpc-demo-2 VPC, as shown below:

What do you notice that's different?

There are now two routes that begin with peering-, which represent the subnet routes from the vpc-demo VPC. The next hop for both routes is the VPC peering connection from vpc-demo-2 to vpc-demo.

Note, however, that the custom routes (i.e. the static and dynamic routes) from vpc-demo do not appear in the vpc-demo-2 routing table.

<

From the navigation column on the left, select VPC network peering.

Select the demo-to-demo2 connection.

Click Edit.

Under Exchange custom routes, check Export custom routes.

Click Save.

Select the demo2-to-demo connection.

Notice the rejected routes listed on the Imported routes tab.

Click Edit.

Under Exchange custom routes, check Import custom routes.

Click Save.

Return to the VPC Routes view by selecting Routes on the navigation bar on the left.

Type the following into the Filter resources search bar and press `Enter`:

`network: vpc-demo-2`

The displayed routes are now limited to the vpc-demo-2 VPC, as shown below:

What do you notice that's different?

There is now an additional peering route, representing the static custom route exported from vpc-demo.

Where is the dynamic route?

Go back to the demo2-to-demo peering details.

<>

You can now see the dynamic route imported from vpc-demo.

<duration>

From your Cloud Shell session, execute the following commands to clean up the VPC Peering Custom Routes lab environment:

<

Delete the peering connections

./remove.sh

>
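A minimal sketch of those commands, assuming the example peering names used above:

```
# Delete both directions of the VPC peering
gcloud compute networks peerings delete demo2-to-demo --network vpc-demo-2
gcloud compute networks peerings delete demo-to-demo2 --network vpc-demo

# Remove the remaining Terraform-created resources
./remove.sh
```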

You have completed the VPC Networking codelab!

What you covered

Next Steps

Learn More

Learn more about the features you used in this lab:

©Google, Inc. or its affiliates. All rights reserved. Do not distribute.