Managing your network in the cloud is not as complicated or tedious as maintaining the many layers of vendor- and hardware-specific topology and configuration in an on-premises data center. But even though individual tasks are easily automated, network management still includes coordinating policies that cross multiple admin domains (organization, security, network, project), multiple teams, and multiple resources (projects, networks, endpoints).

A concrete use case that demands this kind of coordination occurs when one internal microservice makes use of another. Microservices are frequently implemented by separate teams in separate projects, with resources that need to communicate.

In this codelab, two popular approaches to network management for complex organizational interaction are described: centralized and decentralized. You will work in Google Cloud Platform (GCP) using multiple projects to construct compute and networking resources. Along the way, multiple administrative roles are explained and exercised.

Google Shared VPC Networks and VPC Network Peering are explored as features for coordinating your use of networking resources. Finally, multi-network connectivity is explored at the instance level using Multiple Network Interfaces. By configuring separate network interfaces, you can apply separate policies, via firewall rules and access controls, to different interfaces.

Examples of use cases for multiple network interfaces include enabling virtualized network appliance functions, perimeter isolation, and bandwidth isolation techniques.

What you'll learn

What you'll need

To interact with GCP, we will use both the Google Cloud Console and Cloud Shell throughout this lab.

Google Cloud Console

The Cloud Console can be reached at https://console.cloud.google.com.

Cloudnet18 Training environment setup

In this lab, use your @google.com identity to interact with a pre-determined project in the gcpnetworking.training Organization. IAM and billing have already been configured for you.

Click on the project selector dropdown at the top of the page:

Select the gcpnetworking.training Org in the project selector drop down.

You should see a project available to you with a name in the format vpcuserXXproject. (If you do not see a project, please let the instructor know.)

Click 'OPEN' to navigate to your reserved project.

Google Cloud Shell

Google Cloud Shell is a Debian-based virtual machine, automatically provisioned from the Cloud Console and pre-loaded with all the development tools you'll need. This means that all you will need for this lab is a browser. Yes, it works on a Chromebook!

Activate Google Cloud Shell

From the GCP Console click the Cloud Shell icon on the top right toolbar:

Then click "Start Cloud Shell":

It should only take a few moments to provision and connect to the environment:

This virtual machine is loaded with all the development tools you'll need. It offers a persistent 5GB home directory, and runs on the Google Cloud, greatly enhancing network performance and authentication. Much, if not all, of your work in this lab can be done with simply a browser or your Google Chromebook.

Once connected to the cloud shell, you should see that you are already authenticated and that the project is already set to your PROJECT_ID:

gcloud auth list

Command output

Credentialed accounts:
 - <myaccount>@<mydomain>.com (active)
gcloud config list project

Command output

[core]
project = <PROJECT_ID>

If it is not, you can set it with this command:

gcloud config set project <PROJECT_ID>

Command output

Updated property [core/project].

Looking for your PROJECT_ID? It's the ID of the project you selected in the setup steps. You can find it any time on the Home page of the Cloud Console dashboard.

Note that each user project includes 3 VPC networks. You will use Deployment Manager to deploy the 3 initial VM instances used in this lab.

Copy and execute the Deployment Manager files required to create this environment. The deployment usually takes less than 30 seconds to complete.

Deployed networks and VM instances (click to expand)

From Cloud Shell (this deployment takes around 30 seconds):

mkdir ~/vpclab

cd ~/vpclab

gsutil cp gs://vpclab/vpclab-setup.* .

gcloud deployment-manager deployments create [vpcuser##deployment] \
    --project [vpcuser##project] --config=vpclab-setup.yaml

The VM instances are now created. Verify that the deployment succeeded by checking for 3 running instances.

gcloud compute instances list
NAME              ZONE            MACHINE_TYPE   PREEMPTIBLE  INTERNAL_IP  EXTERNAL_IP     STATUS
mynet-eu-vm       europe-west1-b  n1-standard-1               10.132.0.2   35.189.248.109  RUNNING
mynet-us-vm       us-central1-a   n1-standard-1               10.128.0.2   104.197.83.231  RUNNING
privatenet-us-vm  us-central1-a   n1-standard-1               172.16.0.2     104.197.72.116  RUNNING

You can also view the instances in the Cloud Console under Compute Engine > VM instances. You should notice these same 3 running instances: mynet-eu-vm, mynet-us-vm, and privatenet-us-vm.

GCP provides Shared VPC Networks and VPC Network Peering to give you the flexibility to administer your network resources in a centralized way, a decentralized way, or some hybrid of the two.

In the simplest cloud environment, a single Project might have 1 VPC Network, spanning many regions, with VM instances hosting very large and complicated applications. However, many organizations commonly deploy multiple, isolated Projects with multiple VPC Networks and Subnets. Further, associated Firewall rules, public IP addresses, VPN gateways/tunnels, and access policies are created separately across these Projects, VPC Networks, Subnets, and related instances. These isolated Projects and VPC Networks can be coordinated in different ways to satisfy many business requirements: separation of teams, a microservices approach, tiered-administration, etc. The structure of this control and management of resources is the job of Administrators.

GCP is flexible enough to support multiple approaches to network administration. This allows organizations to more carefully map resource policies, administrative controls, and related accounting to existing structures. In addition, administrators can carefully control the manner in which environments interact with each other, on-premises networks and the public Internet.

In GCP, VPC Network Peering enables two VPC Networks to permit direct communication over Google's SDN without requiring a VPN. This is a decentralized or distributed approach as each VPC Network may remain under the control of separate administrator groups and maintains its own global firewall and routing tables. Historically, such projects would consider Cloud VPN to facilitate private communication between VPC networks. However, VPC Network Peering does not incur the performance, management, and cost drawbacks present when using Cloud VPN.

Alternatively, Shared VPC Networks allows a host network, owned and managed by security and network administrators, to be shared among multiple projects. This is a centralized approach to multi-project networking as security and network policy occurs in a single designated VPC Network.

Finally, centralized (Shared VPC Networks) and decentralized (VPC Network Peering) networks can also be blended together to meet requirements for more complex Enterprise configurations. For example, a Shared VPC Network could use VPC Network Peering to peer to another network or use Cloud VPN to connect to an on-premises network if desired.

In order to properly consider the different models of network administration, you should next revisit the network-specific IAM roles.

Networking roles

GCP uses Cloud Identity and Access Management (IAM) to manage access controls: who (members) has what access (role) for which resource. Let's take a moment and examine the general Network-related IAM roles. View details in the Cloud Console under IAM & admin > Roles. Enter network in the filter search box.
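If you prefer the command line, you can list roughly the same set of predefined roles from Cloud Shell. The filter below is just one way to narrow the list; the matches may differ slightly from the Console search.

From Cloud Shell:

# List Google-curated IAM roles whose title mentions "Network"
gcloud iam roles list --filter="title:Network" --format="table(name,title)"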

General VPC network roles

The following roles are used in conjunction with single-project networking or VPC Network Peering to independently control administrative access to each VPC Network.

Shared VPC Network - Related Networking Roles

The following roles are designed to be used with Shared VPC Networking, explained later in this lab.

As a best practice, it is recommended that the Shared VPC Admin (formerly the Cross Project Networking Admin/XPN Admin) also be a project owner on the host project.
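For reference only (these are admin commands you will not run in this lab), an Org Admin might grant that combination with commands along these lines; the user email, [org-id], and [host-project-id] are illustrative placeholders.

# Grant the Shared VPC Admin role at the organization level.
gcloud organizations add-iam-policy-binding [org-id] \
    --member "user:shared-vpc-admin@example.com" --role "roles/compute.xpnAdmin"

# Also make the same user an owner of the host project.
gcloud projects add-iam-policy-binding [host-project-id] \
    --member "user:shared-vpc-admin@example.com" --role "roles/owner"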

Background

VPC Network Peering for Google Cloud Platform allows private RFC1918 connectivity between two VPC Networks belonging to the same project, different projects, or different organizations. Since peered VPC Networks co-exist on Google's global SDN, this feature eliminates the requirement to use VPN tunnels for communication between VPC Networks. This allows for increased throughput and lower cost since IPsec VPN gateways/tunnels can create a bottleneck and are billed at internet egress rates.

VPC Network Peering requires non-overlapping IP address ranges in both VPC Networks. Since VPC Network Peering provides a single RFC1918 connectivity space, two VPC Networks with Auto-mode Subnets CANNOT be peered. Further, VPC Network Peering is not transitive, meaning you cannot use a peered VPC Network to reach another peered VPC Network (e.g. if A and B are peered, and B and C are peered, instances in A cannot reach instances in C).
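Before creating a peering, you can sanity-check the subnet ranges on both sides. One way to do this from Cloud Shell is to list the subnets of both networks and confirm that no CIDR ranges overlap.

# List the subnets of both networks side by side to check for overlap.
gcloud compute networks subnets list \
    --filter="network:mynetwork OR network:privatenet" \
    --format="table(name,region,network,ipCidrRange)"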

For this lab, you will peer the mynetwork VPC Network, which has Auto-mode subnets, with the privatenet VPC Network, which uses custom subnets. Then you will verify that an instance in mynetwork, mynet-us-vm, can reach an instance in privatenet, privatenet-us-vm, using only private IP addresses.

VPC Peering (click to expand)

Process Flow

The illustration below outlines the steps to configure VPC Network Peering. Project Admins/Owners execute corresponding peering create commands between VPC Networks with non-overlapping RFC1918 address spaces.

Configure Peerings

Check Setup

List the VPC Networks, then check that no Peerings are created. You can view Routes as well to notice what changes when Peerings are created.

From Cloud Shell:

gcloud compute networks list
NAME          MODE    IPV4_RANGE  GATEWAY_IPV4
default       auto
managementnet custom
mynetwork     auto
privatenet    custom

gcloud compute networks peerings list
Listed 0 items.

gcloud compute routes list | grep -v default-
NAME  NETWORK  DEST_RANGE  NEXT_HOP  PRIORITY
<no non-default routes listed>

Now, record the private IPs of the instances. To set up for a later peering test, you'll attempt to ping privatenet-us-vm from mynet-us-vm.

gcloud compute instances list
NAME                ZONE            MACHINE_TYPE   PREEMPTIBLE  INTERNAL_IP  EXTERNAL_IP      STATUS
mynet-eu-vm         europe-west1-b  n1-standard-1               10.132.0.2   23.251.141.208   RUNNING
mynet-us-vm         us-central1-a   n1-standard-1               10.128.0.2   104.198.242.23   RUNNING
privatenet-us-vm    us-central1-a   n1-standard-1               172.16.0.2                      RUNNING

Next, SSH into the mynet-us-vm instance and ping privatenet-us-vm (172.16.0.2) on its internal IP address. The simplest way to do this is via the SSH button in Cloud Console under Compute Engine > VM instances. When you use Cloud Console SSH, security key forwarding to your VM instances is handled for you automatically. You may need to click HIDE INFO PANEL to see the SSH button.
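If you prefer the command line, gcloud can open an SSH session as well, and it will generate and propagate SSH keys for you on first use. This is optional; the lab steps assume the Console SSH button.

gcloud compute ssh mynet-us-vm --zone us-central1-a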

This ping attempt should fail (hang with no replies) because you have not yet established the peering relationship between mynetwork and privatenet. You can also SSH into privatenet-us-vm and confirm the same behavior in the reverse direction, toward mynet-us-vm.

From mynet-us-vm:

ping 172.16.0.2 -c 2
PING 172.16.0.2 (172.16.0.2) 56(84) bytes of data.
^C

exit

Create Peerings

It's time to create the 2 symmetric peerings. In these examples, the VPC networks are numbered for simplified naming purposes. The first network in a peering relationship is 1, the second is 2, the third is 3. Given this, peerings are named for their direction: for example, peering-1-2 is the peering from network 1 to network 2, and peering-2-1 is the reverse peering from network 2 to network 1.

First, let's prove that overlapping IP ranges are checked and denied. You will notice that the overlapping CIDR check occurs when the 2nd peering is attempted.

From Cloud Shell:

gcloud compute networks peerings create peering-1-2 \
    --network mynetwork --peer-network default --auto-create-routes

Updated ...
...
  state: INACTIVE
...

gcloud compute networks peerings create peering-2-1 \
    --network default --peer-network mynetwork --auto-create-routes

ERROR: (gcloud.compute.networks.peerings.create) Could not fetch resource:
 - An IP range in the peer network overlaps with an IP range in the local network.

Clean up the first half of the last peering attempt.

From Cloud Shell:

gcloud compute networks peerings delete peering-1-2 \
    --network mynetwork

Now, try two properly planned, non-overlapping IP ranges: one from mynetwork -> privatenet and one from privatenet -> mynetwork.

From Cloud Shell:

gcloud compute networks peerings create peering-1-2 \
    --network mynetwork --peer-network privatenet --auto-create-routes
Updated ...
...
  state: INACTIVE
...

Notice, after creating the first peering, that it remains INACTIVE until the reverse peering is created. This ensures the VPC Network administrators of both VPC Networks agree on the peering arrangement.

Each peering creation can take up to 30 seconds.

From Cloud Shell:

gcloud compute networks peerings list --network mynetwork
... STATE
... INACTIVE


gcloud compute networks peerings create peering-2-1 \
    --network privatenet --peer-network mynetwork --auto-create-routes
Updated ...

View Peerings/Routes

Once you initiate the second peering and it completes, the following peerings list command will show two ACTIVE peerings. When a peering is ACTIVE, routes are exchanged and VM instances in the peered networks have full mesh connectivity with each other and with internal load balancer endpoints in the peered network.

From Cloud Shell:

gcloud compute networks peerings list
... STATE
... ACTIVE
... ACTIVE

If you run the routes list command now, you will notice added routes in both networks. These routes were added automatically because you used --auto-create-routes with the peerings create command.

The added routes correspond to the subnet CIDR ranges in the peered network. For example, the routes list output below shows that mynetwork can now route to the 2 subnets (172.16.0.0/24 and 172.20.0.0/24) in privatenet.

From Cloud Shell:

gcloud compute routes list | grep -v default-
<notice many new routes with name peering-route-...>

gcloud compute routes list | grep peering-1-2
<notice 2 added routes in mynetwork for 2 subnets in privatenet>
peering-route-4a242a359b651837  mynetwork   172.16.0.0/24  peering-1-2               1000
peering-route-fac2a6fabd74ab15  mynetwork   172.20.0.0/24  peering-1-2               1000

Verify Network Peering

Now to verify VPC Network Peering is working, using Cloud Console, SSH back into mynet-us-vm and see if you can ping privatenet-us-vm at its private IP address (e.g., 172.16.0.2).

From mynet-us-vm (10.128.0.2):

ping 172.16.0.2 -c 2
PING 172.16.0.2 (172.16.0.2) 56(84) bytes of data.
64 bytes from 172.16.0.2: icmp_seq=1 ttl=64 time=0.909 ms
64 bytes from 172.16.0.2: icmp_seq=2 ttl=64 time=0.211 ms


exit

You can also SSH into privatenet-us-vm and ping back to mynet-us-vm (10.128.0.2) or mynet-eu-vm (10.132.0.2).

From privatenet-us-vm (172.16.0.2):

ping 10.128.0.2 -c 2
PING 10.128.0.2 (10.128.0.2) 56(84) bytes of data.
64 bytes from 10.128.0.2: icmp_seq=1 ttl=64 time=1.02 ms
64 bytes from 10.128.0.2: icmp_seq=2 ttl=64 time=0.223 ms

ping 10.132.0.2 -c 2
PING 10.132.0.2 (10.132.0.2) 56(84) bytes of data.
64 bytes from 10.132.0.2: icmp_seq=1 ttl=64 time=105 ms
64 bytes from 10.132.0.2: icmp_seq=2 ttl=64 time=105 ms

exit

You may wonder how to discover the IP addresses of the instances in the peered network. You can use gcloud compute instances list to view the instances present in any project on which you have permission. In this tutorial, you can see the running instances in your project. This includes instances in peered networks since they are all in a single project.

If you have the Compute Network Viewer role on another project, including a separate peered network project, you can pass that project-id to your gcloud compute instances list command and view running instances that way. The following command shows running instances in a shared-service-project you will use later in this codelab.

From Cloud Shell

gcloud compute instances list
NAME                ZONE            MACHINE_TYPE   PREEMPTIBLE  INTERNAL_IP  EXTERNAL_IP      STATUS
mynet-eu-vm         europe-west1-b  n1-standard-1               10.132.0.2   23.251.141.208   RUNNING
mynet-us-vm         us-central1-a   n1-standard-1               10.128.0.2   104.198.242.23   RUNNING
privatenet-us-vm    us-central1-a   n1-standard-1               172.16.0.2                      RUNNING

gcloud compute instances list --project shared-service-project
NAME           ZONE            MACHINE_TYPE   PREEMPTIBLE  INTERNAL_IP    EXTERNAL_IP     STATUS
service-eu-vm  europe-west1-b  n1-standard-1               192.168.132.2  104.155.56.119  RUNNING
service-us-vm  us-central1-a   n1-standard-1               192.168.128.2  104.197.245.74  RUNNING

Finally, verify that the non-overlapping IP range check also extends to the active peers of a network. In this case, both the default VPC network and the mynetwork VPC network are Auto-mode, so they use the same auto-assigned CIDR blocks for their subnets, and you have already peered mynetwork with privatenet.

Notice that the attempt to complete the peering between privatenet and default fails: the first (forward) peering is created but stays INACTIVE, and the second (reverse) peering returns an ERROR.

From Cloud Shell:

gcloud compute networks peerings create peering-2-3 \
    --network privatenet --peer-network default --auto-create-routes

Updated ...
...
  state: INACTIVE
...

gcloud compute networks peerings create peering-3-2 \
    --network default --peer-network privatenet --auto-create-routes

ERROR: (gcloud.compute.networks.peerings.create) Could not fetch resource:
 - An IP range in the local network overlaps with an IP range in one of the active peers of the peer network.

Now clean up the first part of the failed peering.

From Cloud Shell:

gcloud compute networks peerings delete peering-2-3 --network privatenet

If you have time, you can try an experiment in transitive network peerings.

Cleanup

Delete Peerings

Delete the peerings from the command line.

From Cloud Shell:

gcloud compute networks peerings delete peering-1-2 \
    --network mynetwork

Updated ...

gcloud compute networks peerings delete peering-2-1 \
    --network privatenet

Updated ...

Verify Deletion

View Peerings then Routes: both should be empty.

From Cloud Shell

gcloud compute networks peerings list
Listed 0 items.

gcloud compute routes list | grep -v default-
NAME  NETWORK  DEST_RANGE  NEXT_HOP  PRIORITY
<no non-default routes listed>

To be certain, SSH into mynet-us-vm and ping the private IP for privatenet-us-vm as we did before (e.g., 172.16.0.2). Notice it no longer succeeds.

From mynet-us-vm:

ping 172.16.0.2
PING 172.16.0.2 (172.16.0.2) 56(84) bytes of data.
^C

exit

You have successfully explored VPC Network Peering! You can learn more in the official VPC Network Peering docs.

Shared VPC Networks are the preferred solution for at least three commonly requested business requirements:

This is available using a provider/consumer model: an organization-defined set of administrators provides networking resources that are consumed by departments operating autonomously.

Terminology

Shared VPC Host Project - project that hosts sharable VPC networking resources (VPC networks, subnets, firewall rules, routes, etc.) within an organization. The VPC networking resources within the shared/host VPC network can be used by other departments, represented as independent service projects, in the organization.

Shared VPC Service Project - project that contains the instances that use the host VPC network in the host project. It represents an autonomously operated department: each service project consumes the centrally controlled VPC network resources provided by the Shared VPC host project, and a specific department/team within the customer organization independently owns and operates the service project associated with a service or application.

A benefit of service projects is that billing and quota remain separate for each service. When a department owns a service project, its billing details are tracked per project and it has its own quota, independent of other service projects.
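For example, you can inspect a project's own Compute Engine quotas from Cloud Shell. This is only an illustration, not a required lab step.

# Show the per-project Compute Engine quotas for your service project.
gcloud compute project-info describe --project [vpcuser##project] \
    --format="yaml(quotas)"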

Process Overview

Setting up a Shared VPC Network usually involves at least 3 different user roles: an Organization Admin, a Shared VPC Admin, and a Service Project Admin. This lab is simplified in that you complete only the actions of the Service Project Admin. You will perform these actions with vpcuser##project as your service project.

This section involves 3 pre-created projects:

Projects and Roles

Shared VPC Deployment (click to expand)

Process Flow

The illustration below outlines the steps to configure a Shared VPC project and shared host network. In this lab, you take on the role of the vpcuser##project Service Project Instance Admin (red swim lane). Work has already been completed as indicated in the green and blue swimlanes by the Org Admin and Shared VPC Admin. In addition, a second service project has already been configured. To better understand the Shared VPC workflow, review the required steps below.

Shared VPC Process (click to expand)

Admin Tasks: Configure Shared VPC Host Project and Shared VPC Network

The configuration of a Shared VPC host project and its shared VPC networks is completed using the Cloud Console or gcloud commands. The required tasks are distributed among an organization admin, a security admin, a shared VPC admin (acting as the networking admin), and service project admins.

Cloud Console Support for Shared VPC

Cloud Console includes panels that show information about host and service projects. These screens aggregate information that would require executing multiple gcloud commands on the command line. Cloud Console access to Shared VPC, shown below, requires the user to be granted the Shared VPC Admin role. In this lab, most of the work is conducted as a service project editor, therefore the Shared VPC panels are not accessible.
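For reference, a user who does hold the Shared VPC Admin role could retrieve similar information from the command line with commands such as the following (run against the host project you will identify later in this section; as a service project editor these will return a permissions error).

# List host projects in the organization, then the resources attached to one host project.
gcloud compute shared-vpc organizations list-host-projects 1015654926499
gcloud compute shared-vpc list-associated-resources gcpnetworking-hostproject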

Cloud Console: Organization view

Cloud Console: Host Project view

Cloud Console: Service Project view

This is what a user sees if they have the IAM permissions compute.subnetworks.getIamPolicy and resourcemanager.projects.getIamPolicy. These permissions are part of the Shared VPC Admin role, but a custom role was used here.

If you do not have these IAM permissions, you will see this in your service project view.

A set of gcloud commands was previously issued on behalf of multiple administrator roles to configure the Shared VPC host network for this lab. The org-id for gcpnetworking.training (1015654926499) is frequently referenced in those commands.
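The exact commands are not reproduced here, but a representative sketch of the Shared VPC Admin portion looks like the following (shown for reference only; the group email address is an assumption, and you should not run these yourself).

# Enable the host project for Shared VPC (run as Shared VPC Admin).
gcloud compute shared-vpc enable gcpnetworking-hostproject

# Attach a service project to the host project.
gcloud compute shared-vpc associated-projects add [vpcuser##project] \
    --host-project gcpnetworking-hostproject

# Allow service project users to use the shared network resources.
gcloud projects add-iam-policy-binding gcpnetworking-hostproject \
    --member "group:vpcusers@gcpnetworking.training" \
    --role "roles/compute.networkUser"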

Service Owner Tasks: Explore the Shared VPC Network

As mentioned previously, service project users have been granted the Compute Network User role on the host project. This enables use and review of the Shared VPC Host Network configuration including: addresses, firewalls, routes, VPC networks, subnets, vpns, etc.

Now, explore the available VPC Networks in the Shared VPC host project. You should discover the hostnet VPC Network.

From Cloud Shell:

gcloud compute shared-vpc get-host-project vpcuser##project

kind: compute#project
name: gcpnetworking-hostproject

 
gcloud compute networks list --project gcpnetworking-hostproject

NAME  MODE    IPV4_RANGE  GATEWAY_IPV4
hostnet  custom

Now, use this host project ID to drill into the shared network (hostnet) and its available subnets. The commands below show the URLs of the available subnets, which you will need when deploying a VM. Make note of the URL for each subnet.

gcloud alpha compute networks subnets list-usable \
    --project gcpnetworking-hostproject

PROJECT                    REGION        NETWORK  SUBNET         RANGE
gcpnetworking-hostproject  europe-west1  hostnet  hostsubnet-eu  192.168.132.0/24
gcpnetworking-hostproject  us-central1   hostnet  hostsubnet-us  192.168.128.0/24


gcloud compute networks subnets describe hostsubnet-eu \
    --region europe-west1 --project gcpnetworking-hostproject

...
selfLink: https://www.googleapis.com/compute/v1/projects/gcpnetworking-hostproject/regions/europe-west1/subnetworks/hostsubnet-eu


gcloud compute networks subnets describe hostsubnet-us \
    --region us-central1 --project gcpnetworking-hostproject

...
selfLink: https://www.googleapis.com/compute/v1/projects/gcpnetworking-hostproject/regions/us-central1/subnetworks/hostsubnet-us

Now, explore the firewall policy of the Shared VPC network hostnet. You can see that traffic is allowed between instances in the hostnet subnets, and that the only external traffic allowed in is ICMP and SSH.

gcloud compute firewall-rules list --project gcpnetworking-hostproject

NAME                    NETWORK  SRC_RANGES                         RULES   SRC_TAGS  TARGET_TAGS
hostnet-allow-icmp      hostnet  0.0.0.0/0                          icmp
hostnet-allow-internal  hostnet  192.168.128.0/24,192.168.132.0/24  all
hostnet-allow-ssh       hostnet  0.0.0.0/0                          tcp:22

Configure Service Project and Deploy VM in Shared VPC Network

Now that you have explored the Shared VPC host network, you can deploy a VM into it. The following table summarizes the arguments to the commands.

Don't forget to replace [vpcuser##project] with your specific project-id.

From Cloud Shell:

gcloud compute instances create [vpcuser##-eu-vm] \
    --project [vpcuser##project] --zone europe-west1-b \
    --subnet https://www.googleapis.com/compute/v1/projects/gcpnetworking-hostproject/regions/europe-west1/subnetworks/hostsubnet-eu

gcloud compute instances create [vpcuser##-us-vm] \
    --project [vpcuser##project] --zone us-central1-a \
    --subnet  https://www.googleapis.com/compute/v1/projects/gcpnetworking-hostproject/regions/us-central1/subnetworks/hostsubnet-us

You should see the instances you created running in the hostnet Shared VPC Network. Make note of the private IP addresses (in the 192.168.x.x block) from the results of the list command.

From Cloud Shell:

gcloud compute instances list --project [vpcuser##project] | grep [vpcuser##]

vpcuser##-eu-vm   europe-west1-b  n1-standard-1               192.168.132.x  35.189.233.38    RUNNING
vpcuser##-us-vm   us-central1-a   n1-standard-1               192.168.128.x  35.184.241.94    RUNNING

Testing Connectivity

In this step, you test connectivity between instances running in different projects: shared-service-project and vpcuser##project. In the lab setup, the VM instances from the shared-service-project, service-us-vm and service-eu-vm, were started in the hostnet Shared VPC Network. Verify they are running and take note of their internal IP addresses.

From Cloud Shell:

gcloud compute instances list --project shared-service-project

NAME           ZONE            MACHINE_TYPE   PREEMPTIBLE  INTERNAL_IP    EXTERNAL_IP     STATUS
service-eu-vm  europe-west1-b  n1-standard-1               192.168.132.2  104.155.56.119  RUNNING
service-us-vm  us-central1-a   n1-standard-1               192.168.128.2  104.197.245.74  RUNNING

NOTE: You are able to view these running instances because the shared VPC admins granted networkViewer privileges to the vpcusers group in the shared-service-project policy.
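For reference, a grant of that kind would look roughly like the following. This is only an illustration of what the admins configured; the group email is assumed, and you do not have permission to run it.

# Grant the vpcusers group read-only visibility into the service project's networking.
gcloud projects add-iam-policy-binding shared-service-project \
    --member "group:vpcusers@gcpnetworking.training" \
    --role "roles/compute.networkViewer"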

Now, from the VM instances panel in Cloud Console, SSH into one of your created instances, [vpcuser##-us-vm], and ping either of the instances running in hostnet, service-us-vm or service-eu-vm.

From vpcuser##-us-vm

ping service-us-vm.c.shared-service-project.internal -c 2
PING service-us-vm.c.shared-service-project.internal (192.168.128.2) 56(84) bytes of data.
64 bytes from service-us-vm.c.shared-service-project.internal (192.168.128.2): icmp_seq=1 ttl=64 time=0.250 ms
64 bytes from service-us-vm.c.shared-service-project.internal (192.168.128.2): icmp_seq=2 ttl=64 time=0.241 ms


ping service-eu-vm.c.shared-service-project.internal -c 2
PING service-eu-vm.c.shared-service-project.internal (192.168.132.2) 56(84) bytes of data.
64 bytes from service-eu-vm.c.shared-service-project.internal (192.168.132.2): icmp_seq=1 ttl=64 time=129 ms
64 bytes from service-eu-vm.c.shared-service-project.internal (192.168.132.2): icmp_seq=2 ttl=64 time=105 ms

You can also ping the other instance you created from your service project, vpcuser##-eu-vm.

From vpcuser##-us-vm

ping vpcuser##-eu-vm.c.vpcuser##project.internal -c 2

PING vpcuser##-eu-vm.c.vpcuser##project.internal (192.168.132.5) 56(84) bytes of data.
64 bytes from vpcuser##-eu-vm.c.vpcuser##project.internal (192.168.132.5): icmp_seq=1 ttl=64 time=106 ms
64 bytes from vpcuser##-eu-vm.c.vpcuser##project.internal (192.168.132.5): icmp_seq=2 ttl=64 time=105 ms

exit

You can also see that, as expected, ping latency is many times lower within the same region than between regions (us-central1 to europe-west1).

Remember, earlier we ran a command to verify that ICMP traffic is allowed by the firewall in the shared VPC Network. You can look back and recheck the firewalls to be certain.

If you need help remembering the fully qualified name of a VM instance, you can use nslookup. Install it first using the following command.

From vpcuser##-us-vm

sudo apt-get install dnsutils


nslookup 192.168.128.2 | grep name
... name = service-us-vm.c.shared-service-project.internal.


nslookup 192.168.132.2 | grep name
... name = service-eu-vm.c.shared-service-project.internal.



exit

You may be able to find another student who has deployed a VM instance in hostnet. Test connectivity to their instance by pinging the student's private IP or hostname while SSH'd into your newly created VM.

Try this next:

Check out this Two-tier web service use case for a real-world example of Shared VPC. This example illustrates how different project teams can manage their own resources, including VM instances, external and internal load balancers. At the same time, a single, centralized group of network and security admins can retain authority for providing networking connectivity and controlling the security rules for the cloud organization.

Different VPC networks can use different approaches, including VPC Network Peering and VPN, to allow connectivity between instances in different networks. However, a class of use cases demands a finer-grained approach in which individual instances have connectivity to multiple VPC networks. GCP users can configure VM instances with Multiple Network Interfaces to enable the following:

Specifications

Every VM instance in a VPC network has a default network interface. You can create additional network interfaces, up to 8 in total, depending on the instance's machine type.

Creation and Deletion

Requirements

Options

Default routes

Limitations

Provisioning Multiple Network Interfaces

Note that this example diagram is from the online docs. In this lab, you will run commands with IP addresses and instance/network names specific to this lab environment.

Example: Network and security function

Networking and security virtual appliance (click to enlarge)

A typical setup is to place a virtual network appliance on the path between public and private connectivity. This way, traffic from a public external client can only reach a private VPC network through an application-level, virtualized firewall enforcement point.

To create an instance that could host such a VM appliance, use the following command. This usually takes less than 30 seconds.

From Cloud Shell:

gcloud compute instances create vm-appliance \
    --network-interface subnet=privatesubnet-us,no-address \
    --network-interface subnet=mynetwork,private-network-ip=10.128.0.10 \
    --network-interface subnet=managementsubnet-us,no-address \
    --machine-type n1-standard-4 --zone us-central1-a


gcloud compute instances list --filter="name=vm-appliance"

NAME          ZONE           MACHINE_TYPE   PREEMPTIBLE  INTERNAL_IP                       EXTERNAL_IP    STATUS
vm-appliance  us-central1-a  n1-standard-4               172.16.0.3,10.128.0.10,10.130.0.2  35.188.178.94  RUNNING

This example illustrates a few of the optional arguments available with --network-interface, such as subnet, no-address, and private-network-ip.

Other options are documented with --network-interface, but the general pattern for creating instances with multiple network interfaces is the same as shown.
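You can also confirm the resulting configuration from Cloud Shell by describing the instance. For example, the following prints the full networkInterfaces section (interface name, network, internal IP, and so on).

gcloud compute instances describe vm-appliance --zone us-central1-a \
    --format="yaml(networkInterfaces)"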

Verify access

Now, create an instance in managementnet that will allow you to SSH into your vm-appliance.

From Cloud Shell:

gcloud compute instances create vm-management --zone us-central1-a \
    --machine-type f1-micro --subnet managementsubnet-us

Then, SSH into vm-management through the Cloud Console. Verify your multiple network interfaces and use ping to check reachability to instances in the attached subnets.

From vm-management (10.130.0.3) to vm-appliance (10.130.0.2)

ssh 10.130.0.2

sudo ifconfig
eth0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1460
        inet 172.16.0.3  netmask 255.255.255.255  broadcast 172.16.0.3
...
eth1: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1460
        inet 10.128.0.10  netmask 255.255.255.255  broadcast 10.128.0.2
...
eth2: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1460
        inet 10.130.0.2  netmask 255.255.255.255  broadcast 10.130.0.2
...
lo: flags=73<UP,LOOPBACK,RUNNING>  mtu 65536
        inet 127.0.0.1  netmask 255.0.0.0
        inet6 ::1  prefixlen 128  scopeid 0x10<host>
...


ping privatenet-us-vm -c  2

PING privatenet-us-vm.c.vpcuser01project.internal (172.16.0.2) 56(84) bytes of data.
64 bytes from privatenet-us-vm.c.vpcuser01project.internal (172.16.0.2): icmp_seq=1 ttl=64 time=1.02 ms


ping 172.16.0.2 -c 2

PING 172.16.0.2 (172.16.0.2) 56(84) bytes of data.
64 bytes from 172.16.0.2: icmp_seq=1 ttl=64 time=0.924 ms


ping 10.128.0.2 -c 2

PING 10.128.0.2 (10.128.0.2) 56(84) bytes of data.
64 bytes from 10.128.0.2: icmp_seq=1 ttl=64 time=0.181 ms


ping 10.130.0.3 -c 2

PING 10.130.0.3 (10.130.0.3) 56(84) bytes of data.
64 bytes from 10.130.0.3: icmp_seq=1 ttl=64 time=1.07 ms

Try pinging mynet-us-vm. Why do you think the private Compute Engine DNS record for privatenet-us-vm can be resolved to 172.16.0.2, but mynet-us-vm cannot be resolved to 10.128.0.2?

From vm-management (10.130.0.3) to vm-appliance (10.130.0.2)

ping mynet-us-vm -c 2
^C

exit
exit

Alias IP ranges are a capability of VPC networks that allow straightforward assignment of multiple IP addresses to a single VM instance and routing to those addresses. Using IP aliasing, you can configure multiple IP addresses, representing containers or applications hosted in a VM, without defining a separate network interface.

However, alias IP ranges are not supported on a VM that has multiple network interfaces enabled. While multiple network interfaces allow an instance to communicate with multiple VPC networks, alias IP ranges are designed to draw addresses from a subnet within a single VPC network. VM instances can be configured to use IP addresses from the local subnet's primary or secondary CIDR ranges.

Alias IP ranges have the general benefit that they are routable within the GCP virtual network without requiring additional routes, saving route quota. Also, with alias IP addresses configured, anti-spoofing checks are performed ensuring the traffic exiting VMs uses known IP addresses as source addresses. Alternatively, static routes are less secure because they disable anti-spoofing checks allowing arbitrary source IP addresses.
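For contrast only (do not run this in the lab), reaching a /29 of container addresses hosted on a VM without alias IP ranges would require a manually created static route pointing at the VM, plus IP forwarding enabled on that VM. The route name below is hypothetical; the network, range, and instance names are the ones used later in this module.

gcloud compute routes create route-to-containers \
    --network privatenet --destination-range 10.255.0.0/29 \
    --next-hop-instance privatenet-alias-vm1 \
    --next-hop-instance-zone us-central1-a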

Container architecture in GCP is a key example that shows additional benefits of alias IP ranges, including:

Specifications

Creation and Deletion

Requirements

Limitations

Provisioning Alias IP Ranges

Note that this example diagram is from the online docs. In this lab, you will run commands with IP addresses and instance/network names specific to this lab environment.

Example: Configuring containers with alias IP ranges

Alias IP Ranges and Containers (click to enlarge)

Using alias IP ranges, container IP addresses can be allocated from a secondary CIDR range and configured as alias IP addresses in the VM hosting the container.

In your lab environment, the role of Network_A in the diagram is played by the privatenet network in your project. To create a subnet with a secondary CIDR range, and an instance with an alias IP range, use the following commands.

From Cloud Shell:

gcloud compute networks subnets create privatesubnet-aliased \
    --network privatenet --region us-central1 \
    --range 10.133.0.0/20 \
    --secondary-range container-range=10.255.0.0/20


gcloud compute instances create privatenet-alias-vm1 --zone us-central1-a \
    --network-interface subnet=privatesubnet-aliased,aliases=container-range:10.255.0.0/29

gcloud compute instances create privatenet-alias-vm2 --zone us-central1-a \
    --network-interface subnet=privatesubnet-aliased,aliases=container-range:10.255.1.0/29


gcloud compute instances list --filter="name ~ privatenet-alias?"

NAME                  ZONE           MACHINE_TYPE   PREEMPTIBLE  INTERNAL_IP  EXTERNAL_IP    STATUS
privatenet-alias-vm1  us-central1-a  n1-standard-1               10.133.0.2   35.184.26.130  RUNNING
privatenet-alias-vm2  us-central1-a  n1-standard-1               10.133.0.3   35.188.178.94  RUNNING

Verify access via alias IP addresses

Use the Cloud Console to SSH into privatenet-alias-vm1. Verify your alias IP ranges and use ping to check reachability to the alias addresses.

From privatenet-alias-vm1 (10.133.0.2)

ip route show table local | grep 10.255
local 10.255.0.0/29 dev eth0 proto 66 scope host 


ping privatenet-alias-vm2 -c 2
PING privatenet-alias-vm2.c.vpcuser01project.internal (10.133.0.3) 56(84) bytes of data.
64 bytes from privatenet-alias-vm2.c.vpcuser01project.internal (10.133.0.3): icmp_seq=1 ttl=64 time=0.183 ms

ping 10.255.1.0 -c 2
PING 10.255.1.0 (10.255.1.0) 56(84) bytes of data.
64 bytes from 10.255.1.0: icmp_seq=1 ttl=64 time=1.09 ms

ping 10.255.1.7 -c 2
PING 10.255.1.7 (10.255.1.7) 56(84) bytes of data.
64 bytes from 10.255.1.7: icmp_seq=1 ttl=64 time=1.41 ms

ping 10.255.1.8 -c 2
PING 10.255.1.8 (10.255.1.8) 56(84) bytes of data.
^C


exit

Why do you think ping 10.255.1.8 failed to respond?

Now that you have completed the lab module, clean up the resources you created.

From Cloud Shell:

gcloud compute instances delete privatenet-alias-vm1 --zone us-central1-a
gcloud compute instances delete privatenet-alias-vm2 --zone us-central1-a
gcloud compute instances delete vm-management --zone us-central1-a
gcloud compute instances delete vm-appliance --zone us-central1-a

gcloud compute instances delete [vpcuser##-eu-vm] \
    --zone europe-west1-b 

gcloud compute instances delete [vpcuser##-us-vm] \
    --zone us-central1-a 

gcloud compute instances list | grep [vpcuser##]

Finally, clean up the initial instances that were created by the setup script. The deployment cleanup usually takes less than 30 seconds to complete. When you list the deployments, you should no longer see vpcuser##deployment. The vpcuser##netdeployment does, however, remain; it was used to prepare the network environment with mynetwork and privatenet.

gcloud deployment-manager deployments delete [vpcuser##deployment] \
    --project [vpcuser##project]

gcloud deployment-manager deployments list
NAME                    LAST_OPERATION_TYPE  STATUS  DESCRIPTION  MANIFEST                ERRORS
vpcuser##netdeployment  insert               DONE                 manifest-1498770525339  []

Then remove the deployment files you copied.

cd ~

rm -rf ~/vpclab

You have completed the VPC Networking Connectivity Lab!

What you covered

Next Steps

Learn More

©Google, Inc. or its affiliates. All rights reserved. Do not distribute.