VPC Service Controls Basic Tutorial II - Troubleshooting Egress Violation

1. Introduction

VPC Service Controls (VPC-SC) is an organization-level security control in Google Cloud that helps enterprise customers mitigate data exfiltration risks. It delivers zero-trust-style access to multi-tenant services: clients can restrict access based on authorized IPs, client context, and device parameters when connecting to those services from the internet or from other services, reducing both intentional and unintentional data loss. As we saw in VPC Service Controls Basic Tutorial I, you can use VPC Service Controls to create perimeters that protect the resources and data of services that you explicitly specify.
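For reference, a perimeter like the "SuperProtection" one from Tutorial I can be created with a single gcloud command. The sketch below is illustrative only, not a step in this tutorial; POLICY and PROJECTZ_ID stand in for your access policy ID and the project to protect:

# Illustrative sketch: create a perimeter protecting Compute Engine in ProjectZ.
# --resources expects a project number; POLICY is your access policy ID.
gcloud access-context-manager perimeters create SuperProtection \
  --title="SuperProtection" \
  --perimeter-type=regular \
  --resources=projects/PROJECTZ_ID \
  --restricted-services=compute.googleapis.com \
  --policy=POLICY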

The goals of this tutorial are:

  • Understand the basics of VPC Service Controls
  • Update a service perimeter and test it using Dry-run mode
  • Protect two services with VPC Service Controls
  • Troubleshoot a VPC Service Controls egress violation while listing an object from Cloud Storage

2. Setup and requirements

For this tutorial, we need the following prerequisites:

  • A GCP organization.
  • A folder under the organization.
  • Two GCP projects within the same organization, placed under the folder.
  • The required permissions at the organization level.
  • A billing account for both projects.
  • The VPC Service Controls and Access Context Manager setup from VPC Service Controls Basic Tutorial I.
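Before starting, you can sanity-check the organization and project layout from the CLI. This is an optional sketch; FOLDER_ID is a placeholder for your folder's numeric ID:

# List the organizations you can access.
gcloud organizations list

# List the projects parented by the folder (FOLDER_ID is a placeholder).
gcloud projects list --filter="parent.id=FOLDER_ID AND parent.type=folder"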


Resources-setup

  1. Set up the resources as described in the "Resources-setup" section of VPC Service Controls Basic Tutorial I.
  2. Verify that you have the required permissions to administer Cloud Storage.
  3. For this tutorial, we are going to start using the gcloud CLI instead of the Cloud console. Set up the gcloud CLI in one of the following development environments:
  • Cloud Shell: to use an online terminal with the gcloud CLI already set up, activate Cloud Shell.

Activate Cloud Shell by clicking the icon at the top right corner of the Cloud console. It can take a few seconds for the session to initialize. See the Cloud Shell guide for more details.


  • Local shell: to use a local development environment, install and initialize the gcloud CLI.
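For a local shell, the standard gcloud bootstrap looks roughly like this (a sketch; gcloud init also lets you pick the default project interactively):

# Authenticate and configure gcloud interactively.
gcloud init

# Or authenticate and set the project explicitly.
gcloud auth login
gcloud config set project PROJECT_ID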

Cost

You need to enable billing in the Cloud Console to use Cloud resources/APIs. Running through this codelab won't cost much, if anything at all. To shut down resources to avoid incurring billing beyond this tutorial, you can delete the resources you created or delete the project. New Google Cloud users are eligible for the $300 USD Free Trial program.

The only resources that will generate a cost are the VM instance and the Cloud Storage object. An estimated cost of the VM instance can be found in the pricing calculator. The estimated cost of Cloud Storage can be found in this pricing list.

3. Create a Storage Bucket and Object

As mentioned earlier, we are going to reuse the resources created in the previous tutorial, so we will go ahead and continue with the creation of the Cloud Storage bucket. For this tutorial, we are using the gcloud CLI instead of the console.

  1. In the Google Cloud console, select ProjectX. In this project, we will create the Storage bucket and object.
  2. Make sure that Cloud Shell is set to use ProjectX by running the following command:
gcloud config set project PROJECT_ID
  3. In your development environment, run the following command:
gcloud storage buckets create gs://BUCKET_NAME --location=us-central1
  4. Create a storage object so we can read it from the VM instance located in ProjectZ. We will create a .txt file:
nano hello.txt 

Add anything you want in the text file.

  5. Upload the object into the bucket:
gcloud storage cp /home/${USER}/hello.txt gs://BUCKET_NAME
  6. Verify the object has been uploaded into the bucket by listing it:
gcloud storage ls gs://BUCKET_NAME

You should see the hello.txt file listed in the output.
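Optionally, you can also inspect the uploaded object's metadata to confirm its size and location. A quick sketch:

# Show the object's metadata (size, storage class, creation time).
gcloud storage objects describe gs://BUCKET_NAME/hello.txt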

4. Protect Cloud Storage API

In the previous codelab, we created a perimeter and protected the Compute Engine API. In this codelab, we will edit our dry-run perimeter and add the Cloud Storage API. This will help us determine the impact of perimeter protection by showing us the VPC Service Controls violations in the audit logs, while the resources remain accessible until we enforce the perimeter.

  1. In the Google Cloud console, select your organization and access VPC Service Controls. Ensure that you're at the organization scope.
  2. Open Cloud Shell and update the Dry Run perimeter "SuperProtection" created in the previous lab:
gcloud access-context-manager perimeters dry-run update SuperProtection --policy=POLICY --add-restricted-services=storage.googleapis.com
  3. Verify that the Cloud Storage API has been added by describing the perimeter:
gcloud access-context-manager perimeters dry-run describe SuperProtection --policy=POLICY 

In the output, you will see that the Cloud Storage API is listed under restricted services along with the Compute Engine API, but with a "-vpcAccessibleServices: {}" label:


5. Verify that the Cloud Storage API has been protected

In Dry Run mode, verify that the "SuperProtection" perimeter logs the denial by listing the object from the VM instance created in ProjectZ, which accesses the Storage bucket hosted in ProjectX.

  1. In the Cloud Console, go to the project selector and select ProjectZ, then navigate to Compute Engine > VM Instances.
  2. Click the SSH button to connect to the VM Instance and access its command line.


  3. List the hello.txt file we uploaded earlier:
gcloud storage ls gs://BUCKET_NAME

As the Cloud Storage API is protected in dry-run mode, you should still be able to list the objects, but an error message is recorded in the ProjectZ audit logs.

  4. Go to Logs Explorer in ProjectZ and look for the most recent VPC Service Controls error message. You can use this filter to obtain the log we are looking for:
protoPayload.status.details.violations.type="VPC_SERVICE_CONTROLS"
"(Dry Run Mode) Request is prohibited by organization's policy. vpcServiceControlsUniqueIdentifier:UNIQUE_ID"

This filter shows the last violation in Dry-run mode, which belongs to Cloud Storage. Here is an example of what the log looks like; it confirms that the violation is an egress violation raised when trying to list the contents of the bucket located in ProjectX.

egressViolations: [
  0: {
    servicePerimeter: "accessPolicies/POLICY/servicePerimeters/SuperProtection"
    source: "projects/PROJECTZ_ID"
    sourceType: "Network"
    targetResource: "projects/PROJECTX_ID"
  }
]
resourceNames: [
  0: "projects/_/buckets/BUCKET_NAME"
]
securityPolicyInfo: {
  organizationId: "ORGANIZATION_ID"
  servicePerimeterName: "accessPolicies/POLICY/servicePerimeters/SuperProtection"
}
violationReason: "NETWORK_NOT_IN_SAME_SERVICE_PERIMETER"
vpcServiceControlsUniqueId: "UNIQUE_ID"
methodName: "google.storage.objects.list"
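If you prefer to stay in the CLI, a roughly equivalent query can be run with gcloud logging read (a sketch; adjust the limit as needed):

# Read recent VPC Service Controls violations from ProjectZ's audit logs.
gcloud logging read 'protoPayload.status.details.violations.type="VPC_SERVICE_CONTROLS"' --project=PROJECTZ_ID --limit=5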
  5. Since we have validated that the API call to Cloud Storage generates a VPC Service Controls violation, we will enforce the perimeter with the new configuration. Open Cloud Shell and enforce the Dry-run perimeter:
gcloud access-context-manager perimeters dry-run enforce SuperProtection --policy=POLICY --async
  6. Connect to the VM instance using SSH and list the storage bucket again to verify that the Dry-run perimeter has been enforced correctly:
gcloud storage ls gs://BUCKET_NAME

We will get a VPC Service Controls violation in the VM CLI instead of a list of the Storage objects:

ERROR: (gcloud.storage.ls) User [PROJECT_NUMBER-compute@developer.gserviceaccount.com] does not have permission to access b instance [BUCKET_NAME] (or it may not exist): Request is prohibited by organization's policy. vpcServiceControlsUniqueIdentifier:"UNIQUE_ID"

We have successfully prevented data exfiltration by using VPC Service Controls to block reading data from or copying data to a resource outside the perimeter.
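You can also confirm from Cloud Shell that the enforced configuration now restricts Cloud Storage (a sketch reusing the same placeholders):

# storage.googleapis.com should now appear under restrictedServices.
gcloud access-context-manager perimeters describe SuperProtection --policy=POLICY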

6. Troubleshooting the list denial

We are going to troubleshoot the denial we got from the VM instance CLI. Let's check the audit logs and look for the VPC Service Controls unique ID.

  1. Go to the project selector and select ProjectZ.
  2. Find the VPC Service Controls Unique ID in the audit logs by using the following query in Logs Explorer:
resource.type="audited_resource"
protoPayload.metadata."@type"="type.googleapis.com/google.cloud.audit.VpcServiceControlAuditMetadata"

This will show all VPC Service Controls audit logs. We will be looking for the last error log. As the API call was made from the VM instance, the principal must be the Compute Engine service account "PROJECT_NUMBER-compute@developer.gserviceaccount.com"

As we already have the VPC Service Controls unique ID, we can use it to get the desired log directly by using this filter:

protoPayload.metadata.vpcServiceControlsUniqueId="UNIQUE_ID"
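The same lookup works from the CLI (a sketch; UNIQUE_ID is the value captured from the error message):

# Fetch the audit log entry for one specific VPC Service Controls denial.
gcloud logging read 'protoPayload.metadata.vpcServiceControlsUniqueId="UNIQUE_ID"' --project=PROJECTZ_ID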
  3. Click the VPC Service Controls header, and select "Troubleshoot denial", which opens the VPC Service Controls Troubleshooter.

This tool shows us, in a friendly UI, the violation reason and whether this was an ingress or egress violation, among other useful details.

In this exercise, we will be looking for the following:

authenticationInfo: {
  principalEmail: "PROJECT_NUMBER-compute@developer.gserviceaccount.com"
}
egressViolations: [
  0: {
    servicePerimeter: "accessPolicies/POLICY/servicePerimeters/SuperProtection"
    source: "projects/PROJECTZ_ID"
    sourceType: "Network"
    targetResource: "projects/PROJECTX_ID"
  }
]
violationReason: "NETWORK_NOT_IN_SAME_SERVICE_PERIMETER"

This information is enough to know that we need an egress rule that lets the Compute Engine service account reach the storage bucket in ProjectX from ProjectZ. We can also see that the network is not in the same perimeter, so we need to allow communication to services and data sharing across the perimeter boundary.

  4. Activate Cloud Shell and create a .yaml file with the egress rule using a text editor.
nano egresstorage.yaml
- egressTo:
    operations:
    - serviceName: storage.googleapis.com
      methodSelectors:
      - method: "*"
    resources:
    - projects/PROJECTX_ID
  egressFrom:
    identities:
    - serviceAccount:PROJECT_NUMBER-compute@developer.gserviceaccount.com
  5. Update the perimeter that protects ProjectZ with the egress policy:
gcloud access-context-manager perimeters update SuperProtection --set-egress-policies=egresstorage.yaml --policy=POLICY 

Now we can try again to access the bucket from the VM instance.

  6. In the Cloud Console, go to the project selector and select ProjectZ, then navigate to Compute Engine > VM Instances.
  7. Click the SSH button to connect to the VM Instance and access its command line.
  8. Once you're in the VM CLI, try to list the objects in the Storage Bucket:
gcloud storage ls gs://BUCKET_NAME/

You will get the following error message:

ERROR: (gcloud.storage.ls) User [PROJECT_NUMBER-compute@developer.gserviceaccount.com] does not have permission to access b instance [BUCKET_NAME] (or it may not exist): PROJECT_NUMBER-compute@developer.gserviceaccount.com does not have storage.objects.list access to the Google Cloud Storage bucket. Permission 'storage.objects.list' denied on resource (or it may not exist).
  9. We need to grant an object viewer role to the Compute Engine service account so it can list the objects in the Storage Bucket:
gcloud storage buckets add-iam-policy-binding gs://BUCKET_NAME --member=serviceAccount:PROJECT_NUMBER-compute@developer.gserviceaccount.com --role=roles/storage.objectViewer
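Before retrying, you can verify that the binding landed (a sketch; the command prints the bucket's full IAM policy):

# Confirm the service account now holds roles/storage.objectViewer.
gcloud storage buckets get-iam-policy gs://BUCKET_NAME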
  10. Once again, let's try to list the hello.txt file from the VM instance's CLI:
gcloud storage ls gs://BUCKET_NAME/

gs://BUCKET_NAME/hello.txt

Now we are able to list the object without a VPC Service Controls permission violation. But what about downloading the file? Let's try that.

gcloud storage cp gs://BUCKET_NAME/hello.txt /home/${USER}

And we will get the following output:

Copying gs://BUCKET_NAME/hello.txt to file:///home/${USER}
 Completed files 1/1 | 54.0B/54.0B  

7. Cleanup

While there is no separate charge for using VPC Service Controls when the service is not in use, it's a best practice to clean up the setup used in this lab. You can also delete your VM instance and/or Cloud projects to avoid incurring charges. Deleting your Cloud project stops billing for all the resources used within that project.

  1. To delete your VM instance, select the checkbox on the left side of your VM instance name, and then click Delete.


  2. To delete the perimeter, complete the following steps:
  • In the Google Cloud console, click Security, and then click VPC Service Controls at the Organization scope.
  • In the VPC Service Controls page, in the table row corresponding to the perimeter that you want to delete, click the Delete icon.
  3. To delete the Access Level, complete the following steps:
  • In the Google Cloud console, Open the Access Context Manager page at the Folder scope.
  • In the grid, in the row for the access level that you want to delete, click the Delete icon, and then click Delete.
  4. To delete the Storage object and Bucket, complete the following steps:
  • In the Google Cloud console, open the Cloud Storage buckets page.
  • Select the checkbox next to the bucket that you created.
  • Click Delete.
  • In the window that opens, confirm that you want to delete the bucket.
  • Click Delete.
  5. To shut down your project, complete the following steps:
  • In the Google Cloud console, go to the IAM & Admin Settings page of the project you want to delete.
  • On the IAM & Admin Settings page, click Shutdown.
  • Enter the project ID, and click Shutdown anyway.
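Since this tutorial leaned on the gcloud CLI, the same cleanup can be scripted. A sketch with placeholders: VM_NAME, ZONE, and ACCESS_LEVEL_NAME stand in for the resources created in Tutorial I:

# Delete the VM instance.
gcloud compute instances delete VM_NAME --zone=ZONE --project=PROJECTZ_ID

# Delete the perimeter and the access level.
gcloud access-context-manager perimeters delete SuperProtection --policy=POLICY
gcloud access-context-manager levels delete ACCESS_LEVEL_NAME --policy=POLICY

# Delete the object and the bucket.
gcloud storage rm --recursive gs://BUCKET_NAME

# Shut down both projects.
gcloud projects delete PROJECTX_ID
gcloud projects delete PROJECTZ_ID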

8. Congratulations!

In this codelab, you updated a VPC Service Controls Dry-run perimeter, enforced it, and troubleshot an egress violation.


License

This work is licensed under a Creative Commons Attribution 2.0 Generic License.