What you need

To complete this lab, you need:

Internet access

Access to a supported Internet browser

What you do

What you learn

Cloud Storage is a fundamental resource in GCP, with many advanced features. In this lab you will exercise many Cloud Storage features that could be useful in your designs. You will explore Cloud Storage using both the console and the gsutil tool.

Step 1 Create a Project

Select an existing project or create a new Google Cloud project.

Remember the project ID, a name that is unique across all Google Cloud projects. It will be referred to later in this lab as PROJECT_ID.

Step 1 Create an IAM Service Account

Console: Products and Services > IAM & Admin > Service Accounts

Click on [Create Service Account].

Property    Value
Name:       storecore
Role:       Project > Editor

Select "Furnish a new private key" of type JSON.

When you click CREATE it will download a JSON key file. You will need to find this key file and open it in a text editor to make a copy of it on the VM.

Click [Create].

Step 2 Create a Cloud Storage bucket

Console: Products and Services > Storage > Browser

A bucket must have a globally unique name. You can use part of your Project ID in the name to help make it unique. For example, if the Project ID were "myproj-154920", the bucket name might be "storecore154920".

Create a bucket.

Property                  Value
Name:                     unique name
Default storage class:    Multi-Regional

Make a note of the bucket name. It will be used in this lab.
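If you prefer the command line, an equivalent bucket could also be created from Cloud Shell with gsutil (a sketch only; the bucket name shown is an example and must be globally unique, and storage-class names accepted by gsutil mb have changed across gsutil versions):

$ gsutil mb -c multi_regional gs://storecore154920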

Step 3 Create a VM

Console: Products and Services > Compute Engine > VM instances

Click on [Create instance].

Property                    Value
Name:                       minecraft-server
Zone:                       us-central1-c
Machine type:               n1-standard-1

You shouldn't need to change the following settings; just verify them.

Boot disk:                  New 10 GB, Debian Linux
Identity and API access:    Don't change
Firewall:                   Don't change

Click the [Create] button.

Step 4 SSH to the VM and authorize it to use the GCP API

In this lab you will be making changes to the configuration files used by the Google Cloud SDK tools. Rather than make changes that could affect how Cloud Shell works, you will create a VM, authorize it to use the SDK, and make the configuration changes on that VM. This also simplifies cleanup: all you need to do is delete the VM.

Find the downloaded JSON file from Step 1 on your computer.

Open the downloaded file in a text editor.

Select all the text [CTRL]-[A] and copy it [CTRL]-[C].

(Commands depend on text editor and operating system)

You will be pasting this text into a file on the VM.

SSH to the VM you created in Step 3.

Create a file named credentials.json with the text from the downloaded JSON file.

$ vi credentials.json

In the example, using vi, you would type "i" to enter insert mode, then:

[CTRL]-[V] to paste the contents

[ESC], then type :wq to write the file and exit the editor.
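Alternatively, if you have the Cloud SDK installed on your local machine, you could copy the key file to the VM instead of pasting it (a sketch; [VM_NAME] is a placeholder for the name you gave the VM in Step 3, which was created in zone us-central1-c):

$ gcloud compute scp credentials.json [VM_NAME]:~/credentials.json --zone us-central1-c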

Enter the following command in the terminal to authorize the VM to use the Google Cloud API.

$ gcloud auth activate-service-account --key-file credentials.json

You will need the Project ID for the next command.

Enter the following command to reset the local profile and initialize the SDK configuration.

$ gcloud init
  1. Select option [1] to "Re-initialize" the configuration.
  2. Select the service account you created, beginning with "storecore".
  3. Enter the Project ID.
  4. When prompted "Do you want to configure Google Compute Engine (https://cloud.google.com/compute) settings (Y/n)?", enter "Y".
  5. For the zone, enter the number corresponding to us-central1-c.
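If you prefer to skip the interactive prompts, the same configuration could be set directly (a sketch; it assumes the service account was already activated with the previous command):

$ gcloud config set project [PROJECT_ID]
$ gcloud config set compute/zone us-central1-c
$ gcloud config set compute/region us-central1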

Step 5 Download a sample file using CURL and make two copies

Enter the following command to download a sample file. The sample is a publicly available Hadoop documentation HTML file.

$ curl \
http://hadoop.apache.org/docs/current/\
hadoop-project-dist/hadoop-common/\
ClusterSetup.html > setup.html

You'll need more copies of the sample file for some of the activities.

$ cp setup.html setup2.html
$ cp setup.html setup3.html

Step 1 Copy the file to the bucket

$ gsutil cp setup.html gs://[Bucket Name]/

Step 2 Get the default access list that's been assigned to setup.html

$ gsutil acl get gs://[Bucket Name]/setup.html  > acl.txt
$ cat acl.txt

Step 3 Set the access list to private and verify the results

$ gsutil acl set private gs://[Bucket Name]/setup.html 
$ gsutil acl get gs://[Bucket Name]/setup.html  > acl2.txt
$ cat acl2.txt

Step 4 Update the access list to make the file publicly readable

$ gsutil acl ch -u AllUsers:R gs://[Bucket Name]/setup.html
$ gsutil acl get gs://[Bucket Name]/setup.html  > acl3.txt
$ cat acl3.txt

Step 5 Examine the file in console

Console: Products and Services > Storage > Browser

Click on the bucket: [Bucket Name]

In the "Share publicly" column, you will see that setup.html now has a "Public link" available.
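Because setup.html is now publicly readable, it can also be fetched anonymously over HTTP (a sketch using the public URL pattern; substitute your bucket name):

$ curl -I https://storage.googleapis.com/[Bucket Name]/setup.html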

Step 6 Delete the local file and copy back from Cloud Storage

Return to the VM SSH terminal.

$ rm setup.html
$ ls
$ gsutil cp gs://[Bucket Name]/setup.html setup.html

Step 1 Generate a CSEK key

For the next step, you will need an AES-256 base-64 key.

Copy and paste the following command into the SSH terminal and run it:

python -c 'import base64, os; print(base64.b64encode(os.urandom(32)).decode())'

Highlight and copy the key.

Example:

$ python -c 'import base64, os; print(base64.b64encode(os.urandom(32)).decode())'
tmxElCaabWvJqR7uXEWQF39DhWTcDvChzuCmpHe6sb0=
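If Python is not available on the VM, an equivalent 256-bit base-64 key could be generated with OpenSSL instead (assuming the openssl tool is installed):

$ openssl rand -base64 32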

Step 2 Modify the boto file

The encryption controls are contained in a gsutil configuration file named .boto.

$ ls -al

$ vi .boto

Locate the line that reads "#encryption_key=". In vi you can search for it by typing:

/encrypt

Press "i" to enter insert mode.

Uncomment the line and paste the key at the end.

Example:

before:

# encryption_key=

after:

encryption_key=tmxElCaabWvJqR7uXEWQF39DhWTcDvChzuCmpHe6sb0=
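Editing .boto with vi is only one option; a sed one-liner could make the same change (a sketch; replace [YOUR_KEY] with the key you generated):

$ sed -i 's|^#\s*encryption_key=.*|encryption_key=[YOUR_KEY]|' .boto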

Step 3 Upload the remaining setup.html files, now encrypted with your CSEK

$ gsutil cp setup2.html gs://[Bucket Name]/
$ gsutil cp setup3.html gs://[Bucket Name]/

Step 4 Verify in console

Console: Products and Services > Storage > Browser

Click on the bucket: [Bucket Name]

The setup2.html and setup3.html files are shown as customer-encrypted.

Step 5 Delete local files

$ rm setup*

Step 6 Copy down the three files

$ gsutil cp gs://[Bucket Name]/setup*.html ./

Step 7 Cat the encrypted files to see that they made it back

$ cat setup.html
$ cat setup2.html
$ cat setup3.html

Step 1 Move the current CSEK encryption key to a decryption key

Comment out the current encryption_key line in the .boto file.

Copy the key from the commented-out encryption_key line to the decryption_key1 line.

Example:

before:

encryption_key=2dFWQGnKhjOcz4h0CudPdVHLG2g+OoxP8FQOIKKTzsg=

# decryption_key1=

after:

# encryption_key=2dFWQGnKhjOcz4h0CudPdVHLG2g+OoxP8FQOIKKTzsg=

decryption_key1=2dFWQGnKhjOcz4h0CudPdVHLG2g+OoxP8FQOIKKTzsg=

Step 2 Generate another CSEK key and add to .boto

python -c 'import base64, os; print(base64.b64encode(os.urandom(32)).decode())'

Highlight and copy the key.

Add a new line reading "encryption_key=" followed by the new key.

Example:

before:

# encryption_key=2dFWQGnKhjOcz4h0CudPdVHLG2g+OoxP8FQOIKKTzsg=

after:

# encryption_key=2dFWQGnKhjOcz4h0CudPdVHLG2g+OoxP8FQOIKKTzsg=
encryption_key=HbFK4I8CaStcvKKIx6aNpdTse0kTsfZNUjFpM+YUEjY=

Step 3 Use gsutil rewrite to rotate the key for setup2.html

When you rewrite an encrypted object, gsutil decrypts it with the decryption_key1 you just set and re-encrypts it with the new encryption_key.

You are rewriting the key for setup2.html but not for setup3.html, so that you can see what happens if you don't rotate the keys properly.

$ gsutil rewrite -k gs://[Bucket Name]/setup2.html
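As an aside, gsutil rewrite can also be applied recursively to rotate every object in a bucket in one pass (a sketch only; do not run it here, since this lab deliberately leaves setup3.html on the old key):

$ gsutil rewrite -k -r gs://[Bucket Name]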

Step 4 Comment out the old decrypt key

Example:

before:

decryption_key1=2dFWQGnKhjOcz4h0CudPdVHLG2g+OoxP8FQOIKKTzsg=

after:

# decryption_key1=2dFWQGnKhjOcz4h0CudPdVHLG2g+OoxP8FQOIKKTzsg=

Step 5 Download setup2 and setup3

$ gsutil cp  gs://[Bucket Name]/setup2.html recover2.html

$ gsutil cp  gs://[Bucket Name]/setup3.html recover3.html

What happened? setup3.html was not rewritten with the new key, so it is still encrypted with the old key. Because that key is no longer configured in your .boto file, gsutil cannot decrypt it and the copy fails.

You have rotated the CSEK keys.

Step 1 View the lifecycle policy for the bucket

$ gsutil lifecycle get gs://[Bucket Name]

Step 2 Create a JSON lifecycle policy file

Create a file named life.json with the following contents:

{
  "rule":
  [
    {
      "action": {"type": "Delete"},
      "condition": {"age": 31}
    }
  ]
}

This policy tells Cloud Storage to delete any object in the bucket that is more than 31 days old.
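One way to create the file directly on the VM, without opening an editor, is a shell here-document (a sketch with the same contents as above):

$ cat > life.json <<'EOF'
{
  "rule":
  [
    {
      "action": {"type": "Delete"},
      "condition": {"age": 31}
    }
  ]
}
EOF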

Step 3 Set the policy

$ gsutil lifecycle set life.json gs://[Bucket Name]

Step 4 Verify the policy

$ gsutil lifecycle get gs://[Bucket Name]

Step 1 View the versioning status for the bucket

$ gsutil versioning get gs://[Bucket Name]

A result of "Suspended" means that versioning is not enabled.

Step 2 Enable versioning

$ gsutil versioning set on gs://[Bucket Name]

Verify that versioning was enabled:

$ gsutil versioning get gs://[Bucket Name]

Step 3 Create several versions of the sample file in the bucket

Check the size of the sample file:

$ ls -al setup.html

Delete five lines from setup.html to change its size.

$ vi setup.html

Example, in vi:

Press [d][d] five times to delete five lines.

Press [ESC], then type :wq to save and exit.

Copy the file to the bucket with the "-v" versioning option:

$ gsutil cp -v setup.html gs://[Bucket Name]

Delete another five lines from setup.html to change its size.

$ vi setup.html

Example, in vi:

Press [d][d] five times to delete five lines.

Press [ESC], then type :wq to save and exit.

Copy the file to the bucket with the "-v" versioning option:

$ gsutil cp -v setup.html gs://[Bucket Name]

Step 4 List the versions of the file

$ gsutil ls -a gs://[Bucket Name]/setup.html

Highlight and copy the fully versioned name of the file.
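Versioned object names include a generation number after a "#". A hypothetical listing might look like the following (your generation numbers will differ):

$ gsutil ls -a gs://[Bucket Name]/setup.html
gs://[Bucket Name]/setup.html#1560468815330123
gs://[Bucket Name]/setup.html#1560468887712456
gs://[Bucket Name]/setup.html#1560468921435789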

Step 5 Download the oldest, original version of the file

$ gsutil cp gs://[Bucket Name]/setup.html#[VERSION] recovered.txt

(Replace [VERSION] with the generation number of the oldest version listed in Step 4.)

Step 6 Verify the recovery

$ ls -al setup.html

$ ls -al recovered.txt

You have recovered the original file from the backup version.

Step 1 Make a nested directory

You will make a nested directory structure so that you can examine what happens when it is recursively copied to a bucket.

$ mkdir firstlevel
$ mkdir ./firstlevel/secondlevel
$ cp setup.html firstlevel
$ cp setup.html firstlevel/secondlevel

Step 2 Sync the home directory on the VM with the bucket

$ gsutil rsync -r . gs://[Bucket Name]

Step 3 Examine the results

Console: Products and Services > Storage > Browser

Click on the bucket: [Bucket Name]

Notice that there are folders present.

Click on /firstlevel and then on /secondlevel

Compare what you see in console with the results of this command:

$ gsutil ls -r gs://[Bucket Name]/firstlevel

Step 1 Create a second Project

Create a new Google Cloud project.

Remember the project ID, a unique name across all Google Cloud projects.

It will be referred to in the coming steps as [PROJECT_ID_2].

Step 2 Prepare the bucket

Console: Products and Services > Storage > Browser

Click on the [Create bucket] button.

Give the bucket a globally unique name.

Property                  Value
Name:                     unique name
Default storage class:    Multi-Regional

Upload a file to the bucket. Any small example file or text file will do.

Note the bucket name. It will be referred to as [BUCKET_NAME] in the following steps.
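If you prefer the command line, the upload in the previous step could also be done from Cloud Shell in the second project (a sketch; sample.txt is only a placeholder file name):

$ echo "cross-project sample" > sample.txt
$ gsutil cp sample.txt gs://[BUCKET_NAME]/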

Step 3 Create a service account with a specific role

Console: Products and Services > IAM & Admin > Service accounts

Click on the [Create service account] button.

Property    Value
Name:       cross-project-storage
Role:       Storage > Storage Object Viewer

Step 4 Download the JSON key

Select "Furnish a new private key" of type JSON.

Click [Create]. A JSON key file will be downloaded to your computer.

Locate the JSON file and open it in a text editor. Prepare to copy the contents; you will paste them into a file on the VM.

Step 5 Switch back to the first Project

Verify that the project corresponds to PROJECT_ID and not PROJECT_ID_2.

Step 6 Create a VM

Console: Products and Services > Compute Engine > VM instances

Click on [Create instance].

Property                    Value
Name:                       crossproject
Zone:                       europe-west1-d
Machine type:               f1-micro

You shouldn't need to change the following settings; just verify them.

Boot disk:                  New 10 GB, Debian Linux
Identity and API access:    Don't change
Firewall:                   Don't change

Click the [Create] button.

Step 7 SSH to the VM

Enter the following command to list the files in the bucket from the other project.

$ gsutil ls gs://[BUCKET_NAME]/

The command fails because the VM's default credentials do not have access to the bucket in the other project:

AccessDeniedException: 403 Caller does not have storage.objects.list access to bucket [BUCKET_NAME].

Step 8 Authorize the VM

Copy the contents from the downloaded JSON file and paste it into a new file on the VM.

$ vi credentials.json

In the example, using vi, you would type "i" to enter insert mode, then:

[CTRL]-[V] to paste the contents

[ESC], then type :wq to write the file and exit the editor.

Enter the following command in the terminal to authorize the VM to use the Google Cloud API.

$ gcloud auth activate-service-account --key-file credentials.json

You will need [PROJECT_ID_2] for the next command.

Enter the following command to reset the local profile and initialize the SDK configuration.

$ gcloud init
  1. Select option [1] to "Re-initialize" the configuration.
  2. Select the service account you created, beginning with "cross-project-storage".
  3. Enter [PROJECT_ID_2].
  4. When prompted "Do you want to configure Google Compute Engine (https://cloud.google.com/compute) settings (Y/n)?", enter "Y".
  5. If asked about the zone, enter the number corresponding to us-central1-c.

Step 9 Verify access

This command should now work:

$ gsutil ls gs://[BUCKET_NAME]/

As well as this one, replacing [FILE_NAME] with the name of the file you uploaded in Step 2:

$ gsutil cat gs://[BUCKET_NAME]/[FILE_NAME]

Now try to copy the credentials file to the bucket.

$ gsutil cp credentials.json gs://[BUCKET_NAME]/

Copying file://credentials.json [Content-Type=application/json]...
AccessDeniedException: 403 Caller does not have storage.objects.create access to bucket [BUCKET_NAME].

Step 10 Modify role

Return to the second Project, PROJECT_ID_2.

Console: Products and Services > IAM & Admin > Service accounts

  1. Select the cross-project-storage service account.
  2. Pull down the Role(s) menu and add the "Storage > Storage Object Admin" role.
  3. Click the [Save] button at the bottom of the list of Roles. If you don't click the [Save] button the change will not be made.
  4. The service account Roles should now say "Multiple".
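The same role could also be granted from the command line (a sketch; the service-account email below is a placeholder constructed from the account name and [PROJECT_ID_2]):

$ gcloud projects add-iam-policy-binding [PROJECT_ID_2] \
    --member serviceAccount:cross-project-storage@[PROJECT_ID_2].iam.gserviceaccount.com \
    --role roles/storage.objectAdmin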

Step 11 Verify changed access

Try again to copy the credentials file to the bucket.

$ gsutil cp credentials.json gs://[BUCKET_NAME]/

Copying file://credentials.json [Content-Type=application/json]...
- [1 files][  2.3 KiB/  2.3 KiB]                                                
Operation completed over 1 objects/2.3 KiB.

Step 12 About cross-project access billing

In this example the VM in PROJECT_ID now has the ability to upload files to a Cloud Storage bucket that was created in another project, PROJECT_ID_2.

Note that the project where the bucket was created is the billing project for this activity. That means that if the VM uploads a ton of files, the storage is billed to PROJECT_ID_2 (the bucket's project), not to PROJECT_ID (the VM's project).

Step 1 Clean up PROJECT_ID_2

Switch to PROJECT_ID_2 and delete the resources that this lab created there.
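For example, the resources created in PROJECT_ID_2 could be removed from the command line (a sketch; the service-account email is a placeholder):

$ gsutil rm -r gs://[BUCKET_NAME]
$ gcloud iam service-accounts delete cross-project-storage@[PROJECT_ID_2].iam.gserviceaccount.com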

Step 2 Clean up PROJECT_ID

Switch to PROJECT_ID and delete the resources that this lab created there.
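Similarly, a sketch of the command-line cleanup for PROJECT_ID, using the names and zones from earlier in this lab:

$ gcloud compute instances delete minecraft-server --zone us-central1-c
$ gcloud compute instances delete crossproject --zone europe-west1-d
$ gsutil rm -r gs://[Bucket Name]
$ gcloud iam service-accounts delete storecore@[PROJECT_ID].iam.gserviceaccount.com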

© Google, Inc. or its affiliates. All rights reserved. Do not distribute.