Secure shared data in use with Confidential Space

1. Overview

Confidential Space offers secure multi-party data sharing and collaboration, while allowing organizations to preserve the confidentiality of their data. Parties can collaborate while retaining ownership of their data and protecting it against rogue operators.

With Confidential Space, organizations can gain mutual value from aggregating and analyzing sensitive, often regulated, data — such as personally identifiable information (PII), protected health information (PHI), intellectual property, and cryptographic secrets — while retaining full control over it.

In this three-step lab you will set up a Confidential Space between Primus Bank and Secundus Bank to determine common customers without sharing full account lists. The lab consists of the following steps:

  • Step 1: Build the interaction between Primus and Secundus Bank with a simple workload that counts the number of customers at a given location.
  • Step 2: Encrypt the customer data and move access control to the encryption key, so that access to the resources is restricted by the key.
  • Step 3: Update the workload to support customer-list comparison between the two banks, and create a key and an encrypted database for Secundus Bank.

We will use the following architecture to secure the shared data in use between the two banks:

(Architecture diagram)

What you'll learn

  • How to authorize access to protected resources based on the attributes of:
      • What: the workload code
      • Where: the Confidential Space environment (the Confidential Space image on Confidential VM)
      • Who: the account that is running the workload
  • How to configure the necessary Cloud resources for running Confidential Space
  • How to run the workload in a Confidential VM running the Confidential Space VM image

This codelab shows you how to use Confidential Space to remotely attest workloads running on Google Compute Engine.

What you'll need

2. Setup and Requirements

Self-paced environment setup

  1. Sign in to Cloud Console and create a new project or reuse an existing one. (If you don't already have a Gmail or G Suite account, you must create one.)


  • The Project name is the display name for this project's participants. It is a character string not used by Google APIs. You can update it at any time.
  • The Project ID is unique across all Google Cloud projects and is immutable (it cannot be changed after it has been set). The Cloud Console auto-generates a unique string; usually you don't care what it is. In most codelabs, you'll need to reference the Project ID (typically identified as PROJECT_ID). If you don't like the generated ID, you may generate another random one, or you can try your own and see if it's available. It remains fixed for the duration of the project.
  • For your information, there is a third value, a Project Number which some APIs use. Learn more about all three of these values in the documentation.
  2. Next, you'll need to enable billing in the Cloud Console to use Cloud resources/APIs. Running through this codelab shouldn't cost much, if anything at all. To shut down resources so you don't incur billing beyond this tutorial, you can delete the resources you created or delete the whole project. New users of Google Cloud are eligible for the $300 USD Free Trial program.

Start Cloud Shell

While Google Cloud can be operated remotely from your laptop, in this codelab you will be using Google Cloud Shell, a command line environment running in the Cloud.

  1. From the Cloud Console, click Activate Cloud Shell.

If you've never started Cloud Shell before, you're presented with an intermediate one-time screen describing what it is. If that's the case, click Continue (and you won't ever see it again).

It should only take a few moments to provision and connect to the environment. When it is finished, you are connected to Cloud Shell.

This virtual machine is loaded with all the development tools you'll need. It offers a persistent 5 GB home directory and runs on Google Cloud, greatly enhancing network performance and authentication. All of your work in this lab can be done with just a browser.

Once connected to Cloud Shell, you should see that you are already authenticated and that the project is already set to your PROJECT_ID.

  1. Run the following command in Cloud Shell to confirm that you are authenticated:
$ gcloud auth list

Command output

  2. Create new projects. To create the projects for Primus and Secundus Bank, run the following command for each:
$ gcloud projects create [PROJECT_ID]

Run the command once for each of the Primus and Secundus banks. Here [PROJECT_ID] refers to the name you give each project; these names will be used to set the environment variables in step 6.

Example project creation commands

$ gcloud projects create primus-bank
$ gcloud projects create secundus-bank
  3. Run the following command in Cloud Shell to confirm that the gcloud command knows about your project:
$ gcloud config list project

Command output

[core]
project = <PROJECT_ID>
  4. Enable billing for your projects.
  5. Enable the Confidential Computing API and the following APIs for both projects:
$ gcloud services enable \
    cloudapis.googleapis.com \
    cloudkms.googleapis.com \
    cloudresourcemanager.googleapis.com \
    cloudshell.googleapis.com \
    confidentialcomputing.googleapis.com \
    container.googleapis.com \
    containerregistry.googleapis.com \
    iam.googleapis.com
  6. In the Cloud Shell console or your own terminal, set environment variables with your PROJECT_ID values:
PRIMUS_PROJECT_ID=<PRIMUS_PROJECT_ID>
SECUNDUS_PROJECT_ID=<SECUNDUS_PROJECT_ID>
  7. To get the project numbers, run the projects describe command:
gcloud projects describe $PRIMUS_PROJECT_ID
gcloud projects describe $SECUNDUS_PROJECT_ID
  8. Using the project numbers from the output, set environment variables:
PRIMUS_PROJECT_NUMBER=<PRIMUS_PROJECT_NUMBER>
SECUNDUS_PROJECT_NUMBER=<SECUNDUS_PROJECT_NUMBER>
  9. You will need certain permissions in these two projects:
  • For the PRIMUS_PROJECT, you will need Cloud KMS Admin, Storage Admin, Artifact Registry Administrator, Service Account Admin, IAM Workload Identity Pool Admin
  • For the SECUNDUS_PROJECT, you will need Compute Admin, Storage Admin, Service Account Admin, Cloud KMS Admin, IAM Workload Identity Pool Admin, Security Admin (optional)

3. Step 1: Configure resources and create workload - Set up Primus Bank

In this first step, you will build the foundation for the interaction between Primus and Secundus bank with a simple workload that counts the number of customers at a given location. First, you configure the necessary Cloud resources. Then, you run the workload in Confidential Space.

Configure resources:

Configure the following in Primus project:

  • primus_customer_list.csv: the file that contains the customer data.
  • $PRIMUS_PROJECT_ID-customer-storage: the bucket that stores the customer data file.
  • initial_workload.go: the workload that reads in the customer data and counts the users in a given geographic location.
  • primus-workloads: the artifact registry.
  • initial-workload-container: the Docker container that stores the workload.
  • trusted-workload-pool: the Workload Identity Pool (WIP) that validates claims.
  • attestation-verifier: the Workload Identity Pool provider which includes the authorization condition to use for tokens signed by the Attestation Verifier Service.
  • trusted-workload-account: the service account that trusted-workload-pool uses to access the protected resources - in this step it has permission to view the customer data that is stored in the $PRIMUS_PROJECT_ID-customer-storage bucket.

Configure the following in Secundus project:

  • run-confidential-vm: the service account that has permission to access the Confidential VM that runs the workload
  • $SECUNDUS_PROJECT_ID-results-storage: the bucket that stores the results of the workload

How Confidential Space works:

When you run the workload in Confidential Space, the following process takes place, using the configured resources:

  1. The workload requests a general Google access token for the trusted-workload-account service account from the WIP, presenting an Attestation Verifier service token that contains workload and environment claims.
  2. If the workload measurement claims in the Attestation Verifier service token match the attribute condition, the WIP returns the access token for trusted-workload-account.
  3. The workload uses the trusted-workload-account access token to access the customer data in the $PRIMUS_PROJECT_ID-customer-storage bucket.
  4. The workload performs an operation on that data.
  5. The workload uses the run-confidential-vm service account to write the results of that operation to the $SECUNDUS_PROJECT_ID-results-storage bucket.

Before you begin

Before starting Step 1, make sure you have followed the setup instructions in the Setup and Requirements section.

Create a folder for the initial workload code.

$ mkdir step1
$ cd step1 

Set up Primus Bank:

In the Primus project, set up the customer data, the workload, the trusted-workload-account service account, and the WIP. Then grant the trusted-workload-account service account the roles that allow it to view the bucket where the customer data is stored and to use the WIP.

When the workload is run, it gets a token from the WIP, and then uses that token to access the trusted-workload-account service account, which can then view the storage bucket.

Run the following command to set the default project to $PRIMUS_PROJECT_ID for this section of the lab.

$ gcloud config set project $PRIMUS_PROJECT_ID

Upload customer data to bucket:

Create the primus_customer_list.csv file:

cat <<EOF > primus_customer_list.csv
15,Alice,Seattle
36,Bob,Everett
56,Eve,Shoreline
134,Ashley,Seattle
305,Clinton,Redmond
506,Stephen,Kirkland
788,Cooper,Tacoma
987,Eleanor,Bellevue
1052,April,Everett
1113,Lincoln,Bellevue
1990,Phillip,Tacoma
2048,Eric,Seattle
EOF

Create the bucket:

$ gsutil mb gs://$PRIMUS_PROJECT_ID-customer-storage

To learn more about creating storage buckets, see the Cloud Storage documentation.

Upload the CSV file to the bucket:

$ gsutil cp primus_customer_list.csv gs://$PRIMUS_PROJECT_ID-customer-storage/primus_customer_list.csv

Create workload:

Create the workload that counts users in the customer data.

Run the following command to create the workload in a file named initial_workload.go.

cat <<EOF > initial_workload.go
// Initial_workload performs queries on the (imaginary) Primus Bank dataset.
//
// This package expects all data to be passed in as part of the subcommand arguments.
// Supported subcommands are:
//   count-location
package main

import (
    "context"
    "encoding/csv"
    "flag"
    "fmt"
    "os"
    "regexp"
    "strings"

    "cloud.google.com/go/storage"
    glog "github.com/golang/glog"
    "github.com/google/subcommands"
    "google.golang.org/api/option"
)

const (
    primusBucketName = "$PRIMUS_PROJECT_ID-customer-storage"       // Bucket for the Primus Bank, created earlier
    primusDataPath   = "primus_customer_list.csv" // Name of CSV file in the bucket
)

const credentialConfig = \`{
"type": "external_account",
"audience": "//iam.googleapis.com/projects/$PRIMUS_PROJECT_NUMBER/locations/global/workloadIdentityPools/trusted-workload-pool/providers/attestation-verifier",
"subject_token_type": "urn:ietf:params:oauth:token-type:jwt",
"token_url": "https://sts.googleapis.com/v1/token",
"credential_source": {
  "file": "/run/container_launcher/attestation_verifier_claims_token"
},
"service_account_impersonation_url": "https://iamcredentials.googleapis.com/v1/projects/-/serviceAccounts/trusted-workload-account@$PRIMUS_PROJECT_ID.iam.gserviceaccount.com:generateAccessToken"
}\`

func readInPrimusTable(ctx context.Context) ([][]string, error) {
    // Create a client that uses identity federation with the attestation verifier JWT.
    storageClient, err := storage.NewClient(ctx, option.WithCredentialsJSON([]byte(credentialConfig)))
    if err != nil {
        return nil, fmt.Errorf("could not create storage client with federated credentials: %w", err)
    }
    bucketHandle := storageClient.Bucket(primusBucketName)
    objectHandle := bucketHandle.Object(primusDataPath)
    objectReader, err := objectHandle.NewReader(ctx)
    if err != nil {
        return nil, fmt.Errorf("could not read in gs://%v/%v: %w", primusBucketName, primusDataPath, err)
    }
    defer objectReader.Close()
    csvReader := csv.NewReader(objectReader)
    customerData, err := csvReader.ReadAll()
    if err != nil {
        return nil, fmt.Errorf("could not read in gs://%v/%v: %w", primusBucketName, primusDataPath, err)
    }
    return customerData, nil
}

type countLocationCmd struct{}
func (*countLocationCmd) Name() string     { return "count-location" }
func (*countLocationCmd) Synopsis() string { return "counts the number of users at the given location" }
func (*countLocationCmd) Usage() string {
    return "Usage: initial_workload count-location <location> <output_bucket> <output_path>"
}
func (*countLocationCmd) SetFlags(_ *flag.FlagSet) {}
func (*countLocationCmd) Execute(ctx context.Context, f *flag.FlagSet, _ ...interface{}) subcommands.ExitStatus {
    if f.NArg() != 2 {
        glog.Errorf("Not enough arguments (expected location and output object URI)")
        return subcommands.ExitUsageError
    }

    outputURI := f.Arg(1)
    re := regexp.MustCompile(\`gs://([^/]*)/(.*)\`)
    matches := re.FindStringSubmatch(outputURI)
    if matches == nil || matches[0] != outputURI || len(matches) != 3 {
        glog.Errorf("Second argument should be in the format gs://bucket/object")
        return subcommands.ExitUsageError
    }
    outputBucket := matches[1]
    outputPath := matches[2]
    client, err := storage.NewClient(ctx)
    if err != nil {
        glog.Errorf("Error creating storage client with application default credentials: %v", err)
        return subcommands.ExitFailure
    }
    outputWriter := client.Bucket(outputBucket).Object(outputPath).NewWriter(ctx)

    customerData, err := readInPrimusTable(ctx)
    if err != nil {
        // Writes errors reading in the primus bank data to the results bucket.
        // This becomes relevant when demonstrating the failure case.
        _, err = outputWriter.Write([]byte(fmt.Sprintf("Error reading in Primus Bank data: %v", err)))
        if err != nil {
            glog.Errorf("Could not write to %v: %v", outputURI, err)
        }
        if err = outputWriter.Close(); err != nil {
            glog.Errorf("Could not write to %v: %v", outputURI, err)
        }
        return subcommands.ExitFailure
    }

    location := strings.ToLower(f.Arg(0))
    count := 0
    if location == "-" {
        count = len(customerData)
    } else {
        for _, line := range customerData {
            if strings.ToLower(line[2]) == location {
                count++
            }
        }
    }

    _, err = outputWriter.Write([]byte(fmt.Sprintf("%d", count)))
    if err != nil {
        glog.Errorf("Could not write to %v: %v", outputURI, err)
        return subcommands.ExitFailure
    }

    if err = outputWriter.Close(); err != nil {
        glog.Errorf("Could not write to %v: %v", outputURI, err)
        return subcommands.ExitFailure
    }

    return subcommands.ExitSuccess
}

func main() {
    flag.Parse()
    ctx := context.Background()

    subcommands.Register(&countLocationCmd{}, "")

    os.Exit(int(subcommands.Execute(ctx)))
}
EOF

Build and publish container:

Build the workload and create a Dockerfile. Then create a private registry where the Secundus run-confidential-vm service account can be granted access. Finally, build and publish the workload to a container.

  1. Build the workload. Use CGO_ENABLED=0 so that the binary is statically linked.
$ go mod init initial-workload && go mod tidy
$ CGO_ENABLED=0 go build initial_workload.go
  2. Create a Dockerfile.
cat <<EOF > Dockerfile
FROM alpine:latest

WORKDIR /test

COPY initial_workload /test

ENTRYPOINT ["/test/initial_workload"]

LABEL "tee.launch_policy.allow_cmd_override"="true"

CMD []
EOF
  3. Create an Artifact Registry Docker repository.
$ gcloud artifacts repositories create primus-workloads \
  --repository-format=docker --location=us
  4. Build and publish the Docker container.
$ gcloud auth configure-docker us-docker.pkg.dev
$ docker build -t us-docker.pkg.dev/$PRIMUS_PROJECT_ID/primus-workloads/initial-workload-container:latest .
$ docker push us-docker.pkg.dev/$PRIMUS_PROJECT_ID/primus-workloads/initial-workload-container:latest

Create trusted-workload-account service account:

  1. Create the trusted-workload-account service account.
$ gcloud iam service-accounts create trusted-workload-account
  2. Grant the Storage Object Viewer role on the $PRIMUS_PROJECT_ID-customer-storage bucket to the service account. This permits the service account to view the customer list stored in the bucket.
$ gsutil iam ch serviceAccount:trusted-workload-account@$PRIMUS_PROJECT_ID.iam.gserviceaccount.com:objectViewer \
  gs://$PRIMUS_PROJECT_ID-customer-storage

Create a Workload Identity Pool (WIP):

Primus Bank wants to authorize workloads to access their customer data based on attributes of the following resources:

  • What: Code that is verified
  • Where: An environment that is secure
  • Who: An operator that is trusted

Primus uses Workload identity federation to enforce an access policy based on these requirements.

Workload identity federation allows you to specify attribute conditions. These conditions restrict which identities can authenticate with the workload identity pool (WIP). You can add the Attestation Verifier Service to the WIP as a workload identity pool provider to present measurements and enforce the policy.

  1. Create the WIP.
$ gcloud iam workload-identity-pools create trusted-workload-pool \
    --location="global"
  2. Create a new OIDC workload identity pool provider.

The specified --attribute-condition authorizes access to the workload container. It requires:

  • What: Latest initial-workload-container uploaded to the primus-workloads repository.
  • Where: Confidential Space trusted execution environment, version 0.1 or later.
  • Who: Secundus Bank run-confidential-vm service account.
$ gcloud iam workload-identity-pools providers create-oidc attestation-verifier \
    --location="global" \
    --workload-identity-pool="trusted-workload-pool" \
    --issuer-uri="https://confidentialcomputing.googleapis.com/" \
    --allowed-audiences="https://sts.googleapis.com" \
    --attribute-mapping="google.subject='assertion.sub'" \
    --attribute-condition="assertion.swname == 'CONFIDENTIAL_SPACE' &&
      int(assertion.swversion) >= 1 &&
      assertion.submods.container.image_reference == 'us-docker.pkg.dev/$PRIMUS_PROJECT_ID/primus-workloads/initial-workload-container:latest' &&
      'run-confidential-vm@$SECUNDUS_PROJECT_ID.iam.gserviceaccount.com' in assertion.google_service_accounts"
  3. Grant the workloadIdentityUser role on the trusted-workload-account service account to the trusted-workload-pool WIP. This allows the WIP to impersonate the service account.
$ gcloud iam service-accounts add-iam-policy-binding \
trusted-workload-account@$PRIMUS_PROJECT_ID.iam.gserviceaccount.com \
--role=roles/iam.workloadIdentityUser \
--member="principalSet://iam.googleapis.com/projects/$PRIMUS_PROJECT_NUMBER/locations/global/workloadIdentityPools/trusted-workload-pool/*"

4. Step 1: Set up Secundus Bank

In the Secundus project, create the run-confidential-vm service account and the $SECUNDUS_PROJECT_ID-results-storage bucket.

Run the following command to set the default project to $SECUNDUS_PROJECT_ID for this section of the lab.

$ gcloud config set project $SECUNDUS_PROJECT_ID

Create run-confidential-vm service account:

  1. Create the run-confidential-vm service account.
$ gcloud iam service-accounts create run-confidential-vm
  2. Grant the Service Account User role on the run-confidential-vm service account to your user account. This allows your user account to impersonate the service account.
$ gcloud iam \
    service-accounts add-iam-policy-binding \
    run-confidential-vm@$SECUNDUS_PROJECT_ID.iam.gserviceaccount.com \
    --member="user:$(gcloud config get-value account)" \
    --role='roles/iam.serviceAccountUser'
  3. (Optional) Grant the service account the Log Writer role. This allows the Confidential Space environment to write logs to Cloud Logging in addition to the serial console, so you can review logs after the VM is terminated. (Granting this role requires the Security Admin permission.)
$ gcloud projects add-iam-policy-binding $SECUNDUS_PROJECT_ID \
    --member=serviceAccount:run-confidential-vm@$SECUNDUS_PROJECT_ID.iam.gserviceaccount.com \
    --role=roles/logging.logWriter

Create a bucket for results:

Create the $SECUNDUS_PROJECT_ID-results-storage bucket. Then grant the run-confidential-vm service account permission to create files in the bucket, so it can store the workload results there.

  1. Create the results-storage bucket.
$ gsutil mb gs://$SECUNDUS_PROJECT_ID-results-storage
  2. Grant the Storage Object Creator role on the $SECUNDUS_PROJECT_ID-results-storage bucket to the run-confidential-vm service account. This permits the service account to store query results in the bucket.
$ gsutil iam ch \
    serviceAccount:run-confidential-vm@$SECUNDUS_PROJECT_ID.iam.gserviceaccount.com:objectCreator \
    gs://$SECUNDUS_PROJECT_ID-results-storage

Primus Bank grants permission to Secundus Bank to use its data:

In this final configuration step, Primus permits Secundus to access the workload. Secundus provides the name of its run-confidential-vm service account, and then Primus grants it the Viewer role on the repository.

Grant the Viewer role on the primus-workloads repository to the run-confidential-vm service account.

$ gcloud artifacts repositories add-iam-policy-binding primus-workloads \
--project=$PRIMUS_PROJECT_ID --role='roles/viewer' --location=us \
--member="serviceAccount:run-confidential-vm@$SECUNDUS_PROJECT_ID.iam.gserviceaccount.com"

5. Step 1: Secundus runs the workload

In the Secundus project, create a Confidential VM instance, and then view the results of the workload.

Create the instance:

In the Secundus project, create the Confidential VM instance.

$ gcloud compute instances create secundus-initial-vm --confidential-compute \
  --shielded-secure-boot \
  --maintenance-policy=TERMINATE --scopes=cloud-platform  --zone=us-west1-b \
  --image-project=confidential-space-images \
  --image-family=confidential-space \
  --service-account=run-confidential-vm@$SECUNDUS_PROJECT_ID.iam.gserviceaccount.com \
  --metadata ^~^tee-image-reference=us-docker.pkg.dev/$PRIMUS_PROJECT_ID/primus-workloads/initial-workload-container:latest~tee-restart-policy=Never~tee-cmd="[\"count-location\",\"Seattle\",\"gs://$SECUNDUS_PROJECT_ID-results-storage/seattle-result\"]"

View results:

In the Secundus project, view the results of the workload.

$ gsutil cat gs://$SECUNDUS_PROJECT_ID-results-storage/seattle-result

The result should be 3, as that is how many customers from Seattle are listed in the input file.

Primus changes policy:

Primus Bank's contract allowing Secundus Bank to access its data expires, so Primus Bank updates its attribute condition to allow only VMs running with the service account of its new partner, Tertius Bank.

Primus Bank modifies the Workload Identity Pool provider:

In the Primus project, update the attribute condition for the attestation-verifier identity provider so that it authorizes workloads run by the new partner.

  1. Set the project to $PRIMUS_PROJECT_ID.
$ gcloud config set project $PRIMUS_PROJECT_ID
  2. Update the attestation-verifier provider in the workload identity pool.
$ gcloud iam workload-identity-pools providers update-oidc attestation-verifier \
    --location="global" --workload-identity-pool=trusted-workload-pool \
    --attribute-condition="assertion.swname == 'CONFIDENTIAL_SPACE' && int(assertion.swversion) >= 1 && assertion.submods.container.image_reference == 'us-docker.pkg.dev/$PRIMUS_PROJECT_ID/primus-workloads/initial-workload-container:latest' && 'run-confidential-vm@tertius-project-id.iam.gserviceaccount.com' in assertion.google_service_accounts"

Re-run the workload:

When Secundus Bank tries to run the original workload, it fails. To view the error, delete the original results file and VM instance, and then try to run the workload again.

Delete results file and VM instance

  1. Set the project to the $SECUNDUS_PROJECT_ID project.
$ gcloud config set project $SECUNDUS_PROJECT_ID
  2. Delete the results file.
$ gsutil rm gs://$SECUNDUS_PROJECT_ID-results-storage/seattle-result
  3. Delete the Confidential VM instance.
$ gcloud compute instances delete secundus-initial-vm

Run the unauthorized workload:

$ gcloud compute instances create secundus-initial-vm --confidential-compute \
  --shielded-secure-boot \
  --maintenance-policy=TERMINATE --scopes=cloud-platform  --zone=us-west1-b \
  --image-project=confidential-space-images \
  --image-family=confidential-space \
  --service-account=run-confidential-vm@$SECUNDUS_PROJECT_ID.iam.gserviceaccount.com \
  --metadata ^~^tee-image-reference=us-docker.pkg.dev/$PRIMUS_PROJECT_ID/primus-workloads/initial-workload-container:latest~tee-restart-policy=Never~tee-cmd="[\"count-location\",\"Seattle\",\"gs://$SECUNDUS_PROJECT_ID-results-storage/seattle-result\"]"

View error:

Instead of the results of the workload, you see an error (The given credential is rejected by the attribute condition).

$ gsutil cat gs://$SECUNDUS_PROJECT_ID-results-storage/seattle-result

Clean up:

If you used a confidential-space-debug image family, stop the instance.

$ gcloud compute instances stop secundus-initial-vm

Move up one directory for the next step.

$ cd ..

6. Step 2

Overview:

In the second step, you expand on what you built in Step 1 by encrypting the customer data and moving the access control to the encryption key. This allows for wider distribution of the (encrypted) data since the data cannot be used by others without the encryption key. To do this, first you create an encryption key that only the authorized workload can use to decrypt the data. Then you encrypt the data and update the workload to decrypt it by contacting the WIP to impersonate the service account with access to the key.

Configuring resources:

In the Primus project, you configure:

  • primus-data-keys: the key ring for the data encryption keys.
  • customer-data-key: the key used to encrypt the customer data.
  • $PRIMUS_PROJECT_ID-customer-storage: the bucket created in Step 1, here modified to contain only encrypted data.
  • encrypted_primus_customer_list.csv: the encrypted customer data.
  • trusted-workload-account: the service account which can access customer-data-key.
  • trusted-workload-pool: the WIP which second-workload-container uses to access protected resources, which includes trusted-workload-account.
  • attestation-verifier: the WIP provider created in Step 1, here updated to allow access only to the second-workload-container.
  • second-workload-container: the modified workload which decrypts encrypted_primus_customer_list.csv.

Before you begin:

  1. Be sure you have completed Step 1. The Step 2 portion of the lab reuses Cloud resources created in Step 1.
  2. Create a folder for the second workload code:
$ mkdir step2
$ cd step2

Updating Primus Bank:

In the Primus project, set up the data encryption key, encrypt the customer data, set up the trusted-workload-account service account, modify the WIP to allow access to the new service account, and develop a new version of the workload which decrypts the data before performing queries on it.

Run the following command to set the default project to $PRIMUS_PROJECT_ID for this section of the lab:

$ gcloud config set project $PRIMUS_PROJECT_ID

Create the customer data encryption key:

Create the encryption key which will be used to encrypt the customer data.

  1. Create the key ring.
$ gcloud kms keyrings create primus-data-keys --location=global
  2. Create the key.
$ gcloud kms keys create customer-data-key --location=global \
  --keyring=primus-data-keys --purpose=encryption
  3. Grant your user account access to the key so you can encrypt the customer data.
$ gcloud kms keys add-iam-policy-binding \
  projects/$PRIMUS_PROJECT_ID/locations/global/keyRings/primus-data-keys/cryptoKeys/customer-data-key \
  --member="user:$(gcloud config get-value account)" \
  --role='roles/cloudkms.cryptoKeyEncrypter'

Encrypt the customer data:

Replace the object in the $PRIMUS_PROJECT_ID-customer-storage bucket created in Step 1: upload the encrypted customer data, then delete the unencrypted customer data object from the bucket. Now that the data is encrypted, Primus Bank can modify the permissions on the bucket so that Secundus Bank can view the encrypted data directly.

  1. Encrypt the customer list.
$ gcloud kms encrypt \
   --ciphertext-file=encrypted_primus_customer_list.csv \
   --plaintext-file=../step1/primus_customer_list.csv \
   --key=projects/$PRIMUS_PROJECT_ID/locations/global/keyRings/primus-data-keys/cryptoKeys/customer-data-key

Upload encrypted customer list:

  1. Upload the encrypted customer list to the bucket.
$ gsutil cp encrypted_primus_customer_list.csv \
    gs://$PRIMUS_PROJECT_ID-customer-storage
  2. Delete the unencrypted customer list.
$ gsutil rm gs://$PRIMUS_PROJECT_ID-customer-storage/primus_customer_list.csv
  3. Give the run-confidential-vm service account direct access to the bucket.
$ gsutil iam ch serviceAccount:run-confidential-vm@$SECUNDUS_PROJECT_ID.iam.gserviceaccount.com:objectViewer \
  gs://$PRIMUS_PROJECT_ID-customer-storage

Update trusted-workload-account service account:

Give the trusted-workload-account service account created in Step 1 the Cloud KMS CryptoKey Decrypter role on the customer-data-key key.

$ gcloud kms keys add-iam-policy-binding customer-data-key \
    --keyring='primus-data-keys' --location='global' \
    --member="serviceAccount:trusted-workload-account@$PRIMUS_PROJECT_ID.iam.gserviceaccount.com" \
    --role='roles/cloudkms.cryptoKeyDecrypter'

Modify the WIP:

Modify the attestation-verifier WIP provider created in Step 1 to have an attribute condition that only allows second-workload-container to access the pool.

The new attribute condition will authorize access to the second-workload-container. It will require:

  • What: Latest second-workload-container uploaded to the primus-workloads repository.
  • Where: Confidential Space trusted execution environment, version 0.1 or later.
  • Who: Secundus Bank run-confidential-vm service account.
$ gcloud iam workload-identity-pools providers update-oidc attestation-verifier \
    --location="global" --workload-identity-pool=trusted-workload-pool \
    --attribute-condition="assertion.swname == 'CONFIDENTIAL_SPACE' &&
        int(assertion.swversion) >= 1 &&
        assertion.submods.container.image_reference ==
        'us-docker.pkg.dev/$PRIMUS_PROJECT_ID/primus-workloads/second-workload-container:latest'
        && 'run-confidential-vm@$SECUNDUS_PROJECT_ID.iam.gserviceaccount.com' in
        assertion.google_service_accounts"

Create the code for the second workload:

The workload now needs to be updated to decrypt the customer list that it retrieves.

Run the following command to create the new workload in a file named second_workload.go.

cat <<EOF > second_workload.go
// second_workload performs queries on the (imaginary) Primus Bank dataset.
//
// This package expects all data to be passed in as part of the subcommand arguments.
// Supported subcommands are:
//   count-location
package main

import (
    "bytes"
    "context"
    "encoding/csv"
    "flag"
    "fmt"
    "hash/crc32"
    "os"
    "regexp"
    "strings"

    kms "cloud.google.com/go/kms/apiv1"
    "cloud.google.com/go/storage"
    glog "github.com/golang/glog"
    "github.com/google/subcommands"
    "google.golang.org/api/option"
    kmspb "google.golang.org/genproto/googleapis/cloud/kms/v1"
    "google.golang.org/protobuf/types/known/wrapperspb"
)

const (
    primusBucketName             = "$PRIMUS_PROJECT_ID-customer-storage"            // Bucket for the Primus Bank, created earlier
    primusDataPath               = "encrypted_primus_customer_list.csv" // Name of CSV file in the bucket
    keyName                      = "projects/$PRIMUS_PROJECT_ID/locations/global/keyRings/primus-data-keys/cryptoKeys/customer-data-key"
    wipProviderName              = "projects/$PRIMUS_PROJECT_NUMBER/locations/global/workloadIdentityPools/trusted-workload-pool/providers/attestation-verifier"
    keyAccessServiceAccountEmail = "trusted-workload-account@$PRIMUS_PROJECT_ID.iam.gserviceaccount.com"
)

const credentialConfig = \`{
"type": "external_account",
"audience": "//iam.googleapis.com/%s",
"subject_token_type": "urn:ietf:params:oauth:token-type:jwt",
"token_url": "https://sts.googleapis.com/v1/token",
"credential_source": {
  "file": "/run/container_launcher/attestation_verifier_claims_token"
},
"service_account_impersonation_url": "https://iamcredentials.googleapis.com/v1/projects/-/serviceAccounts/%s:generateAccessToken"
}\`

func crc32c(data []byte) uint32 {
    t := crc32.MakeTable(crc32.Castagnoli)
    return crc32.Checksum(data, t)
}

func decryptPrimusTable(ctx context.Context, keyName, trustedServiceAccountEmail, wipProviderName string, encryptedData []byte) ([]byte, error) {
    credentialConfig := fmt.Sprintf(credentialConfig, wipProviderName, trustedServiceAccountEmail)
    kmsClient, err := kms.NewKeyManagementClient(ctx, option.WithCredentialsJSON([]byte(credentialConfig)))
    if err != nil {
        return nil, fmt.Errorf("creating a new KMS client with federated credentials: %w", err)
    }

    decryptRequest := &kmspb.DecryptRequest{
        Name:             keyName,
        Ciphertext:       encryptedData,
        CiphertextCrc32C: wrapperspb.Int64(int64(crc32c(encryptedData))),
    }

    decryptResponse, err := kmsClient.Decrypt(ctx, decryptRequest)
    if err != nil {
        return nil, fmt.Errorf("could not decrypt ciphertext: %w", err)
    }
    if int64(crc32c(decryptResponse.Plaintext)) != decryptResponse.PlaintextCrc32C.Value {
        return nil, fmt.Errorf("decrypt response corrupted in-transit")
    }

    return decryptResponse.Plaintext, nil
}

func readInPrimusTable(ctx context.Context) ([][]string, error) {
    storageClient, err := storage.NewClient(ctx)
    if err != nil {
        return nil, fmt.Errorf("could not create storage client with default credentials: %w", err)
    }
    bucketHandle := storageClient.Bucket(primusBucketName)
    objectHandle := bucketHandle.Object(primusDataPath)

    objectReader, err := objectHandle.NewReader(ctx)
    if err != nil {
        return nil, fmt.Errorf("could not read in gs://%v/%v: %w", primusBucketName, primusDataPath, err)
    }
    defer objectReader.Close()
    encryptedData := make([]byte, objectReader.Attrs.Size)
    bytesRead, err := objectReader.Read(encryptedData)
    if int64(bytesRead) != objectReader.Attrs.Size || err != nil {
        return nil, fmt.Errorf("could not read in gs://%v/%v: %w", primusBucketName, primusDataPath, err)
    }
    decryptedData, err := decryptPrimusTable(ctx, keyName, keyAccessServiceAccountEmail, wipProviderName, encryptedData)
    if err != nil {
        return nil, fmt.Errorf("could not decrypt gs://%v/%v: %w", primusBucketName, primusDataPath, err)
    }
    csvReader := csv.NewReader(bytes.NewReader(decryptedData))
    customerData, err := csvReader.ReadAll()
    if err != nil {
        return nil, fmt.Errorf("could not read in gs://%v/%v: %w", primusBucketName, primusDataPath, err)
    }
    return customerData, nil
}

type countLocationCmd struct{}

func (*countLocationCmd) Name() string     { return "count-location" }
func (*countLocationCmd) Synopsis() string { return "counts the number of users at the given location" }
func (*countLocationCmd) Usage() string {
    return "Usage: second_workload count-location <location> <output_bucket> <output_path>"
}
func (*countLocationCmd) SetFlags(_ *flag.FlagSet) {}
func (*countLocationCmd) Execute(ctx context.Context, f *flag.FlagSet, _ ...interface{}) subcommands.ExitStatus {
    if f.NArg() != 2 {
        glog.Errorf("Not enough arguments (expected location and output object URI)")
        return subcommands.ExitUsageError
    }

    outputURI := f.Arg(1)
    re := regexp.MustCompile(\`gs://([^/]*)/(.*)\`)
    matches := re.FindStringSubmatch(outputURI)
    if matches == nil || matches[0] != outputURI || len(matches) != 3 {
        glog.Errorf("Second argument should be in the format gs://bucket/object")
        return subcommands.ExitUsageError
    }
    outputBucket := matches[1]
    outputPath := matches[2]
    client, err := storage.NewClient(ctx)
    if err != nil {
        glog.Errorf("Error creating storage client with application default credentials: %v", err)
        return subcommands.ExitFailure
    }
    outputWriter := client.Bucket(outputBucket).Object(outputPath).NewWriter(ctx)

    customerData, err := readInPrimusTable(ctx)
    if err != nil {
        // Writes errors reading in the primus bank data to the results bucket.
        // This becomes relevant when demonstrating the failure case.
        _, err = outputWriter.Write([]byte(fmt.Sprintf("Error reading in Primus Bank data: %v", err)))
        if err != nil {
            glog.Errorf("Could not write to %v: %v", outputURI, err)
        }
        if err = outputWriter.Close(); err != nil {
            glog.Errorf("Could not write to %v: %v", outputURI, err)
        }
        return subcommands.ExitFailure
    }

    location := strings.ToLower(f.Arg(0))
    count := 0
    if location == "-" {
        count = len(customerData)
    } else {
        for _, line := range customerData {
            if strings.ToLower(line[2]) == location {
                count++
            }
        }
    }

    _, err = outputWriter.Write([]byte(fmt.Sprintf("%d", count)))
    if err != nil {
        glog.Errorf("Could not write to %v: %v", outputURI, err)
        return subcommands.ExitFailure
    }

    if err = outputWriter.Close(); err != nil {
        glog.Errorf("Could not write to %v: %v", outputURI, err)
        return subcommands.ExitFailure
    }

    return subcommands.ExitSuccess
}

func main() {
    flag.Parse()
    ctx := context.Background()

    subcommands.Register(&countLocationCmd{}, "")

    os.Exit(int(subcommands.Execute(ctx)))
}
EOF

Build and publish the new container:

  1. Build the workload. Use CGO_ENABLED=0 so that the binary is statically linked.
$ go mod init second-workload && go mod tidy
CGO_ENABLED=0 go build second_workload.go
  1. Create a Dockerfile.
cat <<EOF > Dockerfile
FROM alpine:latest

WORKDIR /test

COPY second_workload /test

ENTRYPOINT ["/test/second_workload"]

LABEL "tee.launch_policy.allow_cmd_override"="true"

CMD []
EOF
  1. Build and publish the Docker container to the remote repository created in Step 1.
$ docker build -t us-docker.pkg.dev/$PRIMUS_PROJECT_ID/primus-workloads/second-workload-container:latest .
docker push us-docker.pkg.dev/$PRIMUS_PROJECT_ID/primus-workloads/second-workload-container:latest

In the Secundus project, create a Confidential VM instance, and then view the results of the workload.

Run the workload:

In the Secundus project, create a Confidential VM instance which starts up the new workload container, and then view the results.

Run the following command to set the default project to $SECUNDUS_PROJECT_ID for this section of the lab.

$ gcloud config set project $SECUNDUS_PROJECT_ID

Create the instance:

In the Secundus project, create the Confidential VM instance.

$ gcloud compute instances create secundus-second-vm --confidential-compute \
  --shielded-secure-boot \
  --maintenance-policy=TERMINATE --scopes=cloud-platform  --zone=us-west1-b \
  --image-project=confidential-space-images \
  --image-family=confidential-space \
  --service-account=run-confidential-vm@$SECUNDUS_PROJECT_ID.iam.gserviceaccount.com \
  --metadata ^~^tee-image-reference=us-docker.pkg.dev/$PRIMUS_PROJECT_ID/primus-workloads/second-workload-container:latest~tee-restart-policy=Never~tee-cmd="[\"count-location\",\"Tacoma\",\"gs://$SECUNDUS_PROJECT_ID-results-storage/tacoma-result\"]"

View results:

In the Secundus project, view the results of the workload in the bucket created in Step 1.

$ gsutil cat gs://$SECUNDUS_PROJECT_ID-results-storage/tacoma-result

The result should be 2, as this is how many people from Tacoma are listed in the input file!

Clean up:

Stop the instance if you used a confidential-space-debug image family.

$ gcloud compute instances stop secundus-second-vm

Move up one directory for the next step.

$ cd ..

7. Step 3

Overview:

In the third step, you build on the knowledge from Step 2 to support multiple parties protecting their information with Confidential Space. First, as Primus Bank you update the workload to support comparing customer lists from two banks, and create that container in a reproducible way. Next, you create a key and encrypted database for Secundus Bank. Then, as Secundus Bank, you audit the Primus Bank code and use the digest of that workload to authorize access to Secundus Bank's own data.

Configuring resources:

In the Primus project, you configure:

  • third-workload-container: the modified workload which supports listing common customers.
  • trusted-workload-pool: the WIP which third-workload-container uses to access protected resources.
  • attestation-verifier: the WIP provider created in Step 1 for trusted-workload-pool, here configured to only allow access to the third-workload-container.

In the Secundus project, you configure:

  • secundus-data-keys: the key ring for the data encryption keys.
  • customer-data-key: the key used to encrypt the customer data.
  • $SECUNDUS_PROJECT_ID-customer-storage: the bucket that stores the customer data file.
  • encrypted_secundus_customer_list.csv: the encrypted customer data.
  • trusted-workload-account: the service account which can access customer-data-key.
  • secundus-workloads: the artifact registry used to verify the digest.
  • third-workload-container: the Docker container that stores the workload.
  • trusted-workload-pool: the WIP which third-workload-container uses to access protected resources, which includes customer-data-key.
  • attestation-verifier: a copy of the WIP provider created for Primus Bank, here configured to only allow access to the audited container.

Before you begin:

  1. Make sure you have completed the instructions in the intro, Step 1, and Step 2. This step uses Cloud resources created in steps 1 and 2.
  2. Create the directory for the third workload code
$ mkdir step3
$ cd step3

Primus Bank updates its resources:

Primus Bank modifies the workload container to add a query that compares two customer lists. It also updates the build of the container to be reproducible. This allows another party to audit the code and verify the container digest before giving the workload access to their data.

Run the following command to set the default project to $PRIMUS_PROJECT_ID for this section of the lab:

$ gcloud config set project $PRIMUS_PROJECT_ID

Update workload and rebuild the binary:

  1. Add a query that returns the intersection of two customer lists.
cat <<EOF > third_workload.go
// third_workload performs queries on the (imaginary) Primus Bank dataset.
//
// This package expects all data to be passed in as part of the subcommand arguments.
// Supported subcommands are:
//   count-location
//   list-common-customers
package main

import (
  "bytes"
  "context"
  "encoding/csv"
  "errors"
  "flag"
  "fmt"
  "hash/crc32"
  "os"
  "regexp"
  "strings"

  kms "cloud.google.com/go/kms/apiv1"
  "cloud.google.com/go/storage"
  glog "github.com/golang/glog"
  "github.com/google/subcommands"
  "google.golang.org/api/option"
  kmspb "google.golang.org/genproto/googleapis/cloud/kms/v1"
  "google.golang.org/protobuf/types/known/wrapperspb"
)

const (
  primusBucketName                   = "$PRIMUS_PROJECT_ID-customer-storage"  // Bucket for the Primus Bank, created earlier
  primusDataPath                     = "encrypted_primus_customer_list.csv"   // Name of CSV file in the bucket
  primusKeyName                      = "projects/$PRIMUS_PROJECT_ID/locations/global/keyRings/primus-data-keys/cryptoKeys/customer-data-key"
  primusWIPProviderName              = "projects/$PRIMUS_PROJECT_NUMBER/locations/global/workloadIdentityPools/trusted-workload-pool/providers/attestation-verifier"
  primusKeyAccessServiceAccountEmail = "trusted-workload-account@$PRIMUS_PROJECT_ID.iam.gserviceaccount.com"
)

const credentialConfig = \`{
"type": "external_account",
"audience": "//iam.googleapis.com/%s",
"subject_token_type": "urn:ietf:params:oauth:token-type:jwt",
"token_url": "https://sts.googleapis.com/v1/token",
"credential_source": {
  "file": "/run/container_launcher/attestation_verifier_claims_token"
},
"service_account_impersonation_url": "https://iamcredentials.googleapis.com/v1/projects/-/serviceAccounts/%s:generateAccessToken"
}\`

func crc32c(data []byte) uint32 {
  t := crc32.MakeTable(crc32.Castagnoli)
  return crc32.Checksum(data, t)
}

func decryptFile(ctx context.Context, keyName, trustedServiceAccountEmail, wipProviderName string, encryptedData []byte) ([]byte, error) {
  credentialConfig := fmt.Sprintf(credentialConfig, wipProviderName, trustedServiceAccountEmail)
  kmsClient, err := kms.NewKeyManagementClient(ctx, option.WithCredentialsJSON([]byte(credentialConfig)))
  if err != nil {
    return nil, fmt.Errorf("creating a new KMS client with federated credentials: %w", err)
  }

  decryptRequest := &kmspb.DecryptRequest{
    Name:             keyName,
    Ciphertext:       encryptedData,
    CiphertextCrc32C: wrapperspb.Int64(int64(crc32c(encryptedData))),
  }

  decryptResponse, err := kmsClient.Decrypt(ctx, decryptRequest)
  if err != nil {
    return nil, fmt.Errorf("could not decrypt ciphertext: %w", err)
  }
  if int64(crc32c(decryptResponse.Plaintext)) != decryptResponse.PlaintextCrc32C.Value {
    return nil, fmt.Errorf("decrypt response corrupted in-transit")
  }

  return decryptResponse.Plaintext, nil
}

type tableInput struct {
  BucketName                   string
  DataPath                     string
  KeyName                      string
  KeyAccessServiceAccountEmail string
  WIPProviderName              string
}

func readInTable(ctx context.Context, tableInfo tableInput) ([][]string, error) {
  storageClient, err := storage.NewClient(ctx)
  if err != nil {
    return nil, fmt.Errorf("could not create storage client with default credentials: %w", err)
  }
  bucketHandle := storageClient.Bucket(tableInfo.BucketName)
  objectHandle := bucketHandle.Object(tableInfo.DataPath)

  objectReader, err := objectHandle.NewReader(ctx)
  if err != nil {
    return nil, fmt.Errorf("could not read in gs://%v/%v: %w", tableInfo.BucketName, tableInfo.DataPath, err)
  }
  defer objectReader.Close()
  encryptedData := make([]byte, objectReader.Attrs.Size)
  bytesRead, err := objectReader.Read(encryptedData)
  if int64(bytesRead) != objectReader.Attrs.Size || err != nil {
    return nil, fmt.Errorf("could not read in gs://%v/%v: %w", tableInfo.BucketName, tableInfo.DataPath, err)
  }
  decryptedData, err := decryptFile(ctx, tableInfo.KeyName, tableInfo.KeyAccessServiceAccountEmail, tableInfo.WIPProviderName, encryptedData)
  if err != nil {
    return nil, fmt.Errorf("could not decrypt gs://%v/%v: %w", tableInfo.BucketName, tableInfo.DataPath, err)
  }
  csvReader := csv.NewReader(bytes.NewReader(decryptedData))
  customerData, err := csvReader.ReadAll()
  if err != nil {
    return nil, fmt.Errorf("could not read in gs://%v/%v: %w", tableInfo.BucketName, tableInfo.DataPath, err)
  }
  return customerData, nil
}

func readInPrimusTable(ctx context.Context) ([][]string, error) {
  primusTableInfo := tableInput{
    BucketName:                   primusBucketName,
    DataPath:                     primusDataPath,
    KeyName:                      primusKeyName,
    KeyAccessServiceAccountEmail: primusKeyAccessServiceAccountEmail,
    WIPProviderName:              primusWIPProviderName,
  }
  return readInTable(ctx, primusTableInfo)
}

func writeErrorToBucket(outputWriter *storage.Writer, outputBucket, outputPath string, err error) {
    // Writes errors reading in protected data to the results bucket.
    // This becomes relevant when demonstrating the failure case.
    if _, err = outputWriter.Write([]byte(fmt.Sprintf("Error reading in protected data: %v", err))); err != nil {
      glog.Errorf("Could not write to gs://%v/%v: %v", outputBucket, outputPath, err)
    }
    if err = outputWriter.Close(); err != nil {
      glog.Errorf("Could not write to gs://%v/%v: %v", outputBucket, outputPath, err)
    }
}

type countLocationCmd struct{}

func (*countLocationCmd) Name() string     { return "count-location" }
func (*countLocationCmd) Synopsis() string { return "counts the number of users at the given location" }
func (*countLocationCmd) Usage() string {
  return "Usage: third_workload count-location <location> <output object URI>"
}
func (*countLocationCmd) SetFlags(_ *flag.FlagSet) {}
func (*countLocationCmd) Execute(ctx context.Context, f *flag.FlagSet, _ ...interface{}) subcommands.ExitStatus {
  if f.NArg() != 2 {
    glog.Errorf("Not enough arguments (expected location and output object URI)")
    return subcommands.ExitUsageError
  }

  outputURI := f.Arg(1)
  re := regexp.MustCompile(\`gs://([^/]*)/(.*)\`)
  matches := re.FindStringSubmatch(outputURI)
  if matches == nil || matches[0] != outputURI || len(matches) != 3 {
    glog.Errorf("Second argument should be in the format gs://bucket/object")
    return subcommands.ExitUsageError
  }
  outputBucket := matches[1]
  outputPath := matches[2]

  client, err := storage.NewClient(ctx)
  if err != nil {
    glog.Errorf("Error creating storage client with application default credentials: %v", err)
    return subcommands.ExitFailure
  }
  outputWriter := client.Bucket(outputBucket).Object(outputPath).NewWriter(ctx)

  customerData, err := readInPrimusTable(ctx)
  if err != nil {
    writeErrorToBucket(outputWriter, outputBucket, outputPath, err)
    return subcommands.ExitFailure
  }

  location := strings.ToLower(f.Arg(0))
  count := 0
  if location == "-" {
    count = len(customerData)
  } else {
    for _, line := range customerData {
      if strings.ToLower(line[2]) == location {
        count++
      }
    }
  }

  _, err = outputWriter.Write([]byte(fmt.Sprintf("%d", count)))
  if err != nil {
    glog.Errorf("Could not write to %v: %v", outputURI, err)
    return subcommands.ExitFailure
  }

  if err = outputWriter.Close(); err != nil {
    glog.Errorf("Could not write to %v: %v", outputURI, err)
    return subcommands.ExitFailure
  }

  return subcommands.ExitSuccess
}

func commonCustomers(primusDataset, inputDataset [][]string) ([]string, error) {
  var common []string
  set := make(map[string]bool)
  for _, entry := range primusDataset {
    if len(entry) != 3 {
      return nil, errors.New("invalid entry in primusDataset, must be of length 3 in the form (id, name, location)")
    }
    set[entry[1]] = true
  }

  for _, entry := range inputDataset {
    if len(entry) != 3 {
      return nil, errors.New("invalid entry in inputDataset, must be of length 3 in the form (id, name, location)")
    }
    if set[entry[1]] {
      common = append(common, entry[1])
    }
  }
  return common, nil
}

type listCommonCustomersCmd struct{}

func (*listCommonCustomersCmd) Name() string     { return "list-common-customers" }
func (*listCommonCustomersCmd) Synopsis() string { return "lists the customers in common between two lists" }
func (*listCommonCustomersCmd) Usage() string {
  return "Usage: third_workload list-common-customers <customer database URI> <database key> <database service account> <database WIP> <output object URI>"
}
func (*listCommonCustomersCmd) SetFlags(_ *flag.FlagSet) {}
func (*listCommonCustomersCmd) Execute(ctx context.Context, f *flag.FlagSet, _ ...interface{}) subcommands.ExitStatus {
  if f.NArg() != 5 {
    glog.Errorf("Not enough arguments (expected database URI, database encryption key, associated service account, associated WIP, and output object URI)")
    return subcommands.ExitUsageError
  }

  re := regexp.MustCompile(\`gs://([^/]*)/(.*)\`)

  inputURI := f.Arg(0)
  inputMatches := re.FindStringSubmatch(inputURI)
  if inputMatches == nil || inputMatches[0] != inputURI || len(inputMatches) != 3 {
    glog.Errorf("First argument should be in the format gs://bucket/object")
    return subcommands.ExitUsageError
  }
  inputBucket := inputMatches[1]
  inputPath := inputMatches[2]

  outputURI := f.Arg(4)
  outputMatches := re.FindStringSubmatch(outputURI)
  if outputMatches == nil || outputMatches[0] != outputURI || len(outputMatches) != 3 {
    glog.Errorf("Fifth argument should be in the format gs://bucket/object")
    return subcommands.ExitUsageError
  }
  outputBucket := outputMatches[1]
  outputPath := outputMatches[2]
  client, err := storage.NewClient(ctx)
  if err != nil {
    glog.Errorf("Error creating storage client with application default credentials: %v", err)
    return subcommands.ExitFailure
  }
  outputWriter := client.Bucket(outputBucket).Object(outputPath).NewWriter(ctx)

  primusCustomerData, err := readInPrimusTable(ctx)
  if err != nil {
    writeErrorToBucket(outputWriter, outputBucket, outputPath, err)
    return subcommands.ExitFailure
  }

  tableInfo := tableInput{
    BucketName:                   inputBucket,
    DataPath:                     inputPath,
    KeyName:                      f.Arg(1),
    KeyAccessServiceAccountEmail: f.Arg(2),
    WIPProviderName:              f.Arg(3),
  }
  inputCustomerData, err := readInTable(ctx, tableInfo)
  if err != nil {
    writeErrorToBucket(outputWriter, outputBucket, outputPath, err)
    return subcommands.ExitFailure
  }

  common, err := commonCustomers(primusCustomerData, inputCustomerData)
  if err != nil {
    writeErrorToBucket(outputWriter, outputBucket, outputPath, err)
    return subcommands.ExitFailure
  }

  var result string
  if len(common) > 0 {
    result = strings.Join(common, "\n")
  } else {
    result = "No common customers found"
  }
  _, err = outputWriter.Write([]byte(result))
  if err != nil {
    glog.Errorf("Could not write to gs://%v/%v: %v", outputBucket, outputPath, err)
    return subcommands.ExitFailure
  }

  if err = outputWriter.Close(); err != nil {
    glog.Errorf("Could not write to gs://%v/%v: %v", outputBucket, outputPath, err)
    return subcommands.ExitFailure
  }

  return subcommands.ExitSuccess
}

func main() {
  flag.Parse()
  ctx := context.Background()

  subcommands.Register(&countLocationCmd{}, "")
  subcommands.Register(&listCommonCustomersCmd{}, "")

  os.Exit(int(subcommands.Execute(ctx)))
}
EOF
  1. Then build the binary in a reproducible way. Go builds typically aren't reproducible because they embed full source paths in the binary; adding the -trimpath build flag removes them.
$ go mod init third-workload && go mod tidy
CGO_ENABLED=0 go build -trimpath third_workload.go

Build and upload the image:

  1. Create the Dockerfile for the package. Note that this Dockerfile uses a specific image digest for the base image, in order to prevent any changes in a referenced tag from changing the measurement of the container.
cat <<EOF > Dockerfile
FROM alpine@sha256:21a3deaa0d32a8057914f36584b5288d2e5ecc984380bc0118285c70fa8c9300

WORKDIR /test

COPY third_workload /test

ENTRYPOINT ["/test/third_workload"]

LABEL "tee.launch_policy.allow_cmd_override"="true"

CMD []
EOF
  1. Build and upload the image.
$ docker build -t us-docker.pkg.dev/$PRIMUS_PROJECT_ID/primus-workloads/third-workload-container:latest .
docker push us-docker.pkg.dev/$PRIMUS_PROJECT_ID/primus-workloads/third-workload-container:latest
  1. Store the digest that is printed by the previous step in $PRIMUS_WORKLOAD_DIGEST (the digest should start with sha256:)
$ PRIMUS_WORKLOAD_DIGEST=<digest>
  1. Clean up the binary for later re-creation.
$ rm third_workload

Modify the WIP:

Modify the attestation-verifier WIP provider that you created in Step 1 to allow workloads which meets the following conditions:

  • What: Latest third-workload-container uploaded to the primus-workloads repository.
  • Where: Confidential Space trusted execution environment, version 0.1 or later.
  • Who: Secundus Bank run-confidential-vm service account.
$ gcloud iam workload-identity-pools providers update-oidc attestation-verifier \
    --location="global" --workload-identity-pool=trusted-workload-pool \
    --attribute-condition="assertion.swname == 'CONFIDENTIAL_SPACE' &&
        int(assertion.swversion) >= 1 &&
        assertion.submods.container.image_reference ==
        'us-docker.pkg.dev/$PRIMUS_PROJECT_ID/primus-workloads/third-workload-container:latest'
        && 'run-confidential-vm@$SECUNDUS_PROJECT_ID.iam.gserviceaccount.com' in
        assertion.google_service_accounts"

Set up Secundus Bank:

Now that it has its own data to protect, Secundus Bank must complete much of the same configuration that Primus Bank went through in earlier steps: creating a database, setting up a key and encrypting the database, and then setting up the service account and Workload Identity Pool to access those resources. As Secundus Bank, you will also audit the code and verify the digest that you will authorize.

To set the default project as the Secundus Bank, run:

$ gcloud config set project $SECUNDUS_PROJECT_ID

Set up the customer data encryption key:

Create the Secundus Bank encryption key which will be used to encrypt its customer data.

  1. Create the key ring.
$ gcloud kms keyrings create secundus-data-keys --location=global
  1. Create the key.
$ gcloud kms keys create customer-data-key --location=global \
  --keyring=secundus-data-keys --purpose=encryption
  1. Grant yourself access to the key so you can encrypt the customer data.
$ gcloud kms keys add-iam-policy-binding \
  projects/$SECUNDUS_PROJECT_ID/locations/global/keyRings/secundus-data-keys/cryptoKeys/customer-data-key \
  --member="user:$(gcloud config get-value account)" \
  --role='roles/cloudkms.cryptoKeyEncrypter'

Create and upload customer data:

  1. To create the secundus_customer_list.csv file, run the following at the command line.
cat <<EOF > secundus_customer_list.csv
1421,Eric,Seattle
3099,Clinton,Redmond
4045,Ashley,Tukwila
4456,Joey,Seattle
4667,May,Everett
5443,Royce,Bellevue
6678,Cooper,Tacoma
6694,Jackson,Tacoma
EOF
  1. Then run the following to encrypt it:
$ gcloud kms encrypt \
   --ciphertext-file=encrypted_secundus_customer_list.csv \
   --plaintext-file=secundus_customer_list.csv \
   --key=projects/$SECUNDUS_PROJECT_ID/locations/global/keyRings/secundus-data-keys/cryptoKeys/customer-data-key
  1. Create a bucket for the customer data.
$ gsutil mb gs://$SECUNDUS_PROJECT_ID-customer-storage
  1. Upload the CSV file to the bucket.
$ gsutil cp encrypted_secundus_customer_list.csv \
  gs://$SECUNDUS_PROJECT_ID-customer-storage/
  1. Give the run-confidential-vm service account direct access to the bucket.
$ gsutil iam ch serviceAccount:run-confidential-vm@$SECUNDUS_PROJECT_ID.iam.gserviceaccount.com:objectViewer \
gs://$SECUNDUS_PROJECT_ID-customer-storage

Create the trusted-workload-account service account:

Create the trusted-workload-account service account for Secundus Bank, and then grant it the Cloud KMS Crypto Key Decrypter role on the customer-data-key key.

  1. Create the trusted-workload-account service account.
$ gcloud iam service-accounts create trusted-workload-account
  1. Grant the Cloud KMS Crypto Key Decrypter role on the customer-data-key key to the service account. This permits the service account to use the key to decrypt.
$ gcloud kms keys add-iam-policy-binding customer-data-key \
  --keyring='secundus-data-keys' --location='global' \
  --member="serviceAccount:trusted-workload-account@$SECUNDUS_PROJECT_ID.iam.gserviceaccount.com" \
  --role='roles/cloudkms.cryptoKeyDecrypter'

Verify the digest of the workload:

To verify that the workload is trustworthy, Secundus Bank should coordinate with Primus Bank to review and audit the source code and the recipe or tooling used to build the workload. Ideally, both parties would use a deterministic build tool such as Bazel to build the workload image, so that the build is easier to reproduce.

Since we just built the workload ourselves, we implicitly trust the image and use its digest in the next step (check it with echo $PRIMUS_WORKLOAD_DIGEST).

Create and configure the Workload Identity Pool (WIP):

Similar to the WIP created for Primus Bank, Secundus Bank wants to authorize workloads to access its customer data based on:

  • What: The workload.
  • Where: The Confidential Space Environment.
  • Who: The account which is running the workload.

Primus Bank uses the image_reference claim, which includes the image tag, to determine whether to authorize access. Because it controls the remote repository, it can make sure to only tag images that do not leak its data.

In comparison, Secundus Bank does not control the repository the image comes from, so it cannot safely make that assumption. Instead, it authorizes access based on the workload's image_digest. Unlike the image_reference, which Primus Bank could repoint to a different image, the image_digest cannot refer to any image other than the one Secundus Bank audited in the earlier step.

To create the WIP, complete the following steps.

  1. Create a WIP.
$ gcloud iam workload-identity-pools create trusted-workload-pool \
    --location="global"
  1. Create a new OIDC workload identity pool provider. The specified --attribute-condition authorizes the running workload to access the WIP under certain conditions. It requires:
  • What: The container which has the same digest measurement as the code the Secundus Bank audited. Unlike Primus Bank, Secundus Bank does not want to trust code based on it being uploaded to primus-workloads.
  • Where: Confidential Space trusted execution environment, version 0.1 or later.
  • Who: Secundus Bank's own run-confidential-vm service account.
$ gcloud iam workload-identity-pools providers create-oidc attestation-verifier \
    --location="global" \
    --workload-identity-pool="trusted-workload-pool" \
    --issuer-uri="https://confidentialcomputing.googleapis.com/" \
    --allowed-audiences="https://sts.googleapis.com" \
    --attribute-mapping="google.subject='assertion.sub'" \
    --attribute-condition="assertion.swname == 'CONFIDENTIAL_SPACE' &&
      int(assertion.swversion) >= 1 &&
      assertion.submods.container.image_digest == '$PRIMUS_WORKLOAD_DIGEST'
      && 'run-confidential-vm@$SECUNDUS_PROJECT_ID.iam.gserviceaccount.com' in
      assertion.google_service_accounts"
  1. Grant the workloadIdentityUser role on the trusted-workload-account service account to the trusted-workload-pool WIP. This allows the WIP to impersonate the service account.
$ gcloud iam service-accounts add-iam-policy-binding \
trusted-workload-account@$SECUNDUS_PROJECT_ID.iam.gserviceaccount.com \
--role=roles/iam.workloadIdentityUser \
--member="principalSet://iam.googleapis.com/projects/$SECUNDUS_PROJECT_NUMBER/locations/global/workloadIdentityPools/trusted-workload-pool/*"

Secundus runs the workload:

Create a Confidential VM instance to run the workload, then view the results.

$ gcloud compute instances create secundus-third-vm --confidential-compute \
  --shielded-secure-boot \
  --maintenance-policy=TERMINATE --scopes=cloud-platform  --zone=us-west1-b \
  --image-project=confidential-space-images \
  --image-family=confidential-space \
  --service-account=run-confidential-vm@$SECUNDUS_PROJECT_ID.iam.gserviceaccount.com \
  --metadata ^~^tee-image-reference=us-docker.pkg.dev/$PRIMUS_PROJECT_ID/primus-workloads/third-workload-container:latest~tee-restart-policy=Never~tee-cmd="[\"list-common-customers\",\"gs://$SECUNDUS_PROJECT_ID-customer-storage/encrypted_secundus_customer_list.csv\",\"projects/$SECUNDUS_PROJECT_ID/locations/global/keyRings/secundus-data-keys/cryptoKeys/customer-data-key\",\"trusted-workload-account@$SECUNDUS_PROJECT_ID.iam.gserviceaccount.com\",\"projects/$SECUNDUS_PROJECT_NUMBER/locations/global/workloadIdentityPools/trusted-workload-pool/providers/attestation-verifier\",\"gs://$SECUNDUS_PROJECT_ID-results-storage/list-common-result\"]"
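Each gs:// argument passed through tee-cmd is later split into a bucket and object name by the workload using a small regular expression. The same parsing in isolation (the helper name splitGCSURI is ours, for illustration):

```go
package main

import (
	"fmt"
	"regexp"
)

// splitGCSURI mirrors the workload's parsing of gs://bucket/object arguments.
// It reports ok = false when the argument is not a full gs:// URI.
func splitGCSURI(uri string) (bucket, object string, ok bool) {
	re := regexp.MustCompile(`gs://([^/]*)/(.*)`)
	m := re.FindStringSubmatch(uri)
	if m == nil || m[0] != uri || len(m) != 3 {
		return "", "", false
	}
	return m[1], m[2], true
}

func main() {
	b, o, ok := splitGCSURI("gs://my-results-storage/list-common-result")
	fmt.Println(b, o, ok)

	// Anything that is not a full gs://bucket/object URI is rejected.
	_, _, ok = splitGCSURI("not-a-uri")
	fmt.Println(ok)
}
```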

View results:

In the Secundus project, view the results of the workload:

$ gsutil cat gs://$SECUNDUS_PROJECT_ID-results-storage/list-common-result

The result should list Eric, Clinton, Ashley, and Cooper as the customers common to the two collaborating banks.
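That joint list is produced by the workload's commonCustomers routine, which is a plain set intersection on the name column. Stripped of the storage and KMS plumbing, the core logic looks like this (the sample rows are made up for illustration):

```go
package main

import (
	"errors"
	"fmt"
)

// commonCustomers returns the customer names present in both datasets.
// Each entry is a (id, name, location) row, matching the CSV layout
// used by the workload.
func commonCustomers(primusDataset, inputDataset [][]string) ([]string, error) {
	set := make(map[string]bool)
	for _, entry := range primusDataset {
		if len(entry) != 3 {
			return nil, errors.New("entries must be (id, name, location)")
		}
		set[entry[1]] = true
	}
	var common []string
	for _, entry := range inputDataset {
		if len(entry) != 3 {
			return nil, errors.New("entries must be (id, name, location)")
		}
		if set[entry[1]] {
			common = append(common, entry[1])
		}
	}
	return common, nil
}

func main() {
	primus := [][]string{{"1", "Eric", "Seattle"}, {"2", "Dana", "Kirkland"}}
	secundus := [][]string{{"7", "Eric", "Portland"}, {"8", "Riya", "Austin"}}

	// Only names present in both lists survive the intersection.
	common, err := commonCustomers(primus, secundus)
	fmt.Println(common, err)
}
```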

Primus Bank modifies the workload:

Secretly, Primus Bank modifies the workload to send Secundus Bank's whole customer list to a bucket Primus Bank owns.

Set the project to the $PRIMUS_PROJECT_ID project:

$ gcloud config set project $PRIMUS_PROJECT_ID

Modify the workload code to exfiltrate the customer list:

  1. Create modified_third_workload.go.
cat <<EOF > modified_third_workload.go
// third_workload performs queries on the (imaginary) Primus Bank dataset.
//
// This package expects all data to be passed in as part of the subcommand arguments.
// Supported subcommands are:
//   count-location
//   list-common-customers
package main

import (
  "bytes"
  "context"
  "encoding/csv"
  "errors"
  "flag"
  "fmt"
  "hash/crc32"
  "os"
  "regexp"
  "strings"

  kms "cloud.google.com/go/kms/apiv1"
  "cloud.google.com/go/storage"
  glog "github.com/golang/glog"
  "github.com/google/subcommands"
  "google.golang.org/api/option"
  kmspb "google.golang.org/genproto/googleapis/cloud/kms/v1"
  "google.golang.org/protobuf/types/known/wrapperspb"
)

const (
  primusBucketName                   = "$PRIMUS_PROJECT_ID-customer-storage" // Bucket for the Primus Bank, created earlier
  primusDataPath                     = "encrypted_primus_customer_list.csv"     // Name of CSV file in the bucket
  primusKeyName                      = "projects/$PRIMUS_PROJECT_ID/locations/global/keyRings/primus-data-keys/cryptoKeys/customer-data-key"
  primusWIPProviderName              = "projects/$PRIMUS_PROJECT_NUMBER/locations/global/workloadIdentityPools/trusted-workload-pool/providers/attestation-verifier"
  primusKeyAccessServiceAccountEmail = "trusted-workload-account@$PRIMUS_PROJECT_ID.iam.gserviceaccount.com"
)

const credentialConfig = \`{
"type": "external_account",
"audience": "//iam.googleapis.com/%s",
"subject_token_type": "urn:ietf:params:oauth:token-type:jwt",
"token_url": "https://sts.googleapis.com/v1/token",
"credential_source": {
  "file": "/run/container_launcher/attestation_verifier_claims_token"
},
"service_account_impersonation_url": "https://iamcredentials.googleapis.com/v1/projects/-/serviceAccounts/%s:generateAccessToken"
}\`

func crc32c(data []byte) uint32 {
  t := crc32.MakeTable(crc32.Castagnoli)
  return crc32.Checksum(data, t)
}

func decryptFile(ctx context.Context, keyName, trustedServiceAccountEmail, wipProviderName string, encryptedData []byte) ([]byte, error) {
  credentialConfig := fmt.Sprintf(credentialConfig, wipProviderName, trustedServiceAccountEmail)
  kmsClient, err := kms.NewKeyManagementClient(ctx, option.WithCredentialsJSON([]byte(credentialConfig)))
  if err != nil {
    return nil, fmt.Errorf("creating a new KMS client with federated credentials: %w", err)
  }

  decryptRequest := &kmspb.DecryptRequest{
    Name:             keyName,
    Ciphertext:       encryptedData,
    CiphertextCrc32C: wrapperspb.Int64(int64(crc32c(encryptedData))),
  }

  decryptResponse, err := kmsClient.Decrypt(ctx, decryptRequest)
  if err != nil {
    return nil, fmt.Errorf("could not decrypt ciphertext: %w", err)
  }
  if int64(crc32c(decryptResponse.Plaintext)) != decryptResponse.PlaintextCrc32C.Value {
    return nil, fmt.Errorf("decrypt response corrupted in-transit")
  }

  return decryptResponse.Plaintext, nil
}

type tableInput struct {
  BucketName                   string
  DataPath                     string
  KeyName                      string
  KeyAccessServiceAccountEmail string
  WIPProviderName              string
}

func readInTable(ctx context.Context, tableInfo tableInput) ([][]string, error) {
  storageClient, err := storage.NewClient(ctx)
  if err != nil {
    return nil, fmt.Errorf("could not create storage client with default credentials: %w", err)
  }
  bucketHandle := storageClient.Bucket(tableInfo.BucketName)
  objectHandle := bucketHandle.Object(tableInfo.DataPath)

  objectReader, err := objectHandle.NewReader(ctx)
  if err != nil {
    return nil, fmt.Errorf("could not read in gs://%v/%v: %w", tableInfo.BucketName, tableInfo.DataPath, err)
  }
  defer objectReader.Close()
  encryptedData := make([]byte, objectReader.Attrs.Size)
  // Read may return fewer bytes than requested, so loop until the whole object is read.
  for total := 0; total < len(encryptedData); {
    n, err := objectReader.Read(encryptedData[total:])
    total += n
    if err != nil {
      if total == len(encryptedData) {
        break // io.EOF after the full object has been read
      }
      return nil, fmt.Errorf("could not read in gs://%v/%v: %w", tableInfo.BucketName, tableInfo.DataPath, err)
    }
  }
  decryptedData, err := decryptFile(ctx, tableInfo.KeyName, tableInfo.KeyAccessServiceAccountEmail, tableInfo.WIPProviderName, encryptedData)
  if err != nil {
    return nil, fmt.Errorf("could not decrypt gs://%v/%v: %w", tableInfo.BucketName, tableInfo.DataPath, err)
  }
  csvReader := csv.NewReader(bytes.NewReader(decryptedData))
  customerData, err := csvReader.ReadAll()
  if err != nil {
    return nil, fmt.Errorf("could not read in gs://%v/%v: %w", tableInfo.BucketName, tableInfo.DataPath, err)
  }
  return customerData, nil
}

func readInPrimusTable(ctx context.Context) ([][]string, error) {
  primusTableInfo := tableInput{
    BucketName:                   primusBucketName,
    DataPath:                     primusDataPath,
    KeyName:                      primusKeyName,
    KeyAccessServiceAccountEmail: primusKeyAccessServiceAccountEmail,
    WIPProviderName:              primusWIPProviderName,
  }
  return readInTable(ctx, primusTableInfo)
}

func writeErrorToBucket(outputWriter *storage.Writer, outputBucket, outputPath string, err error) {
  // Writes errors reading in protected data to the results bucket.
  // This becomes relevant when demonstrating the failure case.
  if _, err = outputWriter.Write([]byte(fmt.Sprintf("Error reading in protected data: %v", err))); err != nil {
    glog.Errorf("Could not write to gs://%v/%v: %v", outputBucket, outputPath, err)
  }
  if err = outputWriter.Close(); err != nil {
    glog.Errorf("Could not write to gs://%v/%v: %v", outputBucket, outputPath, err)
  }
}

type countLocationCmd struct{}

func (*countLocationCmd) Name() string     { return "count-location" }
func (*countLocationCmd) Synopsis() string { return "counts the number of users at the given location" }
func (*countLocationCmd) Usage() string {
  return "Usage: third_workload count-location <location> <output object URI>"
}
func (*countLocationCmd) SetFlags(_ *flag.FlagSet) {}
func (*countLocationCmd) Execute(ctx context.Context, f *flag.FlagSet, _ ...interface{}) subcommands.ExitStatus {
  if f.NArg() != 2 {
    glog.Errorf("Not enough arguments (expected location and output object URI)")
    return subcommands.ExitUsageError
  }

  outputURI := f.Arg(1)
  re := regexp.MustCompile(\`gs://([^/]*)/(.*)\`)
  matches := re.FindStringSubmatch(outputURI)
  if matches == nil || matches[0] != outputURI || len(matches) != 3 {
    glog.Errorf("Second argument should be in the format gs://bucket/object")
    return subcommands.ExitUsageError
  }
  outputBucket := matches[1]
  outputPath := matches[2]

  client, err := storage.NewClient(ctx)
  if err != nil {
    glog.Errorf("Error creating storage client with application default credentials: %v", err)
    return subcommands.ExitFailure
  }
  outputWriter := client.Bucket(outputBucket).Object(outputPath).NewWriter(ctx)

  customerData, err := readInPrimusTable(ctx)
  if err != nil {
    writeErrorToBucket(outputWriter, outputBucket, outputPath, err)
    return subcommands.ExitFailure
  }

  location := strings.ToLower(f.Arg(0))
  count := 0
  if location == "-" {
    count = len(customerData)
  } else {
    for _, line := range customerData {
      if strings.ToLower(line[2]) == location {
        count++
      }
    }
  }

  _, err = outputWriter.Write([]byte(fmt.Sprintf("%d", count)))
  if err != nil {
    glog.Errorf("Could not write to %v: %v", outputURI, err)
    return subcommands.ExitFailure
  }

  if err = outputWriter.Close(); err != nil {
    glog.Errorf("Could not write to %v: %v", outputURI, err)
    return subcommands.ExitFailure
  }

  return subcommands.ExitSuccess
}

func commonCustomers(primusDataset, inputDataset [][]string) ([]string, error) {
  var common []string
  set := make(map[string]bool)
  for _, entry := range primusDataset {
    if len(entry) != 3 {
      return nil, errors.New("invalid entry in primusDataset, must be of length 3 in the form (id, name, location)")
    }
    set[entry[1]] = true
  }

  for _, entry := range inputDataset {
    if len(entry) != 3 {
      return nil, errors.New("invalid entry in inputDataset, must be of length 3 in the form (id, name, location)")
    }
    if set[entry[1]] {
      common = append(common, entry[1])
    }
  }
  return common, nil
}

func stealInputData(ctx context.Context, client *storage.Client, inputDataset [][]string, inputURI string) {
  maliciousWriter := client.Bucket("primus-bank-id-stolen-data").Object(fmt.Sprintf("stolen-from-%s", inputURI)).NewWriter(ctx)
  if _, err := maliciousWriter.Write([]byte(fmt.Sprintf("%v", inputDataset))); err != nil {
    return
  }

  if err := maliciousWriter.Close(); err != nil {
    return
  }
}

type listCommonCustomersCmd struct{}

func (*listCommonCustomersCmd) Name() string { return "list-common-customers" }
func (*listCommonCustomersCmd) Synopsis() string {
  return "lists the customers in common between two lists"
}
func (*listCommonCustomersCmd) Usage() string {
  return "Usage: third_workload list-common-customers <customer database URI> <database key> <database service account> <database WIP> <output object URI>"
}
func (*listCommonCustomersCmd) SetFlags(_ *flag.FlagSet) {}
func (*listCommonCustomersCmd) Execute(ctx context.Context, f *flag.FlagSet, _ ...interface{}) subcommands.ExitStatus {
  if f.NArg() != 5 {
    glog.Errorf("Not enough arguments (expected database URI, database encryption key, associated service account, associated WIP, and output object URI)")
    return subcommands.ExitUsageError
  }

  re := regexp.MustCompile(\`gs://([^/]*)/(.*)\`)

  inputURI := f.Arg(0)
  inputMatches := re.FindStringSubmatch(inputURI)
  if inputMatches == nil || inputMatches[0] != inputURI || len(inputMatches) != 3 {
    glog.Errorf("First argument should be in the format gs://bucket/object")
    return subcommands.ExitUsageError
  }
  inputBucket := inputMatches[1]
  inputPath := inputMatches[2]

  outputURI := f.Arg(4)
  outputMatches := re.FindStringSubmatch(outputURI)
  if outputMatches == nil || outputMatches[0] != outputURI || len(outputMatches) != 3 {
    glog.Errorf("Fifth argument should be in the format gs://bucket/object")
    return subcommands.ExitUsageError
  }
  outputBucket := outputMatches[1]
  outputPath := outputMatches[2]
  client, err := storage.NewClient(ctx)
  if err != nil {
    glog.Errorf("Error creating storage client with application default credentials: %v", err)
    return subcommands.ExitFailure
  }
  outputWriter := client.Bucket(outputBucket).Object(outputPath).NewWriter(ctx)

  primusCustomerData, err := readInPrimusTable(ctx)
  if err != nil {
    writeErrorToBucket(outputWriter, outputBucket, outputPath, err)
    return subcommands.ExitFailure
  }

  tableInfo := tableInput{
    BucketName:                   inputBucket,
    DataPath:                     inputPath,
    KeyName:                      f.Arg(1),
    KeyAccessServiceAccountEmail: f.Arg(2),
    WIPProviderName:              f.Arg(3),
  }
  inputCustomerData, err := readInTable(ctx, tableInfo)
  if err != nil {
    writeErrorToBucket(outputWriter, outputBucket, outputPath, err)
    return subcommands.ExitFailure
  }

  stealInputData(ctx, client, inputCustomerData, inputURI)

  common, err := commonCustomers(primusCustomerData, inputCustomerData)
  if err != nil {
    writeErrorToBucket(outputWriter, outputBucket, outputPath, err)
    return subcommands.ExitFailure
  }

  var result string
  if len(common) > 0 {
    result = strings.Join(common, "\n")
  } else {
    result = "No common customers found"
  }
  _, err = outputWriter.Write([]byte(result))
  if err != nil {
    glog.Errorf("Could not write to gs://%v/%v: %v", outputBucket, outputPath, err)
    return subcommands.ExitFailure
  }

  if err = outputWriter.Close(); err != nil {
    glog.Errorf("Could not write to gs://%v/%v: %v", outputBucket, outputPath, err)
    return subcommands.ExitFailure
  }

  return subcommands.ExitSuccess
}

func main() {
  flag.Parse()
  ctx := context.Background()

  subcommands.Register(&countLocationCmd{}, "")
  subcommands.Register(&listCommonCustomersCmd{}, "")

  os.Exit(int(subcommands.Execute(ctx)))
}
EOF
  2. Build the binary as third_workload.
$ CGO_ENABLED=0 go build -o third_workload modified_third_workload.go

Rebuild the modified container and upload it:

$ docker build -t us-docker.pkg.dev/$PRIMUS_PROJECT_ID/primus-workloads/third-workload-container:latest .
$ docker push us-docker.pkg.dev/$PRIMUS_PROJECT_ID/primus-workloads/third-workload-container:latest

Notice that the image digest has changed: it no longer matches the digest recorded in Secundus Bank's workload identity pool provider.

Secundus reruns the workload:

Secundus Bank reruns the workload to check if there are still common customers between the two banks.

Set the project to the $SECUNDUS_PROJECT_ID project:

$ gcloud config set project $SECUNDUS_PROJECT_ID

Delete the old VM instance:

Delete the previously created secundus-third-vm, then re-create the instance so that it pulls the maliciously updated container.

$ gcloud compute instances delete secundus-third-vm --zone=us-west1-b
$ gcloud compute instances create secundus-third-vm --confidential-compute \
  --shielded-secure-boot \
  --maintenance-policy=TERMINATE --scopes=cloud-platform  --zone=us-west1-b \
  --image-project=confidential-space-images \
  --image-family=confidential-space \
  --service-account=run-confidential-vm@$SECUNDUS_PROJECT_ID.iam.gserviceaccount.com \
  --metadata ^~^tee-image-reference=us-docker.pkg.dev/$PRIMUS_PROJECT_ID/primus-workloads/third-workload-container:latest~tee-restart-policy=Never~tee-cmd="[\"list-common-customers\",\"gs://$SECUNDUS_PROJECT_ID-customer-storage/encrypted_secundus_customer_list.csv\",\"projects/$SECUNDUS_PROJECT_ID/locations/global/keyRings/secundus-data-keys/cryptoKeys/customer-data-key\",\"trusted-workload-account@$SECUNDUS_PROJECT_ID.iam.gserviceaccount.com\",\"projects/$SECUNDUS_PROJECT_NUMBER/locations/global/workloadIdentityPools/trusted-workload-pool/providers/attestation-verifier\",\"gs://$SECUNDUS_PROJECT_ID-results-storage/list-common-result\"]"

View results:

In the Secundus project, view the results of the workload:

$ gsutil cat gs://$SECUNDUS_PROJECT_ID-results-storage/list-common-result

The result shows that the attribute condition for accessing $SECUNDUS_PROJECT_ID-customer-storage failed: because Secundus Bank conditions access on the image digest, Primus Bank cannot secretly modify the workload code and still read Secundus Bank's data.

Clean up:

If you used the confidential-space-debug image family, stop the instance (debug VMs do not shut down automatically when the workload exits).

$ gcloud compute instances stop secundus-third-vm

8. Congratulations

Congratulations, you've successfully completed the codelab!

You learned how to secure shared data in use through Confidential Space.

Clean up

If you are done exploring, please consider deleting your project.

  • Go to the Cloud Platform Console
  • Select the project you want to shut down, then click Delete at the top: this schedules the project for deletion

What's next?

Check out some of these codelabs...

Further reading