1. Overview
Confidential Space offers a secure environment for collaboration between multiple parties. This codelab demonstrates how Confidential Space can be used to protect sensitive intellectual property, such as machine learning models.
In this codelab, you will use Confidential Space to enable one company to securely share its proprietary machine learning model with another company that would like to use it. Specifically, Company Primus has a machine learning model that is only released to a workload running in Confidential Space, enabling Primus to retain complete control over its intellectual property. Company Secundus will be the workload operator and will run the machine learning workload in a Confidential Space. Secundus will load this model and run an inference using sample data owned by Secundus.
Here, Primus is both the workload author, who writes the workload code, and a collaborator who wants to protect its intellectual property from the untrusted workload operator, Secundus, who operates the machine learning workload.
What you'll learn
- How to configure an environment where one party can share its proprietary ML model with another party without losing control over its intellectual property.
What you'll need
- A Google Cloud Platform Project
- Basic knowledge of Google Compute Engine, Confidential VM, containers, and remote repositories
- Basic knowledge of Service Accounts, Workload Identity Federation and attribute conditions.
Roles involved in a Confidential Space setup
In this codelab, Company Primus will be the resource owner and workload author, responsible for the following:
- Setting up required cloud resources with a machine learning model
- Writing the workload code
- Publishing the workload image
- Configuring Workload Identity Pool policy to protect ML model against an untrusted operator
Company Secundus will be the workload operator, responsible for:
- Setting up required cloud resources to store sample images used by workload and the results
- Running the ML workload in Confidential Space using the model provided by Primus
How Confidential Space works
When you run the workload in Confidential Space, the following process takes place, using the configured resources:
- The workload requests a general Google access token for `$PRIMUS_SERVICEACCOUNT` from the Workload Identity Pool. It offers an Attestation Verifier service token with workload and environment claims.
- If the workload measurement claims in the Attestation Verifier service token match the attribute condition in the WIP, it returns the access token for `$PRIMUS_SERVICEACCOUNT`.
- The workload uses the service account access token associated with `$PRIMUS_SERVICEACCOUNT` to access the machine learning model stored in the `$PRIMUS_INPUT_STORAGE_BUCKET` bucket.
- The workload performs an operation on the data owned by Secundus; the workload is operated and run by Secundus in its project.
- The workload uses the `$WORKLOAD_SERVICEACCOUNT` service account to write the results of that operation to the `$SECUNDUS_RESULT_STORAGE_BUCKET` bucket.
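The token flow above follows the OAuth 2.0 token-exchange pattern (RFC 8693). As an illustration, the request the workload sends to the Security Token Service could be assembled like this; the real exchange is performed by the Confidential Space launcher and client libraries, so treat this as a sketch, not the actual implementation:

```python
# Sketch of the STS token-exchange request body. Endpoint and the
# grant/token-type URNs come from RFC 8693 as used by Google STS.
STS_ENDPOINT = "https://sts.googleapis.com/v1/token"

def build_sts_exchange_request(attestation_token: str, wip_provider: str) -> dict:
    """Builds the form fields for exchanging an attestation token for a
    federated access token scoped to the workload identity pool."""
    return {
        "grant_type": "urn:ietf:params:oauth:grant-type:token-exchange",
        # e.g. //iam.googleapis.com/projects/<num>/locations/global/
        #      workloadIdentityPools/<pool>/providers/<provider>
        "audience": wip_provider,
        "requested_token_type": "urn:ietf:params:oauth:token-type:access_token",
        "subject_token_type": "urn:ietf:params:oauth:token-type:jwt",
        "subject_token": attestation_token,
        "scope": "https://www.googleapis.com/auth/cloud-platform",
    }
```

If the claims inside `subject_token` satisfy the WIP attribute condition, STS returns a federated token that can then be used to impersonate `$PRIMUS_SERVICEACCOUNT`.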
2. Set up Cloud Resources
Before you begin
- Clone this repository using the command below to get the required scripts used in this codelab.
git clone https://github.com/GoogleCloudPlatform/confidential-space.git
- Change into the directory for this codelab.
cd confidential-space/codelabs/ml_model_protection/scripts
- Ensure you have set the required project environment variables as shown below. For more information about setting up a GCP project, please refer to this codelab. You can also refer to this guide for details on how to retrieve a project ID and how it differs from the project name and project number.
export PRIMUS_PROJECT_ID=<GCP project id of Primus>
export SECUNDUS_PROJECT_ID=<GCP project id of Secundus>
- Enable Billing for your projects.
- Enable the Confidential Computing API and the following APIs for both projects.
gcloud services enable \
cloudapis.googleapis.com \
cloudresourcemanager.googleapis.com \
cloudshell.googleapis.com \
container.googleapis.com \
containerregistry.googleapis.com \
iam.googleapis.com \
confidentialcomputing.googleapis.com
- Assign values to the variables for the resource names specified above. These variables allow you to customize the resource names as needed and to reuse existing resources if they are already created (e.g. `export PRIMUS_INPUT_STORAGE_BUCKET='my-input-bucket'`).
- You can set the following variables with existing cloud resource names in the Primus project. If a variable is set, the corresponding existing cloud resource from the Primus project is used. If a variable is not set, a resource name is generated from the project name and a new cloud resource is created with that name. The supported variables are:

| Variable | Description |
| --- | --- |
| `$PRIMUS_INPUT_STORAGE_BUCKET` | The bucket that stores the machine learning model of Primus. |
| `$PRIMUS_WORKLOAD_IDENTITY_POOL` | The workload identity pool (WIP) of Primus that validates claims. |
| `$PRIMUS_WIP_PROVIDER` | The workload identity pool provider of Primus, which includes the authorization condition to use for tokens signed by the Attestation Verifier service. |
| `$PRIMUS_SERVICEACCOUNT` | The Primus service account that has read access to the ML model bucket and is attached to the workload identity pool. |
| `$PRIMUS_ARTIFACT_REPOSITORY` | The artifact repository where the workload Docker image will be pushed. |
- You can set the following variables with existing cloud resource names in the Secundus project. If a variable is set, the corresponding existing cloud resource from the Secundus project is used. If a variable is not set, a resource name is generated from the project name and a new cloud resource is created with that name. The supported variables are:

| Variable | Description |
| --- | --- |
| `$SECUNDUS_INPUT_STORAGE_BUCKET` | The bucket that stores the sample images that Secundus would like to classify using the model provided by Primus. |
| `$SECUNDUS_RESULT_STORAGE_BUCKET` | The bucket that stores the results of the workload. |
| `$WORKLOAD_IMAGE_NAME` | The name of the workload container image. |
| `$WORKLOAD_IMAGE_TAG` | The tag of the workload container image. |
| `$WORKLOAD_SERVICEACCOUNT` | The service account that has permission to access the Confidential VM that runs the workload. |
- You will need certain permissions in these two projects; you can refer to this guide on how to grant IAM roles using the GCP console:
  - For `$PRIMUS_PROJECT_ID`, you will need Storage Admin, Artifact Registry Administrator, Service Account Admin, and IAM Workload Identity Pool Admin.
  - For `$SECUNDUS_PROJECT_ID`, you will need Compute Admin, Storage Admin, Service Account Admin, IAM Workload Identity Pool Admin, and Security Admin (optional).
- Run the following script to set the remaining resource-name variables to values based on your project ID.
source config_env.sh
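The fallback behavior described above (use an exported value when present, otherwise derive a name from the project) can be pictured with a small sketch. The function name and suffix scheme below are illustrative assumptions; the actual naming logic lives in config_env.sh:

```python
import os

def default_resource_name(env_var: str, project_id: str, suffix: str) -> str:
    """Returns the explicitly exported value if the variable is set,
    otherwise derives a name from the project ID (illustrative of the
    fallback described above; not the real config_env.sh logic)."""
    return os.environ.get(env_var) or f"{project_id}-{suffix}"
```

For example, with `PRIMUS_INPUT_STORAGE_BUCKET` unset, a bucket name like `<project-id>-input-bucket` would be generated and a new bucket created under it.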
Set up Primus Company resources
As part of this step, you will set up the required cloud resources for Primus by running the following script. The following resources will be created during script execution:
- Cloud Storage bucket (`$PRIMUS_INPUT_STORAGE_BUCKET`) to store the machine learning model of Primus.
- Workload identity pool (`$PRIMUS_WORKLOAD_IDENTITY_POOL`) to validate claims based on the attribute conditions configured under its provider.
- Service account (`$PRIMUS_SERVICEACCOUNT`) attached to the above workload identity pool (`$PRIMUS_WORKLOAD_IDENTITY_POOL`), with IAM access to read data from the Cloud Storage bucket (using the `objectViewer` role) and to connect this service account to the workload identity pool (using the `roles/iam.workloadIdentityUser` role).
As part of this cloud resources setup, we will be using a TensorFlow model. We can save the entire model that includes the model's architecture, weights, and training configuration in a ZIP archive. For the purpose of this codelab, we will use the MobileNet V1 model trained on the ImageNet dataset found here.
./setup_primus_company_resources.sh
The above script sets up the cloud resources. We will now download and publish the model to the Cloud Storage bucket created by the script.
- Download the pre-trained model from here.
- Once downloaded, rename the tar file to model.tar.gz.
- Publish the model.tar.gz file to the Cloud Storage bucket using the following command, run from the directory containing the model.tar.gz file.
gsutil cp model.tar.gz gs://${PRIMUS_INPUT_STORAGE_BUCKET}/
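If you ever need to repackage a model yourself, the model.tar.gz layout can be produced with Python's tarfile module. This is an illustrative sketch; `package_model` and the directory layout are assumptions, not part of the codelab scripts:

```python
import tarfile

def package_model(model_dir: str, archive_path: str = "model.tar.gz") -> str:
    """Packages a (hypothetical) saved-model directory into a gzipped
    tarball, matching the model.tar.gz shape uploaded above."""
    with tarfile.open(archive_path, "w:gz") as tar:
        # arcname="." keeps paths inside the archive relative to the
        # model directory, so extraction yields the files at the top level.
        tar.add(model_dir, arcname=".")
    return archive_path
```

The resulting archive can then be copied to the bucket with the gsutil command shown above.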
Set up Secundus Company resources
As part of this step, you will set up the required cloud resources for Secundus by running the following script. The following resources will be created during script execution:
- Cloud Storage bucket (`$SECUNDUS_INPUT_STORAGE_BUCKET`) to store the sample images on which Secundus runs inferences.
- Cloud Storage bucket (`$SECUNDUS_RESULT_STORAGE_BUCKET`) to store the result of the ML workload execution by Secundus.
Some sample images are made available here for this codelab.
./setup_secundus_company_resources.sh
3. Create Workload
Create workload service account
Now, you will create a service account for the workload with the required roles and permissions. Run the following script to create a workload service account in the Secundus project. This service account will be used by the VM that runs the ML workload.
This workload service account (`$WORKLOAD_SERVICEACCOUNT`) will have the following roles:

- `confidentialcomputing.workloadUser` to get an attestation token.
- `logging.logWriter` to write logs to Cloud Logging.
- `objectViewer` to read data from the `$SECUNDUS_INPUT_STORAGE_BUCKET` Cloud Storage bucket.
- `objectUser` to write the workload result to the `$SECUNDUS_RESULT_STORAGE_BUCKET` Cloud Storage bucket.
./create_workload_service_account.sh
Create workload
As part of this step, you will create a workload Docker image. The workload is authored by Primus. It is machine learning Python code that accesses the ML model stored in the Primus storage bucket and runs inferences on the sample images stored in a Secundus storage bucket.
The machine learning model stored in the Primus storage bucket is only accessible to workloads meeting the required attribute conditions. These attribute conditions are described in more detail in the next section, about authorizing the workload.
Here is the run_inference() method of the workload that will be created and used in this codelab. You can find the entire workload code here.
import numpy as np
import tensorflow as tf

def run_inference(image_path, model):
    try:
        # Read and preprocess the image
        image = tf.image.decode_image(tf.io.read_file(image_path), channels=3)
        image = tf.image.resize(image, (128, 128))
        image = tf.image.convert_image_dtype(image, tf.float32)
        image = tf.expand_dims(image, axis=0)

        # Get predictions from the model
        predictions = model(image)
        predicted_class = np.argmax(predictions)

        top_k = 5
        top_indices = np.argsort(predictions[0])[-top_k:][::-1]

        # Convert top_indices to a TensorFlow tensor
        top_indices_tensor = tf.convert_to_tensor(top_indices, dtype=tf.int32)

        # Use the TensorFlow tensor for indexing
        top_scores = tf.gather(predictions[0], top_indices_tensor)

        return {
            "predicted_class": int(predicted_class),
            "top_k_predictions": [
                {"class_index": int(idx), "score": float(score)}
                for idx, score in zip(top_indices, top_scores)
            ],
        }
    except Exception as e:
        return {"error": str(e)}
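The NumPy-based top-k selection in run_inference() can be mirrored with the standard library alone, which is convenient for unit-testing the output shape without TensorFlow installed. This helper is an illustration, not part of the workload code:

```python
import heapq

def top_k_predictions(scores, k=5):
    """Returns the k highest-scoring class indices with their scores,
    ranked from most to least likely -- the same shape run_inference()
    produces, using only the standard library."""
    top = heapq.nlargest(k, enumerate(scores), key=lambda pair: pair[1])
    return {
        "predicted_class": top[0][0],
        "top_k_predictions": [
            {"class_index": idx, "score": float(score)} for idx, score in top
        ],
    }
```

For example, `top_k_predictions([0.1, 0.9, 0.3], k=2)` ranks class 1 first, then class 2, matching what the argsort-based code returns for the same scores.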
Run the following script to create the workload. The following steps are performed as part of it:
- Create the Artifact Registry repository (`$PRIMUS_ARTIFACT_REPOSITORY`) owned by Primus.
- Update the workload code with the required resource names.
- Build the ML workload and create a Dockerfile for building a Docker image of the workload code. Here is the Dockerfile used for this codelab.
- Build and publish the Docker image to the Artifact Registry repository (`$PRIMUS_ARTIFACT_REPOSITORY`) owned by Primus.
- Grant `$WORKLOAD_SERVICEACCOUNT` read permission on `$PRIMUS_ARTIFACT_REPOSITORY`. This is needed for the workload container to pull the workload Docker image from Artifact Registry.
./create_workload.sh
Additionally, the workload can be coded to verify that it is loading the expected version of the machine learning model by checking the hash or signature of the model before using it. Such checks ensure the integrity of the machine learning model. With them in place, the workload operator would also need to update the workload image or its parameters whenever the workload is expected to use a different version of the ML model.
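As a sketch of such a check, the workload could pin a SHA-256 digest of the expected model archive and compare it before loading. This is illustrative only; the helper name and the chunked-read scheme are assumptions, not part of the codelab's workload code:

```python
import hashlib

def verify_model_digest(model_path: str, expected_sha256: str) -> bool:
    """Compares the SHA-256 digest of the downloaded model archive
    against a digest pinned inside the workload image. Reads the file
    in chunks so large archives do not need to fit in memory."""
    h = hashlib.sha256()
    with open(model_path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest() == expected_sha256
```

The workload would refuse to run inference if this returns False, so swapping in a tampered model.tar.gz fails closed even when bucket access succeeds.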
4. Authorize and Run Workload
Authorize Workload
Primus wants to authorize workloads to access their machine learning model based on attributes of the following resources:
- What: Code that is verified
- Where: An environment that is secure
- Who: An operator that is trusted
Primus uses Workload identity federation to enforce an access policy based on these requirements. Workload identity federation allows you to specify attribute conditions. These conditions restrict which identities can authenticate with the workload identity pool (WIP). You can add the Attestation Verifier Service to the WIP as a workload identity pool provider to present measurements and enforce the policy.
The workload identity pool was already created earlier as part of the cloud resources setup step. Now Primus will create a new OIDC workload identity pool provider. The specified `--attribute-condition` authorizes access to the workload container. It requires:
- What: the latest `$WORKLOAD_IMAGE_NAME` uploaded to the `$PRIMUS_ARTIFACT_REPOSITORY` repository.
- Where: the Confidential Space trusted execution environment running on the fully supported Confidential Space VM image.
- Who: the Secundus `$WORKLOAD_SERVICEACCOUNT` service account.
export WORKLOAD_IMAGE_DIGEST=$(docker images --digests ${PRIMUS_PROJECT_REPOSITORY_REGION}-docker.pkg.dev/${PRIMUS_PROJECT_ID}/${PRIMUS_ARTIFACT_REPOSITORY}/${WORKLOAD_IMAGE_NAME}:${WORKLOAD_IMAGE_TAG} | awk 'NR>1{ print $3 }')
gcloud config set project $PRIMUS_PROJECT_ID
gcloud iam workload-identity-pools providers create-oidc $PRIMUS_WIP_PROVIDER \
--location="global" \
--workload-identity-pool="$PRIMUS_WORKLOAD_IDENTITY_POOL" \
--issuer-uri="https://confidentialcomputing.googleapis.com/" \
--allowed-audiences="https://sts.googleapis.com" \
--attribute-mapping="google.subject=assertion.sub" \
--attribute-condition="assertion.swname == 'CONFIDENTIAL_SPACE' &&
'STABLE' in assertion.submods.confidential_space.support_attributes &&
assertion.submods.container.image_digest == '${WORKLOAD_IMAGE_DIGEST}' &&
assertion.submods.container.image_reference == '${PRIMUS_PROJECT_REPOSITORY_REGION}-docker.pkg.dev/$PRIMUS_PROJECT_ID/$PRIMUS_ARTIFACT_REPOSITORY/$WORKLOAD_IMAGE_NAME:$WORKLOAD_IMAGE_TAG' &&
'$WORKLOAD_SERVICEACCOUNT@$SECUNDUS_PROJECT_ID.iam.gserviceaccount.com' in assertion.google_service_accounts"
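To see what the condition above is checking, here is a Python rendering of the same four checks over the decoded attestation-token claims. The real evaluation is CEL inside the WIP provider; the claim names mirror those used in the `--attribute-condition`, and this helper is purely illustrative:

```python
def satisfies_attribute_condition(claims: dict, expected_digest: str,
                                  expected_image: str, expected_sa: str) -> bool:
    """Mirrors the --attribute-condition as plain Python checks over
    the decoded attestation-token claims (illustrative only)."""
    cs = claims.get("submods", {}).get("confidential_space", {})
    container = claims.get("submods", {}).get("container", {})
    return (
        claims.get("swname") == "CONFIDENTIAL_SPACE"                  # Where: Confidential Space TEE
        and "STABLE" in cs.get("support_attributes", [])              # Where: fully supported image
        and container.get("image_digest") == expected_digest          # What: exact workload image
        and container.get("image_reference") == expected_image
        and expected_sa in claims.get("google_service_accounts", [])  # Who: trusted operator SA
    )
```

Any single mismatch (a different image digest, a non-STABLE image, a different operator service account) makes the whole conjunction false, which is exactly why the unauthorized workload in the next section is rejected.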
Run Workload
As part of this step, we will run the workload in the Confidential Space VM. Required TEE arguments are passed using the metadata flag. Arguments for the workload container are passed using the `tee-cmd` portion of the flag. The result of the workload execution will be published to `$SECUNDUS_RESULT_STORAGE_BUCKET`.
gcloud config set project $SECUNDUS_PROJECT_ID
gcloud compute instances create ${WORKLOAD_VM} \
--confidential-compute-type=SEV \
--shielded-secure-boot \
--maintenance-policy=TERMINATE \
--scopes=cloud-platform --zone=${SECUNDUS_PROJECT_ZONE} \
--image-project=confidential-space-images \
--image-family=confidential-space \
--service-account=${WORKLOAD_SERVICEACCOUNT}@${SECUNDUS_PROJECT_ID}.iam.gserviceaccount.com \
--metadata ^~^tee-image-reference=${PRIMUS_PROJECT_REPOSITORY_REGION}-docker.pkg.dev/${PRIMUS_PROJECT_ID}/${PRIMUS_ARTIFACT_REPOSITORY}/${WORKLOAD_IMAGE_NAME}:${WORKLOAD_IMAGE_TAG}
View results
After the workload has successfully completed, the result of the ML workload will be published to `$SECUNDUS_RESULT_STORAGE_BUCKET`.
gsutil cat gs://$SECUNDUS_RESULT_STORAGE_BUCKET/result
Here are some examples of what the inference results on sample images might look like:
Image: sample_image_1.jpeg, Response: {'predicted_class': 531, 'top_k_predictions': [{'class_index': 531, 'score': 12.08437442779541}, {'class_index': 812, 'score': 10.269512176513672}, {'class_index': 557, 'score': 9.202644348144531}, {'class_index': 782, 'score': 9.08737564086914}, {'class_index': 828, 'score': 8.912498474121094}]}
Image: sample_image_2.jpeg, Response: {'predicted_class': 905, 'top_k_predictions': [{'class_index': 905, 'score': 9.53619384765625}, {'class_index': 557, 'score': 7.928380966186523}, {'class_index': 783, 'score': 7.70129919052124}, {'class_index': 531, 'score': 7.611623287200928}, {'class_index': 906, 'score': 7.021416187286377}]}
Image: sample_image_3.jpeg, Response: {'predicted_class': 905, 'top_k_predictions': [{'class_index': 905, 'score': 6.09878396987915}, {'class_index': 447, 'score': 5.992854118347168}, {'class_index': 444, 'score': 5.9582319259643555}, {'class_index': 816, 'score': 5.502010345458984}, {'class_index': 796, 'score': 5.450454235076904}]}
For each sample image in a Secundus storage bucket, you'll see an entry in the results. This entry will include two key pieces of information:
- predicted_class: the numerical index of the class the model predicts the image belongs to.
- top_k_predictions: up to k predictions for the image, ranked from most to least likely. The value of k is set to 5 in this codelab, but you can adjust it in the workload code to get more or fewer predictions.
To translate the class index into a human-readable class name, refer to the list of labels available here. For example, if you see a class index of 2, it corresponds to the class label "tench" in the labels list.
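A small sketch of reading the result file programmatically: each line can be split and parsed with ast.literal_eval, then the class index looked up in a labels list. The parsing helper and the tiny labels list below are illustrative assumptions, not part of the codelab scripts:

```python
import ast

def parse_result_line(line: str):
    """Splits one "Image: <name>, Response: {...}" result line into the
    image name and the parsed response dict."""
    image_part, response_part = line.split(", Response: ", 1)
    image = image_part.removeprefix("Image: ").strip()
    return image, ast.literal_eval(response_part)

def class_name(index: int, labels: list) -> str:
    """Maps a predicted class index to its human-readable label."""
    return labels[index]
```

ast.literal_eval is used instead of eval because the response is a plain Python dict literal; it refuses anything other than literals, which keeps parsing safe.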
In this codelab, we have demonstrated a model owned by Primus that is only released to a workload running in a TEE. Secundus runs the ML workload in a TEE, and the workload is able to consume the model owned by Primus while Primus retains full control over the model.
Run Unauthorized Workload
Secundus changes the workload image by pulling a different workload image from its own artifact repository, one that is not authorized by Primus. The workload identity pool of Primus has authorized only the `${PRIMUS_PROJECT_REPOSITORY_REGION}-docker.pkg.dev/$PRIMUS_PROJECT_ID/$PRIMUS_ARTIFACT_REPOSITORY/$WORKLOAD_IMAGE_NAME:$WORKLOAD_IMAGE_TAG` workload image.
Re-run the workload
When Secundus tries to run the original workload with this new workload image, it will fail. To view the error, delete the original results file and VM instance, and then try to run the workload again.
Please make sure there is a new Docker image published under the artifact registry of Secundus (as `us-docker.pkg.dev/${SECUNDUS_PROJECT_ID}/custom-image/${WORKLOAD_IMAGE_NAME}:${WORKLOAD_IMAGE_TAG}`) and that the workload service account (`$WORKLOAD_SERVICEACCOUNT`) has been granted Artifact Registry reader permission to read this new workload image. This ensures that the workload does not exit before Primus's WIP policy rejects the token presented by the workload.
Delete the existing results file and VM instance
- Set the project to the `$SECUNDUS_PROJECT_ID` project.
gcloud config set project $SECUNDUS_PROJECT_ID
- Delete the result file.
gsutil rm gs://$SECUNDUS_RESULT_STORAGE_BUCKET/result
- Delete the Confidential VM instance.
gcloud compute instances delete ${WORKLOAD_VM}
Run the unauthorized workload:
gcloud compute instances create ${WORKLOAD_VM} \
--confidential-compute-type=SEV \
--shielded-secure-boot \
--maintenance-policy=TERMINATE \
--scopes=cloud-platform --zone=${SECUNDUS_PROJECT_ZONE} \
--image-project=confidential-space-images \
--image-family=confidential-space \
--service-account=${WORKLOAD_SERVICEACCOUNT}@${SECUNDUS_PROJECT_ID}.iam.gserviceaccount.com \
--metadata ^~^tee-image-reference=us-docker.pkg.dev/${SECUNDUS_PROJECT_ID}/custom-image/${WORKLOAD_IMAGE_NAME}:${WORKLOAD_IMAGE_TAG}
View error
Instead of the results of the workload, you see an error (`The given credential is rejected by the attribute condition`).
gsutil cat gs://$SECUNDUS_RESULT_STORAGE_BUCKET/result
5. Clean Up
Here is the script that can be used to clean up the resources created as part of this codelab. The following resources will be deleted during cleanup:
- Input storage bucket of Primus (`$PRIMUS_INPUT_STORAGE_BUCKET`).
- Primus service account (`$PRIMUS_SERVICEACCOUNT`).
- Artifact repository of Primus (`$PRIMUS_ARTIFACT_REPOSITORY`).
- Primus workload identity pool (`$PRIMUS_WORKLOAD_IDENTITY_POOL`).
- Workload service account of Secundus (`$WORKLOAD_SERVICEACCOUNT`).
- Input storage bucket of Secundus (`$SECUNDUS_INPUT_STORAGE_BUCKET`).
- Workload compute instances.
- Result storage bucket of Secundus (`$SECUNDUS_RESULT_STORAGE_BUCKET`).
./cleanup.sh
If you are done exploring, please consider deleting your project.
- Go to the Cloud Platform Console
- Select the project you want to shut down, then click 'Delete' at the top: this schedules the project for deletion.
What's next?
Check out some of these similar codelabs...