How to Transact Digital Assets with Multi-Party Computation and Confidential Space

1. Overview

Before we begin, a working knowledge of the following features and concepts, although not strictly necessary, will prove helpful in this codelab.


What you'll learn

This lab provides a reference implementation for performing MPC-compliant blockchain signing using Confidential Space. To illustrate the concepts, we will walk through a scenario where Company Primus wants to transfer digital assets to Company Secundus. In this scenario, Company Primus uses an MPC-compliant model, which means that instead of using individual private keys, they use distributed key shares. These key shares are held by multiple parties, in this case Alice and Bob. This approach provides Company Primus with several benefits, including simplified user experience, operational efficiency, and control over their private keys.

To explain the fundamental aspects of this process, we will detail the technical setup and walk you through the approval and signing process that initiates the transfer of digital assets from Company Primus to Company Secundus. Please note that Bob and Alice, who are both employees of Company Primus, must approve the transaction.

Although this reference implementation demonstrates signature operations, it does not cover all aspects of MPC key management. For instance, we do not discuss key generation. Additionally, there are alternative and complementary approaches, such as using non-Google Cloud services to generate co-signatures or having co-signers construct blockchain signatures in their own environments, which is a more decentralized architecture. We hope that this lab inspires different approaches to MPC on Google Cloud.

You will work with a simple workload that signs an Ethereum transaction in Confidential Space using co-signer key materials. Ethereum transaction signing is a process by which a user can authorize a transaction on the Ethereum blockchain. To send an Ethereum transaction, you need to sign it with your private key. This proves that you are the owner of the account and authorize the transaction. The signing process is as follows:

  1. The sender creates a transaction object that specifies the recipient address, the amount of ETH to send, and any other relevant data.
  2. The transaction data is hashed.
  3. The hash is then signed with the sender's private key.
  4. The signature is attached to the transaction object.
  5. The transaction is broadcast to the Ethereum network.

When a node on the network receives a transaction, it verifies the signature to make sure that it was signed by the owner of the account. If the signature is valid, the node will add the transaction to the blockchain.

To begin, you will configure the necessary Cloud resources. Then, you will run the workload in Confidential Space. This codelab will guide you through the following high-level steps:

  • How to configure the necessary Cloud resources for running Confidential Space
  • How to authorize access to protected resources based on the attributes of:
    • What: the workload container
    • Where: the Confidential Space environment (the Confidential Space image on Confidential VM)
    • Who: the account that is running the workload
  • How to run the workload in a Confidential VM running the Confidential Space VM image

Required APIs

You must enable the following APIs in the specified projects to be able to complete this guide.

API name                               API title
cloudkms.googleapis.com                Cloud KMS
compute.googleapis.com                 Compute Engine
confidentialcomputing.googleapis.com   Confidential Computing
artifactregistry.googleapis.com        Artifact Registry

2. Set Up Cloud Resources

Before you begin

  • Clone this repository using the command below to get the required scripts that are used as part of this codelab.
git clone
  • Change the directory for this codelab.
cd confidential-space/codelabs/digital_asset_transaction_codelab/scripts
  • Ensure you have set the required project environment variables as shown below. For more information about setting up a GCP project, refer to this codelab. You can also refer to this to learn how to retrieve the project ID and how it differs from the project name and project number.
export PRIMUS_PROJECT_ID=<GCP project id>
  • Enable Billing for your projects.
  • Enable the Confidential Computing API and the other required APIs:
gcloud services enable \
    cloudkms.googleapis.com \
    compute.googleapis.com \
    confidentialcomputing.googleapis.com \
    artifactregistry.googleapis.com
  • To set the variables for the resource names, use the following commands. Note that these override the default resource names for your GCP project for Company A, for example: export PRIMUS_INPUT_STORAGE_BUCKET='primus-input-bucket'
  • The following variables can be set for your GCP project in Company A:


  • $PRIMUS_INPUT_STORAGE_BUCKET: The bucket that stores the encrypted keys.
  • $PRIMUS_RESULT_STORAGE_BUCKET: The bucket that stores the MPC transaction result.
  • $PRIMUS_KEY: The KMS key used to encrypt the data stored in $PRIMUS_INPUT_STORAGE_BUCKET for Primus Bank.
  • $PRIMUS_KEYRING: The KMS keyring which will be used to create the encryption key $PRIMUS_KEY for Primus Bank.
  • $PRIMUS_WIP_PROVIDER: The Workload Identity Pool provider which includes the attribute condition to use for tokens signed by the MPC workload service.
  • $PRIMUS_SERVICEACCOUNT: The service account that $PRIMUS_WORKLOAD_IDENTITY_POOL uses to access the protected resources. This service account will have permission to view the encrypted keys that are stored in the $PRIMUS_INPUT_STORAGE_BUCKET bucket.
  • $PRIMUS_ARTIFACT_REPOSITORY: The artifact repository for storing the workload container image.
  • $WORKLOAD_SERVICEACCOUNT: The service account that has permission to access the Confidential VM that runs the workload.
  • $WORKLOAD_VM: The name of the VM that runs the workload Docker container.
  • $WORKLOAD_IMAGE_NAME: The name of the workload container image.
  • $WORKLOAD_IMAGE_TAG: The tag of the workload container image.

  • Run the following script to set the remaining resource-name variables to values based on your project ID.

Set up Cloud resources

As part of this step, you will set up the cloud resources required for multi-party computation. For this lab, you will use the following private key: 0000000000000000000000000000000000000000000000000000000000000001

In a production environment, you would generate your own private key. For the purposes of this lab, however, we will split this private key into two shares and encrypt each one. In a production scenario, keys should never be stored in plaintext files. Instead, the private key can be generated outside of Google Cloud (or skipped entirely and replaced with custom MPC key-shard creation) and then encrypted so that no one has access to the private key or the key shares. For the purposes of this lab, we will be using the gcloud CLI.
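As a concrete (and deliberately naive) illustration of the splitting used in this lab: the 64-character hex key is cut into two halves, one share for Alice and one for Bob, and the workload later reconstructs the key by concatenating them. This sketch is for illustration only; production MPC systems use proper secret-sharing schemes, not string splitting.

```javascript
// The lab's well-known test private key (never use a known key in production)
const privateKey =
    '0000000000000000000000000000000000000000000000000000000000000001';

// Naive split: first half to Alice, second half to Bob
const aliceShare = privateKey.slice(0, 32);
const bobShare = privateKey.slice(32);

// Each share is then KMS-encrypted and uploaded to the input bucket;
// recombination in the workload is simple concatenation (see mpc.js later)
console.log(aliceShare + bobShare === privateKey); // true
```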

Run the following script to set up the required cloud resources. As part of these steps, the following resources will be created:

  • A Cloud Storage bucket ($PRIMUS_INPUT_STORAGE_BUCKET) to store the encrypted private key shares.
  • A Cloud Storage bucket ($PRIMUS_RESULT_STORAGE_BUCKET) to store the result of the digital asset transaction.
  • An encryption key ($PRIMUS_KEY) and keyring ($PRIMUS_KEYRING) in KMS to encrypt the private key shares.
  • A workload identity pool ($PRIMUS_WORKLOAD_IDENTITY_POOL) to validate claims based on the attribute conditions configured under its provider.
  • A service account ($PRIMUS_SERVICEACCOUNT) attached to the above workload identity pool ($PRIMUS_WORKLOAD_IDENTITY_POOL) with the following IAM access:
    • roles/cloudkms.cryptoKeyDecrypter to decrypt the data using the KMS key.
    • objectViewer to read data from the Cloud Storage bucket.
    • roles/iam.workloadIdentityUser to connect this service account to the workload identity pool.
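The IAM grants above typically reduce to a couple of gcloud bindings. This is a hypothetical sketch, not the exact script contents: the keyring location (global) and the member email format are assumptions for illustration.

```shell
# Hypothetical sketch of the bindings the setup script applies
# (location and member format assumed for illustration)
gcloud kms keys add-iam-policy-binding $PRIMUS_KEY \
    --keyring=$PRIMUS_KEYRING \
    --location=global \
    --member="serviceAccount:$PRIMUS_SERVICEACCOUNT@$PRIMUS_PROJECT_ID.iam.gserviceaccount.com" \
    --role="roles/cloudkms.cryptoKeyDecrypter"

gcloud storage buckets add-iam-policy-binding gs://$PRIMUS_INPUT_STORAGE_BUCKET \
    --member="serviceAccount:$PRIMUS_SERVICEACCOUNT@$PRIMUS_PROJECT_ID.iam.gserviceaccount.com" \
    --role="roles/storage.objectViewer"
```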

3. Create Workload

Create workload service-account

You will now create a service account for the workload with the required roles and permissions. To do this, run the following script, which will create a workload service account for Company A. This service account will be used by the VM that runs the workload.

The workload service-account ($WORKLOAD_SERVICEACCOUNT) will have the following roles:

  • confidentialcomputing.workloadUser to get an attestation token
  • logging.logWriter to write logs to Cloud Logging.
  • objectViewer to read data from the $PRIMUS_INPUT_STORAGE_BUCKET Cloud Storage bucket.
  • objectUser to write the workload result to the $PRIMUS_RESULT_STORAGE_BUCKET Cloud Storage bucket.

Create workload

This step involves creating a workload Docker image. The workload in this codelab is a simple Node.js MPC application that signs digital transactions for transferring assets using encrypted private key shares. Here is the workload project code. The workload project includes the following files.

package.json: This file contains the list of packages used by the workload MPC application. In this case, we're using the @google-cloud/kms, @google-cloud/storage, ethers, and fast-crc32c libraries. Here is the package.json file that we will be using for this codelab.

index.js: This is the entrypoint of the workload application; it specifies the commands to run when the workload container starts up. We've also included a sample unsigned transaction that would normally be provided by an untrusted application that asks users for their signature. This index.js file also imports functions from mpc.js, which we will create next. Below is the content of the index.js file; you can also find it here.

import {signTransaction, submitTransaction, uploadFromMemory} from './mpc.js';

const signAndSubmitTransaction = async () => {
  try {
    // Create the unsigned transaction object
    const unsignedTransaction = {
      nonce: 0,
      gasLimit: 21000,
      gasPrice: '0x09184e72a000',
      to: '0x0000000000000000000000000000000000000000',
      value: '0x00',
      data: '0x',
    };

    // Sign the transaction
    const signedTransaction = await signTransaction(unsignedTransaction);

    // Submit the transaction to Ganache
    const transaction = await submitTransaction(signedTransaction);

    // Write the transaction receipt to the results bucket
    await uploadFromMemory(transaction);

    return transaction;
  } catch (e) {
    console.log(e);
  }
};

await signAndSubmitTransaction();

mpc.js: This is where the transaction signing takes place. It imports functions from kms-decrypt.js and credential-config.js, which we'll cover next. Below is the content of the mpc.js file; you can also find it here.

import {Storage} from '@google-cloud/storage';
import {ethers} from 'ethers';

import {credentialConfig} from './credential-config.js';
import {decryptSymmetric} from './kms-decrypt.js';

const providers = ethers.providers;
const Wallet = ethers.Wallet;

// The ID of the GCS bucket holding the encrypted keys
const bucketName = process.env.KEY_BUCKET;

// Name of the encrypted key files.
const encryptedKeyFile1 = 'alice_encrypted_key_share';
const encryptedKeyFile2 = 'bob_encrypted_key_share';

// Create a new storage client with the credentials
const storageWithCreds = new Storage({
  credentials: credentialConfig,
});

// Create a new storage client without the credentials
const storage = new Storage();

const downloadIntoMemory = async (keyFile) => {
  // Downloads the file into a buffer in memory.
  const contents =
      await storageWithCreds.bucket(bucketName).file(keyFile).download();

  return contents;
};

const provider =
    new providers.JsonRpcProvider(`http://${process.env.NODE_URL}:80`);

export const signTransaction = async (unsignedTransaction) => {
  /* Check if Alice and Bob have both approved the transaction.
  For this example, we're checking if their encrypted keys are available. */
  const encryptedKey1 =
      await downloadIntoMemory(encryptedKeyFile1).catch(console.error);
  const encryptedKey2 =
      await downloadIntoMemory(encryptedKeyFile2).catch(console.error);

  // For each key share, make a call to KMS to decrypt the key
  const privateKeyshare1 = await decryptSymmetric(encryptedKey1[0]);
  const privateKeyshare2 = await decryptSymmetric(encryptedKey2[0]);

  /* Perform the MPC calculations.
  In this example, we're combining the private key shares.
  Alternatively, you could import your MPC calculations here. */
  const wallet = new Wallet(privateKeyshare1 + privateKeyshare2);

  // Sign the transaction
  const signedTransaction = await wallet.signTransaction(unsignedTransaction);

  return signedTransaction;
};

export const submitTransaction = async (signedTransaction) => {
  // This can now be sent to Ganache
  const hash = await provider.sendTransaction(signedTransaction);
  return hash;
};

export const uploadFromMemory = async (contents) => {
  // Upload the results to the bucket without service account impersonation
  await storage.bucket(process.env.RESULTS_BUCKET)
      .file('transaction_receipt_' + Date.now())
      .save(JSON.stringify(contents));
};

kms-decrypt.js: This file contains the code for decryption using keys managed in KMS. Below is the content of the kms-decrypt.js file; you can also find it here.

import {KeyManagementServiceClient} from '@google-cloud/kms';
import crc32c from 'fast-crc32c';

import {credentialConfig} from './credential-config.js';

const projectId = process.env.PRIMUS_PROJECT_ID;
const locationId = process.env.PRIMUS_LOCATION;
const keyRingId = process.env.PRIMUS_ENC_KEYRING;
const keyId = process.env.PRIMUS_ENC_KEY;

// Instantiates a client
const client = new KeyManagementServiceClient({
  credentials: credentialConfig,
});

// Build the key name
const keyName = client.cryptoKeyPath(projectId, locationId, keyRingId, keyId);

export const decryptSymmetric = async (ciphertext) => {
  const ciphertextCrc32c = crc32c.calculate(ciphertext);
  const [decryptResponse] = await client.decrypt({
    name: keyName,
    ciphertext: ciphertext,
    ciphertextCrc32c: {
      value: ciphertextCrc32c,
    },
  });

  // Optional, but recommended: perform integrity verification on
  // decryptResponse. For more details on ensuring E2E in-transit integrity to
  // and from Cloud KMS, see the Cloud KMS data-integrity documentation.
  if (crc32c.calculate(decryptResponse.plaintext) !==
      Number(decryptResponse.plaintextCrc32c.value)) {
    throw new Error('Decrypt: response corrupted in-transit');
  }

  const plaintext = decryptResponse.plaintext.toString();

  return plaintext;
};

credential-config.js: This file stores the workload identity pool paths and details for the service account impersonation. Here is the credential-config.js file that we will be using for this codelab.

Dockerfile: Finally, we will create the Dockerfile that will be used to build the workload Docker image, as specified here.

FROM node:16.18.0

ENV NODE_ENV=production


COPY ["package.json", "package-lock.json*", "./"]

RUN npm install --production

COPY . .

LABEL "tee.launch_policy.allow_cmd_override"="true"

CMD [ "node", "index.js" ]

Note: LABEL "tee.launch_policy.allow_cmd_override"="true" in the Dockerfile is a launch policy set by the image author. It allows the operator to override the CMD when executing the workload. By default, allow_cmd_override is set to false. Similarly, LABEL "tee.launch_policy.allow_env_override" tells Confidential Space which environment variables the workload operator is allowed to set.

Run the following script to create the workload; it performs the following steps:

  • Create an Artifact Registry repository ($PRIMUS_ARTIFACT_REPOSITORY) to store the workload Docker image.
  • Update the workload code with the required resource names. Here is the workload code used for this codelab.
  • Create the Dockerfile for building a Docker image of the workload code. You can find the Dockerfile here.
  • Build and publish the Docker image to the Artifact Registry repository ($PRIMUS_ARTIFACT_REPOSITORY) created in the previous step.
  • Grant $WORKLOAD_SERVICEACCOUNT read permission for $PRIMUS_ARTIFACT_REPOSITORY. This is necessary so that the workload VM can pull the workload Docker image from Artifact Registry.
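The build-and-publish step above typically reduces to standard docker commands. This is a hypothetical sketch, not the script's exact contents: the -docker.pkg.dev host format is standard Artifact Registry naming, and $PRIMUS_PROJECT_REPOSITORY_REGION is the region variable used elsewhere in this codelab.

```shell
# Hypothetical sketch of the build-and-push step performed by the script
IMAGE_PATH=$PRIMUS_PROJECT_REPOSITORY_REGION-docker.pkg.dev/$PRIMUS_PROJECT_ID/$PRIMUS_ARTIFACT_REPOSITORY/$WORKLOAD_IMAGE_NAME:$WORKLOAD_IMAGE_TAG

# Authenticate Docker to Artifact Registry, then build and publish
gcloud auth configure-docker $PRIMUS_PROJECT_REPOSITORY_REGION-docker.pkg.dev
docker build -t $IMAGE_PATH .
docker push $IMAGE_PATH
```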

Create the Blockchain Node

Ganache Ethereum Node

Before authorizing the workload, we need to create the Ganache Ethereum instance. The signed transaction will be submitted to this Ganache instance, so take note of its IP address. After running the command below, you might need to enter y to enable the API.

gcloud config set project $PRIMUS_PROJECT_ID
gcloud compute instances create-with-container mpc-lab-ethereum-node \
  --tags=http-server \
  --shielded-secure-boot \
  --shielded-vtpm \
  --shielded-integrity-monitoring \
  --container-image=docker.io/trufflesuite/ganache:v7.7.3 \
  --container-arg=--wallet.accounts=\"0x0000000000000000000000000000000000000000000000000000000000000001,0x21E19E0C9BAB2400000\"

4. Authorize and Run Workload

Authorize Workload

As part of this step, we will set up the workload identity pool provider under the workload identity pool ($PRIMUS_WORKLOAD_IDENTITY_POOL). Attribute conditions are configured for the workload identity provider, as shown below. One of the conditions validates that the workload image is pulled from the expected artifact repository.

gcloud config set project $PRIMUS_PROJECT_ID
gcloud iam workload-identity-pools providers create-oidc ${PRIMUS_WIP_PROVIDER} \
 --location="${PRIMUS_PROJECT_LOCATION}" \
 --workload-identity-pool="$PRIMUS_WORKLOAD_IDENTITY_POOL" \
 --issuer-uri="" \
 --allowed-audiences="" \
 --attribute-mapping="google.subject='assertion.sub'" \
 --attribute-condition="assertion.swname == 'CONFIDENTIAL_SPACE' && 'STABLE' in assertion.submods.confidential_space.support_attributes && assertion.submods.container.image_reference == '${PRIMUS_PROJECT_REPOSITORY_REGION}$PRIMUS_PROJECT_ID/$PRIMUS_ARTIFACT_REPOSITORY/$WORKLOAD_IMAGE_NAME:$WORKLOAD_IMAGE_TAG' && '$WORKLOAD_SERVICEACCOUNT@$' in assertion.google_service_accounts"

Run Workload

This section explains how to run the workload on a Confidential VM. To do this, we will pass the required TEE arguments using the metadata flag, and set environment variables for the workload container using the "tee-env-*" metadata entries. The workload uses the following variables:

  • NODE_URL: The URL of the Ethereum node that will process the signed transaction.
  • RESULTS_BUCKET: The bucket that stores the MPC transaction result. The result of the workload execution will be published to $PRIMUS_RESULT_STORAGE_BUCKET.
  • KEY_BUCKET: The bucket that stores the MPC encrypted keys.
  • PRIMUS_PROJECT_NUMBER: The project number used for the credential config file.
  • PRIMUS_PROJECT_ID: The project ID used for the credential config file.
  • PRIMUS_WORKLOAD_IDENTITY_POOL: The workload identity pool used to validate claims.
  • PRIMUS_WIP_PROVIDER: The workload identity pool provider which includes the attribute conditions used to validate tokens presented by the workload.
  • WORKLOAD_SERVICEACCOUNT: The service account of the workload.
gcloud config set project $PRIMUS_PROJECT_ID
gcloud compute instances create $WORKLOAD_VM \
 --confidential-compute \
 --shielded-secure-boot \
 --maintenance-policy=TERMINATE \
 --scopes=cloud-platform \
 --image-project=confidential-space-images \
 --image-family=confidential-space \
 --service-account=$WORKLOAD_SERVICEACCOUNT@$ \
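The TEE arguments are passed through the --metadata flag. Here is a hypothetical sketch of how the workload image and the tee-env-* variables are wired together; the metadata key names follow Confidential Space conventions, and the values (such as the node IP) are placeholders for illustration.

```shell
# Hypothetical illustration: "^~^" switches the gcloud metadata list separator
# to "~" so that values containing commas (like image paths) pass through safely
--metadata="^~^tee-image-reference=$PRIMUS_PROJECT_REPOSITORY_REGION-docker.pkg.dev/$PRIMUS_PROJECT_ID/$PRIMUS_ARTIFACT_REPOSITORY/$WORKLOAD_IMAGE_NAME:$WORKLOAD_IMAGE_TAG~tee-env-NODE_URL=10.128.0.2~tee-env-RESULTS_BUCKET=$PRIMUS_RESULT_STORAGE_BUCKET~tee-env-KEY_BUCKET=$PRIMUS_INPUT_STORAGE_BUCKET"
```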

Check the Cloud Storage Results

You can view the transaction receipt in Cloud Storage. It might take a few minutes for Confidential Space to boot and for results to appear. You'll know the container is done when the VM is in the stopped state.

  1. Go to the Cloud Storage Browser page.
  2. Click on the $PRIMUS_RESULT_STORAGE_BUCKET bucket.
  3. Click on the transaction_receipt file.
  4. Click Download to download and view the transaction response.

Note: If results aren't appearing, you can go to the $WORKLOAD_VM in the Compute Engine Cloud Console page and click on "Serial port 1 (console)" to view the logs.

Check the Ganache Blockchain Transaction

You can also view the transaction in the blockchain log.

  1. Go to the Compute Engine page in the Cloud console.
  2. Click on the mpc-lab-ethereum-node VM.
  3. Click SSH to open the SSH-in-browser window.
  4. In the SSH window, enter sudo docker ps to see the running Ganache container.
  5. Find the container ID for trufflesuite/ganache:v7.7.3.
  6. Enter sudo docker logs CONTAINER_ID, replacing CONTAINER_ID with the ID for trufflesuite/ganache:v7.7.3.
  7. View the Ganache logs and confirm that the transaction is listed.

5. Clean up

Here is the script that can be used to clean up the resources created as part of this codelab. The following resources will be deleted:

  • The input storage bucket used to store the encrypted key shares ($PRIMUS_INPUT_STORAGE_BUCKET).
  • The encryption key and keyring ($PRIMUS_KEY and $PRIMUS_KEYRING).
  • The service account used to access protected resources ($PRIMUS_SERVICEACCOUNT).
  • The workload identity pool ($PRIMUS_WORKLOAD_IDENTITY_POOL).
  • The workload service account ($WORKLOAD_SERVICEACCOUNT).
  • The workload Compute Engine instances.
  • The result storage bucket used to store the transaction result ($PRIMUS_RESULT_STORAGE_BUCKET).
  • The Artifact Registry repository used to store the workload image ($PRIMUS_ARTIFACT_REPOSITORY).

If you are done exploring, please consider deleting your project.

  • Go to the Cloud Platform Console
  • Select the project you want to shut down, then click "Delete" at the top. This schedules the project for deletion.

What's next?

Check out some of these similar codelabs...

Further reading