How to Transact Digital Assets with Multi-Party Computation and Confidential Space

1. Overview

Working knowledge of the following features and concepts is helpful, but not strictly required.

mpc flow diagram

What you'll learn

In this lab, we describe a reference implementation for MPC-compliant blockchain signing using Confidential Space. Let's imagine Company A, which wants to transfer digital assets to Company B. Since they are leveraging an MPC-compliant model, instead of individual private keys they use distributed key shares, where key shareholders (Alice and Bob) collaborate to sign a transaction. This gives Company A a simpler user experience and operational efficiencies while retaining control over its private keys.

To describe the critical components that make this possible, we will walk through the technical setup, and outline the approval and signing process that triggers the transfer of digital assets from Company A to Company B. Please note that Bob and Alice work for Company A, and are required to approve the transaction.

This reference implementation covers the signing operation, but it does not cover every aspect of MPC key management; for example, it does not address key generation. Alternatives and complementary approaches also exist – including using non-Google Cloud services to produce co-signatures, or having co-signers take turns building the blockchain signature in their own environments (a more decentralized architecture). Our hope is that this lab inspires different approaches to MPC on Google Cloud.

  • How to authorize access to protected resources based on the attributes of:
    • What: the workload container
    • Where: the Confidential Space environment (the Confidential Space image on Confidential VM)
    • Who: the account that is running the workload
  • How to configure the necessary Cloud resources for running Confidential Space
  • How to run the workload in a Confidential VM running the Confidential Space VM image

In this lab, you build the foundation for this interaction with a simple workload that signs an Ethereum transaction in Confidential Space based on co-signer key materials. First, you configure the necessary Cloud resources. Then, you run the workload in Confidential Space.

Configuring resources

  • $MPC_PROJECT_ID-mpc-encrypted-keys: the bucket that stores the encrypted keys.
  • $MPC_PROJECT_ID-mpc-results-storage: the bucket that stores the mpc transaction result.
  • mpc-workload-container: the Docker container that stores the workload.
  • trusted-workload-pool: the Workload Identity Pool (WIP) that validates claims.
  • attestation-verifier: the Workload Identity Pool provider, which includes the authorization condition used for tokens signed by the attestation verifier service.
  • trusted-mpc-account: the service account that trusted-workload-pool uses to access the protected resources - in this step it has permission to view the encrypted keys that are stored in the $MPC_PROJECT_ID-mpc-encrypted-keys bucket.
  • run-confidential-vm: the service account used by the Confidential VM that runs the workload.

Required APIs

You must enable the following APIs in the specified projects to be able to complete this guide.

  • Cloud KMS: cloudkms.googleapis.com
  • Compute Engine: compute.googleapis.com
  • Confidential Computing: confidentialcomputing.googleapis.com
  • Artifact Registry: artifactregistry.googleapis.com

2. Setup and Requirements

Self-paced environment setup

  1. Sign in to Cloud Console and create a new project or reuse an existing one. (If you don't already have a Gmail or G Suite account, you must create one.)

select project

new project

name project

Remember the project ID, a unique name across all Google Cloud projects (the name above has already been taken and will not work for you, sorry!). It will be referred to later in this codelab as PROJECT_ID.

  1. Next, you'll need to enable billing in Cloud Console in order to use Google Cloud resources.

Running through this codelab shouldn't cost much, if anything at all. Be sure to follow any instructions in the "Cleaning up" section, which advises you how to shut down resources so you don't incur billing beyond this tutorial. New users of Google Cloud are eligible for the $300USD Free Trial program.

Using Google Cloud Shell

While Google Cloud Platform and Node.js can be operated remotely from your laptop, in this codelab you will use Google Cloud Shell, a command line environment running in the Cloud.

This Debian-based virtual machine is loaded with all the development tools you'll need. It offers a persistent 5GB home directory and runs in Google Cloud, greatly enhancing network performance and authentication. This means that all you will need for this codelab is a browser (yes, it works on a Chromebook).

  1. To activate Cloud Shell from the Cloud Console, simply click Activate Cloud Shell


If you've never started Cloud Shell before, you're presented with an intermediate screen (below the fold) describing what it is. If that's the case, click Continue (and you won't ever see it again). Here's what that one-time screen looks like:


It should only take a few moments to provision and connect to Cloud Shell.


Once connected to Cloud Shell, you should see that you are already authenticated and that the project is already set to your PROJECT_ID.

gcloud auth list

Command output

Credentialed accounts:
 - <myaccount>@<mydomain>.com (active)
gcloud config list project

Command output

project = <PROJECT_ID>

If, for some reason, the project is not set, simply issue the following command:

gcloud config set project <PROJECT_ID>

Cloud Shell also sets some environment variables by default, which may be useful as you run future commands.


3. Key Generation and Encryption

To begin, set your base environment Project ID variable:

MPC_PROJECT_ID=$(gcloud config get-value core/project)

You can check this was properly set by running:

echo $MPC_PROJECT_ID
We'll be using this variable throughout the remainder of the lab.

If you haven't already, enable the APIs that will be used in the lab.

gcloud services enable cloudkms.googleapis.com compute.googleapis.com confidentialcomputing.googleapis.com artifactregistry.googleapis.com

Create the encryption keyring in KMS for the private key

Create the encryption key which will be used to encrypt the private key shares.

  1. Create the key ring. After running the below command, you might need to enter y to enable the API.
    gcloud kms keyrings create mpc-keys --location=global
  2. Create the KMS key.
    gcloud kms keys create mpc-key --location=global \
      --keyring=mpc-keys --purpose=encryption --protection-level=hsm
  3. Grant your user account access to the key to encrypt the keys.
    gcloud kms keys add-iam-policy-binding \
      projects/$MPC_PROJECT_ID/locations/global/keyRings/mpc-keys/cryptoKeys/mpc-key \
      --member="user:$(gcloud config get-value account)" \
      --role="roles/cloudkms.cryptoKeyEncrypterDecrypter"

Create the Ethereum private key

For this lab, you'll be using this private key: 0000000000000000000000000000000000000000000000000000000000000001

In a production scenario, you'd generate your own private key. If you'd like to use a different private key for the lab be sure to include it in the CLI flag below when running the Ganache VM.

We're now going to split our private key into two shares and encrypt each.

Store Alice's private key share in a file for encryption.

echo -n "00000000000000000000000000000000" >> alice-key-share

Then run the command to store Bob's private key share.

echo -n "00000000000000000000000000000001" >> bob-key-share
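These two 32-character halves are a deliberately simple stand-in for real MPC key shares: concatenating them reproduces the 64-character private key above, while neither half alone is the key. A quick plain-Node.js sketch (illustrative only, not part of the workload):

```javascript
// The full 64-character hex private key used in this lab.
const privateKey =
  '0000000000000000000000000000000000000000000000000000000000000001';

// Alice and Bob each hold one 32-character half, mirroring the two
// echo commands above.
const aliceShare = privateKey.slice(0, 32);
const bobShare = privateKey.slice(32);

// Only the concatenation of both shares reproduces the signing key.
console.log(aliceShare + bobShare === privateKey); // true
```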

Encrypt the Ethereum private key shards using KMS

Encrypt Alice's private key share.

gcloud kms encrypt \
    --key mpc-key \
    --keyring mpc-keys \
    --location global  \
    --plaintext-file alice-key-share \
    --ciphertext-file alice-encrypted-key-share

Encrypt Bob's private key share.

gcloud kms encrypt \
    --key mpc-key \
    --keyring mpc-keys \
    --location global  \
    --plaintext-file bob-key-share \
    --ciphertext-file bob-encrypted-key-share

Create the bucket to store the encrypted keys

  1. Create the mpc-encrypted-keys bucket. The mpc-encrypted-keys bucket will store the encrypted keys of Alice and Bob. In a production application, these keys could be held by Alice and Bob and then handed over when approval by each party is granted. They could also be separated out into different buckets on different projects.
    gsutil mb gs://$MPC_PROJECT_ID-mpc-encrypted-keys
  2. Upload Alice's and Bob's encrypted keys into the bucket. By doing this, we're approving the transaction and granting the Confidential Space VM access to the encrypted key.
    gcloud storage cp alice-encrypted-key-share gs://$MPC_PROJECT_ID-mpc-encrypted-keys/
    gcloud storage cp bob-encrypted-key-share gs://$MPC_PROJECT_ID-mpc-encrypted-keys/

Now that the keys have been created and encrypted, you can move on to the next step to create the MPC application.

4. Service Account and Workload Identity Pool

Create the MPC Service Account

  1. Create the trusted-mpc-account service account.
    gcloud iam service-accounts create trusted-mpc-account
  2. Allow the MPC service account access to decrypt the key shards.
    gcloud kms keys add-iam-policy-binding mpc-key \
    --keyring='mpc-keys' --location='global' \
    --member="serviceAccount:trusted-mpc-account@$MPC_PROJECT_ID.iam.gserviceaccount.com" \
    --role='roles/cloudkms.cryptoKeyDecrypter'

Create a Workload Identity Pool

We want to authorize workloads to access the encrypted keys based on attributes of the following resources.

  • What: Code that is verified
  • Where: An environment that is secure
  • Who: An operator that is trusted

We use Workload identity federation to enforce an access policy based on these requirements.

Workload identity federation allows you to specify attribute conditions. These conditions restrict which identities can authenticate with the workload identity pool (WIP). You can add the Attestation Verifier Service to the WIP as a workload identity pool provider to present measurements and enforce the policy.
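Conceptually, the provider's attribute condition is just a boolean predicate over the claims in the attestation token. The sketch below (plain JavaScript with hypothetical, simplified claim values; the real check is a CEL expression evaluated by the WIP provider, not application code) mirrors the what/where/who checks configured in the next step:

```javascript
// Hypothetical, simplified claims from an attestation token.
const assertion = {
  swname: 'CONFIDENTIAL_SPACE',                         // Where: the TEE image
  support_attributes: ['STABLE'],                       // Where: production image
  image_reference: 'initial-workload-container:latest', // What: the workload
  google_service_accounts: ['run-confidential-vm'],     // Who: the operator
};

// Mirror of the what/where/who policy; real enforcement happens in CEL.
const authorized =
  assertion.swname === 'CONFIDENTIAL_SPACE' &&
  assertion.support_attributes.includes('STABLE') &&
  assertion.image_reference.endsWith('initial-workload-container:latest') &&
  assertion.google_service_accounts.includes('run-confidential-vm');

console.log(authorized); // true
```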

To create the WIP, complete the following steps.


  1. Create a WIP.
    gcloud iam workload-identity-pools create trusted-workload-pool \
        --location="global"
  2. Create a new OIDC workload identity pool provider. The specified --attribute-condition authorizes access to the mpc-workloads container. It requires:
    • What: the latest initial-workload-container uploaded to the mpc-workloads repository.
    • Where: the Confidential Space trusted execution environment, version 0.1 or later.
    • Who: the run-confidential-vm service account that runs the workload.
    Note: change int(assertion.swversion) >= 1 to int(assertion.swversion) == 0 if you choose the confidential-space-debug image when creating the instance in a later step. See here for the full list of Confidential VM attribute conditions.
    gcloud iam workload-identity-pools providers create-oidc attestation-verifier \
        --location="global" \
        --workload-identity-pool="trusted-workload-pool" \
        --issuer-uri="https://confidentialcomputing.googleapis.com/" \
        --allowed-audiences="https://sts.googleapis.com" \
        --attribute-mapping="google.subject='assertion.sub'" \
        --attribute-condition="assertion.swname == 'CONFIDENTIAL_SPACE' &&
          'STABLE' in assertion.submods.confidential_space.support_attributes &&
          assertion.submods.container.image_reference == 'us-docker.pkg.dev/$MPC_PROJECT_ID/mpc-workloads/initial-workload-container:latest' &&
          'run-confidential-vm@$MPC_PROJECT_ID.iam.gserviceaccount.com' in assertion.google_service_accounts"
  3. Grant the workloadIdentityUser role on the trusted-mpc-account service account to the trusted-workload-pool WIP. This allows the WIP to impersonate the service account.
    gcloud iam service-accounts add-iam-policy-binding \
    trusted-mpc-account@$MPC_PROJECT_ID.iam.gserviceaccount.com \
    --role=roles/iam.workloadIdentityUser \
    --member="principalSet://iam.googleapis.com/projects/$(gcloud projects describe $MPC_PROJECT_ID --format="value(projectNumber)")/locations/global/workloadIdentityPools/trusted-workload-pool/*"

Create run-confidential-vm service account

Create the run-confidential-vm service account.


  1. Create the run-confidential-vm service account.
    gcloud iam service-accounts create run-confidential-vm
  2. Grant the Service Account User role on the run-confidential-vm service account to your user account. This allows your user account to impersonate the service account.
    gcloud iam service-accounts add-iam-policy-binding \
        run-confidential-vm@$MPC_PROJECT_ID.iam.gserviceaccount.com \
        --member="user:$(gcloud config get-value account)" \
        --role="roles/iam.serviceAccountUser"
  3. (Optional) Grant the service account the Log Writer permission. This allows the Confidential Space environment to write logs to Cloud Logging in addition to the Serial Console, so you can review logs after the VM is terminated (Requires Security Admin permission).
    gcloud projects add-iam-policy-binding $MPC_PROJECT_ID \
        --member=serviceAccount:run-confidential-vm@$MPC_PROJECT_ID.iam.gserviceaccount.com \
        --role=roles/logging.logWriter

5. Create the Blockchain Node and Results Bucket

Ganache Ethereum Node

  1. Create the Ethereum Ganache instance and take note of the IP address. After running the below command, you might need to enter y to enable the API.
  gcloud compute instances create-with-container mpc-lab-ethereum-node  \
    --zone=us-central1-a \
    --tags=http-server \
    --shielded-secure-boot \
    --shielded-vtpm \
    --shielded-integrity-monitoring \
    --container-image=docker.io/trufflesuite/ganache:v7.7.3 \
    --container-arg=--port=80 \
    --container-arg=--wallet.accounts=\"0x0000000000000000000000000000000000000000000000000000000000000001,0x21E19E0C9BAB2400000\"

Create a bucket for results

Create the $MPC_PROJECT_ID-mpc-results-storage bucket. Then grant the run-confidential-vm service account permission to create files in the bucket, so it can store the workload results there.


  1. Create the mpc-results-storage bucket.
    gsutil mb gs://$MPC_PROJECT_ID-mpc-results-storage
  2. Grant the Storage Object Creator role on the $MPC_PROJECT_ID-mpc-results-storage bucket to the run-confidential-vm service account. This permits the service account to store query results to the bucket.
    gsutil iam ch \
        serviceAccount:run-confidential-vm@$MPC_PROJECT_ID.iam.gserviceaccount.com:roles/storage.objectCreator \
        gs://$MPC_PROJECT_ID-mpc-results-storage
  3. Grant the Storage Object Viewer role on the $MPC_PROJECT_ID-mpc-encrypted-keys bucket to the trusted-mpc-account service account. This permits the service account to view the encrypted keys that were added by Alice and Bob.
    gsutil iam ch \
        serviceAccount:trusted-mpc-account@$MPC_PROJECT_ID.iam.gserviceaccount.com:roles/storage.objectViewer \
        gs://$MPC_PROJECT_ID-mpc-encrypted-keys

6. Create the MPC Instance

Create the files in the editor

  1. In Cloud Shell, click the Open Editor button to launch the Cloud Shell Editor.

You'll then find yourself in an IDE environment similar to Visual Studio Code, in which you can create projects, edit source code, run your programs, etc. If your screen is too cramped, you can expand or shrink the dividing line between the console and your edit/terminal window by dragging the horizontal bar between those two regions.

You can switch back and forth between the Editor and the Terminal by clicking the Open Editor and Open Terminal buttons, respectively. Try switching back and forth between these two environments now.

Next, create a folder in which to store your work for this lab, by selecting File->New Folder, enter mpc-ethereum-demo, and click OK. All of the files you create in this lab, and all of the work you do in Cloud Shell, will take place in this folder.


Now create a package.json file. In the Cloud Editor window, click the File->New File menu to create a new file. When prompted for the new file's name, enter package.json and press the OK button. Make sure the new file ends up in the mpc-ethereum-demo project folder.

Place the following code into the package.json file. This will tell our image what packages should be used for the mpc application. In this case, we're using the @google-cloud/kms, @google-cloud/storage, ethers, and fast-crc32c libraries.

{
  "name": "gcp-mpc-ethereum-demo",
  "version": "1.0.0",
  "description": "Demo for GCP multi-party-compute on Confidential Space",
  "main": "index.js",
  "scripts": {
    "start": "node index.js"
  },
  "type": "module",
  "dependencies": {
    "@google-cloud/kms": "^3.2.0",
    "@google-cloud/storage": "^6.9.2",
    "ethers": "^5.7.2",
    "fast-crc32c": "^2.0.0"
  },
  "author": "",
  "license": "ISC"
}


Next, create an index.js file. This is the entry file that specifies which commands run when the image starts up. We've also included a sample unsigned transaction. This transaction would normally come from an untrusted application that asks users for their signature. The index.js file also imports functions from mpc.js, which we'll create next.

import {signTransaction, submitTransaction, uploadFromMemory} from './mpc.js';

const signAndSubmitTransaction = async () => {
  try {
    // Create the unsigned transaction object
    const unsignedTransaction = {
      nonce: 0,
      gasLimit: 21000,
      gasPrice: '0x09184e72a000',
      to: '0x0000000000000000000000000000000000000000',
      value: '0x00',
      data: '0x',
    };

    // Sign the transaction
    const signedTransaction = await signTransaction(unsignedTransaction);

    // Submit the transaction to Ganache
    const transaction = await submitTransaction(signedTransaction);

    // Write the transaction receipt to the results bucket
    await uploadFromMemory(JSON.stringify(transaction));

    return transaction;
  } catch (e) {
    console.log('Error signing and submitting transaction: ' + e);
  }
};

await signAndSubmitTransaction();


Create the mpc.js file to do the signing and paste the following code into the file. This is where the transaction signing will occur. You'll notice we're importing from kms-decrypt and credential-config, which we'll be making next.

import {ethers} from 'ethers';
import {decryptSymmetric} from './kms-decrypt.js';
import {Storage} from '@google-cloud/storage';
import {credentialConfig} from './credential-config.js';

const providers = ethers.providers;
const Wallet = ethers.Wallet;

// The ID of the GCS bucket holding the encrypted keys
const bucketName = process.env.KEY_BUCKET;

// Name of the encrypted key files.
const encryptedKeyFile1 = 'alice-encrypted-key-share';
const encryptedKeyFile2 = 'bob-encrypted-key-share';

// Create a new storage client with the credentials
const storageWithCreds = new Storage({
  credentials: credentialConfig,
});

// Create a new storage client without the credentials
const storage = new Storage();

const downloadIntoMemory = async (keyFile) => {
  // Downloads the file into a buffer in memory.
  const contents = await storageWithCreds.bucket(bucketName).file(keyFile).download();

  return contents;
};

const provider = new providers.JsonRpcProvider(`http://${process.env.NODE_URL}:80`);

export const signTransaction = async (unsignedTransaction) => {
  /* Check if Alice and Bob have both approved the transaction
  For this example, we're checking if their encrypted keys are available. */
  const encryptedKey1 = await downloadIntoMemory(encryptedKeyFile1).catch(console.error);
  const encryptedKey2 = await downloadIntoMemory(encryptedKeyFile2).catch(console.error);

  // For each key share, make a call to KMS to decrypt the key
  const privateKeyshare1 = await decryptSymmetric(encryptedKey1[0]);
  const privateKeyshare2 = await decryptSymmetric(encryptedKey2[0]);

  /* Perform the MPC calculations
  In this example, we're combining the private key shares
  Alternatively, you could import your mpc calculations here */
  // Concatenate the shares (0x-prefixed) to form the full private key.
  const wallet = new Wallet('0x' + privateKeyshare1 + privateKeyshare2);

  // Sign the transaction
  const signedTransaction = await wallet.signTransaction(unsignedTransaction);

  return signedTransaction;
};

export const submitTransaction = async (signedTransaction) => {
  // This can now be sent to Ganache
  const hash = await provider.sendTransaction(signedTransaction);
  return hash;
};

export const uploadFromMemory = async (contents) => {
  // Upload the results to the bucket without service account impersonation
  await storage.bucket(process.env.RESULTS_BUCKET)
      .file('transaction_receipt_' + Date.now())
      .save(contents);
};


Create the kms decryption file.

import {KeyManagementServiceClient} from '@google-cloud/kms';
import {credentialConfig} from './credential-config.js';

import crc32c from 'fast-crc32c';

const projectId = process.env.MPC_PROJECT_ID;
const locationId = 'global';
const keyRingId = 'mpc-keys';
const keyId = 'mpc-key';

// Instantiates a client
const client = new KeyManagementServiceClient({
  credentials: credentialConfig,
});

// Build the key name
const keyName = client.cryptoKeyPath(projectId, locationId, keyRingId, keyId);

export const decryptSymmetric = async (ciphertext) => {
  const ciphertextCrc32c = crc32c.calculate(ciphertext);
  const [decryptResponse] = await client.decrypt({
    name: keyName,
    ciphertext: ciphertext,
    ciphertextCrc32c: {
      value: ciphertextCrc32c,
    },
  });

  // Optional, but recommended: perform integrity verification on decryptResponse.
  // For more details on ensuring E2E in-transit integrity to and from Cloud KMS visit:
  if (
    crc32c.calculate(decryptResponse.plaintext) !==
    Number(decryptResponse.plaintextCrc32c.value)
  ) {
    throw new Error('Decrypt: response corrupted in-transit');
  }

  const plaintext = decryptResponse.plaintext.toString();

  return plaintext;
};


Create the credential-config.js file. This stores our workload identity pool paths and details for the service account impersonation.

export const credentialConfig = {
  type: 'external_account',
  audience: `//iam.googleapis.com/projects/${process.env.MPC_PROJECT_NUMBER}/locations/global/workloadIdentityPools/trusted-workload-pool/providers/attestation-verifier`,
  subject_token_type: 'urn:ietf:params:oauth:token-type:jwt',
  token_url: 'https://sts.googleapis.com/v1/token',
  credential_source: {
    file: '/run/container_launcher/attestation_verifier_claims_token',
  },
  service_account_impersonation_url: `https://iamcredentials.googleapis.com/v1/projects/-/serviceAccounts/trusted-mpc-account@${process.env.MPC_PROJECT_ID}.iam.gserviceaccount.com:generateAccessToken`,
};


Finally, we'll create our Dockerfile.

# pull official base image
FROM node:16.18.0

ENV NODE_ENV=production

WORKDIR /usr/src/app

COPY ["package.json", "package-lock.json*", "./"]

RUN npm install --production

COPY . .

LABEL "tee.launch_policy.allow_cmd_override"="true"

CMD [ "node", "index.js" ]

Once all the files are created, it should look like:

Cloud Editor Window

Create the repository

Click "Open Terminal" to re-open the Cloud Shell. Then create the Artifact Registry Docker repository:

$ gcloud artifacts repositories create mpc-workloads \
  --repository-format=docker --location=us

Build and publish the Docker container.

$ gcloud auth configure-docker us-docker.pkg.dev
docker build -t us-docker.pkg.dev/$MPC_PROJECT_ID/mpc-workloads/initial-workload-container:latest mpc-ethereum-demo
docker push us-docker.pkg.dev/$MPC_PROJECT_ID/mpc-workloads/initial-workload-container:latest

You might need to hit Y to confirm the config file.

  1. Grant the service account that's going to run the workload the Artifact Registry Reader (roles/artifactregistry.reader) role so it can read from the repository:
    gcloud artifacts repositories add-iam-policy-binding mpc-workloads \
        --location=us \
        --member=serviceAccount:run-confidential-vm@$MPC_PROJECT_ID.iam.gserviceaccount.com \
        --role=roles/artifactregistry.reader
  2. Grant the Confidential Computing Workload User role (roles/confidentialcomputing.workloadUser) to the service account.
    gcloud projects add-iam-policy-binding $MPC_PROJECT_ID \
    --member=serviceAccount:run-confidential-vm@$MPC_PROJECT_ID.iam.gserviceaccount.com \
    --role=roles/confidentialcomputing.workloadUser

7. Create the MPC Operator Confidential Space Instance

Create the Confidential VM instance.

The following variables have been added to the image:

  • NODE_URL: the URL of the Ethereum node that will process the signed transaction.
  • RESULTS_BUCKET: the bucket that stores the mpc transaction result.
  • KEY_BUCKET: the bucket that stores the mpc encrypted keys.
  • MPC_PROJECT_NUMBER: the project number, used for the credential config file.
  • MPC_PROJECT_ID: the project id, used for the credential config file.
gcloud compute instances create mpc-cvm --confidential-compute \
  --shielded-secure-boot \
  --maintenance-policy=TERMINATE --scopes=cloud-platform --zone=us-central1-a \
  --image-project=confidential-space-images \
  --image-family=confidential-space \
  --service-account=run-confidential-vm@$MPC_PROJECT_ID.iam.gserviceaccount.com \
  --metadata ^~^tee-image-reference=us-docker.pkg.dev/$MPC_PROJECT_ID/mpc-workloads/initial-workload-container:latest~tee-restart-policy=Never~tee-env-NODE_URL=$(gcloud compute instances describe mpc-lab-ethereum-node --format='get(networkInterfaces[0].networkIP)' --zone=us-central1-a)~tee-env-RESULTS_BUCKET=$MPC_PROJECT_ID-mpc-results-storage~tee-env-KEY_BUCKET=$MPC_PROJECT_ID-mpc-encrypted-keys~tee-env-MPC_PROJECT_ID=$MPC_PROJECT_ID~tee-env-MPC_PROJECT_NUMBER=$(gcloud projects describe $MPC_PROJECT_ID --format="value(projectNumber)")
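The tee-env-* metadata keys in the command above surface inside the workload as ordinary environment variables, which is how mpc.js and credential-config.js pick them up. A small sketch of that lookup (the fallback values are hypothetical placeholders for running outside the VM):

```javascript
// Each tee-env-NAME metadata key becomes process.env.NAME in the container.
// The fallbacks are hypothetical, for local experimentation only.
const config = {
  nodeUrl: process.env.NODE_URL ?? '10.128.0.2',
  resultsBucket: process.env.RESULTS_BUCKET ?? 'example-mpc-results-storage',
  keyBucket: process.env.KEY_BUCKET ?? 'example-mpc-encrypted-keys',
  projectId: process.env.MPC_PROJECT_ID ?? 'example-project',
  projectNumber: process.env.MPC_PROJECT_NUMBER ?? '123456789',
};

// The workload reaches Ganache at port 80, as in mpc.js.
console.log(`Ganache node: http://${config.nodeUrl}:80`);
```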

Check the Cloud Storage Results

You can view the transaction receipt in Cloud Storage. It might take a few minutes for Confidential Space to boot and for results to appear. You'll know the container is done when the VM is in the stopped state.

  1. Go to the Cloud Storage Browser page.
  2. Click $MPC_PROJECT_ID-mpc-results-storage.
  3. Click on the transaction_receipt file.
  4. Click Download to download and view the transaction response.

Check the Ganache Blockchain Transaction

You can also view the transaction in the blockchain log.

  1. Go to the Cloud Compute Engine page.
  2. Click on the mpc-lab-ethereum-node VM.
  3. Click SSH to open the SSH-in-browser window.
  4. In the SSH window, enter sudo docker ps to see the running Ganache container.
  5. Find the container ID for trufflesuite/ganache:v7.7.3
  6. Enter sudo docker logs CONTAINER_ID replacing CONTAINER_ID with the ID for trufflesuite/ganache:v7.7.3.
  7. View the logs for Ganache and confirm that there is a transaction listed in the logs.

8. Congratulations!

You created a Confidential Space VM and signed a blockchain transaction using multi-party computation!

Clean up

If you are done exploring, please consider deleting your project.

  • Go to the Cloud Platform Console
  • Select the project you want to shut down, then click "Delete" at the top. This schedules the project for deletion.

Learn More