1. Overview
Working knowledge of the following features and concepts is helpful, but not strictly required.
- Cloud Storage, specifically buckets
- Compute Engine, specifically Confidential VM
- Service accounts
- Containers and remote repositories
- Workload identity federation and attribute conditions
In this lab, we describe a reference implementation for MPC-compliant blockchain signing using Confidential Space. Imagine Company A, which wants to transfer digital assets to Company B. Because it uses an MPC-compliant model, instead of individual private keys it uses distributed key shares, where key shareholders (Alice and Bob) collaborate to sign a transaction. This gives Company A a simpler user experience and operational efficiencies, while retaining control over its private keys.
To describe the critical components that make this possible, we will walk through the technical setup and outline the approval and signing process that triggers the transfer of digital assets from Company A to Company B. Note that Alice and Bob work for Company A, and both are required to approve the transaction.
This reference implementation covers the signing operation, but it does not cover all aspects of MPC key management. For example, it does not cover key generation. Alternative and complementary approaches also exist, including using non-Google Cloud services to produce co-signatures, or having co-signers take turns building the blockchain signature in their own environments (a more decentralized architecture). Our hope is that this lab inspires different approaches to MPC on Google Cloud.
What you'll learn
- How to authorize access to protected resources based on the attributes of:
  - What: the workload container
  - Where: the Confidential Space environment (the Confidential Space image on Confidential VM)
  - Who: the account that is running the workload
- How to configure the necessary Cloud resources for running Confidential Space
- How to run the workload in a Confidential VM running the Confidential Space VM image
In this lab, you build the foundation for this interaction with a simple workload that signs an Ethereum transaction in Confidential Space based on co-signer key materials. First, you configure the necessary Cloud resources. Then, you run the workload in Confidential Space.
Configuring resources
- $MPC_PROJECT_ID-mpc-encrypted-keys: the bucket that stores the encrypted keys.
- $MPC_PROJECT_ID-mpc-results-storage: the bucket that stores the MPC transaction result.
- mpc-workload-container: the Docker container that stores the workload.
- trusted-workload-pool: the workload identity pool (WIP) that validates claims.
- attestation-verifier: the workload identity pool provider, which includes the authorization condition to use for tokens signed by the attestation service.
- trusted-mpc-account: the service account that trusted-workload-pool uses to access the protected resources. In this step, it has permission to view the encrypted keys that are stored in the $MPC_PROJECT_ID-mpc-encrypted-keys bucket.
- run-confidential-vm: the service account that has permission to run the Confidential VM that runs the workload.
Required APIs
You must enable the following APIs in the specified projects to be able to complete this guide.
| API name | API title |
| --- | --- |
| cloudkms.googleapis.com | Cloud KMS |
| compute.googleapis.com | Compute Engine |
| confidentialcomputing.googleapis.com | Confidential Computing |
| iamcredentials.googleapis.com | IAM |
| artifactregistry.googleapis.com | Artifact Registry |
2. Setup and Requirements
Self-paced environment setup
- Sign in to the Cloud Console and create a new project or reuse an existing one. (If you don't already have a Gmail or G Suite account, you must create one.)
Remember the project ID, a unique name across all Google Cloud projects. It will be referred to later in this codelab as PROJECT_ID.
- Next, you'll need to enable billing in the Cloud Console in order to use Google Cloud resources.
Running through this codelab shouldn't cost much, if anything at all. Be sure to follow the instructions in the "Cleaning up" section, which advises you how to shut down resources so you don't incur billing beyond this tutorial. New users of Google Cloud are eligible for the $300 USD Free Trial program.
Using Google Cloud Shell
While Google Cloud Platform and Node.js can be operated remotely from your laptop, in this codelab you will use Google Cloud Shell, a command line environment running in the Cloud.
This Debian-based virtual machine is loaded with all the development tools you'll need. It offers a persistent 5GB home directory and runs in Google Cloud, greatly enhancing network performance and authentication. This means that all you will need for this codelab is a browser (yes, it works on a Chromebook).
- To activate Cloud Shell from the Cloud Console, simply click Activate Cloud Shell
If you've never started Cloud Shell before, you're presented with an intermediate screen (below the fold) describing what it is. If that's the case, click Continue (and you won't ever see it again). Here's what that one-time screen looks like:
It should only take a few moments to provision and connect to Cloud Shell.
Once connected to Cloud Shell, you should see that you are already authenticated and that the project is already set to your PROJECT_ID
.
gcloud auth list
Command output
Credentialed accounts:
 - <myaccount>@<mydomain>.com (active)
gcloud config list project
Command output
[core]
project = <PROJECT_ID>
If, for some reason, the project is not set, simply issue the following command:
gcloud config set project <PROJECT_ID>
Cloud Shell also sets some environment variables by default, which may be useful as you run future commands.
echo $GOOGLE_CLOUD_PROJECT
Command output
<PROJECT_ID>
3. Key Generation and Encryption
To begin, set your base environment Project ID variable:
MPC_PROJECT_ID=$(gcloud config get-value core/project)
You can check this was properly set by running:
echo $MPC_PROJECT_ID
We'll be using this variable throughout the remainder of the lab.
If you haven't already, enable the APIs that will be used in the lab.
gcloud services enable cloudkms.googleapis.com compute.googleapis.com confidentialcomputing.googleapis.com iamcredentials.googleapis.com artifactregistry.googleapis.com
Create the encryption keyring in KMS for the private key
Create the encryption key which will be used to encrypt the private key shares.
- Create the key ring. After running the command below, you might need to enter y to enable the API.
gcloud kms keyrings create mpc-keys --location=global
- Create the KMS key.
gcloud kms keys create mpc-key --location=global \
  --keyring=mpc-keys --purpose=encryption --protection-level=hsm
- Grant your user account access to the key so you can encrypt the key shares.
gcloud kms keys add-iam-policy-binding \
  projects/$MPC_PROJECT_ID/locations/global/keyRings/mpc-keys/cryptoKeys/mpc-key \
  --member="user:$(gcloud config get-value account)" \
  --role='roles/cloudkms.cryptoKeyEncrypter'
Create the Ethereum private key
For this lab, you'll be using this private key: 0000000000000000000000000000000000000000000000000000000000000001
In a production scenario, you'd generate your own private key. If you'd like to use a different private key for the lab, be sure to include it in the CLI flag below when running the Ganache VM.
We're now going to split our private key into two shares and encrypt each.
Write Alice's private key share to a file for encryption.
echo -n "00000000000000000000000000000000" >> alice-key-share
Then run the command to store Bob's private key share.
echo -n "00000000000000000000000000000001" >> bob-key-share
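For intuition: in this simplified demo, the two 32-character hex shares simply concatenate to reconstruct the 64-character private key shown above. A minimal sketch in plain Node.js (illustrative only; real MPC schemes such as threshold signatures never reassemble the full key in one place):

```javascript
// The demo key shares from this lab (hex strings, 32 chars each).
const aliceShare = '00000000000000000000000000000000';
const bobShare = '00000000000000000000000000000001';

// "Combining" the shares here is string concatenation, which yields
// the full 64-character private key used by the lab's workload.
const combined = aliceShare + bobShare;

console.log(combined);
// 0000000000000000000000000000000000000000000000000000000000000001
```

This is also exactly what the workload's signTransaction function does later with privateKeyshare1 + privateKeyshare2.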
Encrypt the Ethereum private key shards using KMS
Encrypt Alice's private key share.
gcloud kms encrypt \
--key mpc-key \
--keyring mpc-keys \
--location global \
--plaintext-file alice-key-share \
--ciphertext-file alice-encrypted-key-share
Encrypt Bob's private key share.
gcloud kms encrypt \
--key mpc-key \
--keyring mpc-keys \
--location global \
--plaintext-file bob-key-share \
--ciphertext-file bob-encrypted-key-share
Create the bucket to store the encrypted keys
- Create the mpc-encrypted-keys bucket. The mpc-encrypted-keys bucket will store the encrypted keys of Alice and Bob. In a production application, these keys could be held by Alice and Bob and handed over when each party grants approval. They could also be separated into different buckets in different projects.
gsutil mb gs://$MPC_PROJECT_ID-mpc-encrypted-keys
- Upload Alice's and Bob's encrypted keys into the bucket. By doing this, we're approving the transaction and granting the Confidential Space VM access to the encrypted keys.
gcloud storage cp alice-encrypted-key-share gs://$MPC_PROJECT_ID-mpc-encrypted-keys/
gcloud storage cp bob-encrypted-key-share gs://$MPC_PROJECT_ID-mpc-encrypted-keys/
Now that the keys have been created and encrypted, you can move on to the next step to create the MPC application.
4. Service Account and Workload Identity Pool
Create the MPC Service Account
- Create the trusted-mpc-account service account.
gcloud iam service-accounts create trusted-mpc-account
- Allow the MPC service account to decrypt the key shards.
gcloud kms keys add-iam-policy-binding mpc-key \
  --keyring='mpc-keys' --location='global' \
  --member="serviceAccount:trusted-mpc-account@$MPC_PROJECT_ID.iam.gserviceaccount.com" \
  --role='roles/cloudkms.cryptoKeyDecrypter'
Create a Workload Identity Pool
We want to authorize workloads to access the encrypted keys based on attributes of the following resources.
- What: Code that is verified
- Where: An environment that is secure
- Who: An operator that is trusted
We use Workload identity federation to enforce an access policy based on these requirements.
Workload identity federation allows you to specify attribute conditions. These conditions restrict which identities can authenticate with the workload identity pool (WIP). You can add the Attestation Verifier Service to the WIP as a workload identity pool provider to present measurements and enforce the policy.
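The provider you create in the next step encodes these three checks as a single CEL attribute condition. Broken out clause by clause, it looks roughly like this (annotations ours; the image path and service account are the ones created in this lab):

```
// What: the exact workload container image uploaded to Artifact Registry
assertion.submods.container.image_reference ==
  'us-docker.pkg.dev/$MPC_PROJECT_ID/mpc-workloads/initial-workload-container:latest'

// Where: a production (STABLE) Confidential Space environment
assertion.swname == 'CONFIDENTIAL_SPACE' &&
'STABLE' in assertion.submods.confidential_space.support_attributes

// Who: the operator service account running the Confidential VM
'run-confidential-vm@$MPC_PROJECT_ID.iam.gserviceaccount.com'
  in assertion.google_service_accounts
```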
To create the WIP, complete the following steps.
CLI
- Create a WIP.
gcloud iam workload-identity-pools create trusted-workload-pool \
  --location="global"
- Create a new OIDC workload identity pool provider. The specified --attribute-condition authorizes access to the mpc-workloads container. It requires:
  - What: the latest initial-workload-container uploaded to the mpc-workloads repository.
  - Where: the Confidential Space trusted execution environment, version 0.1 or later.
  - Who: the trusted-mpc service account.
Note: change int(assertion.swversion) >= 1 to int(assertion.swversion) == 0 if you choose the confidential-space-debug image when creating the instance in a later step. See here for the full list of Confidential VM attribute conditions.
gcloud iam workload-identity-pools providers create-oidc attestation-verifier \
  --location="global" \
  --workload-identity-pool="trusted-workload-pool" \
  --issuer-uri="https://confidentialcomputing.googleapis.com/" \
  --allowed-audiences="https://sts.googleapis.com" \
  --attribute-mapping="google.subject='assertion.sub'" \
  --attribute-condition="assertion.swname == 'CONFIDENTIAL_SPACE' && 'STABLE' in assertion.submods.confidential_space.support_attributes && assertion.submods.container.image_reference == 'us-docker.pkg.dev/$MPC_PROJECT_ID/mpc-workloads/initial-workload-container:latest' && 'run-confidential-vm@$MPC_PROJECT_ID.iam.gserviceaccount.com' in assertion.google_service_accounts"
- Grant the workloadIdentityUser role on the trusted-mpc-account service account to the trusted-workload-pool WIP. This allows the WIP to impersonate the service account.
gcloud iam service-accounts add-iam-policy-binding \
  trusted-mpc-account@$MPC_PROJECT_ID.iam.gserviceaccount.com \
  --role=roles/iam.workloadIdentityUser \
  --member="principalSet://iam.googleapis.com/projects/$(gcloud projects describe $MPC_PROJECT_ID --format="value(projectNumber)")/locations/global/workloadIdentityPools/trusted-workload-pool/*"
Create run-confidential-vm service account
Create the run-confidential-vm
service account.
CLI
- Create the run-confidential-vm service account.
gcloud iam service-accounts create run-confidential-vm
- Grant the Service Account User role on the run-confidential-vm service account to your user account. This allows your user account to impersonate the service account.
gcloud iam service-accounts add-iam-policy-binding \
  run-confidential-vm@$MPC_PROJECT_ID.iam.gserviceaccount.com \
  --member="user:$(gcloud config get-value account)" \
  --role='roles/iam.serviceAccountUser'
- (Optional) Grant the service account the Log Writer permission. This allows the Confidential Space environment to write logs to Cloud Logging in addition to the serial console, so you can review logs after the VM is terminated. (Requires the Security Admin permission.)
gcloud projects add-iam-policy-binding $MPC_PROJECT_ID \
  --member=serviceAccount:run-confidential-vm@$MPC_PROJECT_ID.iam.gserviceaccount.com \
  --role=roles/logging.logWriter
5. Create the Blockchain Node and Results Bucket
Ganache Ethereum Node
- Create the Ethereum Ganache instance and take note of the IP address. After running the command below, you might need to enter y to enable the API.
gcloud compute instances create-with-container mpc-lab-ethereum-node \
--zone=us-central1-a \
--tags=http-server \
--shielded-secure-boot \
--shielded-vtpm \
--shielded-integrity-monitoring \
--container-image=docker.io/trufflesuite/ganache:v7.7.3 \
--container-arg=--wallet.accounts=\"0x0000000000000000000000000000000000000000000000000000000000000001,0x21E19E0C9BAB2400000\" \
--container-arg=--port=80
Create a bucket for results
Create the $MPC_PROJECT_ID-mpc-results-storage
bucket. Then grant the run-confidential-vm
service account permission to create files in the bucket, so it can store the workload results there.
CLI
- Create the mpc-results-storage bucket.
gsutil mb gs://$MPC_PROJECT_ID-mpc-results-storage
- Grant the Storage Object Creator role on the $MPC_PROJECT_ID-mpc-results-storage bucket to the run-confidential-vm service account. This permits the service account to store the workload results in the bucket.
gsutil iam ch \
  serviceAccount:run-confidential-vm@$MPC_PROJECT_ID.iam.gserviceaccount.com:objectCreator \
  gs://$MPC_PROJECT_ID-mpc-results-storage
- Grant the Storage Object Viewer role on the $MPC_PROJECT_ID-mpc-encrypted-keys bucket to the trusted-mpc-account service account. This permits the service account to view the encrypted keys that were added by Alice and Bob.
gsutil iam ch \
  serviceAccount:trusted-mpc-account@$MPC_PROJECT_ID.iam.gserviceaccount.com:objectViewer \
  gs://$MPC_PROJECT_ID-mpc-encrypted-keys
6. Create the MPC Instance
Create the files in the editor
- In Cloud Shell, click the Open Editor button to launch the Cloud Shell Editor.
You'll then find yourself in an IDE environment similar to Visual Studio Code, in which you can create projects, edit source code, run your programs, etc. If your screen is too cramped, you can expand or shrink the dividing line between the console and your edit/terminal window by dragging the horizontal bar between those two regions.
You can switch back and forth between the Editor and the Terminal by clicking the Open Editor
and Open Terminal
buttons, respectively. Try switching back and forth between these two environments now.
Next, create a folder in which to store your work for this lab, by selecting File->New Folder, enter mpc-ethereum-demo
, and click OK
. All of the files you create in this lab, and all of the work you do in Cloud Shell, will take place in this folder.
package.json
Now create a package.json
file. In the Cloud Editor window, click the File->New File menu to create a new file. When prompted for the new file's name, enter package.json
and press the OK
button. Make sure the new file ends up in the mpc-ethereum-demo
project folder.
Place the following code into the package.json file. This will tell our image what packages should be used for the mpc application. In this case, we're using the @google-cloud/kms, @google-cloud/storage, ethers, and fast-crc32c libraries.
{
"name": "gcp-mpc-ethereum-demo",
"version": "1.0.0",
"description": "Demo for GCP multi-party-compute on Confidential Space",
"main": "index.js",
"scripts": {
"start": "node index.js"
},
"type": "module",
"dependencies": {
"@google-cloud/kms": "^3.2.0",
"@google-cloud/storage": "^6.9.2",
"ethers": "^5.7.2",
"fast-crc32c": "^2.0.0"
},
"author": "",
"license": "ISC"
}
index.js
Next, create an index.js file. This is our entry file that specifies what commands should be run when the image starts up. We've also included a sample unsigned transaction. This transaction would normally come from an untrusted application that asks users for their signature. This index.js file also imports functions from mpc.js, which we'll create next.
import {signTransaction, submitTransaction, uploadFromMemory} from './mpc.js';
const signAndSubmitTransaction = async () => {
try {
// Create the unsigned transaction object
const unsignedTransaction = {
nonce: 0,
gasLimit: 21000,
gasPrice: '0x09184e72a000',
to: '0x0000000000000000000000000000000000000000',
value: '0x00',
data: '0x',
};
// Sign the transaction
const signedTransaction = await signTransaction(unsignedTransaction);
// Submit the transaction to Ganache
const transaction = await submitTransaction(signedTransaction);
// Write the transaction receipt
uploadFromMemory(transaction);
return transaction;
} catch (e) {
console.log(e);
uploadFromMemory(e);
}
};
await signAndSubmitTransaction();
mpc.js
Create the mpc.js file to do the signing and paste the following code into the file. This is where the transaction signing will occur. You'll notice we're importing from kms-decrypt and credential-config, which we'll be making next.
import {ethers} from 'ethers';
import {decryptSymmetric} from './kms-decrypt.js';
import {Storage} from '@google-cloud/storage';
import {credentialConfig} from './credential-config.js';
const providers = ethers.providers;
const Wallet = ethers.Wallet;
// The ID of the GCS bucket holding the encrypted keys
const bucketName = process.env.KEY_BUCKET;
// Name of the encrypted key files.
const encryptedKeyFile1 = 'alice-encrypted-key-share';
const encryptedKeyFile2 = 'bob-encrypted-key-share';
// Create a new storage client with the credentials
const storageWithCreds = new Storage({
credentials: credentialConfig,
});
// Create a new storage client without the credentials
const storage = new Storage();
const downloadIntoMemory = async (keyFile) => {
// Downloads the file into a buffer in memory.
const contents = await storageWithCreds.bucket(bucketName).file(keyFile).download();
return contents;
};
const provider = new providers.JsonRpcProvider(`http://${process.env.NODE_URL}:80`);
export const signTransaction = async (unsignedTransaction) => {
/* Check if Alice and Bob have both approved the transaction
For this example, we're checking if their encrypted keys are available. */
const encryptedKey1 = await downloadIntoMemory(encryptedKeyFile1).catch(console.error);
const encryptedKey2 = await downloadIntoMemory(encryptedKeyFile2).catch(console.error);
// For each key share, make a call to KMS to decrypt the key
const privateKeyshare1 = await decryptSymmetric(encryptedKey1[0]);
const privateKeyshare2 = await decryptSymmetric(encryptedKey2[0]);
/* Perform the MPC calculations
In this example, we're combining the private key shares
Alternatively, you could import your mpc calculations here */
const wallet = new Wallet(privateKeyshare1 + privateKeyshare2);
// Sign the transaction
const signedTransaction = await wallet.signTransaction(unsignedTransaction);
return signedTransaction;
};
export const submitTransaction = async (signedTransaction) => {
// This can now be sent to Ganache
const hash = await provider.sendTransaction(signedTransaction);
return hash;
};
export const uploadFromMemory = async (contents) => {
// Upload the results to the bucket without service account impersonation
await storage.bucket(process.env.RESULTS_BUCKET)
.file('transaction_receipt_' + Date.now())
.save(JSON.stringify(contents));
};
kms-decrypt.js
Create the KMS decryption file and paste the following code into it.
import {KeyManagementServiceClient} from '@google-cloud/kms';
import {credentialConfig} from './credential-config.js';
import crc32c from 'fast-crc32c';
const projectId = process.env.MPC_PROJECT_ID;
const locationId = 'global';
const keyRingId = 'mpc-keys';
const keyId = 'mpc-key';
// Instantiates a client
const client = new KeyManagementServiceClient({
credentials: credentialConfig,
});
// Build the key name
const keyName = client.cryptoKeyPath(projectId, locationId, keyRingId, keyId);
export const decryptSymmetric = async (ciphertext) => {
const ciphertextCrc32c = crc32c.calculate(ciphertext);
const [decryptResponse] = await client.decrypt({
name: keyName,
ciphertext,
ciphertextCrc32c: {
value: ciphertextCrc32c,
},
});
// Optional, but recommended: perform integrity verification on decryptResponse.
// For more details on ensuring E2E in-transit integrity to and from Cloud KMS visit:
// https://cloud.google.com/kms/docs/data-integrity-guidelines
if (
crc32c.calculate(decryptResponse.plaintext) !==
Number(decryptResponse.plaintextCrc32c.value)
) {
throw new Error('Decrypt: response corrupted in-transit');
}
const plaintext = decryptResponse.plaintext.toString();
return plaintext;
};
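The integrity check above relies on CRC32C, the Castagnoli variant of CRC-32 used by both Cloud KMS and the fast-crc32c package. For intuition, here is a minimal, dependency-free sketch of the same checksum; it is illustrative only, not a replacement for the library:

```javascript
// Bitwise CRC-32C (Castagnoli, reflected polynomial 0x82F63B78),
// producing the same values as the fast-crc32c package.
function crc32c(buf) {
  let crc = 0xFFFFFFFF; // initial value
  for (const byte of buf) {
    crc ^= byte;
    for (let i = 0; i < 8; i++) {
      // Shift right; XOR in the polynomial whenever the low bit was set.
      crc = (crc >>> 1) ^ (0x82F63B78 & -(crc & 1));
    }
  }
  return (crc ^ 0xFFFFFFFF) >>> 0; // final XOR, as an unsigned 32-bit int
}

// Standard CRC-32C "check value" for the ASCII string "123456789".
console.log(crc32c(Buffer.from('123456789')).toString(16)); // e3069283
```

Comparing this checksum of the decrypted plaintext against the plaintextCrc32c that Cloud KMS returns is what detects in-transit corruption in decryptSymmetric.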
credential-config.js
Create the credential-config.js file. This stores our workload identity pool paths and details for the service account impersonation.
export const credentialConfig = {
type: 'external_account',
audience: `//iam.googleapis.com/projects/${process.env.MPC_PROJECT_NUMBER}/locations/global/workloadIdentityPools/trusted-workload-pool/providers/attestation-verifier`,
subject_token_type: 'urn:ietf:params:oauth:token-type:jwt',
token_url: 'https://sts.googleapis.com/v1/token',
credential_source: {
file: '/run/container_launcher/attestation_verifier_claims_token',
},
service_account_impersonation_url: `https://iamcredentials.googleapis.com/v1/projects/-/serviceAccounts/trusted-mpc-account@${process.env.MPC_PROJECT_ID}.iam.gserviceaccount.com:generateAccessToken`,
};
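The audience and impersonation URL are the only project-specific fields in this file; note that the audience uses the project number while the impersonation URL uses the project ID. As a sanity check, here is a small sketch that assembles them the same way (the helper name is ours, not part of any library):

```javascript
// Hypothetical helper mirroring how credential-config.js builds its
// project-specific fields from MPC_PROJECT_NUMBER and MPC_PROJECT_ID.
function buildCredentialPaths(projectNumber, projectId) {
  return {
    // WIP provider path: project *number*, then pool and provider names.
    audience: `//iam.googleapis.com/projects/${projectNumber}/locations/global/workloadIdentityPools/trusted-workload-pool/providers/attestation-verifier`,
    // Impersonation URL targets trusted-mpc-account in the project (*ID*).
    impersonationUrl: `https://iamcredentials.googleapis.com/v1/projects/-/serviceAccounts/trusted-mpc-account@${projectId}.iam.gserviceaccount.com:generateAccessToken`,
  };
}

// Example values; your own project number and ID will differ.
const paths = buildCredentialPaths('123456789012', 'my-mpc-project');
console.log(paths.audience);
```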
Dockerfile
Finally, we'll create our Dockerfile.
# pull official base image
FROM node:16.18.0
ENV NODE_ENV=production
WORKDIR /app
COPY ["package.json", "package-lock.json*", "./"]
RUN npm install --production
COPY . .
LABEL "tee.launch_policy.allow_cmd_override"="true"
LABEL "tee.launch_policy.allow_env_override"="NODE_URL,RESULTS_BUCKET,KEY_BUCKET,MPC_PROJECT_NUMBER,MPC_PROJECT_ID"
CMD [ "node", "index.js" ]
Once all the files are created, your mpc-ethereum-demo folder should contain package.json, index.js, mpc.js, kms-decrypt.js, credential-config.js, and the Dockerfile.
Create the repository
Click "Open Terminal" to re-open the Cloud Shell. Then create the Artifact Registry Docker repository.
gcloud artifacts repositories create mpc-workloads \
  --repository-format=docker --location=us
Build and publish the Docker container.
gcloud auth configure-docker us-docker.pkg.dev
docker build -t us-docker.pkg.dev/$MPC_PROJECT_ID/mpc-workloads/initial-workload-container:latest mpc-ethereum-demo
docker push us-docker.pkg.dev/$MPC_PROJECT_ID/mpc-workloads/initial-workload-container:latest
You might need to hit Y
to confirm the config file.
- Grant the Artifact Registry Reader (roles/artifactregistry.reader) role to the service account that's going to run the workload, so it can read from the repository.
gcloud artifacts repositories add-iam-policy-binding mpc-workloads \
  --location=us \
  --member=serviceAccount:run-confidential-vm@$MPC_PROJECT_ID.iam.gserviceaccount.com \
  --role=roles/artifactregistry.reader
- Grant the Confidential Computing Workload User (roles/confidentialcomputing.workloadUser) role to the service account.
gcloud projects add-iam-policy-binding $MPC_PROJECT_ID \
  --member=serviceAccount:run-confidential-vm@$MPC_PROJECT_ID.iam.gserviceaccount.com \
  --role=roles/confidentialcomputing.workloadUser
7. Create the MPC Operator Confidential Space Instance
Create the Confidential VM instance. The following environment variables are passed to the image:
- NODE_URL: the URL of the Ethereum node that will process the signed transaction.
- RESULTS_BUCKET: the bucket that stores the MPC transaction result.
- KEY_BUCKET: the bucket that stores the MPC encrypted keys.
- MPC_PROJECT_NUMBER: the project number, used for the credential config file.
- MPC_PROJECT_ID: the project ID, used for the credential config file.
gcloud compute instances create mpc-cvm --confidential-compute \
--shielded-secure-boot \
--maintenance-policy=TERMINATE --scopes=cloud-platform --zone=us-central1-a \
--image-project=confidential-space-images \
--image-family=confidential-space \
--service-account=run-confidential-vm@$MPC_PROJECT_ID.iam.gserviceaccount.com \
--metadata ^~^tee-image-reference=us-docker.pkg.dev/$MPC_PROJECT_ID/mpc-workloads/initial-workload-container:latest~tee-restart-policy=Never~tee-env-NODE_URL=$(gcloud compute instances describe mpc-lab-ethereum-node --format='get(networkInterfaces[0].networkIP)' --zone=us-central1-a)~tee-env-RESULTS_BUCKET=$MPC_PROJECT_ID-mpc-results-storage~tee-env-KEY_BUCKET=$MPC_PROJECT_ID-mpc-encrypted-keys~tee-env-MPC_PROJECT_ID=$MPC_PROJECT_ID~tee-env-MPC_PROJECT_NUMBER=$(gcloud projects describe $MPC_PROJECT_ID --format="value(projectNumber)")
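The ^~^ prefix on the --metadata flag is gcloud's alternate-delimiter escaping (see gcloud topic escaping): it declares ~ as the separator for the key=value list that follows, so values containing commas or other special characters survive intact. A small sketch of that splitting logic, for intuition (our own helper, not gcloud code):

```javascript
// Mimics gcloud's "^DELIM^" escaping: a leading ^~^ declares ~ as the
// separator for the key=value list that follows it.
function parseMetadata(raw) {
  const match = raw.match(/^\^(.+?)\^/);
  const delim = match ? match[1] : ','; // default separator is a comma
  const body = match ? raw.slice(match[0].length) : raw;
  const result = {};
  for (const pair of body.split(delim)) {
    const eq = pair.indexOf('=');
    result[pair.slice(0, eq)] = pair.slice(eq + 1);
  }
  return result;
}

const parsed = parseMetadata(
    '^~^tee-image-reference=us-docker.pkg.dev/p/mpc-workloads/initial-workload-container:latest~tee-restart-policy=Never');
console.log(parsed['tee-restart-policy']); // Never
```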
Check the Cloud Storage Results
You can view the transaction receipt in Cloud Storage. It might take a few minutes for Confidential Space to boot and for results to appear. You'll know the container is done when the VM is in the stopped state.
- Go to the Cloud Storage Browser page.
- Click the $MPC_PROJECT_ID-mpc-results-storage bucket.
- Click the transaction_receipt file.
- Click Download to download and view the transaction response.
Check the Ganache Blockchain Transaction
You can also view the transaction in the blockchain log.
- Go to the Cloud Compute Engine page.
- Click on the mpc-lab-ethereum-node VM.
- Click SSH to open the SSH-in-browser window.
- In the SSH window, enter sudo docker ps to see the running Ganache container.
- Find the container ID for trufflesuite/ganache:v7.7.3.
- Enter sudo docker logs CONTAINER_ID, replacing CONTAINER_ID with the ID for trufflesuite/ganache:v7.7.3.
- View the logs for Ganache and confirm that there is a transaction listed in the logs.
8. Congratulations!
You created a Confidential Space VM and signed a blockchain transaction using multi-party computation!
Clean up
If you are done exploring, please consider deleting your project.
- Go to the Cloud Platform Console
- Select the project you want to shut down, then click "Delete" at the top. This schedules the project for deletion.