1. Introduction
In this codelab you will learn how to use AlloyDB AI by combining vector search with Vertex AI embeddings.
Prerequisites
- A basic understanding of the Google Cloud Console
- Basic command-line skills and familiarity with Google Cloud Shell
What you'll learn
- How to deploy an AlloyDB cluster and primary instance
- How to connect to AlloyDB from a Google Compute Engine VM
- How to create a database and enable AlloyDB AI
- How to load data into the database
- How to use the Vertex AI embedding model in AlloyDB
- How to enrich the results using a Vertex AI generative model
- How to improve performance using a vector index
What you'll need
- A Google Cloud Account and Google Cloud Project
- A web browser such as Chrome
2. Setup and Requirements
Self-paced environment setup
- Sign in to the Google Cloud Console and create a new project or reuse an existing one. If you don't already have a Gmail or Google Workspace account, you must create one.
- The Project name is the display name for this project's participants. It is a character string not used by Google APIs. You can always update it.
- The Project ID is unique across all Google Cloud projects and is immutable (cannot be changed after it has been set). The Cloud Console auto-generates a unique string; usually you don't care what it is. In most codelabs, you'll need to reference your Project ID (typically identified as PROJECT_ID). If you don't like the generated ID, you might generate another random one. Alternatively, you can try your own and see if it's available. It can't be changed after this step and remains for the duration of the project.
- For your information, there is a third value, a Project Number, which some APIs use. Learn more about all three of these values in the documentation.
- Next, you'll need to enable billing in the Cloud Console to use Cloud resources/APIs. Running through this codelab won't cost much, if anything at all. To shut down resources to avoid incurring billing beyond this tutorial, you can delete the resources you created or delete the project. New Google Cloud users are eligible for the $300 USD Free Trial program.
Start Cloud Shell
While Google Cloud can be operated remotely from your laptop, in this codelab you will be using Google Cloud Shell, a command line environment running in the Cloud.
From the Google Cloud Console, click the Cloud Shell icon on the top right toolbar:
It should only take a few moments to provision and connect to the environment. When it is finished, you should see something like this:
This virtual machine is loaded with all the development tools you'll need. It offers a persistent 5GB home directory, and runs on Google Cloud, greatly enhancing network performance and authentication. All of your work in this codelab can be done within a browser. You do not need to install anything.
3. Before you begin
Enable APIs
Inside Cloud Shell, make sure that your project ID is set up:
gcloud config set project [YOUR-PROJECT-ID]
Set environment variable PROJECT_ID:
PROJECT_ID=$(gcloud config get-value project)
Enable all necessary services:
gcloud services enable alloydb.googleapis.com \
compute.googleapis.com \
cloudresourcemanager.googleapis.com \
servicenetworking.googleapis.com \
aiplatform.googleapis.com
Expected output:
student@cloudshell:~ (test-project-001-402417)$ gcloud config set project test-project-001-402417
Updated property [core/project].
student@cloudshell:~ (test-project-001-402417)$ PROJECT_ID=$(gcloud config get-value project)
Your active configuration is: [cloudshell-14650]
student@cloudshell:~ (test-project-001-402417)$ gcloud services enable alloydb.googleapis.com \
compute.googleapis.com \
cloudresourcemanager.googleapis.com \
servicenetworking.googleapis.com \
aiplatform.googleapis.com
Operation "operations/acat.p2-4470404856-1f44ebd8-894e-4356-bea7-b84165a57442" finished successfully.
Configure your default region to use the Vertex AI embedding models. Read more about the available locations for Vertex AI. In this example we are using the us-central1 region.
gcloud config set compute/region us-central1
4. Deploy AlloyDB
Before creating an AlloyDB cluster, we need an available private IP range in our VPC to be used by the future AlloyDB instance. If one doesn't exist, we need to create it and assign it for use by internal Google services; after that we will be able to create the cluster and instance.
Create private IP range
We need to configure Private Service Access in our VPC for AlloyDB. The assumption here is that the project has the "default" VPC network and that it will be used for all actions.
Create the private IP range:
gcloud compute addresses create psa-range \
--global \
--purpose=VPC_PEERING \
--prefix-length=24 \
--description="VPC private service access" \
--network=default
Create the private connection using the allocated IP range:
gcloud services vpc-peerings connect \
--service=servicenetworking.googleapis.com \
--ranges=psa-range \
--network=default
Expected console output:
student@cloudshell:~ (test-project-402417)$ gcloud compute addresses create psa-range \
--global \
--purpose=VPC_PEERING \
--prefix-length=24 \
--description="VPC private service access" \
--network=default
Created [https://www.googleapis.com/compute/v1/projects/test-project-402417/global/addresses/psa-range].
student@cloudshell:~ (test-project-402417)$ gcloud services vpc-peerings connect \
--service=servicenetworking.googleapis.com \
--ranges=psa-range \
--network=default
Operation "operations/pssn.p24-4470404856-595e209f-19b7-4669-8a71-cbd45de8ba66" finished successfully.
student@cloudshell:~ (test-project-402417)$
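Optionally, you can confirm that the peering to the service networking infrastructure is in place before moving on. This is just a sanity check on the default network:
gcloud services vpc-peerings list --network=default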
Create AlloyDB Cluster
Create an AlloyDB cluster in the us-central1 region.
Define a password for the postgres user. You can define your own password or use a random function to generate one:
export PGPASSWORD=`openssl rand -hex 12`
Expected console output:
student@cloudshell:~ (test-project-402417)$ export PGPASSWORD=`openssl rand -hex 12`
Note the PostgreSQL password for future use:
echo $PGPASSWORD
Expected console output:
student@cloudshell:~ (test-project-402417)$ echo $PGPASSWORD
bbefbfde7601985b0dee5723
Define the region and AlloyDB cluster name. We are going to use the us-central1 region and alloydb-aip-01 as the cluster name:
export REGION=us-central1
export ADBCLUSTER=alloydb-aip-01
Run the command to create the cluster:
gcloud alloydb clusters create $ADBCLUSTER \
--password=$PGPASSWORD \
--network=default \
--region=$REGION
Expected console output:
export REGION=us-central1
export ADBCLUSTER=alloydb-aip-01
gcloud alloydb clusters create $ADBCLUSTER \
--password=$PGPASSWORD \
--network=default \
--region=$REGION
Operation ID: operation-1697655441138-6080235852277-9e7f04f5-2012fce4
Creating cluster...done.
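Cluster creation can take a few minutes. Optionally, you can verify the cluster state before proceeding; a quick check:
gcloud alloydb clusters list --region=$REGION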
Create AlloyDB Primary Instance
Create an AlloyDB primary instance for our cluster in the same Cloud Shell session. If you are disconnected, you will need to define the region and cluster name environment variables again.
gcloud alloydb instances create $ADBCLUSTER-pr \
--instance-type=PRIMARY \
--cpu-count=2 \
--region=$REGION \
--cluster=$ADBCLUSTER
Expected console output:
student@cloudshell:~ (test-project-402417)$ gcloud alloydb instances create $ADBCLUSTER-pr \
--instance-type=PRIMARY \
--cpu-count=2 \
--region=$REGION \
--cluster=$ADBCLUSTER
Operation ID: operation-1697659203545-6080315c6e8ee-391805db-25852721
Creating instance...done.
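Optionally, verify that the primary instance is ready and note its private IP address; another quick sanity check:
gcloud alloydb instances list --cluster=$ADBCLUSTER --region=$REGION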
5. Connect to AlloyDB
AlloyDB is deployed using a private-only connection, so we need a VM with the PostgreSQL client installed to work with the database.
Deploy GCE VM
Create a GCE VM in the same region and VPC as the AlloyDB cluster.
In Cloud Shell execute:
export ZONE=us-central1-a
gcloud compute instances create instance-1 \
--zone=$ZONE \
--create-disk=auto-delete=yes,boot=yes,image=projects/debian-cloud/global/images/$(gcloud compute images list --filter="family=debian-12 AND family!=debian-12-arm64" --format="value(name)") \
--scopes=https://www.googleapis.com/auth/cloud-platform
Expected console output:
student@cloudshell:~ (test-project-402417)$ export ZONE=us-central1-a
student@cloudshell:~ (test-project-402417)$ gcloud compute instances create instance-1 \
--zone=$ZONE \
--create-disk=auto-delete=yes,boot=yes,image=projects/debian-cloud/global/images/$(gcloud compute images list --filter="family=debian-12 AND family!=debian-12-arm64" --format="value(name)") \
--scopes=https://www.googleapis.com/auth/cloud-platform
Created [https://www.googleapis.com/compute/v1/projects/test-project-402417/zones/us-central1-a/instances/instance-1].
NAME: instance-1
ZONE: us-central1-a
MACHINE_TYPE: n1-standard-1
PREEMPTIBLE:
INTERNAL_IP: 10.128.0.2
EXTERNAL_IP: 34.71.192.233
STATUS: RUNNING
Install Postgres Client
Install the PostgreSQL client software on the deployed VM.
Connect to the VM:
gcloud compute ssh instance-1 --zone=us-central1-a
Expected console output:
student@cloudshell:~ (test-project-402417)$ gcloud compute ssh instance-1 --zone=us-central1-a
Updating project ssh metadata...working..Updated [https://www.googleapis.com/compute/v1/projects/test-project-402417].
Updating project ssh metadata...done.
Waiting for SSH key to propagate.
Warning: Permanently added 'compute.5110295539541121102' (ECDSA) to the list of known hosts.
Linux instance-1.us-central1-a.c.gleb-test-short-001-418811.internal 6.1.0-18-cloud-amd64 #1 SMP PREEMPT_DYNAMIC Debian 6.1.76-1 (2024-02-01) x86_64

The programs included with the Debian GNU/Linux system are free software;
the exact distribution terms for each program are described in the
individual files in /usr/share/doc/*/copyright.

Debian GNU/Linux comes with ABSOLUTELY NO WARRANTY, to the extent
permitted by applicable law.
student@instance-1:~$
Install the software by running the command inside the VM:
sudo apt-get update
sudo apt-get install --yes postgresql-client
Expected console output:
student@instance-1:~$ sudo apt-get update
sudo apt-get install --yes postgresql-client
Get:1 https://packages.cloud.google.com/apt google-compute-engine-bullseye-stable InRelease [5146 B]
Get:2 https://packages.cloud.google.com/apt cloud-sdk-bullseye InRelease [6406 B]
Hit:3 https://deb.debian.org/debian bullseye InRelease
Get:4 https://deb.debian.org/debian-security bullseye-security InRelease [48.4 kB]
Get:5 https://packages.cloud.google.com/apt google-compute-engine-bullseye-stable/main amd64 Packages [1930 B]
Get:6 https://deb.debian.org/debian bullseye-updates InRelease [44.1 kB]
Get:7 https://deb.debian.org/debian bullseye-backports InRelease [49.0 kB]
...redacted...
update-alternatives: using /usr/share/postgresql/13/man/man1/psql.1.gz to provide /usr/share/man/man1/psql.1.gz (psql.1.gz) in auto mode
Setting up postgresql-client (13+225) ...
Processing triggers for man-db (2.9.4-2) ...
Processing triggers for libc-bin (2.31-13+deb11u7) ...
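As a quick check that the installation succeeded, you can print the client version:
psql --version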
Connect to the Instance
Connect to the primary instance from the VM using psql.
In the same Cloud Shell tab with the open SSH session to your instance-1 VM, use the noted AlloyDB password (PGPASSWORD) value and the AlloyDB cluster ID to connect to AlloyDB from the GCE VM:
export PGPASSWORD=<Noted password>
export PROJECT_ID=$(gcloud config get-value project)
export REGION=us-central1
export ADBCLUSTER=alloydb-aip-01
export INSTANCE_IP=$(gcloud alloydb instances describe $ADBCLUSTER-pr --cluster=$ADBCLUSTER --region=$REGION --format="value(ipAddress)")
psql "host=$INSTANCE_IP user=postgres sslmode=require"
Expected console output:
student@instance-1:~$ export PGPASSWORD=CQhOi5OygD4ps6ty
student@instance-1:~$ ADBCLUSTER=alloydb-aip-01
student@instance-1:~$ REGION=us-central1
student@instance-1:~$ INSTANCE_IP=$(gcloud alloydb instances describe $ADBCLUSTER-pr --cluster=$ADBCLUSTER --region=$REGION --format="value(ipAddress)")
student@instance-1:~$ psql "host=$INSTANCE_IP user=postgres sslmode=require"
psql (15.6 (Debian 15.6-0+deb12u1), server 15.5)
SSL connection (protocol: TLSv1.3, cipher: TLS_AES_256_GCM_SHA384, compression: off)
Type "help" for help.

postgres=>
Close the psql session:
exit
6. Prepare Database
We need to create a database, enable Vertex AI integration, create database objects and import the data.
Grant Necessary Permissions to AlloyDB
Add Vertex AI permissions to the AlloyDB service agent.
Open another Cloud Shell tab using the "+" sign at the top.
In the new Cloud Shell tab execute:
PROJECT_ID=$(gcloud config get-value project)
gcloud projects add-iam-policy-binding $PROJECT_ID \
--member="serviceAccount:service-$(gcloud projects describe $PROJECT_ID --format="value(projectNumber)")@gcp-sa-alloydb.iam.gserviceaccount.com" \
--role="roles/aiplatform.user"
Expected console output:
student@cloudshell:~ (test-project-001-402417)$ PROJECT_ID=$(gcloud config get-value project)
Your active configuration is: [cloudshell-11039]
student@cloudshell:~ (test-project-001-402417)$ gcloud projects add-iam-policy-binding $PROJECT_ID \
--member="serviceAccount:service-$(gcloud projects describe $PROJECT_ID --format="value(projectNumber)")@gcp-sa-alloydb.iam.gserviceaccount.com" \
--role="roles/aiplatform.user"
Updated IAM policy for project [test-project-001-402417].
bindings:
- members:
  - serviceAccount:service-4470404856@gcp-sa-alloydb.iam.gserviceaccount.com
  role: roles/aiplatform.user
- members:
...
etag: BwYIEbe_Z3U=
version: 1
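If you want to double-check the grant, you can list the roles bound to the AlloyDB service agent; an optional check using standard gcloud filtering:
gcloud projects get-iam-policy $PROJECT_ID \
--flatten="bindings[].members" \
--filter="bindings.members:gcp-sa-alloydb" \
--format="value(bindings.role)"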
Close the tab by executing the command "exit" in the tab:
exit
Create Database
Create the quickstart_db database.
In the GCE VM session execute:
psql "host=$INSTANCE_IP user=postgres" -c "CREATE DATABASE quickstart_db"
Expected console output:
student@instance-1:~$ psql "host=$INSTANCE_IP user=postgres" -c "CREATE DATABASE quickstart_db" CREATE DATABASE student@instance-1:~$
Enable Vertex AI Integration
Enable the Vertex AI integration and pgvector extensions in the database.
In the GCE VM execute:
psql "host=$INSTANCE_IP user=postgres dbname=quickstart_db" -c "CREATE EXTENSION IF NOT EXISTS google_ml_integration CASCADE"
psql "host=$INSTANCE_IP user=postgres dbname=quickstart_db" -c "CREATE EXTENSION IF NOT EXISTS vector"
Expected console output:
student@instance-1:~$ psql "host=$INSTANCE_IP user=postgres dbname=quickstart_db" -c "CREATE EXTENSION IF NOT EXISTS google_ml_integration CASCADE" psql "host=$INSTANCE_IP user=postgres dbname=quickstart_db" -c "CREATE EXTENSION IF NOT EXISTS vector" CREATE EXTENSION CREATE EXTENSION student@instance-1:~$
Import Data
Download the prepared data and import it into the new database.
In the GCE VM execute:
gsutil cat gs://cloud-training/gcc/gcc-tech-004/cymbal_demo_schema.sql |psql "host=$INSTANCE_IP user=postgres dbname=quickstart_db"
gsutil cat gs://cloud-training/gcc/gcc-tech-004/cymbal_products.csv |psql "host=$INSTANCE_IP user=postgres dbname=quickstart_db" -c "\copy cymbal_products from stdin csv header"
gsutil cat gs://cloud-training/gcc/gcc-tech-004/cymbal_inventory.csv |psql "host=$INSTANCE_IP user=postgres dbname=quickstart_db" -c "\copy cymbal_inventory from stdin csv header"
gsutil cat gs://cloud-training/gcc/gcc-tech-004/cymbal_stores.csv |psql "host=$INSTANCE_IP user=postgres dbname=quickstart_db" -c "\copy cymbal_stores from stdin csv header"
Expected console output:
student@instance-1:~$ gsutil cat gs://cloud-training/gcc/gcc-tech-004/cymbal_demo_schema.sql |psql "host=$INSTANCE_IP user=postgres dbname=quickstart_db"
SET
SET
SET
SET
SET
 set_config
------------

(1 row)

SET
SET
SET
SET
SET
SET
CREATE TABLE
ALTER TABLE
CREATE TABLE
ALTER TABLE
CREATE TABLE
ALTER TABLE
CREATE TABLE
ALTER TABLE
CREATE SEQUENCE
ALTER TABLE
ALTER SEQUENCE
ALTER TABLE
ALTER TABLE
ALTER TABLE
student@instance-1:~$ gsutil cat gs://cloud-training/gcc/gcc-tech-004/cymbal_products.csv |psql "host=$INSTANCE_IP user=postgres dbname=quickstart_db" -c "\copy cymbal_products from stdin csv header"
COPY 941
student@instance-1:~$ gsutil cat gs://cloud-training/gcc/gcc-tech-004/cymbal_inventory.csv |psql "host=$INSTANCE_IP user=postgres dbname=quickstart_db" -c "\copy cymbal_inventory from stdin csv header"
COPY 263861
student@instance-1:~$ gsutil cat gs://cloud-training/gcc/gcc-tech-004/cymbal_stores.csv |psql "host=$INSTANCE_IP user=postgres dbname=quickstart_db" -c "\copy cymbal_stores from stdin csv header"
COPY 4654
student@instance-1:~$
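Optionally, verify the import by comparing the row counts with the COPY output above (941 products, 263861 inventory records, and 4654 stores); a simple check:
psql "host=$INSTANCE_IP user=postgres dbname=quickstart_db" -c "SELECT 'cymbal_products' AS table_name, count(*) FROM cymbal_products UNION ALL SELECT 'cymbal_inventory', count(*) FROM cymbal_inventory UNION ALL SELECT 'cymbal_stores', count(*) FROM cymbal_stores"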
7. Calculate embeddings
After importing the data, we have our product data in the cymbal_products table, the inventory showing the number of available products in each store in the cymbal_inventory table, and the list of stores in the cymbal_stores table. We need to calculate vector data based on the descriptions of our products, and we are going to use the embedding function for that. The function uses the Vertex AI integration to calculate vector data based on our product descriptions and adds it to the table. You can read more about the technology in the documentation.
Create embedding column
Connect to the database using psql and create a generated column with the vector data in the cymbal_products table using the embedding function. The embedding function returns vector data from Vertex AI based on the data supplied from the product_description column.
psql "host=$INSTANCE_IP user=postgres dbname=quickstart_db"
In the psql session after connecting to the database execute:
ALTER TABLE cymbal_products ADD COLUMN embedding vector(768) GENERATED ALWAYS AS (embedding('text-embedding-004',product_description)) STORED;
The command will create the generated column and populate it with vector data.
Expected console output:
student@instance-1:~$ psql "host=$INSTANCE_IP user=postgres dbname=quickstart_db" psql (13.11 (Debian 13.11-0+deb11u1), server 14.7) WARNING: psql major version 13, server major version 14. Some psql features might not work. SSL connection (protocol: TLSv1.3, cipher: TLS_AES_256_GCM_SHA384, bits: 256, compression: off) Type "help" for help. quickstart_db=> ALTER TABLE cymbal_products ADD COLUMN embedding vector(768) GENERATED ALWAYS AS (embedding('text-embedding-004',product_description)) STORED; ALTER TABLE quickstart_db=>
8. Run Similarity Search
We can now run a similarity search based on the vector values calculated for the descriptions and the vector value we get for our request.
The SQL query can be executed from the same psql command-line interface or, as an alternative, from AlloyDB Studio. Any multirow and complex output might look better in AlloyDB Studio.
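The <=> operator used in the queries below is the pgvector cosine distance operator: it returns 0 for vectors pointing in the same direction and approaches 2 for opposite vectors. A minimal illustration you can run in any session where the vector extension is enabled:
SELECT '[1,2]'::vector <=> '[2,4]'::vector AS same_direction,
'[1,0]'::vector <=> '[0,1]'::vector AS orthogonal;
The first expression returns 0 (identical direction, up to floating-point rounding) and the second returns 1 (no similarity).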
Connect to AlloyDB Studio
In the following chapters, all the SQL commands requiring a connection to the database can alternatively be executed in AlloyDB Studio. To run the commands, open the web console page for your AlloyDB cluster by clicking on the primary instance.
Then click on AlloyDB Studio on the left:
Choose the quickstart_db database, user postgres and provide the password noted when we created the cluster. Then click on the "Authenticate" button.
It will open the AlloyDB Studio interface. To run commands in the database, click on the "Editor 1" tab on the right.
It opens an interface where you can run SQL commands.
If you prefer to use psql on the command line, follow the alternative route and connect to the database from your VM SSH session as described in the previous chapters.
Run Similarity Search from psql
If your database session was disconnected, connect to the database again using psql or AlloyDB Studio.
Connect to the database:
psql "host=$INSTANCE_IP user=postgres dbname=quickstart_db"
Run a query to get a list of the available products most closely related to a client's request. The request we are going to pass to Vertex AI to get the vector value is "What kind of fruit trees grow well here?"
Here is the query you can run to choose the first 10 items most suitable for our request:
SELECT
cp.product_name,
left(cp.product_description,80) as description,
cp.sale_price,
cs.zip_code,
(cp.embedding <=> embedding('text-embedding-004','What kind of fruit trees grow well here?')::vector) as distance
FROM
cymbal_products cp
JOIN cymbal_inventory ci on
ci.uniq_id=cp.uniq_id
JOIN cymbal_stores cs on
cs.store_id=ci.store_id
AND ci.inventory>0
AND cs.store_id = 1583
ORDER BY
distance ASC
LIMIT 10;
And here is the expected output:
quickstart_db=> SELECT cp.product_name, left(cp.product_description,80) as description, cp.sale_price, cs.zip_code, (cp.embedding <=> embedding('text-embedding-004','What kind of fruit trees grow well here?')::vector) as distance FROM cymbal_products cp JOIN cymbal_inventory ci on ci.uniq_id=cp.uniq_id JOIN cymbal_stores cs on cs.store_id=ci.store_id AND ci.inventory>0 AND cs.store_id = 1583 ORDER BY distance ASC LIMIT 10;
      product_name       |                                    description                                    | sale_price | zip_code |      distance
-------------------------+-----------------------------------------------------------------------------------+------------+----------+---------------------
 Cherry Tree             | This is a beautiful cherry tree that will produce delicious cherries. It is an d | 75.00      | 93230    | 0.43922018972266397
 Meyer Lemon Tree        | Meyer Lemon trees are California's favorite lemon tree! Grow your own lemons by  | 34         | 93230    | 0.4685112926118228
 Toyon                   | This is a beautiful toyon tree that can grow to be over 20 feet tall. It is an e | 10.00      | 93230    | 0.4835677149651668
 California Lilac        | This is a beautiful lilac tree that can grow to be over 10 feet tall. It is an d | 5.00       | 93230    | 0.4947204525907498
 California Peppertree   | This is a beautiful peppertree that can grow to be over 30 feet tall. It is an e | 25.00      | 93230    | 0.5054166905547247
 California Black Walnut | This is a beautiful walnut tree that can grow to be over 80 feet tall. It is a d | 100.00     | 93230    | 0.5084219510932597
 California Sycamore     | This is a beautiful sycamore tree that can grow to be over 100 feet tall. It is  | 300.00     | 93230    | 0.5140519790508755
 Coast Live Oak          | This is a beautiful oak tree that can grow to be over 100 feet tall. It is an ev | 500.00     | 93230    | 0.5143126438081371
 Fremont Cottonwood      | This is a beautiful cottonwood tree that can grow to be over 100 feet tall. It i | 200.00     | 93230    | 0.5174774727252058
 Madrone                 | This is a beautiful madrona tree that can grow to be over 80 feet tall. It is an | 50.00      | 93230    | 0.5227400803389093
9. Improve Response
You can improve the response for a client application by using the query results as part of a prompt to the Vertex AI generative foundation language model and preparing a meaningful output.
To achieve that, we generate JSON with the results of the vector search, then use that JSON as an addition to a prompt for a text LLM in Vertex AI to create a meaningful output. First we generate the JSON, then we test it in Vertex AI Studio, and in the last step we incorporate it into a SQL statement that can be used in an application.
Generate output in JSON format
Modify the query to generate the output in JSON format and return only one row to pass to Vertex AI.
Here is the example of the query:
WITH trees as (
SELECT
cp.product_name,
left(cp.product_description,80) as description,
cp.sale_price,
cs.zip_code,
cp.uniq_id as product_id
FROM
cymbal_products cp
JOIN cymbal_inventory ci on
ci.uniq_id=cp.uniq_id
JOIN cymbal_stores cs on
cs.store_id=ci.store_id
AND ci.inventory>0
AND cs.store_id = 1583
ORDER BY
(cp.embedding <=> embedding('text-embedding-004','What kind of fruit trees grow well here?')::vector) ASC
LIMIT 1)
SELECT json_agg(trees) FROM trees;
And here is the expected JSON in the output:
[{"product_name":"Cherry Tree","description":"This is a beautiful cherry tree that will produce delicious cherries. It is an d","sale_price":75.00,"zip_code":93230,"product_id":"d536e9e823296a2eba198e52dd23e712"}]
Run the prompt in Vertex AI Studio
We can supply the generated JSON as a part of the prompt to a generative AI text model in Vertex AI Studio.
Open the Vertex AI Studio in the cloud console.
It may ask you to enable additional APIs but you can ignore the request. We don't need any additional APIs to finish our lab.
Here is the prompt we are going to use:
You are a friendly advisor helping to find a product based on the customer's needs.
Based on the client request we have loaded a list of products closely related to search.
The list in JSON format with list of values like {"product_name":"name","description":"some description","sale_price":10,"zip_code": 10234, "product_id": "02056727942aeb714dc9a2313654e1b0"}
Here is the list of products:
[place for our JSON]
The customer asked "What tree is growing the best here?"
You should give information about the product, price and some supplemental information
And here is the result when we run the prompt with our JSON values using the gemini-1.5-flash model:
The answer we got from the model in this example follows. Note that your answer may be different because of model and parameter changes over time:
"I see you're looking for a tree that thrives in your area. Based on your zip code, 93230, the Cherry Tree seems like a great option!
It's described as a beautiful tree that produces delicious cherries. It's currently on sale for $75.00.
While I don't have specific details on its growth rate in your area, I can tell you that cherry trees generally prefer well-drained soil and full sun.
To ensure the best results, I recommend consulting with a local nursery or gardening expert who can provide more tailored advice for your specific location and soil conditions. They can also help you choose the best variety for your needs and offer tips on planting and care."
Run the prompt in psql
We can use the AlloyDB AI integration with Vertex AI to get the same response from a generative model using SQL directly in the database. But to use the gemini-1.5-flash model we need to register it first.
Upgrade the extension to version 1.4.1 (if the current version is lower). Connect to the quickstart_db database from psql as shown before (or use AlloyDB Studio) and execute:
SELECT extversion from pg_extension where extname='google_ml_integration';
If the returned value is less than 1.4.1 then execute:
ALTER EXTENSION google_ml_integration UPDATE TO '1.4.1';
Then we need to set the google_ml_integration.enable_model_support database flag to "on". To do that, you can use the AlloyDB web console interface or run the following gcloud command.
PROJECT_ID=$(gcloud config get-value project)
REGION=us-central1
ADBCLUSTER=alloydb-aip-01
gcloud beta alloydb instances update $ADBCLUSTER-pr \
--database-flags google_ml_integration.enable_model_support=on \
--region=$REGION \
--cluster=$ADBCLUSTER \
--project=$PROJECT_ID \
--update-mode=FORCE_APPLY
The command takes around 3-5 minutes to execute in the background. Then you can verify the new flag in a psql session or in AlloyDB Studio connected to the quickstart_db database.
show google_ml_integration.enable_model_support;
The expected output from the psql session is "on":
postgres=> show google_ml_integration.enable_model_support;
 google_ml_integration.enable_model_support
--------------------------------------------
 on
(1 row)
Then we need to register two models. The first one is the text-embedding-004 model we already used; it needs to be registered now because we enabled the model registration capabilities.
To register the model, run the following code in psql or AlloyDB Studio:
CALL
google_ml.create_model(
model_id => 'text-embedding-004',
model_provider => 'google',
model_qualified_name => 'text-embedding-004',
model_type => 'text_embedding',
model_auth_type => 'alloydb_service_agent_iam',
model_in_transform_fn => 'google_ml.vertexai_text_embedding_input_transform',
model_out_transform_fn => 'google_ml.vertexai_text_embedding_output_transform');
And the next model we need to register is gemini-1.5-flash-002, which will be used to generate the user-friendly output. Note that psql and AlloyDB Studio do not expand shell variables inside SQL, so replace $REGION and $PROJECT_ID in the model_request_url with their actual values before running the statement.
CALL
google_ml.create_model(
model_id => 'gemini-1.5-flash-002',
model_request_url => 'https://$REGION-aiplatform.googleapis.com/v1/projects/$PROJECT_ID/locations/$REGION/publishers/google/models/gemini-1.5-flash-002:streamGenerateContent',
model_provider => 'google',
model_auth_type => 'alloydb_service_agent_iam');
You can always verify the list of registered models by selecting information from the google_ml.model_info_view.
select model_id,model_type from google_ml.model_info_view;
Here is a sample output:
quickstart_db=> select model_id,model_type from google_ml.model_info_view;
        model_id         |   model_type
-------------------------+----------------
 textembedding-gecko     | text_embedding
 textembedding-gecko@001 | text_embedding
 text-embedding-004      | text_embedding
 gemini-1.5-flash-002    | generic
(4 rows)
Now we can use the JSON generated in a subquery, supplying it as a part of the prompt to the generative AI text model using SQL.
In the psql or AlloyDB Studio session, run the following query:
WITH trees AS (
SELECT
cp.product_name,
cp.product_description AS description,
cp.sale_price,
cs.zip_code,
cp.uniq_id AS product_id
FROM
cymbal_products cp
JOIN cymbal_inventory ci ON
ci.uniq_id = cp.uniq_id
JOIN cymbal_stores cs ON
cs.store_id = ci.store_id
AND ci.inventory>0
AND cs.store_id = 1583
ORDER BY
(cp.embedding <=> embedding('text-embedding-004',
'What kind of fruit trees grow well here?')::vector) ASC
LIMIT 1),
prompt AS (
SELECT
'You are a friendly advisor helping to find a product based on the customer''s needs.
Based on the client request we have loaded a list of products closely related to search.
The list in JSON format with list of values like {"product_name":"name","product_description":"some description","sale_price":10}
Here is the list of products:' || json_agg(trees) || 'The customer asked "What kind of fruit trees grow well here?"
You should give information about the product, price and some supplemental information' AS prompt_text
FROM
trees),
response AS (
SELECT
json_array_elements(google_ml.predict_row( model_id =>'gemini-1.5-flash-002',
request_body => json_build_object('contents',
json_build_object('role',
'user',
'parts',
json_build_object('text',
prompt_text)))))->'candidates'->0->'content'->'parts'->0->'text' AS resp
FROM
prompt)
SELECT
string_agg(resp::text,
' ')
FROM
response;
And here is the expected output. Your output might be different depending on the model version and parameters:
"That" "'s a great question! Based on your location (assuming you're" " in zip code 93230), I have a suggestion for a" " fruit tree that should thrive.\n\nWe have the **Cherry Tree** available.\n\n**Product Name:** Cherry Tree\n\n**Description:** This is a beautiful cherry" " tree that will produce delicious cherries. It's a deciduous tree (meaning it loses its leaves in the fall) growing to about 15 feet tall." " The leaves are dark green in summer, turning a beautiful red in the fall. Cherry trees are known for their beauty, shade, and privacy.\n\n**Sale Price:** $75.00\n\n**Important Considerations for Growing" " Cherry Trees:**\n\n* **Climate:** Cherry trees prefer a cool, moist climate, and 93230 falls within a suitable range (USDA zones 4-9). However, it's always a good idea to" " check the specific microclimate of your property (sun exposure, drainage etc.).\n* **Soil:** They do best in sandy soil. If your soil is different, you may need to amend it to improve drainage.\n* **Pollination:** Many cherry varieties require a second, compatible cherry tree for proper pollination" ". Check the specific pollination needs of this variety before purchase if you want a significant cherry yield.\n\nThis cherry tree is a beautiful addition to any yard and will provide you with delicious cherries if you can meet its needs. Would you like to know more about its pollination requirements, or perhaps see if we have any other" " fruit trees suitable for your area?\n" ""
10. Create vector index
Our dataset is quite small, and the response time depends primarily on the interaction with the AI models. But when you have millions of vectors, the vector search part can take a significant portion of the response time and put a high load on the system. To improve that, we can build an index on top of our vectors.
Create ScaNN index
To build the ScaNN index we need to enable one more extension. The alloydb_scann extension provides an interface to work with ANN-type vector indexes using the Google ScaNN algorithm.
CREATE EXTENSION IF NOT EXISTS alloydb_scann;
Expected output:
quickstart_db=> CREATE EXTENSION IF NOT EXISTS alloydb_scann;
CREATE EXTENSION
Time: 27.468 ms
quickstart_db=>
Now we can create the index. In the following example we leave most parameters at their defaults and provide only the number of partitions (num_leaves) for the index. The value 31 is roughly the square root of the 941 rows in cymbal_products, a common starting point for this parameter:
CREATE INDEX cymbal_products_embeddings_scann ON cymbal_products
USING scann (embedding cosine)
WITH (num_leaves=31, max_num_levels = 2);
You can read about tuning index parameters in the documentation.
Expected output:
quickstart_db=> CREATE INDEX cymbal_products_embeddings_scann ON cymbal_products USING scann (embedding cosine) WITH (num_leaves=31, max_num_levels = 2);
CREATE INDEX
quickstart_db=>
Compare Response
Now we can run the vector search query in EXPLAIN mode and verify that the index is used.
EXPLAIN (analyze)
WITH trees as (
SELECT
cp.product_name,
left(cp.product_description,80) as description,
cp.sale_price,
cs.zip_code,
cp.uniq_id as product_id
FROM
cymbal_products cp
JOIN cymbal_inventory ci on
ci.uniq_id=cp.uniq_id
JOIN cymbal_stores cs on
cs.store_id=ci.store_id
AND ci.inventory>0
AND cs.store_id = 1583
ORDER BY
(cp.embedding <=> embedding('text-embedding-004','What kind of fruit trees grow well here?')::vector) ASC
LIMIT 1)
SELECT json_agg(trees) FROM trees;
Expected output:
 Aggregate  (cost=16.59..16.60 rows=1 width=32) (actual time=2.875..2.877 rows=1 loops=1)
   ->  Subquery Scan on trees  (cost=8.42..16.59 rows=1 width=142) (actual time=2.860..2.862 rows=1 loops=1)
         ->  Limit  (cost=8.42..16.58 rows=1 width=158) (actual time=2.855..2.856 rows=1 loops=1)
               ->  Nested Loop  (cost=8.42..6489.19 rows=794 width=158) (actual time=2.854..2.855 rows=1 loops=1)
                     ->  Nested Loop  (cost=8.13..6466.99 rows=794 width=938) (actual time=2.742..2.743 rows=1 loops=1)
                           ->  Index Scan using cymbal_products_embeddings_scann on cymbal_products cp  (cost=7.71..111.99 rows=876 width=934) (actual time=2.724..2.724 rows=1 loops=1)
                                 Order By: (embedding <=> '[0.008864171,0.03693164,-0.024245683,-0.00355923,0.0055611245,0.015985578,...<redacted>...5685,-0.03914233,-0.018452475,0.00826032,-0.07372604]'::vector)
                           ->  Index Scan using walmart_inventory_pkey on cymbal_inventory ci  (cost=0.42..7.26 rows=1 width=37) (actual time=0.015..0.015 rows=1 loops=1)
                                 Index Cond: ((store_id = 1583) AND (uniq_id = (cp.uniq_id)::text))
From the output we can clearly see that the query was using "Index Scan using cymbal_products_embeddings_scann on cymbal_products".
And if we run the query without EXPLAIN:
WITH trees as (
SELECT
cp.product_name,
left(cp.product_description,80) as description,
cp.sale_price,
cs.zip_code,
cp.uniq_id as product_id
FROM
cymbal_products cp
JOIN cymbal_inventory ci on
ci.uniq_id=cp.uniq_id
JOIN cymbal_stores cs on
cs.store_id=ci.store_id
AND ci.inventory>0
AND cs.store_id = 1583
ORDER BY
(cp.embedding <=> embedding('text-embedding-004','What kind of fruit trees grow well here?')::vector) ASC
LIMIT 1)
SELECT json_agg(trees) FROM trees;
Expected output:
[{"product_name":"Meyer Lemon Tree","description":"Meyer Lemon trees are California's favorite lemon tree! Grow your own lemons by ","sale_price":34,"zip_code":93230,"product_id":"02056727942aeb714dc9a2313654e1b0"}]
We can see that the result is slightly different: it returns not the Cherry Tree, which was at the top in our search without the index, but the second choice, the Meyer Lemon Tree. So the index gives us performance while remaining accurate enough to deliver good results.
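ANN indexes trade some recall for speed, and you can usually tune that trade-off at query time. A hedged sketch, assuming the scann.num_leaves_to_search session parameter is available in your alloydb_scann version (check the documentation for your release):
SET scann.num_leaves_to_search = 10;
-- Re-run the similarity search: probing more leaves improves recall at some latency cost.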
You can try the different indexes available for vectors, and find more labs and examples with LangChain integration on the documentation page.
11. Clean up environment
Destroy the AlloyDB instances and cluster when you are done with the lab.
Delete AlloyDB cluster and all instances
The cluster is destroyed with the force option, which also deletes all the instances belonging to the cluster.
In Cloud Shell, define the project and environment variables again if you've been disconnected and all the previous settings are lost:
gcloud config set project <your project id>
export REGION=us-central1
export ADBCLUSTER=alloydb-aip-01
export PROJECT_ID=$(gcloud config get-value project)
Delete the cluster:
gcloud alloydb clusters delete $ADBCLUSTER --region=$REGION --force
Expected console output:
student@cloudshell:~ (test-project-001-402417)$ gcloud alloydb clusters delete $ADBCLUSTER --region=$REGION --force
All of the cluster data will be lost when the cluster is deleted.

Do you want to continue (Y/n)? Y

Operation ID: operation-1697820178429-6082890a0b570-4a72f7e4-4c5df36f
Deleting cluster...done.
Delete AlloyDB Backups
Delete all AlloyDB backups for the cluster:
for i in $(gcloud alloydb backups list --filter="CLUSTER_NAME: projects/$PROJECT_ID/locations/$REGION/clusters/$ADBCLUSTER" --format="value(name)" --sort-by=~createTime) ; do gcloud alloydb backups delete $(basename $i) --region $REGION --quiet; done
Expected console output:
student@cloudshell:~ (test-project-001-402417)$ for i in $(gcloud alloydb backups list --filter="CLUSTER_NAME: projects/$PROJECT_ID/locations/$REGION/clusters/$ADBCLUSTER" --format="value(name)" --sort-by=~createTime) ; do gcloud alloydb backups delete $(basename $i) --region $REGION --quiet; done
Operation ID: operation-1697826266108-60829fb7b5258-7f99dc0b-99f3c35f
Deleting backup...done.
Now we can destroy our VM.
Delete GCE VM
In Cloud Shell execute:
export GCEVM=instance-1
export ZONE=us-central1-a
gcloud compute instances delete $GCEVM \
--zone=$ZONE \
--quiet
Expected console output:
student@cloudshell:~ (test-project-001-402417)$ export GCEVM=instance-1
export ZONE=us-central1-a
gcloud compute instances delete $GCEVM \
--zone=$ZONE \
--quiet
Deleted
12. Congratulations
Congratulations on completing the codelab.
What we've covered
- How to deploy an AlloyDB cluster and primary instance
- How to connect to AlloyDB from a Google Compute Engine VM
- How to create a database and enable AlloyDB AI
- How to load data into the database
- How to use the Vertex AI embedding model in AlloyDB
- How to enrich the results using a Vertex AI generative model
- How to improve performance using a vector index
13. Survey