Build a Patent Search App with AlloyDB, Vector Search & Vertex AI!

1. Overview

Across industries, patent research is a critical tool for understanding the competitive landscape, identifying potential licensing or acquisition opportunities, and avoiding infringement of existing patents.

Patent research is vast and complex. Sifting through countless technical abstracts to find relevant innovations is a daunting task. Traditional keyword-based searches are often inaccurate and time-consuming. Abstracts are lengthy and technical, making it difficult to grasp the core idea quickly. This can lead to researchers missing key patents or wasting time on irrelevant results.

The secret sauce here is Vector Search. Instead of relying on simple keyword matching, vector search transforms text into numerical representations (embeddings). This allows us to search based on the meaning of the query, not just the specific words used. In the world of literature searches, this is a game-changer: imagine finding a patent for a "wearable heart rate monitor" even if the exact phrase isn't used in the document.

Objective

In this codelab, we will make the process of searching for patents faster, more intuitive, and more precise by leveraging AlloyDB, the pgvector extension, and in-place access to Vertex AI models (Gemini 1.5 Pro and the embeddings model) for vector search.

What you'll build

As part of this lab, you will:

  1. Create an AlloyDB instance and load Patents Public Dataset data
  2. Enable pgvector and generative AI model extensions in AlloyDB
  3. Generate embeddings from the patent abstracts
  4. Perform a real-time cosine similarity search on the user's search text
  5. Deploy the solution in serverless Cloud Functions

The following diagram represents the flow of data and steps involved in the implementation.

High level diagram representing the flow of the Patent Search Application with AlloyDB

Requirements

  • A browser, such as Chrome or Firefox
  • A Google Cloud project with billing enabled.

2. Before you begin

Create a project

  1. In the Google Cloud Console, on the project selector page, select or create a Google Cloud project.
  2. Make sure that billing is enabled for your Cloud project. Learn how to check if billing is enabled on a project.
  3. You'll use Cloud Shell, a command-line environment running in Google Cloud that comes preloaded with the gcloud CLI. Click Activate Cloud Shell at the top of the Google Cloud console.


  4. Once connected to Cloud Shell, check that you're already authenticated using the following command:
gcloud auth list
  5. Run the following command in Cloud Shell to confirm that the gcloud command knows about your project:
gcloud config list project
  6. If your project is not set, use the following command to set it:
gcloud config set project <YOUR_PROJECT_ID>
  7. Enable the required APIs. You can use a gcloud command in the Cloud Shell terminal:
gcloud services enable alloydb.googleapis.com \
                       compute.googleapis.com \
                       cloudresourcemanager.googleapis.com \
                       servicenetworking.googleapis.com \
                       run.googleapis.com \
                       cloudbuild.googleapis.com \
                       cloudfunctions.googleapis.com \
                       aiplatform.googleapis.com

Alternatively, you can enable the APIs through the console by searching for each product or using this link.

Refer to the documentation for gcloud commands and usage.

3. Prepare your AlloyDB database

Let's create an AlloyDB cluster, instance and table where the patent dataset will be loaded.

Create AlloyDB objects

Create a cluster and instance with cluster ID "patent-cluster", password "alloydb", PostgreSQL 15 compatibility, region "us-central1", and networking set to "default". Set the instance ID to "patent-instance" and click CREATE CLUSTER. Detailed steps for creating a cluster are in this link: https://cloud.google.com/alloydb/docs/cluster-create.

Create a table

You can create a table using the DDL statement below in the AlloyDB Studio:

CREATE TABLE patents_data (
  id VARCHAR(25),
  type VARCHAR(25),
  number VARCHAR(20),
  country VARCHAR(2),
  date VARCHAR(20),
  abstract VARCHAR(300000),
  title VARCHAR(100000),
  kind VARCHAR(5),
  num_claims BIGINT,
  filename VARCHAR(100),
  withdrawn BIGINT
);

Enable Extensions

For building the Patent Search App, we will use the extensions pgvector and google_ml_integration. The pgvector extension allows you to store and search vector embeddings. The google_ml_integration extension provides functions you use to access Vertex AI prediction endpoints to get predictions in SQL. Enable these extensions by running the following DDLs:

CREATE EXTENSION vector;
CREATE EXTENSION google_ml_integration;

Grant Permission

Run the statement below to grant execute permission on the "embedding" function:

GRANT EXECUTE ON FUNCTION embedding TO postgres;

Grant Vertex AI User ROLE to the AlloyDB service account

From the Google Cloud IAM console, grant the AlloyDB service account (which looks like this: service-<<PROJECT_NUMBER>>@gcp-sa-alloydb.iam.gserviceaccount.com, where PROJECT_NUMBER is your project number) access to the "Vertex AI User" role.

Alternatively, you can grant access using the following gcloud commands:

PROJECT_ID=$(gcloud config get-value project)

gcloud projects add-iam-policy-binding $PROJECT_ID \
  --member="serviceAccount:service-$(gcloud projects describe $PROJECT_ID --format="value(projectNumber)")@gcp-sa-alloydb.iam.gserviceaccount.com" \
  --role="roles/aiplatform.user"

Alter the table to add a Vector column for storing the Embeddings

Run the DDL below to add the abstract_embeddings column to the table we just created. This column will store the vector representation (embeddings) of the abstract text:

ALTER TABLE patents_data ADD column abstract_embeddings vector(768);

4. Load patent data into the database

The Google Patents Public Datasets on BigQuery will be used as our dataset. We will use the AlloyDB Studio to run our queries. The alloydb-pgvector repository includes the insert_into_patents_data.sql script we will run to load the patent data.

  1. In the Google Cloud console, open the AlloyDB page.
  2. Select your newly created cluster and click the instance.
  3. In the AlloyDB Navigation menu, click AlloyDB Studio. Sign in with your credentials.
  4. Open a new tab by clicking the New tab icon on the right.
  5. Copy the insert query statements from the insert_into_patents_data.sql script mentioned above into the editor. You can copy 50-100 insert statements for a quick demo of this use case.
  6. Click Run. The results of your query appear in the Results table.

5. Create Embeddings for patents data

First, let's test the embedding function by running the following sample query:

SELECT embedding('textembedding-gecko@003', 'AlloyDB is a managed, cloud-hosted SQL database service.');

This should return the embeddings vector for the sample text in the query; it looks like an array of floats.

Update the abstract_embeddings Vector field

Run the DML below to update the patent abstracts in the table with the corresponding embeddings:

UPDATE patents_data SET abstract_embeddings = embedding('textembedding-gecko@003', abstract);

6. Perform Vector search

Now that the table, data, and embeddings are all ready, let's perform a real-time vector search on the user search text. You can test this by running the query below:

SELECT id || ' - ' || title AS literature
FROM patents_data
ORDER BY abstract_embeddings <=> embedding('textembedding-gecko@003', 'A new Natural Language Processing related Machine Learning Model')::vector
LIMIT 10;

In this query,

  1. The user search text is: "A new Natural Language Processing related Machine Learning Model".
  2. We convert it to embeddings in the embedding() method using the model textembedding-gecko@003.
  3. "<=>" is pgvector's cosine distance operator.
  4. We cast the embedding method's result to the vector type so it is comparable with the vectors stored in the database.
  5. LIMIT 10 returns the 10 closest matches to the search text.
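To build intuition for what the "<=>" operator computes, here is a small standalone Java sketch (not part of the codelab's function) of cosine distance between toy 3-dimensional vectors; the real embeddings have 768 dimensions, but the math is the same:

```java
public class CosineDistanceDemo {

    // Cosine distance = 1 - cosine similarity. pgvector's <=> operator
    // computes the same quantity between two stored vectors.
    public static double cosineDistance(double[] a, double[] b) {
        double dot = 0, normA = 0, normB = 0;
        for (int i = 0; i < a.length; i++) {
            dot += a[i] * b[i];
            normA += a[i] * a[i];
            normB += b[i] * b[i];
        }
        return 1.0 - dot / (Math.sqrt(normA) * Math.sqrt(normB));
    }

    public static void main(String[] args) {
        // Toy 3-dimensional "embeddings"; similar meanings yield nearby vectors.
        double[] query = {0.9, 0.1, 0.0};
        double[] closeMatch = {0.8, 0.2, 0.1};
        double[] farMatch = {0.0, 0.1, 0.9};
        System.out.println("distance to close match: " + cosineDistance(query, closeMatch));
        System.out.println("distance to far match:   " + cosineDistance(query, farMatch));
    }
}
```

Ordering rows by this distance (ascending) is exactly what ORDER BY ... <=> ... LIMIT 10 does: the smallest distances are the most semantically similar abstracts.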


As you can observe in your results, the matches are pretty close to the search text.

7. Take the application to the web

Ready to take this app to the web? Follow the steps below:

  1. Go to Cloud Shell Editor, and click the "Cloud Code — Sign in" icon on the bottom left corner (Status bar) of the editor. Select your current Google Cloud Project that has billing enabled and make sure you are signed in to the same project from Gemini as well (on the right corner of the status bar).
  2. Click the Cloud Code icon and wait until the Cloud Code dialog pops up. Select New Application, and in the Create New Application pop-up, select Cloud Functions application.

On page 2/2 of the Create New Application pop-up, select Java: Hello World, enter "alloydb-pgvector" as the project name, choose your preferred location, and click OK.

  3. In the resulting project structure, locate pom.xml and replace its contents with the pom.xml from the repo file. Among other dependencies, it should include the ones needed by the function code below, such as HikariCP, the PostgreSQL JDBC driver, and the AlloyDB JDBC connector.

  4. Replace the HelloWorld.java file with the content from the repo file.

Note that you have to replace the values below with your actual values:

String ALLOYDB_DB = "postgres";
String ALLOYDB_USER = "postgres";
String ALLOYDB_PASS = "*****";
String ALLOYDB_INSTANCE_NAME = "projects/<<YOUR_PROJECT_ID>>/locations/us-central1/clusters/<<YOUR_CLUSTER>>/instances/<<YOUR_INSTANCE>>";
//Replace YOUR_PROJECT_ID, YOUR_CLUSTER, YOUR_INSTANCE with your actual values

Note that the function expects the search text as an input parameter with the key "search", and in this implementation we return only the closest match from the database:

// Get the request body as a JSON object.
JsonObject requestJson = new Gson().fromJson(request.getReader(), JsonObject.class);
String searchText = requestJson.get("search").getAsString();

// Sample searchText: "A new Natural Language Processing related Machine Learning Model"
BufferedWriter writer = response.getWriter();
String result = "";
HikariDataSource dataSource = AlloyDbJdbcConnector();

try (Connection connection = dataSource.getConnection()) {
  // Retrieve the closest match for the search text (converted to embeddings)
  // using the cosine distance operator. The search text is bound as a
  // statement parameter to avoid SQL injection.
  try (PreparedStatement statement = connection.prepareStatement(
      "SELECT id || ' - ' || title AS literature FROM patents_data "
          + "ORDER BY abstract_embeddings <=> embedding('textembedding-gecko@003', ?)::vector LIMIT 1")) {
    statement.setString(1, searchText);
    ResultSet resultSet = statement.executeQuery();
    resultSet.next();
    String lit = resultSet.getString("literature");
    result = result + lit + "\n";
    System.out.println("Matching Literature: " + lit);
  }
  writer.write("Here is the closest match: " + result);
}
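The AlloyDbJdbcConnector() helper called above is defined in the repo file; as a rough sketch of what it does (assuming the HikariCP, PostgreSQL JDBC driver, and AlloyDB JDBC connector dependencies from pom.xml, and the same placeholder credentials as above), it might look like this:

```java
import com.zaxxer.hikari.HikariConfig;
import com.zaxxer.hikari.HikariDataSource;

public class AlloyDbConnection {
  // Same placeholder values as in HelloWorld.java above.
  static final String ALLOYDB_DB = "postgres";
  static final String ALLOYDB_USER = "postgres";
  static final String ALLOYDB_PASS = "*****";
  static final String ALLOYDB_INSTANCE_NAME =
      "projects/<<YOUR_PROJECT_ID>>/locations/us-central1/clusters/<<YOUR_CLUSTER>>/instances/<<YOUR_INSTANCE>>";

  // Builds a HikariCP pool that reaches the AlloyDB instance through the
  // AlloyDB Java connector's socket factory. No IP address is needed: the
  // connector resolves the instance from its fully qualified name.
  public static HikariDataSource AlloyDbJdbcConnector() {
    HikariConfig config = new HikariConfig();
    config.setJdbcUrl("jdbc:postgresql:///" + ALLOYDB_DB);
    config.setUsername(ALLOYDB_USER);
    config.setPassword(ALLOYDB_PASS);
    config.addDataSourceProperty("socketFactory", "com.google.cloud.alloydb.SocketFactory");
    config.addDataSourceProperty("alloydbInstanceName", ALLOYDB_INSTANCE_NAME);
    return new HikariDataSource(config);
  }
}
```

This is a connection-pool configuration sketch, not the definitive repo implementation; it cannot run outside Google Cloud, so refer to the repo file for the exact code.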
  5. To deploy the Cloud Function you just created, run the following command from the Cloud Shell terminal. Remember to navigate into the corresponding project folder first using the command:
cd alloydb-pgvector

Then run the command:

gcloud functions deploy patent-search --gen2 --region=us-central1 --runtime=java11 --source=. --entry-point=cloudcode.helloworld.HelloWorld --trigger-http

IMPORTANT STEP:

Once the deployment starts, you should be able to see the function in the Cloud Run functions console. Search for the newly created function, open it, edit the configuration, and change the following:

  1. Go to Runtime, build, connections and security settings
  2. Increase the timeout to 180 seconds
  3. Go to the CONNECTIONS tab.

  4. Under the Ingress settings, make sure "Allow all traffic" is selected.
  5. Under the Egress settings, click the Network dropdown, select the "Add New VPC Connector" option, and follow the instructions in the dialog box that pops up.


  6. Provide a name for the VPC Connector and make sure the region is the same as your instance. Leave the Network value as default and set Subnet as Custom IP Range with an available IP range such as 10.8.0.0.
  7. Expand SHOW SCALING SETTINGS and review the connector's scaling settings.

  8. Click CREATE; the connector should now be listed in the egress settings.
  9. Select the newly created connector.
  10. Opt for all traffic to be routed through this VPC connector.

8. Test the application

Once it is deployed, you should see the endpoint in the following format:

https://us-central1-YOUR_PROJECT_ID.cloudfunctions.net/patent-search

You can test it from the Cloud Shell terminal by running the following command:

gcloud functions call patent-search --region=us-central1 --gen2 --data '{"search": "A new Natural Language Processing related Machine Learning Model"}'


You can also test it from the Cloud Run functions list: select the deployed function and navigate to the TESTING tab. In the Configure triggering event text box, enter the following request JSON:

{"search": "A new Natural Language Processing related Machine Learning Model"}

Click the TEST THE FUNCTION button to see the result on the right side of the page.

That's it! It is that simple to perform Similarity Vector Search using the Embeddings model on AlloyDB data.

9. Clean up

To avoid incurring charges to your Google Cloud account for the resources used in this codelab, follow these steps:

  1. In the Google Cloud console, go to the Manage resources page.
  2. In the project list, select the project that you want to delete, and then click Delete.
  3. In the dialog, type the project ID, and then click Shut down to delete the project.

10. Congratulations

Congratulations! You have successfully performed a similarity search using AlloyDB, pgvector and Vector search. By combining the capabilities of AlloyDB, Vertex AI, and Vector Search, we've taken a giant leap forward in making literature searches accessible, efficient, and truly meaning-driven.