About this codelab
1. Overview
In this lab, you will use Google's generative AI products to build infrastructure in Google Cloud with the aid of Gemini Cloud Assist, query BigQuery data using natural language to SQL features of Data Canvas, write code in Colab Enterprise Jupyter notebooks and in Eclipse Theia (Visual Studio Code) with the help of Gemini Code Assist, and integrate AI search and chat features built on Cloud Storage and BigQuery grounding sources in Vertex AI Agent Builder.
Our goal is to create a recipes and cooking website called AI Recipe Haven. The site will be built in Python and Streamlit and will contain two major pages. Cooking Advice will host a chatbot we will create using Gemini and a Vertex AI Agent Builder grounded source tied to a group of cookbooks, and it will offer cooking advice and answer cooking related questions. Recipe Search will be a search engine fed by Gemini, this time grounded in a BigQuery recipe database.
If you get hung up on any of the code in this exercise, solutions for all code files are located in the exercise GitHub repo on the solution branch.
Objectives
In this lab, you learn how to perform the following tasks:
- Activate and use Gemini Cloud Assist
- Create a search app in Vertex AI Agent Builder for the cooking advice chatbot
- Load and clean data in a Colab Enterprise notebook, with help from Gemini Code Assist
- Create a search app in Vertex AI Agent Builder for the recipe generator
- Frame out the core Python and Streamlit web application, with a little Gemini help
- Deploy the web application to Cloud Run
- Connect the Cooking Advice page to our cookbook-search Agent Builder app
- (Optional) Connect the Recipe Search page to the recipe-search Agent Builder app
- (Optional) Explore the final application
2. Setup and requirements
Before you click the Start Lab button
Read these instructions. Labs are timed and you cannot pause them. The timer, which starts when you click Start Lab, shows how long Google Cloud resources will be made available to you.
This Qwiklabs hands-on lab lets you do the lab activities yourself in a real cloud environment, not in a simulation or demo environment. It does so by giving you new, temporary credentials that you use to sign in and access Google Cloud for the duration of the lab.
What you need
To complete this lab, you need:
- Access to a standard internet browser (Chrome browser recommended).
- Time to complete the lab.
Note: If you already have your own personal Google Cloud account or project, do not use it for this lab.
Note: If you are using a Pixelbook, open an Incognito window to run this lab.
How to start your lab and sign in to the Google Cloud Console
- Click the Start Lab button. If you need to pay for the lab, a pop-up opens for you to select your payment method. On the left is a panel populated with the temporary credentials that you must use for this lab.
- Copy the username, and then click Open Google Console. The lab spins up resources, and then opens another tab that shows the Sign in page.
Tip: Open the tabs in separate windows, side-by-side.
If you see the Choose an account page, click Use Another Account.
- In the Sign in page, paste the username that you copied from the Connection Details panel. Then copy and paste the password.
Important: You must use the credentials from the Connection Details panel. Do not use your Qwiklabs credentials. If you have your own Google Cloud account, do not use it for this lab (to avoid incurring charges).
- Click through the subsequent pages:
- Accept the terms and conditions.
- Do not add recovery options or two-factor authentication (because this is a temporary account).
- Do not sign up for free trials.
After a few moments, the Cloud Console opens in this tab.
Note: You can view the menu with a list of Google Cloud Products and Services by clicking the Navigation menu at the top-left.
3. Task 0. Check your Workstation cluster
In a later part of this lab you will be using a Google Cloud Workstation to do some development work. The startup process for this lab should have begun the creation of your Workstation's cluster. Before moving on, let's make sure that the cluster is building.
- In the Google Cloud Console, use the search box to navigate to Cloud Workstations.
- Use the left hand navigation menu to view Cluster management.
- If you have a cluster that's Updating, you are good and can move on to Task 1. If you don't see any clusters in any state, refresh the page. If you still don't see a cluster Updating (building), end the lab using the End Lab button in the upper left of these instructions and restart it.
4. Task 1. Activate and use Gemini Cloud Assist
In this task we will activate and use Gemini Cloud Assist. While working in the Google Cloud Console, Gemini Cloud Assist can offer advice; help you build, configure, and monitor your Google Cloud infrastructure; and even suggest gcloud commands and write Terraform scripts.
- To activate Cloud Assist for use, click into the Search box at the top of the Cloud Console UI and select Ask Gemini (or the wording might be Ask Gemini for Cloud console).
- Scroll to the Required API section of the page and Enable the Gemini for Google Cloud API.
- If you don't immediately see a chat interface, click Start chatting. Start by asking Gemini to explain some of the benefits of using Cloud Workstations. Take a few minutes to explore the generated response.
- Next, ask about the benefits of Agent Builder and how it can help ground generative responses.
- Finally, let's look at a comparison. In the Gemini chat window of Google Cloud Console, ask the following question:
What are the major steps to creating a search app grounded in a GCS data source using Vertex AI Agent Builder?
- Now, in your non-incognito window, go to the public Gemini website here, log in if needed, and ask the same question. Are the responses the same or at least similar? The specific steps? Is either noticeably better? Regardless, keep the responses in mind as we run through the next steps.
Note: If you try to do the above step using your temporary Qwiklabs account you'll be blocked. If your work account is also blocked, because your org is not allowing Gemini web app usage, simply skip the step and move on. This will not impact your ability to complete this exercise.
5. Task 2. Create a search app in Vertex AI Agent Builder for the cooking advice chatbot
The web site we are building will have a cooking advice page containing a chatbot designed to help users find answers to cooking related questions. It will be powered by Gemini grounded in a source containing 70 public-domain cookbooks. The cookbooks will act as the source of truth Gemini uses when answering questions.
- Use the Cloud Console search box to navigate to Vertex AI. From the Dashboard, click Enable All Recommended APIs. If you get a popup box about the Vertex AI API itself needing enabling, please Enable it as well.
- Use search to navigate to Agent Builder then Continue and Activate the API.
- As Gemini suggested in our earlier advice seeking, creating a search app in Agent Builder starts with the creation of an authoritative data source. When the user searches, Gemini understands the question and how to compose intelligent responses, but it will look to the grounded source for the information used in that response, rather than pulling from its innate knowledge.
- From the left-hand menu, navigate to Data Stores and Create Data Store.
- The public domain cookbooks we are using to ground our cooking advice page are currently in a Cloud Storage bucket in an external project. Select the Cloud Storage source type.
- Examine but don't change the default options related to the type of information we are importing. Leave the import type set to Folder and for the bucket path use: labs.roitraining.com/labs/old-cookbooks, then Continue.
- Name the data store old-cookbooks. Edit and change the ID to old-cookbooks-id, then Create the data store.
Vertex AI Agent Builder supports several app types, and the Data Store acts as the source of truth for each. Search apps are good for general use and search. Chat apps are for generative flows in Dialogflow-driven chatbot/voicebot applications. Recommendation apps help create better recommendation engines. And Agent apps are for creating GenAI-driven agents. Eventually, the Agent type would probably serve us best for what we want to do, but since that product is currently in preview, we'll stick with the Search app type.
- Use the left-side menu to navigate to Apps, then click Create App.
- Select the Search app type. Examine but don't change the various options. Name the app cookbook-search. Edit and set the app ID to cookbook-search-id. Set the company to Google and Continue.
- Check the old-cookbooks data store you created a few steps ago and Create the Search App.
If you examine the Activity tab, you'll likely see that the cookbooks are still importing and indexing. It will take 5+ minutes for Agent Builder to index the thousands of pages contained in the 70 cookbooks we've given it. While it's working, let's load and clean some recipe database data for our recipe generator.
6. Task 3. Load and clean data in a Colab Enterprise notebook, with help from Gemini Code Assist
Google Cloud offers a couple of major ways you can work with Jupyter notebooks. We are going to use Google's newest offering, Colab Enterprise. Some of you may be familiar with Google's Colab product, commonly used by individuals and organizations who would like to experiment with Jupyter notebooks in a free environment. Colab Enterprise is a commercial Google Cloud offering that's fully integrated with the rest of Google's cloud products and takes full advantage of the security and compliance capabilities of the GCP environment.
One of the features Colab Enterprise offers is integration with Google's Gemini Code Assist. Code Assist may be used in a number of different code editors and can offer advice as well as seamless inline suggestions while you code. We will leverage this generative assistant while we wrangle our recipe data.
- Use search to navigate to Colab Enterprise and Create a notebook. If you get an offer to experiment with new Colab features, dismiss it. To get the runtime, the compute power behind the notebook, up and going, press Connect in the upper right corner of your new notebook.
- Use the triple-dot menu next to the current notebook name in the Colab Enterprise Files pane to rename it Data Wrangling.
- Create a new + Text box, and use the up arrow to move it so it's the first cell on the page.
- Edit the text box and enter:
# Data Wrangling
Import the Pandas library
- In the code block below the text block you just created, start typing imp and Gemini Code Assist should suggest the rest of the import in grey. Press tab to accept the suggestion.
import pandas as pd
- Below the import code box, create another text box and enter:
Create a Pandas DataFrame from: gs://labs.roitraining.com/labs/recipes/recipe_dataset.csv. View the first few records.
- Create and edit another code block. Again, start typing df and examine the code Gemini Code Assist generates. If you see an autocomplete drop-list of Python keywords over the generated suggestion, press Escape to see the light grey suggested code. Again, press Tab to accept the suggestion. If your suggestion didn't contain the head() function call, add it.
df = pd.read_csv('gs://labs.roitraining.com/labs/recipes/recipe_dataset.csv')
df.head()
- Click into your first code cell, where you imported Pandas, and use the Commands menu or keyboard to run the selected cell. On the keyboard shift+enter will run the cell and shift focus to the next cell, creating one if needed. Wait for the cell to execute before moving on.
Note: You will see [ ] just to the left of a cell that hasn't been executed. While a cell is executing, you'll see a spinning, working animation. Once the cell finishes, a number will appear, like [13].
- Execute the cell that loads the CSV into the DataFrame. Wait for the file to load and examine the first five rows of data. This is the recipe data we will load into BigQuery and eventually use to ground our recipe generator.
- Create a new code block and enter the comment below. After typing the comment, move to the next code line and you should receive the suggestion df.columns. Accept it, then run the cell.
# List the current DataFrame column names
We've just demonstrated the two ways you can get help from Gemini Code Assist in a Jupyter notebook: text cells above code cells, or comments inside the code cell itself. Comments inside code cells work well in Jupyter notebooks, and the same approach works in any other IDE that supports Gemini Code Assist.
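For example, in a fresh code cell you could type only the comment below, and Gemini Code Assist will typically suggest the following line for you (the exact suggestion can vary, and describe() here is just an illustration, not a required lab step):

```python
# Show summary statistics for the numeric columns in the DataFrame
df.describe()
```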
- Let's do a little column cleanup. Rename the column Unnamed: 0 to id, and link to uri. Use your choice of prompt > code techniques to create the code, then run the cell when satisfied.
# Rename the column 'Unnamed: 0' to 'id' and 'link' to 'uri'
df.rename(columns={'Unnamed: 0': 'id', 'link': 'uri'}, inplace=True)
- Remove the source and NER columns and use head() to view the first few rows. Again, get Gemini to help. Run the cell and examine the results.
# Remove the source and NER columns
df.drop(columns=['source', 'NER'], inplace=True)
df.head()
- Let's see how many records are in our dataset. Again, start with your choice of prompting technique and see if you can get Gemini to help you generate the code.
# Count the records in the DataFrame
df.shape # count() will also work
- 2.23 million records is probably more recipes than we have time for. The indexing process in Agent Builder would likely take too long for our exercise today. As a compromise, let's sample out 150,000 recipes and work with that. Use your prompt > code approach to take the sample and store it in a new DataFrame named dfs (s for small).
# Sample out 150,000 records into a DataFrame named dfs
dfs = df.sample(n=150000)
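If you want your sample to be reproducible across reruns (an optional tweak, not something the lab requires), you can pass a fixed random_state to sample():

```python
# Optional: a fixed random_state makes reruns produce the same 150,000-row sample
dfs = df.sample(n=150000, random_state=42)
```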
- Our recipe source data is ready to load into BigQuery. Before we do the load, let's head over to BigQuery and prep a dataset to hold our table. In the Google Cloud Console use the Search Box to navigate to BigQuery. You might right-click BigQuery and open it in a new browser tab.
- If it's not already visible, open the Gemini AI Chat panel using the Gemini logo in the upper right of the Cloud Console. If you are asked to enable the API again, either press enable or refresh the page. Run the prompt:
What is a dataset used for in BigQuery?
After you've explored the response, ask:
How can I create a dataset named recipe_data using the Cloud Console?
Compare the results to the following few steps.
- In the BigQuery Explorer pane, click the triple dot View actions menu next to your project ID. Then select Create dataset.
- Give the dataset an ID of recipe_data. Leave the location type set to US and Create Dataset. If you receive an error that the dataset already exists, simply move on.
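If you'd rather script it, here is a rough, code-based equivalent of the console steps above using the BigQuery Python client (a sketch only; the console route is all the lab requires, and YOUR_PROJECT_ID is a placeholder):

```python
from google.cloud import bigquery

# Hypothetical alternative to the console steps: create the recipe_data dataset from code
client = bigquery.Client(project="YOUR_PROJECT_ID")          # replace with your lab project ID
dataset = bigquery.Dataset(f"{client.project}.recipe_data")  # dataset ID: recipe_data
dataset.location = "US"
client.create_dataset(dataset, exists_ok=True)               # exists_ok avoids the "already exists" error
```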
With the dataset created in BigQuery, let's switch back to our notebook and do the insert.
- Switch back to your Data Wrangling notebook in Colab Enterprise. In a new code cell, create a variable named project_id and use it to hold your current project ID. Look in the upper left of these instructions, below the End Lab button, and you'll find the current project ID (it's also on the Cloud Console home page). Assign the value to your project_id variable and run the cell.
# Create a variable to hold the current project_id
project_id='YOUR_PROJECT_ID'
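As an alternative to pasting the ID by hand, the notebook's own credentials can usually report the project (a hedged option; if it returns None in your environment, just assign the ID as shown above):

```python
import google.auth

# Ask the notebook's default credentials which project they belong to
_, project_id = google.auth.default()
print(project_id)
```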
- Use the prompt > code approach to create a block of code that will insert the DataFrame dfs into a table named recipes in the recipe_data dataset we just created. Run the cell.
dfs.to_gbq(destination_table='recipe_data.recipes', project_id=project_id, if_exists='replace')
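As an optional sanity check (not a lab step), you can query the new table from the same notebook and confirm the row count matches the sample size:

```python
from google.cloud import bigquery

# Confirm the load worked: count the rows in recipe_data.recipes
client = bigquery.Client(project=project_id)
rows = client.query("SELECT COUNT(*) AS n FROM recipe_data.recipes").result()
print(next(iter(rows)).n)   # should print 150000
```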
7. Task 4. Create a search app in Vertex AI Agent Builder for the recipe generator
Excellent, with our table of recipe data created, let's use it to build a grounded data source for our recipe generator. The approach we will use will be similar to what we did for our cooking chatbot. We will use Vertex AI Agent Builder to create a Data Store, and then use that as the source of truth for a Search App.
If you like, feel free to ask Gemini in the Google Cloud Console to remind you of the steps to create an Agent Builder search app, or you can follow the steps listed below.
- Use Search to navigate to Agent Builder. Open Data Stores and Create Data Store. This time, select BigQuery as the data source type.
- In the table selection cell, press Browse and search for recipes. Select the radio button next to your table. If you see recipes tables from other qwiklabs-gcp-... projects, make sure to select the one that belongs to you.
Note: If you click on recipes instead of selecting the radio button next to it, a new browser tab will open and take you to the table overview page in BigQuery. Just close that tab and select the radio button in Agent Builder.
- Examine but don't change the rest of the default options, then Continue.
- In the schema review page, examine the initial default configurations, but don't change anything. Continue.
- Name the data store recipe-data. Edit the data store ID and set it to recipe-data-id. Create the Data Store.
- Navigate to Apps using the left-hand navigation menu and Create App.
- Select the Search app type once more. Name the app recipe-search and set the ID to recipe-search-id. Set the company name to Google and Continue.
- This time, check the recipe-data data store. Create the app.
It will take a while for our database table to index. While it does, let's experiment with BigQuery's new Data Canvas and see if we can find an interesting recipe or two.
- Use the search box to navigate to BigQuery. At the top of BigQuery Studio, click the down arrow next to the right-most tab and select Data canvas. Set the region to us-central1.
- In the Data canvas search box, search for recipes, then Add to canvas your table.
- A visual representation of your recipes table will be loaded into the BigQuery Data canvas. You can explore the table's schema, preview the data in the table, and examine other details. Below the table representation, click Query.
- The canvas will load a more or less typical BigQuery query dialog with one addition: above the query window is a text box you can use to prompt Gemini for help. Let's see if we can find some cake recipes in our sample. Run the following prompt (by typing the text and pressing enter/return to trigger the SQL generation):
Please select the title and ingredients for all the recipes with a title that contains the word cake.
- Look at the SQL generated. Once you're satisfied, Run the query.
- Not too shabby! Feel free to experiment with a few other prompts and queries before moving on. When you experiment, try less specific prompts to see what works and what doesn't. As an example, this prompt (don't forget to run each new query):
Do I have any chili recipes?
returned a list of chili recipes but left out the ingredients until I modified it to:
Do I have any chili recipes? Please include their title and ingredients.
(Yes, I say please when I prompt. My Mama would be so proud.)
I noticed that one chili recipe contained mushrooms, and who wants that in chili? I asked Gemini to help me exclude those recipes.
Do I have any chili recipes? Please include their title and ingredients, and ignore any recipes with mushrooms as an ingredient.
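For reference, the SQL Data Canvas generates for the first "cake" prompt usually looks something like the query below (an illustrative guess; your generated SQL may differ), and you could run the equivalent from your notebook with the BigQuery client:

```python
from google.cloud import bigquery

client = bigquery.Client(project=project_id)   # project_id from the Data Wrangling notebook

# Roughly the shape of the generated query for the "cake" prompt (illustrative; yours may differ)
sql = """
SELECT title, ingredients
FROM `recipe_data.recipes`
WHERE LOWER(title) LIKE '%cake%'
"""
for row in client.query(sql).result():
    print(row.title)
```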
8. Task 5. Frame out the core Python and Streamlit web application, with a little Gemini help
With both of our Vertex AI Agent Builder data stores indexing and with our search apps just about ready to roll, let's get to building our web application.
We will be leveraging Gemini Code Assist while we work. For more information on using Gemini Code Assist in Visual Studio Code, see the documentation here.
We will be doing our development in a Google Cloud Workstation, a cloud-based development environment pre-loaded, in our case, with Eclipse Theia (open source Visual Studio Code). An automated script in this exercise has created the Cloud Workstation cluster and configuration for us, but we still need to create the Cloud Workstation itself. If you'd like more information on Cloud Workstations and their use, ask Gemini Cloud Assist :-)
- Use search to navigate to Cloud Workstations, then Create Workstation. Name the Workstation dev-env and use the my-config configuration. Create the workstation.
- After a few minutes, you will see your new workstation in your My workstations list. Start dev-env and, once it's running, Launch the development environment.
- The workstation editor will open in a new browser tab, and after a few moments you should see a familiar Theia (Visual Studio Code) interface. On the left side of the interface, expand the Source Control tab and press Clone Repository.
- For the repository URL enter https://github.com/haggman/recipe-app. Clone the repo into your user folder, then Open the cloned repo for editing.
- Before we explore the cloned folder and start working on our web application, we need to get the editor's Cloud Code plugin logged into Google Cloud and enable Gemini. Let's do that now. In the bottom left of your editor, click Cloud Code - Sign in. If you don't see the link, wait a minute and check again.
- The terminal window will display a long URL. Open the URL in the browser and run through the steps to grant Cloud Code access to your Google Cloud environment. Make sure you use your temporary student-... lab account and not your personal Google Cloud account when you authenticate. In the final dialog, Copy the verification code and paste it back into the waiting terminal window in your Cloud Workstation browser tab.
- After a few moments, the Cloud Code link at the bottom left of your editor will change to Cloud Code - No Project. Click the new link to select a project. The command palette should open at the top of the editor. Click Select a Google Cloud project and select your qwiklabs-gcp-... project. After a few moments, the link in the lower left of your editor will update to display your project ID. This indicates that Cloud Code is successfully attached to your working project.
- With Cloud Code connected to your project, you can now activate Gemini Code Assist. In the lower right of your editor interface, click the crossed-out Gemini logo. The Gemini Chat pane will open on the left of the editor. Click Select a Google Cloud Project. When the command palette opens, select your qwiklabs-gcp-... project. If you've followed the steps correctly (and Google hasn't changed anything), you should now see an active Gemini chat window.
- Lastly, let's get the editor terminal window equally configured. Use the hamburger menu > View > Terminal to open the terminal window. Execute gcloud init. Once again, use the link to allow the terminal to work against your qwiklabs-gcp-... project. When asked, select the numeric option corresponding to your qwiklabs-gcp-... project.
- Excellent. With our terminal, Gemini chat, and Cloud Code configurations all set, open the Explorer tab and take a few minutes to explore the files in the current project.
- In the Explorer, open your requirements.txt file for editing. Switch to the Gemini chat pane and ask:
From the dependencies specified in the requirements.txt file, what type of application are we building?
- So, we are building an interactive web application using Python and Streamlit that interacts with Vertex AI and Discovery Engine. Nice. For now, let's focus on the web application components. As Gemini says, Streamlit is a framework for building data-driven web applications in Python. Now ask:
Does the current project's folder structure seem appropriate for a Streamlit app?
This is where Gemini tends to have issues. Gemini can access the file you have currently open in the editor, but it can't actually see the whole project. Try asking this:
Given the below, does the current project's file and folder structure seem appropriate for a Streamlit app?
- build.sh
- Home.py
- requirements.txt
- pages
-- Cooking_Advice.py
-- Recipe_Search.py
Get a better answer?
- Let's get some more information about Streamlit:
What can you tell me about Streamlit?
Nice, so we can see Gemini is offering us a solid overview, including pros and cons. (A minimal sketch of a Streamlit multipage app appears at the end of this task.)
- If you wanted to explore the cons, you could ask:
What are the major downsides or shortcomings?
Notice, we didn't have to say, "of Streamlit," because Gemini chat is conversational (multi-turn). Gemini knows what we've been talking about because we are in a chat session. If at any point you want to wipe the Gemini chat history clean, use the trashcan icon at the top of the Gemini code chat window.
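For orientation before we start editing, a multipage Streamlit app of the shape Gemini just described needs very little boilerplate. Here's a minimal sketch of a landing page (illustrative only; the repo's actual Home.py will differ):

```python
# Home.py - minimal sketch of a Streamlit multipage landing page (not the repo's actual file)
import streamlit as st

st.set_page_config(page_title="AI Recipe Haven", page_icon="🍲")
st.title("AI Recipe Haven")
st.write(
    "Use the sidebar to open Cooking Advice (a Gemini-powered chatbot) "
    "or Recipe Search (grounded in our BigQuery recipe data)."
)
# Streamlit automatically turns the files in the pages/ folder into sidebar navigation entries.
```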
9. Task 6: Deploy the web application to Cloud Run
Excellent, we have our core application structure in place, but will it all work? Better yet, where should we host it in Google Cloud?
- In the Gemini chat window, ask:
If I containerize this application, what compute technologies
in Google Cloud would be best for hosting it?
- Remember, if you weren't already working in your IDE, you could also use Gemini Cloud Assist. Open the Google Cloud Console, then open Gemini Cloud Assist and ask:
If I have a containerized web application, where would be the
best place to run it in Google Cloud?
Were the two sets of advice the same? Do you agree/disagree with any of the advice? Remember, Gemini is a Generative AI assistant, and like a human assistant, you won't always agree with everything it says. Still, having that helper always at your side while you work in Google Cloud and in your code editor can make you much more efficient.
- For a stateless short-lived containerized web application, Cloud Run would be a great option. In the Gemini chat window of your code editor, try the prompt:
What steps would be required to run this application in
Cloud Run?
- It looks like the first thing we need to do is create a Dockerfile. Using the editor, create a file named Dockerfile in the root of your project folder. Make sure you don't accidentally place it in the pages folder. Open the file for editing.
- Let's use the side Gemini chat panel to create our Dockerfile. Use a prompt like the one below. When the results are displayed in chat, use the + next to the copy icon just above the suggested Dockerfile to insert the suggested code into the Dockerfile.
Create a Dockerfile for the application in the current folder.
The dependencies are defined in requirements.txt and I want you
to use the Python 3 slim bookworm base image.
Gemini doesn't always return the same response to the same prompt. The first time I asked Gemini for a Dockerfile I got the exact file I'm going to suggest you use. Just now I received the suggestion:
```docker
# Base image
FROM python:3-bookworm-slim
# Set working directory
WORKDIR /app
# Install dependencies
RUN apt-get update && apt-get install -y \
build-essential \
libpq-dev \
gcc \
python3-dev \
&& rm -rf /var/lib/apt/lists/*
# Install pip and virtualenv
RUN pip install --upgrade pip virtualenv
# Create virtual environment
RUN python3 -m venv venv
# Activate virtual environment
WORKDIR /app/venv/bin
RUN . activate
# Install Streamlit and libraries from requirements.txt
RUN pip install -r requirements.txt
# Copy application files
COPY . /app
# Expose port 8501 for Streamlit
EXPOSE 8501
# Start Streamlit app
CMD ["streamlit", "run", "main.py"]
```
That's a heck of a Dockerfile. I'd simplify it a bit. We don't need the apt-get section as anything needed for Python is already in our base image. Also, using a virtual environment in a Python container is a waste of space, so I'd remove that. The expose command isn't strictly necessary, but it's fine. Also, it's trying to start main.py which I don't have.
6. Tweak the Dockerfile so it resembles the following:
FROM python:3.11-slim-bookworm

WORKDIR /app

COPY requirements.txt .
RUN pip install --no-cache-dir --upgrade pip && \
    pip install --no-cache-dir -r requirements.txt

COPY . .

CMD ["streamlit", "run", "Home.py"]
7. We need a place in Google Cloud where we can store our container image. Let's get a little help from Gemini. In the Google Cloud Console's Gemini Cloud Assist, ask:
Where's the best place in Google Cloud to store Docker images?
8. If one of the answers you received is the Google Container Registry, then I guess Gemini hasn't gotten word that GCR is deprecated. Again, just like human assistants, you may get out of date or simply wrong answers (hallucinations). Always make sure to consider your choices carefully, even when Gemini is recommending something.
Let's go with Artifact Registry. Ask Gemini Cloud Assist how to create a Docker repository in Artifact Registry named cooking-images.
How can I use gcloud to create a docker registry in Artifact Registry?
9. Now ask Gemini how you could use gcloud to deploy a Cloud Run service named `recipe-web-app` from a container image built from the Dockerfile in the current folder. For example:
How could I use gcloud to build a new Cloud Run service named recipe-web-app from an image of the same name out of the Artifact Registry repo we just created?
10. To save you a little time, I've created a script that will create the Artifact Registry repo (if needed), use Cloud Build to build and push the image to the repo, and finally to deploy the application to Cloud Run. In your code editor use the **Explorer** view to open `build.sh` and explore the file.
11. Gemini can operate via the chat window, but it can also work directly in your code file using comments, like we used in the Data Wrangling notebook, and it also may be invoked using Control+i on Windows or Command+i on Mac. Click somewhere in the build.sh script file, activate Gemini using the appropriate Command+i / Control+i command.
<img src="img/61ac2c9a245a3695.png" alt="61ac2c9a245a3695.png" width="624.00" />
12. At the prompt enter the below. Examine and **Accept** the change.
Please comment the current file.
How cool is that?! How many times have you had to work with someone else's code, only to waste time gaining a basic understanding of their comment-free work before you could even start making your changes? Gemini to the rescue!
13. Let's build and deploy our application. In the terminal window execute the `build.sh` file.
. build.sh
14. If you watch the build process, first it creates the Artifact Registry Docker repo. Then it uses Cloud Build to create the container image from the Dockerfile in the local folder (since we didn't supply a `cloudbuild.yaml`). Lastly, the Docker image is deployed as a new Cloud Run service. At the end of the script you'll get a Cloud Run test URL to use.
Open the returned link in a new tab of your browser. Take a moment and explore the application's structure and pages. Nice. Now we need to hook in our generative AI functionality.
## Task 7: Connect the Cooking Advice page to our cookbook-search Agent Builder app
We have the framework for the web application running, but we need to connect the two work pages to our two Vertex AI Agent Builder search apps. Let's start with Cooking Advice.
1. In the Google Cloud console use search to navigate to **Chat** in Vertex AI.
2. In the right-hand settings pane, set the model to **gemini-1.5-flash-002**. Slide the output token limit up to the max so the model can return longer answers if needed. Open the **Safety Filter Settings**. Set Hate speech, Sexually explicit content, and Harassment content to **Block some**. Set Dangerous content to **Block few** and **Save**. We're setting Dangerous content a bit lower because talking about knives and cutting can be misinterpreted by Gemini as violence.
3. Slide on the toggle to enable **Grounding**, then click **Customize**. Set the grounding source to **Vertex AI search** and for the datastore path use the following. Change YOUR_PROJECT_ID to the project ID found near the End Lab button in these instructions, then **Save** the grounding settings.
projects/YOUR_PROJECT_ID/locations/global/collections/default_collection/dataStores/old-cookbooks-id
**Note:** If you get an error then you either didn't change the project ID to your actual project ID, or you may have missed the step where you changed the old-cookbooks Agent Builder Data Store ID. Check your Agent Builder > Data Stores > old-cookbooks for its actual Data store ID.
4. Test a couple of chat messages. Perhaps start with the below. Try a few others if you like.
How can I tell if a tomato is ripe?
5. The model works, now let's experiment with the code. Click **Clear Conversation** so our conversations don't become part of the code then click **Get Code**.
<img src="img/dce8ad7ee006cca1.png" alt="dce8ad7ee006cca1.png" width="624.00" />
6. At the top of the code window, press Open Notebook so we can experiment and perfect the code in Colab Enterprise before integrating it into our app.
7. Take a few minutes to familiarize yourself with the code. Let's make a couple of changes to adapt it to what we want. Before we start, run the first code cell to connect to the compute and install the AI Platform SDK. After the block runs you will be prompted to restart the session. Go ahead and do that.
8. Move to the code we pulled out of Vertex AI Studio. Change the name of the method *multiturn_generate_content* to `start_chat_session`.
9. Scroll to the `model = GenerativeModel(` method call. The existing code defines the `generation_config` and `safety_settings` but doesn't actually use them. Modify the creation of the `GenerativeModel` so it resembles:
model = GenerativeModel(
    "gemini-1.5-flash-002",
    tools=tools,
    generation_config=generation_config,
    safety_settings=safety_settings,
)
10. Lastly, add a final line to the method, just below `chat = model.start_chat()`, so the function returns the `chat` object. The finished function should look like the below.
**Note:** DO NOT COPY this code into your notebook. It is simply here as a sanity check.
def start_chat_session():
    vertexai.init(project="qwiklabs-gcp-02-9a7298ceaaec", location="us-central1")
    tools = [
        Tool.from_retrieval(
            retrieval=grounding.Retrieval(
                source=grounding.VertexAISearch(
                    datastore="projects/qwiklabs-gcp-02-9a7298ceaaec/locations/global/collections/default_collection/dataStores/old-cookbooks-id"
                ),
            )
        ),
    ]
    model = GenerativeModel(
        "gemini-1.5-flash-002",
        tools=tools,
        generation_config=generation_config,
        safety_settings=safety_settings,
    )
    chat = model.start_chat()
    return chat
11. Scroll to the bottom of the code cell and change the final line calling the old function so it calls the new function name and stores the returned object in a variable `chat`. Once you are satisfied with your changes, run the cell.
chat = start_chat_session()
12. Create a new code cell and add the comment `# Use chat to invoke Gemini and print out the response`. Move to the next line and type resp, and Gemini should autocomplete the block for you. Update the prompt to `How can I tell if a tomato is ripe?`. Run the cell.
response = chat.send_message("How can I tell if a tomato is ripe?")
print(response)
13. That's the response all right, but the part we really want is the nested text field. Modify the code block to print just that section, like:
response = chat.send_message("How can I tell if a tomato is ripe?")
print(response.candidates[0].content.parts[0].text)
14. Good, now that we have working chat code, let's integrate it into our web application. Copy all the contents of the code cell that creates the `start_chat_session` function (we won't need the test cell). If you click into the cell, you can click the triple-dot menu in the upper right corner and copy from there.
<img src="img/17bf8d947393d4b.png" alt="17bf8d947393d4b.png" width="326.00" />
15. Switch to your Cloud Workstation editor and open `pages/Cooking_Advice.py` for editing.
16. Locate the comment:
Add the code you copied from your notebook below this message
17. Paste your copied code just below that comment. Now we have the section that drives the chat engine via a grounded call to Gemini. Next, let's integrate it into Streamlit.
18. Locate the section of commented code directly below the comment:
Here's the code to setup your session variables
Uncomment this block when instructed
19. Uncomment this section of code (up to the next `Setup done, let's build the page UI` section) and explore it. It creates or retrieves the chat and history session variables.
20. Next, we need to integrate the history and chat functionality into the UI. Scroll in the code until you locate the below comment.
Here's the code to create the chat interface
Uncomment the below code when instructed
21. Uncomment the rest of the code below the comment and take a moment to explore it. If you like, highlight it and get Gemini to explain its functionality. (A minimal sketch of this chat pattern appears at the end of this task.)
22. Excellent, now let's build the application and deploy it. When the URL comes back, launch the application and give the Cooking Advice page a try. Perhaps ask it about ripe tomatoes, or ask whether the bot knows a good way to prepare Brussels sprouts.
. build.sh
How cool is that! Your own personal AI cooking advisor :-)
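For reference, the chat UI you just uncommented follows a common Streamlit pattern. Here's a minimal sketch of that pattern, assuming the `start_chat_session()` function you pasted in earlier (the repo's actual Cooking_Advice.py code may differ):

```python
import streamlit as st

# Minimal sketch of a Streamlit chat page (illustrative; not the repo's actual code)
if "chat" not in st.session_state:
    st.session_state.chat = start_chat_session()   # the grounded chat function pasted from the notebook
if "history" not in st.session_state:
    st.session_state.history = []

# Replay earlier turns so the conversation survives Streamlit's reruns
for role, text in st.session_state.history:
    with st.chat_message(role):
        st.write(text)

# Accept a new question, send it to Gemini, and display the grounded answer
if prompt := st.chat_input("Ask a cooking question"):
    with st.chat_message("user"):
        st.write(prompt)
    response = st.session_state.chat.send_message(prompt)
    answer = response.candidates[0].content.parts[0].text
    with st.chat_message("assistant"):
        st.write(answer)
    st.session_state.history.append(("user", prompt))
    st.session_state.history.append(("assistant", answer))
```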
## Task 8: (Optional) Connect the Recipe Search page to the recipe-search Agent Builder app
When we connected the Cooking Advice page to its grounded source, we did so through the Gemini API. For Recipe Search, let's call the Vertex AI Agent Builder search app directly instead.
1. In your Cloud Workstation editor, open the `pages/Recipe_Search.py` page for editing. Investigate the structure of the page.
2. Towards the top of the file, set your project ID.
3. Examine the `search_sample` function. This code more or less comes directly from the Discovery Engine documentation [here](https://cloud.google.com/generative-ai-app-builder/docs/preview-search-results#genappbuilder_search-python). You can find a working copy in this notebook [here](https://github.com/GoogleCloudPlatform/generative-ai/blob/main/search/create_datastore_and_search.ipynb).
4. The only change I made was to return `response.results` instead of the full response. Without this, the return type is an object designed to page through results, and that's something we don't need for our basic application. (A sketch of this general search pattern appears at the end of this task.)
5. Scroll to the very end of the file and uncomment the entire section below `Here are the first 5 recipes I found`.
6. Highlight the whole section you just uncommented and open Gemini Code chat. Ask, `Explain the highlighted code`. If you don't have something selected, Gemini can explain the whole file. If you highlight a section and ask Gemini to explain, or comment, or improve it, Gemini will.
Take a moment and read through the explanation. For what it's worth, using a Colab Enterprise notebook is a great way to explore the Gemini APIs before you integrate them into your application. It's especially helpful for exploring some of the newer APIs, which may not be documented as well as they could be.
7. At your editor terminal window, run `build.sh` to deploy the final application. Wait until the new version is deployed before moving to the next step.
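For reference, the documented search pattern that `search_sample` follows looks roughly like the sketch below (based on the public Discovery Engine docs; the engine ID and helper name here are placeholders, and the repo's actual function may differ):

```python
from google.cloud import discoveryengine_v1 as discoveryengine

def search_recipes(project_id: str, query: str, engine_id: str = "recipe-search-id"):
    """Sketch of a Discovery Engine search call; see the repo's search_sample for the real code."""
    client = discoveryengine.SearchServiceClient()
    serving_config = (
        f"projects/{project_id}/locations/global/collections/default_collection/"
        f"engines/{engine_id}/servingConfigs/default_config"
    )
    request = discoveryengine.SearchRequest(
        serving_config=serving_config,
        query=query,
        page_size=5,
    )
    response = client.search(request)
    return response.results   # the materialized results rather than the pager itself
```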
## Task 9: (Optional) Explore the final application
Take a few minutes to explore the final application.
1. In the Google Cloud console, use search to navigate to **Cloud Run**, then click into your **recipe-web-app**.
2. Locate the application test URL (towards the top) and open it in a new browser tab.
3. The application home page should appear. Note the basic layout and navigation provided by Streamlit, with the Python files from the `pages` folder displayed as navigational choices and `Home.py` loaded as the home page. Navigate to the **Cooking Advice** page.
4. After a few moments the chat interface will appear. Again, note the nice core layout provided by Streamlit.
5. Try a few cooking related questions and see how the bot functions. Something like:
Do you have any advice for preparing broccoli?
How about a classic chicken soup recipe?
Tell me about meringue.
6. Now let's find a recipe or two. Navigate to the Recipe Search page and try a few searches. Something like:
Chili con carne
Chili, corn, rice
Lemon Meringue Pie
A dessert containing strawberries
## Congratulations!
You have created an application leveraging Vertex AI Agent Builder. Along the way you've explored Gemini Cloud Assist, Gemini Code Assist, and the natural-language-to-SQL features of BigQuery's Data Canvas. Fantastic job!