1. Overview
In this lab, you will create a GenAI agent, connect it to a Cloud Run application, and integrate the agent into a Slack workspace.
What you will learn
There are several main parts to the lab:
- Deploy Cloud Run application to integrate with Gemini APIs
- Create and deploy Vertex AI Agent
- Integrate Agent into Slack
- Configure data store for Q&A over PDF documents
Prerequisites
- This lab assumes familiarity with the Cloud Console and Cloud Shell environments.
2. Setup and Requirements
Cloud Project setup
- Sign in to the Google Cloud Console and create a new project or reuse an existing one. If you don't already have a Gmail or Google Workspace account, you must create one.
- The Project name is the display name for this project's participants. It is a character string not used by Google APIs. You can always update it.
- The Project ID is unique across all Google Cloud projects and is immutable (it cannot be changed after it has been set). The Cloud Console auto-generates a unique string; usually you don't care what it is. In most codelabs, you'll need to reference your Project ID (typically identified as PROJECT_ID). If you don't like the generated ID, you can generate another random one, or you can try your own and see if it's available. It can't be changed after this step and remains for the duration of the project.
- For your information, there is a third value, a Project Number, which some APIs use. Learn more about all three of these values in the documentation. A quick way to look them up from Cloud Shell is shown after this list.
- Next, you'll need to enable billing in the Cloud Console to use Cloud resources/APIs. Running through this codelab won't cost much, if anything at all. To shut down resources to avoid incurring billing beyond this tutorial, you can delete the resources you created or delete the project. New Google Cloud users are eligible for the $300 USD Free Trial program.
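If you already have Cloud Shell open, one way to print all three values for the active project (a quick optional check, assuming gcloud is already pointed at your project):
# Prints the project name, project ID, and project number for the active project
gcloud projects describe $(gcloud config get-value project) --format="value(name, projectId, projectNumber)"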
Environment Setup
Open Gemini chat.
Enable Cloud AI Companion API:
Click "Start chatting
" and follow one of the sample questions or type your own prompt to try it out.
Prompts to try:
- Explain Cloud Run in 5 key points.
- You are Google Cloud Run Product Manager, explain Cloud Run to a student in 5 short key points.
- You are Google Cloud Run Product Manager, explain Cloud Run to a Certified Kubernetes Developer in 5 short key points.
- You are Google Cloud Run Product Manager, explain when you would use Cloud Run versus GKE to a Senior Developer in 5 short key points.
Check out the Prompt Guide to learn more about writing better prompts.
How Gemini for Google Cloud uses your data
Google's privacy commitment
Google was one of the first in the industry to publish an AI/ML privacy commitment, which outlines our belief that customers should have the highest level of security and control over their data that's stored in the cloud.
Data you submit and receive
The questions that you ask Gemini, including any input information or code that you submit to Gemini to analyze or complete, are called prompts. The answers or code completions that you receive from Gemini are called responses. Gemini doesn't use your prompts or its responses as data to train its models.
Encryption of prompts
When you submit prompts to Gemini, your data is encrypted in-transit as input to the underlying model in Gemini.
Program data generated from Gemini
Gemini is trained on first-party Google Cloud code as well as selected third-party code. You're responsible for the security, testing, and effectiveness of your code, including any code completion, generation, or analysis that Gemini offers you.
Learn more about how Google handles your prompts.
3. Options to test prompts
You have several options to test prompts.
- Vertex AI Studio is a part of Google Cloud's Vertex AI platform, specifically designed to simplify and accelerate the development and use of generative AI models.
- Google AI Studio is a web-based tool for prototyping and experimenting with prompt engineering and the Gemini API.
- Gemini Web App (gemini.google.com): a web-based tool designed to help you explore and utilize the power of Google's Gemini AI models.
- Google Gemini mobile app for Android and the Google app on iOS.
4. Clone the repo
Return to Google Cloud Console and activate Cloud Shell by clicking on the icon to the right of the search bar.
In the opened terminal, run the following commands:
git clone https://github.com/GoogleCloudPlatform/genai-for-developers.git
cd genai-for-developers
git checkout slack-agent-jira-lab
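To confirm you are on the lab branch, you can optionally check the current branch:
git branch --show-current
# expected output: slack-agent-jira-lab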
Click "Open Editor"
Using the "File / Open Folder
" menu item, open "genai-for-developers
".
Open a new terminal
5. Create Service Account
Create a new service account and keys.
You will use this service account to make API calls to the Vertex AI Gemini API from the Cloud Run application.
Configure the gcloud project using your Qwiklabs project ID.
Example: qwiklabs-gcp-00-2c10937585bb
gcloud config set project YOUR_QWIKLABS_PROJECT_ID
Create service account and grant roles.
export LOCATION=us-central1
export PROJECT_ID=$(gcloud config get-value project)
export SERVICE_ACCOUNT_NAME='vertex-client'
export DISPLAY_NAME='Vertex Client'
export KEY_FILE_NAME='vertex-client-key'
gcloud iam service-accounts create $SERVICE_ACCOUNT_NAME --project $PROJECT_ID --display-name "$DISPLAY_NAME"
gcloud projects add-iam-policy-binding $PROJECT_ID --member="serviceAccount:$SERVICE_ACCOUNT_NAME@$PROJECT_ID.iam.gserviceaccount.com" --role="roles/aiplatform.admin"
gcloud projects add-iam-policy-binding $PROJECT_ID --member="serviceAccount:$SERVICE_ACCOUNT_NAME@$PROJECT_ID.iam.gserviceaccount.com" --role="roles/aiplatform.user"
gcloud projects add-iam-policy-binding $PROJECT_ID --member="serviceAccount:$SERVICE_ACCOUNT_NAME@$PROJECT_ID.iam.gserviceaccount.com" --role="roles/cloudbuild.builds.editor"
gcloud projects add-iam-policy-binding $PROJECT_ID --member="serviceAccount:$SERVICE_ACCOUNT_NAME@$PROJECT_ID.iam.gserviceaccount.com" --role="roles/artifactregistry.admin"
gcloud projects add-iam-policy-binding $PROJECT_ID --member="serviceAccount:$SERVICE_ACCOUNT_NAME@$PROJECT_ID.iam.gserviceaccount.com" --role="roles/storage.admin"
gcloud projects add-iam-policy-binding $PROJECT_ID --member="serviceAccount:$SERVICE_ACCOUNT_NAME@$PROJECT_ID.iam.gserviceaccount.com" --role="roles/run.admin"
gcloud projects add-iam-policy-binding $PROJECT_ID --member="serviceAccount:$SERVICE_ACCOUNT_NAME@$PROJECT_ID.iam.gserviceaccount.com" --role="roles/secretmanager.secretAccessor"
gcloud iam service-accounts keys create $KEY_FILE_NAME.json --iam-account=$SERVICE_ACCOUNT_NAME@$PROJECT_ID.iam.gserviceaccount.com
If prompted to authorize, click "Authorize" to continue.
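Optionally, you can verify the roles granted to the service account with a standard IAM policy query (uses the variables exported above):
# Lists the roles bound to the vertex-client service account in this project
gcloud projects get-iam-policy $PROJECT_ID \
  --flatten="bindings[].members" \
  --filter="bindings.members:serviceAccount:$SERVICE_ACCOUNT_NAME@$PROJECT_ID.iam.gserviceaccount.com" \
  --format="table(bindings.role)"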
Enable required services to use Vertex AI APIs and Gemini chat.
gcloud services enable \
generativelanguage.googleapis.com \
aiplatform.googleapis.com \
cloudaicompanion.googleapis.com \
run.googleapis.com \
cloudresourcemanager.googleapis.com
Enable the additional services required to build and deploy the application.
gcloud services enable \
artifactregistry.googleapis.com \
cloudbuild.googleapis.com \
runapps.googleapis.com \
workstations.googleapis.com \
servicemanagement.googleapis.com \
secretmanager.googleapis.com \
containerscanning.googleapis.com
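To confirm the APIs are enabled, you can list the enabled services and filter for a few of the ones above (a simple optional check):
# Should list aiplatform, run, and secretmanager entries, among others
gcloud services list --enabled | grep -E 'aiplatform|run|secretmanager'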
Enable Gemini Code Assist
Click on the "Gemini" icon, in the bottom right corner, click "Sign-in
" and "Select Google Cloud project
".
From the popup window, select your qwiklabs project.
Example:
Open file "devai-api/app/routes.py
" and then right click anywhere in the file and select "Gemini Code Assist > Explain
this"
from the context menu.
Review Gemini's explanation for the selected file.
6. Deploy Devai-API to Cloud Run
Check that you are in the right folder.
cd ~/genai-for-developers/devai-api
For this lab, we follow best practices and use Secret Manager to store and reference the Access Token and LangChain API Key values in Cloud Run.
Set environment variables.
export JIRA_API_TOKEN=your-jira-token
export JIRA_USERNAME="YOUR-EMAIL"
export JIRA_INSTANCE_URL="https://YOUR-JIRA-PROJECT.atlassian.net"
export JIRA_PROJECT_KEY="YOUR-JIRA-PROJECT-KEY"
export JIRA_CLOUD=true
export GITLAB_PERSONAL_ACCESS_TOKEN=your-gitlab-token
export GITLAB_URL="https://gitlab.com"
export GITLAB_BRANCH="devai"
export GITLAB_BASE_BRANCH="main"
export GITLAB_REPOSITORY="GITLAB-USERID/GITLAB-REPO"
export LANGCHAIN_API_KEY=your-langchain-key
export LANGCHAIN_TRACING_V2=true
export LANGCHAIN_ENDPOINT="https://api.smith.langchain.com"
Store the JIRA Access Token in Secret Manager.
echo -n $JIRA_API_TOKEN | \
gcloud secrets create JIRA_API_TOKEN \
--data-file=-
Store the GitLab Access Token in Secret Manager.
echo -n $GITLAB_PERSONAL_ACCESS_TOKEN | \
gcloud secrets create GITLAB_PERSONAL_ACCESS_TOKEN \
--data-file=-
Store the LangChain API Key in Secret Manager.
echo -n $LANGCHAIN_API_KEY | \
gcloud secrets create LANGCHAIN_API_KEY \
--data-file=-
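To confirm a secret was stored correctly, you can read back its latest version (an optional check; note that this prints the secret value to the terminal):
gcloud secrets versions access latest --secret="LANGCHAIN_API_KEY"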
Deploy application to Cloud Run.
gcloud run deploy devai-api \
--source=. \
--region="$LOCATION" \
--allow-unauthenticated \
--service-account vertex-client \
--set-env-vars PROJECT_ID="$PROJECT_ID" \
--set-env-vars LOCATION="$LOCATION" \
--set-env-vars GITLAB_URL="$GITLAB_URL" \
--set-env-vars GITLAB_REPOSITORY="$GITLAB_REPOSITORY" \
--set-env-vars GITLAB_BRANCH="$GITLAB_BRANCH" \
--set-env-vars GITLAB_BASE_BRANCH="$GITLAB_BASE_BRANCH" \
--set-env-vars JIRA_USERNAME="$JIRA_USERNAME" \
--set-env-vars JIRA_INSTANCE_URL="$JIRA_INSTANCE_URL" \
--set-env-vars JIRA_PROJECT_KEY="$JIRA_PROJECT_KEY" \
--set-env-vars JIRA_CLOUD="$JIRA_CLOUD" \
--set-env-vars LANGCHAIN_TRACING_V2="$LANGCHAIN_TRACING_V2" \
--update-secrets="LANGCHAIN_API_KEY=LANGCHAIN_API_KEY:latest" \
--update-secrets="GITLAB_PERSONAL_ACCESS_TOKEN=GITLAB_PERSONAL_ACCESS_TOKEN:latest" \
--update-secrets="JIRA_API_TOKEN=JIRA_API_TOKEN:latest" \
--min-instances=1 \
--max-instances=3
Answer "Y" to create the Artifact Registry Docker repository.
Deploying from source requires an Artifact Registry Docker repository to store built containers. A repository named [cloud-run-source-deploy] in
region [us-central1] will be created.
Do you want to continue (Y/n)? y
Ask Gemini to explain the command:
Review the gcloud run deploy SERVICE_NAME --source=. flow below. Learn more.
Behind the scenes, this command uses Google Cloud's buildpacks and Cloud Build to automatically build container images from your source code without having to install Docker on your machine or set up buildpacks or Cloud Build. That is, the single command described above does what would otherwise require the gcloud builds submit and gcloud run deploy commands.
If you have provided a Dockerfile (as we did in this repository), Cloud Build uses it to build the container image instead of relying on buildpacks to automatically detect and build it. To learn more about buildpacks, check out the documentation.
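For reference only, a rough sketch of that equivalent two-step flow; the repository and image path are assumptions based on the cloud-run-source-deploy repository created above, and you do not need to run these commands in this lab:
# Hypothetical equivalent of the single gcloud run deploy --source=. command above
gcloud builds submit --tag "$LOCATION-docker.pkg.dev/$PROJECT_ID/cloud-run-source-deploy/devai-api" .
gcloud run deploy devai-api \
  --image "$LOCATION-docker.pkg.dev/$PROJECT_ID/cloud-run-source-deploy/devai-api" \
  --region "$LOCATION"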
Review Cloud Build logs in the Console.
Review created Docker image in Artifact Registry.
Open cloud-run-source-deploy/devai-api and review the vulnerabilities that were automatically detected. Check the ones that have fixes available and see how they can be fixed based on the description.
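If you prefer the command line, one way to list the images built into that repository (the repository path assumes the defaults used in this lab):
# Lists container images in the cloud-run-source-deploy repository
gcloud artifacts docker images list "$LOCATION-docker.pkg.dev/$PROJECT_ID/cloud-run-source-deploy"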
Review Cloud Run instance details in the Cloud Console.
Test the endpoint by running a curl command.
curl -X POST \
-H "Content-Type: application/json" \
-d '{"prompt": "PROJECT-100"}' \
$(gcloud run services list --filter="(devai-api)" --format="value(URL)")/generate
Review output:
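Optionally, the service also exposes a GET /test endpoint (it appears in the OpenAPI schema used in the next section); a quick additional smoke test:
# Calls the GET /test endpoint of the deployed devai-api service
curl $(gcloud run services list --filter="(devai-api)" --format="value(URL)")/test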
7. Vertex AI Agent Builder
Search and open "Agent Builder".
Activate APIs
Create Agent app:
Type "Agent" for Display name and click "Agree & Create".
Set Agent Name:
Agent
Set Goal:
Help user with questions about JIRA project
Set Instructions:
- Greet the users, then ask how you can help them today.
- Summarize the user's request and ask them to confirm that you understood correctly.
- If necessary, seek clarifying details.
- Thank the user for their business and say goodbye.
Click "Save":
Test the Agent using emulator chat on the right side:
Open Tools menu and create a new Tool:
Select "OpenAPI" from the Type dropdown.
Set Tool Name:
jira-project-status
Set Description:
Returns JIRA project status
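To find the Cloud Run URL to paste into the schema below, you can look it up from Cloud Shell (assuming the devai-api service deployed earlier):
gcloud run services describe devai-api --region "$LOCATION" --format="value(status.url)"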
Set Schema (YAML) - replace YOUR CLOUD RUN URL.
openapi: 3.0.0
info:
title: CR API
version: 1.0.0
description: >-
This is the OpenAPI specification of a service.
servers:
- url: 'https://YOUR CLOUD RUN URL'
paths:
/create-jira-issue:
post:
summary: Request impl
operationId: create-jira-issue
requestBody:
description: Request impl
required: true
content:
application/json:
schema:
$ref: '#/components/schemas/Prompt'
responses:
'200':
description: Generated
content:
application/json:
schema:
type: string
/generate:
post:
summary: Request impl
operationId: generate
requestBody:
description: Request impl
required: true
content:
application/json:
schema:
$ref: '#/components/schemas/Prompt'
responses:
'200':
description: Generated
content:
application/json:
schema:
type: string
/test:
get:
summary: Request impl
operationId: test
responses:
'200':
description: Generated
content:
application/json:
schema:
type: string
components:
schemas:
Prompt:
type: object
required:
- prompt
properties:
prompt:
type: string
Save the Tool configuration:
Return to Agent configuration and update instructions to use the tool:
Add instructions to use new tool:
- Use ${TOOL: jira-project-status} to help the user with JIRA project status.
Switch to Examples tab and add new example:
Set Display Name:
jira-project-flow
Using menu at the bottom, model the conversation between user and agent:
Tool invocation configuration:
Click Save and Cancel. Return to the Agent emulator and test the flow.
Review Best Practices for Vertex AI Agents
Agent Settings
Logging settings
Model configuration.
GitHub integration to push and restore Agent configuration.
Agent emulator controls:
8. Slack Integration
Open the Integrations menu and click "Connect" on the Slack tile.
Open the link and create a new Slack app at https://api.slack.com/apps
Select from "Manifest":
Pick a workspace to develop your app
Switch to YAML and paste this manifest:
display_information:
name: Agent
description: Agent
background_color: "#1148b8"
features:
app_home:
home_tab_enabled: false
messages_tab_enabled: true
messages_tab_read_only_enabled: false
bot_user:
display_name: Agent
always_online: true
oauth_config:
scopes:
bot:
- app_mentions:read
- chat:write
- im:history
- im:read
- im:write
- incoming-webhook
settings:
event_subscriptions:
request_url: https://dialogflow-slack-4vnhuutqka-uc.a.run.app
bot_events:
- app_mention
- message.im
org_deploy_enabled: false
socket_mode_enabled: false
token_rotation_enabled: false
Click "Create":
Install to Workspace:
Select "#general" channel and click "Allow"
Under "Basic Information / App Credentials" - copy "Signing Secret" and set it in Slack integration.
Open "OAuth & Permissions" and copy "Bot User OAuth Token" and set it in Slack integration.
Set the required fields and click "Start".
Agent's "Access Token" value is "Bot User OAUth Token" from Slack.
Agent's "Signing Token" value is "Signing Secret" from Slack.
Copy "Webhook URL" and return to Slack app configuration.
Open the "Event Subscriptions" section and paste the url.
Save the changes.
Open "Slack" and add an agent by typing "@Agent".
For example, adding an app with the name "@CX".
Ask the agent for a JIRA project summary.
9. Q&A over PDF documents
Create Cloud Storage Bucket
Open GCS in the Cloud Console: https://console.cloud.google.com/storage/browser
Create a new bucket.
For bucket name type: "pdf-docs
" + last 5 digits of your GCP project.
Location type: multi-region, us
.
Storage class: Standard
Access control: Uniform
Data protection: uncheck soft delete policy
Click "Create
".
Confirm "Public access will be prevented".
Download the PDF report from https://services.google.com/fh/files/misc/exec_guide_gen_ai.pdf and upload it to the bucket.
Bucket with uploaded file view:
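If you prefer to do this step from Cloud Shell instead of the Console, a rough equivalent using gcloud storage (the bucket name below is a placeholder; replace the suffix with the last 5 digits of your project ID):
# Hypothetical bucket name; use your own "pdf-docs" + project suffix
export BUCKET_NAME="pdf-docs-XXXXX"
gcloud storage buckets create "gs://$BUCKET_NAME" \
  --location=us \
  --default-storage-class=STANDARD \
  --uniform-bucket-level-access
# Download the report and upload it to the bucket
curl -L -o exec_guide_gen_ai.pdf https://services.google.com/fh/files/misc/exec_guide_gen_ai.pdf
gcloud storage cp exec_guide_gen_ai.pdf "gs://$BUCKET_NAME/"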
Data store configuration
Return to the Agent Console and open "Agent", scroll down and click "+ Data store".
Use the following values:
Tool name: pdf-docs
Type: Data store
Description: pdf-docs
Click "Save
"
Click the "Create a data store
" at the bottom on the page.
Click "AGREE
" when asked about "Do you agree to have your search & conversation data stores in the us region?"
Type "Google
" in the "Provide Company" field.
On the next screen, click "CREATE DATA STORE
".
Select "Cloud Storage
" as data source.
Prepare data for ingesting
https://cloud.google.com/generative-ai-app-builder/docs/prepare-data
HTML and TXT files must be 2.5 MB or smaller.
PDF, PPTX, and DOCX files must be 100 MB or smaller.
You can import up to 100,000 files at a time.
Select "Unstructured documents", and then select your GCS bucket/folder.
Click "Continue":
For the data store name, type "pdf-docs".
Select "Digital parser" from the dropdown.
Enable advanced chunking.
Enable ancestor headings in chunks.
Click "Create".
Select the data store and click "Create".
Click on the data store and review Documents, Activity and Processing Config.
It will take ~5-10 minutes to complete the import.
Parsing and Chunking options
You can control content parsing in the following ways:
- Digital parser. The digital parser is on by default for all file types unless a different parser type is specified. The digital parser processes ingested documents if no other default parser is specified for the data store or if the specified parser doesn't support the file type of an ingested document.
- OCR parsing for PDFs. Public preview. If you plan to upload scanned PDFs or PDFs with text inside images, you can turn on the OCR parser to improve PDF indexing. See About OCR parsing for PDFs.
- Layout parser. Public preview. Turn on the layout parser for HTML, PDF, or DOCX files if you plan to use Vertex AI Search for RAG. See Chunk documents for RAG for information about this parser and how to turn it on.
Learn more about parsing and chunking documents.
Tool configuration
Return to the tab with Tools configuration.
Refresh the browser and select "pdf-docs" from the Unstructured dropdown.
Configure grounding.
Type "Google" for the company name.
Payload settings - check "Include snippets in the response payload".
Click "Save".
Agent's instructions configuration
Return to Agent configuration.
Add new instruction:
- Provide detailed answer to users questions about the exec guide to gen ai using information in the ${TOOL:pdf-docs}
Save configuration.
Create an example for PDF-Docs tool
Switch to the Examples tab. Create a new example.
Using actions "+
":
Add "User input":
What are the main capabilities?
Add "Tool use".
- Tool & Action: "
pdf-docs
"
Input (requestBody)
{
"query": "Main capabilities",
"filter": "",
"userMetadata": {},
"fallback": ""
}
Tool Output:
{
"answer": "Detailed answer about main capabilities",
"snippets": [
{
"uri": "https://storage.cloud.google.com/pdf-docs-49ca4/exec_guide_gen_ai.pdf",
"text": "Detailed answer about main capabilities",
"title": "exec_guide_gen_ai"
}
]
}
Add "Agent response"
Detailed answer about main capabilities.
https://storage.cloud.google.com/pdf-docs-49ca4/exec_guide_gen_ai.pdf
Configured example:
Tool invocation configuration:
Test the configuration by sending a question to the Agent in the emulator.
Question:
What are the 10 steps in the exec guide?
Select "Agent
" and click "Save example
".
Provide a name "user-question-flow
" and save.
Format the agent response and include a link to the PDF doc from the tool output section.
Save the example.
Return to the emulator and click "Replay conversation". Check the updated response format.
Ask another question:
What are the main capabilities in the exec guide?
Source PDF document.
Question:
What should I consider when evaluating projects?
Source PDF document.
Question:
What are the priority use cases in Retail and CPG in the exec guide?
Source PDF document.
10. Prebuilt Agents
Explore prebuilt Agents from the menu on the left.
Select one of the agents and deploy it. Explore Agent's setup, instructions and tools.
11. Congratulations!
Congratulations, you finished the lab!
What we've covered:
- How to deploy Cloud Run application to integrate with Gemini APIs
- How to create and deploy Vertex AI Agent
- How to add Slack integration for the Agent
- How to configure data store for Q&A over PDF documents
What's next:
- Review Best Practices for Vertex AI Agents
Clean up
To avoid incurring charges to your Google Cloud account for the resources used in this tutorial, either delete the project that contains the resources, or keep the project and delete the individual resources.
Deleting the project
The easiest way to eliminate billing is to delete the project that you created for the tutorial.
©2024 Google LLC All rights reserved. Google and the Google logo are trademarks of Google LLC. All other company and product names may be trademarks of the respective companies with which they are associated.