1. Introduction
AI agents are only as useful as the data they can reach. Most real-world data lives in databases, and connecting an agent to a database usually means writing connection management, query logic, and embedding pipelines inside the agent code. Every agent that needs database access repeats this work, and every query change requires redeploying the agent.
This codelab introduces a different approach. You declare database tools (standard SQL queries, vector similarity search, even automatic embedding generation) in a YAML file, and MCP Toolbox for Databases handles all database operations as an MCP server. Your agent code stays minimal: load the tools and let Gemini decide which one to call.
What you'll build
A smart job-listing assistant for "TechJobs": a Gemini-powered ADK agent that helps developers browse tech jobs with standard filters (role, tech stack) and discover listings through natural-language descriptions such as "I want a remote job working on AI chatbots". The agent reads from and writes to a Cloud SQL PostgreSQL database entirely through MCP Toolbox for Databases, which handles all database access, including automatically generating embeddings for vector search. By the end, both the Toolbox and the agent run on Cloud Run.
What you'll learn
- How MCP (Model Context Protocol) standardizes tool access for AI agents, and how MCP Toolbox for Databases applies that standard to database operations
- Configuring MCP Toolbox for Databases as middleware between an ADK agent and Cloud SQL PostgreSQL
- Defining database tools declaratively in tools.yaml, with no database code in the agent
- Building an ADK agent with ToolboxToolset that loads tools from a running Toolbox server
- Generating vector embeddings with Cloud SQL's built-in embedding() function and enabling semantic search with pgvector
- Using valueFromParam to automatically ingest vectors on write operations
- Deploying the Toolbox server and the ADK agent to Cloud Run
Prerequisites
- A Google Cloud account with a trial billing account
- Basic familiarity with Python and SQL
- Experience with cloud databases and ADK is helpful
2. Set up the environment
This step prepares the Cloud Shell environment, configures your Google Cloud project, and downloads the setup script.
Open Cloud Shell
Open Cloud Shell in your browser. Cloud Shell provides a preconfigured environment with all the tools this codelab needs. When the authorization prompt appears, click "Authorize".
Then open a terminal via "View" -> "Terminal". The interface should look like the image below:

This will be our main workspace: the IDE at the top and the terminal at the bottom.
Set up the working directory
Create a working directory. All the code you write in this codelab lives here:
mkdir -p ~/build-agent-adk-toolbox-cloudsql
cloudshell workspace ~/build-agent-adk-toolbox-cloudsql && cd ~/build-agent-adk-toolbox-cloudsql
Next, prepare a few directories for the seeding scripts, logs, and similar files:
mkdir -p ~/build-agent-adk-toolbox-cloudsql/scripts
mkdir -p ~/build-agent-adk-toolbox-cloudsql/logs
Configure the Google Cloud project
Create a .env file with the location variables:
# For Vertex AI / Gemini API calls
echo "GOOGLE_CLOUD_LOCATION=global" > .env
# For Cloud SQL, Cloud Run, Artifact Registry
echo "REGION=us-central1" >> .env
To simplify project setup in the terminal, download this project setup script into the working directory:
curl -sL https://raw.githubusercontent.com/alphinside/cloud-trial-project-setup/main/setup_verify_trial_project.sh -o setup_verify_trial_project.sh
Run the script. It verifies your trial billing account, creates a new project (or verifies an existing one), saves the project ID to the .env file in the current directory, and sets the active project in gcloud.
bash setup_verify_trial_project.sh && source .env
The script does the following:
- Verifies that you have an active trial billing account
- Checks .env for an existing project, if any
- Creates a new project or reuses the existing one
- Links the trial billing account to the project
- Saves the project ID to .env
- Sets the project as the active gcloud project
In the Cloud Shell terminal prompt, check the yellow text next to the working directory to confirm the project is set correctly. It should show the project ID.

Enable the required APIs
Next, enable the APIs for the products we will interact with:
gcloud services enable \
aiplatform.googleapis.com \
sqladmin.googleapis.com \
compute.googleapis.com \
run.googleapis.com \
cloudbuild.googleapis.com \
artifactregistry.googleapis.com
- Vertex AI API (aiplatform.googleapis.com): the agent uses Gemini models, and Toolbox uses the embedding API for vector search.
- Cloud SQL Admin API (sqladmin.googleapis.com): provisions and manages the PostgreSQL instance.
- Compute Engine API (compute.googleapis.com): required when creating the Cloud SQL instance.
- Cloud Run, Cloud Build, Artifact Registry: used in the deployment steps later in this codelab.
3. Prepare the database initialization scripts
In this step you write an automated setup script that creates the Cloud SQL instance, waits for it to be ready, then creates the database, seeds it with job listings, and generates embeddings, all in one pass.
First, add the database credentials to the .env file and reload it:
echo "DB_PASSWORD=techjobs-pwd" >> .env
echo "DB_INSTANCE=jobs-instance" >> .env
echo "DB_NAME=jobs_db" >> .env
source .env
Create the Bash script that creates the instance and database
Next, create the scripts/setup_database.sh script with the following command:
mkdir -p ~/build-agent-adk-toolbox-cloudsql/scripts
cloudshell edit scripts/setup_database.sh
Then copy the following code into the scripts/setup_database.sh file:
#!/bin/bash
set -e
source .env
echo "================================================"
echo "Database Setup"
echo "================================================"
echo ""
# Step 1: Create Cloud SQL instance
echo "[1/5] Creating Cloud SQL instance..."
# Check if instance already exists
if gcloud sql instances describe "$DB_INSTANCE" --quiet >/dev/null 2>&1; then
echo " Instance already exists"
else
echo " Creating instance (takes 5-10 minutes)..."
gcloud sql instances create "$DB_INSTANCE" \
--database-version=POSTGRES_17 \
--tier=db-custom-1-3840 \
--edition=ENTERPRISE \
--region="$REGION" \
--root-password="$DB_PASSWORD" \
--enable-google-ml-integration \
--database-flags cloudsql.enable_google_ml_integration=on \
--quiet
fi
echo " ✓ Instance ready"
echo ""
# Step 2: Verify instance is ready
echo "[2/5] Verifying instance state..."
STATE=$(gcloud sql instances describe "$DB_INSTANCE" --format='value(state)')
if [ "$STATE" != "RUNNABLE" ]; then
echo "ERROR: Instance not ready (state: $STATE)"
exit 1
fi
echo " ✓ Instance is RUNNABLE"
echo ""
# Step 3: Grant IAM permissions
echo "[3/5] Granting Vertex AI permissions..."
SERVICE_ACCOUNT=$(gcloud sql instances describe "$DB_INSTANCE" \
--format='value(serviceAccountEmailAddress)')
if [ -z "$SERVICE_ACCOUNT" ]; then
echo "ERROR: Could not retrieve service account"
exit 1
fi
gcloud projects add-iam-policy-binding "$GOOGLE_CLOUD_PROJECT" \
--member="serviceAccount:$SERVICE_ACCOUNT" \
--role="roles/aiplatform.user" \
--quiet
echo " ✓ Permissions granted"
echo ""
# Step 4: Create database
echo "[4/5] Creating database..."
# Check if database already exists
if gcloud sql databases describe "$DB_NAME" \
--instance="$DB_INSTANCE" --quiet >/dev/null 2>&1; then
echo " Database already exists"
else
gcloud sql databases create "$DB_NAME" \
--instance="$DB_INSTANCE" \
--quiet
fi
echo " ✓ Database '$DB_NAME' ready"
echo ""
# Step 5: Seed database and generate embeddings
echo "[5/5] Seeding database and generating embeddings..."
SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
SETUP_SCRIPT="${SCRIPT_DIR}/setup_jobs_db.py"
if [ ! -f "$SETUP_SCRIPT" ]; then
echo "ERROR: Setup script not found: $SETUP_SCRIPT"
exit 1
fi
uv run "$SETUP_SCRIPT"
echo ""
echo "================================================"
echo "Setup complete!"
echo "================================================"
echo ""
Create the Python script that seeds the data
Next, create the seeding script scripts/setup_jobs_db.py with the following command:
cloudshell edit scripts/setup_jobs_db.py
Then copy the following code into the scripts/setup_jobs_db.py file:
import os
import sys
from pathlib import Path
from dotenv import load_dotenv
from google.cloud.sql.connector import Connector
import pg8000
import time
# Load environment variables from .env file
env_path = Path(__file__).parent.parent / '.env'
load_dotenv(env_path)
EMBEDDING_MODEL='gemini-embedding-001'
# Verify required environment variables
required_vars = ['GOOGLE_CLOUD_PROJECT', 'REGION', 'DB_PASSWORD']
missing_vars = [var for var in required_vars if not os.environ.get(var)]
if missing_vars:
print(f"ERROR: Missing required environment variables: {', '.join(missing_vars)}", file=sys.stderr)
print(f"", file=sys.stderr)
print(f"Expected .env file location: {env_path}", file=sys.stderr)
if not env_path.exists():
print(f"✗ File not found at that location", file=sys.stderr)
else:
print(f"✓ File exists but is missing the variables above", file=sys.stderr)
print(f"", file=sys.stderr)
print(f"Make sure your .env file contains:", file=sys.stderr)
for var in missing_vars:
print(f" {var}=<value>", file=sys.stderr)
sys.exit(1)
# Job listings data (fictional, for tutorial purposes only)
JOBS = [
("Senior Backend Engineer", "Stripe", "Backend", "Go, PostgreSQL, gRPC, Kubernetes", "$180-250K/year", "San Francisco, Hybrid", 3,
"Design and build high-throughput microservices powering payment infrastructure for millions of businesses. Optimize Go services for sub-100ms latency at scale, work with PostgreSQL and Redis for data persistence, and deploy on Kubernetes clusters handling billions of API calls."),
("Machine Learning Engineer", "Spotify", "Data/AI", "Python, TensorFlow, BigQuery, Vertex AI", "$170-230K/year", "Stockholm, Remote", 2,
"Build and deploy ML models for music recommendation and personalization systems serving hundreds of millions of listeners. Design feature pipelines in BigQuery, train models using distributed computing, and serve predictions through real-time APIs processing thousands of requests per second."),
("Frontend Engineer", "Vercel", "Frontend", "React, TypeScript, Next.js", "$140-190K/year", "Remote", 4,
"Build developer-facing dashboard interfaces and deployment tools used by millions of developers worldwide. Create responsive, accessible React components for project management, analytics, and real-time deployment monitoring with a focus on developer experience."),
("DevOps Engineer", "Datadog", "DevOps", "Terraform, GCP, Docker, Kubernetes, ArgoCD", "$160-220K/year", "New York, Hybrid", 2,
"Manage cloud infrastructure powering an observability platform used by thousands of engineering teams. Automate deployment pipelines with ArgoCD, manage multi-cloud Kubernetes clusters, and implement infrastructure-as-code with Terraform across production environments."),
("Mobile Engineer (Android)", "Grab", "Mobile", "Kotlin, Jetpack Compose, GraphQL", "$120-170K/year", "Singapore, Hybrid", 3,
"Develop features for a super-app serving millions of users across Southeast Asia. Build modern Android UIs with Jetpack Compose, integrate GraphQL APIs, and optimize app performance for diverse device capabilities and network conditions."),
("Data Engineer", "Airbnb", "Data", "Python, Apache Spark, Airflow, BigQuery", "$160-210K/year", "San Francisco, Hybrid", 2,
"Build data pipelines that process booking, search, and pricing data for a global travel marketplace. Design ETL workflows with Apache Spark and Airflow, maintain data warehouses in BigQuery, and ensure data quality for analytics and machine learning teams."),
("Full Stack Engineer", "Revolut", "Full Stack", "TypeScript, Node.js, React, PostgreSQL", "$130-180K/year", "London, Remote", 5,
"Build the next generation of financial products making banking accessible to millions of users across 35 countries. Develop real-time trading interfaces with React and WebSockets, build Node.js APIs handling market data streams, and design PostgreSQL schemas for financial transactions."),
("Site Reliability Engineer", "Cloudflare", "SRE", "Go, Prometheus, Grafana, GCP, Terraform", "$170-230K/year", "Austin, Hybrid", 2,
"Ensure 99.99% uptime for a global network handling millions of requests per second. Define SLOs, build monitoring dashboards with Prometheus and Grafana, manage incident response, and automate infrastructure scaling across 300+ data centers worldwide."),
("Cloud Architect", "Google Cloud", "Cloud", "GCP, Terraform, Kubernetes, Python", "$200-280K/year", "Seattle, Hybrid", 1,
"Help enterprises modernize their infrastructure on Google Cloud. Design multi-region architectures, lead migration projects from on-premises to GKE, and build reference implementations using Terraform and Cloud Foundation Toolkit."),
("Backend Engineer (Payments)", "Square", "Backend", "Java, Spring Boot, PostgreSQL, Kafka", "$160-220K/year", "San Francisco, Hybrid", 3,
"Build payment processing systems handling millions of transactions for businesses of all sizes. Design event-driven architectures using Kafka, implement idempotent payment flows with Spring Boot, and ensure PCI-DSS compliance across all services."),
("AI Engineer", "Hugging Face", "Data/AI", "Python, LangChain, Vertex AI, FastAPI, PostgreSQL", "$150-210K/year", "Paris, Remote", 2,
"Build AI-powered tools for the largest open-source ML community. Develop RAG pipelines that index and search model documentation, create conversational agents using LangChain, and deploy AI services with FastAPI on cloud infrastructure."),
("Platform Engineer", "Coinbase", "Platform", "Rust, Kubernetes, AWS, Terraform", "$180-250K/year", "Remote", 0,
"Build the infrastructure platform for a leading cryptocurrency exchange. Develop high-performance matching engines in Rust, manage Kubernetes clusters for microservices, and design CI/CD pipelines that enable rapid feature deployment with zero downtime."),
("QA Automation Engineer", "Shopify", "QA", "Python, Selenium, Cypress, Jenkins", "$110-160K/year", "Toronto, Hybrid", 3,
"Design and maintain automated test suites for a commerce platform powering millions of merchants. Build end-to-end test frameworks with Cypress and Selenium, integrate tests into Jenkins CI pipelines, and establish quality gates that prevent regressions in checkout and payment flows."),
("Security Engineer", "CrowdStrike", "Security", "Python, SIEM, Kubernetes, Penetration Testing", "$170-240K/year", "Austin, On-site", 1,
"Protect enterprise customers from cyber threats on a leading endpoint security platform. Conduct penetration testing, design security monitoring with SIEM tools, implement zero-trust networking in Kubernetes environments, and lead incident response for security events."),
("Product Engineer", "GitLab", "Full Stack", "Go, React, PostgreSQL, Redis, GCP", "$140-200K/year", "Remote", 4,
"Own features end-to-end for an all-in-one DevSecOps platform used by millions of developers. Build Go microservices for CI/CD pipelines, create React frontends for code review and project management, and collaborate with product managers to iterate on user-facing features using data-driven development."),
]
def get_connection():
"""Create a connection to Cloud SQL using the connector."""
project = os.environ['GOOGLE_CLOUD_PROJECT']
region = os.environ['REGION']
password = os.environ['DB_PASSWORD']
instance = os.environ['DB_INSTANCE']
database = os.environ['DB_NAME']
connector = Connector()
conn = connector.connect(
f"{project}:{region}:{instance}",
"pg8000",
user="postgres",
password=password,
db=database
)
return conn, connector
def create_schema(cursor):
"""Create extensions and jobs table."""
cursor.execute("CREATE EXTENSION IF NOT EXISTS google_ml_integration")
cursor.execute("CREATE EXTENSION IF NOT EXISTS vector")
cursor.execute("""
CREATE TABLE IF NOT EXISTS jobs (
id SERIAL PRIMARY KEY,
title VARCHAR NOT NULL,
company VARCHAR NOT NULL,
role VARCHAR NOT NULL,
tech_stack VARCHAR NOT NULL,
salary_range VARCHAR NOT NULL,
location VARCHAR NOT NULL,
openings INTEGER NOT NULL,
description TEXT NOT NULL,
description_embedding vector(3072)
)
""")
def seed_jobs(cursor, conn):
"""Insert job listings."""
cursor.execute("SELECT COUNT(*) FROM jobs")
existing_count = cursor.fetchone()[0]
if existing_count > 0:
print(f" {existing_count} jobs already exist, skipping seed")
return 0
cursor.executemany("""
INSERT INTO jobs (title, company, role, tech_stack, salary_range, location, openings, description)
VALUES (%s, %s, %s, %s, %s, %s, %s, %s)
""", JOBS)
conn.commit()
return len(JOBS)
def generate_embeddings(cursor, conn):
"""Generate embeddings using Cloud SQL's embedding() function."""
cursor.execute("SELECT COUNT(*) FROM jobs WHERE description_embedding IS NULL")
null_count = cursor.fetchone()[0]
if null_count == 0:
print(" All jobs already have embeddings")
return 0
cursor.execute(f"""
UPDATE jobs
SET description_embedding = embedding('{EMBEDDING_MODEL}', description)::vector
WHERE description_embedding IS NULL
""")
rows_updated = cursor.rowcount
conn.commit()
return rows_updated
def main():
conn, connector = get_connection()
cursor = conn.cursor()
try:
create_schema(cursor)
conn.commit()
seeded = seed_jobs(cursor, conn)
if seeded > 0:
print(f" ✓ Inserted {seeded} jobs")
# Waiting for vertex role propagation
time.sleep(60)
embedded = generate_embeddings(cursor, conn)
if embedded > 0:
print(f" ✓ Generated {embedded} embeddings")
except Exception as e:
print(f"ERROR: {e}", file=sys.stderr)
sys.exit(1)
finally:
cursor.close()
conn.close()
connector.close()
if __name__ == "__main__":
main()
Now let's move on to the next step.
4. Create and initialize the database
The scripts are now ready to run. We need Python to execute them, so let's set up Python first.
Set up the Python project
uv is a fast Python package and project manager written in Rust (see the uv documentation). This codelab uses it to keep the Python project fast and easy to maintain.
Initialize the Python project and add the required dependencies:
uv init
uv add cloud-sql-python-connector --extra pg8000
uv add python-dotenv
Note that we use the cloud-sql-python-connector Python SDK here to open a secure connection to the database instance, authenticating with Application Default Credentials.
Run the setup script
Now we can run the setup script in the background and inspect the console output written to the logs/database_setup.log file with the commands below. While it runs, you can continue to the next section.
mkdir -p ~/build-agent-adk-toolbox-cloudsql/logs
bash scripts/setup_database.sh > logs/database_setup.log 2>&1 &
Download the Toolbox binary
This tutorial uses MCP Toolbox, which conveniently ships prebuilt binaries for Linux environments. Download it in the background now, since it takes a while. Run the following command and check the output log at logs/toolbox_dl.log. While it runs, you can continue to the next section.
cd ~/build-agent-adk-toolbox-cloudsql
curl -O https://storage.googleapis.com/mcp-toolbox-for-databases/v1.0.0/linux/amd64/toolbox > logs/toolbox_dl.log 2>&1 &
Understanding the setup script scripts/setup_database.sh
Now let's walk through the setup script we prepared earlier. The file performs the following steps:
- Runs the gcloud sql instances create command with the following flags:
  - db-custom-1-3840 is the smallest dedicated-core Cloud SQL tier in the ENTERPRISE edition (1 vCPU, 3.75 GB RAM). See the documentation for details. The Vertex AI ML integration requires dedicated cores; shared-core tiers (db-f1-micro, db-g1-small) do not support it.
  - --root-password sets the password for the default postgres user.
  - --enable-google-ml-integration enables Cloud SQL's built-in integration with Vertex AI, which lets you call embedding models directly from SQL via the embedding() function.
- Verifies that the instance has reached the RUNNABLE state
- Uses the gcloud projects add-iam-policy-binding command to grant the Cloud SQL instance's service account permission to call Vertex AI. This is required by the built-in embedding() function, which we use when seeding the database.
- Creates the database
- Runs the setup_jobs_db.py seeding script
Understanding the seed script scripts/setup_jobs_db.py
Now let's look at the seed script, which does the following:
- Initializes a connection to the database instance
- Installs two PostgreSQL extensions:
  - google_ml_integration: provides the embedding() SQL function for calling Vertex AI embedding models directly from SQL. This is a database-level extension that makes the ML functions available inside jobs_db. The instance-level flag you set at creation time (--enable-google-ml-integration) lets the Cloud SQL VM reach Vertex AI, while the extension exposes the SQL function in this specific database.
  - vector (pgvector): adds the vector data type and distance operators for storing and querying embeddings.
- Creates the table. Note that the description_embedding column is vector(3072), a pgvector column that stores 3072-dimensional vectors.
- Seeds the initial job data
- Uses the embedding() function to generate embeddings from the description column via the built-in Vertex AI integration and populate description_embedding:
  - embedding('gemini-embedding-001', description) calls Vertex AI's Gemini embedding model directly from SQL, passing each job's description text. This function comes from the google_ml_integration extension installed by the seed script.
  - ::vector casts the returned float array to pgvector's vector type so it can be stored and queried with distance operators.
  - The UPDATE runs across all 15 rows, generating one 3072-dimensional embedding per job description.
This prepares the initial data for the agent to access.
5. Configure MCP Toolbox for Databases
This step introduces MCP Toolbox for Databases and configures it to connect to the Cloud SQL instance, defining two standard SQL query tools.
What is MCP, and why Toolbox?

MCP (Model Context Protocol) is an open protocol that standardizes how AI agents discover and interact with external tools. It defines a client/server model: the agent hosts an MCP client, and tools are exposed by MCP servers. Any MCP-compatible client can use any MCP-compatible server, so agents don't need custom integration code for each tool.

MCP Toolbox for Databases is an open-source MCP server built specifically for database access. Without it, you would write Python functions that open database connections, manage connection pools, build parameterized queries to prevent SQL injection, handle errors, and embed all of that in the agent. Every agent that needs database access repeats this work, and changing a query means redeploying the agent.
With Toolbox, you write a YAML file instead. Each tool maps to a parameterized SQL statement. Toolbox handles connection pooling, query parameterization, authentication, and observability. Tools are decoupled from the agent, so updating a query means editing tools.yaml and restarting Toolbox, not modifying agent code. The tools work with ADK, LangGraph, LlamaIndex, or any MCP-compatible framework.
Write the tool configuration
Now, create a file named tools.yaml in the Cloud Shell editor to hold the tool configuration:
cloudshell edit tools.yaml
This file uses multi-document YAML: each block separated by --- is an independent resource. Every resource has a kind declaring what it is (source for database connections, tool for actions the agent can call) and a type specifying the backend (cloud-sql-postgres for the source, postgres-sql for SQL-based tools). Tools reference a source by name, which tells Toolbox which connection pool to execute against. Environment variables use the ${VAR_NAME} syntax and are resolved at startup.
First, copy the following into the tools.yaml file:
# tools.yaml
# --- Data Source ---
kind: source
name: jobs-db
type: cloud-sql-postgres
project: ${GOOGLE_CLOUD_PROJECT}
region: ${REGION}
instance: ${DB_INSTANCE}
database: ${DB_NAME}
user: postgres
password: ${DB_PASSWORD}
---
This defines the following resource:
- Source (jobs-db): tells Toolbox how to connect to the Cloud SQL PostgreSQL instance. The cloud-sql-postgres type uses the Cloud SQL connector internally, handling authentication and secure connectivity automatically. The ${GOOGLE_CLOUD_PROJECT}, ${REGION}, and ${DB_PASSWORD} placeholders are resolved from environment variables at startup.
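Conceptually, this `${VAR_NAME}` resolution behaves like the short Python sketch below. This is only an illustration of the idea, not Toolbox's actual implementation (the function name `resolve_placeholders` is made up for this example):

```python
import os
import re

def resolve_placeholders(text: str) -> str:
    """Replace ${VAR_NAME} placeholders with values from the environment,
    falling back to an empty string when the variable is unset."""
    return re.sub(r"\$\{(\w+)\}", lambda m: os.environ.get(m.group(1), ""), text)

os.environ["GOOGLE_CLOUD_PROJECT"] = "demo-project"
print(resolve_placeholders("project: ${GOOGLE_CLOUD_PROJECT}"))  # project: demo-project
```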
Next, append the following below the --- separator in tools.yaml:
# --- Tool 1: Search jobs by role and/or tech stack ---
kind: tool
name: search-jobs
type: postgres-sql
source: jobs-db
description: >-
Search for job listings by role category and/or tech stack.
Use this tool when the developer wants to browse listings
by role (e.g., Backend, Frontend, Data) or find jobs
using a specific technology. Both parameters accept an
empty string to match all values.
statement: |
SELECT title, company, role, tech_stack, salary_range, location, openings
FROM jobs
WHERE ($1 = '' OR LOWER(role) = LOWER($1))
AND ($2 = '' OR LOWER(tech_stack) LIKE '%' || LOWER($2) || '%')
ORDER BY title
LIMIT 10
parameters:
- name: role
type: string
description: "The role category to filter by (e.g., 'Backend', 'Frontend', 'Data/AI', 'DevOps'). Use empty string for all roles."
- name: tech_stack
type: string
description: "A technology to search for in the tech stack (partial match, e.g., 'Python', 'Kubernetes'). Use empty string for all tech stacks."
---
# --- Tool 2: Get full details for a specific job ---
kind: tool
name: get-job-details
type: postgres-sql
source: jobs-db
description: >-
Get full details for a specific job listing including its description,
salary range, location, and number of openings. Use this tool when the
developer asks about a particular job by title or company.
statement: |
SELECT title, company, role, tech_stack, salary_range, location, openings, description
FROM jobs
WHERE LOWER(title) LIKE '%' || LOWER($1) || '%'
OR LOWER(company) LIKE '%' || LOWER($1) || '%'
parameters:
- name: search_term
type: string
description: "The job title or company name to look up (partial match supported)."
---
This defines the following resources:
- Tools 1 and 2 (search-jobs, get-job-details): standard SQL query tools. Each maps a tool name (what the agent sees) to a parameterized SQL statement (what the database executes). Parameters use $1 and $2 positional placeholders. Toolbox executes these as prepared statements, which prevents SQL injection.
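To see why prepared statements matter, here is a small self-contained Python sketch that uses sqlite3 as a stand-in database (Toolbox talks to PostgreSQL, but the principle is the same): the parameter value travels separately from the SQL text, so an injection attempt is treated as plain data.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE jobs (title TEXT, role TEXT)")
conn.execute("INSERT INTO jobs VALUES ('Senior Backend Engineer', 'Backend')")

# A classic injection payload: with string concatenation this would match
# every row, but as a bound parameter it is just a strange role name.
malicious = "' OR '1'='1"
rows = conn.execute(
    "SELECT title FROM jobs WHERE role = ?", (malicious,)
).fetchall()
print(rows)  # [] -- the injection attempt matches nothing
```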
Continuing on, append the following below the --- separator in tools.yaml:
# --- Embedding Model ---
kind: embeddingModel
name: gemini-embedding
type: gemini
model: gemini-embedding-001
project: ${GOOGLE_CLOUD_PROJECT}
location: ${GOOGLE_CLOUD_LOCATION}
dimension: 3072
---
This defines the following resource:
- Embedding model (gemini-embedding): configures Toolbox to call Gemini's gemini-embedding-001 model to generate 3072-dimensional text embeddings. Toolbox authenticates with Application Default Credentials (ADC), so no API key is needed in Cloud Shell or on Cloud Run. Note that the dimension set here must match the one used when seeding the database.
Continuing on, append the following below the --- separator in tools.yaml:
# --- Tool 3: Semantic search by description ---
kind: tool
name: search-jobs-by-description
type: postgres-sql
source: jobs-db
description: >-
Find jobs that match a natural language description of what the developer
is looking for. Use this tool when the developer describes their ideal job
using interests, work style, career goals, or project type rather than a
specific role or tech stack. Examples: "I want to work on AI chatbots,"
"a remote job at a fintech startup," "something involving infrastructure
and reliability."
statement: |
SELECT title, company, role, tech_stack, salary_range, location, description
FROM jobs
WHERE description_embedding IS NOT NULL
ORDER BY description_embedding <=> $1
LIMIT 5
parameters:
- name: search_query
type: string
description: "A natural language description of the kind of job the developer is looking for."
embeddedBy: gemini-embedding
---
This defines the following resource:
- Tool 3 (search-jobs-by-description): a vector search tool. The search_query parameter has embeddedBy: gemini-embedding, which tells Toolbox to intercept the raw text, send it to the embedding model, and use the resulting vector in the SQL statement. The <=> operator is pgvector's cosine distance; smaller values mean more similar descriptions.
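The `<=>` ordering can be illustrated with a few lines of Python (toy 2-dimensional vectors instead of the real 3072-dimensional embeddings):

```python
import math

def cosine_distance(a, b):
    """Cosine distance as computed by pgvector's <=> operator: 1 - cosine similarity."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return 1.0 - dot / norm

# Vectors pointing the same way have distance 0; orthogonal vectors have distance 1.
print(cosine_distance([1.0, 0.0], [2.0, 0.0]))  # 0.0
print(cosine_distance([1.0, 0.0], [0.0, 1.0]))  # 1.0
```

Rows come back in ascending distance order, so LIMIT 5 keeps the five closest job descriptions.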
Finally, append the last tool below the --- separator in tools.yaml:
# --- Tool 4: Add a new job listing with automatic embedding ---
kind: tool
name: add-job
type: postgres-sql
source: jobs-db
description: >-
Add a new job listing to the platform. Use this tool when a user asks
to post a job that is not currently listed.
statement: |
INSERT INTO jobs (title, company, role, tech_stack, salary_range, location, openings, description, description_embedding)
VALUES ($1, $2, $3, $4, $5, $6, CAST($7 AS INTEGER), $8, $9)
RETURNING title, company
parameters:
- name: title
type: string
description: "The job title (e.g., 'Senior Backend Engineer')."
- name: company
type: string
description: "The company name (e.g., 'Stripe', 'Spotify')."
- name: role
type: string
description: "The role category (e.g., 'Backend', 'Frontend', 'Data/AI', 'DevOps')."
- name: tech_stack
type: string
description: "Comma-separated list of technologies (e.g., 'Python, FastAPI, GCP')."
- name: salary_range
type: string
description: "The salary range (e.g., '$150-200K/year')."
- name: location
type: string
description: "Work location and arrangement (e.g., 'Remote')."
- name: openings
type: string
description: "The number of open positions."
- name: description
type: string
description: "A short description of the job (2-3 sentences)."
- name: description_vector
type: string
description: "Auto-generated embedding vector for the job description."
valueFromParam: description
embeddedBy: gemini-embedding
This defines the following resource:
- Tool 4 (add-job): demonstrates vector ingestion. The description_vector parameter has two special fields:
  - valueFromParam: description tells Toolbox to copy the value of the description parameter into this one. The LLM never sees this parameter.
  - embeddedBy: gemini-embedding tells Toolbox to embed the copied text as a vector before passing it to the SQL.
The result: a single tool call stores both the raw description text and its vector embedding, while the agent remains completely unaware of embeddings.
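That flow can be sketched in Python. Everything here is a simplified illustration: `fake_embed` stands in for the real gemini-embedding-001 call, and `prepare_add_job_params` is a made-up name for this example, not a Toolbox API:

```python
def fake_embed(text: str) -> list[float]:
    # Stand-in for the real embedding model call.
    return [float(len(text)), 0.0, 0.0]

def prepare_add_job_params(llm_args: dict) -> dict:
    """Simulate valueFromParam + embeddedBy for the description_vector parameter."""
    params = dict(llm_args)
    # valueFromParam: copy the `description` value the LLM provided...
    copied = params["description"]
    # ...then embeddedBy: store its embedding vector under `description_vector`.
    params["description_vector"] = fake_embed(copied)
    return params

args = {"title": "AI Engineer", "description": "Build RAG pipelines"}
print(prepare_add_job_params(args)["description_vector"])  # [19.0, 0.0, 0.0]
```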
The multi-document YAML format separates each resource with ---. Each document has kind, name, and type fields that define what it contains. In summary, we have now configured:
- The source database
- Tools that query the database with standard filters (Tools 1 and 2)
- The embedding model
- A tool that runs vector search against the database (Tool 3)
- A tool that ingests vector data into the database (Tool 4)
6. Run the MCP Toolbox server
In the previous step we wrote the configuration MCP Toolbox needs. Now we can run the server.
Verify the seeded data
Before starting Toolbox, confirm that the database setup has finished. Create the Python script scripts/verify_seed.py with the following command:
cloudshell edit scripts/verify_seed.py
Then copy the following code into the scripts/verify_seed.py file:
#!/usr/bin/env python3
"""Verify the database has 15 jobs with embeddings."""
import os
import sys
from pathlib import Path
from dotenv import load_dotenv
from google.cloud.sql.connector import Connector
import pg8000
# Load environment variables
env_path = Path(__file__).parent.parent / '.env'
load_dotenv(env_path)
# Verify required environment variables
required_vars = ['GOOGLE_CLOUD_PROJECT', 'REGION', 'DB_PASSWORD', 'DB_INSTANCE', 'DB_NAME']
missing_vars = [var for var in required_vars if not os.environ.get(var)]
if missing_vars:
print(f"ERROR: Missing environment variables: {', '.join(missing_vars)}", file=sys.stderr)
sys.exit(1)
def verify_database():
"""Check that 15 jobs exist with embeddings."""
connector = Connector()
try:
project = os.environ['GOOGLE_CLOUD_PROJECT']
region = os.environ['REGION']
password = os.environ['DB_PASSWORD']
instance = os.environ['DB_INSTANCE']
database = os.environ['DB_NAME']
conn = connector.connect(
f"{project}:{region}:{instance}",
"pg8000",
user="postgres",
password=password,
db=database
)
cursor = conn.cursor()
# Count jobs and embeddings
cursor.execute("SELECT COUNT(*) FROM jobs")
job_count = cursor.fetchone()[0]
cursor.execute("SELECT COUNT(*) FROM jobs WHERE description_embedding IS NOT NULL")
embedding_count = cursor.fetchone()[0]
print(f"Jobs: {job_count}/15")
print(f"Embeddings: {embedding_count}/15")
cursor.close()
conn.close()
if job_count == 15 and embedding_count == 15:
print("\n✓ Database ready!")
return True
else:
print("\n✗ Database not ready")
return False
except Exception as e:
print(f"\nERROR: {e}", file=sys.stderr)
return False
finally:
connector.close()
if __name__ == "__main__":
success = verify_database()
sys.exit(0 if success else 1)
This script checks the number of job rows and their embeddings. Run it with the following command:
uv run scripts/verify_seed.py
If you see the following terminal output, the data is ready:
Jobs: 15/15
Embeddings: 15/15

✓ Database ready!
Start the Toolbox server
In the earlier setup step we downloaded the toolbox executable. Verify that the binary exists and downloaded successfully; if it didn't, download it again and wait for the download to finish:
cd ~/build-agent-adk-toolbox-cloudsql
if [ ! -f toolbox ]; then
curl -O https://storage.googleapis.com/mcp-toolbox-for-databases/v1.0.0/linux/amd64/toolbox
fi
chmod +x toolbox
We need to expose the .env variables to the child process that runs MCP Toolbox. Run the following commands to start the Toolbox server and log its console output to the logs/mcp_toolbox.log file:
set -a; source .env; set +a
./toolbox --config tools.yaml --enable-api > logs/mcp_toolbox.log 2>&1 &
You should see output in the logs/mcp_toolbox.log file confirming the server is ready, similar to:
... INFO "Initialized 1 sources: jobs-db"
... INFO "Initialized 0 authServices: "
... INFO "Using Vertex AI backend for Gemini embedding"
... INFO "Initialized 1 embeddingModels: gemini-embedding"
... INFO "Initialized 4 tools: add-job, search-jobs, get-job-details, search-jobs-by-description"
...
... INFO "Server ready to serve!"
Verify the tools
Query the Toolbox API to list all registered tools:
curl -s http://localhost:5000/api/toolset | uv run python -m json.tool
The tools are listed with their descriptions and parameters, similar to:
...
"search-jobs-by-description": {
"description": "Find jobs that match a natural language description of what the developer is looking for. Use this tool when the developer describes their ideal job using interests, work style, career goals, or project type rather than a specific role or tech stack. Examples: \"I want to work on AI chatbots,\" \"a remote job at a fintech startup,\" \"something involving infrastructure and reliability.\"",
"parameters": [
{
"name": "search_query",
"type": "string",
"required": true,
"description": "A natural language description of the kind of job the developer is looking for.",
"authSources": []
}
],
"authRequired": []
}
...
Test the search-jobs tool directly:
curl -s -X POST http://localhost:5000/api/tool/search-jobs/invoke \
-H "Content-Type: application/json" \
-d '{"role": "Backend", "tech_stack": ""}' | jq '.result | fromjson'
The response should contain the two backend engineering jobs from the seed data:
[
{
"title": "Backend Engineer (Payments)",
"company": "Square",
"role": "Backend",
"tech_stack": "Java, Spring Boot, PostgreSQL, Kafka",
"salary_range": "$160-220K/year",
"location": "San Francisco, Hybrid",
"openings": 3
},
{
"title": "Senior Backend Engineer",
"company": "Stripe",
"role": "Backend",
"tech_stack": "Go, PostgreSQL, gRPC, Kubernetes",
"salary_range": "$180-250K/year",
"location": "San Francisco, Hybrid",
"openings": 3
}
]
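The same check can be driven from Python. The sketch below parses a sample response body offline; note the double JSON decode, which mirrors the `fromjson` step in the curl pipeline (the sample body here is illustrative, trimmed to two fields):

```python
import json

# The /invoke endpoint returns an envelope whose "result" field is itself
# a JSON-encoded string, hence `jq '.result | fromjson'` in the curl example.
body = '{"result": "[{\\"title\\": \\"Senior Backend Engineer\\", \\"company\\": \\"Stripe\\"}]"}'

envelope = json.loads(body)            # first decode: the outer envelope
rows = json.loads(envelope["result"])  # second decode: the embedded JSON string
print(rows[0]["company"])  # Stripe
```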
7. Build the ADK agent
Now we'll use the Python ADK in this project. Add the required dependencies:
uv add google-adk==1.29.0 toolbox-adk==1.0.0
- google-adk: Google's Agent Development Kit, which includes the Gemini SDK
- toolbox-adk: the ADK integration for MCP Toolbox for Databases
Create the agent directory structure
ADK expects a specific folder layout: a directory named after the agent containing __init__.py, agent.py, and .env. It ships with a built-in command to scaffold this structure quickly:
uv run adk create jobs_agent \
--model gemini-2.5-flash \
--project ${GOOGLE_CLOUD_PROJECT} \
--region ${GOOGLE_CLOUD_LOCATION}
Your directory should now look like this:
build-agent-adk-toolbox-cloudsql/
├── jobs_agent/
│   ├── __init__.py
│   ├── agent.py
│   └── .env
├── logs
├── scripts
└── ...
Next, we wire the ADK agent to the running Toolbox server and test all four tools: standard queries, semantic search, and vector ingestion. The agent code is minimal: all the database logic lives in tools.yaml.
Configure the agent's environment
ADK reads GOOGLE_GENAI_USE_VERTEXAI, GOOGLE_CLOUD_PROJECT, and GOOGLE_CLOUD_LOCATION from the shell environment you set up in earlier steps. The only agent-specific variable is TOOLBOX_URL; append it to the agent's .env file:
echo -e "\nTOOLBOX_URL=http://127.0.0.1:5000" >> jobs_agent/.env
Update the agent module
Open jobs_agent/agent.py in the Cloud Shell editor:
cloudshell edit jobs_agent/agent.py
and overwrite its contents with the following code:
# jobs_agent/agent.py
import os
from google.adk.agents import LlmAgent
from toolbox_adk import ToolboxToolset
TOOLBOX_URL = os.environ.get("TOOLBOX_URL", "http://127.0.0.1:5000")
toolbox = ToolboxToolset(TOOLBOX_URL)
root_agent = LlmAgent(
name="jobs_agent",
model="gemini-2.5-flash",
instruction="""You are a helpful assistant at "TechJobs," a tech job listing platform.
Your job:
- Help developers browse job listings by role or tech stack.
- Provide full details about specific positions, including salary range and number of openings.
- Recommend jobs based on natural language descriptions of what the developer is looking for.
- Add new job listings to the platform when asked.
When a developer asks about a specific job by title or company, use the get-job-details tool.
When a developer asks for a specific role category or tech stack, use the search-jobs tool.
When a developer describes what kind of job they want — by interest area, work style,
career goals, or project type — use the search-jobs-by-description tool for semantic search.
When in doubt between search-jobs and search-jobs-by-description, prefer
search-jobs-by-description — it searches job descriptions and finds more relevant matches.
If a position has no openings (openings is 0), let the developer know
and suggest similar alternatives from the search results.
Be conversational, knowledgeable, and concise.""",
tools=[toolbox],
)
Notice that there is no database code here: ToolboxToolset connects to the Toolbox server at startup and loads every available tool. The agent calls tools by name; Toolbox translates those calls into SQL queries against Cloud SQL.
The TOOLBOX_URL environment variable defaults to http://127.0.0.1:5000 for local development. When you deploy to Cloud Run later, you will override it with the Toolbox service's Cloud Run URL, with no code changes required.
The instruction covers all four tools: browsing by role or tech stack, fetching job details, semantic search, and adding new listings, so the agent can route each request to the appropriate tool.
Test the agent
Start the ADK developer UI:
cd ~/build-agent-adk-toolbox-cloudsql
uv run adk web --allow_origins "regex:https://.*\.cloudshell\.dev"
Open the URL shown in the terminal (usually http://localhost:8000) using Cloud Shell's Web Preview, or Ctrl+click the URL in the terminal. In the agent drop-down at the top left, select jobs_agent.
Test standard queries
Try the following prompts to verify the standard SQL tools:
What backend engineering jobs do you have?
Any jobs using Kubernetes?
Tell me about the Cloud Architect position

Test semantic search
Try natural-language descriptions that don't map to a specific role or tech stack:
I want a remote job where I can work on AI and machine learning
Find me something in fintech with good work-life balance
I'm interested in infrastructure and reliability engineering
The agent tries to pick the right tool for each query type: structured filters go through search-jobs, while natural-language descriptions go through search-jobs-by-description.

Test vector ingestion
Ask the agent to add a job:
Add a new job: 'Robotics Software Engineer' at Boston Dynamics, role Robotics, tech stack: Python, C++, ROS, Computer Vision, salary $160-230K/year, location Waltham MA, Hybrid, 2 openings. Description: Design and implement autonomous navigation and manipulation algorithms for next-generation robots. Work on perception pipelines using computer vision and lidar, develop motion planning software in C++ and Python, and test systems on real hardware in warehouse and logistics environments.

Now try searching for it:
Find me jobs involving autonomous systems and working with physical hardware
The embedding is generated automatically during the INSERT, so no extra step is needed.
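Conceptually, the valueFromParam flow works like this: the caller supplies a single description parameter, and Toolbox both stores the raw text and feeds that same value to the embedding step, so the LLM never has to produce a vector. The sketch below is a toy Python model of that flow; the stub embedder stands in for Cloud SQL's real embedding() function and the field names are hypothetical.

```python
# Toy model of the valueFromParam flow: one incoming "description" value is
# stored as text AND embedded behind the scenes. The embedder is a stub
# (word lengths of the first three words), not Cloud SQL's embedding().
def fake_embed(text: str) -> list:
    return [float(len(word)) for word in text.split()[:3]]

def insert_job(row: dict) -> dict:
    stored = dict(row)  # the text column is kept as-is
    stored["description_embedding"] = fake_embed(row["description"])
    return stored

job = insert_job({"title": "Robotics Software Engineer",
                  "description": "autonomous navigation and manipulation"})
print(job["description_embedding"])  # vector derived from the description text
```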

You now have a fully functional agentic RAG application built with ADK, MCP Toolbox, and Cloud SQL. Congratulations! Next, let's deploy it to Cloud Run.
Before continuing, stop the developer UI by pressing Ctrl+C twice to terminate the process.
8. Deploy to Cloud Run
The agent and Toolbox work locally. This step deploys both as Cloud Run services so they are reachable over the internet. The Toolbox service runs as an MCP server on Cloud Run, and the agent service connects to it.
Prepare the Toolbox deployment
Create a deployment directory for the Toolbox service:
cd ~/build-agent-adk-toolbox-cloudsql
mkdir -p deploy-toolbox
cp toolbox tools.yaml deploy-toolbox/
Create the Toolbox Dockerfile. Open deploy-toolbox/Dockerfile in the Cloud Shell editor:
cloudshell edit deploy-toolbox/Dockerfile
and copy the following into that file:
# deploy-toolbox/Dockerfile
FROM debian:bookworm-slim
RUN apt-get update && apt-get install -y ca-certificates && rm -rf /var/lib/apt/lists/*
WORKDIR /app
COPY toolbox tools.yaml ./
RUN chmod +x toolbox
EXPOSE 8080
CMD ["./toolbox", "--config", "tools.yaml", "--enable-api", "--address", "0.0.0.0", "--port", "8080"]
The Toolbox binary and tools.yaml are packaged into a minimal Debian image. Cloud Run routes traffic to port 8080.
Deploy the Toolbox service
cd ~/build-agent-adk-toolbox-cloudsql
gcloud run deploy toolbox-service \
--source deploy-toolbox/ \
--region $REGION \
--set-env-vars "DB_PASSWORD=$DB_PASSWORD,DB_INSTANCE=$DB_INSTANCE,DB_NAME=$DB_NAME,GOOGLE_CLOUD_PROJECT=$GOOGLE_CLOUD_PROJECT,REGION=$REGION,GOOGLE_CLOUD_LOCATION=$GOOGLE_CLOUD_LOCATION" \
--allow-unauthenticated \
--quiet > logs/deploy_toolbox.log 2>&1 &
This command submits the source to Cloud Build, builds the container image, pushes it to Artifact Registry, and deploys it to Cloud Run. The process takes a few minutes; you can follow the deployment log in the logs/deploy_toolbox.log file.
Prepare the agent deployment
While the Toolbox build runs, set up the deployment files for the agent.
Create a Dockerfile in the project root. Open Dockerfile in the Cloud Shell editor:
cloudshell edit Dockerfile
Then copy the following content:
# Dockerfile
FROM ghcr.io/astral-sh/uv:python3.12-trixie-slim
WORKDIR /app
COPY pyproject.toml ./
COPY uv.lock ./
RUN uv sync --no-dev
COPY jobs_agent/ jobs_agent/
EXPOSE 8080
CMD ["uv", "run", "adk", "web", "--host", "0.0.0.0", "--port", "8080"]
This Dockerfile uses ghcr.io/astral-sh/uv as the base image, which ships with Python and uv preinstalled, so there is no need to install uv separately via pip.
Create a .dockerignore file to exclude unnecessary files from the container image:
cloudshell edit .dockerignore
Then copy the following into that file:
# .dockerignore
.venv/
__pycache__/
*.pyc
.env
jobs_agent/.env
toolbox
tools.yaml
seed.sql
deploy-toolbox/
Deploy the agent service
Wait for the Toolbox deployment to complete, checking logs/deploy_toolbox.log again to confirm it finished correctly. Then retrieve the Cloud Run URL with the following command:
TOOLBOX_URL=$(gcloud run services describe toolbox-service \
--region=$REGION \
--format='value(status.url)')
echo "Toolbox URL: $TOOLBOX_URL"
You should see output similar to:
Toolbox URL: https://toolbox-service-xxxxxx-xx.a.run.app
Next, verify that the deployed Toolbox is working:
curl -s "$TOOLBOX_URL/api/toolset" | python3 -m json.tool | head -5
If the output looks similar to this example, the deployment succeeded:
{
"serverVersion": "1.0.0+binary.linux.amd64.c5524d3",
"tools": {
"add-job": {
"description": "Add a new job listing to the platform. Use this tool when a user asks to post a job that is not currently listed.",
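The curl output above is truncated by head -5. If you want to inspect the full response programmatically, a short sketch like the one below could parse it and list the registered tool names; the sample payload here is hypothetical and only mimics the response shape shown above.

```python
import json

# Hypothetical sample mimicking the /api/toolset response shape; in practice
# you would load the body returned by curl "$TOOLBOX_URL/api/toolset".
sample = """{
  "serverVersion": "1.0.0",
  "tools": {
    "add-job": {"description": "Add a new job listing to the platform."},
    "search-jobs": {"description": "Filter jobs by role or tech stack."}
  }
}"""
payload = json.loads(sample)
print(sorted(payload["tools"]))  # → ['add-job', 'search-jobs']
```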
Next, deploy the agent, passing the Toolbox URL as an environment variable:
cd ~/build-agent-adk-toolbox-cloudsql
gcloud run deploy jobs-agent \
--source . \
--region $REGION \
--set-env-vars "TOOLBOX_URL=$TOOLBOX_URL,GOOGLE_CLOUD_PROJECT=$GOOGLE_CLOUD_PROJECT,GOOGLE_CLOUD_LOCATION=$GOOGLE_CLOUD_LOCATION,GOOGLE_GENAI_USE_VERTEXAI=TRUE" \
--allow-unauthenticated \
--quiet
The agent code reads TOOLBOX_URL from the environment (you set this up earlier). Locally it points to http://127.0.0.1:5000; on Cloud Run it points to the Toolbox service URL. No code changes are needed.
Test the deployed agent
Retrieve the agent's Cloud Run URL:
AGENT_URL=$(gcloud run services describe jobs-agent \
--region=$REGION \
--format='value(status.url)')
echo "Agent URL: $AGENT_URL"
Open the URL in your browser. The ADK developer UI loads; it's the same interface you used locally, now running on Cloud Run.
Select jobs_agent from the dropdown, then test:
What backend engineering jobs do you have?
I want a remote job working on AI and machine learning
Both queries run end to end through the deployed services: the agent on Cloud Run calls the Toolbox on Cloud Run, which queries Cloud SQL.
9. Congratulations / Cleanup
You built and deployed an intelligent job board assistant that uses MCP Toolbox for Databases to connect an ADK agent to Cloud SQL PostgreSQL, running both standard SQL queries and semantic vector search.
What you learned
- How MCP standardizes tool access for AI agents, and how MCP Toolbox for Databases applies it to database operations, replacing custom database code with declarative YAML configuration
- How to configure Cloud SQL PostgreSQL as a Toolbox data source using the cloud-sql-postgres source type
- How to define standard SQL query tools with parameterized statements that prevent SQL injection
- How to enable vector search with pgvector and gemini-embedding-001, automatically embedding queries via the embeddedBy parameter
- How valueFromParam enables automatic vector ingestion: the LLM supplies a text description, and Toolbox copies, embeds, and stores both the vector and the text behind the scenes
- How ADK's ToolboxToolset loads tools from a running Toolbox server, minimizing agent code and fully decoupling database logic
- How to deploy the Toolbox MCP server and the ADK agent to Cloud Run as separate services
Cleanup
To avoid incurring charges to your Google Cloud account for the resources created in this codelab, delete the individual resources or delete the entire project.
Option 1: Delete the project (recommended)
The simplest way to clean up is to delete the project, which removes all resources associated with it.
gcloud projects delete $GOOGLE_CLOUD_PROJECT
Option 2: Delete individual resources
To keep the project and remove only the resources created in this codelab:
gcloud run services delete jobs-agent --region=$REGION --quiet
gcloud run services delete toolbox-service --region=$REGION --quiet
gcloud sql instances delete jobs-instance --quiet
gcloud artifacts repositories delete cloud-run-source-deploy --location=$REGION --quiet 2>/dev/null
