1. Introduction
In this codelab, you will build a multi-agent system using the Agent Development Kit (ADK) and enable agent observability using the BigQuery Agent Analytics Plugin. You will ask the agent a series of questions, then use BigQuery to analyze conversation traces and agent tool usage.

What you'll do
- Build a multi-agent retail assistant using ADK
- Initialize the BigQuery Agent Analytics Plugin to capture trace data about this agent's execution and store it in BigQuery
- Analyze the agent log data in BigQuery
What you'll need
- A web browser such as Chrome
- A Google Cloud project with billing enabled, or
- A Gmail account. The next section will show you how to redeem a free $5 credit for this codelab and set up a new project
This codelab is for developers of all levels, including beginners. You will use the command-line interface in Google Cloud Shell and Python code for ADK development. You don't need to be a Python expert, but a basic understanding of how to read code will help you understand the concepts.
2. Before you begin
Create a Google Cloud Project
- In the Google Cloud Console, on the project selector page, select or create a Google Cloud project.

- Make sure that billing is enabled for your Cloud project. Learn how to check if billing is enabled on a project.
Start Cloud Shell
Cloud Shell is a command-line environment running in Google Cloud that comes preloaded with necessary tools.
- Click Activate Cloud Shell at the top of the Google Cloud console:

- Once connected to Cloud Shell, run this command to verify your authentication in Cloud Shell:
gcloud auth list
- Run the following command to confirm that your project is configured for use with gcloud:
gcloud config get project
- If your project is not configured as expected, use the following command to set your project:
export PROJECT_ID=<YOUR_PROJECT_ID>
gcloud config set project $PROJECT_ID
Enable APIs
- Run this command to enable all the required APIs and services:
gcloud services enable bigquery.googleapis.com \
cloudresourcemanager.googleapis.com \
aiplatform.googleapis.com
- On successful execution of the command, you should see a message similar to the one shown below:
Operation "operations/..." finished successfully.
3. Installation & Setup
Return to Cloud Shell and ensure you are in your home directory.
Run the following command in Cloud Shell to create a new dataset called adk_logs in BigQuery:
bq mk --dataset --location=US adk_logs
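If you'd like to confirm the dataset was created before moving on, here is a minimal sketch using the google-cloud-bigquery Python client (an optional check, assuming the client library is available; you may need to pip install google-cloud-bigquery first):
# Optional check: confirm the adk_logs dataset now exists in your project.
from google.cloud import bigquery

client = bigquery.Client()                 # uses your gcloud project and credentials
dataset = client.get_dataset("adk_logs")   # raises NotFound if the dataset is missing
print(dataset.dataset_id, dataset.location)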
Now, let's create a virtual Python environment and install the required packages.
- Open a new terminal tab in Cloud Shell and run this command to create and navigate to a folder named adk-agent-observability:
mkdir adk-agent-observability
cd adk-agent-observability
- Create a virtual Python environment:
python -m venv .venv
- Activate the virtual environment:
source .venv/bin/activate
- Install ADK:
pip install --upgrade google-adk
4. Create an ADK application
Now, let's create our retail assistant agent. This agent will be designed to recommend products by combining real-time trend research with inventory data.
- Run the adk create utility command to scaffold a new agent application with the necessary folders and files:
adk create retail_assistant_app
Follow the prompts:
- Choose gemini-2.5-flash for the model.
- Choose Vertex AI for the backend.
- Confirm your default Google Cloud Project ID and region.
A sample interaction is shown below:

- Click the Open Editor button in Cloud Shell to open Cloud Shell Editor and view the newly created folders and files:

Note the generated files:
adk-agent-observability/
├── .venv/
└── retail_assistant_app/
    ├── __init__.py
    ├── agent.py
    └── .env
- __init__.py: Marks the folder as a Python package.
- agent.py: Contains the initial agent definition.
- .env: Contains environment variables. You may need to click View > Toggle Hidden Files to see this file.

- The .env file contains environment variables for your project. Update any variables that were not set correctly by the prompts:
GOOGLE_GENAI_USE_VERTEXAI=1
GOOGLE_CLOUD_PROJECT=<YOUR_GOOGLE_PROJECT_ID>
GOOGLE_CLOUD_LOCATION=<YOUR_GOOGLE_CLOUD_REGION>
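If you want to sanity-check that these values load, here is a small sketch using python-dotenv (installed alongside ADK); the path assumes the folder layout shown above and that you run it from the adk-agent-observability folder:
# Quick check: load the generated .env and print the configured values.
import os
import dotenv

dotenv.load_dotenv("retail_assistant_app/.env")

for key in ("GOOGLE_GENAI_USE_VERTEXAI", "GOOGLE_CLOUD_PROJECT", "GOOGLE_CLOUD_LOCATION"):
    print(key, "=", os.getenv(key))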
5. Define your agent
Let's now define a hierarchical multi-agent system.
- Real Time Trend Agent: Uses Google Search to find current fashion trends.
- Inventory Data Agent: Uses the BigQuery Toolset to query the public thelook_ecommerce dataset for available products.
- Retail assistant (Root) Agent: Orchestrates the workflow by asking the Trend Agent for advice and the Inventory Agent for matching products.
Replace the entire contents of retail_assistant_app/agent.py with the following code.
import os
import uuid
import asyncio
import google.auth
import dotenv
from google.genai import types
from google.adk.agents import Agent
from google.adk.apps import App
from google.adk.runners import InMemoryRunner
from google.adk.tools import AgentTool, google_search
from google.adk.tools.bigquery import BigQueryCredentialsConfig, BigQueryToolset
from google.adk.plugins.bigquery_agent_analytics_plugin import BigQueryAgentAnalyticsPlugin
dotenv.load_dotenv()
# --- Configuration ---
PROJECT_ID = os.getenv('GOOGLE_CLOUD_PROJECT', 'project_not_set')
DATASET_ID = "adk_logs"
TABLE_ID = "retail_assistant_agent_logs"
APP_NAME = "retail_assistant_agent"
USER_ID = "test_user"
# --- Toolsets ---
credentials, _ = google.auth.default()
credentials_config = BigQueryCredentialsConfig(credentials=credentials)
bigquery_toolset = BigQueryToolset(credentials_config=credentials_config)
# --- Agents ---
# 1. Trend Spotter
real_time_agent = Agent(
    name="real_time_agent",
    model="gemini-2.5-flash",
    description="Researches external factors like weather, local events, and current fashion trends.",
    instruction="""
    You are a real-time research agent.
    Use Google Search to find real-time information relevant to the user's request,
    such as the current weather in their location or trending styles.
    """,
    tools=[google_search],
)
# 2. Inventory Manager
inventory_data_agent = Agent(
    name="inventory_data_agent",
    model="gemini-2.5-flash",
    description="Oversees product inventory in the BigQuery `thelook_ecommerce` dataset to find available items and prices.",
    instruction=f"""
    You manage the inventory. You have access to the `bigquery-public-data.thelook_ecommerce` dataset via the BigQuery toolset.
    Run all BigQuery queries using the project id '{PROJECT_ID}'.
    Your workflow:
    1. Look at the products table.
    2. Find items that match the requirements, factoring in the results from the real_time_agent if there are any.
    3. Return a user-friendly response, including the list of specific products and prices.
    """,
    tools=[bigquery_toolset],
)
# 3. Root Orchestrator
root_agent = Agent(
    name="retail_assistant",
    model="gemini-2.5-flash",
    description="The primary orchestrator, responsible for handling user input, delegating to sub-agents, and synthesizing the final product recommendation.",
    instruction="""
    You are a Retail Assistant.
    You can ask the 'real_time_agent' agent for any real-time information or style advice you need; include any information provided by the user.
    You should ask the 'inventory_data_agent' agent to find a maximum of 3 available items matching that style.
    Combine the results into a recommendation.
    """,
    tools=[AgentTool(agent=real_time_agent)],
    sub_agents=[inventory_data_agent],
)
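Note the two delegation patterns here: real_time_agent is wrapped in an AgentTool, so the root agent invokes it like any other tool and control returns to the root afterward, while inventory_data_agent is registered under sub_agents, allowing the root agent to hand the conversation over to it entirely. Both are standard ADK patterns; the choice determines whether the delegate reports back to the orchestrator or responds in its place.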
6. Generate logs with the BigQuery Agent Analytics Plugin
Now, let's configure the BigQuery Agent Analytics Plugin to capture execution data.
To do this, you will create an instance of the App class. This class serves as the runtime container for your agent; it manages the conversation loop, handles user state, and orchestrates any attached plugins (like our agent analytics logger).
The code below:
- Initializes the Logging Plugin: Creates the BigQueryAgentAnalyticsPlugin with the required connection details.
- Integrates the Plugin: Passes the initialized BigQuery plugin into the App constructor, ensuring that agent execution events are automatically captured and logged.
- Runs and Logs Agent Execution: Executes the conversational flow via runner.run_async, with the plugin collecting and sending the entire sequence of events to BigQuery before closing its resources.
Copy and paste this code below the agent definitions in the agent.py file:
async def main(prompt: str):
    """Runs a conversation with the BigQuery agent using the ADK Runner."""
    bq_logger_plugin = BigQueryAgentAnalyticsPlugin(
        project_id=PROJECT_ID, dataset_id=DATASET_ID, table_id=TABLE_ID
    )
    app = App(name=APP_NAME, root_agent=root_agent, plugins=[bq_logger_plugin])
    runner = InMemoryRunner(app=app)
    try:
        session_id = f"{USER_ID}_{uuid.uuid4().hex[:8]}"
        my_session = await runner.session_service.create_session(
            app_name=APP_NAME, user_id=USER_ID, session_id=session_id
        )
        async for event in runner.run_async(
            user_id=USER_ID,
            new_message=types.Content(
                role="user", parts=[types.Part.from_text(text=prompt)]
            ),
            session_id=my_session.id,
        ):
            if event.content and event.content.parts and event.content.parts[0].text:
                print(f"** {event.author}: {event.content.parts[0].text}")
    except Exception as e:
        print(f"Error in main: {e}")
    finally:
        print("Closing BQ Plugin...")
        await bq_logger_plugin.close()
        print("BQ Plugin closed.")

if __name__ == "__main__":
    prompts = [
        "what outfits do you have available that are suitable for the weather in london this week?",
        "You are such a cool agent! I need a gift idea for my friend who likes yoga.",
        "I'd like to complain - the products sold here are not very good quality!",
    ]
    for prompt in prompts:
        asyncio.run(main(prompt))
With the instrumentation in place, it's time to see the agent in action. Run the script to trigger the conversation workflow.
python retail_assistant_app/agent.py
You should see the retail assistant orchestrating the workflow:
- It asks the Real Time Trend Agent (real_time_agent) to identify the weather in London and search for suitable fashion trends.
- It then delegates to the Inventory Data Agent (inventory_data_agent) to query the thelook_ecommerce BigQuery dataset for specific products that match those trends.
- Finally, the Root Orchestrator synthesizes the results into a final recommendation.
All the while, the plugin is streaming the agent's execution trace to BigQuery.
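Before switching to the console, you can optionally verify that rows arrived with a short query from Python. A sketch using the google-cloud-bigquery client, assuming the dataset and table names used above (streamed rows can take a moment to become visible):
# Optional check: count the logged events per type in the analytics table.
from google.cloud import bigquery

client = bigquery.Client()
query = """
    SELECT event_type, COUNT(*) AS event_count
    FROM `adk_logs.retail_assistant_agent_logs`
    GROUP BY event_type
    ORDER BY event_count DESC
"""
for row in client.query(query).result():
    print(row.event_type, row.event_count)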
7. Analyze Agent Logs
Tool Usage
We are now able to see what our agent was up to behind the scenes! The data has been streamed to BigQuery and is ready for analysis:
- In the Google Cloud Console, search for BigQuery.
- In the Explorer pane, locate your project.
- Expand the adk_logs dataset.
- Open the retail_assistant_agent_logs table and click Query.

To see what tool calls your agent made, and capture any tool errors, run the following query in the BigQuery Editor:
SELECT
  -- Extract text between "Tool Name: " and the next comma (or end of line)
  REGEXP_EXTRACT(content, r'Tool Name: ([^,]+)') AS tool_name,
  -- Count every time a tool finished (successfully or with an error)
  COUNT(*) AS total_finished_runs,
  -- Count it as a failure if it's an explicit system error OR contains "error" in the text
  COUNTIF(event_type = 'TOOL_ERROR' OR REGEXP_CONTAINS(content, r'(?i)\berror\b')) AS failure_count
FROM
  `adk_logs.retail_assistant_agent_logs`
WHERE
  event_type IN ('TOOL_COMPLETED', 'TOOL_ERROR')
GROUP BY
  1
Click on Visualization to view this as a chart:

Token Usage
To estimate the cost of your agents, you can aggregate the prompt tokens and candidate tokens consumed by each distinct agent:
SELECT
  t.agent,
  SUM(CAST(REGEXP_EXTRACT(t.content, r'prompt:\s*(\d+)') AS INT64)) AS prompt_tokens,
  SUM(CAST(REGEXP_EXTRACT(t.content, r'candidates:\s*(\d+)') AS INT64)) AS candidate_tokens
FROM
  `adk_logs.retail_assistant_agent_logs` AS t
WHERE
  t.event_type = 'LLM_RESPONSE'
  AND t.content LIKE '%Token Usage: %'
GROUP BY 1
Click on Visualization to view this as a chart:

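To turn these token counts into an approximate dollar figure, multiply each total by your model's per-token rate. A minimal sketch; the rates below are placeholders, not real prices, so substitute the current Gemini pricing for your model:
# Rough cost estimate from the aggregated token counts returned by the query above.
# NOTE: these per-1M-token rates are placeholders, not real prices.
PROMPT_USD_PER_1M = 0.30
CANDIDATE_USD_PER_1M = 2.50

def estimate_cost_usd(prompt_tokens: int, candidate_tokens: int) -> float:
    """Converts token totals into an approximate USD cost."""
    return (
        (prompt_tokens / 1_000_000) * PROMPT_USD_PER_1M
        + (candidate_tokens / 1_000_000) * CANDIDATE_USD_PER_1M
    )

# Example: 120,000 prompt tokens and 15,000 candidate tokens.
print(f"~${estimate_cost_usd(120_000, 15_000):.4f}")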
8. [Bonus] Analyze User Sentiment
Now let's analyze the sentiment of the user's input provided to the agent.
- Create a cloud resource connection to enable BigQuery to interact with Vertex AI services:
bq mk --connection --location=us \
--connection_type=CLOUD_RESOURCE test_connection
You should see a response like:
Connection 517325854360.us.test_connection successfully created
- Retrieve and store the service account ID associated with the connection:
export SERVICE_ACCOUNT_EMAIL=$(bq show --format=prettyjson --connection us.test_connection | grep "serviceAccountId" | cut -d '"' -f 4)
- Run this command to confirm that the service account email was retrieved:
echo $SERVICE_ACCOUNT_EMAIL
You should see your service account displayed:

- Grant the resource connection service account the project-level permissions required to interact with Vertex AI:
gcloud projects add-iam-policy-binding $(gcloud config get-value project) \
--member="serviceAccount:$SERVICE_ACCOUNT_EMAIL" \
--role='roles/bigquery.connectionUser'

gcloud projects add-iam-policy-binding $(gcloud config get-value project) \
--member="serviceAccount:$SERVICE_ACCOUNT_EMAIL" \
--role='roles/aiplatform.user'
Wait a few minutes for the permissions to propagate, then run the BigQuery AI.SCORE function to analyze user sentiment:
SELECT
  timestamp,
  user_id,
  content,
  AI.SCORE((
    'What is the sentiment of the user in this text:', content,
    'Use a scale from 1 to 5.'),
    connection_id => 'us.test_connection') AS user_sentiment
FROM
  `adk_logs.retail_assistant_agent_logs`
WHERE
  event_type = 'USER_MESSAGE_RECEIVED'
ORDER BY
  user_sentiment DESC;
The AI.SCORE function assigns a sentiment value between 1 and 5 to each user input. You should see results similar to the ones below:
9. Clean Up
To avoid ongoing charges to your Google Cloud account, delete the resources created during this workshop.
Delete the logging dataset created by the script:
bq rm -r -f -d $PROJECT_ID:adk_logs
To remove the adk-agent-observability directory and its contents:
cd ..
rm -rf adk-agent-observability
10. Congratulations
Congratulations! You've built a multi-agent system with the Agent Development Kit (ADK) and successfully integrated the BigQuery Agent Analytics Plugin to track and audit your agent's behavior.