Deploy, Manage, and Observe ADK Agent on Cloud Run

1. Introduction

This tutorial will guide you through deploying, managing, and monitoring a powerful agent built with the Agent Development Kit (ADK) on Google Cloud Run. The ADK empowers you to create agents capable of complex, multi-agent workflows. By leveraging Cloud Run, a fully managed serverless platform, you can deploy your agent as a scalable, containerized application without worrying about the underlying infrastructure. This powerful combination allows you to focus on your agent's core logic while benefiting from Google Cloud's robust and scalable environment.

Throughout this tutorial, we will explore the seamless integration of the ADK with Cloud Run. You'll learn how to deploy your agent and then dive into the practical aspects of managing your application in a production-like setting. We will cover how to safely roll out new versions of your agent by managing traffic, enabling you to test new features with a subset of users before a full release.

Furthermore, you will gain hands-on experience with monitoring the performance of your agent. We will simulate a real-world scenario by conducting a load test to observe Cloud Run's automatic scaling capabilities in action. To gain deeper insights into your agent's behavior and performance, we will enable tracing with Cloud Trace. This will provide a detailed, end-to-end view of requests as they travel through your agent, allowing you to identify and address any performance bottlenecks. By the end of this tutorial, you will have a comprehensive understanding of how to effectively deploy, manage, and monitor your ADK-powered agents on Cloud Run.

Throughout the codelab, you will take a step-by-step approach as follows:

  1. Create a PostgreSQL database on Cloud SQL to be used by the ADK agent's database session service
  2. Set up a basic ADK agent
  3. Set up the database session service to be used by the ADK runner
  4. Deploy the agent to Cloud Run
  5. Load test and inspect Cloud Run autoscaling
  6. Deploy a new agent revision and gradually increase traffic to it
  7. Set up Cloud Trace and inspect the agent run traces

Architecture Overview

5e38fc5607fb4543.jpeg

Prerequisites

  • Comfortable working with Python
  • An understanding of basic full-stack architecture using HTTP service

What you'll learn

  • ADK structure and local utilities
  • Setting up an ADK agent with the database session service
  • Setting up PostgreSQL on Cloud SQL to be used by the database session service
  • Deploying the application to Cloud Run using a Dockerfile and setting initial environment variables
  • Configuring and testing Cloud Run autoscaling with load testing
  • Strategies for gradual releases with Cloud Run
  • Setting up ADK agent tracing to Cloud Trace

What you'll need

  • Chrome web browser
  • A Gmail account
  • A Cloud Project with billing enabled

This codelab, designed for developers of all levels (including beginners), uses Python in its sample application. However, Python knowledge isn't required for understanding the concepts presented.

2. 🚀 Preparing Workshop Development Setup

Step 1: Select Active Project in the Cloud Console

In the Google Cloud Console, on the project selector page, select or create a Google Cloud project (see the top-left section of your console).

9803a4534597d962.png

Click on it, and you will see a list of all of your projects, like this example:

5b60dbeab4f9b524.png

The value indicated by the red box is the PROJECT ID; this value will be used throughout the tutorial.

Make sure that billing is enabled for your Cloud project. To check this, click the hamburger icon ☰ in the top-left bar to show the Navigation Menu, and find the Billing menu.

db49b5267c00cc33.png

If you see the "Google Cloud Platform Trial Billing Account" under the Billing / Overview title (top-left section of your Cloud console), your project is ready for this tutorial. If not, go back to the start of this tutorial and redeem the trial billing account.

7f607aa026552bf5.png

Step 2: Prepare Cloud SQL Database

We will need a database for the ADK agent later on. Let's create a PostgreSQL database on Cloud SQL. First, navigate to the search bar at the top of the Cloud console and type "cloud sql". Then click the Cloud SQL product.

1005cb65520eb3fc.png

After that, we will need to create a new database instance. Click Create Instance and choose PostgreSQL.

7f2ad19bc246895d.png

ead4a98e7a8d8a39.png

If you start with a new project, you may also need to enable the Compute Engine API. Just click Enable API if this prompt shows up.

724cf67681535679.png

Next, we will choose the specifications of the database. Choose the Enterprise edition with the Sandbox preset.

24aa9defed93a3ef.png

After that, set the instance name and the default password for the postgres user. You can use whatever credentials you want; however, for the sake of this tutorial we will go with "adk-deployment" for the instance name and "ADK-deployment123" for the password.

f9db3a2a923e988f.png

Let's use us-central1 with a single zone for this tutorial. We can then finalize our database creation, and let it finish all the required setup, by clicking the Create Instance button.

773e2ea11d97369d.png

While waiting for this to finish, we can continue to the next section.

Step 3: Familiarize Yourself with Cloud Shell

You'll use Cloud Shell for most of this tutorial. Click Activate Cloud Shell at the top of the Google Cloud console. If it prompts you to authorize, click Authorize.

1829c3759227c19b.png

b8fe7df5c3c2b919.png

Once connected to Cloud Shell, we will need to check whether the shell (or terminal) is already authenticated with our account:

gcloud auth list

If you see your personal Gmail account, like the example output below, all is good:

Credentialed Accounts

ACTIVE: *
ACCOUNT: alvinprayuda@gmail.com

To set the active account, run:
    $ gcloud config set account `ACCOUNT`

If not, try refreshing your browser and make sure you click Authorize when prompted (the flow might have been interrupted by a connection issue).

Next, we also need to check whether the shell is configured with your correct PROJECT ID. If there is a value inside parentheses before the $ sign in the terminal (in the screenshot below, the value is "adk-cloudrun-deployment-476504"), that value is the configured project for your active shell session.

5ccbc0cf16feaa0.png

If the shown value is already correct, you can skip the next command. However, if it's incorrect or missing, run the following command:

gcloud config set project <YOUR_PROJECT_ID>

Then clone the template working directory for this codelab from GitHub by running the following command. It will create the working directory in the deploy_and_manage_adk directory:

git clone https://github.com/alphinside/deploy-and-manage-adk-service.git deploy_and_manage_adk

Step 4: Familiarize Yourself with the Cloud Shell Editor and Set Up the Application Working Directory

Now we can set up our code editor. We will use the Cloud Shell Editor for this.

Click the Open Editor button; this will open the Cloud Shell Editor b16d56e4979ec951.png

After that, go to the top section of the Cloud Shell Editor and click File -> Open Folder. Find your username directory, then find the deploy_and_manage_adk directory, and click the OK button. This makes the chosen directory the main working directory. In this example, the username is alvinprayuda, hence the directory path is shown below.

ee00d484ff2f8351.png

b1fbf2dcd99c468b.png

Now, your Cloud Shell Editor working directory should look like this (inside deploy_and_manage_adk):

4068b1443241bfa1.png

Now, open the terminal for the editor. You can do this by clicking Terminal -> New Terminal on the menu bar, or by using Ctrl + Shift + C. It will open a terminal window at the bottom part of the browser.

55361099b2f56c79.png

Your current active terminal should be inside the deploy_and_manage_adk working directory. We will use Python 3.12 in this codelab, along with the uv Python project manager, to simplify creating and managing the Python version and virtual environment. The uv package is already preinstalled on Cloud Shell.

Run this command to install the required dependencies into the virtual environment in the .venv directory:

uv sync --frozen

Now, we will need to enable the required APIs via the command shown below. This could take a while.

gcloud services enable aiplatform.googleapis.com \
                       run.googleapis.com \
                       cloudbuild.googleapis.com \
                       cloudresourcemanager.googleapis.com \
                       sqladmin.googleapis.com

On successful execution of the command, you should see a message similar to the one shown below:

Operation "operations/..." finished successfully.

Next, we will need to set up configuration files for this project.

Copy the .env.example file to .env:

cp .env.example .env

Open the .env file and update the GOOGLE_CLOUD_PROJECT value to your project ID:

# .env

# Google Cloud and Vertex AI configuration
GOOGLE_CLOUD_PROJECT=your-project-id
GOOGLE_CLOUD_LOCATION=global
GOOGLE_GENAI_USE_VERTEXAI=True

# Database connection for session service
# DB_CONNECTION_NAME=your-db-connection-name

For this codelab, we are going with the pre-configured values for GOOGLE_CLOUD_LOCATION and GOOGLE_GENAI_USE_VERTEXAI. For now, we will keep the DB_CONNECTION_NAME commented out.

Now we can move to the next step: inspecting the agent logic and deploying it.

3. 🚀 Build the Weather Agent with ADK and Gemini 2.5

Introduction to ADK Directory Structure

Let's start by exploring what ADK has to offer and how to build the agent. The complete ADK documentation can be accessed at this URL. ADK offers many utilities via its CLI commands. Some of them are the following:

  • Set up the agent directory structure
  • Quickly try interactions via CLI input/output
  • Quickly set up a local development web UI

Now, let's check the agent structure in the weather_agent directory:

weather_agent/
├── __init__.py
├── agent.py

And if you inspect __init__.py and agent.py, you will see this code:

# __init__.py

from weather_agent.agent import root_agent

__all__ = ["root_agent"]

# agent.py


import os
from pathlib import Path

import google.auth
from dotenv import load_dotenv
from google.adk.agents import Agent
from weather_agent.tool import get_weather

# Load environment variables from .env file in root directory
root_dir = Path(__file__).parent.parent
dotenv_path = root_dir / ".env"
load_dotenv(dotenv_path=dotenv_path)


# Use default project from credentials if not in .env
_, project_id = google.auth.default()
os.environ.setdefault("GOOGLE_CLOUD_PROJECT", project_id)
os.environ.setdefault("GOOGLE_CLOUD_LOCATION", "global")
os.environ.setdefault("GOOGLE_GENAI_USE_VERTEXAI", "True")

root_agent = Agent(
    name="weather_agent",
    model="gemini-2.5-flash",
    instruction="""
You are a helpful AI assistant designed to provide accurate and useful information.
""",
    tools=[get_weather],
)

ADK Code Explanation

This script contains our agent initialization, where we:

  • Set the model to gemini-2.5-flash
  • Provide the get_weather tool to support the agent's functionality as a weather agent
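The get_weather tool itself lives in weather_agent/tool.py, which is not shown above. As a rough sketch only, a minimal function tool could look like the following; the city data and function body here are illustrative assumptions, not the repository's actual code:

```python
# Hypothetical sketch of a weather tool -- the real weather_agent/tool.py may differ.


def get_weather(city: str) -> dict:
    """Retrieve a weather report for a given city.

    ADK derives the tool schema from the function signature and docstring,
    so the model knows it should pass a city name.
    """
    # A production tool would call a real weather API; this stub returns canned data.
    canned = {
        "london": {"condition": "cloudy", "temperature_c": 12},
        "jakarta": {"condition": "sunny", "temperature_c": 31},
    }
    report = canned.get(city.strip().lower())
    if report is None:
        return {"status": "error", "message": f"No weather data for {city}"}
    return {"status": "success", "city": city, "report": report}
```

Returning a dict with an explicit status field is a common convention for function tools, since it gives the model a structured result to reason over.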

Run the Web UI

Now, we can interact with the agent and inspect its behavior locally. ADK provides a development web UI to interact with the agent and inspect what's going on during the interaction. Run the following command to start the local development UI server:

uv run adk web --port 8080

It will produce output like the following example, which means we can already access the web interface:

INFO:     Started server process [xxxx]
INFO:     Waiting for application startup.

+-----------------------------------------------------------------------------+
| ADK Web Server started                                                      |
|                                                                             |
| For local testing, access at http://localhost:8080.                         |
+-----------------------------------------------------------------------------+

INFO:     Application startup complete.
INFO:     Uvicorn running on http://0.0.0.0:8080 (Press CTRL+C to quit)

Now, to check it, click the Web Preview button in the top area of your Cloud Shell Editor and select Preview on port 8080.

e7c9f56c2463164.png

You will see the following web page, where you can select available agents from the drop-down button on the top left (in our case it should be weather_agent) and interact with the bot. You will see detailed information about the agent runtime logs in the left window.

d95b1e057315fee2.png

Now, try interacting with it. On the left bar, we can inspect the trace for each input, so we can understand how long each action taken by the agent takes before it forms the final answer.

39c0a06ace937683.png

This is one of the observability features built into ADK; currently we are inspecting it locally. Later on, we will see how this integrates with Cloud Trace so that we have a centralized trace of all requests.

4. 🚀 The Backend Server Script

In order to make the agent accessible as a service, we will wrap it inside a FastAPI app. Here we can configure the necessary services to support the agent, such as the Session, Memory, or Artifact services, for production purposes. Here is the server.py code that will be used:

import os

from dotenv import load_dotenv
from fastapi import FastAPI
from google.adk.cli.fast_api import get_fast_api_app
from pydantic import BaseModel
from typing import Literal
from google.cloud import logging as google_cloud_logging


# Load environment variables from .env file
load_dotenv()

logging_client = google_cloud_logging.Client()
logger = logging_client.logger(__name__)

AGENT_DIR = os.path.dirname(os.path.abspath(__file__))

# Get session service URI from environment variables
session_uri = os.getenv("SESSION_SERVICE_URI", None)

# Prepare arguments for get_fast_api_app
app_args = {"agents_dir": AGENT_DIR, "web": True, "trace_to_cloud": True}

# Only include session_service_uri if it's provided
if session_uri:
    app_args["session_service_uri"] = session_uri
else:
    logger.log_text(
        "SESSION_SERVICE_URI not provided. Using in-memory session service instead. "
        "All sessions will be lost when the server restarts.",
        severity="WARNING",
    )

# Create FastAPI app with appropriate arguments
app: FastAPI = get_fast_api_app(**app_args)

app.title = "weather-agent"
app.description = "API for interacting with the Agent weather-agent"


class Feedback(BaseModel):
    """Represents feedback for a conversation."""

    score: int | float
    text: str | None = ""
    invocation_id: str
    log_type: Literal["feedback"] = "feedback"
    service_name: Literal["weather-agent"] = "weather-agent"
    user_id: str = ""


# Example if you want to add your custom endpoint
@app.post("/feedback")
def collect_feedback(feedback: Feedback) -> dict[str, str]:
    """Collect and log feedback.

    Args:
        feedback: The feedback data to log

    Returns:
        Success message
    """
    logger.log_struct(feedback.model_dump(), severity="INFO")
    return {"status": "success"}


# Main execution
if __name__ == "__main__":
    import uvicorn

    uvicorn.run(app, host="0.0.0.0", port=8080)

Server Code Explanation

These are the things defined in the server.py script:

  1. Convert our agent into a FastAPI app using the get_fast_api_app method. This way, we inherit the same route definitions used by the development web UI.
  2. Configure the necessary Session, Memory, or Artifact services by adding keyword arguments to the get_fast_api_app method. In this tutorial, if the SESSION_SERVICE_URI env var is configured, the session service will use it; otherwise an in-memory session service is used.
  3. Add custom routes to support other backend business logic; in the script, we add an example feedback route.
  4. Enable cloud tracing via the get_fast_api_app arguments to send traces to Google Cloud Trace.
  5. Run the FastAPI service using uvicorn.
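To illustrate the custom route, a request body accepted by the /feedback endpoint must match the Feedback model above. The field values below are made-up placeholders:

```python
import json

# Example payload for the custom /feedback endpoint defined in server.py.
# invocation_id and user_id are made-up placeholder values.
feedback_payload = {
    "score": 5,
    "text": "Accurate weather report",
    "invocation_id": "example-invocation-id",
    "log_type": "feedback",
    "service_name": "weather-agent",
    "user_id": "user_123",
}

# This is the JSON body you would POST to /feedback.
print(json.dumps(feedback_payload, indent=2))
```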

5. 🚀 Deploying to Cloud Run

Now, let's deploy this agent service to Cloud Run. For the sake of this demo, the service will be exposed as a public service that can be accessed by others. However, keep in mind that this is not a best practice, as it is not secure.

5e38fc5607fb4543.jpeg

In this codelab, we will use a Dockerfile to deploy our agent to Cloud Run. Below is the Dockerfile content that will be used:

FROM python:3.12-slim

RUN pip install --no-cache-dir uv==0.7.13

WORKDIR /app

COPY . .

RUN uv sync --frozen

EXPOSE 8080

CMD ["uv", "run", "uvicorn", "server:app", "--host", "0.0.0.0", "--port", "8080"]

At this point, we already have all the files needed to deploy our application to Cloud Run, so let's deploy it. Navigate to the Cloud Shell terminal and make sure the current project is configured to your active project. If not, use the gcloud config command to set the project ID:

gcloud config set project [PROJECT_ID]

Now, we need to revisit the .env file. Open it and you will see that we need to uncomment the DB_CONNECTION_NAME variable and fill it with the correct value:

# Google Cloud and Vertex AI configuration
GOOGLE_CLOUD_PROJECT=your-project-id
GOOGLE_CLOUD_LOCATION=global
GOOGLE_GENAI_USE_VERTEXAI=True

# Database connection for session service
DB_CONNECTION_NAME=your-db-connection-name

To get the DB_CONNECTION_NAME value, go to Cloud SQL again. Navigate to the search bar at the top of the Cloud console, type "cloud sql", and click the Cloud SQL product.

1005cb65520eb3fc.png

After that, you will see the previously created instance; click on it.

ca69aefd116c0b23.png

Inside the instance page, scroll down to the "Connect to this instance" section, where you can copy the Connection name to substitute as the DB_CONNECTION_NAME value.

5d7d6c6f17e559c1.png

After that, open the .env file and modify the DB_CONNECTION_NAME variable. Your .env file should look like the example below:

# Google Cloud and Vertex AI configuration
GOOGLE_CLOUD_PROJECT=your-project-id
GOOGLE_CLOUD_LOCATION=global
GOOGLE_GENAI_USE_VERTEXAI=True

# Database connection for session service
DB_CONNECTION_NAME=your-project-id:your-location:your-instance-name
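The connection name always follows the project:region:instance pattern. As a quick sanity check, a small helper like the one below (illustrative only, not part of the codelab repository) can catch a malformed value before you deploy:

```python
def validate_connection_name(name: str) -> bool:
    """Check that a Cloud SQL connection name looks like project:region:instance."""
    parts = name.split(":")
    return len(parts) == 3 and all(parts)


# The first value follows this tutorial's naming; the second is missing a part.
print(validate_connection_name("your-project-id:us-central1:adk-deployment"))  # True
print(validate_connection_name("your-project-id:us-central1"))  # False
```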

After that, run the deployment script:

bash deploy_to_cloudrun.sh

If you're prompted to acknowledge the creation of an Artifact Registry Docker repository, just answer Y.

While we are waiting for the deployment process, let's take a look at deploy_to_cloudrun.sh:

#!/bin/bash

# Load environment variables from .env file
if [ -f .env ]; then
    export $(cat .env | grep -v '^#' | xargs)
else
    echo "Error: .env file not found"
    exit 1
fi

# Validate required variables
required_vars=("GOOGLE_CLOUD_PROJECT" "DB_CONNECTION_NAME")
for var in "${required_vars[@]}"; do
    if [ -z "${!var}" ]; then
        echo "Error: $var is not set in .env file"
        exit 1
    fi
done

gcloud run deploy weather-agent \
    --source . \
    --port 8080 \
    --project ${GOOGLE_CLOUD_PROJECT} \
    --allow-unauthenticated \
    --add-cloudsql-instances ${DB_CONNECTION_NAME} \
    --update-env-vars SESSION_SERVICE_URI="postgresql+pg8000://postgres:ADK-deployment123@postgres/?unix_sock=/cloudsql/${DB_CONNECTION_NAME}/.s.PGSQL.5432",GOOGLE_CLOUD_PROJECT=${GOOGLE_CLOUD_PROJECT} \
    --region us-central1 \
    --min 1 \
    --memory 1G

This script will load your .env variables and then run the deployment command.

If you take a closer look, we only need a single gcloud run deploy command to take care of everything needed to deploy a service: building the image, pushing it to the registry, deploying the service, setting the IAM policy, creating a revision, and even routing traffic. In this example, we already provide a Dockerfile, hence this command will use it to build the app.
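The SESSION_SERVICE_URI that the script passes is a SQLAlchemy-style PostgreSQL URL using the pg8000 driver, connecting through the Cloud SQL unix socket that Cloud Run mounts under /cloudsql. Reassembled in Python for clarity, using this tutorial's credentials (substitute your own):

```python
# Reassemble the SESSION_SERVICE_URI from deploy_to_cloudrun.sh piece by piece.
# The connection name below is this tutorial's example; use your own instance's value.
db_user = "postgres"
db_password = "ADK-deployment123"
db_connection_name = "your-project-id:us-central1:adk-deployment"

session_service_uri = (
    f"postgresql+pg8000://{db_user}:{db_password}@postgres/"
    f"?unix_sock=/cloudsql/{db_connection_name}/.s.PGSQL.5432"
)
print(session_service_uri)
```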

Once the deployment is complete, you should get a link similar to the below:

https://weather-agent-*******.us-central1.run.app

Go ahead and use your application from an Incognito window or your mobile device. It should be live already.

6. 🚀 Inspecting Cloud Run Auto Scaling with Load Testing

Now, we will inspect the autoscaling capabilities of Cloud Run. For this scenario, let's deploy a new revision while setting the maximum concurrency per instance. Run the following command:

gcloud run deploy weather-agent \
                  --source . \
                  --port 8080 \
                  --project {YOUR_PROJECT_ID} \
                  --allow-unauthenticated \
                  --region us-central1 \
                  --concurrency 10

After that, let's inspect the load_test.py file. This is the script we will use for load testing with the Locust framework. It does the following things:

  1. Randomize a user_id and session_id
  2. Create a session for the user_id
  3. Hit the "/run_sse" endpoint with the created user_id and session_id
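The gist of those three steps can be sketched with the standard library alone. The real load_test.py drives them through Locust's HttpUser tasks; the route layout below follows the ADK API server convention of creating a session before posting to /run_sse, and the exact paths should be treated as an assumption to verify against your load_test.py:

```python
import json
import uuid

BASE_URL = "https://weather-agent-xxxx.us-central1.run.app"  # replace with your service URL
APP_NAME = "weather_agent"


def make_ids() -> tuple[str, str]:
    """Step 1: randomize a user_id and session_id for each simulated user."""
    return f"user_{uuid.uuid4().hex[:8]}", f"session_{uuid.uuid4().hex[:8]}"


def session_url(user_id: str, session_id: str) -> str:
    """Step 2: the session is created by POSTing to this ADK API server route."""
    return f"{BASE_URL}/apps/{APP_NAME}/users/{user_id}/sessions/{session_id}"


def run_sse_payload(user_id: str, session_id: str, text: str) -> str:
    """Step 3: the JSON body POSTed to /run_sse for each simulated request."""
    return json.dumps(
        {
            "app_name": APP_NAME,
            "user_id": user_id,
            "session_id": session_id,
            "new_message": {"role": "user", "parts": [{"text": text}]},
        }
    )
```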

We will need to know our deployed service URL, in case you missed it. Go to the Cloud Run console.

f5cc953cc422de6d.png

Then, find your weather-agent service and click it

ddd0df8544aa2bfb.png

The service URL will be displayed right beside the Region information, e.g.:

41b1276616379ee8.png

Then run the following command to do the load test

uv run locust -f load_test.py \
              -H {YOUR_SERVICE_URL} \
              -u 60 \
              -r 5 \
              -t 120 \
              --headless

Running this, you will see metrics like the following displayed (in this example, all requests succeeded):

Type     Name                                  # reqs      # fails |    Avg     Min     Max    Med |   req/s  failures/s
--------|------------------------------------|-------|-------------|-------|-------|-------|-------|--------|-----------
POST     /run_sse end                             813     0(0.00%) |   5817    2217   26421   5000 |    6.79        0.00
POST     /run_sse message                         813     0(0.00%) |   2678    1107   17195   2200 |    6.79        0.00
--------|------------------------------------|-------|-------------|-------|-------|-------|-------|--------|-----------
         Aggregated                              1626     0(0.00%) |   4247    1107   26421   3500 |   13.59        0.00  

Then let's see what happened in Cloud Run. Go to your deployed service again and look at the dashboard. It shows how Cloud Run automatically scales instances to handle incoming requests. Because we limited the max concurrency to 10 requests per instance, Cloud Run automatically adjusts the number of containers to satisfy this condition.

1ad41143eb9d95df.png
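As a back-of-envelope check on the dashboard numbers, the instance count Cloud Run targets is roughly the concurrent request load divided by the per-instance concurrency limit (ignoring startup lag and the min-instances floor):

```python
import math

concurrency_limit = 10  # --concurrency from the deploy command
concurrent_users = 60   # -u from the locust command

# Rough lower bound on instances needed to keep at most
# 10 in-flight requests per instance.
expected_instances = math.ceil(concurrent_users / concurrency_limit)
print(expected_instances)  # 6
```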

7. 🚀 Gradual Release New Revisions

Now, let's consider the following scenario. We want to update the agent's prompt to the following:

import os
from pathlib import Path

import google.auth
from dotenv import load_dotenv
from google.adk.agents import Agent
from weather_agent.tool import get_weather

# Load environment variables from .env file in root directory
root_dir = Path(__file__).parent.parent
dotenv_path = root_dir / ".env"
load_dotenv(dotenv_path=dotenv_path)


# Use default project from credentials if not in .env
_, project_id = google.auth.default()
os.environ.setdefault("GOOGLE_CLOUD_PROJECT", project_id)
os.environ.setdefault("GOOGLE_CLOUD_LOCATION", "global")
os.environ.setdefault("GOOGLE_GENAI_USE_VERTEXAI", "True")

root_agent = Agent(
    name="weather_agent",
    model="gemini-2.5-flash",
    instruction="""
You are a helpful AI assistant designed to provide accurate and useful information.
You only answer inquiries about the weather. Refuse all other user query
""",
    tools=[get_weather],
)

Then, suppose you want to release the new revision but don't want all request traffic to go directly to the new version. We can do a gradual release with Cloud Run. First, we need to deploy a new revision, but with the --no-traffic flag. Save the updated agent script and run the following command:

gcloud run deploy weather-agent \
                  --source . \
                  --port 8080 \
                  --project {YOUR_PROJECT_ID} \
                  --allow-unauthenticated \
                  --region us-central1 \
                  --no-traffic

After it finishes, you will see a log similar to the previous deployment process, the difference being the amount of traffic served. It will show 0 percent of traffic served:

Service [weather-agent] revision [weather-agent-xxxx-xxx] has been deployed and is serving 0 percent of traffic.

Next, let's go to the Cloud Run product page and find your deployed service. Type cloud run in the search bar and click the Cloud Run product.

f5cc953cc422de6d.png

Then, find your weather-agent service and click it

ddd0df8544aa2bfb.png

Go to the Revisions tab and you will see the list of deployed revisions there.

8519c5a59bc7efa6.png

You will see that the newly deployed revision is serving 0% of traffic. From here, you can click the kebab button (⋮) and choose Manage Traffic.

d4d224e20813c303.png

In the pop-up window that appears, you can edit the percentage of traffic going to each revision.

6df497c3d5847f14.png

After waiting a while, traffic will be directed proportionally based on the percentage configuration. This way, we can easily roll back to the previous revision if something goes wrong with the new release.

8. 🚀 ADK Tracing

Agents built with ADK already support tracing via the OpenTelemetry instrumentation embedded in them. We can use Cloud Trace to capture and visualize those traces. Let's inspect server.py to see how we enabled it in our previously deployed service:

# server.py

...

app_args = {"agents_dir": AGENT_DIR, "web": True, "trace_to_cloud": True}

...

app: FastAPI = get_fast_api_app(**app_args)

...

Here, we set the trace_to_cloud argument to True. If you are deploying with other options, you can check this documentation for more details on how to enable tracing to Cloud Trace from various deployment options.

Try accessing your service's web dev UI and have a chat with the agent. After that, go to the Cloud console search bar, type "trace explorer", and choose the Trace Explorer product.

4353c0f8982361ab.png

On the Trace Explorer page, you will see that the trace of our conversation with the agent has been submitted. You can look at the Span name section and filter for the span specific to our agent (it's named agent_run [weather_agent]).

c4336d117a3d2f6a.png

When the spans are filtered, you can also inspect each trace directly. It shows the detailed duration of each action taken by the agent. For example, see the images below.

76a56dff77979037.png

1a3ce0a803d6061a.png

On each section, you can inspect the details in the attributes like shown below

2c87b6d67b0164a8.png

There you go! Now we have good observability and information on each interaction between our agent and the user, which helps us debug issues. Feel free to try various tools or workflows!

9. 🎯 Challenge

Try multi-agent or agentic workflows to see how they perform under load and what the traces look like.

10. 🧹 Clean up

To avoid incurring charges to your Google Cloud account for the resources used in this codelab, follow these steps:

  1. In the Google Cloud console, go to the Manage resources page.
  2. In the project list, select the project that you want to delete, and then click Delete.
  3. In the dialog, type the project ID, and then click Shut down to delete the project.
  4. Alternatively, you can go to Cloud Run in the console, select the service you just deployed, and delete it.