Aidemy: Building Multi-Agent Systems with LangGraph, EDA, and Generative AI on Google Cloud


About this codelab

Last updated: March 13, 2025
Author: Christina Lin

1. Introduction

Hello! So, you're interested in the idea of agents: little helpers that can get things done for you without you even lifting a finger, right? Great! But let's be real: a single agent isn't always going to cut it, especially when you take on bigger, more complex projects. You'll probably need a whole team of them! That's where multi-agent systems come in.

Agents, when powered by LLMs, give you incredible flexibility compared to old-school hard-coding. But (and there's always a but) they come with their own set of tricky challenges. And that's exactly what we'll dig into in this workshop!

title

Here's what you can expect to learn; think of it as leveling up your agent game:

Building your first agent with LangGraph: We'll get hands-on building your own agent using LangGraph, a popular framework. You'll learn how to create tools that connect to databases, tap into the latest Gemini 2 API for internet search, and optimize prompts and responses so your agent can interact not only with LLMs but with existing services too. We'll also show you how function calling works.

Agent orchestration, your way: We'll explore different ways to orchestrate your agents, from simple straight-line paths to more complex multi-path scenarios. Think of it as directing the flow of your agent team.

Multi-agent systems: You'll learn how to set up a system where your agents can collaborate and get work done together, all thanks to an event-driven architecture.

LLM freedom (use the best one for the job): We're not stuck with a single LLM! You'll see how to use multiple LLMs and assign them different roles to boost problem-solving power using cool "thinking models".

Dynamic content? No problem!: Imagine your agent generating dynamic content tailored specifically to each user in real time. We'll show you how to do it!

Taking it to the cloud with Google Cloud: Forget just playing around in a notebook. We'll show you how to architect and deploy your multi-agent system on Google Cloud so it's ready for the real world!

This project will be a good example of how to apply all the techniques we'll cover.

2. Architecture

Being a teacher or working in education can be incredibly rewarding, but let's face it, the workload, especially all the prep work, can be challenging! On top of that, there often aren't enough staff, and tutoring can be expensive. That's why we're proposing an AI-powered teaching assistant. This tool can lighten the load for educators and help fill the gap left by staff shortages and the lack of affordable tutoring.

Our AI teaching assistant can craft detailed lesson plans, fun quizzes, easy-to-digest audio recaps, and personalized assignments. This lets teachers focus on what they do best: connecting with students and helping them fall in love with learning.

The system has two sites: one for teachers to create lesson plans for the upcoming weeks,

planner

and one for students to access quizzes, audio recaps, and assignments.

portal

OK, let's walk through the architecture powering our teaching assistant, Aidemy. As you can see, we've broken it down into several key components, all working together to make this happen.

architecture

Key architectural elements and technologies:

Google Cloud Platform (GCP): central to the entire system:

  • Vertex AI: provides access to Google's Gemini LLMs.
  • Cloud Run: serverless platform for deploying containerized agents and functions.
  • Cloud SQL: PostgreSQL database for curriculum data.
  • Pub/Sub & Eventarc: foundation of the event-driven architecture, enabling asynchronous communication between components.
  • Cloud Storage: stores audio recaps and assignment files.
  • Secret Manager: securely manages database credentials.
  • Artifact Registry: stores Docker images for the agents.
  • Compute Engine: hosts a self-hosted LLM instead of relying on vendor solutions.

LLMs: the "brains" of the system:

  • Google's Gemini models: (Gemini 1.0 Pro, Gemini 2 Flash, Gemini 2 Flash Thinking, Gemini 1.5-pro) used for lesson planning, content generation, dynamic HTML creation, quiz explanation, and assignment composition.
  • DeepSeek: used for the specialized task of generating self-study assignments.

LangChain & LangGraph: frameworks for LLM application development

  • Facilitate the creation of complex multi-agent workflows.
  • Enable intelligent orchestration of tools (API calls, database queries, web searches).
  • Implement the event-driven architecture for system scalability and flexibility.

In essence, our architecture combines the power of LLMs with structured data and event-driven communication, all running on Google Cloud. This lets us build a scalable, reliable, and effective teaching assistant.
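To make the event-driven idea concrete, here is a minimal, in-process sketch of the publish/subscribe pattern that Pub/Sub and Eventarc provide at cloud scale. This is plain Python for illustration only; the `EventBus` class and the `plan.created` topic name are invented for this sketch and are not part of the Google Cloud APIs:

```python
from collections import defaultdict
from typing import Callable

# A minimal in-process sketch of publish/subscribe: publishers emit
# events to a named topic without knowing which components consume them,
# which is what decouples the agents in an event-driven architecture.
class EventBus:
    def __init__(self):
        self._subscribers: dict[str, list[Callable[[dict], None]]] = defaultdict(list)

    def subscribe(self, topic: str, handler: Callable[[dict], None]) -> None:
        self._subscribers[topic].append(handler)

    def publish(self, topic: str, event: dict) -> None:
        # Deliver the event to every handler subscribed to the topic.
        for handler in self._subscribers[topic]:
            handler(event)

bus = EventBus()
received = []
# e.g. the student portal reacting to a newly published lesson plan
bus.subscribe("plan.created", lambda e: received.append(e["subject"]))
bus.publish("plan.created", {"subject": "Mathematics", "year": 6})
```

In the real system, Cloud Pub/Sub plays the role of the bus and Eventarc routes the events to Cloud Run services, so publishers and subscribers can scale and fail independently.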

3. Before you begin

In the Google Cloud Console, on the project selector page, select or create a Google Cloud project. Make sure billing is enabled for your Cloud project. Learn how to check whether billing is enabled on a project.

👉Click Activate Cloud Shell at the top of the Google Cloud console (it's the terminal-shaped icon at the top of the Cloud Shell pane), then click the "Open Editor" button (it looks like an open folder with a pencil). This opens the Cloud Shell Code Editor in the window. You'll see a file explorer on the left.

cloud shell

Click the Cloud Code sign-in button in the bottom status bar as shown. Authorize the plugin as instructed. If you see Cloud Code - no project in the status bar, select it, then in the "Select a Google Cloud Project" drop-down choose the specific Google Cloud project from the list of projects you created.

login project

👉Open the terminal in the cloud IDE.

👉In the terminal, verify that you're already authenticated and that the project is set to your project ID using the following command:

gcloud auth list

👉And run:

gcloud config set project <YOUR_PROJECT_ID>

Run the following command to enable the required Google Cloud APIs:

gcloud services enable compute.googleapis.com  \
                       storage.googleapis.com  \
                       run.googleapis.com  \
                       artifactregistry.googleapis.com  \
                       aiplatform.googleapis.com \
                       eventarc.googleapis.com \
                       sqladmin.googleapis.com \
                       secretmanager.googleapis.com \
                       cloudbuild.googleapis.com \
                       cloudresourcemanager.googleapis.com \
                       cloudfunctions.googleapis.com

This may take a few minutes.

Enable Gemini Code Assist in the Cloud Shell IDE

Click the Code Assist button in the left panel as shown and, one last time, select the correct Google Cloud project. If you're asked to enable the Cloud AI Companion API, please do so and move on. Once you've selected your Google Cloud project, make sure you can see it in the Cloud Code status message in the status bar, and that you also have Code Assist enabled on the right of the status bar, as shown below:

enable code assist

Set up permissions

👉Set up the service account permissions. In the terminal, run:

export PROJECT_ID=$(gcloud config get project)
export SERVICE_ACCOUNT_NAME=$(gcloud compute project-info describe --format="value(defaultServiceAccount)")

echo "Here's your SERVICE_ACCOUNT_NAME $SERVICE_ACCOUNT_NAME"

👉Grant the permissions. In the terminal, run:

#Cloud Storage (Read/Write):
gcloud projects add-iam-policy-binding $PROJECT_ID \
  --member="serviceAccount:$SERVICE_ACCOUNT_NAME" \
  --role="roles/storage.objectAdmin"

#Pub/Sub (Publish/Receive):
gcloud projects add-iam-policy-binding $PROJECT_ID \
  --member="serviceAccount:$SERVICE_ACCOUNT_NAME" \
  --role="roles/pubsub.publisher"

gcloud projects add-iam-policy-binding $PROJECT_ID \
  --member="serviceAccount:$SERVICE_ACCOUNT_NAME" \
  --role="roles/pubsub.subscriber"

#Cloud SQL (Read/Write):
gcloud projects add-iam-policy-binding $PROJECT_ID \
  --member="serviceAccount:$SERVICE_ACCOUNT_NAME" \
  --role="roles/cloudsql.editor"

#Eventarc (Receive Events):
gcloud projects add-iam-policy-binding $PROJECT_ID \
  --member="serviceAccount:$SERVICE_ACCOUNT_NAME" \
  --role="roles/iam.serviceAccountTokenCreator"

gcloud projects add-iam-policy-binding $PROJECT_ID \
  --member="serviceAccount:$SERVICE_ACCOUNT_NAME" \
  --role="roles/eventarc.eventReceiver"

#Vertex AI (User):
gcloud projects add-iam-policy-binding $PROJECT_ID \
  --member="serviceAccount:$SERVICE_ACCOUNT_NAME" \
  --role="roles/aiplatform.user"

#Secret Manager (Read):
gcloud projects add-iam-policy-binding $PROJECT_ID \
  --member="serviceAccount:$SERVICE_ACCOUNT_NAME" \
  --role="roles/secretmanager.secretAccessor"

👉Verify the result in the IAM console.

Run the following command in the terminal to create a Cloud SQL instance named aidemy. We'll need this later, but since the process can take some time, let's kick it off now.

gcloud sql instances create aidemy \
    --database-version=POSTGRES_14 \
    --cpu=2 \
    --memory=4GB \
    --region=us-central1 \
    --root-password=1234qwer \
    --storage-size=10GB \
    --storage-auto-increase

4. Building your first agent

Before we dive into complex multi-agent systems, we need to establish a fundamental building block: a single, functional agent. In this section, we'll take our first steps by creating a simple "book provider" agent. The book provider agent takes a category as input and uses a Gemini LLM to generate a JSON representation of a book in that category. It then serves these book recommendations as a REST API endpoint.

book provider

👉In another browser tab, open the Google Cloud Console, and in the navigation menu (☰), go to "Cloud Run". Click the "+ ... WRITE A FUNCTION" button.

create function

👉Next, configure the basic settings of the Cloud Run function:

  • Service name: book-provider
  • Region: us-central1
  • Runtime: Python 3.12
  • Authentication: Allow unauthenticated invocations - Enabled.

👉Leave the other settings at their defaults and click Create. This takes you to the source code editor.

You'll see pre-populated main.py and requirements.txt files.

main.py contains the function's business logic; requirements.txt lists the required packages.

👉Now we're ready to write some code! But before diving in, let's see whether Gemini Code Assist can give us a great head start. Go back to the Cloud Shell Editor, click the Gemini Code Assist icon, and paste the following request into the prompt box:

Use the functions_framework library to be deployable as an HTTP function. 
Accept a request with category and number_of_book parameters (either in JSON body or query string).
Use langchain and gemini to generate the data for book with fields bookname, author, publisher, publishing_date.
Use pydantic to define a Book model with the fields: bookname (string, description: "Name of the book"), author (string, description: "Name of the author"), publisher (string, description: "Name of the publisher"), and publishing_date (string, description: "Date of publishing").
Use langchain and gemini model to generate book data. the output should follow the format defined in Book model.

The logic should use JsonOutputParser from langchain to enforce output format defined in Book Model.
Have a function get_recommended_books(category) that internally uses langchain and gemini to return a single book object.
The main function, exposed as the Cloud Function, should call get_recommended_books() multiple times (based on number_of_book) and return a JSON list of the generated book objects.
Handle the case where category or number_of_book are missing by returning an error JSON response with a 400 status code.
return a JSON string representing the recommended books. use os library to retrieve GOOGLE_CLOUD_PROJECT env var. Use ChatVertexAI from langchain for the LLM call

Code Assist will then generate a potential solution, providing both the source code and a requirements.txt dependency file.

We encourage you to compare Code Assist's generated code with the tested, correct solution provided below. This lets you evaluate the tool's effectiveness and identify any discrepancies. While LLMs should never be blindly trusted, Code Assist can be a great tool for rapid prototyping and generating initial code structures, and it's worth using for a head start.

Since this is a workshop, we'll proceed with the verified code provided below. However, feel free to experiment with the Code Assist-generated code in your own time to gain a deeper understanding of its capabilities and limitations.

👉Return to the Cloud Run function's source code editor (in the other browser tab). Carefully replace the existing contents of main.py with the code below:

import functions_framework
import json
from flask import Flask, jsonify, request
from langchain_google_vertexai import ChatVertexAI
from langchain_core.output_parsers import JsonOutputParser
from langchain_core.prompts import PromptTemplate
from pydantic import BaseModel, Field
import os

class Book(BaseModel):
    bookname: str = Field(description="Name of the book")
    author: str = Field(description="Name of the author")
    publisher: str = Field(description="Name of the publisher")
    publishing_date: str = Field(description="Date of publishing")


project_id = os.environ.get("GOOGLE_CLOUD_PROJECT")

llm = ChatVertexAI(model_name="gemini-2.0-flash-lite-001")

def get_recommended_books(category):
    """
    A simple book recommendation function.

    Args:
        category (str): category

    Returns:
        str: A JSON string representing the recommended books.
    """
    parser = JsonOutputParser(pydantic_object=Book)
    question = f"Generate a random made up book on {category} with bookname, author and publisher and publishing_date"

    prompt = PromptTemplate(
        template="Answer the user query.\n{format_instructions}\n{query}\n",
        input_variables=["query"],
        partial_variables={"format_instructions": parser.get_format_instructions()},
    )

    chain = prompt | llm | parser
    response = chain.invoke({"query": question})

    return json.dumps(response)


@functions_framework.http
def recommended(request):
    request_json = request.get_json(silent=True)  # Get JSON data
    if request_json and 'category' in request_json and 'number_of_book' in request_json:
        category = request_json['category']
        number_of_book = int(request_json['number_of_book'])
    elif request.args and 'category' in request.args and 'number_of_book' in request.args:
        category = request.args.get('category')
        number_of_book = int(request.args.get('number_of_book'))
    else:
        return jsonify({'error': 'Missing category or number_of_book parameters'}), 400

    recommendations_list = []
    for i in range(number_of_book):
        book_dict = json.loads(get_recommended_books(category))
        print(f"book_dict=======>{book_dict}")
        recommendations_list.append(book_dict)

    return jsonify(recommendations_list)

👉Replace the contents of requirements.txt with the following:

functions-framework==3.*
google-genai==1.0.0
flask==3.1.0
jsonify==0.5
langchain_google_vertexai==2.0.13
langchain_core==0.3.34
pydantic==2.10.5

👉Set the function entry point to: recommended

function create

👉Click SAVE AND DEPLOY. Wait for the deployment process to complete; the Cloud Console will display the status. This may take a few minutes.

👉Once deployed, go back to the Cloud Shell Editor and run in the terminal:

export PROJECT_ID=$(gcloud config get project)
export BOOK_PROVIDER_URL=$(gcloud run services describe book-provider --region=us-central1 --project=$PROJECT_ID --format="value(status.url)")

curl -X POST -H "Content-Type: application/json" -d '{"category": "Science Fiction", "number_of_book": 2}' $BOOK_PROVIDER_URL

It should display some book data in JSON format.

[
  {"author":"Anya Sharma","bookname":"Echoes of the Singularity","publisher":"NovaLight Publishing","publishing_date":"2077-03-15"},
  {"author":"Anya Sharma","bookname":"Echoes of the Quantum Dawn","publisher":"Nova Genesis Publishing","publishing_date":"2077-03-15"}
]

Congratulations! You've successfully deployed a Cloud Run function. This is one of the services we'll integrate while developing our Aidemy agent.

5. Building Tools: Connecting Agents to RESTful Services and Data

Let's go ahead and download the bootstrap skeleton project. Make sure you're in the Cloud Shell Editor. In the terminal, run:

git clone https://github.com/weimeilin79/aidemy-bootstrap.git

After running this command, a new folder named aidemy-bootstrap will be created in your Cloud Shell environment.

In the Cloud Shell Editor's explorer pane (usually on the left), you should now see the folder created when you cloned the aidemy-bootstrap Git repository. Open your project's root folder in the Explorer; you'll find a planner subfolder inside it, so open that too.

Let's start building the tools our agents will use to be truly useful. As you know, LLMs are excellent at reasoning and generating text, but they need access to external resources to perform real-world tasks and provide accurate, up-to-date information. Think of these tools as the agent's "Swiss Army knife," giving it the ability to interact with the world.

When building an agent, it's easy to slip into hard-coding lots of details, which produces an inflexible agent. Instead, by creating and using tools, the agent gains access to external logic and systems, giving it the benefits of both the LLM and traditional programming.

In this section, we'll lay the foundation for the planner agent, which teachers will use to generate lesson plans. Before the agent starts generating a plan, we want to set boundaries by providing more detail on the subject and topic. We'll build three tools:

  1. RESTful API call: interacting with a pre-existing API to retrieve data.
  2. Database query: fetching structured data from a Cloud SQL database.
  3. Google Search: accessing real-time information from the web.
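Before wiring these up, it helps to see what a "tool" looks like to an agent framework. The sketch below is plain stdlib Python, not the actual LangGraph API; the two function stubs mirror the tools we're about to build. It shows how a framework can derive a tool manifest for the LLM from nothing more than each function's name, signature, and docstring:

```python
import inspect

# Hypothetical stubs standing in for the real tools built in this section.
def recommend_book(query: str):
    """Get a list of recommended books from an API endpoint."""
    ...

def get_curriculum(year: int, subject: str):
    """Get the school curriculum description."""
    ...

def describe_tools(tools):
    """Build the kind of tool manifest an LLM is prompted with, by
    introspecting each function's name, parameters, and docstring."""
    manifest = []
    for fn in tools:
        sig = inspect.signature(fn)
        manifest.append({
            "name": fn.__name__,
            "parameters": list(sig.parameters),
            "description": inspect.getdoc(fn),
        })
    return manifest

manifest = describe_tools([recommend_book, get_curriculum])
```

This is why the docstrings and type hints on the tool functions below matter: the framework passes them to the model so it can decide which tool to call and with what arguments.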

Fetching book recommendations from an API

First, let's create a tool that retrieves book recommendations from the book provider API we deployed in the previous section. This demonstrates how an agent can use existing services.

book recommendation

In the Cloud Shell Editor, open the aidemy-bootstrap project you cloned in the previous section.

Edit book.py in the planner folder and paste the following code at the end of the file:

def recommend_book(query: str):
    """
    Get a list of recommended book from an API endpoint
    
    Args:
        query: User's request string
    """

    region = get_next_region()
    llm = VertexAI(model_name="gemini-1.5-pro", location=region)

    query = f"""The user is trying to plan a education course, you are the teaching assistant. Help define the category of what the user requested to teach, respond the categroy with no more than two word.

    user request:   {query}
    """
    print(f"-------->{query}")
    response = llm.invoke(query)
    print(f"CATEGORY RESPONSE------------>: {response}")

    # call this using python and parse the json back to dict
    category = response.strip()

    headers = {"Content-Type": "application/json"}
    data = {"category": category, "number_of_book": 2}

    books = requests.post(BOOK_PROVIDER_URL, headers=headers, json=data)

    return books.text

if __name__ == "__main__":
    print(recommend_book("I'm doing a course for my 5th grade student on Math Geometry, I'll need to recommend few books come up with a teach plan, few quizes and also a homework assignment."))

Explanation:

  • recommend_book(query: str): this function takes the user's query as input.
  • LLM interaction: it uses the LLM to extract the category from the query, showing how you can use an LLM to help generate parameters for tools.
  • API call: it makes a POST request to the book provider API, passing the category and the desired number of books.

👉To test this new function, set the environment variable. Run:

cd ~/aidemy-bootstrap/planner/
export BOOK_PROVIDER_URL=$(gcloud run services describe book-provider --region=us-central1 --project=$PROJECT_ID --format="value(status.url)")

👉وابستگی ها را نصب کنید و کد را اجرا کنید تا مطمئن شوید کار می کند، اجرا کنید:

cd ~/aidemy-bootstrap/planner/
python -m venv env
source env/bin/activate
export PROJECT_ID=$(gcloud config get project)
pip install -r requirements.txt
python book.py

Ignore the Git warning popup.

You should see a JSON string containing book recommendations retrieved from the book provider API. The results are randomly generated; your books may not be identical, but you should get two book recommendations in JSON format.

[{"author":"Anya Sharma","bookname":"Echoes of the Singularity","publisher":"NovaLight Publishing","publishing_date":"2077-03-15"},{"author":"Anya Sharma","bookname":"Echoes of the Quantum Dawn","publisher":"Nova Genesis Publishing","publishing_date":"2077-03-15"}]

If you see this, the first tool is working correctly!

Instead of explicitly crafting a RESTful API call with specific parameters, we use natural language ("I'm doing a course..."). The agent then intelligently extracts the necessary parameters (such as the category) using NLP, demonstrating how the agent leverages natural-language understanding to interact with the API.

compare the calls

👉Remove the following test code from book.py:

if __name__ == "__main__":
    print(recommend_book("I'm doing a course for my 5th grade student on Math Geometry, I'll need to recommend few books come up with a teach plan, few quizes and also a homework assignment."))

Getting curriculum data from the database

Next, we'll build a tool that fetches structured curriculum data from a Cloud SQL PostgreSQL database. This gives the agent access to a reliable source of information for lesson planning.

create db

Remember the aidemy Cloud SQL instance you created in the earlier step? This is where it will be used.

👉Create a database named aidemy-db in the new instance:

gcloud sql databases create aidemy-db \
    --instance=aidemy

Let's verify the instance in Cloud SQL in the Google Cloud Console. You should see a Cloud SQL instance named aidemy in the list. Click the instance name to view its details. On the Cloud SQL instance details page, click "SQL Studio" in the left-hand navigation menu. This opens a new tab.

Click to connect to the database and sign in to SQL Studio.

Select aidemy-db as the database. Enter postgres as the user and 1234qwer as the password.

👉In the SQL Studio query editor, paste the following SQL:

CREATE TABLE curriculums (
    id SERIAL PRIMARY KEY,
    year INT,
    subject VARCHAR(255),
    description TEXT
);

-- Inserting detailed curriculum data for different school years and subjects
INSERT INTO curriculums (year, subject, description) VALUES
-- Year 5
(5, 'Mathematics', 'Introduction to fractions, decimals, and percentages, along with foundational geometry and problem-solving techniques.'),
(5, 'English', 'Developing reading comprehension, creative writing, and basic grammar, with a focus on storytelling and poetry.'),
(5, 'Science', 'Exploring basic physics, chemistry, and biology concepts, including forces, materials, and ecosystems.'),
(5, 'Computer Science', 'Basic coding concepts using block-based programming and an introduction to digital literacy.'),

-- Year 6
(6, 'Mathematics', 'Expanding on fractions, ratios, algebraic thinking, and problem-solving strategies.'),
(6, 'English', 'Introduction to persuasive writing, character analysis, and deeper comprehension of literary texts.'),
(6, 'Science', 'Forces and motion, the human body, and introductory chemical reactions with hands-on experiments.'),
(6, 'Computer Science', 'Introduction to algorithms, logical reasoning, and basic text-based programming (Python, Scratch).'),

-- Year 7
(7, 'Mathematics', 'Algebraic expressions, geometry, and introduction to statistics and probability.'),
(7, 'English', 'Analytical reading of classic and modern literature, essay writing, and advanced grammar skills.'),
(7, 'Science', 'Introduction to cells and organisms, chemical reactions, and energy transfer in physics.'),
(7, 'Computer Science', 'Building on programming skills with Python, introduction to web development, and cyber safety.');

This SQL creates a table named curriculums and inserts some sample data. Click Run to execute the SQL. You should see a confirmation message indicating that the commands completed successfully.

👉Expand the explorer, find the newly created table, and click Query. It should open a new editor tab with SQL generated for you:

sql studio select table

SELECT * FROM
  "public"."curriculums" LIMIT 1000;

👉Click Run.

The results table should show the rows of data you inserted in the previous step, confirming that the table and data were created correctly.

Now that you've successfully created a database populated with sample curriculum data, let's build a tool to retrieve it.

👉In the Cloud Code editor, edit the curriculums.py file in the aidemy-bootstrap folder and paste the following code at the end of the file:

def connect_with_connector() -> sqlalchemy.engine.base.Engine:

    db_user = os.environ["DB_USER"]
    db_pass = os.environ["DB_PASS"]
    db_name = os.environ["DB_NAME"]

    encoded_db_user = os.environ.get("DB_USER")
    print(f"--------------------------->db_user: {db_user!r}")
    print(f"--------------------------->db_pass: {db_pass!r}")
    print(f"--------------------------->db_name: {db_name!r}")

    ip_type = IPTypes.PRIVATE if os.environ.get("PRIVATE_IP") else IPTypes.PUBLIC

    connector = Connector()

    def getconn() -> pg8000.dbapi.Connection:
        conn: pg8000.dbapi.Connection = connector.connect(
            instance_connection_name,
            "pg8000",
            user=db_user,
            password=db_pass,
            db=db_name,
            ip_type=ip_type,
        )
        return conn

    pool = sqlalchemy.create_engine(
        "postgresql+pg8000://",
        creator=getconn,
        pool_size=2,
        max_overflow=2,
        pool_timeout=30,  # 30 seconds
        pool_recycle=1800,  # 30 minutes
    )
    return pool


def init_connection_pool() -> sqlalchemy.engine.base.Engine:

    return (
        connect_with_connector()
    )

    raise ValueError(
        "Missing database connection type. Please define one of INSTANCE_HOST, INSTANCE_UNIX_SOCKET, or INSTANCE_CONNECTION_NAME"
    )

def get_curriculum(year: int, subject: str):
    """
    Get school curriculum
    
    Args:
        subject: User's request subject string
        year: User's request year int
    """
    try:
        stmt = sqlalchemy.text(
            "SELECT description FROM curriculums WHERE year = :year AND subject = :subject"
        )

        with db.connect() as conn:
            result = conn.execute(stmt, parameters={"year": year, "subject": subject})
            row = result.fetchone()
        if row:
            return row[0]
        else:
            return None

    except Exception as e:
        print(e)
        return None

db = init_connection_pool()

Explanation:

  • Environment variables: the code retrieves the database credentials and connection information from environment variables (more on this below).
  • connect_with_connector(): this function uses the Cloud SQL Connector to establish a secure connection to the database.
  • get_curriculum(year: int, subject: str): this function takes the year and subject as input, queries the curriculums table, and returns the corresponding curriculum description.

👉Before we can run the code, we need to set a few environment variables. In the terminal, run:

export PROJECT_ID=$(gcloud config get project)
export INSTANCE_NAME="aidemy"
export REGION="us-central1"
export DB_USER="postgres"
export DB_PASS="1234qwer"
export DB_NAME="aidemy-db"

👉To test it, add the following code to the end of curriculums.py:

if __name__ == "__main__":
    print(get_curriculum(6, "Mathematics"))

👉Run the code:

cd ~/aidemy-bootstrap/planner/
source env/bin/activate
python curriculums.py

You should see the Year 6 Mathematics curriculum description printed to the console.

Expanding on fractions, ratios, algebraic thinking, and problem-solving strategies.

If you see the curriculum description, the database tool is working correctly! Go ahead and stop the script by pressing Ctrl+C.

👉Remove the following test code from curriculums.py:

if __name__ == "__main__":
    print(get_curriculum(6, "Mathematics"))

👉Exit the virtual environment. In the terminal, run:

deactivate

6. Building Tools: Accessing Real-Time Information from the Web

Finally, we'll build a tool that uses the Gemini 2 and Google Search integration to access real-time information from the web. This helps the agent stay current and deliver relevant results.

Integrating Gemini 2 with the Google Search API enhances the agent's capabilities by delivering more accurate and relevant search results. It lets agents access up-to-date information and ground their responses in real-world data, minimizing hallucinations. The improved API integration also facilitates more natural-language queries, enabling agents to formulate complex and nuanced search requests.

search

This function takes a search query, curriculum, subject, and year as input and uses the Gemini API and the Google Search tool to retrieve relevant information from the internet. If you look closely, it performs function calling using the Google Generative AI SDK, without any other framework.

👉Edit search.py in the aidemy-bootstrap folder and paste the following code at the end of the file:

model_id = "gemini-2.0-flash-001"

google_search_tool = Tool(
    google_search=GoogleSearch()
)

def search_latest_resource(search_text: str, curriculum: str, subject: str, year: int):
    """
    Get latest information from the internet
    
    Args:
        search_text: User's request category   string
        subject: "User's request subject" string
        year: "User's request year"  integer
    """
    search_text = "%s in the context of year %d and subject %s with following curriculum detail %s " % (search_text, year, subject, curriculum)
    region = get_next_region()
    client = genai.Client(vertexai=True, project=PROJECT_ID, location=region)
    print(f"search_latest_resource text-----> {search_text}")
    response = client.models.generate_content(
        model=model_id,
        contents=search_text,
        config=GenerateContentConfig(
            tools=[google_search_tool],
            response_modalities=["TEXT"],
        )
    )
    print(f"search_latest_resource response-----> {response}")
    return response

if __name__ == "__main__":
    response = search_latest_resource("What are the syllabus for Year 6 Mathematics?", "Expanding on fractions, ratios, algebraic thinking, and problem-solving strategies.", "Mathematics", 6)
    for each in response.candidates[0].content.parts:
        print(each.text)

Explanation:

  • Tool definition - google_search_tool: wraps the GoogleSearch object in a Tool.
  • search_latest_resource(search_text: str, curriculum: str, subject: str, year: int): this function takes a search query, curriculum, subject, and year as input and uses the Gemini API to perform a Google search.
  • GenerateContentConfig: specifies that the model has access to the GoogleSearch tool.

The Gemini model internally analyzes the search_text and determines whether it can answer the question directly or needs to use the GoogleSearch tool. This is a critical step that happens within the LLM's reasoning process; the model has been trained to recognize situations where external tools are necessary. If the model decides to use the GoogleSearch tool, the Google Generative AI SDK handles the actual invocation: the SDK takes the model's decision and the generated parameters and sends them to the Google Search API. This part is hidden from the user in the code.

سپس مدل Gemini نتایج جستجو را در پاسخ خود ادغام می کند. می‌تواند از اطلاعات برای پاسخ به سؤال کاربر، ایجاد خلاصه یا انجام کارهای دیگر استفاده کند.
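To make the hidden tool-call round trip concrete, here is a minimal stdlib sketch of the loop the SDK runs on your behalf. The `fake_model` and `fake_google_search` functions are illustrative stand-ins, not real SDK calls:

```python
def fake_google_search(query: str) -> str:
    # Stand-in for the real GoogleSearch tool invocation.
    return f"search results for: {query}"

def fake_model(prompt: str, tool_result=None) -> dict:
    # Stand-in for the LLM: the first turn requests a tool call,
    # the follow-up turn folds the tool result into a text answer.
    if tool_result is None:
        return {"tool_call": {"name": "google_search", "args": {"query": prompt}}}
    return {"text": f"Answer based on: {tool_result}"}

def generate_with_tools(prompt: str) -> str:
    reply = fake_model(prompt)
    while "tool_call" in reply:  # the SDK loops until the model stops requesting tools
        call = reply["tool_call"]
        result = fake_google_search(**call["args"])
        reply = fake_model(prompt, tool_result=result)
    return reply["text"]

print(generate_with_tools("Year 6 Mathematics syllabus"))
```

The real SDK does the same dance with structured function-call parts in the response; the point is that your code only sees the final, grounded answer.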

👉 To test, run the following:

cd ~/aidemy-bootstrap/planner/
export PROJECT_ID=$(gcloud config get project)
source env/bin/activate
python search.py

You should see a response from the Gemini Search API containing search results related to the Year 6 Mathematics syllabus. The exact output depends on the search results, but it will be a JSON object with information about the search.

If you see search results, the Google Search tool is working correctly! Go ahead and stop the script by pressing Ctrl+C.

👉 And remove the last part of the code:

if __name__ == "__main__":
    response = search_latest_resource("What are the syllabus for Year 6 Mathematics?", "Expanding on fractions, ratios, algebraic thinking, and problem-solving strategies.", "Mathematics", 6)
    for each in response.candidates[0].content.parts:
        print(each.text)

👉 To exit the virtual environment, run in the terminal:

deactivate

Congratulations! You have now built three powerful tools for your planner agent: an API connector, a database connector, and a Google Search tool. These tools enable the agent to access the information and capabilities it needs to create effective teaching plans.

7. Orchestration with LangGraph

Now that we have built our individual tools, it's time to orchestrate them using LangGraph. This lets us create a more sophisticated "planner" agent that can intelligently decide which tools to use, and when, based on the user's request.

LangGraph is a Python library designed to make it easier to build multi-agent applications using large language models (LLMs). Think of it as a framework for orchestrating complex conversations and workflows involving LLMs, tools, and other agents.

Key concepts:

  • Graph structure: LangGraph represents your application's logic as a directed graph. Each node in the graph represents a step in the process (e.g., a call to an LLM, a tool invocation, a conditional check). Edges define the flow of execution between nodes.
  • State: LangGraph manages the state of your application as it moves through the graph. The state can include variables like the user's input, the results of tool calls, intermediate outputs from LLMs, and any other information that needs to be preserved between steps.
  • Nodes: each node represents a computation or interaction. They can be:
    • Tool nodes: use a tool (e.g., perform a web search, query a database)
    • Function nodes: execute a Python function.
  • Edges: connect nodes, defining the flow of execution. They can be:
    • Direct edges: a simple, unconditional flow from one node to another.
    • Conditional edges: the flow depends on the outcome of a conditional node.
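To ground these concepts before touching the library, here is a hand-rolled stdlib sketch of a tiny directed graph with shared state, a direct edge, a conditional edge, and a loop. It is purely illustrative; LangGraph's real API appears in the code below:

```python
def gather_node(state):
    # "Tool node": pretend to fetch one piece of information per visit.
    state["facts"].append(f"fact #{len(state['facts']) + 1}")
    return state

def check_node(state):
    # Conditional node: decide whether we have gathered enough to answer.
    state["done"] = len(state["facts"]) >= 3
    return state

def run_graph(state):
    node = "gather"                      # edge: START -> gather
    while True:
        if node == "gather":
            state = gather_node(state)
            node = "check"               # direct edge: gather -> check
        elif node == "check":
            state = check_node(state)
            # conditional edge: loop back for more facts, or finish
            node = "end" if state["done"] else "gather"
        else:
            return state

final = run_graph({"facts": [], "done": False})
print(final["facts"])
```

LangGraph gives you the same shape declaratively (`add_node`, `add_edge`, `add_conditional_edges`) plus state management and checkpointing, instead of a hand-written `while` loop.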

LangGraph

We will use LangGraph to implement the orchestration. Let's edit the aidemy.py file in the aidemy-bootstrap folder to define our LangGraph logic.

👉 Add the following code to the end of aidemy.py:

tools = [get_curriculum, search_latest_resource, recommend_book]

def determine_tool(state: MessagesState):
    llm = ChatVertexAI(model_name="gemini-2.0-flash-001", location=get_next_region())
    sys_msg = SystemMessage(
        content=(
            f"""You are a helpful teaching assistant that helps gather all needed information.
                Your ultimate goal is to create a detailed 3-week teaching plan.
                You have access to tools that help you gather information.
                Based on the user request, decide which tool(s) are needed.
            """
        )
    )

    llm_with_tools = llm.bind_tools(tools)
    return {"messages": llm_with_tools.invoke([sys_msg] + state["messages"])}

This function is responsible for taking the current conversation state, presenting a system message to the LLM, and then asking the LLM to generate a response. The LLM can either respond directly to the user or choose to use one of the available tools.

tools: this list represents the set of tools the agent has at its disposal. It includes the three tool functions we defined in the previous steps: get_curriculum, search_latest_resource, and recommend_book. llm.bind_tools(tools): "binds" the list of tools to the llm object. Binding the tools tells the LLM that these tools are available and provides it with information about how to use them (e.g., the tools' names, the parameters they accept, and what they do).
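The metadata that `bind_tools` forwards to the model is essentially each function's name, signature, and docstring. A stdlib sketch of extracting such a schema (illustrative only — LangChain's real conversion is more elaborate, and the `get_curriculum` here is a dummy stand-in for the actual tool):

```python
import inspect

def get_curriculum(year: int, subject: str) -> str:
    """Look up the school curriculum for a given year and subject."""
    return "..."  # dummy body; the real tool queries the database

def tool_schema(fn):
    # Derive a tool description from the function itself:
    # its name, docstring, and annotated parameters.
    sig = inspect.signature(fn)
    return {
        "name": fn.__name__,
        "description": inspect.getdoc(fn),
        "parameters": {name: p.annotation.__name__
                       for name, p in sig.parameters.items()},
    }

schema = tool_schema(get_curriculum)
print(schema["name"], schema["parameters"])
```

This is why well-named tool functions with clear docstrings and type hints matter: that text is exactly what the LLM sees when deciding which tool to call.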


👉 Add the following code to the end of aidemy.py:

def prep_class(prep_needs):

    builder = StateGraph(MessagesState)
    builder.add_node("determine_tool", determine_tool)
    builder.add_node("tools", ToolNode(tools))

    builder.add_edge(START, "determine_tool")
    builder.add_conditional_edges("determine_tool", tools_condition)
    builder.add_edge("tools", "determine_tool")

    memory = MemorySaver()
    graph = builder.compile(checkpointer=memory)

    config = {"configurable": {"thread_id": "1"}}
    messages = graph.invoke({"messages": prep_needs}, config)
    print(messages)
    for m in messages['messages']:
        m.pretty_print()
    teaching_plan_result = messages["messages"][-1].content

    return teaching_plan_result

if __name__ == "__main__":
    prep_class("I'm doing a course for  year 5 on subject Mathematics in Geometry, , get school curriculum, and come up with few books recommendation plus  search latest resources on the internet base on the curriculum outcome. And come up with a 3 week teaching plan")

Explanation:

  • StateGraph(MessagesState): creates a StateGraph object. StateGraph is a core concept in LangGraph. It represents your agent's workflow as a graph, where each node represents a step in the process. Think of it as defining a blueprint for how the agent reasons and acts.
  • Conditional edge: originating from the "determine_tool" node, the tools_condition argument is a function that decides which edge to follow based on the output of determine_tool. Conditional edges allow the graph to branch based on the LLM's decision about which tool to use (or whether to respond to the user directly). This is where the agent's "intelligence" comes into play - it can adapt its behavior dynamically based on the situation.
  • Loop: adds an edge that connects the "tools" node back to the "determine_tool" node. This creates a loop in the graph, allowing the agent to use tools repeatedly until it has gathered enough information to complete the task and provide a satisfying answer. This loop is crucial for complex tasks that require multiple steps of reasoning and information gathering.

Now, let's test our planner agent to see how it orchestrates the different tools.

This code runs the prep_class function with a specific user input, simulating a request to create a teaching plan for Year 5 Mathematics in Geometry using the curriculum, book recommendations, and the latest internet resources.
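The routing decision behind the conditional edge is simple: if the model's last message contains tool calls, go to the "tools" node; otherwise, end the graph. A stdlib sketch of that check, using hypothetical message dicts (the real `tools_condition` inspects LangChain `AIMessage` objects):

```python
END = "__end__"

def tools_condition(state):
    # Route to the "tools" node when the model requested a tool,
    # otherwise terminate the graph.
    last = state["messages"][-1]
    return "tools" if last.get("tool_calls") else END

wants_tool = {"messages": [{"content": "", "tool_calls": [{"name": "get_curriculum"}]}]}
plain_answer = {"messages": [{"content": "Here is your plan."}]}

print(tools_condition(wants_tool))    # routes to the tools node
print(tools_condition(plain_answer))  # finishes the graph
```

Every pass around the loop re-runs this check, which is what lets the agent keep calling tools until the model finally emits a plain answer.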

If you have closed your terminal or the environment variables are no longer set, rerun the following commands:

export BOOK_PROVIDER_URL=$(gcloud run services describe book-provider --region=us-central1 --project=$PROJECT_ID --format="value(status.url)")
export PROJECT_ID=$(gcloud config get project)
export INSTANCE_NAME="aidemy"
export REGION="us-central1"
export DB_USER="postgres"
export DB_PASS="1234qwer"
export DB_NAME="aidemy-db"

👉 Run the code:

cd ~/aidemy-bootstrap/planner/
source env/bin/activate
pip install -r requirements.txt
python aidemy.py

Watch the log output in the terminal. You should see evidence that the agent calls all three tools (getting the school curriculum, getting book recommendations, and searching for the latest resources) before delivering the final teaching plan. This shows that the LangGraph orchestration is working correctly, and the agent is intelligently using all available tools to fulfill the user's request.

================================ Human Message =================================

I'm doing a course for  year 5 on subject Mathematics in Geometry, , get school curriculum, and come up with few books recommendation plus  search latest resources on the internet base on the curriculum outcome. And come up with a 3 week teaching plan
================================== Ai Message ==================================
Tool Calls:
  get_curriculum (xxx)
  Call ID: xxx
  Args:
    year: 5.0
    subject: Mathematics
================================= Tool Message =================================
Name: get_curriculum

Introduction to fractions, decimals, and percentages, along with foundational geometry and problem-solving techniques.
================================== Ai Message ==================================
Tool Calls:
  search_latest_resource (xxxx)
  Call ID: xxxx
  Args:
    year: 5.0
    search_text: Geometry
    curriculum: {"content": "Introduction to fractions, decimals, and percentages, along with foundational geometry and problem-solving techniques."}
    subject: Mathematics
================================= Tool Message =================================
Name: search_latest_resource

candidates=[Candidate(content=Content(parts=[Part(.....) automatic_function_calling_history=[] parsed=None
================================== Ai Message ==================================
Tool Calls:
  recommend_book (93b48189-4d69-4c09-a3bd-4e60cdc5f1c6)
  Call ID: 93b48189-4d69-4c09-a3bd-4e60cdc5f1c6
  Args:
    query: Mathematics Geometry Year 5
================================= Tool Message =================================
Name: recommend_book

[{.....}]

================================== Ai Message ==================================

Based on the curriculum outcome, here is a 3-week teaching plan for year 5 Mathematics Geometry:

**Week 1: Introduction to Shapes and Properties**
.........

Stop the script by pressing Ctrl+C.

👉 (This step is optional.) Replace the test code with a different prompt, one that requires different tools to be called:

if __name__ == "__main__":
    prep_class("I'm doing a course for  year 5 on subject Mathematics in Geometry, search latest resources on the internet base on the subject. And come up with a 3 week teaching plan")

If you have closed your terminal or the environment variables are no longer set, rerun the following commands:

export BOOK_PROVIDER_URL=$(gcloud run services describe book-provider --region=us-central1 --project=$PROJECT_ID --format="value(status.url)")
export PROJECT_ID=$(gcloud config get project)
export INSTANCE_NAME="aidemy"
export REGION="us-central1"
export DB_USER="postgres"
export DB_PASS="1234qwer"
export DB_NAME="aidemy-db"

👉 (This step is optional; only do it if you performed the previous step.) Run the code again:

cd ~/aidemy-bootstrap/planner/
source env/bin/activate
python aidemy.py

What did you notice this time? Which tools did the agent call? You should see that the agent skips the recommend_book tool this time. That is because the prompt does not ask for book recommendations, and our LLM is smart enough not to call a tool it does not need.

================================ Human Message =================================

I'm doing a course for  year 5 on subject Mathematics in Geometry, search latest resources on the internet base on the subject. And come up with a 3 week teaching plan
================================== Ai Message ==================================
Tool Calls:
  get_curriculum (xxx)
  Call ID: xxx
  Args:
    year: 5.0
    subject: Mathematics
================================= Tool Message =================================
Name: get_curriculum

Introduction to fractions, decimals, and percentages, along with foundational geometry and problem-solving techniques.
================================== Ai Message ==================================
Tool Calls:
  search_latest_resource (xxx)
  Call ID: xxxx
  Args:
    year: 5.0
    subject: Mathematics
    curriculum: {"content": "Introduction to fractions, decimals, and percentages, along with foundational geometry and problem-solving techniques."}
    search_text: Geometry
================================= Tool Message =================================
Name: search_latest_resource

candidates=[Candidate(content=Content(parts=[Part(.......token_count=40, total_token_count=772) automatic_function_calling_history=[] parsed=None
================================== Ai Message ==================================

Based on the information provided, a 3-week teaching plan for Year 5 Mathematics focusing on Geometry could look like this:

**Week 1:  Introducing 2D Shapes**
........
* Use visuals, manipulatives, and real-world examples to make the learning experience engaging and relevant.

Stop the script by pressing Ctrl+C.

👉 To keep the aidemy.py file clean, remove the test code (do not skip this step!):

if __name__ == "__main__":
    prep_class("I'm doing a course for  year 5 on subject Mathematics in Geometry, search latest resources on the internet base on the subject. And come up with a 3 week teaching plan")

Now that our agent logic is defined, let's launch the Flask web application. This provides a familiar form-based interface for teachers to interact with the agent. While chatbot interactions are common with LLMs, we are opting for a traditional form-submission UI, as it may feel more intuitive to many educators.

If you have closed your terminal or the environment variables are no longer set, rerun the following commands:

export BOOK_PROVIDER_URL=$(gcloud run services describe book-provider --region=us-central1 --project=$PROJECT_ID --format="value(status.url)")
export PROJECT_ID=$(gcloud config get project)
export INSTANCE_NAME="aidemy"
export REGION="us-central1"
export DB_USER="postgres"
export DB_PASS="1234qwer"
export DB_NAME="aidemy-db"

👉 Now, start the web UI:

cd ~/aidemy-bootstrap/planner/
source env/bin/activate
python app.py

Look for the startup messages in the Cloud Shell terminal output. Flask typically prints messages indicating that it is running, and on which port.

Running on http://127.0.0.1:8080
Running on http://127.0.0.1:8080
The application needs to keep running to serve requests.

👉 From the "Web preview" menu, select Preview on port 8080. Cloud Shell will open a new browser tab or window with the web preview of your application.

Web page

In the application interface, select 5 for Year, select the subject Mathematics, and type Geometry into the Add-on Request field.

While waiting for the response, rather than staring at a blank screen, switch to the Cloud Editor terminal. There you can watch the progress and any output or error messages generated by your function. 😁

👉 Stop the script by pressing Ctrl+C in the terminal.

👉 Exit the virtual environment:

deactivate

8. Deploying the Planner Agent to the Cloud

Build the image and push it to the registry

Overview

👉 It's time to deploy to the cloud. In the terminal, create an Artifact Registry repository to store the Docker image we are going to build.

gcloud artifacts repositories create agent-repository \
    --repository-format=docker \
    --location=us-central1 \
    --description="My agent repository"

You should see Created repository [agent-repository].

👉 Run the following command to build the Docker image:

cd ~/aidemy-bootstrap/planner/
export PROJECT_ID=$(gcloud config get project)
docker build -t gcr.io/${PROJECT_ID}/aidemy-planner .

👉 We need to retag the image so it is hosted in Artifact Registry instead of GCR, then push the tagged image to Artifact Registry:

export PROJECT_ID=$(gcloud config get project)
docker tag gcr.io/${PROJECT_ID}/aidemy-planner us-central1-docker.pkg.dev/${PROJECT_ID}/agent-repository/aidemy-planner
docker push us-central1-docker.pkg.dev/${PROJECT_ID}/agent-repository/aidemy-planner

Once the push completes, you can verify that the image has been stored successfully in Artifact Registry. Go to Artifact Registry in the Google Cloud Console. You should find the aidemy-planner image inside the agent-repository repository. Aidemy planner image

Securing database credentials with Secret Manager

To securely manage and access database credentials, we use Google Cloud Secret Manager. This prevents hardcoding sensitive information in our application code and improves security.

👉 We will create individual secrets for the database username, password, and database name. This approach lets us manage each credential independently. Run in the terminal:

gcloud secrets create db-user
printf "postgres" | gcloud secrets versions add db-user --data-file=-

gcloud secrets create db-pass
printf "1234qwer" | gcloud secrets versions add db-pass --data-file=-

gcloud secrets create db-name
printf "aidemy-db" | gcloud secrets versions add db-name --data-file=-

Using Secret Manager is an important step in securing your application and preventing accidental exposure of sensitive credentials. It follows security best practices for cloud deployments.
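Once these secrets are later exposed to the service as environment variables (as configured in the Cloud Run deployment below), the application reads them like any other variable. A minimal sketch of that pattern, with the `load_db_config` helper being a hypothetical name for illustration:

```python
import os

def load_db_config():
    # In Cloud Run, these values are injected from Secret Manager
    # as environment variables; the code never sees the secret store.
    return {
        "user": os.environ["DB_USER"],
        "password": os.environ["DB_PASS"],
        "name": os.environ["DB_NAME"],
    }

# Simulate the injected environment for a local dry run.
os.environ.setdefault("DB_USER", "postgres")
os.environ.setdefault("DB_PASS", "1234qwer")
os.environ.setdefault("DB_NAME", "aidemy-db")
print(load_db_config()["name"])
```

Because the code only reads environment variables, rotating a credential means updating the secret version, not redeploying new source.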

Deploy to Cloud Run

Cloud Run is a fully managed serverless platform that lets you run containerized applications quickly and easily. It abstracts away the infrastructure management, letting you focus on writing and deploying your code. We will deploy our planner as a Cloud Run service.

👉 In the Google Cloud Console, navigate to "Cloud Run". Click DEPLOY CONTAINER and select SERVICE. Configure your Cloud Run service:

Cloud Run

  1. Container image: click "Select" in the URL field. Find the image URL you pushed to Artifact Registry (e.g., us-central1-docker.pkg.dev/YOUR_PROJECT_ID/agent-repository/agent-planner/YOUR_IMG).
  2. Service name: aidemy-planner
  3. Region: select the us-central1 region.
  4. Authentication: for the purposes of this workshop, you can allow "Allow unauthenticated invocations". For production, you will likely want to restrict access.
  5. Container(s) tab (expand Containers, Networking):
    • Settings tab:
      • Resources
        • Memory: 2GiB
    • Variables & Secrets tab:
      • Environment variables:
        • Add name: GOOGLE_CLOUD_PROJECT and value: <YOUR_PROJECT_ID>
        • Add name: BOOK_PROVIDER_URL, and set the value to your book-provider function's URL, which you can determine with the following command in the terminal:
          gcloud run services describe book-provider \
              --region=us-central1 \
              --project=$PROJECT_ID \
              --format="value(status.url)"
      • Secrets exposed as environment variables:
        • Add name: DB_USER, select secret: db-user and version: latest
        • Add name: DB_PASS, select secret: db-pass and version: latest
        • Add name: DB_NAME, select secret: db-name and version: latest

Set secret

Leave everything else as the defaults.

👉 Click CREATE.

Cloud Run will deploy your service.

Once deployed, click the service to open its details page; you can find the deployed URL at the top.

URL

In the application interface, select 7 for Year, choose Mathematics as the subject, and enter Algebra in the Add-on Request field. This gives the agent the context it needs to generate an appropriate lesson plan.

Congratulations! You have successfully created a teaching plan using our powerful AI agent. This demonstrates the potential of agents to significantly reduce workload and streamline tasks, ultimately improving efficiency and making life easier for educators.

9. Multi-Agent Systems

Now that we have successfully implemented the teaching-plan creation tool, let's shift our focus to building the student portal. This portal gives students access to quizzes, audio recaps, and assignments related to their courses. Given the scope of this functionality, we will leverage the power of multi-agent systems to create a modular and scalable solution.

As discussed earlier, instead of relying on a single agent to handle everything, a multi-agent system lets us break the workload into smaller, specialized tasks, each handled by a dedicated agent. This approach offers several key advantages:

Modularity and maintainability: instead of creating a single agent that does everything, build smaller, specialized agents with well-defined responsibilities. This modularity makes the system easier to understand, maintain, and debug. When a problem arises, you can isolate it to a specific agent, rather than having to dig through a massive codebase.

Scalability: scaling a single, complex agent can become a bottleneck. With a multi-agent system, you can scale individual agents based on their specific needs. For example, if one agent is handling a high volume of requests, you can easily spin up more instances of that agent without affecting the rest of the system.

Team specialization: think of it this way: you wouldn't ask one engineer to build an entire application from scratch. Instead, you assemble a team of specialists, each with expertise in a particular area. Similarly, a multi-agent system lets you leverage the strengths of different LLMs and tools, assigning them to agents that are best suited for specific tasks.

Parallel development: different teams can work on different agents at the same time, speeding up the development process. Since the agents are independent, changes to one agent are less likely to affect the others.

Event-Driven Architecture

To enable effective communication and coordination between these agents, we will use an event-driven architecture. This means the agents react to "events" that happen in the system.

Agents subscribe to specific event types (e.g., "teaching plan generated", "assignment created"). When an event occurs, the relevant agents are notified and can react accordingly. This decoupling promotes flexibility, scalability, and real-time responsiveness.
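The subscribe/publish pattern described above can be sketched with an in-memory event bus (stdlib only; the workshop uses Google Cloud Pub/Sub for the real, distributed version):

```python
from collections import defaultdict

class EventBus:
    def __init__(self):
        self.subscribers = defaultdict(list)

    def subscribe(self, event_type, handler):
        # An agent registers interest in one event type.
        self.subscribers[event_type].append(handler)

    def publish(self, event_type, payload):
        # Every subscribed agent is notified with the payload.
        for handler in self.subscribers[event_type]:
            handler(payload)

bus = EventBus()
received = []
# The portal agent reacts whenever a new teaching plan is announced.
bus.subscribe("teaching_plan_generated", lambda plan: received.append(plan))
bus.publish("teaching_plan_generated", "Week 1: Geometry basics...")
print(received)
```

The key property is the same one Pub/Sub gives you: the publisher never knows who is listening, so new agents can be added without touching the planner.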

Overview

Now, to get started, we need a way to broadcast these events. To do that, we will set up a Pub/Sub topic. Let's start by creating a topic called plan.

👉 Go to the Google Cloud Console Pub/Sub page and click the "Create Topic" button.

👉 Configure the topic with the ID/name plan, check Add a default subscription, leave the rest as defaults, and click Create.

The Pub/Sub page refreshes, and you should now see your newly created topic in the table. Create topic

Now, let's integrate Pub/Sub event-publishing capability into our planner agent. We will add a new tool that sends a "plan" event to the Pub/Sub topic we just created. This event signals to the other agents in the system (such as those in the student portal) that a new teaching plan is available.

👉 Go back to the Cloud Code Editor and open the app.py file located in the planner folder. We will add a function that publishes this event. Replace:

##ADD SEND PLAN EVENT FUNCTION HERE

with:

def send_plan_event(teaching_plan:str):
    """
    Send the teaching event to the topic called plan

    Args:
        teaching_plan: teaching plan
    """
    publisher = pubsub_v1.PublisherClient()
    print(f"-------------> Sending event to topic plan: {teaching_plan}")
    topic_path = publisher.topic_path(PROJECT_ID, "plan")

    message_data = {"teaching_plan": teaching_plan}
    data = json.dumps(message_data).encode("utf-8")

    future = publisher.publish(topic_path, data)

    return f"Published message ID: {future.result()}"

  • send_plan_event: this function takes the generated teaching plan as input, creates a Pub/Sub publisher client, builds the topic path, converts the teaching plan into a JSON string, and publishes the message to the topic.

In the same app.py file:

👉 Update the prompt to instruct the agent to send the teaching plan event to the Pub/Sub topic after generating the teaching plan. Replace:

### ADD send_plan_event CALL

with the following:

send_plan_event(teaching_plan)

By adding the send_plan_event tool and modifying the prompt, we have enabled our planner agent to publish events to Pub/Sub, allowing the other components of our system to react whenever a new teaching plan is created. We will have a working multi-agent system in the following sections.
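On the receiving side, a subscriber gets back the raw bytes that `send_plan_event` published and must reverse the encoding. A minimal sketch of that round trip (stdlib only; a real Pub/Sub subscriber wraps these bytes in a message object with ack handling):

```python
import json

def encode_plan(teaching_plan: str) -> bytes:
    # Mirrors what send_plan_event publishes to the "plan" topic.
    return json.dumps({"teaching_plan": teaching_plan}).encode("utf-8")

def decode_plan(data: bytes) -> str:
    # What a subscriber does with the received message payload.
    return json.loads(data.decode("utf-8"))["teaching_plan"]

payload = encode_plan("Week 1: fractions. Week 2: geometry. Week 3: review.")
print(decode_plan(payload))
```

Agreeing on this small JSON envelope is the informal "contract" between the planner and every downstream agent that consumes plan events.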

10. Empowering Students with On-Demand Quizzes

Imagine a learning environment where students have access to an endless supply of quizzes tailored to their specific learning plans. These quizzes provide immediate feedback, including answers and explanations, fostering a deeper understanding of the material. This is the potential we aim to unlock with our AI-powered quiz portal.

To bring this vision to life, we will build a quiz generation component that can create multiple-choice questions based on the content of the teaching plan.

Overview

👉 In the Cloud Editor's Explorer pane, navigate to the portal folder. Open the quiz.py file and copy and paste the following code to the end of the file:

def generate_quiz_question(file_name: str, difficulty: str, region:str ):
    """Generates a single multiple-choice quiz question using the LLM.

    ```json
    {
      "question": "The question itself",
      "options": ["Option A", "Option B", "Option C", "Option D"],
      "answer": "The correct answer letter (A, B, C, or D)"
    }
    ```
    """
    print(f"region: {region}")
    # Connect to resources needed from Google Cloud
    llm = VertexAI(model_name="gemini-1.5-pro", location=region)

    plan = None
    # Load the file using file_name and read its content into a string called plan
    with open(file_name, 'r') as f:
        plan = f.read()

    parser = JsonOutputParser(pydantic_object=QuizQuestion)

    instruction = f"You'll provide one question with difficulty level of {difficulty}, 4 options as multiple choices and provide the answers, the quiz needs to be related to the teaching plan {plan}"

    prompt = PromptTemplate(
        template="Generates a single multiple-choice quiz question\n {format_instructions}\n  {instruction}\n",
        input_variables=["instruction"],
        partial_variables={"format_instructions": parser.get_format_instructions()},
    )

    chain = prompt | llm | parser
    response = chain.invoke({"instruction": instruction})

    print(f"{response}")
    return response


Inside, it creates a JSON output parser that is specifically designed to understand and structure the LLM's output. It uses the QuizQuestion model we defined earlier to ensure the parsed output conforms to the correct format (question, options, and answer).
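The core of what a JSON output parser does can be sketched with the stdlib: strip the markdown fence the model often wraps around JSON, load it, and verify the expected quiz fields are present. This is illustrative only; `JsonOutputParser` additionally injects format instructions into the prompt:

```python
import json

REQUIRED_KEYS = {"question", "options", "answer"}

def parse_quiz(raw: str) -> dict:
    # LLMs often wrap JSON in a ```json ... ``` fence; strip it first.
    text = raw.strip()
    if text.startswith("```"):
        text = text.split("\n", 1)[1].rsplit("```", 1)[0]
    data = json.loads(text)
    missing = REQUIRED_KEYS - data.keys()
    if missing:
        raise ValueError(f"quiz output missing keys: {missing}")
    return data

raw = '```json\n{"question": "2+2?", "options": ["A) 3", "B) 4"], "answer": "B"}\n```'
quiz = parse_quiz(raw)
print(quiz["answer"])
```

Validating against a schema (here just a key set, in the real code a Pydantic model) is what makes the portal safe to render: a malformed quiz fails loudly instead of producing a broken page.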

👉 Run the following commands in the terminal to set up a virtual environment, install dependencies, and start the agent:

cd ~/aidemy-bootstrap/portal/
python -m venv env
source env/bin/activate
pip install -r requirements.txt
python app.py

Use Cloud Shell's web preview feature to access the running application. Click the "Quizzes" link, either in the top navigation bar or from the card on the index page. You should see three randomly generated quizzes displayed for the student. These quizzes are based on the teaching plan and demonstrate the power of our AI-driven quiz generation system.

Quizzes

To stop the locally running process, press Ctrl+C in the terminal.

Gemini 2 Thinking for Explanations

Okay, so we have quizzes - that's a great start! But what happens when students get something wrong? That's where the real learning happens, right? If we can explain why their answer was off and how to get to the correct one, they're far more likely to remember it. Plus, it helps clear up any confusion and boost their confidence.

That's why we're going to bring in the big guns: Gemini 2's "thinking" model! Think of it like giving the AI a little extra time to think things through before explaining. This lets it provide more detailed and better feedback.

We want to see whether it can help students by assisting, answering, and explaining in detail. To test it, we'll start with a notoriously tricky subject: Calculus.

Overview

👉 First, go to the Cloud Code Editor. In answer.py inside the portal folder, replace:

def answer_thinking(question, options, user_response, answer, region):
    return ""

with the following code snippet:

def answer_thinking(question, options, user_response, answer, region):
    try:
        llm = VertexAI(model_name="gemini-2.0-flash-001", location=region)

        input_msg = HumanMessage(content=[f"Here the question{question}, here are the available options {options}, this student's answer {user_response}, whereas the correct answer is {answer}"])
        prompt_template = ChatPromptTemplate.from_messages(
            [
                SystemMessage(
                    content=(
                        "You are a helpful teacher trying to teach the student on question, you were given the question and a set of multiple choices "
                        "what's the correct answer. use friendly tone"
                    )
                ),
                input_msg,
            ]
        )

        prompt = prompt_template.format()

        response = llm.invoke(prompt)
        print(f"response: {response}")

        return response
    except Exception as e:
        print(f"Error sending message to chatbot: {e}") # Log this error too!
        return f"Unable to process your request at this time. Due to the following reason: {str(e)}"


if __name__ == "__main__":
    question = "Evaluate the limit: lim (x→0) [(sin(5x) - 5x) / x^3]"
    options = ["A) -125/6", "B) -5/3 ", "C) -25/3", "D) -5/6"]
    user_response = "B"
    answer = "A"
    region = "us-central1"
    result = answer_thinking(question, options, user_response, answer, region)

This is a very simple LangChain application that initializes the Gemini 2 Flash model and instructs it to act as a helpful teacher and provide explanations.

👉 Run the following command in the terminal:

cd ~/aidemy-bootstrap/portal/
source env/bin/activate
python answer.py

You should see output similar to the example below. Note that the current model may not provide as thorough an explanation.

Okay, I see the question and the choices. The question is to evaluate the limit:

lim (x→0) [(sin(5x) - 5x) / x^3]

You chose option B, which is -5/3, but the correct answer is A, which is -125/6.

It looks like you might have missed a step or made a small error in your calculations. This type of limit often involves using L'Hôpital's Rule or Taylor series expansion. Since we have the form 0/0, L'Hôpital's Rule is a good way to go! You need to apply it multiple times. Alternatively, you can use the Taylor series expansion of sin(x) which is:
sin(x) = x - x^3/3! + x^5/5! - ...
So, sin(5x) = 5x - (5x)^3/3! + (5x)^5/5! - ...
Then,  (sin(5x) - 5x) = - (5x)^3/3! + (5x)^5/5! - ...
Finally, (sin(5x) - 5x) / x^3 = - 5^3/3! + (5^5 * x^2)/5! - ...
Taking the limit as x approaches 0, we get -125/6.

Keep practicing, you'll get there!

👉 In answer.py, change the model_name in the answer_thinking function from gemini-2.0-flash-001 to gemini-2.0-flash-thinking-exp-01-21.

This switches the LLM to one that reasons more, which helps it generate better explanations. Then run it again.

👉 To test the new thinking model:

cd ~/aidemy-bootstrap/portal/
source env/bin/activate
python answer.py

Here is an example of a response from the thinking model. It is far more thorough and accurate, providing a step-by-step explanation of how to solve the calculus problem. This highlights the power of "thinking" models in generating high-quality explanations. You should see output similar to this:

Hey there! Let's take a look at this limit problem together. You were asked to evaluate:

lim (x→0) [(sin(5x) - 5x) / x^3]

and you picked option B, -5/3, but the correct answer is actually A, -125/6. Let's figure out why!

It's a tricky one because if we directly substitute x=0, we get (sin(0) - 0) / 0^3 = (0 - 0) / 0 = 0/0, which is an indeterminate form. This tells us we need to use a more advanced technique like L'Hopital's Rule or Taylor series expansion.

Let's use the Taylor series expansion for sin(y) around y=0. Do you remember it?  It looks like this:

sin(y) = y - y^3/3! + y^5/5! - ...
where 3! (3 factorial) is 3 × 2 × 1 = 6, 5! is 5 × 4 × 3 × 2 × 1 = 120, and so on.

In our problem, we have sin(5x), so we can substitute y = 5x into the Taylor series:

sin(5x) = (5x) - (5x)^3/3! + (5x)^5/5! - ...
sin(5x) = 5x - (125x^3)/6 + (3125x^5)/120 - ...

Now let's plug this back into our limit expression:

[(sin(5x) - 5x) / x^3] =  [ (5x - (125x^3)/6 + (3125x^5)/120 - ...) - 5x ] / x^3
Notice that the '5x' and '-5x' cancel out!  So we are left with:
= [ - (125x^3)/6 + (3125x^5)/120 - ... ] / x^3
Now, we can divide every term in the numerator by x^3:
= -125/6 + (3125x^2)/120 - ...

Finally, let's take the limit as x approaches 0.  As x gets closer and closer to zero, terms with x^2 and higher powers will become very, very small and approach zero.  So, we are left with:
lim (x→0) [ -125/6 + (3125x^2)/120 - ... ] = -125/6

Therefore, the correct answer is indeed **A) -125/6**.

It seems like your answer B, -5/3, might have come from perhaps missing a factor somewhere during calculation or maybe using an incorrect simplification. Double-check your steps when you were trying to solve it!

Don't worry, these limit problems can be a bit tricky sometimes! Keep practicing and you'll get the hang of it.  Let me know if you want to go through another similar example or if you have any more questions! 😊


Now that we have confirmed it works, let's use the portal.

👉💻 Remove the following test code from answer.py:

if __name__ == "__main__":
    question = "Evaluate the limit: lim (x→0) [(sin(5x) - 5x) / x^3]"
    options = ["A) -125/6", "B) -5/3 ", "C) -25/3", "D) -5/6"]
    user_response = "B"
    answer = "A"
    region = "us-central1"
    result = answer_thinking(question, options, user_response, answer, region)

👉💻 Run the following commands in the terminal to set up a virtual environment, install dependencies, and start the agent:

cd ~/aidemy-bootstrap/portal/
source env/bin/activate
python app.py

👉 Use Cloud Shell's web preview feature to access the running application. Click the "Quizzes" link, answer all the quizzes, make sure you get at least one answer wrong, and click Submit.

Thinking answers

Instead of staring blankly while waiting for the response, switch over to the Cloud Editor terminal. You can observe the progress and any output or error messages generated by your function in the emulator terminal. 😁

To stop the locally running process, press Ctrl+C in the terminal.

11. Optional: Orchestrating Agents with Eventarc

So far, the student portal has generated quizzes based on a default set of teaching plans. That's useful, but it means our planner agent and the portal's quiz agent aren't actually talking to each other. Remember how we added the feature where the planner agent publishes its newly generated teaching plans to a Pub/Sub topic? Now it's time to connect that to our portal agent!

Overview

We want the portal to automatically refresh its quiz content whenever a new teaching plan is generated. To do that, we'll create an endpoint in the portal that can receive these new plans.

👉 In the Cloud Editor's Explorer pane, navigate to the portal folder. Open the app.py file for editing. Add the following code between ## Add your code here:

## Add your code here

@app.route('/new_teaching_plan', methods=['POST'])
def new_teaching_plan():
    try:
        # Get data from Pub/Sub message delivered via Eventarc
        envelope = request.get_json()
        if not envelope:
            return jsonify({'error': 'No Pub/Sub message received'}), 400

        if not isinstance(envelope, dict) or 'message' not in envelope:
            return jsonify({'error': 'Invalid Pub/Sub message format'}), 400

        pubsub_message = envelope['message']
        print(f"data: {pubsub_message['data']}")

        data = pubsub_message['data']
        data_str = base64.b64decode(data).decode('utf-8')
        data = json.loads(data_str)

        teaching_plan = data['teaching_plan']

        print(f"File content: {teaching_plan}")

        with open("teaching_plan.txt", "w") as f:
            f.write(teaching_plan)

        print(f"Teaching plan saved to local file: teaching_plan.txt")

        return jsonify({'message': 'File processed successfully'})

    except Exception as e:
        print(f"Error processing file: {e}")
        return jsonify({'error': 'Error processing file'}), 500
## Add your code here
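For intuition, here is a small, self-contained sketch of the decoding step this endpoint performs on the Pub/Sub envelope. The sample plan text is illustrative, not the codelab's real payload:

```python
import base64
import json

# Build a fake Eventarc/Pub/Sub envelope, the way the planner agent's publish would.
payload = {"teaching_plan": "Week 1: 2D Shapes and Angles"}
envelope = {"message": {"data": base64.b64encode(json.dumps(payload).encode("utf-8")).decode("ascii")}}

# Decode it the way /new_teaching_plan does: base64-decode, then parse JSON.
data_str = base64.b64decode(envelope["message"]["data"]).decode("utf-8")
teaching_plan = json.loads(data_str)["teaching_plan"]
print(teaching_plan)  # Week 1: 2D Shapes and Angles
```

This is why the endpoint validates the envelope shape first: the plan text is nested two layers deep (base64 inside the message's data field).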

Rebuild and Deploy to Cloud Run

You need to update both our planner and portal agents and redeploy them to Cloud Run. This ensures they have the latest code and are configured to communicate via events.

Deployment overview

👉💻 First, we'll rebuild and push the planner agent image. Back in the terminal, run:

cd ~/aidemy-bootstrap/planner/
export PROJECT_ID=$(gcloud config get project)
docker build -t gcr.io/${PROJECT_ID}/aidemy-planner .
export PROJECT_ID=$(gcloud config get project)
docker tag gcr.io/${PROJECT_ID}/aidemy-planner us-central1-docker.pkg.dev/${PROJECT_ID}/agent-repository/aidemy-planner
docker push us-central1-docker.pkg.dev/${PROJECT_ID}/agent-repository/aidemy-planner

👉💻 We'll do the same for the portal, building and pushing the portal agent image:

cd ~/aidemy-bootstrap/portal/
export PROJECT_ID=$(gcloud config get project)
docker build -t gcr.io/${PROJECT_ID}/aidemy-portal .
export PROJECT_ID=$(gcloud config get project)
docker tag gcr.io/${PROJECT_ID}/aidemy-portal us-central1-docker.pkg.dev/${PROJECT_ID}/agent-repository/aidemy-portal
docker push us-central1-docker.pkg.dev/${PROJECT_ID}/agent-repository/aidemy-portal

In Artifact Registry, you should see the aidemy-planner and aidemy-portal container images.

Containers

👉💻 In the terminal, run this to update the Cloud Run image for the planner agent:

export PROJECT_ID=$(gcloud config get project)
gcloud run services update aidemy-planner \
   
--region=us-central1 \
   
--image=us-central1-docker.pkg.dev/${PROJECT_ID}/agent-repository/aidemy-planner:latest

You should see output similar to this:

OK Deploying... Done.                                                                                                                                                     
 
OK Creating Revision...                                                                                                                                                
 
OK Routing traffic...                                                                                                                                                  
Done.                                                                                                                                                                    
Service [aidemy-planner] revision [aidemy-planner-xxxxx] has been deployed and is serving 100 percent of traffic.
Service URL: https://aidemy-planner-xxx.us-central1.run.app

Note the Service URL; this is the link to your deployed planner agent. If you need to determine the planner agent's Service URL later, use this command:

gcloud run services describe aidemy-planner \
    --region=us-central1 \
    --format 'value(status.url)'

👉💻 Run this to create the Cloud Run instance for the portal agent:

export PROJECT_ID=$(gcloud config get project)
gcloud run deploy aidemy-portal \
  --image=us-central1-docker.pkg.dev/${PROJECT_ID}/agent-repository/aidemy-portal:latest \
  --region=us-central1 \
  --platform=managed \
  --allow-unauthenticated \
  --memory=2Gi \
  --cpu=2 \
  --set-env-vars=GOOGLE_CLOUD_PROJECT=${PROJECT_ID}

You should see output similar to this:

Deploying container to Cloud Run service [aidemy-portal] in project [xxxx] region [us-central1]
OK Deploying new service... Done.                                                                                                                                        
 
OK Creating Revision...                                                                                                                                                
 
OK Routing traffic...                                                                                                                                                  
 
OK Setting IAM Policy...                                                                                                                                                
Done.                                                                                                                                                                    
Service [aidemy-portal] revision [aidemy-portal-xxxx] has been deployed and is serving 100 percent of traffic.
Service URL: https://aidemy-portal-xxxx.us-central1.run.app

Note the Service URL; this is the link to your deployed student portal. If you need to determine the student portal's Service URL later, use this command:

gcloud run services describe aidemy-portal \
    --region=us-central1 \
    --format 'value(status.url)'

Creating the Eventarc Trigger

But here's the big question: how does this endpoint get notified when a fresh plan is waiting in the Pub/Sub topic? That's where Eventarc swoops in to save the day!

Eventarc acts as a bridge, listening for specific events (like a new message arriving on our Pub/Sub topic) and automatically triggering actions in response. In our case, it detects when a new teaching plan is published and then sends a signal to our portal's endpoint, letting it know it's time to update.

With Eventarc handling the event-driven communication, we can seamlessly connect our planner agent and portal agent, creating a truly dynamic and responsive learning system. It's like having a smart messenger that automatically delivers the latest lesson plans to the right place!

👉 In the console, head over to Eventarc.

👉 Click the "+ CREATE TRIGGER" button.

👉 Configure the Trigger (Basics):

  • Trigger name: plan-topic-trigger
  • Trigger type: Google sources
  • Event provider: Cloud Pub/Sub
  • Event type: google.cloud.pubsub.topic.v1.messagePublished
  • Cloud Pub/Sub topic: select projects/PROJECT_ID/topics/plan
  • Region: us-central1
  • Service account:
    • Grant the service account the roles/iam.serviceAccountTokenCreator role
    • Use the default value: Default compute service account
  • Event destination: Cloud Run
  • Cloud Run service: aidemy-portal
  • Ignore the error message: Permission denied on "locations/me-central2" (or it may not exist).
  • Service URL path: /new_teaching_plan

👉 Click "Create".

The Eventarc Triggers page will refresh, and you should now see your newly created trigger listed in the table.

👉💻 Now, access the planner agent using its Service URL to request a new teaching plan.

Run this in the terminal to determine the planner agent's Service URL:

gcloud run services list --platform=managed --region=us-central1 --format='value(URL)' | grep planner

This time, try Year 5, subject Science, and add-on request atoms.

Then wait a minute or two; again, this delay has been introduced due to the billing limitation of this lab. Under normal conditions, there shouldn't be a delay.

Finally, access the student portal using its Service URL.

Run this in the terminal to determine the student portal agent's Service URL:

gcloud run services list --platform=managed --region=us-central1 --format='value(URL)' | grep portal

You should see that the quizzes have been updated and now align with the new teaching plan you just generated! This demonstrates the successful integration of Eventarc into the Aidemy system!

Celebration

Congratulations! You've successfully built a multi-agent system on Google Cloud, leveraging an event-driven architecture for greater scalability and flexibility! You've laid a solid foundation, but there's even more to explore. To dig deeper into the real benefits of this architecture, discover the power of the Gemini 2 Multimodal Live API and learn how to implement single-path orchestration with LangGraph; feel free to continue on to the next two chapters.

12. Optional: Audio Recaps with Gemini

Gemini can understand and process information from diverse sources, such as text, images, and even audio, opening up a whole range of possibilities for learning and content creation. Gemini's ability to "see," "hear," and "read" truly unlocks creative and engaging user experiences.

Beyond just creating visuals or text, another important step in learning is effective summarizing and recapping. Think about it: how often do you remember a catchy song lyric more easily than something you read in a textbook? Sound can be incredibly memorable! That's why we're going to leverage Gemini's multimodal capabilities to generate audio recaps of our teaching plans. This gives students a convenient and engaging way to review material, potentially boosting retention and comprehension through the power of auditory learning.

Live API overview

We need a place to store the generated audio files. Cloud Storage provides a scalable and reliable solution.

👉 Head to Storage in the console. Click "Buckets" in the left-hand menu. Click the "+ CREATE" button at the top.

👉 Configure your new bucket:

  • Bucket name: aidemy-recap-UNIQUE_NAME
    • Important: make sure you choose a unique bucket name that begins with aidemy-recap-. This unique prefix is crucial to avoid naming conflicts when creating your Cloud Storage bucket.
  • Region: us-central1
  • Storage class: "Standard". Standard is suitable for frequently accessed data.
  • Access control: select the default "Uniform" access control. This provides uniform, bucket-level access control.
  • Advanced options: for this workshop, the default settings are usually sufficient.

👉 Click the CREATE button to create your bucket.

  • You may see a pop-up about public access prevention. Leave the "Enforce public access prevention on this bucket" box checked and click Confirm.

You'll now see your newly created bucket in the Buckets list. Remember your bucket name; you'll need it later.

👉💻 In the Cloud Editor's terminal, run the following commands to grant the service account access to the bucket:

export COURSE_BUCKET_NAME=$(gcloud storage buckets list --format="value(name)" | grep aidemy-recap)
export SERVICE_ACCOUNT_NAME=$(gcloud compute project-info describe --format="value(defaultServiceAccount)")
gcloud storage buckets add-iam-policy-binding gs://$COURSE_BUCKET_NAME \
    --member "serviceAccount:$SERVICE_ACCOUNT_NAME" \
    --role "roles/storage.objectViewer"

gcloud storage buckets add-iam-policy-binding gs://$COURSE_BUCKET_NAME \
    --member "serviceAccount:$SERVICE_ACCOUNT_NAME" \
    --role "roles/storage.objectCreator"

👉 In the Cloud Editor, open audio.py in the courses folder. Paste the following code at the end of the file:

config = LiveConnectConfig(
    response_modalities=["AUDIO"],
    speech_config=SpeechConfig(
        voice_config=VoiceConfig(
            prebuilt_voice_config=PrebuiltVoiceConfig(
                voice_name="Charon",
            )
        )
    ),
)

async def process_weeks(teaching_plan: str):
    region = "us-east5"  # To work around onRamp quota limits
    client = genai.Client(vertexai=True, project=PROJECT_ID, location=region)

    clientAudio = genai.Client(vertexai=True, project=PROJECT_ID, location="us-central1")
    async with clientAudio.aio.live.connect(
        model=MODEL_ID,
        config=config,
    ) as session:
        for week in range(1, 4):
            response = client.models.generate_content(
                model="gemini-2.0-flash-001",
                contents=f"Given the following teaching plan: {teaching_plan}, extract the content plan for week {week}. And return just the plan, nothing else."  # Clarified prompt
            )

            prompt = f"""
                Assume you are the instructor.
                Prepare a concise and engaging recap of the key concepts and topics covered.
                This recap should be suitable for generating a short audio summary for students.
                Focus on the most important learnings and takeaways, and frame it as a direct address to the students.
                Avoid overly formal language and aim for a conversational tone, tell a few jokes.

                Teaching plan: {response.text} """
            print(f"prompt --->{prompt}")

            await session.send(input=prompt, end_of_turn=True)
            with open(f"temp_audio_week_{week}.raw", "wb") as temp_file:
                async for message in session.receive():
                    if message.server_content.model_turn:
                        for part in message.server_content.model_turn.parts:
                            if part.inline_data:
                                temp_file.write(part.inline_data.data)

            data, samplerate = sf.read(f"temp_audio_week_{week}.raw", channels=1, samplerate=24000, subtype='PCM_16', format='RAW')
            sf.write(f"course-week-{week}.wav", data, samplerate)

            storage_client = storage.Client()
            bucket = storage_client.bucket(BUCKET_NAME)
            blob = bucket.blob(f"course-week-{week}.wav")  # Or give it a more descriptive name
            blob.upload_from_filename(f"course-week-{week}.wav")
            print(f"Audio saved to GCS: gs://{BUCKET_NAME}/course-week-{week}.wav")

    await session.close()


def breakup_sessions(teaching_plan: str):
    asyncio.run(process_weeks(teaching_plan))

  • Streaming connection: First, a persistent connection is established with the Live API endpoint. Unlike a standard API call where you send a request and receive a response, this connection stays open for continuous data exchange.
  • Multimodal configuration: Use the configuration to specify what kind of output you want (in this case, audio), and even which parameters to use (e.g., voice selection, audio encoding).
  • Asynchronous processing: The API works asynchronously, meaning it doesn't block the main thread while waiting for audio generation to complete. By processing data in real time and sending the output in chunks, it provides a near real-time experience.
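The raw-PCM-to-WAV step above uses the soundfile library; conceptually it just wraps 24 kHz, 16-bit mono PCM bytes in a WAV header. Here is a minimal stdlib-only sketch of the same idea — the file name and the synthetic sine-wave "audio" are made up for illustration:

```python
import math
import struct
import wave

samplerate = 24000  # matches the Live API's 24 kHz PCM output
# Fake one-tenth of a second of 16-bit mono "audio" (a 440 Hz tone).
pcm = b"".join(
    struct.pack("<h", int(3000 * math.sin(2 * math.pi * 440 * t / samplerate)))
    for t in range(samplerate // 10)
)

with wave.open("recap_demo.wav", "wb") as wav:
    wav.setnchannels(1)       # mono
    wav.setsampwidth(2)       # 16-bit samples
    wav.setframerate(samplerate)
    wav.writeframes(pcm)
```

The codelab's sf.read/sf.write pair does the same header-wrapping, with the added convenience of interpreting the raw file's subtype ('PCM_16') for you.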

Now the key question is: when should this audio generation process run? Ideally, we want the audio recaps to be available as soon as a new teaching plan is created. Since we've already implemented an event-driven architecture by publishing the teaching plan to a Pub/Sub topic, we can simply subscribe to that topic.

However, we don't create new teaching plans very often. It wouldn't be efficient to have an agent constantly running and waiting for new plans. That's why deploying this audio-generation logic as a Cloud Run function makes perfect sense.

Deployed as a function, it stays dormant until a new message is published to the Pub/Sub topic. When that happens, it automatically triggers the function, which generates the audio recaps and stores them in our bucket.

👉 In the courses folder, open the main.py file. This file defines the Cloud Run function that will be triggered whenever a new teaching plan becomes available. It receives the plan and initiates the audio recap generation. Add the following code snippet to the end of the file.

@functions_framework.cloud_event
def process_teaching_plan(cloud_event):
    print(f"CloudEvent received: {cloud_event.data}")
    time.sleep(60)
    try:
        if isinstance(cloud_event.data.get('message', {}).get('data'), str):  # Check for base64 encoding
            data = json.loads(base64.b64decode(cloud_event.data['message']['data']).decode('utf-8'))
            teaching_plan = data.get('teaching_plan')  # Get the teaching plan
        elif 'teaching_plan' in cloud_event.data:  # No base64
            teaching_plan = cloud_event.data["teaching_plan"]
        else:
            raise KeyError("teaching_plan not found")  # Handle error explicitly

        # Load the teaching_plan as string from the cloud event, call audio breakup_sessions
        breakup_sessions(teaching_plan)

        return "Teaching plan processed successfully", 200

    except (json.JSONDecodeError, AttributeError, KeyError) as e:
        print(f"Error decoding CloudEvent data: {e} - Data: {cloud_event.data}")
        return "Error processing event", 500

    except Exception as e:
        print(f"Error processing teaching plan: {e}")
        return "Error processing teaching plan", 500

@functions_framework.cloud_event: This decorator marks the function as a Cloud Run function that is triggered by CloudEvents.

Local testing

👉💻 We'll run this in a virtual environment, installing the necessary Python libraries for the Cloud Run function:

cd ~/aidemy-bootstrap/courses
export COURSE_BUCKET_NAME=$(gcloud storage buckets list --format="value(name)" | grep aidemy-recap)
python -m venv env
source env/bin/activate
pip install -r requirements.txt

👉💻 The Cloud Run function emulator allows us to test our function locally before deploying it to Google Cloud. Start a local emulator by running:

functions-framework --target process_teaching_plan --signature-type=cloudevent --source main.py

👉💻 While the emulator is running, you can send test CloudEvents to it to simulate a new teaching plan being published. In a new terminal:

Two terminals

👉💻 Run:

  curl -X POST \
  http://localhost:8080/ \
  -H "Content-Type: application/json" \
  -H "ce-id: event-id-01" \
  -H "ce-source: planner-agent" \
  -H "ce-specversion: 1.0" \
  -H "ce-type: google.cloud.pubsub.topic.v1.messagePublished" \
  -d '{
    "message": {
      "data": "eyJ0ZWFjaGluZ19wbGFuIjogIldlZWsgMTogMkQgU2hhcGVzIGFuZCBBbmdsZXMgLSBEYXkgMTogUmV2aWV3IG9mIGJhc2ljIDJEIHNoYXBlcyAoc3F1YXJlcywgcmVjdGFuZ2xlcywgdHJpYW5nbGVzLCBjaXJjbGVzKS4gRGF5IDI6IEV4cGxvcmluZyBkaWZmZXJlbnQgdHlwZXMgb2YgdHJpYW5nbGVzIChlcXVpbGF0ZXJhbCwgaXNvc2NlbGVzLCBzY2FsZW5lLCByaWdodC1hbmdsZWQpLiBEYXkgMzogRXhwbG9yaW5nIHF1YWRyaWxhdGVyYWxzIChzcXVhcmUsIHJlY3RhbmdsZSwgcGFyYWxsZWxvZ3JhbSwgcmhvbWJ1cywgdHJhcGV6aXVtKS4gRGF5IDQ6IEludHJvZHVjdGlvbiB0byBhbmdsZXM6IHJpZ2h0IGFuZ2xlcywgYWN1dGUgYW5nbGVzLCBhbmQgb2J0dXNlIGFuZ2xlcy4gRGF5IDU6IE1lYXN1cmluZyBhbmdsZXMgdXNpbmcgYSBwcm90cmFjdG9yLiBXZWVrIDI6IDNEIFNoYXBlcyBhbmQgU3ltbWV0cnkgLSBEYXkgNjogSW50cm9kdWN0aW9uIHRvIDNEIHNoYXBlczogY3ViZXMsIGN1Ym9pZHMsIHNwaGVyZXMsIGN5bGluZGVycywgY29uZXMsIGFuZCBweXJhbWlkcy4gRGF5IDc6IERlc2NyaWJpbmcgM0Qgc2hhcGVzIHVzaW5nIGZhY2VzLCBlZGdlcywgYW5kIHZlcnRpY2VzLiBEYXkgODogUmVsYXRpbmcgMkQgc2hhcGVzIHRvIDNEIHNoYXBlcy4gRGF5IDk6IElkZW50aWZ5aW5nIGxpbmVzIG9mIHN5bW1ldHJ5IGluIDJEIHNoYXBlcy4gRGF5IDEwOiBDb21wbGV0aW5nIHN5bW1ldHJpY2FsIGZpZ3VyZXMuIFdlZWsgMzogUG9zaXRpb24sIERpcmVjdGlvbiwgYW5kIFByb2JsZW0gU29sdmluZyAtIERheSAxMTogRGVzY3JpYmluZyBwb3NpdGlvbiB1c2luZyBjb29yZGluYXRlcyBpbiB0aGUgZmlyc3QgcXVhZHJhbnQuIERheSAxMjogUGxvdHRpbmcgY29vcmRpbmF0ZXMgdG8gZHJhdyBzaGFwZXMuIERheSAxMzogVW5kZXJzdGFuZGluZyB0cmFuc2xhdGlvbiAoc2xpZGluZyBhIHNoYXBlKS4gRGF5IDE0OiBVbmRlcnN0YW5kaW5nIHJlZmxlY3Rpb24gKGZsaXBwaW5nIGEgc2hhcGUpLiBEYXkgMTU6IFByb2JsZW0tc29sdmluZyBhY3Rpdml0aWVzIGludm9sdmluZyBwZXJpbWV0ZXIsIGFyZWEsIGFuZCBtaXNzaW5nIGFuZ2xlcy4ifQ=="
    }
  }'
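The "data" field in the curl command above is just base64-encoded JSON. If you want to test with your own plan text, you can generate a replacement value like this (a side sketch, not part of the codelab's files; the sample plan string is illustrative):

```python
import base64
import json

plan = {"teaching_plan": "Week 1: 2D Shapes and Angles - Day 1: Review of basic 2D shapes."}
data_field = base64.b64encode(json.dumps(plan).encode("utf-8")).decode("ascii")
print(data_field)  # paste this string into the curl command's "data" field

# process_teaching_plan will round-trip it back to the original dict:
decoded = json.loads(base64.b64decode(data_field).decode("utf-8"))
```

This mirrors the base64 check inside process_teaching_plan, so any payload produced this way will take the first branch of its if/elif.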

Instead of staring blankly while waiting for the response, switch over to the other Cloud Shell terminal. You can observe the progress and any output or error messages generated by your function in the emulator terminal. 😁

In the second terminal you should see it return OK.

👉 Verify the data in the bucket: go to Cloud Storage, select the "Buckets" tab, and then select aidemy-recap-UNIQUE_NAME.

Bucket

👉💻 In the terminal running the emulator, press Ctrl+C to exit, and close the second terminal. Then run deactivate to exit the virtual environment.

deactivate

Deploying to Google Cloud

Deployment overview — after local testing, it's time to deploy the course agent to Google Cloud. 👉💻 Run these commands in the terminal:

cd ~/aidemy-bootstrap/courses
export COURSE_BUCKET_NAME=$(gcloud storage buckets list --format="value(name)" | grep aidemy-recap)
gcloud functions deploy courses-agent \
  --region=us-central1 \
  --gen2 \
  --source=. \
  --runtime=python312 \
  --trigger-topic=plan \
  --entry-point=process_teaching_plan \
  --set-env-vars=GOOGLE_CLOUD_PROJECT=${PROJECT_ID},COURSE_BUCKET_NAME=$COURSE_BUCKET_NAME

Verify the deployment by going to Cloud Run in the Google Cloud Console. You should see a new service named courses-agent listed.

Cloud Run list

To check the trigger configuration, click the courses-agent service to view its details, then go to the "Triggers" tab.

You should see a trigger configured to listen for messages published to the plan topic.

Cloud Run Trigger

Finally, let's see it all working end to end.

👉💻 We need to configure the portal agent so it knows where to find the generated audio files. Run this in the terminal:

export COURSE_BUCKET_NAME=$(gcloud storage buckets list --format="value(name)" | grep aidemy-recap)
export PROJECT_ID=$(gcloud config get project)
gcloud run services update aidemy-portal \
    --region=us-central1 \
    --set-env-vars=GOOGLE_CLOUD_PROJECT=${PROJECT_ID},COURSE_BUCKET_NAME=$COURSE_BUCKET_NAME

👉 Generate a new teaching plan using the planner agent web page. It may take a few minutes to start; don't worry, it's a serverless service.

To access the planner agent, get its Service URL by running this in the terminal:

gcloud run services list \
    --platform=managed \
    --region=us-central1 \
    --format='value(URL)' | grep planner

After generating the new plan, wait 2-3 minutes for the audio to be generated; again, due to the billing limitation of this lab account, it will take a few extra minutes.

You can monitor whether the courses-agent function has received the teaching plan by checking the function's "Triggers" tab. Refresh the page periodically; eventually you should see that the function has been invoked. If the function hasn't been invoked after more than 2 minutes, you can generate the teaching plan again. However, avoid generating plans repeatedly in quick succession, as each generated plan is consumed and processed sequentially by the agent, potentially creating a backlog.

Observe the trigger

👉 Visit the portal and click "Courses". You should see three cards, each offering an audio recap. To find your portal agent's URL:

gcloud run services list \
    --platform=managed \
    --region=us-central1 \
    --format='value(URL)' | grep portal

👉 Click "play" on each course to make sure the audio recaps align with the teaching plan you just generated!

Portal courses

👉💻 Exit the virtual environment.

deactivate

13. Optional: Role-Based Collaboration with Gemini and DeepSeek

Multiple perspectives are invaluable, especially when crafting engaging and thoughtful assignments. We'll now build a multi-agent system that uses two different models with distinct roles to generate assignments: one promotes collaboration and the other encourages self-study. We'll use a "single-shot" architecture, where the workflow follows a fixed route.
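Before wiring in real models, the fixed-route idea can be sketched in plain Python: each node takes the shared state dict, adds its output, and passes it along a route with no branching. The function names and strings below are illustrative stand-ins, not the codelab's actual model-backed nodes:

```python
def gen_assignment_one(state):
    # Stand-in for the Gemini node: emphasizes collaboration.
    state["model_one_assignment"] = f"Group project based on: {state['teaching_plan']}"
    return state

def gen_assignment_two(state):
    # Stand-in for the DeepSeek node: emphasizes self-study.
    state["model_two_assignment"] = f"Individual exercises based on: {state['teaching_plan']}"
    return state

def combine_assignments(state):
    state["final_assignment"] = state["model_one_assignment"] + "\n" + state["model_two_assignment"]
    return state

state = {"teaching_plan": "Week 1: 2D Shapes and Angles"}
for node in (gen_assignment_one, gen_assignment_two, combine_assignments):  # fixed route, no branching
    state = node(state)
print(state["final_assignment"])
```

A graph framework like LangGraph formalizes exactly this shape: nodes that transform shared state, connected by edges that here happen to form a single straight path.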

Gemini Assignment Generator

Gemini overview — we'll start by setting up the Gemini function to generate assignments with an emphasis on collaboration. Edit the gemini.py file located in the assignment folder.

👉💻 Paste the following code at the end of the gemini.py file:

def gen_assignment_gemini(state):
    region = get_next_region()
    client = genai.Client(vertexai=True, project=PROJECT_ID, location=region)
    print(f"---------------gen_assignment_gemini")
    response = client.models.generate_content(
        model=MODEL_ID, contents=f"""
        You are an instructor

        Develop engaging and practical assignments for each week, ensuring they align with the teaching plan's objectives and progressively build upon each other.

        For each week, provide the following:

        * **Week [Number]:** A descriptive title for the assignment (e.g., "Data Exploration Project," "Model Building Exercise").
        * **Learning Objectives Assessed:** List the specific learning objectives from the teaching plan that this assignment assesses.
        * **Description:** A detailed description of the task, including any specific requirements or constraints.  Provide examples or scenarios if applicable.
        * **Deliverables:** Specify what students need to submit (e.g., code, report, presentation).
        * **Estimated Time Commitment:**  The approximate time students should dedicate to completing the assignment.
        * **Assessment Criteria:** Briefly outline how the assignment will be graded (e.g., correctness, completeness, clarity, creativity).

        The assignments should be a mix of individual and collaborative work where appropriate.  Consider different learning styles and provide opportunities for students to apply their knowledge creatively.

        Based on this teaching plan: {state["teaching_plan"]}
        """
    )

    print(f"---------------gen_assignment_gemini answer {response.text}")

    state["model_one_assignment"] = response.text

    return state


import unittest

class TestGenAssignmentGemini(unittest.TestCase):
    def test_gen_assignment_gemini(self):
        test_teaching_plan = "Week 1: 2D Shapes and Angles - Day 1: Review of basic 2D shapes (squares, rectangles, triangles, circles). Day 2: Exploring different types of triangles (equilateral, isosceles, scalene, right-angled). Day 3: Exploring quadrilaterals (square, rectangle, parallelogram, rhombus, trapezium). Day 4: Introduction to angles: right angles, acute angles, and obtuse angles. Day 5: Measuring angles using a protractor. Week 2: 3D Shapes and Symmetry - Day 6: Introduction to 3D shapes: cubes, cuboids, spheres, cylinders, cones, and pyramids. Day 7: Describing 3D shapes using faces, edges, and vertices. Day 8: Relating 2D shapes to 3D shapes. Day 9: Identifying lines of symmetry in 2D shapes. Day 10: Completing symmetrical figures. Week 3: Position, Direction, and Problem Solving - Day 11: Describing position using coordinates in the first quadrant. Day 12: Plotting coordinates to draw shapes. Day 13: Understanding translation (sliding a shape). Day 14: Understanding reflection (flipping a shape). Day 15: Problem-solving activities involving perimeter, area, and missing angles."

        initial_state = {"teaching_plan": test_teaching_plan, "model_one_assignment": "", "model_two_assignment": "", "final_assignment": ""}

        updated_state = gen_assignment_gemini(initial_state)

        self.assertIn("model_one_assignment", updated_state)
        self.assertIsNotNone(updated_state["model_one_assignment"])
        self.assertIsInstance(updated_state["model_one_assignment"], str)
        self.assertGreater(len(updated_state["model_one_assignment"]), 0)
        print(updated_state["model_one_assignment"])


if __name__ == '__main__':
    unittest.main()

This uses the Gemini model to generate the assignments.

We're ready to test the Gemini agent.

👉💻 Run these commands in the terminal to set up the environment:

cd ~/aidemy-bootstrap/assignment
export PROJECT_ID=$(gcloud config get project)
python -m venv env
source env/bin/activate
pip install -r requirements.txt

👉💻 Run this to test it:

python gemini.py

You should see an assignment with more group work in its output. The assertion test at the end will also print the results.

Here are some engaging and practical assignments for each week, designed to build progressively upon the teaching plan's objectives:

**Week 1: Exploring the World of 2D Shapes**

* **Learning Objectives Assessed:**
    * Identify and name basic 2D shapes (squares, rectangles, triangles, circles).
    * .....

* **Description:**
    * **Shape Scavenger Hunt:** Students will go on a scavenger hunt in their homes or neighborhoods, taking pictures of objects that represent different 2D shapes. They will then create a presentation or poster showcasing their findings, classifying each shape and labeling its properties (e.g., number of sides, angles, etc.).
    * **Triangle Trivia:** Students will research and create a short quiz or presentation about different types of triangles, focusing on their properties and real-world examples.
    * **Angle Exploration:** Students will use a protractor to measure various angles in their surroundings, such as corners of furniture, windows, or doors. They will record their measurements and create a chart categorizing the angles as right, acute, or obtuse.
....

**Week 2: Delving into the World of 3D Shapes and Symmetry**

* **Learning Objectives Assessed:**
    * Identify and name basic 3D shapes.
    * ....

* **Description:**
    * **3D Shape Construction:** Students will work in groups to build 3D shapes using construction paper, cardboard, or other materials. They will then create a presentation showcasing their creations, describing the number of faces, edges, and vertices for each shape.
    * **Symmetry Exploration:** Students will investigate the concept of symmetry by creating a visual representation of various symmetrical objects (e.g., butterflies, leaves, snowflakes) using drawing or digital tools. They will identify the lines of symmetry and explain their findings.
    * **Symmetry Puzzles:** Students will be given a half-image of a symmetrical figure and will be asked to complete the other half, demonstrating their understanding of symmetry. This can be done through drawing, cut-out activities, or digital tools.

**Week 3: Navigating Position, Direction, and Problem Solving**

* **Learning Objectives Assessed:**
    * Describe position using coordinates in the first quadrant.
    * ....

* **Description:**
    * **Coordinate Maze:** Students will create a maze using coordinates on a grid paper. They will then provide directions for navigating the maze using a combination of coordinate movements and translation/reflection instructions.
    * **Shape Transformations:** Students will draw shapes on a grid paper and then apply transformations such as translation and reflection, recording the new coordinates of the transformed shapes.
    * **Geometry Challenge:** Students will solve real-world problems involving perimeter, area, and angles. For example, they could be asked to calculate the perimeter of a room, the area of a garden, or the missing angle in a triangle.
....

Stop it with Ctrl+C and clean up the test code. Remove the following code from gemini.py:

import unittest

class TestGenAssignmentGemini(unittest.TestCase):
    def test_gen_assignment_gemini(self):
        test_teaching_plan = "Week 1: 2D Shapes and Angles - Day 1: Review of basic 2D shapes (squares, rectangles, triangles, circles). Day 2: Exploring different types of triangles (equilateral, isosceles, scalene, right-angled). Day 3: Exploring quadrilaterals (square, rectangle, parallelogram, rhombus, trapezium). Day 4: Introduction to angles: right angles, acute angles, and obtuse angles. Day 5: Measuring angles using a protractor. Week 2: 3D Shapes and Symmetry - Day 6: Introduction to 3D shapes: cubes, cuboids, spheres, cylinders, cones, and pyramids. Day 7: Describing 3D shapes using faces, edges, and vertices. Day 8: Relating 2D shapes to 3D shapes. Day 9: Identifying lines of symmetry in 2D shapes. Day 10: Completing symmetrical figures. Week 3: Position, Direction, and Problem Solving - Day 11: Describing position using coordinates in the first quadrant. Day 12: Plotting coordinates to draw shapes. Day 13: Understanding translation (sliding a shape). Day 14: Understanding reflection (flipping a shape). Day 15: Problem-solving activities involving perimeter, area, and missing angles."

        initial_state = {"teaching_plan": test_teaching_plan, "model_one_assignment": "", "model_two_assignment": "", "final_assignment": ""}

        updated_state = gen_assignment_gemini(initial_state)

        self.assertIn("model_one_assignment", updated_state)
        self.assertIsNotNone(updated_state["model_one_assignment"])
        self.assertIsInstance(updated_state["model_one_assignment"], str)
        self.assertGreater(len(updated_state["model_one_assignment"]), 0)
        print(updated_state["model_one_assignment"])


if __name__ == '__main__':
    unittest.main()

Configure the DeepSeek assignment generator

While cloud-based AI platforms are convenient, self-hosted LLMs can be critical for protecting data privacy and ensuring data sovereignty. We'll deploy the smallest DeepSeek model (1.5B parameters) on a Compute Engine instance. There are other ways, such as hosting it on Google's Vertex AI platform or hosting it on your GKE instance, but since this is just a workshop about AI agents, and I don't want to keep you here forever, let's just use the simplest way. But if you are interested and want to dig into other options, take a look at the deepseek-vertexai.py file under the assignment folder, which provides sample code for how to interact with models deployed on Vertex AI.

DeepSeek overview

👉 Run this command in the terminal to create a self-hosted Ollama LLM platform:

cd ~/aidemy-bootstrap/assignment
gcloud compute instances create ollama-instance \
    --image-family=ubuntu-2204-lts \
    --image-project=ubuntu-os-cloud \
    --machine-type=e2-standard-4 \
    --zone=us-central1-a \
    --metadata-from-file startup-script=startup.sh \
    --boot-disk-size=50GB \
    --tags=ollama \
    --scopes=https://www.googleapis.com/auth/cloud-platform

👉 To verify the Compute Engine instance is running:

Navigate to Compute Engine > "VM instances" in the Google Cloud console. You should see ollama-instance listed with a green check mark indicating it is running. If you can't see it, make sure the zone is us-central1-a. If it's not, you may need to search for it.

Compute Engine list

We'll install the smallest DeepSeek model and test it. Back in the Cloud Shell Editor, in a new terminal, run the following command to SSH into the GCE instance:

gcloud compute ssh ollama-instance --zone=us-central1-a

Once the SSH connection is established, you may be prompted with:

"Do you want to continue (Y/n)?"

Simply type Y (case sensitive) and press Enter to continue.

Next, you may be asked to create a passphrase for the SSH key. If you prefer not to use a passphrase, just press Enter twice to accept the default (no passphrase).

👉 Now that you're in the virtual machine, pull the smallest DeepSeek R1 model and test whether it works:

ollama pull deepseek-r1:1.5b
ollama run deepseek-r1:1.5b "who are you?"

👉 Exit the GCE instance in the SSH terminal:

exit

👉 Next, set up the network policy so that other services can access the LLM. Please restrict access to the instance if you want to do this for production — either implement security sign-in for the service or restrict IP access. Run:

gcloud compute firewall-rules create allow-ollama-11434 \
    --allow=tcp:11434 \
    --target-tags=ollama \
    --description="Allow access to Ollama on port 11434"

To verify that your firewall policy is working correctly, try running:

export OLLAMA_HOST=http://$(gcloud compute instances describe ollama-instance --zone=us-central1-a --format='value(networkInterfaces[0].accessConfigs[0].natIP)'):11434
curl -X POST "${OLLAMA_HOST}/api/generate" \
    -H "Content-Type: application/json" \
    -d '{
        "prompt": "Hello, what are you?",
        "model": "deepseek-r1:1.5b",
        "stream": false
    }'
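The same request can be issued from Python using only the standard library, which is handy for scripting a quick health check of the Ollama endpoint. This is a sketch, not part of the codelab code: the helper names `build_ollama_request` and `ask_ollama` are ours, and it assumes `OLLAMA_HOST` is exported as shown above.

```python
import json
import urllib.request

def build_ollama_request(host: str, prompt: str, model: str = "deepseek-r1:1.5b"):
    """Build the URL and JSON body for Ollama's /api/generate endpoint."""
    url = f"{host}/api/generate"
    body = json.dumps({"prompt": prompt, "model": model, "stream": False}).encode("utf-8")
    return url, body

def ask_ollama(host: str, prompt: str) -> str:
    """Send the request and return the generated text (the "response" field)."""
    url, body = build_ollama_request(host, prompt)
    req = urllib.request.Request(url, data=body, headers={"Content-Type": "application/json"})
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]

# Usage (requires a reachable instance):
#   import os; print(ask_ollama(os.environ["OLLAMA_HOST"], "Hello, what are you?"))
```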

Next, we'll work on the DeepSeek function in the assignment agent to generate assignments with an emphasis on individual work.

👉 Edit deepseek.py under the assignment folder and add the following snippet to the end:

def gen_assignment_deepseek(state):
    print(f"---------------gen_assignment_deepseek")

    template = """
        You are an instructor who favor student to focus on individual work.

        Develop engaging and practical assignments for each week, ensuring they align with the teaching plan's objectives and progressively build upon each other.

        For each week, provide the following:

        * **Week [Number]:** A descriptive title for the assignment (e.g., "Data Exploration Project," "Model Building Exercise").
        * **Learning Objectives Assessed:** List the specific learning objectives from the teaching plan that this assignment assesses.
        * **Description:** A detailed description of the task, including any specific requirements or constraints.  Provide examples or scenarios if applicable.
        * **Deliverables:** Specify what students need to submit (e.g., code, report, presentation).
        * **Estimated Time Commitment:**  The approximate time students should dedicate to completing the assignment.
        * **Assessment Criteria:** Briefly outline how the assignment will be graded (e.g., correctness, completeness, clarity, creativity).

        The assignments should be a mix of individual and collaborative work where appropriate.  Consider different learning styles and provide opportunities for students to apply their knowledge creatively.

        Based on this teaching plan: {teaching_plan}
        """

    prompt = ChatPromptTemplate.from_template(template)

    model = OllamaLLM(model="deepseek-r1:1.5b", base_url=OLLAMA_HOST)

    chain = prompt | model

    response = chain.invoke({"teaching_plan": state["teaching_plan"]})
    state["model_two_assignment"] = response

    return state

import unittest

class TestGenAssignmentDeepseek(unittest.TestCase):
    def test_gen_assignment_deepseek(self):
        test_teaching_plan = "Week 1: 2D Shapes and Angles - Day 1: Review of basic 2D shapes (squares, rectangles, triangles, circles). Day 2: Exploring different types of triangles (equilateral, isosceles, scalene, right-angled). Day 3: Exploring quadrilaterals (square, rectangle, parallelogram, rhombus, trapezium). Day 4: Introduction to angles: right angles, acute angles, and obtuse angles. Day 5: Measuring angles using a protractor. Week 2: 3D Shapes and Symmetry - Day 6: Introduction to 3D shapes: cubes, cuboids, spheres, cylinders, cones, and pyramids. Day 7: Describing 3D shapes using faces, edges, and vertices. Day 8: Relating 2D shapes to 3D shapes. Day 9: Identifying lines of symmetry in 2D shapes. Day 10: Completing symmetrical figures. Week 3: Position, Direction, and Problem Solving - Day 11: Describing position using coordinates in the first quadrant. Day 12: Plotting coordinates to draw shapes. Day 13: Understanding translation (sliding a shape). Day 14: Understanding reflection (flipping a shape). Day 15: Problem-solving activities involving perimeter, area, and missing angles."

        initial_state = {"teaching_plan": test_teaching_plan, "model_one_assignment": "", "model_two_assignment": "", "final_assignment": ""}

        updated_state = gen_assignment_deepseek(initial_state)

        self.assertIn("model_two_assignment", updated_state)
        self.assertIsNotNone(updated_state["model_two_assignment"])
        self.assertIsInstance(updated_state["model_two_assignment"], str)
        self.assertGreater(len(updated_state["model_two_assignment"]), 0)
        print(updated_state["model_two_assignment"])


if __name__ == '__main__':
    unittest.main()

👉 Test it by running:

cd ~/aidemy-bootstrap/assignment
source env/bin/activate
export PROJECT_ID=$(gcloud config get project)
export OLLAMA_HOST=http://$(gcloud compute instances describe ollama-instance --zone=us-central1-a --format='value(networkInterfaces[0].accessConfigs[0].natIP)'):11434
python deepseek.py

You should see an assignment that features more individual study work.

**Assignment Plan for Each Week**

---

### **Week 1: 2D Shapes and Angles**
- **Week Title:** "Exploring 2D Shapes"
Assign students to research and present on various 2D shapes. Include a project where they create models using straws and tape for triangles, draw quadrilaterals with specific measurements, and compare their properties.

### **Week 2: 3D Shapes and Symmetry**
Assign students to create models or nets for cubes and cuboids. They will also predict how folding these nets form the 3D shapes. Include a project where they identify symmetrical properties using mirrors or folding techniques.

### **Week 3: Position, Direction, and Problem Solving**

Assign students to use mirrors or folding techniques for reflections. Include activities where they measure angles, use a protractor, solve problems involving perimeter/area, and create symmetrical designs.
....

Stop it with Ctrl+C and clean up the test code. Remove the following code from deepseek.py:

import unittest

class TestGenAssignmentDeepseek(unittest.TestCase):
    def test_gen_assignment_deepseek(self):
        test_teaching_plan = "Week 1: 2D Shapes and Angles - Day 1: Review of basic 2D shapes (squares, rectangles, triangles, circles). Day 2: Exploring different types of triangles (equilateral, isosceles, scalene, right-angled). Day 3: Exploring quadrilaterals (square, rectangle, parallelogram, rhombus, trapezium). Day 4: Introduction to angles: right angles, acute angles, and obtuse angles. Day 5: Measuring angles using a protractor. Week 2: 3D Shapes and Symmetry - Day 6: Introduction to 3D shapes: cubes, cuboids, spheres, cylinders, cones, and pyramids. Day 7: Describing 3D shapes using faces, edges, and vertices. Day 8: Relating 2D shapes to 3D shapes. Day 9: Identifying lines of symmetry in 2D shapes. Day 10: Completing symmetrical figures. Week 3: Position, Direction, and Problem Solving - Day 11: Describing position using coordinates in the first quadrant. Day 12: Plotting coordinates to draw shapes. Day 13: Understanding translation (sliding a shape). Day 14: Understanding reflection (flipping a shape). Day 15: Problem-solving activities involving perimeter, area, and missing angles."

        initial_state = {"teaching_plan": test_teaching_plan, "model_one_assignment": "", "model_two_assignment": "", "final_assignment": ""}

        updated_state = gen_assignment_deepseek(initial_state)

        self.assertIn("model_two_assignment", updated_state)
        self.assertIsNotNone(updated_state["model_two_assignment"])
        self.assertIsInstance(updated_state["model_two_assignment"], str)
        self.assertGreater(len(updated_state["model_two_assignment"]), 0)
        print(updated_state["model_two_assignment"])


if __name__ == '__main__':
    unittest.main()

Now, we'll use the same Gemini model to combine both assignments into a new one. Edit the gemini.py file located in the assignment folder.

👉 Add the following code to the end of the gemini.py file:

def combine_assignments(state):
    print(f"---------------combine_assignments ")
    region = get_next_region()
    client = genai.Client(vertexai=True, project=PROJECT_ID, location=region)
    response = client.models.generate_content(
        model=MODEL_ID, contents=f"""
        Look at all the proposed assignment so far {state["model_one_assignment"]} and {state["model_two_assignment"]}, combine them and come up with a final assignment for student.
        """
    )

    state["final_assignment"] = response.text

    return state

To combine the strengths of both models, we'll orchestrate a defined workflow using LangGraph. This workflow consists of three steps: first, the Gemini model generates an assignment centered on collaboration; second, the DeepSeek model generates an assignment emphasizing individual work; finally, Gemini combines the two assignments into a single, comprehensive assignment. Because we predefine the sequence of steps without any LLM decision-making, this is a single-path, user-defined orchestration.
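The three nodes communicate through a shared state object. The codelab's starter code already defines this `State`; the sketch below is only an illustration of what it needs to carry, using `typing.TypedDict`, with the field names taken from the node functions and tests in this section.

```python
from typing import TypedDict

class State(TypedDict):
    """Shared state passed between the three LangGraph nodes."""
    teaching_plan: str          # input: the plan published by the planner agent
    model_one_assignment: str   # filled in by gen_assignment_gemini
    model_two_assignment: str   # filled in by gen_assignment_deepseek
    final_assignment: str       # filled in by combine_assignments

# The graph is invoked with only the teaching plan; the other fields start empty.
state: State = {
    "teaching_plan": "Week 1: 2D Shapes and Angles ...",
    "model_one_assignment": "",
    "model_two_assignment": "",
    "final_assignment": "",
}
```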

LangGraph overview

👉 Add the following code to the end of the main.py file under the assignment folder:

def create_assignment(teaching_plan: str):
    print(f"create_assignment---->{teaching_plan}")
    builder = StateGraph(State)
    builder.add_node("gen_assignment_gemini", gen_assignment_gemini)
    builder.add_node("gen_assignment_deepseek", gen_assignment_deepseek)
    builder.add_node("combine_assignments", combine_assignments)

    builder.add_edge(START, "gen_assignment_gemini")
    builder.add_edge("gen_assignment_gemini", "gen_assignment_deepseek")
    builder.add_edge("gen_assignment_deepseek", "combine_assignments")
    builder.add_edge("combine_assignments", END)

    graph = builder.compile()
    state = graph.invoke({"teaching_plan": teaching_plan})

    return state["final_assignment"]


import unittest

class TestCreateAssignment(unittest.TestCase):
    def test_create_assignment(self):
        test_teaching_plan = "Week 1: 2D Shapes and Angles - Day 1: Review of basic 2D shapes (squares, rectangles, triangles, circles). Day 2: Exploring different types of triangles (equilateral, isosceles, scalene, right-angled). Day 3: Exploring quadrilaterals (square, rectangle, parallelogram, rhombus, trapezium). Day 4: Introduction to angles: right angles, acute angles, and obtuse angles. Day 5: Measuring angles using a protractor. Week 2: 3D Shapes and Symmetry - Day 6: Introduction to 3D shapes: cubes, cuboids, spheres, cylinders, cones, and pyramids. Day 7: Describing 3D shapes using faces, edges, and vertices. Day 8: Relating 2D shapes to 3D shapes. Day 9: Identifying lines of symmetry in 2D shapes. Day 10: Completing symmetrical figures. Week 3: Position, Direction, and Problem Solving - Day 11: Describing position using coordinates in the first quadrant. Day 12: Plotting coordinates to draw shapes. Day 13: Understanding translation (sliding a shape). Day 14: Understanding reflection (flipping a shape). Day 15: Problem-solving activities involving perimeter, area, and missing angles."

        final_assignment = create_assignment(test_teaching_plan)

        print(final_assignment)


if __name__ == '__main__':
    unittest.main()

👉 To first test the create_assignment function and verify that the workflow combining Gemini and DeepSeek works, run the following command:

cd ~/aidemy-bootstrap/assignment
source env/bin/activate
pip install -r requirements.txt
python main.py

You should see something that combines both models, with their individual perspectives: one geared toward individual student study and one toward student group work.

**Tasks:**

1. **Clue Collection:** Gather all the clues left by the thieves. These clues will include:
    * Descriptions of shapes and their properties (angles, sides, etc.)
    * Coordinate grids with hidden messages
    * Geometric puzzles requiring transformation (translation, reflection, rotation)
    * Challenges involving area, perimeter, and angle calculations

2. **Clue Analysis:** Decipher each clue using your geometric knowledge. This will involve:
    * Identifying the shape and its properties
    * Plotting coordinates and interpreting patterns on the grid
    * Solving geometric puzzles by applying transformations
    * Calculating area, perimeter, and missing angles

3. **Case Report:** Create a comprehensive case report outlining your findings. This report should include:
    * A detailed explanation of each clue and its solution
    * Sketches and diagrams to support your explanations
    * A step-by-step account of how you followed the clues to locate the artifact
    * A final conclusion about the thieves and their motives

Stop it with Ctrl+C and clean up the test code. Remove the following code from main.py:

import unittest

class TestCreateAssignment(unittest.TestCase):
    def test_create_assignment(self):
        test_teaching_plan = "Week 1: 2D Shapes and Angles - Day 1: Review of basic 2D shapes (squares, rectangles, triangles, circles). Day 2: Exploring different types of triangles (equilateral, isosceles, scalene, right-angled). Day 3: Exploring quadrilaterals (square, rectangle, parallelogram, rhombus, trapezium). Day 4: Introduction to angles: right angles, acute angles, and obtuse angles. Day 5: Measuring angles using a protractor. Week 2: 3D Shapes and Symmetry - Day 6: Introduction to 3D shapes: cubes, cuboids, spheres, cylinders, cones, and pyramids. Day 7: Describing 3D shapes using faces, edges, and vertices. Day 8: Relating 2D shapes to 3D shapes. Day 9: Identifying lines of symmetry in 2D shapes. Day 10: Completing symmetrical figures. Week 3: Position, Direction, and Problem Solving - Day 11: Describing position using coordinates in the first quadrant. Day 12: Plotting coordinates to draw shapes. Day 13: Understanding translation (sliding a shape). Day 14: Understanding reflection (flipping a shape). Day 15: Problem-solving activities involving perimeter, area, and missing angles."

        final_assignment = create_assignment(test_teaching_plan)

        print(final_assignment)


if __name__ == '__main__':
    unittest.main()

Generate assignment overview

To make assignment generation automated and responsive to new teaching plans, we'll use the existing event-driven architecture. The following code defines a Cloud Run function (generate_assignment) that will be triggered whenever a new teaching plan is published to the Pub/Sub topic "plan".
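Pub/Sub delivers the teaching plan as base64-encoded JSON inside `message.data` of the CloudEvent. A small standalone sketch of the encode/decode round trip involved (the sample plan string here is illustrative only):

```python
import base64
import json

# Encode: what the publishing side (or the local curl test later in this section)
# effectively does with the teaching plan.
teaching_plan = "Week 1: 2D Shapes and Angles - Day 1: Review of basic 2D shapes."
encoded = base64.b64encode(
    json.dumps({"teaching_plan": teaching_plan}).encode("utf-8")
).decode("utf-8")

# Decode: what the function does when the CloudEvent arrives.
cloud_event_data = {"message": {"data": encoded}}
decoded = json.loads(base64.b64decode(cloud_event_data["message"]["data"]).decode("utf-8"))
print(decoded["teaching_plan"])
```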

👉 Add the following code to the end of main.py in the assignment folder:

@functions_framework.cloud_event
def generate_assignment(cloud_event):
    print(f"CloudEvent received: {cloud_event.data}")

    try:
        if isinstance(cloud_event.data.get('message', {}).get('data'), str):
            data = json.loads(base64.b64decode(cloud_event.data['message']['data']).decode('utf-8'))
            teaching_plan = data.get('teaching_plan')
        elif 'teaching_plan' in cloud_event.data:
            teaching_plan = cloud_event.data["teaching_plan"]
        else:
            raise KeyError("teaching_plan not found")

        assignment = create_assignment(teaching_plan)

        print(f"Assignment---->{assignment}")

        # Store the returned assignment in the bucket as a text file
        storage_client = storage.Client()
        bucket = storage_client.bucket(ASSIGNMENT_BUCKET)
        file_name = f"assignment-{random.randint(1, 1000)}.txt"
        blob = bucket.blob(file_name)
        blob.upload_from_string(assignment)

        return f"Assignment generated and stored in {ASSIGNMENT_BUCKET}/{file_name}", 200

    except (json.JSONDecodeError, AttributeError, KeyError) as e:
        print(f"Error decoding CloudEvent data: {e} - Data: {cloud_event.data}")
        return "Error processing event", 500

    except Exception as e:
        print(f"Error generating assignment: {e}")
        return "Error generating assignment", 500
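One caveat in the handler above: `random.randint(1, 1000)` can produce duplicate object names and silently overwrite an earlier assignment. If that matters for your setup, a collision-resistant name is easy to generate. The helper below is our own suggestion, not part of the codelab code:

```python
import uuid
from datetime import datetime, timezone

def unique_assignment_name() -> str:
    """Timestamped, uuid-suffixed object name, e.g. assignment-20250313T120000-3f2a1b9c.txt"""
    stamp = datetime.now(timezone.utc).strftime("%Y%m%dT%H%M%S")
    return f"assignment-{stamp}-{uuid.uuid4().hex[:8]}.txt"

# Drop-in replacement for the file_name line in generate_assignment:
name = unique_assignment_name()
```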

Local testing

Before deploying to Google Cloud, it's good practice to test the Cloud Run function locally. This allows for faster iteration and easier debugging.

First, create a Cloud Storage bucket to store the generated assignment files, and grant the service account access to the bucket. Run the following commands in the terminal:

👉 Important: Make sure you define a unique name that starts with "aidemy-assignment-". This unique name is crucial to avoid naming conflicts when creating your Cloud Storage bucket. (Replace <YOUR_NAME> with any random word.)

export ASSIGNMENT_BUCKET=aidemy-assignment-<YOUR_NAME> # Name must be unique

👉 And run:

export PROJECT_ID=$(gcloud config get project)
export SERVICE_ACCOUNT_NAME=$(gcloud compute project-info describe --format="value(defaultServiceAccount)")
gsutil mb -p $PROJECT_ID -l us-central1 gs://$ASSIGNMENT_BUCKET

gcloud storage buckets add-iam-policy-binding gs://$ASSIGNMENT_BUCKET \
    --member "serviceAccount:$SERVICE_ACCOUNT_NAME" \
    --role "roles/storage.objectViewer"

gcloud storage buckets add-iam-policy-binding gs://$ASSIGNMENT_BUCKET \
    --member "serviceAccount:$SERVICE_ACCOUNT_NAME" \
    --role "roles/storage.objectCreator"

👉 Start the Cloud Run function emulator:

cd ~/aidemy-bootstrap/assignment
functions-framework \
    --target generate_assignment \
    --signature-type=cloudevent \
    --source main.py

👉 While the emulator is running in one terminal, open a second terminal in Cloud Shell. In this second terminal, send a test CloudEvent to the emulator to simulate a new teaching plan being published:

Two terminals

curl -X POST \
  http://localhost:8080/ \
  -H "Content-Type: application/json" \
  -H "ce-id: event-id-01" \
  -H "ce-source: planner-agent" \
  -H "ce-specversion: 1.0" \
  -H "ce-type: google.cloud.pubsub.topic.v1.messagePublished" \
  -d '{
    "message": {
"data": "eyJ0ZWFjaGluZ19wbGFuIjogIldlZWsgMTogMkQgU2hhcGVzIGFuZCBBbmdsZXMgLSBEYXkgMTogUmV2aWV3IG9mIGJhc2ljIDJEIHNoYXBlcyAoc3F1YXJlcywgcmVjdGFuZ2xlcywgdHJpYW5nbGVzLCBjaXJjbGVzKS4gRGF5IDI6IEV4cGxvcmluZyBkaWZmZXJlbnQgdHlwZXMgb2YgdHJpYW5nbGVzIChlcXVpbGF0ZXJhbCwgaXNvc2NlbGVzLCBzY2FsZW5lLCByaWdodC1hbmdsZWQpLiBEYXkgMzogRXhwbG9yaW5nIHF1YWRyaWxhdGVyYWxzIChzcXVhcmUsIHJlY3RhbmdsZSwgcGFyYWxsZWxvZ3JhbSwgcmhvbWJ1cywgdHJhcGV6aXVtKS4gRGF5IDQ6IEludHJvZHVjdGlvbiB0byBhbmdsZXM6IHJpZ2h0IGFuZ2xlcywgYWN1dGUgYW5nbGVzLCBhbmQgb2J0dXNlIGFuZ2xlcy4gRGF5IDU6IE1lYXN1cmluZyBhbmdsZXMgdXNpbmcgYSBwcm90cmFjdG9yLiBXZWVrIDI6IDNEIFNoYXBlcyBhbmQgU3ltbWV0cnkgLSBEYXkgNjogSW50cm9kdWN0aW9uIHRvIDNEIHNoYXBlczogY3ViZXMsIGN1Ym9pZHMsIHNwaGVyZXMsIGN5bGluZGVycywgY29uZXMsIGFuZCBweXJhbWlkcy4gRGF5IDc6IERlc2NyaWJpbmcgM0Qgc2hhcGVzIHVzaW5nIGZhY2VzLCBlZGdlcywgYW5kIHZlcnRpY2VzLiBEYXkgODogUmVsYXRpbmcgMkQgc2hhcGVzIHRvIDNEIHNoYXBlcy4gRGF5IDk6IElkZW50aWZ5aW5nIGxpbmVzIG9mIHN5bW1ldHJ5IGluIDJEIHNoYXBlcy4gRGF5IDEwOiBDb21wbGV0aW5nIHN5bW1ldHJpY2FsIGZpZ3VyZXMuIFdlZWsgMzogUG9zaXRpb24sIERpcmVjdGlvbiwgYW5kIFByb2JsZW0gU29sdmluZyAtIERheSAxMTogRGVzY3JpYmluZyBwb3NpdGlvbiB1c2luZyBjb29yZGluYXRlcyBpbiB0aGUgZmlyc3QgcXVhZHJhbnQuIERheSAxMjogUGxvdHRpbmcgY29vcmRpbmF0ZXMgdG8gZHJhdyBzaGFwZXMuIERheSAxMzogVW5kZXJzdGFuZGluZyB0cmFuc2xhdGlvbiAoc2xpZGluZyBhIHNoYXBlKS4gRGF5IDE0OiBVbmRlcnN0YW5kaW5nIHJlZmxlY3Rpb24gKGZsaXBwaW5nIGEgc2hhcGUpLiBEYXkgMTU6IFByb2JsZW0tc29sdmluZyBhY3Rpdml0aWVzIGludm9sdmluZyBwZXJpbWV0ZXIsIGFyZWEsIGFuZCBtaXNzaW5nIGFuZ2xlcy4ifQ=="
   
    }
  }'

Instead of staring at a blank screen while waiting for the response, switch to the other Cloud Shell terminal. There you can watch the progress and any output or error messages generated by your function in the emulator terminal. 😁

The curl command should print "OK" (with no trailing newline, so "OK" may appear on the same line as your terminal's shell prompt).

To verify that the assignment was successfully generated and stored, go to the Google Cloud console and navigate to Storage > "Cloud Storage". Select the aidemy-assignment bucket you created. You should see a text file named assignment-{random number}.txt in the bucket. Click on the file to download it and verify its contents. This confirms that a new file contains the assignment just generated.

12-01-Assignment-Bucket

In the terminal running the emulator, type Ctrl+C to exit, and close the second terminal. 👉 Also, in the terminal that ran the emulator, exit the virtual environment:

deactivate

Deployment overview

👉 Next, we'll deploy the assignment agent to the cloud:

cd ~/aidemy-bootstrap/assignment
export ASSIGNMENT_BUCKET=$(gcloud storage buckets list --format="value(name)" | grep aidemy-assignment)
export OLLAMA_HOST=http://$(gcloud compute instances describe ollama-instance --zone=us-central1-a --format='value(networkInterfaces[0].accessConfigs[0].natIP)'):11434
export PROJECT_ID=$(gcloud config get project)
gcloud functions deploy assignment-agent \
    --gen2 \
    --timeout=540 \
    --memory=2Gi \
    --cpu=1 \
    --set-env-vars="ASSIGNMENT_BUCKET=${ASSIGNMENT_BUCKET}" \
    --set-env-vars=GOOGLE_CLOUD_PROJECT=${PROJECT_ID} \
    --set-env-vars=OLLAMA_HOST=${OLLAMA_HOST} \
    --region=us-central1 \
    --runtime=python312 \
    --source=. \
    --entry-point=generate_assignment \
    --trigger-topic=plan

Verify the deployment by going to the Google Cloud console and navigating to Cloud Run. You should see a new service named assignment-agent listed.

Cloud Run service list

With the assignment generation workflow now implemented, tested, and deployed, we can move on to the next step: making these assignments available in the student portal.

14. Optional: Role-based collaboration with Gemini and DeepSeek - Contd.

Dynamic website generation

To enhance the student portal and make it more engaging, we'll implement dynamic HTML generation for assignment pages. The goal is to automatically update the portal with a fresh, visually appealing design whenever a new assignment is generated. This leverages the LLM's coding capabilities to create a more dynamic and interesting user experience.

14-01-generate-html

👉In Cloud Shell Editor, edit the render.py file within the portal folder, replace

def render_assignment_page():
    return ""

with following code snippet:

def render_assignment_page(assignment: str):
    try:
        region = get_next_region()
        llm = VertexAI(model_name="gemini-2.0-flash-001", location=region)
        input_msg = HumanMessage(content=[f"Here the assignment {assignment}"])
        prompt_template = ChatPromptTemplate.from_messages(
            [
                SystemMessage(
                    content=(
                        """
                        As a frontend developer, create HTML to display a student assignment with a creative look and feel. Include the following navigation bar at the top:
                        ```
                        <nav>
                            <a href="/">Home</a>
                            <a href="/quiz">Quizzes</a>
                            <a href="/courses">Courses</a>
                            <a href="/assignment">Assignments</a>
                        </nav>
                        ```
                        Also include these links in the <head> section:
                        ```
                        <link rel="stylesheet" href="{{ url_for('static', filename='style.css') }}">
                        <link rel="preconnect" href="https://fonts.googleapis.com">
                        <link rel="preconnect" href="https://fonts.gstatic.com" crossorigin>
                        <link href="https://fonts.googleapis.com/css2?family=Roboto:wght@400;500&display=swap" rel="stylesheet">
                        ```
                        Do not apply inline styles to the navigation bar.
                        The HTML should display the full assignment content. In its CSS, be creative with the rainbow colors and aesthetic.
                        Make it creative and pretty
                        The assignment content should be well-structured and easy to read.
                        respond with JUST the html file
                        """
                    )
                ),
                input_msg,
            ]
        )

        prompt = prompt_template.format()

        response = llm.invoke(prompt)

        response = response.replace("```html", "")
        response = response.replace("```", "")
        with open("templates/assignment.html", "w") as f:
            f.write(response)

        print(f"response: {response}")

        return response
    except Exception as e:
        print(f"Error sending message to chatbot: {e}")  # Log this error too!
        return f"Unable to process your request at this time. Due to the following reason: {str(e)}"

It uses the Gemini model to dynamically generate HTML for the assignment. It takes the assignment content as input and uses a prompt to instruct Gemini to create a visually appealing HTML page with a creative style.
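One thing to note: the naive `replace("```", "")` calls above also delete any triple backticks that legitimately appear inside the generated page. A slightly more careful variant strips only a single wrapping code fence. This is a sketch of an alternative, and the helper name is ours, not the codelab's:

```python
import re

def strip_code_fence(text: str) -> str:
    """Remove one wrapping ```html ... ``` fence if present, and nothing else."""
    match = re.match(r"^\s*```(?:html)?\s*\n(.*?)\n?\s*```\s*$", text, re.DOTALL)
    return match.group(1) if match else text

# A fenced LLM response is unwrapped; unfenced text passes through untouched.
html = strip_code_fence("```html\n<h1>Assignment</h1>\n```")
```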

Next, we'll create an endpoint that will be triggered whenever a new document is added to the assignment bucket:

👉Within the portal folder, edit the app.py file and add the following code between the "## Add your code here" comments, AFTER the new_teaching_plan function:

## Add your code here

def new_teaching_plan():
        ...
        ...
        ...

    except Exception as e:
        ...
        ...

@app.route('/render_assignment', methods=['POST'])
def render_assignment():
    try:
        data = request.get_json()
        file_name = data.get('name')
        bucket_name = data.get('bucket')

        if not file_name or not bucket_name:
            return jsonify({'error': 'Missing file name or bucket name'}), 400

        storage_client = storage.Client()
        bucket = storage_client.bucket(bucket_name)
        blob = bucket.blob(file_name)
        content = blob.download_as_text()

        print(f"File content: {content}")

        render_assignment_page(content)

        return jsonify({'message': 'Assignment rendered successfully'})

    except Exception as e:
        print(f"Error processing file: {e}")
        return jsonify({'error': 'Error processing file'}), 500

## Add your code here

When triggered, it retrieves the file name and bucket name from the request data, downloads the assignment content from Cloud Storage, and calls the render_assignment_page function to generate the HTML.
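For reference, the JSON body that Eventarc delivers for a google.cloud.storage.object.v1.finalized event carries the object metadata directly, including the name and bucket fields the endpoint reads. A minimal, self-contained sketch of that extraction step (the payload values below are made up):

```python
import json

# Made-up example of the JSON body Eventarc POSTs when an object is finalized.
event_body = json.dumps({
    "bucket": "my-assignment-bucket",
    "name": "assignment-2025-03-13.txt",
    "contentType": "text/plain",
})

def extract_object_ref(body: str):
    """Pull the bucket and object name out of the event payload,
    mirroring what /render_assignment does with request.get_json()."""
    data = json.loads(body)
    file_name = data.get("name")
    bucket_name = data.get("bucket")
    if not file_name or not bucket_name:
        raise ValueError("Missing file name or bucket name")
    return bucket_name, file_name

print(extract_object_ref(event_body))  # → ('my-assignment-bucket', 'assignment-2025-03-13.txt')
```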

👉We'll go ahead and run it locally:

cd ~/aidemy-bootstrap/portal
source env/bin/activate
python app.py

👉From the "Web preview" menu at the top of the Cloud Shell window, select "Preview on port 8080". This will open your application in a new browser tab. Navigate to the Assignment link in the navigation bar. You should see a blank page at this point, which is expected behavior since we haven't yet established the communication bridge between the assignment agent and the portal to dynamically populate the content.

14-02-deployment-overview

Go ahead and stop the script by pressing Ctrl+C.

👉To incorporate these changes and deploy the updated code, rebuild and push the portal agent image:

cd ~/aidemy-bootstrap/portal/
export PROJECT_ID=$(gcloud config get project)
docker build -t gcr.io/${PROJECT_ID}/aidemy-portal .
docker tag gcr.io/${PROJECT_ID}/aidemy-portal us-central1-docker.pkg.dev/${PROJECT_ID}/agent-repository/aidemy-portal
docker push us-central1-docker.pkg.dev/${PROJECT_ID}/agent-repository/aidemy-portal

👉After pushing the new image, redeploy the Cloud Run service. Run the following script to force the Cloud Run update:

export PROJECT_ID=$(gcloud config get project)
export COURSE_BUCKET_NAME=$(gcloud storage buckets list --format="value(name)" | grep aidemy-recap)
gcloud run services update aidemy-portal \
    --region=us-central1 \
    --set-env-vars=GOOGLE_CLOUD_PROJECT=${PROJECT_ID},COURSE_BUCKET_NAME=$COURSE_BUCKET_NAME

👉Now, we'll deploy an Eventarc trigger that listens for any new object created (finalized) in the assignment bucket. This trigger will automatically invoke the /render_assignment endpoint on the portal service when a new assignment file is created.

export PROJECT_ID=$(gcloud config get project)
gcloud projects add-iam-policy-binding $PROJECT_ID \
    --member="serviceAccount:$(gcloud storage service-agent --project $PROJECT_ID)" \
    --role="roles/pubsub.publisher"
export SERVICE_ACCOUNT_NAME=$(gcloud compute project-info describe --format="value(defaultServiceAccount)")
gcloud eventarc triggers create portal-assignment-trigger \
    --location=us-central1 \
    --service-account=$SERVICE_ACCOUNT_NAME \
    --destination-run-service=aidemy-portal \
    --destination-run-region=us-central1 \
    --destination-run-path="/render_assignment" \
    --event-filters="bucket=$ASSIGNMENT_BUCKET" \
    --event-filters="type=google.cloud.storage.object.v1.finalized"

To verify that the trigger was created successfully, navigate to the Eventarc Triggers page in the Google Cloud Console. You should see portal-assignment-trigger listed in the table. Click on the trigger name to view its details. Assignment Trigger

It may take up to 2-3 minutes for the new trigger to become active.

To see the dynamic assignment generation in action, run the following command to find the URL of your planner agent (if you don't have it handy):

gcloud run services list --platform=managed --region=us-central1 --format='value(URL)' | grep planner

Find the URL of your portal agent:

gcloud run services list --platform=managed --region=us-central1 --format='value(URL)' | grep portal

In the planner agent, generate a new teaching plan.

13-02-assignment

After a few minutes (to allow for the audio generation, assignment generation, and HTML rendering to complete), navigate to the student portal.

👉Click on the "Assignment" link in the navigation bar. You should see a newly created assignment with dynamically generated HTML. Each time a teaching plan is generated, a freshly rendered assignment should appear.

13-02-assignment

Congratulations on completing the Aidemy multi-agent system! You've gained practical experience and valuable insights into:

  • The benefits of multi-agent systems, including modularity, scalability, specialization, and simplified maintenance.
  • The importance of event-driven architectures for building responsive and loosely coupled applications.
  • The strategic use of LLMs, matching the right model to the task and integrating them with tools for real-world impact.
  • Cloud-native development practices using Google Cloud services to create scalable and reliable solutions.
  • The importance of considering data privacy and self-hosting models as an alternative to vendor solutions.

You now have a solid foundation for building sophisticated AI-powered applications on Google Cloud!

15. Challenges and Next Steps

Congratulations on building the Aidemy multi-agent system! You've laid a strong foundation for AI-powered education. Now, let's consider some challenges and potential future enhancements to further expand its capabilities and address real-world needs:

Interactive Learning with Live Q&A:

  • Challenge: Can you leverage Gemini 2's Live API to create a real-time Q&A feature for students? Imagine a virtual classroom where students can ask questions and receive immediate, AI-powered responses.

Automated Assignment Submission and Grading:

  • Challenge: Design and implement a system that allows students to submit assignments digitally and have them automatically graded by AI, with a mechanism to detect and prevent plagiarism. This challenge presents a great opportunity to explore Retrieval Augmented Generation (RAG) to enhance the accuracy and reliability of the grading and plagiarism detection processes.
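As a tiny, LLM-free starting point for the plagiarism-detection part of this challenge, the sketch below compares two texts with bag-of-words cosine similarity. A production system would use embeddings and RAG as suggested above; this toy version only illustrates the scoring idea:

```python
import math
from collections import Counter

def cosine_similarity(a: str, b: str) -> float:
    """Bag-of-words cosine similarity between two submissions (0.0 to 1.0)."""
    va, vb = Counter(a.lower().split()), Counter(b.lower().split())
    dot = sum(va[w] * vb[w] for w in va)
    norm_a = math.sqrt(sum(c * c for c in va.values()))
    norm_b = math.sqrt(sum(c * c for c in vb.values()))
    return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0

submitted = "the mitochondria is the powerhouse of the cell"
reference = "the mitochondria is the powerhouse of the cell"
print(cosine_similarity(submitted, reference))  # identical texts score 1.0 (up to float rounding)
print(cosine_similarity(submitted, "an essay about volcanoes"))
```

A grading pipeline could flag any submission whose score against another submission exceeds a threshold for human review.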

aidemy-climb

16. Cleanup

Now that we've built and explored our Aidemy multi-agent system, it's time to clean up our Google Cloud environment.

👉Delete Cloud Run services

gcloud run services delete aidemy-planner --region=us-central1 --quiet
gcloud run services delete aidemy-portal --region=us-central1 --quiet
gcloud run services delete courses-agent --region=us-central1 --quiet
gcloud run services delete book-provider --region=us-central1 --quiet
gcloud run services delete assignment-agent --region=us-central1 --quiet

👉Delete Eventarc trigger

gcloud eventarc triggers delete plan-topic-trigger --location=us-central1 --quiet
gcloud eventarc triggers delete portal-assignment-trigger --location=us-central1 --quiet
ASSIGNMENT_AGENT_TRIGGER=$(gcloud eventarc triggers list --project="$PROJECT_ID" --location=us-central1 --filter="name:assignment-agent" --format="value(name)")
COURSES_AGENT_TRIGGER=$(gcloud eventarc triggers list --project="$PROJECT_ID" --location=us-central1 --filter="name:courses-agent" --format="value(name)")
gcloud eventarc triggers delete $ASSIGNMENT_AGENT_TRIGGER --location=us-central1 --quiet
gcloud eventarc triggers delete $COURSES_AGENT_TRIGGER --location=us-central1 --quiet

👉Delete Pub/Sub topic

gcloud pubsub topics delete plan --project="$PROJECT_ID" --quiet

👉Delete Cloud SQL instance

gcloud sql instances delete aidemy --quiet

👉Delete Artifact Registry repository

gcloud artifacts repositories delete agent-repository --location=us-central1 --quiet

👉Delete Secret Manager secrets

gcloud secrets delete db-user --quiet
gcloud secrets delete db-pass --quiet
gcloud secrets delete db-name --quiet

👉Delete Compute Engine instance (if created for Deepseek)

gcloud compute instances delete ollama-instance --zone=us-central1-a --quiet

👉Delete the firewall rule for Deepseek instance

gcloud compute firewall-rules delete allow-ollama-11434 --quiet

👉Delete Cloud Storage buckets

export COURSE_BUCKET_NAME=$(gcloud storage buckets list --format="value(name)" | grep aidemy-recap)
export ASSIGNMENT_BUCKET=$(gcloud storage buckets list --format="value(name)" | grep aidemy-assignment)
gsutil rm -r gs://$COURSE_BUCKET_NAME
gsutil rm -r gs://$ASSIGNMENT_BUCKET

aidemy-broom

Aidemy:
Building Multi-Agent Systems with LangGraph, EDA, and Generative AI on Google Cloud

About this codelab

Last updated: March 13, 2025
Written by: Christina Lin

1. Introduction

Hi there! So, you're into the idea of agents – little helpers that can get things done for you without you even lifting a finger, right? Awesome! But let's be real, one agent isn't always going to cut it, especially when you're tackling bigger, more complex projects. You're probably going to need a whole team of them! That's where multi-agent systems come in.

Agents, when powered by LLMs, give you incredible flexibility compared to old-school hard coding. But, and there's always a but, they come with their own set of tricky challenges. And that's exactly what we're going to dive into in this workshop!

Title

Here's what you can expect to learn – think of it as leveling up your agent game:

Building Your First Agent with LangGraph : We'll get our hands dirty building your very own agent using LangGraph, a popular framework. You'll learn how to create tools that connect to databases, tap into the latest Gemini 2 API for some internet searching, and optimize the prompts and response, so your agent can interact with not only LLMs but existing services. We'll also show you how function calling works.

Agent Orchestration, Your Way : We'll explore different ways to orchestrate your agents, from simple straight paths to more complex multi-path scenarios. Think of it as directing the flow of your agent team.

Multi-Agent Systems : You'll discover how to set up a system where your agents can collaborate, and get things done together – all thanks to an event-driven architecture.

LLM Freedom – Use the Best for the Job: We're not stuck on just one LLM! You'll see how to use multiple LLMs, assigning them different roles to boost problem-solving power using cool "thinking models."

Dynamic Content? No Problem! : Imagine your agent creating dynamic content that's tailored specifically for each user, in real-time. We'll show you how to do it!

Taking it to the Cloud with Google Cloud : Forget just playing around in a notebook. We'll show you how to architect and deploy your multi-agent system on Google Cloud so it's ready for the real world!

This project will be a good example of how to use all the techniques we've talked about.

2. Architecture

Being a teacher or working in education can be super rewarding, but let's face it, the workload, especially all the prep work, can be challenging! Plus, there's often not enough staff and tutoring can be expensive. That's why we're proposing an AI-powered teaching assistant. This tool can lighten the load for educators and help bridge the gap caused by staff shortages and the lack of affordable tutoring.

Our AI teaching assistant can whip up detailed lesson plans, fun quizzes, easy-to-follow audio recaps, and personalized assignments. This lets teachers focus on what they do best: connecting with students and helping them fall in love with learning.

The system has two sites: one for teachers to create lesson plans for upcoming weeks,

Planner

and one for students to access quizzes, audio recaps, and assignments. Portal

Alright, let's walk through the architecture powering our teaching assistant, Aidemy. As you can see, we've broken it down into several key components, all working together to make this happen.

Architecture

Key Architectural Elements and Technologies :

Google Cloud Platform (GCP) : Central to the entire system:

  • Vertex AI: Accesses Google's Gemini LLMs.
  • Cloud Run: Serverless platform for deploying containerized agents and functions.
  • Cloud SQL: PostgreSQL database for curriculum data.
  • Pub/Sub & Eventarc: Foundation of the event-driven architecture, enabling asynchronous communication between components.
  • Cloud Storage: Stores audio recaps and assignment files.
  • Secret Manager: Securely manages database credentials.
  • Artifact Registry: Stores Docker images for the agents.
  • Compute Engine: Deploys a self-hosted LLM as an alternative to relying on vendor solutions.

LLMs : The "brains" of the system:

  • Google's Gemini models (Gemini 1.0 Pro, Gemini 2 Flash, Gemini 2 Flash Thinking, Gemini 1.5 Pro): used for lesson planning, content generation, dynamic HTML creation, quiz explanations, and assembling assignments.
  • DeepSeek: used for the specialized task of generating self-study assignments.

LangChain & LangGraph : Frameworks for LLM Application Development

  • Facilitates the creation of complex multi-agent workflows.
  • Enables the intelligent orchestration of tools (API calls, database queries, web searches).
  • Implements event-driven architecture for system scalability and flexibility.

In essence, our architecture combines the power of LLMs with structured data and event-driven communication, all running on Google Cloud. This lets us build a scalable, reliable, and effective teaching assistant.

3. Before you begin

In the Google Cloud Console, on the project selector page, select or create a Google Cloud project. Make sure that billing is enabled for your Cloud project. Learn how to check if billing is enabled on a project.

👉Click Activate Cloud Shell at the top of the Google Cloud console (It's the terminal shape icon at the top of the Cloud Shell pane), click on the "Open Editor" button (it looks like an open folder with a pencil). This will open the Cloud Shell Code Editor in the window. You'll see a file explorer on the left side.

Cloud Shell

👉Click on the Cloud Code Sign-in button in the bottom status bar as shown. Authorize the plugin as instructed. If you see Cloud Code - no project in the status bar, select that then in the drop down 'Select a Google Cloud Project' and then select the specific Google Cloud Project from the list of projects that you created.

Login project

👉Open the terminal in the cloud IDE. New terminal

👉In the terminal, verify that you're already authenticated and that the project is set to your project ID using the following command:

gcloud auth list

👉And run:

gcloud config set project <YOUR_PROJECT_ID>

👉Run the following command to enable the necessary Google Cloud APIs:

gcloud services enable compute.googleapis.com \
                       storage.googleapis.com \
                       run.googleapis.com \
                       artifactregistry.googleapis.com \
                       aiplatform.googleapis.com \
                       eventarc.googleapis.com \
                       sqladmin.googleapis.com \
                       secretmanager.googleapis.com \
                       cloudbuild.googleapis.com \
                       cloudresourcemanager.googleapis.com \
                       cloudfunctions.googleapis.com

This may take a couple of minutes.

Enable Gemini Code Assist in Cloud Shell IDE

Click on the Code Assist button in the left panel as shown, and select the correct Google Cloud project one last time. If you are asked to enable the Cloud AI Companion API, please do so and move forward. Once you've selected your Google Cloud project, ensure that you can see it in the Cloud Code status message in the status bar, and that Code Assist is enabled on the right of the status bar, as shown below:

Enable codeassist

Setting up permissions

👉Set up service account permissions. In the terminal, run:

export PROJECT_ID=$(gcloud config get project)
export SERVICE_ACCOUNT_NAME=$(gcloud compute project-info describe --format="value(defaultServiceAccount)")

echo "Here's your SERVICE_ACCOUNT_NAME $SERVICE_ACCOUNT_NAME"

👉Grant permissions. In the terminal, run:

#Cloud Storage (Read/Write):
gcloud projects add-iam-policy-binding $PROJECT_ID \
    --member="serviceAccount:$SERVICE_ACCOUNT_NAME" \
    --role="roles/storage.objectAdmin"

#Pub/Sub (Publish/Receive):
gcloud projects add-iam-policy-binding $PROJECT_ID \
    --member="serviceAccount:$SERVICE_ACCOUNT_NAME" \
    --role="roles/pubsub.publisher"

gcloud projects add-iam-policy-binding $PROJECT_ID \
    --member="serviceAccount:$SERVICE_ACCOUNT_NAME" \
    --role="roles/pubsub.subscriber"

#Cloud SQL (Read/Write):
gcloud projects add-iam-policy-binding $PROJECT_ID \
    --member="serviceAccount:$SERVICE_ACCOUNT_NAME" \
    --role="roles/cloudsql.editor"

#Eventarc (Receive Events):
gcloud projects add-iam-policy-binding $PROJECT_ID \
    --member="serviceAccount:$SERVICE_ACCOUNT_NAME" \
    --role="roles/iam.serviceAccountTokenCreator"

gcloud projects add-iam-policy-binding $PROJECT_ID \
    --member="serviceAccount:$SERVICE_ACCOUNT_NAME" \
    --role="roles/eventarc.eventReceiver"

#Vertex AI (User):
gcloud projects add-iam-policy-binding $PROJECT_ID \
    --member="serviceAccount:$SERVICE_ACCOUNT_NAME" \
    --role="roles/aiplatform.user"

#Secret Manager (Read):
gcloud projects add-iam-policy-binding $PROJECT_ID \
    --member="serviceAccount:$SERVICE_ACCOUNT_NAME" \
    --role="roles/secretmanager.secretAccessor"

👉Validate the result in your IAM console. IAM console

👉Run the following commands in the terminal to create a Cloud SQL instance named aidemy. We'll need this later, but since this process can take some time, we'll start it now.

gcloud sql instances create aidemy \
    --database-version=POSTGRES_14 \
    --cpu=2 \
    --memory=4GB \
    --region=us-central1 \
    --root-password=1234qwer \
    --storage-size=10GB \
    --storage-auto-increase

4. Building the first agent

Before we dive into complex multi-agent systems, we need to establish a fundamental building block: a single, functional agent. In this section, we'll take our first steps by creating a simple "book provider" agent. The book provider agent takes a category as input and uses a Gemini LLM to generate a JSON representation of books in that category. It then serves these book recommendations as a REST API endpoint.

Book Provider

👉In another browser tab, open the Google Cloud Console. In the navigation menu (☰), go to "Cloud Run" and click the "+ ... WRITE A FUNCTION" button.

Create Function

👉Next we'll configure the basic settings of the Cloud Run Function:

  • Service name: book-provider
  • Region: us-central1
  • Runtime: Python 3.12
  • Authentication: Allow unauthenticated invocations (Enabled).

👉Leave the other settings as default and click Create. This will take you to the source code editor.

You'll see pre-populated main.py and requirements.txt files.

The main.py will contain the business logic of the function, requirements.txt will contain the packages needed.

👉Now we are ready to write some code! But before diving in, let's see if Gemini Code Assist can give us a head start. Return to the Cloud Shell Editor, click on the Gemini Code Assist icon, and paste the following request into the prompt box: Gemini Code Assist

Use the functions_framework library to be deployable as an HTTP function. 
Accept a request with category and number_of_book parameters (either in JSON body or query string).
Use langchain and gemini to generate the data for book with fields bookname, author, publisher, publishing_date.
Use pydantic to define a Book model with the fields: bookname (string, description: "Name of the book"), author (string, description: "Name of the author"), publisher (string, description: "Name of the publisher"), and publishing_date (string, description: "Date of publishing").
Use langchain and gemini model to generate book data. the output should follow the format defined in Book model.

The logic should use JsonOutputParser from langchain to enforce output format defined in Book Model.
Have a function get_recommended_books(category) that internally uses langchain and gemini to return a single book object.
The main function, exposed as the Cloud Function, should call get_recommended_books() multiple times (based on number_of_book) and return a JSON list of the generated book objects.
Handle the case where category or number_of_book are missing by returning an error JSON response with a 400 status code.
return a JSON string representing the recommended books. use os library to retrieve GOOGLE_CLOUD_PROJECT env var. Use ChatVertexAI from langchain for the LLM call

Code Assist will then generate a potential solution, providing both the source code and a requirements.txt dependency file.

We encourage you to compare Code Assist's generated code with the tested, correct solution provided below. This lets you evaluate the tool's effectiveness and identify any discrepancies. While LLMs should never be blindly trusted, Code Assist can be a great tool for rapid prototyping and generating initial code structures, and it should be used for a good head start.

Since this is a workshop, we'll proceed with the verified code provided below. However, feel free to experiment with the Code Assist-generated code in your own time to gain a deeper understanding of its capabilities and limitations.

👉Return to the Cloud Run Function's source code editor (in the other browser tab). Carefully replace the existing content of main.py with the code provided below:

import functions_framework
import json
from flask import Flask, jsonify, request
from langchain_google_vertexai import ChatVertexAI
from langchain_core.output_parsers import JsonOutputParser
from langchain_core.prompts import PromptTemplate
from pydantic import BaseModel, Field
import os

class Book(BaseModel):
    bookname: str = Field(description="Name of the book")
    author: str = Field(description="Name of the author")
    publisher: str = Field(description="Name of the publisher")
    publishing_date: str = Field(description="Date of publishing")


project_id = os.environ.get("GOOGLE_CLOUD_PROJECT")

llm = ChatVertexAI(model_name="gemini-2.0-flash-lite-001")

def get_recommended_books(category):
    """
    A simple book recommendation function.

    Args:
        category (str): category

    Returns:
        str: A JSON string representing the recommended books.
    """
    parser = JsonOutputParser(pydantic_object=Book)
    question = f"Generate a random made up book on {category} with bookname, author and publisher and publishing_date"

    prompt = PromptTemplate(
        template="Answer the user query.\n{format_instructions}\n{query}\n",
        input_variables=["query"],
        partial_variables={"format_instructions": parser.get_format_instructions()},
    )

    chain = prompt | llm | parser
    response = chain.invoke({"query": question})

    return json.dumps(response)


@functions_framework.http
def recommended(request):
    request_json = request.get_json(silent=True)  # Get JSON data
    if request_json and 'category' in request_json and 'number_of_book' in request_json:
        category = request_json['category']
        number_of_book = int(request_json['number_of_book'])
    elif request.args and 'category' in request.args and 'number_of_book' in request.args:
        category = request.args.get('category')
        number_of_book = int(request.args.get('number_of_book'))
    else:
        return jsonify({'error': 'Missing category or number_of_book parameters'}), 400

    recommendations_list = []
    for i in range(number_of_book):
        book_dict = json.loads(get_recommended_books(category))
        print(f"book_dict=======>{book_dict}")
        recommendations_list.append(book_dict)

    return jsonify(recommendations_list)

👉Replace the contents of requirements.txt with the following:

functions-framework==3.*
google-genai==1.0.0
flask==3.1.0
jsonify==0.5
langchain_google_vertexai==2.0.13
langchain_core==0.3.34
pydantic==2.10.5

👉We'll set the Function entry point: recommended

03-02-function-create.png

👉Click SAVE AND DEPLOY to deploy the Function. Wait for the deployment process to complete; the Cloud Console will display the status. This may take a few minutes.

👉Once deployed, go back to the Cloud Shell Editor and run the following in the terminal:

export PROJECT_ID=$(gcloud config get project)
export BOOK_PROVIDER_URL=$(gcloud run services describe book-provider --region=us-central1 --project=$PROJECT_ID --format="value(status.url)")

curl -X POST -H "Content-Type: application/json" -d '{"category": "Science Fiction", "number_of_book": 2}' $BOOK_PROVIDER_URL

It should show some book data in JSON format.

[
  {"author":"Anya Sharma","bookname":"Echoes of the Singularity","publisher":"NovaLight Publishing","publishing_date":"2077-03-15"},
  {"author":"Anya Sharma","bookname":"Echoes of the Quantum Dawn","publisher":"Nova Genesis Publishing","publishing_date":"2077-03-15"}
]

Congratulations! You have successfully deployed a Cloud Run Function. This is one of the services we will be integrating when developing our Aidemy agent.

5. Building Tools: Connecting Agents to RESTful Services and Data

Let's go ahead and download the Bootstrap Skeleton Project. Make sure you are in the Cloud Shell Editor. In the terminal, run:

git clone https://github.com/weimeilin79/aidemy-bootstrap.git

After running this command, a new folder named aidemy-bootstrap will be created in your Cloud Shell environment.

In the Cloud Shell Editor's Explorer pane (usually on the left side), you should now see the folder that was created when you cloned the Git repository aidemy-bootstrap . Open the root folder of your project in the Explorer. You'll find a planner subfolder within it, open that as well. project explorer

Let's start building the tools our agents will use to become truly helpful. As you know, LLMs are excellent at reasoning and generating text, but they need access to external resources to perform real-world tasks and provide accurate, up-to-date information. Think of these tools as the agent's "Swiss Army knife," giving it the ability to interact with the world.

When building an agent, it's easy to fall into hard-coding a ton of details. This creates an agent that is not flexible. Instead, by creating and using tools, the agent has access to external logic or systems which gives it the benefits of both the LLM and traditional programming.

In this section, we'll create the foundation for the planner agent, which teachers will use to generate lesson plans. Before the agent starts generating a plan, we want to set boundaries by providing more details on the subject and topic. We'll build three tools:

  1. RESTful API Call: Interacting with a pre-existing API to retrieve data.
  2. Database Query: Fetching structured data from a Cloud SQL database.
  3. Google Search: Accessing real-time information from the web.

Fetching Book Recommendations from an API

First, let's create a tool that retrieves book recommendations from the book-provider API we deployed in the previous section. This demonstrates how an agent can leverage existing services.

Recommend book

In the Cloud Shell Editor, open the aidemy-bootstrap project that you cloned in the previous section.

👉Edit the book.py in the planner folder, and paste the following code at the end of the file:

def recommend_book(query: str):
    """
    Get a list of recommended books from an API endpoint

    Args:
        query: User's request string
    """
    region = get_next_region()
    llm = VertexAI(model_name="gemini-1.5-pro", location=region)

    query = f"""The user is trying to plan an education course, you are the teaching assistant. Help define the category of what the user requested to teach, respond the category with no more than two words.

    user request:   {query}
    """
    print(f"-------->{query}")
    response = llm.invoke(query)
    print(f"CATEGORY RESPONSE------------>: {response}")

    # call this using python and parse the json back to dict
    category = response.strip()

    headers = {"Content-Type": "application/json"}
    data = {"category": category, "number_of_book": 2}

    books = requests.post(BOOK_PROVIDER_URL, headers=headers, json=data)

    return books.text

if __name__ == "__main__":
    print(recommend_book("I'm doing a course for my 5th grade student on Math Geometry, I'll need to recommend few books come up with a teach plan, few quizes and also a homework assignment."))

Explanation:

  • recommend_book(query: str) : This function takes a user's query as input.
  • LLM Interaction : It uses the LLM to extract the category from the query. This demonstrates how you can use the LLM to help create parameters for tools.
  • API Call : It makes a POST request to the book-provider API, passing the category and the desired number of books.

👉To test this new function, set the environment variable. Run:

cd ~/aidemy-bootstrap/planner/
export PROJECT_ID=$(gcloud config get project)
export BOOK_PROVIDER_URL=$(gcloud run services describe book-provider --region=us-central1 --project=$PROJECT_ID --format="value(status.url)")

👉Install the dependencies and run the code to ensure it works, run:

cd ~/aidemy-bootstrap/planner/
python -m venv env
source env/bin/activate
export PROJECT_ID=$(gcloud config get project)
pip install -r requirements.txt
python book.py

Ignore the Git warning pop-up window.

You should see a JSON string containing book recommendations retrieved from the book-provider API. The results are randomly generated. Your books may not be the same, but you should receive two book recommendations in JSON format.

[{"author":"Anya Sharma","bookname":"Echoes of the Singularity","publisher":"NovaLight Publishing","publishing_date":"2077-03-15"},{"author":"Anya Sharma","bookname":"Echoes of the Quantum Dawn","publisher":"Nova Genesis Publishing","publishing_date":"2077-03-15"}]

If you see this, the first tool is working correctly!

Instead of explicitly crafting a RESTful API call with specific parameters, we're using natural language ("I'm doing a course..."). The agent then intelligently extracts the necessary parameters (like the category) using NLP, highlighting how the agent leverages natural language understanding to interact with the API.
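To make that contrast concrete, the toy sketch below places the two calling styles side by side. The keyword table is a hypothetical stand-in for the Gemini category-extraction step in recommend_book, not something the codelab actually uses:

```python
# Explicit style: the caller must already know the API's exact parameters.
explicit_request = {"category": "Geometry", "number_of_book": 2}

# Agent style: derive the parameters from the user's natural-language request.
# A real agent asks the LLM; this keyword map is only a toy stand-in.
CATEGORY_KEYWORDS = {"geometry": "Geometry", "fractions": "Mathematics", "poetry": "English"}

def extract_category(utterance: str) -> str:
    for keyword, category in CATEGORY_KEYWORDS.items():
        if keyword in utterance.lower():
            return category
    return "General"

user_request = "I'm doing a course for my 5th grade student on Math Geometry"
derived_request = {"category": extract_category(user_request), "number_of_book": 2}
print(derived_request)  # → {'category': 'Geometry', 'number_of_book': 2}
```

Swapping the keyword map for an LLM call is exactly what gives the agent its flexibility: users can phrase the request any way they like.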

compare call

👉 Remove the following testing code from book.py:

if __name__ == "__main__":
    print(recommend_book("I'm doing a course for my 5th grade student on Math Geometry, I'll need to recommend few books come up with a teach plan, few quizes and also a homework assignment."))

Getting Curriculum Data from a Database

Next, we'll build a tool that fetches structured curriculum data from a Cloud SQL PostgreSQL database. This allows the agent to access a reliable source of information for lesson planning.

create db

Remember the aidemy Cloud SQL instance you created in a previous step? Here's where it will be used.

👉Create a database named aidemy-db in the new instance.

gcloud sql databases create aidemy-db \
    --instance=aidemy

Let's verify the instance in Cloud SQL in the Google Cloud Console. You should see a Cloud SQL instance named aidemy listed. Click on the instance name to view its details. On the Cloud SQL instance details page, click "SQL Studio" in the left-hand navigation menu. This will open a new tab.

Click to connect to the database. Sign in to the SQL Studio

Select aidemy-db as the database. Enter postgres as the user and 1234qwer as the password. sql studio sign in

👉In the SQL Studio query editor, paste the following SQL code:

CREATE TABLE curriculums (
    id SERIAL PRIMARY KEY,
    year INT,
    subject VARCHAR(255),
    description TEXT
);

-- Inserting detailed curriculum data for different school years and subjects
INSERT INTO curriculums (year, subject, description) VALUES
-- Year 5
(5, 'Mathematics', 'Introduction to fractions, decimals, and percentages, along with foundational geometry and problem-solving techniques.'),
(5, 'English', 'Developing reading comprehension, creative writing, and basic grammar, with a focus on storytelling and poetry.'),
(5, 'Science', 'Exploring basic physics, chemistry, and biology concepts, including forces, materials, and ecosystems.'),
(5, 'Computer Science', 'Basic coding concepts using block-based programming and an introduction to digital literacy.'),

-- Year 6
(6, 'Mathematics', 'Expanding on fractions, ratios, algebraic thinking, and problem-solving strategies.'),
(6, 'English', 'Introduction to persuasive writing, character analysis, and deeper comprehension of literary texts.'),
(6, 'Science', 'Forces and motion, the human body, and introductory chemical reactions with hands-on experiments.'),
(6, 'Computer Science', 'Introduction to algorithms, logical reasoning, and basic text-based programming (Python, Scratch).'),

-- Year 7
(7, 'Mathematics', 'Algebraic expressions, geometry, and introduction to statistics and probability.'),
(7, 'English', 'Analytical reading of classic and modern literature, essay writing, and advanced grammar skills.'),
(7, 'Science', 'Introduction to cells and organisms, chemical reactions, and energy transfer in physics.'),
(7, 'Computer Science', 'Building on programming skills with Python, introduction to web development, and cyber safety.');

This SQL code creates a table named curriculums and inserts some sample data. Click Run to execute the SQL code. You should see a confirmation message indicating that the commands were executed successfully.

👉Expand the explorer, find the newly created table, and click query. It should open a new editor tab with SQL generated for you:

sql studio select table

SELECT * FROM "public"."curriculums" LIMIT 1000;

👉Click Run .

The results table should display the rows of data you inserted in the previous step, confirming that the table and data were created correctly.

Now that you have successfully created a database with populated sample curriculum data, we'll build a tool to retrieve it.

👉In the Cloud Code Editor, edit file curriculums.py in the aidemy-bootstrap folder and paste the following code at the end of the file:

def connect_with_connector() -> sqlalchemy.engine.base.Engine:
    db_user = os.environ["DB_USER"]
    db_pass = os.environ["DB_PASS"]
    db_name = os.environ["DB_NAME"]

    print(f"--------------------------->db_user: {db_user!r}")
    print(f"--------------------------->db_name: {db_name!r}")

    ip_type = IPTypes.PRIVATE if os.environ.get("PRIVATE_IP") else IPTypes.PUBLIC

    connector = Connector()

    def getconn() -> pg8000.dbapi.Connection:
        # instance_connection_name is defined earlier in this file
        conn: pg8000.dbapi.Connection = connector.connect(
            instance_connection_name,
            "pg8000",
            user=db_user,
            password=db_pass,
            db=db_name,
            ip_type=ip_type,
        )
        return conn

    pool = sqlalchemy.create_engine(
        "postgresql+pg8000://",
        creator=getconn,
        pool_size=2,
        max_overflow=2,
        pool_timeout=30,  # 30 seconds
        pool_recycle=1800,  # 30 minutes
    )
    return pool


def init_connection_pool() -> sqlalchemy.engine.base.Engine:
    return connect_with_connector()


def get_curriculum(year: int, subject: str):
    """
    Get school curriculum

    Args:
        year: User's request year int
        subject: User's request subject string
    """
    try:
        stmt = sqlalchemy.text(
            "SELECT description FROM curriculums WHERE year = :year AND subject = :subject"
        )

        with db.connect() as conn:
            result = conn.execute(stmt, parameters={"year": year, "subject": subject})
            row = result.fetchone()
        if row:
            return row[0]
        else:
            return None

    except Exception as e:
        print(e)
        return None

db = init_connection_pool()

Explanation:

  • Environment Variables : The code retrieves database credentials and connection information from environment variables (more on this below).
  • connect_with_connector() : This function uses the Cloud SQL Connector to establish a secure connection to the database.
  • get_curriculum(year: int, subject: str) : This function takes the year and subject as input, queries the curriculums table, and returns the corresponding curriculum description.
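The `:year` / `:subject` placeholders in the query above are bound parameters: values travel separately from the SQL text, so user input can never change the query's structure. The same pattern, sketched with the stdlib sqlite3 module and a made-up in-memory table (not Cloud SQL or SQLAlchemy):

```python
import sqlite3

# Hypothetical in-memory stand-in for the curriculums table.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE curriculums (year INT, subject TEXT, description TEXT)")
conn.execute("INSERT INTO curriculums VALUES (6, 'Mathematics', 'Expanding on fractions...')")

# Values are passed as a mapping, separate from the SQL string, so a
# malicious subject string cannot inject extra SQL.
row = conn.execute(
    "SELECT description FROM curriculums WHERE year = :year AND subject = :subject",
    {"year": 6, "subject": "Mathematics"},
).fetchone()
print(row[0])  # Expanding on fractions...
```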

👉Before we can run the code, we must set some environment variables. In the terminal, run:

export PROJECT_ID=$(gcloud config get project)
export INSTANCE_NAME="aidemy"
export REGION="us-central1"
export DB_USER="postgres"
export DB_PASS="1234qwer"
export DB_NAME="aidemy-db"

👉To test, add the following code to the end of curriculums.py :

if __name__ == "__main__":
    print(get_curriculum(6, "Mathematics"))

👉Run the code:

cd ~/aidemy-bootstrap/planner/
source env/bin/activate
python curriculums.py

You should see the curriculum description for 6th-grade Mathematics printed to the console.

Expanding on fractions, ratios, algebraic thinking, and problem-solving strategies.

If you see the curriculum description, the database tool is working correctly! Go ahead and stop the script by pressing Ctrl+C .

👉 Remove the following testing code from curriculums.py :

if __name__ == "__main__":
    print(get_curriculum(6, "Mathematics"))

👉Exit the virtual environment; in the terminal, run:

deactivate

6. Building Tools: Access real-time information from the web

Finally, we'll build a tool that uses the Gemini 2 and Google Search integration to access real-time information from the web. This helps the agent stay up-to-date and provide relevant results.

Gemini 2's integration with the Google Search API enhances agent capabilities by providing more accurate and contextually relevant search results. This allows agents to access up-to-date information and ground their responses in real-world data, minimizing hallucinations. The improved API integration also facilitates more natural language queries, enabling agents to formulate complex and nuanced search requests.

search

This function takes a search query, curriculum, subject, and year as input and uses the Gemini API and the Google Search tool to retrieve relevant information from the internet. If you look closely, it's using the Google Generative AI SDK to do function calling without using any other framework.

👉Edit search.py in the aidemy-bootstrap folder and paste the following code at the end of the file:

model_id = "gemini-2.0-flash-001"

google_search_tool = Tool(
    google_search=GoogleSearch()
)

def search_latest_resource(search_text: str, curriculum: str, subject: str, year: int):
    """
    Get latest information from the internet

    Args:
        search_text: User's request category string
        curriculum: Curriculum description string
        subject: User's request subject string
        year: User's request year integer
    """
    search_text = "%s in the context of year %d and subject %s with following curriculum detail %s " % (search_text, year, subject, curriculum)
    region = get_next_region()
    client = genai.Client(vertexai=True, project=PROJECT_ID, location=region)
    print(f"search_latest_resource text-----> {search_text}")
    response = client.models.generate_content(
        model=model_id,
        contents=search_text,
        config=GenerateContentConfig(
            tools=[google_search_tool],
            response_modalities=["TEXT"],
        )
    )
    print(f"search_latest_resource response-----> {response}")
    return response

if __name__ == "__main__":
    response = search_latest_resource("What are the syllabus for Year 6 Mathematics?", "Expanding on fractions, ratios, algebraic thinking, and problem-solving strategies.", "Mathematics", 6)
    for each in response.candidates[0].content.parts:
        print(each.text)

Explanation:

  • Defining the tool - google_search_tool : Wraps the GoogleSearch object within a Tool so the model can invoke it.
  • search_latest_resource(search_text: str, curriculum: str, subject: str, year: int) : This function takes a search query, curriculum, subject, and year as input and uses the Gemini API to perform a Google search.
  • GenerateContentConfig : Declares that the model has access to the GoogleSearch tool.

The Gemini model internally analyzes the search_text and determines whether it can answer the question directly or if it needs to use the GoogleSearch tool. This is a critical step that happens within the LLM's reasoning process. The model has been trained to recognize situations where external tools are necessary. If the model decides to use the GoogleSearch tool, the Google Generative AI SDK handles the actual invocation. The SDK takes the model's decision and the parameters it generates and sends them to the Google Search API. This part is hidden from the user in the code.

The Gemini model then integrates the search results into its response. It can use the information to answer the user's question, generate a summary, or perform some other task.

👉To test, run the code:

cd ~/aidemy-bootstrap/planner/
export PROJECT_ID=$(gcloud config get project)
source env/bin/activate
python search.py

You should see the Gemini Search API response containing search results related to the syllabus for Year 6 Mathematics. The exact output will depend on the search results, but it will be a JSON object with information about the search.

If you see search results, the Google Search tool is working correctly! Go ahead and stop the script by pressing Ctrl+C .

👉And remove the last part in the code.

if __name__ == "__main__":
    response = search_latest_resource("What are the syllabus for Year 6 Mathematics?", "Expanding on fractions, ratios, algebraic thinking, and problem-solving strategies.", "Mathematics", 6)
    for each in response.candidates[0].content.parts:
        print(each.text)

👉Exit the virtual environment; in the terminal, run:

deactivate

Congratulations! You have now built three powerful tools for your planner agent: an API connector, a database connector, and a Google Search tool. These tools will enable the agent to access the information and capabilities it needs to create effective teaching plans.

7. Orchestrating with LangGraph

Now that we have built our individual tools, it's time to orchestrate them using LangGraph. This will allow us to create a more sophisticated "planner" agent that can intelligently decide which tools to use and when, based on the user's request.

LangGraph is a Python library designed to make it easier to build stateful, multi-actor applications using Large Language Models (LLMs). Think of it as a framework for orchestrating complex conversations and workflows involving LLMs, tools, and other agents.

Key concepts:

  • Graph Structure: LangGraph represents your application's logic as a directed graph. Each node in the graph represents a step in the process (e.g., a call to an LLM, a tool invocation, a conditional check). Edges define the flow of execution between nodes.
  • State: LangGraph manages the state of your application as it moves through the graph. This state can include variables like the user's input, the results of tool calls, intermediate outputs from LLMs, and any other information that needs to be preserved between steps.
  • Nodes: Each node represents a computation or interaction. They can be:
    • Tool Nodes: Use a tool (e.g., perform a web search, query a database)
    • Function Nodes: Execute a Python function.
  • Edges: Connect nodes, defining the flow of execution. They can be:
    • Direct Edges: A simple, unconditional flow from one node to another.
    • Conditional Edges: The flow depends on the outcome of a conditional node.
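To make the node/edge idea concrete before using the real library, here is a plain-Python sketch (deliberately not LangGraph): nodes are functions over a shared state dict, a conditional edge picks the next node, and the loop runs until the router signals completion. All names here are illustrative.

```python
def gather(state):
    # A "tool node": does one unit of work and updates the shared state.
    state["facts"].append(f"fact-{len(state['facts']) + 1}")
    return state

def route(state):
    # A "conditional edge": inspects the state and decides where to go next.
    return "gather" if len(state["facts"]) < 3 else "END"

nodes = {"gather": gather}
state = {"facts": []}
current = "gather"
while current != "END":          # the loop back through the graph
    state = nodes[current](state)
    current = route(state)

print(state["facts"])  # ['fact-1', 'fact-2', 'fact-3']
```

LangGraph's StateGraph plays the same role, but with persistence, checkpointing, and LLM-driven routing built in.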

LangGraph

We will use LangGraph to implement the orchestration. Let's edit the aidemy.py file under aidemy-bootstrap folder to define our LangGraph logic.

👉Append the following code to the end of aidemy.py :

tools = [get_curriculum, search_latest_resource, recommend_book]

def determine_tool(state: MessagesState):
    llm = ChatVertexAI(model_name="gemini-2.0-flash-001", location=get_next_region())
    sys_msg = SystemMessage(
        content=(
            """You are a helpful teaching assistant that helps gather all needed information.
               Your ultimate goal is to create a detailed 3-week teaching plan.
               You have access to tools that help you gather information.
               Based on the user request, decide which tool(s) are needed.
            """
        )
    )

    llm_with_tools = llm.bind_tools(tools)
    return {"messages": llm_with_tools.invoke([sys_msg] + state["messages"])}

This function is responsible for taking the current state of the conversation, providing the LLM with a system message, and then asking the LLM to generate a response. The LLM can either respond directly to the user or choose to use one of the available tools.

tools : This list represents the set of tools that the agent has available to it. It contains three tool functions that we defined in the previous steps: get_curriculum , search_latest_resource , and recommend_book . llm.bind_tools(tools) : It "binds" the tools list to the llm object. Binding the tools tells the LLM that these tools are available and provides the LLM with information about how to use them (eg, the names of the tools, the parameters they accept, and what they do).
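Conceptually, once tools are bound, the LLM can emit a structured tool call (a name plus arguments) instead of plain text, and a ToolNode-style dispatcher maps it back onto the actual Python function. A minimal, framework-free sketch — the stand-in function and the tool-call payload below are hypothetical, simplified versions of what LangGraph handles for you:

```python
# Stand-in for the real get_curriculum tool from earlier in the codelab.
def get_curriculum(year: int, subject: str):
    return f"curriculum for year {year} {subject}"

# The dispatcher knows every bound tool by name.
tools_by_name = {"get_curriculum": get_curriculum}

# A hypothetical tool call, shaped like what an LLM with bound tools emits.
tool_call = {"name": "get_curriculum", "args": {"year": 5, "subject": "Mathematics"}}

# Look up the function by name and invoke it with the LLM-generated args.
result = tools_by_name[tool_call["name"]](**tool_call["args"])
print(result)  # curriculum for year 5 Mathematics
```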


👉Append the following code to the end of aidemy.py :

def prep_class(prep_needs):

    builder = StateGraph(MessagesState)
    builder.add_node("determine_tool", determine_tool)
    builder.add_node("tools", ToolNode(tools))

    builder.add_edge(START, "determine_tool")
    builder.add_conditional_edges("determine_tool", tools_condition)
    builder.add_edge("tools", "determine_tool")

    memory = MemorySaver()
    graph = builder.compile(checkpointer=memory)

    config = {"configurable": {"thread_id": "1"}}
    messages = graph.invoke({"messages": prep_needs}, config)
    print(messages)
    for m in messages['messages']:
        m.pretty_print()
    teaching_plan_result = messages["messages"][-1].content

    return teaching_plan_result

if __name__ == "__main__":
    prep_class("I'm doing a course for  year 5 on subject Mathematics in Geometry, , get school curriculum, and come up with few books recommendation plus  search latest resources on the internet base on the curriculum outcome. And come up with a 3 week teaching plan")

Explanation:

  • StateGraph(MessagesState) : Creates a StateGraph object. A StateGraph is a core concept in LangGraph. It represents the workflow of your agent as a graph, where each node in the graph represents a step in the process. Think of it as defining the blueprint for how the agent will reason and act.
  • Conditional Edge: Originating from the "determine_tool" node, tools_condition is a prebuilt LangGraph function that inspects the LLM's latest message: if it contains tool calls, execution routes to the "tools" node; otherwise the graph ends. Conditional edges allow the graph to branch based on the LLM's decision about which tool to use (or whether to respond to the user directly). This is where the agent's "intelligence" comes into play - it can dynamically adapt its behavior based on the situation.
  • Loop: Adds an edge to the graph that connects the "tools" node back to the "determine_tool" node. This creates a loop in the graph, allowing the agent to repeatedly use tools until it has gathered enough information to complete the task and provide a satisfactory answer. This loop is crucial for complex tasks that require multiple steps of reasoning and information gathering.

Now, let's test our planner agent to see how it orchestrates the different tools.

This code will run the prep_class function with a specific user input, simulating a request to create a teaching plan for 5th-grade Mathematics in Geometry, using the curriculum, book recommendations, and the latest internet resources.

If you've closed your terminal or the environment variables are no longer set, re-run the following commands

export BOOK_PROVIDER_URL=$(gcloud run services describe book-provider --region=us-central1 --project=$PROJECT_ID --format="value(status.url)")
export PROJECT_ID=$(gcloud config get project)
export INSTANCE_NAME="aidemy"
export REGION="us-central1"
export DB_USER="postgres"
export DB_PASS="1234qwer"
export DB_NAME="aidemy-db"

👉Run the code:

cd ~/aidemy-bootstrap/planner/
source env/bin/activate
pip install -r requirements.txt
python aidemy.py

Watch the log in the terminal. You should see evidence that the agent is calling all three tools (getting the school curriculum, getting book recommendations, and searching for the latest resources) before providing the final teaching plan. This demonstrates that the LangGraph orchestration is working correctly, and the agent is intelligently using all available tools to fulfill the user's request.

================================ Human Message =================================

I'm doing a course for  year 5 on subject Mathematics in Geometry, , get school curriculum, and come up with few books recommendation plus  search latest resources on the internet base on the curriculum outcome. And come up with a 3 week teaching plan
================================== Ai Message ==================================
Tool Calls:
 
get_curriculum (xxx)
 
Call ID: xxx
 
Args:
   
year: 5.0
   
subject: Mathematics
================================= Tool Message =================================
Name: get_curriculum

Introduction to fractions, decimals, and percentages, along with foundational geometry and problem-solving techniques.
================================== Ai Message ==================================
Tool Calls:
 
search_latest_resource (xxxx)
 
Call ID: xxxx
 
Args:
   
year: 5.0
   
search_text: Geometry
   
curriculum: {"content": "Introduction to fractions, decimals, and percentages, along with foundational geometry and problem-solving techniques."}
   
subject: Mathematics
================================= Tool Message =================================
Name: search_latest_resource

candidates=[Candidate(content=Content(parts=[Part(.....) automatic_function_calling_history=[] parsed=None
================================== Ai Message ==================================
Tool Calls:
 
recommend_book (93b48189-4d69-4c09-a3bd-4e60cdc5f1c6)
 
Call ID: 93b48189-4d69-4c09-a3bd-4e60cdc5f1c6
 
Args:
   
query: Mathematics Geometry Year 5
================================= Tool Message =================================
Name: recommend_book

[{.....}]

================================== Ai Message ==================================

Based on the curriculum outcome, here is a 3-week teaching plan for year 5 Mathematics Geometry:

**Week 1: Introduction to Shapes and Properties**
.........

Stop the script by pressing Ctrl+C .

👉(THIS STEP IS OPTIONAL) replace the testing code with a different prompt, which requires different tools to be called.

if __name__ == "__main__":
    prep_class("I'm doing a course for  year 5 on subject Mathematics in Geometry, search latest resources on the internet base on the subject. And come up with a 3 week teaching plan")

If you've closed your terminal or the environment variables are no longer set, re-run the following commands

export BOOK_PROVIDER_URL=$(gcloud run services describe book-provider --region=us-central1 --project=$PROJECT_ID --format="value(status.url)")
export PROJECT_ID=$(gcloud config get project)
export INSTANCE_NAME="aidemy"
export REGION="us-central1"
export DB_USER="postgres"
export DB_PASS="1234qwer"
export DB_NAME="aidemy-db"

👉(THIS STEP IS OPTIONAL, do this ONLY IF you ran the previous step) Run the code again:

cd ~/aidemy-bootstrap/planner/
source env/bin/activate
python aidemy.py

What did you notice this time? Which tools did the agent call? You should see that the agent skips the recommend_book tool this time, calling only get_curriculum and search_latest_resource. This is because the prompt does not ask for book recommendations, and the LLM is smart enough not to call tools it doesn't need.

================================ Human Message =================================

I'm doing a course for  year 5 on subject Mathematics in Geometry, search latest resources on the internet base on the subject. And come up with a 3 week teaching plan
================================== Ai Message ==================================
Tool Calls:
 
get_curriculum (xxx)
 
Call ID: xxx
 
Args:
   
year: 5.0
   
subject: Mathematics
================================= Tool Message =================================
Name: get_curriculum

Introduction to fractions, decimals, and percentages, along with foundational geometry and problem-solving techniques.
================================== Ai Message ==================================
Tool Calls:
 
search_latest_resource (xxx)
 
Call ID: xxxx
 
Args:
   
year: 5.0
   
subject: Mathematics
   
curriculum: {"content": "Introduction to fractions, decimals, and percentages, along with foundational geometry and problem-solving techniques."}
   
search_text: Geometry
================================= Tool Message =================================
Name: search_latest_resource

candidates=[Candidate(content=Content(parts=[Part(.......token_count=40, total_token_count=772) automatic_function_calling_history=[] parsed=None
================================== Ai Message ==================================

Based on the information provided, a 3-week teaching plan for Year 5 Mathematics focusing on Geometry could look like this:

**Week 1:  Introducing 2D Shapes**
........
* Use visuals, manipulatives, and real-world examples to make the learning experience engaging and relevant.

Stop the script by pressing Ctrl+C .

👉 Remove the testing code to keep your aidemy.py file clean (DO NOT SKIP THIS STEP!):

if __name__ == "__main__":
    prep_class("I'm doing a course for  year 5 on subject Mathematics in Geometry, search latest resources on the internet base on the subject. And come up with a 3 week teaching plan")

With our agent logic now defined, let's launch the Flask web application. This will provide a familiar form-based interface for teachers to interact with the agent. While chatbot interactions are common with LLMs, we're opting for a traditional form submit UI, as it may be more intuitive for many educators.

If you've closed your terminal or the environment variables are no longer set, re-run the following commands

export BOOK_PROVIDER_URL=$(gcloud run services describe book-provider --region=us-central1 --project=$PROJECT_ID --format="value(status.url)")
export PROJECT_ID=$(gcloud config get project)
export INSTANCE_NAME="aidemy"
export REGION="us-central1"
export DB_USER="postgres"
export DB_PASS="1234qwer"
export DB_NAME="aidemy-db"

👉Now, start the Web UI.

cd ~/aidemy-bootstrap/planner/
source env/bin/activate
python app.py

Look for startup messages in the Cloud Shell terminal output. Flask usually prints messages indicating that it's running and on what port.

Running on http://127.0.0.1:8080
Running on http://127.0.0.1:8080
The application needs to keep running to serve requests.

👉From the "Web preview" menu, choose Preview on port 8080. Cloud Shell will open a new browser tab or window with the web preview of your application.

صفحه وب

In the application interface, select 5 for Year, choose Mathematics as the subject, and type Geometry in the Add-on Request field.

Rather than staring blankly while waiting for the response, switch over to the Cloud Editor's terminal. You can observe the progress and any output or error messages generated by your function in the emulator's terminal. 😁

👉Stop the script by pressing Ctrl+C in the terminal.

👉Exit the virtual environment:

deactivate

8. Deploying planner agent to the cloud

Build and push image to registry

overview

👉Time to deploy this to the cloud. In the terminal, create an artifacts repository to store the docker image we are going to build.

gcloud artifacts repositories create agent-repository \
    --repository-format=docker \
    --location=us-central1 \
    --description="My agent repository"

You should see Created repository [agent-repository].

👉Run the following command to build the Docker image.

cd ~/aidemy-bootstrap/planner/
export PROJECT_ID=$(gcloud config get project)
docker build -t gcr.io/${PROJECT_ID}/aidemy-planner .

👉We need to retag the image so that it's hosted in Artifact Registry instead of GCR and push the tagged image to Artifact Registry:

export PROJECT_ID=$(gcloud config get project)
docker tag gcr.io/${PROJECT_ID}/aidemy-planner us-central1-docker.pkg.dev/${PROJECT_ID}/agent-repository/aidemy-planner
docker push us-central1-docker.pkg.dev/${PROJECT_ID}/agent-repository/aidemy-planner

Once the push is complete, you can verify that the image is successfully stored in Artifact Registry. Navigate to the Artifact Registry in the Google Cloud Console. You should find the aidemy-planner image within the agent-repository repository. Aidemy planner image

Securing Database Credentials with Secret Manager

To securely manage and access database credentials, we'll use Google Cloud Secret Manager. This prevents hardcoding sensitive information in our application code and enhances security.

👉We'll create individual secrets for the database username, password, and database name. This approach allows us to manage each credential independently. In the terminal run:

gcloud secrets create db-user
printf "postgres" | gcloud secrets versions add db-user --data-file=-

gcloud secrets create db-pass
printf "1234qwer" | gcloud secrets versions add db-pass --data-file=-

gcloud secrets create db-name
printf "aidemy-db" | gcloud secrets versions add db-name --data-file=-

Using Secret Manager is an important step in securing your application and preventing accidental exposure of sensitive credentials. It follows security best practices for cloud deployments.

Deploy to Cloud Run

Cloud Run is a fully managed serverless platform that allows you to deploy containerized applications quickly and easily. It abstracts away the infrastructure management, letting you focus on writing and deploying your code. We'll be deploying our planner as a Cloud Run service.

👉In the Google Cloud Console, navigate to " Cloud Run ". Click on DEPLOY CONTAINER and select SERVICE . Configure your Cloud Run service:

Cloud run

  1. Container image : Click "Select" in the URL field. Find the image URL you pushed to Artifact Registry (e.g., us-central1-docker.pkg.dev/YOUR_PROJECT_ID/agent-repository/aidemy-planner ).
  2. Service name : aidemy-planner
  3. Region : Select the us-central1 region.
  4. Authentication : For the purpose of this workshop, you can allow "Allow unauthenticated invocations". For production, you'll likely want to restrict access.
  5. Container(s) tab (Expand the Containers, Network):
    • Settings tab:
      • Resources
        • memory : 2GiB
    • Variables & Secrets tab:
      • Environment variables:
        • Add name: GOOGLE_CLOUD_PROJECT and value: <YOUR_PROJECT_ID>
        • Add name: BOOK_PROVIDER_URL , and set the value to your book-provider function URL, which you can determine using the following command in the terminal:
          gcloud run services describe book-provider \
              --region=us-central1 \
              --project=$PROJECT_ID \
              --format="value(status.url)"
      • Secrets exposed as environment variables:
        • Add name: DB_USER , secret: select db-user and version: latest
        • Add name: DB_PASS , secret: select db-pass and version: latest
        • Add name: DB_NAME , secret: select db-name and version: latest

Set secret

Leave the rest as default.

👉Click CREATE .

Cloud Run will deploy your service.

Once deployed, click on the service to open its detail page; you'll find the deployed URL at the top.

URL

In the application interface, select 7 for the Year, choose Mathematics as the subject, and enter Algebra in the Add-on Request field. This will provide the agent with the necessary context to generate a tailored lesson plan.

Congratulations! You've successfully created a teaching plan using our powerful AI agent. This demonstrates the potential of agents to significantly reduce workload and streamline tasks, ultimately improving efficiency and making life easier for educators.

9. Multi-Agent Systems

Now that we've successfully implemented the teaching plan creation tool, let's shift our focus to building the student portal. This portal will provide students with access to quizzes, audio recaps, and assignments related to their coursework. Given the scope of this functionality, we'll leverage the power of multi-agent systems to create a modular and scalable solution.

As we discussed earlier, instead of relying on a single agent to handle everything, a multi-agent system allows us to break down the workload into smaller, specialized tasks, each handled by a dedicated agent. This approach offers several key advantages:

Modularity and Maintainability : Instead of creating a single agent that does everything, build smaller, specialized agents with well-defined responsibilities. This modularity makes the system easier to understand, maintain, and debug. When a problem arises, you can isolate it to a specific agent, rather than having to sift through a massive codebase.

Scalability : Scaling a single, complex agent can be a bottleneck. With a multi-agent system, you can scale individual agents based on their specific needs. For example, if one agent is handling a high volume of requests, you can easily spin up more instances of that agent without affecting the rest of the system.

Team Specialization : Think of it like this: you wouldn't ask one engineer to build an entire application from scratch. Instead, you assemble a team of specialists, each with expertise in a particular area. Similarly, a multi-agent system allows you to leverage the strengths of different LLMs and tools, assigning them to agents that are best suited for specific tasks.

Parallel Development : Different teams can work on different agents concurrently, speeding up the development process. Since agents are independent, changes to one agent are less likely to impact other agents.

Event-Driven Architecture

To enable effective communication and coordination between these agents, we'll employ an event-driven architecture. This means that agents will react to "events" happening within the system.

Agents subscribe to specific event types (e.g., "teaching plan generated," "assignment created"). When an event occurs, the relevant agents are notified and can react accordingly. This decoupling promotes flexibility, scalability, and real-time responsiveness.
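Before wiring up Cloud Pub/Sub, the publish/subscribe idea can be sketched with a tiny in-memory event bus (illustration only — this is not the Cloud Pub/Sub API, and the event names and handlers are made up):

```python
# Registry mapping event types to the handlers interested in them.
subscribers = {}

def subscribe(event_type, handler):
    subscribers.setdefault(event_type, []).append(handler)

def publish(event_type, payload):
    # The publisher knows nothing about who is listening; subscribers are
    # invoked only when a matching event arrives. That decoupling is the point.
    for handler in subscribers.get(event_type, []):
        handler(payload)

received = []
# Two hypothetical student-portal agents reacting to the same event.
subscribe("teaching_plan_generated", lambda plan: received.append(("quiz", plan)))
subscribe("teaching_plan_generated", lambda plan: received.append(("audio", plan)))

publish("teaching_plan_generated", "Week 1: Geometry basics")
print(received)
# [('quiz', 'Week 1: Geometry basics'), ('audio', 'Week 1: Geometry basics')]
```

Cloud Pub/Sub provides the same pattern as a managed service: topics replace the registry, and subscriptions deliver events across process and machine boundaries.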

overview

Now, to kick things off, we need a way to broadcast these events. To do this, we will set up a Pub/Sub topic. Let's start by creating a topic called plan .

👉Go to Pub/Sub in the Google Cloud Console and click on the "Create Topic" button.

👉Configure the topic with ID/name plan , uncheck Add a default subscription , leave the rest as default, and click Create .

The Pub/Sub page will refresh, and you should now see your newly created topic listed in the table. create topic

Now, let's integrate the Pub/Sub event publishing functionality into our planner agent. We'll add a new tool that sends a "plan" event to the Pub/Sub topic we just created. This event will signal to other agents in the system (like those in the student portal) that a new teaching plan is available.

👉Go back to the Cloud Code Editor and open the app.py file located in the planner folder. We will be adding a function that publishes the event. Replace:

##ADD SEND PLAN EVENT FUNCTION HERE

with:

def send_plan_event(teaching_plan: str):
    """
    Send the teaching event to the topic called plan

    Args:
        teaching_plan: teaching plan
    """
    publisher = pubsub_v1.PublisherClient()
    print(f"-------------> Sending event to topic plan: {teaching_plan}")
    topic_path = publisher.topic_path(PROJECT_ID, "plan")

    message_data = {"teaching_plan": teaching_plan}
    data = json.dumps(message_data).encode("utf-8")

    future = publisher.publish(topic_path, data)

    return f"Published message ID: {future.result()}"

  • send_plan_event : This function takes the generated teaching plan as input, creates a Pub/Sub publisher client, constructs the topic path, converts the teaching plan into a JSON string, and publishes the message to the topic.
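On the receiving side, a subscriber sees the published bytes wrapped in a Pub/Sub envelope whose data field is base64-encoded. Decoding is the mirror image of what send_plan_event does (a sketch; the envelope shape assumes Pub/Sub push delivery):

```python
import base64
import json

# What send_plan_event publishes:
message_data = {"teaching_plan": "Week 1: 2D Shapes and Angles"}
published_bytes = json.dumps(message_data).encode("utf-8")

# What a push subscriber (e.g. the portal endpoint) receives — Pub/Sub
# wraps the payload and base64-encodes the data field:
envelope = {"message": {"data": base64.b64encode(published_bytes).decode("utf-8")}}

# Decoding mirrors the encoding steps in reverse:
decoded = json.loads(base64.b64decode(envelope["message"]["data"]).decode("utf-8"))
print(decoded["teaching_plan"])  # → Week 1: 2D Shapes and Angles
```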

In the same app.py file

👉Update the prompt to instruct the agent to send the teaching plan event to the Pub/Sub topic after generating the teaching plan. Replace

### ADD send_plan_event CALL

with the following:

send_plan_event(teaching_plan)

By adding the send_plan_event tool and modifying the prompt, we've enabled our planner agent to publish events to Pub/Sub, allowing other components of our system to react to the creation of new teaching plans. We will now have a functional multi-agent system in the following sections.

10. Empowering Students with On-Demand Quizzes

Imagine a learning environment where students have access to an endless supply of quizzes tailored to their specific learning plans. These quizzes provide immediate feedback, including answers and explanations, fostering a deeper understanding of the material. This is the potential we aim to unlock with our AI-powered quiz portal.

To bring this vision to life, we'll build a quiz generation component that can create multiple-choice questions based on the content of the teaching plan.

Overview

👉In the Cloud Code Editor's Explorer pane, navigate to the portal folder. Open the quiz.py file, then copy and paste the following code at the end of the file.

def generate_quiz_question(file_name: str, difficulty: str, region: str):
    """Generates a single multiple-choice quiz question using the LLM.

    ```json
    {
      "question": "The question itself",
      "options": ["Option A", "Option B", "Option C", "Option D"],
      "answer": "The correct answer letter (A, B, C, or D)"
    }
    ```
    """
    print(f"region: {region}")

    # Connect to resources needed from Google Cloud
    llm = VertexAI(model_name="gemini-1.5-pro", location=region)

    # Load the file using file_name and read its content into the string `plan`
    plan = None
    with open(file_name, 'r') as f:
        plan = f.read()

    parser = JsonOutputParser(pydantic_object=QuizQuestion)

    instruction = f"You'll provide one question with difficulty level of {difficulty}, 4 options as multiple choices and provide the answers, the quiz needs to be related to the teaching plan {plan}"

    prompt = PromptTemplate(
        template="Generates a single multiple-choice quiz question\n {format_instructions}\n  {instruction}\n",
        input_variables=["instruction"],
        partial_variables={"format_instructions": parser.get_format_instructions()},
    )

    chain = prompt | llm | parser
    response = chain.invoke({"instruction": instruction})

    print(f"{response}")
    return response


This creates a JSON output parser that's specifically designed to understand and structure the LLM's output. It uses the QuizQuestion model we defined earlier to ensure the parsed output conforms to the correct format (question, options, and answer).
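The shape the parser enforces can be illustrated with plain Python — a rough stand-in for what the QuizQuestion model checks (the field names come from the docstring above; the validation logic here is an illustrative assumption, not the lab's actual Pydantic model):

```python
import json

def validate_quiz_question(raw: str) -> dict:
    """Parse LLM output and check it matches the expected quiz shape."""
    q = json.loads(raw)
    assert isinstance(q["question"], str) and q["question"]
    assert isinstance(q["options"], list) and len(q["options"]) == 4
    assert q["answer"] in ("A", "B", "C", "D")
    return q

sample = '''{
  "question": "Which shape has three sides?",
  "options": ["A) Square", "B) Triangle", "C) Circle", "D) Pentagon"],
  "answer": "B"
}'''
print(validate_quiz_question(sample)["answer"])  # → B
```

If the LLM drifts from this schema, the JsonOutputParser (or a check like the one above) fails loudly instead of letting malformed quizzes reach students.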

👉Execute the following commands in the terminal to set up a virtual environment, install dependencies, and start the agent:

cd ~/aidemy-bootstrap/portal/
python -m venv env
source env/bin/activate
pip install -r requirements.txt
python app.py

Use the Cloud Shell's web preview feature to access the running application. Click on the "Quizzes" link, either in the top navigation bar or from the card on the index page. You should see three randomly generated quizzes displayed for the student. These quizzes are based on the teaching plan and demonstrate the power of our AI-powered quiz generation system.

Quizzes

👉To stop the locally running process, press Ctrl+C in the terminal.

Gemini 2 Thinking for Explanations

Okay, so we've got quizzes, which is a great start! But what if students get something wrong? That's where the real learning happens, right? If we can explain why their answer was off and how to get to the correct one, they're way more likely to remember it. Plus, it helps clear up any confusion and boost their confidence.

That's why we're going to bring in the big guns: Gemini 2's "thinking" model! Think of it like giving the AI a little extra time to think things through before explaining. It lets it give more detailed and better feedback.

We want to see if it can help students by assisting, answering and explaining in detail. To test it out, we'll start with a notoriously tricky subject, Calculus.

Overview

👉First, head over to the Cloud Code Editor. In answer.py inside the portal folder, replace

def answer_thinking(question, options, user_response, answer, region):
    return ""

with the following code snippet:

def answer_thinking(question, options, user_response, answer, region):
    try:
        llm = VertexAI(model_name="gemini-2.0-flash-001", location=region)

        input_msg = HumanMessage(content=[f"Here is the question {question}, here are the available options {options}, this student's answer {user_response}, whereas the correct answer is {answer}"])
        prompt_template = ChatPromptTemplate.from_messages(
            [
                SystemMessage(
                    content=(
                        "You are a helpful teacher trying to teach the student on question, you were given the question and a set of multiple choices "
                        "and what's the correct answer. Use a friendly tone"
                    )
                ),
                input_msg,
            ]
        )

        prompt = prompt_template.format()

        response = llm.invoke(prompt)
        print(f"response: {response}")

        return response
    except Exception as e:
        print(f"Error sending message to chatbot: {e}")  # Log this error too!
        return f"Unable to process your request at this time. Due to the following reason: {str(e)}"


if __name__ == "__main__":
    question = "Evaluate the limit: lim (x→0) [(sin(5x) - 5x) / x^3]"
    options = ["A) -125/6", "B) -5/3 ", "C) -25/3", "D) -5/6"]
    user_response = "B"
    answer = "A"
    region = "us-central1"
    result = answer_thinking(question, options, user_response, answer, region)

This is a very simple LangChain app that initializes the Gemini 2 Flash model and instructs it to act as a helpful teacher and provide explanations.

👉Execute the following command in the terminal:

cd ~/aidemy-bootstrap/portal/
source env/bin/activate
python answer.py

You should see output similar to the example below; the current (non-thinking) model may not provide as thorough an explanation.

Okay, I see the question and the choices. The question is to evaluate the limit:

lim (x→0) [(sin(5x) - 5x) / x^3]

You chose option B, which is -5/3, but the correct answer is A, which is -125/6.

It looks like you might have missed a step or made a small error in your calculations. This type of limit often involves using L'Hôpital's Rule or Taylor series expansion. Since we have the form 0/0, L'Hôpital's Rule is a good way to go! You need to apply it multiple times. Alternatively, you can use the Taylor series expansion of sin(x) which is:
sin(x) = x - x^3/3! + x^5/5! - ...
So, sin(5x) = 5x - (5x)^3/3! + (5x)^5/5! - ...
Then,  (sin(5x) - 5x) = - (5x)^3/3! + (5x)^5/5! - ...
Finally, (sin(5x) - 5x) / x^3 = - 5^3/3! + (5^5 * x^2)/5! - ...
Taking the limit as x approaches 0, we get -125/6.

Keep practicing, you'll get there!
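As a quick sanity check (not part of the lab), you can verify the limit from this question numerically:

```python
import math

def f(x: float) -> float:
    """(sin(5x) - 5x) / x^3 — the expression from the quiz question."""
    return (math.sin(5 * x) - 5 * x) / x**3

# As x → 0 the value approaches -125/6 ≈ -20.8333, confirming answer A:
value = f(1e-3)
print(round(value, 3))  # → -20.833
```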

In the answer.py file, change the model_name from gemini-2.0-flash-001 to gemini-2.0-flash-thinking-exp-01-21 in the answer_thinking function.

This switches to an LLM that reasons more deeply, which helps it generate better explanations. Run it again.

👉Run to test the new thinking model:

cd ~/aidemy-bootstrap/portal/
source env/bin/activate
python answer.py

Here is an example of the response from the thinking model that is much more thorough and detailed, providing a step-by-step explanation of how to solve the calculus problem. This highlights the power of "thinking" models in generating high-quality explanations. You should see output similar to this:

Hey there! Let's take a look at this limit problem together. You were asked to evaluate:

lim (x→0) [(sin(5x) - 5x) / x^3]

and you picked option B, -5/3, but the correct answer is actually A, -125/6. Let's figure out why!

It's a tricky one because if we directly substitute x=0, we get (sin(0) - 0) / 0^3 = (0 - 0) / 0 = 0/0, which is an indeterminate form. This tells us we need to use a more advanced technique like L'Hopital's Rule or Taylor series expansion.

Let's use the Taylor series expansion for sin(y) around y=0. Do you remember it?  It looks like this:

sin(y) = y - y^3/3! + y^5/5! - ...
where 3! (3 factorial) is 3 × 2 × 1 = 6, 5! is 5 × 4 × 3 × 2 × 1 = 120, and so on.

In our problem, we have sin(5x), so we can substitute y = 5x into the Taylor series:

sin(5x) = (5x) - (5x)^3/3! + (5x)^5/5! - ...
sin(5x) = 5x - (125x^3)/6 + (3125x^5)/120 - ...

Now let's plug this back into our limit expression:

[(sin(5x) - 5x) / x^3] =  [ (5x - (125x^3)/6 + (3125x^5)/120 - ...) - 5x ] / x^3
Notice that the '5x' and '-5x' cancel out!  So we are left with:
= [ - (125x^3)/6 + (3125x^5)/120 - ... ] / x^3
Now, we can divide every term in the numerator by x^3:
= -125/6 + (3125x^2)/120 - ...

Finally, let's take the limit as x approaches 0.  As x gets closer and closer to zero, terms with x^2 and higher powers will become very, very small and approach zero.  So, we are left with:
lim (x→0) [ -125/6 + (3125x^2)/120 - ... ] = -125/6

Therefore, the correct answer is indeed **A) -125/6**.

It seems like your answer B, -5/3, might have come from perhaps missing a factor somewhere during calculation or maybe using an incorrect simplification. Double-check your steps when you were trying to solve it!

Don't worry, these limit problems can be a bit tricky sometimes! Keep practicing and you'll get the hang of it.  Let me know if you want to go through another similar example or if you have any more questions! 😊


Now that we have confirmed it works, let's use the portal.

👉 REMOVE the following test code from answer.py :

if __name__ == "__main__":
    question = "Evaluate the limit: lim (x→0) [(sin(5x) - 5x) / x^3]"
    options = ["A) -125/6", "B) -5/3 ", "C) -25/3", "D) -5/6"]
    user_response = "B"
    answer = "A"
    region = "us-central1"
    result = answer_thinking(question, options, user_response, answer, region)

👉Execute the following commands in the terminal to set up a virtual environment, install dependencies, and start the agent:

cd ~/aidemy-bootstrap/portal/
source env/bin/activate
python app.py

👉Use the Cloud Shell's web preview feature to access the running application. Click on the "Quizzes" link, answer all the quizzes (make sure you get at least one answer wrong), and click Submit.

thinking answers

Rather than staring blankly while waiting for the response, switch over to the Cloud Editor's terminal. You can observe the progress and any output or error messages generated by your function in the emulator's terminal. 😁

To stop the locally running process, press Ctrl+C in the terminal.

11. OPTIONAL: Orchestrating the Agents with Eventarc

So far, the student portal has been generating quizzes based on a default set of teaching plans. That's helpful, but it means our planner agent and portal's quiz agent aren't really talking to each other. Remember how we added that feature where the planner agent publishes its newly generated teaching plans to a Pub/Sub topic? Now it's time to connect that to our portal agent!

Overview

We want the portal to automatically update its quiz content whenever a new teaching plan is generated. To do that, we'll create an endpoint in the portal that can receive these new plans.

👉In the Cloud Code Editor's Explorer pane, navigate to the portal folder. Open the app.py file for editing. Add the following code between the ## Add your code here markers:

## Add your code here

@app.route('/new_teaching_plan', methods=['POST'])
def new_teaching_plan():
    try:
        # Get data from Pub/Sub message delivered via Eventarc
        envelope = request.get_json()
        if not envelope:
            return jsonify({'error': 'No Pub/Sub message received'}), 400

        if not isinstance(envelope, dict) or 'message' not in envelope:
            return jsonify({'error': 'Invalid Pub/Sub message format'}), 400

        pubsub_message = envelope['message']
        print(f"data: {pubsub_message['data']}")

        data = pubsub_message['data']
        data_str = base64.b64decode(data).decode('utf-8')
        data = json.loads(data_str)

        teaching_plan = data['teaching_plan']

        print(f"File content: {teaching_plan}")

        with open("teaching_plan.txt", "w") as f:
            f.write(teaching_plan)

        print("Teaching plan saved to local file: teaching_plan.txt")

        return jsonify({'message': 'File processed successfully'})

    except Exception as e:
        print(f"Error processing file: {e}")
        return jsonify({'error': 'Error processing file'}), 500
## Add your code here

Rebuilding and Deploying to Cloud Run

You'll need to update and redeploy both our planner and portal agents to Cloud Run. This ensures they have the latest code and are configured to communicate via events.

Deployment Overview

👉First we'll rebuild and push the planner agent image, back in the terminal run:

cd ~/aidemy-bootstrap/planner/
export PROJECT_ID=$(gcloud config get project)
docker build -t gcr.io/${PROJECT_ID}/aidemy-planner .
docker tag gcr.io/${PROJECT_ID}/aidemy-planner us-central1-docker.pkg.dev/${PROJECT_ID}/agent-repository/aidemy-planner
docker push us-central1-docker.pkg.dev/${PROJECT_ID}/agent-repository/aidemy-planner

👉We'll do the same, build and push the portal agent image:

cd ~/aidemy-bootstrap/portal/
export PROJECT_ID=$(gcloud config get project)
docker build -t gcr.io/${PROJECT_ID}/aidemy-portal .
docker tag gcr.io/${PROJECT_ID}/aidemy-portal us-central1-docker.pkg.dev/${PROJECT_ID}/agent-repository/aidemy-portal
docker push us-central1-docker.pkg.dev/${PROJECT_ID}/agent-repository/aidemy-portal

In Artifact Registry , you should see both the aidemy-planner and aidemy-portal container images listed.

Container Repo

👉Back in the terminal, run this to update the Cloud Run image for the planner agent:

export PROJECT_ID=$(gcloud config get project)
gcloud run services update aidemy-planner \
    --region=us-central1 \
    --image=us-central1-docker.pkg.dev/${PROJECT_ID}/agent-repository/aidemy-planner:latest

You should see output similar to this:

OK Deploying... Done.                                                                                                                                                     
 
OK Creating Revision...                                                                                                                                                
 
OK Routing traffic...                                                                                                                                                  
Done.                                                                                                                                                                    
Service [aidemy-planner] revision [aidemy-planner-xxxxx] has been deployed and is serving 100 percent of traffic.
Service URL: https://aidemy-planner-xxx.us-central1.run.app

Make note of the Service URL; this is the link to your deployed planner agent. If you need to later determine the planner agent Service URL, use this command:

gcloud run services describe aidemy-planner \
    --region=us-central1 \
    --format 'value(status.url)'

👉Run this to create the Cloud Run instance for the portal agent

export PROJECT_ID=$(gcloud config get project)
gcloud run deploy aidemy-portal \
    --image=us-central1-docker.pkg.dev/${PROJECT_ID}/agent-repository/aidemy-portal:latest \
    --region=us-central1 \
    --platform=managed \
    --allow-unauthenticated \
    --memory=2Gi \
    --cpu=2 \
    --set-env-vars=GOOGLE_CLOUD_PROJECT=${PROJECT_ID}

You should see output similar to this:

Deploying container to Cloud Run service [aidemy-portal] in project [xxxx] region [us-central1]
OK Deploying new service... Done.                                                                                                                                        
 
OK Creating Revision...                                                                                                                                                
 
OK Routing traffic...                                                                                                                                                  
 
OK Setting IAM Policy...                                                                                                                                                
Done.                                                                                                                                                                    
Service [aidemy-portal] revision [aidemy-portal-xxxx] has been deployed and is serving 100 percent of traffic.
Service URL: https://aidemy-portal-xxxx.us-central1.run.app

Make note of the Service URL; this is the link to your deployed student portal. If you need to later determine the student portal Service URL, use this command:

gcloud run services describe aidemy-portal \
    --region=us-central1 \
    --format 'value(status.url)'

Creating the Eventarc Trigger

But here's the big question: how does this endpoint get notified when there's a fresh plan waiting in the Pub/Sub topic? That's where Eventarc swoops in to save the day!

Eventarc acts as a bridge, listening for specific events (like a new message arriving in our Pub/Sub topic) and automatically triggering actions in response. In our case, it will detect when a new teaching plan is published and then send a signal to our portal's endpoint, letting it know that it's time to update.

With Eventarc handling the event-driven communication, we can seamlessly connect our planner agent and portal agent, creating a truly dynamic and responsive learning system. It's like having a smart messenger that automatically delivers the latest lesson plans to the right place!

👉In the console head to the Eventarc .

👉Click the "+ CREATE TRIGGER" button.

Configure the Trigger (Basics):

  • Trigger name: plan-topic-trigger
  • Trigger type: Google sources
  • Event provider: Cloud Pub/Sub
  • Event type: google.cloud.pubsub.topic.v1.messagePublished
  • Cloud Pub/Sub Topic: select projects/PROJECT_ID/topics/plan
  • Region: us-central1 .
  • Service account:
    • GRANT the service account with role roles/iam.serviceAccountTokenCreator
    • Use the default value: Default compute service account
  • Event destination: Cloud Run
  • Cloud Run service: aidemy-portal
  • Ignore error message: Permission denied on 'locations/me-central2' (or it may not exist).
  • Service URL path: /new_teaching_plan

Click "Create".

The Eventarc Triggers page will refresh, and you should now see your newly created trigger listed in the table.

👉Now, access the planner agent using its Service URL to request a new teaching plan.

Run this in the terminal to determine the planner agent Service URL:

gcloud run services list --platform=managed --region=us-central1 --format='value(URL)' | grep planner

This time, try Year 5, Subject Science, and Add-on Request atoms.

Then, wait a minute or two. This delay has been introduced due to billing limitations of this lab; under normal conditions, there shouldn't be a delay.

Finally, access the student portal using its Service URL.

Run this in the terminal to determine the student portal agent Service URL:

gcloud run services list --platform=managed --region=us-central1 --format='value(URL)' | grep portal

You should see that the quizzes have been updated and now align with the new teaching plan you just generated! This demonstrates the successful integration of Eventarc in the Aidemy system!

Aidemy-celebrate

Congratulations! You've successfully built a multi-agent system on Google Cloud, leveraging event-driven architecture for enhanced scalability and flexibility! You've laid a solid foundation, but there's even more to explore. To delve deeper into the real benefits of this architecture, discover the power of Gemini 2's multimodal Live API, and learn how to implement single-path orchestration with LangGraph, feel free to continue on to the next two chapters.

12. OPTIONAL: Audio Recaps with Gemini

Gemini can understand and process information from various sources, like text, images, and even audio, opening up a whole new range of possibilities for learning and content creation. Gemini's ability to "see," "hear," and "read" truly unlocks creative and engaging user experiences.

Beyond just creating visuals or text, another important step in learning is effective summarization and recap. Think about it: how often do you remember a catchy song lyric more easily than something you read in a textbook? Sound can be incredibly memorable! That's why we're going to leverage Gemini's multimodal capabilities to generate audio recaps of our teaching plans. This will provide students with a convenient and engaging way to review material, potentially boosting retention and comprehension through the power of auditory learning.

Live API Overview

We need a place to store the generated audio files. Cloud Storage provides a scalable and reliable solution.

👉Head to the Storage in the console. Click on "Buckets" in the left-hand menu. Click on the "+ CREATE" button at the top.

👉Configure your new bucket:

  • bucket name: aidemy-recap-UNIQUE_NAME .
    • IMPORTANT : Ensure you define a unique bucket name that begins with aidemy-recap- . This unique prefix is crucial for avoiding naming conflicts when creating your Cloud Storage bucket.
  • region: us-central1 .
  • Storage class: "Standard". Standard is suitable for frequently accessed data.
  • Access control: Leave the default "Uniform" access control selected. This provides consistent, bucket-level access control.
  • Advanced options: For this workshop, the default settings are usually sufficient.

Click the CREATE button to create your bucket.

  • You may see a pop up about public access prevention. Leave the "Enforce public access prevention on this bucket" box checked and click Confirm .

You will now see your newly created bucket in the Buckets list. Remember your bucket name, you'll need it later.

👉In the Cloud Code Editor's terminal, run the following commands to grant the service account access to the bucket:

export COURSE_BUCKET_NAME=$(gcloud storage buckets list --format="value(name)" | grep aidemy-recap)
export SERVICE_ACCOUNT_NAME=$(gcloud compute project-info describe --format="value(defaultServiceAccount)")
gcloud storage buckets add-iam-policy-binding gs://$COURSE_BUCKET_NAME \
    --member "serviceAccount:$SERVICE_ACCOUNT_NAME" \
    --role "roles/storage.objectViewer"

gcloud storage buckets add-iam-policy-binding gs://$COURSE_BUCKET_NAME \
    --member "serviceAccount:$SERVICE_ACCOUNT_NAME" \
    --role "roles/storage.objectCreator"

👉In the Cloud Code Editor, open audio.py inside the courses folder. Paste the following code to the end of the file:

config = LiveConnectConfig(
    response_modalities=["AUDIO"],
    speech_config=SpeechConfig(
        voice_config=VoiceConfig(
            prebuilt_voice_config=PrebuiltVoiceConfig(
                voice_name="Charon",
            )
        )
    ),
)

async def process_weeks(teaching_plan: str):
    region = "us-east5"  # To work around onRamp quota limits
    client = genai.Client(vertexai=True, project=PROJECT_ID, location=region)

    clientAudio = genai.Client(vertexai=True, project=PROJECT_ID, location="us-central1")
    async with clientAudio.aio.live.connect(
        model=MODEL_ID,
        config=config,
    ) as session:
        for week in range(1, 4):
            response = client.models.generate_content(
                model="gemini-2.0-flash-001",
                contents=f"Given the following teaching plan: {teaching_plan}, Extract the content plan for week {week}. And return just the plan, nothing else  "  # Clarified prompt
            )

            prompt = f"""
                Assume you are the instructor.
                Prepare a concise and engaging recap of the key concepts and topics covered.
                This recap should be suitable for generating a short audio summary for students.
                Focus on the most important learnings and takeaways, and frame it as a direct address to the students.
                Avoid overly formal language and aim for a conversational tone, tell a few jokes.

                Teaching plan: {response.text} """
            print(f"prompt --->{prompt}")

            await session.send(input=prompt, end_of_turn=True)
            with open(f"temp_audio_week_{week}.raw", "wb") as temp_file:
                async for message in session.receive():
                    if message.server_content.model_turn:
                        for part in message.server_content.model_turn.parts:
                            if part.inline_data:
                                temp_file.write(part.inline_data.data)

            data, samplerate = sf.read(f"temp_audio_week_{week}.raw", channels=1, samplerate=24000, subtype='PCM_16', format='RAW')
            sf.write(f"course-week-{week}.wav", data, samplerate)

            storage_client = storage.Client()
            bucket = storage_client.bucket(BUCKET_NAME)
            blob = bucket.blob(f"course-week-{week}.wav")  # Or give it a more descriptive name
            blob.upload_from_filename(f"course-week-{week}.wav")
            print(f"Audio saved to GCS: gs://{BUCKET_NAME}/course-week-{week}.wav")

    await session.close()


def breakup_sessions(teaching_plan: str):
    asyncio.run(process_weeks(teaching_plan))
  • Streaming Connection : First, a persistent connection is established with the Live API endpoint. Unlike a standard API call where you send a request and get a response, this connection remains open for a continuous exchange of data.
  • Multimodal Configuration : The configuration specifies what type of output you want (in this case, audio), and even which parameters to use (e.g., voice selection, audio encoding).
  • Asynchronous Processing : This API works asynchronously, meaning it doesn't block the main thread while waiting for the audio generation to complete. By processing data in real-time and sending the output in chunks, it provides a near-instantaneous experience.
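The lab uses soundfile to wrap the raw audio stream into a WAV file. The same conversion can be sketched with the standard-library wave module, under the same format assumptions (mono, 24000 Hz, 16-bit PCM):

```python
import wave

def raw_pcm_to_wav(raw_path: str, wav_path: str,
                   samplerate: int = 24000, channels: int = 1) -> None:
    """Wrap headerless 16-bit PCM audio in a WAV container."""
    with open(raw_path, "rb") as f:
        pcm = f.read()
    with wave.open(wav_path, "wb") as w:
        w.setnchannels(channels)
        w.setsampwidth(2)         # 16-bit samples = 2 bytes each
        w.setframerate(samplerate)
        w.writeframes(pcm)
```

The raw stream has no header, so the sample rate and width must match what the Live API actually produced; a mismatch plays back as chipmunk-speed or static.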

Now, the key question is: when should this audio generation process run? Ideally, we want the audio recaps to be available as soon as a new teaching plan is created. Since we've already implemented an event-driven architecture by publishing the teaching plan to a Pub/Sub topic, we can simply subscribe to that topic.

However, we don't generate new teaching plans very often. It wouldn't be efficient to have an agent constantly running and waiting for new plans. That's why it makes perfect sense to deploy this audio generation logic as a Cloud Run Function.

By deploying it as a function, it remains dormant until a new message is published to the Pub/Sub topic. When that happens, it automatically triggers the function, which generates the audio recaps and stores them in our bucket.

👉Open main.py under the courses folder. This file defines the Cloud Run Function that will be triggered when a new teaching plan is available; it receives the plan and initiates the audio recap generation. Add the following code snippet at the end of the file.

@functions_framework.cloud_event
def process_teaching_plan(cloud_event):
    print(f"CloudEvent received: {cloud_event.data}")
    time.sleep(60)
    try:
        if isinstance(cloud_event.data.get('message', {}).get('data'), str):  # Check for base64 encoding
            data = json.loads(base64.b64decode(cloud_event.data['message']['data']).decode('utf-8'))
            teaching_plan = data.get('teaching_plan')  # Get the teaching plan
        elif 'teaching_plan' in cloud_event.data:  # No base64
            teaching_plan = cloud_event.data["teaching_plan"]
        else:
            raise KeyError("teaching_plan not found")  # Handle error explicitly

        # Load the teaching_plan string from the cloud event, then call breakup_sessions
        breakup_sessions(teaching_plan)

        return "Teaching plan processed successfully", 200

    except (json.JSONDecodeError, AttributeError, KeyError) as e:
        print(f"Error decoding CloudEvent data: {e} - Data: {cloud_event.data}")
        return "Error processing event", 500

    except Exception as e:
        print(f"Error processing teaching plan: {e}")
        return "Error processing teaching plan", 500

@functions_framework.cloud_event : This decorator marks the function as a Cloud Run Function that will be triggered by CloudEvents.

Local Testing

👉We'll run this in a virtual environment and install the necessary Python libraries for the Cloud Run function.

cd ~/aidemy-bootstrap/courses
export COURSE_BUCKET_NAME=$(gcloud storage buckets list --format="value(name)" | grep aidemy-recap)
python -m venv env
source env/bin/activate
pip install -r requirements.txt

👉The Cloud Run Function emulator allows us to test our function locally before deploying it to Google Cloud. Start a local emulator by running:

functions-framework --target process_teaching_plan --signature-type=cloudevent --source main.py

👉While the emulator is running, you can send test CloudEvents to the emulator to simulate a new teaching plan being published. In a new terminal:

Two terminal

👉Run:

curl -X POST \
  http://localhost:8080/ \
  -H "Content-Type: application/json" \
  -H "ce-id: event-id-01" \
  -H "ce-source: planner-agent" \
  -H "ce-specversion: 1.0" \
  -H "ce-type: google.cloud.pubsub.topic.v1.messagePublished" \
  -d '{
    "message": {
      "data": "eyJ0ZWFjaGluZ19wbGFuIjogIldlZWsgMTogMkQgU2hhcGVzIGFuZCBBbmdsZXMgLSBEYXkgMTogUmV2aWV3IG9mIGJhc2ljIDJEIHNoYXBlcyAoc3F1YXJlcywgcmVjdGFuZ2xlcywgdHJpYW5nbGVzLCBjaXJjbGVzKS4gRGF5IDI6IEV4cGxvcmluZyBkaWZmZXJlbnQgdHlwZXMgb2YgdHJpYW5nbGVzIChlcXVpbGF0ZXJhbCwgaXNvc2NlbGVzLCBzY2FsZW5lLCByaWdodC1hbmdsZWQpLiBEYXkgMzogRXhwbG9yaW5nIHF1YWRyaWxhdGVyYWxzIChzcXVhcmUsIHJlY3RhbmdsZSwgcGFyYWxsZWxvZ3JhbSwgcmhvbWJ1cywgdHJhcGV6aXVtKS4gRGF5IDQ6IEludHJvZHVjdGlvbiB0byBhbmdsZXM6IHJpZ2h0IGFuZ2xlcywgYWN1dGUgYW5nbGVzLCBhbmQgb2J0dXNlIGFuZ2xlcy4gRGF5IDU6IE1lYXN1cmluZyBhbmdsZXMgdXNpbmcgYSBwcm90cmFjdG9yLiBXZWVrIDI6IDNEIFNoYXBlcyBhbmQgU3ltbWV0cnkgLSBEYXkgNjogSW50cm9kdWN0aW9uIHRvIDNEIHNoYXBlczogY3ViZXMsIGN1Ym9pZHMsIHNwaGVyZXMsIGN5bGluZGVycywgY29uZXMsIGFuZCBweXJhbWlkcy4gRGF5IDc6IERlc2NyaWJpbmcgM0Qgc2hhcGVzIHVzaW5nIGZhY2VzLCBlZGdlcywgYW5kIHZlcnRpY2VzLiBEYXkgODogUmVsYXRpbmcgMkQgc2hhcGVzIHRvIDNEIHNoYXBlcy4gRGF5IDk6IElkZW50aWZ5aW5nIGxpbmVzIG9mIHN5bW1ldHJ5IGluIDJEIHNoYXBlcy4gRGF5IDEwOiBDb21wbGV0aW5nIHN5bW1ldHJpY2FsIGZpZ3VyZXMuIFdlZWsgMzogUG9zaXRpb24sIERpcmVjdGlvbiwgYW5kIFByb2JsZW0gU29sdmluZyAtIERheSAxMTogRGVzY3JpYmluZyBwb3NpdGlvbiB1c2luZyBjb29yZGluYXRlcyBpbiB0aGUgZmlyc3QgcXVhZHJhbnQuIERheSAxMjogUGxvdHRpbmcgY29vcmRpbmF0ZXMgdG8gZHJhdyBzaGFwZXMuIERheSAxMzogVW5kZXJzdGFuZGluZyB0cmFuc2xhdGlvbiAoc2xpZGluZyBhIHNoYXBlKS4gRGF5IDE0OiBVbmRlcnN0YW5kaW5nIHJlZmxlY3Rpb24gKGZsaXBwaW5nIGEgc2hhcGUpLiBEYXkgMTU6IFByb2JsZW0tc29sdmluZyBhY3Rpdml0aWVzIGludm9sdmluZyBwZXJpbWV0ZXIsIGFyZWEsIGFuZCBtaXNzaW5nIGFuZ2xlcy4ifQ=="
    }
  }'

Rather than staring blankly while waiting for the response, switch over to the other Cloud Shell terminal. You can observe the progress and any output or error messages generated by your function in the emulator's terminal. 😁

Back in the second terminal, you should see it returned OK.
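The `data` field in the curl request above is just a base64-encoded JSON object containing the teaching plan. If you want to craft your own test payloads, a small sketch of the round trip (the helper names here are ours, not part of the lab code):

```python
import base64
import json

def encode_teaching_plan(plan: str) -> str:
    """Wrap a teaching plan the way Pub/Sub encodes message data."""
    payload = json.dumps({"teaching_plan": plan})
    return base64.b64encode(payload.encode("utf-8")).decode("utf-8")

def decode_teaching_plan(data: str) -> str:
    """Reverse of the above: recover the plan from a CloudEvent 'data' field."""
    return json.loads(base64.b64decode(data).decode("utf-8"))["teaching_plan"]

encoded = encode_teaching_plan("Week 1: 2D Shapes and Angles")
print(decode_teaching_plan(encoded))  # → Week 1: 2D Shapes and Angles
```

Paste the string returned by `encode_teaching_plan` into the `data` field to simulate any plan you like.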

👉Verify the data landed in the bucket: go to Cloud Storage, select the "Buckets" tab, and open aidemy-recap-UNIQUE_NAME.

Bucket

👉In the terminal running the emulator, press ctrl+c to exit. Close the second terminal, and run deactivate to exit the virtual environment.

deactivate

Deploying to Google Cloud

Deployment overview 👉After testing locally, it's time to deploy the course agent to Google Cloud. In the terminal, run these commands:

cd ~/aidemy-bootstrap/courses
export COURSE_BUCKET_NAME=$(gcloud storage buckets list --format="value(name)" | grep aidemy-recap)
gcloud functions deploy courses-agent \
  --region=us-central1 \
  --gen2 \
  --source=. \
  --runtime=python312 \
  --trigger-topic=plan \
  --entry-point=process_teaching_plan \
  --set-env-vars=GOOGLE_CLOUD_PROJECT=${PROJECT_ID},COURSE_BUCKET_NAME=$COURSE_BUCKET_NAME

Verify the deployment by going to Cloud Run in the Google Cloud Console. You should see a new service named courses-agent listed.

Cloud Run List

To check the trigger configuration, click on the courses-agent service to view its details. Go to the "TRIGGERS" tab.

You should see a trigger configured to listen for messages published to the plan topic.

Cloud Run Trigger

Finally, let's see it running end to end.

👉We need to configure the portal agent so it knows where to find the generated audio files. Run in the terminal:

export COURSE_BUCKET_NAME=$(gcloud storage buckets list --format="value(name)" | grep aidemy-recap)
export PROJECT_ID=$(gcloud config get project)
gcloud run services update aidemy-portal \
  --region=us-central1 \
  --set-env-vars=GOOGLE_CLOUD_PROJECT=${PROJECT_ID},COURSE_BUCKET_NAME=$COURSE_BUCKET_NAME

👉Try generating a new teaching plan using the planner agent web page. It might take a few minutes to start; don't be alarmed, it's a serverless service.

To access the planner agent, get its Service URL by running this in the terminal:

gcloud run services list \
  --platform=managed \
  --region=us-central1 \
  --format='value(URL)' | grep planner

After generating the new plan, wait 2-3 minutes for the audio to be generated; this takes a few extra minutes due to billing limitations on this lab account.

You can monitor whether the courses-agent function has received the teaching plan by checking the function's "TRIGGERS" tab. Refresh the page periodically; you should eventually see that the function has been invoked. If the function hasn't been invoked after more than 2 minutes, you can try generating the teaching plan again. However, avoid generating plans repeatedly in quick succession, as each generated plan will be sequentially consumed and processed by the agent, potentially creating a backlog.

Trigger Observe

👉Visit the portal and click on "Courses". You should see three cards, each displaying an audio recap. To find the URL of your portal agent:

gcloud run services list \
  --platform=managed \
  --region=us-central1 \
  --format='value(URL)' | grep portal

Click "play" on each course to ensure the audio recaps are aligned with the teaching plan you just generated!

Portal Courses

Exit the virtual environment.

deactivate

13. OPTIONAL: Role-Based collaboration with Gemini and DeepSeek

Having multiple perspectives is invaluable, especially when crafting engaging and thoughtful assignments. We'll now build a multi-agent system that leverages two different models with distinct roles, to generate assignments: one promotes collaboration, and the other encourages self-study. We'll use a "single-shot" architecture, where the workflow follows a fixed route.
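Before wiring up real models, it helps to see the "single-shot" idea in miniature. In the sketch below the route is fixed in code, so no LLM decides which step runs next; the two generator functions are stand-ins for the Gemini and DeepSeek calls we build later:

```python
# Framework-free sketch of single-shot (fixed-route) orchestration.
# Each step reads and writes one shared state dict, like LangGraph nodes.

def gen_collaborative(state: dict) -> dict:
    # Stand-in for the Gemini call that emphasizes group work.
    state["model_one_assignment"] = f"Group project based on: {state['teaching_plan']}"
    return state

def gen_individual(state: dict) -> dict:
    # Stand-in for the DeepSeek call that emphasizes self-study.
    state["model_two_assignment"] = f"Self-study tasks based on: {state['teaching_plan']}"
    return state

def combine(state: dict) -> dict:
    # Stand-in for the final Gemini call that merges both drafts.
    state["final_assignment"] = (
        state["model_one_assignment"] + "\n" + state["model_two_assignment"]
    )
    return state

def run_pipeline(teaching_plan: str) -> str:
    state = {"teaching_plan": teaching_plan}
    for step in (gen_collaborative, gen_individual, combine):  # the fixed route
        state = step(state)
    return state["final_assignment"]

print(run_pipeline("Week 1: 2D Shapes"))
```

The real version swaps each stub for a model call but keeps exactly this shape.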

Gemini Assignment Generator

Gemini overview We'll start by setting up the Gemini function to generate assignments with a collaborative emphasis. Edit the gemini.py file located in the assignment folder.

👉Paste the following code to the end of the gemini.py file:

def gen_assignment_gemini(state):
    region = get_next_region()
    client = genai.Client(vertexai=True, project=PROJECT_ID, location=region)
    print(f"---------------gen_assignment_gemini")
    response = client.models.generate_content(
        model=MODEL_ID, contents=f"""
        You are an instructor

        Develop engaging and practical assignments for each week, ensuring they align with the teaching plan's objectives and progressively build upon each other.

        For each week, provide the following:

        * **Week [Number]:** A descriptive title for the assignment (e.g., "Data Exploration Project," "Model Building Exercise").
        * **Learning Objectives Assessed:** List the specific learning objectives from the teaching plan that this assignment assesses.
        * **Description:** A detailed description of the task, including any specific requirements or constraints. Provide examples or scenarios if applicable.
        * **Deliverables:** Specify what students need to submit (e.g., code, report, presentation).
        * **Estimated Time Commitment:** The approximate time students should dedicate to completing the assignment.
        * **Assessment Criteria:** Briefly outline how the assignment will be graded (e.g., correctness, completeness, clarity, creativity).

        The assignments should be a mix of individual and collaborative work where appropriate. Consider different learning styles and provide opportunities for students to apply their knowledge creatively.

        Based on this teaching plan: {state["teaching_plan"]}
        """
    )

    print(f"---------------gen_assignment_gemini answer {response.text}")

    state["model_one_assignment"] = response.text

    return state


import unittest

class TestGenAssignmentGemini(unittest.TestCase):
    def test_gen_assignment_gemini(self):
        test_teaching_plan = "Week 1: 2D Shapes and Angles - Day 1: Review of basic 2D shapes (squares, rectangles, triangles, circles). Day 2: Exploring different types of triangles (equilateral, isosceles, scalene, right-angled). Day 3: Exploring quadrilaterals (square, rectangle, parallelogram, rhombus, trapezium). Day 4: Introduction to angles: right angles, acute angles, and obtuse angles. Day 5: Measuring angles using a protractor. Week 2: 3D Shapes and Symmetry - Day 6: Introduction to 3D shapes: cubes, cuboids, spheres, cylinders, cones, and pyramids. Day 7: Describing 3D shapes using faces, edges, and vertices. Day 8: Relating 2D shapes to 3D shapes. Day 9: Identifying lines of symmetry in 2D shapes. Day 10: Completing symmetrical figures. Week 3: Position, Direction, and Problem Solving - Day 11: Describing position using coordinates in the first quadrant. Day 12: Plotting coordinates to draw shapes. Day 13: Understanding translation (sliding a shape). Day 14: Understanding reflection (flipping a shape). Day 15: Problem-solving activities involving perimeter, area, and missing angles."

        initial_state = {"teaching_plan": test_teaching_plan, "model_one_assignment": "", "model_two_assignment": "", "final_assignment": ""}

        updated_state = gen_assignment_gemini(initial_state)

        self.assertIn("model_one_assignment", updated_state)
        self.assertIsNotNone(updated_state["model_one_assignment"])
        self.assertIsInstance(updated_state["model_one_assignment"], str)
        self.assertGreater(len(updated_state["model_one_assignment"]), 0)
        print(updated_state["model_one_assignment"])


if __name__ == '__main__':
    unittest.main()

It uses the Gemini model to generate assignments.

We are ready to test the Gemini Agent.

👉Run these commands in the terminal to set up the environment:

cd ~/aidemy-bootstrap/assignment
export PROJECT_ID=$(gcloud config get project)
python -m venv env
source env/bin/activate
pip install -r requirements.txt

👉Test it by running:

python gemini.py

You should see an assignment that has more group work in the output. The assert test at the end will also output the results.

Here are some engaging and practical assignments for each week, designed to build progressively upon the teaching plan's objectives:

**Week 1: Exploring the World of 2D Shapes**

* **Learning Objectives Assessed:**
    * Identify and name basic 2D shapes (squares, rectangles, triangles, circles).
    * .....

* **Description:**
    * **Shape Scavenger Hunt:** Students will go on a scavenger hunt in their homes or neighborhoods, taking pictures of objects that represent different 2D shapes. They will then create a presentation or poster showcasing their findings, classifying each shape and labeling its properties (e.g., number of sides, angles, etc.).
    * **Triangle Trivia:** Students will research and create a short quiz or presentation about different types of triangles, focusing on their properties and real-world examples.
    * **Angle Exploration:** Students will use a protractor to measure various angles in their surroundings, such as corners of furniture, windows, or doors. They will record their measurements and create a chart categorizing the angles as right, acute, or obtuse.
....

**Week 2: Delving into the World of 3D Shapes and Symmetry**

* **Learning Objectives Assessed:**
    * Identify and name basic 3D shapes.
    * ....

* **Description:**
    * **3D Shape Construction:** Students will work in groups to build 3D shapes using construction paper, cardboard, or other materials. They will then create a presentation showcasing their creations, describing the number of faces, edges, and vertices for each shape.
    * **Symmetry Exploration:** Students will investigate the concept of symmetry by creating a visual representation of various symmetrical objects (e.g., butterflies, leaves, snowflakes) using drawing or digital tools. They will identify the lines of symmetry and explain their findings.
    * **Symmetry Puzzles:** Students will be given a half-image of a symmetrical figure and will be asked to complete the other half, demonstrating their understanding of symmetry. This can be done through drawing, cut-out activities, or digital tools.

**Week 3: Navigating Position, Direction, and Problem Solving**

* **Learning Objectives Assessed:**
    * Describe position using coordinates in the first quadrant.
    * ....

* **Description:**
    * **Coordinate Maze:** Students will create a maze using coordinates on a grid paper. They will then provide directions for navigating the maze using a combination of coordinate movements and translation/reflection instructions.
    * **Shape Transformations:** Students will draw shapes on a grid paper and then apply transformations such as translation and reflection, recording the new coordinates of the transformed shapes.
    * **Geometry Challenge:** Students will solve real-world problems involving perimeter, area, and angles. For example, they could be asked to calculate the perimeter of a room, the area of a garden, or the missing angle in a triangle.
....

Stop with ctrl+c, then clean up the test code: REMOVE the following code from gemini.py

import unittest

class TestGenAssignmentGemini(unittest.TestCase):
    def test_gen_assignment_gemini(self):
        test_teaching_plan = "Week 1: 2D Shapes and Angles - Day 1: Review of basic 2D shapes (squares, rectangles, triangles, circles). Day 2: Exploring different types of triangles (equilateral, isosceles, scalene, right-angled). Day 3: Exploring quadrilaterals (square, rectangle, parallelogram, rhombus, trapezium). Day 4: Introduction to angles: right angles, acute angles, and obtuse angles. Day 5: Measuring angles using a protractor. Week 2: 3D Shapes and Symmetry - Day 6: Introduction to 3D shapes: cubes, cuboids, spheres, cylinders, cones, and pyramids. Day 7: Describing 3D shapes using faces, edges, and vertices. Day 8: Relating 2D shapes to 3D shapes. Day 9: Identifying lines of symmetry in 2D shapes. Day 10: Completing symmetrical figures. Week 3: Position, Direction, and Problem Solving - Day 11: Describing position using coordinates in the first quadrant. Day 12: Plotting coordinates to draw shapes. Day 13: Understanding translation (sliding a shape). Day 14: Understanding reflection (flipping a shape). Day 15: Problem-solving activities involving perimeter, area, and missing angles."

        initial_state = {"teaching_plan": test_teaching_plan, "model_one_assignment": "", "model_two_assignment": "", "final_assignment": ""}

        updated_state = gen_assignment_gemini(initial_state)

        self.assertIn("model_one_assignment", updated_state)
        self.assertIsNotNone(updated_state["model_one_assignment"])
        self.assertIsInstance(updated_state["model_one_assignment"], str)
        self.assertGreater(len(updated_state["model_one_assignment"]), 0)
        print(updated_state["model_one_assignment"])


if __name__ == '__main__':
    unittest.main()

Configure the DeepSeek Assignment Generator

While cloud-based AI platforms are convenient, self-hosting LLMs can be crucial for protecting data privacy and ensuring data sovereignty. We'll deploy the smallest DeepSeek model (1.5B parameters) on a Compute Engine instance. There are other options, such as hosting it on Google's Vertex AI platform or on a GKE cluster, but since this is a workshop on AI agents, we'll use the simplest approach. If you're interested in the other options, take a look at the deepseek-vertexai.py file in the assignment folder, which provides sample code for interacting with models deployed on Vertex AI.

Deepseek Overview

👉Run this command in the terminal to create a self-hosted LLM platform Ollama:

cd ~/aidemy-bootstrap/assignment
gcloud compute instances create ollama-instance \
  --image-family=ubuntu-2204-lts \
  --image-project=ubuntu-os-cloud \
  --machine-type=e2-standard-4 \
  --zone=us-central1-a \
  --metadata-from-file startup-script=startup.sh \
  --boot-disk-size=50GB \
  --tags=ollama \
  --scopes=https://www.googleapis.com/auth/cloud-platform

To verify the Compute Engine instance is running:

Navigate to Compute Engine > "VM instances" in the Google Cloud Console. You should see the ollama-instance listed with a green check mark indicating that it's running. If you can't see it, make sure the zone is us-central1. If it's not, you may need to search for it.

Compute Engine List

👉We'll install the smallest DeepSeek model and test it. Back in the Cloud Shell Editor, open a new terminal and run the following command to SSH into the GCE instance:

gcloud compute ssh ollama-instance --zone=us-central1-a

Upon establishing the SSH connection, you may be prompted with the following:

"Do you want to continue (Y/n)?"

Simply type Y (case-insensitive) and press Enter to proceed.

Next, you might be asked to create a passphrase for the SSH key. If you prefer not to use a passphrase, just press Enter twice to accept the default (no passphrase).

👉Now that you're in the virtual machine, pull the smallest DeepSeek R1 model and test that it works:

ollama pull deepseek-r1:1.5b
ollama run deepseek-r1:1.5b "who are you?"

👉To exit the GCE instance, enter the following in the SSH terminal:

exit

👉Next, set up the network policy so other services can access the LLM. If you do this in production, limit access to the instance: either implement security login for the service or restrict IP access. Run:

gcloud compute firewall-rules create allow-ollama-11434 \
  --allow=tcp:11434 \
  --target-tags=ollama \
  --description="Allow access to Ollama on port 11434"

👉To verify if your firewall policy is working correctly, try running:

export OLLAMA_HOST=http://$(gcloud compute instances describe ollama-instance --zone=us-central1-a --format='value(networkInterfaces[0].accessConfigs[0].natIP)'):11434
curl -X POST "${OLLAMA_HOST}/api/generate" \
  -H "Content-Type: application/json" \
  -d '{
    "prompt": "Hello, what are you?",
    "model": "deepseek-r1:1.5b",
    "stream": false
  }'
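The request body that Ollama's /api/generate endpoint expects is plain JSON with the fields shown in the curl above. If you end up testing repeatedly, a small helper (the function name here is ours, purely for illustration) makes the payload less error-prone to build:

```python
import json

def build_generate_payload(prompt: str, model: str = "deepseek-r1:1.5b") -> str:
    """Build the JSON body for Ollama's /api/generate endpoint.

    stream=False asks Ollama for one complete JSON response
    instead of a stream of chunks.
    """
    return json.dumps({"prompt": prompt, "model": model, "stream": False})

body = build_generate_payload("Hello, what are you?")
print(body)
```

Pass the resulting string as the `-d` argument to curl, or as the body of a `requests.post` call against `${OLLAMA_HOST}/api/generate`.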

Next, we'll work on the DeepSeek function in the assignment agent to generate assignments with an individual-work emphasis.

👉Edit deepseek.py under the assignment folder and add the following snippet to the end:

def gen_assignment_deepseek(state):
    print(f"---------------gen_assignment_deepseek")

    template = """
        You are an instructor who prefers students to focus on individual work.

        Develop engaging and practical assignments for each week, ensuring they align with the teaching plan's objectives and progressively build upon each other.

        For each week, provide the following:

        * **Week [Number]:** A descriptive title for the assignment (e.g., "Data Exploration Project," "Model Building Exercise").
        * **Learning Objectives Assessed:** List the specific learning objectives from the teaching plan that this assignment assesses.
        * **Description:** A detailed description of the task, including any specific requirements or constraints. Provide examples or scenarios if applicable.
        * **Deliverables:** Specify what students need to submit (e.g., code, report, presentation).
        * **Estimated Time Commitment:** The approximate time students should dedicate to completing the assignment.
        * **Assessment Criteria:** Briefly outline how the assignment will be graded (e.g., correctness, completeness, clarity, creativity).

        The assignments should be a mix of individual and collaborative work where appropriate. Consider different learning styles and provide opportunities for students to apply their knowledge creatively.

        Based on this teaching plan: {teaching_plan}
        """

    prompt = ChatPromptTemplate.from_template(template)

    model = OllamaLLM(model="deepseek-r1:1.5b",
                      base_url=OLLAMA_HOST)

    chain = prompt | model

    response = chain.invoke({"teaching_plan": state["teaching_plan"]})
    state["model_two_assignment"] = response

    return state

import unittest

class TestGenAssignmentDeepseek(unittest.TestCase):
    def test_gen_assignment_deepseek(self):
        test_teaching_plan = "Week 1: 2D Shapes and Angles - Day 1: Review of basic 2D shapes (squares, rectangles, triangles, circles). Day 2: Exploring different types of triangles (equilateral, isosceles, scalene, right-angled). Day 3: Exploring quadrilaterals (square, rectangle, parallelogram, rhombus, trapezium). Day 4: Introduction to angles: right angles, acute angles, and obtuse angles. Day 5: Measuring angles using a protractor. Week 2: 3D Shapes and Symmetry - Day 6: Introduction to 3D shapes: cubes, cuboids, spheres, cylinders, cones, and pyramids. Day 7: Describing 3D shapes using faces, edges, and vertices. Day 8: Relating 2D shapes to 3D shapes. Day 9: Identifying lines of symmetry in 2D shapes. Day 10: Completing symmetrical figures. Week 3: Position, Direction, and Problem Solving - Day 11: Describing position using coordinates in the first quadrant. Day 12: Plotting coordinates to draw shapes. Day 13: Understanding translation (sliding a shape). Day 14: Understanding reflection (flipping a shape). Day 15: Problem-solving activities involving perimeter, area, and missing angles."

        initial_state = {"teaching_plan": test_teaching_plan, "model_one_assignment": "", "model_two_assignment": "", "final_assignment": ""}

        updated_state = gen_assignment_deepseek(initial_state)

        self.assertIn("model_two_assignment", updated_state)
        self.assertIsNotNone(updated_state["model_two_assignment"])
        self.assertIsInstance(updated_state["model_two_assignment"], str)
        self.assertGreater(len(updated_state["model_two_assignment"]), 0)
        print(updated_state["model_two_assignment"])


if __name__ == '__main__':
    unittest.main()

👉Let's test it by running:

cd ~/aidemy-bootstrap/assignment
source env/bin/activate
export PROJECT_ID=$(gcloud config get project)
export OLLAMA_HOST=http://$(gcloud compute instances describe ollama-instance --zone=us-central1-a --format='value(networkInterfaces[0].accessConfigs[0].natIP)'):11434
python deepseek.py

You should see an assignment that has more self-study work.

**Assignment Plan for Each Week**

---

### **Week 1: 2D Shapes and Angles**
- **Week Title:** "Exploring 2D Shapes"
Assign students to research and present on various 2D shapes. Include a project where they create models using straws and tape for triangles, draw quadrilaterals with specific measurements, and compare their properties.

### **Week 2: 3D Shapes and Symmetry**
Assign students to create models or nets for cubes and cuboids. They will also predict how folding these nets form the 3D shapes. Include a project where they identify symmetrical properties using mirrors or folding techniques.

### **Week 3: Position, Direction, and Problem Solving**

Assign students to use mirrors or folding techniques for reflections. Include activities where they measure angles, use a protractor, solve problems involving perimeter/area, and create symmetrical designs.
....
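The `chain = prompt | model` line in deepseek.py uses LangChain's expression language, where `|` composes runnables left to right. The real machinery is more elaborate, but a toy pure-Python analogue of the mechanism (illustrative only, not LangChain's actual implementation) looks like this:

```python
class Runnable:
    """Tiny stand-in for the runnable protocol: anything with invoke(),
    chainable left-to-right with the | operator."""

    def __init__(self, fn):
        self.fn = fn

    def invoke(self, value):
        return self.fn(value)

    def __or__(self, other):
        # a | b invokes a first, then feeds its output into b.
        return Runnable(lambda value: other.invoke(self.invoke(value)))

prompt = Runnable(lambda vars: f"Plan: {vars['teaching_plan']}")
model = Runnable(lambda text: text.upper())

chain = prompt | model
print(chain.invoke({"teaching_plan": "week 1 shapes"}))  # → PLAN: WEEK 1 SHAPES
```

In the lab code, `prompt` formats the template with your teaching plan, and `model` sends the formatted string to the DeepSeek model running on Ollama.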

👉Stop with ctrl+c, then clean up the test code: REMOVE the following code from deepseek.py

import unittest

class TestGenAssignmentDeepseek(unittest.TestCase):
    def test_gen_assignment_deepseek(self):
        test_teaching_plan = "Week 1: 2D Shapes and Angles - Day 1: Review of basic 2D shapes (squares, rectangles, triangles, circles). Day 2: Exploring different types of triangles (equilateral, isosceles, scalene, right-angled). Day 3: Exploring quadrilaterals (square, rectangle, parallelogram, rhombus, trapezium). Day 4: Introduction to angles: right angles, acute angles, and obtuse angles. Day 5: Measuring angles using a protractor. Week 2: 3D Shapes and Symmetry - Day 6: Introduction to 3D shapes: cubes, cuboids, spheres, cylinders, cones, and pyramids. Day 7: Describing 3D shapes using faces, edges, and vertices. Day 8: Relating 2D shapes to 3D shapes. Day 9: Identifying lines of symmetry in 2D shapes. Day 10: Completing symmetrical figures. Week 3: Position, Direction, and Problem Solving - Day 11: Describing position using coordinates in the first quadrant. Day 12: Plotting coordinates to draw shapes. Day 13: Understanding translation (sliding a shape). Day 14: Understanding reflection (flipping a shape). Day 15: Problem-solving activities involving perimeter, area, and missing angles."

        initial_state = {"teaching_plan": test_teaching_plan, "model_one_assignment": "", "model_two_assignment": "", "final_assignment": ""}

        updated_state = gen_assignment_deepseek(initial_state)

        self.assertIn("model_two_assignment", updated_state)
        self.assertIsNotNone(updated_state["model_two_assignment"])
        self.assertIsInstance(updated_state["model_two_assignment"], str)
        self.assertGreater(len(updated_state["model_two_assignment"]), 0)
        print(updated_state["model_two_assignment"])


if __name__ == '__main__':
    unittest.main()

Now, we'll use the same Gemini model to combine both assignments into a new one. Edit the gemini.py file located in the assignment folder.

👉Paste the following code to the end of the gemini.py file:

def combine_assignments(state):
    print(f"---------------combine_assignments ")
    region = get_next_region()
    client = genai.Client(vertexai=True, project=PROJECT_ID, location=region)
    response = client.models.generate_content(
        model=MODEL_ID, contents=f"""
        Look at all the proposed assignments so far: {state["model_one_assignment"]} and {state["model_two_assignment"]}. Combine them and come up with a final assignment for students.
        """
    )

    state["final_assignment"] = response.text

    return state

To combine the strengths of both models, we'll orchestrate a defined workflow using LangGraph. This workflow consists of three steps: first, the Gemini model generates an assignment focused on collaboration; second, the DeepSeek model generates an assignment emphasizing individual work; finally, Gemini synthesizes these two assignments into a single, comprehensive assignment. Because we predefine the sequence of steps without LLM decision-making, this constitutes a single-path, user-defined orchestration.
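All three nodes read and write the same shared state. The actual State class is defined earlier in main.py; a plausible sketch of its shape, assumed from the keys the nodes use in this section, would be:

```python
from typing import TypedDict

class State(TypedDict):
    # Shared state flowing through the LangGraph workflow. The field
    # names match the keys each node reads/writes in this section.
    teaching_plan: str          # input: the plan from the planner agent
    model_one_assignment: str   # output of the Gemini node
    model_two_assignment: str   # output of the DeepSeek node
    final_assignment: str       # output of the combine node

state: State = {
    "teaching_plan": "Week 1: 2D Shapes",
    "model_one_assignment": "",
    "model_two_assignment": "",
    "final_assignment": "",
}
```

Because every node takes the state in and returns it mutated, the graph can pass one dict straight down the fixed route.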

Langraph combine overview

👉Paste the following code to the end of the main.py file under assignment folder:

def create_assignment(teaching_plan: str):
    print(f"create_assignment---->{teaching_plan}")
    builder = StateGraph(State)
    builder.add_node("gen_assignment_gemini", gen_assignment_gemini)
    builder.add_node("gen_assignment_deepseek", gen_assignment_deepseek)
    builder.add_node("combine_assignments", combine_assignments)

    builder.add_edge(START, "gen_assignment_gemini")
    builder.add_edge("gen_assignment_gemini", "gen_assignment_deepseek")
    builder.add_edge("gen_assignment_deepseek", "combine_assignments")
    builder.add_edge("combine_assignments", END)

    graph = builder.compile()
    state = graph.invoke({"teaching_plan": teaching_plan})

    return state["final_assignment"]



import unittest

class TestCreateAssignment(unittest.TestCase):
    def test_create_assignment(self):
        test_teaching_plan = "Week 1: 2D Shapes and Angles - Day 1: Review of basic 2D shapes (squares, rectangles, triangles, circles). Day 2: Exploring different types of triangles (equilateral, isosceles, scalene, right-angled). Day 3: Exploring quadrilaterals (square, rectangle, parallelogram, rhombus, trapezium). Day 4: Introduction to angles: right angles, acute angles, and obtuse angles. Day 5: Measuring angles using a protractor. Week 2: 3D Shapes and Symmetry - Day 6: Introduction to 3D shapes: cubes, cuboids, spheres, cylinders, cones, and pyramids. Day 7: Describing 3D shapes using faces, edges, and vertices. Day 8: Relating 2D shapes to 3D shapes. Day 9: Identifying lines of symmetry in 2D shapes. Day 10: Completing symmetrical figures. Week 3: Position, Direction, and Problem Solving - Day 11: Describing position using coordinates in the first quadrant. Day 12: Plotting coordinates to draw shapes. Day 13: Understanding translation (sliding a shape). Day 14: Understanding reflection (flipping a shape). Day 15: Problem-solving activities involving perimeter, area, and missing angles."

        final_assignment = create_assignment(test_teaching_plan)

        print(final_assignment)


if __name__ == '__main__':
    unittest.main()

👉To initially test the create_assignment function and confirm that the workflow combining Gemini and DeepSeek is functional, run the following command:

cd ~/aidemy-bootstrap/assignment
source env/bin/activate
pip install -r requirements.txt
python main.py

You should see output that combines both models' perspectives: individual study work as well as group work for students.

**Tasks:**

1. **Clue Collection:** Gather all the clues left by the thieves. These clues will include:
    * Descriptions of shapes and their properties (angles, sides, etc.)
    * Coordinate grids with hidden messages
    * Geometric puzzles requiring transformation (translation, reflection, rotation)
    * Challenges involving area, perimeter, and angle calculations

2. **Clue Analysis:** Decipher each clue using your geometric knowledge. This will involve:
    * Identifying the shape and its properties
    * Plotting coordinates and interpreting patterns on the grid
    * Solving geometric puzzles by applying transformations
    * Calculating area, perimeter, and missing angles

3. **Case Report:** Create a comprehensive case report outlining your findings. This report should include:
    * A detailed explanation of each clue and its solution
    * Sketches and diagrams to support your explanations
    * A step-by-step account of how you followed the clues to locate the artifact
    * A final conclusion about the thieves and their motives

👉Stop with ctrl+c, then clean up the test code: REMOVE the following code from main.py

import unittest

class TestCreateAssignment(unittest.TestCase):
    def test_create_assignment(self):
        test_teaching_plan = "Week 1: 2D Shapes and Angles - Day 1: Review of basic 2D shapes (squares, rectangles, triangles, circles). Day 2: Exploring different types of triangles (equilateral, isosceles, scalene, right-angled). Day 3: Exploring quadrilaterals (square, rectangle, parallelogram, rhombus, trapezium). Day 4: Introduction to angles: right angles, acute angles, and obtuse angles. Day 5: Measuring angles using a protractor. Week 2: 3D Shapes and Symmetry - Day 6: Introduction to 3D shapes: cubes, cuboids, spheres, cylinders, cones, and pyramids. Day 7: Describing 3D shapes using faces, edges, and vertices. Day 8: Relating 2D shapes to 3D shapes. Day 9: Identifying lines of symmetry in 2D shapes. Day 10: Completing symmetrical figures. Week 3: Position, Direction, and Problem Solving - Day 11: Describing position using coordinates in the first quadrant. Day 12: Plotting coordinates to draw shapes. Day 13: Understanding translation (sliding a shape). Day 14: Understanding reflection (flipping a shape). Day 15: Problem-solving activities involving perimeter, area, and missing angles."

        final_assignment = create_assignment(test_teaching_plan)

        print(final_assignment)


if __name__ == '__main__':
    unittest.main()

Generate Assignment overview

To make the assignment generation process automatic and responsive to new teaching plans, we'll leverage the existing event-driven architecture. The following code defines a Cloud Run Function (generate_assignment) that will be triggered whenever a new teaching plan is published to the Pub/Sub topic plan.

👉Add the following code to the end of main.py in the assignment folder:

@functions_framework.cloud_event
def generate_assignment(cloud_event):
    print(f"CloudEvent received: {cloud_event.data}")

    try:
        if isinstance(cloud_event.data.get('message', {}).get('data'), str):
            data = json.loads(base64.b64decode(cloud_event.data['message']['data']).decode('utf-8'))
            teaching_plan = data.get('teaching_plan')
        elif 'teaching_plan' in cloud_event.data:
            teaching_plan = cloud_event.data["teaching_plan"]
        else:
            raise KeyError("teaching_plan not found")

        assignment = create_assignment(teaching_plan)

        print(f"Assignment---->{assignment}")

        # Store the returned assignment in the bucket as a text file
        storage_client = storage.Client()
        bucket = storage_client.bucket(ASSIGNMENT_BUCKET)
        file_name = f"assignment-{random.randint(1, 1000)}.txt"
        blob = bucket.blob(file_name)
        blob.upload_from_string(assignment)

        return f"Assignment generated and stored in {ASSIGNMENT_BUCKET}/{file_name}", 200

    except (json.JSONDecodeError, AttributeError, KeyError) as e:
        print(f"Error decoding CloudEvent data: {e} - Data: {cloud_event.data}")
        return "Error processing event", 500

    except Exception as e:
        print(f"Error generating assignment: {e}")
        return "Error generating assignment", 500
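One design note on the function above: `random.randint(1, 1000)` can produce the same number twice, silently overwriting an earlier assignment in the bucket. For this lab that's acceptable, but if it matters in your own code, a collision-resistant alternative (our suggestion, not part of the lab code) is a UUID-based name:

```python
import uuid

def assignment_file_name() -> str:
    """Unique object name; uuid4 makes collisions practically impossible."""
    return f"assignment-{uuid.uuid4().hex}.txt"

a, b = assignment_file_name(), assignment_file_name()
print(a != b)  # → True
```

Swapping this in for the `random.randint` line keeps every generated assignment instead of occasionally replacing one.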

Local Testing

Before deploying to Google Cloud, it's good practice to test the Cloud Run Function locally. This allows for faster iteration and easier debugging.

First, create a Cloud Storage bucket to store the generated assignment files and grant the service account access to the bucket. Run the following commands in the terminal:

👉 IMPORTANT : Ensure you define a unique ASSIGNMENT_BUCKET name that begins with " aidemy-assignment- ". This unique name is crucial for avoiding naming conflicts when creating your Cloud Storage bucket. (Replace <YOUR_NAME> with any random word)

export ASSIGNMENT_BUCKET=aidemy-assignment-<YOUR_NAME> # Name must be unique

👉And run:

export PROJECT_ID=$(gcloud config get project)
export SERVICE_ACCOUNT_NAME=$(gcloud compute project-info describe --format="value(defaultServiceAccount)")
gsutil mb -p $PROJECT_ID -l us-central1 gs://$ASSIGNMENT_BUCKET

gcloud storage buckets add-iam-policy-binding gs://$ASSIGNMENT_BUCKET \
    --member "serviceAccount:$SERVICE_ACCOUNT_NAME" \
    --role "roles/storage.objectViewer"

gcloud storage buckets add-iam-policy-binding gs://$ASSIGNMENT_BUCKET \
    --member "serviceAccount:$SERVICE_ACCOUNT_NAME" \
    --role "roles/storage.objectCreator"

👉Now, start the Cloud Run Function emulator:

cd ~/aidemy-bootstrap/assignment
functions-framework \
    --target generate_assignment \
    --signature-type=cloudevent \
    --source main.py

👉While the emulator is running in one terminal, open a second terminal in the Cloud Shell. In this second terminal, send a test CloudEvent to the emulator to simulate a new teaching plan being published:


curl -X POST \
  http://localhost:8080/ \
  -H "Content-Type: application/json" \
  -H "ce-id: event-id-01" \
  -H "ce-source: planner-agent" \
  -H "ce-specversion: 1.0" \
  -H "ce-type: google.cloud.pubsub.topic.v1.messagePublished" \
  -d '{
    "message": {
      "data": "eyJ0ZWFjaGluZ19wbGFuIjogIldlZWsgMTogMkQgU2hhcGVzIGFuZCBBbmdsZXMgLSBEYXkgMTogUmV2aWV3IG9mIGJhc2ljIDJEIHNoYXBlcyAoc3F1YXJlcywgcmVjdGFuZ2xlcywgdHJpYW5nbGVzLCBjaXJjbGVzKS4gRGF5IDI6IEV4cGxvcmluZyBkaWZmZXJlbnQgdHlwZXMgb2YgdHJpYW5nbGVzIChlcXVpbGF0ZXJhbCwgaXNvc2NlbGVzLCBzY2FsZW5lLCByaWdodC1hbmdsZWQpLiBEYXkgMzogRXhwbG9yaW5nIHF1YWRyaWxhdGVyYWxzIChzcXVhcmUsIHJlY3RhbmdsZSwgcGFyYWxsZWxvZ3JhbSwgcmhvbWJ1cywgdHJhcGV6aXVtKS4gRGF5IDQ6IEludHJvZHVjdGlvbiB0byBhbmdsZXM6IHJpZ2h0IGFuZ2xlcywgYWN1dGUgYW5nbGVzLCBhbmQgb2J0dXNlIGFuZ2xlcy4gRGF5IDU6IE1lYXN1cmluZyBhbmdsZXMgdXNpbmcgYSBwcm90cmFjdG9yLiBXZWVrIDI6IDNEIFNoYXBlcyBhbmQgU3ltbWV0cnkgLSBEYXkgNjogSW50cm9kdWN0aW9uIHRvIDNEIHNoYXBlczogY3ViZXMsIGN1Ym9pZHMsIHNwaGVyZXMsIGN5bGluZGVycywgY29uZXMsIGFuZCBweXJhbWlkcy4gRGF5IDc6IERlc2NyaWJpbmcgM0Qgc2hhcGVzIHVzaW5nIGZhY2VzLCBlZGdlcywgYW5kIHZlcnRpY2VzLiBEYXkgODogUmVsYXRpbmcgMkQgc2hhcGVzIHRvIDNEIHNoYXBlcy4gRGF5IDk6IElkZW50aWZ5aW5nIGxpbmVzIG9mIHN5bW1ldHJ5IGluIDJEIHNoYXBlcy4gRGF5IDEwOiBDb21wbGV0aW5nIHN5bW1ldHJpY2FsIGZpZ3VyZXMuIFdlZWsgMzogUG9zaXRpb24sIERpcmVjdGlvbiwgYW5kIFByb2JsZW0gU29sdmluZyAtIERheSAxMTogRGVzY3JpYmluZyBwb3NpdGlvbiB1c2luZyBjb29yZGluYXRlcyBpbiB0aGUgZmlyc3QgcXVhZHJhbnQuIERheSAxMjogUGxvdHRpbmcgY29vcmRpbmF0ZXMgdG8gZHJhdyBzaGFwZXMuIERheSAxMzogVW5kZXJzdGFuZGluZyB0cmFuc2xhdGlvbiAoc2xpZGluZyBhIHNoYXBlKS4gRGF5IDE0OiBVbmRlcnN0YW5kaW5nIHJlZmxlY3Rpb24gKGZsaXBwaW5nIGEgc2hhcGUpLiBEYXkgMTU6IFByb2JsZW0tc29sdmluZyBhY3Rpdml0aWVzIGludm9sdmluZyBwZXJpbWV0ZXIsIGFyZWEsIGFuZCBtaXNzaW5nIGFuZ2xlcy4ifQ=="
    }
  }'

Rather than staring blankly while waiting for the response, switch over to the other Cloud Shell terminal. You can observe the progress and any output or error messages generated by your function in the emulator's terminal. 😁

The curl command should print "OK" (without a trailing newline, so "OK" may appear on the same line as your shell prompt).

To confirm that the assignment was successfully generated and stored, go to the Google Cloud Console and navigate to Storage > "Cloud Storage". Select the aidemy-assignment bucket you created. You should see a text file named assignment-{random number}.txt in the bucket. Click on the file to download it and verify its contents, confirming that it contains the newly generated assignment.
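If you'd rather verify programmatically than in the console, note that the function always writes files named assignment-<number>.txt with a number between 1 and 1000. A quick local sketch of the naming scheme and the pattern a verification script could match against when listing the bucket:

```python
import random
import re

# Same naming scheme used by generate_assignment.
file_name = f"assignment-{random.randint(1, 1000)}.txt"

# Pattern a verification script could use on blob names listed from the bucket.
pattern = re.compile(r"^assignment-\d+\.txt$")
print(file_name, bool(pattern.match(file_name)))
```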

12-01-assignment-bucket

👉In the terminal running the emulator, press Ctrl+C to exit, then close the second terminal. 👉Also, in the terminal that ran the emulator, exit the virtual environment:

deactivate

Deployment Overview

👉Next, we'll deploy the assignment agent to the cloud

cd ~/aidemy-bootstrap/assignment
export ASSIGNMENT_BUCKET=$(gcloud storage buckets list --format="value(name)" | grep aidemy-assignment)
export OLLAMA_HOST=http://$(gcloud compute instances describe ollama-instance --zone=us-central1-a --format='value(networkInterfaces[0].accessConfigs[0].natIP)'):11434
export PROJECT_ID=$(gcloud config get project)
gcloud functions deploy assignment-agent \
  --gen2 \
  --timeout=540 \
  --memory=2Gi \
  --cpu=1 \
  --set-env-vars="ASSIGNMENT_BUCKET=${ASSIGNMENT_BUCKET},GOOGLE_CLOUD_PROJECT=${PROJECT_ID},OLLAMA_HOST=${OLLAMA_HOST}" \
  --region=us-central1 \
  --runtime=python312 \
  --source=. \
  --entry-point=generate_assignment \
  --trigger-topic=plan

Verify the deployment by going to the Google Cloud Console and navigating to Cloud Run. You should see a new service named assignment-agent listed. 12-03-function-list

With the assignment generation workflow implemented, tested, and deployed, we can move on to the next step: making these assignments accessible within the student portal.

14. OPTIONAL: Role-Based collaboration with Gemini and DeepSeek - Contd.

Dynamic website generation

To enhance the student portal and make it more engaging, we'll implement dynamic HTML generation for assignment pages. The goal is to automatically update the portal with a fresh, visually appealing design whenever a new assignment is generated. This leverages the LLM's coding capabilities to create a more dynamic and interesting user experience.

14-01-generate-html

👉In the Cloud Shell Editor, edit the render.py file within the portal folder. Replace

def render_assignment_page():
    return ""

with the following code snippet:

def render_assignment_page(assignment: str):
    try:
        region = get_next_region()
        llm = VertexAI(model_name="gemini-2.0-flash-001", location=region)
        input_msg = HumanMessage(content=[f"Here the assignment {assignment}"])
        prompt_template = ChatPromptTemplate.from_messages(
            [
                SystemMessage(
                    content=(
                        """
                        As a frontend developer, create HTML to display a student assignment with a creative look and feel. Include the following navigation bar at the top:
                        ```
                        <nav>
                            <a href="/">Home</a>
                            <a href="/quiz">Quizzes</a>
                            <a href="/courses">Courses</a>
                            <a href="/assignment">Assignments</a>
                        </nav>
                        ```
                        Also include these links in the <head> section:
                        ```
                        <link rel="stylesheet" href="{{ url_for('static', filename='style.css') }}">
                        <link rel="preconnect" href="https://fonts.googleapis.com">
                        <link rel="preconnect" href="https://fonts.gstatic.com" crossorigin>
                        <link href="https://fonts.googleapis.com/css2?family=Roboto:wght@400;500&display=swap" rel="stylesheet">
                        ```
                        Do not apply inline styles to the navigation bar.
                        The HTML should display the full assignment content. In its CSS, be creative with the rainbow colors and aesthetic.
                        Make it creative and pretty
                        The assignment content should be well-structured and easy to read.
                        respond with JUST the html file
                        """
                    )
                ),
                input_msg,
            ]
        )

        prompt = prompt_template.format()

        response = llm.invoke(prompt)

        response = response.replace("```html", "")
        response = response.replace("```", "")
        with open("templates/assignment.html", "w") as f:
            f.write(response)

        print(f"response: {response}")

        return response
    except Exception as e:
        print(f"Error sending message to chatbot: {e}")  # Log this error too!
        return f"Unable to process your request at this time. Due to the following reason: {str(e)}"

It uses the Gemini model to dynamically generate HTML for the assignment. It takes the assignment content as input and uses a prompt to instruct Gemini to create a visually appealing HTML page with a creative style.
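The two replace() calls strip the markdown code fences Gemini tends to wrap around generated HTML. If you want something slightly more defensive — for example, handling a stray language tag or surrounding whitespace — a small helper along these lines works. This is a sketch, not part of the codelab code:

```python
import re

def strip_code_fences(text: str) -> str:
    """Remove a leading ``` or ```html fence and a trailing ``` fence, if present."""
    text = text.strip()
    # Drop an opening fence such as ``` or ```html at the start.
    text = re.sub(r"^```[a-zA-Z]*\n?", "", text)
    # Drop a closing fence at the end.
    text = re.sub(r"\n?```$", "", text)
    return text

sample = "```html\n<html><body>Assignment</body></html>\n```"
print(strip_code_fences(sample))
```

Unlike blanket replace() calls, this only touches fences at the start and end of the string, so literal backticks inside the assignment content survive.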

Next, we'll create an endpoint that will be triggered whenever a new document is added to the assignment bucket:

👉Within the portal folder, edit the app.py file and add the following code between the ## Add your code here comments, AFTER the new_teaching_plan function:

## Add your code here

def new_teaching_plan():
        ...
        ...
        ...

    except Exception as e:
        ...
        ...

@app.route('/render_assignment', methods=['POST'])
def render_assignment():
    try:
        data = request.get_json()
        file_name = data.get('name')
        bucket_name = data.get('bucket')

        if not file_name or not bucket_name:
            return jsonify({'error': 'Missing file name or bucket name'}), 400

        storage_client = storage.Client()
        bucket = storage_client.bucket(bucket_name)
        blob = bucket.blob(file_name)
        content = blob.download_as_text()

        print(f"File content: {content}")

        render_assignment_page(content)

        return jsonify({'message': 'Assignment rendered successfully'})

    except Exception as e:
        print(f"Error processing file: {e}")
        return jsonify({'error': 'Error processing file'}), 500

## Add your code here

When triggered, it retrieves the file name and bucket name from the request data, downloads the assignment content from Cloud Storage, and calls the render_assignment_page function to generate the HTML.
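The JSON body Eventarc sends for a google.cloud.storage.object.v1.finalized event includes, among other fields, the object's name and bucket — which is all the handler reads. The extraction step can be simulated locally (the sample payload below is an assumption for illustration; when debugging, log the actual request body):

```python
# Minimal simulation of the fields /render_assignment reads from the event body.
sample_event = {
    "bucket": "aidemy-assignment-demo",   # hypothetical bucket name
    "name": "assignment-42.txt",          # hypothetical object name
    "contentType": "text/plain",
}

file_name = sample_event.get("name")
bucket_name = sample_event.get("bucket")

if not file_name or not bucket_name:
    raise ValueError("Missing file name or bucket name")

print(f"Would download gs://{bucket_name}/{file_name}")
```

In the real handler these two values are then passed to the Cloud Storage client to download the assignment text.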

👉We'll go ahead and run it locally:

cd ~/aidemy-bootstrap/portal
source env/bin/activate
python app.py

👉From the "Web preview" menu at the top of the Cloud Shell window, select "Preview on port 8080". This will open your application in a new browser tab. Navigate to the Assignment link in the navigation bar. You should see a blank page at this point, which is expected behavior since we haven't yet established the communication bridge between the assignment agent and the portal to dynamically populate the content.

14-02-deployment-overview

Go ahead and stop the script by pressing Ctrl+C.

👉To incorporate these changes and deploy the updated code, rebuild and push the portal agent image:

cd ~/aidemy-bootstrap/portal/
export PROJECT_ID=$(gcloud config get project)
docker build -t gcr.io/${PROJECT_ID}/aidemy-portal .
docker tag gcr.io/${PROJECT_ID}/aidemy-portal us-central1-docker.pkg.dev/${PROJECT_ID}/agent-repository/aidemy-portal
docker push us-central1-docker.pkg.dev/${PROJECT_ID}/agent-repository/aidemy-portal

👉After pushing the new image, redeploy the Cloud Run service. Run the following script to force the Cloud Run update:

export PROJECT_ID=$(gcloud config get project)
export COURSE_BUCKET_NAME=$(gcloud storage buckets list --format="value(name)" | grep aidemy-recap)
gcloud run services update aidemy-portal \
    --region=us-central1 \
    --set-env-vars=GOOGLE_CLOUD_PROJECT=${PROJECT_ID},COURSE_BUCKET_NAME=$COURSE_BUCKET_NAME

👉Now, we'll deploy an Eventarc trigger that listens for any new object created (finalized) in the assignment bucket. This trigger will automatically invoke the /render_assignment endpoint on the portal service when a new assignment file is created.

export PROJECT_ID=$(gcloud config get project)
gcloud projects add-iam-policy-binding $PROJECT_ID \
  --member="serviceAccount:$(gcloud storage service-agent --project $PROJECT_ID)" \
  --role="roles/pubsub.publisher"
export SERVICE_ACCOUNT_NAME=$(gcloud compute project-info describe --format="value(defaultServiceAccount)")
gcloud eventarc triggers create portal-assignment-trigger \
  --location=us-central1 \
  --service-account=$SERVICE_ACCOUNT_NAME \
  --destination-run-service=aidemy-portal \
  --destination-run-region=us-central1 \
  --destination-run-path="/render_assignment" \
  --event-filters="bucket=$ASSIGNMENT_BUCKET" \
  --event-filters="type=google.cloud.storage.object.v1.finalized"

To verify that the trigger was created successfully, navigate to the Eventarc Triggers page in the Google Cloud Console. You should see portal-assignment-trigger listed in the table. Click on the trigger name to view its details. Assignment Trigger

It may take up to 2-3 minutes for the new trigger to become active.

To see the dynamic assignment generation in action, run the following command to find the URL of your planner agent (if you don't have it handy):

gcloud run services list --platform=managed --region=us-central1 --format='value(URL)' | grep planner

Find the URL of your portal agent:

gcloud run services list --platform=managed --region=us-central1 --format='value(URL)' | grep portal

In the planner agent, generate a new teaching plan.

13-02-assignment

After a few minutes (to allow for the audio generation, assignment generation, and HTML rendering to complete), navigate to the student portal.

👉Click on the "Assignment" link in the navigation bar. You should see a newly created assignment with dynamically generated HTML. Each time a teaching plan is generated, a freshly designed assignment page should be produced.

13-02-assignment

Congratulations on completing the Aidemy multi-agent system ! You've gained practical experience and valuable insights into:

  • The benefits of multi-agent systems, including modularity, scalability, specialization, and simplified maintenance.
  • The importance of event-driven architectures for building responsive and loosely coupled applications.
  • The strategic use of LLMs, matching the right model to the task and integrating them with tools for real-world impact.
  • Cloud-native development practices using Google Cloud services to create scalable and reliable solutions.
  • The importance of considering data privacy and self-hosting models as an alternative to vendor solutions.

You now have a solid foundation for building sophisticated AI-powered applications on Google Cloud!

15. Challenges and Next Steps

Congratulations on building the Aidemy multi-agent system! You've laid a strong foundation for AI-powered education. Now, let's consider some challenges and potential future enhancements to further expand its capabilities and address real-world needs:

Interactive Learning with Live Q&A:

  • Challenge: Can you leverage Gemini 2's Live API to create a real-time Q&A feature for students? Imagine a virtual classroom where students can ask questions and receive immediate, AI-powered responses.

Automated Assignment Submission and Grading:

  • Challenge: Design and implement a system that allows students to submit assignments digitally and have them automatically graded by AI, with a mechanism to detect and prevent plagiarism. This challenge presents a great opportunity to explore Retrieval Augmented Generation (RAG) to enhance the accuracy and reliability of the grading and plagiarism detection processes.

aidemy-climb

16. Cleanup

Now that we've built and explored our Aidemy multi-agent system, it's time to clean up our Google Cloud environment.

👉Delete Cloud Run services

gcloud run services delete aidemy-planner --region=us-central1 --quiet
gcloud run services delete aidemy-portal --region=us-central1 --quiet
gcloud run services delete courses-agent --region=us-central1 --quiet
gcloud run services delete book-provider --region=us-central1 --quiet
gcloud run services delete assignment-agent --region=us-central1 --quiet

👉Delete Eventarc trigger

gcloud eventarc triggers delete plan-topic-trigger --location=us-central1 --quiet
gcloud eventarc triggers delete portal-assignment-trigger --location=us-central1 --quiet
ASSIGNMENT_AGENT_TRIGGER=$(gcloud eventarc triggers list --project="$PROJECT_ID" --location=us-central1 --filter="name:assignment-agent" --format="value(name)")
COURSES_AGENT_TRIGGER=$(gcloud eventarc triggers list --project="$PROJECT_ID" --location=us-central1 --filter="name:courses-agent" --format="value(name)")
gcloud eventarc triggers delete $ASSIGNMENT_AGENT_TRIGGER --location=us-central1 --quiet
gcloud eventarc triggers delete $COURSES_AGENT_TRIGGER --location=us-central1 --quiet

👉Delete Pub/Sub topic

gcloud pubsub topics delete plan --project="$PROJECT_ID" --quiet

👉Delete Cloud SQL instance

gcloud sql instances delete aidemy --quiet

👉Delete Artifact Registry repository

gcloud artifacts repositories delete agent-repository --location=us-central1 --quiet

👉Delete Secret Manager secrets

gcloud secrets delete db-user --quiet
gcloud secrets delete db-pass --quiet
gcloud secrets delete db-name --quiet

👉Delete Compute Engine instance (if created for Deepseek)

gcloud compute instances delete ollama-instance --zone=us-central1-a --quiet

👉Delete the firewall rule for Deepseek instance

gcloud compute firewall-rules delete allow-ollama-11434 --quiet

👉Delete Cloud Storage buckets

export COURSE_BUCKET_NAME=$(gcloud storage buckets list --format="value(name)" | grep aidemy-recap)
export ASSIGNMENT_BUCKET=$(gcloud storage buckets list --format="value(name)" | grep aidemy-assignment)
gsutil rm -r gs://$COURSE_BUCKET_NAME
gsutil rm -r gs://$ASSIGNMENT_BUCKET

aidemy-broom

Aidemy:
Building Multi-Agent Systems with LangGraph, EDA, and Generative AI on Google Cloud

About this codelab

Last updated: March 13, 2025
Written by: Christina Lin

1. Introduction

Hello! So, you're into the idea of agents – little helpers that can get things done for you without you even lifting a finger, right? Great! But let's be real, one agent isn't always going to cut it, especially when you're tackling bigger, more complex projects. You're probably going to need a whole team of them! That's where multi-agent systems come in.

Agents, when powered by LLMs, give you incredible flexibility compared to old-school hard coding. But, and there's always a but, they come with their own set of tricky challenges. And that's exactly what we're going to dive into in this workshop!

Title

Here's what you can expect to learn – think of it as leveling up your agent game:

Building Your First Agent with LangGraph : We'll get our hands dirty building your very own agent using LangGraph, a popular framework. You'll learn how to create tools that connect to databases, tap into the latest Gemini 2 API for some internet searching, and optimize the prompts and response, so your agent can interact with not only LLMs but existing services. We'll also show you how function calling works.

Agent Orchestration, Your Way : We'll explore different ways to orchestrate your agents, from simple straight paths to more complex multi-path scenarios. Think of it as directing the flow of your agent team.

Multi-Agent Systems : You'll discover how to set up a system where your agents can collaborate, and get things done together – all thanks to an event-driven architecture.

LLM Freedom – Use the Best for the Job: We're not stuck on just one LLM! You'll see how to use multiple LLMs, assigning them different roles to boost problem-solving power using cool "thinking models."

Dynamic Content? No Problem! : Imagine your agent creating dynamic content that's tailored specifically for each user, in real-time. We'll show you how to do it!

Taking it to the Cloud with Google Cloud : Forget just playing around in a notebook. We'll show you how to architect and deploy your multi-agent system on Google Cloud so it's ready for the real world!

This project will be a good example of how to use all the techniques we talked about.

2. Architecture

Being a teacher or working in education can be super rewarding, but let's face it, the workload, especially all the prep work, can be challenging! Plus, there's often not enough staff and tutoring can be expensive. That's why we're proposing an AI-powered teaching assistant. This tool can lighten the load for educators and help bridge the gap caused by staff shortages and the lack of affordable tutoring.

Our AI teaching assistant can whip up detailed lesson plans, fun quizzes, easy-to-follow audio recaps, and personalized assignments. This lets teachers focus on what they do best: connecting with students and helping them fall in love with learning.

The system has two sites: one for teachers to create lesson plans for upcoming weeks,

Planner

and one for students to access quizzes, audio recaps, and assignments. Portal

Alright, let's walk through the architecture powering our teaching assistant, Aidemy. As you can see, we've broken it down into several key components, all working together to make this happen.

Architecture

Key Architectural Elements and Technologies :

Google Cloud Platform (GCP) : Central to the entire system:

  • Vertex AI: Accesses Google's Gemini LLMs.
  • Cloud Run: Serverless platform for deploying containerized agents and functions.
  • Cloud SQL: PostgreSQL database for curriculum data.
  • Pub/Sub & Eventarc: Foundation of the event-driven architecture, enabling asynchronous communication between components.
  • Cloud Storage: Stores audio recaps and assignment files.
  • Secret Manager: Securely manages database credentials.
  • Artifact Registry: Stores Docker images for the agents.
  • Compute Engine: To deploy self-hosted LLM instead of relying on vendor solutions

LLMs : The "brains" of the system:

  • Google's Gemini models: (Gemini 1.0 Pro, Gemini 2 Flash, Gemini 2 Flash Thinking, Gemini 1.5-pro) Used for lesson planning, content generation, dynamic HTML creation, quiz explanation and combining the assignments.
  • DeepSeek: Utilized for the specialized task of generating self-study assignments

LangChain & LangGraph : Frameworks for LLM Application Development

  • Facilitates the creation of complex multi-agent workflows.
  • Enables the intelligent orchestration of tools (API calls, database queries, web searches).
  • Implements event-driven architecture for system scalability and flexibility.

In essence, our architecture combines the power of LLMs with structured data and event-driven communication, all running on Google Cloud. This lets us build a scalable, reliable, and effective teaching assistant.
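To see why the event-driven pattern keeps the agents loosely coupled, consider a toy, in-memory stand-in for a Pub/Sub topic. This is purely illustrative — the real system uses Pub/Sub topics and Eventarc triggers — but it shows the key property: publishers and subscribers never reference each other, only the topic:

```python
from collections import defaultdict

class ToyEventBus:
    """In-memory stand-in for a Pub/Sub topic."""
    def __init__(self):
        self._subscribers = defaultdict(list)

    def subscribe(self, topic, handler):
        self._subscribers[topic].append(handler)

    def publish(self, topic, payload):
        # Deliver the payload to every handler registered on the topic.
        for handler in self._subscribers[topic]:
            handler(payload)

bus = ToyEventBus()
generated = []

# The assignment agent only knows about the 'plan' topic, not the planner.
bus.subscribe("plan", lambda plan: generated.append(f"assignment for: {plan}"))

# The planner only publishes; it does not know who is listening.
bus.publish("plan", "Week 1: 2D Shapes and Angles")
print(generated)
```

Adding a new agent (say, an audio-recap generator) is just another subscribe call — nothing about the planner changes, which is the scalability argument made above.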

3. Before you begin

In the Google Cloud Console , on the project selector page, select or create a Google Cloud project . Make sure that billing is enabled for your Cloud project. Learn how to check if billing is enabled on a project .

👉Click Activate Cloud Shell at the top of the Google Cloud console (It's the terminal shape icon at the top of the Cloud Shell pane), click on the "Open Editor" button (it looks like an open folder with a pencil). This will open the Cloud Shell Code Editor in the window. You'll see a file explorer on the left side.

Cloud Shell

👉Click on the Cloud Code Sign-in button in the bottom status bar as shown. Authorize the plugin as instructed. If you see Cloud Code - no project in the status bar, select that then in the drop down 'Select a Google Cloud Project' and then select the specific Google Cloud Project from the list of projects that you created.

Login project

👉Open the terminal in the cloud IDE. New terminal

👉In the terminal, verify that you're already authenticated and that the project is set to your project ID using the following command:

gcloud auth list

👉And run:

gcloud config set project <YOUR_PROJECT_ID>

👉Run the following command to enable the necessary Google Cloud APIs:

gcloud services enable compute.googleapis.com \
                       storage.googleapis.com \
                       run.googleapis.com \
                       artifactregistry.googleapis.com \
                       aiplatform.googleapis.com \
                       eventarc.googleapis.com \
                       sqladmin.googleapis.com \
                       secretmanager.googleapis.com \
                       cloudbuild.googleapis.com \
                       cloudresourcemanager.googleapis.com \
                       cloudfunctions.googleapis.com

This may take a couple of minutes.

Enable Gemini Code Assist in Cloud Shell IDE

Click on the Code Assist button in the left panel as shown and select the correct Google Cloud project one last time. If you are asked to enable the Cloud AI Companion API, please do so and move forward. Once you've selected your Google Cloud project, ensure that you can see it in the Cloud Code status message in the status bar, and that Code Assist is also enabled on the right of the status bar, as shown below:

Enable codeassist

Setting up permission

👉Setup service account permission. In the terminal, run :

export PROJECT_ID=$(gcloud config get project)
export SERVICE_ACCOUNT_NAME=$(gcloud compute project-info describe --format="value(defaultServiceAccount)")

echo "Here's your SERVICE_ACCOUNT_NAME $SERVICE_ACCOUNT_NAME"

👉 Grant Permissions. In the terminal, run :

#Cloud Storage (Read/Write):
gcloud projects add-iam-policy-binding $PROJECT_ID \
  --member="serviceAccount:$SERVICE_ACCOUNT_NAME" \
  --role="roles/storage.objectAdmin"

#Pub/Sub (Publish/Receive):
gcloud projects add-iam-policy-binding $PROJECT_ID \
  --member="serviceAccount:$SERVICE_ACCOUNT_NAME" \
  --role="roles/pubsub.publisher"

gcloud projects add-iam-policy-binding $PROJECT_ID \
  --member="serviceAccount:$SERVICE_ACCOUNT_NAME" \
  --role="roles/pubsub.subscriber"

#Cloud SQL (Read/Write):
gcloud projects add-iam-policy-binding $PROJECT_ID \
  --member="serviceAccount:$SERVICE_ACCOUNT_NAME" \
  --role="roles/cloudsql.editor"

#Eventarc (Receive Events):
gcloud projects add-iam-policy-binding $PROJECT_ID \
  --member="serviceAccount:$SERVICE_ACCOUNT_NAME" \
  --role="roles/iam.serviceAccountTokenCreator"

gcloud projects add-iam-policy-binding $PROJECT_ID \
  --member="serviceAccount:$SERVICE_ACCOUNT_NAME" \
  --role="roles/eventarc.eventReceiver"

#Vertex AI (User):
gcloud projects add-iam-policy-binding $PROJECT_ID \
  --member="serviceAccount:$SERVICE_ACCOUNT_NAME" \
  --role="roles/aiplatform.user"

#Secret Manager (Read):
gcloud projects add-iam-policy-binding $PROJECT_ID \
  --member="serviceAccount:$SERVICE_ACCOUNT_NAME" \
  --role="roles/secretmanager.secretAccessor"

👉Validate the result in your IAM console. IAM console

👉Run the following commands in the terminal to create a Cloud SQL instance named aidemy . We'll need this later, but since this process can take some time, we'll do it now.

gcloud sql instances create aidemy \
    --database-version=POSTGRES_14 \
    --cpu=2 \
    --memory=4GB \
    --region=us-central1 \
    --root-password=1234qwer \
    --storage-size=10GB \
    --storage-auto-increase

4. Building the first agent

Before we dive into complex multi-agent systems, we need to establish a fundamental building block: a single, functional agent. In this section, we'll take our first steps by creating a simple "book provider" agent. The book provider agent takes a category as input and uses a Gemini LLM to generate a JSON representation of a book within that category. It then serves these book recommendations as a REST API endpoint.
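The agent's contract is simple: given a category, it returns book objects with four string fields. That response shape can be checked without any cloud calls; here is a stdlib-only sketch of validating one item from the JSON the endpoint returns (the deployed function uses a Pydantic Book model for the same purpose):

```python
import json

# The four fields the Book model requires.
REQUIRED_FIELDS = {"bookname", "author", "publisher", "publishing_date"}

def validate_book(raw: str) -> dict:
    """Check that one book object from the agent's JSON response has all expected fields."""
    book = json.loads(raw)
    missing = REQUIRED_FIELDS - book.keys()
    if missing:
        raise ValueError(f"missing fields: {sorted(missing)}")
    return book

# A sample item in the shape the endpoint returns.
sample = '{"author":"Anya Sharma","bookname":"Echoes of the Singularity","publisher":"NovaLight Publishing","publishing_date":"2077-03-15"}'
print(validate_book(sample)["bookname"])
```

This kind of shape check is handy when smoke-testing the deployed endpoint with curl output piped into a script.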

Book Provider

👉In another browser tab, open the Google Cloud Console. In the navigation menu (☰), go to "Cloud Run" and click the "+ ... WRITE A FUNCTION" button.

Create Function

👉Next, we'll configure the basic settings of the Cloud Run Function:

  • Service name: book-provider
  • Region: us-central1
  • Runtime: Python 3.12
  • Authentication: Set Allow unauthenticated invocations to Enabled.

👉Leave other settings as default and click Create . This will take you to the source code editor.

You'll see pre-populated main.py and requirements.txt files.

The main.py will contain the business logic of the function, requirements.txt will contain the packages needed.

👉Now we are ready to write some code! But before diving in, let's see if Gemini Code Assist can give us a head start. Return to the Cloud Shell Editor, click on the Gemini Code Assist icon, and paste the following request into the prompt box: Gemini Code Assist

Use the functions_framework library to be deployable as an HTTP function. 
Accept a request with category and number_of_book parameters (either in JSON body or query string).
Use langchain and gemini to generate the data for book with fields bookname, author, publisher, publishing_date.
Use pydantic to define a Book model with the fields: bookname (string, description: "Name of the book"), author (string, description: "Name of the author"), publisher (string, description: "Name of the publisher"), and publishing_date (string, description: "Date of publishing").
Use langchain and gemini model to generate book data. the output should follow the format defined in Book model.

The logic should use JsonOutputParser from langchain to enforce output format defined in Book Model.
Have a function get_recommended_books(category) that internally uses langchain and gemini to return a single book object.
The main function, exposed as the Cloud Function, should call get_recommended_books() multiple times (based on number_of_book) and return a JSON list of the generated book objects.
Handle the case where category or number_of_book are missing by returning an error JSON response with a 400 status code.
return a JSON string representing the recommended books. use os library to retrieve GOOGLE_CLOUD_PROJECT env var. Use ChatVertexAI from langchain for the LLM call

Code Assist will then generate a potential solution, providing both the source code and a requirements.txt dependency file.

We encourage you to compare Code Assist's generated code with the tested, correct solution provided below. This lets you evaluate the tool's effectiveness and identify any potential discrepancies. While LLMs should never be blindly trusted, Code Assist can be a great tool for rapid prototyping and generating initial code structures, and should be used as a good head start.

Since this is a workshop, we'll proceed with the verified code provided below. However, feel free to experiment with the Code Assist-generated code in your own time to gain a deeper understanding of its capabilities and limitations.

👉Return to the Cloud Run Function's source code editor (in the other browser tab). Carefully replace the existing content of main.py with the code provided below:

import functions_framework
import json
from flask import Flask, jsonify, request
from langchain_google_vertexai import ChatVertexAI
from langchain_core.output_parsers import JsonOutputParser
from langchain_core.prompts import PromptTemplate
from pydantic import BaseModel, Field
import os

class Book(BaseModel):
    bookname: str = Field(description="Name of the book")
    author: str = Field(description="Name of the author")
    publisher: str = Field(description="Name of the publisher")
    publishing_date: str = Field(description="Date of publishing")


project_id = os.environ.get("GOOGLE_CLOUD_PROJECT")

llm = ChatVertexAI(model_name="gemini-2.0-flash-lite-001")

def get_recommended_books(category):
    """
    A simple book recommendation function.

    Args:
        category (str): category

    Returns:
        str: A JSON string representing the recommended books.
    """
    parser = JsonOutputParser(pydantic_object=Book)
    question = f"Generate a random made up book on {category} with bookname, author and publisher and publishing_date"

    prompt = PromptTemplate(
        template="Answer the user query.\n{format_instructions}\n{query}\n",
        input_variables=["query"],
        partial_variables={"format_instructions": parser.get_format_instructions()},
    )

    chain = prompt | llm | parser
    response = chain.invoke({"query": question})

    return json.dumps(response)


@functions_framework.http
def recommended(request):
    request_json = request.get_json(silent=True)  # Get JSON data
    if request_json and 'category' in request_json and 'number_of_book' in request_json:
        category = request_json['category']
        number_of_book = int(request_json['number_of_book'])
    elif request.args and 'category' in request.args and 'number_of_book' in request.args:
        category = request.args.get('category')
        number_of_book = int(request.args.get('number_of_book'))
    else:
        return jsonify({'error': 'Missing category or number_of_book parameters'}), 400

    recommendations_list = []
    for i in range(number_of_book):
        book_dict = json.loads(get_recommended_books(category))
        print(f"book_dict=======>{book_dict}")
        recommendations_list.append(book_dict)

    return jsonify(recommendations_list)

👉Replace the contents of requirements.txt with the following:

functions-framework==3.*
google-genai==1.0.0
flask==3.1.0
jsonify==0.5
langchain_google_vertexai==2.0.13
langchain_core==0.3.34
pydantic==2.10.5

👉Set the Function entry point to: recommended


👉Click SAVE AND DEPLOY to deploy the Function. Wait for the deployment process to complete; the Cloud Console will display the status. This may take a few minutes.

👉Once deployed, go back to the Cloud Shell editor and, in the terminal, run:

export PROJECT_ID=$(gcloud config get project)
export BOOK_PROVIDER_URL=$(gcloud run services describe book-provider --region=us-central1 --project=$PROJECT_ID --format="value(status.url)")

curl -X POST -H "Content-Type: application/json" -d '{"category": "Science Fiction", "number_of_book": 2}' $BOOK_PROVIDER_URL

It should show some book data in JSON format.

[
  {"author":"Anya Sharma","bookname":"Echoes of the Singularity","publisher":"NovaLight Publishing","publishing_date":"2077-03-15"},
  {"author":"Anya Sharma","bookname":"Echoes of the Quantum Dawn","publisher":"Nova Genesis Publishing","publishing_date":"2077-03-15"}
]

Congratulations! You have successfully deployed a Cloud Run Function. This is one of the services we will be integrating when developing our Aidemy agent.

5. Building Tools: Connecting Agents to RESTful Services and Data

Let's go ahead and download the Bootstrap Skeleton Project, make sure you are in the Cloud Shell Editor. In the terminal run,

git clone https://github.com/weimeilin79/aidemy-bootstrap.git

After running this command, a new folder named aidemy-bootstrap will be created in your Cloud Shell environment.

In the Cloud Shell Editor's Explorer pane (usually on the left side), you should now see the aidemy-bootstrap folder that was created when you cloned the Git repository. Open the root folder of your project in the Explorer. You'll find a planner subfolder within it; open that as well.

Let's start building the tools our agents will use to become truly helpful. As you know, LLMs are excellent at reasoning and generating text, but they need access to external resources to perform real-world tasks and provide accurate, up-to-date information. Think of these tools as the agent's "Swiss Army knife," giving it the ability to interact with the world.

When building an agent, it's easy to fall into hard-coding a ton of details. This creates an agent that is not flexible. Instead, by creating and using tools, the agent has access to external logic or systems which gives it the benefits of both the LLM and traditional programming.
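To make this concrete, here is a minimal, framework-free sketch (every name in it is illustrative, not part of the workshop code) of what a tool looks like from an agent's point of view: a plain function, a machine-readable description of it, and a dispatcher that executes whichever call the LLM decides to make.

```python
import json

# A "tool" is just a regular function with a clear signature and docstring.
def get_weather(city: str) -> str:
    """Return a canned weather report for a city (stand-in for a real API)."""
    return json.dumps({"city": city, "forecast": "sunny"})

# The description an LLM sees once the tool is "bound" to it.
TOOL_SPEC = {
    "name": "get_weather",
    "description": "Return a weather report for a city.",
    "parameters": {"city": "string"},
}

# A tiny dispatcher: in a real agent, the LLM emits a tool name plus
# arguments, and the framework performs this lookup and call.
def dispatch(tool_name: str, arguments: dict) -> str:
    registry = {"get_weather": get_weather}
    return registry[tool_name](**arguments)

result = dispatch("get_weather", {"city": "Paris"})
print(result)  # {"city": "Paris", "forecast": "sunny"}
```

Frameworks like LangGraph and the Gen AI SDK generate the tool spec from the function signature and docstring automatically, which is why clear docstrings matter in the tools you are about to build.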

In this section, we'll create the foundation for the planner agent, which teachers will use to generate lesson plans. Before the agent starts generating a plan, we want to set boundaries by providing more details on the subject and topic. We'll build three tools:

  1. Restful API Call: Interacting with a pre-existing API to retrieve data.
  2. Database Query: Fetching structured data from a Cloud SQL database.
  3. Google Search: Accessing real-time information from the web.

Fetching Book Recommendations from an API

First, let's create a tool that retrieves book recommendations from the book-provider API we deployed in the previous section. This demonstrates how an agent can leverage existing services.


In the Cloud Shell Editor, open the aidemy-bootstrap project that you cloned in the previous section.

👉Edit the book.py in the planner folder, and paste the following code at the end of the file:

def recommend_book(query: str):
    """
    Get a list of recommended books from an API endpoint

    Args:
        query: User's request string
    """
    region = get_next_region()
    llm = VertexAI(model_name="gemini-1.5-pro", location=region)

    query = f"""The user is trying to plan an education course, you are the teaching assistant. Help define the category of what the user requested to teach, respond with the category in no more than two words.

    user request:   {query}
    """
    print(f"-------->{query}")
    response = llm.invoke(query)
    print(f"CATEGORY RESPONSE------------>: {response}")

    category = response.strip()

    headers = {"Content-Type": "application/json"}
    data = {"category": category, "number_of_book": 2}

    books = requests.post(BOOK_PROVIDER_URL, headers=headers, json=data)

    return books.text

if __name__ == "__main__":
    print(recommend_book("I'm doing a course for my 5th grade student on Math Geometry, I'll need to recommend few books come up with a teach plan, few quizes and also a homework assignment."))

Explanation:

  • recommend_book(query: str) : This function takes a user's query as input.
  • LLM Interaction : It uses the LLM to extract the category from the query. This demonstrates how you can use the LLM to help create parameters for tools.
  • API Call : It makes a POST request to the book-provider API, passing the category and the desired number of books.

👉To test this new function, set the environment variable, run :

cd ~/aidemy-bootstrap/planner/
export BOOK_PROVIDER_URL=$(gcloud run services describe book-provider --region=us-central1 --project=$PROJECT_ID --format="value(status.url)")

👉Install the dependencies and run the code to ensure it works, run:

cd ~/aidemy-bootstrap/planner/
python -m venv env
source env/bin/activate
export PROJECT_ID=$(gcloud config get project)
pip install -r requirements.txt
python book.py

Ignore the Git warning pop-up window.

You should see a JSON string containing book recommendations retrieved from the book-provider API. The results are randomly generated. Your books may not be the same, but you should receive two book recommendations in JSON format.

[{"author":"Anya Sharma","bookname":"Echoes of the Singularity","publisher":"NovaLight Publishing","publishing_date":"2077-03-15"},{"author":"Anya Sharma","bookname":"Echoes of the Quantum Dawn","publisher":"Nova Genesis Publishing","publishing_date":"2077-03-15"}]

If you see this, the first tool is working correctly!

Instead of explicitly crafting a RESTful API call with specific parameters, we're using natural language ("I'm doing a course..."). The agent then intelligently extracts the necessary parameters (like the category) using NLP, highlighting how the agent leverages natural language understanding to interact with the API.
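The contrast can be sketched as follows; the category extractor is stubbed out here with a simple keyword check, whereas in book.py that step is a Gemini call:

```python
# Traditional approach: the caller must already know the exact parameter.
def explicit_call(category: str) -> dict:
    # This dict is the request body sent to the book-provider API.
    return {"category": category, "number_of_book": 2}

# Agent approach: a natural-language request is turned into the same payload.
def extract_category(request_text: str) -> str:
    """Stand-in for the LLM call in recommend_book that picks a category."""
    return "Science Fiction" if "sci-fi" in request_text.lower() else "General"

def agent_call(request_text: str) -> dict:
    return {"category": extract_category(request_text), "number_of_book": 2}

# Both paths produce an identical request body for the book-provider API.
print(explicit_call("Science Fiction"))
print(agent_call("I need a few sci-fi books for my course"))
```

Either path yields the same request body; the agent version simply moves the burden of knowing the parameters from the caller to the LLM.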


👉 Remove the following testing code from book.py :

if __name__ == "__main__":
    print(recommend_book("I'm doing a course for my 5th grade student on Math Geometry, I'll need to recommend few books come up with a teach plan, few quizes and also a homework assignment."))

Getting Curriculum Data from a Database

Next, we'll build a tool that fetches structured curriculum data from a Cloud SQL PostgreSQL database. This allows the agent to access a reliable source of information for lesson planning.


Remember the aidemy Cloud SQL instance you created in a previous step? Here's where it will be used.

👉Create a database named aidemy-db in the new instance.

gcloud sql databases create aidemy-db \
    --instance=aidemy

Let's verify the instance under Cloud SQL in the Google Cloud Console. You should see a Cloud SQL instance named aidemy listed. Click the instance name to view its details. On the Cloud SQL instance details page, click "SQL Studio" in the left-hand navigation menu. This will open a new tab.

Click to connect to the database and sign in to SQL Studio.

Select aidemy-db as the database. Enter postgres as the user and 1234qwer as the password.

👉In the SQL Studio query editor, paste the following SQL code:

CREATE TABLE curriculums (
    id SERIAL PRIMARY KEY,
    year INT,
    subject VARCHAR(255),
    description TEXT
);

-- Inserting detailed curriculum data for different school years and subjects
INSERT INTO curriculums (year, subject, description) VALUES
-- Year 5
(5, 'Mathematics', 'Introduction to fractions, decimals, and percentages, along with foundational geometry and problem-solving techniques.'),
(5, 'English', 'Developing reading comprehension, creative writing, and basic grammar, with a focus on storytelling and poetry.'),
(5, 'Science', 'Exploring basic physics, chemistry, and biology concepts, including forces, materials, and ecosystems.'),
(5, 'Computer Science', 'Basic coding concepts using block-based programming and an introduction to digital literacy.'),

-- Year 6
(6, 'Mathematics', 'Expanding on fractions, ratios, algebraic thinking, and problem-solving strategies.'),
(6, 'English', 'Introduction to persuasive writing, character analysis, and deeper comprehension of literary texts.'),
(6, 'Science', 'Forces and motion, the human body, and introductory chemical reactions with hands-on experiments.'),
(6, 'Computer Science', 'Introduction to algorithms, logical reasoning, and basic text-based programming (Python, Scratch).'),

-- Year 7
(7, 'Mathematics', 'Algebraic expressions, geometry, and introduction to statistics and probability.'),
(7, 'English', 'Analytical reading of classic and modern literature, essay writing, and advanced grammar skills.'),
(7, 'Science', 'Introduction to cells and organisms, chemical reactions, and energy transfer in physics.'),
(7, 'Computer Science', 'Building on programming skills with Python, introduction to web development, and cyber safety.');

This SQL code creates a table named curriculums and inserts some sample data. Click Run to execute the SQL code. You should see a confirmation message indicating that the commands were executed successfully.

👉Expand the explorer, find the newly created table, and click query . A new editor tab will open with SQL generated for you:


SELECT * FROM "public"."curriculums" LIMIT 1000;

👉Click Run .

The results table should display the rows of data you inserted in the previous step, confirming that the table and data were created correctly.

Now that you have successfully created a database with populated sample curriculum data, we'll build a tool to retrieve it.

👉In the Cloud Code Editor, edit the file curriculums.py in the planner folder and paste the following code at the end of the file:

def connect_with_connector() -> sqlalchemy.engine.base.Engine:

    db_user = os.environ["DB_USER"]
    db_pass = os.environ["DB_PASS"]
    db_name = os.environ["DB_NAME"]

    print(f"--------------------------->db_user: {db_user!r}")
    print(f"--------------------------->db_pass: {db_pass!r}")
    print(f"--------------------------->db_name: {db_name!r}")

    ip_type = IPTypes.PRIVATE if os.environ.get("PRIVATE_IP") else IPTypes.PUBLIC

    connector = Connector()

    def getconn() -> pg8000.dbapi.Connection:
        conn: pg8000.dbapi.Connection = connector.connect(
            instance_connection_name,
            "pg8000",
            user=db_user,
            password=db_pass,
            db=db_name,
            ip_type=ip_type,
        )
        return conn

    pool = sqlalchemy.create_engine(
        "postgresql+pg8000://",
        creator=getconn,
        pool_size=2,
        max_overflow=2,
        pool_timeout=30,  # 30 seconds
        pool_recycle=1800,  # 30 minutes
    )
    return pool


def init_connection_pool() -> sqlalchemy.engine.base.Engine:
    return connect_with_connector()


def get_curriculum(year: int, subject: str):
    """
    Get school curriculum

    Args:
        year: User's request year int
        subject: User's request subject string
    """
    try:
        stmt = sqlalchemy.text(
            "SELECT description FROM curriculums WHERE year = :year AND subject = :subject"
        )

        with db.connect() as conn:
            result = conn.execute(stmt, parameters={"year": year, "subject": subject})
            row = result.fetchone()
        if row:
            return row[0]
        else:
            return None

    except Exception as e:
        print(e)
        return None

db = init_connection_pool()

Explanation:

  • Environment Variables : The code retrieves database credentials and connection information from environment variables (more on this below).
  • connect_with_connector() : This function uses the Cloud SQL Connector to establish a secure connection to the database.
  • get_curriculum(year: int, subject: str) : This function takes the year and subject as input, queries the curriculums table, and returns the corresponding curriculum description.
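The key pattern in get_curriculum is the parameterized query. The sketch below reproduces it against an in-memory SQLite database (a stand-in for Cloud SQL, so it runs without a Connector or credentials; SQLite uses ? placeholders where SQLAlchemy uses :year and :subject):

```python
import sqlite3

# In-memory stand-in for the Cloud SQL curriculums table.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE curriculums (year INT, subject TEXT, description TEXT)")
conn.execute(
    "INSERT INTO curriculums VALUES (6, 'Mathematics', "
    "'Expanding on fractions, ratios, algebraic thinking, and problem-solving strategies.')"
)

def get_curriculum(year: int, subject: str):
    # Bound parameters keep user input out of the SQL string itself,
    # which prevents SQL injection and lets the driver handle quoting.
    row = conn.execute(
        "SELECT description FROM curriculums WHERE year = ? AND subject = ?",
        (year, subject),
    ).fetchone()
    return row[0] if row else None

print(get_curriculum(6, "Mathematics"))
```

The real tool returns exactly the same shape of result: the description string on a hit, None on a miss.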

👉Before we can run the code, we must set some environment variables, in the terminal, run:

export PROJECT_ID=$(gcloud config get project)
export INSTANCE_NAME="aidemy"
export REGION="us-central1"
export DB_USER="postgres"
export DB_PASS="1234qwer"
export DB_NAME="aidemy-db"

👉To test add the following code to the end of curriculums.py :

if __name__ == "__main__":
    print(get_curriculum(6, "Mathematics"))

👉Run the code:

cd ~/aidemy-bootstrap/planner/
source env/bin/activate
python curriculums.py

You should see the curriculum description for 6th-grade Mathematics printed to the console.

Expanding on fractions, ratios, algebraic thinking, and problem-solving strategies.

If you see the curriculum description, the database tool is working correctly! Go ahead and stop the script by pressing Ctrl+C .

👉 Remove the following testing code from curriculums.py :

if __name__ == "__main__":
    print(get_curriculum(6, "Mathematics"))

👉Exit virtual environment, in terminal run:

deactivate

6. Building Tools: Accessing real-time information from the web

Finally, we'll build a tool that uses the Gemini 2 and Google Search integration to access real-time information from the web. This helps the agent stay up-to-date and provide relevant results.

Gemini 2's integration with the Google Search API enhances agent capabilities by providing more accurate and contextually relevant search results. This allows agents to access up-to-date information and ground their responses in real-world data, minimizing hallucinations. The improved API integration also facilitates more natural language queries, enabling agents to formulate complex and nuanced search requests.


This function takes a search query, curriculum, subject, and year as input and uses the Gemini API and the Google Search tool to retrieve relevant information from the internet. If you look closely, it's using the Google Generative AI SDK to do function calling without using any other framework.

👉Edit search.py in the planner folder and paste the following code at the end of the file:

model_id = "gemini-2.0-flash-001"

google_search_tool = Tool(
    google_search=GoogleSearch()
)

def search_latest_resource(search_text: str, curriculum: str, subject: str, year: int):
    """
    Get latest information from the internet

    Args:
        search_text: User's request category string
        curriculum: Curriculum detail string
        subject: User's request subject string
        year: User's request year integer
    """
    search_text = "%s in the context of year %d and subject %s with following curriculum detail %s " % (search_text, year, subject, curriculum)
    region = get_next_region()
    client = genai.Client(vertexai=True, project=PROJECT_ID, location=region)
    print(f"search_latest_resource text-----> {search_text}")
    response = client.models.generate_content(
        model=model_id,
        contents=search_text,
        config=GenerateContentConfig(
            tools=[google_search_tool],
            response_modalities=["TEXT"],
        )
    )
    print(f"search_latest_resource response-----> {response}")
    return response

if __name__ == "__main__":
    response = search_latest_resource("What are the syllabus for Year 6 Mathematics?", "Expanding on fractions, ratios, algebraic thinking, and problem-solving strategies.", "Mathematics", 6)
    for each in response.candidates[0].content.parts:
        print(each.text)

Explanation:

  • Defining Tool - google_search_tool : Wraps the GoogleSearch object in a Tool so the Gemini model can invoke it.
  • search_latest_resource(search_text: str, curriculum: str, subject: str, year: int) : This function takes a search query, curriculum details, subject, and year as input and uses the Gemini API to perform a Google search.
  • GenerateContentConfig : Declares that the model has access to the GoogleSearch tool.

The Gemini model internally analyzes the search_text and determines whether it can answer the question directly or if it needs to use the GoogleSearch tool. This is a critical step that happens within the LLM's reasoning process. The model has been trained to recognize situations where external tools are necessary. If the model decides to use the GoogleSearch tool, the Google Generative AI SDK handles the actual invocation. The SDK takes the model's decision and the parameters it generates and sends them to the Google Search API. This part is hidden from the user in the code.

The Gemini model then integrates the search results into its response. It can use the information to answer the user's question, generate a summary, or perform some other task.
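To see the shape of the object your test code walks, here is a sketch using SimpleNamespace stand-ins for the SDK's response classes (the real response carries many more fields, such as grounding metadata and token counts):

```python
from types import SimpleNamespace

# Mocked response mirroring the shape returned by
# client.models.generate_content, trimmed to the fields we actually read.
response = SimpleNamespace(
    candidates=[
        SimpleNamespace(
            content=SimpleNamespace(
                parts=[
                    SimpleNamespace(text="Year 6 syllabus covers ratios"),
                    SimpleNamespace(text=" and algebraic thinking."),
                ]
            )
        )
    ]
)

# Same traversal as the test block in search.py: collect every text part
# of the first candidate into the final answer.
answer = "".join(part.text for part in response.candidates[0].content.parts)
print(answer)  # Year 6 syllabus covers ratios and algebraic thinking.
```

A response can contain multiple parts, which is why the test code loops over candidates[0].content.parts instead of reading a single text field.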

👉To test, run the code:

cd ~/aidemy-bootstrap/planner/
export PROJECT_ID=$(gcloud config get project)
source env/bin/activate
python search.py

You should see the Gemini Search API response containing search results related to "Syllabus for Year 6 Mathematics." The exact output will depend on the search results, but it will be a response object with information about the search.

If you see search results, the Google Search tool is working correctly! Go ahead and stop the script by pressing Ctrl+C .

👉And remove the testing code at the end of the file:

if __name__ == "__main__":
    response = search_latest_resource("What are the syllabus for Year 6 Mathematics?", "Expanding on fractions, ratios, algebraic thinking, and problem-solving strategies.", "Mathematics", 6)
    for each in response.candidates[0].content.parts:
        print(each.text)

👉Exit virtual environment, in terminal run:

deactivate

Congratulations! You have now built three powerful tools for your planner agent: an API connector, a database connector, and a Google Search tool. These tools will enable the agent to access the information and capabilities it needs to create effective teaching plans.

7. Orchestrating with LangGraph

Now that we have built our individual tools, it's time to orchestrate them using LangGraph. This will allow us to create a more sophisticated "planner" agent that can intelligently decide which tools to use and when, based on the user's request.

LangGraph is a Python library designed to make it easier to build stateful, multi-actor applications using Large Language Models (LLMs). Think of it as a framework for orchestrating complex conversations and workflows involving LLMs, tools, and other agents.

Key concepts:

  • Graph Structure: LangGraph represents your application's logic as a directed graph. Each node in the graph represents a step in the process (e.g., a call to an LLM, a tool invocation, a conditional check). Edges define the flow of execution between nodes.
  • State: LangGraph manages the state of your application as it moves through the graph. This state can include variables like the user's input, the results of tool calls, intermediate outputs from LLMs, and any other information that needs to be preserved between steps.
  • Nodes: Each node represents a computation or interaction. They can be:
    • Tool Nodes: Use a tool (e.g., perform a web search, query a database)
    • Function Nodes: Execute a Python function.
  • Edges: Connect nodes, defining the flow of execution. They can be:
    • Direct Edges: A simple, unconditional flow from one node to another.
    • Conditional Edges: The flow depends on the outcome of a conditional node.
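Before wiring up LangGraph itself, these concepts can be sketched in plain Python (purely illustrative, not LangGraph's API): nodes are functions that transform a shared state, and a conditional edge chooses the next node after each step.

```python
# Toy state-graph executor: nodes mutate a shared state dict, and a
# router (conditional edge) decides which node runs next.
def gather(state):
    state["facts"].append(f"fact-{len(state['facts']) + 1}")
    return state

def answer(state):
    state["answer"] = " + ".join(state["facts"])
    return state

def router(state):
    # Conditional edge: keep looping to "gather" until we have 3 facts.
    return "gather" if len(state["facts"]) < 3 else "answer"

nodes = {"gather": gather, "answer": answer}
state = {"facts": [], "answer": None}

current = "gather"                 # START -> gather
while True:
    state = nodes[current](state)
    if current == "answer":        # "answer" is the terminal node (END)
        break
    current = router(state)

print(state["answer"])  # fact-1 + fact-2 + fact-3
```

LangGraph provides exactly this machinery, plus persistence (checkpointers), streaming, and prebuilt nodes like ToolNode, so you declare the graph instead of hand-rolling the loop.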


We will use LangGraph to implement the orchestration. Let's edit the aidemy.py file in the planner folder to define our LangGraph logic.

👉Append the following code to the end of aidemy.py :

tools = [get_curriculum, search_latest_resource, recommend_book]

def determine_tool(state: MessagesState):
    llm = ChatVertexAI(model_name="gemini-2.0-flash-001", location=get_next_region())
    sys_msg = SystemMessage(
        content=(
            f"""You are a helpful teaching assistant that helps gather all needed information.
                Your ultimate goal is to create a detailed 3-week teaching plan.
                You have access to tools that help you gather information.
                Based on the user request, decide which tool(s) are needed.
            """
        )
    )

    llm_with_tools = llm.bind_tools(tools)
    return {"messages": llm_with_tools.invoke([sys_msg] + state["messages"])}

This function is responsible for taking the current state of the conversation, providing the LLM with a system message, and then asking the LLM to generate a response. The LLM can either respond directly to the user or choose to use one of the available tools.

  • tools : This list represents the set of tools available to the agent. It contains the three tool functions we defined in the previous steps: get_curriculum , search_latest_resource , and recommend_book .
  • llm.bind_tools(tools) : "Binds" the tools list to the llm object. Binding tells the LLM that these tools are available and provides it with information about how to use them (e.g., the names of the tools, the parameters they accept, and what they do).


👉Append following code to the end of aidemy.py :

def prep_class(prep_needs):

    builder = StateGraph(MessagesState)
    builder.add_node("determine_tool", determine_tool)
    builder.add_node("tools", ToolNode(tools))

    builder.add_edge(START, "determine_tool")
    builder.add_conditional_edges("determine_tool", tools_condition)
    builder.add_edge("tools", "determine_tool")

    memory = MemorySaver()
    graph = builder.compile(checkpointer=memory)

    config = {"configurable": {"thread_id": "1"}}
    messages = graph.invoke({"messages": prep_needs}, config)
    print(messages)
    for m in messages['messages']:
        m.pretty_print()
    teaching_plan_result = messages["messages"][-1].content

    return teaching_plan_result

if __name__ == "__main__":
    prep_class("I'm doing a course for  year 5 on subject Mathematics in Geometry, , get school curriculum, and come up with few books recommendation plus  search latest resources on the internet base on the curriculum outcome. And come up with a 3 week teaching plan")

Explanation:

  • StateGraph(MessagesState) : Creates a StateGraph object. A StateGraph is a core concept in LangGraph. It represents the workflow of your agent as a graph, where each node in the graph represents a step in the process. Think of it as defining the blueprint for how the agent will reason and act.
  • Conditional Edge: Originating from the "determine_tool" node, tools_condition is a prebuilt LangGraph function that inspects the output of the determine_tool node and decides which edge to follow. Conditional edges allow the graph to branch based on the LLM's decision about which tool to use (or whether to respond to the user directly). This is where the agent's "intelligence" comes into play – it can dynamically adapt its behavior based on the situation.
  • Loop: Adds an edge to the graph that connects the "tools" node back to the "determine_tool" node. This creates a loop in the graph, allowing the agent to repeatedly use tools until it has gathered enough information to complete the task and provide a satisfactory answer. This loop is crucial for complex tasks that require multiple steps of reasoning and information gathering.
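Conceptually, tools_condition boils down to a check on the last message, roughly like the simplified stand-in below (dict-based messages for illustration; this is not LangGraph's actual implementation):

```python
# Simplified stand-in for langgraph.prebuilt.tools_condition: route to the
# "tools" node when the LLM's last message requested tool calls, otherwise end.
END = "__end__"

def tools_condition(state: dict) -> str:
    last_message = state["messages"][-1]
    if last_message.get("tool_calls"):
        return "tools"
    return END

# The LLM asked for a tool -> the graph executes the "tools" node.
asked = {"messages": [{"content": "", "tool_calls": [{"name": "get_curriculum"}]}]}
# The LLM answered directly -> the graph terminates.
done = {"messages": [{"content": "Here is your 3-week plan...", "tool_calls": []}]}

print(tools_condition(asked))  # tools
print(tools_condition(done))   # __end__
```

Because the "tools" node loops back to "determine_tool", this single check is what lets the agent keep calling tools until the LLM produces a plain answer.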

Now, let's test our planner agent to see how it orchestrates the different tools.

This code will run the prep_class function with a specific user input, simulating a request to create a teaching plan for 5th-grade Mathematics in Geometry, using the curriculum, book recommendations, and the latest internet resources.

If you've closed your terminal or the environment variables are no longer set, re-run the following commands

export BOOK_PROVIDER_URL=$(gcloud run services describe book-provider --region=us-central1 --project=$PROJECT_ID --format="value(status.url)")
export PROJECT_ID=$(gcloud config get project)
export INSTANCE_NAME="aidemy"
export REGION="us-central1"
export DB_USER="postgres"
export DB_PASS="1234qwer"
export DB_NAME="aidemy-db"

👉Run the code:

cd ~/aidemy-bootstrap/planner/
source env/bin/activate
pip install -r requirements.txt
python aidemy.py

Watch the log in the terminal. You should see evidence that the agent is calling all three tools (getting the school curriculum, getting book recommendations, and searching for the latest resources) before providing the final teaching plan. This demonstrates that the LangGraph orchestration is working correctly, and the agent is intelligently using all available tools to fulfill the user's request.

================================ Human Message =================================

I'm doing a course for  year 5 on subject Mathematics in Geometry, , get school curriculum, and come up with few books recommendation plus  search latest resources on the internet base on the curriculum outcome. And come up with a 3 week teaching plan
================================== Ai Message ==================================
Tool Calls:
  get_curriculum (xxx)
  Call ID: xxx
  Args:
    year: 5.0
    subject: Mathematics
================================= Tool Message =================================
Name: get_curriculum

Introduction to fractions, decimals, and percentages, along with foundational geometry and problem-solving techniques.
================================== Ai Message ==================================
Tool Calls:
  search_latest_resource (xxxx)
  Call ID: xxxx
  Args:
    year: 5.0
    search_text: Geometry
    curriculum: {"content": "Introduction to fractions, decimals, and percentages, along with foundational geometry and problem-solving techniques."}
    subject: Mathematics
================================= Tool Message =================================
Name: search_latest_resource

candidates=[Candidate(content=Content(parts=[Part(.....) automatic_function_calling_history=[] parsed=None
================================== Ai Message ==================================
Tool Calls:
  recommend_book (93b48189-4d69-4c09-a3bd-4e60cdc5f1c6)
  Call ID: 93b48189-4d69-4c09-a3bd-4e60cdc5f1c6
  Args:
    query: Mathematics Geometry Year 5
================================= Tool Message =================================
Name: recommend_book

[{.....}]

================================== Ai Message ==================================

Based on the curriculum outcome, here is a 3-week teaching plan for year 5 Mathematics Geometry:

**Week 1: Introduction to Shapes and Properties**
.........

Stop the script by pressing Ctrl+C .

👉(THIS STEP IS OPTIONAL) Replace the testing code with a different prompt that requires a different set of tools to be called:

if __name__ == "__main__":
    prep_class("I'm doing a course for  year 5 on subject Mathematics in Geometry, search latest resources on the internet base on the subject. And come up with a 3 week teaching plan")

If you've closed your terminal or the environment variables are no longer set, re-run the following commands

export BOOK_PROVIDER_URL=$(gcloud run services describe book-provider --region=us-central1 --project=$PROJECT_ID --format="value(status.url)")
export PROJECT_ID=$(gcloud config get project)
export INSTANCE_NAME="aidemy"
export REGION="us-central1"
export DB_USER="postgres"
export DB_PASS="1234qwer"
export DB_NAME="aidemy-db"

👉(THIS STEP IS OPTIONAL, do this ONLY IF you ran the previous step) Run the code again:

cd ~/aidemy-bootstrap/planner/
source env/bin/activate
python aidemy.py

What did you notice this time? Which tools did the agent call? You should see that the agent skips the recommend_book tool this time: the prompt never asks for book recommendations, and the LLM is smart enough not to call tools it doesn't need (it still fetches the curriculum, since the search tool depends on that detail).

================================ Human Message =================================

I'm doing a course for  year 5 on subject Mathematics in Geometry, search latest resources on the internet base on the subject. And come up with a 3 week teaching plan
================================== Ai Message ==================================
Tool Calls:
  get_curriculum (xxx)
  Call ID: xxx
  Args:
    year: 5.0
    subject: Mathematics
================================= Tool Message =================================
Name: get_curriculum

Introduction to fractions, decimals, and percentages, along with foundational geometry and problem-solving techniques.
================================== Ai Message ==================================
Tool Calls:
  search_latest_resource (xxx)
  Call ID: xxxx
  Args:
    year: 5.0
    subject: Mathematics
    curriculum: {"content": "Introduction to fractions, decimals, and percentages, along with foundational geometry and problem-solving techniques."}
    search_text: Geometry
================================= Tool Message =================================
Name: search_latest_resource

candidates=[Candidate(content=Content(parts=[Part(.......token_count=40, total_token_count=772) automatic_function_calling_history=[] parsed=None
================================== Ai Message ==================================

Based on the information provided, a 3-week teaching plan for Year 5 Mathematics focusing on Geometry could look like this:

**Week 1:  Introducing 2D Shapes**
........
* Use visuals, manipulatives, and real-world examples to make the learning experience engaging and relevant.

Stop the script by pressing Ctrl+C .

👉 Remove the testing code to keep your aidemy.py file clean (DO NOT SKIP THIS STEP!):

if __name__ == "__main__":
    prep_class("I'm doing a course for  year 5 on subject Mathematics in Geometry, search latest resources on the internet base on the subject. And come up with a 3 week teaching plan")

With our agent logic now defined, let's launch the Flask web application. This will provide a familiar form-based interface for teachers to interact with the agent. While chatbot interactions are common with LLMs, we're opting for a traditional form submit UI, as it may be more intuitive for many educators.
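The skeleton's app.py is already written for you, but the pattern is a standard Flask form handler, roughly like the sketch below (field names and the response string are illustrative; it assumes Flask is installed):

```python
from flask import Flask, request

app = Flask(__name__)

@app.route("/", methods=["GET", "POST"])
def index():
    if request.method == "POST":
        # Read the teacher's form fields (names here are illustrative).
        year = request.form.get("year", "")
        subject = request.form.get("subject", "")
        addon = request.form.get("addon_request", "")
        # In app.py this is where the planner agent would be invoked, e.g.:
        # plan = prep_class(f"Year {year} course on {subject}. {addon}")
        return f"Generating plan for Year {year} {subject}: {addon}"
    # On GET, render a simple form for the teacher to fill in.
    return (
        '<form method="post">'
        '<input name="year"><input name="subject">'
        '<input name="addon_request"><button>Plan</button></form>'
    )

# To serve locally: app.run(host="0.0.0.0", port=8080)
```

On POST, the real handler hands the form values to prep_class and renders the returned teaching plan instead of the placeholder string.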

If you've closed your terminal or the environment variables are no longer set, re-run the following commands

export PROJECT_ID=$(gcloud config get project)
export BOOK_PROVIDER_URL=$(gcloud run services describe book-provider --region=us-central1 --project=$PROJECT_ID --format="value(status.url)")
export INSTANCE_NAME="aidemy"
export REGION="us-central1"
export DB_USER="postgres"
export DB_PASS="1234qwer"
export DB_NAME="aidemy-db"

👉Now, start the Web UI.

cd ~/aidemy-bootstrap/planner/
source env/bin/activate
python app.py

Look for startup messages in the Cloud Shell terminal output. Flask usually prints messages indicating that it's running and on what port.

Running on http://127.0.0.1:8080
The application needs to keep running to serve requests.

👉From the "Web preview" menu, choose Preview on port 8080. Cloud Shell will open a new browser tab or window with the web preview of your application.

Web page

In the application interface, select 5 for Year, choose Mathematics as the subject, and type Geometry in the Add-on Request field.

Rather than staring blankly while waiting for the response, switch over to the Cloud Editor's terminal. You can observe the progress and any output or error messages generated by your function in the emulator's terminal. 😁

👉Stop the script by pressing Ctrl+C in the terminal.

👉Exit the virtual environment:

deactivate

8. Deploying planner agent to the cloud

Build and push image to registry

Overview

👉Time to deploy this to the cloud. In the terminal, create an artifacts repository to store the docker image we are going to build.

gcloud artifacts repositories create agent-repository \
    --repository-format=docker \
    --location=us-central1 \
    --description="My agent repository"

You should see Created repository [agent-repository].

👉Run the following command to build the Docker image.

cd ~/aidemy-bootstrap/planner/
export PROJECT_ID=$(gcloud config get project)
docker build -t gcr.io/${PROJECT_ID}/aidemy-planner .

👉We need to retag the image so that it's hosted in Artifact Registry instead of GCR and push the tagged image to Artifact Registry:

export PROJECT_ID=$(gcloud config get project)
docker tag gcr.io/${PROJECT_ID}/aidemy-planner us-central1-docker.pkg.dev/${PROJECT_ID}/agent-repository/aidemy-planner
docker push us-central1-docker.pkg.dev/${PROJECT_ID}/agent-repository/aidemy-planner

Once the push is complete, you can verify that the image is successfully stored in Artifact Registry. Navigate to the Artifact Registry in the Google Cloud Console. You should find the aidemy-planner image within the agent-repository repository. Aidemy planner image

Securing Database Credentials with Secret Manager

To securely manage and access database credentials, we'll use Google Cloud Secret Manager. This prevents hardcoding sensitive information in our application code and enhances security.

👉We'll create individual secrets for the database username, password, and database name. This approach allows us to manage each credential independently. In the terminal run:

gcloud secrets create db-user
printf "postgres" | gcloud secrets versions add db-user --data-file=-

gcloud secrets create db-pass
printf "1234qwer" | gcloud secrets versions add db-pass --data-file=-

gcloud secrets create db-name
printf "aidemy-db" | gcloud secrets versions add db-name --data-file=-

Using Secret Manager is an important step in securing your application and preventing accidental exposure of sensitive credentials. It follows security best practices for cloud deployments.
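Inside the container, these secrets arrive as ordinary environment variables (DB_USER, DB_PASS, DB_NAME, configured in the Cloud Run setup below). The sketch below shows one way application code might read them; `load_db_config` is a hypothetical helper, not part of the workshop code. Failing fast on a missing variable gives a clearer error than a cryptic failure deep inside the database driver.

```python
import os

# Hypothetical helper: read the database credentials that Cloud Run injects
# as environment variables backed by Secret Manager.
def load_db_config() -> dict:
    config = {}
    for var in ("DB_USER", "DB_PASS", "DB_NAME"):
        value = os.environ.get(var)
        if value is None:
            raise RuntimeError(f"Missing required environment variable: {var}")
        config[var] = value
    return config

# Simulate the container environment locally:
os.environ.update({"DB_USER": "postgres", "DB_PASS": "1234qwer", "DB_NAME": "aidemy-db"})
print(load_db_config()["DB_NAME"])  # aidemy-db
```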

Deploy to Cloud Run

Cloud Run is a fully managed serverless platform that allows you to deploy containerized applications quickly and easily. It abstracts away the infrastructure management, letting you focus on writing and deploying your code. We'll be deploying our planner as a Cloud Run service.

👉In the Google Cloud Console, navigate to " Cloud Run ". Click on DEPLOY CONTAINER and select SERVICE . Configure your Cloud Run service:

Cloud run

  1. Container image : Click "Select" in the URL field and choose the image you pushed to Artifact Registry (e.g., us-central1-docker.pkg.dev/YOUR_PROJECT_ID/agent-repository/aidemy-planner).
  2. Service name : aidemy-planner
  3. Region : Select the us-central1 region.
  4. Authentication : For the purpose of this workshop, you can allow "Allow unauthenticated invocations". For production, you'll likely want to restrict access.
  5. Container(s) tab (Expand the Containers, Network):
    • Setting tab:
      • Resources:
        • memory : 2GiB
    • Variables & Secrets tab:
      • Environment variables:
        • Add name: GOOGLE_CLOUD_PROJECT and value: <YOUR_PROJECT_ID>
        • Add name: BOOK_PROVIDER_URL , and set the value to your book-provider function URL, which you can determine using the following command in the terminal:
          gcloud run services describe book-provider \
              --region=us-central1 \
              --project=$PROJECT_ID \
              --format="value(status.url)"
      • Secrets exposed as environment variables:
        • Add name: DB_USER , secret: select db-user and version: latest
        • Add name: DB_PASS , secret: select db-pass and version: latest
        • Add name: DB_NAME , secret: select db-name and version: latest

Set secret

Leave the other settings at their defaults.

👉Click CREATE .

Cloud Run will deploy your service.

Once deployed, click on the service to open its detail page; you'll find the deployed URL at the top.

URL

In the application interface, select 7 for the Year, choose Mathematics as the subject, and enter Algebra in the Add-on Request field. This will provide the agent with the necessary context to generate a tailored lesson plan.

Congratulations! You've successfully created a teaching plan using our powerful AI agent. This demonstrates the potential of agents to significantly reduce workload and streamline tasks, ultimately improving efficiency and making life easier for educators.

9. Multi-Agent Systems

Now that we've successfully implemented the teaching plan creation tool, let's shift our focus to building the student portal. This portal will provide students with access to quizzes, audio recaps, and assignments related to their coursework. Given the scope of this functionality, we'll leverage the power of multi-agent systems to create a modular and scalable solution.

As we discussed earlier, instead of relying on a single agent to handle everything, a multi-agent system allows us to break down the workload into smaller, specialized tasks, each handled by a dedicated agent. This approach offers several key advantages:

Modularity and Maintainability : Instead of creating a single agent that does everything, build smaller, specialized agents with well-defined responsibilities. This modularity makes the system easier to understand, maintain, and debug. When a problem arises, you can isolate it to a specific agent, rather than having to sift through a massive codebase.

Scalability : Scaling a single, complex agent can be a bottleneck. With a multi-agent system, you can scale individual agents based on their specific needs. For example, if one agent is handling a high volume of requests, you can easily spin up more instances of that agent without affecting the rest of the system.

Team Specialization : Think of it like this: you wouldn't ask one engineer to build an entire application from scratch. Instead, you assemble a team of specialists, each with expertise in a particular area. Similarly, a multi-agent system allows you to leverage the strengths of different LLMs and tools, assigning them to agents that are best suited for specific tasks.

Parallel Development : Different teams can work on different agents concurrently, speeding up the development process. Since agents are independent, changes to one agent are less likely to impact other agents.

Event-Driven Architecture

To enable effective communication and coordination between these agents, we'll employ an event-driven architecture. This means that agents will react to "events" happening within the system.

Agents subscribe to specific event types (e.g., "teaching plan generated," "assignment created"). When an event occurs, the relevant agents are notified and can react accordingly. This decoupling promotes flexibility, scalability, and real-time responsiveness.
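The publish/subscribe pattern described above can be sketched in a few lines of plain Python. This in-memory version is purely illustrative (the workshop itself uses Cloud Pub/Sub); the point is that the publisher never needs to know which agents are listening.

```python
from collections import defaultdict

# event type -> list of handler callables
subscribers = defaultdict(list)

def subscribe(event_type, handler):
    """An agent registers interest in one event type."""
    subscribers[event_type].append(handler)

def publish(event_type, payload):
    """Notify every subscriber of this event type, whoever they are."""
    for handler in subscribers[event_type]:
        handler(payload)

# A "portal" agent reacts to new teaching plans without the planner knowing it exists:
received = []
subscribe("teaching_plan_generated", lambda plan: received.append(plan))
publish("teaching_plan_generated", "Week 1: Geometry basics")
print(received)  # ['Week 1: Geometry basics']
```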

Overview

Now, to kick things off, we need a way to broadcast these events. To do this, we will set up a Pub/Sub topic. Let's start by creating a topic called plan .

👉Go to Google Cloud Console pub/sub and click on the "Create Topic" button.

👉Configure the Topic with ID/name plan and uncheck Add a default subscription , leave rest as default and click Create .

The Pub/Sub page will refresh, and you should now see your newly created topic listed in the table. Create topic

Now, let's integrate the Pub/Sub event publishing functionality into our planner agent. We'll add a new tool that sends a "plan" event to the Pub/Sub topic we just created. This event will signal to other agents in the system (like those in the student portal) that a new teaching plan is available.

👉Go back to the Cloud Code Editor and open the app.py file located in the planner folder. We will be adding a function that publishes the event. Replace:

##ADD SEND PLAN EVENT FUNCTION HERE

with:

def send_plan_event(teaching_plan: str):
    """
    Send the teaching event to the topic called plan

    Args:
        teaching_plan: teaching plan
    """
    publisher = pubsub_v1.PublisherClient()
    print(f"-------------> Sending event to topic plan: {teaching_plan}")
    topic_path = publisher.topic_path(PROJECT_ID, "plan")

    message_data = {"teaching_plan": teaching_plan}
    data = json.dumps(message_data).encode("utf-8")

    future = publisher.publish(topic_path, data)

    return f"Published message ID: {future.result()}"

  • send_plan_event : This function takes the generated teaching plan as input, creates a Pub/Sub publisher client, constructs the topic path, converts the teaching plan into a JSON string, and publishes the message to the topic.
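Concretely, what travels over the "plan" topic is just a JSON object encoded as UTF-8 bytes, and a subscriber reverses the two steps. The round trip below is local only (no Pub/Sub client involved); it simply shows the serialization that send_plan_event performs, using a made-up sample plan.

```python
import json

teaching_plan = "Week 1: 2D shapes. Week 2: 3D shapes. Week 3: Symmetry."

# What the publisher sends as the message body:
data = json.dumps({"teaching_plan": teaching_plan}).encode("utf-8")

# ...Pub/Sub delivers `data` unchanged; a subscriber decodes it back:
decoded = json.loads(data.decode("utf-8"))
print(decoded["teaching_plan"] == teaching_plan)  # True
```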

In the same app.py file

👉Update the prompt to instruct the agent to send the teaching plan event to the Pub/Sub topic after generating the teaching plan. Replace

### ADD send_plan_event CALL

with the following:

send_plan_event(teaching_plan)

By adding the send_plan_event tool and modifying the prompt, we've enabled our planner agent to publish events to Pub/Sub, allowing other components of our system to react to the creation of new teaching plans. We will now have a functional multi-agent system in the following sections.

10. Empowering Students with On-Demand Quizzes

Imagine a learning environment where students have access to an endless supply of quizzes tailored to their specific learning plans. These quizzes provide immediate feedback, including answers and explanations, fostering a deeper understanding of the material. This is the potential we aim to unlock with our AI-powered quiz portal.

To bring this vision to life, we'll build a quiz generation component that can create multiple-choice questions based on the content of the teaching plan.

Overview

👉In the Cloud Code Editor's Explorer pane, navigate to the portal folder. Open the quiz.py file, then copy and paste the following code at the end of the file.

def generate_quiz_question(file_name: str, difficulty: str, region: str):
    """Generates a single multiple-choice quiz question using the LLM.

    ```json
    {
      "question": "The question itself",
      "options": ["Option A", "Option B", "Option C", "Option D"],
      "answer": "The correct answer letter (A, B, C, or D)"
    }
    ```
    """
    print(f"region: {region}")
    # Connect to resources needed from Google Cloud
    llm = VertexAI(model_name="gemini-1.5-pro", location=region)

    plan = None
    # Load the file using file_name and read its content into a string called plan
    with open(file_name, 'r') as f:
        plan = f.read()

    parser = JsonOutputParser(pydantic_object=QuizQuestion)

    instruction = f"You'll provide one question with difficulty level of {difficulty}, 4 options as multiple choices and provide the answers, the quiz needs to be related to the teaching plan {plan}"

    prompt = PromptTemplate(
        template="Generates a single multiple-choice quiz question\n {format_instructions}\n  {instruction}\n",
        input_variables=["instruction"],
        partial_variables={"format_instructions": parser.get_format_instructions()},
    )

    chain = prompt | llm | parser
    response = chain.invoke({"instruction": instruction})

    print(f"{response}")
    return response


Inside the agent, this creates a JSON output parser that's specifically designed to understand and structure the LLM's output. It uses the QuizQuestion model we defined earlier to ensure the parsed output conforms to the correct format (question, options, and answer).
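To see the contract the parser enforces, here is the target shape sketched with only the standard library (the real code validates against the QuizQuestion pydantic model instead, and the sample LLM reply below is invented for illustration). A well-formed reply parses into a plain dict with exactly the three documented keys:

```python
import json

# A hypothetical LLM reply that follows the documented JSON format:
llm_output = '''
{
  "question": "What is the sum of the interior angles of a triangle?",
  "options": ["A) 90", "B) 180", "C) 270", "D) 360"],
  "answer": "B"
}
'''

quiz = json.loads(llm_output)
# Check the contract: all three required keys, four options, answer is a letter.
assert set(quiz) == {"question", "options", "answer"}
assert len(quiz["options"]) == 4
assert quiz["answer"] in ("A", "B", "C", "D")
print(quiz["answer"])  # B
```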

👉Execute the following commands in the terminal to set up a virtual environment, install dependencies, and start the agent:

cd ~/aidemy-bootstrap/portal/
python -m venv env
source env/bin/activate
pip install -r requirements.txt
python app.py

Use the Cloud Shell's web preview feature to access the running application. Click on the "Quizzes" link, either in the top navigation bar or from the card on the index page. You should see three randomly generated quizzes displayed for the student. These quizzes are based on the teaching plan and demonstrate the power of our AI-powered quiz generation system.

Quizzes

👉To stop the locally running process, press Ctrl+C in the terminal.

Gemini 2 Thinking for Explanations

Okay, so we've got quizzes, which is a great start! But what if students get something wrong? That's where the real learning happens, right? If we can explain why their answer was off and how to get to the correct one, they're way more likely to remember it. Plus, it helps clear up any confusion and boost their confidence.

That's why we're going to bring in the big guns: Gemini 2's "thinking" model! Think of it like giving the AI a little extra time to think things through before explaining. It lets it give more detailed and better feedback.

We want to see if it can help students by assisting, answering and explaining in detail. To test it out, we'll start with a notoriously tricky subject, Calculus.

Overview

👉First, head over to the Cloud Code Editor. In answer.py inside the portal folder, replace

def answer_thinking(question, options, user_response, answer, region):
    return ""

with following code snippet:

def answer_thinking(question, options, user_response, answer, region):
    try:
        llm = VertexAI(model_name="gemini-2.0-flash-001", location=region)

        input_msg = HumanMessage(content=[f"Here the question{question}, here are the available options {options}, this student's answer {user_response}, whereas the correct answer is {answer}"])
        prompt_template = ChatPromptTemplate.from_messages(
            [
                SystemMessage(
                    content=(
                        "You are a helpful teacher trying to teach the student on question, you were given the question and a set of multiple choices "
                        "what's the correct answer. use friendly tone"
                    )
                ),
                input_msg,
            ]
        )

        prompt = prompt_template.format()

        response = llm.invoke(prompt)
        print(f"response: {response}")

        return response
    except Exception as e:
        print(f"Error sending message to chatbot: {e}")  # Log this error too!
        return f"Unable to process your request at this time. Due to the following reason: {str(e)}"


if __name__ == "__main__":
    question = "Evaluate the limit: lim (x→0) [(sin(5x) - 5x) / x^3]"
    options = ["A) -125/6", "B) -5/3 ", "C) -25/3", "D) -5/6"]
    user_response = "B"
    answer = "A"
    region = "us-central1"
    result = answer_thinking(question, options, user_response, answer, region)

This is a very simple LangChain app that initializes the Gemini 2 Flash model and instructs it to act as a helpful teacher providing explanations.

👉Execute the following command in the terminal:

cd ~/aidemy-bootstrap/portal/
source env/bin/activate
python answer.py

You should see output similar to the example below; the current model may not provide as thorough an explanation.

Okay, I see the question and the choices. The question is to evaluate the limit:

lim (x→0) [(sin(5x) - 5x) / x^3]

You chose option B, which is -5/3, but the correct answer is A, which is -125/6.

It looks like you might have missed a step or made a small error in your calculations. This type of limit often involves using L'Hôpital's Rule or Taylor series expansion. Since we have the form 0/0, L'Hôpital's Rule is a good way to go! You need to apply it multiple times. Alternatively, you can use the Taylor series expansion of sin(x) which is:
sin(x) = x - x^3/3! + x^5/5! - ...
So, sin(5x) = 5x - (5x)^3/3! + (5x)^5/5! - ...
Then,  (sin(5x) - 5x) = - (5x)^3/3! + (5x)^5/5! - ...
Finally, (sin(5x) - 5x) / x^3 = - 5^3/3! + (5^5 * x^2)/5! - ...
Taking the limit as x approaches 0, we get -125/6.

Keep practicing, you'll get there!

In the answer.py file, change the model_name in the answer_thinking function from gemini-2.0-flash-001 to gemini-2.0-flash-thinking-exp-01-21.

This switches to an LLM that reasons more deeply, which helps it generate better explanations.

👉Run to test the new thinking model:

cd ~/aidemy-bootstrap/portal/
source env/bin/activate
python answer.py

Here is an example of the response from the thinking model that is much more thorough and detailed, providing a step-by-step explanation of how to solve the calculus problem. This highlights the power of "thinking" models in generating high-quality explanations. You should see output similar to this:

Hey there! Let's take a look at this limit problem together. You were asked to evaluate:

lim (x→0) [(sin(5x) - 5x) / x^3]

and you picked option B, -5/3, but the correct answer is actually A, -125/6. Let's figure out why!

It's a tricky one because if we directly substitute x=0, we get (sin(0) - 0) / 0^3 = (0 - 0) / 0 = 0/0, which is an indeterminate form. This tells us we need to use a more advanced technique like L'Hopital's Rule or Taylor series expansion.

Let's use the Taylor series expansion for sin(y) around y=0. Do you remember it?  It looks like this:

sin(y) = y - y^3/3! + y^5/5! - ...
where 3! (3 factorial) is 3 × 2 × 1 = 6, 5! is 5 × 4 × 3 × 2 × 1 = 120, and so on.

In our problem, we have sin(5x), so we can substitute y = 5x into the Taylor series:

sin(5x) = (5x) - (5x)^3/3! + (5x)^5/5! - ...
sin(5x) = 5x - (125x^3)/6 + (3125x^5)/120 - ...

Now let's plug this back into our limit expression:

[(sin(5x) - 5x) / x^3] =  [ (5x - (125x^3)/6 + (3125x^5)/120 - ...) - 5x ] / x^3
Notice that the '5x' and '-5x' cancel out!  So we are left with:
= [ - (125x^3)/6 + (3125x^5)/120 - ... ] / x^3
Now, we can divide every term in the numerator by x^3:
= -125/6 + (3125x^2)/120 - ...

Finally, let's take the limit as x approaches 0.  As x gets closer and closer to zero, terms with x^2 and higher powers will become very, very small and approach zero.  So, we are left with:
lim (x→0) [ -125/6 + (3125x^2)/120 - ... ] = -125/6

Therefore, the correct answer is indeed **A) -125/6**.

It seems like your answer B, -5/3, might have come from perhaps missing a factor somewhere during calculation or maybe using an incorrect simplification. Double-check your steps when you were trying to solve it!

Don't worry, these limit problems can be a bit tricky sometimes! Keep practicing and you'll get the hang of it.  Let me know if you want to go through another similar example or if you have any more questions! 😊


Now that we have confirmed it works, let's use the portal.

👉 REMOVE the following test code from answer.py :

if __name__ == "__main__":
    question = "Evaluate the limit: lim (x→0) [(sin(5x) - 5x) / x^3]"
    options = ["A) -125/6", "B) -5/3 ", "C) -25/3", "D) -5/6"]
    user_response = "B"
    answer = "A"
    region = "us-central1"
    result = answer_thinking(question, options, user_response, answer, region)

👉Execute the following commands in the terminal to set up a virtual environment, install dependencies, and start the agent:

cd ~/aidemy-bootstrap/portal/
source env/bin/activate
python app.py

👉Use the Cloud Shell's web preview feature to access the running application. Click on the "Quizzes" link, answer all the quizzes (make sure you get at least one answer wrong), and click Submit.

thinking answers

Rather than staring blankly while waiting for the response, switch over to the Cloud Editor's terminal. You can observe the progress and any output or error messages generated by your function in the emulator's terminal. 😁

To stop the locally running process, press Ctrl+C in the terminal.

11. OPTIONAL: Orchestrating the Agents with Eventarc

So far, the student portal has been generating quizzes based on a default set of teaching plans. That's helpful, but it means our planner agent and portal's quiz agent aren't really talking to each other. Remember how we added that feature where the planner agent publishes its newly generated teaching plans to a Pub/Sub topic? Now it's time to connect that to our portal agent!

Overview

We want the portal to automatically update its quiz content whenever a new teaching plan is generated. To do that, we'll create an endpoint in the portal that can receive these new plans.

👉In the Cloud Code Editor's Explorer pane, navigate to the portal folder. Open the app.py file for editing. Add the following code between the ## Add your code here markers:

## Add your code here

@app.route('/new_teaching_plan', methods=['POST'])
def new_teaching_plan():
    try:
        # Get data from Pub/Sub message delivered via Eventarc
        envelope = request.get_json()
        if not envelope:
            return jsonify({'error': 'No Pub/Sub message received'}), 400

        if not isinstance(envelope, dict) or 'message' not in envelope:
            return jsonify({'error': 'Invalid Pub/Sub message format'}), 400

        pubsub_message = envelope['message']
        print(f"data: {pubsub_message['data']}")

        data = pubsub_message['data']
        data_str = base64.b64decode(data).decode('utf-8')
        data = json.loads(data_str)

        teaching_plan = data['teaching_plan']

        print(f"File content: {teaching_plan}")

        with open("teaching_plan.txt", "w") as f:
            f.write(teaching_plan)

        print(f"Teaching plan saved to local file: teaching_plan.txt")

        return jsonify({'message': 'File processed successfully'})

    except Exception as e:
        print(f"Error processing file: {e}")
        return jsonify({'error': 'Error processing file'}), 500
## Add your code here

Rebuilding and Deploying to Cloud Run

You'll need to update and redeploy both our planner and portal agents to Cloud Run. This ensures they have the latest code and are configured to communicate via events.

Deployment overview

👉First we'll rebuild and push the planner agent image, back in the terminal run:

cd ~/aidemy-bootstrap/planner/
export PROJECT_ID=$(gcloud config get project)
docker build -t gcr.io/${PROJECT_ID}/aidemy-planner .
export PROJECT_ID=$(gcloud config get project)
docker tag gcr.io/${PROJECT_ID}/aidemy-planner us-central1-docker.pkg.dev/${PROJECT_ID}/agent-repository/aidemy-planner
docker push us-central1-docker.pkg.dev/${PROJECT_ID}/agent-repository/aidemy-planner

👉We'll do the same, build and push the portal agent image:

cd ~/aidemy-bootstrap/portal/
export PROJECT_ID=$(gcloud config get project)
docker build -t gcr.io/${PROJECT_ID}/aidemy-portal .
export PROJECT_ID=$(gcloud config get project)
docker tag gcr.io/${PROJECT_ID}/aidemy-portal us-central1-docker.pkg.dev/${PROJECT_ID}/agent-repository/aidemy-portal
docker push us-central1-docker.pkg.dev/${PROJECT_ID}/agent-repository/aidemy-portal

In Artifact Registry , you should see both the aidemy-planner and aidemy-portal container images listed.

Container Repo

👉Back in the terminal, run this to update the Cloud Run image for the planner agent:

export PROJECT_ID=$(gcloud config get project)
gcloud run services update aidemy-planner \
    --region=us-central1 \
    --image=us-central1-docker.pkg.dev/${PROJECT_ID}/agent-repository/aidemy-planner:latest

You should see output similar to this:

OK Deploying... Done.                                                                                                                                                     
 
OK Creating Revision...                                                                                                                                                
 
OK Routing traffic...                                                                                                                                                  
Done.                                                                                                                                                                    
Service [aidemy-planner] revision [aidemy-planner-xxxxx] has been deployed and is serving 100 percent of traffic.
Service URL: https://aidemy-planner-xxx.us-central1.run.app

Make note of the Service URL; this is the link to your deployed planner agent. If you need to later determine the planner agent Service URL, use this command:

gcloud run services describe aidemy-planner \
    --region=us-central1 \
    --format 'value(status.url)'

👉Run this to create the Cloud Run instance for the portal agent

export PROJECT_ID=$(gcloud config get project)
gcloud run deploy aidemy-portal \
    --image=us-central1-docker.pkg.dev/${PROJECT_ID}/agent-repository/aidemy-portal:latest \
    --region=us-central1 \
    --platform=managed \
    --allow-unauthenticated \
    --memory=2Gi \
    --cpu=2 \
    --set-env-vars=GOOGLE_CLOUD_PROJECT=${PROJECT_ID}

You should see output similar to this:

Deploying container to Cloud Run service [aidemy-portal] in project [xxxx] region [us-central1]
OK Deploying new service... Done.                                                                                                                                        
 
OK Creating Revision...                                                                                                                                                
 
OK Routing traffic...                                                                                                                                                  
 
OK Setting IAM Policy...                                                                                                                                                
Done.                                                                                                                                                                    
Service [aidemy-portal] revision [aidemy-portal-xxxx] has been deployed and is serving 100 percent of traffic.
Service URL: https://aidemy-portal-xxxx.us-central1.run.app

Make note of the Service URL; this is the link to your deployed student portal. If you need to later determine the student portal Service URL, use this command:

gcloud run services describe aidemy-portal \
    --region=us-central1 \
    --format 'value(status.url)'

Creating the Eventarc Trigger

But here's the big question: how does this endpoint get notified when there's a fresh plan waiting in the Pub/Sub topic? That's where Eventarc swoops in to save the day!

Eventarc acts as a bridge, listening for specific events (like a new message arriving in our Pub/Sub topic) and automatically triggering actions in response. In our case, it will detect when a new teaching plan is published and then send a signal to our portal's endpoint, letting it know that it's time to update.

With Eventarc handling the event-driven communication, we can seamlessly connect our planner agent and portal agent, creating a truly dynamic and responsive learning system. It's like having a smart messenger that automatically delivers the latest lesson plans to the right place!
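When Eventarc forwards a Pub/Sub message to the portal's /new_teaching_plan endpoint, the message arrives wrapped in a JSON envelope whose data field is base64-encoded, which is exactly what the endpoint decodes. The sketch below builds a sample envelope locally and unwraps it; the messageId and subscription values are illustrative placeholders.

```python
import base64
import json

# Build a sample of the envelope Eventarc POSTs to the endpoint:
plan_event = json.dumps({"teaching_plan": "Week 1: Atoms and molecules"})
envelope = {
    "message": {
        "data": base64.b64encode(plan_event.encode("utf-8")).decode("ascii"),
        "messageId": "1234567890",          # illustrative
    },
    "subscription": "projects/PROJECT_ID/subscriptions/eventarc-sub",  # illustrative
}

# The receiving endpoint reverses the wrapping: base64-decode, then JSON-parse.
data_str = base64.b64decode(envelope["message"]["data"]).decode("utf-8")
teaching_plan = json.loads(data_str)["teaching_plan"]
print(teaching_plan)  # Week 1: Atoms and molecules
```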

👉In the console head to the Eventarc .

👉Click the "+ CREATE TRIGGER" button.

Configure the Trigger (Basics):

  • Trigger name: plan-topic-trigger
  • Trigger type: Google sources
  • Event provider: Cloud Pub/Sub
  • Event type: google.cloud.pubsub.topic.v1.messagePublished
  • Cloud Pub/Sub Topic: select projects/PROJECT_ID/topics/plan
  • Region: us-central1 .
  • Service account:
    • GRANT the service account with role roles/iam.serviceAccountTokenCreator
    • Use the default value: Default compute service account
  • Event destination: Cloud Run
  • Cloud Run service: aidemy-portal
  • If you see the error message Permission denied on 'locations/me-central2' (or it may not exist), ignore it.
  • Service URL path: /new_teaching_plan

Click CREATE.

The Eventarc Triggers page will refresh, and you should now see your newly created trigger listed in the table.

👉Now, access the planner agent using its Service URL to request a new teaching plan.

Run this in the terminal to determine the planner agent Service URL:

gcloud run services list --platform=managed --region=us-central1 --format='value(URL)' | grep planner

This time try Year 5 , Subject Science , and Add-on Request atoms .

Then, wait a minute or two. (This delay was introduced due to billing limitations of this lab; under normal conditions there shouldn't be a delay.)

Finally, access the student portal using its Service URL.

Run this in the terminal to determine the student portal agent Service URL:

gcloud run services list --platform=managed --region=us-central1 --format='value(URL)' | grep portal

You should see that the quizzes have been updated and now align with the new teaching plan you just generated! This demonstrates the successful integration of Eventarc in the Aidemy system!

Aidemy-celebrate

تبریک می گویم! You've successfully built a multi-agent system on Google Cloud, leveraging event-driven architecture for enhanced scalability and flexibility! You've laid a solid foundation, but there's even more to explore. To delve deeper into the real benefits of this architecture, discover the power of Gemini 2's multimodal Live API, and learn how to implement single-path orchestration with LangGraph, feel free to continue on to the next two chapters.

12. OPTIONAL: Audio Recaps with Gemini

Gemini can understand and process information from various sources, like text, images, and even audio, opening up a whole new range of possibilities for learning and content creation. Gemini's ability to "see," "hear," and "read" truly unlocks creative and engaging user experiences.

Beyond just creating visuals or text, another important step in learning is effective summarization and recap. Think about it: how often do you remember a catchy song lyric more easily than something you read in a textbook? Sound can be incredibly memorable! That's why we're going to leverage Gemini's multimodal capabilities to generate audio recaps of our teaching plans. This will provide students with a convenient and engaging way to review material, potentially boosting retention and comprehension through the power of auditory learning.

Live API Overview

We need a place to store the generated audio files. Cloud Storage provides a scalable and reliable solution.

👉Head to the Storage in the console. Click on "Buckets" in the left-hand menu. Click on the "+ CREATE" button at the top.

👉Configure your new bucket:

  • bucket name: aidemy-recap-UNIQUE_NAME .
    • IMPORTANT : Ensure you define a unique bucket name that begins with aidemy-recap- . This unique prefix is crucial for avoiding naming conflicts when creating your Cloud Storage bucket.
  • region: us-central1 .
  • Storage class: "Standard". Standard is suitable for frequently accessed data.
  • Access control: Leave the default "Uniform" access control selected. This provides consistent, bucket-level access control.
  • Advanced options: For this workshop, the default settings are usually sufficient.

Click the CREATE button to create your bucket.

  • You may see a pop-up about public access prevention. Leave the "Enforce public access prevention on this bucket" box checked and click Confirm.

You will now see your newly created bucket in the Buckets list. Remember your bucket name, you'll need it later.
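Cloud Storage bucket names are globally unique across all of Google Cloud, which is why the aidemy-recap- prefix alone isn't enough. One hedged sketch of deriving the unique suffix from your project ID (the demo-project fallback is only for illustration when gcloud isn't configured):

```shell
# Derive a globally unique bucket name from the project ID.
# "demo-project" is a hypothetical fallback, not part of the lab setup.
PROJECT_ID=$(gcloud config get project 2>/dev/null || echo demo-project)
echo "aidemy-recap-${PROJECT_ID}"
```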

👉In the Cloud Code Editor's terminal, run the following commands to grant the service account access to the bucket:

export COURSE_BUCKET_NAME=$(gcloud storage buckets list --format="value(name)" | grep aidemy-recap)
export SERVICE_ACCOUNT_NAME=$(gcloud compute project-info describe --format="value(defaultServiceAccount)")
gcloud storage buckets add-iam-policy-binding gs://$COURSE_BUCKET_NAME \
    --member "serviceAccount:$SERVICE_ACCOUNT_NAME" \
    --role "roles/storage.objectViewer"

gcloud storage buckets add-iam-policy-binding gs://$COURSE_BUCKET_NAME \
    --member "serviceAccount:$SERVICE_ACCOUNT_NAME" \
    --role "roles/storage.objectCreator"

👉In the Cloud Code Editor, open audio.py inside the courses folder. Paste the following code to the end of the file:

config = LiveConnectConfig(
    response_modalities=["AUDIO"],
    speech_config=SpeechConfig(
        voice_config=VoiceConfig(
            prebuilt_voice_config=PrebuiltVoiceConfig(
                voice_name="Charon",
            )
        )
    ),
)

async def process_weeks(teaching_plan: str):
    region = "us-east5"  # To work around onRamp quota limits
    client = genai.Client(vertexai=True, project=PROJECT_ID, location=region)

    clientAudio = genai.Client(vertexai=True, project=PROJECT_ID, location="us-central1")
    async with clientAudio.aio.live.connect(
        model=MODEL_ID,
        config=config,
    ) as session:
        for week in range(1, 4):
            response = client.models.generate_content(
                model="gemini-2.0-flash-001",
                contents=f"Given the following teaching plan: {teaching_plan}, extract the content plan for week {week}. Return just the plan, nothing else."
            )

            prompt = f"""
                Assume you are the instructor.
                Prepare a concise and engaging recap of the key concepts and topics covered.
                This recap should be suitable for generating a short audio summary for students.
                Focus on the most important learnings and takeaways, and frame it as a direct address to the students.
                Avoid overly formal language and aim for a conversational tone, tell a few jokes.

                Teaching plan: {response.text} """
            print(f"prompt --->{prompt}")

            await session.send(input=prompt, end_of_turn=True)
            with open(f"temp_audio_week_{week}.raw", "wb") as temp_file:
                async for message in session.receive():
                    if message.server_content.model_turn:
                        for part in message.server_content.model_turn.parts:
                            if part.inline_data:
                                temp_file.write(part.inline_data.data)

            # Convert the raw PCM stream (16-bit, 24 kHz, mono) into a WAV file
            data, samplerate = sf.read(f"temp_audio_week_{week}.raw", channels=1, samplerate=24000, subtype='PCM_16', format='RAW')
            sf.write(f"course-week-{week}.wav", data, samplerate)

            # Upload the recap to the Cloud Storage bucket
            storage_client = storage.Client()
            bucket = storage_client.bucket(BUCKET_NAME)
            blob = bucket.blob(f"course-week-{week}.wav")
            blob.upload_from_filename(f"course-week-{week}.wav")
            print(f"Audio saved to GCS: gs://{BUCKET_NAME}/course-week-{week}.wav")
    await session.close()


def breakup_sessions(teaching_plan: str):
    asyncio.run(process_weeks(teaching_plan))
  • Streaming Connection: First, a persistent connection is established with the Live API endpoint. Unlike a standard API call where you send a request and get a response, this connection remains open for a continuous exchange of data.
  • Multimodal Configuration: Use the configuration to specify the type of output you want (in this case, audio), and even the parameters to use (e.g., voice selection, audio encoding).
  • Asynchronous Processing: This API works asynchronously, meaning it doesn't block the main thread while waiting for the audio generation to complete. By processing data in real-time and sending the output in chunks, it provides a near-instantaneous experience.
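The chunked, non-blocking receive loop above can be illustrated in isolation with plain asyncio. The `audio_chunks` generator below is a stand-in for the Live API session, not the real client:

```python
import asyncio

async def audio_chunks():
    # Stand-in for session.receive(): yields audio data in small chunks.
    for _ in range(3):
        await asyncio.sleep(0)  # yields control, like a network read would
        yield b"\x00\x01" * 2   # 4 bytes per chunk

async def consume():
    # Same pattern as the recap loop: append chunks as they arrive
    # instead of blocking until the full audio is ready.
    buf = bytearray()
    async for chunk in audio_chunks():
        buf.extend(chunk)
    return bytes(buf)

audio = asyncio.run(consume())
print(len(audio))  # 12 bytes collected across three chunks
```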

Now, the key question is: when should this audio generation process run? Ideally, we want the audio recaps to be available as soon as a new teaching plan is created. Since we've already implemented an event-driven architecture by publishing the teaching plan to a Pub/Sub topic, we can simply subscribe to that topic.

However, we don't generate new teaching plans very often. It wouldn't be efficient to have an agent constantly running and waiting for new plans. That's why it makes perfect sense to deploy this audio generation logic as a Cloud Run Function.

By deploying it as a function, it remains dormant until a new message is published to the Pub/Sub topic. When that happens, it automatically triggers the function, which generates the audio recaps and stores them in our bucket.
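Before wiring up the real function, it helps to see how a teaching plan travels through Pub/Sub: the publisher's JSON payload arrives base64-encoded in the message's `data` field, and the subscriber decodes it back. A minimal round-trip sketch:

```python
import base64
import json

# Encode the way a publisher would: JSON -> bytes -> base64 string.
plan = {"teaching_plan": "Week 1: 2D Shapes and Angles"}
encoded = base64.b64encode(json.dumps(plan).encode("utf-8")).decode("utf-8")

# Decode the way the subscribing function does: base64 -> bytes -> JSON.
decoded = json.loads(base64.b64decode(encoded).decode("utf-8"))
print(decoded["teaching_plan"])  # Week 1: 2D Shapes and Angles
```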

👉Open main.py under the courses folder. This file defines the Cloud Run Function that will be triggered when a new teaching plan is available: it receives the plan and initiates the audio recap generation. Add the following code snippet to the end of the file.

@functions_framework.cloud_event
def process_teaching_plan(cloud_event):
    print(f"CloudEvent received: {cloud_event.data}")
    time.sleep(60)
    try:
        if isinstance(cloud_event.data.get('message', {}).get('data'), str):  # Check for base64 encoding
            data = json.loads(base64.b64decode(cloud_event.data['message']['data']).decode('utf-8'))
            teaching_plan = data.get('teaching_plan')  # Get the teaching plan
        elif 'teaching_plan' in cloud_event.data:  # No base64
            teaching_plan = cloud_event.data["teaching_plan"]
        else:
            raise KeyError("teaching_plan not found")  # Handle error explicitly

        # With the teaching plan extracted from the cloud event, start the audio recap generation
        breakup_sessions(teaching_plan)

        return "Teaching plan processed successfully", 200

    except (json.JSONDecodeError, AttributeError, KeyError) as e:
        print(f"Error decoding CloudEvent data: {e} - Data: {cloud_event.data}")
        return "Error processing event", 500

    except Exception as e:
        print(f"Error processing teaching plan: {e}")
        return "Error processing teaching plan", 500

@functions_framework.cloud_event : This decorator marks the function as a Cloud Run Function that will be triggered by CloudEvents.

Local Testing

👉We'll run this in a virtual environment and install the necessary Python libraries for the Cloud Run function.

cd ~/aidemy-bootstrap/courses
export COURSE_BUCKET_NAME=$(gcloud storage buckets list --format="value(name)" | grep aidemy-recap)
python -m venv env
source env/bin/activate
pip install -r requirements.txt

👉The Cloud Run Function emulator allows us to test our function locally before deploying it to Google Cloud. Start a local emulator by running:

functions-framework --target process_teaching_plan --signature-type=cloudevent --source main.py

👉While the emulator is running, you can send test CloudEvents to the emulator to simulate a new teaching plan being published. In a new terminal:

Two terminal

👉Run:

curl -X POST \
  http://localhost:8080/ \
  -H "Content-Type: application/json" \
  -H "ce-id: event-id-01" \
  -H "ce-source: planner-agent" \
  -H "ce-specversion: 1.0" \
  -H "ce-type: google.cloud.pubsub.topic.v1.messagePublished" \
  -d '{
    "message": {
      "data": "eyJ0ZWFjaGluZ19wbGFuIjogIldlZWsgMTogMkQgU2hhcGVzIGFuZCBBbmdsZXMgLSBEYXkgMTogUmV2aWV3IG9mIGJhc2ljIDJEIHNoYXBlcyAoc3F1YXJlcywgcmVjdGFuZ2xlcywgdHJpYW5nbGVzLCBjaXJjbGVzKS4gRGF5IDI6IEV4cGxvcmluZyBkaWZmZXJlbnQgdHlwZXMgb2YgdHJpYW5nbGVzIChlcXVpbGF0ZXJhbCwgaXNvc2NlbGVzLCBzY2FsZW5lLCByaWdodC1hbmdsZWQpLiBEYXkgMzogRXhwbG9yaW5nIHF1YWRyaWxhdGVyYWxzIChzcXVhcmUsIHJlY3RhbmdsZSwgcGFyYWxsZWxvZ3JhbSwgcmhvbWJ1cywgdHJhcGV6aXVtKS4gRGF5IDQ6IEludHJvZHVjdGlvbiB0byBhbmdsZXM6IHJpZ2h0IGFuZ2xlcywgYWN1dGUgYW5nbGVzLCBhbmQgb2J0dXNlIGFuZ2xlcy4gRGF5IDU6IE1lYXN1cmluZyBhbmdsZXMgdXNpbmcgYSBwcm90cmFjdG9yLiBXZWVrIDI6IDNEIFNoYXBlcyBhbmQgU3ltbWV0cnkgLSBEYXkgNjogSW50cm9kdWN0aW9uIHRvIDNEIHNoYXBlczogY3ViZXMsIGN1Ym9pZHMsIHNwaGVyZXMsIGN5bGluZGVycywgY29uZXMsIGFuZCBweXJhbWlkcy4gRGF5IDc6IERlc2NyaWJpbmcgM0Qgc2hhcGVzIHVzaW5nIGZhY2VzLCBlZGdlcywgYW5kIHZlcnRpY2VzLiBEYXkgODogUmVsYXRpbmcgMkQgc2hhcGVzIHRvIDNEIHNoYXBlcy4gRGF5IDk6IElkZW50aWZ5aW5nIGxpbmVzIG9mIHN5bW1ldHJ5IGluIDJEIHNoYXBlcy4gRGF5IDEwOiBDb21wbGV0aW5nIHN5bW1ldHJpY2FsIGZpZ3VyZXMuIFdlZWsgMzogUG9zaXRpb24sIERpcmVjdGlvbiwgYW5kIFByb2JsZW0gU29sdmluZyAtIERheSAxMTogRGVzY3JpYmluZyBwb3NpdGlvbiB1c2luZyBjb29yZGluYXRlcyBpbiB0aGUgZmlyc3QgcXVhZHJhbnQuIERheSAxMjogUGxvdHRpbmcgY29vcmRpbmF0ZXMgdG8gZHJhdyBzaGFwZXMuIERheSAxMzogVW5kZXJzdGFuZGluZyB0cmFuc2xhdGlvbiAoc2xpZGluZyBhIHNoYXBlKS4gRGF5IDE0OiBVbmRlcnN0YW5kaW5nIHJlZmxlY3Rpb24gKGZsaXBwaW5nIGEgc2hhcGUpLiBEYXkgMTU6IFByb2JsZW0tc29sdmluZyBhY3Rpdml0aWVzIGludm9sdmluZyBwZXJpbWV0ZXIsIGFyZWEsIGFuZCBtaXNzaW5nIGFuZ2xlcy4ifQ=="
    }
  }'

Rather than staring blankly while waiting for the response, switch over to the other Cloud Shell terminal. You can observe the progress and any output or error messages generated by your function in the emulator's terminal. 😁

Back in the second terminal, you should see the request return OK.

👉Verify the data in the bucket: go to Cloud Storage, select the "Buckets" tab, and open your aidemy-recap-UNIQUE_NAME bucket.

Bucket

👉In the terminal running the emulator, press ctrl+c to exit. Then close the second terminal, and run deactivate to exit the virtual environment.

deactivate

Deploying to Google Cloud

Deployment Overview

👉After testing locally, it's time to deploy the course agent to Google Cloud. In the terminal, run these commands:

cd ~/aidemy-bootstrap/courses
export COURSE_BUCKET_NAME=$(gcloud storage buckets list --format="value(name)" | grep aidemy-recap)
gcloud functions deploy courses-agent \
    --region=us-central1 \
    --gen2 \
    --source=. \
    --runtime=python312 \
    --trigger-topic=plan \
    --entry-point=process_teaching_plan \
    --set-env-vars=GOOGLE_CLOUD_PROJECT=${PROJECT_ID},COURSE_BUCKET_NAME=$COURSE_BUCKET_NAME

Verify the deployment by going to Cloud Run in the Google Cloud Console. You should see a new service named courses-agent listed.

Cloud Run List

To check the trigger configuration, click on the courses-agent service to view its details. Go to the "TRIGGERS" tab.

You should see a trigger configured to listen for messages published to the plan topic.

Cloud Run Trigger

Finally, let's see it running end to end.

👉We need to configure the portal agent so it knows where to find the generated audio files. Run in the terminal:

export COURSE_BUCKET_NAME=$(gcloud storage buckets list --format="value(name)" | grep aidemy-recap)
export PROJECT_ID=$(gcloud config get project)
gcloud run services update aidemy-portal \
    --region=us-central1 \
    --set-env-vars=GOOGLE_CLOUD_PROJECT=${PROJECT_ID},COURSE_BUCKET_NAME=$COURSE_BUCKET_NAME

👉Try generating a new teaching plan using the planner agent web page. It might take a few minutes to start; don't be alarmed, it's a serverless service.

To access the planner agent, get its Service URL by running this in the terminal:

gcloud run services list \
    --platform=managed \
    --region=us-central1 \
    --format='value(URL)' | grep planner

After generating the new plan, wait 2-3 minutes for the audio to be generated; this will take a few extra minutes due to billing limitations on this lab account.

You can monitor whether the courses-agent function has received the teaching plan by checking the function's "TRIGGERS" tab. Refresh the page periodically; you should eventually see that the function has been invoked. If the function hasn't been invoked after more than 2 minutes, you can try generating the teaching plan again. However, avoid generating plans repeatedly in quick succession, as each generated plan will be sequentially consumed and processed by the agent, potentially creating a backlog.

Trigger Observe

👉Visit the portal and click on "Courses". You should see three cards, each displaying an audio recap. To find the URL of your portal agent:

gcloud run services list \
    --platform=managed \
    --region=us-central1 \
    --format='value(URL)' | grep portal

Click "play" on each course to ensure the audio recaps are aligned with the teaching plan you just generated!

Portal Courses

Exit the virtual environment.

deactivate

13. OPTIONAL: Role-Based collaboration with Gemini and DeepSeek

Having multiple perspectives is invaluable, especially when crafting engaging and thoughtful assignments. We'll now build a multi-agent system that leverages two different models with distinct roles to generate assignments: one promotes collaboration, and the other encourages self-study. We'll use a "single-path" architecture, where the workflow follows a fixed route.

Gemini Assignment Generator

Gemini Overview

We'll start by setting up the Gemini function to generate assignments with a collaborative emphasis. Edit the gemini.py file located in the assignment folder.

👉Paste the following code to the end of the gemini.py file:

def gen_assignment_gemini(state):
    region = get_next_region()
    client = genai.Client(vertexai=True, project=PROJECT_ID, location=region)
    print(f"---------------gen_assignment_gemini")
    response = client.models.generate_content(
        model=MODEL_ID, contents=f"""
        You are an instructor

        Develop engaging and practical assignments for each week, ensuring they align with the teaching plan's objectives and progressively build upon each other.

        For each week, provide the following:

        * **Week [Number]:** A descriptive title for the assignment (e.g., "Data Exploration Project," "Model Building Exercise").
        * **Learning Objectives Assessed:** List the specific learning objectives from the teaching plan that this assignment assesses.
        * **Description:** A detailed description of the task, including any specific requirements or constraints. Provide examples or scenarios if applicable.
        * **Deliverables:** Specify what students need to submit (e.g., code, report, presentation).
        * **Estimated Time Commitment:** The approximate time students should dedicate to completing the assignment.
        * **Assessment Criteria:** Briefly outline how the assignment will be graded (e.g., correctness, completeness, clarity, creativity).

        The assignments should be a mix of individual and collaborative work where appropriate. Consider different learning styles and provide opportunities for students to apply their knowledge creatively.

        Based on this teaching plan: {state["teaching_plan"]}
        """
    )

    print(f"---------------gen_assignment_gemini answer {response.text}")

    state["model_one_assignment"] = response.text

    return state


import unittest

class TestGenAssignmentGemini(unittest.TestCase):
    def test_gen_assignment_gemini(self):
        test_teaching_plan = "Week 1: 2D Shapes and Angles - Day 1: Review of basic 2D shapes (squares, rectangles, triangles, circles). Day 2: Exploring different types of triangles (equilateral, isosceles, scalene, right-angled). Day 3: Exploring quadrilaterals (square, rectangle, parallelogram, rhombus, trapezium). Day 4: Introduction to angles: right angles, acute angles, and obtuse angles. Day 5: Measuring angles using a protractor. Week 2: 3D Shapes and Symmetry - Day 6: Introduction to 3D shapes: cubes, cuboids, spheres, cylinders, cones, and pyramids. Day 7: Describing 3D shapes using faces, edges, and vertices. Day 8: Relating 2D shapes to 3D shapes. Day 9: Identifying lines of symmetry in 2D shapes. Day 10: Completing symmetrical figures. Week 3: Position, Direction, and Problem Solving - Day 11: Describing position using coordinates in the first quadrant. Day 12: Plotting coordinates to draw shapes. Day 13: Understanding translation (sliding a shape). Day 14: Understanding reflection (flipping a shape). Day 15: Problem-solving activities involving perimeter, area, and missing angles."

        initial_state = {"teaching_plan": test_teaching_plan, "model_one_assignment": "", "model_two_assignment": "", "final_assignment": ""}

        updated_state = gen_assignment_gemini(initial_state)

        self.assertIn("model_one_assignment", updated_state)
        self.assertIsNotNone(updated_state["model_one_assignment"])
        self.assertIsInstance(updated_state["model_one_assignment"], str)
        self.assertGreater(len(updated_state["model_one_assignment"]), 0)
        print(updated_state["model_one_assignment"])


if __name__ == '__main__':
    unittest.main()

It uses the Gemini model to generate assignments.

We are ready to test the Gemini Agent.

👉Run these commands in the terminal to set up the environment:

cd ~/aidemy-bootstrap/assignment
export PROJECT_ID=$(gcloud config get project)
python -m venv env
source env/bin/activate
pip install -r requirements.txt

👉Run this to test it:

python gemini.py

You should see an assignment that has more group work in the output. The assert test at the end will also output the results.

Here are some engaging and practical assignments for each week, designed to build progressively upon the teaching plan's objectives:

**Week 1: Exploring the World of 2D Shapes**

* **Learning Objectives Assessed:**
    * Identify and name basic 2D shapes (squares, rectangles, triangles, circles).
    * .....

* **Description:**
    * **Shape Scavenger Hunt:** Students will go on a scavenger hunt in their homes or neighborhoods, taking pictures of objects that represent different 2D shapes. They will then create a presentation or poster showcasing their findings, classifying each shape and labeling its properties (e.g., number of sides, angles, etc.).
    * **Triangle Trivia:** Students will research and create a short quiz or presentation about different types of triangles, focusing on their properties and real-world examples.
    * **Angle Exploration:** Students will use a protractor to measure various angles in their surroundings, such as corners of furniture, windows, or doors. They will record their measurements and create a chart categorizing the angles as right, acute, or obtuse.
....

**Week 2: Delving into the World of 3D Shapes and Symmetry**

* **Learning Objectives Assessed:**
    * Identify and name basic 3D shapes.
    * ....

* **Description:**
    * **3D Shape Construction:** Students will work in groups to build 3D shapes using construction paper, cardboard, or other materials. They will then create a presentation showcasing their creations, describing the number of faces, edges, and vertices for each shape.
    * **Symmetry Exploration:** Students will investigate the concept of symmetry by creating a visual representation of various symmetrical objects (e.g., butterflies, leaves, snowflakes) using drawing or digital tools. They will identify the lines of symmetry and explain their findings.
    * **Symmetry Puzzles:** Students will be given a half-image of a symmetrical figure and will be asked to complete the other half, demonstrating their understanding of symmetry. This can be done through drawing, cut-out activities, or digital tools.

**Week 3: Navigating Position, Direction, and Problem Solving**

* **Learning Objectives Assessed:**
    * Describe position using coordinates in the first quadrant.
    * ....

* **Description:**
    * **Coordinate Maze:** Students will create a maze using coordinates on a grid paper. They will then provide directions for navigating the maze using a combination of coordinate movements and translation/reflection instructions.
    * **Shape Transformations:** Students will draw shapes on a grid paper and then apply transformations such as translation and reflection, recording the new coordinates of the transformed shapes.
    * **Geometry Challenge:** Students will solve real-world problems involving perimeter, area, and angles. For example, they could be asked to calculate the perimeter of a room, the area of a garden, or the missing angle in a triangle.
....

Stop with ctrl+c, and clean up the test code by REMOVING the following code from gemini.py:

import unittest

class TestGenAssignmentGemini(unittest.TestCase):
    def test_gen_assignment_gemini(self):
        test_teaching_plan = "Week 1: 2D Shapes and Angles - Day 1: Review of basic 2D shapes (squares, rectangles, triangles, circles). Day 2: Exploring different types of triangles (equilateral, isosceles, scalene, right-angled). Day 3: Exploring quadrilaterals (square, rectangle, parallelogram, rhombus, trapezium). Day 4: Introduction to angles: right angles, acute angles, and obtuse angles. Day 5: Measuring angles using a protractor. Week 2: 3D Shapes and Symmetry - Day 6: Introduction to 3D shapes: cubes, cuboids, spheres, cylinders, cones, and pyramids. Day 7: Describing 3D shapes using faces, edges, and vertices. Day 8: Relating 2D shapes to 3D shapes. Day 9: Identifying lines of symmetry in 2D shapes. Day 10: Completing symmetrical figures. Week 3: Position, Direction, and Problem Solving - Day 11: Describing position using coordinates in the first quadrant. Day 12: Plotting coordinates to draw shapes. Day 13: Understanding translation (sliding a shape). Day 14: Understanding reflection (flipping a shape). Day 15: Problem-solving activities involving perimeter, area, and missing angles."

        initial_state = {"teaching_plan": test_teaching_plan, "model_one_assignment": "", "model_two_assignment": "", "final_assignment": ""}

        updated_state = gen_assignment_gemini(initial_state)

        self.assertIn("model_one_assignment", updated_state)
        self.assertIsNotNone(updated_state["model_one_assignment"])
        self.assertIsInstance(updated_state["model_one_assignment"], str)
        self.assertGreater(len(updated_state["model_one_assignment"]), 0)
        print(updated_state["model_one_assignment"])


if __name__ == '__main__':
    unittest.main()

Configure the DeepSeek Assignment Generator

While cloud-based AI platforms are convenient, self-hosting LLMs can be crucial for protecting data privacy and ensuring data sovereignty. We'll deploy the smallest DeepSeek model (1.5B parameters) on a Compute Engine instance. There are other options, such as hosting it on Google's Vertex AI platform or on a GKE cluster, but since this is a workshop on AI agents, and we don't want to keep you here forever, let's use the simplest approach. If you are interested in digging into the other options, take a look at the deepseek-vertexai.py file under the assignment folder, which provides sample code for interacting with models deployed on Vertex AI.

Deepseek Overview

👉Run this command in the terminal to create a Compute Engine instance running Ollama, a self-hosted LLM platform:

cd ~/aidemy-bootstrap/assignment
gcloud compute instances create ollama-instance \
    --image-family=ubuntu-2204-lts \
    --image-project=ubuntu-os-cloud \
    --machine-type=e2-standard-4 \
    --zone=us-central1-a \
    --metadata-from-file startup-script=startup.sh \
    --boot-disk-size=50GB \
    --tags=ollama \
    --scopes=https://www.googleapis.com/auth/cloud-platform

To verify the Compute Engine instance is running:

Navigate to Compute Engine > "VM instances" in the Google Cloud Console. You should see the ollama-instance listed with a green check mark indicating that it's running. If you can't see it, make sure you're looking in the us-central1-a zone; you may need to adjust the zone filter.

Compute Engine List

👉We'll install the smallest DeepSeek model and test it. Back in the Cloud Shell Editor, in a new terminal, run the following command to SSH into the GCE instance:

gcloud compute ssh ollama-instance --zone=us-central1-a

Upon establishing the SSH connection, you may be prompted with the following:

"Do you want to continue (Y/n)?"

Simply type Y (case-insensitive) and press Enter to proceed.

Next, you might be asked to create a passphrase for the SSH key. If you prefer not to use a passphrase, just press Enter twice to accept the default (no passphrase).

👉Now that you are in the virtual machine, pull the smallest DeepSeek R1 model and test whether it works:

ollama pull deepseek-r1:1.5b
ollama run deepseek-r1:1.5b "who are you?"

👉To exit the GCE instance, enter the following in the SSH terminal:

exit

👉Next, set up the network policy so other services can access the LLM. Please limit access to the instance if you want to do this in production: either implement security login for the service or restrict IP access. Run:

gcloud compute firewall-rules create allow-ollama-11434 \
    --allow=tcp:11434 \
    --target-tags=ollama \
    --description="Allow access to Ollama on port 11434"

👉To verify if your firewall policy is working correctly, try running:

export OLLAMA_HOST=http://$(gcloud compute instances describe ollama-instance --zone=us-central1-a --format='value(networkInterfaces[0].accessConfigs[0].natIP)'):11434
curl -X POST "${OLLAMA_HOST}/api/generate" \
    -H "Content-Type: application/json" \
    -d '{
        "prompt": "Hello, what are you?",
        "model": "deepseek-r1:1.5b",
        "stream": false
    }'
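The same request can be issued from Python with only the standard library. Below is a hedged sketch: the host address is a placeholder, and the actual POST is commented out because it requires the instance to be reachable:

```python
import json
from urllib import request

OLLAMA_HOST = "http://10.0.0.1:11434"  # placeholder; use your instance's address

# Same payload as the curl call: stream=False asks Ollama for a single
# JSON response instead of a chunked stream.
payload = json.dumps({
    "prompt": "Hello, what are you?",
    "model": "deepseek-r1:1.5b",
    "stream": False,
}).encode("utf-8")

req = request.Request(
    f"{OLLAMA_HOST}/api/generate",
    data=payload,
    headers={"Content-Type": "application/json"},
)
# response = request.urlopen(req)          # requires network access
# print(json.load(response)["response"])   # the model's reply text
print(req.get_full_url())
```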

Next, we'll work on the Deepseek function in the assignment agent to generate assignments with individual work emphasis.

👉Edit deepseek.py under the assignment folder and add the following snippet to the end:

def gen_assignment_deepseek(state):
    print(f"---------------gen_assignment_deepseek")

    template = """
        You are an instructor who favors students focusing on individual work.

        Develop engaging and practical assignments for each week, ensuring they align with the teaching plan's objectives and progressively build upon each other.

        For each week, provide the following:

        * **Week [Number]:** A descriptive title for the assignment (e.g., "Data Exploration Project," "Model Building Exercise").
        * **Learning Objectives Assessed:** List the specific learning objectives from the teaching plan that this assignment assesses.
        * **Description:** A detailed description of the task, including any specific requirements or constraints. Provide examples or scenarios if applicable.
        * **Deliverables:** Specify what students need to submit (e.g., code, report, presentation).
        * **Estimated Time Commitment:** The approximate time students should dedicate to completing the assignment.
        * **Assessment Criteria:** Briefly outline how the assignment will be graded (e.g., correctness, completeness, clarity, creativity).

        The assignments should be a mix of individual and collaborative work where appropriate. Consider different learning styles and provide opportunities for students to apply their knowledge creatively.

        Based on this teaching plan: {teaching_plan}
        """

    prompt = ChatPromptTemplate.from_template(template)

    model = OllamaLLM(model="deepseek-r1:1.5b",
                      base_url=OLLAMA_HOST)

    chain = prompt | model

    response = chain.invoke({"teaching_plan": state["teaching_plan"]})
    state["model_two_assignment"] = response

    return state

import unittest

class TestGenAssignmentDeepseek(unittest.TestCase):
    def test_gen_assignment_deepseek(self):
        test_teaching_plan = "Week 1: 2D Shapes and Angles - Day 1: Review of basic 2D shapes (squares, rectangles, triangles, circles). Day 2: Exploring different types of triangles (equilateral, isosceles, scalene, right-angled). Day 3: Exploring quadrilaterals (square, rectangle, parallelogram, rhombus, trapezium). Day 4: Introduction to angles: right angles, acute angles, and obtuse angles. Day 5: Measuring angles using a protractor. Week 2: 3D Shapes and Symmetry - Day 6: Introduction to 3D shapes: cubes, cuboids, spheres, cylinders, cones, and pyramids. Day 7: Describing 3D shapes using faces, edges, and vertices. Day 8: Relating 2D shapes to 3D shapes. Day 9: Identifying lines of symmetry in 2D shapes. Day 10: Completing symmetrical figures. Week 3: Position, Direction, and Problem Solving - Day 11: Describing position using coordinates in the first quadrant. Day 12: Plotting coordinates to draw shapes. Day 13: Understanding translation (sliding a shape). Day 14: Understanding reflection (flipping a shape). Day 15: Problem-solving activities involving perimeter, area, and missing angles."

        initial_state = {"teaching_plan": test_teaching_plan, "model_one_assignment": "", "model_two_assignment": "", "final_assignment": ""}

        updated_state = gen_assignment_deepseek(initial_state)

        self.assertIn("model_two_assignment", updated_state)
        self.assertIsNotNone(updated_state["model_two_assignment"])
        self.assertIsInstance(updated_state["model_two_assignment"], str)
        self.assertGreater(len(updated_state["model_two_assignment"]), 0)
        print(updated_state["model_two_assignment"])


if __name__ == '__main__':
    unittest.main()
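The `chain = prompt | model` line uses LangChain's pipe-style composition: each stage's output becomes the next stage's input. A toy stand-in (not the real LangChain API) showing the idea:

```python
class Runnable:
    # Toy sketch of pipe-style composition; LangChain's real Runnable
    # protocol is richer, but `a | b` composes stages the same way.
    def __init__(self, fn):
        self.fn = fn

    def __or__(self, other):
        # (a | b).invoke(x) == b.invoke(a.invoke(x))
        return Runnable(lambda x: other.fn(self.fn(x)))

    def invoke(self, x):
        return self.fn(x)

template = Runnable(lambda s: f"Teaching plan: {s['teaching_plan']}")
model = Runnable(lambda prompt: prompt.upper())  # pretend LLM

chain = template | model
print(chain.invoke({"teaching_plan": "week 1"}))  # TEACHING PLAN: WEEK 1
```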

👉Let's test it by running:

cd ~/aidemy-bootstrap/assignment
source env/bin/activate
export PROJECT_ID=$(gcloud config get project)
export OLLAMA_HOST=http://$(gcloud compute instances describe ollama-instance --zone=us-central1-a --format='value(networkInterfaces[0].accessConfigs[0].natIP)'):11434
python deepseek.py

You should see an assignment that has more self-study work.

**Assignment Plan for Each Week**

---

### **Week 1: 2D Shapes and Angles**
- **Week Title:** "Exploring 2D Shapes"
Assign students to research and present on various 2D shapes. Include a project where they create models using straws and tape for triangles, draw quadrilaterals with specific measurements, and compare their properties.

### **Week 2: 3D Shapes and Symmetry**
Assign students to create models or nets for cubes and cuboids. They will also predict how folding these nets form the 3D shapes. Include a project where they identify symmetrical properties using mirrors or folding techniques.

### **Week 3: Position, Direction, and Problem Solving**

Assign students to use mirrors or folding techniques for reflections. Include activities where they measure angles, use a protractor, solve problems involving perimeter/area, and create symmetrical designs.
....

👉Stop with ctrl+c, and clean up the test code by REMOVING the following code from deepseek.py:

import unittest

class TestGenAssignmentDeepseek(unittest.TestCase):
    def test_gen_assignment_deepseek(self):
        test_teaching_plan = "Week 1: 2D Shapes and Angles - Day 1: Review of basic 2D shapes (squares, rectangles, triangles, circles). Day 2: Exploring different types of triangles (equilateral, isosceles, scalene, right-angled). Day 3: Exploring quadrilaterals (square, rectangle, parallelogram, rhombus, trapezium). Day 4: Introduction to angles: right angles, acute angles, and obtuse angles. Day 5: Measuring angles using a protractor. Week 2: 3D Shapes and Symmetry - Day 6: Introduction to 3D shapes: cubes, cuboids, spheres, cylinders, cones, and pyramids. Day 7: Describing 3D shapes using faces, edges, and vertices. Day 8: Relating 2D shapes to 3D shapes. Day 9: Identifying lines of symmetry in 2D shapes. Day 10: Completing symmetrical figures. Week 3: Position, Direction, and Problem Solving - Day 11: Describing position using coordinates in the first quadrant. Day 12: Plotting coordinates to draw shapes. Day 13: Understanding translation (sliding a shape). Day 14: Understanding reflection (flipping a shape). Day 15: Problem-solving activities involving perimeter, area, and missing angles."

        initial_state = {"teaching_plan": test_teaching_plan, "model_one_assignment": "", "model_two_assignment": "", "final_assignment": ""}

        updated_state = gen_assignment_deepseek(initial_state)

        self.assertIn("model_two_assignment", updated_state)
        self.assertIsNotNone(updated_state["model_two_assignment"])
        self.assertIsInstance(updated_state["model_two_assignment"], str)
        self.assertGreater(len(updated_state["model_two_assignment"]), 0)
        print(updated_state["model_two_assignment"])


if __name__ == '__main__':
    unittest.main()

Now, we'll use the same gemini model to combine both assignments into a new one. Edit the gemini.py file located in the assignment folder.

👉Paste the following code to the end of the gemini.py file:

def combine_assignments(state):
   
print(f"---------------combine_assignments ")
   
region=get_next_region()
   
client = genai.Client(vertexai=True, project=PROJECT_ID, location=region)
   
response = client.models.generate_content(
       
model=MODEL_ID, contents=f"""
        Look at all the proposed assignment so far {state["model_one_assignment"]} and {state["model_two_assignment"]}, combine them and come up with a final assignment for student.
        """
   
)

   
state["final_assignment"] = response.text
   
   
return state

To combine the strengths of both models, we'll orchestrate a defined workflow using LangGraph. This workflow consists of three steps: first, the Gemini model generates an assignment focused on collaboration; second, the DeepSeek model generates an assignment emphasizing individual work; finally, Gemini synthesizes these two assignments into a single, comprehensive assignment. Because we predefine the sequence of steps without LLM decision-making, this constitutes a single-path, user-defined orchestration.

LangGraph combine overview
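For reference, here is a minimal sketch of the shared state that flows through this graph. The actual State class was defined earlier in the codelab; the field names here are inferred from the keys the workflow nodes read and write, so treat this as illustrative rather than authoritative:

```python
from typing import TypedDict

# Sketch of the shared LangGraph state; the field names follow the keys
# used by the assignment workflow nodes (inferred, not the exact class).
class State(TypedDict):
    teaching_plan: str          # input: the plan published by the planner agent
    model_one_assignment: str   # written by the Gemini node
    model_two_assignment: str   # written by the DeepSeek node
    final_assignment: str       # written by the combine step

# Each node receives the current state, updates its own slice, and returns it:
def example_node(state: State) -> State:
    state["model_one_assignment"] = "draft assignment from model one"
    return state
```

Because every node shares this single state object, each step can build on the outputs of the previous ones without any direct coupling between the models.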

👉Paste the following code to the end of the main.py file under the assignment folder:

def create_assignment(teaching_plan: str):
    print(f"create_assignment---->{teaching_plan}")
    builder = StateGraph(State)
    builder.add_node("gen_assignment_gemini", gen_assignment_gemini)
    builder.add_node("gen_assignment_deepseek", gen_assignment_deepseek)
    builder.add_node("combine_assignments", combine_assignments)

    builder.add_edge(START, "gen_assignment_gemini")
    builder.add_edge("gen_assignment_gemini", "gen_assignment_deepseek")
    builder.add_edge("gen_assignment_deepseek", "combine_assignments")
    builder.add_edge("combine_assignments", END)

    graph = builder.compile()
    state = graph.invoke({"teaching_plan": teaching_plan})

    return state["final_assignment"]



import unittest

class TestCreateAssignment(unittest.TestCase):
    def test_create_assignment(self):
        test_teaching_plan = "Week 1: 2D Shapes and Angles - Day 1: Review of basic 2D shapes (squares, rectangles, triangles, circles). Day 2: Exploring different types of triangles (equilateral, isosceles, scalene, right-angled). Day 3: Exploring quadrilaterals (square, rectangle, parallelogram, rhombus, trapezium). Day 4: Introduction to angles: right angles, acute angles, and obtuse angles. Day 5: Measuring angles using a protractor. Week 2: 3D Shapes and Symmetry - Day 6: Introduction to 3D shapes: cubes, cuboids, spheres, cylinders, cones, and pyramids. Day 7: Describing 3D shapes using faces, edges, and vertices. Day 8: Relating 2D shapes to 3D shapes. Day 9: Identifying lines of symmetry in 2D shapes. Day 10: Completing symmetrical figures. Week 3: Position, Direction, and Problem Solving - Day 11: Describing position using coordinates in the first quadrant. Day 12: Plotting coordinates to draw shapes. Day 13: Understanding translation (sliding a shape). Day 14: Understanding reflection (flipping a shape). Day 15: Problem-solving activities involving perimeter, area, and missing angles."

        # create_assignment expects the teaching plan string, not a state dict
        final_assignment = create_assignment(test_teaching_plan)

        print(final_assignment)

if __name__ == '__main__':
    unittest.main()

👉To test the create_assignment function and confirm that the workflow combining Gemini and DeepSeek is functional, run the following commands:

cd ~/aidemy-bootstrap/assignment
source env/bin/activate
pip install -r requirements.txt
python main.py

You should see output that combines both models' perspectives: assignments for individual student study as well as for student group work.

**Tasks:**

1. **Clue Collection:** Gather all the clues left by the thieves. These clues will include:
   * Descriptions of shapes and their properties (angles, sides, etc.)
   * Coordinate grids with hidden messages
   * Geometric puzzles requiring transformation (translation, reflection, rotation)
   * Challenges involving area, perimeter, and angle calculations

2. **Clue Analysis:** Decipher each clue using your geometric knowledge. This will involve:
   * Identifying the shape and its properties
   * Plotting coordinates and interpreting patterns on the grid
   * Solving geometric puzzles by applying transformations
   * Calculating area, perimeter, and missing angles

3. **Case Report:** Create a comprehensive case report outlining your findings. This report should include:
   * A detailed explanation of each clue and its solution
   * Sketches and diagrams to support your explanations
   * A step-by-step account of how you followed the clues to locate the artifact
   * A final conclusion about the thieves and their motives

👉Stop the script with ctrl+c, and clean up the test code by REMOVING the following code from main.py:

import unittest

class TestCreateAssignment(unittest.TestCase):
    def test_create_assignment(self):
        test_teaching_plan = "Week 1: 2D Shapes and Angles - Day 1: Review of basic 2D shapes (squares, rectangles, triangles, circles). Day 2: Exploring different types of triangles (equilateral, isosceles, scalene, right-angled). Day 3: Exploring quadrilaterals (square, rectangle, parallelogram, rhombus, trapezium). Day 4: Introduction to angles: right angles, acute angles, and obtuse angles. Day 5: Measuring angles using a protractor. Week 2: 3D Shapes and Symmetry - Day 6: Introduction to 3D shapes: cubes, cuboids, spheres, cylinders, cones, and pyramids. Day 7: Describing 3D shapes using faces, edges, and vertices. Day 8: Relating 2D shapes to 3D shapes. Day 9: Identifying lines of symmetry in 2D shapes. Day 10: Completing symmetrical figures. Week 3: Position, Direction, and Problem Solving - Day 11: Describing position using coordinates in the first quadrant. Day 12: Plotting coordinates to draw shapes. Day 13: Understanding translation (sliding a shape). Day 14: Understanding reflection (flipping a shape). Day 15: Problem-solving activities involving perimeter, area, and missing angles."

        final_assignment = create_assignment(test_teaching_plan)

        print(final_assignment)

if __name__ == '__main__':
    unittest.main()

Generate Assignment.png

To make the assignment generation process automatic and responsive to new teaching plans, we'll leverage the existing event-driven architecture. The following code defines a Cloud Run Function (generate_assignment) that will be triggered whenever a new teaching plan is published to the Pub/Sub topic 'plan'.
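For context, a Pub/Sub-triggered CloudEvent wraps the actual payload in a base64-encoded message.data field. The sketch below shows roughly what the function has to unwrap; all values are illustrative:

```python
import base64
import json

# Illustrative CloudEvent data as delivered by a Pub/Sub trigger; the
# teaching plan is JSON, base64-encoded inside message.data.
plan_json = json.dumps({"teaching_plan": "Week 1: 2D Shapes and Angles ..."})
cloud_event_data = {
    "message": {
        "data": base64.b64encode(plan_json.encode("utf-8")).decode("utf-8"),
        "messageId": "1234567890",  # illustrative value
    },
    "subscription": "projects/my-project/subscriptions/my-sub",  # illustrative value
}

# Unwrapping mirrors what generate_assignment does:
decoded = json.loads(base64.b64decode(cloud_event_data["message"]["data"]).decode("utf-8"))
print(decoded["teaching_plan"])
```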

👉Add the following code to the end of main.py in the assignment folder:

@functions_framework.cloud_event
def generate_assignment(cloud_event):
    print(f"CloudEvent received: {cloud_event.data}")

    try:
        if isinstance(cloud_event.data.get('message', {}).get('data'), str):
            data = json.loads(base64.b64decode(cloud_event.data['message']['data']).decode('utf-8'))
            teaching_plan = data.get('teaching_plan')
        elif 'teaching_plan' in cloud_event.data:
            teaching_plan = cloud_event.data["teaching_plan"]
        else:
            raise KeyError("teaching_plan not found")

        assignment = create_assignment(teaching_plan)

        print(f"Assignment---->{assignment}")

        # Store the returned assignment in the bucket as a text file
        storage_client = storage.Client()
        bucket = storage_client.bucket(ASSIGNMENT_BUCKET)
        file_name = f"assignment-{random.randint(1, 1000)}.txt"
        blob = bucket.blob(file_name)
        blob.upload_from_string(assignment)

        return f"Assignment generated and stored in {ASSIGNMENT_BUCKET}/{file_name}", 200

    except (json.JSONDecodeError, AttributeError, KeyError) as e:
        print(f"Error decoding CloudEvent data: {e} - Data: {cloud_event.data}")
        return "Error processing event", 500

    except Exception as e:
        print(f"Error generating assignment: {e}")
        return "Error generating assignment", 500

Local Testing

Before deploying to Google Cloud, it's good practice to test the Cloud Run Function locally. This allows for faster iteration and easier debugging.

First, create a Cloud Storage bucket to store the generated assignment files and grant the service account access to the bucket. Run the following commands in the terminal:

👉 IMPORTANT: Ensure you define a unique ASSIGNMENT_BUCKET name that begins with "aidemy-assignment-". This unique name is crucial for avoiding naming conflicts when creating your Cloud Storage bucket. (Replace <YOUR_NAME> with any random word)

export ASSIGNMENT_BUCKET=aidemy-assignment-<YOUR_NAME> # Name must be unique

👉And run:

export PROJECT_ID=$(gcloud config get project)
export SERVICE_ACCOUNT_NAME=$(gcloud compute project-info describe --format="value(defaultServiceAccount)")
gsutil mb -p $PROJECT_ID -l us-central1 gs://$ASSIGNMENT_BUCKET

gcloud storage buckets add-iam-policy-binding gs://$ASSIGNMENT_BUCKET \
    --member "serviceAccount:$SERVICE_ACCOUNT_NAME" \
    --role "roles/storage.objectViewer"

gcloud storage buckets add-iam-policy-binding gs://$ASSIGNMENT_BUCKET \
    --member "serviceAccount:$SERVICE_ACCOUNT_NAME" \
    --role "roles/storage.objectCreator"

👉Now, start the Cloud Run Function emulator:

cd ~/aidemy-bootstrap/assignment
functions-framework \
    --target generate_assignment \
    --signature-type=cloudevent \
    --source main.py

👉While the emulator is running in one terminal, open a second terminal in the Cloud Shell. In this second terminal, send a test CloudEvent to the emulator to simulate a new teaching plan being published:

Two terminal

curl -X POST http://localhost:8080/ \
  -H "Content-Type: application/json" \
  -H "ce-id: event-id-01" \
  -H "ce-source: planner-agent" \
  -H "ce-specversion: 1.0" \
  -H "ce-type: google.cloud.pubsub.topic.v1.messagePublished" \
  -d '{
    "message": {
      "data": "eyJ0ZWFjaGluZ19wbGFuIjogIldlZWsgMTogMkQgU2hhcGVzIGFuZCBBbmdsZXMgLSBEYXkgMTogUmV2aWV3IG9mIGJhc2ljIDJEIHNoYXBlcyAoc3F1YXJlcywgcmVjdGFuZ2xlcywgdHJpYW5nbGVzLCBjaXJjbGVzKS4gRGF5IDI6IEV4cGxvcmluZyBkaWZmZXJlbnQgdHlwZXMgb2YgdHJpYW5nbGVzIChlcXVpbGF0ZXJhbCwgaXNvc2NlbGVzLCBzY2FsZW5lLCByaWdodC1hbmdsZWQpLiBEYXkgMzogRXhwbG9yaW5nIHF1YWRyaWxhdGVyYWxzIChzcXVhcmUsIHJlY3RhbmdsZSwgcGFyYWxsZWxvZ3JhbSwgcmhvbWJ1cywgdHJhcGV6aXVtKS4gRGF5IDQ6IEludHJvZHVjdGlvbiB0byBhbmdsZXM6IHJpZ2h0IGFuZ2xlcywgYWN1dGUgYW5nbGVzLCBhbmQgb2J0dXNlIGFuZ2xlcy4gRGF5IDU6IE1lYXN1cmluZyBhbmdsZXMgdXNpbmcgYSBwcm90cmFjdG9yLiBXZWVrIDI6IDNEIFNoYXBlcyBhbmQgU3ltbWV0cnkgLSBEYXkgNjogSW50cm9kdWN0aW9uIHRvIDNEIHNoYXBlczogY3ViZXMsIGN1Ym9pZHMsIHNwaGVyZXMsIGN5bGluZGVycywgY29uZXMsIGFuZCBweXJhbWlkcy4gRGF5IDc6IERlc2NyaWJpbmcgM0Qgc2hhcGVzIHVzaW5nIGZhY2VzLCBlZGdlcywgYW5kIHZlcnRpY2VzLiBEYXkgODogUmVsYXRpbmcgMkQgc2hhcGVzIHRvIDNEIHNoYXBlcy4gRGF5IDk6IElkZW50aWZ5aW5nIGxpbmVzIG9mIHN5bW1ldHJ5IGluIDJEIHNoYXBlcy4gRGF5IDEwOiBDb21wbGV0aW5nIHN5bW1ldHJpY2FsIGZpZ3VyZXMuIFdlZWsgMzogUG9zaXRpb24sIERpcmVjdGlvbiwgYW5kIFByb2JsZW0gU29sdmluZyAtIERheSAxMTogRGVzY3JpYmluZyBwb3NpdGlvbiB1c2luZyBjb29yZGluYXRlcyBpbiB0aGUgZmlyc3QgcXVhZHJhbnQuIERheSAxMjogUGxvdHRpbmcgY29vcmRpbmF0ZXMgdG8gZHJhdyBzaGFwZXMuIERheSAxMzogVW5kZXJzdGFuZGluZyB0cmFuc2xhdGlvbiAoc2xpZGluZyBhIHNoYXBlKS4gRGF5IDE0OiBVbmRlcnN0YW5kaW5nIHJlZmxlY3Rpb24gKGZsaXBwaW5nIGEgc2hhcGUpLiBEYXkgMTU6IFByb2JsZW0tc29sdmluZyBhY3Rpdml0aWVzIGludm9sdmluZyBwZXJpbWV0ZXIsIGFyZWEsIGFuZCBtaXNzaW5nIGFuZ2xlcy4ifQ=="
    }
  }'

Rather than staring blankly while waiting for the response, switch over to the other Cloud Shell terminal. You can observe the progress and any output or error messages generated by your function in the emulator's terminal. 😁

The curl command should print "OK" (without a trailing newline, so "OK" may appear on the same line as your terminal prompt).

To confirm that the assignment was successfully generated and stored, go to the Google Cloud Console and navigate to Storage > "Cloud Storage". Select the aidemy-assignment bucket you created. You should see a text file named assignment-{random number}.txt in the bucket. Click on the file to download it and verify that it contains the newly generated assignment.

12-01-assignment-bucket
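If you would rather generate your own test payload than reuse the long base64 string above, a small helper script can wrap any teaching plan the way Pub/Sub does and POST it to the emulator. This is a sketch using only the standard library; it assumes the emulator is still listening on localhost:8080:

```python
import base64
import json
import urllib.request

def make_pubsub_body(teaching_plan: str) -> bytes:
    """Wrap a teaching plan the way Pub/Sub wraps message data."""
    data = base64.b64encode(json.dumps({"teaching_plan": teaching_plan}).encode("utf-8"))
    return json.dumps({"message": {"data": data.decode("utf-8")}}).encode("utf-8")

def send_test_event(teaching_plan: str, url: str = "http://localhost:8080/") -> str:
    """POST a CloudEvent carrying the teaching plan to the local emulator."""
    headers = {
        "Content-Type": "application/json",
        "ce-id": "event-id-02",
        "ce-source": "planner-agent",
        "ce-specversion": "1.0",
        "ce-type": "google.cloud.pubsub.topic.v1.messagePublished",
    }
    req = urllib.request.Request(url, data=make_pubsub_body(teaching_plan), headers=headers)
    with urllib.request.urlopen(req) as resp:
        return resp.read().decode("utf-8")
```

Calling send_test_event("Week 1: ...") should produce the same behavior as the curl command above.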

👉In the terminal running the emulator, press ctrl+c to exit, then close the second terminal. 👉Also, in the terminal that was running the emulator, exit the virtual environment:

deactivate

Deployment Overview

👉Next, we'll deploy the assignment agent to the cloud:

cd ~/aidemy-bootstrap/assignment
export ASSIGNMENT_BUCKET=$(gcloud storage buckets list --format="value(name)" | grep aidemy-assignment)
export OLLAMA_HOST=http://$(gcloud compute instances describe ollama-instance --zone=us-central1-a --format='value(networkInterfaces[0].accessConfigs[0].natIP)'):11434
export PROJECT_ID=$(gcloud config get project)
gcloud functions deploy assignment-agent \
  --gen2 \
  --timeout=540 \
  --memory=2Gi \
  --cpu=1 \
  --set-env-vars="ASSIGNMENT_BUCKET=${ASSIGNMENT_BUCKET}" \
  --set-env-vars=GOOGLE_CLOUD_PROJECT=${PROJECT_ID} \
  --set-env-vars=OLLAMA_HOST=${OLLAMA_HOST} \
  --region=us-central1 \
  --runtime=python312 \
  --source=. \
  --entry-point=generate_assignment \
  --trigger-topic=plan

Verify the deployment by going to the Google Cloud Console and navigating to Cloud Run. You should see a new service named assignment-agent listed. 12-03-function-list

With the assignment generation workflow now implemented, tested, and deployed, we can move on to the next step: making these assignments accessible within the student portal.

14. OPTIONAL: Role-Based collaboration with Gemini and DeepSeek - Contd.

Dynamic website generation

To enhance the student portal and make it more engaging, we'll implement dynamic HTML generation for assignment pages. The goal is to automatically update the portal with a fresh, visually appealing design whenever a new assignment is generated. This leverages the LLM's coding capabilities to create a more dynamic and interesting user experience.

14-01-generate-html

👉In Cloud Shell Editor, edit the render.py file within the portal folder, replace

def render_assignment_page():
    return ""

with following code snippet:

def render_assignment_page(assignment: str):
    try:
        region = get_next_region()
        llm = VertexAI(model_name="gemini-2.0-flash-001", location=region)
        input_msg = HumanMessage(content=[f"Here the assignment {assignment}"])
        prompt_template = ChatPromptTemplate.from_messages(
            [
                SystemMessage(
                    content=(
                        """
                        As a frontend developer, create HTML to display a student assignment with a creative look and feel. Include the following navigation bar at the top:
                        ```
                        <nav>
                            <a href="/">Home</a>
                            <a href="/quiz">Quizzes</a>
                            <a href="/courses">Courses</a>
                            <a href="/assignment">Assignments</a>
                        </nav>
                        ```
                        Also include these links in the <head> section:
                        ```
                        <link rel="stylesheet" href="{{ url_for('static', filename='style.css') }}">
                        <link rel="preconnect" href="https://fonts.googleapis.com">
                        <link rel="preconnect" href="https://fonts.gstatic.com" crossorigin>
                        <link href="https://fonts.googleapis.com/css2?family=Roboto:wght@400;500&display=swap" rel="stylesheet">
                        ```
                        Do not apply inline styles to the navigation bar.
                        The HTML should display the full assignment content. In its CSS, be creative with the rainbow colors and aesthetic.
                        Make it creative and pretty
                        The assignment content should be well-structured and easy to read.
                        respond with JUST the html file
                        """
                    )
                ),
                input_msg,
            ]
        )

        prompt = prompt_template.format()

        response = llm.invoke(prompt)

        response = response.replace("```html", "")
        response = response.replace("```", "")
        with open("templates/assignment.html", "w") as f:
            f.write(response)

        print(f"response: {response}")

        return response
    except Exception as e:
        print(f"Error sending message to chatbot: {e}")  # Log this error too!
        return f"Unable to process your request at this time. Due to the following reason: {str(e)}"

It uses the Gemini model to dynamically generate HTML for the assignment. It takes the assignment content as input and uses a prompt to instruct Gemini to create a visually appealing HTML page with a creative style.
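The two replace calls in render_assignment_page handle the common case where Gemini wraps its answer in a Markdown code fence. If you want something slightly more robust (for example, tolerating extra whitespace around the fences while leaving unfenced output untouched), a regex-based strip is one option. This is an optional refinement, not part of the codelab code:

```python
import re

# Three backticks, assembled at runtime so the pattern is easy to read here.
FENCE = "`" * 3

def strip_markdown_fences(text: str) -> str:
    """Remove a surrounding Markdown code fence (optionally tagged 'html'), if present."""
    pattern = rf"^\s*{FENCE}(?:html)?\s*\n(.*?)\n?\s*{FENCE}\s*$"
    match = re.match(pattern, text, re.DOTALL)
    # If the text is not fenced, return it unchanged.
    return match.group(1) if match else text
```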

Next, we'll create an endpoint that will be triggered whenever a new document is added to the assignment bucket:

👉Within the portal folder, edit the app.py file and add the following code between the "## Add your code here" comments, AFTER the new_teaching_plan function:

## Add your code here

def new_teaching_plan():
    ...
    ...
    ...

    except Exception as e:
        ...
        ...

@app.route('/render_assignment', methods=['POST'])
def render_assignment():
    try:
        data = request.get_json()
        file_name = data.get('name')
        bucket_name = data.get('bucket')

        if not file_name or not bucket_name:
            return jsonify({'error': 'Missing file name or bucket name'}), 400

        storage_client = storage.Client()
        bucket = storage_client.bucket(bucket_name)
        blob = bucket.blob(file_name)
        content = blob.download_as_text()

        print(f"File content: {content}")

        render_assignment_page(content)

        return jsonify({'message': 'Assignment rendered successfully'})

    except Exception as e:
        print(f"Error processing file: {e}")
        return jsonify({'error': 'Error processing file'}), 500

## Add your code here

When triggered, it retrieves the file name and bucket name from the request data, downloads the assignment content from Cloud Storage, and calls the render_assignment_page function to generate the HTML.
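For reference, the JSON body this endpoint receives mirrors the Cloud Storage object notification that Eventarc forwards; the only fields the handler reads are name and bucket. The values below are made up for illustration:

```python
import json

# Illustrative Cloud Storage "object finalized" notification payload;
# only name and bucket are used by render_assignment, values are made up.
notification = {
    "name": "assignment-42.txt",         # object name in the bucket
    "bucket": "aidemy-assignment-demo",  # your ASSIGNMENT_BUCKET
    "contentType": "text/plain",
}

# The handler's parsing boils down to:
parsed = json.loads(json.dumps(notification))
file_name = parsed.get("name")
bucket_name = parsed.get("bucket")
print(f"Would download gs://{bucket_name}/{file_name}")
```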

👉We'll go ahead and run it locally:

cd ~/aidemy-bootstrap/portal
source env/bin/activate
python app.py

👉From the "Web preview" menu at the top of the Cloud Shell window, select "Preview on port 8080". This will open your application in a new browser tab. Navigate to the Assignment link in the navigation bar. You should see a blank page at this point, which is expected behavior since we haven't yet established the communication bridge between the assignment agent and the portal to dynamically populate the content.

14-02-deployment-overview

Go ahead and stop the script by pressing Ctrl+C.

👉To incorporate these changes and deploy the updated code, rebuild and push the portal agent image:

cd ~/aidemy-bootstrap/portal/
export PROJECT_ID=$(gcloud config get project)
docker build -t gcr.io/${PROJECT_ID}/aidemy-portal .
docker tag gcr.io/${PROJECT_ID}/aidemy-portal us-central1-docker.pkg.dev/${PROJECT_ID}/agent-repository/aidemy-portal
docker push us-central1-docker.pkg.dev/${PROJECT_ID}/agent-repository/aidemy-portal

👉After pushing the new image, redeploy the Cloud Run service. Run the following script to force the Cloud Run update:

export PROJECT_ID=$(gcloud config get project)
export COURSE_BUCKET_NAME=$(gcloud storage buckets list --format="value(name)" | grep aidemy-recap)
gcloud run services update aidemy-portal \
    --region=us-central1 \
    --set-env-vars=GOOGLE_CLOUD_PROJECT=${PROJECT_ID},COURSE_BUCKET_NAME=$COURSE_BUCKET_NAME

👉Now, we'll deploy an Eventarc trigger that listens for any new object created (finalized) in the assignment bucket. This trigger will automatically invoke the /render_assignment endpoint on the portal service when a new assignment file is created.

export PROJECT_ID=$(gcloud config get project)
gcloud projects add-iam-policy-binding $PROJECT_ID \
  --member="serviceAccount:$(gcloud storage service-agent --project $PROJECT_ID)" \
  --role="roles/pubsub.publisher"
export SERVICE_ACCOUNT_NAME=$(gcloud compute project-info describe --format="value(defaultServiceAccount)")
gcloud eventarc triggers create portal-assignment-trigger \
  --location=us-central1 \
  --service-account=$SERVICE_ACCOUNT_NAME \
  --destination-run-service=aidemy-portal \
  --destination-run-region=us-central1 \
  --destination-run-path="/render_assignment" \
  --event-filters="bucket=$ASSIGNMENT_BUCKET" \
  --event-filters="type=google.cloud.storage.object.v1.finalized"

To verify that the trigger was created successfully, navigate to the Eventarc Triggers page in the Google Cloud Console. You should see portal-assignment-trigger listed in the table. Click on the trigger name to view its details. Assignment Trigger

It may take up to 2-3 minutes for the new trigger to become active.

To see the dynamic assignment generation in action, run the following command to find the URL of your planner agent (if you don't have it handy):

gcloud run services list --platform=managed --region=us-central1 --format='value(URL)' | grep planner

Find the URL of your portal agent:

gcloud run services list --platform=managed --region=us-central1 --format='value(URL)' | grep portal

In the planner agent, generate a new teaching plan.

13-02-assignment

After a few minutes (to allow for the audio generation, assignment generation, and HTML rendering to complete), navigate to the student portal.

👉Click on the "Assignment" link in the navigation bar. You should see a newly created assignment with dynamically generated HTML. Each time a new teaching plan is generated, a freshly styled assignment page should be rendered.

13-02-assignment

Congratulations on completing the Aidemy multi-agent system! You've gained practical experience and valuable insights into:

  • The benefits of multi-agent systems, including modularity, scalability, specialization, and simplified maintenance.
  • The importance of event-driven architectures for building responsive and loosely coupled applications.
  • The strategic use of LLMs, matching the right model to the task and integrating them with tools for real-world impact.
  • Cloud-native development practices using Google Cloud services to create scalable and reliable solutions.
  • The importance of considering data privacy and self-hosting models as an alternative to vendor solutions.

You now have a solid foundation for building sophisticated AI-powered applications on Google Cloud!

15. Challenges and Next Steps

Congratulations on building the Aidemy multi-agent system! You've laid a strong foundation for AI-powered education. Now, let's consider some challenges and potential future enhancements to further expand its capabilities and address real-world needs:

Interactive Learning with Live Q&A:

  • Challenge: Can you leverage Gemini 2's Live API to create a real-time Q&A feature for students? Imagine a virtual classroom where students can ask questions and receive immediate, AI-powered responses.

Automated Assignment Submission and Grading:

  • Challenge: Design and implement a system that allows students to submit assignments digitally and have them automatically graded by AI, with a mechanism to detect and prevent plagiarism. This challenge presents a great opportunity to explore Retrieval Augmented Generation (RAG) to enhance the accuracy and reliability of the grading and plagiarism detection processes.

aidemy-climb

16. Clean Up

Now that we've built and explored our Aidemy multi-agent system, it's time to clean up our Google Cloud environment.

👉Delete Cloud Run services

gcloud run services delete aidemy-planner --region=us-central1 --quiet
gcloud run services delete aidemy-portal --region=us-central1 --quiet
gcloud run services delete courses-agent --region=us-central1 --quiet
gcloud run services delete book-provider --region=us-central1 --quiet
gcloud run services delete assignment-agent --region=us-central1 --quiet

👉Delete Eventarc trigger

gcloud eventarc triggers delete plan-topic-trigger --location=us-central1 --quiet
gcloud eventarc triggers delete portal-assignment-trigger --location=us-central1 --quiet
ASSIGNMENT_AGENT_TRIGGER=$(gcloud eventarc triggers list --project="$PROJECT_ID" --location=us-central1 --filter="name:assignment-agent" --format="value(name)")
COURSES_AGENT_TRIGGER=$(gcloud eventarc triggers list --project="$PROJECT_ID" --location=us-central1 --filter="name:courses-agent" --format="value(name)")
gcloud eventarc triggers delete $ASSIGNMENT_AGENT_TRIGGER --location=us-central1 --quiet
gcloud eventarc triggers delete $COURSES_AGENT_TRIGGER --location=us-central1 --quiet

👉Delete Pub/Sub topic

gcloud pubsub topics delete plan --project="$PROJECT_ID" --quiet

👉Delete Cloud SQL instance

gcloud sql instances delete aidemy --quiet

👉Delete Artifact Registry repository

gcloud artifacts repositories delete agent-repository --location=us-central1 --quiet

👉Delete Secret Manager secrets

gcloud secrets delete db-user --quiet
gcloud secrets delete db-pass --quiet
gcloud secrets delete db-name --quiet

👉Delete Compute Engine instance (if created for Deepseek)

gcloud compute instances delete ollama-instance --zone=us-central1-a --quiet

👉Delete the firewall rule for Deepseek instance

gcloud compute firewall-rules delete allow-ollama-11434 --quiet

👉Delete Cloud Storage buckets

export COURSE_BUCKET_NAME=$(gcloud storage buckets list --format="value(name)" | grep aidemy-recap)
export ASSIGNMENT_BUCKET=$(gcloud storage buckets list --format="value(name)" | grep aidemy-assignment)
gsutil rm -r gs://$COURSE_BUCKET_NAME
gsutil rm -r gs://$ASSIGNMENT_BUCKET

aidemy-broom

،
Aidemy:
Building Multi-Agent Systems with LangGraph, EDA, and Generative AI on Google Cloud

درباره این codelab

subjectآخرین به‌روزرسانی: مارس ۱۳, ۲۰۲۵
account_circleنویسنده: Christina Lin

1. مقدمه

سلام! So, you're into the idea of agents – little helpers that can get things done for you without you even lifting a finger, right? عالی! But let's be real, one agent isn't always going to cut it, especially when you're tackling bigger, more complex projects. You're probably going to need a whole team of them! That's where multi-agent systems come in.

Agents, when powered by LLMs, give you incredible flexibility compared to old-school hard coding. But, and there's always a but, they come with their own set of tricky challenges. And that's exactly what we're going to dive into in this workshop!

عنوان

Here's what you can expect to learn – think of it as leveling up your agent game:

Building Your First Agent with LangGraph : We'll get our hands dirty building your very own agent using LangGraph, a popular framework. You'll learn how to create tools that connect to databases, tap into the latest Gemini 2 API for some internet searching, and optimize the prompts and response, so your agent can interact with not only LLMs but existing services. We'll also show you how function calling works.

Agent Orchestration, Your Way : We'll explore different ways to orchestrate your agents, from simple straight paths to more complex multi-path scenarios. Think of it as directing the flow of your agent team.

Multi-Agent Systems : You'll discover how to set up a system where your agents can collaborate, and get things done together – all thanks to an event-driven architecture.

LLM Freedom – Use the Best for the Job: We're not stuck on just one LLM! You'll see how to use multiple LLMs, assigning them different roles to boost problem-solving power using cool "thinking models."

Dynamic Content? مشکلی نیست! : Imagine your agent creating dynamic content that's tailored specifically for each user, in real-time. We'll show you how to do it!

Taking it to the Cloud with Google Cloud : Forget just playing around in a notebook. We'll show you how to architect and deploy your multi-agent system on Google Cloud so it's ready for the real world!

This project will be a good example of how to use all the techniques we talked about.

2. معماری

Being a teacher or working in education can be super rewarding, but let's face it, the workload, especially all the prep work, can be challenging! Plus, there's often not enough staff and tutoring can be expensive. That's why we're proposing an AI-powered teaching assistant. This tool can lighten the load for educators and help bridge the gap caused by staff shortages and the lack of affordable tutoring.

Our AI teaching assistant can whip up detailed lesson plans, fun quizzes, easy-to-follow audio recaps, and personalized assignments. This lets teachers focus on what they do best: connecting with students and helping them fall in love with learning.

The system has two sites: one for teachers to create lesson plans for upcoming weeks,

برنامه ریز

and one for students to access quizzes, audio recaps, and assignments. پورتال

Alright, let's walk through the architecture powering our teaching assistant, Aidemy. As you can see, we've broken it down into several key components, all working together to make this happen.

معماری

Key Architectural Elements and Technologies :

Google Cloud Platform (GCP) : Central to the entire system:

  • Vertex AI: Accesses Google's Gemini LLMs.
  • Cloud Run: Serverless platform for deploying containerized agents and functions.
  • Cloud SQL: PostgreSQL database for curriculum data.
  • Pub/Sub & Eventarc: Foundation of the event-driven architecture, enabling asynchronous communication between components.
  • Cloud Storage: Stores audio recaps and assignment files.
  • Secret Manager: Securely manages database credentials.
  • Artifact Registry: Stores Docker images for the agents.
  • Compute Engine: To deploy self-hosted LLM instead of relying on vendor solutions

LLMs : The "brains" of the system:

  • Google's Gemini models: (Gemini 1.0 Pro, Gemini 2 Flash, Gemini 2 Flash Thinking, Gemini 1.5-pro) Used for lesson planning, content generation, dynamic HTML creation, quiz explanation and combining the assignments.
  • DeepSeek: Utilized for the specialized task of generating self-study assignments

LangChain & LangGraph : Frameworks for LLM Application Development

  • Facilitates the creation of complex multi-agent workflows.
  • Enables the intelligent orchestration of tools (API calls, database queries, web searches).
  • Implements event-driven architecture for system scalability and flexibility.

In essence, our architecture combines the power of LLMs with structured data and event-driven communication, all running on Google Cloud. This lets us build a scalable, reliable, and effective teaching assistant.

3. قبل از شروع

In the Google Cloud Console , on the project selector page, select or create a Google Cloud project . Make sure that billing is enabled for your Cloud project. Learn how to check if billing is enabled on a project .

👉Click Activate Cloud Shell at the top of the Google Cloud console (It's the terminal shape icon at the top of the Cloud Shell pane), click on the "Open Editor" button (it looks like an open folder with a pencil). This will open the Cloud Shell Code Editor in the window. You'll see a file explorer on the left side.

Cloud Shell

👉Click on the Cloud Code Sign-in button in the bottom status bar as shown. Authorize the plugin as instructed. If you see Cloud Code - no project in the status bar, click it, choose 'Select a Google Cloud Project' in the drop-down, and then select the specific Google Cloud project from the list of projects that you created.

Login project

👉Open the terminal in the cloud IDE. New terminal

👉In the terminal, verify that you're already authenticated and that the project is set to your project ID using the following command:

gcloud auth list

👉And run:

gcloud config set project <YOUR_PROJECT_ID>

👉Run the following command to enable the necessary Google Cloud APIs:

gcloud services enable compute.googleapis.com  \
                       storage.googleapis.com  \
                       run.googleapis.com  \
                       artifactregistry.googleapis.com  \
                       aiplatform.googleapis.com \
                       eventarc.googleapis.com \
                       sqladmin.googleapis.com \
                       secretmanager.googleapis.com \
                       cloudbuild.googleapis.com \
                       cloudresourcemanager.googleapis.com \
                       cloudfunctions.googleapis.com

This may take a couple of minutes.

Enable Gemini Code Assist in Cloud Shell IDE

Click on the Code Assist button in the left panel as shown and, one last time, select the correct Google Cloud project. If you are asked to enable the Cloud AI Companion API, do so and move forward. Once you've selected your Google Cloud project, ensure that it appears in the Cloud Code status message in the status bar and that Code Assist is enabled on the right of the status bar, as shown below:

Enable codeassist

Setting up permission

👉Set up the service account permission. In the terminal, run:

export PROJECT_ID=$(gcloud config get project)
export SERVICE_ACCOUNT_NAME=$(gcloud compute project-info describe --format="value(defaultServiceAccount)")

echo "Here's your SERVICE_ACCOUNT_NAME $SERVICE_ACCOUNT_NAME"

👉Grant permissions. In the terminal, run:

#Cloud Storage (Read/Write):
gcloud projects add-iam-policy-binding $PROJECT_ID \
  --member="serviceAccount:$SERVICE_ACCOUNT_NAME" \
  --role="roles/storage.objectAdmin"

#Pub/Sub (Publish/Receive):
gcloud projects add-iam-policy-binding $PROJECT_ID \
  --member="serviceAccount:$SERVICE_ACCOUNT_NAME" \
  --role="roles/pubsub.publisher"

gcloud projects add-iam-policy-binding $PROJECT_ID \
  --member="serviceAccount:$SERVICE_ACCOUNT_NAME" \
  --role="roles/pubsub.subscriber"

#Cloud SQL (Read/Write):
gcloud projects add-iam-policy-binding $PROJECT_ID \
  --member="serviceAccount:$SERVICE_ACCOUNT_NAME" \
  --role="roles/cloudsql.editor"

#Eventarc (Receive Events):
gcloud projects add-iam-policy-binding $PROJECT_ID \
  --member="serviceAccount:$SERVICE_ACCOUNT_NAME" \
  --role="roles/iam.serviceAccountTokenCreator"

gcloud projects add-iam-policy-binding $PROJECT_ID \
  --member="serviceAccount:$SERVICE_ACCOUNT_NAME" \
  --role="roles/eventarc.eventReceiver"

#Vertex AI (User):
gcloud projects add-iam-policy-binding $PROJECT_ID \
  --member="serviceAccount:$SERVICE_ACCOUNT_NAME" \
  --role="roles/aiplatform.user"

#Secret Manager (Read):
gcloud projects add-iam-policy-binding $PROJECT_ID \
  --member="serviceAccount:$SERVICE_ACCOUNT_NAME" \
  --role="roles/secretmanager.secretAccessor"

👉Validate the result in your IAM console. IAM console

👉Run the following commands in the terminal to create a Cloud SQL instance named aidemy . We'll need this later, but since this process can take some time, we'll do it now.

gcloud sql instances create aidemy \
    --database-version=POSTGRES_14 \
    --cpu=2 \
    --memory=4GB \
    --region=us-central1 \
    --root-password=1234qwer \
    --storage-size=10GB \
    --storage-auto-increase

4. Building the first agent

Before we dive into complex multi-agent systems, we need to establish a fundamental building block: a single, functional agent. In this section, we'll take our first steps by creating a simple "book provider" agent. The book provider agent takes a category as input and uses a Gemini LLM to generate a JSON representation of a book within that category. It then serves these book recommendations as a REST API endpoint.

Book Provider

👉In another browser tab, open the Google Cloud Console in your web browser, and in the navigation menu (☰), go to "Cloud Run". Click the "+ ... WRITE A FUNCTION" button.

Create Function

👉Next, we'll configure the basic settings of the Cloud Run Function:

  • Service name: book-provider
  • Region: us-central1
  • Runtime: Python 3.12
  • Authentication: Allow unauthenticated invocations (set to Enabled).

👉Leave other settings as default and click Create. This will take you to the source code editor.

You'll see pre-populated main.py and requirements.txt files.

The main.py file will contain the business logic of the function; requirements.txt will list the packages needed.

👉Now we are ready to write some code! But before diving in, let's see if Gemini Code Assist can give us a head start. Return to the Cloud Shell Editor, click on the Gemini Code Assist icon, and paste the following request into the prompt box: Gemini Code Assist

Use the functions_framework library to be deployable as an HTTP function. 
Accept a request with category and number_of_book parameters (either in JSON body or query string).
Use langchain and gemini to generate the data for book with fields bookname, author, publisher, publishing_date.
Use pydantic to define a Book model with the fields: bookname (string, description: "Name of the book"), author (string, description: "Name of the author"), publisher (string, description: "Name of the publisher"), and publishing_date (string, description: "Date of publishing").
Use langchain and gemini model to generate book data. the output should follow the format defined in Book model.

The logic should use JsonOutputParser from langchain to enforce output format defined in Book Model.
Have a function get_recommended_books(category) that internally uses langchain and gemini to return a single book object.
The main function, exposed as the Cloud Function, should call get_recommended_books() multiple times (based on number_of_book) and return a JSON list of the generated book objects.
Handle the case where category or number_of_book are missing by returning an error JSON response with a 400 status code.
return a JSON string representing the recommended books. use os library to retrieve GOOGLE_CLOUD_PROJECT env var. Use ChatVertexAI from langchain for the LLM call

Code Assist will then generate a potential solution, providing both the source code and a requirements.txt dependency file.

We encourage you to compare Code Assist's generated code with the tested, correct solution provided below. This lets you evaluate the tool's effectiveness and identify any discrepancies. While LLMs should never be blindly trusted, Code Assist can be a great tool for rapid prototyping and generating initial code structures, and it should be used for a good head start.

Since this is a workshop, we'll proceed with the verified code provided below. However, feel free to experiment with the Code Assist-generated code in your own time to gain a deeper understanding of its capabilities and limitations.
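Before looking at the full solution, it helps to see what "enforcing the output format defined in the Book model" actually means. The hand-rolled validator below is a stand-in for what `JsonOutputParser` does for us: parse the model's raw JSON reply and reject it if a required field is missing. The sample reply string is hypothetical; only the field names come from the tutorial's Book model.

```python
import json

# The fields the Book model requires; names mirror the tutorial's model.
REQUIRED_FIELDS = {"bookname", "author", "publisher", "publishing_date"}

def parse_book(raw: str) -> dict:
    """Parse and validate an LLM's JSON reply, as JsonOutputParser would.

    Raises ValueError if the reply is not valid JSON or misses a field.
    """
    data = json.loads(raw)
    missing = REQUIRED_FIELDS - data.keys()
    if missing:
        raise ValueError(f"LLM output missing fields: {sorted(missing)}")
    return data

# A well-formed (hypothetical) model reply passes validation:
ok = parse_book('{"bookname": "Echoes", "author": "A. Sharma", '
                '"publisher": "NovaLight", "publishing_date": "2077-03-15"}')
print(ok["bookname"])
```

This is also why the prompt includes `parser.get_format_instructions()`: it tells the model up front which JSON shape will be accepted, so validation failures become rare rather than routine.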

👉Return to the Cloud Run Function's source code editor (in the other browser tab). Carefully replace the existing content of main.py with the code provided below:

import functions_framework
import json
from flask import Flask, jsonify, request
from langchain_google_vertexai import ChatVertexAI
from langchain_core.output_parsers import JsonOutputParser
from langchain_core.prompts import PromptTemplate
from pydantic import BaseModel, Field
import os

class Book(BaseModel):
    bookname: str = Field(description="Name of the book")
    author: str = Field(description="Name of the author")
    publisher: str = Field(description="Name of the publisher")
    publishing_date: str = Field(description="Date of publishing")


project_id = os.environ.get("GOOGLE_CLOUD_PROJECT")

llm = ChatVertexAI(model_name="gemini-2.0-flash-lite-001")

def get_recommended_books(category):
    """
    A simple book recommendation function.

    Args:
        category (str): category

    Returns:
        str: A JSON string representing the recommended books.
    """
    parser = JsonOutputParser(pydantic_object=Book)
    question = f"Generate a random made up book on {category} with bookname, author and publisher and publishing_date"

    prompt = PromptTemplate(
        template="Answer the user query.\n{format_instructions}\n{query}\n",
        input_variables=["query"],
        partial_variables={"format_instructions": parser.get_format_instructions()},
    )

    chain = prompt | llm | parser
    response = chain.invoke({"query": question})

    return json.dumps(response)


@functions_framework.http
def recommended(request):
    request_json = request.get_json(silent=True)  # Get JSON data
    if request_json and 'category' in request_json and 'number_of_book' in request_json:
        category = request_json['category']
        number_of_book = int(request_json['number_of_book'])
    elif request.args and 'category' in request.args and 'number_of_book' in request.args:
        category = request.args.get('category')
        number_of_book = int(request.args.get('number_of_book'))
    else:
        return jsonify({'error': 'Missing category or number_of_book parameters'}), 400

    recommendations_list = []
    for i in range(number_of_book):
        book_dict = json.loads(get_recommended_books(category))
        print(f"book_dict=======>{book_dict}")
        recommendations_list.append(book_dict)

    return jsonify(recommendations_list)

👉Replace the contents of requirements.txt with the following:

functions-framework==3.*
google-genai==1.0.0
flask==3.1.0
jsonify==0.5
langchain_google_vertexai==2.0.13
langchain_core==0.3.34
pydantic==2.10.5

👉We'll set the Function entry point: recommended

03-02-function-create.png

👉Click SAVE AND DEPLOY to deploy the Function. Wait for the deployment process to complete. The Cloud Console will display the status. This may take a few minutes.

👉Once deployed, go back to the Cloud Shell editor and run the following in the terminal:

export PROJECT_ID=$(gcloud config get project)
export BOOK_PROVIDER_URL=$(gcloud run services describe book-provider --region=us-central1 --project=$PROJECT_ID --format="value(status.url)")

curl -X POST -H "Content-Type: application/json" -d '{"category": "Science Fiction", "number_of_book": 2}' $BOOK_PROVIDER_URL

It should show some book data in JSON format.

[
  {"author":"Anya Sharma","bookname":"Echoes of the Singularity","publisher":"NovaLight Publishing","publishing_date":"2077-03-15"},
  {"author":"Anya Sharma","bookname":"Echoes of the Quantum Dawn","publisher":"Nova Genesis Publishing","publishing_date":"2077-03-15"}
]

Congratulations! You have successfully deployed a Cloud Run Function. This is one of the services we will integrate when developing our Aidemy agent.

5. Building Tools: Connecting Agents to RESTful Services and Data

Let's go ahead and download the Bootstrap Skeleton Project; make sure you are in the Cloud Shell Editor. In the terminal, run:

git clone https://github.com/weimeilin79/aidemy-bootstrap.git

After running this command, a new folder named aidemy-bootstrap will be created in your Cloud Shell environment.

In the Cloud Shell Editor's Explorer pane (usually on the left side), you should now see the aidemy-bootstrap folder that was created when you cloned the Git repository. Open the root folder of your project in the Explorer. You'll find a planner subfolder within it; open that as well. project explorer

Let's start building the tools our agents will use to become truly helpful. As you know, LLMs are excellent at reasoning and generating text, but they need access to external resources to perform real-world tasks and provide accurate, up-to-date information. Think of these tools as the agent's "Swiss Army knife," giving it the ability to interact with the world.

When building an agent, it's easy to fall into hard-coding a ton of details. This creates an agent that is not flexible. Instead, by creating and using tools, the agent has access to external logic or systems which gives it the benefits of both the LLM and traditional programming.

In this section, we'll create the foundation for the planner agent, which teachers will use to generate lesson plans. Before the agent starts generating a plan, we want to set boundaries by providing more details on the subject and topic. We'll build three tools:

  1. Restful API Call: Interacting with a pre-existing API to retrieve data.
  2. Database Query: Fetching structured data from a Cloud SQL database.
  3. Google Search: Accessing real-time information from the web.

Fetching Book Recommendations from an API

First, let's create a tool that retrieves book recommendations from the book-provider API we deployed in the previous section. This demonstrates how an agent can leverage existing services.

Recommend book

In the Cloud Shell Editor, open the aidemy-bootstrap project that you cloned in the previous section.

👉Edit the book.py in the planner folder, and paste the following code at the end of the file:

def recommend_book(query: str):
    """
    Get a list of recommended books from an API endpoint

    Args:
        query: User's request string
    """
    region = get_next_region()
    llm = VertexAI(model_name="gemini-1.5-pro", location=region)

    query = f"""The user is trying to plan an education course, you are the teaching assistant. Help define the category of what the user requested to teach, respond with the category in no more than two words.

    user request:   {query}
    """
    print(f"-------->{query}")
    response = llm.invoke(query)
    print(f"CATEGORY RESPONSE------------>: {response}")

    # call this using python and parse the json back to dict
    category = response.strip()

    headers = {"Content-Type": "application/json"}
    data = {"category": category, "number_of_book": 2}

    books = requests.post(BOOK_PROVIDER_URL, headers=headers, json=data)

    return books.text

if __name__ == "__main__":
    print(recommend_book("I'm doing a course for my 5th grade student on Math Geometry, I'll need to recommend a few books, come up with a teaching plan, a few quizzes and also a homework assignment."))

Explanation:

  • recommend_book(query: str) : This function takes a user's query as input.
  • LLM Interaction : It uses the LLM to extract the category from the query. This demonstrates how you can use the LLM to help create parameters for tools.
  • API Call : It makes a POST request to the book-provider API, passing the category and the desired number of books.

👉To test this new function, set the environment variable. Run:

cd ~/aidemy-bootstrap/planner/
export BOOK_PROVIDER_URL=$(gcloud run services describe book-provider --region=us-central1 --project=$PROJECT_ID --format="value(status.url)")

👉Install the dependencies and run the code to ensure it works, run:

cd ~/aidemy-bootstrap/planner/
python -m venv env
source env/bin/activate
export PROJECT_ID=$(gcloud config get project)
pip install -r requirements.txt
python book.py

Ignore the Git warning pop-up window.

You should see a JSON string containing book recommendations retrieved from the book-provider API. The results are randomly generated. Your books may not be the same, but you should receive two book recommendations in JSON format.

[{"author":"Anya Sharma","bookname":"Echoes of the Singularity","publisher":"NovaLight Publishing","publishing_date":"2077-03-15"},{"author":"Anya Sharma","bookname":"Echoes of the Quantum Dawn","publisher":"Nova Genesis Publishing","publishing_date":"2077-03-15"}]

If you see this, the first tool is working correctly!

Instead of explicitly crafting a RESTful API call with specific parameters, we're using natural language ("I'm doing a course..."). The agent then intelligently extracts the necessary parameters (like the category) using NLP, highlighting how the agent leverages natural language understanding to interact with the API.

compare call
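The pattern above — free-form text in, structured API parameters out — can be sketched without any LLM at all. In the toy version below a stub function stands in for the Gemini call (the keyword table is invented purely so the example runs offline); the point is the shape of the translation: natural language on one side, the exact `{"category": ..., "number_of_book": ...}` payload that book-provider expects on the other.

```python
# A stub stands in for the Gemini call; the keyword table is invented
# purely so the example runs offline.
def fake_llm_extract_category(user_request: str) -> str:
    keywords = {"geometry": "Math", "poetry": "English", "cells": "Science"}
    for word, category in keywords.items():
        if word in user_request.lower():
            return category
    return "General"

def build_api_payload(user_request: str) -> dict:
    """Turn free-form text into the structured payload book-provider expects."""
    return {
        "category": fake_llm_extract_category(user_request),
        "number_of_book": 2,
    }

payload = build_api_payload("I'm doing a course on Math Geometry for 5th grade")
print(payload)   # {'category': 'Math', 'number_of_book': 2}
```

Swapping the stub for a real LLM call is exactly what `recommend_book` does: the LLM replaces the brittle keyword table with genuine language understanding, while the structured payload stays the same.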

👉Remove the following testing code from book.py:

if __name__ == "__main__":
    print(recommend_book("I'm doing a course for my 5th grade student on Math Geometry, I'll need to recommend a few books, come up with a teaching plan, a few quizzes and also a homework assignment."))

Getting Curriculum Data from a Database

Next, we'll build a tool that fetches structured curriculum data from a Cloud SQL PostgreSQL database. This allows the agent to access a reliable source of information for lesson planning.

create db

Remember the aidemy Cloud SQL instance you created in a previous step? Here's where it will be used.

👉Create a database named aidemy-db in the new instance.

gcloud sql databases create aidemy-db \
    --instance=aidemy

Let's verify the instance in Cloud SQL in the Google Cloud Console. You should see a Cloud SQL instance named aidemy listed. Click on the instance name to view its details. On the Cloud SQL instance details page, click "SQL Studio" in the left-hand navigation menu. This will open a new tab.

Click to connect to the database. Sign in to the SQL Studio

Select aidemy-db as the database. Enter postgres as the user and 1234qwer as the password. sql studio sign in

👉In the SQL Studio query editor, paste the following SQL code:

CREATE TABLE curriculums (
    id SERIAL PRIMARY KEY,
    year INT,
    subject VARCHAR(255),
    description TEXT
);

-- Inserting detailed curriculum data for different school years and subjects
INSERT INTO curriculums (year, subject, description) VALUES
-- Year 5
(5, 'Mathematics', 'Introduction to fractions, decimals, and percentages, along with foundational geometry and problem-solving techniques.'),
(5, 'English', 'Developing reading comprehension, creative writing, and basic grammar, with a focus on storytelling and poetry.'),
(5, 'Science', 'Exploring basic physics, chemistry, and biology concepts, including forces, materials, and ecosystems.'),
(5, 'Computer Science', 'Basic coding concepts using block-based programming and an introduction to digital literacy.'),

-- Year 6
(6, 'Mathematics', 'Expanding on fractions, ratios, algebraic thinking, and problem-solving strategies.'),
(6, 'English', 'Introduction to persuasive writing, character analysis, and deeper comprehension of literary texts.'),
(6, 'Science', 'Forces and motion, the human body, and introductory chemical reactions with hands-on experiments.'),
(6, 'Computer Science', 'Introduction to algorithms, logical reasoning, and basic text-based programming (Python, Scratch).'),

-- Year 7
(7, 'Mathematics', 'Algebraic expressions, geometry, and introduction to statistics and probability.'),
(7, 'English', 'Analytical reading of classic and modern literature, essay writing, and advanced grammar skills.'),
(7, 'Science', 'Introduction to cells and organisms, chemical reactions, and energy transfer in physics.'),
(7, 'Computer Science', 'Building on programming skills with Python, introduction to web development, and cyber safety.');

This SQL code creates a table named curriculums and inserts some sample data. Click Run to execute the SQL code. You should see a confirmation message indicating that the commands were executed successfully.

👉Expand the explorer, find the newly created table, and click query. A new editor tab opens with SQL generated for you:

sql studio select table

SELECT * FROM "public"."curriculums" LIMIT 1000;

👉Click Run .

The results table should display the rows of data you inserted in the previous step, confirming that the table and data were created correctly.

Now that you have successfully created a database with populated sample curriculum data, we'll build a tool to retrieve it.

👉In the Cloud Code Editor, edit the file curriculums.py in the aidemy-bootstrap/planner folder and paste the following code at the end of the file:

def connect_with_connector() -> sqlalchemy.engine.base.Engine:

    db_user = os.environ["DB_USER"]
    db_pass = os.environ["DB_PASS"]
    db_name = os.environ["DB_NAME"]

    # Instance connection name, built from the environment variables set below
    instance_connection_name = f"{os.environ['PROJECT_ID']}:{os.environ['REGION']}:{os.environ['INSTANCE_NAME']}"

    print(f"--------------------------->db_user: {db_user!r}")
    print(f"--------------------------->db_name: {db_name!r}")

    ip_type = IPTypes.PRIVATE if os.environ.get("PRIVATE_IP") else IPTypes.PUBLIC

    connector = Connector()

    def getconn() -> pg8000.dbapi.Connection:
        conn: pg8000.dbapi.Connection = connector.connect(
            instance_connection_name,
            "pg8000",
            user=db_user,
            password=db_pass,
            db=db_name,
            ip_type=ip_type,
        )
        return conn

    pool = sqlalchemy.create_engine(
        "postgresql+pg8000://",
        creator=getconn,
        pool_size=2,
        max_overflow=2,
        pool_timeout=30,  # 30 seconds
        pool_recycle=1800,  # 30 minutes
    )
    return pool


def init_connection_pool() -> sqlalchemy.engine.base.Engine:
    return connect_with_connector()


def get_curriculum(year: int, subject: str):
    """
    Get school curriculum

    Args:
        year: User's request year int
        subject: User's request subject string
    """
    try:
        stmt = sqlalchemy.text(
            "SELECT description FROM curriculums WHERE year = :year AND subject = :subject"
        )

        with db.connect() as conn:
            result = conn.execute(stmt, parameters={"year": year, "subject": subject})
            row = result.fetchone()
        if row:
            return row[0]
        else:
            return None

    except Exception as e:
        print(e)
        return None

db = init_connection_pool()

Explanation:

  • Environment Variables : The code retrieves database credentials and connection information from environment variables (more on this below).
  • connect_with_connector() : This function uses the Cloud SQL Connector to establish a secure connection to the database.
  • get_curriculum(year: int, subject: str) : This function takes the year and subject as input, queries the curriculums table, and returns the corresponding curriculum description.
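The heart of `get_curriculum` is a parameterized query: user-supplied `year` and `subject` values are bound to placeholders rather than spliced into the SQL string, which prevents SQL injection. The self-contained sketch below demonstrates the same pattern with an in-memory SQLite database standing in for Cloud SQL (the table layout and sample row mirror the curriculums table created in SQL Studio; `?` placeholders play the role of SQLAlchemy's `:year` / `:subject`).

```python
import sqlite3

# In-memory SQLite stands in for Cloud SQL; the table layout and sample row
# mirror the curriculums table created in SQL Studio.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE curriculums (id INTEGER PRIMARY KEY, year INT, subject TEXT, description TEXT)")
conn.execute(
    "INSERT INTO curriculums (year, subject, description) VALUES (?, ?, ?)",
    (6, "Mathematics", "Expanding on fractions, ratios, algebraic thinking, and problem-solving strategies."),
)

def get_curriculum_demo(year: int, subject: str):
    # Bound parameters keep user input out of the SQL text, the same idea
    # as sqlalchemy.text() with :year / :subject in the real tool.
    row = conn.execute(
        "SELECT description FROM curriculums WHERE year = ? AND subject = ?",
        (year, subject),
    ).fetchone()
    return row[0] if row else None

print(get_curriculum_demo(6, "Mathematics"))
```

The real tool adds the Cloud SQL Connector and a connection pool on top, but the query shape and the row-or-None return contract are the same.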

👉Before we can run the code, we must set some environment variables. In the terminal, run:

export PROJECT_ID=$(gcloud config get project)
export INSTANCE_NAME="aidemy"
export REGION="us-central1"
export DB_USER="postgres"
export DB_PASS="1234qwer"
export DB_NAME="aidemy-db"

👉To test, add the following code to the end of curriculums.py:

if __name__ == "__main__":
    print(get_curriculum(6, "Mathematics"))

👉Run the code:

cd ~/aidemy-bootstrap/planner/
source env/bin/activate
python curriculums.py

You should see the curriculum description for 6th-grade Mathematics printed to the console.

Expanding on fractions, ratios, algebraic thinking, and problem-solving strategies.

If you see the curriculum description, the database tool is working correctly! Go ahead and stop the script by pressing Ctrl+C .

👉Remove the following testing code from curriculums.py:

if __name__ == "__main__":
    print(get_curriculum(6, "Mathematics"))

👉Exit the virtual environment. In the terminal, run:

deactivate

6. Building Tools: Access real-time information from the web

Finally, we'll build a tool that uses Gemini 2's Google Search integration to access real-time information from the web. This helps the agent stay up to date and provide relevant results.

Gemini 2's integration with the Google Search API enhances agent capabilities by providing more accurate and contextually relevant search results. This allows agents to access up-to-date information and ground their responses in real-world data, minimizing hallucinations. The improved API integration also facilitates more natural language queries, enabling agents to formulate complex and nuanced search requests.

Search

This function takes a search query, curriculum, subject, and year as input and uses the Gemini API and the Google Search tool to retrieve relevant information from the internet. If you look closely, it's using the Google Generative AI SDK to do function calling without using any other framework.

👉Edit search.py in the aidemy-bootstrap folder and paste the following code at the end of the file:

model_id = "gemini-2.0-flash-001"

google_search_tool = Tool(
    google_search=GoogleSearch()
)

def search_latest_resource(search_text: str, curriculum: str, subject: str, year: int):
    """
    Get latest information from the internet

    Args:
        search_text: User's request category string
        curriculum: Curriculum detail string
        subject: User's request subject string
        year: User's request year integer
    """
    search_text = "%s in the context of year %d and subject %s with following curriculum detail %s " % (search_text, year, subject, curriculum)
    region = get_next_region()
    client = genai.Client(vertexai=True, project=PROJECT_ID, location=region)
    print(f"search_latest_resource text-----> {search_text}")
    response = client.models.generate_content(
        model=model_id,
        contents=search_text,
        config=GenerateContentConfig(
            tools=[google_search_tool],
            response_modalities=["TEXT"],
        )
    )
    print(f"search_latest_resource response-----> {response}")
    return response

if __name__ == "__main__":
    response = search_latest_resource("What are the syllabus for Year 6 Mathematics?", "Expanding on fractions, ratios, algebraic thinking, and problem-solving strategies.", "Mathematics", 6)
    for each in response.candidates[0].content.parts:
        print(each.text)

توضیح:

  • Defining Tool - google_search_tool : Wraps the GoogleSearch object within a Tool .
  • search_latest_resource(search_text: str, curriculum: str, subject: str, year: int) : This function takes a search query, curriculum, subject, and year as input and uses the Gemini API to perform a Google search.
  • GenerateContentConfig : Declares that the model has access to the GoogleSearch tool.

The Gemini model internally analyzes the search_text and determines whether it can answer the question directly or if it needs to use the GoogleSearch tool. This is a critical step that happens within the LLM's reasoning process. The model has been trained to recognize situations where external tools are necessary. If the model decides to use the GoogleSearch tool, the Google Generative AI SDK handles the actual invocation. The SDK takes the model's decision and the parameters it generates and sends them to the Google Search API. This part is hidden from the user in the code.

The Gemini model then integrates the search results into its response. It can use the information to answer the user's question, generate a summary, or perform some other task.
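The decide-then-ground flow described above can be mocked in a few lines. Everything in this sketch is invented (the keyword heuristic, the fake search results): its only purpose is to make visible the branch that normally happens inside the model — answer from what it already knows, or emit a tool call that the SDK executes and whose results get folded into the final response.

```python
# Toy version of the decision the model makes internally: answer directly,
# or emit a "tool call" that the SDK would execute. All names are invented.
def model_decide(question: str):
    if "latest" in question or "2025" in question:
        return {"tool": "google_search", "query": question}
    return {"answer": f"From training data: {question}"}

def run_with_tools(question: str) -> str:
    decision = model_decide(question)
    if "tool" in decision:
        # Stand-in for the hidden SDK -> Google Search round trip.
        results = f"search results for '{decision['query']}'"
        return f"Grounded answer using {results}"
    return decision["answer"]

print(run_with_tools("What is 2 + 2?"))
print(run_with_tools("latest Year 6 Mathematics syllabus"))
```

In the real system, the "should I search?" heuristic is learned by the model rather than hard-coded, and the round trip to Google Search is handled entirely by the Google Generative AI SDK.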

👉To test, run the code:

cd ~/aidemy-bootstrap/planner/
export PROJECT_ID=$(gcloud config get project)
source env/bin/activate
python search.py

You should see the Gemini Search API response containing search results related to "Syllabus for Year 6 Mathematics." The exact output will depend on the search results, but it will be a response object with information about the search.

If you see search results, the Google Search tool is working correctly! Go ahead and stop the script by pressing Ctrl+C .

👉Then remove the testing code at the end of the file:

if __name__ == "__main__":
    response = search_latest_resource("What are the syllabus for Year 6 Mathematics?", "Expanding on fractions, ratios, algebraic thinking, and problem-solving strategies.", "Mathematics", 6)
    for each in response.candidates[0].content.parts:
        print(each.text)

👉Exit the virtual environment. In the terminal, run:

deactivate

Congratulations! You have now built three powerful tools for your planner agent: an API connector, a database connector, and a Google Search tool. These tools will enable the agent to access the information and capabilities it needs to create effective teaching plans.

7. Orchestrating with LangGraph

Now that we have built our individual tools, it's time to orchestrate them using LangGraph. This will allow us to create a more sophisticated "planner" agent that can intelligently decide which tools to use and when, based on the user's request.

LangGraph is a Python library designed to make it easier to build stateful, multi-actor applications using Large Language Models (LLMs). Think of it as a framework for orchestrating complex conversations and workflows involving LLMs, tools, and other agents.

Key concepts:

  • Graph Structure: LangGraph represents your application's logic as a directed graph. Each node in the graph represents a step in the process (e.g., a call to an LLM, a tool invocation, a conditional check). Edges define the flow of execution between nodes.
  • State: LangGraph manages the state of your application as it moves through the graph. This state can include variables like the user's input, the results of tool calls, intermediate outputs from LLMs, and any other information that needs to be preserved between steps.
  • Nodes: Each node represents a computation or interaction. They can be:
    • Tool Nodes: Use a tool (e.g., perform a web search, query a database).
    • Function Nodes: Execute a Python function.
  • Edges: Connect nodes, defining the flow of execution. They can be:
    • Direct Edges: A simple, unconditional flow from one node to another.
    • Conditional Edges: The flow depends on the outcome of a conditional node.
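The four concepts above fit in a dozen lines of plain Python. The sketch below is not LangGraph — node names and the fact-gathering task are invented — but it shows the same machinery in miniature: nodes are functions that mutate a shared state, a conditional edge decides whether to loop back or finish, and execution walks the graph from START to END.

```python
# Minimal state-machine rendering of the graph idea: nodes are functions
# of state, and edges (including a conditional one) pick the next node.
def gather(state):
    state["facts"].append(f"fact {len(state['facts']) + 1}")
    return "check"

def check(state):
    # Conditional edge: loop back until enough facts, then finish.
    return "gather" if len(state["facts"]) < 3 else "respond"

def respond(state):
    state["plan"] = "plan based on " + ", ".join(state["facts"])
    return "END"

nodes = {"gather": gather, "check": check, "respond": respond}
state = {"facts": []}
current = "gather"            # the START edge
while current != "END":
    current = nodes[current](state)

print(state["plan"])
```

LangGraph adds on top of this skeleton typed state (`MessagesState`), checkpointing, and prebuilt nodes like `ToolNode`, but the execution model is the same walk over a directed graph.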

LangGraph

We will use LangGraph to implement the orchestration. Let's edit the aidemy.py file under the aidemy-bootstrap/planner folder to define our LangGraph logic.

👉Append the following code to the end of aidemy.py:

tools = [get_curriculum, search_latest_resource, recommend_book]

def determine_tool(state: MessagesState):
    llm = ChatVertexAI(model_name="gemini-2.0-flash-001", location=get_next_region())
    sys_msg = SystemMessage(
        content=(
            """You are a helpful teaching assistant that helps gather all needed information.
               Your ultimate goal is to create a detailed 3-week teaching plan.
               You have access to tools that help you gather information.
               Based on the user request, decide which tool(s) are needed.
            """
        )
    )

    llm_with_tools = llm.bind_tools(tools)
    return {"messages": llm_with_tools.invoke([sys_msg] + state["messages"])}

This function is responsible for taking the current state of the conversation, providing the LLM with a system message, and then asking the LLM to generate a response. The LLM can either respond directly to the user or choose to use one of the available tools.

  • tools : This list represents the set of tools that the agent has available to it. It contains the three tool functions we defined in the previous steps: get_curriculum , search_latest_resource , and recommend_book .
  • llm.bind_tools(tools) : "Binds" the tools list to the llm object. Binding the tools tells the LLM that these tools are available and provides the LLM with information about how to use them (e.g., the names of the tools, the parameters they accept, and what they do).

Next, let's assemble the graph that orchestrates these pieces.

👉Append the following code to the end of aidemy.py:

def prep_class(prep_needs):
    builder = StateGraph(MessagesState)
    builder.add_node("determine_tool", determine_tool)
    builder.add_node("tools", ToolNode(tools))

    builder.add_edge(START, "determine_tool")
    builder.add_conditional_edges("determine_tool", tools_condition)
    builder.add_edge("tools", "determine_tool")

    memory = MemorySaver()
    graph = builder.compile(checkpointer=memory)

    config = {"configurable": {"thread_id": "1"}}
    messages = graph.invoke({"messages": prep_needs}, config)
    print(messages)
    for m in messages['messages']:
        m.pretty_print()
    teaching_plan_result = messages["messages"][-1].content

    return teaching_plan_result

if __name__ == "__main__":
    prep_class("I'm doing a course for  year 5 on subject Mathematics in Geometry, , get school curriculum, and come up with few books recommendation plus  search latest resources on the internet base on the curriculum outcome. And come up with a 3 week teaching plan")

Explanation:

  • StateGraph(MessagesState) : Creates a StateGraph object. A StateGraph is a core concept in LangGraph. It represents the workflow of your agent as a graph, where each node in the graph represents a step in the process. Think of it as defining the blueprint for how the agent will reason and act.
  • Conditional Edge: Originating from the "determine_tool" node, the tools_condition argument is a prebuilt LangGraph helper that inspects the output of determine_tool and decides which edge to follow: if the LLM requested a tool call, the graph routes to the "tools" node; otherwise the run ends. Conditional edges allow the graph to branch based on the LLM's decision about which tool to use (or whether to respond to the user directly). This is where the agent's "intelligence" comes into play – it can dynamically adapt its behavior based on the situation.
  • Loop: Adds an edge to the graph that connects the "tools" node back to the "determine_tool" node. This creates a loop in the graph, allowing the agent to repeatedly use tools until it has gathered enough information to complete the task and provide a satisfactory answer. This loop is crucial for complex tasks that require multiple steps of reasoning and information gathering.
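The routing decision inside tools_condition can be approximated in a few lines: if the LLM's last message asks for a tool, go to the "tools" node; otherwise end the run. A simplified stand-in using plain dicts in place of LangChain message objects (tools_condition_sketch is illustrative, not the real implementation):

```python
END = "__end__"

def tools_condition_sketch(state):
    # If the last message contains tool calls, route to the "tools" node;
    # otherwise the agent answered directly, so the graph can finish.
    last = state["messages"][-1]
    return "tools" if last.get("tool_calls") else END

wants_tool = {"messages": [{"content": "", "tool_calls": [{"name": "get_curriculum"}]}]}
direct_answer = {"messages": [{"content": "Here is your 3-week plan..."}]}

print(tools_condition_sketch(wants_tool))     # tools
print(tools_condition_sketch(direct_answer))  # __end__
```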

Now, let's test our planner agent to see how it orchestrates the different tools.

This code will run the prep_class function with a specific user input, simulating a request to create a teaching plan for 5th-grade Mathematics in Geometry, using the curriculum, book recommendations, and the latest internet resources.

If you've closed your terminal or the environment variables are no longer set, re-run the following commands

export PROJECT_ID=$(gcloud config get project)
export BOOK_PROVIDER_URL=$(gcloud run services describe book-provider --region=us-central1 --project=$PROJECT_ID --format="value(status.url)")
export INSTANCE_NAME="aidemy"
export REGION="us-central1"
export DB_USER="postgres"
export DB_PASS="1234qwer"
export DB_NAME="aidemy-db"

👉Run the code:

cd ~/aidemy-bootstrap/planner/
source env/bin/activate
pip install -r requirements.txt
python aidemy.py

Watch the log in the terminal. You should see evidence that the agent is calling all three tools (getting the school curriculum, getting book recommendations, and searching for the latest resources) before providing the final teaching plan. This demonstrates that the LangGraph orchestration is working correctly, and the agent is intelligently using all available tools to fulfill the user's request.

================================ Human Message =================================

I'm doing a course for  year 5 on subject Mathematics in Geometry, , get school curriculum, and come up with few books recommendation plus  search latest resources on the internet base on the curriculum outcome. And come up with a 3 week teaching plan
================================== Ai Message ==================================
Tool Calls:
 
get_curriculum (xxx)
 
Call ID: xxx
 
Args:
   
year: 5.0
   
subject: Mathematics
================================= Tool Message =================================
Name: get_curriculum

Introduction to fractions, decimals, and percentages, along with foundational geometry and problem-solving techniques.
================================== Ai Message ==================================
Tool Calls:
 
search_latest_resource (xxxx)
 
Call ID: xxxx
 
Args:
   
year: 5.0
   
search_text: Geometry
   
curriculum: {"content": "Introduction to fractions, decimals, and percentages, along with foundational geometry and problem-solving techniques."}
   
subject: Mathematics
================================= Tool Message =================================
Name: search_latest_resource

candidates=[Candidate(content=Content(parts=[Part(.....) automatic_function_calling_history=[] parsed=None
================================== Ai Message ==================================
Tool Calls:
 
recommend_book (93b48189-4d69-4c09-a3bd-4e60cdc5f1c6)
 
Call ID: 93b48189-4d69-4c09-a3bd-4e60cdc5f1c6
 
Args:
   
query: Mathematics Geometry Year 5
================================= Tool Message =================================
Name: recommend_book

[{.....}]

================================== Ai Message ==================================

Based on the curriculum outcome, here is a 3-week teaching plan for year 5 Mathematics Geometry:

**Week 1: Introduction to Shapes and Properties**
.........

Stop the script by pressing Ctrl+C .

👉(THIS STEP IS OPTIONAL) Replace the testing code with a different prompt, one that requires different tools to be called.

if __name__ == "__main__":
    prep_class("I'm doing a course for  year 5 on subject Mathematics in Geometry, search latest resources on the internet base on the subject. And come up with a 3 week teaching plan")

If you've closed your terminal or the environment variables are no longer set, re-run the following commands

export PROJECT_ID=$(gcloud config get project)
export BOOK_PROVIDER_URL=$(gcloud run services describe book-provider --region=us-central1 --project=$PROJECT_ID --format="value(status.url)")
export INSTANCE_NAME="aidemy"
export REGION="us-central1"
export DB_USER="postgres"
export DB_PASS="1234qwer"
export DB_NAME="aidemy-db"

👉(THIS STEP IS OPTIONAL, do this ONLY IF you ran the previous step) Run the code again:

cd ~/aidemy-bootstrap/planner/
source env/bin/activate
python aidemy.py

What did you notice this time? Which tools did the agent call? You should see that the agent skips the recommend_book tool this time. The prompt doesn't ask for book recommendations, and the LLM is smart enough not to call tools it doesn't need.

================================ Human Message =================================

I'm doing a course for  year 5 on subject Mathematics in Geometry, search latest resources on the internet base on the subject. And come up with a 3 week teaching plan
================================== Ai Message ==================================
Tool Calls:
 
get_curriculum (xxx)
 
Call ID: xxx
 
Args:
   
year: 5.0
   
subject: Mathematics
================================= Tool Message =================================
Name: get_curriculum

Introduction to fractions, decimals, and percentages, along with foundational geometry and problem-solving techniques.
================================== Ai Message ==================================
Tool Calls:
 
search_latest_resource (xxx)
 
Call ID: xxxx
 
Args:
   
year: 5.0
   
subject: Mathematics
   
curriculum: {"content": "Introduction to fractions, decimals, and percentages, along with foundational geometry and problem-solving techniques."}
   
search_text: Geometry
================================= Tool Message =================================
Name: search_latest_resource

candidates=[Candidate(content=Content(parts=[Part(.......token_count=40, total_token_count=772) automatic_function_calling_history=[] parsed=None
================================== Ai Message ==================================

Based on the information provided, a 3-week teaching plan for Year 5 Mathematics focusing on Geometry could look like this:

**Week 1:  Introducing 2D Shapes**
........
* Use visuals, manipulatives, and real-world examples to make the learning experience engaging and relevant.

Stop the script by pressing Ctrl+C .

👉 Remove the testing code to keep your aidemy.py file clean (DO NOT SKIP THIS STEP!):

if __name__ == "__main__":
    prep_class("I'm doing a course for  year 5 on subject Mathematics in Geometry, search latest resources on the internet base on the subject. And come up with a 3 week teaching plan")

With our agent logic now defined, let's launch the Flask web application. This will provide a familiar form-based interface for teachers to interact with the agent. While chatbot interactions are common with LLMs, we're opting for a traditional form submit UI, as it may be more intuitive for many educators.

If you've closed your terminal or the environment variables are no longer set, re-run the following commands

export PROJECT_ID=$(gcloud config get project)
export BOOK_PROVIDER_URL=$(gcloud run services describe book-provider --region=us-central1 --project=$PROJECT_ID --format="value(status.url)")
export INSTANCE_NAME="aidemy"
export REGION="us-central1"
export DB_USER="postgres"
export DB_PASS="1234qwer"
export DB_NAME="aidemy-db"

👉Now, start the Web UI.

cd ~/aidemy-bootstrap/planner/
source env/bin/activate
python app.py

Look for startup messages in the Cloud Shell terminal output. Flask usually prints messages indicating that it's running and on what port.

Running on http://127.0.0.1:8080
Running on http://127.0.0.1:8080
The application needs to keep running to serve requests.

👉From the "Web preview" menu, choose Preview on port 8080. Cloud Shell will open a new browser tab or window with the web preview of your application.

Web page

In the application interface, select 5 for the Year, choose Mathematics as the subject, and type Geometry in the Add-on Request field.

Rather than staring blankly while waiting for the response, switch over to the Cloud Editor's terminal. There you can observe the progress and any output or error messages generated by your application. 😁

👉Stop the script by pressing Ctrl+C in the terminal.

👉Exit the virtual environment:

deactivate

8. Deploying planner agent to the cloud

Build and push image to registry

Overview

👉Time to deploy this to the cloud. In the terminal, create an artifacts repository to store the docker image we are going to build.

gcloud artifacts repositories create agent-repository \
    --repository-format=docker \
    --location=us-central1 \
    --description="My agent repository"

You should see Created repository [agent-repository].

👉Run the following command to build the Docker image.

cd ~/aidemy-bootstrap/planner/
export PROJECT_ID=$(gcloud config get project)
docker build -t gcr.io/${PROJECT_ID}/aidemy-planner .

👉We need to retag the image so that it's hosted in Artifact Registry instead of GCR and push the tagged image to Artifact Registry:

export PROJECT_ID=$(gcloud config get project)
docker tag gcr.io/${PROJECT_ID}/aidemy-planner us-central1-docker.pkg.dev/${PROJECT_ID}/agent-repository/aidemy-planner
docker push us-central1-docker.pkg.dev/${PROJECT_ID}/agent-repository/aidemy-planner

Once the push is complete, you can verify that the image is successfully stored in Artifact Registry. Navigate to the Artifact Registry in the Google Cloud Console. You should find the aidemy-planner image within the agent-repository repository. Aidemy planner image

Securing Database Credentials with Secret Manager

To securely manage and access database credentials, we'll use Google Cloud Secret Manager. This prevents hardcoding sensitive information in our application code and enhances security.

👉We'll create individual secrets for the database username, password, and database name. This approach allows us to manage each credential independently. In the terminal run:

gcloud secrets create db-user
printf "postgres" | gcloud secrets versions add db-user --data-file=-

gcloud secrets create db-pass
printf "1234qwer" | gcloud secrets versions add db-pass --data-file=-

gcloud secrets create db-name
printf "aidemy-db" | gcloud secrets versions add db-name --data-file=-

Using Secret Manager is an important step in securing your application and preventing accidental exposure of sensitive credentials. It follows security best practices for cloud deployments.
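At runtime the secrets surface as ordinary environment variables, so application code reads them the same way it reads any other setting. A minimal sketch of that lookup (load_db_config is a hypothetical helper; in the real app you would pass os.environ):

```python
import os

def load_db_config(env):
    # Secrets exposed via Secret Manager look like plain env vars to the app;
    # fail fast if any expected credential is missing.
    missing = [k for k in ("DB_USER", "DB_PASS", "DB_NAME") if k not in env]
    if missing:
        raise RuntimeError(f"missing secrets: {missing}")
    return {"user": env["DB_USER"], "password": env["DB_PASS"], "dbname": env["DB_NAME"]}

# In Cloud Run you would call: cfg = load_db_config(os.environ)
cfg = load_db_config({"DB_USER": "postgres", "DB_PASS": "1234qwer", "DB_NAME": "aidemy-db"})
print(cfg["dbname"])  # aidemy-db
```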

Deploy to Cloud Run

Cloud Run is a fully managed serverless platform that allows you to deploy containerized applications quickly and easily. It abstracts away the infrastructure management, letting you focus on writing and deploying your code. We'll be deploying our planner as a Cloud Run service.

👉In the Google Cloud Console, navigate to " Cloud Run ". Click on DEPLOY CONTAINER and select SERVICE . Configure your Cloud Run service:

Cloud run

  1. Container image : Click "Select" in the URL field. Find the image URL you pushed to Artifact Registry (e.g., us-central1-docker.pkg.dev/YOUR_PROJECT_ID/agent-repository/aidemy-planner).
  2. Service name : aidemy-planner
  3. Region : Select the us-central1 region.
  4. Authentication : For the purpose of this workshop, you can allow "Allow unauthenticated invocations". For production, you'll likely want to restrict access.
  5. Container(s) tab (Expand the Containers, Network):
    • Setting tab:
      • Resources
        • memory : 2GiB
    • Variables & Secrets tab:
      • Environment variables:
        • Add name: GOOGLE_CLOUD_PROJECT and value: <YOUR_PROJECT_ID>
        • Add name: BOOK_PROVIDER_URL , and set the value to your book-provider function URL, which you can determine using the following command in the terminal:
          gcloud run services describe book-provider \
              --region=us-central1 \
              --project=$PROJECT_ID \
              --format="value(status.url)"
      • Secrets exposed as environment variables:
        • Add name: DB_USER , secret: select db-user and version: latest
        • Add name: DB_PASS , secret: select db-pass and version: latest
        • Add name: DB_NAME , secret: select db-name and version: latest

Set secret

Leave the other settings as default.

👉Click CREATE .

Cloud Run will deploy your service.

Once deployed, click the service to open its detail page; you'll find the deployed URL at the top.

URL

In the application interface, select 7 for the Year, choose Mathematics as the subject, and enter Algebra in the Add-on Request field. This will provide the agent with the necessary context to generate a tailored lesson plan.

Congratulations! You've successfully created a teaching plan using our powerful AI agent. This demonstrates the potential of agents to significantly reduce workload and streamline tasks, ultimately improving efficiency and making life easier for educators.

9. Multi-Agent Systems

Now that we've successfully implemented the teaching plan creation tool, let's shift our focus to building the student portal. This portal will provide students with access to quizzes, audio recaps, and assignments related to their coursework. Given the scope of this functionality, we'll leverage the power of multi-agent systems to create a modular and scalable solution.

As we discussed earlier, instead of relying on a single agent to handle everything, a multi-agent system allows us to break down the workload into smaller, specialized tasks, each handled by a dedicated agent. This approach offers several key advantages:

Modularity and Maintainability : Instead of creating a single agent that does everything, build smaller, specialized agents with well-defined responsibilities. This modularity makes the system easier to understand, maintain, and debug. When a problem arises, you can isolate it to a specific agent, rather than having to sift through a massive codebase.

Scalability : Scaling a single, complex agent can be a bottleneck. With a multi-agent system, you can scale individual agents based on their specific needs. For example, if one agent is handling a high volume of requests, you can easily spin up more instances of that agent without affecting the rest of the system.

Team Specialization : Think of it like this: you wouldn't ask one engineer to build an entire application from scratch. Instead, you assemble a team of specialists, each with expertise in a particular area. Similarly, a multi-agent system allows you to leverage the strengths of different LLMs and tools, assigning them to agents that are best suited for specific tasks.

Parallel Development : Different teams can work on different agents concurrently, speeding up the development process. Since agents are independent, changes to one agent are less likely to impact other agents.

Event-Driven Architecture

To enable effective communication and coordination between these agents, we'll employ an event-driven architecture. This means that agents will react to "events" happening within the system.

Agents subscribe to specific event types (e.g., "teaching plan generated," "assignment created"). When an event occurs, the relevant agents are notified and can react accordingly. This decoupling promotes flexibility, scalability, and real-time responsiveness.
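The pattern itself fits in a few lines of plain Python — an in-memory stand-in for Pub/Sub that shows how publishing decouples the planner from its consumers (topic and agent names are illustrative):

```python
# In-memory event bus: agents subscribe to topics and react when events arrive.
subscribers = {}

def subscribe(topic, handler):
    subscribers.setdefault(topic, []).append(handler)

def publish(topic, event):
    # The publisher doesn't know who is listening -- that's the decoupling.
    for handler in subscribers.get(topic, []):
        handler(event)

seen = []
subscribe("plan", lambda e: seen.append(("quiz-agent", e["teaching_plan"])))
subscribe("plan", lambda e: seen.append(("audio-agent", e["teaching_plan"])))

publish("plan", {"teaching_plan": "Week 1: 2D shapes"})
print(seen)  # both agents reacted to the same event
```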

Overview

Now, to kick things off, we need a way to broadcast these events. To do this, we will set up a Pub/Sub topic. Let's start by creating a topic called plan .

👉Go to Google Cloud Console pub/sub and click on the "Create Topic" button.

👉Configure the Topic with ID/name plan and uncheck Add a default subscription , leave rest as default and click Create .

The Pub/Sub page will refresh, and you should now see your newly created topic listed in the table. Create topic

Now, let's integrate the Pub/Sub event publishing functionality into our planner agent. We'll add a new tool that sends a "plan" event to the Pub/Sub topic we just created. This event will signal to other agents in the system (like those in the student portal) that a new teaching plan is available.

👉Go back to the Cloud Code Editor and open the app.py file located in the planner folder. We will be adding a function that publishes the event. Replace:

##ADD SEND PLAN EVENT FUNCTION HERE

with:

def send_plan_event(teaching_plan: str):
    """
    Send the teaching event to the topic called plan

    Args:
        teaching_plan: teaching plan
    """
    publisher = pubsub_v1.PublisherClient()
    print(f"-------------> Sending event to topic plan: {teaching_plan}")
    topic_path = publisher.topic_path(PROJECT_ID, "plan")

    message_data = {"teaching_plan": teaching_plan}
    data = json.dumps(message_data).encode("utf-8")

    future = publisher.publish(topic_path, data)

    return f"Published message ID: {future.result()}"

  • send_plan_event : This function takes the generated teaching plan as input, creates a Pub/Sub publisher client, constructs the topic path, converts the teaching plan into a JSON string, and publishes the message to the topic.
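On the wire the payload is just those UTF-8 JSON bytes; when Pub/Sub later pushes the message to an HTTP endpoint (as Eventarc will do for the portal), the bytes arrive base64-encoded inside a message envelope. A stdlib-only roundtrip of that encoding:

```python
import base64
import json

teaching_plan = "Week 1: Introduction to 2D shapes"

# Publisher side: what send_plan_event hands to Pub/Sub.
data = json.dumps({"teaching_plan": teaching_plan}).encode("utf-8")

# Push delivery side: Pub/Sub wraps the bytes, base64-encoded, in an envelope.
envelope = {"message": {"data": base64.b64encode(data).decode("utf-8")}}

# Subscriber side: decode back to the original plan.
decoded = json.loads(base64.b64decode(envelope["message"]["data"]).decode("utf-8"))
print(decoded["teaching_plan"])  # Week 1: Introduction to 2D shapes
```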

In the same app.py file

👉Update the prompt to instruct the agent to send the teaching plan event to the Pub/Sub topic after generating the teaching plan. Replace

### ADD send_plan_event CALL

with the following:

send_plan_event(teaching_plan)

By adding the send_plan_event tool and modifying the prompt, we've enabled our planner agent to publish events to Pub/Sub, allowing other components of our system to react to the creation of new teaching plans. We will now have a functional multi-agent system in the following sections.

10. Empowering Students with On-Demand Quizzes

Imagine a learning environment where students have access to an endless supply of quizzes tailored to their specific learning plans. These quizzes provide immediate feedback, including answers and explanations, fostering a deeper understanding of the material. This is the potential we aim to unlock with our AI-powered quiz portal.

To bring this vision to life, we'll build a quiz generation component that can create multiple-choice questions based on the content of the teaching plan.

Overview

👉In the Cloud Code Editor's Explorer pane, navigate to the portal folder. Open the quiz.py file, then copy and paste the following code at the end of the file.

def generate_quiz_question(file_name: str, difficulty: str, region: str):
    """Generates a single multiple-choice quiz question using the LLM.

    ```json
    {
      "question": "The question itself",
      "options": ["Option A", "Option B", "Option C", "Option D"],
      "answer": "The correct answer letter (A, B, C, or D)"
    }
    ```
    """
    print(f"region: {region}")
    # Connect to resources needed from Google Cloud
    llm = VertexAI(model_name="gemini-1.5-pro", location=region)

    plan = None
    # Load the file using file_name and read its content into a string called plan
    with open(file_name, 'r') as f:
        plan = f.read()

    parser = JsonOutputParser(pydantic_object=QuizQuestion)

    instruction = f"You'll provide one question with difficulty level of {difficulty}, 4 options as multiple choices and provide the answers, the quiz needs to be related to the teaching plan {plan}"

    prompt = PromptTemplate(
        template="Generates a single multiple-choice quiz question\n {format_instructions}\n  {instruction}\n",
        input_variables=["instruction"],
        partial_variables={"format_instructions": parser.get_format_instructions()},
    )

    chain = prompt | llm | parser
    response = chain.invoke({"instruction": instruction})

    print(f"{response}")
    return response


Inside the agent, this creates a JSON output parser that's specifically designed to understand and structure the LLM's output. It uses the QuizQuestion model we defined earlier to ensure the parsed output conforms to the correct format (question, options, and answer).
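That validation step can be sketched without any LangChain at all: strip the markdown fence the LLM often wraps around JSON, parse it, and check that the expected fields are present (parse_quiz_output and REQUIRED_KEYS are illustrative helpers, not part of quiz.py):

```python
import json

REQUIRED_KEYS = {"question", "options", "answer"}

def parse_quiz_output(raw: str) -> dict:
    # LLMs frequently wrap JSON in ```json fences; remove them before parsing.
    text = raw.strip()
    if text.startswith("```"):
        text = text.strip("`").removeprefix("json")
    data = json.loads(text)
    missing = REQUIRED_KEYS - data.keys()
    if missing:
        raise ValueError(f"missing fields: {missing}")
    return data

raw = '```json\n{"question": "2 + 2 = ?", "options": ["A) 3", "B) 4", "C) 5", "D) 6"], "answer": "B"}\n```'
quiz = parse_quiz_output(raw)
print(quiz["answer"])  # B
```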

👉Execute the following commands in the terminal to set up a virtual environment, install dependencies, and start the agent:

cd ~/aidemy-bootstrap/portal/
python -m venv env
source env/bin/activate
pip install -r requirements.txt
python app.py

Use the Cloud Shell's web preview feature to access the running application. Click on the "Quizzes" link, either in the top navigation bar or from the card on the index page. You should see three randomly generated quizzes displayed for the student. These quizzes are based on the teaching plan and demonstrate the power of our AI-powered quiz generation system.

Quizzes

👉To stop the locally running process, press Ctrl+C in the terminal.

Gemini 2 Thinking for Explanations

Okay, so we've got quizzes, which is a great start! But what if students get something wrong? That's where the real learning happens, right? If we can explain why their answer was off and how to get to the correct one, they're way more likely to remember it. Plus, it helps clear up any confusion and boost their confidence.

That's why we're going to bring in the big guns: Gemini 2's "thinking" model! Think of it like giving the AI a little extra time to think things through before explaining. It lets it give more detailed and better feedback.

We want to see if it can help students by assisting, answering and explaining in detail. To test it out, we'll start with a notoriously tricky subject, Calculus.

Overview

👉First, head over to the Cloud Code Editor. In answer.py inside the portal folder, replace

def answer_thinking(question, options, user_response, answer, region):
    return ""

with the following code snippet:

def answer_thinking(question, options, user_response, answer, region):
    try:
        llm = VertexAI(model_name="gemini-2.0-flash-001", location=region)

        input_msg = HumanMessage(content=[f"Here the question{question}, here are the available options {options}, this student's answer {user_response}, whereas the correct answer is {answer}"])
        prompt_template = ChatPromptTemplate.from_messages(
            [
                SystemMessage(
                    content=(
                        "You are a helpful teacher trying to teach the student on question, you were given the question and a set of multiple choices "
                        "what's the correct answer. use friendly tone"
                    )
                ),
                input_msg,
            ]
        )

        prompt = prompt_template.format()

        response = llm.invoke(prompt)
        print(f"response: {response}")

        return response
    except Exception as e:
        print(f"Error sending message to chatbot: {e}")  # Log this error too!
        return f"Unable to process your request at this time. Due to the following reason: {str(e)}"


if __name__ == "__main__":
    question = "Evaluate the limit: lim (x→0) [(sin(5x) - 5x) / x^3]"
    options = ["A) -125/6", "B) -5/3 ", "C) -25/3", "D) -5/6"]
    user_response = "B"
    answer = "A"
    region = "us-central1"
    result = answer_thinking(question, options, user_response, answer, region)

This is a very simple LangChain app that initializes the Gemini 2 Flash model and instructs it to act as a helpful teacher providing explanations.

👉Execute the following command in the terminal:

cd ~/aidemy-bootstrap/portal/
source env/bin/activate
python answer.py

You should see output similar to the example below. Note that the current model may not provide as thorough an explanation:

Okay, I see the question and the choices. The question is to evaluate the limit:

lim (x→0) [(sin(5x) - 5x) / x^3]

You chose option B, which is -5/3, but the correct answer is A, which is -125/6.

It looks like you might have missed a step or made a small error in your calculations. This type of limit often involves using L'Hôpital's Rule or Taylor series expansion. Since we have the form 0/0, L'Hôpital's Rule is a good way to go! You need to apply it multiple times. Alternatively, you can use the Taylor series expansion of sin(x) which is:
sin(x) = x - x^3/3! + x^5/5! - ...
So, sin(5x) = 5x - (5x)^3/3! + (5x)^5/5! - ...
Then,  (sin(5x) - 5x) = - (5x)^3/3! + (5x)^5/5! - ...
Finally, (sin(5x) - 5x) / x^3 = - 5^3/3! + (5^5 * x^2)/5! - ...
Taking the limit as x approaches 0, we get -125/6.

Keep practicing, you'll get there!

In the answer.py file, change the model_name from gemini-2.0-flash-001 to gemini-2.0-flash-thinking-exp-01-21 in the answer_thinking function.

This switches to an LLM that reasons more deeply, which will help it generate better explanations. Then run it again.

👉Run to test the new thinking model:

cd ~/aidemy-bootstrap/portal/
source env/bin/activate
python answer.py

Here is an example of the response from the thinking model that is much more thorough and detailed, providing a step-by-step explanation of how to solve the calculus problem. This highlights the power of "thinking" models in generating high-quality explanations. You should see output similar to this:

Hey there! Let's take a look at this limit problem together. You were asked to evaluate:

lim (x→0) [(sin(5x) - 5x) / x^3]

and you picked option B, -5/3, but the correct answer is actually A, -125/6. Let's figure out why!

It's a tricky one because if we directly substitute x=0, we get (sin(0) - 0) / 0^3 = (0 - 0) / 0 = 0/0, which is an indeterminate form. This tells us we need to use a more advanced technique like L'Hopital's Rule or Taylor series expansion.

Let's use the Taylor series expansion for sin(y) around y=0. Do you remember it?  It looks like this:

sin(y) = y - y^3/3! + y^5/5! - ...
where 3! (3 factorial) is 3 × 2 × 1 = 6, 5! is 5 × 4 × 3 × 2 × 1 = 120, and so on.

In our problem, we have sin(5x), so we can substitute y = 5x into the Taylor series:

sin(5x) = (5x) - (5x)^3/3! + (5x)^5/5! - ...
sin(5x) = 5x - (125x^3)/6 + (3125x^5)/120 - ...

Now let's plug this back into our limit expression:

[(sin(5x) - 5x) / x^3] =  [ (5x - (125x^3)/6 + (3125x^5)/120 - ...) - 5x ] / x^3
Notice that the '5x' and '-5x' cancel out!  So we are left with:
= [ - (125x^3)/6 + (3125x^5)/120 - ... ] / x^3
Now, we can divide every term in the numerator by x^3:
= -125/6 + (3125x^2)/120 - ...

Finally, let's take the limit as x approaches 0.  As x gets closer and closer to zero, terms with x^2 and higher powers will become very, very small and approach zero.  So, we are left with:
lim (x→0) [ -125/6 + (3125x^2)/120 - ... ] = -125/6

Therefore, the correct answer is indeed **A) -125/6**.

It seems like your answer B, -5/3, might have come from perhaps missing a factor somewhere during calculation or maybe using an incorrect simplification. Double-check your steps when you were trying to solve it!

Don't worry, these limit problems can be a bit tricky sometimes! Keep practicing and you'll get the hang of it.  Let me know if you want to go through another similar example or if you have any more questions! 😊


Now that we have confirmed it works, let's use the portal.

👉 REMOVE the following test code from answer.py :

if __name__ == "__main__":
    question = "Evaluate the limit: lim (x→0) [(sin(5x) - 5x) / x^3]"
    options = ["A) -125/6", "B) -5/3 ", "C) -25/3", "D) -5/6"]
    user_response = "B"
    answer = "A"
    region = "us-central1"
    result = answer_thinking(question, options, user_response, answer, region)

👉Execute the following commands in the terminal to set up a virtual environment, install dependencies, and start the agent:

cd ~/aidemy-bootstrap/portal/
source env/bin/activate
python app.py

👉Use the Cloud Shell's web preview feature to access the running application. Click on the "Quizzes" link, answer all the quizzes (make sure you get at least one answer wrong), and click Submit.

thinking answers

Rather than staring blankly while waiting for the response, switch over to the Cloud Editor's terminal. There you can observe the progress and any output or error messages generated by your application. 😁

To stop the locally running process, press Ctrl+C in the terminal.

11. OPTIONAL: Orchestrating the Agents with Eventarc

So far, the student portal has been generating quizzes based on a default set of teaching plans. That's helpful, but it means our planner agent and portal's quiz agent aren't really talking to each other. Remember how we added that feature where the planner agent publishes its newly generated teaching plans to a Pub/Sub topic? Now it's time to connect that to our portal agent!

Overview

We want the portal to automatically update its quiz content whenever a new teaching plan is generated. To do that, we'll create an endpoint in the portal that can receive these new plans.

👉In the Cloud Code Editor's Explorer pane, navigate to the portal folder. Open the app.py file for editing. Add the following code between the ## Add your code here markers:

## Add your code here

@app.route('/new_teaching_plan', methods=['POST'])
def new_teaching_plan():
    try:
        # Get data from Pub/Sub message delivered via Eventarc
        envelope = request.get_json()
        if not envelope:
            return jsonify({'error': 'No Pub/Sub message received'}), 400

        if not isinstance(envelope, dict) or 'message' not in envelope:
            return jsonify({'error': 'Invalid Pub/Sub message format'}), 400

        pubsub_message = envelope['message']
        print(f"data: {pubsub_message['data']}")

        data = pubsub_message['data']
        data_str = base64.b64decode(data).decode('utf-8')
        data = json.loads(data_str)

        teaching_plan = data['teaching_plan']

        print(f"File content: {teaching_plan}")

        with open("teaching_plan.txt", "w") as f:
            f.write(teaching_plan)

        print(f"Teaching plan saved to local file: teaching_plan.txt")

        return jsonify({'message': 'File processed successfully'})

    except Exception as e:
        print(f"Error processing file: {e}")
        return jsonify({'error': 'Error processing file'}), 500
## Add your code here
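Eventarc delivers the Pub/Sub message wrapped in a JSON envelope, with the original payload base64-encoded under message.data. To see exactly what the endpoint above is unpacking, here is a minimal stand-alone sketch (pure standard-library Python, no GCP dependencies; the decode_pubsub_envelope helper is our own, written to mirror the endpoint's parsing):

```python
import base64
import json

def decode_pubsub_envelope(envelope: dict) -> str:
    """Mirror the endpoint's parsing: validate the envelope shape and
    return the teaching plan string carried in message.data."""
    if not isinstance(envelope, dict) or "message" not in envelope:
        raise ValueError("Invalid Pub/Sub message format")
    data_str = base64.b64decode(envelope["message"]["data"]).decode("utf-8")
    return json.loads(data_str)["teaching_plan"]

# Build a fake envelope the way Pub/Sub would, then decode it.
plan = "Week 1: 2D Shapes and Angles"
payload = base64.b64encode(json.dumps({"teaching_plan": plan}).encode("utf-8")).decode("utf-8")
envelope = {"message": {"data": payload}}
print(decode_pubsub_envelope(envelope))  # → Week 1: 2D Shapes and Angles
```

Running the encode/decode round trip like this is a quick sanity check before wiring up the real trigger.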

Rebuilding and Deploying to Cloud Run

You'll need to update and redeploy both our planner and portal agents to Cloud Run. This ensures they have the latest code and are configured to communicate via events.

Deployment Overview

👉First, we'll rebuild and push the planner agent image. Back in the terminal, run:

cd ~/aidemy-bootstrap/planner/
export PROJECT_ID=$(gcloud config get project)
docker build -t gcr.io/${PROJECT_ID}/aidemy-planner .
docker tag gcr.io/${PROJECT_ID}/aidemy-planner us-central1-docker.pkg.dev/${PROJECT_ID}/agent-repository/aidemy-planner
docker push us-central1-docker.pkg.dev/${PROJECT_ID}/agent-repository/aidemy-planner

👉Next, we'll do the same for the portal agent: build and push its image:

cd ~/aidemy-bootstrap/portal/
export PROJECT_ID=$(gcloud config get project)
docker build -t gcr.io/${PROJECT_ID}/aidemy-portal .
docker tag gcr.io/${PROJECT_ID}/aidemy-portal us-central1-docker.pkg.dev/${PROJECT_ID}/agent-repository/aidemy-portal
docker push us-central1-docker.pkg.dev/${PROJECT_ID}/agent-repository/aidemy-portal

In Artifact Registry, you should see both the aidemy-planner and aidemy-portal container images listed.

Container Repo

👉Back in the terminal, run this to update the Cloud Run image for the planner agent:

export PROJECT_ID=$(gcloud config get project)
gcloud run services update aidemy-planner \
    --region=us-central1 \
    --image=us-central1-docker.pkg.dev/${PROJECT_ID}/agent-repository/aidemy-planner:latest

You should see output similar to this:

OK Deploying... Done.
  OK Creating Revision...
  OK Routing traffic...
Done.
Service [aidemy-planner] revision [aidemy-planner-xxxxx] has been deployed and is serving 100 percent of traffic.
Service URL: https://aidemy-planner-xxx.us-central1.run.app

Make note of the Service URL; this is the link to your deployed planner agent. If you need to later determine the planner agent Service URL, use this command:

gcloud run services describe aidemy-planner \
    --region=us-central1 \
    --format 'value(status.url)'

👉Run this to create the Cloud Run service for the portal agent:

export PROJECT_ID=$(gcloud config get project)
gcloud run deploy aidemy-portal \
    --image=us-central1-docker.pkg.dev/${PROJECT_ID}/agent-repository/aidemy-portal:latest \
    --region=us-central1 \
    --platform=managed \
    --allow-unauthenticated \
    --memory=2Gi \
    --cpu=2 \
    --set-env-vars=GOOGLE_CLOUD_PROJECT=${PROJECT_ID}

You should see output similar to this:

Deploying container to Cloud Run service [aidemy-portal] in project [xxxx] region [us-central1]
OK Deploying new service... Done.
  OK Creating Revision...
  OK Routing traffic...
  OK Setting IAM Policy...
Done.
Service [aidemy-portal] revision [aidemy-portal-xxxx] has been deployed and is serving 100 percent of traffic.
Service URL: https://aidemy-portal-xxxx.us-central1.run.app

Make note of the Service URL; this is the link to your deployed student portal. If you need to later determine the student portal Service URL, use this command:

gcloud run services describe aidemy-portal \
    --region=us-central1 \
    --format 'value(status.url)'

Creating the Eventarc Trigger

But here's the big question: how does this endpoint get notified when there's a fresh plan waiting in the Pub/Sub topic? That's where Eventarc swoops in to save the day!

Eventarc acts as a bridge, listening for specific events (like a new message arriving in our Pub/Sub topic) and automatically triggering actions in response. In our case, it will detect when a new teaching plan is published and then send a signal to our portal's endpoint, letting it know that it's time to update.

With Eventarc handling the event-driven communication, we can seamlessly connect our planner agent and portal agent, creating a truly dynamic and responsive learning system. It's like having a smart messenger that automatically delivers the latest lesson plans to the right place!

👉In the console, head to Eventarc.

👉Click the "+ CREATE TRIGGER" button.

Configure the Trigger (Basics):

  • Trigger name: plan-topic-trigger
  • Trigger type: Google sources
  • Event provider: Cloud Pub/Sub
  • Event type: google.cloud.pubsub.topic.v1.messagePublished
  • Cloud Pub/Sub Topic: select projects/PROJECT_ID/topics/plan
  • Region: us-central1
  • Service account:
    • GRANT the service account the roles/iam.serviceAccountTokenCreator role
    • Use the default value: Default compute service account
  • Event destination: Cloud Run
  • Cloud Run service: aidemy-portal
  • You can ignore the error message: Permission denied on 'locations/me-central2' (or it may not exist).
  • Service URL path: /new_teaching_plan

Click "CREATE".

The Eventarc Triggers page will refresh, and you should now see your newly created trigger listed in the table.

👉Now, access the planner agent using its Service URL to request a new teaching plan.

Run this in the terminal to determine the planner agent Service URL:

gcloud run services list --platform=managed --region=us-central1 --format='value(URL)' | grep planner

This time, try Year: 5, Subject: Science, and Add-on Request: atoms.

Then wait a minute or two. (This delay is due to billing limitations in this lab; under normal conditions there shouldn't be one.)

Finally, access the student portal using its Service URL.

Run this in the terminal to determine the student portal agent Service URL:

gcloud run services list --platform=managed --region=us-central1 --format='value(URL)' | grep portal

You should see that the quizzes have been updated and now align with the new teaching plan you just generated! This demonstrates the successful integration of Eventarc in the Aidemy system!

Aidemy-celebrate

Congratulations! You've successfully built a multi-agent system on Google Cloud, leveraging event-driven architecture for enhanced scalability and flexibility! You've laid a solid foundation, but there's even more to explore. To delve deeper into the real benefits of this architecture, discover the power of Gemini 2's multimodal Live API, and learn how to implement single-path orchestration with LangGraph, feel free to continue on to the next two chapters.

12. OPTIONAL: Audio Recaps with Gemini

Gemini can understand and process information from various sources, like text, images, and even audio, opening up a whole new range of possibilities for learning and content creation. Gemini's ability to "see," "hear," and "read" truly unlocks creative and engaging user experiences.

Beyond just creating visuals or text, another important step in learning is effective summarization and recap. Think about it: how often do you remember a catchy song lyric more easily than something you read in a textbook? Sound can be incredibly memorable! That's why we're going to leverage Gemini's multimodal capabilities to generate audio recaps of our teaching plans. This will provide students with a convenient and engaging way to review material, potentially boosting retention and comprehension through the power of auditory learning.

Live API Overview

We need a place to store the generated audio files. Cloud Storage provides a scalable and reliable solution.

👉Head to Cloud Storage in the console. Click on "Buckets" in the left-hand menu. Click on the "+ CREATE" button at the top.

👉Configure your new bucket:

  • Bucket name: aidemy-recap-UNIQUE_NAME
    • IMPORTANT: Ensure you define a unique bucket name that begins with aidemy-recap- . This unique prefix is crucial for avoiding naming conflicts when creating your Cloud Storage bucket.
  • Region: us-central1
  • Storage class: "Standard". Standard is suitable for frequently accessed data.
  • Access control: Leave the default "Uniform" access control selected. This provides consistent, bucket-level access control.
  • Advanced options: For this workshop, the default settings are usually sufficient.

Click the CREATE button to create your bucket.

  • You may see a pop up about public access prevention. Leave the "Enforce public access prevention on this bucket" box checked and click Confirm .

You will now see your newly created bucket in the Buckets list. Remember your bucket name, you'll need it later.

👉In the Cloud Code Editor's terminal, run the following commands to grant the service account access to the bucket:

export COURSE_BUCKET_NAME=$(gcloud storage buckets list --format="value(name)" | grep aidemy-recap)
export SERVICE_ACCOUNT_NAME=$(gcloud compute project-info describe --format="value(defaultServiceAccount)")
gcloud storage buckets add-iam-policy-binding gs://$COURSE_BUCKET_NAME \
    --member "serviceAccount:$SERVICE_ACCOUNT_NAME" \
    --role "roles/storage.objectViewer"

gcloud storage buckets add-iam-policy-binding gs://$COURSE_BUCKET_NAME \
    --member "serviceAccount:$SERVICE_ACCOUNT_NAME" \
    --role "roles/storage.objectCreator"

👉In the Cloud Code Editor, open audio.py inside the courses folder. Paste the following code to the end of the file:

config = LiveConnectConfig(
    response_modalities=["AUDIO"],
    speech_config=SpeechConfig(
        voice_config=VoiceConfig(
            prebuilt_voice_config=PrebuiltVoiceConfig(
                voice_name="Charon",
            )
        )
    ),
)

async def process_weeks(teaching_plan: str):
    region = "us-east5"  # To work around onRamp quota limits
    client = genai.Client(vertexai=True, project=PROJECT_ID, location=region)

    clientAudio = genai.Client(vertexai=True, project=PROJECT_ID, location="us-central1")
    async with clientAudio.aio.live.connect(
        model=MODEL_ID,
        config=config,
    ) as session:
        for week in range(1, 4):
            response = client.models.generate_content(
                model="gemini-2.0-flash-001",
                contents=f"Given the following teaching plan: {teaching_plan}, extract the content plan for week {week}. Return just the plan, nothing else."
            )

            prompt = f"""
                Assume you are the instructor.
                Prepare a concise and engaging recap of the key concepts and topics covered.
                This recap should be suitable for generating a short audio summary for students.
                Focus on the most important learnings and takeaways, and frame it as a direct address to the students.
                Avoid overly formal language and aim for a conversational tone, tell a few jokes.

                Teaching plan: {response.text} """
            print(f"prompt --->{prompt}")

            await session.send(input=prompt, end_of_turn=True)
            with open(f"temp_audio_week_{week}.raw", "wb") as temp_file:
                async for message in session.receive():
                    if message.server_content.model_turn:
                        for part in message.server_content.model_turn.parts:
                            if part.inline_data:
                                temp_file.write(part.inline_data.data)

            # Convert the raw 16-bit PCM audio to a WAV file
            data, samplerate = sf.read(f"temp_audio_week_{week}.raw", channels=1, samplerate=24000, subtype='PCM_16', format='RAW')
            sf.write(f"course-week-{week}.wav", data, samplerate)

            # Upload the WAV file to the Cloud Storage bucket
            storage_client = storage.Client()
            bucket = storage_client.bucket(BUCKET_NAME)
            blob = bucket.blob(f"course-week-{week}.wav")
            blob.upload_from_filename(f"course-week-{week}.wav")
            print(f"Audio saved to GCS: gs://{BUCKET_NAME}/course-week-{week}.wav")

    await session.close()


def breakup_sessions(teaching_plan: str):
    asyncio.run(process_weeks(teaching_plan))
  • Streaming Connection: First, a persistent connection is established with the Live API endpoint. Unlike a standard API call where you send a request and get a response, this connection remains open for a continuous exchange of data.
  • Multimodal Configuration: The configuration specifies what type of output you want (in this case, audio), and you can even set parameters such as voice selection and audio encoding.
  • Asynchronous Processing: This API works asynchronously, meaning it doesn't block the main thread while waiting for the audio generation to complete. By processing data in real time and sending the output in chunks, it provides a near-instantaneous experience.
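The Live API streams back raw 16-bit PCM audio at 24 kHz, which the code above wraps in a WAV container using the soundfile library. The same wrapping can be done with the standard-library wave module; here is a minimal sketch under that assumption (24 kHz, mono, 16-bit samples, matching the parameters used above):

```python
import struct
import wave

def pcm_to_wav(raw_path: str, wav_path: str, samplerate: int = 24000) -> None:
    """Wrap raw 16-bit mono PCM bytes in a WAV container."""
    with open(raw_path, "rb") as f:
        pcm = f.read()
    with wave.open(wav_path, "wb") as wav:
        wav.setnchannels(1)        # mono
        wav.setsampwidth(2)        # 16-bit samples = 2 bytes each
        wav.setframerate(samplerate)
        wav.writeframes(pcm)

# Generate 100 silent 16-bit samples as stand-in audio, then convert them.
with open("demo.raw", "wb") as f:
    f.write(struct.pack("<100h", *([0] * 100)))
pcm_to_wav("demo.raw", "demo.wav")
```

This is only a fallback sketch; the workshop code keeps soundfile, which also handles resampling and other formats.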

Now, the key question is: when should this audio generation process run? Ideally, we want the audio recaps to be available as soon as a new teaching plan is created. Since we've already implemented an event-driven architecture by publishing the teaching plan to a Pub/Sub topic, we can simply subscribe to that topic.

However, we don't generate new teaching plans very often. It wouldn't be efficient to have an agent constantly running and waiting for new plans. That's why it makes perfect sense to deploy this audio generation logic as a Cloud Run Function.

By deploying it as a function, it remains dormant until a new message is published to the Pub/Sub topic. When that happens, it automatically triggers the function, which generates the audio recaps and stores them in our bucket.

👉Open main.py under the courses folder. This file defines the Cloud Run Function that will be triggered when a new teaching plan is available: it receives the plan and initiates the audio recap generation. Add the following code snippet to the end of the file.

@functions_framework.cloud_event
def process_teaching_plan(cloud_event):
    print(f"CloudEvent received: {cloud_event.data}")
    time.sleep(60)
    try:
        if isinstance(cloud_event.data.get('message', {}).get('data'), str):  # Check for base64 encoding
            data = json.loads(base64.b64decode(cloud_event.data['message']['data']).decode('utf-8'))
            teaching_plan = data.get('teaching_plan')  # Get the teaching plan
        elif 'teaching_plan' in cloud_event.data:  # No base64
            teaching_plan = cloud_event.data["teaching_plan"]
        else:
            raise KeyError("teaching_plan not found")  # Handle error explicitly

        # Load the teaching_plan string from the cloud event and call breakup_sessions
        breakup_sessions(teaching_plan)

        return "Teaching plan processed successfully", 200

    except (json.JSONDecodeError, AttributeError, KeyError) as e:
        print(f"Error decoding CloudEvent data: {e} - Data: {cloud_event.data}")
        return "Error processing event", 500

    except Exception as e:
        print(f"Error processing teaching plan: {e}")
        return "Error processing teaching plan", 500

@functions_framework.cloud_event : This decorator marks the function as a Cloud Run Function that will be triggered by CloudEvents.
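You can exercise the parsing logic above without the emulator by feeding it a dict shaped like the CloudEvent data Pub/Sub delivers. A quick stand-alone sketch (the extract_teaching_plan helper is hypothetical, written only to mirror the branches in process_teaching_plan):

```python
import base64
import json

def extract_teaching_plan(event_data: dict) -> str:
    """Mirror process_teaching_plan's branching: try the base64 Pub/Sub
    payload first, then a plain 'teaching_plan' key, else raise."""
    if isinstance(event_data.get("message", {}).get("data"), str):
        decoded = json.loads(base64.b64decode(event_data["message"]["data"]).decode("utf-8"))
        return decoded.get("teaching_plan")
    if "teaching_plan" in event_data:
        return event_data["teaching_plan"]
    raise KeyError("teaching_plan not found")

plan = {"teaching_plan": "Week 1: fractions"}
encoded = {"message": {"data": base64.b64encode(json.dumps(plan).encode()).decode()}}
print(extract_teaching_plan(encoded))  # → Week 1: fractions
print(extract_teaching_plan(plan))     # → Week 1: fractions
```

Both input shapes resolve to the same plan string, which is why the function can be tested locally with a plain JSON body or a full Pub/Sub envelope.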

Local Testing

👉We'll run this in a virtual environment and install the necessary Python libraries for the Cloud Run function.

cd ~/aidemy-bootstrap/courses
export COURSE_BUCKET_NAME=$(gcloud storage buckets list --format="value(name)" | grep aidemy-recap)
python -m venv env
source env/bin/activate
pip install -r requirements.txt

👉The Cloud Run Function emulator allows us to test our function locally before deploying it to Google Cloud. Start a local emulator by running:

functions-framework --target process_teaching_plan --signature-type=cloudevent --source main.py

👉While the emulator is running, you can send it test CloudEvents to simulate a new teaching plan being published. In a new terminal:

Two terminal

👉Run:

curl -X POST \
  http://localhost:8080/ \
  -H "Content-Type: application/json" \
  -H "ce-id: event-id-01" \
  -H "ce-source: planner-agent" \
  -H "ce-specversion: 1.0" \
  -H "ce-type: google.cloud.pubsub.topic.v1.messagePublished" \
  -d '{
    "message": {
      "data": "eyJ0ZWFjaGluZ19wbGFuIjogIldlZWsgMTogMkQgU2hhcGVzIGFuZCBBbmdsZXMgLSBEYXkgMTogUmV2aWV3IG9mIGJhc2ljIDJEIHNoYXBlcyAoc3F1YXJlcywgcmVjdGFuZ2xlcywgdHJpYW5nbGVzLCBjaXJjbGVzKS4gRGF5IDI6IEV4cGxvcmluZyBkaWZmZXJlbnQgdHlwZXMgb2YgdHJpYW5nbGVzIChlcXVpbGF0ZXJhbCwgaXNvc2NlbGVzLCBzY2FsZW5lLCByaWdodC1hbmdsZWQpLiBEYXkgMzogRXhwbG9yaW5nIHF1YWRyaWxhdGVyYWxzIChzcXVhcmUsIHJlY3RhbmdsZSwgcGFyYWxsZWxvZ3JhbSwgcmhvbWJ1cywgdHJhcGV6aXVtKS4gRGF5IDQ6IEludHJvZHVjdGlvbiB0byBhbmdsZXM6IHJpZ2h0IGFuZ2xlcywgYWN1dGUgYW5nbGVzLCBhbmQgb2J0dXNlIGFuZ2xlcy4gRGF5IDU6IE1lYXN1cmluZyBhbmdsZXMgdXNpbmcgYSBwcm90cmFjdG9yLiBXZWVrIDI6IDNEIFNoYXBlcyBhbmQgU3ltbWV0cnkgLSBEYXkgNjogSW50cm9kdWN0aW9uIHRvIDNEIHNoYXBlczogY3ViZXMsIGN1Ym9pZHMsIHNwaGVyZXMsIGN5bGluZGVycywgY29uZXMsIGFuZCBweXJhbWlkcy4gRGF5IDc6IERlc2NyaWJpbmcgM0Qgc2hhcGVzIHVzaW5nIGZhY2VzLCBlZGdlcywgYW5kIHZlcnRpY2VzLiBEYXkgODogUmVsYXRpbmcgMkQgc2hhcGVzIHRvIDNEIHNoYXBlcy4gRGF5IDk6IElkZW50aWZ5aW5nIGxpbmVzIG9mIHN5bW1ldHJ5IGluIDJEIHNoYXBlcy4gRGF5IDEwOiBDb21wbGV0aW5nIHN5bW1ldHJpY2FsIGZpZ3VyZXMuIFdlZWsgMzogUG9zaXRpb24sIERpcmVjdGlvbiwgYW5kIFByb2JsZW0gU29sdmluZyAtIERheSAxMTogRGVzY3JpYmluZyBwb3NpdGlvbiB1c2luZyBjb29yZGluYXRlcyBpbiB0aGUgZmlyc3QgcXVhZHJhbnQuIERheSAxMjogUGxvdHRpbmcgY29vcmRpbmF0ZXMgdG8gZHJhdyBzaGFwZXMuIERheSAxMzogVW5kZXJzdGFuZGluZyB0cmFuc2xhdGlvbiAoc2xpZGluZyBhIHNoYXBlKS4gRGF5IDE0OiBVbmRlcnN0YW5kaW5nIHJlZmxlY3Rpb24gKGZsaXBwaW5nIGEgc2hhcGUpLiBEYXkgMTU6IFByb2JsZW0tc29sdmluZyBhY3Rpdml0aWVzIGludm9sdmluZyBwZXJpbWV0ZXIsIGFyZWEsIGFuZCBtaXNzaW5nIGFuZ2xlcy4ifQ=="
    }
  }'

Rather than staring blankly while waiting for the response, switch over to the other Cloud Shell terminal. You can observe the progress and any output or error messages generated by your function in the emulator's terminal. 😁

Back in the second terminal, you should see that it returned OK.

👉To verify the data in the bucket, go to Cloud Storage, select the "Buckets" tab, and then open aidemy-recap-UNIQUE_NAME.

Bucket

👉In the terminal running the emulator, press Ctrl+C to exit. Then close the second terminal and run deactivate to exit the virtual environment.

deactivate

Deploying to Google Cloud

Deployment Overview

👉After testing locally, it's time to deploy the course agent to Google Cloud. In the terminal, run these commands:

cd ~/aidemy-bootstrap/courses
export COURSE_BUCKET_NAME=$(gcloud storage buckets list --format="value(name)" | grep aidemy-recap)
export PROJECT_ID=$(gcloud config get project)
gcloud functions deploy courses-agent \
    --region=us-central1 \
    --gen2 \
    --source=. \
    --runtime=python312 \
    --trigger-topic=plan \
    --entry-point=process_teaching_plan \
    --set-env-vars=GOOGLE_CLOUD_PROJECT=${PROJECT_ID},COURSE_BUCKET_NAME=$COURSE_BUCKET_NAME

Verify the deployment by going to Cloud Run in the Google Cloud Console. You should see a new service named courses-agent listed.

Cloud Run List

To check the trigger configuration, click on the courses-agent service to view its details. Go to the "TRIGGERS" tab.

You should see a trigger configured to listen for messages published to the plan topic.

Cloud Run Trigger

Finally, let's see it running end to end.

👉We need to configure the portal agent so it knows where to find the generated audio files. Run this in the terminal:

export COURSE_BUCKET_NAME=$(gcloud storage buckets list --format="value(name)" | grep aidemy-recap)
export PROJECT_ID=$(gcloud config get project)
gcloud run services update aidemy-portal \
    --region=us-central1 \
    --set-env-vars=GOOGLE_CLOUD_PROJECT=${PROJECT_ID},COURSE_BUCKET_NAME=$COURSE_BUCKET_NAME

👉Try generating a new teaching plan using the planner agent web page. It might take a few minutes to start; don't be alarmed, it's a serverless service.

To access the planner agent, get its Service URL by running this in the terminal:

gcloud run services list \
    --platform=managed \
    --region=us-central1 \
    --format='value(URL)' | grep planner

After generating the new plan, wait 2-3 minutes for the audio to be generated; again, this takes a few extra minutes due to billing limitations on this lab account.

You can monitor whether the courses-agent function has received the teaching plan by checking the function's "TRIGGERS" tab. Refresh the page periodically; you should eventually see that the function has been invoked. If the function hasn't been invoked after more than 2 minutes, you can try generating the teaching plan again. However, avoid generating plans repeatedly in quick succession, as each generated plan will be sequentially consumed and processed by the agent, potentially creating a backlog.

Trigger Observe

👉Visit the portal and click on "Courses". You should see three cards, each displaying an audio recap. To find the URL of your portal agent:

gcloud run services list \
    --platform=managed \
    --region=us-central1 \
    --format='value(URL)' | grep portal

Click "play" on each course to ensure the audio recaps are aligned with the teaching plan you just generated! Portal Courses

Exit the virtual environment.

deactivate

13. OPTIONAL: Role-Based collaboration with Gemini and DeepSeek

Having multiple perspectives is invaluable, especially when crafting engaging and thoughtful assignments. We'll now build a multi-agent system that leverages two different models with distinct roles, to generate assignments: one promotes collaboration, and the other encourages self-study. We'll use a "single-shot" architecture, where the workflow follows a fixed route.
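The fixed-route flow described above can be sketched with plain Python stand-ins before the real models are wired in: a shared state dict passes through each role in a fixed order, which is exactly the shape LangGraph formalizes with nodes and edges. The generator functions below are hypothetical placeholders, not the actual Gemini and DeepSeek calls built later in this section:

```python
def gen_assignment_collaborative(state: dict) -> dict:
    # Placeholder for the Gemini-backed generator (collaboration-focused).
    state["model_one_assignment"] = f"Group project based on: {state['teaching_plan']}"
    return state

def gen_assignment_self_study(state: dict) -> dict:
    # Placeholder for the DeepSeek-backed generator (self-study-focused).
    state["model_two_assignment"] = f"Solo exercises based on: {state['teaching_plan']}"
    return state

def combine(state: dict) -> dict:
    # A final node merges the two perspectives into one assignment.
    state["final_assignment"] = state["model_one_assignment"] + "\n" + state["model_two_assignment"]
    return state

# Fixed route: every run visits the same nodes in the same order.
state = {"teaching_plan": "Week 1: 2D shapes"}
for node in (gen_assignment_collaborative, gen_assignment_self_study, combine):
    state = node(state)
print(state["final_assignment"])
```

Because the route never branches, there is no routing logic to test; each node only needs to read and update the shared state.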

Gemini Assignment Generator

Gemini Overview

We'll start by setting up the Gemini function to generate assignments with a collaborative emphasis. Edit the gemini.py file located in the assignment folder.

👉Paste the following code to the end of the gemini.py file:

def gen_assignment_gemini(state):
    region = get_next_region()
    client = genai.Client(vertexai=True, project=PROJECT_ID, location=region)
    print(f"---------------gen_assignment_gemini")
    response = client.models.generate_content(
        model=MODEL_ID, contents=f"""
        You are an instructor

        Develop engaging and practical assignments for each week, ensuring they align with the teaching plan's objectives and progressively build upon each other.

        For each week, provide the following:

        * **Week [Number]:** A descriptive title for the assignment (e.g., "Data Exploration Project," "Model Building Exercise").
        * **Learning Objectives Assessed:** List the specific learning objectives from the teaching plan that this assignment assesses.
        * **Description:** A detailed description of the task, including any specific requirements or constraints.  Provide examples or scenarios if applicable.
        * **Deliverables:** Specify what students need to submit (e.g., code, report, presentation).
        * **Estimated Time Commitment:**  The approximate time students should dedicate to completing the assignment.
        * **Assessment Criteria:** Briefly outline how the assignment will be graded (e.g., correctness, completeness, clarity, creativity).

        The assignments should be a mix of individual and collaborative work where appropriate.  Consider different learning styles and provide opportunities for students to apply their knowledge creatively.

        Based on this teaching plan: {state["teaching_plan"]}
        """
    )

    print(f"---------------gen_assignment_gemini answer {response.text}")

    state["model_one_assignment"] = response.text

    return state


import unittest

class TestGenAssignmentGemini(unittest.TestCase):
    def test_gen_assignment_gemini(self):
        test_teaching_plan = "Week 1: 2D Shapes and Angles - Day 1: Review of basic 2D shapes (squares, rectangles, triangles, circles). Day 2: Exploring different types of triangles (equilateral, isosceles, scalene, right-angled). Day 3: Exploring quadrilaterals (square, rectangle, parallelogram, rhombus, trapezium). Day 4: Introduction to angles: right angles, acute angles, and obtuse angles. Day 5: Measuring angles using a protractor. Week 2: 3D Shapes and Symmetry - Day 6: Introduction to 3D shapes: cubes, cuboids, spheres, cylinders, cones, and pyramids. Day 7: Describing 3D shapes using faces, edges, and vertices. Day 8: Relating 2D shapes to 3D shapes. Day 9: Identifying lines of symmetry in 2D shapes. Day 10: Completing symmetrical figures. Week 3: Position, Direction, and Problem Solving - Day 11: Describing position using coordinates in the first quadrant. Day 12: Plotting coordinates to draw shapes. Day 13: Understanding translation (sliding a shape). Day 14: Understanding reflection (flipping a shape). Day 15: Problem-solving activities involving perimeter, area, and missing angles."

        initial_state = {"teaching_plan": test_teaching_plan, "model_one_assignment": "", "model_two_assignment": "", "final_assignment": ""}

        updated_state = gen_assignment_gemini(initial_state)

        self.assertIn("model_one_assignment", updated_state)
        self.assertIsNotNone(updated_state["model_one_assignment"])
        self.assertIsInstance(updated_state["model_one_assignment"], str)
        self.assertGreater(len(updated_state["model_one_assignment"]), 0)
        print(updated_state["model_one_assignment"])


if __name__ == '__main__':
    unittest.main()

It uses the Gemini model to generate assignments.

We are ready to test the Gemini Agent.

👉Run these commands in the terminal to set up the environment:

cd ~/aidemy-bootstrap/assignment
export PROJECT_ID=$(gcloud config get project)
python -m venv env
source env/bin/activate
pip install -r requirements.txt

👉Run this to test it:

python gemini.py

You should see an assignment with more group work in the output. The assert test at the end will also print the results.

Here are some engaging and practical assignments for each week, designed to build progressively upon the teaching plan's objectives:

**Week 1: Exploring the World of 2D Shapes**

* **Learning Objectives Assessed:**
    * Identify and name basic 2D shapes (squares, rectangles, triangles, circles).
    * .....

* **Description:**
    * **Shape Scavenger Hunt:** Students will go on a scavenger hunt in their homes or neighborhoods, taking pictures of objects that represent different 2D shapes. They will then create a presentation or poster showcasing their findings, classifying each shape and labeling its properties (e.g., number of sides, angles, etc.).
    * **Triangle Trivia:** Students will research and create a short quiz or presentation about different types of triangles, focusing on their properties and real-world examples.
    * **Angle Exploration:** Students will use a protractor to measure various angles in their surroundings, such as corners of furniture, windows, or doors. They will record their measurements and create a chart categorizing the angles as right, acute, or obtuse.
....

**Week 2: Delving into the World of 3D Shapes and Symmetry**

* **Learning Objectives Assessed:**
    * Identify and name basic 3D shapes.
    * ....

* **Description:**
    * **3D Shape Construction:** Students will work in groups to build 3D shapes using construction paper, cardboard, or other materials. They will then create a presentation showcasing their creations, describing the number of faces, edges, and vertices for each shape.
    * **Symmetry Exploration:** Students will investigate the concept of symmetry by creating a visual representation of various symmetrical objects (e.g., butterflies, leaves, snowflakes) using drawing or digital tools. They will identify the lines of symmetry and explain their findings.
    * **Symmetry Puzzles:** Students will be given a half-image of a symmetrical figure and will be asked to complete the other half, demonstrating their understanding of symmetry. This can be done through drawing, cut-out activities, or digital tools.

**Week 3: Navigating Position, Direction, and Problem Solving**

* **Learning Objectives Assessed:**
    * Describe position using coordinates in the first quadrant.
    * ....

* **Description:**
    * **Coordinate Maze:** Students will create a maze using coordinates on a grid paper. They will then provide directions for navigating the maze using a combination of coordinate movements and translation/reflection instructions.
    * **Shape Transformations:** Students will draw shapes on a grid paper and then apply transformations such as translation and reflection, recording the new coordinates of the transformed shapes.
    * **Geometry Challenge:** Students will solve real-world problems involving perimeter, area, and angles. For example, they could be asked to calculate the perimeter of a room, the area of a garden, or the missing angle in a triangle.
....

Stop with Ctrl+C. Then, to clean up the test code, REMOVE the following code from gemini.py:

import unittest

class TestGenAssignmentGemini(unittest.TestCase):
    def test_gen_assignment_gemini(self):
        test_teaching_plan = "Week 1: 2D Shapes and Angles - Day 1: Review of basic 2D shapes (squares, rectangles, triangles, circles). Day 2: Exploring different types of triangles (equilateral, isosceles, scalene, right-angled). Day 3: Exploring quadrilaterals (square, rectangle, parallelogram, rhombus, trapezium). Day 4: Introduction to angles: right angles, acute angles, and obtuse angles. Day 5: Measuring angles using a protractor. Week 2: 3D Shapes and Symmetry - Day 6: Introduction to 3D shapes: cubes, cuboids, spheres, cylinders, cones, and pyramids. Day 7: Describing 3D shapes using faces, edges, and vertices. Day 8: Relating 2D shapes to 3D shapes. Day 9: Identifying lines of symmetry in 2D shapes. Day 10: Completing symmetrical figures. Week 3: Position, Direction, and Problem Solving - Day 11: Describing position using coordinates in the first quadrant. Day 12: Plotting coordinates to draw shapes. Day 13: Understanding translation (sliding a shape). Day 14: Understanding reflection (flipping a shape). Day 15: Problem-solving activities involving perimeter, area, and missing angles."

        initial_state = {"teaching_plan": test_teaching_plan, "model_one_assignment": "", "model_two_assignment": "", "final_assignment": ""}

        updated_state = gen_assignment_gemini(initial_state)

        self.assertIn("model_one_assignment", updated_state)
        self.assertIsNotNone(updated_state["model_one_assignment"])
        self.assertIsInstance(updated_state["model_one_assignment"], str)
        self.assertGreater(len(updated_state["model_one_assignment"]), 0)
        print(updated_state["model_one_assignment"])


if __name__ == '__main__':
    unittest.main()

Configure the DeepSeek Assignment Generator

While cloud-based AI platforms are convenient, self-hosting LLMs can be crucial for protecting data privacy and ensuring data sovereignty. We'll deploy the smallest DeepSeek model (1.5B parameters) on a Compute Engine instance. There are other options, such as hosting it on Google's Vertex AI platform or on GKE, but since this is a workshop about AI agents and we don't want to keep you here forever, we'll use the simplest approach. If you're interested in digging into the other options, take a look at the deepseek-vertexai.py file under the assignment folder, which provides sample code for interacting with models deployed on Vertex AI.
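Once the instance is up, other services will reach the model through Ollama's HTTP API on port 11434 (its POST /api/generate endpoint). Here is a sketch of how such a request is assembled; the build_ollama_request helper is our own, the instance hostname is a placeholder, and the actual network call would go through requests.post or urllib:

```python
import json

OLLAMA_PORT = 11434  # Ollama's default API port

def build_ollama_request(host: str, prompt: str, model: str = "deepseek-r1:1.5b"):
    """Return the URL and JSON body for a non-streaming Ollama generate call."""
    url = f"http://{host}:{OLLAMA_PORT}/api/generate"
    payload = {"model": model, "prompt": prompt, "stream": False}
    return url, json.dumps(payload)

url, body = build_ollama_request("OLLAMA_INSTANCE_IP", "Write a self-study assignment.")
print(url)  # → http://OLLAMA_INSTANCE_IP:11434/api/generate
# An actual call (requires network access to the instance) would look like:
#   import requests
#   answer = requests.post(url, data=body).json()["response"]
```

Setting "stream": False asks Ollama to return the whole completion in one JSON response instead of chunked lines, which keeps the client code simple.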

Deepseek Overview

👉Run this command in the terminal to create a self-hosted LLM platform Ollama:

cd ~/aidemy-bootstrap/assignment
gcloud compute instances create ollama-instance \
    --image-family=ubuntu-2204-lts \
    --image-project=ubuntu-os-cloud \
    --machine-type=e2-standard-4 \
    --zone=us-central1-a \
    --metadata-from-file startup-script=startup.sh \
    --boot-disk-size=50GB \
    --tags=ollama \
    --scopes=https://www.googleapis.com/auth/cloud-platform

To verify the Compute Engine instance is running:

Navigate to Compute Engine > "VM instances" in the Google Cloud Console. You should see ollama-instance listed with a green check mark indicating that it's running. If you can't see it, make sure the zone is us-central1-a; if it's not, you may need to search for the instance.

Compute Engine List

👉We'll install the smallest DeepSeek model and test it. Back in the Cloud Shell Editor, open a new terminal and run the following command to SSH into the GCE instance:

gcloud compute ssh ollama-instance --zone=us-central1-a

Upon establishing the SSH connection, you may be prompted with the following:

"Do you want to continue (Y/n)?"

Simply type Y (case-insensitive) and press Enter to proceed.

Next, you might be asked to create a passphrase for the SSH key. If you prefer not to use a passphrase, just press Enter twice to accept the default (no passphrase).

👉Now that you are in the virtual machine, pull the smallest DeepSeek R1 model and test that it works:

ollama pull deepseek-r1:1.5b
ollama run deepseek-r1:1.5b "who are you?"

👉To exit the GCE instance, enter the following in the SSH terminal:

exit

👉Next, set up a firewall rule so other services can access the LLM. If you do this for production, please restrict access to the instance: either implement authenticated access to the service or limit the allowed IP addresses. Run:

gcloud compute firewall-rules create allow-ollama-11434 \
    --allow=tcp:11434 \
    --target-tags=ollama \
    --description="Allow access to Ollama on port 11434"

👉To verify if your firewall policy is working correctly, try running:

export OLLAMA_HOST=http://$(gcloud compute instances describe ollama-instance --zone=us-central1-a --format='value(networkInterfaces[0].accessConfigs[0].natIP)'):11434
curl -X POST "${OLLAMA_HOST}/api/generate" \
     -H "Content-Type: application/json" \
     -d '{
           "prompt": "Hello, what are you?",
           "model": "deepseek-r1:1.5b",
           "stream": false
         }'

Next, we'll work on the DeepSeek function in the assignment agent to generate assignments with an emphasis on individual work.

👉Edit deepseek.py under the assignment folder and add the following snippet to the end:

def gen_assignment_deepseek(state):
    print(f"---------------gen_assignment_deepseek")

    template = """
        You are an instructor who favors students focusing on individual work.

        Develop engaging and practical assignments for each week, ensuring they align with the teaching plan's objectives and progressively build upon each other.

        For each week, provide the following:

        * **Week [Number]:** A descriptive title for the assignment (e.g., "Data Exploration Project," "Model Building Exercise").
        * **Learning Objectives Assessed:** List the specific learning objectives from the teaching plan that this assignment assesses.
        * **Description:** A detailed description of the task, including any specific requirements or constraints. Provide examples or scenarios if applicable.
        * **Deliverables:** Specify what students need to submit (e.g., code, report, presentation).
        * **Estimated Time Commitment:** The approximate time students should dedicate to completing the assignment.
        * **Assessment Criteria:** Briefly outline how the assignment will be graded (e.g., correctness, completeness, clarity, creativity).

        The assignments should be a mix of individual and collaborative work where appropriate. Consider different learning styles and provide opportunities for students to apply their knowledge creatively.

        Based on this teaching plan: {teaching_plan}
        """

    prompt = ChatPromptTemplate.from_template(template)

    model = OllamaLLM(model="deepseek-r1:1.5b", base_url=OLLAMA_HOST)

    chain = prompt | model

    response = chain.invoke({"teaching_plan": state["teaching_plan"]})
    state["model_two_assignment"] = response

    return state

import unittest

class TestGenAssignmentDeepseek(unittest.TestCase):
    def test_gen_assignment_deepseek(self):
        test_teaching_plan = "Week 1: 2D Shapes and Angles - Day 1: Review of basic 2D shapes (squares, rectangles, triangles, circles). Day 2: Exploring different types of triangles (equilateral, isosceles, scalene, right-angled). Day 3: Exploring quadrilaterals (square, rectangle, parallelogram, rhombus, trapezium). Day 4: Introduction to angles: right angles, acute angles, and obtuse angles. Day 5: Measuring angles using a protractor. Week 2: 3D Shapes and Symmetry - Day 6: Introduction to 3D shapes: cubes, cuboids, spheres, cylinders, cones, and pyramids. Day 7: Describing 3D shapes using faces, edges, and vertices. Day 8: Relating 2D shapes to 3D shapes. Day 9: Identifying lines of symmetry in 2D shapes. Day 10: Completing symmetrical figures. Week 3: Position, Direction, and Problem Solving - Day 11: Describing position using coordinates in the first quadrant. Day 12: Plotting coordinates to draw shapes. Day 13: Understanding translation (sliding a shape). Day 14: Understanding reflection (flipping a shape). Day 15: Problem-solving activities involving perimeter, area, and missing angles."

        initial_state = {"teaching_plan": test_teaching_plan, "model_one_assignment": "", "model_two_assignment": "", "final_assignment": ""}

        updated_state = gen_assignment_deepseek(initial_state)

        self.assertIn("model_two_assignment", updated_state)
        self.assertIsNotNone(updated_state["model_two_assignment"])
        self.assertIsInstance(updated_state["model_two_assignment"], str)
        self.assertGreater(len(updated_state["model_two_assignment"]), 0)
        print(updated_state["model_two_assignment"])


if __name__ == '__main__':
    unittest.main()

👉Let's test it by running:

cd ~/aidemy-bootstrap/assignment
source env/bin/activate
export PROJECT_ID=$(gcloud config get project)
export OLLAMA_HOST=http://$(gcloud compute instances describe ollama-instance --zone=us-central1-a --format='value(networkInterfaces[0].accessConfigs[0].natIP)'):11434
python deepseek.py

You should see an assignment that emphasizes more self-study work.

**Assignment Plan for Each Week**

---

### **Week 1: 2D Shapes and Angles**
- **Week Title:** "Exploring 2D Shapes"
Assign students to research and present on various 2D shapes. Include a project where they create models using straws and tape for triangles, draw quadrilaterals with specific measurements, and compare their properties.

### **Week 2: 3D Shapes and Symmetry**
Assign students to create models or nets for cubes and cuboids. They will also predict how folding these nets form the 3D shapes. Include a project where they identify symmetrical properties using mirrors or folding techniques.

### **Week 3: Position, Direction, and Problem Solving**

Assign students to use mirrors or folding techniques for reflections. Include activities where they measure angles, use a protractor, solve problems involving perimeter/area, and create symmetrical designs.
....

👉Stop the script with Ctrl+C, and clean up the test code by REMOVING the following code from deepseek.py:

import unittest

class TestGenAssignmentDeepseek(unittest.TestCase):
    def test_gen_assignment_deepseek(self):
        test_teaching_plan = "Week 1: 2D Shapes and Angles - Day 1: Review of basic 2D shapes (squares, rectangles, triangles, circles). Day 2: Exploring different types of triangles (equilateral, isosceles, scalene, right-angled). Day 3: Exploring quadrilaterals (square, rectangle, parallelogram, rhombus, trapezium). Day 4: Introduction to angles: right angles, acute angles, and obtuse angles. Day 5: Measuring angles using a protractor. Week 2: 3D Shapes and Symmetry - Day 6: Introduction to 3D shapes: cubes, cuboids, spheres, cylinders, cones, and pyramids. Day 7: Describing 3D shapes using faces, edges, and vertices. Day 8: Relating 2D shapes to 3D shapes. Day 9: Identifying lines of symmetry in 2D shapes. Day 10: Completing symmetrical figures. Week 3: Position, Direction, and Problem Solving - Day 11: Describing position using coordinates in the first quadrant. Day 12: Plotting coordinates to draw shapes. Day 13: Understanding translation (sliding a shape). Day 14: Understanding reflection (flipping a shape). Day 15: Problem-solving activities involving perimeter, area, and missing angles."

        initial_state = {"teaching_plan": test_teaching_plan, "model_one_assignment": "", "model_two_assignment": "", "final_assignment": ""}

        updated_state = gen_assignment_deepseek(initial_state)

        self.assertIn("model_two_assignment", updated_state)
        self.assertIsNotNone(updated_state["model_two_assignment"])
        self.assertIsInstance(updated_state["model_two_assignment"], str)
        self.assertGreater(len(updated_state["model_two_assignment"]), 0)
        print(updated_state["model_two_assignment"])


if __name__ == '__main__':
    unittest.main()

Now, we'll use the same Gemini model to combine both assignments into a new one. Edit the gemini.py file located in the assignment folder.

👉Paste the following code to the end of the gemini.py file:

def combine_assignments(state):
    print(f"---------------combine_assignments")
    region = get_next_region()
    client = genai.Client(vertexai=True, project=PROJECT_ID, location=region)
    response = client.models.generate_content(
        model=MODEL_ID, contents=f"""
        Look at all the proposed assignments so far {state["model_one_assignment"]} and {state["model_two_assignment"]}, combine them, and come up with a final assignment for the students.
        """
    )

    state["final_assignment"] = response.text

    return state

To combine the strengths of both models, we'll orchestrate a defined workflow using LangGraph. This workflow consists of three steps: first, the Gemini model generates an assignment focused on collaboration; second, the DeepSeek model generates an assignment emphasizing individual work; finally, Gemini synthesizes these two assignments into a single, comprehensive assignment. Because we predefine the sequence of steps without LLM decision-making, this constitutes a single-path, user-defined orchestration.

Langraph combine overview

👉Paste the following code to the end of the main.py file under assignment folder:

def create_assignment(teaching_plan: str):
    print(f"create_assignment---->{teaching_plan}")
    builder = StateGraph(State)
    builder.add_node("gen_assignment_gemini", gen_assignment_gemini)
    builder.add_node("gen_assignment_deepseek", gen_assignment_deepseek)
    builder.add_node("combine_assignments", combine_assignments)

    builder.add_edge(START, "gen_assignment_gemini")
    builder.add_edge("gen_assignment_gemini", "gen_assignment_deepseek")
    builder.add_edge("gen_assignment_deepseek", "combine_assignments")
    builder.add_edge("combine_assignments", END)

    graph = builder.compile()
    state = graph.invoke({"teaching_plan": teaching_plan})

    return state["final_assignment"]


import unittest

class TestCreateAssignment(unittest.TestCase):
    def test_create_assignment(self):
        test_teaching_plan = "Week 1: 2D Shapes and Angles - Day 1: Review of basic 2D shapes (squares, rectangles, triangles, circles). Day 2: Exploring different types of triangles (equilateral, isosceles, scalene, right-angled). Day 3: Exploring quadrilaterals (square, rectangle, parallelogram, rhombus, trapezium). Day 4: Introduction to angles: right angles, acute angles, and obtuse angles. Day 5: Measuring angles using a protractor. Week 2: 3D Shapes and Symmetry - Day 6: Introduction to 3D shapes: cubes, cuboids, spheres, cylinders, cones, and pyramids. Day 7: Describing 3D shapes using faces, edges, and vertices. Day 8: Relating 2D shapes to 3D shapes. Day 9: Identifying lines of symmetry in 2D shapes. Day 10: Completing symmetrical figures. Week 3: Position, Direction, and Problem Solving - Day 11: Describing position using coordinates in the first quadrant. Day 12: Plotting coordinates to draw shapes. Day 13: Understanding translation (sliding a shape). Day 14: Understanding reflection (flipping a shape). Day 15: Problem-solving activities involving perimeter, area, and missing angles."
        final_assignment = create_assignment(test_teaching_plan)

        print(final_assignment)


if __name__ == '__main__':
    unittest.main()

👉To initially test the create_assignment function and confirm that the workflow combining Gemini and DeepSeek is functional, run the following command:

cd ~/aidemy-bootstrap/assignment
source env/bin/activate
pip install -r requirements.txt
python main.py

You should see output that combines both models' perspectives: one focused on individual student study and the other on group work.

**Tasks:**

1. **Clue Collection:** Gather all the clues left by the thieves. These clues will include:
   * Descriptions of shapes and their properties (angles, sides, etc.)
   * Coordinate grids with hidden messages
   * Geometric puzzles requiring transformation (translation, reflection, rotation)
   * Challenges involving area, perimeter, and angle calculations

2. **Clue Analysis:** Decipher each clue using your geometric knowledge. This will involve:
   * Identifying the shape and its properties
   * Plotting coordinates and interpreting patterns on the grid
   * Solving geometric puzzles by applying transformations
   * Calculating area, perimeter, and missing angles

3. **Case Report:** Create a comprehensive case report outlining your findings. This report should include:
   * A detailed explanation of each clue and its solution
   * Sketches and diagrams to support your explanations
   * A step-by-step account of how you followed the clues to locate the artifact
   * A final conclusion about the thieves and their motives

👉Stop the script with Ctrl+C, and clean up the test code by REMOVING the following code from main.py:

import unittest

class TestCreateAssignment(unittest.TestCase):
    def test_create_assignment(self):
        test_teaching_plan = "Week 1: 2D Shapes and Angles - Day 1: Review of basic 2D shapes (squares, rectangles, triangles, circles). Day 2: Exploring different types of triangles (equilateral, isosceles, scalene, right-angled). Day 3: Exploring quadrilaterals (square, rectangle, parallelogram, rhombus, trapezium). Day 4: Introduction to angles: right angles, acute angles, and obtuse angles. Day 5: Measuring angles using a protractor. Week 2: 3D Shapes and Symmetry - Day 6: Introduction to 3D shapes: cubes, cuboids, spheres, cylinders, cones, and pyramids. Day 7: Describing 3D shapes using faces, edges, and vertices. Day 8: Relating 2D shapes to 3D shapes. Day 9: Identifying lines of symmetry in 2D shapes. Day 10: Completing symmetrical figures. Week 3: Position, Direction, and Problem Solving - Day 11: Describing position using coordinates in the first quadrant. Day 12: Plotting coordinates to draw shapes. Day 13: Understanding translation (sliding a shape). Day 14: Understanding reflection (flipping a shape). Day 15: Problem-solving activities involving perimeter, area, and missing angles."
        final_assignment = create_assignment(test_teaching_plan)

        print(final_assignment)


if __name__ == '__main__':
    unittest.main()

Generate Assignment.png

To make the assignment generation process automatic and responsive to new teaching plans, we'll leverage the existing event-driven architecture. The following code defines a Cloud Run Function (generate_assignment) that will be triggered whenever a new teaching plan is published to the Pub/Sub topic 'plan'.

👉Add the following code to the end of main.py in the assignment folder:

@functions_framework.cloud_event
def generate_assignment(cloud_event):
    print(f"CloudEvent received: {cloud_event.data}")

    try:
        if isinstance(cloud_event.data.get('message', {}).get('data'), str):
            data = json.loads(base64.b64decode(cloud_event.data['message']['data']).decode('utf-8'))
            teaching_plan = data.get('teaching_plan')
        elif 'teaching_plan' in cloud_event.data:
            teaching_plan = cloud_event.data["teaching_plan"]
        else:
            raise KeyError("teaching_plan not found")

        assignment = create_assignment(teaching_plan)

        print(f"Assignment---->{assignment}")

        # Store the returned assignment in the bucket as a text file
        storage_client = storage.Client()
        bucket = storage_client.bucket(ASSIGNMENT_BUCKET)
        file_name = f"assignment-{random.randint(1, 1000)}.txt"
        blob = bucket.blob(file_name)
        blob.upload_from_string(assignment)

        return f"Assignment generated and stored in {ASSIGNMENT_BUCKET}/{file_name}", 200

    except (json.JSONDecodeError, AttributeError, KeyError) as e:
        print(f"Error decoding CloudEvent data: {e} - Data: {cloud_event.data}")
        return "Error processing event", 500

    except Exception as e:
        print(f"Error generating assignment: {e}")
        return "Error generating assignment", 500

Local Testing

Before deploying to Google Cloud, it's good practice to test the Cloud Run Function locally. This allows for faster iteration and easier debugging.

First, create a Cloud Storage bucket to store the generated assignment files and grant the service account access to the bucket. Run the following commands in the terminal:

👉 IMPORTANT: Ensure you define a unique ASSIGNMENT_BUCKET name that begins with "aidemy-assignment-". This unique name is crucial for avoiding naming conflicts when creating your Cloud Storage bucket. (Replace <YOUR_NAME> with any random word.)

export ASSIGNMENT_BUCKET=aidemy-assignment-<YOUR_NAME> # Name must be unique

👉And run:

export PROJECT_ID=$(gcloud config get project)
export SERVICE_ACCOUNT_NAME=$(gcloud compute project-info describe --format="value(defaultServiceAccount)")
gsutil mb -p $PROJECT_ID -l us-central1 gs://$ASSIGNMENT_BUCKET

gcloud storage buckets add-iam-policy-binding gs://$ASSIGNMENT_BUCKET \
    --member "serviceAccount:$SERVICE_ACCOUNT_NAME" \
    --role "roles/storage.objectViewer"

gcloud storage buckets add-iam-policy-binding gs://$ASSIGNMENT_BUCKET \
    --member "serviceAccount:$SERVICE_ACCOUNT_NAME" \
    --role "roles/storage.objectCreator"

👉Now, start the Cloud Run Function emulator:

cd ~/aidemy-bootstrap/assignment
functions-framework \
    --target generate_assignment \
    --signature-type=cloudevent \
    --source main.py

👉While the emulator is running in one terminal, open a second terminal in the Cloud Shell. In this second terminal, send a test CloudEvent to the emulator to simulate a new teaching plan being published:

Two terminal

curl -X POST \
  http://localhost:8080/ \
  -H "Content-Type: application/json" \
  -H "ce-id: event-id-01" \
  -H "ce-source: planner-agent" \
  -H "ce-specversion: 1.0" \
  -H "ce-type: google.cloud.pubsub.topic.v1.messagePublished" \
  -d '{
    "message": {
      "data": "eyJ0ZWFjaGluZ19wbGFuIjogIldlZWsgMTogMkQgU2hhcGVzIGFuZCBBbmdsZXMgLSBEYXkgMTogUmV2aWV3IG9mIGJhc2ljIDJEIHNoYXBlcyAoc3F1YXJlcywgcmVjdGFuZ2xlcywgdHJpYW5nbGVzLCBjaXJjbGVzKS4gRGF5IDI6IEV4cGxvcmluZyBkaWZmZXJlbnQgdHlwZXMgb2YgdHJpYW5nbGVzIChlcXVpbGF0ZXJhbCwgaXNvc2NlbGVzLCBzY2FsZW5lLCByaWdodC1hbmdsZWQpLiBEYXkgMzogRXhwbG9yaW5nIHF1YWRyaWxhdGVyYWxzIChzcXVhcmUsIHJlY3RhbmdsZSwgcGFyYWxsZWxvZ3JhbSwgcmhvbWJ1cywgdHJhcGV6aXVtKS4gRGF5IDQ6IEludHJvZHVjdGlvbiB0byBhbmdsZXM6IHJpZ2h0IGFuZ2xlcywgYWN1dGUgYW5nbGVzLCBhbmQgb2J0dXNlIGFuZ2xlcy4gRGF5IDU6IE1lYXN1cmluZyBhbmdsZXMgdXNpbmcgYSBwcm90cmFjdG9yLiBXZWVrIDI6IDNEIFNoYXBlcyBhbmQgU3ltbWV0cnkgLSBEYXkgNjogSW50cm9kdWN0aW9uIHRvIDNEIHNoYXBlczogY3ViZXMsIGN1Ym9pZHMsIHNwaGVyZXMsIGN5bGluZGVycywgY29uZXMsIGFuZCBweXJhbWlkcy4gRGF5IDc6IERlc2NyaWJpbmcgM0Qgc2hhcGVzIHVzaW5nIGZhY2VzLCBlZGdlcywgYW5kIHZlcnRpY2VzLiBEYXkgODogUmVsYXRpbmcgMkQgc2hhcGVzIHRvIDNEIHNoYXBlcy4gRGF5IDk6IElkZW50aWZ5aW5nIGxpbmVzIG9mIHN5bW1ldHJ5IGluIDJEIHNoYXBlcy4gRGF5IDEwOiBDb21wbGV0aW5nIHN5bW1ldHJpY2FsIGZpZ3VyZXMuIFdlZWsgMzogUG9zaXRpb24sIERpcmVjdGlvbiwgYW5kIFByb2JsZW0gU29sdmluZyAtIERheSAxMTogRGVzY3JpYmluZyBwb3NpdGlvbiB1c2luZyBjb29yZGluYXRlcyBpbiB0aGUgZmlyc3QgcXVhZHJhbnQuIERheSAxMjogUGxvdHRpbmcgY29vcmRpbmF0ZXMgdG8gZHJhdyBzaGFwZXMuIERheSAxMzogVW5kZXJzdGFuZGluZyB0cmFuc2xhdGlvbiAoc2xpZGluZyBhIHNoYXBlKS4gRGF5IDE0OiBVbmRlcnN0YW5kaW5nIHJlZmxlY3Rpb24gKGZsaXBwaW5nIGEgc2hhcGUpLiBEYXkgMTU6IFByb2JsZW0tc29sdmluZyBhY3Rpdml0aWVzIGludm9sdmluZyBwZXJpbWV0ZXIsIGFyZWEsIGFuZCBtaXNzaW5nIGFuZ2xlcy4ifQ=="
    }
  }'

Rather than staring blankly while waiting for the response, switch over to the other Cloud Shell terminal. You can observe the progress and any output or error messages generated by your function in the emulator's terminal. 😁

The curl command should print "OK" (without a trailing newline, so "OK" may appear on the same line as your terminal prompt).

To confirm that the assignment was successfully generated and stored, go to the Google Cloud Console and navigate to Storage > "Cloud Storage". Select the aidemy-assignment bucket you created. You should see a text file named assignment-{random number}.txt in the bucket. Click on the file to download it and verify that it contains the newly generated assignment.

12-01-assignment-bucket

👉In the terminal running the emulator, press Ctrl+C to exit, and close the second terminal. 👉Then, in that same terminal, exit the virtual environment:

deactivate

Deployment Overview

👉Next, we'll deploy the assignment agent to the cloud:

cd ~/aidemy-bootstrap/assignment
export ASSIGNMENT_BUCKET=$(gcloud storage buckets list --format="value(name)" | grep aidemy-assignment)
export OLLAMA_HOST=http://$(gcloud compute instances describe ollama-instance --zone=us-central1-a --format='value(networkInterfaces[0].accessConfigs[0].natIP)'):11434
export PROJECT_ID=$(gcloud config get project)
gcloud functions deploy assignment-agent \
  --gen2 \
  --timeout=540 \
  --memory=2Gi \
  --cpu=1 \
  --set-env-vars="ASSIGNMENT_BUCKET=${ASSIGNMENT_BUCKET}" \
  --set-env-vars=GOOGLE_CLOUD_PROJECT=${PROJECT_ID} \
  --set-env-vars=OLLAMA_HOST=${OLLAMA_HOST} \
  --region=us-central1 \
  --runtime=python312 \
  --source=. \
  --entry-point=generate_assignment \
  --trigger-topic=plan

Verify the deployment by going to the Google Cloud Console and navigating to Cloud Run. You should see a new service named assignment-agent listed. 12-03-function-list

With the assignment generation workflow now implemented, tested, and deployed, we can move on to the next step: making these assignments accessible within the student portal.

14. OPTIONAL: Role-Based collaboration with Gemini and DeepSeek - Contd.

Dynamic website generation

To enhance the student portal and make it more engaging, we'll implement dynamic HTML generation for assignment pages. The goal is to automatically update the portal with a fresh, visually appealing design whenever a new assignment is generated. This leverages the LLM's coding capabilities to create a more dynamic and interesting user experience.

14-01-generate-html

👉In the Cloud Shell Editor, edit the render.py file within the portal folder, and replace

def render_assignment_page():
    return ""

with following code snippet:

def render_assignment_page(assignment: str):
    try:
        region = get_next_region()
        llm = VertexAI(model_name="gemini-2.0-flash-001", location=region)
        input_msg = HumanMessage(content=[f"Here the assignment {assignment}"])
        prompt_template = ChatPromptTemplate.from_messages(
            [
                SystemMessage(
                    content=(
                        """
                        As a frontend developer, create HTML to display a student assignment with a creative look and feel. Include the following navigation bar at the top:
                        ```
                        <nav>
                            <a href="/">Home</a>
                            <a href="/quiz">Quizzes</a>
                            <a href="/courses">Courses</a>
                            <a href="/assignment">Assignments</a>
                        </nav>
                        ```
                        Also include these links in the <head> section:
                        ```
                        <link rel="stylesheet" href="{{ url_for('static', filename='style.css') }}">
                        <link rel="preconnect" href="https://fonts.googleapis.com">
                        <link rel="preconnect" href="https://fonts.gstatic.com" crossorigin>
                        <link href="https://fonts.googleapis.com/css2?family=Roboto:wght@400;500&display=swap" rel="stylesheet">
                        ```
                        Do not apply inline styles to the navigation bar.
                        The HTML should display the full assignment content. In its CSS, be creative with the rainbow colors and aesthetic.
                        Make it creative and pretty.
                        The assignment content should be well-structured and easy to read.
                        respond with JUST the html file
                        """
                    )
                ),
                input_msg,
            ]
        )

        prompt = prompt_template.format()

        response = llm.invoke(prompt)

        response = response.replace("```html", "")
        response = response.replace("```", "")
        with open("templates/assignment.html", "w") as f:
            f.write(response)

        print(f"response: {response}")

        return response
    except Exception as e:
        print(f"Error sending message to chatbot: {e}")  # Log this error too!
        return f"Unable to process your request at this time. Due to the following reason: {str(e)}"

It uses the Gemini model to dynamically generate HTML for the assignment. It takes the assignment content as input and uses a prompt to instruct Gemini to create a visually appealing HTML page with a creative style.

Next, we'll create an endpoint that will be triggered whenever a new document is added to the assignment bucket:

👉Within the portal folder, edit the app.py file and add the following code between the "## Add your code here" comments, AFTER the new_teaching_plan function:

## Add your code here

def new_teaching_plan():
        ...
        ...
        ...

    except Exception as e:
        ...
        ...

@app.route('/render_assignment', methods=['POST'])
def render_assignment():
    try:
        data = request.get_json()
        file_name = data.get('name')
        bucket_name = data.get('bucket')

        if not file_name or not bucket_name:
            return jsonify({'error': 'Missing file name or bucket name'}), 400

        storage_client = storage.Client()
        bucket = storage_client.bucket(bucket_name)
        blob = bucket.blob(file_name)
        content = blob.download_as_text()

        print(f"File content: {content}")

        render_assignment_page(content)

        return jsonify({'message': 'Assignment rendered successfully'})

    except Exception as e:
        print(f"Error processing file: {e}")
        return jsonify({'error': 'Error processing file'}), 500

## Add your code here

When triggered, it retrieves the file name and bucket name from the request data, downloads the assignment content from Cloud Storage, and calls the render_assignment_page function to generate the HTML.

👉We'll go ahead and run it locally:

cd ~/aidemy-bootstrap/portal
source env/bin/activate
python app.py

👉From the "Web preview" menu at the top of the Cloud Shell window, select "Preview on port 8080". This will open your application in a new browser tab. Navigate to the Assignment link in the navigation bar. You should see a blank page at this point, which is expected behavior since we haven't yet established the communication bridge between the assignment agent and the portal to dynamically populate the content.

14-02-deployment-overview

Go ahead and stop the script by pressing Ctrl+C.

👉To incorporate these changes and deploy the updated code, rebuild and push the portal agent image:

cd ~/aidemy-bootstrap/portal/
export PROJECT_ID=$(gcloud config get project)
docker build -t gcr.io/${PROJECT_ID}/aidemy-portal .
docker tag gcr.io/${PROJECT_ID}/aidemy-portal us-central1-docker.pkg.dev/${PROJECT_ID}/agent-repository/aidemy-portal
docker push us-central1-docker.pkg.dev/${PROJECT_ID}/agent-repository/aidemy-portal

👉After pushing the new image, redeploy the Cloud Run service. Run the following script to force the Cloud Run update:

export PROJECT_ID=$(gcloud config get project)
export COURSE_BUCKET_NAME=$(gcloud storage buckets list --format="value(name)" | grep aidemy-recap)
gcloud run services update aidemy-portal \
    --region=us-central1 \
    --set-env-vars=GOOGLE_CLOUD_PROJECT=${PROJECT_ID},COURSE_BUCKET_NAME=$COURSE_BUCKET_NAME

👉Now, we'll deploy an Eventarc trigger that listens for any new object created (finalized) in the assignment bucket. This trigger will automatically invoke the /render_assignment endpoint on the portal service when a new assignment file is created.

export PROJECT_ID=$(gcloud config get project)
gcloud projects add-iam-policy-binding $PROJECT_ID \
  --member="serviceAccount:$(gcloud storage service-agent --project $PROJECT_ID)" \
  --role="roles/pubsub.publisher"
export SERVICE_ACCOUNT_NAME=$(gcloud compute project-info describe --format="value(defaultServiceAccount)")
gcloud eventarc triggers create portal-assignment-trigger \
  --location=us-central1 \
  --service-account=$SERVICE_ACCOUNT_NAME \
  --destination-run-service=aidemy-portal \
  --destination-run-region=us-central1 \
  --destination-run-path="/render_assignment" \
  --event-filters="bucket=$ASSIGNMENT_BUCKET" \
  --event-filters="type=google.cloud.storage.object.v1.finalized"

To verify that the trigger was created successfully, navigate to the Eventarc Triggers page in the Google Cloud Console. You should see portal-assignment-trigger listed in the table. Click on the trigger name to view its details. Assignment Trigger

It may take up to 2-3 minutes for the new trigger to become active.

To see the dynamic assignment generation in action, run the following command to find the URL of your planner agent (if you don't have it handy):

gcloud run services list --platform=managed --region=us-central1 --format='value(URL)' | grep planner

Find the URL of your portal agent:

gcloud run services list --platform=managed --region=us-central1 --format='value(URL)' | grep portal

In the planner agent, generate a new teaching plan.

13-02-assignment

After a few minutes (to allow for the audio generation, assignment generation, and HTML rendering to complete), navigate to the student portal.

👉Click on the "Assignment" link in the navigation bar. You should see a newly created assignment with dynamically generated HTML. Each time a new teaching plan is generated, a freshly designed assignment page should appear.

13-02-assignment

Congratulations on completing the Aidemy multi-agent system! You've gained practical experience and valuable insights into:

  • The benefits of multi-agent systems, including modularity, scalability, specialization, and simplified maintenance.
  • The importance of event-driven architectures for building responsive and loosely coupled applications.
  • The strategic use of LLMs, matching the right model to the task and integrating them with tools for real-world impact.
  • Cloud-native development practices using Google Cloud services to create scalable and reliable solutions.
  • The importance of considering data privacy and self-hosting models as an alternative to vendor solutions.

You now have a solid foundation for building sophisticated AI-powered applications on Google Cloud!

15. Challenges and Next Steps

Congratulations on building the Aidemy multi-agent system! You've laid a strong foundation for AI-powered education. Now, let's consider some challenges and potential future enhancements to further expand its capabilities and address real-world needs:

Interactive Learning with Live Q&A:

  • Challenge: Can you leverage Gemini 2's Live API to create a real-time Q&A feature for students? Imagine a virtual classroom where students can ask questions and receive immediate, AI-powered responses.

Automated Assignment Submission and Grading:

  • Challenge: Design and implement a system that allows students to submit assignments digitally and have them automatically graded by AI, with a mechanism to detect and prevent plagiarism. This challenge presents a great opportunity to explore Retrieval Augmented Generation (RAG) to enhance the accuracy and reliability of the grading and plagiarism detection processes.

aidemy-climb

16. Clean Up

Now that we've built and explored our Aidemy multi-agent system, it's time to clean up our Google Cloud environment.

👉Delete Cloud Run services

gcloud run services delete aidemy-planner --region=us-central1 --quiet
gcloud run services delete aidemy-portal --region=us-central1 --quiet
gcloud run services delete courses-agent --region=us-central1 --quiet
gcloud run services delete book-provider --region=us-central1 --quiet
gcloud run services delete assignment-agent --region=us-central1 --quiet
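
The five deletes above can also be expressed as a loop. Shown here as a dry run that only echoes each command so nothing is deleted by accident; drop the `echo` to actually delete:

```shell
# Dry-run cleanup loop over the Cloud Run services created in this lab.
# Remove the `echo` to perform the real deletions.
for svc in aidemy-planner aidemy-portal courses-agent book-provider assignment-agent; do
  echo gcloud run services delete "$svc" --region=us-central1 --quiet
done
```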

👉Delete Eventarc trigger

gcloud eventarc triggers delete plan-topic-trigger --location=us-central1 --quiet
gcloud eventarc triggers delete portal-assignment-trigger --location=us-central1 --quiet
ASSIGNMENT_AGENT_TRIGGER=$(gcloud eventarc triggers list --project="$PROJECT_ID" --location=us-central1 --filter="name:assignment-agent" --format="value(name)")
COURSES_AGENT_TRIGGER=$(gcloud eventarc triggers list --project="$PROJECT_ID" --location=us-central1 --filter="name:courses-agent" --format="value(name)")
gcloud eventarc triggers delete $ASSIGNMENT_AGENT_TRIGGER --location=us-central1 --quiet
gcloud eventarc triggers delete $COURSES_AGENT_TRIGGER --location=us-central1 --quiet

👉Delete Pub/Sub topic

gcloud pubsub topics delete plan --project="$PROJECT_ID" --quiet

👉Delete Cloud SQL instance

gcloud sql instances delete aidemy --quiet

👉Delete Artifact Registry repository

gcloud artifacts repositories delete agent-repository --location=us-central1 --quiet

👉Delete Secret Manager secrets

gcloud secrets delete db-user --quiet
gcloud secrets delete db-pass --quiet
gcloud secrets delete db-name --quiet

👉Delete Compute Engine instance (if created for Deepseek)

gcloud compute instances delete ollama-instance --zone=us-central1-a --quiet

👉Delete the firewall rule for Deepseek instance

gcloud compute firewall-rules delete allow-ollama-11434 --quiet

👉Delete Cloud Storage buckets

export COURSE_BUCKET_NAME=$(gcloud storage buckets list --format="value(name)" | grep aidemy-recap)
export ASSIGNMENT_BUCKET=$(gcloud storage buckets list --format="value(name)" | grep aidemy-assignment)
gcloud storage rm -r gs://$COURSE_BUCKET_NAME
gcloud storage rm -r gs://$ASSIGNMENT_BUCKET
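
After cleanup, you can double-check that nothing is left behind. The helper below is a sketch (the `verify_cleanup` function name is hypothetical; it assumes the same project, `us-central1` region, and bucket naming used throughout the lab):

```shell
# Hypothetical helper: lists any remaining lab resources; all three
# commands should print nothing once cleanup has succeeded.
verify_cleanup() {
  gcloud run services list --region=us-central1 --format="value(SERVICE)"
  gcloud eventarc triggers list --location=us-central1 --format="value(name)"
  gcloud storage buckets list --format="value(name)" | grep -E "aidemy-(recap|assignment)"
}

# Usage:
# verify_cleanup
```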

aidemy-broom