Build a Multimodal RAG-based Q&A Application with Gemini Pro

1. Introduction

What is RAG

Retrieval-augmented generation (RAG) is a technique that combines the power of large language models (LLMs) with the ability to retrieve relevant information from external knowledge sources. This means an LLM does not rely solely on its internal training data; it can also access and incorporate up-to-date, domain-specific information when generating a response.
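To make this pattern concrete, here is a minimal sketch of the retrieve-augment-generate loop. The vector_store and llm objects are hypothetical stand-ins for any LangChain-compatible vector store and chat model; this illustrates the flow rather than the exact code used later in this codelab.

def answer_with_rag(question, vector_store, llm, k=3):
    # 1. Retrieve: find the k chunks most similar to the question
    docs = vector_store.similarity_search(question, k=k)
    context = "\n\n".join(d.page_content for d in docs)
    # 2. Augment: ground the prompt in the retrieved context
    prompt = (
        "Answer the question using only the context below.\n\n"
        f"Context:\n{context}\n\nQuestion: {question}"
    )
    # 3. Generate: the LLM synthesizes an answer from the grounded prompt
    return llm.invoke(prompt)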


RAG has become popular for a number of reasons:

  • Improved accuracy and relevance: RAG grounds LLM responses in factual information retrieved from external sources, producing more accurate and relevant answers. This is especially useful when up-to-date knowledge matters, such as answering questions about current events or providing information on specific topics.
  • Reduced hallucination: LLMs can sometimes generate responses that sound plausible but are actually incorrect or nonsensical. RAG helps mitigate this problem by allowing generated information to be checked against external sources.
  • Greater adaptability: RAG makes LLMs easier to adapt to different domains and tasks. By plugging in different knowledge sources, an LLM can easily be customized to provide information on a wide range of topics.
  • Better user experience: RAG delivers responses that are more informative, reliable, and relevant, improving the overall user experience.

Why go multimodal

In today's data-rich world, documents often combine text and images to convey information comprehensively, yet most retrieval-augmented generation (RAG) systems ignore the valuable insights locked inside images. As multimodal large language models (LLMs) gain prominence, it is worth exploring how to leverage visual content alongside text in RAG for a deeper understanding of the information landscape.

Two options for multimodal RAG

  • Multimodal embeddings - A multimodal embedding model generates 1408-dimension vectors based on the input you provide, which can include a combination of image, text, and video data. The image embedding vectors and the text embedding vectors live in the same semantic space with the same dimensionality, so these vectors can be used interchangeably for use cases like searching images by text or searching videos by image. Check out this demo; a minimal sketch of this option follows the list below.
  1. Embed text and images with the multimodal embedding model
  2. Retrieve both via similarity search
  3. Pass the retrieved raw images and text chunks to a multimodal LLM for answer synthesis
  • Text embeddings -
  1. Use a multimodal LLM to generate text summaries of the images
  2. Embed and retrieve the text
  3. Pass the text chunks to an LLM for answer synthesis
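As a minimal sketch of Option 1 (this codelab itself implements Option 2), the Vertex AI multimodal embedding model can place an image and a text query in the same vector space. The project ID, file path, and query text below are hypothetical placeholders:

import vertexai
from vertexai.vision_models import Image, MultiModalEmbeddingModel

vertexai.init(project="your-project-id", location="us-central1")
model = MultiModalEmbeddingModel.from_pretrained("multimodalembedding@001")

# Embed an image and a text query into the same 1408-dimensional space
emb = model.get_embeddings(
    image=Image.load_from_file("chart.png"),       # hypothetical image file
    contextual_text="EV / NTM revenue multiples",  # hypothetical query
)
image_vector = emb.image_embedding  # 1408 floats
text_vector = emb.text_embedding    # 1408 floats
# Because both vectors share one semantic space, you can store image
# vectors in a vector DB and query them with text vectors (steps 1-2),
# then pass the retrieved raw images to a multimodal LLM (step 3).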

What is a multi-vector retriever

A multi-vector retriever indexes summaries of document sections but returns the raw content for answer synthesis. This improves RAG quality, especially for information-dense elements such as tables, figures, and charts; Step 5 of this codelab builds one. For more details, see the LangChain blog.

What you'll build

Use case: develop a question-answering system with Gemini Pro

Imagine you have documents containing complex charts and diagrams packed with information, and you want to extract this data to answer questions or queries.

In this codelab, you will:

  • Load the data with LangChain document_loaders
  • Generate text summaries with Google's gemini-pro model
  • Generate image summaries with Google's gemini-pro-vision model
  • Create a multi-vector retriever using Google's textembedding-gecko model, with Chroma DB as the vector store
  • Build a multimodal RAG chain for question answering

2. Before you begin

  1. On the project selector page in the Google Cloud Console, select or create a Google Cloud project
  2. Make sure that billing is enabled for your Google Cloud project. Learn how to check whether billing is enabled on a project
  3. Enable all recommended APIs from the Vertex AI dashboard (a CLI alternative is sketched below)
  4. Open a Colab notebook and sign in with the same account you use for your currently active Google Cloud account.
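If you prefer the command line, the core Vertex AI API can also be enabled from a notebook cell. This is a sketch, assuming the gcloud CLI is installed and authenticated (as it is in Colab); replace the project ID with your own:

!gcloud config set project your-project-id
!gcloud services enable aiplatform.googleapis.com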

3. Build the multimodal RAG

This codelab uses the Vertex AI SDK for Python and LangChain to demonstrate how to implement "Option 2" described above with Google Cloud.

You can find the complete code in the file Multi-modal RAG with Google Cloud in the referenced repository.

4. Step 1: Install and import dependencies

!pip install -U --quiet langchain langchain_community chromadb langchain-google-vertexai
!pip install --quiet "unstructured[all-docs]" pypdf pillow pydantic lxml matplotlib tiktoken

Enter your project ID and authenticate

# TODO: Enter your project ID and region
PROJECT_ID = ""
REGION = "us-central1"

from google.colab import auth
auth.authenticate_user()

Initialize the Vertex AI platform

import vertexai
vertexai.init(project=PROJECT_ID, location=REGION)

5. Step 2: Prepare and load the data

We use a ZIP file containing a PDF and the images extracted from a blog post. If you want to follow the full flow, use the original example.

Start by downloading the data

import logging
import zipfile
import requests

logging.basicConfig(level=logging.INFO)

data_url = "https://storage.googleapis.com/benchmarks-artifacts/langchain-docs-benchmarking/cj.zip"
result = requests.get(data_url)
filename = "cj.zip"
with open(filename, "wb") as file:
   file.write(result.content)

with zipfile.ZipFile(filename, "r") as zip_ref:
   zip_ref.extractall()

Load the text content from the document

from langchain_community.document_loaders import PyPDFLoader

loader = PyPDFLoader("./cj/cj.pdf")
docs = loader.load()
tables = []
texts = [d.page_content for d in docs]

Inspect the content of the first page

texts[0]

You should see the raw text of the first page as output.

Count the total number of pages in the document

len(texts)

The expected output is the number of pages in the document.

6. Step 3: Generate text summaries

First, import the required libraries

from langchain_google_vertexai import VertexAI, ChatVertexAI, VertexAIEmbeddings
from langchain.prompts import PromptTemplate
from langchain_core.messages import AIMessage
from langchain_core.output_parsers import StrOutputParser
from langchain_core.runnables import RunnableLambda

Get the text summaries

# Generate summaries of text elements
def generate_text_summaries(texts, tables, summarize_texts=False):
   """
   Summarize text elements
   texts: List of str
   tables: List of str
   summarize_texts: Bool to summarize texts
   """

   # Prompt
   prompt_text = """You are an assistant tasked with summarizing tables and text for retrieval. \
   These summaries will be embedded and used to retrieve the raw text or table elements. \
   Give a concise summary of the table or text that is well optimized for retrieval. Table or text: {element} """
   prompt = PromptTemplate.from_template(prompt_text)
   empty_response = RunnableLambda(
       lambda x: AIMessage(content="Error processing document")
   )
   # Text summary chain
   model = VertexAI(
       temperature=0, model_name="gemini-pro", max_output_tokens=1024
   ).with_fallbacks([empty_response])
   summarize_chain = {"element": lambda x: x} | prompt | model | StrOutputParser()

   # Initialize empty summaries
   text_summaries = []
   table_summaries = []

   # Apply to text if texts are provided and summarization is requested
   if texts and summarize_texts:
       text_summaries = summarize_chain.batch(texts, {"max_concurrency": 1})
   elif texts:
       text_summaries = texts

   # Apply to tables if tables are provided
   if tables:
       table_summaries = summarize_chain.batch(tables, {"max_concurrency": 1})

   return text_summaries, table_summaries


# Get text summaries
text_summaries, table_summaries = generate_text_summaries(
   texts, tables, summarize_texts=True
)

text_summaries[0]

The expected output is a concise, retrieval-optimized summary of the first text chunk.

7. Step 4: Generate image summaries

First, import the required libraries

import base64
import os

from langchain_core.messages import HumanMessage

Generate the image summaries

def encode_image(image_path):
   """Getting the base64 string"""
   with open(image_path, "rb") as image_file:
       return base64.b64encode(image_file.read()).decode("utf-8")


def image_summarize(img_base64, prompt):
   """Make image summary"""
   model = ChatVertexAI(model_name="gemini-pro-vision", max_output_tokens=1024)

   msg = model.invoke(
       [
           HumanMessage(
               content=[
                   {"type": "text", "text": prompt},
                   {
                       "type": "image_url",
                       "image_url": {"url": f"data:image/jpeg;base64,{img_base64}"},
                   },
               ]
           )
       ]
   )
   return msg.content


def generate_img_summaries(path):
   """
   Generate summaries and base64 encoded strings for images
   path: Path to list of .jpg files extracted by Unstructured
   """

   # Store base64 encoded images
   img_base64_list = []

   # Store image summaries
   image_summaries = []

   # Prompt
   prompt = """You are an assistant tasked with summarizing images for retrieval. \
   These summaries will be embedded and used to retrieve the raw image. \
   Give a concise summary of the image that is well optimized for retrieval."""

   # Apply to images
   for img_file in sorted(os.listdir(path)):
       if img_file.endswith(".jpg"):
           img_path = os.path.join(path, img_file)
           base64_image = encode_image(img_path)
           img_base64_list.append(base64_image)
           image_summaries.append(image_summarize(base64_image, prompt))

   return img_base64_list, image_summaries


# Image summaries
img_base64_list, image_summaries = generate_img_summaries("./cj")

len(img_base64_list)

len(image_summaries)

image_summaries[0]

You should see the number of images, the number of summaries, and a concise summary of the first image.

8. Step 5: Build the multi-vector retriever

Let's save the text and image summaries we generated to a Chroma vector store and build the retriever on top of it.

Import the required libraries

import uuid
from langchain.retrievers.multi_vector import MultiVectorRetriever
from langchain.storage import InMemoryStore
from langchain_community.vectorstores import Chroma
from langchain_core.documents import Document

Create the multi-vector retriever

def create_multi_vector_retriever(
   vectorstore, text_summaries, texts, table_summaries, tables, image_summaries, images
):
   """
   Create retriever that indexes summaries, but returns raw images or texts
   """

   # Initialize the storage layer
   store = InMemoryStore()
   id_key = "doc_id"

   # Create the multi-vector retriever
   retriever = MultiVectorRetriever(
       vectorstore=vectorstore,
       docstore=store,
       id_key=id_key,
   )

   # Helper function to add documents to the vectorstore and docstore
   def add_documents(retriever, doc_summaries, doc_contents):
       doc_ids = [str(uuid.uuid4()) for _ in doc_contents]
       summary_docs = [
           Document(page_content=s, metadata={id_key: doc_ids[i]})
           for i, s in enumerate(doc_summaries)
       ]
       retriever.vectorstore.add_documents(summary_docs)
       retriever.docstore.mset(list(zip(doc_ids, doc_contents)))

   # Add texts, tables, and images
   # Check that text_summaries is not empty before adding
   if text_summaries:
       add_documents(retriever, text_summaries, texts)
   # Check that table_summaries is not empty before adding
   if table_summaries:
       add_documents(retriever, table_summaries, tables)
   # Check that image_summaries is not empty before adding
   if image_summaries:
       add_documents(retriever, image_summaries, images)

   return retriever


# The vectorstore to use to index the summaries
vectorstore = Chroma(
   collection_name="mm_rag_cj_blog",
   embedding_function=VertexAIEmbeddings(model_name="textembedding-gecko@latest"),
)

# Create retriever
retriever_multi_vector_img = create_multi_vector_retriever(
   vectorstore,
   text_summaries,
   texts,
   table_summaries,
   tables,
   image_summaries,
   img_base64_list,
)
 

9. Step 6: Build the multimodal RAG

  1. Define utility functions
import io
import re

from IPython.display import HTML, display
from langchain_core.runnables import RunnableLambda, RunnablePassthrough
from PIL import Image


def plt_img_base64(img_base64):
   """Disply base64 encoded string as image"""
   # Create an HTML img tag with the base64 string as the source
   image_html = f'<img src="data:image/jpeg;base64,{img_base64}" />'
   # Display the image by rendering the HTML
   display(HTML(image_html))


def looks_like_base64(sb):
   """Check if the string looks like base64"""
   return re.match("^[A-Za-z0-9+/]+[=]{0,2}$", sb) is not None


def is_image_data(b64data):
   """
   Check if the base64 data is an image by looking at the start of the data
   """
   image_signatures = {
       b"\xFF\xD8\xFF": "jpg",
       b"\x89\x50\x4E\x47\x0D\x0A\x1A\x0A": "png",
       b"\x47\x49\x46\x38": "gif",
       b"\x52\x49\x46\x46": "webp",
   }
   try:
       header = base64.b64decode(b64data)[:8]  # Decode and get the first 8 bytes
       for sig, format in image_signatures.items():
           if header.startswith(sig):
               return True
       return False
   except Exception:
       return False


def resize_base64_image(base64_string, size=(128, 128)):
   """
   Resize an image encoded as a Base64 string
   """
   # Decode the Base64 string
   img_data = base64.b64decode(base64_string)
   img = Image.open(io.BytesIO(img_data))

   # Resize the image
   resized_img = img.resize(size, Image.LANCZOS)

   # Save the resized image to a bytes buffer
   buffered = io.BytesIO()
   resized_img.save(buffered, format=img.format)

   # Encode the resized image to Base64
   return base64.b64encode(buffered.getvalue()).decode("utf-8")


def split_image_text_types(docs):
   """
   Split base64-encoded images and texts
   """
   b64_images = []
   texts = []
   for doc in docs:
       # Check if the document is of type Document and extract page_content if so
       if isinstance(doc, Document):
           doc = doc.page_content
       if looks_like_base64(doc) and is_image_data(doc):
           doc = resize_base64_image(doc, size=(1300, 600))
           b64_images.append(doc)
       else:
           texts.append(doc)
   if len(b64_images) > 0:
       return {"images": b64_images[:1], "texts": []}
   return {"images": b64_images, "texts": texts}
  2. Define a domain-specific image prompt
def img_prompt_func(data_dict):
   """
   Join the context into a single string
   """
   formatted_texts = "\n".join(data_dict["context"]["texts"])
   messages = []

   # Adding the text for analysis
   text_message = {
       "type": "text",
       "text": (
           "You are financial analyst tasking with providing investment advice.\n"
           "You will be given a mixed of text, tables, and image(s) usually of charts or graphs.\n"
           "Use this information to provide investment advice related to the user question. \n"
           f"User-provided question: {data_dict['question']}\n\n"
           "Text and / or tables:\n"
           f"{formatted_texts}"
       ),
   }
   messages.append(text_message)
   # Adding image(s) to the messages if present
   if data_dict["context"]["images"]:
       for image in data_dict["context"]["images"]:
           image_message = {
               "type": "image_url",
               "image_url": {"url": f"data:image/jpeg;base64,{image}"},
           }
           messages.append(image_message)
   return [HumanMessage(content=messages)]

  3. Define the multimodal RAG chain
def multi_modal_rag_chain(retriever):
   """
   Multi-modal RAG chain
   """

   # Multi-modal LLM
   model = ChatVertexAI(
       temperature=0, model_name="gemini-pro-vision", max_output_tokens=1024
   )

   # RAG pipeline
   chain = (
       {
           "context": retriever | RunnableLambda(split_image_text_types),
           "question": RunnablePassthrough(),
       }
       | RunnableLambda(img_prompt_func)
       | model
       | StrOutputParser()
   )

   return chain


# Create RAG chain
chain_multimodal_rag = multi_modal_rag_chain(retriever_multi_vector_img)

10. Step 7: Test a query

  1. Retrieve the relevant documents
query = "What are the EV / NTM and NTM rev growth for MongoDB, Cloudflare, and Datadog?"
docs = retriever_multi_vector_img.get_relevant_documents(query, limit=1)

# We get relevant docs
len(docs)

docs
You should see output similar to this: the retriever returns a mix of text chunks and base64-encoded image strings.

plt_img_base64(docs[3])

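Note that the position of the image within docs can vary between runs, so a hard-coded index like docs[3] may not always point at an image. Here is a small sketch that reuses the Step 6 helpers to find and display any retrieved image:

# Display every retrieved document that is a base64-encoded image,
# regardless of its position in the result list
for d in docs:
    content = d.page_content if isinstance(d, Document) else d
    if looks_like_base64(content) and is_image_data(content):
        plt_img_base64(content)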

  2. Run the RAG chain on the same query
result = chain_multimodal_rag.invoke(query)

from IPython.display import Markdown as md
md(result)

Sample output (yours may differ when you run the code): an answer to the question grounded in the retrieved chart and text.

11. Clean up

To avoid incurring charges to your Google Cloud account for the resources used in this codelab, follow these steps:

  1. In the Google Cloud console, go to the Manage resources page.
  2. In the project list, select the project you want to delete, then click Delete.
  3. In the dialog, type the project ID, then click Shut down to delete the project (or use the CLI sketch below).
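Alternatively, if the project was created solely for this codelab, you can shut it down from a notebook cell. This is a sketch, assuming the gcloud CLI is authenticated; replace the project ID with your own:

!gcloud projects delete your-project-id --quiet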

12. Congratulations

Congratulations! You have successfully built a multimodal RAG application with Gemini.