Build a Q&A App with Multi-Modal RAG using Gemini Pro

1. Introduction

What is RAG

Retrieval Augmented Generation (RAG) is a technique that combines the power of large language models (LLMs) with the ability to retrieve relevant information from external knowledge sources. This means an LLM doesn't just rely on its internal training data, but can also access and incorporate up-to-date, specific information when generating responses.


RAG is gaining popularity for several reasons:

  • Increased accuracy and relevance: RAG allows LLMs to provide more accurate and relevant responses by grounding them in factual information retrieved from external sources. This is particularly useful in scenarios where up-to-date knowledge is crucial, such as answering questions about current events or providing information on specific topics.
  • Reduced hallucinations: LLMs can sometimes generate responses that seem plausible but are actually incorrect or nonsensical. RAG helps mitigate this problem by verifying the information generated against external sources.
  • Greater adaptability: RAG makes LLMs more adaptable to different domains and tasks. By leveraging different knowledge sources, an LLM can be easily customized to provide information on a wide range of topics.
  • Enhanced user experience: RAG can improve the overall user experience by providing more informative, reliable, and relevant responses.

Why Multi-Modal

In today's data-rich world, documents often combine text and images to convey information comprehensively. However, most Retrieval Augmented Generation (RAG) systems overlook the valuable insights locked within images. As multi-modal Large Language Models (LLMs) gain prominence, it's crucial to explore how we can leverage visual content alongside text in RAG, unlocking a deeper understanding of the information landscape.

Two options for Multi-modal RAG

  • Multimodal Embeddings - The multimodal embeddings model generates 1408-dimension vectors based on the input you provide, which can include a combination of image, text, and video data. The image embedding vector and the text embedding vector share the same semantic space and the same dimensionality. Consequently, these vectors can be used interchangeably for use cases like searching images by text or searching videos by image. Have a look at this Demo. (A minimal sketch of this option follows this list.)
  1. Use multimodal embeddings to embed text and images
  2. Retrieve both using similarity search
  3. Pass both the retrieved raw images and text chunks to a multi-modal LLM for answer synthesis
  • Text Embeddings -
  1. Use a multi-modal LLM to generate text summaries of the images
  2. Embed and retrieve the text
  3. Pass the text chunks to an LLM for answer synthesis
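This codelab implements Option 2 (text embeddings of image summaries). For reference, a minimal sketch of Option 1 with the Vertex AI SDK for Python might look like the following; the model name multimodalembedding@001, the image path, and the query text are placeholder assumptions for illustration and are not used later in this codelab.

from vertexai.vision_models import Image, MultiModalEmbeddingModel

# Minimal sketch of Option 1 (assumes vertexai.init() has already been called):
# embed an image and a text query into the same 1408-dimension semantic space,
# so either vector can be stored and searched interchangeably.
embedding_model = MultiModalEmbeddingModel.from_pretrained("multimodalembedding@001")
embeddings = embedding_model.get_embeddings(
    image=Image.load_from_file("./cj/chart.jpg"),        # placeholder image path
    contextual_text="EV / NTM revenue multiples chart",  # placeholder query text
)
image_vector = embeddings.image_embedding  # 1408 floats
text_vector = embeddings.text_embedding    # 1408 floats

Because both vectors live in the same vector space, a single similarity search can retrieve images with text queries. The rest of this codelab uses the summary-based approach instead.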

What is Multi-Vector Retriever

Multi-vector retrieval uses summaries of document sections to retrieve the original content for answer synthesis. It improves the quality of RAG, especially for tasks that rely heavily on tables, graphs, and charts. Find more details in LangChain's blog.
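As a toy illustration of the idea (this is not the codelab's implementation, which uses LangChain's MultiVectorRetriever in Step 5), the summaries are what get embedded and searched, while a shared ID maps each summary back to the raw content that is actually handed to the LLM:

# Toy sketch only: the vector store indexes the summaries, the docstore keeps the
# originals, and a shared doc_id links the two.
summaries = {"doc-1": "Chart of NTM revenue growth for software companies"}
raw_contents = {"doc-1": "<raw text chunk, table, or base64-encoded chart image>"}

def retrieve(best_matching_doc_id: str) -> str:
    # Similarity search runs over the summaries; answer synthesis gets the original.
    return raw_contents[best_matching_doc_id]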

What you'll build

Use case: Developing a question-answering system using Gemini Pro

Imagine you have documents containing complex graphs or diagrams packed with information. You want to extract this data to answer questions or queries.

In this codelab, you'll perform the following:

  • Load data using LangChain document_loaders
  • Generate text summaries using Google's gemini-pro model
  • Generate image summaries using Google's gemini-pro-vision model
  • Create a multi-vector retriever using Google's textembedding-gecko model with Chroma DB as the vector store
  • Develop a multi-modal RAG chain for question answering

2. Before you begin

  1. In the Google Cloud Console, on the project selector page, select or create a Google Cloud project.
  2. Ensure that billing is enabled for your Google Cloud project. Learn how to check if billing is enabled on a project.
  3. Enable all recommended APIs from the Vertex AI dashboard (alternatively, see the gcloud command after this list).
  4. Open the Colab notebook and log in to the same account as your current active Google Cloud account.
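If you prefer the command line, you can alternatively enable the core Vertex AI API with gcloud, assuming the gcloud CLI is installed and authenticated for your project (replace YOUR_PROJECT_ID with your project ID). This is a sketch of an alternative to the dashboard, not a required step:

# Alternative to the dashboard: enable the Vertex AI API via the gcloud CLI
!gcloud services enable aiplatform.googleapis.com --project YOUR_PROJECT_ID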

3. Building Multi-Modal RAG

This codelab uses the Vertex AI SDK for Python and LangChain to demonstrate how to implement 'Option 2' described here with Google Cloud.

You can refer to the full code in the file Multi-modal RAG with Google Cloud from the referenced repository.

4. Step 1: Install and Import dependencies

!pip install -U --quiet langchain langchain_community chromadb langchain-google-vertexai
!pip install --quiet "unstructured[all-docs]" pypdf pillow pydantic lxml matplotlib tiktoken

Enter your project ID and complete the authentication

# TODO: Enter your project ID and region
PROJECT_ID = ""
REGION = "us-central1"

from google.colab import auth
auth.authenticate_user()

Initialise Vertex AI platform

import vertexai
vertexai.init(project=PROJECT_ID, location=REGION)

5. Step 2: Prepare and load data

We use a zip file with a subset of the extracted images and the PDF from this blog post. If you want to follow the full flow, please use the original example.

First download the data

import logging
import zipfile
import requests

logging.basicConfig(level=logging.INFO)

data_url = "https://storage.googleapis.com/benchmarks-artifacts/langchain-docs-benchmarking/cj.zip"
result = requests.get(data_url)
filename = "cj.zip"
with open(filename, "wb") as file:
   file.write(result.content)

with zipfile.ZipFile(filename, "r") as zip_ref:
   zip_ref.extractall()

Load the text content from the document

from langchain_community.document_loaders import PyPDFLoader

loader = PyPDFLoader("./cj/cj.pdf")
docs = loader.load()
tables = []
texts = [d.page_content for d in docs]

Check the content from the first page

texts[0]

You should see the text content of the first page as output.

Total pages in the document

len(texts)

The expected output is the total number of pages in the document.

6. Step 3: Generate Text Summaries

Import required libraries first

from langchain_google_vertexai import VertexAI, ChatVertexAI, VertexAIEmbeddings
from langchain.prompts import PromptTemplate
from langchain_core.messages import AIMessage
from langchain_core.output_parsers import StrOutputParser
from langchain_core.runnables import RunnableLambda

Get Text Summaries

# Generate summaries of text elements
def generate_text_summaries(texts, tables, summarize_texts=False):
   """
   Summarize text elements
   texts: List of str
   tables: List of str
   summarize_texts: Bool to summarize texts
   """

   # Prompt
   prompt_text = """You are an assistant tasked with summarizing tables and text for retrieval. \
   These summaries will be embedded and used to retrieve the raw text or table elements. \
   Give a concise summary of the table or text that is well optimized for retrieval. Table or text: {element} """
   prompt = PromptTemplate.from_template(prompt_text)
   empty_response = RunnableLambda(
       lambda x: AIMessage(content="Error processing document")
   )
   # Text summary chain
   model = VertexAI(
       temperature=0, model_name="gemini-pro", max_output_tokens=1024
   ).with_fallbacks([empty_response])
   summarize_chain = {"element": lambda x: x} | prompt | model | StrOutputParser()

   # Initialize empty summaries
   text_summaries = []
   table_summaries = []

   # Apply to text if texts are provided and summarization is requested
   if texts and summarize_texts:
       text_summaries = summarize_chain.batch(texts, {"max_concurrency": 1})
   elif texts:
       text_summaries = texts

   # Apply to tables if tables are provided
   if tables:
       table_summaries = summarize_chain.batch(tables, {"max_concurrency": 1})

   return text_summaries, table_summaries


# Get text summaries
text_summaries, table_summaries = generate_text_summaries(
   texts, tables, summarize_texts=True
)

text_summaries[0]

The expected output is a concise summary of the first text chunk.

7. Step 4: Generate Image Summaries

Import required libraries first

import base64
import os

from langchain_core.messages import HumanMessage

Generate Image Summaries

def encode_image(image_path):
   """Getting the base64 string"""
   with open(image_path, "rb") as image_file:
       return base64.b64encode(image_file.read()).decode("utf-8")


def image_summarize(img_base64, prompt):
   """Make image summary"""
   model = ChatVertexAI(model_name="gemini-pro-vision", max_output_tokens=1024)

   msg = model(
       [
           HumanMessage(
               content=[
                   {"type": "text", "text": prompt},
                   {
                       "type": "image_url",
                       "image_url": {"url": f"data:image/jpeg;base64,{img_base64}"},
                   },
               ]
           )
       ]
   )
   return msg.content


def generate_img_summaries(path):
   """
   Generate summaries and base64 encoded strings for images
   path: Path to list of .jpg files extracted by Unstructured
   """

   # Store base64 encoded images
   img_base64_list = []

   # Store image summaries
   image_summaries = []

   # Prompt
   prompt = """You are an assistant tasked with summarizing images for retrieval. \
   These summaries will be embedded and used to retrieve the raw image. \
   Give a concise summary of the image that is well optimized for retrieval."""

   # Apply to images
   for img_file in sorted(os.listdir(path)):
       if img_file.endswith(".jpg"):
           img_path = os.path.join(path, img_file)
           base64_image = encode_image(img_path)
           img_base64_list.append(base64_image)
           image_summaries.append(image_summarize(base64_image, prompt))

   return img_base64_list, image_summaries


# Image summaries
img_base64_list, image_summaries = generate_img_summaries("./cj")

len(img_base64_list)

len(image_summaries)

image_summaries[0]

You should see a concise summary of the first image as output.

8. Step 5: Build Multi-Vector Retrieval

Let's save the generated text and image summaries to a Chroma DB vector store and build a multi-vector retriever on top of it.

Import required libraries

import uuid
from langchain.retrievers.multi_vector import MultiVectorRetriever
from langchain.storage import InMemoryStore
from langchain_community.vectorstores import Chroma
from langchain_core.documents import Document

Create the Multi-Vector Retriever

def create_multi_vector_retriever(
   vectorstore, text_summaries, texts, table_summaries, tables, image_summaries, images
):
   """
   Create retriever that indexes summaries, but returns raw images or texts
   """

   # Initialize the storage layer
   store = InMemoryStore()
   id_key = "doc_id"

   # Create the multi-vector retriever
   retriever = MultiVectorRetriever(
       vectorstore=vectorstore,
       docstore=store,
       id_key=id_key,
   )

   # Helper function to add documents to the vectorstore and docstore
   def add_documents(retriever, doc_summaries, doc_contents):
       doc_ids = [str(uuid.uuid4()) for _ in doc_contents]
       summary_docs = [
           Document(page_content=s, metadata={id_key: doc_ids[i]})
           for i, s in enumerate(doc_summaries)
       ]
       retriever.vectorstore.add_documents(summary_docs)
       retriever.docstore.mset(list(zip(doc_ids, doc_contents)))

   # Add texts, tables, and images
   # Check that text_summaries is not empty before adding
   if text_summaries:
       add_documents(retriever, text_summaries, texts)
   # Check that table_summaries is not empty before adding
   if table_summaries:
       add_documents(retriever, table_summaries, tables)
   # Check that image_summaries is not empty before adding
   if image_summaries:
       add_documents(retriever, image_summaries, images)

   return retriever


# The vectorstore to use to index the summaries
vectorstore = Chroma(
   collection_name="mm_rag_cj_blog",
   embedding_function=VertexAIEmbeddings(model_name="textembedding-gecko@latest"),
)

# Create retriever
retriever_multi_vector_img = create_multi_vector_retriever(
   vectorstore,
   text_summaries,
   texts,
   table_summaries,
   tables,
   image_summaries,
   img_base64_list,
)
 

9. Step 6: Building Multi-Modal RAG

  1. Define utility functions
import io
import re

from IPython.display import HTML, display
from langchain_core.runnables import RunnableLambda, RunnablePassthrough
from PIL import Image


def plt_img_base64(img_base64):
   """Disply base64 encoded string as image"""
   # Create an HTML img tag with the base64 string as the source
   image_html = f'<img src="data:image/jpeg;base64,{img_base64}" />'
   # Display the image by rendering the HTML
   display(HTML(image_html))


def looks_like_base64(sb):
   """Check if the string looks like base64"""
   return re.match("^[A-Za-z0-9+/]+[=]{0,2}$", sb) is not None


def is_image_data(b64data):
   """
   Check if the base64 data is an image by looking at the start of the data
   """
   image_signatures = {
       b"\xFF\xD8\xFF": "jpg",
       b"\x89\x50\x4E\x47\x0D\x0A\x1A\x0A": "png",
       b"\x47\x49\x46\x38": "gif",
       b"\x52\x49\x46\x46": "webp",
   }
   try:
       header = base64.b64decode(b64data)[:8]  # Decode and get the first 8 bytes
       for sig, format in image_signatures.items():
           if header.startswith(sig):
               return True
       return False
   except Exception:
       return False


def resize_base64_image(base64_string, size=(128, 128)):
   """
   Resize an image encoded as a Base64 string
   """
   # Decode the Base64 string
   img_data = base64.b64decode(base64_string)
   img = Image.open(io.BytesIO(img_data))

   # Resize the image
   resized_img = img.resize(size, Image.LANCZOS)

   # Save the resized image to a bytes buffer
   buffered = io.BytesIO()
   resized_img.save(buffered, format=img.format)

   # Encode the resized image to Base64
   return base64.b64encode(buffered.getvalue()).decode("utf-8")


def split_image_text_types(docs):
   """
   Split base64-encoded images and texts
   """
   b64_images = []
   texts = []
   for doc in docs:
       # Check if the document is of type Document and extract page_content if so
       if isinstance(doc, Document):
           doc = doc.page_content
       if looks_like_base64(doc) and is_image_data(doc):
           doc = resize_base64_image(doc, size=(1300, 600))
           b64_images.append(doc)
       else:
           texts.append(doc)
   if len(b64_images) > 0:
       return {"images": b64_images[:1], "texts": []}
   return {"images": b64_images, "texts": texts}
  2. Define a domain-specific image prompt
def img_prompt_func(data_dict):
   """
   Join the context into a single string
   """
   formatted_texts = "\n".join(data_dict["context"]["texts"])
   messages = []

   # Adding the text for analysis
   text_message = {
       "type": "text",
       "text": (
           "You are financial analyst tasking with providing investment advice.\n"
           "You will be given a mixed of text, tables, and image(s) usually of charts or graphs.\n"
           "Use this information to provide investment advice related to the user question. \n"
           f"User-provided question: {data_dict['question']}\n\n"
           "Text and / or tables:\n"
           f"{formatted_texts}"
       ),
   }
   messages.append(text_message)
   # Adding image(s) to the messages if present
   if data_dict["context"]["images"]:
       for image in data_dict["context"]["images"]:
           image_message = {
               "type": "image_url",
               "image_url": {"url": f"data:image/jpeg;base64,{image}"},
           }
           messages.append(image_message)
   return [HumanMessage(content=messages)]

  3. Define the Multi-Modal RAG chain
def multi_modal_rag_chain(retriever):
   """
   Multi-modal RAG chain
   """

   # Multi-modal LLM
   model = ChatVertexAI(
       temperature=0, model_name="gemini-pro-vision", max_output_tokens=1024
   )

   # RAG pipeline
   chain = (
       {
           "context": retriever | RunnableLambda(split_image_text_types),
           "question": RunnablePassthrough(),
       }
       | RunnableLambda(img_prompt_func)
       | model
       | StrOutputParser()
   )

   return chain


# Create RAG chain
chain_multimodal_rag = multi_modal_rag_chain(retriever_multi_vector_img)

10. Step 7: Test your queries

  1. Retrieve relevant documents
query = "What are the EV / NTM and NTM rev growth for MongoDB, Cloudflare, and Datadog?"
docs = retriever_multi_vector_img.get_relevant_documents(query, limit=1)

# We get relevant docs
len(docs)

docs
The output is the list of retrieved raw documents (text chunks and base64-encoded image strings); exact results may vary.

Display one of the retrieved images (here, the fourth retrieved document):

plt_img_base64(docs[3])
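Optionally, you can check how many of the retrieved documents are images by reusing the helpers defined in Step 6. This quick check is an illustrative addition, not part of the original flow; it relies on the docstore holding raw strings, as it does in this codelab:

# Count how many retrieved items are base64-encoded images vs. plain text chunks
image_docs = [d for d in docs if looks_like_base64(d) and is_image_data(d)]
print(f"{len(image_docs)} of {len(docs)} retrieved documents are images")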

  2. Run our RAG chain on the same query
result = chain_multimodal_rag.invoke(query)

from IPython.display import Markdown as md
md(result)

The chain returns Gemini's answer to the query, grounded in the retrieved text and image context (output may vary when you execute the code).

11. Clean up

To avoid incurring charges to your Google Cloud account for the resources used in this codelab, follow these steps:

  1. In the Google Cloud console, go to the Manage resources page.
  2. In the project list, select the project that you want to delete, and then click Delete.
  3. In the dialog, type the project ID, and then click Shut down to delete the project.

12. Congratulations

Congratulations! You have successfully built a Multi-Modal RAG application using Gemini.