Chat with your PDF - Gen AI App - With Amazon Bedrock, RAG, S3, Langchain and Streamlit [Hands-On]


Introduction

Hi everyone, my name is Girish, and in this article we will build a "Chat with your PDF" application using Amazon Bedrock. The project uses the Amazon Titan embeddings model and Anthropic's Claude model, along with RAG (retrieval-augmented generation), Langchain, Streamlit, Docker, S3, and a FAISS vector index. I'm excited to guide you through this hands-on tutorial as we construct the application.

Demo of What We Will Be Building

We'll start by demonstrating the application's functionality. I have downloaded a PDF on arthritis, which we will use as our knowledge base. We will create vector embeddings for its content and store them in an S3 bucket. From our client application, we can then query the PDF and receive contextually relevant answers based on those embeddings.

Example Query Flow:

  1. Question: What is a joint and how does it work?
    • Response: (Content extracted from the PDF describing joints and their function.)
  2. Question: What are different types of arthritis?
    • Response: (Content extracted from the PDF detailing various types of arthritis.)
  3. Question: Who won the World Series last year?
    • Response: Unfortunately, I do not have enough context to answer this question.

This keeps the application's answers grounded in the provided knowledge base (here, the arthritis document), so it declines to answer questions outside that context.
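That refusal behavior comes from how we prompt the LLM: the model is instructed to answer only from the retrieved context. Below is a minimal sketch of such a prompt template using Langchain; the exact wording and variable names are illustrative assumptions, not the precise prompt used later in the article.

```python
from langchain.prompts import PromptTemplate

# Illustrative prompt: the model is told to rely only on the retrieved
# chunks and to decline when the context does not contain the answer.
prompt_template = """Human: Use only the following context to answer the
question at the end. If the answer is not in the context, say
"Unfortunately, I do not have enough context to answer this question."

<context>
{context}
</context>

Question: {question}

Assistant:"""

PROMPT = PromptTemplate(
    template=prompt_template,
    input_variables=["context", "question"],
)
```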

Architecture Overview

The application consists of two parts:

  1. Admin Site: Allows the administrator to upload the PDF, split it into chunks, vectorize the text content, and store the resulting embeddings in an S3 bucket.
  2. User Site: Users can query the PDF, and the system converts the query into a vector using the same embedding model, performs a similarity search to find relevant document chunks, and uses these chunks to provide context to the large language model (LLM).

Detailed Steps for Building the Admin Site:

  1. Create an S3 Bucket.

  2. Install Requirements:

    • Streamlit
    • PyPDF2
    • Langchain
    • faiss-cpu
    • Boto3
  3. Create a Python Application.

  4. Build a Docker Image.

  5. Access the Application from Browser.

  6. Upload the PDF and Process (a sketch of this flow follows the list):

    • Split PDF into chunks.
    • Create numerical vector embeddings for each chunk.
    • Store these embeddings as an index in an S3 bucket.
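Putting steps 6a–6c together, here is a minimal sketch of the ingestion flow. The bucket name, file paths, and the Titan model ID are placeholder assumptions; the full code follows in the implementation section.

```python
import boto3
from PyPDF2 import PdfReader
from langchain.text_splitter import RecursiveCharacterTextSplitter
from langchain.embeddings import BedrockEmbeddings
from langchain.vectorstores import FAISS

BUCKET_NAME = "my-pdf-index-bucket"  # placeholder -- use your own bucket

bedrock = boto3.client("bedrock-runtime")
embeddings = BedrockEmbeddings(
    model_id="amazon.titan-embed-text-v1", client=bedrock
)

# 6a. Split the PDF into chunks.
reader = PdfReader("arthritis.pdf")
text = "".join(page.extract_text() or "" for page in reader.pages)
splitter = RecursiveCharacterTextSplitter(chunk_size=1000, chunk_overlap=200)
chunks = splitter.split_text(text)

# 6b. Create numerical vector embeddings for each chunk.
vector_store = FAISS.from_texts(chunks, embeddings)

# 6c. Save the FAISS index locally, then upload both index files to S3.
vector_store.save_local(folder_path="/tmp", index_name="my_index")
s3 = boto3.client("s3")
s3.upload_file("/tmp/my_index.faiss", BUCKET_NAME, "my_index.faiss")
s3.upload_file("/tmp/my_index.pkl", BUCKET_NAME, "my_index.pkl")
```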

Detailed Steps for Building the User Site:

  1. Create a Simple Python Application.
  2. Dockerize the Application with Streamlit.
  3. Access the Application from Browser.
  4. Load the Index and Enable User Queries (sketched below):
    • Convert query to vector using the same embedding model.
    • Perform similarity search to find relevant document chunks.
    • Query the LLM and display response based on context retrieved.
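A minimal sketch of that query path, assuming the index files were saved as my_index.faiss / my_index.pkl under the placeholder bucket from the admin sketch above:

```python
import boto3
from langchain.embeddings import BedrockEmbeddings
from langchain.llms import Bedrock
from langchain.chains import RetrievalQA
from langchain.vectorstores import FAISS

BUCKET_NAME = "my-pdf-index-bucket"  # placeholder -- same bucket as the admin site

bedrock = boto3.client("bedrock-runtime")
embeddings = BedrockEmbeddings(
    model_id="amazon.titan-embed-text-v1", client=bedrock
)

# Download the prebuilt index from S3 and load it with the SAME embedding
# model used at ingestion time. (Newer Langchain versions also require
# allow_dangerous_deserialization=True here.)
s3 = boto3.client("s3")
s3.download_file(BUCKET_NAME, "my_index.faiss", "/tmp/my_index.faiss")
s3.download_file(BUCKET_NAME, "my_index.pkl", "/tmp/my_index.pkl")
vector_store = FAISS.load_local(
    folder_path="/tmp", index_name="my_index", embeddings=embeddings
)

# Similarity search + LLM call in one chain: the retriever supplies the
# top-k matching chunks as context for Claude.
llm = Bedrock(
    model_id="anthropic.claude-v2",
    client=bedrock,
    model_kwargs={"max_tokens_to_sample": 512},
)
qa = RetrievalQA.from_chain_type(
    llm=llm,
    chain_type="stuff",
    retriever=vector_store.as_retriever(search_kwargs={"k": 5}),
)
print(qa.run("What is a joint and how does it work?"))
```

If you want the refusal behavior from the demo, the prompt template shown earlier can be passed to the chain via chain_type_kwargs={"prompt": PROMPT}.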

Implementation Details

Admin Site

Setting Up Requirements

Create a requirements.txt with necessary libraries:
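Based on the libraries listed in the steps above, the file would contain something like the following. Versions are left unpinned here; pin them as needed, keeping boto3 recent enough to support the Bedrock runtime client.

```
streamlit
pypdf2
langchain
faiss-cpu
boto3
```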