The Ultimate AI Agent Guide: How I Built & Deployed an AI Agent with LangGraph, LangServe & AWS

Introduction

Building AI agents is one thing, but deploying those agents into real-world applications is an entirely different challenge. In this guide, we'll take a step-by-step approach to deploying LangGraph AI agents using LangServe and AWS, without relying on LangGraph's managed cloud offering.

Throughout this journey, I'll share my experience from developing my AI startup, Paris. This article focuses on deploying an AI agent that I built in a previous tutorial, converting it into an API that any front-end application can call.

Getting Started

To kick off, we will set up our project in Visual Studio Code. I've already created a new folder for this project. The first step involves creating a Python virtual environment to keep all the newly installed packages contained. You can initiate a new virtual environment by running:

python -m venv myenv

After the virtual environment is set up, you can activate it with this command:

source myenv/bin/activate  # On Unix or MacOS
myenv\Scripts\activate     # On Windows

Next, you'll need to install all required packages. I have prepared a requirements.txt file containing all the necessary dependencies. You can install them using:

pip install -r requirements.txt
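
The exact contents depend on your agent, but for this stack the file would plausibly include entries along these lines (package names assumed, versions unpinned for brevity):

langchain
langchain-cli
langgraph
langserve[all]
langchain-openai
langchain-community
fastapi
uvicorn[standard]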

Setting Up the LangServe Application

With all dependencies installed, you can initiate your LangServe application by executing:

langchain app new

This command scaffolds several folders and files automatically; the most important is app/server.py, which will contain the logic for our AI agent.
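
Assuming you named the project my-app when prompted, the scaffold typically looks something like this (exact contents vary by langchain-cli version):

my-app/
├── app/
│   ├── __init__.py
│   └── server.py      # our agent and API logic goes here
├── packages/          # optional add-on LangServe packages
├── Dockerfile         # used later by AWS Copilot to build the image
├── pyproject.toml     # Poetry dependency manifest
└── README.md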

Managing Dependencies with Poetry

LangServe relies on Poetry to manage dependencies for deployment. To add your packages to Poetry, you can run:

poetry add <package_name>
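
For example, to declare this agent's core dependencies (package names assumed; they mirror the server.py sketch later in this guide):

poetry add langgraph langserve langchain-openai langchain-community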

It's essential to check for any dependency resolution conflicts. If you encounter issues, you may need to constrain your Python version to a compatible range, here at least 3.11 and below 3.13. After specifying the Python version in the pyproject.toml file, refresh the lock file with:

poetry update
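
For reference, the version constraint in pyproject.toml could look like this (an excerpt; leave the rest of the file as generated):

[tool.poetry.dependencies]
python = ">=3.11,<3.13"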

Building the AI Agent

Now, we can shift our focus to building our AI agent. The initial step involves copying the code from my previous AI agent project and modifying it to work with LangServe and AWS.

Here's a high-level overview of the code's functionality (a minimal sketch of the file follows this list):

  1. API Key Definition: This is the key used for web searches.
  2. FastAPI Initialization: The FastAPI app is initiated.
  3. Route Definition: A route is created that directs users to documentation on various API routes.
  4. Agent Logic: The AI agent serves a solar panel company: it answers questions about solar panels and calculates potential savings based on the user's energy costs.
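
To make this concrete, here is a minimal sketch of what app/server.py could look like. It is not the exact code from the original tutorial: the Tavily search tool, the OpenAI model, and the 70% offset figure in calculate_savings are illustrative assumptions.

import os

from fastapi import FastAPI
from fastapi.responses import RedirectResponse
from langchain_community.tools.tavily_search import TavilySearchResults
from langchain_core.tools import tool
from langchain_openai import ChatOpenAI
from langgraph.checkpoint.memory import MemorySaver
from langgraph.prebuilt import create_react_agent

# 1. API key definition: the key used by the web-search tool (Tavily assumed).
os.environ.setdefault("TAVILY_API_KEY", "your-tavily-key")  # placeholder

# 2. FastAPI initialization.
app = FastAPI(title="Solar Agent API")

# 3. Route definition: send visitors to the interactive API documentation.
@app.get("/")
async def root() -> RedirectResponse:
    return RedirectResponse("/docs")

# 4. Agent logic: a web-search tool plus a savings calculator.
@tool
def calculate_savings(monthly_cost: float) -> str:
    """Estimate yearly savings from solar panels for a given monthly energy bill."""
    yearly_savings = monthly_cost * 12 * 0.70  # illustrative 70% offset assumption
    return f"Estimated savings: ${yearly_savings:,.2f} per year."

search = TavilySearchResults(max_results=2)
agent = create_react_agent(
    ChatOpenAI(model="gpt-4o-mini"),  # any chat model works, e.g. Bedrock via langchain-aws
    tools=[search, calculate_savings],
    checkpointer=MemorySaver(),  # stores per-thread conversation state in memory
)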

Integrating API Functionality

The next step is defining our API structure. The API accepts a JSON request containing the user's question and a thread ID that carries the conversation context. A /generate route is established to handle interactions with the AI agent; a sketch of it follows.
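
Continuing the sketch above, the request schema and route could look like this (the question and thread_id fields come from the article; the response shape is my assumption):

from pydantic import BaseModel

class GenerateRequest(BaseModel):
    question: str   # the user's question
    thread_id: str  # identifies which conversation's context to reuse

# The /generate route forwards the question to the agent under the given thread.
@app.post("/generate")
async def generate(request: GenerateRequest) -> dict:
    config = {"configurable": {"thread_id": request.thread_id}}
    result = agent.invoke({"messages": [("user", request.question)]}, config)
    return {"answer": result["messages"][-1].content}  # the last message is the agent's reply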

With this structure in place, we can now test the AI agent locally. Before testing, ensure Docker is running, then start the development server by executing:

langchain serve

You can then open the API documentation page at http://localhost:8000/docs (LangServe's default local address) and test various scenarios.
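
You can also exercise the route from a second terminal; a quick smoke test might look like this (the payload matches the request schema above):

curl -X POST http://localhost:8000/generate \
  -H "Content-Type: application/json" \
  -d '{"question": "How much would I save with a $200 monthly bill?", "thread_id": "demo-1"}'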

Deploying the AI Agent to AWS

Having confirmed that the local setup works correctly, we can deploy our AI agent to AWS using AWS Copilot. First, ensure your AWS credentials are configured correctly on your machine.

To initiate the deployment, run the following command:

copilot init

Follow the prompts to set up your application: you will typically name the app, choose a workload type (a Load Balanced Web Service fits this API), name the service, and point Copilot at the Dockerfile that langchain app new generated.

Handling Deployment Issues

If any issues arise during deployment (such as API key configurations), you can debug using:

copilot svc logs

By checking the logs, you can identify problems related to environment variables or permissions. Make necessary adjustments in the manifest.yml file.
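
If the problem is a missing API key, the cleanest fix is usually to pass it as a secret rather than a plain variable; an excerpt of manifest.yml could look like this (the service name and SSM path are illustrative, and copilot secret init creates the parameter for you):

name: solar-agent
type: Load Balanced Web Service

variables:                  # plain environment variables
  LOG_LEVEL: info

secrets:                    # resolved from AWS SSM Parameter Store at runtime
  TAVILY_API_KEY: /copilot/solar-agent/prod/secrets/TAVILY_API_KEY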

Finally, run:

copilot deploy

Once the deployment completes, you will receive a unique URL for your API, which you can utilize within any application.

Conclusion

In this guide, we walked through deploying an AI agent using LangGraph, LangServe, and AWS Copilot. We began by setting up the local environment and the agent's logic, moved on to local testing, and ultimately deployed our API to AWS.

Should you have any questions or thoughts, please don't hesitate to leave a comment. If you enjoyed this guide, consider liking or subscribing for more content!


FAQ

1. What are LangGraph and LangServe?
LangGraph is a framework for building stateful, graph-based AI agents, and LangServe exposes those agents (and other LangChain runnables) as REST APIs that can be hosted on infrastructure such as AWS.

2. Why use AWS for deployment?
AWS offers reliable infrastructure, scalability, and integration with various services like Bedrock for language models, making it a preferred choice for hosting AI applications.

3. How do I manage dependencies when working with LangServe?
Dependencies are managed using Poetry, which helps avoid conflicts and streamline the installation process for the libraries needed.

4. Can I test my AI agent locally?
Yes, you can run your AI agent locally using Docker and FastAPI to ensure functionality before deployment.

5. What should I do if I face deployment issues?
You can check deployment logs using copilot svc logs to troubleshoot issues related to configuration or permissions.