Build Anything with Local Agents, Here’s How

Introduction

In this guide, we will explore how to build and run AI agents locally using Ollama and CrewAI. By setting up these tools, you can eliminate API costs, keep your conversations private, and use unrestricted open-source models for advanced AI capabilities. If you want to stay ahead in the AI game, keep reading.

Step 1: Download and Install Ollama

The first step is to download Ollama, which allows you to run open-source models on your computer. Follow these steps:

  1. Visit ollama.com and click on Download.
  2. Select your operating system (I'm using Mac for this guide).
  3. Once downloaded, right-click and unzip the application.
  4. Run the app, and you should see the Ollama icon, indicating it’s running. (You can also verify the install from the terminal, as shown below.)
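To confirm that Ollama is installed and its local server is running, you can open a terminal and check the version and the list of downloaded models (the list will be empty on a fresh install):

ollama --version
ollama list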

Step 2: Installing CrewAI

Next, we’ll install CrewAI. There are two ways to do this, depending on whether or not you use Conda:

Using VS Code

  1. Open a new file in VS Code and name it agents.py.
  2. Open a new terminal in VS Code.
  3. If you don’t have Conda, simply type:
    pip install crewai
    
  4. If you have Conda, create a new environment:
    conda create -n local_agents python=3.11
    
  5. Activate the environment:
    conda activate local_agents
    
  6. Now, install CrewAI within your Conda environment:
    pip install crewai
    

Note

To check your active Python environment in VS Code, look at the bottom right of the window. If the newly created environment doesn’t appear, restart VS Code.
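If you prefer to check from the terminal instead, the commands below print which Python interpreter is currently active and confirm that CrewAI is installed into it (assuming you have activated the environment from the previous step):

python -c "import sys; print(sys.executable)"
pip show crewai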

Step 3: Importing Required Libraries

After installing CrewAI, you need to import a few libraries. Add the necessary imports at the top of your agents.py file:

from crewai import Agent, Task, Crew, Process
from langchain_community.llms import Ollama  # on older LangChain versions this may be langchain.llms

Step 4: Choosing a Language Model

Ollama provides various open-source models. Depending on your hardware capabilities, select a model:

  • For low-tier hardware: 7 billion parameters (e.g., Mistral)
  • For medium-tier hardware: 13 billion parameters
  • For high-tier hardware: 47 billion or 70 billion parameters

To run a model, use the following command in your terminal. Ollama will download the model first if it is not already on your machine:

ollama run mistral
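The agents we define in the next step will need a handle to this model from Python. One way to get that, assuming you have the langchain-community package installed (run pip install langchain-community if the import fails), is LangChain’s Ollama wrapper. The mistral variable created here is the one passed as llm=mistral in the agent definitions below:

from langchain_community.llms import Ollama  # already imported in Step 3; repeated here for clarity

# Connect to the model served by the local Ollama app (default: http://localhost:11434)
mistral = Ollama(model="mistral")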

Step 5: Building Your AI Agents

Now, we need to start coding our agents. For simplicity, let’s create a variable called topic that we can change later. We will build a team of agents, each focusing on different tasks.

Creating the Research Agent

Start by defining your research agent:

topic = "research"
research_agent = Agent(
    role="researcher",
    goal=f"Gather relevant information about how an expert at {topic} operates.",
    backstory=f"You are an AI assistant that extracts relevant information from your knowledge base regarding {topic} experts.",
    verbose=True,
    allow_delegation=False,
    llm=mistral,  # the Ollama-backed model defined in Step 4
)

Creating the Prompt Engineer Agent

Next, create the prompt engineer agent:

prompt_engineer_agent = Agent(
    role="prompt engineer",
    goal=f"Write a single structured prompt in markdown explaining how a world-class {topic} expert would approach a project.",
    backstory="You are an AI assistant that writes prompts.",
    verbose=True,
    allow_delegation=False,
    llm=mistral,
)

Step 6: Defining Tasks for Agents

With agents defined, we can create tasks for them:

gather_info = Task(
    description=f"From your knowledge base, collect key information about {topic} experts.",
    agent=research_agent,
    expected_output=f"A clear list of key points related to {topic} experts and how they operate."
)

write_prompt = Task(
    description="Write a single structured prompt in markdown for a project.",
    agent=prompt_engineer_agent,
    expected_output="A clear and structured prompt."
)
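When the crew runs these tasks in order (which we’ll set up in the next step), the researcher’s output is handed to the prompt engineer automatically. If you want to make that dependency explicit, Task also accepts an optional context argument listing the tasks whose output it should receive; treat this as a refinement that may vary by CrewAI version:

write_prompt = Task(
    description="Write a single structured prompt in markdown for a project.",
    agent=prompt_engineer_agent,
    expected_output="A clear and structured prompt.",
    context=[gather_info],  # give this task the researcher's findings
)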

Step 7: Creating and Executing a Crew

Now, we’ll create a crew consisting of our agents and tasks:

crew_instance = Crew(
    agents=[research_agent, prompt_engineer_agent],
    tasks=[gather_info, write_prompt],
    verbose=True,
    process=Process.sequential  # run tasks in order, passing output forward
)

output = crew_instance.kickoff()
print(output)
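To run everything, keep the Ollama app open (it serves the model locally), make sure your Python environment is active, and launch the script from the terminal:

python agents.py

The crew will work through the tasks in order and print the final prompt to the terminal.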

Conclusion

With these steps, you can run AI agents locally on your computer, handling a variety of tasks while keeping your data private and avoiding API costs. By changing the topic variable and customizing the agents and tasks, you can point this setup at any subject matter, giving you the freedom to build whatever you can imagine.


Keywords

  • AI agents
  • Local execution
  • Ollama
  • CrewAI
  • Research agent
  • Prompt engineer
  • Tasks
  • Models
  • Python
  • Open-source

FAQ

Q1: What is Ollama?
A1: Ollama is an application that allows you to run open-source AI models locally on your machine.

Q2: How do I install CrewAI?
A2: You can install CrewAI with pip in your terminal, either directly or inside a Python virtual/Conda environment.

Q3: What types of models can I use?
A3: Ollama provides various models, including Mistral, with parameter sizes ranging from roughly 7 billion to 70 billion, depending on your hardware.

Q4: Can I change the task my agents perform?
A4: Yes, you can define custom tasks for your agents based on your specific needs or use cases.

Q5: Is it necessary to have programming experience to use these tools?
A5: While some programming knowledge is helpful, the process is simplified enough for beginners to follow and set up AI agents with minimal coding experience.