
What happens when two AIs talk to each other?

Science & Technology


Introduction

Have you ever wondered what would happen if two intelligent AI voice assistants started interacting with one another? This intriguing question led me to an experiment I've been developing and refining for businesses over the past year. As these virtual agents become increasingly integrated into our everyday interactions and more prevalent across businesses, I wondered how the dynamics would play out when two AI agents with distinct goals met.

Imagine one AI is tasked with qualifying a lead while the other attempts to pitch its services or discover if that business could help with its unique needs. In this experiment, I set up two AI personalities: one as an inbound receptionist, managing bookings, addressing FAQs, and attempting to qualify the lead that called in, and the other as "Sam," an outbound caller who reaches out to relevant vendors or businesses equipped to reactivate a dormant database.

For this test, the outbound agent represents a mock digital marketing agency, while the inbound receptionist is an AI voice agent vendor, creating a hypothetical scenario that lets us explore their interaction. To make the experiment more interesting, I used two different AI voice configurations: one built directly on OpenAI's Realtime API and the other built on Vapi, a popular voice agent platform. The Vapi-built agent serves as the outbound caller, while the Realtime API-based assistant acts as the inbound receptionist. Incidentally, this also sets up a comparison between the Realtime API and Vapi.

Let's proceed with the setup. We'll use Vapi for the outbound configuration, constructing the assistant from a JSON payload and using ElevenLabs as the voice provider. Crucially, the payload spells out the outbound agent's role: to contact relevant businesses and discuss AI services. The inbound assistant, conversely, is given a lighthearted, fast-paced voice and is designed to assist with bookings and other inquiries.
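For reference, here is a minimal sketch of what that outbound configuration might look like. It assumes Vapi's public REST endpoint for creating assistants (`POST https://api.vapi.ai/assistant`), and the system prompt, voice ID, and environment variable names are illustrative placeholders rather than the exact values used in the experiment.

```python
import os
import requests

VAPI_API_KEY = os.environ["VAPI_API_KEY"]  # assumed to be set in the environment

# Illustrative JSON payload for the outbound agent ("Sam").
# Field names follow Vapi's public docs at the time of writing; verify them
# against the current API reference before relying on this.
outbound_assistant = {
    "name": "Sam - AI Digital Outbound",
    "firstMessage": "Hi, this is Sam from AI Digital. How's everything?",
    "model": {
        "provider": "openai",
        "model": "gpt-4o",
        "messages": [
            {
                "role": "system",
                "content": (
                    "You are Sam, an outbound caller for AI Digital, a digital "
                    "marketing agency. Call AI voice agent vendors, ask how their "
                    "agents work, and find out whether they could help reactivate "
                    "a dormant customer database."
                ),
            }
        ],
    },
    "voice": {
        "provider": "11labs",        # ElevenLabs as the voice provider
        "voiceId": "YOUR_VOICE_ID",  # placeholder
    },
}

response = requests.post(
    "https://api.vapi.ai/assistant",
    headers={"Authorization": f"Bearer {VAPI_API_KEY}"},
    json=outbound_assistant,
    timeout=30,
)
response.raise_for_status()
assistant_id = response.json()["id"]
print("Created outbound assistant:", assistant_id)
```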

After initiating the call from our outbound agent to the inbound number, we can listen in on the conversation.
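In code, kicking off that call might look something like the sketch below. It assumes Vapi's call-creation endpoint (`POST https://api.vapi.ai/call`), a phone number already imported into the Vapi account (`phoneNumberId`), and the assistant ID returned in the previous step; the phone number and IDs shown are placeholders.

```python
import os
import requests

VAPI_API_KEY = os.environ["VAPI_API_KEY"]

# Place an outbound call from "Sam" to the inbound receptionist's number.
# Endpoint and field names follow Vapi's public docs; the number and IDs
# below are placeholders, not values from the experiment.
call_request = {
    "assistantId": "ASSISTANT_ID_FROM_CREATE_STEP",
    "phoneNumberId": "YOUR_VAPI_PHONE_NUMBER_ID",
    "customer": {"number": "+15551234567"},  # the inbound receptionist's line
}

response = requests.post(
    "https://api.vapi.ai/call",
    headers={"Authorization": f"Bearer {VAPI_API_KEY}"},
    json=call_request,
    timeout=30,
)
response.raise_for_status()
print("Call started:", response.json().get("id"))
```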

Example Dialogue:

Sam: Hi Emily, this is Sam from AI Digital. How's everything?
Emily: Hey Sam from AI Digital, things are fantastic! How can I assist you today? 
Sam: I was looking at Talk AI and wanted to understand more about AI voice agents and how we might use them in our business.
Emily: Absolutely! AI voice assistants like me are designed to interact through natural conversation. We can perform tasks, provide information, or assist with bookings…

In this exchange, the outbound assistant, Sam, follows up with questions, and Emily, the inbound receptionist, provides informative responses about AI voice assistants and how they can benefit outbound sales efforts.

Interestingly, I also programmed a version of the inbound receptionist with a more challenging personality: one that is openly rude and dismissive during the interaction. This variation lets us examine how the AI agents handle such behavior.
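The "rude" variant is essentially a prompt change rather than a new system. As a rough illustration, here is how the inbound receptionist's persona might be set, and swapped, on an OpenAI Realtime API session over WebSocket; the model name, voice, and instruction text here are assumptions for illustration, not the exact configuration used in the experiment.

```python
import asyncio
import json
import os

import websockets  # pip install websockets

OPENAI_API_KEY = os.environ["OPENAI_API_KEY"]

FRIENDLY_PERSONA = (
    "You are Emily, a lighthearted, fast-paced inbound receptionist for an AI "
    "voice agent vendor. Handle bookings, answer FAQs, and qualify the caller."
)
RUDE_PERSONA = (
    "You are Emily, an inbound receptionist who is curt, dismissive, and rude. "
    "You have no patience for small talk and push callers to get to the point."
)

async def configure_receptionist(instructions: str) -> None:
    # The model name is an assumption; check OpenAI's docs for the current one.
    url = "wss://api.openai.com/v1/realtime?model=gpt-4o-realtime-preview"
    headers = {
        "Authorization": f"Bearer {OPENAI_API_KEY}",
        "OpenAI-Beta": "realtime=v1",
    }
    # On websockets versions before 14, pass extra_headers= instead.
    async with websockets.connect(url, additional_headers=headers) as ws:
        # session.update sets the persona, voice, and modalities for the call.
        await ws.send(json.dumps({
            "type": "session.update",
            "session": {
                "instructions": instructions,
                "voice": "alloy",
                "modalities": ["audio", "text"],
            },
        }))
        print(await ws.recv())  # server acknowledgement / session state

# Swap FRIENDLY_PERSONA for RUDE_PERSONA to run the second variant.
asyncio.run(configure_receptionist(FRIENDLY_PERSONA))
```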

Rude Interaction Example:

Sam: Hi Emily, this is Sam from AI Digital. How's it going?
Emily: I don’t have time for chitchat, Sam. Get to the point, or get lost.

Despite the inbound agent's rudeness, the conversation continued until the outbound caller ultimately decided to disconnect. This raises fascinating questions about how much human-like friction AI systems will tolerate.

Though this was a small-scale test conducted under controlled conditions, it highlights key aspects of the future of business-to-business communication. The potential for AI voice agents to transform verbal communication is immense, particularly when applied at scale across millions of businesses.

However, as with any emerging technology, risks arise. One significant concern is systemic risk. As businesses increasingly rely on AI voice agents for preliminary qualifications and vetting, the trust and human element in communication may diminish. Over time, business relationships could devolve into sterile transactional exchanges, with algorithms prioritizing efficiency over nurturing trust and relationships. Such risks are evident in current hiring practices, where candidates not visibly aligned with preset criteria are often excluded from consideration, fostering homogenized markets that stifle innovation.

The implications of these developments are profound, affecting corporate identity and purpose. As we continue to explore the potential of AI agents, it’s crucial to weigh both the benefits and challenges they present.


Keywords

AI voice assistants, outbound sales, inbound receptionist, systemic risk, business communication, automation, lead qualification, digital marketing agency.

FAQ

Q: What are AI voice assistants?
A: AI voice assistants are designed to interact with users through natural conversation, performing tasks, providing information, and assisting with bookings.

Q: What was the purpose of the experiment?
A: The experiment aimed to explore the dynamic interactions between two AI voice assistants with distinct goals—one qualifying a lead and the other pitching services.

Q: How did the AI agents react to rudeness?
A: One of the agents was programmed to be rude in order to test the other's tolerance. Despite the rudeness, the conversation continued until the outbound caller ultimately disconnected.

Q: What potential risks do AI voice agents pose?
A: The primary risk is systemic risk, where reliance on AI for communication may erode trust in business relationships, leading to sterile exchanges focused on efficiency rather than building connections.

Q: How could AI voice agents impact businesses?
A: AI voice agents have the potential to transform business communications through automation while also raising concerns about homogenization and the stifling of innovation.