FACELESS Videos 100% Automated (Make, ChatGPT, ElevenLabs)
In this comprehensive guide, discover how to fully automate the creation of faceless videos using tools like Make, Airtable, ChatGPT, Leonardo, JSON to Video, and ElevenLabs. We'll walk through the entire process, from setting up databases to generating animated scenes, voiceovers, and captions.
Introduction
Automation can drastically streamline video production, even allowing the creation of high-quality faceless videos with minimal manual input. This tutorial demonstrates how to achieve complete automation using a mix of no-code and AI tools.
Setting Up the Airtable Database
Begin by setting up an Airtable base comprising multiple tables:
- Stories Table: Store each new story.
- Scenes Table: Break down each story into scenes with AI image prompts.
- Video API Table: Store rendered images and videos from Leonardo.
- Story Types Table: Define different story formats, such as comic, photorealistic, etc.
Airtable Columns and Fields
Each table will have specific fields, such as:
- Story ID (Auto Number Field) in the Stories table.
- Scene ID (Auto Number Field) in the Scenes table.
- Job ID and Video (Attachment Fields) in the Video API table.
- Fields like "output width/height" for defining visual properties.
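Outside of Make, the same rows can be written through the Airtable REST API. The sketch below builds a request body for a new story row; the field names ("Title", "Source URL", "Story Type") are assumptions for illustration and should match whatever columns you defined in your own base.

```python
import json
import urllib.request

AIRTABLE_TOKEN = "your-airtable-token"   # placeholder: substitute your own token
BASE_ID = "appXXXXXXXXXXXXXX"            # placeholder Airtable base ID

def build_story_record(title, source_url, story_type):
    """Build the request body for a new row in the Stories table.
    Field names here are illustrative; use your base's actual columns."""
    return {"records": [{"fields": {
        "Title": title,
        "Source URL": source_url,
        "Story Type": story_type,
    }}]}

def create_record(table, body):
    """POST a record to the Airtable REST API (not invoked in this sketch)."""
    req = urllib.request.Request(
        f"https://api.airtable.com/v0/{BASE_ID}/{table}",
        data=json.dumps(body).encode(),
        headers={"Authorization": f"Bearer {AIRTABLE_TOKEN}",
                 "Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

body = build_story_record("Sample story", "https://example.com/article", "comic")
```

In the Make scenario this POST is handled by the built-in Airtable module; the sketch only shows what is happening under the hood.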
Building Out Automations Using Make
Initial Setup
- Configure Airtable to trigger Make automations on specific changes.
- Set up Make to watch for these triggers and execute workflows based on webhooks.
Generating the Story
- Use ChatGPT to generate a story from a source article.
- Trigger this with an appropriate webhook in Make.
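When the webhook fires, Make sends the source article to ChatGPT. The equivalent request can be sketched directly against the OpenAI chat completions endpoint; the system prompt wording and the model name are assumptions you would tune to your own workflow.

```python
import json
import urllib.request

OPENAI_API_KEY = "sk-..."  # placeholder: substitute your own key

def build_story_prompt(article_text):
    """Chat-completions payload asking ChatGPT to turn a source article
    into a short narrated story (prompt wording is illustrative)."""
    return {
        "model": "gpt-4o",  # any chat-capable model works here
        "messages": [
            {"role": "system",
             "content": "You are a scriptwriter for short faceless videos."},
            {"role": "user",
             "content": "Write a short narrated story based on this article:\n\n"
                        + article_text},
        ],
    }

def generate_story(article_text):
    """Call the OpenAI chat completions endpoint (not invoked in this sketch)."""
    req = urllib.request.Request(
        "https://api.openai.com/v1/chat/completions",
        data=json.dumps(build_story_prompt(article_text)).encode(),
        headers={"Authorization": f"Bearer {OPENAI_API_KEY}",
                 "Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["choices"][0]["message"]["content"]

payload = build_story_prompt("An article about ocean cleanup.")
```

In Make, the same call is configured through the OpenAI module; only the prompt and model need to be filled in.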
Generating Scenes
- Take the generated story and use ChatGPT again to generate the scenes.
- Store each scene in Airtable, with its narrative and image prompts.
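If you prompt ChatGPT to return the scenes as a JSON array, the reply can be parsed and mapped onto Airtable rows in one step. The `narration`/`image_prompt` schema below is an assumption: it only works if your scene-generation prompt explicitly asks for those keys.

```python
import json

def parse_scenes(chatgpt_reply):
    """Parse scenes from a ChatGPT reply that was prompted to return a JSON
    array of objects with 'narration' and 'image_prompt' keys (assumed schema),
    mapping each scene onto illustrative Airtable column names."""
    scenes = json.loads(chatgpt_reply)
    return [
        {"Narration": s["narration"], "Image Prompt": s["image_prompt"]}
        for s in scenes
    ]

# Example reply, as if returned by the scene-generation prompt
reply = ('[{"narration": "Dawn breaks over the harbor.",'
         ' "image_prompt": "sunrise over ocean, comic style"}]')
rows = parse_scenes(reply)
```

Each dictionary in `rows` then becomes one record in the Scenes table, keeping narration and image prompt side by side.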
Generating AI Images and Videos
- Use Leonardo for generating images.
- Set up separate webhooks and API keys for static images and animated videos.
- Optionally insert delays to manage API request limits.
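A Leonardo generation request can be sketched as below. The endpoint path follows Leonardo's REST API, but the body fields (and especially the `motion` flag for animated output) are assumptions here; verify them against the current API reference before use. The `time.sleep` call shows the simplest way to space out requests against rate limits.

```python
import json
import time
import urllib.request

LEONARDO_API_KEY = "your-leonardo-key"  # placeholder: substitute your own key

def build_generation(prompt, width=1080, height=1920, animated=False):
    """Request body for a Leonardo image generation (field names assumed;
    check Leonardo's API docs for the exact schema)."""
    body = {"prompt": prompt, "width": width, "height": height}
    if animated:
        body["motion"] = True  # hypothetical flag for animated clips
    return body

def generate(prompt, animated=False, delay=2):
    """POST a generation job, pausing first to respect request limits
    (not invoked in this sketch)."""
    time.sleep(delay)  # crude rate limiting between consecutive jobs
    req = urllib.request.Request(
        "https://cloud.leonardo.ai/api/rest/v1/generations",
        data=json.dumps(build_generation(prompt, animated=animated)).encode(),
        headers={"Authorization": f"Bearer {LEONARDO_API_KEY}",
                 "Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

body = build_generation("sunrise over ocean, comic style")
```

In Make, the delay is a Sleep module placed between the image and video branches, which keeps the two webhook flows from hitting the API at once.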
Assembling and Merging Video and Audio
- Assemble the still images or animated clips into a video using JSON to Video.
- Create voiceovers using ElevenLabs.
- Merge these components into a finalized video.
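The assembly step boils down to handing JSON to Video a movie spec that pairs each scene's visual with its ElevenLabs voiceover. The schema below is a simplified assumption for illustration; consult the JSON to Video documentation for the exact element types and properties it accepts.

```python
def build_movie(scene_assets, width=1080, height=1920):
    """Assemble a movie spec from (image_url, audio_url) pairs, one scene
    per pair. The element schema is an assumed simplification of the
    JSON to Video format."""
    return {
        "width": width,
        "height": height,
        "scenes": [
            {"elements": [
                {"type": "image", "src": image_url},  # still or animated clip
                {"type": "audio", "src": audio_url},  # ElevenLabs voiceover
            ]}
            for image_url, audio_url in scene_assets
        ],
    }

movie = build_movie([
    ("https://example.com/scene1.png", "https://example.com/scene1.mp3"),
    ("https://example.com/scene2.png", "https://example.com/scene2.mp3"),
])
```

Because each scene carries its own audio element, the renderer keeps narration in sync with its matching visual without any manual timeline editing.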
Finalizing the Setup
After setting up and verifying each step, modify the system to switch seamlessly between static images and animated videos. Run end-to-end tests to ensure smooth operation.
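The switch between static and animated output reduces to a small routing decision. This sketch assumes a hypothetical "Animated" checkbox on the Story Types table; in Make, the same logic lives in a router module with two filtered branches.

```python
def pick_workflow(story_type):
    """Route a story to the static-image or animated-video pipeline based on
    an 'Animated' flag assumed to exist on the Story Types record."""
    return "animated" if story_type.get("Animated") else "static"

route = pick_workflow({"Name": "comic", "Animated": True})
```

Keeping the decision in one place means new story types only need the flag set once to flow through the right branch.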
Maximizing Flexibility
Include different story types to allow for easy transformation of the content style. Swap out models and prompts to experiment with different aesthetics and narrative techniques.
Conclusion
This automated system dramatically reduces the manual labor involved in video production. With these tools and workflows, you can generate professional-quality faceless videos at scale.
Keywords
- Airtable
- Make
- ChatGPT
- Leonardo
- JSON to Video
- ElevenLabs
- Automated Video Production
- AI Image Generation
- Voiceover Automation
- No-Code Solutions
FAQ
Q1: What is the main purpose of this automated video production setup?
A1: The primary goal is to fully automate the creation of faceless videos, from generating the story and scenes to producing voiceovers and combining them into a final video.
Q2: Which tools are primarily used in this setup?
A2: The setup leverages Airtable, Make, ChatGPT, Leonardo, JSON to Video, and ElevenLabs.
Q3: How does Leonardo contribute to the system?
A3: Leonardo generates both still images and animated video clips from AI image prompts, which are then assembled into scenes.
Q4: What is the role of ChatGPT in this workflow?
A4: ChatGPT generates the narrative and breaks the story down into scenes, providing the script and image prompts for the video.
Q5: How are voiceovers created in this system?
A5: Voiceovers are generated using ElevenLabs, automatically creating audio content from the scene's script.
Q6: Can this system be customized for different types of story formats?
A6: Yes, the system includes a flexible "Story Types" table in Airtable, allowing for different formats like comic, photorealistic, and more.
Q7: How do you switch between static images and animated videos?
A7: The system includes separate workflows in Make for handling static images and animated videos, making it easy to switch between them.