How to use Runway AI: Generate Video from Text

One of the major revolutions in technology today is the emergence of text-to-video tools. As we explore this innovation, it is worth remembering that we judge its output with one of the most discerning instruments there is: the human eye. Our sight is remarkably precise; it can instantly tell when something doesn’t feel right, which can trigger what is known as the Uncanny Valley effect, the eerie feeling we experience when we encounter humanoid figures that appear almost, but not quite, human.

When we look at the results from today’s text-to-video tools, we are essentially at the same point cinema was in the era of George Méliès and the Lumière brothers. It took cinema around 100 years to evolve from Méliès’s "A Trip to the Moon" to the age of Technicolor and IMAX. Generative AI is still in its infancy, yet it is advancing through those developmental stages rapidly. For example, filmmaker Tyler Perry recently put an $800 million studio expansion on hold after recognizing that this technology could reduce the need for physical sound stages.

These remarkable clips were created using either Runway AI or Pika Labs with simple prompts. The usual output length for these videos is 3 to 4 seconds, though they can be extended.

Runway AI Tools

Overview of Runway AI

  • Mission: Runway AI aims to democratize the creative potential of AI, making it available to everyone.
  • Focus Areas: The company focuses on advancing art, entertainment, and human creativity using AI.
  • Platform: The platform offers AI tools that let creatives and enterprises around the world tell their stories in new ways.

Runway AI initially released its video-to-video generative AI model, Gen 1. It was later followed by Gen 2, which supports both text-to-video and image-to-video generation. The service is available through the company’s website, www.runwayml.com, with both free and paid subscription plans. Free-plan users typically see a watermark in the bottom right corner of their output videos, while paid users have the option to remove it.

Getting Started

Here’s a comprehensive guide to using Runway AI's Gen 2 model for creating videos:

  1. Dashboard: The main dashboard allows you to access your assets, try Gen 1, and use Gen 2 for text- or image-to-video creation.

  2. Features of Gen 2:

    • Text to Image: Generate original images from text prompts.
    • Image to Image: Transform images using text prompts.
    • Text to Speech: Generate audio from text.
    • Tutorials are also available for those who need step-by-step assistance.

Creating Motion in Static Images

  1. Upload an Image: Start with an image generated using a tool like MidJourney.

  2. Add Motion: Apply camera motions such as horizontal movement and zoom, or use the motion brush tool for localized movement.

  3. Motion Brushes: Use different brushes to add nuanced motion to various parts of the image.

    For example, use different brushes to animate a woman's smile or give movement to background characters. Gen 2 supports a range of motion types, including horizontal, vertical, panning, tilting, and zooming motions, as the sketch below illustrates.
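The web interface exposes all of this through sliders and brush strokes, but conceptually a motion pass is just an image plus a small bundle of settings. The sketch below is only a way to visualize how those controls fit together; every field name (horizontal_pan, zoom, brush_regions, and so on) is an illustrative assumption, not one of Runway's actual parameters.

```python
# Conceptual sketch of how camera motion and motion-brush settings for a single
# image-to-video pass could be organized. All field names are illustrative
# assumptions, not Runway's real parameters.
from dataclasses import dataclass, field
from typing import List


@dataclass
class BrushRegion:
    """A painted region of the image plus the motion applied to it."""
    label: str                 # e.g. "woman's smile" or "background characters"
    horizontal: float = 0.0    # positive values drift the region to the right
    vertical: float = 0.0      # positive values drift the region upward
    ambient: float = 0.0       # subtle, non-directional movement


@dataclass
class MotionSettings:
    """Whole-frame camera motion plus any number of brushed regions."""
    horizontal_pan: float = 0.0
    vertical_pan: float = 0.0
    tilt: float = 0.0
    zoom: float = 0.0
    brush_regions: List[BrushRegion] = field(default_factory=list)


# Example: gently zoom in on a Midjourney portrait, animate the smile,
# and give the background characters a slight rightward drift.
settings = MotionSettings(
    zoom=1.5,
    brush_regions=[
        BrushRegion(label="woman's smile", ambient=2.0),
        BrushRegion(label="background characters", horizontal=1.0),
    ],
)
print(settings)
```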

Text to Video Creation

  1. Start Generating: Head to the Gen 2 text/image-to-video page.
  2. Prompt: Use detailed prompts including camera settings if needed.
  3. Seed Number: Optionally add a seed number for consistent visual style.
  4. Aspect Ratios and Styles: Choose from different aspect ratios and styles such as 3D cartoon, abstract, digital art, and more.

Example Prompt:

  • “Closeup of pizza dough being thrown into the air, glossy finish, flour bakery background, Canon EOS R5, 50 mm lens, f/2.8”.

Results are typically a few seconds long, sufficient to capture the essence of the scene described in the prompt.
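Everything above happens in the browser, but it can help to see the same ingredients laid out as data: a detailed prompt, an optional seed, an aspect ratio, a style, and a short duration. The sketch below is a hypothetical submit-and-poll flow; the endpoint URL, field names, and response shape are invented for illustration and are not Runway's published API.

```python
# Hypothetical sketch of submitting a text-to-video job with the same settings
# the Gen 2 web form collects. The endpoint, field names, and response shape
# are illustrative assumptions; the workflow in this article is entirely web-based.
import os
import time
from typing import Optional

import requests

API_URL = "https://api.example.com/v1/text-to-video"   # placeholder, not a real endpoint
API_KEY = os.environ.get("EXAMPLE_API_KEY", "")


def generate_clip(prompt: str, seed: Optional[int] = None,
                  aspect_ratio: str = "16:9", style: str = "cinematic") -> str:
    """Submit a text-to-video job, then poll until a video URL is available."""
    headers = {"Authorization": f"Bearer {API_KEY}"}
    payload = {
        "prompt": prompt,              # detailed prompt, camera settings included
        "seed": seed,                  # optional: pins the visual style for reuse
        "aspect_ratio": aspect_ratio,
        "style": style,
        "duration_seconds": 4,         # typical clip length, extendable afterwards
    }
    job = requests.post(API_URL, json=payload, headers=headers).json()

    # Generation is asynchronous, so poll until the job finishes.
    while True:
        status = requests.get(f"{API_URL}/{job['id']}", headers=headers).json()
        if status["state"] in ("succeeded", "failed"):
            return status.get("video_url", "")
        time.sleep(5)


if __name__ == "__main__":
    url = generate_clip(
        "Closeup of pizza dough being thrown into the air, glossy finish, "
        "flour bakery background, Canon EOS R5, 50 mm lens, f/2.8",
        seed=123456,
    )
    print(url or "generation failed")
```

Whatever the service, submit-then-poll is the natural shape of the workflow, because a clip takes far longer to render than a single HTTP round trip.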

Text to Image Options

  1. Prompt: Describe scenes in detail, e.g., “Anime school girl in the classroom”.
  2. Frame Ratio: Choose from options like widescreen, square, mobile, vertical, landscape, and portrait.
  3. Resolution and Style: Higher resolutions and varied styles are available in paid tiers.

Video to Video Conversion

  1. Upload Video: Start with a video and upload it into Gen 1.
  2. Styling: Choose from a variety of styles, from futuristic to claymation.
  3. Preview and Generate: Preview different styles and generate the video in the selected style.

For instance, you can upload a video of a lion dance, preview it in styles such as sci-fi or metal, and then generate the clip in whichever look you prefer.
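Gen 1 previews candidate styles on individual frames before you commit to a full render. If you want a local still from your source clip to compare against those previews, a minimal sketch using OpenCV (the file name and frame index below are placeholders) could look like this:

```python
# Minimal sketch: save one representative frame from a source clip with OpenCV,
# giving you a still to compare against Gen 1's style previews.
# Requires: pip install opencv-python
import cv2

VIDEO_PATH = "lion_dance.mp4"   # placeholder: your source clip
FRAME_INDEX = 30                # roughly one second in at 30 fps

cap = cv2.VideoCapture(VIDEO_PATH)
cap.set(cv2.CAP_PROP_POS_FRAMES, FRAME_INDEX)   # jump to the chosen frame
ok, frame = cap.read()
cap.release()

if ok:
    cv2.imwrite("preview_frame.png", frame)     # still image for side-by-side comparison
    print("Saved preview_frame.png")
else:
    print(f"Could not read frame {FRAME_INDEX} from {VIDEO_PATH}")
```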

Seed Numbers and Prompt Modifiers

To maintain consistency across multiple clips, the tool lets you specify a seed number, which acts as an identifier for a particular visual style. Additionally, prompt modifiers enable more detailed stylistic tweaks: terms like “Masterpiece” or “cinematic” can refine the output to meet specific artistic visions.
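Under the hood, a seed is simply the starting value of the generator's random-number stream: the same seed, with the same prompt and settings, reproduces the same choices, which is why clips that share a seed tend to share a look. The toy example below uses Python's standard random module as an analogy; it is not Runway's implementation.

```python
# Toy illustration of why reusing a seed keeps results consistent:
# the same seed always reproduces the same pseudo-random sequence.
import random
from typing import List


def fake_style_choices(seed: int) -> List[float]:
    """Stand-in for the random decisions a generative model makes per clip."""
    rng = random.Random(seed)    # seeded generator, independent of global state
    return [round(rng.random(), 3) for _ in range(5)]


print(fake_style_choices(1234))  # some sequence of values
print(fake_style_choices(1234))  # identical to the line above: same seed, same "style"
print(fake_style_choices(9999))  # a different seed gives a different sequence
```

In practice, that means noting the seed of a clip whose look you like and reusing it, together with consistent prompt modifiers, for the rest of the sequence.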

Keywords

  • Text-to-video
  • Runway AI
  • Gen 1 and Gen 2
  • Motion brush
  • Seed numbers
  • Creative AI
  • Image-to-video
  • Video-to-video
  • Generative AI
  • Prompt modifiers

FAQ

Q: What is the Uncanny Valley effect?
A: It is the eerie or unsettling feeling people experience when encountering highly realistic humanoid objects that are not quite human.

Q: What are the main features of Runway AI Gen 2?
A: It includes text-to-video and image-to-video generation, alongside related tools such as text-to-image, image-to-image, and text-to-speech, with multiple styles, aspect ratios, and resolutions.

Q: How can I remove the watermark from my Runway AI videos?
A: The watermark can be removed by subscribing to any paid plan.

Q: What is the function of seed numbers in Runway AI?
A: Seed numbers act as a stylistic foundation, ensuring consistency of visual style across multiple clips.

Q: Can I use Runway AI to generate longer videos?
A: Currently, the usual length is 3 to 4 seconds, but extensions can be applied for longer videos as explained in the platform’s instructions.