Runway's EPIC New AI Video Generator!

Throughout the history of filmmaking, creative visionaries have often had to persuade wealthy individuals to finance their stories. This necessitated having the right connections, living in the right area, and convincing gatekeepers that their films could be profitable. However, all of that has changed in the last two weeks. New AI tools that can emulate lifelike movements are launching almost daily, opening incredible creative possibilities. Today, we have a number of exciting announcements that have huge implications for indie filmmakers and the entertainment industry as a whole. Welcome to your AI film news of the week.

Biggest News of the Week: Runway Gen 3

Last week, Luma released Luma Dream Machine, which was mind-blowing. Not to be outdone, the team at Runway released the first look at Gen 3. With Gen 3, you'll have all the directorial controls you've come to love from Gen 2, including the ability to control the camera and use tools like the Motion Brush.

Examples on Runway's website showcase impressive capabilities like a closeup of ants that zooms out to reveal a suburban town, a VFX portal shot with realistic physics, and a portal opening in the ocean, showing dynamic wave movements. There are practical stock footage-like shots, such as a train driving in a European city, and even complex scenes like a man watching a movie. There's also character animation that features a monster walking, with convincing weight and motion.

Runway Gen 3 is still in the announcement phase, so users can't try it just yet, but access will be available soon. The release is part of Runway's broader goal of creating a general world model: an AI that understands different media assets, such as video, images, and audio, and how they interact. A great breakdown of this vision is available on Runway's website.

Game of the Week

In this week's game, you are shown three video clips created in different AI tools – Runway Gen 2, Runway Gen 3, and Luma Dream Machine. Participants need to identify which clip was created with which tool. The winner receives one year of free access to a cool face-swapping tool available through the browser.

Luma Dream Machine Updates

Coming soon to Luma Dream Machine is the ability to extend video clips up to 10 seconds and change backgrounds through prompting. This is an exciting development for AI-generated video content.

AI Advertising & Filmmaking Course

Enrollment for the AI Advertising and AI Filmmaking course at Curious Refuge opens on June 26 at 11 a.m. Pacific time. Updated course lessons will include the latest video tools and image tools now available to filmmakers.

Adobe’s Terms of Service Update

Adobe has updated its terms of service, clarifying that users retain ownership of their content and that their content will not be used to train Adobe's AI models.

Midjourney's New Personalization Feature

Midjourney now allows users to personalize AI models to their specific tastes. To personalize a model, users rank images inside Midjourney. This helps the system learn user preferences for better image generation.

Stable Diffusion 3

Stability AI released its most advanced image model, Stable Diffusion 3. It runs on regular PCs or laptops and is free for non-commercial projects. The commercial version is available for $20 a month. This model excels at adhering to prompts, including generating text for logos and branding.
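
Since the model can run locally, technically inclined readers can experiment with it through the open-source Hugging Face diffusers library. Here is a minimal sketch, assuming a CUDA-capable GPU, the torch and diffusers packages installed, and access to the SD3 Medium weights on Hugging Face (the model ID and prompt below are only illustrative):

    import torch
    from diffusers import StableDiffusion3Pipeline

    # Load the Stable Diffusion 3 Medium weights
    # (the license must be accepted on Hugging Face first)
    pipe = StableDiffusion3Pipeline.from_pretrained(
        "stabilityai/stable-diffusion-3-medium-diffusers",
        torch_dtype=torch.float16,
    ).to("cuda")

    # SD3 is strong at rendering text, so a logo-style prompt is a reasonable first test
    image = pipe(
        prompt="a clean vector logo that reads 'AI FILM NEWS', studio lighting",
        num_inference_steps=28,
        guidance_scale=7.0,
    ).images[0]

    image.save("logo_test.png")

On machines without a dedicated GPU, the same pipeline can also run on the CPU, much more slowly, by omitting torch_dtype and the .to("cuda") call.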

Google’s Audio for Video Tool

Google released a white paper demo of an audio-for-video tool that generates soundscapes: you upload a video and type in a prompt, and the tool analyzes the footage, generates matching sound, and combines the two.

Suno's New Feature

Suno released a feature that lets users upload input audio to create songs, similar to the earlier Udio announcement.

Open Sora

Open Sora is an open-source tool capable of generating 720p videos up to 16 seconds long. While not as advanced as similar tools, it offers surprisingly good quality.

Hedra Lip Sync Tool

Hedra is a new lip sync tool that animates images. Users can generate or import audio and then apply it to generated images.

Leonardo Phoenix

Leonardo Phoenix is Leonardo’s most advanced image model, boasting high accuracy in text prompt adherence. The model can be accessed through Leonardo’s website.

ElevenLabs Voice Over Studio

ElevenLabs released a voice-over studio for editing voices and sound effects, particularly useful for explainer videos and projects requiring AI-generated voices.

Exciting White Papers

Wonder World, instant human 3D avatar generation, and CG head are some fascinating white papers showcasing future AI capabilities in gaming and filmmaking, including real-time world creation and ultra-realistic 3D faces.

AI Film Competitions and Events

The Reply AI Film Festival is happening alongside the Venice International Film Festival, offering a prize pool of over $15,000. The winners of the first AI film trailer competition have also been announced, with impressive showcases of creative AI use in filmmaking.


Keywords

  • Runway Gen 3
  • AI filmmaking tools
  • Luma Dream Machine
  • Adobe terms of service
  • Midjourney personalization
  • Stable Diffusion 3
  • Google audio for video
  • Suno audio tool
  • Open Sora
  • Hedra lip sync
  • Leonardo Phoenix
  • ElevenLabs Voice Over Studio
  • AI film competitions

FAQ

Q: What is Runway Gen 3? A: Runway Gen 3 is the latest AI video generator tool from Runway, offering advanced features like directorial commands and tools for controlling camera movements.

Q: How can I access Luma Dream Machine? A: Luma Dream Machine is available now. Upcoming updates will extend video clips up to 10 seconds and allow background changes via prompting.

Q: What is the goal of Runway’s general world model? A: The goal is to create an AI model that can understand various media assets – video, images, audio – and how they interact to create coherent content.

Q: How do I personalize my AI model in Midjourney? A: You can personalize your AI model by ranking images inside the Midjourney explore page. Over time, Midjourney will learn your preferences.

Q: Is there a cost to use Stable Diffusion 3 for commercial projects? A: Yes, there is a commercial version available for $20 a month, suitable for users making less than a million dollars a year.

Q: What does Google’s audio for video tool do? A: It creates soundscapes by analyzing videos and generating corresponding audio based on user-typed prompts.

Q: How can I use Hedra’s lip sync tool? A: Hedra allows you to generate or upload audio and then animate images based on that audio, giving lifelike lip sync to static images.

Q: What is Leonardo Phoenix and how does it differ from previous models? A: Leonardo Phoenix is Leonardo’s most advanced image model, excelling in adhering to text prompts and generating highly accurate visuals.

Q: When can I enroll in the AI Advertising and AI Filmmaking course? A: Enrollment opens on June 26 at 11 a.m. Pacific time at Curious Refuge.