Introducing Act-One | How to Use Runway's Act-One AI Facial Performance Capture for Actors
Film & Animation
Introduction
This morning, I finally gained access to Act-One, an exciting new tool for filmmakers and storytellers offered by Runway. Eager to explore its capabilities, I headed to the Runway website to get started.
Upon opening Act-One, I saw a prompt explaining that it generates expressive character performances inside the app. My first concern, shared by many, was the pricing structure: the service costs 10 credits per second of video. Since the maximum runtime for a video output is 30 seconds, a complete clip can cost up to 300 credits. It’s crucial to monitor your credit balance closely, as it can deplete quickly.
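Because the pricing is a flat per-second rate, the spend math is easy to sanity-check in a few lines. Here is a minimal sketch in Python, assuming cost scales strictly linearly with duration (the 1,250-credit balance in the usage example is purely hypothetical):

```python
CREDITS_PER_SECOND = 10  # Act-One's quoted rate
MAX_CLIP_SECONDS = 30    # maximum output length

def act_one_cost(seconds: float) -> int:
    """Estimate the credit cost of a single Act-One generation,
    assuming straight per-second pricing."""
    if not 0 < seconds <= MAX_CLIP_SECONDS:
        raise ValueError(f"Clip length must be between 0 and {MAX_CLIP_SECONDS} seconds")
    return round(seconds * CREDITS_PER_SECOND)

print(act_one_cost(30))          # 300 credits for a full-length clip
print(1250 // act_one_cost(30))  # a hypothetical 1,250-credit balance covers 4 full clips
```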
The maximum supported resolution is 1280x768 at 24 frames per second; notably, it does not support 30 frames per second. I clicked “Try It Now” and began my journey with Act-One. As this was my first experience, I drew on insights from earlier familiarization videos, ready to test its functionality.
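Before uploading, it can help to see how far a source clip is from that output spec. A rough sketch using ffprobe (assuming it is installed and on your PATH; input.mp4 is a placeholder filename):

```python
import json
import subprocess

def probe_video(path: str) -> dict:
    """Read width, height, and frame rate from the first video stream."""
    out = subprocess.run(
        ["ffprobe", "-v", "error", "-select_streams", "v:0",
         "-show_entries", "stream=width,height,avg_frame_rate",
         "-of", "json", path],
        capture_output=True, text=True, check=True,
    ).stdout
    stream = json.loads(out)["streams"][0]
    num, den = stream["avg_frame_rate"].split("/")
    return {"width": stream["width"], "height": stream["height"],
            "fps": int(num) / int(den)}

info = probe_video("input.mp4")  # placeholder path
if (info["width"], info["height"]) != (1280, 768) or round(info["fps"]) != 24:
    print(f"Source is {info['width']}x{info['height']} at {info['fps']:.1f} fps; "
          "Act-One outputs 1280x768 at 24 fps, so expect cropping and resampling.")
```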
The interface encouraged me to upload a driving performance video, with a clear focus on facial expressions and minimal body movement. For the character visuals, I generated images in MidJourney, selecting clear, visually distinctive images for both the interviewer and interviewee roles.
I selected a 30-second video I had recorded, and the upload went quickly, though that will depend on your internet speed. I then submitted the chosen character reference images for the animation generation process. After cropping the footage to fit the required 1280x768 frame, I was shown the credit cost based on the video’s duration.
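Cropping happens in the app, but you can also prepare footage to the 1280x768 frame yourself before uploading. A minimal sketch that shells out to ffmpeg (assuming it is installed; the scale-then-center-crop recipe and the file names are illustrative, not how Act-One itself crops):

```python
import subprocess

def crop_for_act_one(src: str, dst: str) -> None:
    """Scale to cover 1280x768, center-crop the overflow, and resample to 24 fps."""
    subprocess.run(
        ["ffmpeg", "-y", "-i", src,
         "-vf", ("scale=1280:768:force_original_aspect_ratio=increase,"
                 "crop=1280:768"),
         "-r", "24",      # match Act-One's 24 fps output
         "-c:a", "copy",  # keep the original audio track untouched
         dst],
        check=True,
    )

# Placeholder file names for illustration
crop_for_act_one("interview_take1.mp4", "interview_take1_1280x768.mp4")
```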
Once I confirmed my image selections and clicked “Generate,” I waited eagerly for the results. It is essential to note that Act-One’s current iteration does not support hand movements, which limits how realistic some animations can be.
The results were strikingly realistic: the character brought to life subtle nuances of my original performance, including the facial expressions and lighting conditions. The ability to capture such depth in animation from just a video prompt felt revolutionary.
I then uploaded the second part of my interview. After preparing the video and images in the same way, I saw how straightforward it was to navigate Act-One’s features. It was also clear, however, that careful planning is essential to keep credit consumption in check, since spending can escalate rapidly.
Reflecting on Act-One, I believe it holds great potential for concept testing and idea generation, creating character performances without the need for complex 3D models or extensive rigging. While it has limitations, such as the current cap on video length and the absence of full-body performance tracking, it offers filmmakers an innovative way to visualize concepts quickly.
As I concluded my experience, I was excited to delve deeper into the capabilities of Act-One and explore effective storytelling techniques using AI-generated animations.
Keywords
Act-One, Runway, AI, facial performance capture, video animation, storytelling, filmmakers, character generation, credits.
FAQ
What is Act-One?
Act-One is an AI tool by Runway that generates expressive character performances by capturing facial movements from video footage.
How much does it cost to use Act-One?
Act-One operates on a credit system, costing 10 credits per second of video. A maximum-length 30-second clip therefore costs 300 credits.
What resolution does Act-One support?
Act-One supports a maximum resolution of 1280x768 at 24 frames per second.
Does Act-One allow for full-body animation?
Currently, Act-One does not support full-body tracking and is limited to facial performance capture.
What are the potential uses for Act-One in filmmaking?
Act-One can be used for concept testing, storyboarding, and generating character performances without the need for traditional animation techniques.