
    ComfyUI - Refine and upscale your AI videos - Vid2Vid with AnimateDiff #animatediff #comfyui


    Introduction

    In this article, we will explore how to refine, upscale, and transform your AI-generated videos using ComfyUI, specifically with ControlNet and AnimateDiff. This guide presents a complete workflow for improving the visual quality and smoothness of your AI videos.

    Setting Up Your Workflow

    To begin, install the necessary custom nodes and models. You can manage these through the ComfyUI Manager or use the links provided in the description. In this tutorial, we will work with the ControlGIF AnimateDiff ControlNet to smooth out the refined video output.
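
    The workflow fragments sketched later in this article use ComfyUI's API (JSON) format, in which each node is a dictionary entry with a class_type and its inputs. If you prefer to queue such a graph programmatically instead of through the web UI, a minimal Python helper might look like the following; it assumes a default local ComfyUI instance listening on 127.0.0.1:8188 and uses only ComfyUI's standard /prompt endpoint.

```python
import json
import urllib.request

COMFYUI_URL = "http://127.0.0.1:8188/prompt"  # default address of a local ComfyUI server


def queue_prompt(graph: dict) -> dict:
    """Submit a workflow graph (API format) to a running ComfyUI instance."""
    payload = json.dumps({"prompt": graph}).encode("utf-8")
    request = urllib.request.Request(
        COMFYUI_URL,
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(request) as response:
        return json.loads(response.read())
```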

    Starting with the Initial Video

    We'll be refining a video generated with CogVideoX-5B, a model that is handy for producing animated footage. Before diving into the upscaling process, check out my previous tutorial to see how to set it up properly within ComfyUI.

    Basic Upscaling Workflow

    1. Load Your Video: Start by adding a "Load Video (Upload)" node (from the Video Helper Suite custom nodes) in ComfyUI to import your starting video.

    2. Image Resize Node: Add an "Image Resize" node to define the desired dimensions for the refined video. For this tutorial, we'll keep the same dimensions as the original video (720x480). Connect this node to the "Load Video (Upload)" node.

    3. Upscaling Method: In the "Image Resize" node, select either bilinear or lanczos as the upscale method for better results (this part of the graph is sketched right after the list).
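
    As a rough sketch of this first fragment in API format: the "VHS_LoadVideo" class_type comes from the Video Helper Suite custom nodes, the built-in "ImageScale" node stands in for the Image Resize node, and the filename is a placeholder, so treat the exact class and input names as assumptions to verify against your install.

```python
# Fragment 1: load the source video and resize its frames to 720x480.
# Node ids are arbitrary strings; links are written as ["source_node_id", output_index].
graph = {
    "1": {  # Load Video (Upload) from the Video Helper Suite -- class name assumed
        "class_type": "VHS_LoadVideo",
        "inputs": {
            "video": "cogvideox_helicopter.mp4",  # placeholder filename in ComfyUI/input
            "frame_load_cap": 0,                  # 0 = load every frame
            "skip_first_frames": 0,
            "select_every_nth": 1,
        },
    },
    "2": {  # built-in resize node; bilinear or lanczos give the cleanest result here
        "class_type": "ImageScale",
        "inputs": {
            "image": ["1", 0],             # IMAGE output of the video loader
            "upscale_method": "lanczos",
            "width": 720,
            "height": 480,
            "crop": "disabled",
        },
    },
}
```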

    Connecting the Nodes

    Connect the Image Resize node to a "VAE Encode" node, then feed its latent output into the "KSampler." Choose a realistic checkpoint, such as Realistic Vision or Epic Realism, and provide a prompt that describes the video content, such as "helicopter flying over a cyberpunk city."
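
    Continuing the sketch, this fragment encodes the resized frames with a VAE Encode node and feeds the latents, the checkpoint, and both prompts into the KSampler. The checkpoint filename and the negative prompt are placeholders; the class_type strings here are ComfyUI's built-in ones.

```python
graph.update({
    "3": {  # realistic SD1.5 checkpoint, e.g. Realistic Vision or Epic Realism
        "class_type": "CheckpointLoaderSimple",
        "inputs": {"ckpt_name": "realisticVisionV60.safetensors"},  # placeholder filename
    },
    "4": {  # positive prompt describing the video content
        "class_type": "CLIPTextEncode",
        "inputs": {"clip": ["3", 1], "text": "helicopter flying over a cyberpunk city"},
    },
    "5": {  # negative prompt (placeholder text)
        "class_type": "CLIPTextEncode",
        "inputs": {"clip": ["3", 1], "text": "blurry, low quality, watermark"},
    },
    "6": {  # encode the resized frames into latents for vid2vid sampling
        "class_type": "VAEEncode",
        "inputs": {"pixels": ["2", 0], "vae": ["3", 2]},
    },
    "7": {  # sampler; denoise below 1.0 keeps the refined video close to the original
        "class_type": "KSampler",
        "inputs": {
            "model": ["3", 0],          # later replaced by the AnimateDiff-patched model
            "positive": ["4", 0],
            "negative": ["5", 0],
            "latent_image": ["6", 0],
            "seed": 42,                 # any value; fixed later in the fine-tuning step
            "steps": 20,
            "cfg": 7.0,
            "sampler_name": "euler",
            "scheduler": "normal",
            "denoise": 0.6,
        },
    },
})
```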

    Configuring Sampling and ControlNet

    1. Evolved Sampling: Insert a "Use Evolved Sampling" node and connect it to the checkpoint loader.

    2. AnimateDiff: Add an "Apply AnimateDiff Model (Simple)" node fed by a "Load AnimateDiff Model" node. Choose the AnimateDiff v3 motion model and leave the default values unchanged (this chain is sketched after the list).

    3. ControlNet Configuration: Add an "Apply ControlNet (Advanced)" node for the ControlGIF model and route the positive and negative prompts through it. Keep the KSampler's denoise strength below 1 so the refinement stays close to the source frames; the ControlNet wiring itself is sketched after the next list.
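
    Steps 1 and 2 might be sketched as follows. The "ADE_..." class_type strings come from the AnimateDiff-Evolved custom nodes and, like the motion-model filename and input names, are assumptions to check against your install; the evolved-sampling output then replaces the plain checkpoint model on the KSampler.

```python
graph.update({
    "8": {  # loads the AnimateDiff v3 motion model -- class and input names assumed
        "class_type": "ADE_LoadAnimateDiffModel",
        "inputs": {"model_name": "v3_sd15_mm.ckpt"},  # placeholder motion-module filename
    },
    "9": {  # applies the motion model with default settings -- class name assumed
        "class_type": "ADE_ApplyAnimateDiffModelSimple",
        "inputs": {"motion_model": ["8", 0]},
    },
    "10": {  # patches the checkpoint for evolved (animated) sampling -- class name assumed
        "class_type": "ADE_UseEvolvedSampling",
        "inputs": {
            "model": ["3", 0],        # checkpoint loader from the earlier fragment
            "m_models": ["9", 0],
            "beta_schedule": "autoselect",
        },
    },
})

# Point the KSampler at the AnimateDiff-patched model instead of the raw checkpoint.
graph["7"]["inputs"]["model"] = ["10", 0]
```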

    Loading the ControlNet

    • Connect a "Load Advanced ControlNet Model" node to the ControlNet apply node, selecting the ControlGIF model from your downloaded resources.

    • In the ControlNet apply node, set the strength to 0.8 (you may want to try different values). Both nodes appear in the sketch below.
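
    A sketch of the ControlNet wiring: "ControlNetApplyAdvanced" is ComfyUI's built-in advanced apply node, "ControlNetLoaderAdvanced" is assumed to come from the Advanced-ControlNet custom nodes, and the ControlGIF filename is a placeholder for whatever you downloaded.

```python
graph.update({
    "11": {  # loads the ControlGIF / AnimateDiff ControlNet -- class name assumed
        "class_type": "ControlNetLoaderAdvanced",
        "inputs": {"control_net_name": "controlgif.ckpt"},  # placeholder filename
    },
    "12": {  # built-in advanced apply node; conditions both prompts on the source frames
        "class_type": "ControlNetApplyAdvanced",
        "inputs": {
            "positive": ["4", 0],
            "negative": ["5", 0],
            "control_net": ["11", 0],
            "image": ["2", 0],        # resized source frames serve as the control images
            "strength": 0.8,          # 0.8 worked well here; experiment with other values
            "start_percent": 0.0,
            "end_percent": 1.0,
        },
    },
})

# The KSampler now takes its conditioning from the ControlNet apply node.
graph["7"]["inputs"]["positive"] = ["12", 0]
graph["7"]["inputs"]["negative"] = ["12", 1]
```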

    Fine-Tuning and Color Matching

    To ensure the quality of your refined video:

    1. Adjust Sampling Settings: Fix the seed, increase the steps to 25, change the sampler to DPM++ SDE, and experiment with different schedulers (see the sketch after this list).

    2. Color Match: Utilize a "Color Match" node to manage color consistency throughout the video.

    3. Preview Animation: Lastly, use a "Preview Animation" node to visualize your results before finalizing the video.
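
    In the same sketch, the sampler tweaks, the decode step, and the color match might look like this; the "ColorMatch" class_type and its input names are assumed from the KJNodes pack, while "VAEDecode" is built in.

```python
# Fix the seed, raise the steps, and switch sampler/scheduler on the existing KSampler.
graph["7"]["inputs"].update({
    "seed": 123456789,           # any fixed value keeps runs reproducible
    "steps": 25,
    "sampler_name": "dpmpp_sde",
    "scheduler": "karras",       # worth comparing against other schedulers
})

graph.update({
    "13": {  # decode the sampled latents back into frames
        "class_type": "VAEDecode",
        "inputs": {"samples": ["7", 0], "vae": ["3", 2]},
    },
    "14": {  # KJNodes color matcher -- class and input names assumed
        "class_type": "ColorMatch",
        "inputs": {
            "image_ref": ["2", 0],      # original (resized) frames as the color reference
            "image_target": ["13", 0],  # refined frames to be corrected
            "method": "mkl",
        },
    },
})
```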

    Finalizing the Video

    Connect the frames to an "Upscale Image Using Model" node, fed by a "Load Upscale Model" node (with a model such as Real-ESRGAN x2).
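
    Both of these are built-in ComfyUI nodes; continuing the sketch, with the upscaler filename as a placeholder for the Real-ESRGAN x2 model you downloaded:

```python
graph.update({
    "15": {  # loads a 2x upscaler such as Real-ESRGAN
        "class_type": "UpscaleModelLoader",
        "inputs": {"model_name": "RealESRGAN_x2.pth"},  # placeholder filename
    },
    "16": {  # runs every color-matched frame through the upscale model
        "class_type": "ImageUpscaleWithModel",
        "inputs": {"upscale_model": ["15", 0], "image": ["14", 0]},
    },
})
```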

    To achieve a higher frame rate and smoother transitions, incorporate a "Frame Interpolation" node with a multiplier of two.
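
    Frame interpolation and the final export might be sketched like this. The "RIFE VFI" class_type and its input names are assumed from the ComfyUI-Frame-Interpolation pack, "VHS_VideoCombine" and its inputs are assumed from the Video Helper Suite, and the frame rate should be set to twice that of your source footage.

```python
graph.update({
    "17": {  # RIFE interpolation, doubling the frame count -- class and input names assumed
        "class_type": "RIFE VFI",
        "inputs": {
            "frames": ["16", 0],
            "ckpt_name": "rife47.pth",          # placeholder RIFE checkpoint name
            "multiplier": 2,                    # twice as many frames for smoother motion
            "clear_cache_after_n_frames": 10,
            "fast_mode": True,
            "ensemble": True,
            "scale_factor": 1.0,
        },
    },
    "18": {  # assemble the interpolated frames into a video file -- class name assumed
        "class_type": "VHS_VideoCombine",
        "inputs": {
            "images": ["17", 0],
            "frame_rate": 16,                   # doubled relative to an 8 fps source
            "loop_count": 0,
            "filename_prefix": "refined_upscaled",
            "format": "video/h264-mp4",
            "save_output": True,
        },
    },
})

# Queue the assembled graph on the local ComfyUI instance (see the helper above).
queue_prompt(graph)
```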

    Conclusion

    With these steps, the refined video should be considerably smoother than the original. Adjust parameters such as the ControlGIF strength and the denoise strength to balance smoothness, consistency, and fidelity to the original scene. To enhance detail and maintain consistency, you can also add a "ControlNet Tile" node.

    Stay tuned for the next video in this series, where we will delve into using AnimateDiff image-to-video models, making the refining process even easier.

    Keywords

    • ComfyUI
    • Refine Videos
    • Upscale Videos
    • AI Animation
    • ControlNet
    • AnimateDiff
    • Frame Interpolation
    • Video Processing

    FAQ

    Q: What is ComfyUI?
    A: ComfyUI is a node-based user interface for generative AI models that supports image and video processing workflows, including tools for refining and upscaling outputs.

    Q: What is the purpose of ControlNet in this workflow?
    A: The ControlNet (here the ControlGIF model) keeps the generated frames consistent with the source video, ensuring smooth transitions between frames and enhancing the overall visual quality of the final output.

    Q: Which models can I use for refinement?
    A: You can use models like Realistic Vision, Epic Realism, ControlGIF, and various versions of AnimateDiff for refining and enhancing your AI videos.

    Q: How do I ensure my video maintains its original fidelity while upscaling?
    A: Adjust the denoise strength and experiment with the ControlNet settings to balance smoothness with fidelity to the original video.

    Q: Can I use higher resolution models for upscaling?
    A: Yes. Upscaling models such as Real-ESRGAN can raise the output to significantly higher resolutions, further enhancing your video quality.

