
Easy Image to Video with AnimateDiff (in ComfyUI) #stablediffusion #comfyui #animatediff



Introduction

In this tutorial, we will explore how to bring images to life with ComfyUI and AnimateDiff by building a straightforward image-to-video workflow. This guide walks you through the process step by step; stay until the end for a clever trick that uses random images to create surprising animations. The goal is to spark your enthusiasm for ComfyUI and AnimateDiff while showing how simple it can be to create captivating animations.

Prerequisites

Before we begin, ensure you have the following custom nodes and models installed:

  • Custom nodes used in this workflow: AnimateDiff Evolved, Video Helper Suite, IP-Adapter Plus, and the ControlNet preprocessor nodes
  • AnimateLCM motion model and LoRA (available via the provided link)
  • DreamShaper model (can be downloaded from Civitai)

As with any setup, remember to update ComfyUI and restart it after installing everything; a scripted way to pull the custom node packs is sketched below.
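The following is a minimal sketch of fetching those custom node packs from the command line instead of through the ComfyUI Manager. The repository URLs and the `~/ComfyUI` install path are assumptions; adjust them to your own setup, and restart ComfyUI afterwards so the nodes are registered.

```python
# Hedged sketch: clone the custom node packs this workflow relies on into
# ComfyUI's custom_nodes folder. Paths and repo list are assumptions.
import subprocess
from pathlib import Path

COMFYUI_DIR = Path("~/ComfyUI").expanduser()      # assumed install location
CUSTOM_NODES = COMFYUI_DIR / "custom_nodes"

REPOS = [
    "https://github.com/Kosinkadink/ComfyUI-AnimateDiff-Evolved",  # AnimateDiff nodes
    "https://github.com/Kosinkadink/ComfyUI-VideoHelperSuite",     # Video Combine node
    "https://github.com/cubiq/ComfyUI_IPAdapter_plus",             # IP-Adapter nodes
    "https://github.com/Fannovel16/comfyui_controlnet_aux",        # ControlNet preprocessors
]

for repo in REPOS:
    target = CUSTOM_NODES / repo.rsplit("/", 1)[-1]
    if not target.exists():                        # skip packs that are already installed
        subprocess.run(["git", "clone", repo, str(target)], check=True)

print("Done - restart ComfyUI so the new nodes are registered.")
```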

Workflow Setup

  1. Select Checkpoint: For this tutorial, we will use DreamShaper 8 as our checkpoint.

  2. Add LoRA Loader: Begin by adding a LoRA Loader and selecting the AnimateLCM LoRA. Set the strength to 1; if you are using a distilled version of the checkpoint (for example, an LCM-distilled DreamShaper 8), lower the strength to 0.3.

  3. Connect the Sparse Control Adapter: Add the AnimateDiff v3 adapter LoRA and connect it to the LoRA Loader, then build the AnimateDiff chain: add a Use Evolved Sampling node and connect it to the Apply AnimateDiff Model node.

  4. Configure Animation Settings: Add the Apply AnimateDiff Model node and connect it to the motion model loader, selecting the AnimateLCM motion model. To enable animations longer than 32 frames, set the context options to Looped Uniform and keep the other settings as they are.

  5. Image Adapter Configuration: Place an IP-Adapter Tiled node; in the latest version, use the IPAdapter Unified Loader to connect the model. Choose the PLUS preset, although other presets such as VIT-G can also work well.

  6. Define Reference Image: Connect the reference image to the KSampler path. Your prompt will significantly influence the animation; for this example, try "woman on water magic fire ring".

  7. Adding ControlNets:

    • First ControlNet: Use a ControlNet Tile model with a strength of 0.25 and an end percent of 0.8; connect it to the reference image.
    • Second ControlNet: Instead of a regular loader, use the SparseCtrl Scribble model. Load the sparse control model and set the strength to 1 and the end percent to 0.4. This ControlNet needs a scribble image, so connect the reference through a Fake Scribble Lines preprocessor node.
  8. Animation Frame Setup: We will generate 72 animation frames, using a batch size of 36 frames for interpolation. Set the latent dimensions to 512x768.

  9. KSampler Adjustments: Fix the seed, reduce the steps to 8, and lower the CFG scale to 1.2. Set the sampler to lcm and the scheduler to sgm_uniform. Remove the Save Image node and connect a Video Combine node in its place. Increase the frame rate to 12, choose the MP4 format, and disable save output. (A scripted sketch of these sampler settings appears after this list.)

  10. Run the Initial Workflow: Execute the workflow to generate a draft animation.

  11. Refine Animation: To refine the animation, copy the KSampler, disconnect the empty latent input, and use the upscaled image (1.5x) as the new reference. Connect a Video Combine node to the new sampler and lower the denoise to 0.7. Adjust the parameters until you get the best result.

  12. Enhance Movement: If you want extra movement, connect a Multival Dynamic node to the AnimateDiff node and raise its value to 1.2.

  13. Final Results: Run the workflow again for improved output.
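If you prefer to queue the finished graph from a script rather than from the UI, the sketch below applies the sampler settings from step 9 through ComfyUI's HTTP API. It assumes you exported the graph with "Save (API Format)" as `workflow_api.json`, that ComfyUI is running on the default 127.0.0.1:8188, and that the KSampler's node id is "3" (all assumptions; check your own export).

```python
# Hedged sketch: patch the KSampler settings from step 9 into an API-format
# workflow export and queue it via ComfyUI's /prompt endpoint.
import json
import urllib.request

with open("workflow_api.json", "r", encoding="utf-8") as f:
    workflow = json.load(f)

ksampler = workflow["3"]["inputs"]   # assumed node id - adjust to match your export
ksampler.update({
    "seed": 123456789,               # fixed seed so refinement passes stay comparable
    "steps": 8,                      # AnimateLCM needs only a few steps
    "cfg": 1.2,
    "sampler_name": "lcm",
    "scheduler": "sgm_uniform",
})

req = urllib.request.Request(
    "http://127.0.0.1:8188/prompt",
    data=json.dumps({"prompt": workflow}).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)
print(urllib.request.urlopen(req).read().decode("utf-8"))  # returns the queued prompt id
```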

Adding Randomness to Animations

To conclude this tutorial, let’s have some fun with a little trick. Visit a website that generates random images, copy the first address, and return to ComfyUI. Here’s how to set it up:

  1. Load Video Path Node: Add a Load Video (Path) node and convert its path widget to an input.
  2. Primitive String Multi-line Node: Add a multi-line string primitive, paste the copied link on two lines, and change the dimensions in the URLs from 200 to 512 and from 300 to 768.
  3. Connect Nodes: Run the string through a Text Random Line node and connect its output to the Load Video (Path) node's path input.

Reconnect the loaded image to the IP-Adapter and the ControlNets. You can add a preview node if you like, then queue the workflow and see what surprising results you can achieve. A scripted version of this trick is sketched below.
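Here is a small sketch of the same idea done outside ComfyUI: download a random 512x768 picture and drop it into the input folder so it can be picked up as the reference image. The picsum.photos URL and the `~/ComfyUI/input` path are assumptions; use whichever random-image site you copied the address from.

```python
# Hedged sketch: fetch a random 512x768 image and save it where ComfyUI can load it.
from pathlib import Path
import urllib.request

URL = "https://picsum.photos/512/768"   # assumed service; width/height match the latent size
OUT = Path("~/ComfyUI/input/random_reference.jpg").expanduser()  # assumed install path

OUT.parent.mkdir(parents=True, exist_ok=True)
with urllib.request.urlopen(URL) as resp:
    OUT.write_bytes(resp.read())        # the service redirects to a random photo

print(f"Saved {OUT} - select it in the Load Image node and queue the prompt.")
```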

I hope you enjoyed this simple tutorial. Stay tuned for more exciting content!


Keywords

  • ComfyUI
  • AnimateDiff
  • Image to Video
  • Workflow
  • Animation
  • ControlNet
  • Random Images

FAQ

Q1: What is ComfyUI?
A1: ComfyUI is a user interface for working with image generation and manipulation models to create visuals and animations.

Q2: What is AnimateDiff?
A2: AnimateDiff is a motion module used together with Stable Diffusion checkpoints in ComfyUI to generate animated sequences from static images.

Q3: How can I refine my animations in this workflow?
A3: You can refine animations by adjusting the parameters in the KSampler and by using ControlNets to influence the movement and detail of the animation.

Q4: Can I use my own images in this process?
A4: Absolutely! You can use your own images as reference inputs in the workflow to create personalized animations.

Q5: What is the random image trick?
A5: The random image trick involves using an external image generator to incorporate random images into your animations, adding an element of surprise to your results.