
Create 🔥 videos with Stable Diffusion by exploring the latent space and morphing between text prompts.

Project description

stable-diffusion-videos

Try it yourself in Colab.

Example - morphing between "blueberry spaghetti" and "strawberry spaghetti"

https://user-images.githubusercontent.com/32437151/188721341-6f28abf9-699b-46b0-a72e-fa2a624ba0bb.mp4

Installation

pip install stable_diffusion_videos

Usage

Check out the examples folder for example scripts 👀

Making Videos

Note: On Apple Silicon (M1/M2), pass torch_dtype=torch.float32 instead, as torch.float16 is not supported on the MPS backend.
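That note boils down to a simple device-to-dtype mapping. A minimal sketch (the helper name and string-based mapping are my own, not part of the library):

```python
def pick_torch_dtype(device: str) -> str:
    """Return the torch dtype name suited to a device string.

    float16 halves memory on CUDA GPUs; the MPS backend (Apple Silicon)
    does not support float16 here, so fall back to float32.
    """
    return "float16" if device.startswith("cuda") else "float32"
```

You could then pass `getattr(torch, pick_torch_dtype(device))` as `torch_dtype` when building the pipeline.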

from stable_diffusion_videos import StableDiffusionWalkPipeline
import torch

pipeline = StableDiffusionWalkPipeline.from_pretrained(
    "CompVis/stable-diffusion-v1-4",
    torch_dtype=torch.float16,
).to("cuda")

video_path = pipeline.walk(
    prompts=['a cat', 'a dog'],
    seeds=[42, 1337],
    num_interpolation_steps=3,
    height=512,  # use multiples of 64 if > 512. Multiples of 8 if < 512.
    width=512,   # use multiples of 64 if > 512. Multiples of 8 if < 512.
    output_dir='dreams',        # Where images/videos will be saved
    name='animals_test',        # Subdirectory of output_dir where images/videos will be saved
    guidance_scale=8.5,         # Higher adheres to prompt more, lower lets model take the wheel
    num_inference_steps=50,     # Number of diffusion steps per image generated. 50 is good default
)
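The height/width comments above encode a rule worth making explicit: dimensions above 512 should be multiples of 64, and dimensions up to 512 should be multiples of 8. A small checker, sketched under those stated assumptions (hypothetical helper, not part of the library):

```python
def valid_dimension(px: int) -> bool:
    """Check an image dimension against the rule in the comments above:
    values above 512 must be multiples of 64; values at or below 512
    must be multiples of 8.
    """
    if px <= 0:
        return False
    if px > 512:
        return px % 64 == 0
    return px % 8 == 0
```

Validating height and width up front avoids discovering a shape error only after the first diffusion step.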

Making Music Videos

New! Music can be added to the video by providing a path to an audio file. The audio will inform the rate of interpolation so the videos move to the beat 🎶

from stable_diffusion_videos import StableDiffusionWalkPipeline
import torch

pipeline = StableDiffusionWalkPipeline.from_pretrained(
    "CompVis/stable-diffusion-v1-4",
    torch_dtype=torch.float16,
).to("cuda")

# Seconds in the song.
audio_offsets = [146, 148]  # [Start, end]
fps = 30  # Use lower values for testing (5 or 10), higher values for better quality (30 or 60)

# Convert seconds to frames
num_interpolation_steps = [(b-a) * fps for a, b in zip(audio_offsets, audio_offsets[1:])]

video_path = pipeline.walk(
    prompts=['a cat', 'a dog'],
    seeds=[42, 1337],
    num_interpolation_steps=num_interpolation_steps,
    audio_filepath='audio.mp3',
    audio_start_sec=audio_offsets[0],
    fps=fps,
    height=512,  # use multiples of 64 if > 512. Multiples of 8 if < 512.
    width=512,   # use multiples of 64 if > 512. Multiples of 8 if < 512.
    output_dir='dreams',        # Where images/videos will be saved
    guidance_scale=7.5,         # Higher adheres to prompt more, lower lets model take the wheel
    num_inference_steps=50,     # Number of diffusion steps per image generated. 50 is good default
)
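The zip pattern above generalizes beyond two offsets: with N audio timestamps and N prompts, each consecutive pair of timestamps gets its own frame count. A standalone sketch of that conversion (the example timestamps are illustrative, not from the project):

```python
def steps_between_offsets(audio_offsets, fps):
    """Frames to interpolate between each consecutive pair of audio
    timestamps (in seconds), at the given frame rate."""
    return [(b - a) * fps for a, b in zip(audio_offsets, audio_offsets[1:])]

# e.g. three prompts pinned to 30 s, 32 s, and 35 s of a song at 30 fps
# yields [60, 90]: 60 frames for the first transition, 90 for the second.
```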

Using the UI

from stable_diffusion_videos import StableDiffusionWalkPipeline, Interface
import torch

pipeline = StableDiffusionWalkPipeline.from_pretrained(
    "CompVis/stable-diffusion-v1-4",
    torch_dtype=torch.float16,
).to("cuda")

interface = Interface(pipeline)
interface.launch()

Credits

This work builds on a script shared by @karpathy. That script was modified into this gist, which was then updated and extended into this repo.

Contributing

You can file issues and feature requests here.

Enjoy 🤗

Download files

Download the file for your platform. If you're not sure which to choose, learn more about installing packages.

Source Distribution

stable_diffusion_videos-0.9.2.tar.gz (42.2 kB)

Uploaded Source

Built Distribution

If you're not sure about the file name format, learn more about wheel file names.

stable_diffusion_videos-0.9.2-py3-none-any.whl (42.1 kB)

Uploaded Python 3

File details

Details for the file stable_diffusion_videos-0.9.2.tar.gz.

File metadata

  • Download URL: stable_diffusion_videos-0.9.2.tar.gz
  • Upload date:
  • Size: 42.2 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? Yes
  • Uploaded via: twine/6.1.0 CPython/3.13.7

File hashes

Hashes for stable_diffusion_videos-0.9.2.tar.gz
Algorithm Hash digest
SHA256 a432e1c23ec2ce6678c980b06e08971675121bba29c7e229e4e68fff9b9c5f2f
MD5 0bea3505901bd9c16ce2da245a7a49a5
BLAKE2b-256 756c2631d40b61dd9c651a68e3d4fbe0c2e69ab2e7feec8f9a815173dfb01d97

See more details on using hashes here.

Provenance

The following attestation bundles were made for stable_diffusion_videos-0.9.2.tar.gz:

Publisher: python-publish.yml on nateraw/stable-diffusion-videos

Attestations: Values shown here reflect the state when the release was signed and may no longer be current.

File details

Details for the file stable_diffusion_videos-0.9.2-py3-none-any.whl.

File metadata

File hashes

Hashes for stable_diffusion_videos-0.9.2-py3-none-any.whl
Algorithm Hash digest
SHA256 957afaa80dee218115e5afba93dfa086c0dde0341744ee6213d402a02766920b
MD5 db59b314e3066ef676149af5c2ae5626
BLAKE2b-256 a6b190d951293231cfe24c8fcd7fa80408c5090ca71fd6b98c03be441eb3b2ae


Provenance

The following attestation bundles were made for stable_diffusion_videos-0.9.2-py3-none-any.whl:

Publisher: python-publish.yml on nateraw/stable-diffusion-videos

Attestations: Values shown here reflect the state when the release was signed and may no longer be current.
