Tianrui Feng1, Zhi Li2, Shuo Yang2, Haocheng Xi2, Muyang Li3, Xiuyu Li1, Lvmin Zhang4, Keting Yang5, Kelly Peng6, Song Han7, Maneesh Agrawala4, Kurt Keutzer2, Akio Kodaira8, Chenfeng Xu†,1
1UT Austin, 2UC Berkeley, 3Nunchaku AI, 4Stanford University, 5Independent Researcher, 6First Intelligence, 7MIT, 8Shizhuku AI
† Project lead, corresponding to xuchenfeng@utexas.edu
StreamDiffusionV2 is an open-source interactive diffusion pipeline for real-time streaming applications. It scales across diverse GPU setups, supports flexible denoising steps, and delivers high FPS for creators and platforms. Further details are available on our project homepage.
- [2026-03-27] StreamDiffusionV2 is now available on PyPI. Install the environment via pip install streamdiffusionv2.
- [2026-03-27] Added optional TAEHV-VAE support for inference via --use_taehv and USE_TAEHV=1.
- [2026-03-06] Updated the ring-buffer KV cache for efficient sliding-window attention.
- [2026-01-26] 🎉 StreamDiffusionV2 is accepted by MLSys 2026!
- [2025-11-10] 🚀 We have released our paper on arXiv. Check it for more details!
- [2025-10-18] Released our model checkpoint on Hugging Face.
- [2025-10-06] 🔥 Our StreamDiffusionV2 is publicly released! Check our project homepage for more details.
- OS: Linux
- NVIDIA GPU with CUDA-compatible drivers
conda create -n streamdiffusionv2 python=3.10 -y
conda activate streamdiffusionv2
# PyPI
pip install streamdiffusionv2
# Optional but recommended for better throughput
pip install "streamdiffusionv2[flash-attn]"If you are installing from a local checkout of this repository instead of PyPI:
conda create -n streamdiffusionv2 python=3.10
conda activate streamdiffusionv2
pip install .
# Optional but recommended for better throughput
pip install ".[flash-attn]"The package install includes the Python dependencies required for both offline inference and the demo backend. The demo frontend still requires Node.js 18 as described in demo/README.md.
# 1.3B Model
huggingface-cli download --resume-download Wan-AI/Wan2.1-T2V-1.3B --local-dir wan_models/Wan2.1-T2V-1.3B
huggingface-cli download --resume-download jerryfeng/StreamDiffusionV2 --local-dir ./ckpts --include "wan_causal_dmd_v2v/*"
# 14B Model
huggingface-cli download --resume-download Wan-AI/Wan2.1-T2V-14B --local-dir wan_models/Wan2.1-T2V-14B
huggingface-cli download --resume-download jerryfeng/StreamDiffusionV2 --local-dir ./ckpts --include "wan_causal_dmd_v2v_14b/*"
We use the 14B model from CausVid-Plus for the offline inference demo.
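If you prefer to fetch the weights from Python rather than the CLI, a minimal sketch with huggingface_hub covers the same repos, include patterns, and target directories as the commands above (shown here for the 1.3B setup):
# Sketch: download the Wan 1.3B base model and the StreamDiffusionV2 v2v checkpoint.
from huggingface_hub import snapshot_download
snapshot_download(repo_id="Wan-AI/Wan2.1-T2V-1.3B", local_dir="wan_models/Wan2.1-T2V-1.3B")
snapshot_download(repo_id="jerryfeng/StreamDiffusionV2", local_dir="./ckpts", allow_patterns=["wan_causal_dmd_v2v/*"])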
If you want to enable the lightweight TAEHV decoder, download its checkpoint once:
curl -L https://github.com/madebyollin/taehv/raw/main/taew2_1.pth -o ckpts/taew2_1.pth
The offline inference code can also download this file automatically on first use, but keeping it in ckpts/taew2_1.pth avoids that extra startup step.
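If you want to script that step, a minimal Python sketch that fetches the checkpoint only when it is missing (same URL and target path as the curl command above):
# Sketch: download the TAEHV checkpoint into ckpts/ only if it is not already there.
from pathlib import Path
from urllib.request import urlretrieve
TAEHV_URL = "https://github.com/madebyollin/taehv/raw/main/taew2_1.pth"
target = Path("ckpts/taew2_1.pth")
if not target.exists():
    target.parent.mkdir(parents=True, exist_ok=True)
    urlretrieve(TAEHV_URL, str(target))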
We provide a simple example of how to use StreamDiffusionV2. For more detailed examples, please refer to the streamv2v directory.
import numpy as np
from streamdiffusionv2 import StreamDiffusionV2Pipeline, export_video, load_video
stream = StreamDiffusionV2Pipeline(
    checkpoint_folder="ckpts/wan_causal_dmd_v2v",
    mode="single",
)
stream.prepare("A dog walks on the grass, realistic")
video = load_video("examples/original.mp4", height=480, width=832)
decoded_chunks = []
noise_scale = stream.noise_scale
for video_chunk in stream.chunk_video(video):
    encoded_chunk = stream.encode_chunk(
        video,
        video_chunk,
        previous_noise_scale=noise_scale,
        initial_noise_scale=stream.noise_scale,
    )
    noise_scale = encoded_chunk.noise_scale
    denoised_chunk = stream.denoise_chunk(encoded_chunk)
    if denoised_chunk is None:
        continue
    decoded_chunks.append(stream.decode_chunk(denoised_chunk))
output = np.concatenate(decoded_chunks, axis=0)
export_video(output, "outputs/python_single.mp4", fps=16)
The same staged loop also runs without Stream-batch; construct the pipeline with mode="single-wo":
import numpy as np
from streamdiffusionv2 import StreamDiffusionV2Pipeline, export_video, load_video
stream = StreamDiffusionV2Pipeline(
    checkpoint_folder="ckpts/wan_causal_dmd_v2v",
    mode="single-wo",
)
stream.prepare("A dog walks on the grass, realistic")
video = load_video("examples/original.mp4", height=480, width=832)
decoded_chunks = []
noise_scale = stream.noise_scale
for video_chunk in stream.chunk_video(video):
    encoded_chunk = stream.encode_chunk(
        video,
        video_chunk,
        previous_noise_scale=noise_scale,
        initial_noise_scale=stream.noise_scale,
    )
    noise_scale = encoded_chunk.noise_scale
    denoised_chunk = stream.denoise_chunk(encoded_chunk)
    if denoised_chunk is None:
        continue
    decoded_chunks.append(stream.decode_chunk(denoised_chunk))
output = np.concatenate(decoded_chunks, axis=0)
export_video(output, "outputs/python_single_wo.mp4", fps=16)
Pipeline-parallel inference still launches multiple worker processes, so the Python API for that mode stays as one imported function:
from streamdiffusionv2 import run_video_to_video
run_video_to_video(
    mode="pipe",
    checkpoint_folder="ckpts/wan_causal_dmd_v2v",
    video_path="examples/original.mp4",
    prompt="A dog walks on the grass, realistic",
    output_path="outputs/python_pipe.mp4",
    gpu_ids=[0, 1],
    num_gpus=2,
)
The staged API can be reconfigured before prepare(...):
from streamdiffusionv2 import StreamDiffusionV2Pipeline
stream = StreamDiffusionV2Pipeline(checkpoint_folder="ckpts/wan_causal_dmd_v2v")
stream.enable_acceleration(fast=True)
stream.prepare("A dog walks on the grass, realistic")fast=True enables use_taehv and use_tensorrt, and it automatically switches the default config from wan_causal_dmd_v2v.yaml to wan_causal_dmd_v2v_fast.yaml.
All offline inference entrypoints are unified under run_v2v.sh.
Choose one mode first:
- single: single-GPU streaming inference
- single-wo: single-GPU inference without Stream-batch
- pipe: multi-GPU pipeline inference
Quick start:
./run_v2v.sh single
./run_v2v.sh single-wo
./run_v2v.sh pipe
./run_v2v.sh pipe --profile
Use --profile only when you want synchronized throughput measurements.
The legacy wrappers v2v.sh, v2v_wo.sh, and pipe_v2v.sh still work, but they now forward to the same shared entrypoint.
The most important options are:
- --config_path: model config YAML
- --checkpoint_folder: checkpoint directory
- --video_path: input video
- --prompt_file_path: prompt text file
- --output_folder: output directory
- --height and --width: output resolution
- --fps: target output FPS
- --step: number of denoising steps used during inference
- --use_taehv: use Wan stream encode with the TAEHV decoder for faster VAE decoding
You can pass overrides either as CLI flags or as environment variables. For example:
OUTPUT_FOLDER=outputs/run_single ./run_v2v.sh single
VIDEO_PATH=examples/original.mp4 PROMPT_FILE_PATH=examples/prompt.txt ./run_v2v.sh single-wo
NPROC_PER_NODE=2 MASTER_PORT=29511 ./run_v2v.sh pipe
./run_v2v.sh single --use_taehv
This is the standard offline path when you run on one GPU.
./run_v2v.sh single \
--config_path configs/wan_causal_dmd_v2v.yaml \
--checkpoint_folder ckpts/wan_causal_dmd_v2v \
--output_folder outputs/ \
--prompt_file_path examples/prompt.txt \
--video_path examples/original.mp4 \
--height 480 \
--width 832 \
--fps 16 \
--step 2
To enable the TAEHV decoder in this mode:
./run_v2v.sh single --use_taehv
Use this mode when you want to split inference across multiple GPUs.
./run_v2v.sh pipe \
--config_path configs/wan_causal_dmd_v2v.yaml \
--checkpoint_folder ckpts/wan_causal_dmd_v2v \
--output_folder outputs/ \
--prompt_file_path examples/prompt.txt \
--video_path examples/original.mp4 \
--height 480 \
--width 832 \
--fps 16 \
--step 2
# --schedule_block  # optional: enable block scheduling
To enable the TAEHV decoder in pipeline mode:
./run_v2v.sh pipe --use_taehv
Notes:
- --schedule_block is optional and can improve throughput on some multi-GPU setups.
- Adjust NPROC_PER_NODE, --height, --width, and --fps to match your hardware and target workload.
- ./run_v2v.sh pipe --profile is intended for profiling runs, not normal benchmarking or deployment.
A minimal web demo is available under demo/. For setup and startup, please refer to the demo directory.
- Access in a browser after startup: http://0.0.0.0:7860 or http://localhost:7860
- To enable the TAEHV decoder in the web demo, start it with USE_TAEHV=1.
- Demo and inference pipeline.
- Dynamic scheduler for various workloads.
- Training code.
- FP8 support.
- TensorRT support.
StreamDiffusionV2 is inspired by the prior works StreamDiffusion and StreamV2V. Our Causal DiT builds upon CausVid, and the rolling KV cache design is inspired by Self-Forcing.
We are grateful to the team members of StreamDiffusion for their support. We also thank First Intelligence and the Daydream team for their great feedback.
We especially thank the Daydream team for the great collaboration and for incorporating our StreamDiffusionV2 pipeline into their cool demo UI.
If you find this repository useful in your research, please consider giving a star ⭐ or a citation.
@article{feng2025streamdiffusionv2,
title={StreamDiffusionV2: A Streaming System for Dynamic and Interactive Video Generation},
author={Feng, Tianrui and Li, Zhi and Yang, Shuo and Xi, Haocheng and Li, Muyang and Li, Xiuyu and Zhang, Lvmin and Yang, Keting and Peng, Kelly and Han, Song and others},
journal={arXiv preprint arXiv:2511.07399},
year={2025}
}

