CoDi: Any-to-Any Generation via Composable Diffusion

¹University of North Carolina at Chapel Hill, ²Microsoft Azure Cognitive Services Research. *Work done during a Microsoft internship and at UNC. Corresponding Authors

Links: arXiv (2305.11846) · Project page (github.io) · Hugging Face Spaces demo

Introduction

We present Composable Diffusion (CoDi), a novel generative model capable of generating any combination of output modalities, such as language, image, video, or audio, from any combination of input modalities. Unlike existing generative AI systems, CoDi can generate multiple modalities in parallel and its input is not limited to a subset of modalities like text or image. Despite the absence of training datasets for many combinations of modalities, we propose to align modalities in both the input and output space. This allows CoDi to freely condition on any input combination and generate any group of modalities, even if they are not present in the training data. CoDi employs a novel composable generation strategy which involves building a shared multimodal space by bridging alignment in the diffusion process, enabling the synchronized generation of intertwined modalities, such as temporally aligned video and audio. Highly customizable and flexible, CoDi achieves strong joint-modality generation quality, and outperforms or is on par with the unimodal state-of-the-art for single-modality synthesis.

Installation

# Prepare and activate a conda environment
conda create -n CoDi python=3.8
conda activate CoDi

# Install PyTorch built for CUDA 11.6 (adjust the CUDA version if needed; newer PyTorch versions should also work)
pip install torch==1.12.1+cu116 torchaudio==0.12.1+cu116 torchvision==0.13.1+cu116 \
-f https://download.pytorch.org/whl/torch_stable.html

# Install the remaining dependencies
pip install -r requirement.txt
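
As an optional sanity check (not part of the original instructions), you can confirm that the CUDA-enabled builds installed correctly:

# Verify that torch, torchvision, and torchaudio import and that a GPU is visible
import torch
import torchvision
import torchaudio

print(torch.__version__, torchvision.__version__, torchaudio.__version__)
print("CUDA available:", torch.cuda.is_available())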

Inference

Download the checkpoints from the Hugging Face model repository:

| Model Part | Hugging Face Weights | fp16 Weights |
| --- | --- | --- |
| CoDi Encoders and VAEs | CoDi_encoders.pth | CoDi_encoders.pth |
| CoDi Text Diffuser | CoDi_text_diffuser.pth | CoDi_text_diffuser.pth |
| CoDi Audio Diffuser | CoDi_audio_diffuser_m.pth | CoDi_audio_diffuser_m.pth |
| CoDi Vision Diffuser | CoDi_video_diffuser_8frames.pth | CoDi_video_diffuser_8frames.pth |

# Full-precision weights
wget https://huggingface.co/ZinengTang/CoDi/resolve/main/CoDi_encoders.pth -P checkpoints/
wget https://huggingface.co/ZinengTang/CoDi/resolve/main/CoDi_text_diffuser.pth -P checkpoints/
wget https://huggingface.co/ZinengTang/CoDi/resolve/main/CoDi_audio_diffuser_m.pth -P checkpoints/
wget https://huggingface.co/ZinengTang/CoDi/resolve/main/CoDi_video_diffuser_8frames.pth -P checkpoints/
# Or fp16 weights
wget https://huggingface.co/ZinengTang/CoDi/resolve/main/checkpoints_fp16/CoDi_encoders.pth -P checkpoints/
wget https://huggingface.co/ZinengTang/CoDi/resolve/main/checkpoints_fp16/CoDi_text_diffuser.pth -P checkpoints/
wget https://huggingface.co/ZinengTang/CoDi/resolve/main/checkpoints_fp16/CoDi_audio_diffuser_m.pth -P checkpoints/
wget https://huggingface.co/ZinengTang/CoDi/resolve/main/checkpoints_fp16/CoDi_video_diffuser_8frames.pth -P checkpoints/
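
As an alternative to wget, the same checkpoints can be fetched programmatically with the huggingface_hub package (a minimal sketch, assuming huggingface_hub is installed; adjust the file names for the fp16 variants under checkpoints_fp16/):

from huggingface_hub import hf_hub_download

# Download the full-precision checkpoints into ./checkpoints/
for filename in [
    "CoDi_encoders.pth",
    "CoDi_text_diffuser.pth",
    "CoDi_audio_diffuser_m.pth",
    "CoDi_video_diffuser_8frames.pth",
]:
    hf_hub_download(repo_id="ZinengTang/CoDi", filename=filename, local_dir="checkpoints")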

Then run demo.ipynb to try inference.

Citation

If you find our work useful, please consider citing:

@article{tang2023any,
  title={Any-to-Any Generation via Composable Diffusion},
  author={Tang, Zineng and Yang, Ziyi and Zhu, Chenguang and Zeng, Michael and Bansal, Mohit},
  journal={arXiv preprint arXiv:2305.11846},
  year={2023}
}

Reference

The code structure is based on Versatile Diffusion. The audio diffusion model is based on AudioLDM. The video diffusion model is partially based on Make-A-Video.

Usage and License Notices

The CoDi code and models released in this repository are free to use in most circumstances under the MIT license.

Contact

Zineng Tang (zn.tang.terran@gmail.com)