
TianxiangMa / MUST-GAN

Licence: other
PyTorch implementation of the CVPR 2021 paper "MUST-GAN: Multi-level Statistics Transfer for Self-driven Person Image Generation"

Programming Languages

python
139335 projects - #7 most used programming language
shell
77523 projects

Projects that are alternatives to or similar to MUST-GAN

HistoGAN
Reference code for the paper HistoGAN: Controlling Colors of GAN-Generated and Real Images via Color Histograms (CVPR 2021).
Stars: ✭ 158 (+305.13%)
Mutual labels:  gan, cvpr2021
lecam-gan
Regularizing Generative Adversarial Networks under Limited Data (CVPR 2021)
Stars: ✭ 127 (+225.64%)
Mutual labels:  gan, cvpr2021
CoMoGAN
CoMoGAN: continuous model-guided image-to-image translation. CVPR 2021 oral.
Stars: ✭ 139 (+256.41%)
Mutual labels:  gan, cvpr2021
infnet-spen
TensorFlow implementation [ICLR 18] "Learning Approximate Inference Networks for Structured Prediction"
Stars: ✭ 30 (-23.08%)
Mutual labels:  gan
Simple-GAN-Base-on-Matlab
Simple Generative Adversarial Networks based on MATLAB
Stars: ✭ 24 (-38.46%)
Mutual labels:  gan
MoveSim
Code for the KDD 2020 (AI for COVID-19) paper: Learning to Simulate Human Mobility
Stars: ✭ 16 (-58.97%)
Mutual labels:  gan
ML-Papers-TLDR
A summary of interesting Machine Learning (mostly Deep Learning) papers that I encounter.
Stars: ✭ 20 (-48.72%)
Mutual labels:  gan
StyleGANCpp
Unofficial implementation of StyleGAN's generator
Stars: ✭ 25 (-35.9%)
Mutual labels:  gan
Pytorch-Image-Translation-GANs
Pytorch implementations of most popular image-translation GANs, including Pixel2Pixel, CycleGAN and StarGAN.
Stars: ✭ 106 (+171.79%)
Mutual labels:  gan
HoHoNet
"HoHoNet: 360 Indoor Holistic Understanding with Latent Horizontal Features" official pytorch implementation.
Stars: ✭ 65 (+66.67%)
Mutual labels:  cvpr2021
steam-stylegan2
Train a StyleGAN2 model on Colaboratory to generate Steam banners.
Stars: ✭ 30 (-23.08%)
Mutual labels:  gan
mSRGAN-A-GAN-for-single-image-super-resolution-on-high-content-screening-microscopy-images.
Generative Adversarial Network for single image super-resolution in high content screening microscopy images
Stars: ✭ 52 (+33.33%)
Mutual labels:  gan
Course-Project---Speech-Driven-Facial-Animation
ECE 535 - Course Project, Deep Learning Framework
Stars: ✭ 63 (+61.54%)
Mutual labels:  gan
AdvSegLoss
Official Pytorch implementation of Adversarial Segmentation Loss for Sketch Colorization [ICIP 2021]
Stars: ✭ 24 (-38.46%)
Mutual labels:  gan
chainer-wasserstein-gan
Chainer implementation of the Wasserstein GAN
Stars: ✭ 20 (-48.72%)
Mutual labels:  gan
metrics
PyTorch and TF implementations of the IS and FID scores; the TF implementation is a wrapper around the official ones.
Stars: ✭ 91 (+133.33%)
Mutual labels:  gan
AvatarGAN
Generate Cartoon Images using Generative Adversarial Network
Stars: ✭ 24 (-38.46%)
Mutual labels:  gan
CS231n
My solutions for Assignments of CS231n: Convolutional Neural Networks for Visual Recognition
Stars: ✭ 30 (-23.08%)
Mutual labels:  gan
automatic-manga-colorization
Use keras.js and cyclegan-keras to colorize manga automatically. All computation in browser. Demo is online:
Stars: ✭ 20 (-48.72%)
Mutual labels:  gan
Deep-Exemplar-based-Video-Colorization
The source code of CVPR 2019 paper "Deep Exemplar-based Video Colorization".
Stars: ✭ 180 (+361.54%)
Mutual labels:  gan

MUST-GAN

Code | paper

The PyTorch implementation of our CVPR 2021 paper "MUST-GAN: Multi-level Statistics Transfer for Self-driven Person Image Generation".

Tianxiang Ma, Bo Peng, Wei Wang, Jing Dong,

CRIPAC, NLPR, CASIA & University of Chinese Academy of Sciences.


Test results of our model under self-supervised training:

Pose transfer

Clothes style transfer

Requirements

  • python3
  • pytorch 1.1.0
  • numpy
  • scipy
  • scikit-image
  • pillow
  • pandas
  • tqdm
  • dominate
  • visdom
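
All of the above are installable with pip. A typical one-liner is shown below; only the pytorch version is pinned by the authors, so the remaining package versions are left unpinned here as an assumption.
pip install torch==1.1.0 numpy scipy scikit-image pillow pandas tqdm dominate visdom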

Getting Started

Installation

  • Clone this repo:
git clone https://github.com/TianxiangMa/MUST-GAN.git
cd MUST-GAN

Data Preparation

We train and test our model on the DeepFashion dataset. Specifically, we use the high-resolution images from the In-shop Clothes Retrieval Benchmark.

Download the dataset and unzip it (you will need to request the password), then put the folder img_highres under the ./datasets directory. Download the train/test split lists, which are used by many previous methods, and put them under the ./datasets directory.

  • Run the following code to split the dataset into train and test sets.
python tool/generate_fashion_datasets.py
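
For intuition, the split amounts to copying each image named in the train/test lists into its own folder. The Python sketch below illustrates this; the list file names (train.lst, test.lst) and the flattened destination layout are assumptions for illustration, not necessarily what tool/generate_fashion_datasets.py actually does.

import os
import shutil

def split_images(list_file, src_root, dst_root):
    # Copy every image named in list_file (assumed format: one relative
    # path per line) from src_root into a flat dst_root directory.
    os.makedirs(dst_root, exist_ok=True)
    with open(list_file) as f:
        for line in f:
            rel = line.strip()
            if not rel:
                continue
            dst_name = rel.replace('/', '_')  # flatten nested paths
            shutil.copy(os.path.join(src_root, rel),
                        os.path.join(dst_root, dst_name))

split_images('datasets/train.lst', 'datasets/img_highres', 'datasets/train')
split_images('datasets/test.lst', 'datasets/img_highres', 'datasets/test')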

Download the source-target paired image list, the same list used by many previous works. Because our method supports self-supervised training, it does not need fashion-resize-pairs-train.csv; you can download train_images_lst.csv for training instead.
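
As a quick sanity check, the training list can be inspected with pandas. Note that it contains single images rather than source-target pairs; its exact column layout is not documented here, so none is assumed.

import pandas as pd

# Peek at the self-supervised training list: single images, no pairs.
df = pd.read_csv('datasets/train_images_lst.csv')
print(len(df), 'rows')
print(df.head())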

Download the train/test keypoint annotation files and the semantic segmentation files.

Put all of the above files into the ./datasets folder.

  • Run the following code to generate the pose maps and pose connection maps.
python tool/generate_pose_map.py
python tool/generate_pose_connection_map.py
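
Conceptually, a pose map stacks one heatmap channel per body keypoint. The sketch below shows the common Gaussian-peak construction (PATN-style); the image size, sigma, and 18-keypoint convention are assumptions, and this is not the verbatim code of tool/generate_pose_map.py.

import numpy as np

def keypoints_to_pose_map(keypoints, height=256, width=176, sigma=6):
    # One channel per keypoint; (-1, -1) marks an undetected joint
    # (assumed convention).
    pose_map = np.zeros((len(keypoints), height, width), dtype=np.float32)
    ys, xs = np.mgrid[0:height, 0:width]
    for i, (x, y) in enumerate(keypoints):
        if x < 0 or y < 0:
            continue
        pose_map[i] = np.exp(-((xs - x) ** 2 + (ys - y) ** 2)
                             / (2.0 * sigma ** 2))
    return pose_map

# Example with 18 OpenPose-style keypoints, all but one missing for brevity.
kps = [(88, 40)] + [(-1, -1)] * 17
print(keypoints_to_pose_map(kps).shape)  # (18, 256, 176)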

Download the pretrained VGG model for training, and put it into the ./datasets folder.
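
Such a VGG model is typically used as a frozen feature extractor for perceptual and style losses. A minimal sketch with torchvision follows; whether MUST-GAN loads torchvision's VGG19 or its own checkpoint, and which layers it taps, are assumptions here.

import torch
import torchvision.models as models

# Frozen VGG19 feature extractor (pretrained=True matches the torchvision
# API of the pytorch 1.1.0 era targeted by this repo).
vgg = models.vgg19(pretrained=True).features.eval()
for p in vgg.parameters():
    p.requires_grad = False

def vgg_features(x, layer_ids=(2, 7, 12, 21)):
    # Collect activations at a few conv layers; the indices are illustrative.
    feats = []
    for i, layer in enumerate(vgg):
        x = layer(x)
        if i in layer_ids:
            feats.append(x)
    return feats

feats = vgg_features(torch.randn(1, 3, 256, 176))
print([tuple(f.shape) for f in feats])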

Test

Download our pretrained model, and put it into the ./check_points/MUST-GAN/ folder.

  • Run the following script, adjusting the parameters as needed.
bash scripts/test.sh

Train

  • Run the following script, adjusting the parameters as needed.
bash scripts/train.sh

Citation

If you use this code for your research, please cite our paper:

@InProceedings{Ma_2021_CVPR,
    author    = {Ma, Tianxiang and Peng, Bo and Wang, Wei and Dong, Jing},
    title     = {MUST-GAN: Multi-Level Statistics Transfer for Self-Driven Person Image Generation},
    booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
    month     = {June},
    year      = {2021},
    pages     = {13622-13631}
}

Acknowledgments

Our code is based on PATN and ADGAN; thanks for their great work.
