Unofficial PyTorch implementation of the Pix2Pix paper.


This is an unofficial PyTorch implementation of the paper Image-to-Image Translation with Conditional Adversarial Nets.

If you find this code useful, please star the repository.

Getting Started

Installation

  • Clone this repository

    git clone "https://github.com/FarnoushRJ/MLProject_Pix2Pix.git"

  • Install the requirements

    • Pillow 7.0.0
    • numpy 1.18.4
    • matplotlib 3.2.1
    • barbar 0.2.1
    • torch 1.5.0
    • torchvision 0.6.0
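If you prefer to install everything in one step, the pinned versions above can be collected into a requirements file (a hypothetical requirements.txt, not necessarily shipped with the repo):

```
Pillow==7.0.0
numpy==1.18.4
matplotlib==3.2.1
barbar==0.2.1
torch==1.5.0
torchvision==0.6.0
```

Install with `pip install -r requirements.txt`.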

Data

  • The Facades and Maps datasets can be downloaded from this link.

Data Directory Structure

|__ DATASET_ROOT
    |__ train       
    |__ test
    |__ val     
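The pix2pix versions of these datasets typically store each training pair as a single side-by-side image, with the input on one half and the target on the other. A minimal loading sketch under that assumption (`split_pair` is an illustrative helper, not part of this repo):

```python
from PIL import Image

def split_pair(img):
    # Split one side-by-side sample into (input, target) halves.
    # Assumes the input occupies the left half and the target the
    # right half; some datasets use the opposite order.
    w, h = img.size
    return img.crop((0, 0, w // 2, h)), img.crop((w // 2, 0, w, h))

# Example: a 512x256 pair image yields two 256x256 halves.
```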

How to train

cd train/
python train.py --args
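As in the Pix2Pix paper, training optimizes the generator with a conditional GAN loss plus a λ-weighted L1 reconstruction term. A hedged sketch of that objective (the function name and signature are illustrative, not the repo's actual API; λ = 100 follows the paper, the repo's default may differ):

```python
import torch
import torch.nn.functional as F

def generator_loss(disc_logits_on_fake, fake, real, l1_weight=100.0):
    # Adversarial term: push the discriminator's logits on the
    # generated image toward the "real" label (1).
    adv = F.binary_cross_entropy_with_logits(
        disc_logits_on_fake, torch.ones_like(disc_logits_on_fake))
    # Reconstruction term: L1 distance between fake and real targets.
    recon = F.l1_loss(fake, real)
    return adv + l1_weight * recon
```

The discriminator logits would come from a PatchGAN evaluated on the (input, generated) pair; the L1 term encourages low-frequency correctness while the GAN term sharpens details.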

Training Loss Curves

The model is trained for 200 epochs on both the Facades and Maps datasets.

Facades Training Loss | Maps (AtoB) Training Loss | Maps (BtoA) Training Loss

Qualitative Results

Facades Dataset




Input, Fake Target, Real Target

Maps Dataset




Input, Fake Target, Real Target (AtoB)





Input, Fake Target, Real Target (BtoA)

TODO List

  • Models
    • Modified model for deblurring, denoising, and inpainting

References

  • Isola, P., Zhu, J.-Y., Zhou, T., Efros, A. A. "Image-to-Image Translation with Conditional Adversarial Networks." CVPR 2017.

License

License: MIT