
Towards Robust Blind Face Restoration with Codebook Lookup Transformer (NeurIPS 2022)

Paper | Project Page | Video


Shangchen Zhou, Kelvin C.K. Chan, Chongyi Li, Chen Change Loy

S-Lab, Nanyang Technological University

⭐ If CodeFormer is helpful to your images or projects, please help star this repo. Thanks! 🤗

Update

  • 2023.07.20: Integrated into 🐼 OpenXLab. Try out the online demo!
  • 2023.04.19: 🐳 Training code and config files are publicly available now.
  • 2023.04.09: Added inpainting and colorization features for cropped and aligned face images.
  • 2023.02.10: Included dlib as a new face detector option; it produces more accurate face identity.
  • 2022.10.05: Added support for video input --input_path [YOUR_VIDEO.mp4]. Try it to enhance your videos! 🎬
  • 2022.09.14: Integrated into 🤗 Hugging Face. Try out the online demo!
  • 2022.09.09: Integrated into 🚀 Replicate. Try out the online demo!
  • More

TODO

  • Add training code and config files
  • Add checkpoint and script for face inpainting
  • Add checkpoint and script for face colorization
  • Add background image enhancement

🐼 Try Enhancing Old Photos / Fixing AI-Generated Art

Face Restoration

Face Color Enhancement and Restoration

Face Inpainting

Dependencies and Installation

  • PyTorch >= 1.7.1
  • CUDA >= 10.1
  • Other required packages in requirements.txt
# git clone this repository
git clone https://github.com/sczhou/CodeFormer
cd CodeFormer

# create new anaconda env
conda create -n codeformer python=3.8 -y
conda activate codeformer

# install python dependencies
pip3 install -r requirements.txt
python basicsr/setup.py develop
conda install -c conda-forge dlib  # only for face detection or cropping with dlib
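To confirm the environment meets the PyTorch and CUDA requirements above, a quick sanity check (standard PyTorch calls, nothing CodeFormer-specific):

# print the installed PyTorch version and whether a CUDA device is visible
python -c "import torch; print(torch.__version__, torch.cuda.is_available())"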

Quick Inference

Download Pre-trained Models:

Download the facelib and dlib pretrained models from [Releases | Google Drive | OneDrive] to the weights/facelib folder. You can download them manually, or run the following command:

python scripts/download_pretrained_models.py facelib
python scripts/download_pretrained_models.py dlib  # only for dlib face detector

Download the CodeFormer pretrained models from [Releases | Google Drive | OneDrive] to the weights/CodeFormer folder. You can download them manually, or run the following command:

python scripts/download_pretrained_models.py CodeFormer 
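As a quick check that the downloads landed in the right place, you can list the weights folders; the codeformer.pth file name below matches the current release but is worth verifying against your download:

# expect codeformer.pth here
ls weights/CodeFormer
# expect the face detection and parsing models here
ls weights/facelib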

Prepare Testing Data:

You can put the testing images in the inputs/TestWhole folder. If you would like to test on cropped and aligned faces, you can put them in the inputs/cropped_faces folder. You can get the cropped and aligned faces by running the following command:

# you may need to install dlib via: conda install -c conda-forge dlib
python scripts/crop_align_face.py -i [input folder] -o [output folder]
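For example, a minimal sketch using the folders mentioned above (treating inputs/TestWhole as the source of raw photos is an assumption; substitute your own paths):

# crop and align faces from raw photos into inputs/cropped_faces
python scripts/crop_align_face.py -i inputs/TestWhole -o inputs/cropped_faces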

Testing:

[Note] If you compare against CodeFormer in your paper, please run the command below with --has_aligned (for cropped and aligned faces): the whole-image command involves a face-background fusion step that can damage hair texture at the boundary, leading to an unfair comparison.

The fidelity weight w lies in [0, 1]. Generally, a smaller w tends to produce a higher-quality result, while a larger w yields a higher-fidelity result. The results will be saved in the results folder.
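To see the trade-off concretely, a small sketch that sweeps w over a few values on the aligned test faces (the loop is illustrative; the flags and folder are the documented ones, but check how your version of the script names its outputs before comparing):

# restore at several fidelity weights; outputs are written under results/
for w in 0.3 0.5 0.7 0.9; do
    python inference_codeformer.py -w $w --has_aligned --input_path inputs/cropped_faces
done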

🧑🏻 Face Restoration (cropped and aligned face)

# For cropped and aligned faces (512x512)
python inference_codeformer.py -w 0.5 --has_aligned --input_path [image folder]|[image path]

🖼️ Whole Image Enhancement

# For whole image
# Add '--bg_upsampler realesrgan' to enhance the background regions with Real-ESRGAN
# Add '--face_upsample' to further upsample the restored face with Real-ESRGAN
python inference_codeformer.py -w 0.7 --input_path [image folder]|[image path]
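For instance, a full whole-image run with both optional upsamplers enabled, using the test folder suggested earlier:

# restore faces, enhance the background with Real-ESRGAN, and upsample the restored faces
python inference_codeformer.py -w 0.7 --bg_upsampler realesrgan --face_upsample --input_path inputs/TestWhole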

🎬 Video Enhancement

# For Windows/Mac users, please install ffmpeg first
conda install -c conda-forge ffmpeg

# For video clips
# Video path should end with '.mp4'|'.mov'|'.avi'
python inference_codeformer.py --bg_upsampler realesrgan --face_upsample -w 1.0 --input_path [video path]
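A concrete invocation with a placeholder clip name (old_home_video.mp4 is hypothetical):

# enhance every detected face in the clip; the background is upsampled with Real-ESRGAN
python inference_codeformer.py --bg_upsampler realesrgan --face_upsample -w 1.0 --input_path inputs/old_home_video.mp4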

🌈 Face Colorization (cropped and aligned face)

# For cropped and aligned faces (512x512)
# Colorize black-and-white or faded photos
python inference_colorization.py --input_path [image folder]|[image path]
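For example (treating inputs/gray_faces as a folder of 512x512 grayscale crops is an assumption; any folder of aligned faces works):

# colorize a folder of aligned grayscale face crops
python inference_colorization.py --input_path inputs/gray_faces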

🎨 Face Inpainting (cropped and aligned face)

# For cropped and aligned faces (512x512)
# Inputs can be masked with a white brush in an image editing app (e.g., Photoshop)
# (check out the examples in inputs/masked_faces)
python inference_inpainting.py --input_path [image folder]|[image path]
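For example, using the sample masked faces shipped with the repo (referenced in the comment above):

# inpaint the white-masked regions of the sample faces
python inference_inpainting.py --input_path inputs/masked_faces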

Training:

The training commands can be found in the documentation: English | 简体中文.

License

This project is licensed under NTU S-Lab License 1.0. Redistribution and use should follow this license.


🐼 Ecosystem Applications & Deployments

CodeFormer has been widely adopted, appearing in more than 20 online applications, platforms, API services, and independent websites, and it has also been integrated into many open-source projects and toolkits.

Only demos on Hugging Face Space, Replicate, and OpenXLab are official deployments maintained by the authors. All other demos, APIs, apps, websites, and integrations listed below are third-party (non-official) and are not affiliated with the CodeFormer authors. Please verify their legitimacy to avoid potential financial loss.

Websites (Non-official)

⚠️⚠️⚠️ The following websites are not official and are not operated by us. They use our models without any license or authorization. Please verify their legitimacy to avoid potential financial loss.

Website | Link | Notes
CodeFormer.net | https://codeformer.net/ | Non-official website
CodeFormer.cn | https://www.codeformer.cn/ | Non-official website
CodeFormerAI.com | https://codeformerai.com/ | Non-official website

Online Demos / API Platforms

Platform | Link | Notes
Hugging Face | https://huggingface.co/spaces/sczhou/CodeFormer | Maintained by Authors
Replicate | https://replicate.com/sczhou/codeformer | Maintained by Authors
OpenXLab | https://openxlab.org.cn/apps/detail/ShangchenZhou/CodeFormer | Maintained by Authors
Segmind | https://www.segmind.com/models/codeformer | Non-official
Sieve | https://www.sievedata.com/functions/sieve/codeformer | Non-official
Fal.ai | https://fal.ai/models/fal-ai/codeformer | Non-official
VaikerAI | https://vaikerai.com/sczhou/codeformer | Non-official
Scade.pro | https://www.scade.pro/processors/lucataco-codeformer | Non-official
Grandline | https://www.grandline.ai/model/codeformer | Non-official
AI Demos | https://aidemos.com/tools/codeformer | Non-official
Synexa | https://synexa.ai/explore/sczhou/codeformer | Non-official
RentPrompts | https://rentprompts.ai/models/Codeformer | Non-official
ElevaticsAI | https://elevatics.ai/models/super-resolution/codeformer | Non-official
Anakin.ai | https://anakin.ai/apps/codeformer-online-face-restoration-by-codeformer-19343 | Non-official
Relayto | https://relayto.com/explore/codeformer-yf9rj8kwc7zsr | Non-official

Open-Source Projects & Toolkits

Project / Toolkit | Link | Notes
Stable Diffusion GUI | https://nmkd.itch.io/t2i-gui | Integration
Stable Diffusion WebUI | https://github.com/AUTOMATIC1111/stable-diffusion-webui | Integration
ChaiNNer | https://github.com/chaiNNer-org/chaiNNer | Integration
PyPI | https://pypi.org/project/codeformer/ ; https://pypi.org/project/codeformer-pip/ | Python packages
ComfyUI | https://stable-diffusion-art.com/codeformer/ | Integration

Acknowledgement

This project is based on BasicSR. Some code is borrowed from Unleashing Transformers, YOLOv5-face, and FaceXLib. We also adopt Real-ESRGAN to support background image enhancement. Thanks for their awesome work.

Citation

If our work is useful for your research, please consider citing:

@inproceedings{zhou2022codeformer,
    author = {Zhou, Shangchen and Chan, Kelvin C.K. and Li, Chongyi and Loy, Chen Change},
    title = {Towards Robust Blind Face Restoration with Codebook Lookup TransFormer},
    booktitle = {NeurIPS},
    year = {2022}
}

Contact

If you have any questions, please feel free to reach out to me at [email protected].