
LSGQuant: Layer-Sensitivity Guided Quantization for One-Step Diffusion Real-World Video Super-Resolution

Tianxing Wu *, Zheng Chen *, Cirou Xu, Bowen Chai, Yong Guo, Yutong Liu, Linghe Kong, and Yulun Zhang†

"LSGQuant: Layer-Sensitivity Guided Quantization for One-Step Diffusion Real-World Video Super-Resolution", arXiv, 2026

[project] [arXiv] [supplementary material]

🔥🔥🔥 News

  • 2026-02-04: This repo is released.

Abstract: One-step diffusion models have demonstrated promising capability and fast inference in real-world video super-resolution (VSR). Nevertheless, the substantial model size and high computational cost of Diffusion Transformers (DiTs) limit downstream applications. While low-bit quantization is a common approach to model compression, the effectiveness of quantized models is challenged by the high dynamic range of input latents and diverse layer behaviors. To address these challenges, we introduce LSGQuant, a layer-sensitivity guided quantization approach for one-step diffusion-based real-world VSR. Our method incorporates a Dynamic Range Adaptive Quantizer (DRAQ) to fit video token activations. Furthermore, we estimate layer sensitivity by analyzing layer-wise statistics during calibration and implement a Variance-Oriented Layer Training Strategy (VOLTS). We also introduce Quantization-Aware Optimization (QAO) to jointly refine the quantized branch and a retained high-precision branch. Extensive experiments demonstrate that our method achieves performance nearly on par with the original full-precision model and significantly exceeds existing quantization techniques.
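Since the code and pretrained models are not yet released (see the TODO below), the snippet here is only a minimal sketch of the kind of per-token, range-adaptive fake quantization a quantizer like DRAQ could apply to DiT activations. The function name `draq_like_quantize`, the per-token scaling choice, and the tensor shapes are illustrative assumptions, not the authors' implementation.

```python
import torch

def draq_like_quantize(x: torch.Tensor, n_bits: int = 8) -> torch.Tensor:
    # Hypothetical sketch: per-token symmetric fake quantization, so each video
    # token gets its own scale and tokens with very different dynamic ranges do
    # not share one quantization grid. Illustrative only, not LSGQuant/DRAQ code.
    qmax = 2 ** (n_bits - 1) - 1                                  # 127 for 8-bit
    scale = x.abs().amax(dim=-1, keepdim=True).clamp(min=1e-8) / qmax
    x_q = torch.clamp(torch.round(x / scale), -qmax - 1, qmax)   # quantize
    return x_q * scale                                            # dequantize (simulation)

# Example: simulate quantizing activations of shape (batch, tokens, channels),
# where per-token magnitudes vary widely (assumed shapes, for illustration).
x = torch.randn(2, 1024, 320) * (torch.rand(2, 1024, 1) * 10)
x_hat = draq_like_quantize(x, n_bits=8)
print("mean abs quantization error:", (x - x_hat).abs().mean().item())
```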


Method Overview


βš’οΈ TODO

  • Release code and pretrained models
  • Test our quantization method on more models

🔎 Results

Quantitative Results (click to expand)
  • Results in Tab. 1 of the main paper

Qualitative Results (click to expand)
  • Results in Fig. 5 of the main paper

📎 Citation

If you find the code helpful in your research or work, please cite the following paper(s).

@article{wu2026lsgquant,
  title={LSGQuant: Layer-Sensitivity Guided Quantization for One-Step Diffusion Real-World Video Super-Resolution},
  author={Wu, Tianxing and Chen, Zheng and Xu, Cirou and Chai, Bowen and Guo, Yong and Liu, Yutong and Kong, Linghe and Zhang, Yulun},
  journal={arXiv preprint arXiv:2602.03182},
  year={2026}
}

💡 Acknowledgements

The full-precision backbone model is adapted from WAN2.1. We thank its developers for providing a robust pretrained baseline on which LSGQuant builds.

The quantization framework builds upon ViDiT-Q and SVDQuant. We are also grateful to these open-source contributors, whose code has been instrumental in developing and evaluating LSGQuant.
