Advances in Deep Learning-Based Face Deblurring Methods

Haochen Wang

Keywords

face deblurring, deep learning, identity preservation, diffusion model

Abstract

Face deblurring aims to restore a clear, realistic, and identity-preserving high-quality face image from a given blurred face image. Following the development of the technology, this paper systematically reviews deep learning-based face deblurring across five stages: traditional methods based on physical models, early end-to-end regression networks, methods based on generative adversarial networks (GANs) and prior embedding, Transformer-based architectures, and diffusion-model-based methods. By examining the core ideas, key techniques, and representative works of each stage, it traces a clear evolution of face deblurring research from “pixel-level image restoration” to “identity-constrained generative modeling”. In addition, it introduces the datasets commonly used for face deblurring and discusses open problems and promising directions for future research. Face deblurring remains a research hotspot in computer vision, and the field continues to develop in increasingly diverse directions as higher-quality algorithms are proposed.
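The physical-model stage mentioned above treats a blurred image as a sharp image convolved with a blur kernel plus noise, and recovers the sharp image by inverting that process (deconvolution). The following is a minimal, illustrative sketch of the blur formation model only; the function and variable names are hypothetical and do not come from any of the surveyed methods.

```python
# Toy sketch of the blur formation model y = k * x + n that classical
# physical-model deblurring methods seek to invert. Pure Python, no
# dependencies; real pipelines operate on full-resolution images with
# unknown kernels estimated jointly with the sharp image.

def convolve2d(image, kernel):
    """'Same'-size 2D convolution with zero padding (kernel assumed symmetric)."""
    h, w = len(image), len(image[0])
    kh, kw = len(kernel), len(kernel[0])
    ph, pw = kh // 2, kw // 2
    out = [[0.0] * w for _ in range(h)]
    for i in range(h):
        for j in range(w):
            acc = 0.0
            for u in range(kh):
                for v in range(kw):
                    ii, jj = i + u - ph, j + v - pw
                    if 0 <= ii < h and 0 <= jj < w:
                        acc += kernel[u][v] * image[ii][jj]
            out[i][j] = acc
    return out

# A uniform 3x3 box kernel stands in for an unknown motion-blur kernel.
box = [[1 / 9] * 3 for _ in range(3)]
sharp = [[0, 0, 0, 0],
         [0, 9, 9, 0],
         [0, 9, 9, 0],
         [0, 0, 0, 0]]
blurred = convolve2d(sharp, box)  # blurred[1][1] -> 4.0
```

Deconvolution is ill-posed because many sharp images map to similar blurred ones, which is why classical methods add priors (e.g., hyper-Laplacian gradient statistics) and why later deep learning stages replace explicit inversion with learned restoration.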
