FACIAL INPAINTING IN UNALIGNED FACE IMAGES USING GENERATIVE ADVERSARIAL NETWORK WITH FEATURE RECONSTRUCTION LOSS
DOI: https://doi.org/10.12962/j24068535.v18i2.a1004

Abstract
Facial inpainting, or face restoration, is the process of reconstructing missing regions of a face image so that the result still reads as a realistic, unmodified image; ideally, an observer cannot tell whether the result is generated or original. Previous work has performed inpainting with generative networks such as the Generative Adversarial Network (GAN). However, problems arise when these algorithms are applied to unaligned faces: the inpainted result shows spatial inconsistency between the reconstructed region and its adjacent pixels, and the algorithms fail to reconstruct some areas of the face. Therefore, an improved deep-learning-based facial inpainting method is proposed to reduce these problems, using a GAN with an additional feature reconstruction loss and two discriminators. The feature reconstruction loss is computed with the pretrained VGG-Net. Evaluation shows that the additional feature reconstruction loss and the two types of discriminators help increase the visual quality of the inpainting results, yielding higher PSNR and SSIM than previous results.
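The two ingredients the abstract names, a feature reconstruction loss taken from a pretrained VGG network and adversarial terms from two discriminators, can be sketched as follows. This is a minimal PyTorch illustration, not the paper's actual code: the choice of VGG-19, the truncation layer, the loss weights, and the WGAN-style adversarial term are all assumptions, and the module names (generator output `completed`, `global_disc`, `local_disc`) are hypothetical.

    # Minimal sketch (PyTorch assumed): feature reconstruction loss from a
    # frozen pretrained VGG, combined with adversarial losses from two
    # discriminators (e.g., a global whole-face discriminator and a local
    # discriminator on the reconstructed region).
    import torch
    import torch.nn as nn
    from torchvision import models

    class VGGFeatureExtractor(nn.Module):
        """Frozen VGG-19 truncated at an intermediate layer (here relu3_3)."""
        def __init__(self, layer_index=16):
            super().__init__()
            vgg = models.vgg19(weights=models.VGG19_Weights.IMAGENET1K_V1)
            self.features = nn.Sequential(*list(vgg.features.children())[:layer_index])
            for p in self.features.parameters():
                p.requires_grad = False  # the pretrained network is never updated

        def forward(self, x):
            return self.features(x)

    def feature_reconstruction_loss(extractor, completed, original):
        """L2 distance between VGG feature maps of inpainted and real images."""
        return nn.functional.mse_loss(extractor(completed), extractor(original))

    def generator_loss(extractor, global_disc, local_disc,
                       completed, original, mask,
                       w_rec=1.0, w_feat=0.1, w_adv=0.01):
        # Pixel-wise reconstruction over the whole image.
        l_rec = nn.functional.l1_loss(completed, original)
        # Feature reconstruction loss from the pretrained VGG network.
        l_feat = feature_reconstruction_loss(extractor, completed, original)
        # Adversarial terms: fool both discriminators (WGAN-style critics
        # assumed here; the local one sees only the masked region).
        local_patch = completed * mask
        l_adv = -(global_disc(completed).mean() + local_disc(local_patch).mean())
        return w_rec * l_rec + w_feat * l_feat + w_adv * l_adv

Comparing images in VGG feature space penalizes perceptual differences that a pixel-wise loss alone misses, which is the rationale the abstract gives for the improved visual quality on unaligned faces; the relative weighting of the three terms above is illustrative only.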