Abstract
Semantic image inpainting addresses the task of completing large, high-level missing regions based on the surrounding uncorrupted image. Classical inpainting methods can only handle low- or mid-level missing regions because they lack a learned representation of image content. In this paper, we present a method for semantic image inpainting based on a generative model that learns a representation of an image database. After training the generative model with DCGAN, we propose a completion model built on generative adversarial networks that combines a perceptual loss and a contextual loss. We qualitatively and quantitatively explore how missing regions of different types and sizes affect inpainting quality. In extensive experiments, our method successfully completes large missing regions and produces realistic results. We find that the model performs well when the corrupted area covers less than 50% of the image, for both center and random masks.
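The completion objective described above can be sketched as the sum of a contextual loss (agreement with the known pixels of the corrupted image) and a perceptual loss (realism as judged by the trained discriminator). The following is a minimal NumPy sketch of that combined objective; the function names, the L1 form of the contextual term, and the weighting parameter `lam` are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def contextual_loss(gen_img, corrupted_img, mask):
    # L1 distance between the generated image and the corrupted image,
    # measured only on the known (unmasked, mask == 1) pixels.
    return np.sum(np.abs(mask * (gen_img - corrupted_img)))

def perceptual_loss(d_score):
    # Standard GAN realism term: log(1 - D(G(z))) is small when the
    # discriminator score D(G(z)) is close to 1 (i.e. looks real).
    return np.log(1.0 - d_score)

def total_loss(gen_img, corrupted_img, mask, d_score, lam=0.1):
    # Combined objective minimized over the latent vector z;
    # lam (assumed) balances fidelity against realism.
    return (contextual_loss(gen_img, corrupted_img, mask)
            + lam * perceptual_loss(d_score))

# Toy example: a 2x2 "image" with a diagonal mask of known pixels.
gen = np.array([[1.0, 2.0], [3.0, 4.0]])
corrupted = np.zeros((2, 2))
mask = np.array([[1.0, 0.0], [0.0, 1.0]])
loss = total_loss(gen, corrupted, mask, d_score=0.5)
```

In practice this objective would be minimized over the latent vector z by backpropagation through the trained generator, and the known pixels blended with the generated ones (e.g. via Poisson blending) to produce the final completion.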
Acknowledgements
This work is supported by the Artificial Intelligence Laboratory of the School of Automation, Chongqing University. We would like to thank Han Zhou for helpful guidance in training the model, and Wenhui Li for constant, meticulous support throughout the research.
Copyright information
© 2019 Springer Nature Singapore Pte Ltd.
Cite this paper
Wang, Z., Yin, H. (2019). A Method of Semantic Image Inpainting with Generative Adversarial Networks. In: Jia, Y., Du, J., Zhang, W. (eds) Proceedings of 2018 Chinese Intelligent Systems Conference. Lecture Notes in Electrical Engineering, vol 529. Springer, Singapore. https://doi.org/10.1007/978-981-13-2291-4_7
Publisher Name: Springer, Singapore
Print ISBN: 978-981-13-2290-7
Online ISBN: 978-981-13-2291-4
eBook Packages: Intelligent Technologies and Robotics