
A Method of Semantic Image Inpainting with Generative Adversarial Networks

  • Conference paper
Proceedings of 2018 Chinese Intelligent Systems Conference

Part of the book series: Lecture Notes in Electrical Engineering (LNEE, volume 529)

Abstract

Semantic image inpainting is the task of completing high-level missing regions of an image based on its uncorrupted parts. Classical inpainting methods can only handle low- or mid-level missing regions because they lack a semantic representation of the image. In this paper, we present a method of semantic image inpainting based on a generative model that learns a representation of the image database. After training a generative model with DCGAN, we propose a completion architecture that combines a perceptual loss and a contextual loss within the generative adversarial framework. We qualitatively and quantitatively explore how missing regions of different types and sizes affect inpainting quality. Extensive experiments show that our method successfully completes large missing regions and produces realistic results. We find that the model generally performs well when the corrupted area covers less than 50% of the image, for both center and random masks.
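The completion objective described in the abstract, combining a contextual loss on the known pixels with a perceptual (adversarial) loss, can be sketched as below. This is a minimal illustrative sketch, not the authors' implementation: `gen_img` stands for the DCGAN generator's output G(z), `d_score` for the discriminator's realism score D(G(z)) in [0, 1], the binary `mask` marks uncorrupted pixels with 1, and the weight `lam` is an assumed hyperparameter.

```python
import numpy as np

def contextual_loss(gen_img, corrupt_img, mask):
    """L1 distance between the generated and the corrupted image,
    measured only on the known (mask == 1) pixels."""
    return float(np.sum(mask * np.abs(gen_img - corrupt_img)))

def perceptual_loss(d_score):
    """Adversarial term: low when the discriminator rates the
    completed image as realistic (d_score near 1)."""
    return float(np.log(1.0 - d_score + 1e-8))

def total_loss(gen_img, corrupt_img, mask, d_score, lam=0.1):
    """Objective minimized over the latent code z at inference time."""
    return contextual_loss(gen_img, corrupt_img, mask) + lam * perceptual_loss(d_score)
```

At inference time, one would search the latent space (e.g. by gradient descent on z) for the code minimizing `total_loss`, then paste the generator's output into the missing region of the corrupted image.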



Acknowledgements

This work is supported by the Artificial Intelligence Laboratory of the School of Automation, Chongqing University. We thank Han Zhou for helpful guidance in training the model, and Wenhui Li for constant, meticulous support throughout the research.

Author information

Corresponding author

Correspondence to Hongpeng Yin.


Copyright information

© 2019 Springer Nature Singapore Pte Ltd.

About this paper

Cite this paper

Wang, Z., Yin, H. (2019). A Method of Semantic Image Inpainting with Generative Adversarial Networks. In: Jia, Y., Du, J., Zhang, W. (eds) Proceedings of 2018 Chinese Intelligent Systems Conference. Lecture Notes in Electrical Engineering, vol 529. Springer, Singapore. https://doi.org/10.1007/978-981-13-2291-4_7
