Abstract
Filling dead pixels and removing unwanted objects and shadows are often desired in UAV applications in order to extract natural and man-made feature boundaries. Image inpainting provides a means to reconstruct the image. The basic idea behind inpainting methods is to naturally fill in absent or damaged portions of an image using information from the surrounding area. Applications of this technique include the restoration of imperfect photographs and films, elimination of superimposed text, removal or replacement of unwanted objects, red-eye correction and image coding. This paper reviews various image inpainting methods, such as PDE-based, wavelet-based, structural, exemplar-based and textural inpainting, along with their variations. Image inpainting can also be used indirectly in image compression, where only a fraction of the original image is transmitted and the whole image is reconstructed at the receiving end using a pre-trained neural network. Each of these traditional methods, together with the latest CNN-based techniques, is critically reviewed and compared, and the suitability of these techniques for examining or repairing UAV images is analysed. Some of the existing quality assessment metrics related to image inpainting, such as PSNR, MSE, ASVS and BorSal, are also discussed.
1 Introduction
Unmanned Aerial Vehicles (UAVs) are used across the world for civilian, commercial as well as military applications. UAV images often encounter common problems such as stripe noise and bad pixels. Bad pixels are pixels that are statistically distinct from their neighbouring pixels. Sources of bad pixels include calibration errors, non-response of a detector, offset inequalities and the relative gain of detectors. Bad pixels are of two types, warm and dead. When the measurement of a pixel has no correlation with the actual scene, the pixel is termed a dead pixel; warm pixels are pixels that are brighter or darker than healthy pixels [1]. In UAV images, destriping techniques are used to remove stripe noise and dead-pixel replacement methods to recover from dead pixels. But these techniques do not remove all stripes and lead to significant blurring within the image, so image inpainting can be used for restoration from stripe noise and dead pixels in UAV images.
Image inpainting is a technique for reconstructing absent or impaired regions in an image in such a way that the result is not easily detectable by an observer who does not know the original image. Image inpainting is also known as image retouching. It has many applications, such as eliminating objects in an editing context, restoring images from text overlays, disocclusion in image-based rendering (IBR) of viewpoints different from those captured by the cameras, and loss concealment in the context of damaged image transmission [2]. All inpainting techniques assume that pixels in the known and unknown parts of the image share the same geometrical structures and statistical features (Fig. 1).
1.1 Image Inpainting Problem
The goal of image inpainting is to recover the missing region such that the inpainted area looks natural to the human eye. An image A defined over a pixel domain δ can be represented as

A = {p_k : k ∈ δ}, with k = (i, j),

where k denotes the coordinates of pixel p_k.

In image inpainting, the input image A is assumed to have undergone a degradation, represented by an operator N, which has eliminated samples from A. As a result, the domain δ is divided into two parts, δ = R ∪ V, where R denotes the known part of A and V the unknown part. The degraded image can be written as Ã = N(A). Inpainting techniques estimate the colour components of each pixel p_k located at position k in V (Fig. 2).
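This decomposition can be made concrete with a small sketch (all names and values here are illustrative, not from the paper): a binary mask plays the role of the degradation N, splitting the domain into the known part R and the unknown part V.

```python
import numpy as np

def degrade(A, mask):
    """Apply the degradation N: keep pixels in R (mask == 1),
    drop pixels in V (mask == 0)."""
    return A * mask

A = np.array([[10., 20., 30.],
              [40., 50., 60.],
              [70., 80., 90.]])
mask = np.array([[1., 1., 1.],
                 [1., 0., 1.],
                 [1., 1., 1.]])  # the centre pixel belongs to V

A_degraded = degrade(A, mask)
# Inpainting must estimate the centre value from the known region,
# e.g. as the mean of the eight known neighbours:
estimate = A_degraded[mask == 1].mean()
```

Here the simplest possible estimator (a neighbourhood mean) already recovers the lost value exactly because the hole is smooth; the techniques reviewed below are needed when the hole contains structure or texture.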
2 Image Inpainting Techniques
2.1 Diffusion Based Image Inpainting
In this technique, information from the known area is used to fill the unknown region. It works well when filling non-textured regions and small missing regions, as shown in Fig. 3. The partial differential equation (PDE) method and the variational method are the two methods used by this technique. The algorithm first determines the local image geometry and then uses variational or PDE techniques to represent continuous change in the image and in its structures [2]. For instance, if a pixel lies in a homogeneous area, smoothing can be done in all directions; if the pixel lies on an image contour, smoothing must be applied along the contour direction and not across boundaries. This method is well suited for completing curves and lines and for inpainting small areas. However, its weakness is that it adds a blur effect when filling large textured regions. Table 1 gives a summary of diffusion-based inpainting techniques.
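The diffusion idea can be illustrated with a toy sketch: an explicit isotropic heat-equation solver that repeatedly replaces each unknown pixel by the mean of its four neighbours while keeping known pixels fixed. This is a deliberate simplification, not the anisotropic or variational formulations of [2].

```python
import numpy as np

def diffusion_inpaint(img, mask, n_iter=200):
    """Toy isotropic-diffusion inpainting. mask == 0 marks the hole.
    Each iteration sets every unknown pixel to the 4-neighbour mean,
    so known values diffuse into the hole (with blurring, as noted)."""
    out = img.astype(float).copy()
    unknown = mask == 0
    for _ in range(n_iter):
        # 4-neighbour average via shifted copies; np.roll wraps at the
        # image border, which is fine for holes away from the edges.
        avg = 0.25 * (np.roll(out, 1, 0) + np.roll(out, -1, 0)
                      + np.roll(out, 1, 1) + np.roll(out, -1, 1))
        out[unknown] = avg[unknown]          # known pixels stay fixed
    return out

# A horizontal ramp with a one-pixel hole: diffusion interpolates it.
img = np.tile(np.arange(5.0), (5, 1))        # each row is 0,1,2,3,4
mask = np.ones((5, 5))
mask[2, 2] = 0                               # unknown pixel
res = diffusion_inpaint(img, mask)
```

For this smooth ramp the hole converges to the value 2.0, consistent with the surrounding gradient; on a textured region the same scheme would produce the blur effect described above.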
2.2 Texture Based Image Inpainting
Also known as sample-based texture synthesis, this technique constructs a texture from a given sample (see Fig. 4). The aim is to create a texture such that the composed texture is larger than the source sample but has similar visual characteristics [3]. All sample-based techniques rely on Markov random field (MRF) modelling of texture. In this approach, an entire patch is synthesized by learning from patches in the known part of the image. As whole patches are synthesized at once, this approach is faster than the pixel-based approach [4].
Variants of Texture based image inpainting
Patch Stitching: Filling the unknown part of the image can lead to stitching together pieces of texture that are not consistent in terms of colour or contrast. The aim of patch stitching is to reduce boundary artifacts and colour bleeding. Stitching can be done either by the quilting method (a greedy method) or by a blending method.
Distance Metric: Used to measure the similarity between images or between image patches. Distance metrics are divided into two categories, pixel-based and statistics-based. In the former, similarity is measured in terms of cross-correlation or differences between pixel colour values, such as SSD (sum of squared differences), normalized cross-correlation and the Lp norm, whereas in the latter, similarity is measured between the probability distributions of pixel colour values in patches, for example with the Bhattacharyya distance [5], NMI (normalized mutual information) or the Kullback–Leibler divergence.
PPO (Patch Processing Order): In an image, a missing region is composed of textures and structures. In PPO, patches containing structure are filled first. The patch priority is the product of a data term and a confidence term [6]. The data term can take several forms: gradient-based, sparsity-based, tensor-based, etc.
Global Optimization: The patch-by-patch progress of the greedy method does not ensure global optimality. To improve the visual quality of the inpainted image, one can maximize the similarity between the synthesized patches and the original patches in the known area of the image [7].
Searching for the best matching patch quickly: Exemplar-based inpainting uses k-NN (k-nearest neighbours) search inside the known part of the image [8]. A naive nearest-neighbour (NN) search computes the distance from the query patch to all feasible candidate patches (Fig. 5; Table 2).
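The pixel-based distances listed under Distance Metric above can be sketched in a few lines; `ssd` and `ncc` below are minimal illustrative implementations, not taken from any cited work.

```python
import numpy as np

def ssd(p, q):
    """Sum of squared differences between two equal-sized patches."""
    return float(np.sum((p - q) ** 2))

def ncc(p, q):
    """Normalized cross-correlation: 1.0 means the patches are
    identical up to a brightness/contrast change."""
    p0 = p - p.mean()
    q0 = q - q.mean()
    return float(np.sum(p0 * q0) /
                 (np.linalg.norm(p0) * np.linalg.norm(q0)))

patch = np.array([[1., 2.],
                  [3., 4.]])
shifted = patch + 10.0   # same structure, different brightness
d_ssd = ssd(patch, shifted)
d_ncc = ncc(patch, shifted)
```

Note the contrast between the two: SSD penalises the uniform brightness shift heavily, while NCC still reports a perfect match, which is why the choice of metric affects which candidate patch wins.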
2.3 Exemplar Based Inpainting
This technique is appropriate for reconstructing large target regions. It fills holes in the image by repeatedly finding the most similar patch in the known area and copying its pixels into the hole. The technique first assigns priorities to patches on the hole boundary and then selects the best-matching patch (Fig. 6; Table 3).
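A simplified sketch of the priority computation follows the confidence/data-term product of [6]; the data term (isophote strength) is stubbed to a constant here as an assumption, so only the confidence term is actually computed.

```python
import numpy as np

def confidence_term(conf, p, half=1):
    """Mean confidence in the (2*half+1)^2 window around pixel p.
    conf holds 1 for known pixels and 0 for hole pixels."""
    i, j = p
    win = conf[i - half:i + half + 1, j - half:j + half + 1]
    return win.sum() / win.size

def priority(conf, p, data_term=1.0):
    """Criminisi-style priority P(p) = C(p) * D(p); D(p) is stubbed."""
    return confidence_term(conf, p) * data_term

conf = np.array([[1., 1., 1., 1.],
                 [1., 1., 0., 0.],
                 [1., 0., 0., 0.],
                 [1., 0., 0., 0.]])   # 1 = known, 0 = hole

p_edge = priority(conf, (1, 1))   # mostly known neighbourhood
p_deep = priority(conf, (2, 2))   # mostly unknown neighbourhood
```

The boundary pixel with the more reliable (known) neighbourhood gets the higher priority, which is exactly what makes structure propagate inward from the known region before texture is filled.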
2.4 Hybrid Based Inpainting
Natural images comprise both structure and texture. Areas with a homogeneous arrangement are considered texture, while structures constitute the primal outlines of an image (such as corners and edges). To deal with such images, two main strategies have been considered. The first combines different techniques in one energy function using a variational formulation [9, 10]. The second separates the texture and structure and then inpaints them separately using a suitable technique (i.e., diffusion-based or exemplar-based) [5, 11] (Figs. 7, 8; Table 4).
2.5 CNN based inpainting
A convolutional neural network (CNN) detects and classifies objects in real time while being less expensive and performing better than other machine learning methods. Problems in UAV images can be rectified using CNN-based inpainting. Using a proper kernel, this technique inpaints the image by convolving the neighbourhood of the target pixels. In [12], the values of a, b and c for the convolution kernel are 0.0732, 0.1767 and 0.125 respectively. The central weight of the kernel is zero because its corresponding pixel in the original image is unknown, see Fig. 9 (Fig. 10; Table 5).
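The kernel-convolution step can be sketched as follows. The 3x3 layout (a on the diagonals, b on the cross, zero at the centre) is an assumption consistent with the weights quoted from [12]; the all-c (c = 0.125) kernel variant is omitted.

```python
import numpy as np

# Weights quoted in the text from [12]; layout assumed, not prescribed.
A_W, B_W = 0.0732, 0.1767
KERNEL = np.array([[A_W, B_W, A_W],
                   [B_W, 0.0, B_W],
                   [A_W, B_W, A_W]])

def conv_inpaint(img, mask, n_iter=50):
    """Repeatedly replace each unknown pixel (mask == 0) by the
    kernel-weighted sum of its 3x3 neighbourhood. Holes must sit
    at least one pixel away from the image border in this sketch."""
    out = img.astype(float).copy()
    unknown = mask == 0
    for _ in range(n_iter):
        for i, j in zip(*np.nonzero(unknown)):
            out[i, j] = np.sum(KERNEL * out[i - 1:i + 2, j - 1:j + 2])
    return out

# A constant patch with a single dead pixel at the centre.
img = np.full((3, 3), 5.0)
mask = np.ones((3, 3))
mask[1, 1] = 0
res = conv_inpaint(img, mask)
```

Since the eight weights sum to 0.9996 (just under 1) the filled value lands at 5 x 0.9996 = 4.998, slightly darker than the surround; in practice the kernel is applied iteratively over larger holes so values propagate inward from the boundary.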
3 Quality Assessment Measures for Inpainted Image
The aim of an image inpainting application is to reconstruct the original image such that the changes introduced inside, outside or around the inpainted area are not detectable or distinguishable. The most accurate and reliable methods are subjective assessment methods [13, 14], but these are laborious, time-consuming and require a large number of viewers. Traditional metrics like MSE and PSNR were used earlier to rate the quality of inpainted images, but they are not well correlated with perceptual quality [15]. To estimate the performance of the various image inpainting approaches, the metric of choice should support a qualitative analysis. Hence, the quality assessment measures for inpainted images can be divided into three categories, named saliency-based, structure-based and machine-learning-based (see Fig. 11).
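For reference, the traditional full-reference metrics MSE and PSNR mentioned above are straightforward to compute; this sketch uses the standard definitions with an 8-bit peak value of 255.

```python
import numpy as np

def mse(ref, degraded):
    """Mean squared error between reference and test images."""
    return float(np.mean((ref.astype(float) - degraded.astype(float)) ** 2))

def psnr(ref, degraded, peak=255.0):
    """Peak signal-to-noise ratio in dB; infinite for identical images."""
    m = mse(ref, degraded)
    if m == 0:
        return float('inf')
    return 10.0 * np.log10(peak ** 2 / m)

ref = np.zeros((2, 2))
degraded = np.full((2, 2), 4.0)   # constant error of 4 grey levels
err = mse(ref, degraded)
```

As noted in the text, a low MSE (high PSNR) does not guarantee that an inpainted region is perceptually unnoticeable, which motivates the saliency-based measures below.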
3.1 Structure Based
Being full-reference based, this type of metric requires information from both the original image and the inpainted image to determine the quality of the inpainted image. Parameter Weight Image Inpainting Quality (PWIIQ) [16] is one structure-based metric; it uses luminance and gradient similarity to determine the quality of the inpainted image.
3.2 Saliency Based
The saliency of an image highlights the areas to which human vision is most responsive or attracted. Hence, saliency can be used to estimate the visibility of the various artifacts introduced by inpainting techniques. In [17], inpainted image artifacts are categorized as in-region and out-region artifacts. In-region artifacts occur when different colours and structures are introduced in the target region only; this increases saliency inside the inpainted area and thus disturbs the attention flow within it. Out-region artifacts appear when local colours and structures are not extended into the target area by the inpainting technique; this increases saliency in the neighbourhood of the inpainted region. Some quality assessment metrics that use the concept of saliency are:
Average Square Visual Salience (ASVS): Being no-reference based, this metric does not require any information about the original image. It relates to in-region artifacts, as it only considers the inpainted pixels relative to the overall scene. As the value of this metric increases, the perceptual quality of the image decreases.
Degree of Noticeability (DN): Considering both in-region and out-region artifacts, [18] proposed a metric named DN. This metric identifies non-noticeable target regions and exposes any alteration of attention flow in the surroundings of the inpainted region. As the value of DN increases, perceptual quality decreases.
Gaze Density (GD): GD also considers both in-region and out-region artifacts of the inpainted image. To compensate for deviations in texture and size, the GD of the inpainted image is divided by the GD of the original image.
Border Saliency based measure (BorSal): According to [19], saliency changes in the inpainted image can be observed by mapping the saliency of neighbouring pixels. This metric uses border pixels to calculate the normalized GD; the border band can extend three pixels inside and three pixels outside the target region. An enhanced version of this metric is the Structural Border Saliency based measure (StructBorSal).
Visual Coherence Metric (VisCoM): This metric considers the correlation between the inpainted pixels and the pixels outside the target region (Table 6).
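As one concrete example from the list above, ASVS can be sketched as the mean squared saliency over the inpainted pixels only; this is an illustrative formulation (the exact definition in the cited works may differ), and the saliency model itself is external, here simply any map normalised to [0, 1].

```python
import numpy as np

def asvs(saliency, mask):
    """Average Square Visual Salience over the target region.
    mask == 0 marks the inpainted pixels; higher ASVS means the
    hole attracts more attention, i.e. worse perceptual quality."""
    target = saliency[mask == 0]
    return float(np.mean(target ** 2))

saliency = np.array([[0.1, 0.2],
                     [0.5, 0.8]])   # hypothetical saliency map
mask = np.array([[1, 1],
                 [0, 0]])           # bottom row was inpainted
score = asvs(saliency, mask)
```

Being no-reference, the function needs only the result image's saliency map and the inpainting mask, which is what makes this family of measures practical when the original UAV frame is unavailable.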
4 Conclusion
This paper examines various inpainting methods with a special focus on UAV images. The inpainting techniques are critically reviewed, and gaps are indicated in the tables along with features, limitations and suitability. Most of the methods, such as texture-synthesis and PDE-based inpainting, work well only when the area to be inpainted is small; they cannot fill large missing areas and cannot recover curved structures. The modified Oliveira algorithm removes large unwanted objects from UAV images without blur. The bilateral-filter-based approach preserves edges and eliminates noise from UAV images. The 8-neighbourhood fast sweeping algorithm gives better results than Bertalmio's algorithm. Inpainting single and multiple regions in UAV images can be done using the spatial contextual correlation algorithm. The Poisson-equation-based approach gives good visual effects for large inpainting areas. Using colour distribution analysis, consistency of texture and continuity at edges can be obtained for better visual quality. Edges in UAV images can be enhanced using the extended wavelet transform. The non-linear diffusion tensor method repairs corrupted zones while preserving discontinuities in UAV images. In future, 3D image inpainting can be done using CNN algorithms, and CNN-based inpainting techniques can be applied to UAV videos.
References
Ratliff BM, Tyo JS, Boger JK, Black WT, Bowers DL, Fetrow MP (2007) Dead pixel replacement in lwir microgrid polarimeters. Opt Express 15(12):7596–7609
Guillemot C, Le Meur O (2014) Image inpainting: overview and recent advances. IEEE Signal Process Mag 31(1):127–144
Efros AA, Leung TK (1999) Texture synthesis by non-parametric sampling. In: Proceedings of the 7th IEEE international conference on computer vision, vol 2. IEEE, pp 1033–1038
Wei LY, Levoy M (2000) Fast texture synthesis using tree-structured vector quantization. In: Proceedings of the 27th annual conference on computer graphics and interactive techniques. ACM Press/Addison-Wesley Publishing Co, pp 479–488
Bugeau A, Bertalmío M, Caselles V, Sapiro G (2010) A comprehensive framework for image inpainting. IEEE Trans Image Process 19(10):2634–2645
Criminisi A, Pérez P, Toyama K (2004) Region filling and object removal by exemplar based image inpainting. IEEE Trans Image Process 13(9):1200–1212
Drori I, Cohen-Or D, Yeshurun H (2003) Fragment-based image completion. In: ACM Transactions on graphics (TOG), vol 22. ACM, pp 303–312
Bentley JL (1975) Multidimensional binary search trees used for associative searching. Commun ACM 18(9):509–517
Bertalmio M, Vese L, Sapiro G, Osher S (2003) Simultaneous structure and texture image inpainting. IEEE Trans Image Process 12(8):882–889
Starck JL, Elad M, Donoho DL (2005) Image decomposition via the combination of sparse representations and a variational approach. IEEE Trans Image Process 14(10):1570–1582
Komodakis N (2006) Image completion using global optimization. In: 2006 IEEE computer society conference on computer vision and pattern recognition (CVPR’06), vol 1. IEEE, pp 442–452
Sun J, Yuan L, Jia J, Shum HY (2005) Image completion with structure propagation. In: ACM transactions on graphics (ToG), vol 24. ACM, pp 861–868
Fadili JM, Starck JL, Elad M, Donoho DL (2009) MCALab: reproducible research in signal and image decomposition and inpainting. Comput Sci Eng 1:44–63
Xu Z, Sun J (2010) Image inpainting by patch propagation using patch sparsity. IEEE Trans Image Process 19(5):1153–1165
Ardis PA, Brown CM, Singhal A (2010) Inpainting quality assessment. J Electron Imaging 19(1):011002
Gupta K, Kazi S, Kong T (2016) Deeppaint: a tool for image inpainting. Google Scholar
Oncu AI, Deger F, Hardeberg JY (2012) Evaluation of digital inpainting quality in the context of artwork restoration. In: European conference on computer vision. Springer, pp 561–570
Venkatesh MV, Sen-ching SC (2010) Eye tracking based perceptual image inpainting quality analysis. In: 2010 IEEE international conference on image processing. IEEE, pp 1109–1112
Schmidt U, Gao Q, Roth S (2010) A generative perspective on mrfs in low-level vision. In: 2010 IEEE computer society conference on computer vision and pattern recognition. IEEE, pp 1751–1758
Liu J, Musialski P, Wonka P, Ye J (2013) Tensor completion for estimating missing values in visual data. IEEE Trans Pattern Anal Mach Intell 35(1):208–220
Oliveira MM, Bowen B, McKenna R, Chang YS (2001) Fast digital image inpainting. In: Proceedings of the international conference on visualization, imaging and image processing (VIIP 2001), Marbella, Spain, pp 106–107
Bertalmio M, Sapiro G, Caselles V, Ballester C (2000) Image inpainting. In: Proceedings of the 27th annual conference on computer graphics and interactive techniques. ACM Press/Addison-Wesley Publishing Co, pp 417–424
Telea A (2004) An image inpainting technique based on the fast marching method. J Graph Tools 9(1):23–34
Tschumperlé D (2006) Fast anisotropic smoothing of multi-valued images using curvature-preserving pde’s. Int J Comput Vis 68(1):65–82
Rudin LI, Osher S, Fatemi E (1992) Nonlinear total variation based noise removal algorithms. Physica D 60(1–4):259–268
Chan TF, Shen J (2001) Nontexture inpainting by curvature-driven diffusions. J Vis Commun Image Represent 12(4):436–449
Shen J, Kang SH, Chan TF (2003) Euler’s elastica and curvature-based inpainting. SIAM J Appl Math 63(2):564–592
Ashikhmin M (2001) Synthesizing natural textures. In: Proceedings of the 2001 symposium on interactive 3D graphics, Citeseer, pp 217–226
Liang L, Liu C, Xu YQ, Guo B, Shum HY (2001) Real-time texture synthesis by patch-based sampling. ACM Trans Graph (ToG) 20(3):127–150
Barnes C, Shechtman E, Goldman DB, Finkelstein A (2010) The generalized patchmatch correspondence algorithm. In: European conference on computer vision. Springer, pp 29–43
Efros AA, Freeman WT (2001) Image quilting for texture synthesis and transfer. In: Proceedings of the 28th annual conference on computer graphics and interactive techniques. ACM, pp 341–346
Barnes C, Shechtman E, Goldman DB, Finkelstein A (2010) Supplementary material for the generalized patchmatch correspondence algorithm
Bertalmio M, Bertozzi AL, Sapiro G (2001) Navier-stokes, fluid dynamics, and image and video inpainting. In: Proceedings of the 2001 IEEE computer society conference on computer vision and pattern recognition. CVPR 2001, vol 1. IEEE, pp I–I
Elad M, Starck JL, Querre P, Donoho DL (2005) Simultaneous cartoon and texture image inpainting using morphological component analysis (mca). Appl Comput Harmon Anal 19(3):340–358
Aujol JF, Ladjal S, Masnou S (2010) Exemplar-based inpainting from a variational point of view. SIAM J Math Anal 42(3):1246–1285
Cheng Q, Shen H, Zhang L, Li P (2014) Inpainting for remotely sensed images with a multichannel nonlocal total variation model. IEEE Trans Geosci Remote Sens 52(1):175–187
Nalawade VV, Ruikar SD Image inpainting using wavelet transform. Int J Adv Eng Technol E-ISSN, 0976–3945
Shen H, Zhang L (2009) A map-based algorithm for destriping and inpainting of remotely sensed images. IEEE Trans Geosci Remote Sens 47(5):1492–1502
Cai N, Su Z, Lin Z, Wang H, Yang Z, Ling BWK (2017) Blind inpainting using the fully convolutional neural network. Vis Comput 33(2):249–261
Xie J, Xu L, Chen E (2012) Image denoising and inpainting with deep neural networks. In: Advances in neural information processing systems. pp 341–349
Hays J, Efros AA (2008) Scene completion using millions of photographs. Commun ACM 51(10):87–94
Dang TT, Beghdadi A, Larabi MC (2013) Visual coherence metric for evaluation of color image restoration. In: 2013 colour and visual computing symposium (CVCS). IEEE, pp 1–6
Ardis PA, Singhal A (2009) Visual salience metrics for image inpainting. In: Visual communications and image processing 2009, vol 7257. W. International Society for Optics and Photonics, p 72571
© 2020 Springer Nature Switzerland AG
Kadian, G., Khadanga, G. (2020). Review of Inpainting Techniques for UAV Images. In: Jain, K., Khoshelham, K., Zhu, X., Tiwari, A. (eds) Proceedings of UASG 2019. UASG 2019. Lecture Notes in Civil Engineering, vol 51. Springer, Cham. https://doi.org/10.1007/978-3-030-37393-1_16