
1 Introduction

Unmanned Aerial Vehicles (UAVs) are used across the world for civilian, commercial and military applications. UAV images often suffer from common problems such as stripe noise and bad pixels. Bad pixels are pixels that are statistically distinct from their neighboring pixels; their sources include calibration errors, non-responding detectors, and offset and relative-gain inequalities between detectors. Bad pixels are of two types, warm and dead. A pixel whose measurement has no correlation with the actual scene is termed a dead pixel, whereas warm pixels are brighter or darker than healthy pixels [1]. In UAV images, destriping techniques are used to remove stripe noise and dead-pixel replacement methods to recover from dead pixels, but these techniques do not remove all stripes and introduce significant blurring into the image. Image inpainting can therefore be used to restore UAV images affected by stripe noise and dead pixels.

Image inpainting is a technique for reconstructing absent or impaired regions of an image in such a way that the result is not easily detectable by an observer who does not know the original image. Image inpainting is also known as image retouching. It has many applications, such as object removal in image editing, restoring images from text overlays, disocclusion in image-based rendering (IBR) of viewpoints different from those captured by the cameras, and loss concealment in the context of damaged image transmission [2]. All inpainting techniques assume that pixels in the known and unknown parts of the image share the same geometrical structures and statistical features (Fig. 1).

Fig. 1

Comparison of dead-line removal from a CBERS (China-Brazil Earth Resource Satellite) image: a image with an 8-pixel dead line, b inpainting using a PDE-based technique, c inpainting using an exemplar-based technique

1.1 Image Inpainting Problem

The goal of image inpainting is to recover the missing region such that the inpainted area looks natural to the human eye. An image A can be represented as:

$$\begin{aligned} A &: \delta \subset Q^{n} \to Q^{m} \\ k &\mapsto A(k) \end{aligned}$$
(1)

Here k represents the coordinates of a pixel pi, i.e., k = (i, j).

In image inpainting, the input image A is assumed to have undergone a deterioration, represented by an operator N, which has eliminated samples from A. As a result, the domain δ is divided into two parts, δ = R ∪ V, where R denotes the known part of A and V the unknown part. The degraded image can thus be written as N(A). By applying inpainting techniques, the color components of a pixel pi located at position i in V are estimated (Fig. 2).
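To make the notation concrete, the following minimal sketch (a hypothetical example, not taken from the reviewed papers) encodes the unknown region V as a boolean mask over a single image band; the array name and the particular stripe and dead-pixel positions are illustrative only.

```python
import numpy as np

# The domain of image A is split into a known part R and an unknown part V,
# encoded here as a boolean mask over a single band.
A = np.random.rand(64, 64)          # stand-in for a UAV image band
V = np.zeros_like(A, dtype=bool)    # unknown region (to be inpainted)
V[:, 30] = True                     # e.g. a one-pixel-wide dead column (stripe)
V[10, 12] = True                    # e.g. an isolated dead pixel
R = ~V                              # known region

degraded = A.copy()
degraded[V] = 0.0                   # samples removed by the degradation N
# An inpainting algorithm estimates degraded[V] from the known samples degraded[R].
```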

Fig. 2

Classification of image inpainting techniques

2 Image Inpainting Techniques

2.1 Diffusion Based Image Inpainting

In this technique, information from the known region is propagated to fill the unknown region. It works well when filling non-textured or small missing regions, as shown in Fig. 3. Partial differential equation (PDE) methods and variational methods are the two formulations used by this technique. The algorithm first determines the local image geometry and then uses a variational or PDE formulation to propagate image content continuously along its structures [2]. For instance, if a pixel lies in a homogeneous area, smoothing can be performed in all directions, whereas if the pixel lies on an image contour, smoothing must be performed along the contour direction and not across boundaries. This method is well suited for completing curves and lines and for inpainting small areas, but its weakness is that it adds a blur effect when filling large textured regions. Table 1 gives a summary of diffusion-based inpainting techniques.
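As an illustration of the diffusion idea, the sketch below propagates known intensities into the hole with a simple heat-equation (isotropic diffusion) iteration; it is a minimal stand-in for the more sophisticated variational and PDE formulations reviewed here, and the function name, step size and iteration count are arbitrary choices.

```python
import numpy as np

def diffusion_inpaint(image, mask, n_iter=500, dt=0.2):
    """Isotropic (heat-equation) diffusion inpainting sketch.

    image : 2-D float array (grayscale); mask : boolean array, True where
    pixels are missing. Known pixels are re-imposed after every step so that
    information only flows from the known region into the hole."""
    u = image.astype(float).copy()
    u[mask] = 0.0                          # arbitrary initialisation of the hole
    for _ in range(n_iter):
        p = np.pad(u, 1, mode="edge")      # replicate borders
        lap = (p[:-2, 1:-1] + p[2:, 1:-1] +
               p[1:-1, :-2] + p[1:-1, 2:] - 4.0 * u)   # 5-point Laplacian
        u = u + dt * lap                   # one explicit diffusion step
        u[~mask] = image[~mask]            # keep the known pixels fixed
    return u
```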

Fig. 3

Block diagram of diffusion-PDE based image inpainting technique

Table 1 A summary of papers based on diffusion based inpainting technique

2.2 Texture Based Image Inpainting

Also known as sample-based texture synthesis, this technique constructs a texture from a given sample (see Fig. 4). The aim is to create a texture that is larger than the source sample yet has similar visual characteristics [3]. All sample-based techniques rely on Markov random field (MRF) modeling of texture. In this technique, an entire patch is synthesized by learning from patches in the known part of the image. Because whole patches are synthesized at once, this approach is faster than pixel-based approaches [4].
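The sketch below illustrates the core operation of patch-based synthesis: the unknown pixels of a target patch are filled by copying from the most similar fully known patch, compared on the known pixels with the sum of squared differences (SSD). It is a hedged, brute-force illustration; the patch size and the exhaustive search are simplifications.

```python
import numpy as np

def fill_patch_by_example(image, mask, ty, tx, p=9):
    """Fill the unknown pixels of the p x p patch centred at (ty, tx) by
    copying from the most similar fully known patch (SSD on known pixels).
    image : 2-D float array; mask : True on unknown pixels (updated in place)."""
    h = p // 2
    target = image[ty - h:ty + h + 1, tx - h:tx + h + 1]
    known = ~mask[ty - h:ty + h + 1, tx - h:tx + h + 1]
    H, W = image.shape
    best, best_ssd = None, np.inf
    for y in range(h, H - h):
        for x in range(h, W - h):
            if mask[y - h:y + h + 1, x - h:x + h + 1].any():
                continue                                  # candidate must be fully known
            cand = image[y - h:y + h + 1, x - h:x + h + 1]
            ssd = np.sum((cand[known] - target[known]) ** 2)
            if ssd < best_ssd:
                best_ssd, best = ssd, cand
    if best is not None:
        target[~known] = best[~known]                     # copy only the missing pixels
        mask[ty - h:ty + h + 1, tx - h:tx + h + 1] = False
```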

Fig. 4

Block diagram of texture based image inpainting technique

  • Variants of Texture based image inpainting

Patch Stitching: Filling the unknown region patch by patch stitches together pieces of texture that may not be consistent in color or contrast. The aim of patch stitching is to reduce boundary artifacts and color bleeding. Stitching can be done either with the quilting (greedy) method or with a blending method.

Distance Metric: Used to measure the similarity between images or between image patches. Distance metrics fall into two categories, pixel based and statistics based. In the former, similarity is measured in terms of cross-correlation or differences between pixel color values, e.g., the sum of squared differences (SSD), normalized cross-correlation and Lp norms, whereas in the latter, similarity is measured between the probability distributions of pixel color values in the patches, e.g., the Bhattacharyya distance [5], normalized mutual information (NMI) and the Kullback-Leibler divergence (a sketch of both families is given after this list).

Patch Processing Order (PPO): A missing region in an image is composed of textures and structures; under a PPO, patches lying on structures are filled first. The priority of a patch is the product of a data term and a confidence term [6]. The data term can take several forms, e.g., gradient based, sparsity based or tensor based.

Global Optimization: Greedy, patch-by-patch progress does not guarantee a globally optimal result. To improve the visual quality of the inpainted image, one can instead maximize the similarity between the synthesized patches and the original patches in the known area of the image [7].

Fast search for the best matching patch: Exemplar-based inpainting approaches search for the k-nearest neighbors (k-NN) of the query patch inside the known part of the image [8]. An exhaustive nearest-neighbor (NN) search computes the distance from the query patch to all feasible candidate patches (Fig. 5; Table 2).
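The following sketch contrasts the two families of patch distance metrics mentioned above: pixel based (SSD, normalized cross-correlation) and statistics based (Bhattacharyya distance between intensity histograms). The histogram bin count and the small stabilizing constants are arbitrary illustrative choices.

```python
import numpy as np

def ssd(a, b):
    """Pixel-based: sum of squared differences between two same-sized patches."""
    return np.sum((a - b) ** 2)

def ncc(a, b):
    """Pixel-based: normalized cross-correlation (higher means more similar)."""
    a0, b0 = a - a.mean(), b - b.mean()
    return np.sum(a0 * b0) / (np.sqrt(np.sum(a0 ** 2) * np.sum(b0 ** 2)) + 1e-12)

def bhattacharyya(a, b, bins=32):
    """Statistics-based: Bhattacharyya distance between patch intensity histograms.
    Patches are assumed to hold float values in [0, 1]."""
    pa, _ = np.histogram(a, bins=bins, range=(0.0, 1.0))
    pb, _ = np.histogram(b, bins=bins, range=(0.0, 1.0))
    pa = pa / (pa.sum() + 1e-12)
    pb = pb / (pb.sum() + 1e-12)
    bc = np.sum(np.sqrt(pa * pb))        # Bhattacharyya coefficient
    return -np.log(bc + 1e-12)
```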

Fig. 5

Comparison of cloud removal on GaoFen-2 remote-sensing imagery: a original image, b inpainting using SiLRTC [20], c inpainting using MRF [19]

Table 2 Difference between image inpainting techniques

2.3 Exemplar Based Inpainting

This technique is appropriate for reconstructing large target regions. It fills holes in the image by repeatedly searching the known area for the most similar patch and copying its pixels into the hole. At each iteration, it first assigns priorities to the patches on the fill front and then selects the best matching patch for the highest-priority one (Fig. 6; Table 3).
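The distinctive step of exemplar-based methods is the priority computation on the fill front. The sketch below follows the common confidence-times-data-term form; using the raw gradient magnitude as the data term is a simplification of the isophote/front-normal dot product used in the original formulation, and all names are illustrative.

```python
import numpy as np

def patch_priorities(confidence, mask, grad_y, grad_x, p=9, eps=1e-3):
    """Priority P = C * D for every pixel on the fill front.

    confidence : float array (1 on known pixels, 0 in the hole, updated as
    filling proceeds); mask : True on unknown pixels; grad_y, grad_x : image
    gradients of the known image (e.g. from np.gradient).
    Returns a dict mapping front-pixel coordinates (y, x) to their priority."""
    h = p // 2
    # Fill front: unknown pixels that have at least one known 4-neighbour.
    pad = np.pad(mask, 1, mode="constant", constant_values=True)
    has_known_nb = (~pad[:-2, 1:-1] | ~pad[2:, 1:-1] |
                    ~pad[1:-1, :-2] | ~pad[1:-1, 2:])
    front = mask & has_known_nb

    priorities = {}
    H, W = mask.shape
    for y, x in zip(*np.nonzero(front)):
        if y < h or x < h or y >= H - h or x >= W - h:
            continue                                             # skip border patches
        C = confidence[y - h:y + h + 1, x - h:x + h + 1].mean()  # confidence term
        D = np.hypot(grad_y[y, x], grad_x[y, x]) + eps           # simplified data term
        priorities[(y, x)] = C * D
    return priorities
```

The patch centred at the highest-priority front pixel would then be filled, for instance with the fill_patch_by_example sketch from Sect. 2.2, and the confidence map updated before the next iteration.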

Fig. 6

Block diagram of exemplar based image inpainting technique

Table 3 A summary of papers based on texture based and exemplar based inpainting technique

2.4 Hybrid Based Inpainting

Natural images comprise both structure and texture. Areas with a homogeneous or repetitive arrangement are considered texture, while structures constitute the primal outline of an image (such as corners and edges). To deal with such images, two main strategies have been considered. The first combines different techniques in a single energy function using a variational formulation [9], [10]. The second separates the texture and structure components and then inpaints them separately using a suitable technique (i.e., diffusion based or exemplar based) [5], [11] (Figs. 7, 8; Table 4).
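A hedged sketch of the second strategy follows. A Gaussian filter is used as a crude stand-in for a proper structure-texture (cartoon/texture, e.g. TV-based) decomposition, and the two per-layer inpainting routines are passed in by the caller (for example, the diffusion and patch-based sketches given earlier); none of these choices come from the reviewed papers.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def hybrid_inpaint(image, mask, inpaint_structure, inpaint_texture, sigma=3.0):
    """Split the image into a smooth structure layer and a residual texture
    layer, inpaint each with a caller-supplied routine, and recombine.

    inpaint_structure, inpaint_texture : callables taking (layer, mask) and
    returning the filled layer (e.g. a diffusion-based and a patch-based method)."""
    structure = gaussian_filter(image.astype(float), sigma)  # crude cartoon layer
    texture = image - structure                              # oscillatory residual
    return inpaint_structure(structure, mask) + inpaint_texture(texture, mask)
```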

Fig. 7

Block diagram of hybrid based image inpainting technique

Fig. 8

Generalized skeleton of the CNN [12]

Table 4 A summary of papers based on hybrid based inpainting technique

2.5 CNN Based Inpainting

A convolutional neural network (CNN) detects and classifies objects in real time while being less expensive and performing better than other machine-learning methods. The problems in UAV images can be rectified using CNN-based inpainting. Using a proper kernel, this technique inpaints the image by convolving the neighbourhood of the target pixels. In [12], the values of a, b and c for the convolution kernel are 0.0732, 0.1767 and 0.125, respectively. The central weight of the kernel is zero because its corresponding pixel in the original image is unknown, see Fig. 9 (Fig. 10; Table 5).
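The sketch below illustrates this kernel-convolution style of inpainting: unknown pixels are repeatedly replaced by a weighted average of their neighbours while known pixels stay fixed. The weights a and b follow the values quoted above, but their placement in the 3 x 3 kernel (a at the corners, b at the edges) and the iteration count are assumptions made for illustration.

```python
import numpy as np
from scipy.ndimage import convolve

def convolution_inpaint(image, mask, n_iter=100):
    """Iterative convolution-based inpainting with a zero-centre kernel.

    image : 2-D float array; mask : True on unknown pixels."""
    a, b = 0.0732, 0.1767                  # kernel weights quoted from [12]
    kernel = np.array([[a, b, a],
                       [b, 0.0, b],        # zero centre: the target pixel is unknown
                       [a, b, a]])
    u = image.astype(float).copy()
    u[mask] = 0.0
    for _ in range(n_iter):
        smoothed = convolve(u, kernel, mode="nearest")
        u[mask] = smoothed[mask]           # update only the unknown pixels
    return u
```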

Fig. 9

Convolving kernel used by [21]

Fig. 10

Inpainting image using CNN [19]

Table 5 A summary of papers based on CNN based inpainting technique

3 Quality Assessment Measures for Inpainted Image

The aim of an image inpainting application is to reconstruct the original image such that the changes introduced inside, outside or around the inpainted area are not detectable or distinguishable. Subjective assessment methods are the most accurate and reliable [13], [14], but they are laborious, time-consuming and require a large number of viewers. Traditional metrics such as MSE and PSNR were used earlier to assess the quality of inpainted images, but they too correlate poorly with perceptual quality [15]. To estimate the performance of the various image inpainting approaches, the metric of choice should therefore reflect perceptual quality. Hence, quality assessment measures for inpainted images can be divided into three categories: saliency based, structure based and machine-learning based (see Fig. 11).
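For reference, the traditional full-reference metrics mentioned above can be computed as follows (a minimal sketch assuming grayscale float images in [0, max_val]):

```python
import numpy as np

def mse_psnr(original, inpainted, max_val=1.0):
    """Mean squared error and peak signal-to-noise ratio between two images."""
    mse = np.mean((original.astype(float) - inpainted.astype(float)) ** 2)
    psnr = 10.0 * np.log10(max_val ** 2 / mse) if mse > 0 else float("inf")
    return mse, psnr
```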

Fig. 11

Classification of quality assessment measures for inpainted images

3.1 Structure Based

Being full-reference metrics, structure-based measures require information from both the original image and the inpainted image to determine the quality of the inpainting. Parameter Weight Image Inpainting Quality (PWIIQ) [16] is one such structure-based metric; it uses luminance and gradient similarity to determine the quality of the inpainted image.
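The sketch below computes per-pixel luminance-similarity and gradient-magnitude-similarity maps in the spirit of such structure-based metrics; it is not the exact PWIIQ formulation, and the stabilizing constants are arbitrary.

```python
import numpy as np

def luminance_gradient_similarity(original, inpainted, c1=1e-4, c2=1e-4):
    """Mean luminance similarity and gradient similarity between two grayscale
    float images (a simplified, SSIM-like full-reference comparison)."""
    o, r = original.astype(float), inpainted.astype(float)
    lum_sim = (2.0 * o * r + c1) / (o ** 2 + r ** 2 + c1)
    go = np.hypot(*np.gradient(o))            # gradient magnitude, original
    gr = np.hypot(*np.gradient(r))            # gradient magnitude, inpainted
    grad_sim = (2.0 * go * gr + c2) / (go ** 2 + gr ** 2 + c2)
    return lum_sim.mean(), grad_sim.mean()
```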

3.2 Saliency Based

The saliency of an image highlights the areas to which human vision is most responsive or attracted. Hence, saliency can be used to estimate the visibility of the various artifacts introduced by inpainting techniques. In [17], inpainted-image artifacts are categorized as in-region and out-region artifacts. In-region artifacts occur when different colors and structures are introduced in the target region only; this increases the saliency within the inpainted area and disturbs the attention flow there. Out-region artifacts appear when local colors and structures are not extended into the target area by the inpainting technique; this increases the saliency in the neighbourhood of the inpainted region. Some quality assessment metrics that use the concept of saliency are listed below:

Average Square Visual Salience (ASVS): Being a no-reference metric, ASVS does not require any information about the original image. It is related to in-region artifacts, as it only considers the inpainted pixels relative to the overall scene (see the sketch after this list). As the value of this metric increases, the perceptual quality of the image decreases.

Degree of Noticeability (DN): Considering both in-region and out-region artifacts, [18] proposed a metric named DN. It identifies non-noticeable target regions and captures any alteration of the attention flow in the surroundings of the inpainted region. As the value of DN increases, the perceptual quality decreases.

Gaze Density (GD): GD also considers both in-region and out-region artifacts of the inpainted image. To compensate for deviations in texture and size, the GD of the inpainted image is divided by the GD of the original image.

Border Saliency based measure (BorSal): According to [19], the saliency change in the inpainted image is observed by mapping the saliency of the neighbouring pixels. This metric uses the border pixels to calculate a normalized GD; the border can be extended three pixels inside and three pixels outside the target region. An enhanced version of this metric is the Structural Border Saliency based measure (StructBorSal).

Visual Coherence Metric (VisCoM): This metric considers the correlation between the inpainted pixels and the pixels outside the target region (Table 6).
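The sketch below gives simplified versions of two of the saliency-based measures described above. Both take a precomputed saliency map as input (any saliency model may be used); the normalized-GD proxy uses saliency maps in place of the eye-tracking gaze data on which GD is normally based, which is an assumption made here for illustration.

```python
import numpy as np

def asvs(saliency, mask):
    """ASVS-style score: mean squared saliency over the inpainted pixels.
    saliency : 2-D float saliency map; mask : True on inpainted pixels.
    Higher values indicate more visible in-region artifacts."""
    return np.mean(saliency[mask] ** 2)

def normalized_gd(saliency_inpainted, saliency_original, mask):
    """GD-ratio proxy: mean saliency of the inpainted image over the target
    region divided by that of the original image over the same region."""
    num = np.mean(saliency_inpainted[mask])
    den = np.mean(saliency_original[mask]) + 1e-12
    return num / den
```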

Table 6 Summary of quality assessment measures

4 Conclusion

This paper examines various inpainting methods with a special focus on UAV images. The inpainting techniques are critically reviewed, and gaps are indicated in the tables along with their features, limitations and suitability. Most methods, such as texture-synthesis and PDE-based inpainting techniques, work well only for small areas to be inpainted; they cannot fill large missing areas and cannot recover curved structures. The modified Oliveira algorithm removes large undesired objects from UAV images without blur. The bilateral-filter-based approach protects edges and eliminates noise from UAV images. The 8-neighborhood fast sweeping algorithm gives better results than Bertalmio's algorithm. Inpainting of single and multiple regions in UAV images can be done using the spatial contextual correlation algorithm. The Poisson-equation-based approach gives good visual results for large inpainting areas. Using color distribution analysis, texture consistency and edge continuity can be obtained for better visual quality. Edges in UAV images can be enhanced using the extended wavelet transform. The non-linear diffusion tensor method repairs corrupted zones and preserves discontinuities in UAV images. In the future, 3D image inpainting can be performed using CNNs, and CNN-based inpainting techniques can be applied to UAV videos.