1 Introduction

There is a different world under the ocean, and many ways are now available to explore it. Advances in imaging technology have drawn attention to the rich and useful information this environment carries [52, 68]. Researchers capture high-quality underwater images for numerous purposes and applications, such as robotics, ecological monitoring, tracking of sea organisms, inspection of submerged artefacts, rescue missions, and real-time navigation [16, 57, 74, 78].

Underwater images are difficult to capture; the main constraints are lighting, the capturing process itself, suspended particles, and similar factors. Because light is far less visible under the sea than in a normal environment, an artificial mechanism is required, consisting of an optical camera or techniques such as spectral, panoramic, and polarization imaging [1, 13, 64, 71]. Apart from optical cameras, each of these techniques has its own limitations, such as a narrow field of view, complex and specialized operation, and limited depth.

What happens when an image is captured underwater? The intensity of the underwater image diminishes because of the nature of light. Light is made up of numerous wavelengths corresponding to different hues, including red, green, and blue. Owing to wavelength-dependent attenuation, blue and green wavelengths reach greater depths than red wavelengths, which vanish beyond about 5 m, resulting in images with mostly blue and green tones [70]. Because water is a dense medium, light propagating through it is refracted and absorbed by the surrounding water, so photographs taken underwater appear hazy.

Underwater images suffer from the poor visibility of light, which fades significantly while travelling through water, producing hazy, low-contrast results. Visibility underwater depends on the distance the light travels: around twenty meters in clear water and approximately five meters or less in turbid water. Scattering and absorption govern how light travels in water: scattering changes the direction of the light path, while absorption reduces the light energy. Hence, scattering and absorption together determine the overall performance of an underwater imaging system.

Scattering is basically of two types: forward scattering and backward scattering. In forward scattering, light deviates slightly while travelling from an object to the camera, which blurs the image; in backward scattering, light reflected by the water column itself reaches the camera and reduces image contrast. Scattering and absorption occur in the water itself but are intensified by other components such as small dust particles, organic matter, and tiny floating particles; the presence of these particles increases the effect of scattering and absorption.

As light propagates through the sea, its intensity is reduced and its colors are lost sequentially, depending on their wavelengths. The shorter the wavelength, the greater the distance it covers in the sea; conversely, the longer the wavelength, the shorter the distance it covers. Among the primary colors, blue has the shortest wavelength and therefore travels the farthest in the sea, so objects underwater are influenced by blue light more than by any other color. Consequently, the images we are interested in can be affected by any of the following: dull contrast, limited visibility range, blurring, haziness, non-uniform illumination, diminished color, a bluish appearance, and various types of noise. To work with these captured images we require mechanisms that can increase their quality.
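
To a first approximation, this wavelength-dependent loss follows the Beer-Lambert law. The short Python sketch below uses assumed per-channel attenuation coefficients (illustrative values, not taken from the cited works) to show why the red component effectively disappears within a few meters while blue and green persist:

```python
import numpy as np

# Assumed diffuse attenuation coefficients (1/m); real values depend on the
# water type and are not taken from this survey.
attenuation = {"red": 0.60, "green": 0.07, "blue": 0.05}
path_lengths = np.array([1, 5, 10, 20])  # metres travelled through water

for colour, c in attenuation.items():
    # Beer-Lambert law: fraction of light surviving after d metres
    surviving = np.exp(-c * path_lengths)
    print(colour, np.round(surviving, 3))
# Red drops to ~5% after 5 m, whereas green and blue retain most of their
# energy, which explains the blue-green cast of deep underwater images.
```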

Image processing techniques for this purpose fall into three categories: image enhancement techniques, image restoration methods, and image enhancement using deep learning and machine learning techniques.

Image enhancement techniques rely on subjective criteria of photographic quality; no mathematical criteria are used to optimize the processing outcome. The goal is simply to produce an image of good visual quality, and the generated images are independent of any concrete physical model. These techniques are generally faster and simpler than the others.

Image restoration recovers a degraded image by using models of the degradation process and of the original image formation; it is essentially an inverse process. These methods can be very accurate, but they require several parameters such as depth estimation and the attenuation and diffusion coefficients.

Deep learning techniques have evolved rapidly over the last several decades and have been widely utilized in a variety of computer vision and image processing applications. Deep learning has considerably improved the accuracy of high-level vision tasks such as object identification. Low-level vision tasks such as image super-resolution and image denoising also benefit from deep networks and achieve state-of-the-art performance. Deep learning based underwater image improvement has likewise shown convincing quality, and many researchers have therefore adopted deep learning approaches for underwater image enhancement [7].

Underwater image processing can also be categorized into three groups: model-free methods, model-based methods, and data-driven methods. The individual techniques largely coincide with those used in the restoration and enhancement categories above; the distinction between model-free and model-based approaches lies in whether physical properties are modelled. Model-free approaches redistribute the pixel values of a given image to improve contrast or correct color without simulating the underwater image formation process. Many common model-free approaches, such as histogram equalization, contrast limited adaptive histogram equalization (CLAHE), the gray-world assumption, color constancy, and automatic white balance, adjust underwater image pixels in this way.

Model-based techniques construct physics models that account for the mechanics of image formation and light transmission. Essential parameters of the degradation model are obtained from prior assumptions and observations, and the model is then inverted to obtain the desired result. Many model-based techniques have been proposed, such as the dark channel prior, the underwater dark channel prior, the red channel prior, and maximum attenuation identification.

Deep learning has made significant progress in recent years, particularly in vision, because of its immense potential for handling non-linear problems, and modern learning-based algorithms give cutting-edge performance for image enhancement tasks. Normally, these data-driven models need a significant amount of data paired with ground truth to achieve the desired results; for underwater image enhancement this is a major impediment to applying supervised learning effectively, because it is hard to procure numerous high-quality images of the same scenes in a real underwater environment. Several researchers have therefore studied the use of synthetic images to address the lack of paired training data and reported fair effectiveness. WaterGAN, UnderwaterGAN, and CycleGAN are examples of methods based on the data-driven approach [69].
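
As an illustration of the model-free category, the sketch below chains a gray-world white balance with CLAHE applied to the luminance channel. It is a minimal example of the kind of pixel redistribution these methods perform, not an implementation from any cited paper; the clip limit and tile size are common defaults.

```python
import cv2
import numpy as np

def gray_world_white_balance(img_bgr):
    """Gray-world assumption: scale each channel so that its mean matches
    the mean intensity of the whole image (model-free colour correction)."""
    img = img_bgr.astype(np.float32)
    channel_means = img.reshape(-1, 3).mean(axis=0)
    gains = channel_means.mean() / (channel_means + 1e-6)
    return np.clip(img * gains, 0, 255).astype(np.uint8)

def clahe_contrast(img_bgr, clip_limit=2.0, tile_grid=(8, 8)):
    """CLAHE on the L channel in Lab space, a common model-free contrast
    enhancement for low-contrast underwater images."""
    lab = cv2.cvtColor(img_bgr, cv2.COLOR_BGR2LAB)
    l, a, b = cv2.split(lab)
    clahe = cv2.createCLAHE(clipLimit=clip_limit, tileGridSize=tile_grid)
    return cv2.cvtColor(cv2.merge((clahe.apply(l), a, b)), cv2.COLOR_LAB2BGR)

# Example usage (file name is a placeholder):
# enhanced = clahe_contrast(gray_world_white_balance(cv2.imread("scene.png")))
```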

1.1 Study selection

The selection procedure identified the items most relevant to the purpose of this systematic literature review. If the same article appeared in more than one domain, it was examined only once. The content of the publications chosen for the final study was examined to ensure that the outcomes of this systematic literature review led to clear and unbiased conclusions. We completed the examination in order to reach a decision on ultimate inclusion or exclusion, and inconsistencies between individual assessments were then resolved through discussion. Once the articles were retrieved, the first step was to remove duplicate titles and titles unrelated to the study. The inclusion criteria (IC) were confined to the search string, and any study meeting at least one of the exclusion criteria (EC) was removed.

1.1.1 Inclusion criteria (IC)

IC1: The study was published before September 2021.

IC2: Only workshop, report, symposium, conference, and journal publications are considered.

IC3: The complete text is available in a digital database.

IC4: A proposed model or framework is present.

1.1.2 Exclusion criteria (EC)

EC1: Duplicate research is excluded.

EC2: Previews, book chapters, periodicals, theses, monographs, and interview-based papers are removed.

EC3: Studies relying on quality rating standards are excluded.

EC4: Studies written in languages other than English are excluded.

The selection of articles was based on the explicit inclusion and exclusion criteria listed above. Figure 1 was created using the PRISMA model and depicts the study selection procedure.

Fig. 1 PRISMA flow chart based studies selection procedure

The focus of this paper is as follows: (i) about eighty-five research papers are reviewed and their methodologies, datasets, and evaluation processes are summarized, helping the researcher to understand the progress in this field; (ii) a clear evaluation of targets and a complete analysis of underwater enhancement methods are performed, so that a researcher can select a suitable method for a practical case; (iii) datasets, one of the major concerns in underwater image enhancement, are described; and (iv) some of the open issues and challenges of underwater image enhancement and restoration are discussed, providing research directions for the future.

Section 2 of this paper surveys recent underwater image restoration and enhancement methods. In Section 3, underwater image quality evaluation is described and the available datasets are recorded, and experimental results on several groups of underwater images are subsequently discussed. Section 4 contains the result analysis of various methods with specific parameters. Section 5 discusses open challenges and issues in underwater image processing so that researchers can identify new research objectives in this domain. Section 6 concludes the paper and is followed by the references.

2 Underwater image processing algorithm

Underwater image processing algorithms are broadly classified into three categories: the first is based on image restoration methods, the second on image enhancement methods, and the third on the newer concept of underwater image enhancement with the help of machine learning and deep learning [84]. Hence, our study discusses all three categories, and the literature survey is organized along them; below we review the various papers, the methods they use, and their objectives.

2.1 Underwater image restoration method

In detail, underwater image restoration methods are classified into four main groups:

(i) Turbulence degradation model

(ii) Jaffe-McGlamery model

(iii) Point spread function (PSF) model

(iv) Image dehazing based model

2.1.1 Turbulence degradation model

Turbulence generates a non-uniform change in the refractive index of the atmosphere, which resembles the way light propagates in water. The degradation model A designed by Hufnagel and Stanley [44] is based entirely on the properties of atmospheric turbulence. In the frequency domain (u, v) it is defined by Eq. (1):

$$ A(u,v) = \exp\left[-k\left(u^{2}+v^{2}\right)^{5/6}\right] $$
(1)

Here, k represents the magnitude of the turbulence. Underwater image restoration is realized by merging this degradation model with an evaluation function. Yang and Gong [82] also designed an underwater image restoration method based on turbulence, where weighted contrast average grads (WCAG) are applied to assess the quality of underwater images.
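
A minimal sketch of this model is given below: it evaluates A(u, v) from Eq. (1) on a discrete frequency grid and multiplies it with the image spectrum to simulate turbulence-like degradation. The value of k and the frequency scaling are assumptions made for illustration; restoration would invert this model (for example with a Wiener filter), which is not shown here.

```python
import numpy as np

def turbulence_otf(shape, k=1e-3):
    """Degradation function of Eq. (1): A(u, v) = exp[-k (u^2 + v^2)^(5/6)].
    k (the turbulence magnitude) is an assumed, illustrative value."""
    rows, cols = shape
    u = np.fft.fftfreq(rows)[:, None] * rows  # frequency grid in FFT layout
    v = np.fft.fftfreq(cols)[None, :] * cols
    return np.exp(-k * (u ** 2 + v ** 2) ** (5.0 / 6.0))

def simulate_turbulence(gray_image, k=1e-3):
    """Apply the degradation in the frequency domain (grayscale float image)."""
    spectrum = np.fft.fft2(gray_image)
    degraded = np.fft.ifft2(spectrum * turbulence_otf(gray_image.shape, k))
    return np.real(degraded)
```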

2.1.2 Jaffe-McGlamery model

This is one of the most widely used models for underwater image restoration [17, 39, 56]. The light E_T arriving at the camera is divided into the following components: (i) light reflected directly from the object, E_d; (ii) light from the target that is scattered forward on its way to the camera, the forward scattered light E_f; and (iii) non-target reflected light, the back scattered light E_b, as given in Eq. (2).

$$ E_{T} = E_{d} + E_{f} + E_{b} $$
(2)

Based on a simplified version of the Jaffe-McGlamery model, Trucco and Olmos [73] designed a self-calibrated filter. The filter is based on two assumptions: (i) the underwater lighting (direct sunlight) is uniform, and (ii) forward scattering is the dominant component, while the direct component and backscattering can be neglected.

A few researchers focused not only on backscattering in the Jaffe-McGlamery model but also used the dark channel prior (DCP). In this approach it was presumed that backscattering does not affect high-contrast regions of an image, and the parameters of the model were estimated on the basis of this presumption.
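
To make Eq. (2) concrete, the sketch below composes the three components for a synthetic scene using a common simplification: exponential attenuation for the direct term, a blurred copy of the direct term as a crude stand-in for forward scatter, and a depth-dependent veiling term for backscatter. The coefficients, the blur, and the forward-scatter fraction are illustrative assumptions, not quantities from the cited works.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def jaffe_mcglamery_forward(scene, depth, c=0.4, veiling=0.8, sigma=3.0):
    """Illustrative simulation of E_T = E_d + E_f + E_b (Eq. 2).
    scene: ideal radiance in [0, 1]; depth: per-pixel path length in metres;
    c, veiling and sigma are assumed, not measured, parameters."""
    transmission = np.exp(-c * depth)              # attenuation along the path
    e_d = scene * transmission                     # direct component
    e_f = 0.2 * gaussian_filter(e_d, sigma)        # crude forward-scatter blur
    e_b = veiling * (1.0 - transmission)           # backscatter (veiling light)
    return np.clip(e_d + e_f + e_b, 0.0, 1.0)
```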

2.1.3 Point spread function model

Hou et al. [40,41,42] modelled the imaging process in seawater as a linear system and introduced the optical properties of water into a standard underwater image restoration framework. Parameters such as attenuation, the volume scattering function, absorption, and particle distribution were measured with specific instruments. Grosso [23], Voss [35] and Chapin [77] also used specific instruments to measure the PSF; however, these instruments were too complicated and expensive.

2.1.4 Image dehazing based model

This model is divided into two parts: (i) classical DCP based underwater image restoration models and (ii) learning based DCP underwater image restoration models. Table 1 summarizes these methods. In the model column, R is restoration, C is color correction, ML is machine learning, and DL is deep learning. In the hypothesis priori column, DCP is dark channel prior, DBGR is the difference between the blue-green and red channels, RDCP is red dark channel prior, UDCP is underwater dark channel prior, and CDCP is dark channel prior on color corrected images. In the background light column, GB is global background light estimation and LB is local background light estimation. In the transmission map (TM) estimation column, DEP is depth, AP is attenuation prior, FDC is from the dark channel, RET is Retinex, BM is blurring map, and MIL is minimum information loss.

Table 1 DCP based underwater image restoration models

In recent years, DCP based underwater image restoration models have gained attention [3,4,5, 14, 18, 19, 24,25,26,27, 31, 38, 43, 48, 49, 53, 58,59,60, 80, 81, 83]. They rely on the presumption that the red channel attenuates faster than the other color channels, which holds in open water and is used for calculating dark channel images in both kinds of DCP based restoration models.

Carlevaris et al. [14] first computed the maximum difference between the red and the blue-green channels. The transmission map was then estimated by scaling this maximum difference up to one, and the smallest value of the transmission map was taken as the background light. Finally, the posterior probability was maximized to obtain the restored image. Chiang and Chen [18] further studied the transmission map in terms of the ratio of the residual energy reaching the camera to the energy of the input image after reflection, and compared the average brightness difference between foreground and background to detect an artificial light source. Galdran et al. [31] took the red channel as the underwater prior and computed the background light from the highest value of the red channel. P. Drews, Jr., et al. [25] considered the red channel the fastest attenuated channel, which therefore provides little information about scene depth; they proposed the underwater dark channel prior (UDCP), in which the dark channel image is computed from the minimum over the green and blue channels only, and the background light is estimated from the highest value in this dark channel image.
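
The sketch below illustrates this UDCP-style pipeline: a dark channel built from the green and blue channels only, a background light taken at the brightest dark-channel pixel, a DCP-style transmission estimate, and inversion of the simplified formation model. The patch size, omega, and minimum transmission are common defaults rather than values from the cited papers.

```python
import cv2
import numpy as np

def underwater_dark_channel(img, patch=15):
    """UDCP-style dark channel on a float BGR image in [0, 1]: minimum over
    the blue and green channels only (red is too strongly attenuated),
    followed by a local minimum (erosion) over a square patch."""
    min_bg = np.minimum(img[:, :, 0], img[:, :, 1]).astype(np.float32)
    kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (patch, patch))
    return cv2.erode(min_bg, kernel)

def background_light(img, dark):
    """Pick the pixel with the largest dark-channel value as background light."""
    return img[np.unravel_index(np.argmax(dark), dark.shape)]

def transmission_map(img, light, omega=0.95, patch=15):
    """t = 1 - omega * dark_channel(I / B); omega < 1 keeps a little residual
    veiling light for a natural appearance."""
    normalised = np.clip(img / (light + 1e-6), 0.0, 1.0)
    return 1.0 - omega * underwater_dark_channel(normalised, patch)

def restore(img, light, t, t_min=0.1):
    """Invert the simplified formation model: J = (I - B) / max(t, t_min) + B."""
    t = np.maximum(t, t_min)[:, :, None]
    return np.clip((img - light) / t + light, 0.0, 1.0)
```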

When light is absorbed by water, it produces a scattered color cast that often causes the dark channel prior to fail to estimate the transmission map precisely. Furthermore, underwater scenes often have limited or non-uniform lighting, and dark scene areas remain dark after imaging. In previous work, blurriness and scene depth have been used to improve the transmission map estimation [19, 24, 58,59,60], and color correction has been added to counter the uneven color cast caused by absorption [4, 5, 19, 24, 27, 43, 48, 49, 58,59,60, 83]. Ancuti et al. [3] used the local maximum of the dark channel to estimate the background light.

The background light, whether global or local, has also been identified with flat areas [48, 49] or blurry regions [26, 27]. Emberton et al. [26] designed a hierarchical model to locate the blurry area in an underwater image and identify the background light; however, the model becomes unreliable when the color of the underwater target is close to that of the blurry area. Emberton et al. [27] later classified underwater images into (i) greenish, (ii) bluish and (iii) blue-greenish categories using the same hierarchical technique, and applied a different white balance procedure to each category before DCP based restoration. When the theoretical maximum value of the background light is used as the denominator when estimating the transmission map, over-saturation can occur, leading to artefacts in the background area [79].

In existing approaches, most of the learning used in DCP based restoration models relies on supervised scenarios [49], although some approaches are unsupervised. Based on the statistical distribution of colors in natural images, the authors of [80, 81] clustered the colors of the original images into 500 groups, and every pixel in a color image was represented by its cluster centre. In this clustering space, the pixels of one cluster form a line segment according to their distance from the camera. Clustering the logarithm of the RGB values, an attenuation curve is obtained using a k-dimensional (KD) tree. The background light is then estimated from the pixel with the maximum variation between the RGB channels. A saturation constraint is applied to correct the transmission map, yet the restored images remain over-saturated and dark.

2.2 Underwater image enhancement method

These methods extract information from the image itself, even in the absence of prior knowledge about the environment, and are therefore more general than restoration methods. Many underwater enhancement approaches combine techniques originally applied to natural images [37, 65, 76]. Here we discuss the main classes of underwater image enhancement methods, which focus on contrast stretching, fusion-based improvement combining multiple sources of information, and noise removal. These methods are listed in Table 2.

Table 2 Underwater image enhancement techniques

2.2.1 Filter based method

Arnold-Bos et al. [8] designed a preprocessing model for the luminance component of underwater images. The model identifies a specific noise range in the underwater image by combining enhancement and deconvolution methods; a log-Gabor wavelet is used for denoising, for decreasing quantization errors, and for suppressing particle noise, which improves subsequent edge detection. The model designed by Bazeille [9] contains various filtering steps that correct non-uniform illumination, increase contrast, decrease noise, and adjust the color of an underwater image. To reduce noise in underwater images, Jia and Ge [46] designed a non-subsampled contourlet transform (NSCT) based on adaptive total variation; a partial differential equation (PDE) was also used to reduce noise and reconstruct the frequency components. The quality of the enhanced underwater images was examined using sharpness and the peak signal to noise ratio (PSNR).
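
The snippet below is a crude stand-in for the filtering stage described above: it uses an edge-preserving bilateral filter instead of log-Gabor or NSCT denoising (which require dedicated implementations) and scores the result with PSNR against a reference image. The filter parameters are assumed defaults.

```python
import cv2

def denoise_and_score(noisy_bgr, reference_bgr):
    """Edge-preserving denoising plus a PSNR score against a clean reference
    (a reference is only available for synthetic or laboratory test images)."""
    denoised = cv2.bilateralFilter(noisy_bgr, d=9, sigmaColor=75, sigmaSpace=75)
    return denoised, cv2.PSNR(reference_bgr, denoised)
```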

2.2.2 Color correction based method

Chambah et al. [15] proposed a model in which automatic color equalization (ACE) is applied to each RGB channel separately, and the outputs of the three channels are adjusted to improve fish recognition in aquarium images; the various parameters of the ACE algorithm are tuned internally. Ghani and Isa [32, 33] designed a model based on the Rayleigh distribution that contains a sequence of color correction steps. Torres-Méndez and Dudek [72] treated the underwater image as a Markov random field (MRF), in which the observed nodes represent the degraded color values while the hidden nodes represent the true color values; the relationship between a pixel and its neighborhood is learned by training on the true colors of sample pixels. Iqbal et al. [45] designed an underwater image enhancement model for the marine environment using the integrated color model; it applies sliding contrast stretching in the RGB color space and brightness and saturation stretching in the HSI color space. The underwater image color enhancement model of Petit et al. [62] is based on inverting the optical attenuation.
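
The sketch below gives a rough, Iqbal-style pipeline: per-channel sliding contrast stretching in RGB followed by saturation and brightness stretching in a hue-preserving space (HSV is used here in place of HSI for simplicity). The percentile limits are assumptions, not values from the cited paper.

```python
import cv2
import numpy as np

def stretch(channel, low_pct=1, high_pct=99):
    """Percentile-based sliding stretch of one channel to the full [0, 255] range."""
    lo, hi = np.percentile(channel, (low_pct, high_pct))
    out = (channel.astype(np.float32) - lo) * 255.0 / max(hi - lo, 1e-6)
    return np.clip(out, 0, 255).astype(np.uint8)

def integrated_colour_model(img_bgr):
    """Contrast stretching per RGB channel, then saturation and value
    (brightness) stretching; hue is left untouched to avoid colour shifts."""
    rgb_stretched = cv2.merge([stretch(c) for c in cv2.split(img_bgr)])
    h, s, v = cv2.split(cv2.cvtColor(rgb_stretched, cv2.COLOR_BGR2HSV))
    return cv2.cvtColor(cv2.merge((h, stretch(s), stretch(v))), cv2.COLOR_HSV2BGR)
```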

Fu et al. [29] designed a variational Retinex model based on Retinex theory, in which the spatial luminance component of the color corrected underwater image is decomposed with a linear-domain variational Retinex over 4-6 iterations. In [30], triangular and bilateral filters were used on the L, a and b components in place of the Gaussian filter, and the results were then combined according to the ratio of values in the RGB space.

2.2.3 Image fusion based method

Many methods and models based on careful observation play an important role in enhancement, and the fusion process has gradually been adopted for image enhancement as well. Ancuti et al. [2] designed a fusion based underwater image enhancement model in which a white-balanced, color-improved input and the output of bilateral filtering are weighted together with the outcome of histogram equalization. To obtain a pixel-level fusion output, four types of fusion weights were calculated: Gaussian, local, sensitometry, and saliency contrast. In [6], considering the fast attenuation of the red channel, they further improved the white balance processing.
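
A much-simplified fusion sketch is shown below: two or more pre-processed versions of the same underwater image (for example, a white-balanced input and a contrast-enhanced input) are blended with per-pixel weights derived from a Laplacian contrast map. It is a crude stand-in for the multi-weight fusion of Ancuti et al., not their implementation.

```python
import cv2
import numpy as np

def fuse(inputs, eps=1e-6):
    """Pixel-level fusion of several BGR versions of one underwater image,
    weighted by the (normalised) magnitude of their Laplacian responses."""
    grays = [cv2.cvtColor(i, cv2.COLOR_BGR2GRAY).astype(np.float32) for i in inputs]
    weights = [np.abs(cv2.Laplacian(g, cv2.CV_32F)) + eps for g in grays]
    total = np.sum(weights, axis=0)
    fused = np.zeros_like(inputs[0], dtype=np.float32)
    for img, w in zip(inputs, weights):
        fused += img.astype(np.float32) * (w / total)[:, :, None]
    return np.clip(fused, 0, 255).astype(np.uint8)
```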

2.3 Comparison between image restoration and image enhancement methods

A comparison of the various techniques used in image restoration and image enhancement methods, with their advantages and disadvantages, is given in Tables 3 and 4.

Table 3 Comparison of various techniques used in image restoration method
Table 4 Comparison of various techniques used in image enhancement method

2.4 Deep learning based method

Deep learning based underwater image enhancement faces challenges such as the labeling of images and the difficulty of collecting real paired data. Some of the approaches are discussed in Table 5, where, in the training images column, N stands for normal images and U stands for underwater images.

Table 5 Deep learning based underwater image enhancement models

A collection of color corrected underwater images [12] was used as the training dataset in [61], where an underwater image enhancement method based on a CNN was constructed; in this model 55 elements are used, after which a 3-D enhanced underwater image is obtained. In [50], the WaterGAN network was designed for underwater image color correction; it simulates the attenuation caused by the water body. It is similar to generative adversarial networks (GAN) [34] and uses two training sets: one containing normal in-air images with their relative depth maps, and another containing underwater images taken from simulated underwater and laboratory scenes, synthesized with reference to the Jaffe-McGlamery model.

Inspired by the cycle-consistent adversarial network (CycleGAN) [85], a weakly supervised color transfer model was proposed by Li et al. [51] to correct the color distortion of deep-sea underwater images. Forward and backward mappings between the underwater and normal images, together with adversarial discriminators, were incorporated. Several loss functions, including the adversarial loss LossGAN, the structural similarity loss LossSSIM, and the cycle consistency loss LossCyc, were used in the forward and backward mapping generators. The useful content of the underwater images is preserved while the color is improved.
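
For readers unfamiliar with the data-driven approach, the sketch below shows a minimal residual CNN enhancer in PyTorch. It is only meant to illustrate the general idea; it is neither WaterGAN nor the CycleGAN variant discussed above, and it would need paired (or synthetic) training data and suitable losses (adversarial, SSIM, and cycle-consistency terms) to reproduce those methods.

```python
import torch
import torch.nn as nn

class SimpleEnhancer(nn.Module):
    """A minimal residual CNN for underwater image enhancement (illustrative)."""
    def __init__(self, width=64):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(3, width, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(width, width, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(width, 3, 3, padding=1),
        )

    def forward(self, x):
        # Predict a colour/contrast correction and add it back to the input.
        return torch.clamp(x + self.body(x), 0.0, 1.0)

# model = SimpleEnhancer()
# out = model(torch.rand(1, 3, 256, 256))  # batch of normalised RGB images
```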

3 Underwater image dataset and evaluation

3.1 Underwater image dataset

Underwater image datasets are very useful for the development of underwater image processing techniques. The various underwater image datasets that have been used by several authors for image enhancement and image restoration are summarized in Table 6.

Table 6 Underwater Images Datasets

Sample images from these datasets are shown in Fig. 2. Because such data are difficult to collect, we cannot claim that this list of datasets is complete. Problems such as inaccurate labelling information, a small number of categories, and single target objects hamper the development of underwater image enhancement techniques.

Fig. 2 Some images from different datasets. a Wild Fish Marker [20] and OUCVISION Dataset [47] b Underwater Photography-Fish [75] and Rock Database [63] c Port Royal [50, 63] and HabCam Underwater dataset [21, 22, 36] d MOUSS [21, 22] and AFSC Underwater dataset [21, 22] e MBARI [21, 22, 55] and NWFSC Underwater Dataset [21, 22] f RUIE [54, 67] and RGBD Underwater Dataset [10, 28, 53]

3.2 Evaluation of underwater image quality

The assessment of image quality plays a vital role in image restoration, image enhancement, image classification, image retrieval, image transmission, and the optimization of optical imaging systems. Two main approaches, subjective image quality evaluation (IQE) and objective image quality evaluation, are used for evaluating the quality of images. Objective IQE methods are further classified according to whether a reference image is available; since a reference image usually cannot be found for an underwater scene, a no-reference metric is needed to assess underwater image quality.

A number of quantitative metrics can be used to assess the restoration and enhancement performance on different types of underwater images: (i) global contrast, dealing with grayscale underwater image quality; (ii) the weighted gray scale angle (WGSA) metric, to evaluate the improvement of a restored image; and (iii) a robustness index, to measure how close the gray scale histogram is to an exponential distribution. Some papers also define methods to assess the robustness of underwater image noise removal.

For color underwater images, two important no-reference evaluation metrics are used. One is the underwater image quality measure (UIQM), which combines three components to assess the quality of underwater images: (i) the underwater image sharpness measure (UISM), (ii) the underwater image contrast measure (UIConM), and (iii) the underwater image colorfulness measure (UICM).

The other no-reference metric is the underwater color image quality evaluation (UCIQE) metric, which is widely used to quantify the non-uniform color cast, blur, and noise in an underwater image.
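
A sketch of a UCIQE-style score is given below: a weighted sum of the chroma standard deviation, luminance contrast, and mean saturation computed in CIELab space. The weights are the commonly reported ones, but channel scaling conventions differ between implementations, so treat this as an approximation rather than a reference implementation.

```python
import cv2
import numpy as np

def uciqe(img_bgr, c1=0.4680, c2=0.2745, c3=0.2576):
    """UCIQE-style no-reference score (higher is taken to mean better quality)."""
    lab = cv2.cvtColor(img_bgr, cv2.COLOR_BGR2LAB).astype(np.float32)
    l = lab[..., 0]
    a, b = lab[..., 1] - 128.0, lab[..., 2] - 128.0
    chroma = np.sqrt(a ** 2 + b ** 2)
    sigma_c = chroma.std()                                        # chroma spread
    con_l = (np.percentile(l, 99) - np.percentile(l, 1)) / 255.0  # luminance contrast
    mu_s = (chroma / (l + 1e-6)).mean()                           # mean saturation
    return c1 * sigma_c + c2 * con_l + c3 * mu_s
```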

In addition, several metrics originally defined to evaluate the quality of natural images are used, such as the patch based contrast quality index (PCQI), mean square error (MSE), global contrast factor (GCF), structural similarity index measure (SSIM), average execution time, peak signal to noise ratio (PSNR), entropy, contrast to noise ratio (CNR), visibility metrics based on CNR (VM-CNR), discrete entropy and contrast measure (DECM), and the gradient ratio at visible edges (GAVE).
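
When a reference image is available (for example, with synthetic data), some of these full-reference metrics can be computed directly with scikit-image, as in the sketch below (a recent scikit-image version is assumed for the channel_axis argument).

```python
from skimage.metrics import (mean_squared_error,
                             peak_signal_noise_ratio,
                             structural_similarity)

def full_reference_scores(reference_rgb, enhanced_rgb):
    """MSE, PSNR and SSIM between a reference and an enhanced uint8 RGB image;
    a reference is rarely available for real underwater scenes."""
    return {
        "MSE": mean_squared_error(reference_rgb, enhanced_rgb),
        "PSNR": peak_signal_noise_ratio(reference_rgb, enhanced_rgb),
        "SSIM": structural_similarity(reference_rgb, enhanced_rgb, channel_axis=-1),
    }
```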

Overall degradation dominates underwater images, including chroma reduction, poor contrast, non-uniform light, blurring, non-uniform color casts, and noise from numerous sources. Because of the various distortions present in underwater images, it is difficult to develop a standard image quality metric that can be applied to all kinds of underwater conditions; existing underwater image quality criteria may give incorrect scores for underwater images containing dark areas, over-saturation, or non-uniform brightness.

4 Result evaluation and analysis

Various techniques for underwater image restoration and enhancement were tested in this section to assess their subjective and objective performance. The images were classified into five groups: bluish, greenish, yellowish, whitish, and deep-sea underwater images. Several underwater image dehazing methods were analyzed, namely those of He et al. [38] and Galdran et al. [31], along with techniques combining DCP with color correction, such as those of Yang et al. [83], Peng et al. [59], and Li et al. [49]. The color enhancement methods ACE [11] and Iqbal et al. [45], the Retinex based method of Fu et al. [29], and a deep learning based method [51] were also studied and analyzed.

4.1 Subjective evaluation

The experimental results of the various methods are shown in Figs. 3, 4, 5, 6 and 7. It can be observed that the outputs of the techniques of Galdran et al. [31], Peng et al. [59], ACE [11], Li et al. [49], Yang et al. [83], and Fu et al. [29] visibly enhance the color, up to an optimum level, in different parts of the underwater images. Among these, the ACE method [11] and the methods of Li et al. [49], Yang et al. [83], and Fu et al. [29] recorded fine performance. Fu et al. also improved the color cast, but their output images contain blurred details. Various other DCP based underwater image restoration methods had difficulty producing bright target regions. The method of Yang et al. [83] generally achieved superior color restoration for every type of underwater image; it increased the contrast of dark regions but also smoothed some fine details in the underwater images.

Fig. 3 Bluish underwater images results comparison. a Original Images b He model results [38] c Galdran model results [31] d Peng model results [59] e Li model results [49] f Yang model results [83] g ACE model results [11] h Iqbal model results [45] i Fu model results [29] j Li model results [51]

Fig. 4 Yellowish underwater images results comparison. a Original Images b He model results [38] c Galdran model results [31] d Peng model results [59] e Li model results [49] f Yang model results [83] g ACE model results [11] h Iqbal model results [45] i Fu model results [29] j Li model results [51]

Fig. 5 Greenish underwater images results comparison. a Original Images b He model results [38] c Galdran model results [31] d Peng model results [59] e Li model results [49] f Yang model results [83] g ACE model results [11] h Iqbal model results [45] i Fu model results [29] j Li model results [51]

Fig. 6 Whitish underwater images results comparison. a Original Images b He model results [38] c Galdran model results [31] d Peng model results [59] e Li model results [49] f Yang model results [83] g ACE model results [11] h Iqbal model results [45] i Fu model results [29] j Li model results [51]

Fig. 7 Deep sea underwater images results comparison. a Original Images b He model results [38] c Galdran model results [31] d Peng model results [59] e Li model results [49] f Yang model results [83] g ACE model results [11] h Iqbal model results [45] i Fu model results [29] j Li model results [51]

4.2 Objective evaluation

The restoration results were assessed with the UCIQE, UIQM, and PCQI metrics, because these metrics are widely used to evaluate the performance on underwater images. PCQI is used to evaluate the difference between the original and the enhanced gray scale images: a value of 1 indicates no change between the input and the evaluated image, whereas a value below or above 1 signifies a change; such a change does not necessarily mean an improvement in image quality.

Higher UIQM and UCIQE values indicate better quality of underwater images. Tables 7 and 8 present the five image groups, each processed with the ten methods, together with their numeric values for the three metrics. The variation between the original image and the processed image is smaller when the PCQI value is closer to 1; the reason is that PCQI involves no color information. In Tables 7 and 8, the lower values of the output images indicate that their overall brightness has changed significantly; these images are shown in Figs. 3, 4, 5e, 6j and 7g. The images in Figs. 4, 5 and 6d and in Fig. 7j have extremely dark regions in the output, moderate saturation, and unusually high global contrast, which causes larger UCIQE values, as shown in Tables 7 and 8. In Tables 7 and 8, the UIQM values attained by the model of Li et al. are affected by the color deviation of the output images, which manifests as high chroma variance and local contrast.

Table 7 Bluish, Yellowish and Greenish underwater images quality evaluation in Figs. 3, 4 and 5 respectively
Table 8 Whitish and Deep sea underwater images quality evaluation in Figs. 6 and 7 respectively

In conclusion, the accuracy of state-of-the-art underwater image quality evaluation techniques is not adequate, owing to the complexity of the underwater imaging environment and the variety of degradation categories (low contrast, color deviation, noise, blurring, etc.). In particular, the validity of color restoration and the amount of detail recovered in dark regions did not meet the quality requirements of subjective visual assessment. Among PCQI, UCIQE, and UIQM, UCIQE has the shortest average processing time, which also makes it suitable for real-time underwater applications.

Comparing the image restoration and image enhancement methods on the basis of the objective and subjective evaluations, the evaluation values are more favorable for techniques that use enhancement methods than for restoration methods. If we further compare classical enhancement with deep learning and machine learning based enhancement, the learning based techniques give better results; however, more research on deep learning and machine learning enhancement techniques is needed before this can be concluded firmly.

5 Discussion on future work

The comparison and assessment provided in this survey demonstrate that a satisfying outcome may be attained by using an appropriate enhancement technique for a given underwater activity and situation. To satisfy the needs of complicated circumstances, the optimal algorithm should be capable of automatically assessing the information in the input underwater image and making adaptive adjustments for diverse scenes and lighting conditions. There is currently no consensus on the best underwater enhancement technique. Furthermore, the impact of uneven illumination from artificial lighting sources is rarely discussed, and motion blurring, a degradation that occurs in practically every underwater image, is hardly taken into account in enhancement or restoration procedures.

Most studies concentrate on single underwater images and place little emphasis on underwater video processing, yet underwater video processing is of critical relevance in practical applications. At the moment, several issues require immediate attention, such as underwater pollution, video processing efficiency, and inter-frame consistency.

The available underwater image quality evaluation methods cannot accurately assess contrast and partial color improvements, and it is difficult to build a meaningful standardized objective assessment approach for underwater image improvement. Although existing natural image databases have contributed significantly to the advancement of image quality assessment, the image deformations in these datasets are either single deformations created manually or deformations of images obtained with mobile devices. Furthermore, when applied to another database, the accuracy of an image quality assessment approach trained on only one database is frequently poor.

With respect to future work, researchers may consider the following aspects: (i) by comparing and analyzing the methods covered in the present study, more appropriate underwater image enhancement methods are needed which focus on adaptive adjustment for various scenes and lighting issues in the deep sea; (ii) the uneven availability of external light sources under the sea should be addressed; (iii) motion blurring is another parameter where underwater image enhancement and restoration can be improved; and (iv) researchers should focus not only on underwater image processing but also on underwater video processing.

6 Conclusion

The difficulty of achieving visibility of objects at long or short distances in underwater environments is a challenge for the image analysis community. Although various image enhancing techniques are available, they are mostly confined to regular images, and only a handful have been created expressly for underwater images. We evaluated some of them in this paper to bring the facts together for a deeper understanding and evaluation of the approaches. We outlined the existing approaches for image restoration, enhancement, and enhancement using deep learning and machine learning, concentrating on the conditions under which each algorithm was originally designed. We also examined the methods used to assess the efficacy of the algorithms, highlighting studies that employed a quantitative quality score.

According to our research findings, a shared acceptable dataset of test images for varied imaging situations, as well as standard criteria for qualitative and quantitative evaluation of the results, is still necessary to improve underwater imaging processing.

Emerging underwater photography techniques and technologies necessitate the adaptation and extension of the methods described above, for example to handle data from numerous sources that can collect 3-dimensional scene information. Moreover, investigating the visual systems of underwater creatures will undoubtedly provide clear perspectives on the extraction of knowledge from underwater images.

This paper introduced various existing models for underwater image enhancement and restoration and outlined some common issues. Results from various underwater image restoration and enhancement techniques on yellowish, greenish, bluish, whitish, and deep-sea images were compared, which helps to identify the most suitable methods under various constraints. In addition, the accuracy and limitations of various underwater image quality evaluation metrics were analyzed. We also summarized various underwater image datasets and provided future research directions for researchers in this area.