1. Introduction
An underwater environment is any area immersed in water, such as the ocean floor, a reservoir, a basin, or a riverbed. Such environments are also found in lakes, ponds, dams, canals, and even aquifers. Underwater environments are important because water covers almost 71% of the Earth's surface and provides natural habitats for most living organisms [1]. In addition, they are considered a potential resource for the extraction of various minerals, such as silver, gold, copper, manganese, and zinc. Therefore, exploring, developing, and protecting underwater resources have become active research topics.
The clear interpretation and analysis of underwater videos and images offer important and valuable information about the underwater world. They are important for domains such as underwater archaeology, marine ecological research, naval military applications, and telecommunication cable handling [2]. Consequently, the processing and analysis of underwater images are crucial to research on developing, exploring, and protecting underwater resources [3,4,5,6,7,8,9].
During image acquisition, the poor visibility conditions in an underwater environment reduce the obtained image quality, resulting in highly degraded, low-contrast, and noisy images. This limits their use in many practical scenarios. Two solutions for obtaining clear underwater images are available: one requires expensive, specialized image acquisition hardware, and the other applies image preprocessing techniques for enhancement and restoration so that the resulting images are better suited to display and to downstream applications.
Underwater images are captured using diverse methods. The moorings-and-buoys method is used to monitor water quality, seabed conditions, and microorganisms in the water. This method relies on cameras mounted on remotely operated vehicles (ROVs), unmanned underwater vehicles (UUVs), autonomous underwater vehicles (AUVs), or an ocean sensor network [10,11,12]. For better image quality, these vehicles are equipped with sensors, such as GPS receivers and cameras, to collect information about subaquatic minerals, coral reef ecosystems, or deep-sea habitats.
Figure 1 presents a selection of the types of equipment used in collecting and observing ocean data.
Figure 1. Concept map of the ocean observation network.
Other image-capturing methods rely on sonar, and their quality depends on the wavelength of the sound used. Sonar emits acoustic pulses, then captures the underwater sound reflections and converts them into images [13]. This image-capturing method helps researchers study underwater environments efficiently.
Due to the above-mentioned circumstances of the underwater image acquisition process, the obtained images must be preprocessed so that they can be displayed and analyzed properly. This is accomplished by developing new underwater image processing and computer vision techniques [14]. Computer vision algorithms can effectively analyze and interpret underwater visual data, but this is restricted by the limited visibility conditions, which result in low-contrast, noisy images. Preprocessing techniques are required to overcome these challenges and obtain clear, high-quality images. Image preprocessing techniques are classified into two main groups: underwater image restoration and underwater image enhancement [15].
The implementation of underwater image restoration techniques mainly depends on physical models. These physical models are important for many tasks, such as building the underwater image degradation model, computing its parameters (e.g., diffusion, attenuation, or water turbidity coefficients), and tackling the inverse problem. Performing these tasks requires prior knowledge and assumptions about the environmental conditions. Mathematical models can be used to estimate the model parameters, but they are complicated and computationally demanding. Conversely, underwater image enhancement techniques provide clear images of suitable quality based on qualitative criteria. These techniques can improve image color and contrast much more simply and quickly, without using physical models.
Many reviews on underwater image enhancement and restoration have been published. However, these investigations focus on specific aspects related to underwater image analysis. [16] presented a brief survey on underwater image enhancement. Other surveys, such as [6,12,15,17], reviewed many methods for enhancing and restoring underwater images but focused only on the techniques used, their limitations, quality assessment measures, and future directions. More recent surveys, such as [18,19,20], concentrate only on the methods used for underwater enhancement while ignoring restoration. Finally, the survey presented in [21] lacks a complete discussion of existing enhancement methods as well as quality evaluation metrics. In summary, these reviews do not comprehensively discuss several issues: they present incomplete classifications of enhancement and restoration techniques, ignore the latest developments in deep learning techniques, and lack a clear discussion of how to increase image quality. Therefore, this survey intends to review the most prominent underwater image restoration and enhancement approaches and overcome the previously listed limitations.
The following are the basic contributions of this survey:
The basic concepts related to the underwater environment, including image formation and light degradation models, are explained.
Recent underwater image enhancement and restoration methods are comprehensively discussed to identify their working methodologies, strengths, and limitations.
The datasets applied for improving underwater image analysis and the existing evaluation metrics are discussed and compared.
Different enhancement and restoration techniques are experimentally evaluated using images from underwater image datasets.
The main limitations that researchers face in underwater image analysis are summarized. These limitations are classified into two categories: those related to the underwater environment and those related to the underwater images.
Several open issues for underwater image enhancement and restoration are presented to highlight potential future research directions.
The remainder of this survey is organized as follows.
Section 2 describes the methodology implemented in this study, including the keywords used for the searches, data sources, and criteria for selecting, including, and excluding articles.
Section 3 presents the background of imaging in subaquatic environments, identifying the image formation model and types of light degradation.
Section 4 introduces the classification of the underwater image processing methods into two main categories (enhancement and restoration) and presents a review of important previous studies.
Section 5 presents the techniques used for underwater image analysis.
Section 6 highlights the limitations faced by researchers in this field.
Section 7 provides a comparison of the existing underwater imaging datasets.
Section 8 presents the metrics used for evaluating the quality of underwater imaging techniques.
Section 9 details performance evaluation for the qualitative, quantitative, and computational time assessment.
Section 10 presents a discussion on several applications in the field of underwater image enhancement and restoration.
Section 11 elaborates on future research directions. Finally, the conclusion is presented.
Figure 2 shows the survey structure and
Table 1 provides all the abbreviations used in this survey.
Table 1. Abbreviations used in this survey.

Abbreviation | Definition
AD | Average Difference
AG | Average Gradient
AHE | Adaptive Histogram Equalization
AMBE | Absolute Mean Brightness Error
AUVs | Autonomous Underwater Vehicles
BBHE | Brightness Preserving Bi-Histogram Equalization
CCF | Colourfulness Contrast Fog density index
CEF | Colour Enhancement Factor
CRBICMRD | Color Restoration Based on the Integrated Color Model with Rayleigh Distribution
CLAHE | Contrast Limited Adaptive Histogram Equalization
CNN | Convolutional Neural Network
CNR | Contrast-to-Noise Ratio
DCP | Dark Channel Prior
DCT | Discrete Cosine Transform
DOP | Degree of Polarization
DL | Deep Learning
DSNMF | Deep Sparse Non-negative Matrix Factorization
DWT | Discrete Wavelet Transform
EAs | Evolutionary Algorithms
EME | Measure of Enhancement
EMEE | Measure of Enhancement by Entropy
EUVP | Enhancement of Underwater Visual Perception
FR | Full Reference
GANs | Generative Adversarial Networks
GUM | Generalized Unsharp Masking
HE | Histogram Equalization
HSI | Hue-Saturation-Intensity
HR | High Resolution
HSV | Hue-Saturation-Value
HVS | Human Visual System
IEM | Image Enhancement Metric
ICM | Integrated Color Model
IFM | Image Formation Model
JTF | Joint Trigonometric Filtering
LFR | Light Field Rendering
MARI | Marine Autonomous Robotics for Interventions
MAI | Maximum Attenuation Identification
MCM | Multi-Color Model
MD | Maximum Difference
MILP | Minimum Information Loss Principle
MIP | Maximum Intensity Prior
MLP | Multilayer Perceptron
MSRCR | Multiscale Retinex with Color Restoration
MSE | Mean Square Error
MTF | Modulation Transfer Function
NAE | Normalized Absolute Error
NCC | Normalized Cross-Correlation
NIQA | Natural Image Quality Assessment
NR | No Reference
NR-IQA | No-Reference Image Quality Metric
PCQI | Patch-based Contrast Quality Index
PDI | Polarization Differential Imaging
PSF | Point Spread Function
PSNR | Peak Signal-to-Noise Ratio
PSO | Particle Swarm Optimization
RAHIM | Recursive Adaptive Histogram Modification
RCP | Red Channel Prior
RGB | Red-Green-Blue
RGHS | Relative Global Histogram Stretching
RIP | Range Intensity Profile
RMSE | Root Mean Square Error
RNN | Recurrent Neural Network
ROVs | Remotely Operated Vehicles
RR | Reduced Reference
RUIE | Real-World Underwater Image Enhancement
SAUV | Sampling System-AUV
SCM | Single-Color Model
SNR | Signal-to-Noise Ratio
SR | Super-Resolution
SSIM | Structural Similarity Index Measure
SSEQI | Spatial-Spectral Entropy-based Quality Index
SVM | Support Vector Machine
TM | Transmission Map
UCIQE | Underwater Colour Image Quality Evaluation Metric
UDCP | Underwater Dark Channel Prior
UHTS | Underwater Task-oriented Test Suite
UIEB | Underwater Image Enhancement Benchmark
UIE | Underwater Image Enhancement
UIQS | Underwater Image Quality Set
UIQM | Underwater Image Quality Measure
UISM | Underwater Image Sharpness Measurement
UICM | Underwater Image Color Measurement
ULAP | Underwater Light Attenuation Prior
UIConM | Underwater Image Contrast Measurement
UOI | Underwater Optical Imaging
UUVs | Unmanned Underwater Vehicles
WCID | Wavelength Compensation and Image Defogging
Figure 2. Survey's structure.
2. Research Methodology
This section describes the protocols used to examine different methods and techniques proposed for solving underwater image analysis problems during 2006–2022. The search keywords, data sources, inclusion/exclusion criteria, and article selection criteria are discussed.
Table 2 presents the frequency with which the techniques proposed for underwater image analysis fall into three different classes.
Table 2. Technique type: analysis based on frequency.

No. | Method type | Method frequency
1 | Hardware-based methods | 10%
2 | Underwater image restoration | 30%
3 | Underwater image enhancement | 60%
2.1. Search Keywords
The keywords were carefully selected for the initial search. Then, additional terms found in numerous related articles were used to expand the keyword list. The main keywords used in many studies include underwater image analysis, underwater image enhancement, underwater image restoration, underwater datasets, and underwater image quality evaluation. Our understanding of the topic facilitated the selection of other keywords, such as color enhancement, light correction method, color correction, dark channel prior, deep learning, image dehazing, scattering, and absorption.
2.2. Data Sources
Our survey included searching various academic databases to collect the articles, as indicated in
Table 3.
Table 3. Academic databases selected for research in this survey.
2.3. Article Inclusion/Exclusion Criteria
Based on our research goal, inclusion/exclusion criteria were chosen to determine which publications were suitable for the next review stage. Articles meeting the inclusion criteria were considered relevant to the research, and articles that did not fulfill them were excluded. The inclusion/exclusion criteria are presented in Table 4.

Table 4. Article inclusion and exclusion criteria.

Inclusion Criteria | Exclusion Criteria
Only articles on underwater image analysis and processing techniques. | Articles not on underwater image analysis and processing techniques are excluded.
Only articles concerned with processing underwater images. | Articles focused on types of imaging other than underwater imaging are excluded.
Only articles and research written in English were taken into account. | Articles not written in English were excluded.
2.4. Article Selection
Inclusion and exclusion criteria were created to choose which articles were suitable for the review phase. Articles meeting the inclusion criteria were considered related to the research, and those not meeting them were excluded; the list of criteria was discussed in the previous section. Choosing an article for this research was a three-phase process. The first phase extracted only the titles, abstracts, and keywords of the articles. The second phase examined the abstract, introduction, and conclusion to refine the choices from the first phase. In the final phase, the articles were read in full, and their quality was evaluated according to their research relevance.
3. Basic Concepts
Life is believed to have originated in the oceans, and at present, the underwater environment is the natural habitat for most living organisms. In the accessible areas of the underwater environment, various human activities are conducted. The underwater environment is explored using underwater images that have been analyzed by applying computer vision and image processing techniques. When analyzing underwater images in computer vision, a critical and fundamental difference between images taken in water and air must be considered.
First, light rays are attenuated and scattered as they travel through the water body. The former leads to a loss of photons, while the latter leads to a gain of photons [22]. Both effects are wavelength dependent and therefore affect image coloration by producing bluish/greenish tints in underwater images. Second, light rays are refracted at the water-air interface of the camera housing, generating geometric distortions in the image. Therefore, as introduced in the following subsections, it is essential to discuss the characteristics of the underwater image model [23] to improve underwater image analysis.
3.1. Scattering
Underwater light scattering occurs when suspended particles are present. As light reflected from an object travels toward the camera, it interacts with the particles floating in the imaging medium, causing a scattering effect. Two forms of scattering affect underwater images: forward and backward scattering [12,24,25]. When light reflected from an object is scattered on its way to the camera, it is termed forward scattering. In contrast, backscattering happens when light is reflected back to the camera before reaching the illuminated scene. Forward scattering results in blurred images, while backscattering causes effects such as low contrast and haze in the image [24].
3.2. Underwater Image Model
The Jaffe-McGlamery model is an imaging model for underwater image enhancement that depends on physical principles [13,26]. This model was developed as a simulator for designing underwater imaging systems and evaluating the use of computer vision algorithms. Therefore, the model was designed to incorporate several factors, such as light sources, color, and shadows. It is also based on realistically modeling the water medium and on linear superposition. The irradiance entering the camera is a linear combination of three components: the direct component E_d, the forward-scattered component E_f, and the backscattered component E_b. Hence, the total irradiance E_T is computed by:

E_T = E_d + E_f + E_b,   (1)

where E_d is the light reflected by the object that reaches the camera without being scattered, E_f is the forward-scattered light, and E_b is the backscattered light. This model is widely applied for image restoration but requires complex computations and a long execution time [27,28]. If the distance between the camera and the underwater scene is very small, forward scattering can be neglected, and only the background scattering and direct transmission are considered [29,30,31,32,33].
The simplified Image Formation Model (IFM) is a typical and effective model for restoring underwater images. It is given by Equation (2):

I_c(x) = J_c(x) t_c(x) + B_c (1 - t_c(x)),   (2)

where I is the image captured by the camera, J is the radiance of the underwater scene (the clear image to be restored), t is the residual energy ratio (transmission), x is a particular pixel in the image, c is one of the RGB channels, B is the background light, J_c(x) t_c(x) is the direct transmission, and B_c (1 - t_c(x)) is the background scattering.
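Once the transmission map t and the background light B have been estimated (for example, with one of the priors reviewed in Section 4), Equation (2) can be inverted to recover the scene radiance. The following is a minimal NumPy sketch of that inversion, assuming a float RGB image in [0, 1]; the clipping threshold t_min is an illustrative choice, not a value prescribed by the surveyed methods.

```python
import numpy as np

def restore_simplified_ifm(I, t, B, t_min=0.1):
    """Invert the simplified IFM: I = J*t + B*(1 - t)  =>  J = (I - B*(1 - t)) / t.

    I : observed image, float array of shape (H, W, 3) in [0, 1]
    t : per-channel transmission map, shape (H, W, 3)
    B : background (ambient) light, shape (3,)
    t_min : lower bound on t to avoid amplifying noise where transmission is tiny
    """
    t = np.clip(t, t_min, 1.0)            # guard against division by values near zero
    J = (I - B * (1.0 - t)) / t           # per-pixel, per-channel inversion of Equation (2)
    return np.clip(J, 0.0, 1.0)           # keep the restored radiance in the displayable range
```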
3.3. Underwater Light Degradation
The empirical Lambert-Beer law states that "the decline in light intensity depends on the properties of the medium through which the light travels" [13]. According to this law, the intensity of the light that forms underwater images decays exponentially as it travels through water. This intensity loss is called attenuation. Absorption makes the light lose energy, while scattering changes the direction of the electromagnetic energy. Together, absorption and scattering lead to light attenuation [34].
Light attenuation is a major concern when dealing with underwater imaging, as it causes the hazy effect that makes image processing applications difficult. It limits visibility to about 20 m in clear water and 5 m in murky water [33]. Light absorption in water varies with wavelength. As shown in Figure 3, the colors in the visible spectrum disappear as the water depth increases. Red light is absorbed first because of its longer wavelength. Due to its shorter wavelength, blue light penetrates the deepest, leaving a bluish hue in underwater images [6,13].
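To make the wavelength dependence concrete, the sketch below applies a Beer-Lambert style per-channel decay to a color. The attenuation coefficients are illustrative assumptions chosen only to reflect that red decays fastest and blue slowest; they are not measured water properties.

```python
import numpy as np

# Illustrative (not measured) attenuation coefficients per RGB channel, in 1/m.
BETA = np.array([0.60, 0.12, 0.08])   # assumed values for R, G, B

def attenuate(color_rgb, distance_m):
    """Beer-Lambert style decay: each channel falls off as exp(-beta * distance)."""
    color_rgb = np.asarray(color_rgb, dtype=float)
    return color_rgb * np.exp(-BETA * distance_m)

# A white surface viewed through 5 m of water loses most of its red component,
# which is why raw underwater images look bluish/greenish.
print(attenuate([1.0, 1.0, 1.0], 5.0))
```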
Figure 3. Underwater color reduction.
4. Classification of Underwater Image Processing Techniques
Due to the increasing demand for clear, good-quality images for understanding and analyzing the real-life underwater environment, many studies have discussed the analysis of underwater images. As mentioned, underwater image processing is classified into two main classes: image restoration and image enhancement. The main difference between these classes is that image restoration is based on the original IFM, whereas image enhancement is not.
In this section, the current studies related to underwater image processing are presented. These are classified into three main classes: image restoration, image enhancement, and a fusion of both. Each class is then divided into its corresponding sub-classes, as shown in
Figure 4.
Figure 4. Taxonomy of underwater image analysis techniques.
4.1. Underwater Image Restoration Techniques
The underwater image restoration approach depends on physical models. It builds the physical model by understanding the physical image degradation mechanism and the core physics of light propagation, deduces the basic physical model parameters using prior knowledge, and finally inverts the model to recover the restored image [35]. The simplified IFM given by Equation (2) is a typical and effective underwater image restoration model. Underwater image restoration is classified into two groups: hardware-based restoration and software-based restoration. Hardware-based restoration is subdivided into three groups: polarization characteristic-based, stereo imaging, and range-gated imaging. Software-based restoration is likewise subdivided into three groups: optical image-based, prior knowledge-based, and deep learning-based restoration techniques.
Table 5 presents a comparison of these underwater image restoration methods.
Table 5. Summary of underwater image restoration methods.

Reference | Method Based | Advantages | Disadvantages
Huang et al. (2016) | Polarization | Effective in cases of both scattered light and object radiance | High computational complexity
Hu et al. (2017) | Polarization | Enhanced visibility and low computational complexity | Did not effectively remove noise and was not applied to color images
Han et al. (2017) | Polarization | Suppressed backscattering and extracted edges | No experiments were conducted in real-life conditions
Hu et al. (2018) | Polarization | Enhanced the underwater images even in turbid media | High computational time
Hu et al. (2018) | Polarization | Intensity and DCP of backscattering were suppressed | Solving the spatial distribution was very difficult
Ferreira et al. (2019) | Polarization | Effective method for underwater image recovery | Complicated cost function and time-consuming
Yang et al. (2019) | Polarization | Enhanced the contrast in underwater images | Noise was not removed
Wang et al. (2022) | Polarization | Qualitatively and quantitatively improved the underwater images and removed noise | High time complexity
Jin et al. (2020) | Polarization | Higher signal-to-noise ratio and higher contrast | Noise was not removed
Fu et al. (2020) | Polarization | Enhanced visibility in underwater images | High computational complexity
Bruno et al. (2010) | Stereo Imaging | Good-quality underwater images | High time complexity
Roser et al. (2014) | Stereo Imaging | Improved stereo estimation | Did not work well in shallow water due to varying light conditions
Lin et al. (2019) | Stereo Imaging | Enhanced the stereo imaging system | High computational time
Luczynski et al. (2019) | Stereo Imaging | Effective method | Noise was not removed
Tan et al. (2006) | Range-Gated | Enhanced underwater image contrast | Noise was not removed
Li et al. (2009) | Range-Gated | Reduced speckle noise and preserved feature details | High computational complexity
Liu et al. (2018) | Range-Gated | Enhanced image visibility | Did not effectively remove noise
Wang et al. (2020) | Range-Gated | Enhanced image contrast and visibility | High computational complexity
Wang et al. (2021) | Range-Gated | Enhanced image contrast and worked well even when the estimated depth was small | Complicated cost function
Trucco and Olmos-Antillon (2006) | Optical | Optimized the computed parameter values automatically | Increased time and computational complexity
Hou et al. (2007) | Optical | Effective method based on the point spread function | Required estimating the illumination scattering parameters
Boffety et al. (2012) | Optical | An effective smoothing method was used | Low contrast in images
Wen et al. (2013) | Optical | Enhanced the perception of underwater images | Poor flexibility and adaptability
Ahn et al. (2018) | Optical | Effective and accurate method | Increased time complexity
Chao and Wang (2010) | DCP | Recovered the underwater images and removed scattering | Underwater images suffered from color distortion
Yang et al. (2011) | DCP | Fast method for underwater image restoration | Only suitable for underwater images with rich colors
Chiang and Chen (2011) | DCP | Restored underwater image color balance and removed haze | High computational complexity
Serikawa and Lu (2014) | DCP | Improved the contrast and visibility | High computational time
Peng et al. (2015) | DCP | Exploited the blurriness of underwater images | Noise was not removed
Lu et al. (2015) | UDCP | Effective color correction of underwater images | Decreased the contrast
Lu et al. (2017) | UDCP | Effective method for recovering underwater images | Increased noise
Galdran et al. (2015) | UDCP | Enhanced the artificial light and contrast | Colors of some restored images were unreal and incorrect
Carlevaris-Bianco et al. (2010) | MIP | Reduced the haze effects and provided color correction | Did not solve the problems of attenuation and scattering
Zhao et al. (2015) | MIP | Removed the haze effect and corrected colours | Illumination was not considered
Li et al. (2016) | MIP | Increased brightness and contrast of underwater images | Noise was not removed
Peng and Cosman (2017) | Other Prior | Worked well for various underwater images | Noise was not removed
Peng et al. (2018) | Other Prior | Restored degraded images and increased contrast | High computational complexity
Li et al. (2016) | Other Prior | Increased brightness and contrast | Could not remove noise effects
Wang et al. (2017) | Other Prior | Enhanced contrast and corrected colours | High time complexity
Song et al. (2018) | Other Prior | Improved quality of underwater images and lowest running time | Noise was not removed
Ding et al. (2017) | DL | Increased contrast | Highest running time
Cao et al. (2018) | DL | Restored images effectively | Blurring and low visibility of underwater images
Barbosa et al. (2018) | DL | Increased underwater image quality | Noise was not removed
Hou et al. (2018) | DL | Increased contrast and restored natural appearance | Noise and some blurring remained
4.1.1. Hardware-based Restoration
Monitoring and exploring the underwater environment requires many hardware devices, which are also used to improve underwater images. Hardware-based methods need hardware components for underwater image restoration, including lasers, sensors, polarizers, ROVs, polarization cameras (polaricams), and stereo imaging rigs. Polarization processing has been used to precisely reduce backscattering; it is performed by applying a polarized light source when taking pictures or by using polarization cameras. Laser-based methods have been used to eliminate backscattering by using a camera that closes the flash gate at a particular moment. Waterproof sensors have been applied for sensing marine snow, macroparticles, and swimming organisms to prevent reflections. Aqua tripods, which are placed on the seafloor, are used for capturing underwater images more effectively. Hardware-based underwater image restoration can be classified into three categories, namely, polarization characteristic-based, stereo imaging, and range-gated imaging.
4.1.1.1. Polarization characteristic-based
Light polarization is the property of light waves that describes their direction of oscillation; polarized light vibrates in only one direction [29,36,37,38]. In air, reflected light is partially polarized, while in water the light is scattered in most directions. The polarization is therefore much weaker, and scattering along multiple paths degrades it over a few meters. Because of its advantages in counteracting the scattering and absorption of light, polarization imaging has become a significant underwater image restoration technique. [39] presented a technique that depended on the polarization effects of objects. This method recovered the objects' radiance based on the estimated polarization of the target signal and enhanced the underwater image quality in cases where both backscatter and object radiance were present. It has been used in many applications, such as the imaging of artifact objects.
Hu et al. [40] addressed underwater vision problems such as signal attenuation and backscatter veiling. They developed an underwater image recovery method based on transmittance correction. It transformed the transmittance of low-depolarization objects from negative to positive values, optimizing the quality of underwater images with a simple polynomial fitting algorithm. This method was very effective for underwater images with either a high or a low degree of depolarization. Han et al. [41] enhanced low-resolution and low-contrast underwater images resulting from light attenuation and scattering in water. They relied on point spread functions (PSFs) estimated using a slant-edge method. Subsequently, the modulation transfer function (MTF) was used to evaluate resolution variation with spatial frequency. This method reduced the effect of underwater image scattering.
Hu et al. [42] proposed a method for polarimetric image restoration in turbid media using the circular polarization arising from illumination. The restored underwater images contain both linear and circular polarization information. This method produced more effective experimental results than previous methods, confirming that it enhanced the quality of recovered underwater images recorded in turbid water.
Hu et al. [43] developed a restoration method that estimated the polarization degree and the backscatter intensity at different positions in the underwater images. This method considered non-uniform optical fields in underwater image retrieval. The objects' radiance was recovered using estimates of the backscatter intensity and degree of polarization (DOP) at different image positions, and the method was highly effective in enhancing underwater images. Sanchez et al. [44] developed a method for restoring underwater images through the estimation of model parameters using a bio-inspired optimization metaheuristic whose cost function was a no-reference image quality metric (NR-IQA). This method could restore underwater images, but with a complicated cost function.
Yang et al. [45] developed an underwater image restoration method that relied on polarimetric images under active non-polarized illumination. With non-polarized illumination, the polarization effect of the illumination could be discounted, and it did not matter whether the degree of polarization was low or high. This method improved visibility and image contrast. Wang et al. [46] presented a new technology for restoring underwater images that depended on the periodic integration of polarization images. It replaced one or two pairs of orthogonal polarization images by integrating a series of polarization images in a polarization differential imaging (PDI) system. This method captured images at different positions during a complete cycle of image intensity. These images were then combined, and the result was calculated by integrating the intensity of the polarized light. Finally, the polarization degree at each pixel was computed, and a clear image was restored.
Jin et al. [47] developed a new method for removing polarization scattering that automatically performs polarimetric calculations of the target light at each pixel, which helps restore the underwater image. The polarization degree of the target light in this method was assumed constant. This method was very effective in retrieving underwater images and enhanced their visibility and contrast. Fu et al. [48] proposed a new underwater image restoration method consisting of scattering and absorption compensation. It depended on the wavelength and the scene depth of the underwater signals. For scattering, an automatic map was used to estimate the backlight without considering whether any object was present. For absorption, a new compensation strategy was introduced for color restoration.
4.1.1.2. Stereo imaging
The stereo imaging method simulates the human visual system. This method uses conventional cameras to take pictures of the same target from various views and perspectives and then computes the depth of field from these stereo images. With the emergence of the charge-coupled device (CCD), this method typically uses a binocular vision device to obtain the depth information. Higher resolutions and refresh rates, along with lower costs, have made stereo imaging increasingly popular in AUV systems.
Bruno et al. [49] proposed a stereo imaging method with structured light illumination under various conditions of water turbidity. This method performed 3D underwater reconstruction based on the combination of stereo-photogrammetry and structured light. The structured light patterns were projected using a video projector and acquired by the stereo-vision system. This method achieved effective results even in turbid conditions.
Roser et al. [50] developed a method for improving stereo perception in AUV systems. This method enhanced and restored underwater images to improve the stereo range resolution under natural, dynamic lighting in turbid conditions. It used a model of underwater light attenuation to estimate the visibility parameters. First, contrast enhancement was performed by employing visibility estimation and dense disparity computation. Second, the light attenuation model for ocean water was used to obtain color-enhanced images.
Lin et al. [51] proposed an image restoration method for AUVs that depended on an object-recognition and stereo-imaging system. The Hough transform was used with the optical flow method to estimate linear features and movement speeds in dynamic underwater imaging, and the Harris corner detector was used for target distance estimation. The AUV had a binocular camera with wide-angle lenses. This method was highly effective and produced accurate results.
Luczynski et al. [52] proposed a method for improving stereo imaging hardware for deep-sea operations. The system had the computational power for onboard stereo-vision processing and also for computer vision tasks such as inspection, object recognition, mapping, navigation, and intervention. They formalized a stereo component selection procedure that included optimizing and validating the pressure behavior of the cameras using the finite element method (FEM).
4.1.1.3. Range-gated imaging
A range-gated imaging system includes a fast camera that uses a CCD image sensor, a timing control unit (TCU), and a pulsed laser. It controls the camera gate so that directly reflected light is admitted while backscattering is prevented from reaching the sensor. The switching intervals of the camera gate depend on prior information, manual settings, various sensors, and a laser range finder. The gate is opened for a short time when the pulses return after hitting an object and is then immediately closed.
For an ROV, Tan et al. [53] presented a hardware optimization method for range-gated imaging in highly turbid conditions. They developed hardware for a range-gated imaging system together with optimization stages of tail gating and preprocessing. The tail-gating stage applied a camera delay to the tail of the reflected image temporal profile (RITP), followed by contrast-limited adaptive histogram equalization (CLAHE) for image enhancement.
Li et al. [54] used a range-gated system for restoring underwater image visibility and quality in turbid conditions. It utilized time discrimination to enhance the signal-to-backscattering-noise ratio by rejecting the backscattered light in the medium. The system consisted of a synchronization and control system, a pulsed laser system, and a camera with a high-speed gate. This method efficiently reduced speckle noise in the underwater images and preserved feature details.
Liu et al. [55] proposed a system for constructing the scattering model and developed an optimal pulse through coordinated gate control. This method used a 532 nm narrow-pulse laser with a self-built gain CCD system to form the range-gated imaging system. It was verified by simulation and by computing the relative ratio for the images acquired through the laser distance-gating system.
Wang et al. [56] developed a 3D dehazing range-gated system for removing scattering. This method greatly advanced underwater target navigation, detection, and marine scientific research because of its excellent suppression of backscatter. It relies on the characteristics of how light propagates in water: a reference image and the water attenuation coefficient are needed to compute the depth-noise maps (DNMs). Experiments on this method were conducted under various water conditions.
Wang et al. [57] proposed a method to decrease the number of input images and restore image clarity. This method dehazes underwater images using only a single gated underwater image. It relied on the prior that the target intensity is distributed according to the range intensity profile (RIP) in range-gated imaging. The depth-noise map and depth transmission were computed from the scene depth. Finally, high-quality images were restored and enhanced.
4.1.2. Software-based Restoration
Software-based restoration recovers underwater images without specialized hardware: it aims to build the imaging model and compute the parameters used in this model algorithmically. These methods use restoration software algorithms to recover underwater images. Software-based underwater image restoration can be classified into three groups, namely, optical imaging, prior knowledge, and deep learning methods. Compared to hardware-based methods, software-based methods have many advantages, such as lower computational time, easy modulation, better design, and reduced costs.
4.1.2.1. Optical
The Underwater Optical Imaging (UOI) model can produce natural and clear underwater images by establishing a rough optical imaging model and reversing the degradation process [58]. This model is defined by Equation (2). There are many underwater optical imaging applications, such as onboard underwater optical detectors and aerial, ocean-surface, and underwater optical cameras [59].
Trucco et al. [60] developed a self-tuning image restoration method that depends on the Jaffe-McGlamery UOI model [58,61]. The optimal filter parameters are automatically computed for each underwater image by optimizing a quality criterion based on a global contrast measure. The simplified physical model is suitable for diffuse light with weak backscatter under various imaging conditions. This technique rests on the basic assumptions that the underwater images are affected by forward scatter and that the illumination is homogeneous.
Hou et al. [62] presented a framework for underwater image restoration based on the UOI model. They assumed that the blurring in underwater images resulted from scattering by suspended particles and organisms. The restoration considered underwater image properties in different domains: the point spread function (PSF) in the spatial domain and the modulation transfer function (MTF) in the frequency domain. This method restored underwater images using deconvolution based on estimates of the light-scattering parameters.
Boffety et al. [63] developed a valuable simulation tool for color restoration based on underwater optical images. This method studies the influence of the spectral discretization of the model parameters on color rendering. They demonstrated that, even if only RGB data of the simulated scene are available, the reconstruction step improves the image color.
Wen et al. [64] presented an underwater optical model describing underwater image formation in terms of the physical process. An enhancement algorithm based on this model was then applied to the images. A new underwater dark channel prior was proposed to compute the scattering rate and the backlight in the UOI. The results showed that this method was efficient at restoring underwater images. As part of sampling missions, Ahn et al. [65] presented an image transmission system for a sampling-system AUV (SAUV) and demonstrated its effectiveness on the high seas. This method applied underwater optical imaging to autonomous vehicles and increased underwater detection accuracy.
4.1.2.2. Prior knowledge-based Image Restoration
Light absorption, suspended particles, and scattering are the main causes of underwater image degradation. Many restoration methods depend on prior knowledge applied for underwater image restoration, such as the dark channel prior (DCP) [66,67], the underwater dark channel prior (UDCP) [68,69], the maximum intensity prior (MIP) [70], the red channel prior (RCP) [71], and the underwater light attenuation prior (ULAP) [72]. The following subsections discuss the various types of these prior-based methods applied for underwater image restoration.
-
Dark Channel Prior (DCP) Method
[66] presented the DCP method, which is used for dehazing images. Haze is a natural phenomenon that reduces visibility, obscures scenes, and changes colors. It is a problem for photographers as it degrades image quality, and it threatens the reliability of many applications, such as object detection, outdoor surveillance, and aerial imagery. Therefore, removing haze from images is crucial in computer graphics and computer vision. The DCP-based dehazing technique is also used for enhancing underwater images. The prior relies on the observation that good-quality, clear images contain some pixels with very low intensities in at least one color channel.
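For reference, the following is a minimal NumPy/OpenCV sketch of the dark channel and a standard DCP-style transmission estimate; the patch size and the weighting factor omega are illustrative choices, and the UDCP variant discussed below is obtained simply by dropping the red channel.

```python
import cv2
import numpy as np

def dark_channel(image_rgb, patch_size=15, use_red=True):
    """Dark channel: per-pixel minimum over the chosen color channels, followed by
    a local minimum filter over a square patch (implemented as grayscale erosion).
    Setting use_red=False gives the UDCP variant, which uses only green and blue.
    """
    channels = image_rgb if use_red else image_rgb[:, :, 1:]
    per_pixel_min = channels.min(axis=2).astype(np.float32)
    kernel = np.ones((patch_size, patch_size), np.uint8)
    return cv2.erode(per_pixel_min, kernel)

def estimate_transmission(image_rgb, background_light, omega=0.95, patch_size=15):
    """DCP-style transmission estimate: t = 1 - omega * dark_channel(I / B)."""
    normalized = image_rgb / np.maximum(background_light, 1e-6)
    return 1.0 - omega * dark_channel(normalized, patch_size)
```

A transmission map obtained this way can be plugged into the inversion of Equation (2) sketched in Section 3.2.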
For restoring clear underwater images, Chao et al. [31] proposed an effective DCP-based method that reduces the effects of water scattering and attenuation in underwater images. The DCP was used to compute the turbid water depth by assuming that most patches in haze-free images contain a few pixels with very low intensities in at least one color channel. Yang et al. [73] developed a low-complexity and efficient DCP-based method for underwater image restoration. They calculated the depth maps of the images by employing a median filter instead of soft matting. Color correction was also used to improve the contrast in the underwater image. This method was highly effective at restoring images and reduced the execution time.
Chiang et al. [33] presented a method for enhancing underwater images by applying Wavelength Compensation and Image Defogging (WCID). They used a dehazing algorithm to compensate for the attenuation discrepancy along the propagation path and to remove the possible influence of an artificial light source. This method performed well in enhancing underwater images both objectively and subjectively. Serikawa et al. [74] proposed a new method that compensates for the attenuation discrepancy across the propagation path using a fast dehazing algorithm named joint trigonometric filtering (JTF). JTF refines the transmission map (TM) estimated by the DCP, affording many improvements, such as reduced scattering, better edge information, and higher image contrast. This algorithm is characterized by noise reduction, better exposure of dark regions, and improved contrast.
Peng et al. [75] developed a method for computing depth maps for underwater image restoration. It relied on the observation that an object farther from the camera appears more blurred. They combined image blurriness with the image formation model (IFM) to compute the distance between the scene points and the camera, and the result was more effective than other IFM-based enhancement methods. Because the DCP is affected by the selective attenuation of light in the underwater environment, various underwater enhancement methods based on the DCP were subsequently developed.
-
Underwater Dark Channel Prior (UDCP) Method
The red channel of an underwater image tends to dominate the dark channel because red light attenuates more rapidly than blue and green light as it travels through water. To avoid the red influence, [68] introduced the UDCP, which evaluates only the green and blue (GB) channels to determine the underwater dark channel. [76] proposed a new technique that compensates for the attenuation discrepancy along the propagation path in underwater images. They developed a color-lines-based ambient light estimator and adaptive filtering for underwater image enhancement in shallow oceans, and also presented a color correction algorithm for color restoration.
Lu et al. [12] proposed a new technique for super-resolution (SR) and descattering of underwater images. First, based on self-similarity, high-resolution (HR) versions of the scattered and descattered images are obtained through the SR algorithm. Then, a convex fusion rule is used for retrieving the final HR image. This algorithm is highly effective in restoring underwater images. Galdran et al. [71] developed a new, automatic method for restoring underwater images that depends on the RCP. The RCP computes a dark channel in which the inverted red channel is combined with the blue and green channels. Their experimental results indicate that this method effectively enhances degraded underwater images.
-
Maximum Intensity Prior (MIP) Method
Suspended particles that cause turbidity or fogging degrade underwater image quality. The difference in attenuation between the red (R) channel and the GB channels of underwater images is significant. Carlevaris et al. [70] developed an effective algorithm that removes light scattering, known as dehazing, in underwater images. They presented a prior for estimating scene depth termed the maximum intensity prior (MIP). The MIP is the difference between the R channel intensity and the maximum of the G and B channels; the largest difference between the color channels occurs at the closest points in the foreground.
Zhao et al. [77] developed a new method that derives the optical properties of the water. This method estimated the background light (BL) based on the DCP and MIP. First, it took the brightest 0.1% of the dark channel pixels and then chose the pixel with the maximum difference in the B-G or G-R channels. Li et al. [78] developed a new method for restoring underwater images that selects the background light using its maximally different pixels. This method is based on dehazing the blue-green channels and correcting the red channel. First, using a blending strategy as in Li et al. [79,80], a flat background region was selected by quad-tree subdivision. Then, the brightest 0.1% of the dark channel pixels in the candidate region were taken. Finally, the pixel with the greatest difference in the R-B channel was selected as the global backlight.
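The sketch below illustrates a MIP-style depth cue of the kind described above, using local maxima computed with a morphological dilation; the patch size and the normalization are illustrative choices, and the exact estimator in [70] differs in its details.

```python
import cv2
import numpy as np

def mip_prior(image_rgb, patch_size=15):
    """MIP-style prior: difference between the local maximum of the red channel and
    the local maximum of the stronger of the green/blue channels. Larger values tend
    to correspond to closer (foreground) scene points, where red is least attenuated.
    """
    kernel = np.ones((patch_size, patch_size), np.uint8)
    red_max = cv2.dilate(image_rgb[:, :, 0].astype(np.float32), kernel)
    gb_max = cv2.dilate(image_rgb[:, :, 1:].max(axis=2).astype(np.float32), kernel)
    return red_max - gb_max

def rough_depth_from_mip(image_rgb, patch_size=15):
    """Crude relative depth cue: normalize the prior to [0, 1] and invert it, so pixels
    where red is comparatively weak (far away) receive larger depth values."""
    d = mip_prior(image_rgb, patch_size)
    d = (d - d.min()) / (d.max() - d.min() + 1e-6)
    return 1.0 - d
```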
-
Other Prior-based Method
In addition to those listed above, some priors are not commonly applied but are still helpful for underwater image restoration. For example, Peng et al. [81] developed a new technique for computing underwater scene depth based on light absorption and image blurriness. This depth estimate was used in the IFM for image restoration, and the experimental results were more accurate and effective than those of comparable methods.
Peng et al. [82] developed a method for enhancing and restoring underwater images that counters the light absorption, scattering, low contrast, and color distortion caused by light traveling through a turbid medium. First, the ambient light was computed from the depth-dependent color change. Then, the scene transmission was computed from the differences between the observed intensity and the ambient light, and adaptive color correction was applied. Li et al. [79] developed a method for enhancing and restoring underwater images based on the minimum information loss principle (MILP). A dehazing algorithm was applied to recover the color, natural appearance, and visibility of underwater images, and an effective contrast enhancement algorithm was applied to improve their contrast and brightness. The method improved visual quality and accuracy and preserved valuable information.
Wang et al. [83] proposed the maximum attenuation identification (MAI) technique for deriving the depth map and backlight from degraded underwater images. Regional background estimation was applied simultaneously to ensure optimal performance. Experiments were conducted on three image types: a calibration plate, natural underwater scenes, and a colormap board. Song et al. [72] presented an accurate, effective, and fast scene-depth estimation model based on the ULAP. It assumed that the difference between the R intensity and the maximum of the G and B intensities at a pixel of the underwater image is strongly related to the scene depth at that point. Based on the estimated depth, this model was used to derive the BL and the TMs of the R, G, and B channels.
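As a toy illustration of a ULAP-style depth cue, the sketch below fits the idea of a linear model on the maximum of the G/B channels and the R channel. The coefficients are placeholders chosen only for illustration; the actual values in [72] were learned from training data.

```python
import numpy as np

# Placeholder coefficients for a ULAP-style linear depth model (not the trained values from [72]).
MU0, MU1, MU2 = 0.5, 0.5, -0.7

def ulap_style_depth(image_rgb):
    """Linear scene-depth estimate from the maximum of the G/B channels and the R channel,
    following the idea that a large G/B-versus-R gap indicates a greater distance."""
    gb_max = image_rgb[:, :, 1:].max(axis=2)
    red = image_rgb[:, :, 0]
    depth = MU0 + MU1 * gb_max + MU2 * red
    return np.clip(depth, 0.0, 1.0)
```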
4.1.2.3. Deep Learning
Restoring degraded and hazy underwater images is a challenge. Existing prior-based methods show inferior and limited performance in many situations because of their hand-designed features. Therefore, the shift toward deep learning algorithms is important. Due to the rapid development of deep learning in underwater image restoration, the field has moved from manual parameter selection with hand-crafted optimization models to automatic and effective trained models, which rely on example data to extract valuable feature vectors.
Ding et al. [84] developed a technique for handling underwater images degraded by light scattering and color casts. This method combined color correction with an image dehazing approach based on the atmospheric scattering model. First, the transmission map was derived from the color-corrected image. Then, a convolutional neural network (CNN) was applied to patches extracted from the color-corrected image to predict the depth map of the scene. This method was exceptionally effective and accurate and can be used in many applications, such as underwater object detection and recognition. Cao et al. [85] developed a method for restoring underwater images that relied on two neural networks for estimating scene depth and backlight. This method addressed problems such as color distortion and low contrast resulting from light scattering and absorption, and its effectiveness was confirmed by the experimental results.
Barbosa et al. [86] developed a CNN-based technique for underwater image enhancement and restoration. This method did not require any ground-truth data, as it used image quality metrics to guide underwater image restoration. The experimental results showed a notable improvement in the visual quality of the underwater images while preserving edges. Hou et al. [87] developed a new framework for performing residual learning in both the transmission and image domains. This method consisted of a data-driven residual model for transmission estimation and a knowledge-driven residual formulation based on illumination balance in the underwater environment. Qualitative and quantitative analyses both confirmed the method's effectiveness.
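To give a concrete sense of the kind of model these works train, the following is a minimal PyTorch sketch of a small fully convolutional network that regresses a transmission map from an RGB patch. The architecture, layer sizes, and loss choice are illustrative assumptions, not the networks proposed in [84,85,86,87].

```python
import torch
import torch.nn as nn

class TransmissionNet(nn.Module):
    """Toy fully convolutional regressor: RGB patch in, single-channel transmission
    map with values in (0, 1) out. The architecture is illustrative only."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(16, 16, kernel_size=3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(16, 1, kernel_size=3, padding=1), nn.Sigmoid(),
        )

    def forward(self, x):
        return self.net(x)

# Training would minimize, e.g., an L1 loss between predicted and reference transmission
# maps; at inference, the predicted map can be plugged into Equation (2) for restoration.
model = TransmissionNet()
dummy_patch = torch.rand(1, 3, 64, 64)   # a batch of one 64x64 RGB patch
t_map = model(dummy_patch)               # shape: (1, 1, 64, 64)
```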
4.2. Underwater Image Enhancement Techniques (IFM-free)
Studies related to enhancing underwater images often apply enhancement techniques directly to the images [88,89]. These methods improve the color and contrast of images through pixel-intensity redistribution and do not rely on the principles of underwater imaging. Further enhancement methods specifically target underwater image characteristics, such as low contrast and haze. These methods modify pixel values in the spatial or transform domain. Deep learning methods, especially CNNs, have also been applied to underwater image enhancement; they learn hidden features that can be exploited for quality improvement. Underwater image enhancement is categorized into four groups: spatial domain-based, frequency domain-based, color constancy-based, and deep learning-based image enhancement.
Table 6 presents a comparison of underwater image enhancement methods.
Table 6. Summary of underwater image enhancement methods.

Reference | Method Based | Advantages | Disadvantages
Ancuti et al. (2012) | Spatial Domain (SCM) | Increased contrast of underwater images | Did not work well with poor artificial light
Ancuti et al. (2016) | Spatial Domain (SCM) | High accuracy in underwater image enhancement | Noise was not removed
Liu et al. (2017) | Spatial Domain (SCM) | Enhanced underwater image contrast and visibility | Low accuracy
Torres-Méndez and Dudek (2008) | Spatial Domain (MCM) | Depended on learned constraints for underwater image enhancement | Some noise and blurring in underwater images
Iqbal et al. (2007) | Spatial Domain (MCM) | Solved the lighting problem | Low contrast in underwater images
Ghani and Isa (2017) | Spatial Domain (MCM) | Enhanced underwater images qualitatively and quantitatively | High time complexity
Hitam et al. (2013) | Spatial Domain (MCM) | Highest PSNR values and lowest MSE | Blurring in underwater images
Huang et al. (2018) | Spatial Domain (MCM) | Enhanced the visibility of underwater images | Not suitable for all types of underwater images
Petit et al. (2009) | Frequency Domain | Light attenuation was removed | Low contrast and visibility
Cheng et al. (2015) | Frequency Domain | Better contrast and higher visibility | Highest running time
Sun et al. (2011) | Frequency Domain | Removed the noise from underwater images | Poor quality in low-light conditions
Ghani et al. (2018) | Frequency Domain | Highest contrast and visibility | Highest running time
Priyadharsini et al. (2018) | Frequency Domain | Better PSNR and SSIM results | Some noise was not removed
Joshi et al. (2008) | Color Constancy | Balance between machine and human vision | Low color and contrast distortion
Fu et al. (2014) | Color Constancy | Enhanced contrast, color, edges, and details | High time complexity
Zhang et al. (2017) | Color Constancy | Enhanced edges and reduced noise | Could not enhance the contrast of underwater images
Wang et al. (2018) | Color Constancy | Increased image quality and balanced color | Noise and high time complexity
Zhang et al. (2019) | Color Constancy | Good denoising and edge preservation | Low contrast
Tang et al. (2013) | Color Constancy | Intensity channel was applied in multi-scale Retinex | Filtering techniques were inefficient
Zhang et al. (2021) | Color Constancy | Increased contrast | Noise was not removed
Dixit et al. (2016) | Contrast | Removed noise and preserved details | Low efficiency and highest time
Wang et al. (2016) | Contrast | Increased contrast and precision value | Did not remove noise
Bindhu and Maheswari (2017) | Contrast | Noise was reduced | High computational complexity
Guraksin et al. (2019) | Contrast | Gave more importance to visual information | Did not remove haze
Sankpal and Deshpande (2019) | Contrast | Increased image contrast | Entropy was still lower than in other studies
Azmi et al. (2019) | Contrast | Improved image details and reduced color cast | Low efficiency and highest time
Wang et al. (2017) | Deep Learning | Enhanced contrast and color correction | Low efficiency and highest time
Fabbri et al. (2018) | Deep Learning | Enhanced contrast | Noise and lighting problems not solved
Anwar et al. (2018) | Deep Learning | Enhanced contrast | Did not remove haze
Li et al. (2018) | Deep Learning | Corrected color cast | Low contrast
Li et al. (2019) | Deep Learning | Enhanced contrast | Effects of attenuation and backscatter were not addressed
Pritish et al. (2019) | Deep Learning | Enhanced contrast and visibility of underwater images | Noise was not removed
Li et al. (2020) | Deep Learning | Enhanced brightness and visibility | Low contrast and noise was not removed
Hu et al. (2021) | Deep Learning | Enhanced contrast of underwater images | Image clarity was far lower than that of the ground-truth image
Tang et al. (2023) | Deep Learning | Enhanced contrast | The network was weaker
4.2.1. Spatial Domain-based Image Enhancement
Spatial-domain processing operates on the intensity histogram, expanding the gray levels based on grayscale mapping theory [90]. Due to the nature of underwater images, their histograms show a more concentrated pixel-value distribution than those of natural images. Expanding the dynamic range of the underwater image histogram improves the visibility, detail, and contrast of the images. Spatial-domain methods manipulate the intensity histogram in various standard color models, e.g., red-green-blue (RGB), hue-saturation-intensity (HSI), hue-saturation-value (HSV), and CIE-Lab. The spatial-domain approach has significantly advanced the area of image enhancement [91,92]. It is divided into two subgroups, the single-color model (SCM) and the multi-color model (MCM), as introduced in the following paragraphs.
4.2.2. Frequency Domain-based Image Enhancement
The frequency domain technique enhances images by transforming them from the spatial domain and processing the transform coefficients, where spatial convolution corresponds to multiplication in the frequency domain [100]. There are two components in the frequency domain: the high-frequency component, which represents edge regions where pixel values change significantly, and the low-frequency component, which represents flat regions in the image [101]. Frequency domain methods improve underwater image quality by amplifying the high-frequency component and suppressing the low-frequency component [102]. The problem with degraded underwater images is that the difference between the low- and high-frequency components is minimal [103]. Therefore, many techniques, such as homomorphic filtering [104], transform domain methods [105], wavelet transform, and high-boost filtering, are used to improve underwater images.
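As a concrete illustration of how the high- and low-frequency components can be manipulated, the sketch below applies a Gaussian high-frequency-emphasis filter to a grayscale image in the Fourier domain. It is a generic example rather than the implementation of any surveyed method; the cutoff `d0` and gain `k` are assumed values.

```python
# Minimal sketch: frequency-domain high-frequency emphasis (high-boost) filtering.
import numpy as np

def high_boost_frequency(img: np.ndarray, d0: float = 30.0, k: float = 1.5) -> np.ndarray:
    """Amplify high frequencies (edges/detail) relative to low frequencies."""
    img = img.astype(np.float64)
    rows, cols = img.shape
    u = np.arange(rows) - rows / 2.0
    v = np.arange(cols) - cols / 2.0
    vv, uu = np.meshgrid(v, u)                                # frequency-plane coordinates
    h_low = np.exp(-(uu**2 + vv**2) / (2.0 * d0**2))          # Gaussian low-pass response
    h_emphasis = 1.0 + k * (1.0 - h_low)                      # boost the high-frequency band
    spectrum = np.fft.fftshift(np.fft.fft2(img))              # centered Fourier spectrum
    filtered = np.fft.ifft2(np.fft.ifftshift(spectrum * h_emphasis))
    return np.clip(np.real(filtered), 0, 255).astype(np.uint8)
```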
Petit et al. [106] presented an effective method based on quaternions to improve object contrast and color reproduction. This method requires preprocessing through color space contraction and light attenuation inversion. A low-pass filter was used to remove noise by suppressing the high-frequency components, and a high-pass filter was used to preserve details by reducing the low-frequency components. The results of this method were very accurate and effective. Cheng et al. [107] developed a method for underwater image enhancement that built on the Jaffe-McGlamery optical model and proposed an accurate and effective algorithm for underwater image recovery. This algorithm used a red dark-channel prior to compute the transmission and background light. They developed a simple low-pass filter for blurred and degraded underwater images by analyzing the physical properties of the point scattering function. The experimental results confirmed that this method was highly effective.
Feifei et al. [108] presented a method for underwater image enhancement based on wavelet decomposition and a high-pass filter. This highly effective and accurate method was developed to reduce noise in underwater images and addressed the shortcoming of wavelets when processing backscatter noise. Ghani et al. [109] presented a technique to increase the visibility of deep underwater images based on homomorphic filtering, recursive overlapping CLAHE, and dual-image wavelet fusion. Homomorphic filtering was used to correct the illumination of the whole image. The recursive overlapping CLAHE algorithm was used to stretch and separate overlapping blocks and adjacent overlapping blocks of the image channel. These two images were then fused using the wavelet transform.
Priyadharsini et al. [110] developed a method for underwater image enhancement to solve imperfections in these images, such as low contrast and visibility, which cause objects in underwater images to appear obscure. This method used the stationary wavelet transform (SWT) to divide the input image into four sub-bands: high-high (HH), high-low (HL), low-high (LH), and low-low (LL). The results showed that it was highly effective and increased contrast.
4.2.3. Color Constancy-based Image Enhancement
The human visual system exhibits color constancy, which ensures that colored objects are perceived consistently under various lighting conditions. Color constancy methods consist of white balancing and Retinex. White balancing is applied to ensure that the colors of objects under various lighting conditions are recorded accurately. Retinex is a computational model based on color constancy theory that imitates the way humans perceive the world consistently under various lighting conditions.
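To ground these two ideas, the following sketch shows a gray-world white balance and a single-scale Retinex. It is illustrative only (the Gaussian scale `sigma` is an assumed value) and does not reproduce any specific method surveyed below.

```python
# Minimal sketch: gray-world white balance and single-scale Retinex on an RGB image.
import numpy as np
from scipy.ndimage import gaussian_filter

def gray_world_white_balance(img: np.ndarray) -> np.ndarray:
    """Scale each RGB channel so its mean matches the global mean (gray-world assumption)."""
    img = img.astype(np.float64)
    channel_means = img.reshape(-1, 3).mean(axis=0)
    gains = channel_means.mean() / (channel_means + 1e-6)
    return np.clip(img * gains, 0, 255).astype(np.uint8)

def single_scale_retinex(img: np.ndarray, sigma: float = 80.0) -> np.ndarray:
    """Retinex: log(image) minus log of a Gaussian-blurred illumination estimate."""
    img = img.astype(np.float64) + 1.0                      # avoid log(0)
    illumination = gaussian_filter(img, sigma=(sigma, sigma, 0))
    retinex = np.log(img) - np.log(illumination + 1.0)
    retinex = (retinex - retinex.min()) / (retinex.max() - retinex.min() + 1e-6)
    return (retinex * 255).astype(np.uint8)                 # stretch back to [0, 255]
```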
Joshi et al. [
111] proposed a method to resolve the imprecise coloration and low contrast that result from degradation in underwater images. Retinex was used to achieve a balance between human and machine vision by applying color constancy. This method combines color rendering, dynamic range compression, and color constancy theory to produce highly effective and accurate results. Fu et al. [112] developed a technique for enhancing underwater images to address problems such as visual fuzz, insufficient illumination, and color distortion. This method was based on Retinex and was used to improve a single underwater image. First, color correction was applied to resolve color distortion. Then, Retinex was used to decompose the image into illumination and reflectance. Finally, the illumination and reflectance were enhanced to eliminate the fuzz and underexposure problems.
Zhang et al. [
113] developed a technique for underwater image enhancement to solve image problems, such as blurring, low contrast, and low visibility. This method depended on the Retinex framework that simulated the human visual system. Retinex is a portmanteau of "Retina" and "Cortex" and its function depends on a combination of trilateral and bilateral filters. This method effectively solved the degradation problem under various turbidity conditions. Yong et al. [
114] developed a new and effective method for enhancing underwater images by converting them from the RGB color space to HSV. Then, Retinex was used to divide the v channel into a detail layer and a lighting layer that relied on various methods for image enhancement. Finally, the improved V, H, and S channels were converted to an RGB color model to improve and enhance images.
Zhang et al. [
115] developed an underwater image enhancement technique to solve image degradation problems. They relied on a multiscale retinex with color restoration (MSRCR), which consisted of four main components: illumination estimation, guided operation filter, fog-free image reconstruction, and white balance operation. This highly effective method was used to improve image contrast and detail and produced excellent results. Tang et al. [
116] developed a more advanced technique for underwater image enhancement that relied on Retinex and was suitable for multi-scene images. First, the images were pre-corrected to edit the pixel distribution and decrease the dominant color. Then, a multiscale Retinex with an intensity channel was applied. Finally, they applied infinite impulse response and down-sampled using Gaussian filtering to increase the processing speed.
Zhang et al. [
117] developed a technique that resolved inferior image quality by correcting the low contrast and color cast prevalent in underwater imaging, combining color correction with an adaptive contrast enhancement technique. First, dedicated fractions were used to compensate the lower color channels; these fractions were computed from the ratio of the difference between the lower and upper channels to the lower color channel. Then, the adaptive contrast enhancement technique was used to generate underwater images with a stretched foreground and background. Finally, they applied an unsharp mask for sharpness.
4.2.4. Contrast-based Image Enhancement
Contrast contributes significantly to the subjective evaluation of underwater image quality. It refers to the brightness difference between dark and light areas in images. The luminance disparity reflected from two neighbouring surfaces creates contrast, and this deviation is the visual property that makes certain objects more distinguishable than others.
Dixit et al. [
118] presented a method for image enhancement based on the DCP combined with ACCLAHE and homomorphic filtering (HF). The DCP computed the blurred regions and removed them. ACCLAHE estimated the maximum bin height in a local histogram of the image and redistributed the pixels equally across the gray levels. The HF algorithm was then used to enhance the underwater images.
Wang et al. [
119] presented a method for underwater image enhancement, which contributes significantly to ocean research. This method was based on a virtual retina model and image quality assessment (IQA). The virtual retina is highly correlated with the human visual system and was applied to improve the contrast of underwater images and remove noise. The adaptive enhancement of the underwater images was then measured with a no-reference image quality assessment. This method achieved higher performance than other approaches.
Bindhu et al. [
120] proposed a method for solving underwater image problems such as low contrast, color loss, and haze. This method enhanced the quality of underwater images using interpolation-based enhancement to increase their contrast. It produced better entropy, lower mean square error (MSE), and higher peak signal-to-noise ratio (PSNR) values.
Guraksin et al. [
121] presented a method for underwater images based on the wavelet transform and the differential evolution algorithm. First, contrast adjustment was applied to the underwater images. Then, homomorphic filtering was applied to normalize the image brightness. The images were divided into R, G, and B components, and the wavelet transform with Haar wavelet decomposition was applied to each channel. The method's performance was evaluated using the PSNR, entropy, and MSE.
Sankpal et al. [
122] proposed a method for addressing the light attenuation in water that degrades underwater images. The method improved underwater images by correcting the backward scattering effect using Rayleigh stretching on every color channel, with the scale parameter of each channel estimated by maximum likelihood. Correcting the signal in this way corrected the underwater images.
Azmi et al. [
123] proposed a method for underwater image color enhancement that consists of four steps. First, the color cast is neutralized: the color channels are improved using gain factors computed from the difference between the inferior and superior color channels. Second, dual-intensity fusion is performed based on the mean and median averages. Third, swarm intelligence based on mean equalization is used to enhance the images. Finally, the unsharp masking technique is applied for further enhancement.
4.2.5. Deep Learning-based Image Enhancement
Deep learning methods produce superior feature extraction results more rapidly because of the deep network structure. These methods are widely used for defogging images [
124], target detection [
125], and image segmentation [
126]. For instance, Wang, Zhang, Cao, and Wang (2017c) presented an effective and novel technique for underwater image enhancement that depended on a CNN. This technique, named UIE-Net, enhanced the contrast and brightness of underwater images degraded by dispersion and absorption. The UIE-Net framework’s tasks included haze removal and color correction.
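For intuition, the sketch below shows a deliberately small, hypothetical CNN of the kind used by such methods: it predicts a residual correction for a degraded RGB patch and is trained against a clean reference. The layer widths and residual design are illustrative assumptions, not the published UIE-Net or UWCNN architectures.

```python
# Hypothetical minimal CNN sketch for end-to-end underwater image enhancement (PyTorch).
import torch
import torch.nn as nn

class TinyEnhanceNet(nn.Module):
    def __init__(self, channels: int = 32):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(3, channels, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, 3, kernel_size=3, padding=1),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Predict a residual correction and add it to the degraded input.
        return torch.clamp(x + self.body(x), 0.0, 1.0)

# Typical training step: minimize an L1 loss between the enhanced output and a clean
# reference patch (e.g., pairs from a dataset such as UIEB or EUVP).
model = TinyEnhanceNet()
degraded = torch.rand(1, 3, 64, 64)      # stand-in for a normalized RGB patch
reference = torch.rand(1, 3, 64, 64)
loss = nn.functional.l1_loss(model(degraded), reference)
loss.backward()
```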
The authors of [127] presented an underwater image enhancement technique to solve underwater image problems caused by suspended particles, light absorption, and refraction. This highly accurate method improved image quality using a generative adversarial network (GAN) to increase the reliability and safety of visual perception. The authors of [128] developed a CNN-based method, UWCNN, to improve underwater images. UWCNN is an effective and accurate model that automatically reconstructs clear, high-contrast underwater images and was trained efficiently on a synthetic underwater image database.
To solve imaging problems such as scattering and attenuation through water, Li et al. [
129] proposed a correction method that depends on a supervised color transfer model. This model used a multi-term loss function that included adversarial loss, cycle-consistency loss, and a structural similarity loss, and its results were very effective and accurate. Li et al. [130] presented a comprehensive study and analysis of the enhancement of underwater images degraded by light absorption and scattering. As part of this work, they compiled the underwater image enhancement benchmark (UIEB), a real-world dataset containing 950 images, which was used to train CNNs. The study was analyzed quantitatively and qualitatively.
Uplavikar et al. [
131] developed a technique for underwater image enhancement to resolve the light scattering and attenuation that reduce image detail and contrast. This method addressed the water-type diversity problem that hinders underwater image enhancement by learning content features of underwater images while disentangling the nuisance of water type. Li et al. [132] developed an effective method to improve underwater images based on a CNN that incorporated an underwater scene prior, combining the underwater image physical model with the optical properties of underwater scenes. This method was used to solve imaging problems such as light absorption and scattering that degrade the contrast and visibility of images, and it directly reconstructed clear images with high contrast.
Hu et al. [
133] developed a method for enhancing underwater images degraded by light scattering and absorption. A GAN that efficiently performs high-quality underwater image style conversion was applied to underwater image enhancement. Although widely used, GAN results are affected by the quality of the underwater images. This research added the natural image quality evaluation (NIQE) index to the GAN algorithm to better compare underwater images. Tang et al. [134] proposed a generative network based on an attention U-Net with an attention gate mechanism. This gate filtered invalid features and preserved texture, contour, and style information. The paper used three different loss functions to evaluate image quality with respect to color, global content, and structural information.
4.3. Fusion of Restoration and Enhancement
Recently, many studies have tended to restore and enhance underwater images jointly rather than working on just one of the two. The fusion approach takes advantage of both models to increase brightness and contrast, clarify details, improve visibility, and remove noise using various filters. For example, Gao et al. [135] developed a method for restoring and enhancing underwater images. First, drawing on the dark channel prior from the image dehazing field, it estimated the bright channel image, the transmittance image, and the atmospheric light, and restoration was performed. Second, the restored images were enhanced very effectively through histogram equalization, with excellent results.
Zhou et al. [
136] proposed a technique for underwater image restoration and enhancement to solve image problems such as lack of detail and color deviation, improving the visual effect and quality of underwater images. First, the method applies color restoration by adjusting the pixel values. Then, for color enhancement, a histogram operation is applied to the H channel of the underwater images. Finally, an edge-preservation method is used for image enhancement. This method is very effective and accurate. Luo et al. [137] presented a technique for restoring and enhancing underwater images in which three techniques were applied: contrast optimization, color balancing, and histogram stretching. For color balancing, the scalar values of the R, G, and B channels were adjusted to match the distributions of the three channels. Then, the optimized contrast algorithm was applied. The histogram stretching technique relies on the red channel to improve contrast and brightness. This method enhances the underwater image quality and increases the contrast.
Dewangan et al. [
138] developed a method for restoring and enhancing underwater images that improves image clarity using HSV-based filters and requires no segmentation. The applied restoration and enhancement were accurate enough for object detection. The method also removes haze from underwater images, which helps recover depth information with vision techniques. Its enhancement steps do not eliminate unwanted noise, but they do improve an image's illumination, contrast, and visual quality.
Sequeira et al. [
139] presented a single-image processing pipeline that combines underwater image restoration and enhancement. The restoration step relies on an effective red channel algorithm applied to the blue channel, as the red color channel has almost no intensity underwater. The integrated color model is then applied for underwater image enhancement. The results of these algorithms showed high contrast and were more realistic.
Daway et al. [
140] developed a method for the restoration and enhancement of underwater images in which the images were improved by applying color correction. The color restoration depended on the integrated color model with Rayleigh distribution (CRBICMRD), which was used for color restoration in the RGB model, with YCbCr used for color transformation. This method applied the multiscale Retinex technique with unsupervised color correction, color restoration, and Rayleigh stretching. It was highly effective and improved the image quality.
Zhou et al. [
21] developed a technology for resolving the low contrast and color distortion problems. It depended on the Jaffe-McGlamery model. The maximum bright proportions were applied for color correction of the underwater images. After that, a histogram was applied for contrast enhancement. Finally, two-level wavelet decomposition of the color-corrected and contrast-stretched underwater images was performed.
5. Underwater Image Analysis Techniques
This section reviews the most prominent image analysis techniques, including histogram equalization, adaptive histogram equalization, CLAHE, histogram sliding, brightness preserving bi-histogram equalization, generalized unsharp masking, contrast stretching, noise filtering, discrete wavelet transform, discrete cosine transform, and dark channel prior.
5.1. Histogram Equalization (HE)
The histogram equalization (HE) technique is applied to improve underwater images and increase their contrast [141]. It spreads the gray-level intensities across the entire available range and enhances the contrast of underwater images through their histogram. This technique is highly effective for low-contrast images; if an image already has high contrast, it may aggravate the condition. The equalized value is calculated by Equation 3:

$$h(i) = \operatorname{round}\!\left(\frac{\operatorname{cdf}(i) - \operatorname{cdf}_{\min}}{MN - \operatorname{cdf}_{\min}}\,(L - 1)\right) \tag{3}$$

where $i$ is the pixel value, $MN$ is the number of image pixels, $L$ is the number of gray levels, and $\operatorname{cdf}_{\min}$ is the minimum non-zero value of the cumulative distribution function.
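A minimal NumPy sketch of Equation 3 for an 8-bit grayscale image is shown below; it is illustrative rather than an optimized implementation.

```python
# Minimal sketch: histogram equalization (Equation 3) for a uint8 grayscale image.
import numpy as np

def histogram_equalize(img: np.ndarray, levels: int = 256) -> np.ndarray:
    hist = np.bincount(img.ravel(), minlength=levels)
    cdf = hist.cumsum()
    cdf_min = cdf[cdf > 0].min()                           # minimum non-zero CDF value
    # Map every gray level through the equalization transform.
    lut = np.round((cdf - cdf_min) / (img.size - cdf_min) * (levels - 1))
    lut = np.clip(lut, 0, levels - 1).astype(np.uint8)
    return lut[img]
```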
5.2. Adaptive Histogram Equalization (AHE)
This modified version of the HE technique [
142] applies multiple histograms to different sections of the same underwater image to improve the contrast. In HE, the same transform function is applied to every pixel in the underwater image; consequently, that technique can be inadequate for image enhancement. AHE instead uses a different transform function for each pixel in the image to improve contrast. AHE solves the problems of HE, but its computational cost is high.
5.3. Contrast Limited Adaptive Histogram Equalization (CLAHE)
CLAHE is a modified version of the AHE technique [
143]. AHE causes excessive noise amplification in underwater images but CLAHE limits this noise by decomposing the underwater image into several sub-blocks and performing HE on each part of the entire image. The disadvantages of CLAHE are that it generates ring and noise artifacts in the flat regions in images [
144].
The remapped pixel value is given by

$$j = (L - 1)\,P(i)$$

where $j$ is the new pixel value, $L$ is the number of gray levels, and $P(i)$ is the cumulative probability distribution of the clipped histogram [145].
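In practice, CLAHE is usually applied through a library call; the sketch below uses OpenCV on the luminance channel of a color image so that chromaticity is not distorted. The clip limit and tile size are assumed, commonly used values.

```python
# Minimal sketch: CLAHE on the L channel of an underwater image using OpenCV.
import cv2

def clahe_enhance(bgr_image, clip_limit: float = 2.0, tile_grid: int = 8):
    lab = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2LAB)
    l_channel, a_channel, b_channel = cv2.split(lab)
    clahe = cv2.createCLAHE(clipLimit=clip_limit, tileGridSize=(tile_grid, tile_grid))
    l_equalized = clahe.apply(l_channel)                    # equalize luminance only
    merged = cv2.merge((l_equalized, a_channel, b_channel))
    return cv2.cvtColor(merged, cv2.COLOR_LAB2BGR)

# Example usage: enhanced = clahe_enhance(cv2.imread("underwater.png"))
```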
5.4. Histogram Sliding
Histogram sliding is a technique for graphically illustrating pixel intensity values [
146]. It is applied to manipulate the brightness of underwater images: it darkens or brightens an image while maintaining the relationships between gray-level values. When sliding, the entire histogram is shifted either to the right or to the left, which makes underwater images clearer. It is calculated by adding or subtracting a fixed offset from the gray-level values:

$$g(x,y) = f(x,y) + \mathrm{offset}$$

where the offset is the amount of histogram sliding. If the offset is positive, the image becomes brighter; if it is negative, the image is darkened.
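A tiny sketch of this operation is shown below; the offset of +40 is an arbitrary example value.

```python
# Minimal sketch: histogram sliding by a constant offset with clipping to [0, 255].
import numpy as np

def histogram_slide(img: np.ndarray, offset: int = 40) -> np.ndarray:
    shifted = img.astype(np.int16) + offset        # avoid uint8 wrap-around
    return np.clip(shifted, 0, 255).astype(np.uint8)
```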
5.5. Brightness Preserving Bi-Histogram Equalization (BBHE)
BBHE splits the degraded underwater image into two distinct sub-images based on the mean of the input image [147]. Histogram equalization is then computed for each sub-image to improve it. The first sub-image, containing intensities lower than the mean, is equalized over the range between 0 and the mean; the second sub-image, containing intensities higher than the mean, is equalized over the range between the mean and 255. Although this method increases the image contrast while preserving brightness, it requires complicated and specialized hardware.
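The following sketch illustrates the idea for an 8-bit grayscale image, assuming the two sub-images are equalized within the ranges [0, mean] and [mean + 1, 255]; it is a simplified rendering of BBHE, not a reference implementation.

```python
# Simplified sketch of brightness-preserving bi-histogram equalization (uint8 input).
import numpy as np

def bbhe(img: np.ndarray) -> np.ndarray:
    mean = int(img.mean())
    out = np.empty_like(img)
    for low, high, mask in (
        (0, mean, img <= mean),          # lower sub-image, equalized over [0, mean]
        (mean + 1, 255, img > mean),     # upper sub-image, equalized over [mean+1, 255]
    ):
        values = img[mask].astype(np.int64)
        if values.size == 0:
            continue
        hist = np.bincount(values - low, minlength=high - low + 1)
        cdf = hist.cumsum() / values.size
        out[mask] = np.round(low + cdf[values - low] * (high - low)).astype(np.uint8)
    return out
```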
5.6. Generalized Unsharp Masking (GUM)
GUM is applied to enhance the sharpness and underwater image contrast [
17]. It enhances underwater images by processing the residual and the model component. It reduces the halo effect by applying edge-preserving filter techniques. It also solves the out-of-range problem using tangent and logarithmic ratio methods. Although this method solves the halo effect and range problems, edge preservation is reduced.
5.7. Contrast Stretching
Contrast stretching is a simple and effective technique that improves image contrast by stretching the range of intensity values [148]. It adjusts each pixel value so that structures in both the lighter and the darker regions of the underwater image become visible. Image contrast is the difference between the minimum and maximum pixel intensities. A disadvantage is that, in very low-contrast images, specific details remain difficult to recover.
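A short percentile-based sketch is shown below; the 1st/99th percentile limits are assumed values that make the stretch robust to outliers.

```python
# Minimal sketch: percentile-based contrast stretching for an 8-bit image.
import numpy as np

def contrast_stretch(img: np.ndarray, low_pct: float = 1.0, high_pct: float = 99.0) -> np.ndarray:
    lo, hi = np.percentile(img, [low_pct, high_pct])
    stretched = (img.astype(np.float64) - lo) / max(hi - lo, 1e-6) * 255.0
    return np.clip(stretched, 0, 255).astype(np.uint8)
```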
5.8. Noise Filtering
Noise filtering is a set of filters and processes for removing noise in underwater images. It is applied in many image processing applications. Many filters are applied to remove noise from underwater images. Filters are chosen according to the noise type and filter behavior. For example, to remove Gaussian noise, the Gaussian/Bilateral filter is applied, and the median filter is applied to remove salt and pepper noise.
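As a small illustration of matching the filter to the noise type (see also Figure 5), the sketch below wraps the corresponding OpenCV filters; kernel sizes and sigmas are assumed example values.

```python
# Minimal sketch: selecting a denoising filter according to the noise type.
import cv2

def denoise(img, noise_type: str):
    if noise_type == "salt_pepper":
        return cv2.medianBlur(img, 5)                       # 5x5 median filter
    if noise_type == "gaussian":
        return cv2.GaussianBlur(img, (5, 5), sigmaX=1.5)    # 5x5 Gaussian kernel
    if noise_type == "gaussian_edge_preserving":
        return cv2.bilateralFilter(img, d=9, sigmaColor=75, sigmaSpace=75)
    raise ValueError(f"unknown noise type: {noise_type}")
```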
Figure 5 shows a classification of the various types of image noise filters.
Figure 5. Classification of image noise filters.
5.9. Discrete Wavelet Transform (DWT)
The DWT divides an image into several sets, where each set is a series of coefficients describing the evolution of the image in the frequency domain [149]. A function $f(x)$ is decomposed into a weighted sum of the basis functions $\varphi_{j_0,k}(x)$ and $\psi_{j,k}(x)$ by applying the DWT:

$$W_\varphi(j_0,k)=\frac{1}{\sqrt{M}}\sum_{x} f(x)\,\varphi_{j_0,k}(x), \qquad W_\psi(j,k)=\frac{1}{\sqrt{M}}\sum_{x} f(x)\,\psi_{j,k}(x), \quad j \ge j_0$$

where $j_0$ is the starting scale, $M$ is the signal length, and $W_\varphi(j_0,k)$ are the approximation coefficients. Images are two-dimensional, whereas this DWT is one-dimensional; therefore, the tensor product of the wavelet and scaling functions is used. For an image $f(x,y)$ of size $M \times N$, the decomposition is calculated by Equation 8:

$$W_\varphi(j_0,m,n)=\frac{1}{\sqrt{MN}}\sum_{x=0}^{M-1}\sum_{y=0}^{N-1} f(x,y)\,\varphi_{j_0,m,n}(x,y), \qquad W_\psi^{i}(j,m,n)=\frac{1}{\sqrt{MN}}\sum_{x=0}^{M-1}\sum_{y=0}^{N-1} f(x,y)\,\psi^{i}_{j,m,n}(x,y) \tag{8}$$

where $\psi^{i}$, $i \in \{H, V, D\}$, decomposes the signal through the high-pass and low-pass filters applied along the $X$ and $Y$ signal directions, producing the four sub-bands of the underwater image: LL, LH, HL, and HH. To recompose the underwater image, the inverse DWT is applied to the approximation and detail coefficients.
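In practice this decomposition is a single library call; the sketch below uses PyWavelets to obtain the LL, LH, HL, and HH sub-bands and to reconstruct the image after a simple (assumed) detail boost.

```python
# Minimal sketch: single-level 2-D DWT and reconstruction with PyWavelets.
import numpy as np
import pywt

image = np.random.rand(256, 256)                   # stand-in for a grayscale image
approx, (detail_h, detail_v, detail_d) = pywt.dwt2(image, wavelet="haar")
# `approx` is the LL sub-band; the three detail arrays hold the LH/HL/HH sub-bands.
boosted = (1.5 * detail_h, 1.5 * detail_v, 1.5 * detail_d)   # simple detail amplification
reconstructed = pywt.idwt2((approx, boosted), wavelet="haar")
```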
5.10. Discrete Cosine Transform (DCT)
The DCT is one of the simplest transform techniques applied in image compression and image processing [150]. It characterizes the underwater image as a sum of sinusoids of various frequencies and magnitudes. The purpose of the DCT is to concentrate most of the signal's information in the low-frequency components owing to its strong energy compaction. It exploits interpixel redundancies to achieve good decorrelation for most images. The DCT decomposes the underwater image into sub-bands, each of which is critical. It is calculated using Equation 9:

$$F(u,v)=\alpha(u)\,\alpha(v)\sum_{x=0}^{M-1}\sum_{y=0}^{N-1} f(x,y)\,\cos\!\left[\frac{(2x+1)u\pi}{2M}\right]\cos\!\left[\frac{(2y+1)v\pi}{2N}\right] \tag{9}$$

where $0 \le u \le M-1$ and $0 \le v \le N-1$, and $f(x,y)$ is the pixel intensity in row $x$ and column $y$. The $\alpha(u)$ and $\alpha(v)$ functions are calculated by Equation 10:

$$\alpha(u)=\begin{cases}\sqrt{1/M}, & u=0\\ \sqrt{2/M}, & u\neq 0\end{cases} \qquad \alpha(v)=\begin{cases}\sqrt{1/N}, & v=0\\ \sqrt{2/N}, & v\neq 0\end{cases} \tag{10}$$
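The sketch below illustrates Equations 9 and 10 through SciPy's orthonormal 2-D DCT, keeping only an assumed 16 × 16 low-frequency sub-band of a block.

```python
# Minimal sketch: 2-D DCT of an image block and low-frequency reconstruction.
import numpy as np
from scipy.fft import dctn, idctn

block = np.random.rand(64, 64)                       # stand-in for a grayscale block
coeffs = dctn(block, norm="ortho")                   # forward 2-D DCT (Equation 9)
mask = np.zeros_like(coeffs)
mask[:16, :16] = 1.0                                 # retain only low-frequency coefficients
approximation = idctn(coeffs * mask, norm="ortho")   # inverse 2-D DCT
```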
5.11. Role of Evolutionary Algorithms in Contrast Enhancement
EAs are robust and stochastic metaheuristics from evolutionary computing and are applied to solve optimization problems in image processing applications. These algorithms, such as particle swarm optimization (PSO) [
151], artificial bee colony [
152], and genetic algorithms [
153] are applied to enhance the contrast of underwater images. These algorithms aim to maximize a fitness criterion for underwater image enhancement and are used to compute the optimal gamma correction parameters. Gamma correction is a simple and important technique that produces natural-looking images while retaining brightness; however, selecting the optimal gamma value is a difficult task.
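The sketch below is a deliberately simplified stand-in for such an optimizer: a small mutation-based search over gamma that uses image entropy as the fitness criterion. It is not a full PSO, artificial bee colony, or genetic algorithm, and the population size, mutation scale, and bounds are assumed values.

```python
# Simplified evolutionary-style search for a gamma value that maximizes entropy.
import numpy as np

def entropy(img: np.ndarray) -> float:
    hist = np.bincount(img.ravel(), minlength=256).astype(np.float64)
    p = hist / hist.sum()
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

def evolve_gamma(img: np.ndarray, generations: int = 30, pop: int = 8) -> float:
    rng = np.random.default_rng(0)
    best_gamma, best_fit = 1.0, entropy(img)
    for _ in range(generations):
        # Mutate the current best gamma to form a small candidate population.
        candidates = np.clip(best_gamma + rng.normal(0, 0.1, pop), 0.2, 3.0)
        for g in candidates:
            corrected = (255.0 * (img / 255.0) ** g).astype(np.uint8)
            fit = entropy(corrected)
            if fit > best_fit:
                best_gamma, best_fit = float(g), fit
    return best_gamma
```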
5.12. Dark Channel Prior (DCP)
DCP relies on the key observation that, in most local patches of haze-free outdoor images, at least one color channel contains some pixels with very low intensities. For example, among the RGB channels, red, green, or blue may have an intensity value close to 0, meaning that the minimum intensity in such a region is very small [66]. This method is very easy to implement, requires little processing time, and reduces the halo effect.
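A compact sketch of the dark channel computation is shown below; the 15 × 15 patch size is a commonly assumed choice rather than a fixed requirement.

```python
# Minimal sketch: dark channel of an RGB image (per-pixel channel minimum,
# followed by a local minimum filter over a patch).
import numpy as np
from scipy.ndimage import minimum_filter

def dark_channel(img_rgb: np.ndarray, patch: int = 15) -> np.ndarray:
    per_pixel_min = img_rgb.min(axis=2)                 # minimum over R, G, B
    return minimum_filter(per_pixel_min, size=patch)    # local patch minimum
```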
6. Limitations
Exploring the underwater environment by capturing images is critical and is conducted by employing skilled divers, optical cameras, specialized hardware, or underwater ROVs. With the exception of optical cameras, all other systems have many disadvantages, such as limited field of view, depth limits, and complex processes. Due to the unexpected nonlinear hydrodynamic effects and the lack of an accurate model, the ROV control system is complicated.
Underwater exploration is expensive because it requires highly skilled divers, and a single investigation may also require standby divers and supervisors. Moreover, only a limited amount of time can be spent underwater, especially when a diver conducts inspections, so the time required for an investigation increases. Underwater image enhancement techniques can considerably mitigate this drawback. Based on the aforementioned challenges, underwater image analysis limitations can be classified into environment-based and image-based limitations, as discussed in the following subsections.
6.1. Underwater Environment-based Limitations
These limitations are related to factors encountered in the underwater environment, such as equipment, refraction, non-uniform illumination, motion, scattering, and absorption. These limitations negatively affect underwater images and videos and make them hazy and degraded.
6.1.1. Equipment
Underwater images are captured using two camera equipment options. The first involves using an existing land camera with a housing unit. This housing unit must be enclosed with diving silicon to maintain a waterproof seal. This solution is the best option for photographers who have a high-quality land camera and are unable to buy an expensive underwater camera. The second option is using a specialized underwater camera. These specialized cameras differ in quality and price.
6.1.2. Refraction
Refraction describes how light bends as it passes from one medium to another [
154]. Light reflected from an object travels through the water and then passes through the underwater camera's glass port and the air behind it, so the object appears approximately 25% larger and closer than it actually is. Refraction makes it difficult to focus sharply on the subject, leading to blurred photos. Refraction can also be used to reconstruct the underwater scene [
155,
156].
6.1.3. Non-uniform Illumination
During underwater light propagation, light levels weaken as the depth increases. Natural light is not always available and varies depending on the time of the day. When the sun is directly overhead, the surface of the water reflects the least amount of light. The weather also influences the light availability. If the weather is stormy, turbulent water will significantly affect light conditions. There are many algorithms for solving non-uniform lighting [
157,
158,
159].
6.1.4. Motion
Motion occurs when the relative position between the imaging device and the target object changes owing to movement. Such movements between the imaging device and target objects, in addition to the movement of underwater currents, cause motion blur [
160,
161,
162], which affects underwater images. Motion blur produces distortions in the underwater image and degrades the luminance spectrum. Therefore, capturing images of stationary subjects such as corals or rock formations is easier than moving underwater objects. Many algorithms deblur underwater images for better clarity and quality [
163].
6.1.5. Scattering
Scattering denotes the angular light distribution deflected by suspended particles in one direction at a specified wavelength [
12,
164,
165]. It occurs when light falling on objects is deflected and reflected many times by particles in the water before reaching the camera. Scattering degrades contrast and visibility, blurs detail, and causes fogging in underwater images.
6.1.6. Absorption
Absorption occurs when one substance is absorbed or inextricably blended into another [
83,
166] and, in this context, covers both light absorption and color absorption. Because water is a good natural light filter, it absorbs a fraction of the light that passes through it; a general rule is that about half of the light is lost for every 10 meters of depth. Additionally, the available light is not constant and changes with the time of day and with other factors such as the weather and surface conditions. With color absorption, images take on a green or blue hue because the longer wavelengths of light, red, orange, and yellow, are absorbed first by the water.
6.2. Underwater Image-based Limitations
The limitations of underwater imaging have a degrading and hazy effect on image quality. Therefore, it is critical to apply restoration and enhancement algorithms to underwater images and videos. These limitations include low contrast, noise, blurring, poor visibility, and haze. Addressing these image-based limitations is easier than changing the environmental limitations.
6.2.1. Low Contrast
Contrast is the computed difference in color or luminance that allows objects to be distinguishable from other objects through the same field of view [
167]. Poor illumination, scattering, and absorption reduce contrast. This low contrast degrades underwater images and reduces visibility and detail; thus, contrast enhancement is a critical process, and deciding whether to apply it globally or locally is crucial. Local contrast enhancement divides the image into small regions and enhances each independently, whereas global contrast enhancement increases the contrast of the entire image.
6.2.2. Noise
Image noise denotes random variations or changes in color or brightness [
13,
168]. This noise affects the resolution of the underwater images. There are several types of noise, such as:
Salt and pepper noise: This appears as isolated pixels taking the smallest or largest grayscale values within a region.
Gaussian noise: This is the most common noise type and is a statistical noise with a probability density function equal to normal distribution.
Fixed mode noise: It is the underwater clutter that degrades the image.
Such noise degrades underwater images, therefore, noise removal methods are critical for image enhancement.
6.2.3. Blurring
Blurring smooths an image by suppressing its edges. In underwater images, blurring degrades the image and obscures detail [
169,
170,
171]. Because light is scattered and absorbed, underwater images are severely distorted by blurring and color cast, which decreases the image quality. The low quality makes it difficult to process images for object detection, classification, and segmentation. Due to the blurring effects in underwater images, deblurring methods are in high demand for image enhancement and restoration.
6.2.4. Poor visibility
Visibility refers to whether objects are detectable by sight, or to the distance at which light or objects can be discerned. The difficulty of ensuring object visibility at long or short distances in underwater scenes poses a challenge for the image processing community. Due to the backscatter and absorption caused by noise and suspended particles, such images suffer from poor visibility, which is a major problem for oceanic computer vision applications. Light attenuation limits the viewing distance to about twenty meters in clear water and to five meters or less in turbid and murky water. Many studies have improved the visibility of underwater images [
13,
172].
6.2.5. Haze
Hazy images are captured in foggy or hazy conditions and are degraded by absorption and scattering. Such images have weak contrast and low visibility, making it more difficult for human vision to identify the objects in them. Due to these effects on underwater images, the enhancement of hazy underwater images is important [
173,
174].
Figure 6 indicates some examples from the datasets of underwater images with different limitations.
Figure 6. Some examples of underwater images from available datasets with various scene limitations: (a) low light, (b) low contrast, (c) haze, and (d) blur.
7. Underwater Image Datasets
In this section, we present the underwater imaging datasets that researchers use for enhancing and restoring underwater images. There is no complete dataset for underwater imaging because collecting underwater images is very difficult. Most underwater datasets have limitations such as few categories, single target objects, and imperfect information for labeling.
Figure 7 shows some examples of images from underwater images datasets.
Figure 7. Samples of underwater images from the available underwater datasets.
-
Real-World Underwater Image Enhancement (RUIE) Dataset
The RUIE dataset [
175] is a large-scale dataset that contains 4000 underwater images from multiple views. According to the underwater image enhancement network (UIE) algorithms, the RUIE dataset is classified into three subsets: the underwater image quality set (UIQS), underwater color cast set (UCCS), and underwater task-oriented test suite (UHTS), as presented in
Table 9. These subsets are used to restore color cast, enhance visual appearance, and aid in computer vision detection/classification at a higher level.
-
Underwater Image Enhancement Benchmark (UIEB) Dataset
The UIEB dataset [
130] contains 950 real-world underwater images, 890 of which have a corresponding reference image. The remaining 60 were retained as testing data. This dataset is used in qualitative and quantitative underwater image enhancement algorithms. The UIEB dataset includes many levels of resolution and covers several scene/main object categories.
-
Enhancement of Underwater Visual Perception (EUVP) Dataset
The EUVP dataset [
176] is a large-scale dataset that includes a paired and unpaired collection of low and good-quality underwater images used for adversarial supervised learning. These images were collected using seven different cameras in different situations. The unpaired data was collected by six human assistants and the paired data was collected by relying on human perception. This dataset includes 12K paired and 8K unpaired images, as shown in
Table 7 and
Table 8.
Table 7. EUVP paired dataset.
Dataset Name | Training Images | Validation Images | Total Images
Underwater Dark | 5550 | 570 | 11670
Underwater ImageNet | 3700 | 1270 | 8670
Underwater Scenes | 2185 | 130 | 4500
Table 8. EUVP unpaired dataset.
Poor Quality | Good Quality | Validation | Total Images
3195 | 3140 | 330 | 6665
-
U-45 Dataset
The U-45 dataset [
177] is an effective public underwater test dataset that includes 45 underwater images chosen from among real underwater images. This dataset contains the low contrast, color casts, and haze-like effects that contribute to image degradation.
-
Jamaica Port Royal Dataset
The Jamaica Port Royal dataset [
178] was gathered in Port Royal, Jamaica, at the site of a submerged city containing both natural and artificial structures. These images were collected using a handheld diver rig. Sixty-five hundred images were collected during a single dive at a maximum of 1.5 m above the seabed.
-
Marine Autonomous Robotics for Interventions (MARI) Dataset
The MARI dataset [
179] aims to improve the development of cooperative AUVs for underwater interventions in offshore industries, rescue, search, and various types of scientific exploration tasks. This dataset presents diverse underwater videos and images captured underwater by a stereo vision system.
-
MOUSS
The MOUSS dataset [
24] was obtained by using a stationary camera on the ocean floor. At 1–2 m, with sufficient ambient lighting, 159 images of fish and other relevant objects were acquired. The test dataset was a combination of images from training and new collections.
-
MBARI Dataset
The MBARI dataset [
24] was collected from different regions and consisted of 666 images of fish and other relevant objects. This dataset was obtained by the Monterey Bay Aquarium Research Institute.
-
AFSC Dataset
The AFSC dataset [
24] was collected from the ROV that was placed underwater and equipped with an RGB video camera. It consisted of numerous videos from various ROV missions and contained 571 images.
-
NWFSC Dataset
The NWFSC dataset [
24] was collected using a remotely operated vehicle and looking downward at the ocean floor. The first dataset contained 123 images of fish and other objects near the seabed.
-
RGBD Dataset
The RGBD dataset [
24] collected for underwater image restoration and enhancement contained a waterproof color chart in the underwater environment. It consisted of over 1100 images.
-
Fish4knowledge Dataset
The Fish4knowledge dataset [
180] consisted of fish data collected from a live video dataset. It had 27370 fish images. The entire dataset was divided into 23 clusters, with each distinct cluster representing a particular species.
-
Wild Fish Marker Dataset
The Wild Fish Marker dataset [
181] was collected using a remotely operated vehicle under different ocean conditions. This dataset contained fish images depending on the cascade classifiers of Haar-like features. These images were not unconstrained as the underwater environment was variable because of the moving recording platform. It included an annotated training and validation dataset and independent test data.
-
HabCam Dataset
The HabCam dataset [
24] consists of underwater images of the seafloor recorded by the HabCam vehicle, which moves over the ocean floor taking six images per second. These images are critical for studying the ecosystem and advancing the marine sciences.
-
Port Royal Underwater Image Dataset
The Port Royal underwater image dataset [
178] was collected using a GAN to create realistic underwater images. These images were taken using a camera onboard autonomous as well as operated vehicles. This method is capable of recording high-resolution underwater images.
-
OUCVISION Dataset
The OUCVISION dataset [
182] is a large-scale underwater image enhancement and restoration dataset which is used for recognizing and detecting salient objects. It contains 4400 images of 220 objects. Each object was taken with four pose variations (right, left, back, and front) and five spatial regions (bottom right, bottom left, center, top right, top left) to obtain 20 images.
-
Underwater Rock Image Database
The underwater rock image database [
24] was collected to enhance and restore underwater images. It depended on the GAN to generate realistic underwater images.
-
Underwater Photography Fish Database
The underwater photography fish database [
24] was collected from reef- and fish-life photographs taken in locations all over the world, such as the Indian Ocean, Red Sea, etc. This dataset contained many reef fish species, including Parrotfish, Butterflyfish, Angelfish, Wrasse, and Groupers. It also includes non-fish subjects like nudibranchs, corals, and octopi.
Table 9. List of underwater imaging datasets.
Dataset | Source | No. of Images | Objects | Resolution
RUIE [175] | Dalian Univ. of Technology | UIQS 3630 (726 × 5), UCCS 300 (100 × 3), UHTS 300 (60 × 5) | Sea cucumbers, scallops, and urchins | 400 × 300
UIEB [130] | —– | 950 | Diverse objects | Variable
EUVP [176] | —– | 31505 | Diverse objects | 256 × 256
U-45 [177] | Nanjing Univ. of Information Science and Technology, China | 45 | Diverse objects | 256 × 256
Jamaica Port Royal [178] | —– | 6500 | Fishes and other related objects | 1360 × 1024
MARI [179] | —– | Variable | Fishes and other related objects | 1292 × 964
MOUSS [24] | CVPR AAMVEM workshop | 159 | Fishes | 968 × 728
MBARI [24] | Monterey Bay Aquarium Research Institute | 666 | Fishes | 1920 × 1080
AFSC [24] | CVPR AAMVEM workshop | 571 | Fishes and other related objects | 2112 × 2816
NWFSC [24] | Integrated by CVPR AAMVEM workshop | 123 | Fishes and other related objects | 2448 × 2050
RGBD [24] | Tel Aviv Univ. | 1100 | Diverse objects | 1369 × 914
Fish4knowledge [180] | The Fish4knowledge team | Images from underwater videos | Diverse objects | Variable
Wild Fish Marker [181] | NOAA Fisheries | 1934 positive images, 3167 negative images, and 2061 fish images | Fishes and other related objects | Variable
HabCam [24] | Integrated by CVPR AAMVEM workshop | 10465 | Sand dollars, scallops, rocks, and fishes | 2720 × 1024
Port Royal Underwater Image [178] | Real scientific surveys in Port Royal | 18091 | Artificial and natural structures | 1360 × 1024
OUCVISION [182] | Ocean Univ. of China | 4400 | Artificial targets or rocks | 2592 × 1944
Underwater Rock Image Database [24] | Univ. of Michigan | 15057 | Rocks in a pool | 1360 × 1024
Underwater Photography Fish Database [24] | Amateur contribution | 8644 | Reef fishes, coral, and others | Variable
8. Underwater Image Quality Evaluation Metrics
Assessing the quality of underwater images is an essential task that should be performed automatically and accurately. Image quality assessment (IQA) approaches are categorized into (a) objective and (b) subjective methods [183,184].
Subjective image quality metrics are time-consuming, expensive, and not sufficient for most real-time applications. Objective image quality assessment techniques apply mathematical and statistical models that emulate the human visual system (HVS) to compute image quality.
Objective IQA techniques are classified into three groups: full reference (FR), reduced reference (RR), and no reference (NR), as indicated in Figure 8. With FR IQA, the underwater reference image is available. With RR IQA, partial information about the reference image is available. With NR IQA, no reference image is available. In addition to the conventional evaluation metrics, specialized metrics for effectively evaluating underwater image quality have been presented in the literature, as defined below and listed in Figure 9.
Figure 8. Classification of objective image quality assessment methods.
Figure 9. Classification of specialized underwater image quality assessment metrics.
Mean Square Error (MSE): calculates the squared error between the original and enhanced images [98]. The lower the MSE, the better the quality and the smaller the error. The MSE is computed using Equation 11:

$$\mathrm{MSE}=\frac{1}{MN}\sum_{x=1}^{M}\sum_{y=1}^{N}\left[F(x,y)-e(x,y)\right]^{2} \tag{11}$$

where $M \times N$ is the image size, $F$ is the original image, and $e$ is the enhanced image.
Peak Signal-to-Noise Ratio (PSNR): computes the peak error and expresses the quality ratio between the original and enhanced images [98]. The greater the PSNR, the better the reconstructed or enhanced image quality. It is calculated from the MSE using Equation 12:

$$\mathrm{PSNR}=10\log_{10}\!\left(\frac{R^{2}}{\mathrm{MSE}}\right) \tag{12}$$

where $R$ is the maximum pixel value in the image, which is 255 for a gray-level image.
Entropy: represents a statistical measure of the information in the image. It represents the degree of randomness in the image and can be used to characterize its texture [185,186]. A higher entropy value indicates that the image has minimal information loss. It is computed using Equation 13:

$$H(F)=-\sum_{i=0}^{L-1} p_{i}\log_{2} p_{i} \tag{13}$$

where $i$ is a gray level of a pixel in the image $F$ and $p_{i}$ is the probability of intensity $i$.
Structural Similarity Index Measure (SSIM): is applied to compute the similarity between the original and enhanced images. It was presented by Wang [187] and formulated in [188,189]. Here, x and y are patches at corresponding locations of two different images. The SSIM involves three measures: contrast C(x, y), brightness B(x, y), and structure S(x, y). The greater the SSIM value, the better the enhancement and the smaller the distortion. The SSIM is computed using Equation 14:

$$\mathrm{SSIM}(x,y)=\frac{\left(2\mu_{x}\mu_{y}+C_{1}\right)\left(2\sigma_{xy}+C_{2}\right)}{\left(\mu_{x}^{2}+\mu_{y}^{2}+C_{1}\right)\left(\sigma_{x}^{2}+\sigma_{y}^{2}+C_{2}\right)} \tag{14}$$

where $\mu_{x}$, $\mu_{y}$ are the means and $\sigma_{x}$, $\sigma_{y}$ are the standard deviations of the x and y patches of pixels, $\sigma_{xy}$ is the covariance of the x and y patches, and $C_{1}=(k_{1}L)^{2}$ and $C_{2}=(k_{2}L)^{2}$ are small constants that prevent instability, with $L$ the dynamic range of the pixel values, $k_{1}=0.01$, and $k_{2}=0.03$.
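For reference, the sketch below computes MSE, PSNR, and SSIM with scikit-image for two aligned images; the `data_range` and `channel_axis` arguments are assumptions matching 8-bit color inputs.

```python
# Minimal sketch: full-reference metrics with scikit-image (uint8 color images).
import numpy as np
from skimage.metrics import mean_squared_error, peak_signal_noise_ratio, structural_similarity

def full_reference_scores(original: np.ndarray, enhanced: np.ndarray) -> dict:
    return {
        "MSE": mean_squared_error(original, enhanced),
        "PSNR": peak_signal_noise_ratio(original, enhanced, data_range=255),
        # Drop channel_axis for grayscale inputs.
        "SSIM": structural_similarity(original, enhanced, channel_axis=-1, data_range=255),
    }
```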
Colour Enhancement Factor (CEF): This represents the colour enhancement effect. The greater the CEF, the better the quality of the enhanced image. It is calculated using Equation 15:

$$\mathrm{CEF}=\frac{\mathrm{CM}(e)}{\mathrm{CM}(F)}, \qquad \mathrm{CM}=\sqrt{\sigma_{\alpha}^{2}+\sigma_{\beta}^{2}}+0.3\sqrt{\mu_{\alpha}^{2}+\mu_{\beta}^{2}} \tag{15}$$

where $e$ and $F$ denote the enhanced and original images, $\alpha=R-G$ and $\beta=\frac{1}{2}(R+G)-B$, $\sigma_{\alpha}$ and $\sigma_{\beta}$ are the standard deviation values, and $\mu_{\alpha}$ and $\mu_{\beta}$ are the average values of $\alpha$ and $\beta$.
Contrast-to-Noise Ratio (CNR): This is used to compute the underwater image quality [190]. It relates the signal amplitude to the surrounding noise in underwater images and is computed using Equation 16:

$$\mathrm{CNR}=\frac{\left|\mu_{e}-\mu_{o}\right|}{\sigma} \tag{16}$$

where $\mu_{o}$ represents the average value of the original image, $\mu_{e}$ is the average value of the enhanced image, and $\sigma$ is the standard deviation of the noise.
Image Enhancement Metric (IEM): computes the sharpness and contrast of an underwater image by dividing the image into non-overlapping blocks [191]. It is the ratio of the mean absolute difference between each block's center pixel and its eight neighbours in the enhanced image to that in the original image. It is calculated using Equation 17:

$$\mathrm{IEM}=\frac{\sum_{l=1}^{k_{1}}\sum_{m=1}^{k_{2}}\sum_{n=1}^{8}\left|I_{c,e}^{l,m}-I_{n,e}^{l,m}\right|}{\sum_{l=1}^{k_{1}}\sum_{m=1}^{k_{2}}\sum_{n=1}^{8}\left|I_{c,o}^{l,m}-I_{n,o}^{l,m}\right|} \tag{17}$$

where $k_{1}$ and $k_{2}$ are the numbers of non-overlapping blocks, the subscripts $o$ and $e$ denote the original and improved images, $I_{c}$ is the intensity of a block's center pixel, and $I_{n}$ are the intensities of its eight neighbours.
Absolute Mean Brightness Error (AMBE): This indicates how well brightness is preserved after image enhancement [192]. It is the absolute difference between the mean of the original and the mean of the improved underwater image; a low AMBE value denotes good brightness preservation. It is computed using Equation 18:

$$\mathrm{AMBE}=\left|\mu_{F}-\mu_{e}\right| \tag{18}$$

where $\mu_{F}$ and $\mu_{e}$ are the average values of the original and improved images.
Spatial–Spectral Entropy-based Quality index (SSEQ): This is an efficient and accurate NR IQA model presented in [193]. It computes the underwater image quality when the image is affected by many distorting factors. Its spectral entropy is computed using Equation 19:

$$E=-\sum_{i}\sum_{j}P(i,j)\log_{2}P(i,j) \tag{19}$$

where $P(i,j)$ is the spectral probability map, computed from the local DCT coefficients $C(i,j)$ by Equation 20:

$$P(i,j)=\frac{C(i,j)^{2}}{\sum_{i}\sum_{j}C(i,j)^{2}}, \qquad (i,j)\neq(0,0) \tag{20}$$
Measure of Enhancement (EME): This computes the contrast of underwater images and assists in selecting processing parameters [194,195]. It is computed using Equation 21:

$$\mathrm{EME}=\frac{1}{k_{1}k_{2}}\sum_{l=1}^{k_{1}}\sum_{m=1}^{k_{2}}20\log\!\left(\frac{I_{\max}^{l,m}}{I_{\min}^{l,m}}\right) \tag{21}$$

where $I_{\max}^{l,m}$ and $I_{\min}^{l,m}$ are the maximum and minimum image values within the block $(l,m)$, and $k_{1}k_{2}$ is the number of blocks.
Measure of Enhancement by Entropy (EMEE): This computes the contrast of underwater images [194,195]. The greater the EMEE value, the better the image quality. It is calculated using Equation 22:

$$\mathrm{EMEE}=\frac{1}{k_{1}k_{2}}\sum_{l=1}^{k_{1}}\sum_{m=1}^{k_{2}}\alpha\left(\frac{I_{\max}^{l,m}}{I_{\min}^{l,m}}\right)^{\alpha}\log\!\left(\frac{I_{\max}^{l,m}}{I_{\min}^{l,m}}\right) \tag{22}$$

where $(l,m)$ indexes the blocks into which the underwater image is divided and $\alpha$ is an enhancement parameter.
Root Mean Square Error (RMSE): This is applied to compute the difference between the original and enhanced images as the square root of the MSE. The lower the RMSE value, the better the contrast of the underwater image. It is calculated using Equation 23:

$$\mathrm{RMSE}=\sqrt{\frac{1}{MN}\sum_{x=1}^{M}\sum_{y=1}^{N}\left[F(x,y)-e(x,y)\right]^{2}} \tag{23}$$

where $F$ and $e$ represent the original and improved images.
Underwater Colour Image Quality Evaluation metric (UCIQE): This is a linear combination of chroma, contrast, and saturation [196]. It quantifies the low contrast, non-uniform color cast, and blur that degrade underwater images. It converts the RGB space into the CIELAB color space, which approximates the visual perception of the human eye. A higher UCIQE value means that the underwater image has a good balance between contrast, chroma, and saturation. It is computed using Equation 24:

$$\mathrm{UCIQE}=c_{1}\,\sigma_{c}+c_{2}\,\mathrm{con}_{l}+c_{3}\,\mu_{s} \tag{24}$$

where $c_{1}$, $c_{2}$, and $c_{3}$ represent the weighting coefficients, $\sigma_{c}$ is the standard deviation of chroma, $\mathrm{con}_{l}$ is the contrast of luminance, and $\mu_{s}$ represents the average value of saturation.
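A hedged sketch of Equation 24 is shown below; the luminance contrast is taken as the difference between the top and bottom 1% of luminance values, and the weighting coefficients are the values commonly reported for UCIQE, which should be verified against the original paper before use.

```python
# Hedged sketch of the UCIQE score (Equation 24) using scikit-image.
import numpy as np
from skimage.color import rgb2lab, rgb2hsv

def uciqe(rgb: np.ndarray, c1=0.4680, c2=0.2745, c3=0.2576) -> float:
    lab = rgb2lab(rgb)                                   # RGB image (uint8 or float in [0, 1])
    luminance = lab[..., 0] / 100.0
    chroma = np.sqrt(lab[..., 1] ** 2 + lab[..., 2] ** 2) / 100.0
    sigma_c = chroma.std()                               # chroma standard deviation
    con_l = np.percentile(luminance, 99) - np.percentile(luminance, 1)
    mu_s = rgb2hsv(rgb)[..., 1].mean()                   # mean saturation
    return float(c1 * sigma_c + c2 * con_l + c3 * mu_s)
```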
Underwater Image Quality Measure (UIQM): measures the quality of underwater images and depends on the model of the human visual system and functions without the reference image [
197]. It relies on the feature or measuring component of the underwater images to represent the visual quality. It consists of three measurements, the underwater image sharpness measurement (UISM), the underwater image color measurement (UICM), and the underwater image contrast measurement (UIConM). A higher UIQM value denotes a higher quality value for underwater images.
Colourfulness Contrast Fog density index (CCF): This computes the color quality of underwater images and is a no-reference IQA model [198]. It is a weighted combination of the colorfulness index, contrast, and fog density, calculated using Equation 26:

$$\mathrm{CCF}=\omega_{1}\,\mathrm{Colorfulness}+\omega_{2}\,\mathrm{Contrast}+\omega_{3}\,\mathrm{FogDensity} \tag{26}$$

The colorfulness term accounts for absorption, the contrast term for the blurring caused by forward scattering, and the fog density term for backward scattering.
Average Gradient (AG): This is a full-reference method that measures the sharpness of underwater images by computing the average rate of intensity change across the image. It is computed using Equation 27:

$$\mathrm{AG}=\frac{1}{LM}\sum_{x=1}^{L}\sum_{y=1}^{M}\sqrt{\frac{g_{x}^{2}(x,y)+g_{y}^{2}(x,y)}{2}} \tag{27}$$

where $L$ and $M$ represent the width and height of the underwater image, and $g_{x}$ and $g_{y}$ are the gradients in the $x$ and $y$ directions.
Patch-based Contrast Quality Index (PCQI): This predicts the contrast distortion perceived by the human eye [199]. It is based on a patch model instead of global statistics and on three independent image quantities: mean intensity, signal strength, and signal structure. The greater the PCQI value, the better the contrast of the underwater image. It is computed using Equation 28:

$$\mathrm{PCQI}=\frac{1}{P}\sum_{i=1}^{P}q_{i}(x_{i})\,q_{c}(x_{i})\,q_{s}(x_{i}) \tag{28}$$

where $P$ is the number of patches in the underwater image and $q_{i}$, $q_{c}$, and $q_{s}$ are the comparison functions for mean intensity, contrast change, and structural distortion.
Normalized Cross-Correlation (NCC): This evaluates underwater image quality by calculating the similarity between the enhanced and original images; it represents the correlation within the image pair [200]. The brightness of an underwater image varies with the lighting conditions, which is the essential reason for normalizing the image. NCC produces a value between −1 and 1: a value of 1 indicates perfectly correlated images, 0 indicates uncorrelated images, and −1 indicates perfectly anti-correlated images. It is computed using Equation 29:

$$\mathrm{NCC}=\frac{\sum_{i=1}^{M}\sum_{j=1}^{N}F(i,j)\,e(i,j)}{\sum_{i=1}^{M}\sum_{j=1}^{N}F(i,j)^{2}} \tag{29}$$

where $F$ is the original image, $e$ is the enhanced image, $i$ and $j$ are the image coordinates, and $M$ and $N$ are the numbers of pixels in the horizontal and vertical directions.
Average Difference (AD): This calculates the average difference between the original (low-quality) and processed images [200]. This quantitative measure is applied in object detection and recognition applications and in many other image processing tasks. The image quality is very poor when the AD value is too high. It is computed using Equation 30:

$$\mathrm{AD}=\frac{1}{MN}\sum_{i=1}^{M}\sum_{j=1}^{N}\left[F(i,j)-e(i,j)\right] \tag{30}$$

where $F$ is the original image and $e$ is the enhanced image at coordinates $(i,j)$, and $M$ and $N$ are the numbers of pixels in the horizontal and vertical directions.
Maximum Difference (MD): This computes the maximum error signal as the largest difference between the original and enhanced underwater images [201]. It uses a low-pass filter for the sharp edges of underwater images and is similar to AD. The higher the MD value, the poorer the underwater image. It is computed as

$$\mathrm{MD}=\max_{i,j}\left|F(i,j)-e(i,j)\right|$$

where $F$ is the original image and $e$ is the enhanced image.
Normalized Absolute Error (NAE): This computes underwater image quality [202]. The NAE value is inversely proportional to image quality: the higher the NAE, the poorer the underwater image. It is computed as

$$\mathrm{NAE}=\frac{\sum_{i=1}^{M}\sum_{j=1}^{N}\left|F(i,j)-e(i,j)\right|}{\sum_{i=1}^{M}\sum_{j=1}^{N}\left|F(i,j)\right|}$$

where $F$ is the original image, $e$ is the enhanced image, and the numerator is the absolute error of the underwater image.
The evaluation of algorithms used for enhancing and restoring underwater images of different categories is very important. These evaluation measures provide the scores that represent the similarity or distortion between the original and the enhanced images. These evaluations help to estimate the best parameters for use in different applications. Underwater image quality metrics (IQM) help estimate the quality of the underwater enhancement and restoration algorithms.
9. Performance Evaluation
Experiments were conducted on several of the 890 reference-paired images in the UIEB dataset to evaluate the qualitative, quantitative, and computational performance of the enhancement and restoration algorithms. Several restoration and enhancement algorithms were tested on these selected images using six common evaluation metrics. This section is divided into four subsections: Experimental Setting, Qualitative Evaluation, Quantitative Evaluation, and Computational Complexity.
9.1. Experimental Setting
Extensive experiments concerning subjective and objective evaluation were conducted on various techniques for enhancing and restoring underwater images. The computer configuration for these experiments was an Intel(R) core (TM) i7-9750H CPU @2.60 GHZ (Lenovo, Beijing, China); 16 GB RAM; Microsoft Windows 10 (Microsoft, Redmond, WA, USA); MATLAB R2018a and python 3.6.
9.2. Qualitative Evaluation
Subjective evaluation is critical for visualizing the underwater image restoration and enhancement effects.
Figure 10 presents the subjective results for the restoration of five raw images selected from the UIEB dataset using the following restoration algorithms: DCP [
75], UDCP [
78], MIP [
70], IBLA [
81], and ULAP [
72]. This figure indicates that the restoration results of the IBLA and ULAP algorithms were superior because they account for underwater light attenuation to create a correct depth map. Recent restoration methods only dehaze underwater images and cannot deal effectively with color restoration for diverse underwater images. Therefore, color correction algorithms can be used as preprocessing to enhance the color, brightness, and contrast of restored images.
Figure 11 presents the subjective evaluation results for images enhancement on the same five images using the following enhancement algorithms: HE [
203], CLAHE [
204], ICM [
96], UCM [
205] and RGHS [
99]. From these results, we note that the HE enhancement results are inferior. CLAHE distributes the red, green, and blue pixels more evenly, thus enhancing underwater images and outperforming HE. ICM equalizes the color casts, and the UCM enhancement results are superior to those of ICM. RGHS uses adaptive parameters to avoid the blind pixel redistribution of global histogram stretching, which reduces sharpness, and it exhibits a stronger dehazing effect.
Figure 10. Subjective comparative results for underwater image restoration on the UIEB dataset. The results are generated using DCP, UDCP, MIP, IBLA, and ULAP.
Figure 11. Subjective comparative results for underwater image enhancement on the UIEB dataset. The results are generated using HE, CLAHE, ICM, UCM, and RGHS.
9.3. Quantitative Evaluation
A quantitative evaluation objectively confirms the visual quality of the resultant underwater images and validates the methods’ effectiveness. The objective analysis is computed using image quality evaluation metrics, namely SSIM, MSE, PSNR, PIQE, UCIQE, and UIQM.
Table 10 and
Table 11, respectively, present the quality metrics for five restoration algorithms and five enhancement algorithms computed on five raw images from the UIEB dataset. To interpret the results, the following points should be considered. A lower MSE indicates less noise or error in the image content, and a higher PSNR indicates lower noise; together, a high PSNR and a low MSE indicate a good resultant image. An SSIM value close to 1 indicates high structural similarity. A higher UIQM indicates a better balance of colorfulness, contrast, and sharpness in the underwater image. A lower PIQE and a higher UCIQE both indicate a better-enhanced underwater image.
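As an example of the no-reference metrics, UCIQE is defined by Yang and Sowmya as a weighted sum of the chroma standard deviation, the luminance contrast, and the mean saturation in the CIELab color space. The following is a simplified sketch under assumed normalization choices, so its absolute scores may deviate slightly from the values reported in Table 10 and Table 11.

```python
# A simplified sketch of the no-reference UCIQE metric. The weights are those
# reported in the original definition; the normalization of chroma, lightness,
# and saturation is an assumption and varies between implementations.
import numpy as np
from skimage import io, img_as_float
from skimage.color import rgb2lab

def uciqe(rgb, c1=0.4680, c2=0.2745, c3=0.2576):
    lab = rgb2lab(img_as_float(rgb))
    L = lab[..., 0] / 100.0                                   # lightness in [0, 1]
    chroma = np.sqrt(lab[..., 1] ** 2 + lab[..., 2] ** 2) / 100.0

    sigma_c = chroma.std()                                    # variation of chroma
    # Luminance contrast: spread between the top and bottom 1% of lightness.
    con_l = np.percentile(L, 99) - np.percentile(L, 1)
    # Saturation approximated as chroma relative to overall pixel magnitude.
    saturation = chroma / np.sqrt(chroma ** 2 + L ** 2 + 1e-12)
    mu_s = saturation.mean()

    return c1 * sigma_c + c2 * con_l + c3 * mu_s

if __name__ == "__main__":
    print(uciqe(io.imread("enhanced_001.png")))               # placeholder file name
```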
Table 10.
Quantitative evaluation of different restoration algorithms using underwater images from UIEB dataset.
| Image | Algorithm | MSE | SSIM | PSNR | PIQE | UCIQE | UIQM |
|-------|-----------|-----|------|------|------|-------|------|
| (a) | DCP | 838 | 0.6 | 18.89 | 18.43 | 0.48 | 3.11 |
| | UDCP | 775 | 0.9 | 19.23 | 20.92 | 0.51 | 2.71 |
| | MIP | 1198 | 0.6 | 17.34 | 27.59 | 0.53 | 1.28 |
| | IBLA | 1116 | 0.8 | 17.65 | 31.60 | 0.51 | 1.57 |
| | ULAP | 237 | 0.8 | 24.38 | 26.30 | 0.56 | 2.35 |
| (b) | DCP | 5886 | 0.7 | 10.43 | 29.68 | 0.40 | 2.25 |
| | UDCP | 12963 | 0.4 | 7.03 | 30.40 | 0.36 | 1.98 |
| | MIP | 1655 | 0.2 | 15.94 | 25.18 | 0.51 | 2.65 |
| | IBLA | 10530 | 0.3 | 7.90 | 30.57 | 0.42 | 1.33 |
| | ULAP | 3703 | 0.3 | 12.44 | 17.13 | 0.60 | 2.92 |
| (c) | DCP | 314 | 0.7 | 23.16 | 28.61 | 0.55 | 2.15 |
| | UDCP | 3655 | 0.6 | 12.50 | 28.35 | 0.52 | 2.03 |
| | MIP | 4661 | 0.5 | 11.44 | 45.58 | 0.63 | 0.96 |
| | IBLA | 1600 | 0.8 | 16.08 | 26.26 | 0.60 | 2.40 |
| | ULAP | 3675 | 0.8 | 12.47 | 27.10 | 0.63 | 2.46 |
| (d) | DCP | 2807 | 0.6 | 13.64 | 51.66 | 0.47 | 2.30 |
| | UDCP | 4617 | 0.5 | 11.48 | 53.30 | 0.46 | 1.79 |
| | MIP | 2025 | 0.7 | 15.06 | 50.96 | 0.50 | 1.76 |
| | IBLA | 974 | 0.6 | 18.24 | 47.02 | 0.50 | 1.37 |
| | ULAP | 1359 | 0.7 | 16.79 | 49.24 | 0.49 | 2.12 |
| (e) | DCP | 1864 | 0.8 | 15.42 | 22.43 | 0.52 | 2.30 |
| | UDCP | 4160 | 0.6 | 11.93 | 25.66 | 0.52 | 1.79 |
| | MIP | 7004 | 0.5 | 9.67 | 36.25 | 0.68 | 1.76 |
| | IBLA | 5152 | 0.5 | 11.01 | 35.72 | 0.54 | 1.37 |
| | ULAP | 1550 | 0.6 | 16.22 | 25.05 | 0.58 | 2.12 |
Table 11.
Quantitative evaluation for enhancement of underwater images on UIEB dataset.
| Image | Algorithm | MSE | SSIM | PSNR | PIQE | UCIQE | UIQM |
|-------|-----------|-----|------|------|------|-------|------|
| (a) | HE | 2472 | 0.7 | 14.19 | 19.61 | 0.60 | 3.31 |
| | CLAHE | 1055 | 0.8 | 17.89 | 18.74 | 0.57 | 3.53 |
| | ICM | 276 | 0.9 | 23.71 | 27.29 | 0.50 | 3.41 |
| | UCM | 146 | 0.9 | 26.47 | 18.05 | 0.52 | 3.17 |
| | RGHS | 156 | 0.9 | 26.19 | 19.93 | 0.53 | 2.08 |
| (b) | HE | 1530 | 0.6 | 16.28 | 15.96 | 0.61 | 3.30 |
| | CLAHE | 815 | 0.7 | 19.01 | 18.01 | 0.50 | 3.22 |
| | ICM | 1333 | 0.7 | 16.88 | 26.65 | 0.46 | 3.56 |
| | UCM | 3826 | 0.2 | 12.30 | 23.52 | 0.48 | 3.03 |
| | RGHS | 515 | 0.9 | 21.01 | 15.24 | 0.57 | 3.12 |
| (c) | HE | 2672 | 0.7 | 13.86 | 24.23 | 0.60 | 2.91 |
| | CLAHE | 1812 | 0.9 | 15.54 | 21.51 | 0.58 | 3.17 |
| | ICM | 594 | 0.9 | 20.39 | 16.76 | 0.56 | 2.71 |
| | UCM | 2781 | 0.8 | 13.68 | 18.25 | 0.61 | 2.88 |
| | RGHS | 531 | 0.9 | 20.87 | 23.42 | 0.58 | 2.27 |
| (d) | HE | 1575 | 0.7 | 16.15 | 45.38 | 0.59 | 2.70 |
| | CLAHE | 735 | 0.8 | 19.46 | 49.06 | 0.53 | 2.44 |
| | ICM | 1705 | 0.7 | 15.81 | 48.33 | 0.49 | 2.03 |
| | UCM | 1005 | 0.8 | 18.10 | 47.62 | 0.54 | 2.59 |
| | RGHS | 1274 | 0.7 | 17.07 | 49.73 | 0.55 | 2.71 |
| (e) | HE | 948 | 0.7 | 18.36 | 22.33 | 0.61 | 2.83 |
| | CLAHE | 506 | 0.9 | 21.08 | 23.63 | 0.56 | 2.98 |
| | ICM | 700 | 0.9 | 19.67 | 21.03 | 0.54 | 2.72 |
| | UCM | 654 | 0.8 | 19.96 | 20.65 | 0.56 | 2.78 |
| | RGHS | 413 | 0.9 | 21.96 | 21.13 | 0.58 | 2.71 |
9.4. Computational Complexity
Although the quality of underwater image enhancement and restoration algorithms is critical, the computational time, especially for real-time applications, should also be considered. Computational time indicates how long an algorithm takes to enhance or restore an image; a lower computational time means the algorithm is more efficient. Each algorithm’s running time for restoration and enhancement is shown in seconds in Table 12.
Table 12.
Computational time in seconds (s) for the restoration and enhancement algorithms applied to underwater images from the UIEB dataset.
| Image | DCP | UDCP | MIP | IBLA | ULAP | HE | CLAHE | ICM | UCM | RGHS |
|-------|-----|------|-----|------|------|----|-------|-----|-----|------|
| (a) | 13 | 11.88 | 15.2 | 30 | 0.02 | 0.03 | 0.03 | 3 | 7.2 | 4 |
| (b) | 13 | 12 | 24 | 90 | 0.1 | 0.04 | 0.04 | 3.3 | 7 | 4 |
| (c) | 48 | 43 | 48 | 130 | 0.6 | 0.07 | 0.16 | 11.19 | 24 | 14 |
| (d) | 91 | 85 | 60 | 190 | 0.12 | 0.13 | 0.12 | 22.2 | 40 | 28 |
| (e) | 7 | 14 | 7 | 50 | 0.1 | 0.02 | 0.02 | 3 | 4 | 3 |
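Running times of this kind can be measured with a simple wall-clock harness such as the sketch below (a generic illustration with a placeholder file name, not the timing code used to produce Table 12).

```python
# A minimal timing harness for measuring per-image running time of an
# enhancement or restoration function.
import time
import cv2

def time_algorithm(func, image, repeats=3):
    """Return the average wall-clock time in seconds over several runs."""
    elapsed = []
    for _ in range(repeats):
        start = time.perf_counter()
        func(image)
        elapsed.append(time.perf_counter() - start)
    return sum(elapsed) / len(elapsed)

if __name__ == "__main__":
    img = cv2.imread("uieb_raw.png", cv2.IMREAD_GRAYSCALE)   # placeholder file
    print("HE:", time_algorithm(cv2.equalizeHist, img), "s")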
10. Applications of Underwater Image Analysis
For an increasing number of applications, capturing clear underwater videos and images is essential. Researchers use underwater images and videos to gain valuable information while studying the underwater environment. In this section, the most common applications of this topic are introduced.
10.1. Underwater Navigation
Autonomous navigation by underwater vehicles for exploring underwater resources is a popular research topic [
17]. This is driven mainly by the increasing need to collect underwater data for tasks such as mine detection and environmental monitoring. Much of the related work on such vehicles therefore focuses on improving the underwater images they capture.
10.2. Fish Detection and Identification
Boudhane et al. [
206] developed a method for preprocessing underwater images and for detecting and locating fish. The method consists of three steps. First, noise is removed by estimating a Poisson–Gaussian mixture model, which also enhances the underwater image. Next, the mean shift technique decomposes the image into regions. Finally, these regions are merged using an estimation based on a log-likelihood test.
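As a rough illustration of the region-decomposition step only (the Poisson–Gaussian noise model and the log-likelihood merging of this method are not reproduced), OpenCV’s pyramid mean-shift filtering can be used as follows; the file names and radii are placeholders.

```python
# A hedged sketch of mean-shift-based region decomposition for an underwater frame.
import cv2

def mean_shift_regions(bgr, spatial_radius=21, color_radius=30):
    # pyrMeanShiftFiltering smooths the image so that pixels converging to the
    # same mode share a color, which effectively decomposes it into regions.
    return cv2.pyrMeanShiftFiltering(bgr, spatial_radius, color_radius)

if __name__ == "__main__":
    frame = cv2.imread("underwater_frame.png")   # placeholder file name
    cv2.imwrite("regions.png", mean_shift_regions(frame))
```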
Li et al. [
207] presented a method for detecting fish feed residues in underwater videos and images by applying adaptive thresholding, achieving an accuracy of 95.6%. The approach was intended to minimize feed waste and the associated financial losses. Expectation–maximization (EM) based on a Gaussian mixture model was applied to fit the histogram and identify its type so that an adaptive threshold could be computed.
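A simplified sketch of the underlying idea, under assumptions and not the exact procedure of [207], is to fit a two-component Gaussian mixture to the intensity distribution with EM and threshold where the component posteriors cross:

```python
# A hedged sketch of GMM-based adaptive thresholding for residue detection.
import numpy as np
import cv2
from sklearn.mixture import GaussianMixture

def gmm_threshold(gray):
    pixels = gray.reshape(-1, 1).astype(np.float64)
    gmm = GaussianMixture(n_components=2, random_state=0).fit(pixels)
    # Evaluate the posterior of the brighter component over all intensity levels
    # and pick the first level at which it dominates.
    levels = np.arange(256).reshape(-1, 1).astype(np.float64)
    posteriors = gmm.predict_proba(levels)
    bright = int(np.argmax(gmm.means_.ravel()))
    return int(np.argmax(posteriors[:, bright] >= 0.5))

if __name__ == "__main__":
    img = cv2.imread("tank_frame.png", cv2.IMREAD_GRAYSCALE)  # placeholder file
    t = gmm_threshold(img)
    _, mask = cv2.threshold(img, t, 255, cv2.THRESH_BINARY)
    cv2.imwrite("residue_mask.png", mask)
```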
Villon et al. [
208] developed a method for counting and identifying coral reef fish species in underwater videos and images using a CNN. The network was trained, and its performance evaluated, on several photographic databases with different post-processing decision rules to identify 20 fish species. The method detected both whole and partially visible fish bodies effectively and accurately.
Cui et al. [
209] presented a fish detection method that uses a CNN together with three optimization strategies to enlarge the set of learning samples and simplify the network, thereby accelerating training. Applying dropout and refining the loss function reduced both the loss and the training time. The improved accuracy and reduced processing time indicate the method’s potential for deployment on AUVs.
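As a generic illustration of a compact CNN classifier with dropout regularization (not the network or training scheme of [209]), consider the following PyTorch sketch; the input size and class count are illustrative assumptions.

```python
# A minimal sketch of a small fish/background CNN classifier with dropout.
import torch
import torch.nn as nn

class SmallFishCNN(nn.Module):
    def __init__(self, num_classes=2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Dropout(p=0.5),                  # dropout to reduce overfitting
            nn.Linear(32 * 16 * 16, num_classes),
        )

    def forward(self, x):
        return self.classifier(self.features(x))

if __name__ == "__main__":
    model = SmallFishCNN()
    dummy = torch.randn(1, 3, 64, 64)            # a 64x64 RGB patch
    print(model(dummy).shape)                    # torch.Size([1, 2])
```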
10.3. Corrosion Estimation of Subsea Pipelines
Khan et al. [
210] presented a new and effective method for estimating the corrosion of underwater pipelines from color information. The offshore oil and gas industry suffers from severe pipeline corrosion, which causes leaks and cracks, and the unfavorable conditions make it very difficult for human divers to inspect the pipelines. In this work, underwater image restoration and enhancement based on the wavelet transform were first applied to improve the degraded images, and the corrosion was then estimated from the corrected color information.
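A hedged sketch of the general wavelet-based enhancement idea (not the specific pipeline of [210]) is shown below, using the PyWavelets package to attenuate fine-scale detail coefficients before reconstruction; file names and gains are placeholders.

```python
# A sketch of wavelet-domain denoising: 2-D DWT per channel, attenuate detail
# coefficients to suppress noise, then reconstruct.
import numpy as np
import pywt
import cv2

def wavelet_denoise_channel(channel, wavelet="haar", detail_gain=0.5):
    cA, (cH, cV, cD) = pywt.dwt2(channel.astype(np.float64), wavelet)
    rec = pywt.idwt2((cA, (cH * detail_gain, cV * detail_gain, cD * detail_gain)),
                     wavelet)
    return np.clip(rec, 0, 255).astype(np.uint8)

def wavelet_enhance(bgr):
    channels = [wavelet_denoise_channel(c) for c in cv2.split(bgr)]
    return cv2.merge(channels)

if __name__ == "__main__":
    img = cv2.imread("pipeline_frame.png")       # placeholder file name
    cv2.imwrite("pipeline_enhanced.png", wavelet_enhance(img))
```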
10.4. Coral-reef Monitoring
Underwater digital imaging has improved data collection for monitoring benthic communities, but analysis of these underwater images remains difficult. A new and effective method by [
211] is based on a deep convolutional neural network (CNN) for analyzing underwater images. It uses a global coral reef monitoring dataset and artificial intelligence for simulation, data processing, and decision-making. Several CNN layers learn features from the images, and probabilistic inference is used to interpret the network output. The reported experimental accuracy of this method is 97%.
10.5. Sea Cucumber Image Enhancement
Sea cucumber products are low in fat and rich in high-quality protein and vitamins, and they contribute significantly to meeting the dietary needs of people who rely on animal protein. Li et al. [
212] developed a novel and effective method for enhancing blurred, degraded, and color-distorted underwater images of sea cucumbers. The method fuses the dark channel prior with retinex: preprocessing based on the dark channel prior is applied first, the image is then convolved with a Gaussian template, and finally the brightness and saturation are enhanced in the HSV color space.
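The final HSV adjustment step can be sketched as follows; the dark channel prior and retinex stages are omitted, and the gain values are illustrative assumptions rather than those of [212].

```python
# A minimal sketch of boosting saturation and brightness in HSV space.
import numpy as np
import cv2

def boost_hsv(bgr, sat_gain=1.2, val_gain=1.1):
    hsv = cv2.cvtColor(bgr, cv2.COLOR_BGR2HSV).astype(np.float32)
    hsv[..., 1] = np.clip(hsv[..., 1] * sat_gain, 0, 255)    # saturation
    hsv[..., 2] = np.clip(hsv[..., 2] * val_gain, 0, 255)    # brightness (value)
    return cv2.cvtColor(hsv.astype(np.uint8), cv2.COLOR_HSV2BGR)

if __name__ == "__main__":
    img = cv2.imread("sea_cucumber.png")         # placeholder file name
    cv2.imwrite("sea_cucumber_hsv.png", boost_hsv(img))
```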
Qiao et al. [
213] developed a technique for the automatic segmentation of sea cucumbers in underwater images. First, contrast enhancement is applied by combining the CLAHE algorithm with the RGB color model. Then, the rectangular frame edges and the sea cucumber edges are extracted and distinguished using active contour segmentation.
10.6. Other Applications
Fatan et al. [
214] developed a method for detecting and tracking underwater cables using an autonomous underwater vehicle. First, the edges in the underwater image are computed. They are then classified based on texture information using a support vector machine (SVM) and a multilayer perceptron (MLP) neural network, so that only the edges relevant to the subsequent processing are retained. Finally, the filtered edges are processed with the Hough transform to detect and track the cables.
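Omitting the SVM/MLP edge-filtering stage, the edge-plus-Hough idea can be sketched with OpenCV as follows; the thresholds and file names are illustrative assumptions, not the settings of [214].

```python
# A hedged sketch: Canny edges followed by a probabilistic Hough transform to
# extract straight line segments, which is how an elongated cable could be
# localized in a frame.
import numpy as np
import cv2

def detect_cable_lines(bgr):
    gray = cv2.cvtColor(bgr, cv2.COLOR_BGR2GRAY)
    edges = cv2.Canny(gray, 50, 150)
    # HoughLinesP returns line segments as (x1, y1, x2, y2).
    lines = cv2.HoughLinesP(edges, rho=1, theta=np.pi / 180, threshold=80,
                            minLineLength=60, maxLineGap=10)
    return [] if lines is None else [tuple(l[0]) for l in lines]

if __name__ == "__main__":
    frame = cv2.imread("cable_frame.png")        # placeholder file name
    for x1, y1, x2, y2 in detect_cable_lines(frame):
        cv2.line(frame, (x1, y1), (x2, y2), (0, 255, 0), 2)
    cv2.imwrite("cable_lines.png", frame)
```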
Zhou et al. [
215] developed a method for detecting moving objects in underwater videos, which is critical for computer vision applications such as target tracking and recognition. The method first improves clarity and target contrast with adaptive underwater color imaging algorithms and then extracts the moving objects using a background model.
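A minimal sketch of background-model-based motion extraction is shown below, using OpenCV’s MOG2 subtractor as a stand-in for the background model described above (not the algorithm of [215]); the video path and parameters are placeholders.

```python
# A sketch of foreground extraction with a Gaussian-mixture background model.
import cv2

def extract_moving_objects(video_path):
    cap = cv2.VideoCapture(video_path)
    subtractor = cv2.createBackgroundSubtractorMOG2(history=200, varThreshold=25)
    masks = []
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        # Pixels deviating from the learned background model become foreground.
        masks.append(subtractor.apply(frame))
    cap.release()
    return masks

if __name__ == "__main__":
    fg_masks = extract_moving_objects("underwater_clip.mp4")  # placeholder file
    print(f"processed {len(fg_masks)} frames")
```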
11. Future Directions
Underwater image analysis is expected to remain an active research topic in various disciplines, such as computer vision, pattern recognition, and machine learning, owing to its extensive and complex applications. Despite many good studies and numerous attempts, several promising research directions remain open. Performance in many areas, such as deblurring, super-resolution, and dehazing, is still low compared with that of other underwater image restoration and enhancement techniques. In the following paragraphs, some potential future directions are discussed.
Efforts should be directed toward noise removal, since the underwater videos and images captured in many experiments contain a high level of noise.
Studies should be dedicated to real-time object tracking and detection from enhanced underwater images.
The high computational cost and execution time required for restoring and enhancing underwater images should be reduced.
The performance of contrast enhancement methods is still poor in many respects. Therefore, increasing the contrast of underwater images and videos remains a critical research problem that has attracted considerable attention in recent years.
Underwater image datasets are primarily used for model testing rather than training. Although many underwater image datasets exist, most contain only a limited number of images. Therefore, a larger, more comprehensive dataset suitable for training underwater image enhancement models is needed.
Evaluation metrics must be developed to consider more features in underwater videos and images, such as texture, noise, and depth estimation.
Lightweight instruments and tools must be developed to capture underwater images in challenging conditions.
The computational efficiency and robustness of underwater imaging methods must be improved. The desired methods should adapt to diverse underwater conditions, and effective strategies for different types of underwater applications should be developed. For recovering realistic scenes, the fusion of restoration and enhancement techniques improves the computational efficiency of underwater imaging; however, computing the two main parameters of the imaging model (the background light and the transmission map) is time-consuming. Conversely, IFM-free methods can improve image quality by redistributing the pixel values to produce more natural color distributions.
Deep learning techniques, such as GANs for white balancing and RNNs for increasing detail and reducing noise, should be further exploited for underwater image enhancement. Learning-based underwater image enhancement methods depend heavily on datasets containing large numbers of paired reference images. Therefore, compiling a public benchmark dataset of diverse hazy underwater images and their enhanced references is essential.
In the future, high-level tasks such as target detection under degraded visibility should be used to evaluate underwater image enhancement methods. Current underwater imaging methods focus on improving perceptual quality but ignore whether the enhanced images actually increase the accuracy of high-level analysis such as classification and detection. Therefore, the relationship between low-level underwater image enhancement and high-level classification and detection should be investigated further.
The methods for enhancing deep-sea underwater images differ from those used in shallow-water environments. Natural light is almost completely absorbed below 1000 m, so the images are dominated by artificial light sources. Existing underwater image restoration and enhancement methods cannot recover such deep-sea images well. Therefore, to improve image quality and reduce halo effects, a new and effective deep-sea imaging model is required that addresses uneven illumination, light attenuation, scattering interference, and low brightness.
12. Conclusion
This paper presents an extensive survey of underwater image enhancement and restoration studies. The background of the underwater environment is described, and the latest categories and classifications of underwater image enhancement and restoration techniques are presented and elucidated. The limitations encountered in this environment are listed. Existing underwater datasets are classified, discussed, and compared from various aspects, and the evaluation metrics are presented and described. Underwater images from the UIEB dataset are experimentally evaluated for the qualitative, quantitative, and computational-time assessment of different enhancement and restoration techniques. Recent and essential applications of underwater image enhancement and restoration are described. Although many underwater image restoration and enhancement techniques are available, none can improve underwater images captured in all environments and at all depths, and the computational complexity of these techniques should be reduced. Thus, several approaches that should be investigated in future research are highlighted.
Author Contributions
All researchers participated in the research as follows: conceptualization, Y.M.A., N.A.N., S.E., T.A., and M.E.; methodology, Y.M.A., N.A.N., S.E., T.A., and M.E.; software, Y.M.A., N.A.N., S.E., T.A., and M.E.; validation, Y.M.A., N.A.N., S.E., T.A., and M.E.; formal analysis, Y.M.A., N.A.N., S.E., T.A., and M.E.; investigation, Y.M.A., N.A.N., S.E., T.A., and M.E.; resources, Y.M.A., N.A.N., S.E., T.A., and M.E.; data curation, Y.M.A., N.A.N., S.E., T.A., and M.E.; writing—original draft preparation, Y.M.A., N.A.N., S.E., T.A., and M.E.; writing—review and editing, Y.M.A., N.A.N., S.E., T.A., and M.E.; visualization, Y.M.A., N.A.N., S.E., T.A., and M.E.; supervision, N.A.N., S.E., T.A., and M.E.; project administration, N.A.N., S.E., T.A., and M.E. All authors have read and agreed to the published version of the manuscript.
Funding
This research was supported by the MSIT (Ministry of Science and ICT), Korea, under the ICT Creative Consilience Program (IITP-2021-2020-0-01821) supervised by the IITP (Institute for Information & Communications Technology Planning & Evaluation), and the National Research Foundation of Korea (NRF) grant funded by the Korean government (MSIT) (No. 2021R1A2C1011198).
Institutional Review Board Statement
Not applicable.
Informed Consent Statement
Not applicable.
Data Availability Statement
The datasets used in the experiments are the underwater image datasets described in Section 7.
Acknowledgments
This research was supported by the MSIT (Ministry of Science and ICT), Korea, under the ICT Creative Consilience Program (IITP-2021-2020-0-01821) supervised by the IITP (Institute for Information & Communications Technology Planning & Evaluation), and the National Research Foundation of Korea (NRF) grant funded by the Korean government (MSIT) (No. 2021R1A2C1011198).
Conflicts of Interest
The authors declare no conflict of interest.
Abbreviations
The abbreviations used in this paper are listed in
Table 1.
References
- McLellan, B.C. Sustainability assessment of deep ocean resources. Procedia Environmental Sciences 2015, 28, 502–508. [Google Scholar] [CrossRef]
- Lu, H.; Wang, D.; Li, Y.; Li, J.; Li, X.; Kim, H.; Serikawa, S.; Humar, I. CONet: A cognitive ocean network. IEEE Wireless Communications 2019, 26, 90–96. [Google Scholar] [CrossRef]
- Jian, M.; Liu, X.; Luo, H.; Lu, X.; Yu, H.; Dong, J. Underwater image processing and analysis: A review. Signal Processing: Image Communication 2020, p. 116088.
- Krishnapriya, T.; Kunju, N. Underwater Image Processing using Hybrid Techniques. In Proceedings of the 2019 1st International Conference on Innovations in Information and Communication Technology (ICIICT). IEEE; 2019; pp. 1–4. [Google Scholar]
- Dharwadkar, N.V.; Yadav, A.M. Survey on Techniques in Improving Quality of Underwater Imaging. In Computer Networks and Inventive Communication Technologies; Springer, 2021; pp. 243–256.
- Zhang, W.; Dong, L.; Pan, X.; Zou, P.; Qin, L.; Xu, W. A survey of restoration and enhancement for underwater images. IEEE Access 2019, 7, 182259–182279. [Google Scholar] [CrossRef]
- Jaffe, J.S. Underwater optical imaging: the past, the present, and the prospects. IEEE Journal of Oceanic Engineering 2014, 40, 683–700. [Google Scholar] [CrossRef]
- Fan, J.; Wang, X.; Zhou, C.; Ou, Y.; Jing, F.; Hou, Z. Development, calibration, and image processing of underwater structured light vision system: A survey. IEEE Transactions on Instrumentation and Measurement 2023, 72, 1–18. [Google Scholar] [CrossRef]
- Singh, N.; Bhat, A. A systematic review of the methodologies for the processing and enhancement of the underwater images. Multimedia Tools and Applications 2023, pp. 1–26.
- Ahn, J.; Yasukawa, S.; Sonoda, T.; Ura, T.; Ishii, K. Enhancement of deep-sea floor images obtained by an underwater vehicle and its evaluation by crab recognition. Journal of Marine Science and Technology 2017, 22, 758–770. [Google Scholar] [CrossRef]
- Johnsen, G.; Ludvigsen, M.; Sørensen, A.; Aas, L.M.S. The use of underwater hyperspectral imaging deployed on remotely operated vehicles-methods and applications. IFAC-PapersOnLine 2016, 49, 476–481. [Google Scholar] [CrossRef]
- Lu, H.; Li, Y.; Zhang, Y.; Chen, M.; Serikawa, S.; Kim, H. Underwater optical image processing: a comprehensive review. Mobile networks and applications 2017, 22, 1204–1211. [Google Scholar] [CrossRef]
- Schettini, R.; Corchs, S. Underwater image processing: state of the art of restoration and image enhancement methods. EURASIP Journal on Advances in Signal Processing 2010, 2010, 1–14. [Google Scholar] [CrossRef]
- Lu, H.; Li, Y.; Serikawa, S. Computer vision for ocean observing. In Artificial Intelligence and Computer Vision; Springer, 2017; pp. 1–16.
- Wang, Y.; Song, W.; Fortino, G.; Qi, L.Z.; Zhang, W.; Liotta, A. An experimental-based review of image enhancement and image restoration methods for underwater imaging. IEEE Access 2019, 7, 140233–140251. [Google Scholar] [CrossRef]
- Sahu, P.; Gupta, N.; Sharma, N. A survey on underwater image enhancement techniques. International Journal of Computer Applications 2014, 87. [Google Scholar] [CrossRef]
- Han, M.; Lyu, Z.; Qiu, T.; Xu, M. A review on intelligence dehazing and color restoration for underwater images. IEEE Transactions on Systems, Man, and Cybernetics: Systems 2018, 50, 1820–1832. [Google Scholar] [CrossRef]
- Liu, R.; Fan, X.; Zhu, M.; Hou, M.; Luo, Z. Real-world underwater enhancement: Challenges, benchmarks, and solutions under natural light. IEEE Transactions on Circuits and Systems for Video Technology 2020, 30, 4861–4875. [Google Scholar] [CrossRef]
- Almutiry, O.; Iqbal, K.; Hussain, S.; Mahmood, A.; Dhahri, H. Underwater images contrast enhancement and its challenges: a survey. Multimedia Tools and Applications 2021, pp. 1–26.
- Raveendran, S.; Patil, M.D.; Birajdar, G.K. Underwater image enhancement: a comprehensive review, recent trends, challenges and applications. Artificial Intelligence Review 2021, pp. 1–55.
- Zhou, J.; Wei, X.; Shi, J.; Chu, W.; Lin, Y. Underwater image enhancement via two-level wavelet decomposition maximum brightness color restoration and edge refinement histogram stretching. Optics Express 2022, 30, 17290–17306. [Google Scholar] [CrossRef]
- Papadopoulos, C.; Papaioannou, G. Realistic real-time underwater caustics and godrays. In Proc. GraphiCon, 2009; Vol. 9, pp. 89–95. [Google Scholar]
- Sedlazeck, A.; Koch, R. Simulating deep sea underwater images using physical models for light attenuation, scattering, and refraction 2011.
- Yang, M.; Hu, J.; Li, C.; Rohde, G.; Du, Y.; Hu, K. An in-depth survey of underwater image enhancement and restoration. IEEE Access 2019, 7, 123638–123657. [Google Scholar] [CrossRef]
- Anwar, S.; Li, C. Diving deeper into underwater image enhancement: A survey. Signal Processing: Image Communication 2020, 89, 115978. [Google Scholar] [CrossRef]
- Vasamsetti, S.; Mittal, N.; Neelapu, B.C.; Sardana, H.K. Wavelet based perspective on variational enhancement technique for underwater imagery. Ocean Engineering 2017, 141, 88–100. [Google Scholar] [CrossRef]
- Chen, Z.; Wang, H.; Shen, J.; Li, X.; Xu, L. Region-specialized underwater image restoration in inhomogeneous optical environments. Optik 2014, 125, 2090–2098. [Google Scholar] [CrossRef]
- Berman, D.; Levy, D.; Avidan, S.; Treibitz, T. Underwater single image color restoration using haze-lines and a new quantitative dataset. IEEE transactions on pattern analysis and machine intelligence 2020. [Google Scholar] [CrossRef]
- Schechner, Y.Y.; Karpel, N. Recovery of underwater visibility and structure by polarization analysis. IEEE Journal of oceanic engineering 2005, 30, 570–587. [Google Scholar] [CrossRef]
- He, K.; Sun, J.; Tang, X. Single Image Haze Removal Using Dark Channel Prior. IEEE transactions on pattern analysis and machine intelligence 2011, 33, 2341–2353. [Google Scholar] [PubMed]
- Chao, L.; Wang, M. Removal of water scattering. In Proceedings of the 2010 2nd international conference on computer engineering and technology. IEEE, 2010; Vol. 2, pp. 2–35. [Google Scholar]
- Wu, X.; Li, H. A simple and comprehensive model for underwater image restoration. In Proceedings of the 2013 IEEE International Conference on Information and Automation (ICIA). IEEE; 2013; pp. 699–704. [Google Scholar]
- Chiang, J.Y.; Chen, Y.C. Underwater image enhancement by wavelength compensation and dehazing. IEEE transactions on image processing 2011, 21, 1756–1769. [Google Scholar] [CrossRef] [PubMed]
- Ancuti, C.O.; Ancuti, C.; De Vleeschouwer, C.; Bekaert, P. Color balance and fusion for underwater image enhancement. IEEE Transactions on image processing 2017, 27, 379–393. [Google Scholar] [CrossRef] [PubMed]
- Chang, H.H.; Cheng, C.Y.; Sung, C.C. Single underwater image restoration based on depth estimation and transmission compensation. IEEE Journal of Oceanic Engineering 2018, 44, 1130–1149. [Google Scholar] [CrossRef]
- Cronin, T.W.; Marshall, J. Patterns and properties of polarized light in air and water. Philosophical Transactions of the Royal Society B: Biological Sciences 2011, 366, 619–626. [Google Scholar] [CrossRef]
- Schechner, Y.Y.; Narasimhan, S.G.; Nayar, S.K. Polarization-based vision through haze. Applied optics 2003, 42, 511–525. [Google Scholar] [CrossRef]
- Li, Y.; Ruan, R.; Mi, Z.; Shen, X.; Gao, T.; Fu, X. An underwater image restoration based on global polarization effects of underwater scene. Optics and Lasers in Engineering 2023, 165, 107550. [Google Scholar] [CrossRef]
- Huang, B.; Liu, T.; Hu, H.; Han, J.; Yu, M. Underwater image recovery considering polarization effects of objects. Optics express 2016, 24, 9826–9838. [Google Scholar] [CrossRef]
- Hu, H.; Zhao, L.; Huang, B.; Li, X.; Wang, H.; Liu, T. Enhancing visibility of polarimetric underwater image by transmittance correction. IEEE Photonics Journal 2017, 9, 1–10. [Google Scholar] [CrossRef]
- Han, P.; Liu, F.; Yang, K.; Ma, J.; Li, J.; Shao, X. Active underwater descattering and image recovery. Applied optics 2017, 56, 6631–6638. [Google Scholar] [CrossRef]
- Hu, H.; Zhao, L.; Li, X.; Wang, H.; Yang, J.; Li, K.; Liu, T. Polarimetric image recovery in turbid media employing circularly polarized light. Optics Express 2018, 26, 25047–25059. [Google Scholar] [CrossRef] [PubMed]
- Hu, H.; Zhao, L.; Li, X.; Wang, H.; Liu, T. Underwater image recovery under the nonuniform optical field based on polarimetric imaging. IEEE Photonics Journal 2018, 10, 1–9. [Google Scholar] [CrossRef]
- Sánchez-Ferreira, C.; Coelho, L.; Ayala, H.V.; Farias, M.C.; Llanos, C.H. Bio-inspired optimization algorithms for real underwater image restoration. Signal Processing: Image Communication 2019, 77, 49–65. [Google Scholar] [CrossRef]
- Yang, L.; Liang, J.; Zhang, W.; Ju, H.; Ren, L.; Shao, X. Underwater polarimetric imaging for visibility enhancement utilizing active unpolarized illumination. Optics Communications 2019, 438, 96–101. [Google Scholar] [CrossRef]
- Wang, J.; Wan, M.; Gu, G.; Qian, W.; Ren, K.; Huang, Q.; Chen, Q. Periodic integration-based polarization differential imaging for underwater image restoration. Optics and Lasers in Engineering 2022, 149, 106785. [Google Scholar] [CrossRef]
- Jin, H.; Qian, L.; Gao, J.; Fan, Z.; Chen, J. Polarimetric Calculation Method of Global Pixel for Underwater Image Restoration. IEEE Photonics Journal 2020, 13, 1–15. [Google Scholar] [CrossRef]
- Fu, X.; Liang, Z.; Ding, X.; Yu, X.; Wang, Y. Image descattering and absorption compensation in underwater polarimetric imaging. Optics and Lasers in Engineering 2020, 132, 106115. [Google Scholar] [CrossRef]
- Bruno, F.; Bianco, G.; Muzzupappa, M.; Barone, S.; Razionale, A.V. Experimentation of structured light and stereo vision for underwater 3D reconstruction. ISPRS Journal of Photogrammetry and Remote Sensing 2011, 66, 508–518. [Google Scholar] [CrossRef]
- Roser, M.; Dunbabin, M.; Geiger, A. Simultaneous underwater visibility assessment, enhancement and improved stereo. In Proceedings of the 2014 IEEE International Conference on Robotics and Automation (ICRA). IEEE; 2014; pp. 3840–3847. [Google Scholar]
- Lin, Y.H.; Chen, S.Y.; Tsou, C.H. Development of an image processing module for autonomous underwater vehicles through integration of visual recognition with stereoscopic image reconstruction. Journal of Marine Science and Engineering 2019, 7, 107. [Google Scholar] [CrossRef]
- Łuczyński, T.; Łuczyński, P.; Pehle, L.; Wirsum, M.; Birk, A. Model based design of a stereo vision system for intelligent deep-sea operations. Measurement 2019, 144, 298–310. [Google Scholar] [CrossRef]
- Tan, C.; Sluzek, A.; GL, G.S.; Jiang, T. Range gated imaging system for underwater robotic vehicle. In Proceedings of the OCEANS 2006-Asia Pacific. IEEE; 2006; pp. 1–6. [Google Scholar]
- Li, H.; Wang, X.; Bai, T.; Jin, W.; Huang, Y.; Ding, K. Speckle noise suppression of range gated underwater imaging system. In Proceedings of the Applications of Digital Image Processing XXXII. SPIE, 2009; Vol. 7443, pp. 641–648. [Google Scholar]
- Liu, W.; Li, Q.; Hao, G.y.; Wu, G.j.; Lv, P. Experimental study on underwater range-gated imaging system pulse and gate control coordination strategy. In Proceedings of the Ocean Optics and Information Technology. SPIE, 2018; Vol. 10850, pp. 201–212. [Google Scholar]
- Wang, M.; Wang, X.; Sun, L.; Yang, Y.; Zhou, Y. Underwater 3D deblurring-gated range-intensity correlation imaging. Optics Letters 2020, 45, 1455–1458. [Google Scholar] [CrossRef] [PubMed]
- Wang, M.; Wang, X.; Zhang, Y.; Sun, L.; Lei, P.; Yang, Y.; Chen, J.; He, J.; Zhou, Y. Range-intensity-profile prior dehazing method for underwater range-gated imaging. Optics Express 2021, 29, 7630–7640. [Google Scholar] [CrossRef] [PubMed]
- McGlamery, B. A computer model for underwater camera systems. In Proceedings of the Ocean Optics VI. International Society for Optics and Photonics, 1980; Vol. 208, pp. 221–231. [Google Scholar]
- Shen, Y.; Zhao, C.; Liu, Y.; Wang, S.; Huang, F. Underwater optical imaging: key technologies and applications review. IEEE Access 2021. [Google Scholar] [CrossRef]
- Trucco, E.; Olmos-Antillon, A.T. Self-tuning underwater image restoration. IEEE Journal of Oceanic Engineering 2006, 31, 511–519. [Google Scholar] [CrossRef]
- Jaffe, J.S. Computer modeling and the design of optimal underwater imaging systems. IEEE Journal of Oceanic Engineering 1990, 15, 101–111. [Google Scholar] [CrossRef]
- Hou, W.; Gray, D.J.; Weidemann, A.D.; Fournier, G.R.; Forand, J. Automated underwater image restoration and retrieval of related optical properties. In Proceedings of the 2007 IEEE international geoscience and remote sensing symposium. IEEE; 2007; pp. 1889–1892. [Google Scholar]
- Boffety, M.; Galland, F.; Allais, A.G. Color image simulation for underwater optics. Applied optics 2012, 51, 5633–5642. [Google Scholar] [CrossRef]
- Wen, H.; Tian, Y.; Huang, T.; Gao, W. Single underwater image enhancement with a new optical model. In Proceedings of the 2013 IEEE International Symposium on Circuits and Systems (ISCAS). IEEE; 2013; pp. 753–756. [Google Scholar]
- Ahn, J.; Yasukawa, S.; Sonoda, T.; Nishida, Y.; Ishii, K.; Ura, T. An optical image transmission system for deep sea creature sampling missions using autonomous underwater vehicle. IEEE Journal of Oceanic Engineering 2018, 45, 350–361. [Google Scholar] [CrossRef]
- He, K.; Sun, J.; Tang, X. Single image haze removal using dark channel prior. IEEE transactions on pattern analysis and machine intelligence 2010, 33, 2341–2353. [Google Scholar]
- Fayaz, S.; Parah, S.A.; Qureshi, G. Efficient underwater image restoration utilizing modified dark channel prior. Multimedia Tools and Applications 2023, 82, 14731–14753. [Google Scholar] [CrossRef]
- Drews, P.; Nascimento, E.; Moraes, F.; Botelho, S.; Campos, M. Transmission estimation in underwater single images. In Proceedings of the IEEE International Conference on Computer Vision Workshops, 2013; pp. 825–830.
- Drews, P.L.; Nascimento, E.R.; Botelho, S.S.; Campos, M.F.M. Underwater depth estimation and image restoration based on single images. IEEE computer graphics and applications 2016, 36, 24–35. [Google Scholar] [CrossRef]
- Carlevaris-Bianco, N.; Mohan, A.; Eustice, R.M. Initial results in underwater single image dehazing. In Proceedings of the Oceans 2010 Mts/IEEE Seattle. IEEE; 2010; pp. 1–8. [Google Scholar]
- Galdran, A.; Pardo, D.; Picón, A.; Alvarez-Gila, A. Automatic red-channel underwater image restoration. Journal of Visual Communication and Image Representation 2015, 26, 132–145. [Google Scholar] [CrossRef]
- Song, W.; Wang, Y.; Huang, D.; Tjondronegoro, D. A rapid scene depth estimation model based on underwater light attenuation prior for underwater image restoration. In Proceedings of the Pacific Rim Conference on Multimedia. Springer; 2018; pp. 678–688. [Google Scholar]
- Yang, H.Y.; Chen, P.Y.; Huang, C.C.; Zhuang, Y.Z.; Shiau, Y.H. Low complexity underwater image enhancement based on dark channel prior. In Proceedings of the 2011 Second International Conference on Innovations in Bio-inspired Computing and Applications. IEEE; 2011; pp. 17–20. [Google Scholar]
- Serikawa, S.; Lu, H. Underwater image dehazing using joint trilateral filter. Computers & Electrical Engineering 2014, 40, 41–50. [Google Scholar]
- Peng, Y.T.; Zhao, X.; Cosman, P.C. Single underwater image enhancement using depth estimation based on blurriness. In Proceedings of the 2015 IEEE International Conference on Image Processing (ICIP). IEEE; 2015; pp. 4952–4956. [Google Scholar]
- Lu, H.; Li, Y.; Zhang, L.; Serikawa, S. Contrast enhancement for images in turbid water. JOSA A 2015, 32, 886–893. [Google Scholar] [CrossRef]
- Zhao, X.; Jin, T.; Qu, S. Deriving inherent optical properties from background color and underwater image enhancement. Ocean Engineering 2015, 94, 163–172. [Google Scholar] [CrossRef]
- Li, C.; Quo, J.; Pang, Y.; Chen, S.; Wang, J. Single underwater image restoration by blue-green channels dehazing and red channel correction. In Proceedings of the 2016 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). IEEE; 2016; pp. 1731–1735. [Google Scholar]
- Li, C.Y.; Guo, J.C.; Cong, R.M.; Pang, Y.W.; Wang, B. Underwater image enhancement by dehazing with minimum information loss and histogram distribution prior. IEEE Transactions on Image Processing 2016, 25, 5664–5677. [Google Scholar] [CrossRef]
- Li, C.; Guo, J.; Chen, S.; Tang, Y.; Pang, Y.; Wang, J. Underwater image restoration based on minimum information loss principle and optical properties of underwater imaging. In Proceedings of the 2016 IEEE International Conference on Image Processing (ICIP). IEEE; 2016; pp. 1993–1997. [Google Scholar]
- Peng, Y.T.; Cosman, P.C. Underwater image restoration based on image blurriness and light absorption. IEEE transactions on image processing 2017, 26, 1579–1594. [Google Scholar] [CrossRef]
- Peng, Y.T.; Cao, K.; Cosman, P.C. Generalization of the dark channel prior for single image restoration. IEEE Transactions on Image Processing 2018, 27, 2856–2868. [Google Scholar] [CrossRef]
- Wang, N.; Zheng, H.; Zheng, B. Underwater image restoration via maximum attenuation identification. IEEE Access 2017, 5, 18941–18952. [Google Scholar] [CrossRef]
- Ding, X.; Wang, Y.; Zhang, J.; Fu, X. Underwater image dehaze using scene depth estimation with adaptive color correction. In Proceedings of the OCEANS 2017-Aberdeen. IEEE; 2017; pp. 1–5. [Google Scholar]
- Cao, K.; Peng, Y.T.; Cosman, P.C. Underwater image restoration using deep networks to estimate background light and scene depth. In Proceedings of the 2018 IEEE Southwest Symposium on Image Analysis and Interpretation (SSIAI). IEEE; 2018; pp. 1–4. [Google Scholar]
- Barbosa, W.V.; Amaral, H.G.; Rocha, T.L.; Nascimento, E.R. Visual-quality-driven learning for underwater vision enhancement. In Proceedings of the 2018 25th IEEE International Conference on Image Processing (ICIP). IEEE; 2018; pp. 3933–3937. [Google Scholar]
- Hou, M.; Liu, R.; Fan, X.; Luo, Z. Joint residual learning for underwater image enhancement. In Proceedings of the 2018 25th IEEE International Conference on Image Processing (ICIP). IEEE; 2018; pp. 4043–4047. [Google Scholar]
- Wang, Z.; Shen, L.; Xu, M.; Yu, M.; Wang, K.; Lin, Y. Domain adaptation for underwater image enhancement. IEEE Transactions on Image Processing 2023, 32, 1442–1457. [Google Scholar] [CrossRef]
- Xu, S.; Zhang, M.; Song, W.; Mei, H.; He, Q.; Liotta, A. A systematic review and analysis of deep learning-based underwater object detection. Neurocomputing 2023. [Google Scholar] [CrossRef]
- Xu, Y.; Wen, J.; Fei, L.; Zhang, Z. Review of video and image defogging algorithms and related studies on image restoration and enhancement. Ieee Access 2015, 4, 165–188. [Google Scholar] [CrossRef]
- Abdullah-Al-Wadud, M.; Kabir, M.H.; Dewan, M.A.A.; Chae, O. A dynamic histogram equalization for image contrast enhancement. IEEE Transactions on Consumer Electronics 2007, 53, 593–600. [Google Scholar] [CrossRef]
- Kim, T.; Paik, J. Adaptive contrast enhancement using gain-controllable clipped histogram equalization. IEEE Transactions on Consumer Electronics 2008, 54, 1803–1810. [Google Scholar] [CrossRef]
- Ancuti, C.; Ancuti, C.O.; Haber, T.; Bekaert, P. Enhancing underwater images and videos by fusion. In Proceedings of the 2012 IEEE conference on computer vision and pattern recognition. IEEE; 2012; pp. 81–88. [Google Scholar]
- Liu, X.; Zhong, G.; Liu, C.; Dong, J. Underwater image colour constancy based on DSNMF. IET Image Processing 2017, 11, 38–43. [Google Scholar] [CrossRef]
- Torres-Méndez, L.A.; Dudek, G. Color correction of underwater images for aquatic robot inspection. In Proceedings of the International Workshop on Energy Minimization Methods in Computer Vision and Pattern Recognition. Springer; 2005; pp. 60–73. [Google Scholar]
- Iqbal, K.; Salam, R.A.; Osman, A.; Talib, A.Z. Underwater Image Enhancement Using an Integrated Colour Model. IAENG International Journal of computer science 2007, 34. [Google Scholar]
- Ghani, A.S.A.; Isa, N.A.M. Automatic system for improving underwater image contrast and color through recursive adaptive histogram modification. Computers and electronics in agriculture 2017, 141, 181–195. [Google Scholar] [CrossRef]
- Hitam, M.S.; Awalludin, E.A.; Yussof, W.N.J.H.W.; Bachok, Z. Mixture contrast limited adaptive histogram equalization for underwater image enhancement. In Proceedings of the 2013 International conference on computer applications technology (ICCAT). IEEE; 2013; pp. 1–5. [Google Scholar]
- Huang, D.; Wang, Y.; Song, W.; Sequeira, J.; Mavromatis, S. Shallow-water image enhancement using relative global histogram stretching based on adaptive parameter acquisition. In Proceedings of the International conference on multimedia modeling. Springer; 2018; pp. 453–465. [Google Scholar]
- Agaian, S.S.; Panetta, K.; Grigoryan, A.M. Transform-based image enhancement algorithms with performance measure. IEEE Transactions on image processing 2001, 10, 367–382. [Google Scholar] [CrossRef]
- Asmare, M.H.; Asirvadam, V.S.; Hani, A.F.M. Image enhancement based on contourlet transform. Signal, Image and Video Processing 2015, 9, 1679–1690. [Google Scholar] [CrossRef]
- Panetta, K.; Samani, A.; Agaian, S. A robust no-reference, no-parameter, transform domain image quality metric for evaluating the quality of color images. IEEE Access 2018, 6, 10979–10985. [Google Scholar] [CrossRef]
- Wang, Y.; Ding, X.; Wang, R.; Zhang, J.; Fu, X. Fusion-based underwater image enhancement by wavelet decomposition. In Proceedings of the 2017 IEEE International Conference on Industrial Technology (ICIT). IEEE; 2017; pp. 1013–1018. [Google Scholar]
- Grigoryan, A.M.; Agaian, S.S. Color image enhancement via combine homomorphic ratio and histogram equalization approaches: Using underwater images as illustrative examples. International Journal on Future Revolution in Computer Science & Communication Engineering 2018, 4, 36–47. [Google Scholar]
- Kaur, G.; Kaur, M. A study of transform domain based image enhancement techniques. International Journal of Computer Applications 2016, 152. [Google Scholar] [CrossRef]
- Petit, F.; Capelle-Laizé, A.S.; Carré, P. Underwater image enhancement by attenuation inversionwith quaternions. In Proceedings of the 2009 IEEE International Conference on Acoustics, Speech and Signal Processing. IEEE; 2009; pp. 1177–1180. [Google Scholar]
- Cheng, C.Y.; Sung, C.C.; Chang, H.H. Underwater image restoration by red-dark channel prior and point spread function deconvolution. In Proceedings of the 2015 IEEE international conference on signal and image processing applications (ICSIPA). IEEE; 2015; pp. 110–115. [Google Scholar]
- Feifei, S.; Xuemeng, Z.; Guoyu, W. An approach for underwater image denoising via wavelet decomposition and high-pass filter. In Proceedings of the 2011 Fourth International Conference on Intelligent Computation Technology and Automation. IEEE, 2011; Vol. 2, pp. 417–420. [Google Scholar]
- Ghani, A.S.A. Image contrast enhancement using an integration of recursive-overlapped contrast limited adaptive histogram specification and dual-image wavelet fusion for the high visibility of deep underwater image. Ocean Engineering 2018, 162, 224–238. [Google Scholar] [CrossRef]
- Priyadharsini, R.; Sharmila, T.S.; Rajendran, V. A wavelet transform based contrast enhancement method for underwater acoustic images. Multidimensional Systems and Signal Processing 2018, 29, 1845–1859. [Google Scholar] [CrossRef]
- Joshi, K.; Kamathe, R. Quantification of retinex in enhancement of weather degraded images. In Proceedings of the 2008 International Conference on Audio, Language and Image Processing. IEEE; 2008; pp. 1229–1233. [Google Scholar]
- Fu, X.; Zhuang, P.; Huang, Y.; Liao, Y.; Zhang, X.P.; Ding, X. A retinex-based enhancing approach for single underwater image. In Proceedings of the 2014 IEEE international conference on image processing (ICIP). IEEE; 2014; pp. 4572–4576. [Google Scholar]
- Zhang, S.; Wang, T.; Dong, J.; Yu, H. Underwater image enhancement via extended multi-scale Retinex. Neurocomputing 2017, 245, 1–9. [Google Scholar] [CrossRef]
- Yong-xin, W.; Ming, D.; Chuang, H. Underwater image enhancement algorithm based on iterative histogram equalization with conventional light source. Acta Photonica Sinica 2018, 47, 1101002. [Google Scholar] [CrossRef]
- Zhang, W.; Dong, L.; Pan, X.; Zhou, J.; Qin, L.; Xu, W. Single image defogging based on multi-channel convolutional MSRCR. IEEE Access 2019, 7, 72492–72504. [Google Scholar] [CrossRef]
- Tang, C.; von Lukas, U.F.; Vahl, M.; Wang, S.; Wang, Y.; Tan, M. Efficient underwater image and video enhancement based on Retinex. Signal, Image and Video Processing 2019, 13, 1011–1018. [Google Scholar] [CrossRef]
- Zhang, W.; Pan, X.; Xie, X.; Li, L.; Wang, Z.; Han, C. Color correction and adaptive contrast enhancement for underwater image enhancement. Computers & Electrical Engineering 2021, 91, 106981. [Google Scholar]
- Dixit, S.; Tiwari, S.K.; Sharma, P. Underwater image enhancement using DCP with ACCLAHE and homomorphism filtering. 2016 International Conference on Signal Processing, Communication, Power and Embedded System (SCOPES). IEEE, 2016; pp. 2042–2046. [Google Scholar]
- Wang, Y.; Chang, R.; He, B.; Liu, X.; Guo, J.H.; Lendasse, A.; et al. Underwater image enhancement strategy with virtual retina model and image quality assessment. In Proceedings of the OCEANS 2016 MTS/IEEE Monterey. IEEE; 2016; pp. 1–5. [Google Scholar]
- Bindhu, A.; Maheswari, O.U. Under water image enhancement based on linear image interpolation and limited image enhancer techniques. In Proceedings of the 2017 Fourth International Conference on Signal Processing, Communication and Networking (ICSCN). IEEE; 2017; pp. 1–5. [Google Scholar]
- Guraksin, G.E.; Deperlioglu, O.; Kose, U. A novel underwater image enhancement approach with wavelet transform supported by differential evolution algorithm. In Nature Inspired Optimization Techniques for Image Processing Applications; Springer, 2019; pp. 255–278.
- Sankpal, S.; Deshpande, S. Underwater image enhancement by rayleigh stretching with adaptive scale parameter and energy correction. In Computing, Communication and Signal Processing; Springer, 2019; pp. 935–947.
- Azmi, K.Z.M.; Ghani, A.S.A.; Yusof, Z.M.; Ibrahim, Z. Natural-based underwater image color enhancement through fusion of swarm-intelligence algorithm. Applied Soft Computing 2019, 85, 105810. [Google Scholar] [CrossRef]
- Ren, W.; Pan, J.; Zhang, H.; Cao, X.; Yang, M.H. Single image dehazing via multi-scale convolutional neural networks with holistic edges. International Journal of Computer Vision 2020, 128, 240–259. [Google Scholar] [CrossRef]
- Li, X.; Ye, M.; Liu, Y.; Zhu, C. Adaptive deep convolutional neural networks for scene-specific object detection. IEEE Transactions on Circuits and Systems for Video Technology 2017, 29, 2538–2551. [Google Scholar] [CrossRef]
- Pan, X.; Li, L.; Yang, H.; Liu, Z.; Yang, J.; Zhao, L.; Fan, Y. Accurate segmentation of nuclei in pathological images via sparse reconstruction and deep convolutional networks. Neurocomputing 2017, 229, 88–99. [Google Scholar] [CrossRef]
- Fabbri, C.; Islam, M.J.; Sattar, J. Enhancing underwater imagery using generative adversarial networks. In Proceedings of the 2018 IEEE International Conference on Robotics and Automation (ICRA). IEEE; 2018; pp. 7159–7165. [Google Scholar]
- Anwar, S.; Li, C.; Porikli, F. Deep underwater image enhancement. arXiv preprint arXiv:1807.03528 2018. [Google Scholar]
- Li, C.; Guo, J.; Guo, C. Emerging from water: Underwater image color correction based on weakly supervised color transfer. IEEE Signal processing letters 2018, 25, 323–327. [Google Scholar] [CrossRef]
- Li, C.; Guo, C.; Ren, W.; Cong, R.; Hou, J.; Kwong, S.; Tao, D. An underwater image enhancement benchmark dataset and beyond. IEEE Transactions on Image Processing 2019, 29, 4376–4389. [Google Scholar] [CrossRef]
- Uplavikar, P.M.; Wu, Z.; Wang, Z. All-in-One Underwater Image Enhancement Using Domain-Adversarial Learning. In Proceedings of the CVPR Workshops; 2019; pp. 1–8. [Google Scholar]
- Li, C.; Anwar, S.; Porikli, F. Underwater scene prior inspired deep underwater image and video enhancement. Pattern Recognition 2020, 98, 107038. [Google Scholar] [CrossRef]
- Hu, K.; Zhang, Y.; Weng, C.; Wang, P.; Deng, Z.; Liu, Y. An underwater image enhancement algorithm based on generative adversarial network and natural image quality evaluation index. Journal of Marine Science and Engineering 2021, 9, 691. [Google Scholar] [CrossRef]
- Tang, P.; Li, L.; Xue, Y.; Lv, M.; Jia, Z.; Ma, H. Real-World Underwater Image Enhancement Based on Attention U-Net. Journal of Marine Science and Engineering 2023, 11, 662. [Google Scholar] [CrossRef]
- Gao, Y.; Li, H.; Wen, S. Restoration and enhancement of underwater images based on bright channel prior. Mathematical Problems in Engineering 2016, 2016. [Google Scholar] [CrossRef]
- Zhou, J.; Zhang, D.; Zhang, W. Adaptive histogram fusion-based colour restoration and enhancement for underwater images. International Journal of Security and Networks 2021, 16, 49–59. [Google Scholar] [CrossRef]
- Luo, W.; Duan, S.; Zheng, J. Underwater Image Restoration and Enhancement Based on a Fusion Algorithm With Color Balance, Contrast Optimization, and Histogram Stretching. IEEE Access 2021, 9, 31792–31804. [Google Scholar] [CrossRef]
- Dewangan, S.K. Visual quality restoration & enhancement of underwater images using HSV filter analysis. In Proceedings of the 2017 International Conference on Trends in Electronics and Informatics (ICEI). IEEE; 2017; pp. 766–772. [Google Scholar]
- Sequeira, G.; Mekkalki, V.; Prabhu, J.; Borkar, S.; Desai, M. Hybrid Approach for Underwater Image Restoration and Enhancement. In Proceedings of the 2021 International Conference on Emerging Smart Computing and Informatics (ESCI). IEEE; 2021; pp. 427–432. [Google Scholar]
- Daway, H.G.; Daway, E.G.; et al. Underwater image enhancement using colour restoration based on YCbCr colour model. In Proceedings of the IOP conference series: materials science and engineering. IOP Publishing, 2019; Vol. 571, p. 012125. [Google Scholar]
- Gupta, E.S.; Kaur, Y. Review of different histogram equalization based contrast enhancement techniques. International Journal of advanced research in computer and communication Engineering 2014, 3. [Google Scholar]
- Coltuc, D.; Bolon, P.; Chassery, J.M. Exact histogram specification. IEEE Transactions on Image Processing 2006, 15, 1143–1152. [Google Scholar] [CrossRef]
- Shukla, K.N.; Potnis, A.; Dwivedy, P. A review on image enhancement techniques. International Journal of Engineering and Applied Computer Science (IJEACS) 2017, 2, 232–235. [Google Scholar] [CrossRef]
- Puiono, P.; Purnama, I.; Hariadi, M. Color enhancement of underwater coral reef images using contrast limited adaptive histogram equalization (CLAHE) with Rayleigh distribution. In Proceedings of the Proc. 7th ICTS; 2013; pp. 14233–140251. [Google Scholar]
- Garg, D.; Garg, N.K.; Kumar, M. Underwater image enhancement using blending of CLAHE and percentile methodologies. Multimedia Tools and Applications 2018, 77, 26545–26561. [Google Scholar] [CrossRef]
- Wei, Y.; Tao, L. Efficient histogram-based sliding window. In Proceedings of the 2010 IEEE Computer Society Conference on Computer Vision and Pattern Recognition. IEEE; 2010; pp. 3003–3010. [Google Scholar]
- Kim, Y.T. Contrast enhancement using brightness preserving bi-histogram equalization. IEEE transactions on Consumer Electronics 1997, 43, 1–8. [Google Scholar]
- Yang, C.C. Image enhancement by modified contrast-stretching manipulation. Optics & Laser Technology 2006, 38, 196–201. [Google Scholar]
- Demirel, H.; Anbarjafari, G. Image resolution enhancement by using discrete and stationary wavelet decomposition. IEEE transactions on image processing 2010, 20, 1458–1460. [Google Scholar] [CrossRef] [PubMed]
- Ge, M.; Hong, Q.; Zhang, L. A hybrid DCT-CLAHE approach for brightness enhancement of uneven-illumination underwater images. In Proceedings of the 2nd International Conference on Video and Image Processing, 2018; pp. 123–127.
- Kanmani, M.; Narsimhan, V. An image contrast enhancement algorithm for grayscale images using particle swarm optimization. Multimedia Tools and Applications 2018, 77, 23371–23387. [Google Scholar] [CrossRef]
- Chen, J.; Yu, W.; Tian, J.; Chen, L.; Zhou, Z. Image contrast enhancement using an artificial bee colony algorithm. Swarm and Evolutionary Computation 2018, 38, 287–294. [Google Scholar] [CrossRef]
- Hashemi, S.; Kiani, S.; Noroozi, N.; Moghaddam, M.E. An image contrast enhancement method based on genetic algorithm. Pattern Recognition Letters 2010, 31, 1816–1824. [Google Scholar] [CrossRef]
- Chari, V.; Sturm, P. Multiple-view geometry of the refractive plane. In Proceedings of the BMVC 2009-20th British Machine Vision Conference. The British Machine Vision Association (BMVA); 2009; pp. 1–11. [Google Scholar]
- Ishihara, S.; Asano, Y.; Zheng, Y.; Sato, I. Underwater Scene Recovery Using Wavelength-Dependent Refraction of Light. In Proceedings of the 2020 International Conference on 3D Vision (3DV). IEEE; 2020; pp. 32–40. [Google Scholar]
- Chadebecq, F.; Vasconcelos, F.; Lacher, R.; Maneas, E.; Desjardins, A.; Ourselin, S.; Vercauteren, T.; Stoyanov, D. Refractive two-view reconstruction for underwater 3d vision. International Journal of Computer Vision 2020, 128, 1101–1117. [Google Scholar] [CrossRef] [PubMed]
- Sankpal, S.S.; Deshpande, S.S. Nonuniform illumination correction algorithm for underwater images using maximum likelihood estimation method. Journal of Engineering 2016, 2016. [Google Scholar] [CrossRef]
- Cao, X.; Rong, S.; Liu, Y.; Li, T.; Wang, Q.; He, B. NUICNet: non-uniform illumination correction for underwater image using fully convolutional network. IEEE Access 2020, 8, 109989–110002. [Google Scholar] [CrossRef]
- Bazeille, S.; Quidu, I.; Jaulin, L.; Malkasse, J.P. Automatic underwater image pre-processing. In Proceedings of the CMM’06, 2006. [Google Scholar]
- Hu, Z.; Yang, M.H. Fast Non-uniform Deblurring using Constrained Camera Pose Subspace. In Proceedings of the BMVC, 2012; Vol. 2, p. 4. [Google Scholar]
- Xu, L.; Jia, J. Two-phase kernel estimation for robust motion deblurring. In Proceedings of the European conference on computer vision. Springer; 2010; pp. 157–170. [Google Scholar]
- Xu, L.; Zheng, S.; Jia, J. Unnatural L0 sparse representation for natural image deblurring. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2013; pp. 1107–1114.
- Raj, M.V.; Murugan, S.S. Motion Deblurring Analysis for Underwater Image Restoration. In Proceedings of the Journal of Physics: Conference Series. IOP Publishing, 2021; Vol. 1911, p. 012028. [Google Scholar]
- Abas, P.E.; De Silva, L.C.; et al. Review of underwater image restoration algorithms. IET Image Processing 2019, 13, 1587–1596. [Google Scholar]
- Ancuti, C.O.; Ancuti, C.; Haber, T.; Bekaert, P. Fusion-based restoration of the underwater images. In Proceedings of the 2011 18th IEEE International Conference on Image Processing. IEEE; 2011; pp. 1557–1560. [Google Scholar]
- Lu, H.; Li, Y.; Xu, X.; Li, J.; Liu, Z.; Li, X.; Yang, J.; Serikawa, S. Underwater image enhancement method using weighted guided trigonometric filtering and artificial light correction. Journal of Visual Communication and Image Representation 2016, 38, 504–516. [Google Scholar] [CrossRef]
- Güraksin, G.E.; Köse, U.; Deperlıoğlu, Ö. Underwater image enhancement based on contrast adjustment via differential evolution algorithm. In Proceedings of the 2016 International Symposium on INnovations in Intelligent SysTems and Applications (INISTA). IEEE; 2016; pp. 1–5. [Google Scholar]
- Jian, S.; Wen, W. Study on underwater image denoising algorithm based on wavelet transform. In Proceedings of the Journal of Physics: Conference Series. IOP Publishing, 2017; Vol. 806, p. 012006. [Google Scholar]
- Li, Y.; Lu, H.; Li, K.C.; Kim, H.; Serikawa, S. Non-uniform de-scattering and de-blurring of underwater images. Mobile Networks and Applications 2018, 23, 352–362. [Google Scholar] [CrossRef]
- Liu, Z.; Yu, Y.; Zhang, K.; Huang, H. Underwater image transmission and blurred image restoration. Optical Engineering 2001, 40, 1125–1131. [Google Scholar] [CrossRef]
- Xu, Y.; Wang, H.; Cooper, G.D.; Rong, S.; Sun, W. Learning-Based Dark and Blurred Underwater Image Restoration. Complexity 2020, 2020. [Google Scholar] [CrossRef]
- Cho, Y.; Kim, A. Visibility enhancement for underwater visual SLAM based on underwater light scattering model. In Proceedings of the 2017 IEEE International Conference on Robotics and Automation (ICRA). IEEE; 2017; pp. 710–717. [Google Scholar]
- Emberton, S.; Chittka, L.; Cavallaro, A. Underwater image and video dehazing with pure haze region segmentation. Computer Vision and Image Understanding 2018, 168, 145–156. [Google Scholar] [CrossRef]
- Biswas, M. Hazy Underwater Image Enhancement based on Contrast and Color improvement using fusion technique. Image Processing & Communications 2017, 22, 31–38. [Google Scholar]
- Liu, R.; Fan, X.; Zhu, M.; Hou, M.; Luo, Z. Real-world underwater enhancement: challenges, benchmarks, and solutions. arXiv preprint arXiv:1901.05320 2019. [Google Scholar]
- Islam, M.J.; Xia, Y.; Sattar, J. Fast underwater image enhancement for improved visual perception. IEEE Robotics and Automation Letters 2020, 5, 3227–3234. [Google Scholar] [CrossRef]
- Li, H.; Li, J.; Wang, W. A fusion adversarial underwater image enhancement network with a public test dataset. arXiv preprint arXiv:1906.06819 2019. [Google Scholar]
- Li, J.; Skinner, K.A.; Eustice, R.M.; Johnson-Roberson, M. WaterGAN: Unsupervised generative network to enable real-time color correction of monocular underwater images. IEEE Robotics and Automation letters 2017, 3, 387–394. [Google Scholar] [CrossRef]
- Oleari, F.; Kallasi, F.; Rizzini, D.L.; Aleotti, J.; Caselli, S. An underwater stereo vision system: From design to deployment and dataset acquisition. In Proceedings of the OCEANS 2015-Genova. IEEE, 2015, pp. 1–6.
- Boom, B.J.; Huang, P.X.; Beyan, C.; Spampinato, C.; Palazzo, S.; He, J.; Beauxis-Aussalet, E.; Lin, S.I.; Chou, H.M.; Nadarajan, G.; et al. Long-term underwater camera surveillance for monitoring and analysis of fish populations. VAIB12 2012. [Google Scholar]
- Cutter, G.; Stierhoff, K.; Zeng, J. Automated detection of rockfish in unconstrained underwater videos using Haar cascades and a new image dataset: labeled fishes in the wild. In Proceedings of the 2015 IEEE Winter Applications and Computer Vision Workshops. IEEE; 2015; pp. 57–62. [Google Scholar]
- Jian, M.; Qi, Q.; Dong, J.; Yin, Y.; Zhang, W.; Lam, K.M. The OUC-vision large-scale underwater image database. In Proceedings of the 2017 IEEE International Conference on Multimedia and Expo (ICME). IEEE; 2017; pp. 1297–1302. [Google Scholar]
- Mohammadi, P.; Ebrahimi-Moghadam, A.; Shirani, S. Subjective and objective quality assessment of image: A survey. arXiv preprint arXiv:1406.7799 2014. [Google Scholar]
- Shigwan, S.S.; Birajdar, G.K. Objective image quality assessment using perceptual distortion for image retargeting. In Proceedings of the 2015 1st International Conference on Next Generation Computing Technologies (NGCT). IEEE; 2015; pp. 955–959. [Google Scholar]
- Tsai, D.Y.; Lee, Y.; Matsuyama, E. Information entropy measure for evaluation of image quality. Journal of Digital Imaging 2008, 21, 338–347. [Google Scholar] [CrossRef] [PubMed]
- Wu, Y.; Zhou, Y.; Saveriades, G.; Agaian, S.; Noonan, J.P.; Natarajan, P. Local Shannon entropy measure with statistical tests for image randomness. Information Sciences 2013, 222, 323–342. [Google Scholar] [CrossRef]
- Wang, Z.; Bovik, A.C. A universal image quality index. IEEE Signal Processing Letters 2002, 9, 81–84. [Google Scholar] [CrossRef]
- Wang, Z.; Bovik, A.C. Modern image quality assessment. Synthesis Lectures on Image, Video, and Multimedia Processing 2006, 2, 1–156. [Google Scholar]
- Wang, Z.; Bovik, A.C.; Sheikh, H.R.; Simoncelli, E.P. Image quality assessment: from error visibility to structural similarity. IEEE Transactions on Image Processing 2004, 13, 600–612. [Google Scholar] [CrossRef]
- Bechara, B.; McMahan, C.A.; Moore, W.S.; Noujeim, M.; Geha, H.; Teixeira, F.B. Contrast-to-noise ratio difference in small field of view cone beam computed tomography machines. Journal of Oral Science 2012, 54, 227–232. [Google Scholar] [CrossRef]
- Jaya, V.; Gopikakumari, R. IEM: a new image enhancement metric for contrast and sharpness measurements. International Journal of Computer Applications 2013, 79. [Google Scholar]
- Chen, S.D.; Ramli, A.R. Minimum mean brightness error bi-histogram equalization in contrast enhancement. IEEE Transactions on Consumer Electronics 2003, 49, 1310–1319. [Google Scholar] [CrossRef]
- Liu, L.; Liu, B.; Huang, H.; Bovik, A.C. No-reference image quality assessment based on spatial and spectral entropies. Signal Processing: Image Communication 2014, 29, 856–863. [Google Scholar] [CrossRef]
- Agaian, S.S.; Panetta, K.; Grigoryan, A.M. A new measure of image enhancement. In Proceedings of the IASTED International Conference on Signal Processing & Communication. Citeseer; 2000; pp. 19–22. [Google Scholar]
- Panetta, K.; Agaian, S.; Zhou, Y.; Wharton, E.J. Parameterized logarithmic framework for image enhancement. IEEE Transactions on Systems, Man, and Cybernetics, Part B (Cybernetics) 2010, 41, 460–473. [Google Scholar] [CrossRef] [PubMed]
- Yang, M.; Sowmya, A. An underwater color image quality evaluation metric. IEEE Transactions on Image Processing 2015, 24, 6062–6071. [Google Scholar] [CrossRef]
- Panetta, K.; Gao, C.; Agaian, S. Human-visual-system-inspired underwater image quality measures. IEEE Journal of Oceanic Engineering 2015, 41, 541–551. [Google Scholar] [CrossRef]
- Wang, Y.; Li, N.; Li, Z.; Gu, Z.; Zheng, H.; Zheng, B.; Sun, M. An imaging-inspired no-reference underwater color image quality assessment metric. Computers & Electrical Engineering 2018, 70, 904–913. [Google Scholar]
- Wang, S.; Ma, K.; Yeganeh, H.; Wang, Z.; Lin, W. A patch-structure representation method for quality assessment of contrast changed images. IEEE Signal Processing Letters 2015, 22, 2387–2390. [Google Scholar] [CrossRef]
- Rajkumar, S.; Malathi, G. A comparative analysis on image quality assessment for real time satellite images. Indian Journal of Science and Technology 2016, 9. [Google Scholar] [CrossRef]
- Memon, F.; Unar, M.A.; Memon, S. Image quality assessment for performance evaluation of focus measure operators. Mehran University Research Journal of Engineering & Technology 2015, 34, 379–386. [Google Scholar]
- Kaur, R.; Saini, D. Image enhancement of underwater digital images by utilizing L* A* B* color space on gradient and CLAHE based smoothing. Image 2016, 4. [Google Scholar] [CrossRef]
- Hummel, R. Image enhancement by histogram transformation. 1975. [Google Scholar] [CrossRef]
- Hasibuan, Z.; Andono, P.; Pujiono, D.; Setiadi, R.; et al. Contrast Limited Adaptive Histogram Equalization for Underwater Image Matching Optimization use SURF. In Proceedings of the Journal of Physics: Conference Series. IOP Publishing; 2021; Vol. 1803, p. 012008. [Google Scholar]
- Iqbal, K.; Odetayo, M.; James, A.; Salam, R.A.; Talib, A.Z.H. Enhancing the low quality images using unsupervised colour correction method. In Proceedings of the 2010 IEEE International Conference on Systems, Man and Cybernetics. IEEE; 2010; pp. 1703–1709. [Google Scholar]
- Boudhane, M.; Nsiri, B. Underwater image processing method for fish localization and detection in submarine environment. Journal of Visual Communication and Image Representation 2016, 39, 226–238. [Google Scholar] [CrossRef]
- Li, D.; Xu, L.; Liu, H. Detection of uneaten fish food pellets in underwater images for aquaculture. Aquacultural Engineering 2017, 78, 85–94. [Google Scholar] [CrossRef]
- Villon, S.; Mouillot, D.; Chaumont, M.; Darling, E.S.; Subsol, G.; Claverie, T.; Villéger, S. A deep learning method for accurate and fast identification of coral reef fishes in underwater images. Ecological Informatics 2018, 48, 238–244. [Google Scholar] [CrossRef]
- Cui, S.; Zhou, Y.; Wang, Y.; Zhai, L. Fish detection using deep learning. Applied Computational Intelligence and Soft Computing 2020, 2020. [Google Scholar] [CrossRef]
- Khan, A.; Ali, S.S.A.; Anwer, A.; Adil, S.H.; Meriaudeau, F. Subsea pipeline corrosion estimation by restoring and enhancing degraded underwater images. IEEE Access 2018, 6, 40585–40601. [Google Scholar] [CrossRef]
- Gonzalez-Rivero, M.; Beijbom, O.; Rodriguez-Ramirez, A.; Bryant, D.E.; Ganase, A.; Gonzalez-Marrero, Y.; Herrera-Reveles, A.; Kennedy, E.V.; Kim, C.J.; Lopez-Marcano, S.; et al. Monitoring of coral reefs using artificial intelligence: a feasible and cost-effective approach. Remote Sensing 2020, 12, 489. [Google Scholar] [CrossRef]
- Li, Z.; Li, G.; Niu, B.; Peng, F. Sea cucumber image dehazing method by fusion of retinex and dark channel. IFAC-PapersOnLine 2018, 51, 796–801. [Google Scholar] [CrossRef]
- Qiao, X.; Bao, J.; Zeng, L.; Zou, J.; Li, D. An automatic active contour method for sea cucumber segmentation in natural underwater environments. Computers and Electronics in Agriculture 2017, 135, 134–142. [Google Scholar] [CrossRef]
- Fatan, M.; Daliri, M.R.; Shahri, A.M. Underwater cable detection in the images using edge classification based on texture information. Measurement 2016, 91, 309–317. [Google Scholar] [CrossRef]
- Zhou, Y.; Li, Q.; Huo, G. Underwater moving target detection based on image enhancement. In Proceedings of the International Symposium on Neural Networks. Springer; 2017; pp. 427–436. [Google Scholar]
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
© 2023 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).