1. Introduction
Sea fog is a representative form of advection fog, which occurs when warm, moist air moves over a cold land or water surface [1]. Sea fog differs from land fog in that it changes faster and occurs more locally and more frequently, even during daylight [2]. Therefore, the occurrence of sea fog is more difficult to predict, because the likelihood of fog varies greatly with time and region. Moreover, sea fog is a key factor in deciding whether to operate ships, which affects people's livelihoods, in particular those who operate small fishing vessels. Some navigation regulations and policies, such as the lamp light regulation, also take sea fog intensity into account, since sea fog limits visibility and thereby increases the likelihood of accidents at sea.
Accidents at sea are affected by bad weather far more than accidents on land. Poor weather conditions such as sea fog, wind, and waves make it difficult to secure navigation routes due to limited visibility. In particular, low-visibility conditions, in which sea fog reduces the visibility distance to less than 1 km, are a primary cause of accidents. During the five years from 2016 to 2020, sea fog caused 544 vessel accidents and 3,652 casualties in South Korea [3]. The accident rate is increasing as sea traffic volume increases [3]. To prevent such accidents, the Korean government has installed marine observation systems at major ports. These systems use LiDAR, optical and temperature sensors, satellite imagery processing systems, and other expensive equipment (e.g., forward scattering visibility meters). However, the costs are considerable, so it is not viable to install them at many different spots, such as on the large number of small vessels, at minor ports, and on the many buoys in Korean waters.
Therefore, this paper proposes a method of estimating sea fog intensity at an affordable cost. The proposed algorithm, called RDCP, is simple, easy to apply, and requires little energy. The aim of this paper is to show that the RDCP algorithm can be applied immediately to any type of existing camera that is already mounted, without any additional cost, for sea fog and visibility estimation.
Section 2 reviews the related literature, especially the Dark Channel Prior (DCP), which is a fundamental component of the RDCP algorithm.
Section 3 explains the details of the RDCP algorithm, and Section 4 describes the experiments with RDCP and the results of their evaluation.
2. Literature Review
Image processing using cameras has been intensively studied for autonomous driving, drones, the defense industry, etc. The primary assumption of those studies is good weather conditions. Therefore, image processing in harsh weather conditions, such as at night or in dense fog, heavy rain, or storms, is a separate subject of study, and this section reviews fog-related research.
2.1. Fog Dehazing on Land
Dehazing or defogging algorithms have been developed in many studies, since fog has a significant influence on the performance of image processing algorithms. Dehazing studies can be categorized into two approach types. The first type increases the contrast of images; a mapping method according to pixel values [4] and an improved high-boost algorithm [5] are in this category. [5] emphasizes the high-frequency components of an image to enhance its clarity and contrast. This type of algorithm accurately restores the contrast degraded by fog; however, if the depth of the image is not properly considered, the contrast can be excessively increased. The second category estimates the depth of an image; examples include DCP [6], image-specific fusion [7], and a method using convolutional neural networks [8]. [8] uses deep learning to estimate depth information and applies it to fog removal. These studies estimate a transmission map or a depth map by considering contrast and chromaticity as well as visibility loss. However, these algorithms increase complexity and require a greater computation volume.
2.2. Fog Dehazing in the Marine Environment
Fog removal algorithms on land and in a marine environment take different approaches to dehazing. Fog removal studies on land consider geometric characteristics such as roads, buildings, and structures, and extract the data needed for fog removal from these environments. Fog removal studies in marine environments must consider complex optical phenomena such as refraction and reflection by water and disturbances at sea level and in the air; therefore, fog removal in these environments is much more complex.
Hu [9] proposes a light source decomposition algorithm to remove the light-source-induced luminous effect from sea fog images and to recover objects covered by sea fog. First, the luminous effect in the input image is identified through the light source decomposition algorithm, and the light source information is extracted. This first step provides information about the origin and intensity of the light, allowing the location and intensity of the light source that brightly illuminates an object to be identified. The second step then increases the visibility of the object by extracting its outline and details from under the sea fog using the separated light source information.
2.3. Sea Fog Visibility Estimation
For image processing studies of fog images captured on land, dehazing is a major subject [9,10]. In ocean settings, however, real-time visibility estimation is as important as dehazing, yet there are few studies on visibility estimation from sea fog images. Bae [11] proposes a visibility estimation method that combines DCP with the distance to a fixed object in a coastal area to estimate comprehensive visibility over a large area. Palvanov [12] proposes a method of estimating visibility in sea fog images using deep convolutional neural networks (VisNet). Most visibility estimation algorithms for ocean images are based on DCP, since the algorithm has long been verified and is embedded in many commercialized products, such as cameras for cars and road CCTVs. The next section discusses DCP in detail, along with some studies based on the DCP algorithm.
2.4. DCP
He [6] proposes the DCP algorithm, which removes haze from a single image and recovers the objects covered by the fog. The DCP algorithm is widely used in fog studies in land and sea environments; it can estimate visibility loss due to fog and effectively eliminate fog. DCP utilizes the fog model [13], which is most commonly used to define the atmospheric scattering characteristics of fog images. The fog model is a mathematical model that defines how fog scatters light and is used in many fog-related studies.
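For reference, the atmospheric scattering (fog) model of [13], as it is commonly written in the dehazing literature, is:

$$I(x) = J(x)\,t(x) + A\bigl(1 - t(x)\bigr), \qquad t(x) = e^{-\beta d(x)}$$

where $I(x)$ is the observed foggy image, $J(x)$ is the haze-free scene radiance, $A$ is the global atmospheric light, $t(x)$ is the transmission, $\beta$ is the scattering coefficient, and $d(x)$ is the scene depth.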
While other existing algorithms require multiple images to remove fog, DCP can remove fog from a single image. However, DCP requires a large amount of computation; for example, to relieve the blocking effect, additional techniques such as soft matting are added.
Figure 1 summarizes the main processes of DCP.
DCP rests on an empirical observation: in fog-free images, most pixels have a very low intensity value in at least one of the three color channels, i.e., Red (R), Green (G), and Blue (B). As shown in Figure 1, the first step of DCP is acquiring the Dark Channel (DC) value $J^{dark}(x)$ by (1); $J^{dark}(x)$ varies from 0 to 255:

$$J^{dark}(x) = \min_{y \in \Omega(x)} \left( \min_{c \in \{r,g,b\}} J^{c}(y) \right) \tag{1}$$

where $c$ is a color channel, $\Omega(x)$ is the local patch of pixels centered at $x$, and $y$ is a pixel within the local patch. The RDCP algorithm in this paper uses only the dark channel acquiring process of DCP, so the later steps of DCP are not described here. The local patch size can vary with the image size; 15×15 is used in [6], and the patch size is not a critical factor for the algorithm. In this paper, the experiments are conducted with a 10×10 patch.
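As a concrete illustration, the dark channel acquisition of (1) can be sketched in a few lines of Python. This is a minimal sketch: the function name and the SciPy-based minimum filter are our choices, not code from the original paper.

```python
import numpy as np
from scipy.ndimage import minimum_filter

def dark_channel(img, patch=10):
    """Dark channel of an (H, W, 3) RGB image: per-pixel minimum
    over the R, G, B channels, followed by a local minimum filter
    over a patch x patch neighborhood (10x10 as in this paper)."""
    per_pixel_min = img.min(axis=2)               # min over the color channels
    return minimum_filter(per_pixel_min, size=patch)
```

A bright, hazy scene keeps high values everywhere in the dark channel, while a clear scene contains many near-zero values, which is exactly the property RDCP exploits.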
Many studies restore the visibility of images degraded by fog using the DCP algorithm. Huang [14] proposes a technique to overcome the visibility problem caused by fog by combining two main modules: a Haze Thickness Estimation (HTE) module and an Image Visibility Restoration (IVR) module. The HTE module estimates the fog thickness; it identifies the loss of visibility due to fog and estimates the depth information of the fog, which makes fog elimination effective. The IVR module restores the visibility of the image: it resolves the color distortion caused by fog and improves the clarity and contrast of the image. Yang [15] proposes estimating the range of visibility rather than restoring the foggy image. This study improves the DCP algorithm and estimates the visibility range by combining it with Grayscale Image Entropy (GIE) and a Support Vector Machine (SVM). GIE is a statistical measure used in image processing and computer vision to quantify image information, analyze image complexity, and measure the degree of disorder in the distribution of pixel values. SVM is a supervised learning algorithm for classification and regression used in machine learning and pattern recognition. GIE and SVM are used to estimate the visibility under current road and traffic conditions and to provide appropriate speed limits that improve traffic safety and reduce congestion.
3. Reduced DCP
This paper proposes an algorithm called RDCP, which utilizes the initial step of DCP to estimate sea fog density and visibility. For real-time estimation of sea fog density, working from a single image is more practical than processing multiple images, which makes DCP suitable as a basis for RDCP. However, DCP is a dehazing algorithm primarily for images on land, where its heavy computation does not cause energy problems, since cameras on land are usually not battery operated. In contrast, ocean facilities such as buoys mostly operate under highly limited power conditions; therefore, RDCP must estimate visibility while restricted to extremely low power supplies.
This study proposes using only the initial process of DCP because 1) we found that the DC values of the pixels are sufficient to estimate the sea fog density and visibility, and 2) this reduces the algorithm's complexity and computation requirements.
Figure 2 provides a diagram of RDCP. The subsequent processes of RDCP are explained in the following subsections. The dark channel acquiring process is formally expressed in (1).
3.1. Applying a Threshold
The RDCP algorithm first acquires the DC values, which range from 0 to 255, and uses them as an index of fog intensity. DC values are compared in Figure 3: the blue graph is the result for an image without sea fog, Figure 4(a), and the orange line for an image with dense fog, Figure 4(b). In the dense fog image, 44% of the DC values are less than 100, whilst only 1.4% of the DC values of the no-fog image are more than 100. RDCP uses this percentage, defined in (2) as the proportion of pixels whose DC value falls below a threshold, as a criterion of fog density, since it is clearly distinct according to the fog density. Based on the empirical results of 320 images from four different locations, 100 is set as the threshold value in this paper. Threshold optimization may be required in different places.
3.2. Cropping a Sky Region
The disadvantage of DCP is that DCP does not work properly at sky regions in an image [
6], so that the study manually cuts out the sky regions for the experiment dataset. Therefore, RDCP also crops the sky region from an image, which provides two primary benefits. First, it removes the ambiguity of DC values so that the estimation becomes more distinct according to the fog density. Second, it significantly decreases the number of pixels required for the calculation. There are various image processing algorithms to divide sea and sky in ocean images however, this paper does not discuss cropping algorithms since it is out of its scope. This paper uses [
16] for divide the sky and sea regions in an image.
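Once the sea/sky boundary has been obtained (e.g., by the grid-based method of [16]; we assume here, for illustration, that it is reduced to a single row index), the crop itself is trivial:

```python
def crop_sky(img, horizon_row):
    """Remove the sky region: keep only the rows at or below the
    estimated sea/sky boundary of an (H, W, 3) image array."""
    return img[horizon_row:]
```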
Figure 5.
Examples of cropped images from four different ports.
4. Experiments and Evaluations
4.1. Experiment Datasets
Image-based fog studies usually use datasets of paired hazy and fog-free images of an identical scene. Datasets for research purposes usually use artificially synthesized images, since it is not easy to film and precisely pair real images on site. Most existing fog datasets consist of road images for autonomous vehicle research [10]. As an example of a synthetic road fog dataset, Tarel [17] proposes a paired dataset in which several types of fog are added to virtual fog-free road composite images. Sakaridis [18] applies a semantic segmentation method to the ground truth and synthesizes different types of fog onto it.
However, for sea fog studies, image datasets are very small in number and type compared to road image datasets. Therefore, this study uses raw image datasets captured in real time by cameras owned by the Korea Meteorological Administration Agency (KMAA) [19] and the Korea Hydrographic and Oceanographic Agency (KHOA) [20]. The images are captured from four different ports by cameras mounted on buoys. The four ports are located at Ganghwa Island, Pyeongtaek, Baengnyeong Island, and Ji Island, and the locations are indicated in Figure 6.
KMAA and KHOA also provide various data labels along with the images. Sea fog density labels are provided, categorized as No-fog, Low-fog, Mid-fog, and Dense-fog. Figure 7, Figure 8, Figure 9 and Figure 10 show example images with the four sea fog intensity labels at the four locations. Other labels are also provided, such as temperature, humidity, pressure, visibility, wind direction, and water temperature. This paper uses the fog density and visibility labels to evaluate the performance of RDCP.
Table 1 compares the label values of sea fog intensity and visibility. To evaluate the performance of RDCP fairly against these labels, the fog intensity estimation results of RDCP are likewise divided into the same four categories.
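The four-way categorization described above can be sketched as an equal split of the RDCP percentage range. The bin edges below are illustrative assumptions for the sketch, not values taken from the paper:

```python
def rdcp_to_intensity(pct):
    """Map an RDCP percentage (0-100) to one of the four sea fog
    intensity labels by splitting the range into four equal bins
    (illustrative bin edges)."""
    if pct >= 75.0:
        return "No-fog"
    if pct >= 50.0:
        return "Low-fog"
    if pct >= 25.0:
        return "Mid-fog"
    return "Dense-fog"
```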
4.2. Sea Fog Intensity Estimation
To verify that the RDCP percentage of (2) can serve as a criterion of sea fog density, a total of 320 images are used. The size of all images is 1280 × 720 pixels. At each location, 20 images are selected for each of the four fog intensity labels: 20 no-fog, 20 low-fog, 20 mid-fog, and 20 dense-fog images. The RDCP percentage is calculated for each image, and Figure 11 shows the resulting trends. The x-axis is the image index (1 to 20) and the y-axis shows the RDCP percentage of each image.
As shown in Figure 11, the RDCP percentage values are fairly consistent within each fog intensity, which implies they have the potential to be used as a criterion of sea fog intensity. The no-fog graphs (blue lines) represent the lowest intensity of sea fog and have the largest RDCP values, averaging 62.3%, since those images have the highest number of pixels below the threshold. Accordingly, the higher the intensity of sea fog, the lower the RDCP percentage values in Figure 11.
After applying the threshold, RDCP crops the images in order to block out external light sources that have DC values similar to sea fog. Figure 12 shows the RDCP percentage graphs of the cropped images for the different category labels. The distinction between categories becomes greater compared with Figure 11. For example, the no-fog images have an average value of 62.3% in Figure 11, but the value increases dramatically to 95.6% in Figure 12. Cropping the sky region removes the ambiguity of an image's DC values, meaning cropped images are less affected by external light sources. Therefore, the RDCP percentage values are more clearly distinguishable by sea fog intensity and are sufficient to estimate the sea fog intensity from a single image.
Optimization of the threshold is a critical factor for RDCP, as its performance is highly affected by this value. Figure 13 shows the changes in the RDCP percentage values of 20 images captured at Ji Island when different thresholds are applied to the RDCP algorithm. The threshold values used here are 80, 110, 130, and 150, none of which is the optimized value. Compared to Figure 12(d), the graphs in Figure 13 fluctuate, which means the sea fog intensity estimation is not stable. Moreover, the RDCP values are then not sufficient to serve as a sea fog intensity index. For example, when the threshold is 80, the graphs overlap each other, since most DC values of the 20 images are greater than 80 regardless of sea fog intensity.
Another benefit of cropping is a reduction in complexity and computation, which helps save energy. Table 2 compares the average processing time and the average number of pixels used in the RDCP calculation for the 320 images. Removing the sky region decreases the number of pixels used in the calculation by up to 75%. The average time to estimate the fog intensity of the 320 images improves by up to 50% compared to the original images.
4.3. Visibility Estimation
This subsection discusses the RDCP values against the visibility labels. Using the RDCP percentage values, RDCP is able to estimate visibility in real time. For this experiment, the RDCP algorithm was applied to real-time images from 6 a.m. to 6 p.m. at the Pyeongtaek port on a day when the sea fog changed significantly. Figure 14 shows some examples of sea fog images during the day.
The red graph in Figure 15 indicates the visibility label values on the same day. There was dense sea fog from 6 a.m. to 7:30 a.m., then the sea fog intensity gradually decreased, and finally the day became clear. These sea fog changes can also be recognized in Figure 14. Figure 15(a) compares the RDCP values of the original images with the visibility label values over time, and Figure 15(b) makes the same comparison for the cropped images. The RDCP values of the cropped images reflect the real-time visibility changes better than those of the original images. For example, from 3 p.m. the RDCP value of the original images remains stable while the visibility label values continue to increase until 5 p.m. This mismatch arises because the original image is more influenced by external light sources from the sky region when the sky is very clear.
A prototype of RDCP is demonstrated using a camera at the Baengnyeong Island port. Figure 16 shows results on different days and at different fog intensities. The sea fog intensity and visibility information is displayed at the top left of the screen. The range of the visibility label is 0 km to 20 km, so the weight between visibility and the RDCP percentage value is set to 0.2 in this case. The RDCP results properly match the intensity of fog, as shown in Figure 16.
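Given the 0.2 weight mentioned above (the visibility label spans 0–20 km while the RDCP percentage spans 0–100), the visibility readout can be sketched as a linear scaling. This is our interpretation of the description, not code from the paper:

```python
def rdcp_to_visibility_km(pct, weight=0.2):
    """Scale an RDCP percentage (0-100) to an estimated visibility
    in km, matching the 0-20 km range of the visibility labels."""
    return weight * pct
```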
5. Conclusion
This paper proposes the RDCP algorithm to estimate sea fog intensity and visibility at low cost, for immediate use in many locations, on vessels, and on buoys. RDCP utilizes the dark channel acquiring process of the DCP algorithm, because the results of that process are sufficient to estimate fog intensity. RDCP then applies an optimized threshold to the acquired DC and crops the image to exclude the sky region. RDCP is a simple algorithm that does not require heavy computation; it is therefore appropriate for ocean facilities, which are usually battery operated.
The 320 raw images captured by cameras at the four ports in different locations, together with their labels, are used for the RDCP evaluation. The evaluation results show that the RDCP algorithm is able to estimate fog intensity and visibility. RDCP requires only a single image for this estimation, and consequently it performs well for real-time estimation, as the RDCP prototype demonstrates. It is therefore expected that RDCP enables real-time fog density and visibility estimation at low cost, making this estimation available at many marine activity points using existing installed cameras, without any expensive additional equipment.
Funding
This research was supported by the Korea Research Institute of Ships and Ocean Engineering through a grant from the Endowment Project "Development of Open Platform Technologies for Smart Maritime Safety and Industries" funded by the Ministry of Oceans and Fisheries (1525014880, PES4880). This research was also supported by the Korea Institute of Marine Science & Technology Promotion (KIMST), funded by the Ministry of Oceans and Fisheries, grant number 20210636.
Data Availability Statement
References
- Lewis, J.M.; Koračin, D.; Redmond, K.T. Sea fog research in the United Kingdom and United States: A historical essay including outlook. Bulletin of the American Meteorological Society 2004, 85, 395–408. [Google Scholar] [CrossRef]
- Cho, Y.K.; Kim, M.O.; Kim, B.C. Sea fog around the Korean Peninsula. Journal of Applied Meteorology 2000, 39, 2473–2479. [Google Scholar] [CrossRef]
- Korea Coast Guard. Available online: https://www.kcg.go.kr/kcg/na/ntt/selectNttInfo.do?nttSn=34010 (accessed on 20 October 2023).
- Xu, H.; Zhai, G.; Wu, X.; Yang, X. Generalized equalization model for image enhancement. IEEE Transactions on Multimedia 2013, 16, 68–82. [Google Scholar] [CrossRef]
- Ma, Z.; Wen, J.; Zhang, C.; Liu, Q.; Yan, D. An effective fusion defogging approach for single sea fog image. Neurocomputing 2016, 173, 1257–1267. [Google Scholar] [CrossRef]
- He, K.; Sun, J.; Tang, X. Single image haze removal using dark channel prior. IEEE Transactions on Pattern Analysis and Machine Intelligence 2010, 33, 2341–2353. [Google Scholar] [CrossRef]
- Wang, Y.K.; Fan, C.T. Single image defogging by multiscale depth fusion. IEEE Transactions on Image Processing 2014, 23, 4826–4837. [Google Scholar] [CrossRef] [PubMed]
- Cai, B.; Xu, X.; Jia, K.; Qing, C.; Tao, D. Dehazenet: An end-to-end system for single image haze removal. IEEE Transactions on Image Processing 2016, 25, 5187–5198. [Google Scholar] [CrossRef] [PubMed]
- Hu, H.M.; Guo, Q.; Zheng, J.; Wang, H.; Li, B. Single image defogging based on illumination decomposition for visual maritime surveillance. IEEE Transactions on Image Processing 2019, 28, 2882–2897. [Google Scholar] [CrossRef] [PubMed]
- Xu, Y.; Wen, J.; Fei, L.; Zhang, Z. Review of video and image defogging algorithms and related studies on image restoration and enhancement. IEEE Access 2015, 4, 165–188. [Google Scholar] [CrossRef]
- Bae, T.W.; Han, J.H.; Kim, K.J.; Kim, Y.T. Coastal visibility distance estimation using dark channel prior and distance map under sea-fog: Korean peninsula case. Sensors 2019, 19, 4432. [Google Scholar] [CrossRef] [PubMed]
- Palvanov, A.; Cho, Y.I. Visnet: Deep convolutional neural networks for forecasting atmospheric visibility. Sensors 2019, 19, 1343. [Google Scholar] [CrossRef] [PubMed]
- Koschmieder, H. Theorie der horizontalen Sichtweite. Beiträge zur Physik der freien Atmosphäre 1924, pp. 33–53.
- Huang, S.C.; Ye, J.H.; Chen, B.H. An advanced single-image visibility restoration algorithm for real-world hazy scenes. IEEE Transactions on Industrial Electronics 2014, 62, 2962–2972. [Google Scholar] [CrossRef]
- Yang, L. Comprehensive visibility indicator algorithm for adaptable speed limit control in intelligent transportation systems. Doctoral dissertation, University of Guelph, Canada, 5 Mar 2018.
- Jeon, H.S.; Park, S.H.; Im, T.H. Grid-Based Low Computation Image Processing Algorithm of Maritime Object Detection for Navigation Aids. Electronics 2023, 12, 2002. [Google Scholar] [CrossRef]
- Tarel, J.P.; Hautiere, N.; Caraffa, L.; Cord, A.; Halmaoui, H.; Gruyer, D. Vision enhancement in homogeneous and heterogeneous fog. IEEE Intelligent Transportation Systems Magazine 2012, 4, 6–20. [Google Scholar] [CrossRef]
- Sakaridis, C.; Dai, D.; Van Gool, L. Semantic foggy scene understanding with synthetic data. International Journal of Computer Vision 2018, 126, 973–992. [Google Scholar] [CrossRef]
- Korea Meteorological Administration. Available online: https://www.kma.go.kr/neng/index.do (accessed on 20 October 2023).
- Korea Hydrographic and Oceanographic Agency. Available online: https://www.khoa.go.kr/eng/Main.do (accessed on 20 October 2023).
Figure 1.
Flow chart of the primary DCP processes.
Figure 2.
Flow chart of the RDCP algorithm.
Figure 3.
DC value comparison of a sample image.
Figure 4.
The sample images used for Figure 3. Both images are captured by an identical camera mounted on the same buoy. Image size is 1280 × 720 pixels. (a) An image captured from the camera without sea fog; (b) An image captured from the camera with dense sea fog.
Figure 6.
Four different ports in which the datasets are captured in South Korea.
Figure 7.
Images captured at the Ganghwa Island port and corresponding labels: (a) Image having a no-fog label; (b) Image having a low-fog label; (c) Image having a mid-fog label; (d) Image having a dense-fog label.
Figure 8.
Images captured at the Pyeongtaek port and corresponding labels: (a) Image having a no-fog label; (b) Image having a low-fog label; (c) Image having a mid-fog label; (d) Image having a dense-fog label.
Figure 9.
Images captured at the Baengnyeong Island port and corresponding labels: (a) Image having a no-fog label; (b) Image having a low-fog label; (c) Image having a mid-fog label; (d) Image having a dense-fog label.
Figure 10.
Images captured at the Ji Island port and corresponding labels: (a) Image having a no-fog label; (b) Image having a low-fog label; (c) Image having a mid-fog label; (d) Image having a dense-fog label.
Figure 11.
RDCP percentage comparison of original images according to different sea fog intensities: (a) Ganghwa Island; (b) Pyeongtaek; (c) Baengnyeong Island; (d) Ji Island.
Figure 12.
RDCP percentage comparison of cropped images according to different sea fog intensities: (a) Ganghwa Island; (b) Pyeongtaek; (c) Baengnyeong Island; (d) Ji Island.
Figure 13.
RDCP percentage value changes according to the thresholds. 20 images captured at the Ji Island port: (a) When the threshold is 80; (b) When the threshold is 110; (c) When the threshold is 130; (d) When the threshold is 150.
Figure 14.
Sea fog changes at the Pyeongtaek port at different times of the day.
Figure 15.
Comparison of RDCP percentage (%) and visibility label value (km): (a) RDCP values of original images and visibility in km; (b) RDCP values of cropped images and visibility in km.
Figure 16.
Prototype of the RDCP algorithm applied to a camera, providing real-time estimation: (a) Image with a no-fog label; (b) Image with a low-fog label; (c) Image with a mid-fog label; (d) Image with a dense-fog label.
Table 1.
Label value comparison between sea fog intensity and visibility provided by KMAA and KHOA.
| Sea Fog Intensity | Visibility (meters) |
| --- | --- |
| No-fog | 1500~ |
| Low-fog | 1000~1500 |
| Mid-fog | 500~1000 |
| Dense-fog | 0~500 |
Table 2.
Comparison of the average processing time and the related average number of pixels between the 320 original images and the 320 cropped images.
| | Average processing time | Number of pixels for the algorithm calculation |
| --- | --- | --- |
| Original images | 34 ms | 921,600 |
| Cropped images | 17 ms | 329,707 |