Submitted: 09 October 2024
Posted: 10 October 2024
Abstract
Keywords:
1. Introduction
2. Data and Methodology
2.1. Experiment and Data
2.2. Data Augmentation
2.3. Extraction of Image Features Related to Fog Density
2.4. Deep Learning Approaches: VGG16, VGG19, ResNet50, DenseNet169 and the Improved Random Forest
- Hierarchical Clustering. Initially, each decision tree is treated as an independent cluster. The Dunn index is used to measure the similarity between any two decision trees, and the two most similar clusters are merged. This process is repeated until the number of remaining clusters reaches a predetermined value. The decision tree with the best classification performance in each cluster is then selected to form a new Random Forest model.
- k-Medoids Clustering. The cluster centers obtained from hierarchical clustering serve as the initial medoids. The similarity between each unassigned decision tree and each cluster center is calculated, and the trees are reassigned according to the nearest-neighbor principle. The best-performing decision tree within each cluster then becomes the new cluster center. This process is repeated until the cluster centers stabilize or the maximum number of iterations is reached.
- Model Training and Prediction. The preprocessed feature data are fed into the improved Random Forest model for training. The model constructs a large number of decision trees, each of which independently predicts the fog density, and the final output is the average of the predictions from the selected trees; a minimal code sketch of the whole selection-and-prediction pipeline follows this list.
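The three steps above can be sketched as follows. This is a minimal illustration, not the authors' code: the paper's Dunn-index similarity is replaced here by prediction agreement on a held-out set, and the function names, `n_clusters`, and the accuracy-based quality score are illustrative choices.

```python
# Minimal sketch of hybrid-clustering tree selection for a Random Forest.
# Assumptions (not from the paper): tree similarity = prediction agreement
# on a validation set; tree quality = validation accuracy; y_val is encoded
# as integers 0..n_classes-1 (sklearn sub-estimators predict encoded labels).
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def tree_similarity(pi, pj):
    """Fraction of validation samples on which two trees agree."""
    return float(np.mean(pi == pj))

def select_trees(forest, X_val, y_val, n_clusters=50, max_iter=20):
    trees = forest.estimators_
    preds = [t.predict(X_val) for t in trees]        # per-tree predictions
    acc = [float(np.mean(p == y_val)) for p in preds]  # per-tree quality

    # Stage 1: agglomerative merging of the two most similar clusters.
    clusters = [[i] for i in range(len(trees))]
    while len(clusters) > n_clusters:
        best, pair = -1.0, (0, 1)
        for a in range(len(clusters)):
            for b in range(a + 1, len(clusters)):
                s = np.mean([tree_similarity(preds[i], preds[j])
                             for i in clusters[a] for j in clusters[b]])
                if s > best:
                    best, pair = s, (a, b)
        a, b = pair
        clusters[a] += clusters[b]
        del clusters[b]

    # The best tree of each cluster seeds the medoids.
    medoids = [max(c, key=lambda i: acc[i]) for c in clusters]

    # Stage 2: k-medoids refinement until the centers stabilize.
    for _ in range(max_iter):
        groups = [[] for _ in medoids]
        for i in range(len(trees)):                  # nearest-medoid assignment
            k = max(range(len(medoids)),
                    key=lambda m: tree_similarity(preds[i], preds[medoids[m]]))
            groups[k].append(i)
        new_medoids = [max(g, key=lambda i: acc[i]) if g else medoids[k]
                       for k, g in enumerate(groups)]
        if new_medoids == medoids:
            break
        medoids = new_medoids
    return [trees[i] for i in medoids]

def predict(selected, X):
    """Average the class-probability outputs of the selected trees."""
    proba = np.mean([t.predict_proba(X) for t in selected], axis=0)
    return np.argmax(proba, axis=1)
```

In use, a full forest such as `RandomForestClassifier(n_estimators=500).fit(X_train, y_train)` is pruned by `select_trees`, and the selected subset replaces the full ensemble at prediction time.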
2.5. Assessment Method
3. Results
3.1. Augmented Data
3.2. Relationship between Image Features and Fog Density
3.3. Estimation of Fog Density
4. Conclusion and Discussion
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Conflicts of Interest
References
- Z. Li, “Studies of Fog in China over the Past 40 Years,” Acta Meteorologica Sinica, vol.5, pp.616-624, 2001. [CrossRef]
- Z. Bao, Y. Tang, C. Li, et al., “Road Traffic Safety Technology Series: Highway Traffic Safety and Meteorological Impact (1st Edition),” Beijing: People’s Traffic Press, 2008, pp.1-15.
- General Administration of Quality Supervision, Inspection and Quarantine of the People’s Republic of China, and Standardization Administration of China, “Fog Forecast,” National Meteorological Standard GB/T 27964-2011, Beijing, Dec. 30, 2011, pp.1-6.
- Y. Wang, L. Jia, X. Li, Y. Lu, and D. Hua, “A measurement method for slant visibility with slant path scattered radiance correction by lidar and the SBDART model,” Optics Express, vol.29, no.2, pp.837–853, 2020. [CrossRef]
- J. Xian, Y. Han, S. Huang, D. Sun, and X. Li, “Novel lidar algorithm for horizontal visibility measurement and sea fog monitoring,” Optics Express, vol.26, no.2, pp.34853–34863, 2018. [CrossRef]
- Y. Li, H. Sun, and M. Xu, “The Present Situation and Problems on Detecting Fog by Remote Sensing with Meteorological Satellite,” Remote Sensing Technology and Application, vol.15, no.4, pp.223-227, 2000. [CrossRef]
- K. Tang, J. Yang, and J. Wang, “Investigating Haze-Relevant Features in a Learning Framework for Image Dehazing,” in IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Jun. 2014. [CrossRef]
- D. Yuan, J. Huang, X. Yang and J. Cui, “Improved random forest classification approach based on hybrid clustering selection,” in 2020 Chinese Automation Congress (CAC), Shanghai, China, 2020, pp.1559-1563. [CrossRef]
- Q. Li, S. Tang, X. Peng, and Q. Ma, “A Method of Visibility Detection Based on the Transfer Learning,” Journal of Atmospheric and Oceanic Technology, vol.36, no.10, pp.1945-1956, Oct. 2019. [CrossRef]
- W. L. Lo, M. Zhu, and H. Fu, “Meteorology Visibility Estimation by Using Multi-Support Vector Regression Method,” Journal of Advances in Information Technology, pp.40-47, 2020. [CrossRef]
- J. Jonnalagadda and M. Hashemi, “Forecasting Atmospheric Visibility Using Auto Regressive Recurrent Neural Network,” in 2020 IEEE 21st International Conference on Information Reuse and Integration for Data Science, pp.209-215, Aug. 2020. [CrossRef]
- J. Li, W. L. Lo, H. Fu, and H. S. H. Chung, “A Transfer Learning Method for Meteorological Visibility Estimation Based on Feature Fusion Method,” Applied Sciences, vol.11, no.3, p.997, Jan. 2021. [CrossRef]
- W. L. Lo, H. Shu, and H. Fu, “Experimental Evaluation of PSO Based Transfer Learning Method for Meteorological Visibility Estimation,” Atmosphere, vol.12, no.7, p.828, Jun. 2021. [CrossRef]
- Y. Li, Y. Ji, J. Fu, and X. Chang, “FGS-Net: A Visibility Estimation Method Based on Statistical Feature Stream in Fog Area,” Research Square, preprint, Feb. 2023. [CrossRef]
- Y. Choi, H.-G. Choe, J. Y. Choi, K. T. Kim, J.-B. Kim, and N.-I. Kim, “Automatic Sea Fog Detection and Estimation of Visibility Distance on CCTV,” Journal of Coastal Research, vol.85, pp.881-885, May 2018. [CrossRef]
- F. Zhang et al., “Deep Quantified Visibility Estimation for Traffic Image,” Atmosphere, vol.14, no.1, p.61, Dec. 2022. [CrossRef]
- C. Busch and E. Debes, “Wavelet transform for visibility analysis in fog situations,” IEEE Intelligent Systems, vol.13, no.6, pp.66–71, 1998.
- N. Hautière, J.-P. Tarel, J. Lavenant, and D. Aubert, “Automatic fog detection and estimation of visibility distance through use of an onboard camera,” Machine Vision and Applications, vol.17, no.1, pp.8–20, 2006.
- M. Negru and S. Nedevschi, “Image based fog detection and visibility estimation for driving assistance systems,” in 2013 IEEE 9th International Conference on Intelligent Computer Communication and Processing (ICCP), IEEE, 2013, pp.163–168.
- F. Guo, H. Peng, J. Tang, B. Zou, and C. Tang, “Visibility detection approach to road scene foggy images,” KSII Transactions on Internet & Information Systems, vol.10, no.9, 2016.
- W. Wauben and M. Roth, “Exploration of fog detection and visibility estimation from camera images,” in WMO Technical Conference on Meteorological and Environmental Instruments and Methods of Observation (CIMOTECO), 2016, pp.1–14.
- L. Yang, R. Muresan, A. Al-Dweik, and L. J. Hadjileontiadis, “Image based visibility estimation algorithm for intelligent transportation systems,” IEEE Access, vol.6, pp.76728–76740, 2018.
- X. Cheng, G. Liu, A. Hedman, K. Wang, and H. Li, arXiv preprint arXiv:1804.04601, 2018.
- Q. Zhu, J. Mai, and L. Shao, “A Fast Single Image Haze Removal Algorithm Using Color Attenuation Prior,” IEEE Transactions on Image Processing, vol.24, no.11, pp.3522-3533, Nov. 2015. [CrossRef]
- J. Chai, H. Zeng, A. Li, and E. W. T. Ngai, “Deep learning in computer vision: A critical review of emerging techniques and application scenarios,” Machine Learning with Applications, vol.6, 2021. [CrossRef]
- K. Simonyan and A. Zisserman, “Very deep convolutional networks for large-scale image recognition,” arXiv preprint arXiv:1409.1556, 2014. [CrossRef]
- K. He, X. Zhang, S. Ren, and J. Sun, “Deep Residual Learning for Image Recognition,” arXiv preprint arXiv:1512.03385, 2015. [CrossRef]
- G. Huang, Z. Liu, L. van der Maaten, and K. Q. Weinberger, “Densely Connected Convolutional Networks,” arXiv preprint arXiv:1608.06993, 2016. [CrossRef]
- L. Breiman, “Random forests,” Machine Learning, vol.45, pp.5–32, 2001.
- W. Yang, Y. Zhao, Q. Li, F. Zhu, and Y. Su, “Multi visual feature fusion based fog visibility estimation for expressway surveillance using deep learning network,” Expert Systems with Applications, vol.234, p.121151, 2023. [CrossRef]
- K. Miao, J. Zhou, P. Tao, et al., “Self-Adaptive Hybrid Convolutional Neural Network for Fog Image Visibility Recognition,” 2024.
- L. Huang, Z. Zhang, P. Xiao, et al., “Classification and application of highway visibility based on deep learning,” Trans Atmos Sci, vol.45, no.2, pp.203-211, 2022.
- T. Karras, M. Aittala, J. Hellsten, S. Laine, J. Lehtinen, and T. Aila, “Training generative adversarial networks with limited data,” in Proceedings of the 34th International Conference on Neural Information Processing Systems, 2020.
- I. J. Goodfellow, J. Pouget-Abadie, M. Mirza, B. Xu, D. Warde-Farley, S. Ozair, A. Courville, and Y. Bengio, “Generative Adversarial Networks,” in Advances in Neural Information Processing Systems, pp.2672–2680, 2014. https://arxiv.org/abs/1406.2661
- D. L. Ruderman, “The statistics of natural images,” Network: Computation in Neural Systems, vol.5, no.4, pp.517-548, Jan. 1994. [CrossRef]
- D. Makkar and M. Malhotra, “Single Image Haze Removal Using Dark Channel Prior,” International Journal of Engineering and Computer Science, vol.33, Jan. 2016. [CrossRef]
- D. Hasler and S. E. Suesstrunk, “Measuring colorfulness in natural images,” Proc. SPIE, vol.5007, p.87, 2003.
- L. K. Choi, J. You, and A. C. Bovik, “Referenceless Prediction of Perceptual Fog Density and Perceptual Image Defogging,” IEEE Transactions on Image Processing, vol.24, no.11, pp.3888-3901, Nov. 2015. [CrossRef]
- K. Gu, G. Zhai, X. Yang, and W. Zhang, “Using Free Energy Principle For Blind Image Quality Assessment,” IEEE Transactions on Multimedia, vol.17, no.1, pp.50-63, Jan. 2015. [CrossRef]
- R. S. Berns, “Billmeyer and Saltzman’s Principles of Color Technology,” 4th ed., John Wiley & Sons, 2021.
- A. Mittal, R. Soundararajan, and A. C. Bovik, “Making a ’completely blind’ image quality analyzer,” IEEE Signal Process. Lett., vol.20, no.3, pp.209–212, Mar. 2013.
- D. L. Ruderman, T. W. Cronin, and C.-C. Chiao, “Statistics of cone responses to natural images: implications for visual coding,” Journal of the Optical Society of America A, vol.15, no.8, p.2036, Aug. 1998. [CrossRef]
| Fog Density | Visibility Range |
|---|---|
| Light Fog | 1 000-10 000 m |
| Moderate Fog | 500-1 000 m |
| Dense Fog | 200-500 m |
| Thick Fog | 50-200 m |
| Very Thick Fog | 0-50 m |
| Metric | 0-50 m | 50-200 m | 200-500 m | 500-1 000 m | 1 000-10 000 m | Avg. |
|---|---|---|---|---|---|---|
| Inception Score | 3.81 | 3.21 | 4.26 | 3.42 | 3.33 | 3.61 |
| FID | 98.01 | 98.19 | 98.91 | 93.53 | 92.69 | 96.27 |
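For reference, FID compares the Gaussian statistics of real and generated images in a deep feature space (typically Inception activations). Below is a minimal sketch of the standard FID formula from precomputed feature matrices; the feature extraction itself is omitted and the function name is illustrative.

```python
# Minimal FID sketch: ||mu1 - mu2||^2 + Tr(C1 + C2 - 2*sqrt(C1 @ C2)),
# computed from precomputed real/fake feature matrices of shape (n, d).
import numpy as np
from scipy import linalg

def fid(feats_real, feats_fake):
    mu1, mu2 = feats_real.mean(axis=0), feats_fake.mean(axis=0)
    c1 = np.cov(feats_real, rowvar=False)
    c2 = np.cov(feats_fake, rowvar=False)
    covmean = linalg.sqrtm(c1 @ c2)       # matrix square root
    if np.iscomplexobj(covmean):          # drop tiny numerical imaginary parts
        covmean = covmean.real
    diff = mu1 - mu2
    return float(diff @ diff + np.trace(c1 + c2 - 2.0 * covmean))
```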
| Fog-relevant Feature | Label | Correlation with Fog Density |
|---|---|---|
| Coefficients of MSCN Variance | F1 | 0.493 |
| Dark channel | F2 | 0.562 |
| Colorfulness | F3 | 0.581 |
| Sharpness | F4 | 0.457 |
| Coefficient of sharpness variance | F5 | 0.477 |
| Entropy | F6 | 0.481 |
| Combination of saturation and value in HSV space | F7 | 0.440 |
| Chroma | F8 | 0.632 |
| Variance of chroma | F9 | 0.534 |
| Weber contrast of luminance | F10 | 0.555 |
| Local contrast | F11 | 0.512 |
| Contrast energy (gray) | F12 | 0.354 |
| Contrast energy (yb) | F13 | 0.313 |
| Contrast energy (rg) | F14 | 0.367 |
| Gradient magnitude | F15 | 0.295 |
| Color variance | F16 | 0.308 |
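Several of the tabulated features are simple image statistics. As an illustration, the sketch below computes two of them for an RGB float image: the dark channel underlying F2 and the MSCN coefficients whose local variance underlies F1. The patch size and Gaussian window are illustrative choices, not the paper's settings.

```python
# Sketch of two fog-relevant features from the table above.
import numpy as np
from scipy.ndimage import minimum_filter, gaussian_filter

def dark_channel(rgb, patch=15):
    """Per-pixel minimum over channels, then a local minimum filter (F2)."""
    return minimum_filter(rgb.min(axis=2), size=patch)

def mscn(gray, sigma=7/6, c=1e-3):
    """Mean-subtracted contrast-normalized coefficients of a gray image (F1)."""
    mu = gaussian_filter(gray, sigma)
    var = gaussian_filter(gray**2, sigma) - mu**2
    return (gray - mu) / (np.sqrt(np.clip(var, 0.0, None)) + c)

# Example scalar features for one image `img` (H, W, 3), values in [0, 1]:
# feats = [dark_channel(img).mean(), mscn(img.mean(axis=2)).var()]
```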
| Accuracy by Fog Density (%) | Very Thick Fog | Thick Fog | Dense Fog | Moderate Fog | Light Fog | Total |
|---|---|---|---|---|---|---|
| VGG-16 | 63.2 | 73.3 | 84.1 | 85.5 | 87.7 | 83.9 |
| VGG-19 | 64.7 | 71.4 | 84.7 | 86.6 | 86.3 | 85.6 |
| ResNet-50 | 68.5 | 76.5 | 85.3 | 88.4 | 90.1 | 86.9 |
| DenseNet-169 | 64.7 | 69.3 | 83.4 | 89.6 | 90.1 | 85.8 |
| Random Forest | 55.1 | 68.7 | 83.4 | 88.9 | 90.8 | 84.1 |
| Random Forest based on hybrid clustering | 58.5 | 70.5 | 85.5 | 89.7 | 91.1 | 86.4 |
| Accuracy by Fog Density (%) | Very Thick Fog | Thick Fog | Dense Fog | Moderate Fog | Light Fog | Total |
|---|---|---|---|---|---|---|
| VGG-16 | 81.3 | 88.3 | 86.1 | 88.5 | 89.7 | 86.2 |
| VGG-19 | 82.7 | 86.2 | 89.7 | 90.3 | 91.3 | 89.6 |
| ResNet-50 | 83.4 | 89.3 | 91.3 | 89.1 | 90.4 | 88.9 |
| DenseNet-169 | 84.3 | 89.6 | 91.4 | 92.6 | 93.6 | 91.2 |
| Random Forest | 89.0 | 92.7 | 91.4 | 89.9 | 92.3 | 90.1 |
| Random Forest based on hybrid clustering | 89.8 | 91.7 | 94.5 | 92.7 | 94.9 | 93.0 |
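The per-category and total scores reported in the two tables above can be reproduced as sketched below, assuming the per-category figures are within-class accuracies (an assumption; the paper's exact metric definition is not shown here).

```python
# Sketch: per-category and overall accuracy, as tabulated above.
# Assumes every category occurs at least once in y_true.
import numpy as np

def per_class_accuracy(y_true, y_pred, classes):
    scores = {c: float(np.mean(y_pred[y_true == c] == c)) for c in classes}
    scores["Total"] = float(np.mean(y_pred == y_true))
    return scores
```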
| Reference | Used Method | Visibility Range (m) | Feature Extractor | Classifier/Regressor | Dataset | Accuracy |
|---|---|---|---|---|---|---|
| Li et al. [12] | Deep learning approach based on the fusion of extracted features from the selected subregions for visibility estimation. | 0-12 000 | VGG-16 | Multi-SVR | HKO (4841 images) | 0.88 |
| Lo et al. [13] | PSO-based transfer learning approach for feature selection and Multi-SVR model to estimate visibility. | 10 000-40 000 | VGG-19 / DenseNet / ResNet_50 / VGG-16 / VGG-19 / DenseNet / ResNet_50 | Multi-SVR | Private Dataset (6048 images) | 0.88 / 0.90 / 0.91 / 0.90 / 0.90 / 0.91 / 0.93 |
| Liu et al. [14] | STCN-Net model that combines engineered and learned features. | 50-10 000 | Swin-T + ResNet-18 | Fully Connected | VID I | 0.98 |
| Choi et al. [15] | Detection of daytime sea fog and estimating visibility distance from CCTV images. | 0-20 000 | VGG-19 | Fully Connected | Private Dataset (5104 images) | 0.72 |
| Zhang et al. [16] | Estimation of quantified visibility based on physical laws and deep learning architectures. | 0-35 000 | DQVENet | Specific algorithm | QVEData | 0.87 |
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content. |
© 2024 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
