
Automated Cloud Shadow Detection from Satellite Orthoimages with Un-corrected Cloud Relief Displacements

Abstract
The presence of clouds and cloud shadows in satellite images affects the accuracy of land cover classification and object detection. Their existence is therefore considered noise and a target for removal by many researchers. This paper focuses on precise cloud shadow detection from ortho-rectified images. The directions of cloud shadows in original satellite images are determined by the sensor and sun illumination directions. However, those in ortho-rectified images have not been studied explicitly. This study proposes that the directions of cloud shadows in ortho-rectified images are also determined by both the sensor and sun directions, because relief displacements due to cloud height above ground are not corrected by ortho-rectification. This study also proposes to detect cloud shadows by shifting a bounding box of a cloud region along the cloud shadow direction and identifying dark pixels within the shifted box. The experiments used RapidEye images acquired at various viewing angles and viewing directions. The proposed method greatly improved the accuracy of estimating cloud shadow directions and extracting cloud shadow regions. The outcomes of this study are expected to be utilized for precise cloud shadow detection and correction. As future work, the accuracy of cloud shadow detection needs to be enhanced through post-processing methods such as the watershed algorithm. Additionally, an interesting future study would be to check whether the findings of this study are applicable to other objects with uncorrected relief displacements in ortho-rectified or unmanned aerial vehicle (UAV) images, such as high-rise buildings.
Keywords: 
Subject: Environmental and Earth Sciences - Remote Sensing

1. Introduction

The utility of optical satellites is increasing due to their ability to periodically observe extensive regions and to enable effective Earth observation missions [1]. Despite these significant advantages, however, satellite imagery, particularly in the field of remote sensing, presents various challenges. One such challenge is shadows [2]. Shadows can be cast by different sources, such as buildings, clouds, and terrain, and can lead to misleading outcomes in classification or object detection, because their presence causes a significant loss of radiometric information in the shadowed areas [2,3]. This vulnerability also affects unmanned aerial vehicle (UAV) imagery [4,5]. Given the significant impact shadows can have on the accuracy and reliability of satellite imagery, many researchers treat shadows as noise and as targets for removal or correction [6].
Building upon the general challenges posed by shadows, cloud shadows present significant issues in optical satellite imagery. With an annual average cloud cover of 66% [7], a significant amount of information is lost as a result of clouds and the corresponding cloud shadows. As noted above, the presence of clouds and cloud shadows in optical satellite images can interfere with ground observations [8,9,10]. Therefore, detecting clouds and cloud shadows is crucial for ensuring the reliability of satellite imagery [11].
To detect cloud shadows, which are perceived as noise, Zhu and Woodcock [12] conducted cloud and cloud shadow detection for the Landsat and Sentinel-2 satellites. For cloud shadows, their method estimates the projected location of cloud shadows from cloud masks detected a priori by considering the solar and satellite azimuth and zenith angles. Foga et al. [13] reported that this method performs well and has been tested by the United States Geological Survey (USGS). For high-resolution satellite imagery, Le Hégarat-Mascle and André [14] and Fisher [15] developed cloud shadow detection methods focusing on the geometric relationship between clouds and their shadows for SPOT satellite images. Similarly, Zhong et al. [16] developed a cloud shadow detection method based on such geometric relationships for Chinese satellite images.
As shown in the above developments, it is well known that the direction of shadows in raw satellite images depends on the sun illumination and sensor viewing directions. It is also well known that ortho-rectification removes relief displacements caused by an oblique sensor viewing direction. This implies that the shadow direction and the sun illumination direction should coincide in ortho-rectified images. However, we observed that this implication may not hold for objects such as clouds and cloud shadows.
The reason is that the height of clouds is not considered during the ortho-rectification process, leaving relief displacement due to cloud height in orthoimages. This phenomenon was also mentioned by Pailot-Bonnétat et al. [17], albeit without explicit experiments. This paper highlights the necessity of the sensor viewing direction, in addition to the sun illumination direction, for determining the shadow direction of objects with uncorrected relief displacements such as clouds. We show that the relief displacement of clouds still exists in orthoimages, point out the importance of considering sensor geometry for cloud shadow detection from orthoimages, and then propose an automated approach for cloud shadow detection.
For cloud shadow detection, we first utilized the OCM (Object-oriented Cloud and Cloud-shadow Matching) method previously developed by Zhong et al. [16] to generate a cloud map. Candidate regions of cloud shadows were then projected from the cloud map along the shadow direction, and actual cloud shadow regions were detected. For the experiments, we employed RapidEye satellite images taken at nadir and oblique viewing angles. To examine relief displacements of clouds in orthoimages, geometric correction and ortho-rectification were applied to the data. The direction of cloud shadows in the orthoimages was estimated for two cases: the first considered the sun illumination direction only, and the second considered both the sun illumination and sensor viewing directions. The estimated cloud shadow directions were compared with true directions measured manually. In the orthoimages made from nadir images, the angles from the two cases were similar to the true angle. However, in the orthoimages made from high-oblique images, there was a maximum difference of 21.3° between the angle from the first case and the true angle, whereas the difference between the angle from the second case and the true angle was less than 4.0°. We performed cloud shadow detection using the shadow angles from the two cases. Accuracy results showed that shadow detection using the angle from the second case improved the average f1-score by 0.17 and increased the average detection rate by 7.7% compared to the first case. By considering both solar and sensor geometries, higher accuracy in cloud shadow detection was achieved. These results support the estimation of shadow directions for clouds with uncorrected relief displacements and the cloud shadow detection method proposed in this paper.

2. Materials and Methods

The cloud shadow detection method proposed in this study consists of three main steps: data preprocessing, estimation of the search range for cloud shadow detection, and detection of cloud shadows, as illustrated in Figure 1. In the preprocessing step, the pixel values of a satellite image are converted to Top-of-Atmosphere (TOA) values and cloud pixels are detected by the OCM method [16]. In the search range estimation step, metadata at the time of satellite image acquisition, including the satellite and solar zenith and azimuth angles, are used to calculate the direction vector from clouds to cloud shadows. Subsequently, the search range onto which cloud shadows may be projected is calculated from presumed minimum and maximum cloud heights. In the cloud shadow detection step, cloud pixels are shifted incrementally from the minimum to the maximum of the search range along the cloud shadow direction. Cloud shadows are then detected by selecting the shift value that best matches the spectral characteristics of cloud shadows.

2.1. Methods

2.1.1. Data Preprocessing

The pixel values (Digital Number, DN) in a satellite image contain distortions due to variations in solar angle resulting from temporal differences, changes in the distance between the Sun and Earth, and spectral differences [18]. These distortions cause variations in pixel values between images, making it challenging to accurately determine the physical quantities of the surface [19]. Therefore, a process is necessary to convert pixel values into reflectance values corrected for such distortions. This study employed Equations (1) and (2), taken from the Product Specification provided by Planet Labs [20], to convert pixel values into Top-of-Atmosphere (TOA) reflectance values. In Equation (1), $RAD(i)$ represents the radiance of the $i$-th band, where $i$ denotes the band number, and $ScaleFactor$ denotes the coefficient for radiance conversion. In Equation (2), $REF_{TOA}(i)$ represents the TOA reflectance of the $i$-th band, $SunDist$ is the distance between the Earth and the Sun at the time of image acquisition, $EAI(i)$ denotes the Exo-Atmospheric Irradiance, and $SolarZenith$ represents the solar zenith angle. The cloud map is generated using the cloud map with low confidence (CML) of the OCM method. The CML loosens the thresholds of the VNIR bands to detect clouds sufficiently [16].
$$RAD(i) = DN(i) \times ScaleFactor, \quad (1)$$
$$REF_{TOA}(i) = \frac{RAD(i) \times \pi \times SunDist^{2}}{EAI(i) \times \cos(SolarZenith)}, \quad (2)$$
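As a concrete illustration, the following is a minimal Python sketch of Equations (1) and (2). The function name and arguments are ours; the scale factor, $EAI(i)$, Earth-Sun distance (in astronomical units), and solar zenith angle are assumed to be read from the RapidEye product metadata.

```python
import numpy as np

def dn_to_toa_reflectance(dn, scale_factor, eai, sun_dist_au, solar_zenith_deg):
    """Convert one band's DN values to TOA reflectance (Equations 1 and 2).

    dn               -- 2-D array of raw digital numbers for band i
    scale_factor     -- radiometric scale factor from the product metadata
    eai              -- exo-atmospheric irradiance EAI(i) for band i
    sun_dist_au      -- Earth-Sun distance at acquisition time [AU]
    solar_zenith_deg -- solar zenith angle [degrees]
    """
    rad = dn.astype(np.float64) * scale_factor  # Equation (1)
    ref_toa = (rad * np.pi * sun_dist_au**2
               / (eai * np.cos(np.radians(solar_zenith_deg))))  # Equation (2)
    return ref_toa
```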

2.1.2. Calculation of Cloud Shadow Direction

The geometric principle behind shadow formation is that sunlight is obstructed by objects, casting shadows on the side opposite to the solar direction. Shadows can thus be located using the solar azimuth angle and the height of the object. However, as mentioned in the introduction, considering the geometry of the sensor is essential for cloud shadow detection. This is illustrated in Figures 2 and 4. Figure 2 (a) illustrates the projected locations of cloud shadows based on the height of the clouds. $C_0$ represents the true ground location of the clouds. $C_1$ and $C_2$ represent cloud locations at different heights $h_1$ and $h_2$. $CS_1$ and $CS_2$ denote the cloud shadows projected from $C_1$ and $C_2$ due to the oblique sun direction.
Figure 2 (b) illustrates the positions of clouds and cloud shadows projected in an orthoimage. If ortho-rectification were performed ideally by considering the height of clouds, all clouds would be positioned at $C_o$ in Figure 2 (b), and the cloud shadows for clouds of various heights would be projected along the line from the sun through $C_o$. In reality, however, the displacement due to cloud height is not corrected during the ortho-rectification process. As a result, clouds depicted in the orthoimage are positioned along the line from the sensor through $C_o$, at $C_1$ to $C_2$ according to their heights. As cloud height increases, relief displacement grows in the orthoimages, posing a challenge for estimating cloud shadow positions solely from the geometry of the sun [17].
Figure 3 illustrates clouds and their shadows in the original image before ortho-rectification (a) and in the orthoimage (b). The cloud shadow pixels in (b) have been ortho-rectified using the corresponding surface heights. As a result, it is difficult to discern significant differences between the cloud and cloud shadow positions in (a) and (b). This indicates that the cloud relief displacement in the orthoimage has not been effectively corrected.
Figure 4 illustrates the cloud relief displacement caused by the satellite's viewing angle. Figures 4 (a) and (b) show the cloud relief displacements in vertical and oblique images, respectively. Here, $C_o$ represents the ground point directly beneath the cloud, and $C_1$ and $C_2$ indicate clouds at different heights. $PC$, $PC_1$, and $PC_2$ represent the projected positions of the ground point below the clouds and of the clouds at their respective heights. As the satellite captures images at a more oblique angle, $PC_2$ in (b) exhibits a more pronounced relief displacement than $PC_2$ in (a). In summary, relief displacement due to cloud height persists in orthoimages, and the more oblique the viewing angle, the more pronounced the displacement caused by cloud height variations. Therefore, it is crucial to consider the satellite's viewing geometry, which allows cloud height variations to be incorporated.
To detect cloud shadows for clouds that still exhibit relief displacement in orthoimages, it is necessary to determine the direction from clouds to their shadows. The calculation of the cloud-to-shadow direction vector ($\vec{C}$) is illustrated in Figure 5. Since the relief displacement of clouds is not corrected in orthoimages, the clouds are projected onto the position of the projected cloud in Figure 5. When applying only the solar direction vector ($\vec{S}$), the projected cloud does not align with the direction of the cloud shadow. Therefore, it is necessary to also consider the sensor direction vector ($\vec{V}$). Both vectors ($\vec{S}$ and $\vec{V}$) can be calculated from the metadata at the time of image acquisition, and $\vec{C}$ is obtained as their difference, as shown in Equation (3). In Equation (4), $\theta_S$, $\varphi_S$, $\theta_V$, and $\varphi_V$ represent, respectively, the solar zenith and azimuth angles and the sensor zenith and azimuth angles.
$$\vec{C} = \vec{S} - \vec{V}, \quad (3)$$
$$\angle C = \pi + \tan^{-1}\!\left(\frac{\tan(\theta_S)\sin(\varphi_S) - \tan(\theta_V)\sin(\varphi_V)}{\tan(\theta_S)\cos(\varphi_S) - \tan(\theta_V)\cos(\varphi_V)}\right), \quad (4)$$
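As an illustration of Equations (3) and (4), the sketch below computes the azimuth of the cloud-to-shadow vector from the four metadata angles; the function name is ours, and the horizontal components of $\vec{S}$ and $\vec{V}$ are expressed per unit of cloud height.

```python
import math

def shadow_direction(theta_s, phi_s, theta_v, phi_v):
    """Azimuth and horizontal magnitude of the cloud-to-shadow vector
    C = S - V (Equations 3 and 4). All angles in degrees; azimuths
    measured clockwise from north, as in the image metadata."""
    ts, ps = math.radians(theta_s), math.radians(phi_s)
    tv, pv = math.radians(theta_v), math.radians(phi_v)
    # Horizontal (east, north) components of S - V per metre of cloud height.
    d_east = math.tan(ts) * math.sin(ps) - math.tan(tv) * math.sin(pv)
    d_north = math.tan(ts) * math.cos(ps) - math.tan(tv) * math.cos(pv)
    # atan2 resolves the quadrant; the added 180 degrees is the pi in Eq. (4).
    azimuth = (math.degrees(math.atan2(d_east, d_north)) + 180.0) % 360.0
    magnitude = math.hypot(d_east, d_north)  # ||C||, used in Equation (5)
    return azimuth, magnitude
```

As a check against the paper's own data: with the Scene-1 metadata of Table 1, `shadow_direction(39.6, 159.4, 16.3, 281.3)` yields approximately 325.2°, and passing a sensor zenith angle of 0° (the sun-only case) yields approximately 339.4°, matching $C_2$ and $C_1$ in Table 2.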

2.1.3. Calculation of the Cloud Shadow Range

Once the cloud-to-shadow direction is determined, cloud shadow locations can be estimated using the height of the clouds [16]. However, cloud height cannot be determined from a single optical satellite image, so assumptions about the minimum and maximum cloud heights are necessary to estimate the search range for cloud shadow detection. In mid-latitude regions of the Northern Hemisphere, excluding tropical areas, the maximum height of clouds is known to be 12 km [16,21]. The minimum height varies among studies; based on previous work [22], we assumed a minimum cloud height of 200 m, as illustrated in Figure 6. The search range $D$ (in meters) for cloud shadows is calculated from the cloud height $h$ (between $h_{min}$ and $h_{max}$) and the magnitude of the cloud-to-shadow direction vector $\vec{C}$, as shown in Equation (5).
$$D = h \times \|\vec{C}\|, \quad (5)$$
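A direct reading of Equation (5) with the height bounds above gives the search interval. This small helper assumes the $\|\vec{C}\|$ magnitude returned by the direction sketch in the previous subsection; the constants encode the paper's assumed cloud height bounds.

```python
H_MIN, H_MAX = 200.0, 12_000.0  # assumed cloud height bounds in metres

def shadow_search_range(c_magnitude):
    """Equation (5): minimum and maximum shadow search distances in metres."""
    return H_MIN * c_magnitude, H_MAX * c_magnitude
```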

2.1.4. Cloud Shadow Detection

The cloud shadow detection method proposed in this study shifts the clouds within the search range along the cloud shadow direction angle. Infrared band-based statistics are then computed to identify the shift with the minimum value, which indicates the detected cloud shadows.
Clouds have varying heights for each object; hence, cloud shadow detection needs to be performed for each cloud object. In this study, we employ the outline information of clouds within the cloud mask to individually extract cloud objects. The clouds extracted for each object are shifted based on the Ground Sample Distance (GSD) in the cloud shadow direction angle, ranging from the minimum to the maximum detectable distance. To calculate the shifted position of clouds, the ground coordinates of the shifted clouds are computed using Equations (6) and (7) as shown in Figure 7. These ground coordinates are then transformed into image coordinates.
$$X_{geo} = GT_0 + GT_1 \times Boundary_{TLx} + D \times \cos(\angle C), \qquad Y_{geo} = GT_3 + GT_5 \times Boundary_{TLy} + D \times \sin(\angle C), \quad (6)$$
$$\begin{bmatrix} X_{Img} & Y_{Img} \end{bmatrix}^{T} = \begin{bmatrix} \dfrac{X_{geo} - GT_0}{GT_1} & \dfrac{Y_{geo} - GT_3}{GT_5} \end{bmatrix}^{T}, \quad (7)$$
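The pixel shift of Equations (6) and (7) can be sketched as below for a north-up GDAL-style geotransform ($GT_0$ to $GT_5$, rotation terms zero). The angle convention follows Equation (6), where the cosine applies to the X (easting) offset; converting a compass azimuth $az$ from Equation (4) via $angle = 90° - az$ is our assumption, as is the function name.

```python
import math

def shift_pixel(col, row, gt, distance_m, angle_deg):
    """Shift one cloud pixel by distance_m along the shadow direction
    (Equations 6 and 7). gt is a GDAL-style geotransform
    (GT0, GT1, GT2, GT3, GT4, GT5) with GT2 = GT4 = 0 assumed."""
    a = math.radians(angle_deg)
    # Pixel -> ground coordinates, plus the ground offset (Equation 6).
    x_geo = gt[0] + gt[1] * col + distance_m * math.cos(a)
    y_geo = gt[3] + gt[5] * row + distance_m * math.sin(a)
    # Ground -> pixel coordinates (Equation 7).
    return (x_geo - gt[0]) / gt[1], (y_geo - gt[3]) / gt[5]
```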
Since the true cloud height, and hence the true shadow projection distance, is unavailable for each cloud object, a measure based on the spectral characteristics of cloud shadows must be incorporated. The measure is formulated as follows. First, cloud shadows typically exhibit low reflectance in the near-infrared (NIR) band. Second, to identify points with low reflectance values, cloud areas must be excluded from the operation using the cloud mask. Third, the areas most affected by shadow correspond to those where the NIR statistic within the shifted cloud mask is at its minimum. Accordingly, we define the measure in Equation (8), explained as follows: $x$ represents pixels, $A$ the region of a shifted cloud object, $B$ the cloud mask map, $h(x)$ the NIR-band statistic defined in Equation (9), $f(x)$ the region of cloud objects satisfying the condition, and $g(x)$ the objective function that takes the minimum value among the cloud object regions meeting the condition. In Equation (9), $\mu_{NIR}$ and $\sigma_{NIR}$ represent the mean and standard deviation of the NIR band, respectively, and $\alpha$ denotes the Z-score, for which a value of 1.96 was applied.
$$g(x) = \begin{cases} f(x), & \text{if } x \in A \setminus B,\ h(x) < 0.17,\ D \le D_{max} \\ \text{Cloud shadow}, & \text{if } x \in A \setminus B,\ h(x) \ge 0.17,\ D > D_{max} \\ \text{next loop}, & \text{otherwise} \end{cases} \quad (8)$$
$$h(x) = \mu_{NIR} + \alpha \times \sigma_{NIR}, \quad (9)$$
$$Potential\ Cloud\ Shadow\ Region\ (PCSR) = \operatorname{argmin}\, g(x), \quad (10)$$
$$Potential\ Cloud\ Shadow\ (PCS) = \left(NIR_{TOA} < 0.17 \ \text{and}\ \frac{NIR_{TOA}}{RED_{TOA}} > 1.0\right) \cap PCSR, \quad (11)$$
We detect cloud shadows by the following procedure. While the distance of the shifted cloud object is less than the maximum detectable distance ($D_{max}$), the process iterates, moving one pixel at a time along the cloud shadow direction. Through these iterations, the region with the lowest NIR-band statistic is identified and confirmed as the cloud shadow region. This region is formally defined as the Potential Cloud Shadow Region (PCSR), as described by Equation (10). Finally, we assume that one cloud object corresponds to one cloud shadow object. Within the confirmed area, we detect Potential Cloud Shadow (PCS) pixels using Equation (11). Figure 8 (a) shows the state before noise removal, and (b) the state after noise removal. Among the PCS pixels, we extract the object with the largest outer contour and consider the rest as noise.
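The iterative search can be summarized by the following sketch, assuming the shift_pixel helper above and a per-object list of cloud pixels. The names, pixel rounding, and bounds checks are ours, and the branches of Equation (8) are collapsed into keeping the shift with the lowest in-range statistic $h$.

```python
import numpy as np

def find_pcsr(nir_toa, cloud_mask, object_pixels, gt, angle_deg,
              d_min, d_max, gsd, alpha=1.96, nir_thresh=0.17):
    """Shift one cloud object from d_min to d_max along the shadow direction
    and return the shift distance whose NIR statistic
    h = mean + alpha * std (Equation 9) is lowest (Equations 8 and 10)."""
    rows, cols = nir_toa.shape
    best_d, best_h = None, np.inf
    d = d_min
    while d <= d_max:                        # stop at D_max (Equation 8)
        vals = []
        for c, r in object_pixels:
            sc, sr = shift_pixel(c, r, gt, d, angle_deg)
            sc, sr = int(round(sc)), int(round(sr))
            # Keep pixels inside the image and outside the cloud mask (A - B).
            if 0 <= sr < rows and 0 <= sc < cols and not cloud_mask[sr, sc]:
                vals.append(nir_toa[sr, sc])
        if vals:
            h = float(np.mean(vals) + alpha * np.std(vals))
            if h < nir_thresh and h < best_h:
                best_d, best_h = d, h
        d += gsd                             # advance one pixel per iteration
    return best_d, best_h
```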

2.2. Materials and Validation Method

In this study, cloud shadow detection experiments were conducted using RapidEye images at 5 m spatial resolution. As the images were provided at Level 1B, which is uncorrected for geometric distortion, geometric correction and ortho-rectification were applied based on the methods developed in our previous studies [23,24,25]. To examine the relief displacement of clouds with respect to sensor geometry, this study collected images captured with different viewing geometries. Figure 9 shows the images used for the experiments, and Table 1 summarizes their metadata. All three images were taken in September over Dongducheon-si and Gaesong-si, whose latitude difference is less than 0°7′35″. Therefore, the sun illumination angles for the three images were very similar.
Image (a) was captured with the satellite tilted obliquely to the west, with a sensor zenith angle of 16.3° and an azimuth angle of 281.3°. Image (b) was captured near vertically with the satellite tilted slightly to the east, with a sensor zenith angle of 3.8° and an azimuth angle of 99.8°. Image (c) was captured with the satellite tilted obliquely to the east, with a sensor zenith angle of 17.1° and an azimuth angle of 98.8°. Note that the second image was captured with near-nadir viewing and that the other two images were captured with significant viewing angles in opposite directions to each other. Images (a) to (c) had cloud cover percentages of 1.33%, 0.58%, and 12.66%, respectively. The cloud maps were generated using the OCM method.
To verify the method proposed here, reference cloud shadow directions were measured through visual interpretation of the angles from clouds to cloud shadows. Similarly, reference cloud shadow regions were extracted manually by visual interpretation. The accuracy assessment involved qualitative and quantitative analyses comparing the reference data with the results from our method. Qualitative analysis was performed by visual interpretation against the reference data. Quantitative analysis used a confusion matrix, which includes True Positive (TP), True Negative (TN), False Positive (FP), and False Negative (FN) counts. Precision, recall, and f1-score, calculated from the confusion matrix (Equations (12), (13), and (14)), represent the level of accuracy, with values closer to 1 indicating higher accuracy.
$$Precision = \frac{TP}{TP + FP}, \quad (12)$$
$$Recall = \frac{TP}{TP + FN}, \quad (13)$$
$$F1\ score = \frac{2 \times Precision \times Recall}{Precision + Recall}, \quad (14)$$
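For completeness, a small sketch of Equations (12) to (14) applied to boolean shadow masks; the function name is illustrative, and the predicted and reference masks are assumed to be numpy arrays of equal shape.

```python
import numpy as np

def shadow_accuracy(pred, ref):
    """Precision, recall, and F1 score (Equations 12-14) from boolean masks."""
    tp = np.sum(pred & ref)    # shadow pixels detected correctly
    fp = np.sum(pred & ~ref)   # false detections
    fn = np.sum(~pred & ref)   # missed shadow pixels
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)
    return precision, recall, f1
```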

3. Results

3.1. Verification of Cloud Shadow Direction

In this section, the direction angle for cloud shadows was verified. Verification data were collected by visually inspecting 10 direction angles from cloud to cloud shadow in each image. The collection criteria involved connecting the center of a cloud to the center of its cloud shadow or connecting the edge of a cloud to the corresponding edge of its cloud shadow, depending on the shapes of the cloud and cloud shadow. Verification was performed by comparing the cloud shadow direction $C_1$, obtained using only the solar direction vector, and the direction $C_2$, obtained using both the solar and sensor direction vectors, against the verification data ($T$).
Figure 10 shows the 10 verification data extracted from Scene-1, which was obtained by tilting the satellite to the west by 16.3°. Figure 11 highlights the comparison of cloud shadow detection results for Scene-1. The left images are NIR-R-G composite images indicating the two cloud shadow direction angles, $C_1$ and $C_2$, and the verification data $T$. The right image shows the bounding boxes for cloud shadows shifted from the bounding box of the cloud along the two cloud shadow directions. As shown in Figure 11, $C_2$ correctly points towards the center of the cloud shadow, and the bounding box shifted along $C_2$ covers the entire cloud shadow region.
Table 2 shows the angle values of $C_1$, $C_2$, and $T$, together with the angle differences between $C_1$ and $T$ and between $C_2$ and $T$, for the 10 verification data. The absolute mean angle difference between $C_1$ and $T$ was about 17.4°, and that between $C_2$ and $T$ was about 3.2°. Figure 11 and Table 2 support the determination of cloud shadow directions using the sun and sensor directions, as proposed in this paper.
Figure 12 shows the 10 verification data extracted from Scene-2, which was obtained at a nearly vertical viewing angle. Figure 13 illustrates the comparison of cloud shadow detection results in Scene-2. For Scene-2, the differences between $C_1$ and $C_2$ were not as large as those in Scene-1. For near-vertical images, the relief displacements are not severe, and hence the shadow direction angles obtained using the sun direction alone are similar to those obtained using both the sun and sensor directions. As before, Table 3 shows $C_1$, $C_2$, $T$, $C_1 - T$, and $C_2 - T$ for Scene-2. The absolute mean angle difference between $C_1$ and $T$ was about 1.5°, and that between $C_2$ and $T$ was about 2.3°. Figure 12 and Table 3 indicate that sun directions can be used directly as cloud shadow directions for near-vertical images. They also show that the proposed determination of cloud shadow directions using the sun and sensor directions remains sufficiently precise for near-vertical images.
Figure 14 shows the 10 verification data extracted from Scene-3, which was obtained by tilting the satellite to the east by 17.1°. Figure 15 illustrates the comparison of cloud shadow detection results in Scene-3. For Scene-3, the differences between $C_1$ and $C_2$ were as large as those in Scene-1. It is also notable that the shadow direction vectors $C_2$ and the corresponding shadow bounding boxes lie on the opposite side of $C_1$ and its bounding boxes in Scene-3, compared to Scene-1. This is because Scene-3 was tilted to the east whereas Scene-1 was tilted to the west, so the relief displacements occurred in opposite directions. The proposed determination of cloud shadow directions can handle uncorrected cloud relief displacements in various tilt directions.
Table 4 shows $C_1$, $C_2$, $T$, $C_1 - T$, and $C_2 - T$ for Scene-3. The absolute mean angle difference between $C_1$ and $T$ was about 21.3°, and that between $C_2$ and $T$ was about 3.7°. The absolute mean angle difference between $C_1$ and $T$ for Scene-3 was larger than that for Scene-1 because the sensor's tilt angle for Scene-3 was larger than that for Scene-1. The results and observations in Figure 14 and Table 4 strongly support the validity of the proposed determination of cloud shadow directions using the sun and sensor directions.

3.2. Cloud Shadow Detection Accuracy

In this section, we analyzed the accuracy of the cloud shadow detection results using $C_1$ and $C_2$. Figure 16 shows the intermediate process of cloud shadow detection using $C_1$ and $C_2$ for Scene-3. In Figure 16, (a) is the NIR-R-G composite image; (b) to (e) illustrate the locations of a shifted cloud object using $C_1$ and $h(x)$ as described by Equation (9), while (f) to (i) illustrate the same for $C_2$. The NIR value in the top right corner of each panel is the statistic described in Equation (9), i.e., $h(x)$. Orange pixels represent a shifted cloud object, red pixels indicate the PCSR described in Equation (10), yellow directional vectors represent $C_1$, and green directional vectors $C_2$.
The detection distance for the cloud shadow of C6 in Scene-3 was confirmed to be 989 m using $C_2$, as seen in (h). However, when using $C_1$, the corresponding cloud shadow was not detected at distances similar to those of $C_2$ and was skipped, because, for the same reasons discussed in Section 3.1, a precise cloud shadow direction angle was not set. These results further support the validity of determining cloud shadow directions using the sun and sensor directions in orthoimages.
Figure 17 illustrates the post-processing applied after detecting the PCSR. Figure 17 (a) shows the reference data for clouds and cloud shadows, while (b) shows the result before post-processing and (c) the result after post-processing. The red pixels in (b) and (c) represent final cloud shadow pixels, while yellow pixels denote remaining noise pixels such as terrain shadows or water bodies. The PCS was detected within the boundary of the shifted cloud using Equation (11), as in C6 of Figure 15, indicated by the green bounding box. Subsequently, the PCS object with the largest outer contour among the detected pixels, as shown in Figure 17 (c), is taken as the final cloud shadow.
The cloud shadow detection results are presented in Figures 18, 19, and 20 for Scene-1, -2, and -3, respectively. In these figures, (a) to (c) show, in order, the reference cloud and cloud shadows, the detection results using $C_1$, and the detection results using $C_2$. White pixels represent clouds detected separately by the OCM method [16], and black pixels represent cloud shadows. One can observe that the shadow detection using $C_1$ failed to find many cloud shadows, in particular for Scene-1 and Scene-3. This is because the wrong cloud shadow directions could not guide the shifted cloud shadow bounding boxes close enough to the real cloud shadows.
On the other hand, the cloud shadow detection using $C_2$ successfully identified the cloud shadows corresponding to the clouds, although there were small shape discrepancies in some results compared to the reference data. These discrepancies could be attributed to variations in shadow shape caused by terrain relief. In future research, such discrepancies need to be reduced through post-processing methods such as the watershed algorithm.
Quantitative analysis results for the three scenes using $C_1$ and $C_2$ are summarized in Table 5. They indicate that the metric values are consistently higher when using $C_2$ for Scene-1 and Scene-3. For Scene-1, the detection rate was 97.5% with $C_2$, compared to 86.2% with $C_1$. For Scene-3, the detection rate was 94.0% with $C_2$, compared to 80.8% with $C_1$. For Scene-2, the shadow detection results from $C_1$ and $C_2$ were very similar because $C_1$ and $C_2$ were very similar and the locations of the shifted bounding boxes were also very similar.
The average f1-scores for the cases using $C_1$ and $C_2$ are 0.51 and 0.68, respectively; the detection accuracy with $C_2$ thus improved by 0.17 compared to $C_1$. The average detection rates for $C_1$ and $C_2$ are 88.8% and 96.5%, respectively. These results confirm that, even though the relief displacement of clouds is not corrected in the orthoimages, the proposed method allows for more accurate cloud shadow detection.

4. Discussion and Conclusions

This study applied a geometric method to detect cloud shadows in satellite images. It is well known that the direction of shadows in raw satellite images is determined by the sensor and sun illumination directions. However, shadow directions in orthoimages had not been studied explicitly. This study proposed and verified that the direction of cloud shadows in orthoimages is also determined by both the sensor and sun directions, because relief displacements due to cloud height above ground are not corrected through ortho-rectification. The findings of this paper should be applicable to other objects with uncorrected relief displacements in ortho-rectified or unmanned aerial vehicle (UAV) images, such as high-rise buildings. This can be a new and interesting research topic.
The experimental results show that cloud shadow directions cannot be determined solely by the sun illumination vector, particularly in orthoimages generated from oblique images. The inclusion of sensor geometry improved the accuracy of estimating cloud shadow directions, and this improvement contributed significantly to cloud shadow detection. By shifting a bounding box of a cloud region along the cloud shadow direction, we could place the bounding box on the cloud shadows, and cloud shadow regions were then extracted precisely by identifying dark pixels within the box. In the future, the accuracy of cloud shadow detection needs to be enhanced through post-processing methods such as the watershed algorithm. The outcomes of this study are expected to be utilized for precise cloud shadow detection and cloud shadow correction.

Author Contributions

Conceptualization, T.K. and H.K.; methodology, H.K.; validation, T.K., W.Y.; investigation, H.K.; writing—original draft preparation, H.K.; writing—review and editing, T.K., H.K. and W.Y.; visualization, W.Y.; supervision, T.K.; project administration, T.K.; funding acquisition, T.K.

Funding

This study was supported jointly by the Korea Forest Service's National Institute of Forest Science under the project titled "Reception, Processing, ARD Standardization, and Development of Intelligent Forest Information Platform" (task number: FM0103-2021-01) and by the Korea Agency for Infrastructure Technology Advancement grant funded by the Ministry of Land, Infrastructure and Transport (RS-2022-00141819, Development of Advanced Technology for Absolute, Relative, and Continuous Complex Positioning to Acquire Ultra-precise Digital Land Information).

Data Availability Statement

The original contributions presented in the study are included in the article; further inquiries can be directed to the corresponding author.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Kang, J.; Kim, G.; Jeong, Y.; Kim, S.; Youn, Y.; Cho, S.; Lee, Y. U-Net Cloud Detection for the SPARCS Cloud Dataset from Landsat 8 Images. Korean Journal of Remote Sensing. 2021, 37(5_1), 1149–1161. [CrossRef]
  2. Shahtahmassebi, A.; Yang, N.; Wang, K.; Moore, N.; Shen, Z. Review of shadow detection and de-shadowing methods in remote sensing. Chin. Geogr. Sci. 2013, 23, 403–420. [CrossRef]
  3. Mostafa, Y. A review on various shadow detection and compensation techniques in remote sensing images. Can. J. Remote Sens. 2017, 43, 545–562. [CrossRef]
  4. Aboutalebi, M.; Torres-Rua, A. F.; Kustas, W. P.; Nieto, H.; Coopmans, C.; McKee, M. Assessment of different methods for shadow detection in high-resolution optical imagery and evaluation of shadow impact on calculation of NDVI, and evapotranspiration. Irrigation science. 2019, 37, 407-429. [CrossRef]
  5. Liu, X.; Yang, F.; Wei, H.; Gao, M. Shadow Removal from UAV Images Based on Color and Texture Equalization Compensation of Local Homogeneous Regions. Remote Sens. 2022, 14, 2616. [CrossRef]
  6. Alavipanah, S.K.; Karimi Firozjaei, M.; Sedighi, A.; Fathololoumi, S.; Zare Naghadehi, S.; Saleh, S.; Naghdizadegan, M.; Gomeh, Z.; Arsanjani, J.J.; Makki, M.; et al. The Shadow Effect on Surface Biophysical Variables Derived from Remote Sensing: A Review. Land 2022, 11, 2025. [CrossRef]
  7. Mao, K.B.; Yuan, Z.J.; Zuo, Z.Y.; Xu, T.R.; Shen, X.Y.; Gao, C.Y. Changes in global cloud cover based on remote sensing data from 2003 to 2012. Chin. Geogr. Sci. 2019, 29, 306–315. [CrossRef]
  8. Kim, B. H.; Kim, Y.; Han, Y. K.; Choi, W. S.; Kim, Y. Fully Automated Generation of Cloud-free Imagery Using Landsat-8. Journal of the Korean Society of Surveying, Geodesy, Photogrammetry and Cartography. 2014, 32(2), 133–142. [CrossRef]
  9. Byeon, Y.; Choi, S.; Jin, D.; Seong, N.; Jung, D.; Sim, S.; Woo, J.; Jeon, U.; Han, K. Quality Evaluation through Inter-Comparison of Satellite Cloud Detection Products in East Asia. Korean Journal of Remote Sensing. 2021, 37(6_2), 1829–1836. [CrossRef]
  10. Zekoll, V.; de los Reyes, R.; Richter, R. A Newly Developed Algorithm for Cloud Shadow Detection—TIP Method. Remote Sens. 2022, 14, 2922. [CrossRef]
  11. Li, Z.; Shen, H.; Li, H.; Xia, G.; Gamba, P.; Zhang, L. Multi-feature combined cloud and cloud shadow detection in GaoFen-1 wide field of view imagery. Remote Sens. Environ. 2017, 191, 342–358. [CrossRef]
  12. Zhu, Z.; Wang, S.; Woodcock, C.E. Improvement and expansion of the Fmask algorithm: Cloud, cloud shadow, and snow detection for Landsats 4–7, 8, and Sentinel 2 images. Remote Sens. Environ. 2015, 159, 269–277. [CrossRef]
  13. Foga, S.; Scaramuzza, P.L.; Guo, S.; Zhu, Z.; Dilley, R.D.; Beckmann, T.; Schmidt, G.L.; Dwyer, J.L.; Joseph Hughes, M.; Laue, B. Cloud Detection Algorithm Comparison and Validation for Operational Landsat Data Products. Remote Sens. Environ. 2017, 194, 379–390. [CrossRef]
  14. Le Hégarat-Mascle, S.; André, C. Use of Markov Random Fields for automatic cloud/shadow detection on high resolution optical images. ISPRS J. Photogramm. Remote Sens. 2009, 64, 351–366.
  15. Fisher, A. Cloud and Cloud-Shadow Detection in SPOT5 HRG Imagery with Automated Morphological Feature Extraction. Remote Sens. 2014, 6, 776-800. [CrossRef]
  16. Zhong, B.; Chen, W.; Wu, S.; Hu, L.; Luo, X.; Liu, Q. A cloud detection method based on relationship between objects of cloud and cloud-shadow for Chinese moderate to high resolution satellite imagery. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2017, 10, 4898–4908. [CrossRef]
  17. Pailot-Bonnétat, S.; Harris, A.J.L.; Calvari, S.; De Michele, M.; Gurioli, L. Plume Height Time-Series Retrieval Using Shadow in Single Spatial Resolution Satellite Images. Remote Sens. 2020, 12, 3951. [CrossRef]
  18. Prabhakar, M.; Gopinath, K.; Reddy, A.; Thirupathi, M.; Rao, C.S. Mapping hailstorm damaged crop area using multispectral satellite data. Egypt. J. Remote Sens. Space Sci. 2019, 22, 73–79.
  19. Elsharkawy, A.; Elhabiby, M.; El-Sheimy, N. New combined pixel/object-based technique for efficient urban classsification using WorldView-2 data. The International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences. 2012, 39, 191-195.
  20. Satellite Imagery Product Specifications. Available online: https://assets.planet.com/docs/1601.RapidEye.Image.Product.Specs_Jan16_V6.1_ENG.pdf (accessed on 10 January 2024).
  21. Luo, Y.; Trishchenko, A.P.; Khlopenkov, K.V. Developing Clear-Sky, Cloud and Cloud Shadow Mask for Producing Clear-Sky Composites at 250-Meter Spatial Resolution for the Seven MODIS Land Bands over Canada and North America. Remote Sens. Environ. 2008, 112, 4167–4185.
  22. Sun, L.; Liu, X.; Yang, Y.; Chen, T.T.; Wang, Q.; Zhou, X. A Cloud Shadow Detection Method Combined with Cloud Height Iteration and Spectral Analysis for Landsat 8 OLI Data. ISPRS J. Photogramm. Remote Sens. 2018, 138, 193–207.
  23. Yoon, W. A Study on Development of Automatic GCP Matching Technology for CAS-500 Imagery. Master’s thesis, Inha University, Incheon, Republic of Korea, 2019.
  24. Park, H.; Son, J. H.; Jung, H. S.; Kweon, K. E.; Lee, K. D.; Kim, T. Development of the Precision Image Processing System for CAS-500. Korean Journal of Remote Sensing. 2020, 36(5–2): 881–891. [CrossRef]
  25. Son, J. H.; Yoon, W.; Kim, T.; Rhee, S. Iterative Precision Geometric Correction for High-Resolution Satellite Images. Korean Journal of Remote Sensing. 2021, 37(3), 431–447. [CrossRef]
Figure 1. A workflow of the cloud shadow detection method.
Figure 2. Illustration of the location where cloud shadows are projected: (a) a case where cloud shadows are projected based on the height of the clouds; (b) the position of clouds and cloud shadows depicted in the image.
Figure 3. An example of the positions of clouds and cloud shadows in a satellite image: (a) before orthorectification; (b) after orthorectification.
Figure 4. Illustration depicting the cloud relief displacement: (a) a case of vertical image; (b) a case of high oblique image.
Figure 5. A calculation method for the direction vector from cloud to cloud shadow in a 3-dimensional coordinate system.
Figure 6. Explanation for search range of cloud shadow based on cloud height.
Figure 7. Calculation method for cloud object movement in image coordinates using ground coordinates.
Figure 8. An example of noise removal: (a) before noise removal; (b) after noise removal.
Figure 9. Satellite image and reference data used in the experiment (White pixels and black pixels denote clouds and cloud shadows, respectively).
Figure 10. Verification data for checking azimuth angle of cloud shadow from cloud collected in scene-1.
Figure 11. The result of comparing the yellow directional vector considering only the geometry of the sun and the green directional vector considering both a sensor and the sun geometry in scene-1 (White pixels and black pixels denote clouds and cloud shadows, respectively).
Figure 12. Verification data for checking azimuth angle of cloud shadow from cloud collected in scene-2.
Figure 13. The result of comparing the yellow directional vector considering only the geometry of the sun and the green directional vector considering both a sensor and the sun geometry in scene-2 (White pixels and black pixels denote clouds and cloud shadows, respectively).
Figure 14. Verification data for checking azimuth angle of cloud shadow from cloud collected in scene-3.
Figure 15. The result of comparing the yellow directional vector considering only the geometry of the sun and the green directional vector considering both a sensor and the sun geometry in scene-3 (White pixels and black pixels denote clouds and cloud shadows, respectively).
Figure 16. Intermediate process of cloud shadow detection for searching PCSR in scene-3 (White pixels and black pixels denote clouds and cloud shadows, respectively).
Figure 17. Post-processing process for cloud shadow detection (White pixels and black pixels denote clouds and cloud shadows, respectively).
Figure 18. Cloud shadow detection result from scene-1 image (White pixels and black pixels denote clouds and cloud shadows, respectively): (a) ~ (c) represent, in sequence, the reference cloud and cloud shadows, the detection results using $C_1$, and the detection results using $C_2$ from an enlarged image.
Figure 19. Cloud shadow detection result from scene-2 image (White pixels and black pixels denote clouds and cloud shadows, respectively): (a) ~ (c) represent, in sequence, the reference cloud and cloud shadows, the detection results using $C_1$, and the detection results using $C_2$ from an enlarged image.
Figure 20. Cloud shadow detection result from scene-3 image (White pixels and black pixels denote clouds and cloud shadows, respectively): (a) ~ (c) represent, in sequence, the reference cloud and cloud shadows, the detection results using $C_1$, and the detection results using $C_2$ from an enlarged image.
Table 1. Summary of information from the RapidEye images used in the experiment.

| Category | Scene-1 | Scene-2 | Scene-3 |
|---|---|---|---|
| Acquisition date | 2018.09.17 | 2018.09.27 | 2018.09.22 |
| Cloud cover (%) | 1.33 | 0.58 | 12.66 |
| Sun azimuth / zenith angle (°) | 159.4 / 39.6 | 155.6 / 44.0 | 151.4 / 42.6 |
| Viewing azimuth / zenith angle (°) | 281.3 / 16.3 | 99.8 / 3.8 | 98.8 / 17.1 |

All scenes: used bands Red (555.0 nm) and Near-infrared (710.0 nm); product level L1B; spatial resolution 5 m.
Table 2. Analysis results of the verification data collected from scene-1 ($C_1$ = 339.4° and $C_2$ = 325.2° for all cases).

| Case | Angle value for $T$ | $C_1 - T$ | $C_2 - T$ |
|---|---|---|---|
| Case 1 | 322.8° | 16.6° | 2.4° |
| Case 2 | 324.4° | 15.0° | 0.8° |
| Case 3 | 323.8° | 15.6° | 1.4° |
| Case 4 | 321.3° | 18.1° | 3.9° |
| Case 5 | 321.8° | 17.6° | 3.4° |
| Case 6 | 323.0° | 16.4° | 2.2° |
| Case 7 | 324.5° | 14.9° | 0.7° |
| Case 8 | 322.3° | 17.1° | 2.9° |
| Case 9 | 316.2° | 23.2° | 9.0° |
| Case 10 | 319.3° | 20.1° | 5.9° |
| Absolute mean | - | 17.4° | 3.2° |
Table 3. Analysis results of the verification data collected from scene-2 ($C_1$ = 335.6° and $C_2$ = 339.1° for all cases).

| Case | Angle value for $T$ | $C_1 - T$ | $C_2 - T$ |
|---|---|---|---|
| Case 1 | 337.2° | -1.6° | 1.9° |
| Case 2 | 336.5° | -0.9° | 2.6° |
| Case 3 | 335.1° | 0.5° | 4.0° |
| Case 4 | 334.5° | 1.1° | 4.6° |
| Case 5 | 337.9° | -2.3° | 1.2° |
| Case 6 | 337.7° | -2.1° | 1.4° |
| Case 7 | 337.8° | -2.1° | 1.3° |
| Case 8 | 338.3° | -2.7° | 0.8° |
| Case 9 | 335.1° | 0.5° | 4.0° |
| Case 10 | 337.3° | -1.7° | 1.8° |
| Absolute mean | - | 1.5° | 2.3° |
Table 4. Analysis results of the verification data collected from scene-3 ($C_1$ = 331.3° and $C_2$ = 349.8° for all cases).

| Case | Angle value for $T$ | $C_1 - T$ | $C_2 - T$ |
|---|---|---|---|
| Case 1 | 355.1° | -23.8° | -5.3° |
| Case 2 | 351.8° | -20.5° | -2.0° |
| Case 3 | 354.0° | -22.7° | -4.2° |
| Case 4 | 354.1° | -22.8° | -4.3° |
| Case 5 | 346.0° | -14.7° | 3.8° |
| Case 6 | 351.4° | -20.1° | -1.6° |
| Case 7 | 356.7° | -25.4° | -6.9° |
| Case 8 | 355.6° | -24.3° | -5.8° |
| Case 9 | 352.4° | -21.1° | -2.6° |
| Case 10 | 349.3° | -18.0° | 0.5° |
| Absolute mean | - | 21.3° | 3.7° |
Table 5. Accuracy results of cloud shadow detection according to the two direction vectors in the experiment images.

| Category | Scene-1 $C_1$ | Scene-1 $C_2$ | Scene-2 $C_1$ | Scene-2 $C_2$ | Scene-3 $C_1$ | Scene-3 $C_2$ |
|---|---|---|---|---|---|---|
| Precision | 0.79 | 0.87 | 0.80 | 0.82 | 0.93 | 0.96 |
| Recall | 0.32 | 0.59 | 0.45 | 0.45 | 0.33 | 0.62 |
| F1 score | 0.46 | 0.70 | 0.58 | 0.58 | 0.49 | 0.76 |
| Detection ratio (%) | 86.2 | 97.5 | 99.6 | 98.3 | 80.8 | 94.0 |

Total cloud objects: 1332 (Scene-1), 300 (Scene-2), 2011 (Scene-3).
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
Copyright: This open access article is published under a Creative Commons CC BY 4.0 license, which permits the free download, distribution, and reuse, provided that the author and preprint are cited in any reuse.