1. Introduction
The utility of optical satellites is increasing due to their ability to periodically observe extensive regions and to enable effective Earth observation missions [1]. However, despite these significant advantages, satellite imagery, particularly in the field of remote sensing, can present various challenges. One such challenge is shadows [2]. Shadows can be cast by various sources, such as buildings, clouds, and terrain, and can lead to misleading outcomes in tasks such as classification or object detection, because their presence causes a significant loss of radiometric information in the shadowed areas [2,3]. This vulnerability also affects unmanned aerial vehicle (UAV) imagery [4,5]. Given the significant impact shadows can have on the accuracy and reliability of satellite imagery, many researchers treat shadows as noise and as targets for removal or correction in detection operations [6].
Building upon the general challenges posed by shadows, cloud shadows present significant issues in optical satellite imagery. With an annual average cloud cover of 66% [7], a significant amount of information is lost as a result of clouds and their corresponding shadows. As noted above, the presence of clouds and cloud shadows in optical satellite images can interfere with ground observations [8,9,10]. Therefore, detecting clouds and cloud shadows is crucial for ensuring the reliability of satellite imagery [11].
To detect cloud shadows, which are perceived as noise, Zhu and Woodcock [12] conducted cloud and cloud shadow detection for the Landsat and Sentinel-2 satellites. For cloud shadows, their method estimates the projected location of shadows from cloud masks detected a priori, considering the solar and satellite azimuth and zenith angles. Foga et al. [13] reported that this method performed well in tests by the United States Geological Survey (USGS). For high-resolution satellite imagery, Le Hégarat-Mascle and André [14] and Fisher [15] developed cloud shadow detection methods for SPOT satellite images that focus on the geometric relationship between clouds and their shadows. Similarly, Zhong et al. [16] developed a cloud shadow detection method based on geometric relationships for Chinese satellite images.
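To make the geometry behind these projection-based methods concrete, the following minimal sketch computes the ground offset from a cloud's footprint to its shadow using only the solar angles. It illustrates the general principle rather than any cited implementation; the cloud height and the azimuth convention (degrees clockwise from north) are our assumptions.

```python
import math

def sun_shadow_offset(cloud_height_m, sun_azimuth_deg, sun_zenith_deg):
    """Ground offset (east, north) in meters from a cloud's footprint to
    its shadow, considering only the sun's position."""
    # The horizontal offset grows with cloud height and sun zenith angle.
    dist = cloud_height_m * math.tan(math.radians(sun_zenith_deg))
    # The shadow falls on the anti-solar side: sun azimuth + 180 degrees.
    az = math.radians(sun_azimuth_deg + 180.0)
    return dist * math.sin(az), dist * math.cos(az)

# With the Scene-1 solar angles from Table 1 and an assumed 1000 m cloud,
# the shadow is displaced about 827 m toward azimuth 339.4 degrees.
print(sun_shadow_offset(1000.0, 159.4, 39.6))
```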
As shown in the above developments, it is well known that the direction of shadows in raw satellite images depends on the direction of sun illumination and the sensor viewing direction. It is also well known that ortho-rectification removes relief displacements caused by oblique sensor viewing. This implies that the direction of shadows and the direction of sun illumination should coincide in ortho-rectified images. However, we observed that this implication may not hold for objects such as clouds and cloud shadows.
The reason is that the height of clouds is not considered during the ortho-rectification process, leaving a relief displacement proportional to cloud height in orthoimages. This phenomenon was also mentioned in the study by Pailot-Bonnétat et al. [17], although without explicit experiments. This paper highlights the necessity of considering the sensor viewing direction, in addition to the direction of sun illumination, when determining the shadow direction of objects with uncorrected relief displacements such as clouds. We show that the relief displacement of clouds persists in orthoimages, point out the importance of considering sensor geometry for cloud shadow detection from orthoimages, and then propose an automated approach for cloud shadow detection.
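The consequence for orthoimages can be expressed compactly: the cloud appears displaced from its ground footprint away from the sensor, so the apparent cloud-to-shadow vector combines the solar and sensor terms. The sketch below follows this reasoning under our assumptions (azimuths clockwise from north, zenith angles from the vertical); it illustrates the geometry and is not the paper's implementation.

```python
import math

def apparent_shadow_azimuth_deg(sun_az, sun_zen, view_az, view_zen):
    """Azimuth (degrees) of the vector from a cloud's apparent position in
    an orthoimage to its shadow, combining solar and sensor geometry.
    The direction is independent of cloud height, so unit height is used."""
    # Shadow offset from the cloud's true footprint (anti-solar direction).
    ds = math.tan(math.radians(sun_zen))
    sx = ds * math.sin(math.radians(sun_az + 180.0))
    sy = ds * math.cos(math.radians(sun_az + 180.0))
    # Uncorrected relief displacement: in the orthoimage the cloud appears
    # pushed away from the sensor, along the viewing azimuth + 180 degrees.
    dv = math.tan(math.radians(view_zen))
    vx = dv * math.sin(math.radians(view_az + 180.0))
    vy = dv * math.cos(math.radians(view_az + 180.0))
    # Vector from the displaced cloud to the shadow (east, north components).
    east, north = sx - vx, sy - vy
    return math.degrees(math.atan2(east, north)) % 360.0

# The Scene-1 angles from Table 1 give about 325.2 degrees, matching the
# second-case angle in Table 2; the sun-only estimate would be 339.4 degrees.
print(apparent_shadow_azimuth_deg(159.4, 39.6, 281.3, 16.3))
```

With the Scene-2 and Scene-3 angles, the same calculation gives approximately 339.0° and 349.8°, consistent with the second-case angles in Tables 3 and 4.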
In pursuit of cloud shadow detection, we first utilized the OCM (Object-oriented Cloud and Cloud-shadow Matching) method previously researched by Zhong et al. [16] to generate a cloud map. Candidate regions of cloud shadows were then projected from the cloud map along the shadow direction, and actual cloud shadow regions were detected. For the experiments, we employed RapidEye satellite images taken at nadir and at oblique viewing angles. To examine the relief displacement of clouds in orthoimages, geometric correction and ortho-rectification were applied to the data. The direction of cloud shadows in the orthoimages was estimated for two cases: the first case considered the direction of sun illumination only, and the second case considered both the direction of sun illumination and the sensor viewing direction. The estimated directions were compared with the true direction, which was measured manually. In the orthoimages made from nadir images, the angles from both cases were similar to the true angle. However, in the orthoimages made from high-oblique images, there was a maximum difference of 21.3° between the angle from the first case and the true angle, whereas the difference between the angle from the second case and the true angle was less than 4.0°. We then performed cloud shadow detection using the shadow angles from the two cases. Accuracy results showed that shadow detection using the angle from the second case improved the average F1-score by 0.17 and increased the average detection rate by 7.7% compared to the results from the first case. By considering both solar and sensor geometries, higher accuracy in cloud shadow detection was achieved. These results support the estimation of shadow direction from clouds with uncorrected relief displacements and the cloud shadow detection approach proposed in this paper.
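As a schematic of the detection step, the sketch below shifts cloud objects along the estimated shadow direction over a range of candidate cloud heights and keeps the shift that best overlaps dark pixels. It is a simplified stand-in for the OCM-based matching described above: the height range and the dark-pixel mask are assumed inputs, and the 5 m pixel size follows Table 1.

```python
import numpy as np

def match_cloud_shadow(cloud_mask, dark_mask, shadow_az_deg, sun_zen_deg,
                       pixel_size_m=5.0, heights_m=range(500, 6001, 250)):
    """Shift a binary cloud mask along the shadow azimuth for candidate
    cloud heights and return the shift that best matches dark pixels."""
    best_score, best_shadow = -1.0, np.zeros_like(cloud_mask)
    az = np.radians(shadow_az_deg)
    for h in heights_m:
        dist_px = h * np.tan(np.radians(sun_zen_deg)) / pixel_size_m
        # Rows increase southward, columns increase eastward.
        dc = int(round(dist_px * np.sin(az)))
        dr = int(round(-dist_px * np.cos(az)))
        # np.roll wraps at image borders; a real implementation would pad.
        candidate = np.roll(cloud_mask, (dr, dc), axis=(0, 1))
        score = np.logical_and(candidate, dark_mask).sum() / max(candidate.sum(), 1)
        if score > best_score:
            best_score, best_shadow = score, candidate
    return best_shadow, best_score
```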
Author Contributions
Conceptualization, T.K. and H.K.; methodology, H.K.; validation, T.K. and W.Y.; investigation, H.K.; writing—original draft preparation, H.K.; writing—review and editing, T.K., H.K. and W.Y.; visualization, W.Y.; supervision, T.K.; project administration, T.K.; funding acquisition, T.K.
Figure 1. Workflow of the cloud shadow detection method.
Figure 2. Illustration of the location where cloud shadows are projected: (a) a case where cloud shadows are projected based on the height of the clouds; (b) the position of clouds and cloud shadows depicted in the image.
Figure 3. An example of the positions of clouds and cloud shadows in a satellite image: (a) before orthorectification; (b) after orthorectification.
Figure 4. Illustration of cloud relief displacement: (a) a case of a vertical image; (b) a case of a high-oblique image.
Figure 5. Calculation method for the direction vector from cloud to cloud shadow in a three-dimensional coordinate system.
Figure 6. Explanation of the search range of cloud shadows based on cloud height.
Figure 7. Calculation method for cloud object movement in image coordinates using ground coordinates.
Figure 8. An example of noise removal: (a) before noise removal; (b) after noise removal.
Figure 9. Satellite image and reference data used in the experiment (white pixels and black pixels denote clouds and cloud shadows, respectively).
Figure 10. Verification data for checking the azimuth angle from cloud to cloud shadow, collected in Scene-1.
Figure 11. Comparison of the yellow direction vector, which considers only the sun's geometry, with the green direction vector, which considers both the sensor and sun geometry, in Scene-1 (white pixels and black pixels denote clouds and cloud shadows, respectively).
Figure 12. Verification data for checking the azimuth angle from cloud to cloud shadow, collected in Scene-2.
Figure 13. Comparison of the yellow direction vector, which considers only the sun's geometry, with the green direction vector, which considers both the sensor and sun geometry, in Scene-2 (white pixels and black pixels denote clouds and cloud shadows, respectively).
Figure 14. Verification data for checking the azimuth angle from cloud to cloud shadow, collected in Scene-3.
Figure 15. Comparison of the yellow direction vector, which considers only the sun's geometry, with the green direction vector, which considers both the sensor and sun geometry, in Scene-3 (white pixels and black pixels denote clouds and cloud shadows, respectively).
Figure 16. Intermediate process of cloud shadow detection for searching PCSR in Scene-3 (white pixels and black pixels denote clouds and cloud shadows, respectively).
Figure 17. Post-processing for cloud shadow detection (white pixels and black pixels denote clouds and cloud shadows, respectively).
Figure 18. Cloud shadow detection results for the Scene-1 image (white pixels and black pixels denote clouds and cloud shadows, respectively): (a)–(c) show, in sequence, the reference clouds and cloud shadows, the detection results using the sun-only direction vector, and the detection results using the combined sun-and-sensor direction vector, from an enlarged image.
Figure 19. Cloud shadow detection results for the Scene-2 image (white pixels and black pixels denote clouds and cloud shadows, respectively): (a)–(c) show, in sequence, the reference clouds and cloud shadows, the detection results using the sun-only direction vector, and the detection results using the combined sun-and-sensor direction vector, from an enlarged image.
Figure 20. Cloud shadow detection results for the Scene-3 image (white pixels and black pixels denote clouds and cloud shadows, respectively): (a)–(c) show, in sequence, the reference clouds and cloud shadows, the detection results using the sun-only direction vector, and the detection results using the combined sun-and-sensor direction vector, from an enlarged image.
Table 1. Summary of information from the RapidEye images used in the experiment.

| Category | Scene-1 | Scene-2 | Scene-3 |
|---|---|---|---|
| Acquisition date | 2018. 09. 17 | 2018. 09. 27 | 2018. 09. 22 |
| Cloud cover (%) | 1.33 | 0.58 | 12.66 |
| Sun azimuth / zenith angle (°) | 159.4 / 39.6 | 155.6 / 44.0 | 151.4 / 42.6 |
| Viewing azimuth / zenith angle (°) | 281.3 / 16.3 | 99.8 / 3.8 | 98.8 / 17.1 |
| Used bands (all scenes) | Red (555.0 nm), Near-infrared (710.0 nm) | | |
| Product level (all scenes) | L1B | | |
| Spatial resolution (all scenes) | 5 m | | |
Table 2. Analysis results of the verification data collected from Scene-1.

| Case | Estimated angle (sun only) | Estimated angle (sun + sensor) | Measured angle | Difference (sun only − measured) | Difference (sun + sensor − measured) |
|---|---|---|---|---|---|
| Case 1 | 339.4° | 325.2° | 322.8° | 16.6° | 2.4° |
| Case 2 | | | 324.4° | 15.0° | 0.8° |
| Case 3 | | | 323.8° | 15.6° | 1.4° |
| Case 4 | | | 321.3° | 18.1° | 3.9° |
| Case 5 | | | 321.8° | 17.6° | 3.4° |
| Case 6 | | | 323.0° | 16.4° | 2.2° |
| Case 7 | | | 324.5° | 14.9° | 0.7° |
| Case 8 | | | 322.3° | 17.1° | 2.9° |
| Case 9 | | | 316.2° | 23.2° | 9.0° |
| Case 10 | | | 319.3° | 20.1° | 5.9° |
| Absolute mean | - | - | - | 17.4° | 3.2° |
Table 3. Analysis results of the verification data collected from Scene-2.

| Case | Estimated angle (sun only) | Estimated angle (sun + sensor) | Measured angle | Difference (sun only − measured) | Difference (sun + sensor − measured) |
|---|---|---|---|---|---|
| Case 1 | 335.6° | 339.1° | 337.2° | -1.6° | 1.9° |
| Case 2 | | | 336.5° | -0.9° | 2.6° |
| Case 3 | | | 335.1° | 0.5° | 4.0° |
| Case 4 | | | 334.5° | 1.1° | 4.6° |
| Case 5 | | | 337.9° | -2.3° | 1.2° |
| Case 6 | | | 337.7° | -2.1° | 1.4° |
| Case 7 | | | 337.8° | -2.1° | 1.3° |
| Case 8 | | | 338.3° | -2.7° | 0.8° |
| Case 9 | | | 335.1° | 0.5° | 4.0° |
| Case 10 | | | 337.3° | -1.7° | 1.8° |
| Absolute mean | - | - | - | 1.5° | 2.3° |
Table 4. Analysis results of the verification data collected from Scene-3.

| Case | Estimated angle (sun only) | Estimated angle (sun + sensor) | Measured angle | Difference (sun only − measured) | Difference (sun + sensor − measured) |
|---|---|---|---|---|---|
| Case 1 | 331.3° | 349.8° | 355.1° | -23.8° | -5.3° |
| Case 2 | | | 351.8° | -20.5° | -2.0° |
| Case 3 | | | 354.0° | -22.7° | -4.2° |
| Case 4 | | | 354.1° | -22.8° | -4.3° |
| Case 5 | | | 346.0° | -14.7° | 3.8° |
| Case 6 | | | 351.4° | -20.1° | -1.6° |
| Case 7 | | | 356.7° | -25.4° | -6.9° |
| Case 8 | | | 355.6° | -24.3° | -5.8° |
| Case 9 | | | 352.4° | -21.1° | -2.6° |
| Case 10 | | | 349.3° | -18.0° | 0.5° |
| Absolute mean | - | - | - | 21.3° | 3.7° |
Table 5. Accuracy results of cloud shadow detection using the two direction vectors in the experiment images.

| Metric | Scene-1 (sun only) | Scene-1 (sun + sensor) | Scene-2 (sun only) | Scene-2 (sun + sensor) | Scene-3 (sun only) | Scene-3 (sun + sensor) |
|---|---|---|---|---|---|---|
| Precision | 0.79 | 0.87 | 0.80 | 0.82 | 0.93 | 0.96 |
| Recall | 0.32 | 0.59 | 0.45 | 0.45 | 0.33 | 0.62 |
| F1 score | 0.46 | 0.70 | 0.58 | 0.58 | 0.49 | 0.76 |
| Detection ratio (%) | 86.2 | 97.5 | 99.6 | 98.3 | 80.8 | 94.0 |
| Total cloud objects | 1332 | 1332 | 300 | 300 | 2011 | 2011 |
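As a quick arithmetic check of the averages quoted in the Introduction, the snippet below recomputes the mean F1-score and detection-ratio improvements from Table 5 (values transcribed from the table; variable names are ours):

```python
# Per-scene values from Table 5: (sun only, sun + sensor).
f1 = {"sun_only": [0.46, 0.58, 0.49], "sun_sensor": [0.70, 0.58, 0.76]}
dr = {"sun_only": [86.2, 99.6, 80.8], "sun_sensor": [97.5, 98.3, 94.0]}

mean = lambda xs: sum(xs) / len(xs)
print(round(mean(f1["sun_sensor"]) - mean(f1["sun_only"]), 2))  # 0.17
print(round(mean(dr["sun_sensor"]) - mean(dr["sun_only"]), 1))  # 7.7
```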