Preprint
Article

Effect of the Light Environment on Image-Based SPAD Value Prediction of Radish Leaves

A peer-reviewed article of this preprint also exists. This version is not peer-reviewed.

Submitted: 03 November 2023
Posted: 06 November 2023
Abstract
This study aims to clarify the influence of the photographic environment under different light sources on image-based SPAD value prediction. Radish leaf patches of 1.5 cm diameter were photographed under halogen or LED light. The input variables for SPAD value prediction using random forests were RGB, HSL, and HSV values, together with the light color temperature (LCT) and illuminance (ILL). Model performance was assessed using Pearson's correlation coefficient (COR), Nash-Sutcliffe efficiency (NSE), and root mean squared error (RMSE). SPAD value prediction was highly accurate in a stable light environment: the COR values of the RGB+LCT+ILL and HSL+LCT+ILL models were 0.929 and 0.922, respectively. Image-based SPAD value prediction was also effective under halogen light, whose color temperature is similar to that at dusk: the COR values of the RGB+ILL and HSL+ILL models were 0.895 and 0.876, respectively. The HSL values under LED light could be used to predict the SPAD value with high accuracy: COR, NSE, and RMSE were 0.972, 0.944, and 2.07, respectively. The partial dependence plots of the H value indicate a change from blue to green with increasing SPAD values. Further studies are required to examine this method under outdoor conditions in spatiotemporally dynamic light environments.
Keywords: 
Subject: Biology and Life Sciences - Agricultural Science and Agronomy

1. Introduction

In the cultivation of root crops such as radish and carrot, it is difficult to control the production environment while checking the condition of the roots. Production environment control for root crops therefore requires estimating subsurface information from aboveground information. Leaf chlorophyll content is known to be related to crop nitrogen content and can be used as an indicator for controlling the production environment during crop cultivation [1,2,3]. Conventional chemical analysis of chlorophyll content is labor-intensive, destructive, and limited to a small measurement area. In contrast, the chlorophyll value (SPAD value) measured by a chlorophyll meter (SPAD-502Plus, Konica Minolta, Inc.) can accurately evaluate the chlorophyll content of plant leaves in a non-destructive manner. SPAD values not only serve as an indirect measure of crop nitrogen content but are also related to plant growth indices and yield [3,4], and water stress during crop cultivation has been reported to increase SPAD values [5]. However, the measurement area of existing SPAD meters is still limited, and only one leaf can be measured at a time. Therefore, a method to predict leaf SPAD values over a large area and in a short time would greatly contribute to optimizing production environment control and cultivation management.
With the development of color image acquisition technology, color information obtained from digital cameras is now widely used to predict the physiological characteristics of crops through, for example, vegetation indices (VIs) [6,9], shape analysis [10], and leaf nitrogen and chlorophyll content estimation [11]. For example, a VI based on the RGB color system has been integrated with crop height information for yield prediction [9]. The HSV and HSL color systems, obtained by conversion from the RGB color system, are also used for semantic segmentation [12] and for applications such as geo-referencing and crop quality monitoring [13]. Multispectral [14,15] and hyperspectral cameras [16,17] have also been studied, but these methods require high equipment costs.
Images acquired with a digital camera are easily affected by light-source conditions such as brightness and color temperature [7]. It is therefore desirable to acquire images under stable light conditions. However, because the intensity of sunlight, for example, changes from moment to moment, the pixel values of an image may change spatiotemporally even for the same subject. In response to this problem, machine learning has been applied to predict SPAD values from pixel values using a photographic device that creates a stable light environment [18]. However, like conventional SPAD meters, this device can only measure the SPAD values of individual leaves.
In this study, we constructed a SPAD value prediction model based on Random Forests [19] using, as input variables, the RGB values obtained from images of radish leaves, the HSV and HSL values obtained by conversion from the RGB values, and the color temperature and illuminance. We focus on the effect of the light environment at the time of photography on the pixel values of digital images and model the relationship between leaf color and SPAD values under photographic conditions that vary in the color temperature and illuminance of the light source. Furthermore, based on information such as the variable importance calculated by the random forests, we examine the influence of the photographic conditions on the prediction of SPAD values.

2. Materials and Methods

2.1. Plant Material

The test material consisted of leaf disks 1.5 cm in diameter (leaf patches) cut from the leaves of a radish produced by the JA Toyohashi Omura gardening club and purchased at a supermarket in Tsuzuki Ward, Yokohama City, Kanagawa Prefecture, Japan, on April 1, 2023.

2.2. Photographic Environment and Data Acquisition Methods

The images were taken using a box-type device with a height of 900 mm, width of 460 mm, and depth of 460 mm, constructed on a flat room floor. A digital camera (DC-GX7MK3, Panasonic Corp.) was fixed on a tripod so that it captured the leaves from above, in anticipation of applications such as image-based SPAD value prediction in open fields using a UAV or in greenhouses using fixed-point cameras. In this photographic environment, the color temperature and light intensity can be controlled by switching the lighting and using a dimmer. In this study, halogen bulbs (color temperature: 3000 K) and LED bulbs (color temperature: 5000 K) were used, each at nine levels of illuminance: 3, 50, 100, 300, 500, 800, 1000, 2000, and 2700 lx for the halogen bulbs, and 120, 300, 500, 800, 1000, 2000, 3000, 4000, and 5440 lx for the LED bulbs.
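For reference, the lighting conditions above can be collected into a small configuration object. The dictionary layout below is an illustrative assumption, but the values are those listed in the text:

```python
# Lighting conditions used in this study (values from the text above);
# the dictionary structure itself is an illustrative assumption.
LIGHT_SOURCES = {
    "HAL": {  # halogen bulbs
        "color_temperature_K": 3000,
        "illuminance_lx": [3, 50, 100, 300, 500, 800, 1000, 2000, 2700],
    },
    "LED": {  # LED bulbs
        "color_temperature_K": 5000,
        "illuminance_lx": [120, 300, 500, 800, 1000, 2000, 3000, 4000, 5440],
    },
}
```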
Color information was obtained from images of radish leaf patches with a circularity greater than 0.85. For each leaf patch, the minimum, mean, median, and maximum values of the RGB, HSL, and HSV color systems (see Section 2.3 for details) were calculated, together with the RGB, HSL, and HSV values at the patch's center of gravity. These values were used as input variables in the random forests analysis (described in Section 2.4), together with the color temperature and illuminance of the photographic environment, to predict the SPAD values.
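The image-processing code is not given in the paper, so the following OpenCV/NumPy sketch only illustrates how such per-patch statistics might be computed. The Otsu-threshold segmentation, the function name, and the feature naming are our assumptions; the circularity criterion (the standard definition 4πA/P² > 0.85) and the per-channel statistics follow the description above.

```python
import cv2
import numpy as np

def extract_color_features(image_bgr):
    """Sketch: per-patch color statistics for a single leaf-patch image.

    Assumes the leaf is darker than the background; flip the threshold
    type otherwise. Returns None if the patch fails the circularity check.
    """
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
    _, mask = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    patch = max(contours, key=cv2.contourArea)

    # Circularity = 4*pi*A / P^2; keep only nearly circular patches (> 0.85).
    area = cv2.contourArea(patch)
    perimeter = cv2.arcLength(patch, closed=True)
    if 4.0 * np.pi * area / perimeter ** 2 <= 0.85:
        return None

    # Pixels inside the patch and the patch's center of gravity.
    patch_mask = np.zeros_like(gray)
    cv2.drawContours(patch_mask, [patch], -1, 255, thickness=cv2.FILLED)
    pixels = image_bgr[patch_mask > 0]          # (N, 3) array of BGR values
    m = cv2.moments(patch)
    cx, cy = int(m["m10"] / m["m00"]), int(m["m01"] / m["m00"])

    feats = {}
    for i, channel in enumerate("BGR"):
        values = pixels[:, i]
        feats[f"{channel}_min"] = int(values.min())
        feats[f"{channel}_mean"] = float(values.mean())
        feats[f"{channel}_median"] = float(np.median(values))
        feats[f"{channel}_max"] = int(values.max())
        feats[f"{channel}_centroid"] = int(image_bgr[cy, cx, i])
    return feats
```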

2.3. Color Information Collection Methods

Color information (RGB values, as well as HSL and HSV values calculated from the RGB values) was obtained from the images of the leaf patches. HSL values are values in the HSL color system, where H, S, and L represent hue, saturation, and lightness, respectively. The H and S values of the HSL color system are reported to be theoretically unaffected by the intensity of illumination, because the illuminance information is concentrated in L alone [13]. This property can reduce the influence of the light source in the photographic environment.
The HSL color system is converted from the RGB color system, defined in a cubic space, by way of the hexagonal-pyramid Hue, Saturation, Intensity (HSI) color system, which is closely related to both the HSL and HSV color systems. The difference between the HSL and HSV color systems lies in the color obtained when S (saturation) is reduced. In both systems, H ranges from 0 to 360, while S and L (or V) range from 0 to 100. In this study, HSL and HSV values were calculated from each RGB value using color conversion functions in a Python library and normalized to the range of 0 to 1.
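The text notes only that a Python library was used for the conversion. One minimal possibility is the standard library colorsys module, which performs exactly these conversions and already returns values normalized to [0, 1] (note that it orders the HSL result as H, L, S):

```python
import colorsys

def rgb_to_hsl_and_hsv(r, g, b):
    """Convert 8-bit RGB values to HSL and HSV tuples normalized to [0, 1]."""
    rn, gn, bn = r / 255.0, g / 255.0, b / 255.0
    h, l, s = colorsys.rgb_to_hls(rn, gn, bn)  # colorsys orders the result H, L, S
    hsl = (h, s, l)
    hsv = colorsys.rgb_to_hsv(rn, gn, bn)      # already ordered H, S, V
    return hsl, hsv

# Example: a leaf-green pixel
hsl, hsv = rgb_to_hsl_and_hsv(60, 140, 75)
```

For a pure green pixel (255 in G only), this yields H = 1/3, corresponding to 120° on the hue circle.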

2.4. Modeling with Random Forests

Random Forests [19] is an ensemble learning algorithm that integrates and analyzes multiple classification and regression trees. The algorithm first draws a large number of bootstrap samples and then grows a decision tree on each sample, with each tree using only a small number of randomly selected variables as candidate features. The final output is the mean of the individual trees for regression and the majority vote for classification. A single decision tree becomes structurally more complex as it grows deeper and is therefore prone to over-fitting; random forests alleviate this problem through bagging and thus generalize better than individual decision trees. Another important feature of random forests is the ability to evaluate the importance of each input variable [20].
In this study, we constructed 17 different models for predicting SPAD values based on color information and environmental information (color temperature and illuminance) (Table 1). The 17 models consisted of eight models using data collected under both halogen and LED light sources (HAL+LED), three models using data collected under a halogen light source only (HAL), three models using data collected under an LED light source only (LED), and three models using LED data split by illuminance (LED_RES). The LED_RES condition, in which the models are trained on low-illuminance data only, reflects the fact that indoor lighting has low light intensity compared with sunlight; it was set up to verify the applicability of the prediction models under extrapolated conditions.
The input variables to the random forests were the color information and the light environment in which the image was captured. The color information comprises the minimum, mean, median, and maximum values of each color component in the RGB, HSL, and HSV color systems, together with the values at the center of gravity of the leaf patch (Table 1). The GB color information uses the minimum, mean, median, and maximum of the G and B values in the RGB color system and their values at the center of gravity of the leaf patch. The GB-only models were constructed to exclude the R component, because red light can be absorbed by water molecules. The input variables for the lighting environment were the color temperature of the light source (LCT) and the illuminance of the light source (ILL) (Table 1). The 17 variants are enumerated in the sketch below.
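To make the structure of Table 1 concrete, the 17 variants can be enumerated as follows. This is a hypothetical encoding for illustration only; the model names follow Table 1, but the code layout is our assumption:

```python
# Hypothetical enumeration of the 17 model variants in Table 1.
COLOR_SETS = {
    "RGB": ["R", "G", "B"],
    "HSL": ["H", "S", "L"],
    "HSV": ["H", "S", "V"],
    "GB":  ["G", "B"],   # R excluded: red light can be absorbed by water
}

# HAL+LED: four color sets, each with and without light environment variables.
MODELS = [("HAL+LED", color, env)
          for color in ("RGB", "HSL", "HSV", "GB")
          for env in ([], ["LCT", "ILL"])]

# HAL, LED, LED_RES: three color sets each, always with illuminance.
MODELS += [(light, color, ["ILL"])
           for light in ("HAL", "LED", "LED_RES")
           for color in ("RGB", "HSL", "GB")]

assert len(MODELS) == 17
```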
We implemented the random forests computation using scikit-learn [21] in Python, with the default hyper-parameters except for random_state. A five-fold cross-validation was conducted while changing the random seed (i.e., random_state) 50 times to evaluate the variability of the model structure originating from the randomization in the random forests computation. Note that five-fold cross-validation was not performed for LED_RES, because its training and validation data were split by design according to illuminance. Model performance was evaluated with the Pearson's correlation coefficient (COR), Nash-Sutcliffe efficiency (NSE) [22], and root mean squared error (RMSE) between the observed SPAD values and the model outputs. For interpreting the models, SHapley Additive exPlanations (SHAP) [23], Partial Dependence (PD), and Individual Conditional Expectation (ICE) were employed to visualize the model structures. SHAP decomposes the difference between the prediction for a particular instance and the mean prediction into the contribution of each feature, based on the Shapley value concept from cooperative game theory. However, it cannot explain how the predictions react to changes in feature values, so PD and ICE were used together to compensate for this part of the interpretability. A sketch of the evaluation pipeline is given below.
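The following scikit-learn sketch follows the description above. The function names are ours, NSE is computed as one minus the ratio of the squared prediction error to the total variance of the observations, and pooling the out-of-fold predictions before scoring is an assumption, since the paper does not state whether the metrics were computed per fold or pooled:

```python
import numpy as np
from scipy.stats import pearsonr
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import KFold

def nse(obs, pred):
    """Nash-Sutcliffe efficiency: 1 - SSE / total variance of the observations."""
    obs, pred = np.asarray(obs, float), np.asarray(pred, float)
    return 1.0 - np.sum((obs - pred) ** 2) / np.sum((obs - obs.mean()) ** 2)

def evaluate(X, y, n_seeds=50, n_splits=5):
    """Five-fold cross-validation repeated with 50 random seeds.

    X: feature matrix (color statistics plus LCT/ILL as applicable);
    y: observed SPAD values. Returns (COR, NSE, RMSE) per seed.
    """
    X, y = np.asarray(X, float), np.asarray(y, float)
    scores = []
    for seed in range(n_seeds):
        y_pred = np.empty_like(y)
        kf = KFold(n_splits=n_splits, shuffle=True, random_state=seed)
        for train_idx, test_idx in kf.split(X):
            # Default hyper-parameters except random_state, as in the text.
            model = RandomForestRegressor(random_state=seed)
            model.fit(X[train_idx], y[train_idx])
            y_pred[test_idx] = model.predict(X[test_idx])
        cor = pearsonr(y, y_pred)[0]
        rmse = float(np.sqrt(np.mean((y - y_pred) ** 2)))
        scores.append((cor, nse(y, y_pred), rmse))
    return scores
```

For the interpretation step, shap.TreeExplainer(model) computes SHAP values for tree ensembles, and sklearn.inspection.PartialDependenceDisplay.from_estimator(model, X, features, kind="both") draws the combined PD and ICE curves.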

3. Results

3.1. Modeling Results

Random forests could predict the SPAD values by capturing the color changes in the different light environments (Table 1, Figs. 1-2). Table 1 shows the means and standard deviations of the model performance for each dataset. Figure 1 shows scatter plots of the mean values of the best models over the five-fold cross-validation, and Figure 2 shows the results of the best models trained on illuminance-limited data (LED_RES). Accuracy was high in all cases, with CORs of 0.85 or higher.
In the analysis integrating the halogen and LED light source data (HAL+LED), the model with the minimum, mean, median, and maximum values in the RGB color system plus the color temperature and illuminance as input (HAL+LED_RGB+LCT+ILL) had the highest accuracy in predicting the SPAD values (Figure 1(b)); adding the color temperature and illuminance information improved the prediction accuracy, although the underestimation of high SPAD values did not improve. In the analysis of the halogen light source (HAL) data, the model using the RGB values and illuminance as input (HAL_RGB+ILL) exhibited the highest prediction accuracy (Figure 1(l)). In the analysis of the LED light source (LED) data, the model using the HSL values and illuminance as input (LED_HSL+ILL) had the highest accuracy (Figure 1(j)); among all the models constructed in this study, this model also achieved the highest COR.
Even when the training data were limited to low illuminance and applicability was verified under extrapolated conditions (LED_RES), the prediction accuracy remained high, with CORs of 0.85 or higher. In this case, the highest prediction accuracy was again obtained when the HSL values were used as model input, as with the full LED data (Figure 2(b)).

3.2. Variable Importance

Figure 3 shows the top 10 mean absolute SHAP values for three models: the model trained on data collected under both halogen and LED light sources with RGB values, color temperature, and illuminance as input (HAL+LED_RGB+LCT+ILL; Figure 3(a)); the model trained on data collected under LED light sources with RGB values and illuminance as input (LED_RGB+ILL; Figure 3(b)); and the model trained on the same LED data with HSL values and illuminance as input (LED_HSL+ILL; Figure 3(c)). These three models were selected from the top five CORs in Table 1, excluding the two models using illuminance-limited data and the model using GB values, to compare the contributions of the light source and color information. In the HAL+LED_RGB+LCT+ILL model, the mean value of G contributed the most (Figure 3(a)). In the LED_RGB+ILL model, the median value of B had the highest contribution to the SPAD value prediction, followed by the mean value of B (Figure 3(b)). In the LED_HSL+ILL model, the median value of H had the highest contribution (Figure 3(c)).
For the HAL+LED_RGB+LCT+ILL, LED_RGB+ILL, and LED_HSL+ILL models, the PD and ICE plots for the top three variables by mean absolute SHAP value are shown in Figure 4; plots for important variables not shown in Figure 4 are provided in Figure A1. SPAD values tended to be higher when the median value of B was low (Figure 4(d)), when G was low (Figs. 4(a) and 4(b)), and when R was high (Figs. A1(a) and A1(b)). For illuminance above 4000 lx, the predicted SPAD values tended to increase slightly (Figure A1(c)). In addition, the SPAD value tended to be high when the H value was low (Figure 4(g)); this change in the H value indicates that the SPAD value decreases as the leaf color shifts from green toward blue on the hue circle. The S and L values did not change significantly with respect to the SPAD values (Figs. 4(h) and 4(i)).

4. Discussion

In this study, we found that the accuracy of SPAD value prediction was enhanced by using color information (RGB, HSL, and HSV) obtained from digital images together with photographic environment information (color temperature and illuminance) as model input. Previous studies have proposed methods that stabilize the light environment during photography [18] or color-correct images using color charts as color standards [24]. However, a method that stabilizes the light environment at the time of photography, although highly accurate, can only predict SPAD values on an individual leaf basis, and color correction of an image is often difficult under certain conditions. In contrast, our method shows the potential of fusing a color temperature meter or an illuminance sensor with a camera for image-based SPAD value prediction, which could be applied to a UAV [6,7,25] or a fixed camera [26,27] without restricting the photographic conditions.
No significant decrease in prediction accuracy was observed when the data obtained under both halogen and LED light sources were combined for training. However, the SPAD value prediction by the HAL model was less accurate than that by the LED model, suggesting that the color temperature affects image-based SPAD value prediction using random forests. The result also suggests that SPAD values can be predicted even under non-white conditions, such as dusk, by collecting and learning data under different illuminance conditions in various light source environments. The high accuracy of the SPAD value prediction for the LED model suggests that an LED light source is more suitable for image-based SPAD value prediction. However, if the object is in shadow, the SPAD value prediction may be difficult or its error may be large. Therefore, when shadows exist on an object, it is necessary to consider a method that uses the ratio of shadows as a model input or a color correction method for the acquired images [27,28].
Following a previous study [7] that reported a similar analysis excluding the R value, which corresponds to wavelengths absorbed by water molecules, we also conducted modeling using only the GB values. The GB-based models also showed high accuracy, with a COR of 0.85 or higher, but lower performance than the RGB-based models. This suggests that although GB-based models can exclude the effect of water molecules, the R values may contain factors important for SPAD value prediction.
The HSL values, which produced the best-performing models under the LED light source in this study, are converted from RGB values. Therefore, the color components may be affected by water molecules in the leaves, and HSL may not be an appropriate color space when this effect must be considered. However, because the HSL color space can theoretically aggregate the illuminance information into L [13], the HSL color system is considered useful in environments with continuously changing lighting conditions.
In the HAL+LED model (Figure 3(a)), which was constructed from data under halogen and LED light sources, the G values were important, whereas the B values were important in the LED model (Figure 3(b)). Previous studies have reported that the blue and green bands contribute significantly to SPAD value prediction [29]. The importance of these variables in this study differed depending on the light source, implying that the important variables for SPAD value prediction vary depending on the light environment. Therefore, it is important to incorporate light environment information, such as the color temperature and illuminance of the light source, for image-based SPAD prediction.
Although this study was conducted in a closed indoor environment, image-based SPAD value prediction can be applied in a wide range of settings, from open fields [24] to greenhouses [26]. The results of this study can be used in environments where leaf color can be obtained under a stable light source, as in previous studies [18]. However, when images are acquired under unstable light environments, such as in fields or greenhouses, the accuracy of the SPAD value prediction becomes an issue. In addition, the maximum illuminance in this study was 5440 lx, whereas outdoor illuminance is far higher, typically ranging from 10,000 to 100,000 lx. Further study is therefore needed to collect observation data for building data-driven models in outdoor environments so that the proposed method can be applied to a variety of lighting conditions.

5. Conclusions

This study was conducted to clarify the influence of the light environment on SPAD value prediction using random forests under different light source environments. Leaf patches with a diameter of 1.5 cm were prepared from Japanese radish (Raphanus sativus L. var. sativus), photographed under the established stable light environment, and used to extract variables for image-based SPAD value prediction. The variables obtained by image analysis were the RGB values (pixel values) and the HSL and HSV values obtained by color space conversion from the RGB values; separately measured color temperature and illuminance were also used as input variables. The results showed that, in a stable light environment, the SPAD values could be predicted with high accuracy (COR 0.929 for RGB, color temperature, and illuminance as input variables; COR 0.922 for HSL, color temperature, and illuminance as input variables). Image-based SPAD value prediction was also possible with relatively high accuracy under a halogen light source with a color temperature close to that of dusk (COR 0.895 for RGB and illuminance as model input, and COR 0.876 for HSL and illuminance as model input). Under LED light sources, the use of HSL values, which separate the illumination information into L, resulted in the most accurate SPAD value prediction (COR = 0.972, NSE = 0.944, RMSE = 2.07). In addition, the H value tended to change from blue toward green as the SPAD value increased. Because the SPAD value prediction in this study was performed in a stable light environment in a closed indoor room, developing a method for outdoor conditions, where the light environment is spatiotemporally dynamic, remains a future challenge.

Author Contributions

Y.K. conceptualized this study and conducted the experiment and analysis. Y.K. drafted this paper, and S.F. finalized it. S.F. supervised this research.

Funding

This work was supported by the FLOuRISH Fellowship Program for Next Generation Researchers at Tokyo University of Agriculture and Technology.

Conflicts of Interest

The authors declare no conflict of interest.

Appendix A

Figure A1. Partial Dependence and Individual Conditional Expectation plots in the best models for SPAD value prediction using RGB values under halogen and/or LED light sources, for important variables not shown in Figure 4: (a) HAL+LED_RGB+LCT+ILL_R_median, (b) LED_RGB+ILL_R_median, (c) LED_RGB+ILL_Illuminance value.

References

  1. Wang, Y.; Wang, D.; Shi, P.; Omasa, K. Estimating rice chlorophyll content and leaf nitrogen concentration with a digital still color camera under natural light. Plant Methods 2014, 10, 36.
  2. Netto, A.T.; Campostrini, E.; de Oliveira, J.G.; Bressan-Smith, R.E. Photosynthetic pigments, nitrogen, chlorophyll a fluorescence and SPAD-502 readings in coffee leaves. Scientia Horticulturae 2005, 104, 199–209.
  3. Guo, Y.; Chen, S.; Li, X.; Cunha, M.; Jayavelu, S.; Cammarano, D.; Fu, Y. Machine Learning-Based Approaches for Predicting SPAD Values of Maize Using Multi-Spectral Images. Remote Sensing 2022, 14, 1337.
  4. Sakamoto, M.; Komatsu, Y.; Suzuki, T. Nutrient Deficiency Affects the Growth and Nitrate Concentration of Hydroponic Radish. Horticulturae 2021, 7, 525.
  5. Stagnari, F.; Galieni, A.; D'Egidio, S.; Pagnani, G.; Pisante, M. Responses of radish (Raphanus sativus) to drought stress. Annals of Applied Biology 2018, 172, 170–186.
  6. Liu, Y.; Hatou, K.; Aihara, T.; Kurose, S.; Akiyama, T.; Kohno, Y.; Lu, S.; Omasa, K. A Robust Vegetation Index Based on Different UAV RGB Images to Estimate SPAD Values of Naked Barley Leaves. Remote Sensing 2021, 13, 686.
  7. Liu, Y.; Hatou, K.; Aihara, T.; Kurose, S.; Akiyama, T.; Kohno, Y.; Lu, S.; Omasa, K. Assessment of naked barley leaf SPAD values using RGB values under different growth stages at both the leaf and canopy levels. Eco-Engineering 2021, 33, 31–38.
  8. Kandel, B.P. SPAD value varies with age and leaf of maize plant and its relationship with grain yield. BMC Research Notes 2020, 13, 475.
  9. Sumesh, K.C.; Ninsawat, S.; Som-ard, J. Integration of RGB-based vegetation index, crop surface model and object-based image analysis approach for sugarcane yield estimation using unmanned aerial vehicle. Computers and Electronics in Agriculture 2021, 180, 105903.
  10. Iwata, H.; Niikura, S.; Matsuura, S.; Takano, Y.; Ukai, Y. Evaluation of variation of root shape of Japanese radish (Raphanus sativus L.) based on image analysis using elliptic Fourier descriptors. Euphytica 1998, 102, 143–149.
  11. Lu, J.; Cheng, D.; Geng, C.; Zhang, Z.; Xiang, Y.; Hu, T. Combining plant height, canopy coverage and vegetation index from UAV-based RGB images to estimate leaf nitrogen concentration of summer maize. Biosystems Engineering 2021, 202, 42–54.
  12. Waldamichael, F.G.; Debelee, T.G.; Ayano, Y.M. Coffee disease detection using a robust HSV color-based segmentation and transfer learning for use on smartphones. International Journal of Intelligent Systems 2022, 37, 4967–4993.
  13. Motonaga, Y.; Kameoka, T.; Hashimoto, A. Constructing Color Image Processing System for Managing the Surface Color of Agricultural Products. Journal of the Japanese Society of Agricultural Machinery 1997, 59, 13–22.
  14. Cao, Y.; Li, G.L.; Luo, Y.K.; Pan, Q.; Zhang, S.Y. Monitoring of sugar beet growth indicators using wide-dynamic-range vegetation index (WDRVI) derived from UAV multispectral images. Computers and Electronics in Agriculture 2020, 171, 105331.
  15. Sulemane, S.; Matos-Carvalho, J.P.; Pedro, D.; Moutinho, F.; Correia, S.D. Vineyard Gap Detection by Convolutional Neural Networks Fed by Multi-Spectral Images. Algorithms 2022, 15, 440.
  16. Kanning, M.; Kühling, I.; Trautz, D.; Jarmer, T. High-Resolution UAV-Based Hyperspectral Imagery for LAI and Chlorophyll Estimations from Wheat for Yield Prediction. Remote Sensing 2018, 10, 2000.
  17. Pourdarbani, R.; Sabzi, S.; Dehghankar, M.; Rohban, M.H.; Arribas, J.I. Examination of Lemon Bruising Using Different CNN-Based Classifiers and Local Spectral-Spatial Hyperspectral Imaging. Algorithms 2023, 16, 113.
  18. Tan, L.; Zhou, L.; Zhao, N.; He, Y.; Qiu, Z. Development of a low-cost portable device for pixel-wise leaf SPAD estimation and blade-level SPAD distribution visualization using color sensing. Computers and Electronics in Agriculture 2021, 190, 106487.
  19. Breiman, L. Random Forests. Machine Learning 2001, 45, 5–32.
  20. Cutler, D.R.; Edwards, T.C., Jr.; Beard, K.H.; Cutler, A.; Hess, K.T.; Gibson, J.; Lawler, J.J. Random Forests for Classification in Ecology. Ecology 2007, 88, 2783–2792.
  21. Pedregosa, F.; et al. Scikit-learn: Machine Learning in Python. Journal of Machine Learning Research 2011, 12, 2825–2830.
  22. Nash, J.E.; Sutcliffe, J.V. River flow forecasting through conceptual models. Part I: A discussion of principles. Journal of Hydrology 1970, 10, 282–290.
  23. Lundberg, S.M.; Lee, S.-I. A unified approach to interpreting model predictions. In Proceedings of the 31st International Conference on Neural Information Processing Systems (NIPS'17), Long Beach, CA, USA, 4–9 December 2017; pp. 4768–4777.
  24. Mohan, P.J.; Dutta Gupta, S. Intelligent image analysis for retrieval of leaf chlorophyll content of rice from digital images of smartphone under natural light. Photosynthetica 2019, 57, 388–398.
  25. Guo, Y.; Yin, G.; Sun, H.; Wang, H.; Chen, S.; Senthilnath, J.; Wang, J.; Fu, Y. Scaling Effects on Chlorophyll Content Estimations with RGB Camera Mounted on a UAV Platform Using Machine-Learning Methods. Sensors 2020, 20, 5130.
  26. Yang, H.; Hu, Y.; Zheng, Z.; Qiao, Y.; Hou, B.; Chen, J. A New Approach for Nitrogen Status Monitoring in Potato Plants by Combining RGB Images and SPAD Measurements. Remote Sensing 2022, 14, 4814.
  27. Putra, B.T.W.; Soni, P. Improving nitrogen assessment with an RGB camera across uncertain natural light from above-canopy measurements. Precision Agriculture 2020, 21, 147–159.
  28. Yuan, Y.; Wang, X.; Shi, M.; Wang, P. Performance comparison of RGB and multispectral vegetation indices based on machine learning for estimating Hopea hainanensis SPAD values under different shade conditions. Frontiers in Plant Science 2022, 13, 928953.
  29. Sudu, B.; Rong, G.; Guga, S.; Li, K.; Zhi, F.; Guo, Y.; Zhang, J.; Bao, Y. Retrieving SPAD Values of Summer Maize Using UAV Hyperspectral Data Based on Multiple Machine Learning Algorithm. Remote Sensing 2022, 14, 5407.
Figure 1. Results of SPAD value prediction using color information for environments with different lighting. (a) HAL+LED_RGB, (b) HAL+LED_RGB+LCT+ILL, (c) HAL+LED_HSL, (d) HAL+LED_HSL+LCT+ILL, (e) HAL+LED_HSV, (f) HAL+LED_HSV+LCT+ILL, (g) HAL+LED_GB, (h) HAL+LED_GB+LCT+ILL, (i) LED_RGB+ILL, (j) LED_HSL+ILL, (k) LED_GB+ILL, (l) HAL_RGB+ILL, (m) HAL_HSL+ILL, (n) HAL_GB+ILL.
Figure 2. Results of SPAD value prediction using limited illuminance data (training data: < 300 lx; test data: > 300 lx): (a) LED_RES_RGB+ILL, (b) LED_RES_HSL+ILL, (c) LED_RES_GB+ILL.
Figure 3. The top 10 mean absolute SHAP values of the best model in the SPAD value prediction using Halogen and/or LED light source data: (a) HAL+LED_RGB+LCT+ILL, (b) LED_RGB+ILL, (c) LED_HSL+ILL.
Figure 4. Partial Dependence and Individual Conditional Expectation in the best model for SPAD value prediction using RGB values in Halogen and/or LED light source: (a) HAL+LED_RGB_G_mean, (b) HAL+LED_RGB_G_median, (c) HAL+LED_RGB_G_mu, (d) LED_RGB_B_median, (e) LED_RGB_B_mean, (f) LED_RGB_B_mu, (g) LED_HSL_H_median, (h) LED_HSL_H_mean, (i) LED_HSL_S_mean.
Table 1. Model performance of the Random Forests for SPAD value prediction on the test data sets (mean±SD).
Light Type | Input Variables | COR | NSE | RMSE
HAL+LED | RGB | 0.9255±0.01676 | 0.8538±0.03042 | 3.375±0.2837
HAL+LED | RGB + LCT + ILL | 0.9291±0.01452 | 0.8601±0.02635 | 3.305±0.2437
HAL+LED | HSL | 0.9212±0.01534 | 0.8434±0.02726 | 3.503±0.2786
HAL+LED | HSL + LCT + ILL | 0.9215±0.01519 | 0.8438±0.02693 | 3.498±0.2740
HAL+LED | HSV | 0.9174±0.01735 | 0.8359±0.03125 | 3.583±0.3117
HAL+LED | HSV + LCT + ILL | 0.9185±0.01708 | 0.8378±0.03077 | 3.562±0.3083
HAL+LED | GB | 0.9114±0.02031 | 0.8283±0.03695 | 3.656±0.3073
HAL+LED | GB + LCT + ILL | 0.9186±0.01622 | 0.8411±0.02949 | 3.522±0.2502
HAL | RGB + ILL | 0.8954±0.02725 | 0.7951±0.04803 | 3.964±0.4723
HAL | HSL + ILL | 0.8759±0.03248 | 0.7557±0.05531 | 4.326±0.4892
HAL | GB + ILL | 0.8946±0.02776 | 0.7937±0.04906 | 3.977±0.4744
LED | RGB + ILL | 0.9536±0.01416 | 0.9057±0.02690 | 2.675±0.3240
LED | HSL + ILL | 0.9722±0.008233 | 0.9436±0.01634 | 2.065±0.2258
LED | GB + ILL | 0.9317±0.01837 | 0.8644±0.03491 | 3.209±0.3273
LED_RES | RGB + ILL | 0.9143±0.002475 | 0.8272±0.004129 | 3.715±0.04429
LED_RES | HSL + ILL | 0.9452±0.002044 | 0.8871±0.002044 | 3.002±0.05471
LED_RES | GB + ILL | 0.8899±0.002079 | 0.7902±0.002079 | 4.093±0.03818
Copyright: This open access article is published under a Creative Commons CC BY 4.0 license, which permits the free download, distribution, and reuse, provided that the author and preprint are cited in any reuse.