Preprint
Review

Research Progress on the Application of Crop Yield Calculation based on Image Analysis Technology

A peer-reviewed article of this preprint also exists.

Submitted: 05 February 2024
Posted: 06 February 2024

Abstract
Yield calculation is an important step in modern precision agriculture and an effective means of improving breeding efficiency and adjusting planting and marketing plans. With the continuous progress of artificial intelligence and sensing technology, yield calculation schemes based on image processing offer high accuracy, low cost, and non-destructive measurement, and have therefore attracted a large number of researchers. This article reviews the research progress of crop yield calculation based on remote sensing images and visible-light images, describes the technical characteristics and applicable objects of the different schemes, and explains data acquisition, independent-variable screening, and algorithm selection and optimization in detail. Common issues are also discussed and summarized. Finally, solutions are proposed for the main open problems and future research directions are outlined, with the aim of achieving further progress and wider adoption of image-based yield calculation.
Keywords: 
Subject: Computer Science and Mathematics  -   Computer Science

1. Introduction

Crop growth is a complex process, and yield is influenced by many factors, such as crop genetics, soil conditions, irrigation, fertilization, light, and pests and diseases[1]. Yield calculation is a necessary step in adjusting breeding plans and improving traits. Accurate yield prediction can effectively guide breeding plans and improve breeding efficiency, which is of great significance for breeding analysis, production-strategy adjustment, and pre-harvest estimation.
At present, yield calculation mainly relies on traditional methods such as manual field surveys, meteorological models, and growth models. The manual field survey has a low technical threshold and strong universality and is the most frequently used method in practice; however, it is cumbersome and inefficient. Yield calculation based on meteorological and growth models requires a large amount of historical data[2] and involves numerous parameters, so it is only applicable to specific planting areas or varieties. In recent years, with the continuous development of sensors and artificial intelligence[3], yield calculation based on remote sensing or visible-light images[4,5] has developed rapidly. Remote sensing can capture multi-band reflectance information from the crop canopy, which reflects the internal growth status and phenotype of crops well and is particularly suitable for large-scale grain crops. Yield calculation based on visible-light images suits crops with relatively regular and distinct targets, such as wheat ears, apples, grapes, and citrus; it mainly extracts color, texture, morphology, and other features[6] and achieves object segmentation and counting with machine learning algorithms[7,8]. Alternatively, deep learning algorithms can perform object detection and counting automatically[9,10]; neural network models represented by CNNs in particular have achieved good performance[11]. The main methods of crop yield calculation, with their advantages and disadvantages, are shown in Table 1.
Existing reviews have surveyed crop yield calculation from the perspective of model algorithms[12], but not from the perspective of images. This article analyzes the research and application progress of image-based crop yield calculation and compares and summarizes its technical points, main problems, and development trends. To reflect the latest results, it focuses mainly on work published after 2020. More than 1200 relevant papers were retrieved from the Web of Science database using keywords such as images, crops, and yield calculation; after further screening, 142 papers closely related to this topic were selected for in-depth study. The first section analyzes the common research objects in the literature and the main characteristics of crops and yield calculation methods; the second section introduces the research progress along different technical routes; the third section discusses the main algorithms and common problems in current research; finally, the article is summarized and future development trends are discussed.

2. Yield Calculation Indicators for Different Crops

Different types of crops have different external traits, so the parameter indicators and technical solutions used for yield calculation also differ. Table 2 lists the varieties most frequently studied in the literature, with a focus on their yield calculation indicators. Grain crops are mainly multi-seed crops such as corn, wheat, and rice; their yield is generally calculated by estimating the number of grains per unit area, seed density, and unit grain weight. Economic crops fall into more categories, mainly fruit, tuber, stem, and leaf crops, and their yield calculation indicators are determined chiefly by their physiological structure.
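For multi-seed grain crops, these indicators combine multiplicatively into the standard agronomic estimate: yield (t/ha) = ears per m^2 x grains per ear x thousand-grain weight. A minimal sketch with hypothetical plot numbers (not taken from any cited study):

```python
def grain_yield_t_ha(ears_per_m2, grains_per_ear, thousand_grain_weight_g):
    """Back-of-the-envelope cereal yield estimate (wheat, rice, corn).

    grams per m^2 = ears/m^2 * grains/ear * TGW/1000,
    and 1 t/ha corresponds to 100 g/m^2, hence the final division.
    """
    grams_per_m2 = ears_per_m2 * grains_per_ear * thousand_grain_weight_g / 1000.0
    return grams_per_m2 / 100.0

# Hypothetical wheat plot: 500 ears/m^2, 30 grains/ear, TGW 40 g -> 6.0 t/ha
print(grain_yield_t_ha(500, 30, 40))
```

Image-based pipelines typically estimate the first two factors by counting, while grain weight comes from variety priors or field samples.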

3. The Application of Image Technology in Crop Yield Calculation

With the development of artificial intelligence, image analysis technology has been widely applied in fields such as crop disease detection, soil analysis, crop management, agricultural product quality inspection, and farm safety monitoring[13]. By imaging category, it can be divided into visible light, hyperspectral, infrared, near-infrared, thermal, fluorescence, 3D, laser, and CT imaging, among others. Visible-light, hyperspectral, infrared, and thermal imaging in particular reflect the internal growth status and phenotypic parameters of crops well[14,15] and have been widely used in crop growth monitoring and yield prediction. At the same time, crop growth is extremely complex: yield is related to variety, planting environment, cultivation method, and other factors, the external structure differs between crop types, and so do the yield calculation methods. The appropriate single or composite imaging technology must therefore be chosen for the specific object and scene. For convenience of description, this article distinguishes two technical routes, based on remote sensing images and on visible-light images. Remote sensing images generally cover multiple imaging categories and often include multi-channel data sources, while visible-light images are mainly captured with digital cameras, mobile phones, and similar devices.
Figure 1. Crop yield calculation process based on image technology.

3.1. Yield Calculation by Remote Sensing Image

Remote sensing images record the electromagnetic waves emitted or reflected by objects and can effectively express their internal or phenotypic information; after processing, much key target information can be extracted[16]. Remote sensing plays a crucial role in precision agriculture: by continuously and extensively collecting imagery over planting areas, crop growth status can be monitored and understood well[17], which is of great significance for growth regulation and refined management. Crops exhibit different spectral characteristics in the visible, near-infrared, shortwave-infrared, and other bands at different growth stages. Exploiting these characteristics, remote sensing uses multiple sensors to detect the electromagnetic waves reflected or emitted by vegetation at different wavelengths and combines them with other data channels. Agricultural remote sensing data mainly comprise vegetation indices, crop physical parameters, and environmental data; they have clear big-data characteristics and serve as the main data source for monitoring crop growth[18]. Because crop growth is closely related to light, temperature, soil, water, and other factors, feature information closely tied to growth can be extracted from remote sensing data to predict yield[19]. Remote sensing offers wide coverage, short revisit cycles, low cost, and long-term continuity, and plays an important role in crop growth monitoring and yield calculation. Remote sensing data from spaceborne, airborne, and unmanned aerial vehicle (UAV) platforms have all been used successfully for crop yield prediction[20].
The main advantages of remote-sensing-based yield prediction are reliability, time savings, and cost-effectiveness, and it can be applied across growing regions, crop categories, and cultivation methods.
At present, remote sensing platforms are mainly divided into low-altitude platforms based on drones and high-altitude platforms based on satellites[21]. Compared with spaceborne and airborne platforms, UAV remote sensing equipped with visible-light, thermal infrared, and spectral cameras has many advantages[22,23], such as high spatiotemporal resolution, flexible acquisition windows, and less atmospheric attenuation, making it more suitable for crop monitoring and yield prediction at the farm or field scale. Satellite remote sensing collects data from various artificial satellites[24] and offers good continuity and high stability[25], which makes it especially suitable for the long-term monitoring of crops grown over large areas, such as wheat, rice, and corn. Although satellite data are highly valuable for their large-scale coverage, limited spatial resolution remains a prominent issue: many prediction models are accurate only at large scales and cannot describe yield variation at smaller scales (such as individual fields). In addition, because satellites can be obstructed by clouds and affected by adverse weather, information covering the entire crop growth cycle cannot always be obtained in time.
Spectral information obtained by remote sensing is generally divided into multispectral (MSI) and hyperspectral (HSI) imaging[26]. Multispectral sensors mounted on drones cover suitable bands in the visible and near-infrared (VNIR) range and are highly effective at deriving vegetation indices (VIs) sensitive to crop health[27], such as the Normalized Difference Vegetation Index (NDVI)[28], the Green Normalized Difference Vegetation Index (GNDVI), and the Triangular Vegetation Index (TVI). Drone-based multispectral data combined with machine learning (ML) models[29] have been used effectively to monitor biomass[30] and predict the yield of crops such as corn, wheat, rice, soybean, and cotton. Compared with natural-light and multispectral imaging, hyperspectral imaging[31,32] provides more than 100 narrowly spaced bands, expresses canopy reflectance more accurately, and contains rich information on crop structure, making it more advantageous for analyzing crop traits. At the same time, it suffers from data redundancy, spectral overlap, and interference[33], and the increased data volume complicates model construction, so suitable band selection algorithms are needed for dimensionality reduction.
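As an illustration of the dimensionality reduction this calls for, the sketch below applies PCA to a hypothetical hyperspectral reflectance matrix. PCA is only one option (band-selection methods that keep physically interpretable wavelengths are common alternatives), and the data here are random placeholders:

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
# Hypothetical plot-level hyperspectral data: 100 samples x 150 narrow bands
reflectance = rng.random((100, 150))

# Compress the 150 correlated bands into 10 orthogonal components
pca = PCA(n_components=10).fit(reflectance)
scores = pca.transform(reflectance)  # 100 x 10 feature matrix for modelling
print(scores.shape)
```

The reduced score matrix then replaces the raw bands as model input, easing the redundancy and overfitting issues mentioned above.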
The key to monitoring crop biomass or yield from remote sensing images is identifying the spectral bands most sensitive to canopy reflectance; for example, NDVI is calculated from the red and near-infrared bands, while EVI combines the red, near-infrared, and blue bands. Vegetation indices, biophysical parameters, growth-environment parameters, and other indicators are extracted from the remote sensing data, and correlations with the crop response variables are established through machine learning or deep learning algorithms[34]. Vegetation indices (VIs) formed from the spectral information of various bands correlate strongly with yield and reliably provide spatiotemporal information on vegetation coverage, making them the most widely used spectral features at present. Table 3 lists the commonly extracted remote sensing features[35,36].
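The NDVI and EVI formulas mentioned above map directly to array arithmetic. A minimal NumPy sketch, using the standard NDVI definition and the widely used EVI coefficients (G = 2.5, C1 = 6, C2 = 7.5, L = 1); the sample reflectances are hypothetical:

```python
import numpy as np

def ndvi(nir, red):
    # Normalized Difference Vegetation Index: (NIR - R) / (NIR + R)
    return (nir - red) / (nir + red)

def evi(nir, red, blue, G=2.5, C1=6.0, C2=7.5, L=1.0):
    # Enhanced Vegetation Index with the commonly used coefficient set
    return G * (nir - red) / (nir + C1 * red - C2 * blue + L)

# Hypothetical per-pixel reflectances (healthy canopy vs. sparse cover)
nir = np.array([0.60, 0.45])
red = np.array([0.08, 0.20])
blue = np.array([0.04, 0.08])
print(ndvi(nir, red), evi(nir, red, blue))
```

In practice the same arithmetic is applied band-wise to whole orthomosaic rasters rather than to scalar pixels.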
Table 3. Common remote sensing indicator Information.
Title | Extraction method or description | Remarks

Vegetation indices:
Normalized Difference Vegetation Index (NDVI) | (NIR - R) / (NIR + R) | Reflects plant coverage and health status
Red-Edge Chlorophyll Index (ReCl) | (NIR / RED) - 1 | Indicates the photosynthetic activity of the canopy
Enhanced Vegetation Index (EVI2) | 2.5 * (NIR - R) / (NIR + 2.4 * R + 1) * (1 - ATB) | Accurately reflects vegetation growth
Ratio Vegetation Index (RVI) | NIR / R | Sensitive indicator for green plants; can be used to estimate biomass
Difference Vegetation Index (DVI) | NIR - R | Sensitive to soil background; useful for monitoring the vegetation ecological environment
Perpendicular Vegetation Index (PVI) | sqrt((S_R - V_R)^2 + (S_NIR - V_NIR)^2) | S denotes soil reflectance, V vegetation reflectance
Transformed Vegetation Index (TVI) | sqrt(NDVI + 0.5) | Transformation of chlorophyll absorption
Green Normalized Difference Vegetation Index (GNDVI) | (NIR - G) / (NIR + G) | Strong correlation with nitrogen content
Normalized Difference Red Edge Index (NDRE) | (NIR - RE) / (NIR + RE) | RE denotes red-edge band reflectance
Red Green Blue Vegetation Index (RGBVI) | (G - R) / (G + R) | Measures vegetation and surface red characteristics
Green Leaf Index (GLI) | (2G - B - R) / (2G + B + R) | Measures the degree of surface vegetation coverage
Excess Green (ExG) | 2G - R - B | Small-scale plant detection
Excess Green minus Excess Red (ExGR) | 2G - 2.4R | Small-scale plant detection
Excess Red (ExR) | 1.4R - G | Soil background extraction
Visible Atmospherically Resistant Index (VARI) | (G - R) / (G + R - B) | Reduces the impact of illumination differences and atmospheric effects
Leaf Area Vegetation Index (LAI) | leaf area (m^2) / ground area (m^2) | Ratio of leaf area to the covered soil surface
Atmospherically Resistant Vegetation Index (ARVI) | (NIR - 2R + B) / (NIR + 2R - B) | Used in areas with high atmospheric aerosol content
Modified Soil-Adjusted Vegetation Index (MSAVI) | (2*NIR + 1 - sqrt((2*NIR + 1)^2 - 8*(NIR - R))) / 2 | Reduces the impact of soil on crop monitoring results
Soil-Adjusted Vegetation Index (SAVI) | (NIR - R) * (1 + L) / (NIR + R + L) | L is a parameter that varies with vegetation density
Optimized Soil-Adjusted Vegetation Index (OSAVI) | (NIR - R) / (NIR + R + 0.16) | Uses reflectance from the NIR and red bands
Normalized Difference Water Index (NDWI) | (G - NIR) / (G + NIR) | Used to study vegetation or soil moisture
Vegetation Condition Index (VCI) | Ratio of the current NDVI to the historical maximum and minimum NDVI for the same period | Reflects vegetation growth status within the same phenological period

Biophysical parameters:
Leaf Area Index (LAI) | Total leaf area / land area | Total plant leaf area per unit land area; closely related to crop transpiration, soil water balance, and canopy photosynthesis
Fraction of absorbed Photosynthetically Active Radiation (FPAR) | Proportion of photosynthetically active radiation (PAR) that is absorbed | Important biophysical parameter commonly used to estimate vegetation biomass

Growth-environment parameters:
Temperature Condition Index (TCI) | Ratio of the current land surface temperature to the historical maximum and minimum for the same period | Reflects surface temperature conditions; widely used in drought inversion and monitoring
Vegetation Temperature Condition Index (VTCI) | Ratio of land surface temperature (LST) differences among all pixels whose NDVI equals a specific value within the study area | Quantitatively characterizes crop water stress
Temperature Vegetation Dryness Index (TVDI) | Inversion of surface soil moisture in vegetated areas | Used to analyze spatial variation in drought severity
Perpendicular Drought Index (PDI) | Distance along the normal from the origin to the soil line in the NIR-red reflectance scatter space | Commonly used to map the spatial distribution of soil moisture

3.1.1. Yield Calculation by Low Altitude Remote Sensing Image

Different crops have different spectral characteristics, so the spectra they absorb, radiate, and reflect also differ. Low-altitude remote sensing exploits these spectral characteristics: drones can carry multi-channel image sensors[37], collect images in different bands, and analyze different crop feature parameters. With the continuous advancement of flight control technology, UAVs equipped with multiple sensors fly with high freedom and flexible control[38,39]. Compared with satellite remote sensing, drone remote sensing offers a small, targeted observation range, high image resolution, and the ability to capture video. Using low-altitude drone flights to obtain high-resolution imagery has become an ideal choice for agricultural applications.
Figure 2. Low altitude remote sensing imaging device and imaging effect.
  • Yield Calculation of Food Crops
Food crops are an indispensable food source in daily life; corn, rice, and wheat account for more than half of the world's grain. Their wide planting range and high yield make them the research objects that attract the most attention. In growth monitoring, the analysis methods for food crops are relatively similar: remote sensing images are collected to obtain optical, structural, and thermal information about the crop, and indicator prediction is achieved by fitting biomass or yield models with machine learning or deep learning algorithms[40].
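The common pattern, extracting vegetation indices per plot and then fitting a regression model against measured yield, can be sketched as follows. The data are simulated (three generic indices and a linear ground truth plus noise), and random forest stands in for whichever learner a given study uses:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import r2_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)
X = rng.random((200, 3))  # simulated plot-level indices, e.g. NDVI, GNDVI, EVI2
y = 4.0 + 6.0 * X[:, 0] + 2.0 * X[:, 1] + rng.normal(0, 0.3, 200)  # yield, t/ha

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X_tr, y_tr)
r2 = r2_score(y_te, model.predict(X_te))
print(f"held-out R2 = {r2:.2f}")
```

The held-out R2 corresponds to the accuracy figures quoted in the studies below, though real datasets are far noisier than this simulation.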
Corn is widely cultivated in countries such as the United States, China, Brazil, Argentina, and Mexico, and there is a large body of related work. Wei Yang[41] used a drone platform to collect hyperspectral images of corn at different growth stages, extracted spectral and color image features, and used a CNN model to achieve a prediction accuracy of 75.5% for corn yield, with a Kappa coefficient of 0.69, better than single-channel feature extraction and traditional neural network algorithms. Monica F. Danilevicz[42] proposed a multimodal corn yield prediction model: drones acquired multispectral images from which eight vegetation indices were extracted and, combined with field management and variety genotype information, a multimodal model based on tab-DNN and sp-DNN was established; the results showed a relative mean square error of 7.6% and an R2 of 0.73, better than modeling with any single data type. Chandan Kumar[43] obtained corn vegetation indices (VIs) at different stages using drones and applied multiple machine learning algorithms such as LR, KNN, RF, SVR, and DNN to predict yield; the effects of the various variables on the prediction were evaluated and screened, proving that the combination of VIs and ML models can be used for corn yield prediction. Danyang Yu[44] obtained RGB and multispectral (MS) images of corn with drones, constructed raster crop surface models (CSMs), and extracted vegetation indices. Several corn aboveground biomass (AGB) prediction models based on DCNN and traditional machine learning algorithms were constructed, and the effects of different remote sensing datasets and models were compared; the results showed that data fusion and deep learning algorithms were more advantageous. Ana Paula Marques Ramos[45] obtained hyperspectral images of corn using drones, extracted 33 vegetation indices, established a prediction model with the random forest (RF) algorithm, and ranked the vegetation indices by their contribution to yield; the optimal model achieved a correlation coefficient of 0.78 for corn yield prediction.
Rice cultivation is concentrated mainly in East, Southeast, and South Asia, and its growth period generally includes the milk ripening, wax ripening, full ripening, and withering stages. Md. Suruj Mia[46] studied a multimodal rice yield prediction model that combined drone-collected multispectral data with weather data and built predictors from multiple CNN networks; the best model achieved an RMSPE of 14%, indicating that multimodal modeling has better prediction performance than single-source modeling. Emily S. Bellis[47] used drones to obtain hyperspectral and thermal images of rice and extracted vegetation indices; two deep models, 3D-CNN and 2D-CNN, were used to build rice yield prediction models, yielding RMSEs of 8.8% and 7.4%-8.2%, respectively, indicating the superiority of convolutional autoencoders in yield prediction.
Most countries rely on wheat as a main food source, making it the world's largest crop in planting area, yield, and distribution. There have been numerous reports on wheat breeding, cultivation management, storage, and transportation, and yield prediction is particularly important. Wheat maturity is generally divided into the milk, wax, and complete maturity stages, and the traits expressed differ between stages. Remote-sensing-based wheat yield calculation relies mainly on spectral data. In multispectral research, Chaofa Bian[48] used drones to obtain multi-stage multispectral data of wheat and extracted multiple vegetation indices; machine learning models such as Gaussian process regression (GPR), support vector regression (SVR), and random forest regression (RFR) were used to build yield prediction models based on the indices, with the GPR model reaching a maximum R2 of 0.88. Yixiu Han[49] captured multispectral images of wheat by drone and extracted feature indices; using a GOA-XGB model based on the grasshopper optimization algorithm, the best prediction accuracy for wheat aboveground biomass (AGB) was R2 = 0.855. Xinbin Zhou[50] studied the correlation between multispectral reflectance and wheat yield and protein content, evaluated machine learning models such as random forest (RF), artificial neural network (ANN), and support vector regression (SVR), and compared them with linear models based on vegetation indices; the results demonstrated the modeling advantages of machine learning. Prakriti Sharma[51] used a drone with multiple sensors to collect multispectral images of oats at different growth stages in three experimental fields and extracted multiple vegetation indices (VIs); the performance of four machine learning models, partial least squares (PLS), support vector machine (SVM), artificial neural network (ANN), and random forest (RF), was evaluated. Across the collected images, the Pearson coefficient r was between 0.2 and 0.65, and the reasons for the unsatisfactory prediction performance were analyzed. Similar studies, including Falv Wang[52] and Malini Roy Choudhury[53], combined spectral indices collected during the growth period with machine learning to calculate yield. In hyperspectral imaging, Yuanyuan Fu[54] used drones to obtain hyperspectral images of wheat, extracted canopy texture features with Multiscale_Gabor_GLCM, combined them with vegetation indices and other spectral features, and, with filtered variables and the LSSVM algorithm, obtained the highest wheat biomass accuracy, R2 = 0.87. Ryoya Tanabe[55] applied CNN networks to wheat yield prediction from UAV hyperspectral data and achieved better performance than traditional machine learning algorithms. Zongpeng Li[56] obtained hyperspectral images of winter wheat at the flowering and filling stages, extracted a large number of spectral indices, and used three feature-filtering algorithms for dimensionality reduction; the best prediction was obtained with an ensemble model integrating SVM, GP, LRR, and RF, with an R2 of 0.78, superior to single machine learning algorithms and to unoptimized feature sets. In data-fusion yield calculation, the main approach is to build wheat yield models from multi-sensor, multi-channel data, which has outperformed single-dimensional modeling; for example, Shuaipeng Fei[36], Rui Li[57], Alireza Sharif[58], and Falv Wang[59] studied wheat yield models fusing RGB images, multispectral and thermal infrared images, and meteorological data. The results obtained with multiple machine learning algorithms were superior to single-channel modeling in both accuracy and robustness.
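Several of the wheat studies above report that ensembles of heterogeneous learners beat any single model. A sketch of one common construction, stacking base regressors under a linear meta-learner, on simulated features; scikit-learn's StackingRegressor is just one way to implement this:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor, StackingRegressor
from sklearn.linear_model import Ridge
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVR

rng = np.random.default_rng(1)
X = rng.random((150, 5))  # simulated vegetation indices / texture features
y = X @ np.array([3.0, 1.0, 0.0, 2.0, 0.5]) + rng.normal(0, 0.2, 150)

# Out-of-fold predictions from the base models feed a ridge meta-learner
stack = StackingRegressor(
    estimators=[("svr", SVR()), ("rf", RandomForestRegressor(random_state=0))],
    final_estimator=Ridge(),
)
score = cross_val_score(stack, X, y, cv=5, scoring="r2").mean()
print(f"stacked R2 = {score:.2f}")
```

The meta-learner weighs the base models' strengths, which is why such stacks tend to edge out their individual members.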
  • Yield Calculation of Economic Crops
Economic crops play an important role in the food industry and as industrial raw materials; examples include soybeans, potatoes, cotton, and grapes. Soybeans occupy an important position in global crop trade, with Brazil, the United States, and Argentina contributing over 90% of global soybean yield. The combination of spectral indices and machine learning algorithms is a common research topic here as well. Maitiniyazi Maimaitijiang[60] used a drone with multiple sensors to collect RGB, multispectral, and thermal images of soybeans and extracted multimodal features such as canopy spectra, growth structure, thermal information, and texture; algorithms including PLSR, RFR, SVR, and DNN were used to predict yield, verifying that multimodal information is more accurate than single-channel data, with the highest R2 of 0.72 (RMSE 15.9%) achieved by DNN-F2. Jing Zhou[61] extracted 7 feature indicators from drone hyperspectral images, combined them with maturity and drought-resistance classification factors, and built a hybrid CNN model whose predictions reached 78% of the actual yield. Paulo Eduardo Teodoro[35] collected multi-temporal spectral data of soybeans, extracted multiple spectral indices, and used multi-layer deep regression networks to predict the maturity stage (DM), plant height (PH), and seed yield (GY) of soybeans; the modeling effect was superior to traditional machine learning algorithms, providing a good solution for soybean yield prediction. Mohsen Yoosefzadeh-Najafabadi[62] extracted soybean hyperspectral vegetation indices (HVIs) for predicting yield and fresh biomass (FBIO), established a prediction model using DNN-SPEA2, studied the effects of band and index selection on the results, and compared it with traditional machine learning algorithms, with good results. The same group[63] obtained soybean hyperspectral reflectance data, used recursive feature elimination (RFE) to reduce dimensionality and screen variables, evaluated the MLP, SVM, and RF algorithms, and found the optimal combination of index variables and models. Yujie Shi[64] studied the feasibility of estimating the AGB and LAI of mung beans and red beans from drone multispectral data[65], analyzed the sensitive bands and spectral parameters affecting AGB and LAI, evaluated algorithms such as LR, SMLR, SVM, PLSR, and BPNN, and achieved the best fit with the SVM model, with predicted R2 of 0.811 and 0.751 for red bean and mung bean AGB, respectively. Yishan Ji[66] obtained RGB images of fava beans by drone and extracted vegetation index, structural, and texture information to predict aboveground biomass (AGB) and yield (BY); the impact of growth stage, variable combination, and learning model on performance was evaluated, and an ensemble learning model finally predicted fava bean yield with an R2 of 0.854.
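Recursive feature elimination (RFE), used above for variable screening, repeatedly drops the least important feature and refits. A scikit-learn sketch on simulated data, where only the first two of 20 candidate indices actually drive the response:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.feature_selection import RFE

rng = np.random.default_rng(7)
X = rng.random((120, 20))  # 20 simulated candidate spectral indices
y = 5.0 * X[:, 0] + 3.0 * X[:, 1] + rng.normal(0, 0.1, 120)

# Keep the 5 strongest features according to random forest importances
selector = RFE(RandomForestRegressor(random_state=0), n_features_to_select=5)
selector.fit(X, y)
print(np.where(selector.support_)[0])  # indices of the retained features
```

The retained subset is then handed to the downstream regressor, mirroring the screen-then-model workflow of the cited studies.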
Yield prediction based on drone remote sensing is also common for crops such as potatoes, cotton, sugarcane, tea, and alfalfa. Different crops have different spectral reflectance characteristics and sensitive feature indices, which must be screened step by step according to their actual contribution in order to establish an accurate and robust prediction model. Yang Liu[67] studied potato aboveground biomass (AGB) prediction from unmanned aerial vehicle (UAV) multispectral images; variables such as COS, FDS, VIs, and CH were extracted from the spectral images, the correlations between channel variables, growth stages, regression models, and AGB were analyzed, and the independent variables and combinations with the highest correlation were selected. Chen Sun[68] used drone hyperspectral images of potatoes to predict tuber yield and set rate, with ridge regression predicting tuber yield at R2 = 0.63 and partial least squares predicting set rate at R2 = 0.69. Weicheng Xu[69] studied cotton yield calculation from time-series UAV remote sensing data: a U-Net network performed semantic segmentation, multiple features were extracted, and a nonlinear prediction model was built with a BP neural network; through variable screening and result evaluation, the optimal model reached an average R2 of 0.854. Chiranjibi Poudyal[70] and Romário Porto de Oliveira[71] calculated sugarcane yield with hyperspectral and multispectral methods, respectively. Zongtai He[72] collected hyperspectral images of the spring tea canopy by drone to predict fresh yield, extracted common chlorophyll and leaf-area spectral indices, studied the differences between single and multiple indices, and evaluated the prediction accuracy of the LMSV, PLMSVs, and PLMCVs models, with good results demonstrating the potential of hyperspectral remote sensing for estimating spring tea fresh yield. Luwei Feng[73] predicted alfalfa yield from drone hyperspectral images: a large number of spectral indices were extracted and reduced in dimensionality, three machine learning algorithms, random forest (RF), support vector regression (SVR), and KNN, were trained, and the best performance was finally achieved by an integrated ensemble, with an R2 of 0.854. Matthias Wengert[74] collected multispectral images of grassland across seasons by drone, analyzed characteristic bands and vegetation indices, and evaluated four machine learning algorithms, finding that the CBR-based model had the best prediction accuracy and robustness. Joanna Pranga[75] fused drone structural and spectral data to predict ryegrass yield, extracting canopy height and vegetation index information; the PLSR, RF, and SVM algorithms were evaluated for predicting dry matter yield (DMY), multi-channel fusion proved more accurate, and the RF algorithm performed best, with a maximum error of no more than 308 kg ha-1. Kai-Yun Li[76] collected multispectral images of red clover by drone and extracted six spectral indices to predict dry matter yield; three machine learning algorithms were evaluated, and the artificial neural network model performed best, with R2 of 0.90 and NRMSE of 0.12.
For economic crops, yield is usually calculated by counting fruits, mainly through visible light image segmentation or detection. However, some scholars have also estimated overall yield through remote sensing, for example in tomatoes, grapes, apples, and almonds. Kenichi Tatsumi[77] used high-resolution RGB and multispectral images of tomatoes collected by drones to measure biomass and yield: a total of 756 first-order and second-order features were extracted from the images, multiple variable screening algorithms identified the independent variables contributing most to the SM, FW, and FN of tomato, the impact of three machine learning algorithms on model performance was evaluated, and the best biomass calculation models were established through repeated experiments. Rocío Ballesteros[78] used drones to obtain hyperspectral images of vineyards and extracted vegetation indices (VIs) and vegetation coverage, which were fitted to yield with artificial neural networks; the impact of different variables on prediction accuracy was evaluated, providing a good reference for grape yield prediction based on remote sensing. Riqiang Chen[79] studied apple tree yield prediction based on drone multispectral images and sensors, evaluated the contributions of spectral and morphological features to yield, and built an ensemble model combining the SVR and KNN algorithms; after feature prioritization and model optimization, the optimal model reached an R2 of 0.813 on the validation set and 0.758 on the test set, providing a good case for apple yield prediction from remote sensing images. Minmeng Tang[80] collected multispectral aerial images of almonds and built an improved CNN for almond yield prediction, achieving good accuracy; the results were significantly better than those of machine learning algorithms based on vegetation indices, demonstrating the advantage of deep learning in automatic feature extraction.
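As an illustration of the index-extraction and screening step these studies share, the sketch below computes two common spectral indices from per-plot band means and ranks them by absolute correlation with observed yield. The reflectance and yield values are synthetic, not data from any cited study:

```python
import numpy as np

def ndvi(nir, red):
    """Normalized Difference Vegetation Index from NIR and red reflectance."""
    return (nir - red) / (nir + red + 1e-9)

def msr(nir, red):
    """Modified Simple Ratio index."""
    sr = nir / (red + 1e-9)
    return (sr - 1.0) / (np.sqrt(sr) + 1.0)

def screen_by_correlation(features, yield_obs):
    """Rank candidate indices by absolute Pearson correlation with observed yield."""
    scores = {}
    for name, values in features.items():
        r = np.corrcoef(values, yield_obs)[0, 1]
        scores[name] = abs(r)
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

# Synthetic per-plot band means (illustrative values only)
rng = np.random.default_rng(0)
red = rng.uniform(0.05, 0.15, 50)
nir = rng.uniform(0.4, 0.6, 50)
y = 8.0 * ndvi(nir, red) + rng.normal(0, 0.1, 50)  # yield driven mostly by NDVI

ranking = screen_by_correlation({"NDVI": ndvi(nir, red), "MSR": msr(nir, red)}, y)
print(ranking[0][0])  # the index most correlated with yield
```

In practice, the screened indices then become the independent variables of a regression or machine learning model, as in the studies above.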
Table 4. Research progress on crop yield calculation based on low-altitude remote sensing.
Crop varieties Author Year Task Network framework and algorithms Result
corn Wei Yang[41] 2021 Predict the yield of corn CNN AP: 75.5%
Monica F. Danilevicz[42] 2021 Predict the yield of corn tab-DNN and sp-DNN R2: 0.73
Chandan Kumar[43] 2023 Predict the yield of corn LR, KNN, RF, SVR, DNN R2: 0.84
Danyang Yu[44] 2022 Estimate biomass of corn DCNN, MLR, RF, SVM R2: 0.94
Ana Paula Marques Ramos[45] 2020 Predict the yield of corn RF R2: 0.78
rice Md. Suruj Mia[46] 2023 Predict the yield of rice CNN RMSPE: 14%
Emily S. Bellis[47] 2022 Predict the yield of rice 3D-CNN, 2D-CNN RMSE: 8.8%
wheat Chaofa Bian[48] 2022 Predict the yield of wheat GPR R2: 0.88
Shuaipeng Fei[36] 2023 Predict the yield of wheat Ensemble learning algorithms of ML R2: 0.692
Zongpeng Li[56] 2022 Predict the yield of wheat Ensemble learning algorithms of ML R2: 0.78
Yixiu Han[49] 2022 Estimate biomass AGB of Wheat GOA-XGB R2: 0.855
Rui Li[57] 2022 Estimate yield of wheat RF R2: 0.86
Xinbin Zhou[50] 2021 Calculate the yield and protein content of wheat SVR, RF, and ANN R2: 0.62
Yuanyuan Fu[54] 2021 Estimate biomass of wheat LSSVM R2: 0.87
Falv Wang[52] 2022 Estimate biomass of wheat RF R2: 0.97
Malini Roy Choudhury[53] 2021 Calculate the yield of wheat ANN R2: 0.88
Falv Wang[59] 2023 Predict the yield of wheat MultimodalNet R2: 0.7411
Ryoya Tanabe[55] 2023 Predict the yield of wheat CNN RMSE: 0.94 t ha-1
oat Prakriti Sharma[51] 2022 Estimate biomass of oat PLS, SVM, ANN, RF r: 0.65
barley Alireza Sharif[58] 2020 Calculate the yield of barley GPR R2: 0.84
beans Maitiniyazi Maimaitijiang[60] 2020 Predict the yield of soybean DNN-F2 R2: 0.72
Jing Zhou[61] 2021 Predict the yield of soybean CNN R2: 0.78
Paulo Eduardo Teodoro[35] 2021 Predict the yield of soybean DL and ML r: 0.44
Mohsen Yoosefzadeh-Najafabadi[62] 2021 Predict yield and biomass DNN-SPEA2 R2: 0.77
Mohsen Yoosefzadeh-Najafabadi[63] 2021 Predict the yield of soybean seed RF AP: 93%
Yujie Shi[64] 2022 Predict AGB and LAI SVM R2: 0.811
Yishan Ji[65] 2022 Estimate plant height and yield of broad beans SVM R2: 0.7238
Yishan Ji[66] 2023 Predict biomass and yield of broad beans Ensemble learning algorithms of ML R2: 0.854
potato Yang Liu[67] 2022 Estimate biomass of potatoes SVM, RF, GPR R2: 0.76
Chen Sun[68] 2020 Predict the yield of potato tuber ridge regression R2: 0.63
cotton Weicheng Xu[69] 2021 Predict the yield of cotton BP neural network R2: 0.854
sugarcane Chiranjibi Poudyal[70] 2022 Predict component yield of sugarcane GBRT AP: 94%
Romário Porto de Oliveira[71] 2022 Predict characteristic parameters of sugarcane RF R2: 0.7
spring tea Zongtai He[72] 2023 Predict fresh yield of spring tea PLMSVs R2: 0.625
alfalfa Luwei Feng[73] 2020 Predict yield Ensemble learning algorithms of ML R2: 0.854
meadow Matthias Wengert[74] 2022 Predict the yield of meadow CBR R2: 0.87
ryegrass Joanna Pranga[75] 2021 Predict the yield of ryegrass PLSR, RF, SVM RMSE: 13.1%
red clover Kai-Yun Li[76] 2021 Estimate the yield of red clover ANN R2: 0.90
tomato Kenichi Tatsumi[77] 2021 Predict biomass and yield of tomato RF, RI, SVM RMSE: 8.8%
grape Rocío Ballesteros[78] 2020 Estimate the yield of the vineyard ANN RE: 21.8%
apple Riqiang Chen[79] 2022 Predict the yield of apple tree SVR, KNN R2: 0.813
almond Minmeng Tang[80] 2020 Estimate yield of almond Improved CNN R2: 0.96
With the continuous advancement of flight control technology, the cost of obtaining high-resolution remote sensing data keeps falling, and significant progress has been made in crop yield monitoring from drone platforms, in which machine learning (ML) algorithms have played an irreplaceable role. However, many problems remain: stable and continuous image data are hard to obtain, the screening of feature indices strongly affects prediction accuracy, and the choice of machine learning algorithm needs further consideration.

3.1.2. Yield Calculation by High Altitude Satellite Remote Sensing Image

Compared to drone platforms, satellite platforms have advantages in coverage and stability: they can continuously monitor crop growth in different spectral bands, extract multiple vegetation indices for yield prediction, and collect data across growth periods and over large areas more efficiently. However, their resolution is lower for small plots, and the data are more affected by weather and can be expensive. Remote sensing satellites provide free and continuous data collection tools for constructing crop growth models. Representative satellites worldwide include the LANDSAT series operated jointly by the National Aeronautics and Space Administration (NASA) and the United States Geological Survey (USGS), the SPOT series developed and operated by the French National Center for Space Studies (CNES), the NOAA series operated by the National Oceanic and Atmospheric Administration (NOAA) of the United States, the Sentinel series developed by the European Space Agency (ESA), and the ZY-3 and GF-2 developed and operated by the China National Space Administration. The full processing pipeline for satellite-based crop yield prediction includes data acquisition, preprocessing, image correction, feature extraction, classification and interpretation, accuracy evaluation, and post-processing and analysis. Each step has its own methods and techniques, and their order and implementation may vary depending on the calculation indicators and data characteristics.
Figure 3. Remote sensing satellite imaging process and effect diagram.
Maria Bebie[81] used Sentinel-2 imagery of wheat and extracted reflectance data from multiple growth stages as input parameters; evaluation models were established using random forest (RF), KNN, and BR, and the highest R2 reached 0.91 when images of all growth stages were used. Elisa Kamir[82] studied wheat yield prediction based on satellite images and climate time series, analyzed the effects of vegetation indices, machine learning algorithms, growth stages, and other factors on prediction accuracy, and obtained the best result with a support vector machine, with an R2 of 0.77, better than the other single machine learning algorithms and ensemble models. Yuanyuan Liu[83] predicted wheat yield from satellite remote sensing, climate, and yield data, compared multiple linear regression, machine learning, and deep learning methods, and evaluated the impact of different satellite variables, vegetation indices, and other factors on the prediction results. Nguyen-Thanh Son[84] studied field-scale rice yield prediction based on Sentinel-2 images, evaluated three algorithms (RF, SVM, and ANN), and predicted yield for four consecutive growing seasons with satisfactory results. Carlos Alberto Matias de Abreu Júnior[85] estimated coffee tree yield from satellite multispectral images and machine learning: the correlations between vegetation indices in different bands and yield were analyzed, the prediction accuracy of various algorithms was evaluated, and the neural network (NN) achieved the highest accuracy. Yang Liu[86] obtained remote sensing image data of rubber trees from Sentinel-2 over one full year.
By extracting six vegetation indices, namely GSAVI, MSR, NBR, NDVI, NR, and RVI, the impact of single and multiple indices on prediction accuracy was evaluated; the optimal yield prediction model, established through multiple linear regression, reached an R2 of 0.80, providing a reference for yield prediction based on high-altitude remote sensing images. Patrick Filippi[87] studied cotton yield prediction from temporal and spatial remote sensing datasets, including satellite images, terrain, soil, and weather data; a random forest model was used to evaluate the effects of resolution, time span, coverage area, and other factors on the results. Johann Desloires[88] studied a corn yield prediction model based on Sentinel-2, temperature, and other data, evaluated the effects of time, spectral information, and algorithm choice on yield, and obtained the best result by integrating multiple machine learning algorithms, with an average error of 15.2%. Fan Liu[89] proposed a hybrid neural network for predicting grain yield: based on MODIS remote sensing images combined with channels such as vegetation index and temperature, a convolutional neural network incorporating a CBAM attention mechanism enhanced feature extraction, and an LSTM then analyzed the time series; the final model reached an R2 of up to 0.989 in application.
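The multiple-linear-regression modeling step used in several of these studies can be sketched as follows. The vegetation-index matrix and coefficients here are synthetic stand-ins, not data from the cited work:

```python
import numpy as np

def fit_linear_yield_model(vi_matrix, yields):
    """Ordinary least squares: yield ≈ X @ coef, with an intercept column added."""
    X = np.column_stack([np.ones(len(vi_matrix)), vi_matrix])
    coef, *_ = np.linalg.lstsq(X, yields, rcond=None)
    return coef

def r_squared(vi_matrix, yields, coef):
    """Coefficient of determination of the fitted linear model."""
    X = np.column_stack([np.ones(len(vi_matrix)), vi_matrix])
    resid = yields - X @ coef
    return 1.0 - np.sum(resid ** 2) / np.sum((yields - yields.mean()) ** 2)

# Synthetic plot-level data: 3 hypothetical vegetation indices over 40 plots
rng = np.random.default_rng(1)
vis = rng.uniform(0.2, 0.9, size=(40, 3))
true_w = np.array([2.0, -0.5, 1.5])
y = 1.0 + vis @ true_w + rng.normal(0, 0.05, 40)

coef = fit_linear_yield_model(vis, y)
r2 = r_squared(vis, y, coef)
print(round(r2, 3))
```

Replacing the least-squares fit with RF, SVR, or an ensemble, as the studies above do, changes only the model-fitting call; the index-extraction and evaluation steps stay the same.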
Table 5. Research progress on crop yield calculation based on high-altitude satellite remote sensing.
Crop varieties Author Year Task Network framework and algorithms Result
wheat Maria Bebie[81] 2022 Predict the yield of wheat RF, KNN, BR R2: 0.91
Elisa Kamir[82] 2020 Predict the yield of wheat SVM R2: 0.77
Yuanyuan Liu[83] 2022 Predict the yield of wheat SVR R2: 0.87
rice Nguyen-Thanh Son[84] 2022 Predict the yield of rice SVM, RF, ANN MAPE: 3.5%
coffee tree Carlos Alberto Matias de Abreu Júnior[85] 2022 Predict the yield of coffee tree NN R2: 0.82
rubber Yang Liu[86] 2023 Predict the yield of rubber LR R2: 0.80
cotton Patrick Filippi[87] 2020 Predict the yield of cotton RF LCCC: 0.65
corn Johann Desloires[88] 2023 Predict the yield of corn Ensemble learning algorithms of ML R2: 0.42
grain Fan Liu[89] 2023 Predict grain yield LSTM R2: 0.989
Over the years, with continuous technological iteration, satellite remote sensing data have become easier to acquire, and calculating various vegetation indices (VIs) has likewise become more convenient. However, issues such as limited spatial resolution and cloud cover remain, and optical remote sensing satellites are heavily affected by weather. Microwave remote sensing receives longer electromagnetic waves from the surface, which can effectively penetrate cloud and mist, enabling all-weather surface monitoring; combining optical and microwave data therefore has strong synergistic potential.

3.2. Yield Calculation by Visible Light Image

Visible light images record the absorption and reflection of white light by crops, and high-resolution digital images contain rich color, structural, and morphological information[90] that, when fully extracted, can be used to analyze crop growth and predict yield. Extracting color features from digital images is the most effective and widely used method for monitoring crop growth: information such as crop coverage, leaf area index, biomass, plant nutrition, and pests and diseases is reflected in color, and commonly used color indices include VARI, ExR, ExG, GLI, ExGR, and NDI. Texture describes the grayscale arrangement of image pixels; compared with color features, texture better balances the whole and the details, so texture analysis plays a very important role in image analysis. It usually involves two steps: extracting detailed texture features such as contrast (CON), correlation (COR), and entropy (ENT), and then classifying the image based on the extracted features. Morphological features are often combined with other crop features to describe image content. However, because crop growth is complex, the information expressed by a single feature channel is often incomplete, and color, texture, and morphology must be studied together to monitor crop growth more accurately. With the continuous maturity of digital imaging technology and the widespread use of high-resolution cameras, research on evaluating crop growth from images is increasing. According to the processing method, there are two main types: traditional segmentation-based image processing and deep learning-based processing.
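Two of the color indices named above, ExG and VARI, can be sketched directly from the channels of an RGB image; the toy pixel values below are purely illustrative:

```python
import numpy as np

def normalized_rgb(img):
    """Chromatic coordinates r, g, b from an RGB image (H x W x 3, float)."""
    total = img.sum(axis=2, keepdims=True) + 1e-9
    return img / total

def excess_green(img):
    """ExG = 2g - r - b, widely used to highlight green vegetation."""
    n = normalized_rgb(img)
    return 2 * n[..., 1] - n[..., 0] - n[..., 2]

def vari(img):
    """Visible Atmospherically Resistant Index from the raw R, G, B channels."""
    r, g, b = img[..., 0], img[..., 1], img[..., 2]
    return (g - r) / (g + r - b + 1e-9)

# A 2x2 toy image: one green "plant" pixel, three brownish "soil" pixels
img = np.array([[[40, 180, 30], [120, 100, 80]],
                [[110, 95, 70], [115, 105, 85]]], dtype=float)
exg = excess_green(img)
print(bool(exg[0, 0] > exg[0, 1]))  # plant pixel scores higher than soil
```

Thresholding such an index map (for example, ExG followed by Otsu thresholding) is a common route to vegetation coverage maps.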

3.2.1. Yield Calculation by Traditional Image

Traditional image processing is mainly achieved through information extraction and segmentation. Image segmentation is the core of plant phenotype image processing; its main purpose is to extract the parts of interest and remove the background and other irrelevant noise from the image. During segmentation, the object of interest is defined by the internal similarity of pixels in features such as texture and color. The simplest algorithm is threshold segmentation, which groups pixels by grayscale intensity to separate the background from the target[91]. Feature extraction is one of the key technologies for computer vision-based target recognition and classification[92]; its main purpose is to supply inputs for machine learning, with features extracted from the image processed into feature vectors, including edges, pixel intensities, geometric shapes, and combinations of pixels in different color spaces. In traditional image processing, feature extraction is a challenging task that often requires manual screening and testing with multiple extraction algorithms until satisfactory feature information is obtained.
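The threshold segmentation described above can be illustrated with Otsu's classic method, which picks the grayscale threshold maximizing between-class variance; the bimodal image here is synthetic:

```python
import numpy as np

def otsu_threshold(gray):
    """Otsu's method: choose the threshold maximizing between-class variance."""
    hist, _ = np.histogram(gray, bins=256, range=(0, 256))
    p = hist / hist.sum()
    best_t, best_var = 0, -1.0
    for t in range(1, 256):
        w0, w1 = p[:t].sum(), p[t:].sum()   # class weights
        if w0 == 0 or w1 == 0:
            continue
        mu0 = (np.arange(t) * p[:t]).sum() / w0        # background mean
        mu1 = (np.arange(t, 256) * p[t:]).sum() / w1   # foreground mean
        var = w0 * w1 * (mu0 - mu1) ** 2
        if var > best_var:
            best_var, best_t = var, t
    return best_t

# Bimodal toy "image": dark background (~30) and bright foreground (~200)
rng = np.random.default_rng(2)
gray = np.concatenate([rng.normal(30, 5, 500), rng.normal(200, 10, 200)])
gray = np.clip(gray, 0, 255)
t = otsu_threshold(gray)
mask = gray > t  # foreground/background separation
print(t)
```

Real pipelines typically apply such a threshold to a vegetation-index map or a single channel, then clean the mask with morphological operations.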
The traditional image processing pipeline is relatively complex and requires manual feature selection, followed by building calculation models with classification or regression algorithms. Jafar Massah[93] used a self-developed robot platform to collect images and extracted features such as the grayscale histogram, histogram of oriented gradients, shape context, and local binary patterns to count kiwifruit; the image was segmented by RGB threshold segmentation, and a support vector machine predicted fruit quantity with an R2 of 0.96, superior to deep networks such as FCN-8S, ZFNet, AlexNet, GoogLeNet, and ResNet. Youming Zhang[94] extracted color and texture features from high-resolution drone RGB images to predict the LAI of kiwifruit orchards; two regression algorithms (SWR and RFR) were modeled and compared, with the highest estimated R2 for LAI reaching 0.972 and an RMSE of 0.035, providing a good reference for kiwifruit growth monitoring and yield calculation. Youming Zhang[95] developed a new vegetation index, MRBVI, for predicting corn chlorophyll and yield; experiments showed determination coefficients (R2) of 0.462 for chlorophyll content estimation and 0.570 for yield prediction, better than the seven other commonly used VI methods. Meina Zhang[96] captured corn RGB images with a consumer-grade drone, extracted color features using ExG, and established a corn yield prediction model with regression algorithms; the prediction models for three samples were significant, with MAPE ranging from 6.2% to 15.1% and R2 not exceeding 0.5, and the reasons for this were analyzed.
In addition, this research evaluated the impact of nitrogen application on crop growth through ExG characteristics. Amine Saddik[97] developed a low-complexity apple counting algorithm based on apple color and geometric shape: the RGB images underwent HSV and Hough transformations, achieving a maximum accuracy of 97.22% on the test dataset and counting apples without relying on large datasets or computing power. Wenjian Liu[98] estimated the plant height and aboveground biomass (AGB) of Toona sinensis seedlings from canopy RGB and depth images: a U-Net model first segmented the foreground and multiple feature indicators were extracted, SLR then predicted plant height, the performance of the MLR, RF, and MLP algorithms for predicting AGB was compared, and the key predictive factors were analyzed; the selected model predicted fresh weight with an R2 of 0.83. Javier Rodriguez-Sanchez[99] obtained aerial RGB images of cotton and trained an SVM supervised classifier; cotton pixel recognition accuracy reached 89%, and after further morphological processing, the fitted number of cotton bolls reached an R2 of 0.93. This machine learning method reduced the performance requirements for model deployment.
Table 6. Research progress on crop yield calculation based on traditional image processing.
Crop varieties Author Year Task Network framework and algorithms Result
kiwifruit Jafar Massah[93] 2021 Count of fruit quantity SVM R2: 0.96
Youming Zhang[94] 2022 Calculate the leaf area index of kiwifruit RFR R2: 0.972
corn Youming Zhang[95] 2020 Predict the yield of corn BP, SVM, RF, ELM R2: 0.570
Meina Zhang[96] 2020 Estimate yield of corn Regression Analysis MAPE: 15.1%
apple Amine Saddik[97] 2023 Count Apple fruit Raspberry AP: 97.22%
Toona sinensis Wenjian Liu[98] 2021 Predict aboveground biomass MLR R2: 0.83
cotton Javier Rodriguez-Sanchez[99] 2022 Estimate the yield of Cotton SVM R2: 0.93
The RGB and HSI color models are the most common in image processing. The RGB model mixes the primary colors red, green, and blue in different proportions to reproduce natural colors, using additive color mixing. In the HSI model, H represents hue, S saturation, and I intensity; saturation describes the purity of a color, and intensity is determined by the object's reflection coefficient. Compared with the RGB model, the HSI model is closer to human visual perception and is more convenient for image processing and computer vision algorithms. The two models are interconvertible, which provides additional flexibility for image processing. Combining color indices, texture features[91], and morphological information with machine learning algorithms enhances predictive performance and can well meet the requirements of biomass or yield calculation in general scenarios. Moreover, feature fusion offers better robustness and accuracy than prediction models built on single-dimensional features[100].
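The RGB-to-HSI conversion mentioned above can be sketched for a single pixel using the standard geometric formulas (hue from the arccos form, saturation from the minimum channel):

```python
import math

def rgb_to_hsi(r, g, b):
    """Convert one RGB pixel (0-1 floats) to HSI; hue is returned in degrees."""
    i = (r + g + b) / 3.0
    s = 0.0 if i == 0 else 1.0 - min(r, g, b) / i
    num = 0.5 * ((r - g) + (r - b))
    den = math.sqrt((r - g) ** 2 + (r - b) * (g - b))
    h = 0.0 if den == 0 else math.degrees(math.acos(max(-1.0, min(1.0, num / den))))
    if b > g:           # hue lies in the lower half of the color circle
        h = 360.0 - h
    return h, s, i

h, s, i = rgb_to_hsi(0.1, 0.8, 0.1)  # a saturated green pixel
print(round(h), round(s, 2), round(i, 2))
```

A green pixel maps to a hue near 120°, which is why segmenting vegetation by a hue range in HSI space is often simpler and more illumination-tolerant than thresholding raw RGB channels.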

3.2.2. Yield Calculation by Deep Learning Image

Deep learning algorithms mainly include convolutional neural networks, recurrent neural networks, long short-term memory networks, generative adversarial networks, autoencoders, and reinforcement learning. These algorithms have achieved significant results in fields such as computer vision, natural language processing, and generative modeling, and convolutional neural network models[101,102] in particular have been widely applied in crop phenotype parameter acquisition and biomass calculation[103]. Crop yield calculation based on deep learning is generally achieved through object detection or segmentation[104], counting the number of fruits in a single image. For densely planted or heavily occluded crops, only part of the total yield is visible in the image, so regression statistics are often needed. In addition, as digital imaging technology advances and attainable image resolution keeps increasing, there is growing research on fine-grained detection of individual plants and grains[105]. Object detection locates and identifies the seeds or fruits of interest in an image or video, while object segmentation accurately separates the target from the background. Object detection algorithms fall into region-based and single-stage approaches[103]. Region-based algorithms, mainly R-CNN, Fast R-CNN, and Faster R-CNN[106], first generate candidate regions, extract features from each region, and then perform classification and bounding box regression on the extracted features; by introducing candidate region generation and deep learning-based feature extraction modules, they greatly improve detection accuracy and efficiency.
Single-stage algorithms, mainly YOLO and SSD, perform detection directly on feature maps by dividing anchor boxes and bounding boxes; they are faster but slightly less accurate. Deep learning-based object segmentation includes semantic segmentation and instance segmentation. Semantic segmentation algorithms, mainly FCN, SegNet, and DeepLab, introduce convolutional neural networks and dilated convolution to achieve pixel-level classification of images, improving segmentation accuracy and efficiency. Instance segmentation builds on target segmentation by further segmenting and extracting each target instance for fine-grained recognition; representative methods include Mask R-CNN and PANet. With deep learning, feature extraction is completed automatically by the machine, greatly improving extraction accuracy and simplifying operation. Research on yield calculation based on deep learning mainly focuses on target segmentation, detection, and counting.
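A post-processing step shared by the detectors above, greedy non-maximum suppression (NMS) over intersection-over-union (IoU), can be sketched as follows; the boxes and scores are hypothetical detections, not output from any cited model:

```python
def iou(a, b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

def nms(boxes, scores, thresh=0.5):
    """Greedy NMS: keep the highest-scoring boxes, drop heavy overlaps."""
    order = sorted(range(len(boxes)), key=lambda i: scores[i], reverse=True)
    keep = []
    while order:
        best = order.pop(0)
        keep.append(best)
        order = [i for i in order if iou(boxes[best], boxes[i]) < thresh]
    return keep

# Two overlapping detections of one fruit plus one separate detection
boxes = [(0, 0, 10, 10), (1, 1, 11, 11), (50, 50, 60, 60)]
scores = [0.9, 0.8, 0.7]
print(nms(boxes, scores))  # -> [0, 2]
```

For yield counting, the length of the kept list is the per-image fruit or spike count, which the studies below then aggregate or regress to a total yield.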
  • Yield Calculation of Food Crops
For food crops, the main goal is to detect and count grain tassels, most commonly in research on corn, wheat, and rice; with high-resolution tassel images captured in specific scenarios, single grains can also be detected. Canek Mota-Delfin[107] used unmanned aerial vehicles to capture RGB images across corn growth stages and applied a series of models, YOLOv4, YOLOv4-tiny, YOLOv4-tiny-3l, and YOLOv5, to detect and count corn plants; after comparison, YOLOv5s achieved the best results, with an average accuracy of 73.1%. Yunling Liu[108] used the Faster R-CNN network to detect and count corn ears and compared ResNet and VGG as feature extraction backbones; the highest recognition accuracy on corn growth images captured by drones and mobile phones reached 94.99%. Honglei Jia[109] combined deep learning with image morphology to detect and count corn ears: a VGG16-based network first recognized the whole corn plant, and then color, texture, and morphological features in the localized region were extracted to recognize the ears; plant recognition accuracy reached 99.47%, and the average ear accuracy reached 97.02%.
In wheat yield calculation, Yixin Guo[110] developed a two-stage deep learning framework called SlypNet for wheat ear detection, combining Mask R-CNN and U-Net to automatically extract rich morphological features from images; it effectively overcame interference such as leaf overlap and occlusion in spike detection, and the validation accuracy of the spikelet detection model reached 99%. Petteri Nevavuori[111] used unmanned aerial vehicles to obtain RGB images and weather data across wheat growth stages, studied the feasibility of spatiotemporal sequence datasets for yield prediction, and compared three architectures, CNN-LSTM, ConvLSTM, and 3D-CNN, obtaining more accurate predictions than a single temporal phase. Ruicheng Qiu[112] studied automatic wheat spike detection and counting based on unsupervised image learning: color images of four wheat strains were collected, unsupervised spike labeling was achieved with the watershed algorithm, and a prediction model was built with a DCNN and transfer learning, reaching a maximum R2 of 0.84 and greatly improving spike recognition efficiency. Yao Zhaosheng[113] applied an improved YOLOX-m detector to wheat ears and evaluated prediction accuracy across growth stages, planting densities, and drone flight heights; the improved model reached 88.03% accuracy, 2.54% higher than the original. Hecang Zang[114] integrated an ECA attention module into the backbone of YOLOv5s for rapid wheat spike detection, enhancing the extraction of detailed features.
The accuracy of wheat spike counting reached 71.61%, 4.95% higher than the standard YOLOv5s, effectively addressing mutual occlusion and interference between spikes. Fengkui Zhao[115] studied an improved YOLOv4 network for detecting and counting wheat ears, mainly adding a spatial pyramid pooling (SPP) module to enhance feature fusion at different scales; average accuracy on two datasets reached 95.16% and 97.96%, and the best fit to ground truth reached an R2 of 0.973.
Zhe Lin[116] used drones to obtain RGB images of sorghum canopies and labeled them with masks; a CNN segmentation model was built with U-Net, and the predicted masks were used to detect and count sorghum heads, with a final accuracy of 95.5%. Yixin Guo[117] combined image segmentation and deep learning to automatically calculate the rice seed setting rate (RSSR) from RGB images captured by mobile phones; multiple convolutional neural networks were compared, and the best-performing YOLOv4 was selected, achieving detection accuracies of 97.69% for full grains, 93.20% for empty grains, and 99.43% for RSSR. Jingye Han[118] proposed an image-driven data assimilation framework for rice yield calculation, comprising an error calculation scheme, an image CNN model, and a data assimilation model, which can estimate multiple phenotype and yield parameters of rice, providing a good innovative approach.
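The seed setting rate in the rice study above is, at its core, a ratio of the two detected grain classes; a trivial sketch with hypothetical per-panicle counts (not figures from the cited work):

```python
def seed_setting_rate(full_count, empty_count):
    """Rice seed setting rate: filled grains as a share of all detected grains."""
    total = full_count + empty_count
    return full_count / total if total else 0.0

# Hypothetical detection counts for one panicle image
print(round(seed_setting_rate(182, 18), 2))  # -> 0.91
```

The accuracy of such a derived trait therefore depends directly on the per-class detection accuracies of the underlying detector.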
  • Yield Calculation of Economic Crops
Fruit crops show a significant difference between foreground and background, with obvious target features such as shape, boundary region, and color, so target segmentation or detection with deep learning is comparatively easy; reports are most common for crops such as kiwifruit, mango, grape, and apple. Zhongxian Zhou[119] used MobileNetV2, InceptionV3, and their quantized counterparts to build a fast kiwifruit detection model for orchards; weighing true detection rate against model performance, the quantized MobileNetV2 network, with a TDR of 89.7% and the lowest recognition time and size, was selected to develop a lightweight mobile application. Juntao Xiong[120] studied mango detection based on the YOLOv2 model, achieving an accuracy of 96.1% under different fruit quantities and lighting conditions; a fruit tree calculation model was then fitted to the actual mango quantity with an error rate of 1.1%, achieving a relatively good expected effect.
Grapes are one of the popular fruits and important raw materials for wine. Predicting grape yield is of great significance for adjusting production and marketing plans. Image-based grape yield prediction mainly focuses on grape string detection and single-grain counting. Thiago T. Santos[121] used convolutional neural networks of Mask R-CNN, YOLOv2, and YOLOv3 to achieve grape instance segmentation prediction in grape string detection. The highest F1 score reached 0.91, which could accurately evaluate the size and shape of fruits; Lei Shen[122] conducted channel pruning on the YOLO v5 model to obtain YOLO v5s when studying grape string counting, which effectively reduced the number of model parameters, size, and FLOPs. NMS was introduced to improve detection performance during prediction, resulting in mAP and F1 scores of 82.3% and 79.5% on the image datasets respectively, which were validated through video data; Ubert Cecotti[123] studied grape detection based on convolutional neural network algorithms and compared the effects of three feature spaces: color images, grayscale images, and color histograms. Finally, the model trained using the Resnet network combined with transfer learning performed the best, with an accuracy of over 99% for both red and white grapes; Fernando Palacios[124] combined machine learning with deep learning algorithms to achieve grape berry detection and counting. SegNet was used to segment individual berries and extract canopy features. Three different yield prediction models were compared, and the experimental results showed that support vector machine regression was the most effective, resulting in an NRMSE of 24.99% and an R2 of 0.83; Shan Chen[125] designed an improved grape string segmentation method based on the PSPNet model, which CBAM attention mechanism and atrous convolution were mainly embedded in its backbone network to enhance the ability of detail feature extraction and multi-layer feature fusion. 
The improved model increased IoU and pixel accuracy (PA) by 4.36% and 9.95% respectively, reaching 87.42% and 95.73%. Shan Chen[126] used three architectures, object detection, CNN, and Transformer, to count grape clusters and found that the Transformer architecture had the highest prediction accuracy, with a MAPE of 18%, while eliminating the step of manually labeling images, a significant advantage. Marco Sozzi[127] applied YOLOv3, YOLOv3 tiny, YOLOv4 tiny, YOLOv5 tiny, YOLOv5x, and YOLOv5s to grape string detection and counting, compared the prediction results of the different models on different datasets, and identified YOLOv5x as the best-performing model, with an average error of 13.3%. Fernando Palacios[128] used the deep convolutional neural network SegNet, with VGG19 as the encoder, for grape flower detection and counting, achieving good detection accuracy: the predicted flower count per vine reached an R2 of over 0.7 against actual counts, and a mobile automatic detection device was developed. In terms of apple yield calculation, Lijuan Sun[129] proposed the YOLOv5-PRE model for apple detection and counting based on YOLOv5s; by introducing the lightweight structures of ShuffleNet and GhostNet as well as attention mechanisms, the average accuracy of the YOLOv5-PRE model reached 94.03%, with significant improvements in accuracy and detection efficiency over YOLOv5s. Orly Enrique Apolo-Apolo[130] explored apple detection based on CNN networks, using aerial images collected by drones as the training set and Faster R-CNN as the training network; R2 reached 0.86, and linear regression was used to fit the total number of apples per tree to compensate for occlusion in some fruit trees, providing a good solution for apple yield calculation.
In addition, similar studies have been conducted on weed detection[131], chili biomass calculation, pod detection and counting, and other tasks. Longzhe Quan[132] developed a dual-stream dense-feature-fusion convolutional neural network model based on RGB-D imagery to detect weeds and estimate their aboveground fresh weight in land parcels, obtaining richer information than RGB images alone; by constructing a NiN-Block module to enhance feature extraction and fusion, the average accuracy of predicting weed fresh weight reached 75.34% at an IoU threshold of 0.5. Taewon Moon[133] combined simple formulas with deep learning networks to calculate the fresh weight and leaf area of greenhouse sweet peppers: fresh weight was calculated from the total system weight and volumetric water content measured by the device, a ConvNet was used to estimate leaf area, and the respective R2 values were 0.7 and 0.95; this solution is generic and can be promoted in practical application scenarios. Wei Lu[134] captured RGB images of plants with a camera and first compared Faster R-CNN, FPN, SSD, and YOLOv3 for pod recognition; the YOLOv3 network with the highest recognition accuracy was selected, and on this basis the loss function, anchor-box clustering algorithm, and parts of the network were improved to detect and count soybean leaves. Finally, the GRNN algorithm was used to model the numbers of pods and leaves, producing an optimal soybean yield prediction model with an average accuracy of 97.43%. Luis G. Riera[135] developed a yield calculation framework based on multi-view RGB images captured by cameras and established a pod recognition and counting model using RetinaNet, effectively overcoming the occlusion problem in pod counting.
Table 7. Research progress of deep learning in crop image yield calculation.
| Crop | Author | Year | Task | Network framework and algorithms | Result |
| --- | --- | --- | --- | --- | --- |
| corn | Canek Mota-Delfin[107] | 2022 | Detect and count corn plants | YOLOv4, YOLOv5 series | mAP: 73.1% |
| corn | Yunling Liu[108] | 2020 | Detect and count corn ears | Faster R-CNN | AP: 94.99% |
| corn | Honglei Jia[109] | 2020 | Detect and count corn ears | VGG16 | mAP: 97.02% |
| wheat | Yixin Guo[110] | 2022 | Detect and count wheat ears | SlypNet | mAP: 99% |
| wheat | Petteri Nevavuori[111] | 2020 | Predict wheat yield | 3D-CNN | R2: 0.962 |
| wheat | Ruicheng Qiu[112] | 2022 | Detect and count wheat ears | DCNN | R2: 0.84 |
| wheat | Yao Zhaosheng[113] | 2022 | Rapidly detect wheat spikes | YOLOX-m | AP: 88.03% |
| wheat | Hecang Zang[114] | 2022 | Detect and count wheat ears | YOLOv5s | AP: 71.61% |
| wheat | Fengkui Zhao[115] | 2022 | Detect and count wheat ears | YOLOv4 | R2: 0.973 |
| sorghum | Zhe Lin[116] | 2020 | Detect and count sorghum spikes | U-Net | AP: 95.5% |
| rice | Yixin Guo[117] | 2021 | Calculate rice seed setting rate (RSSR) | YOLOv4 | mAP: 99.43% |
| rice | Jingye Han[118] | 2022 | Estimate rice yield | CNN | R2: 0.646 |
| kiwifruit | Zhongxian Zhou[119] | 2020 | Count fruit quantity | MobileNetV2, InceptionV3 | TDR: 89.7% |
| mango | Juntao Xiong[120] | 2020 | Detect and count mangoes | YOLOv2 | error rate: 1.1% |
| grape | Thiago T. Santos[121] | 2020 | Detect and count grape strings | Mask R-CNN, YOLOv3 | F1-score: 0.91 |
| grape | Lei Shen[122] | 2023 | Detect and count grape strings | YOLOv5s | mAP: 82.3% |
| grape | Hubert Cecotti[123] | 2020 | Detect grapes | ResNet | mAP: 99% |
| grape | Fernando Palacios[124] | 2022 | Detect and count grape berries | SegNet, SVR | R2: 0.83 |
| grape | Shan Chen[125] | 2021 | Segment grape strings | PSPNet | PA: 95.73% |
| grape | Shan Chen[126] | 2022 | Detect and count grape strings | Object detection, CNN, Transformer | MAPE: 18% |
| grape | Marco Sozzi[127] | 2022 | Detect and count grape strings | YOLO | MAPE: 13.3% |
| grape | Fernando Palacios[128] | 2020 | Detect and count grapevine flowers | SegNet | R2: 0.70 |
| apple | Lijuan Sun[129] | 2022 | Detect and count apples | YOLOv5-PRE | mAP: 94.03% |
| apple | Orly Enrique Apolo-Apolo[130] | 2020 | Detect and count apples | Faster R-CNN | R2: 0.86 |
| weed | Longzhe Quan[132] | 2021 | Estimate aboveground fresh weight of weeds | YOLOv4 | mAP: 75.34% |
| capsicum | Taewon Moon[133] | 2022 | Estimate fresh weight and leaf area | ConvNet | R2: 0.95 |
| soybean | Wei Lu[134] | 2022 | Predict soybean yield | YOLOv3, GRNN | mAP: 97.43% |
| soybean | Luis G. Riera[135] | 2021 | Count soybean pods | RetinaNet | mAP: 0.71 |
Deep learning-based image processing pipelines are relatively complex, but feature extraction is completed autonomously by the machine without manual intervention, yielding relatively high accuracy. At the same time, deep learning requires a large amount of computation and uses networks with many layers. As network depth increases, feature maps and higher-level conceptual information are continuously extracted, reducing resolution and sensitivity to fine detail, which can lead to missed or false detections. In addition, occlusion among plants, leaves, and fruits is a serious problem in practical application, so optimization is needed in image acquisition, preprocessing, and training network construction, through methods such as background removal, branch and leaf reconstruction, and video-stream capture.

4. Discussion

At present, image-based crop yield calculation mainly relies on remote sensing images and visible light images. The large amount of data collected by remote sensing can describe almost all aspects of plant physiology, and even internal changes in plants, depending on sensor resolution: absorption spectra or reflected electromagnetic information extracted by multi-channel sensors or remote sensing satellites is fed to machine learning algorithms to establish crop biomass or yield calculation models, which reflect the overall growth of crops and are suitable for grain crops cultivated at large scale. Crop yield calculation based on visible light mainly achieves fruit counting through image segmentation or detection and is suitable for economic crops such as eggplants and melons. High-resolution image acquisition, image preprocessing, feature variable selection, and model algorithm selection are all key factors affecting prediction accuracy. Table 8 compares yield calculation schemes based on the two image types in terms of image acquisition methods, preprocessing, extracted indicators, main advantages, main disadvantages, and representative algorithms.
Table 8. Comparison of crop yield calculation schemes by different technical categories.
| Image type | Obtaining method | Image preprocessing | Extracted indicators | Main advantages | Main disadvantages | Representative algorithms |
| --- | --- | --- | --- | --- | --- | --- |
| Remote sensing images | Low-altitude drone equipped with multispectral, visible light, thermal imaging, or hyperspectral cameras | Size correction; multi-channel image fusion; projection conversion; resampling; surface reflectance | Multispectral vegetation indices; biophysical parameters; growth environment parameters | Multi-channel images containing time, space, temperature, and band information; multi-channel fusion provides rich information | Spatiotemporal and band attributes are difficult to fully utilize; long shooting distance suits yield prediction for large-scale land parcels but gives low accuracy; easily affected by weather | ML, ANN, CNN-LSTM, 3D-CNN |
| Remote sensing images | Satellite | (as above) | (as above) | (as above) | Low spatial and temporal resolution, long revisit cycle, and pixel mixing | (as above) |
| Visible light images | Digital camera | Size adjustment; rotation; cropping; Gaussian blur; color enhancement; brightening; noise reduction; annotation; dataset partitioning | Color index; texture index; morphological index | Easy to obtain images at low cost | Only three bands (red, green, blue) with limited information content | Linear regression, ML, YOLO, ResNet, SSD, Mask R-CNN |
Image acquisition. With the continuous advancement of digital imaging and sensor technology, obtaining high-resolution visible light or spectral images has become more convenient and efficient. Unmanned aerial vehicles (UAVs) are now widely used for data collection in agricultural and ecological applications, with advantages in economy and flexibility, while satellite-based remote sensing acquisition offers better stability, making it easy to obtain data across multiple growth stages and convenient for long-term monitoring. In addition, factors such as shooting angle, blur, backlight, shadows, and occlusion lead to incomplete target segmentation; their impact can be effectively weakened by improving image contrast, applying light compensation, and similar measures.
Image preprocessing. The near-infrared region expresses the absorption of hydrogen-containing groups, but the absorption is weak and the spectra overlap, requiring denoising and filtering to improve the signal-to-noise ratio and thereby highlight the distribution of vegetation characteristics and canopy structure changes. For visible light images, random aspect-ratio cropping, horizontal flipping, vertical flipping, saturation enhancement or reduction, Gaussian blur, grayscale conversion, CutMix, and Mosaic are commonly used methods to adjust image geometry, expand the number of samples, and improve the signal-to-noise ratio, all of which can enhance the model's generalization ability.
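As an illustration, the simpler geometric and photometric transforms listed above can be sketched in a few lines of numpy (a minimal, library-free sketch; the `augment` function and its crop ratio are illustrative, and heavier methods such as CutMix and Mosaic are omitted):

```python
import numpy as np

rng = np.random.default_rng(0)

def augment(img: np.ndarray) -> list:
    """Return simple flipped, brightened, and cropped variants of an H x W x 3 image."""
    h, w, _ = img.shape
    variants = [
        img[:, ::-1],                    # horizontal flip
        img[::-1, :],                    # vertical flip
        np.clip(img * 1.2, 0, 255),      # brightness boost, clipped to valid range
    ]
    # random crop to 80% of each side (resizing back omitted for brevity)
    ch, cw = int(0.8 * h), int(0.8 * w)
    y = rng.integers(0, h - ch + 1)
    x = rng.integers(0, w - cw + 1)
    variants.append(img[y:y + ch, x:x + cw])
    return variants

sample = rng.integers(0, 256, size=(64, 64, 3)).astype(np.float32)
augmented = augment(sample)
```

Each call expands one labeled image into several training samples, which is the mechanism by which augmentation enlarges small agricultural datasets.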
Feature variable screening. There are over 40 crop indices, and selecting those closely related to crop growth and yield remains difficult. Selecting highly correlated independent variables from a large amount of feature information is key to building a high-performance model: eliminating redundant or irrelevant variables improves model robustness and reduces computational complexity. Principal component analysis (PCA) is a common dimensionality-reduction algorithm that projects correlated variables onto a small number of components capturing most of the variance in the data. Similar approaches include decision trees (DT), genetic algorithms (GA), and simulated annealing (SA).
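A minimal numpy sketch of how PCA reduces a matrix of correlated vegetation indices to a few components (the toy data and the `pca_reduce` function are illustrative, not from any cited study):

```python
import numpy as np

def pca_reduce(X: np.ndarray, n_components: int):
    """Project samples (rows of X) onto the top principal components."""
    Xc = X - X.mean(axis=0)                     # center each feature
    cov = np.cov(Xc, rowvar=False)              # feature covariance matrix
    eigvals, eigvecs = np.linalg.eigh(cov)      # eigenvalues in ascending order
    order = np.argsort(eigvals)[::-1][:n_components]
    components = eigvecs[:, order]              # top eigenvectors as columns
    explained = eigvals[order] / eigvals.sum()  # variance ratio per component
    return Xc @ components, explained

rng = np.random.default_rng(1)
# toy matrix: 100 plots x 10 vegetation indices driven by 2 latent factors
base = rng.normal(size=(100, 2))
X = base @ rng.normal(size=(2, 10)) + 0.05 * rng.normal(size=(100, 10))
scores, ratio = pca_reduce(X, 2)
```

Because the toy indices are driven by two latent factors, two components recover almost all of the variance, which is exactly the redundancy-removal effect described above.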
Selection of model algorithms. There are two main model algorithms used for yield calculation: machine learning (ML) and deep learning (DL)[136].
(1) Machine learning (ML)
The growth process of crops is complex, and their response to environmental changes is generally nonlinear, so traditional statistical methods are not always sufficient to accurately estimate plant growth. Machine learning (ML)[12] can perform regression analysis on highly nonlinear problems and identify nonlinear relationships between input and output datasets (Bishop)[137]; it learns patterns of change from large amounts of data, supports autonomous decision-making, and provides a good solution for complex data analysis, and is widely used in scenarios such as image segmentation and target recognition. Compared with traditional crop models and statistical methods, yield prediction models built with ML can handle nonlinear relationships and identify the independent variables that most strongly affect yield, but their interpretability is limited[138], and the resulting models are usually tailored to specific application scenarios, requiring particular attention to overfitting. In crop yield calculation research, commonly used machine learning algorithms include artificial neural networks (ANN), support vector machines (SVM), Gaussian process regression (GPR), partial least squares regression (PLSR), multi-layer perceptrons (MLP), random forests (RF), and k-nearest neighbors (KNN), which should be chosen according to the specific dataset, variable types, crop type, growth stage, and other conditions.
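As a concrete example of the simplest method in that list, a k-nearest-neighbor regressor for yield can be written directly in numpy (a hedged sketch: the features and yield values are synthetic, invented purely for illustration):

```python
import numpy as np

def knn_predict(X_train, y_train, X_query, k=3):
    """k-nearest-neighbor regression: average the yields of the k closest plots."""
    preds = []
    for q in X_query:
        d = np.linalg.norm(X_train - q, axis=1)  # Euclidean distance to each sample
        nearest = np.argsort(d)[:k]              # indices of the k closest neighbors
        preds.append(y_train[nearest].mean())    # mean yield of those neighbors
    return np.array(preds)

rng = np.random.default_rng(2)
X = rng.uniform(0, 1, size=(200, 4))             # e.g. 4 vegetation indices per plot
y = 3.0 * X[:, 0] + 1.5 * X[:, 1] + 0.1 * rng.normal(size=200)  # synthetic yield
pred = knn_predict(X[:150], y[:150], X[150:], k=5)
```

The same train/query split pattern carries over unchanged when the regressor is swapped for SVM, RF, or GPR from a library such as scikit-learn.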
(2) Deep learning (DL)
Deep learning is a higher-order machine learning method comprising multiple layers of neural networks, which can deeply explore internal relationships in data and automatically learn hierarchical representations from large datasets using complex nonlinear functions. Compared with classical ML, deep learning generally achieves higher accuracy. In recent years, DL has been increasingly used for crop biomass monitoring and yield calculation, demonstrating powerful feature extraction and self-learning abilities. Convolutional neural networks (CNN) and recurrent neural networks (RNN) are the DL methods most commonly used to explore correlations between independent variables and yield[139]. The CNN is the most widely used deep learning architecture in image processing and is mainly composed of convolutional and pooling layers[101]. A CNN can take images as input and automatically extract features such as color, geometry, and texture[140], and has been widely applied to field weed and pest identification, environmental stress assessment, agricultural image segmentation, and yield calculation. CNN models mainly capture the spatial features of images, while RNNs mainly analyze temporal data, especially remote sensing and meteorological data spanning multiple growth periods and long time series. Long short-term memory (LSTM) is an improved RNN variant that effectively mitigates gradient explosion and vanishing; combined with a CNN, it builds more accurate yield calculation models based on multimodal fusion of remote sensing data, meteorological data, phenological information, and other modalities.
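The convolution-plus-pooling mechanism at the core of a CNN can be illustrated with a minimal numpy sketch (single channel, one hand-set edge filter; in a real CNN many such filters are learned from data rather than fixed):

```python
import numpy as np

def conv2d(img, kernel):
    """Valid-mode 2D cross-correlation of a single-channel image with one filter."""
    kh, kw = kernel.shape
    h, w = img.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * kernel)
    return out

def max_pool(fm, size=2):
    """Non-overlapping max pooling that halves each spatial dimension."""
    h, w = fm.shape
    fm = fm[:h - h % size, :w - w % size]
    return fm.reshape(h // size, size, w // size, size).max(axis=(1, 3))

img = np.zeros((8, 8))
img[:, 4:] = 1.0                                # toy image with a vertical edge
edge_kernel = np.array([[-1., 0., 1.]] * 3)     # responds to left-to-right brightening
fm = conv2d(img, edge_kernel)                   # 6 x 6 feature map, peaks at the edge
pooled = max_pool(fm)                           # 3 x 3 map; the edge response survives
```

Stacking many such layers is what progressively reduces resolution, the trade-off noted earlier as a cause of missed detections of small targets.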
Meanwhile, deep learning models generally have complex structures and require large numbers of data samples and substantial computing power to achieve the expected results. Small samples easily cause overfitting, so data augmentation is particularly important. In addition, the many model hyperparameters are important factors affecting prediction accuracy. In many studies, hyperparameters are determined from experience or model evaluation, and some algorithms are combined to achieve hyperparameter optimization, such as Bayesian optimization, genetic algorithms, and particle swarm optimization.
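Random search, one of the simplest hyperparameter optimization strategies and a common baseline for the methods just listed, can be sketched as follows (the objective function is a synthetic stand-in for validation loss, with an assumed optimum at lr = 0.1 and depth = 6):

```python
import numpy as np

rng = np.random.default_rng(3)

def objective(lr, depth):
    """Stand-in for a validation loss, minimized at lr = 0.1 and depth = 6."""
    return (np.log10(lr) + 1) ** 2 + 0.05 * (depth - 6) ** 2

best = None
for _ in range(200):                  # 200 random trials over the search space
    lr = 10 ** rng.uniform(-4, 0)     # learning rate sampled log-uniformly
    depth = int(rng.integers(2, 12))  # integer network depth in [2, 11]
    loss = objective(lr, depth)
    if best is None or loss < best[0]:
        best = (loss, lr, depth)      # keep the best (loss, lr, depth) so far
```

Bayesian optimization, GA, and PSO improve on this loop by using past trials to propose the next candidate instead of sampling blindly.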

5. Conclusions and Outlooks

With the continuous progress of artificial intelligence and sensor technology, image analysis technology is being applied more and more widely to agricultural yield estimation. Remote sensing images and visible light images are being used by scholars for crop target segmentation, detection, counting, biomass monitoring, and yield calculation. Spectral indices, geometric shape, texture, and other image information can effectively reflect the internal growth status of crops and have been proven applicable to yield calculation for various food and economic crops. With improving image resolution and continuous optimization of model algorithms, the accuracy of crop yield calculation is also increasing, but more problems and challenges remain.
Model algorithm optimization. Deep learning-based object detection and segmentation algorithms still suffer from problems such as poor performance on small objects and low accuracy at target boundaries. Researchers have proposed many improvements. On one hand, performance can be raised by changing the network structure and loss function: introducing attention mechanisms and multi-scale fusion techniques enhances the network's attention to and perception of targets and improves detection and segmentation accuracy. On the other hand, data augmentation is also effective: transformations such as rotation, scaling, and translation increase the diversity of training data and improve the robustness and generalization ability of the model. In addition, pre-training model parameters can accelerate convergence during training and further improve performance. In summary, deep learning-based object detection and segmentation algorithms have broad application prospects in computer vision, and continued improvement and optimization will raise both their accuracy and efficiency.
The fusion of multimodal and multi-channel data. The growth process of crops is complex and variable and is greatly influenced by factors such as light, precipitation, and temperature. Yield prediction is closely related to environmental factors, so time series samples are particularly important[141]. Data collection therefore needs to cover remote sensing and meteorological data spanning multiple growth periods and long time series, and multi-feature fusion (multispectral, thermal infrared, weather) achieves higher accuracy than any single dimension. After fusing data, including image texture and multi-channel spectral information, models can be further trained on private or publicly available datasets. In addition, multimodal frameworks can be extended to integrate environmental factors such as meteorology, geography, soil, and altitude into yield prediction, which can significantly improve prediction results.
Compensating for insufficient sample size with transfer learning. Deep learning requires a large number of data samples[142]. Transfer learning fine-tunes the parameters of a model pre-trained on a large dataset using a limited number of new samples, achieving better performance on new problems. It takes two main forms. The first is region-based transfer learning: regions with sufficient samples are used to learn the model, which is then extended to regions with fewer samples. The second is parameter-based transfer learning: partial parameters or prior distributions of hyperparameters are shared between models for related tasks to improve overall performance. Although both methods help improve model performance, the complexity and diversity of data mean there is currently no unified method for defining dataset similarity, and similarity-based transfer requires more quantitative and qualitative justification. In the future, as data accumulate, the advantages of deep learning models will become more prominent; moreover, in region-based transfer learning the environments of different regions are heterogeneous, and how to achieve transfer across heterogeneous environments is a future research direction.
The combination of multiple collection platforms. When monitoring crop growth and estimating yield at the field scale, satellite remote sensing struggles to overcome the impact of spatial heterogeneity on accuracy, whereas drone platforms can better capture heterogeneity information. It is therefore possible to combine drone and satellite platforms, using drone data as an intermediate variable for scale conversion in the spatiotemporal fusion of satellite data, ensuring accuracy during downscaling.
The interpretability of yield calculation models. The mechanisms of deep learning algorithms are difficult to explain, since feature extraction is mostly automatic from data. Crop growth models, by contrast, can better express information such as the crop growth process, environment, and cultivation technology[9], thereby describing growth and development explicitly. Combining deep learning with crop growth models can improve the explanatory power of yield calculation.
Computational requirements of models. Monitoring growth and estimating yield requires the complex network structures of deep learning to fully learn from high-resolution data, which demands long training times and high computing performance, so lightweight model algorithms that preserve accuracy are particularly important. How to learn features efficiently and quickly, ensure the completeness of the learned features, and minimize the learning of redundant information is therefore a key issue when applying deep learning methods to field-scale growth monitoring.

References

  1. Paudel, D.; Boogaard, H.; de Wit, A.; Janssen, S.; Osinga, S.; Pylianidis, C.; Athanasiadis, I.N. Machine learning for large-scale crop yield forecasting. Agricultural Systems 2021, 187. [CrossRef]
  2. Zhu, Y.; Wu, S.; Qin, M.; Fu, Z.; Gao, Y.; Wang, Y.; Du, Z. A deep learning crop model for adaptive yield estimation in large areas. International Journal of Applied Earth Observation and Geoinformation 2022, 110. [CrossRef]
  3. Akhtar, M.N.; Ansari, E.; Alhady, S.S.N.; Abu Bakar, E. Leveraging on Advanced Remote Sensing- and Artificial Intelligence-Based Technologies to Manage Palm Oil Plantation for Current Global Scenario: A Review. Agriculture-Basel 2023, 13. [CrossRef]
  4. Torres-Sánchez, J.; Souza, J.; di Gennaro, S.F.; Mesas-Carrascosa, F.J. Editorial: Fruit detection and yield prediction on woody crops using data from unmanned aerial vehicles. Frontiers in Plant Science 2022, 13. [CrossRef]
  5. Wen, T.; Li, J.-H.; Wang, Q.; Gao, Y.-Y.; Hao, G.-F.; Song, B.-A. Thermal imaging: The digital eye facilitates high-throughput phenotyping traits of plant growth and stress responses. Science of the Total Environment 2023, 899. [CrossRef]
  6. Farjon, G.; Huijun, L.; Edan, Y. Deep-learning-based counting methods, datasets, and applications in agriculture: a review. Precision Agriculture 2023, 24, 1683-1711. [CrossRef]
  7. Rashid, M.; Bari, B.S.; Yusup, Y.; Kamaruddin, M.A.; Khan, N. A Comprehensive Review of Crop Yield Prediction Using Machine Learning Approaches With Special Emphasis on Palm Oil Yield Prediction. IEEE Access 2021, 9, 63406-63439. [CrossRef]
  8. Attri, I.; Awasthi, L.K.; Sharma, T.P. Machine learning in agriculture: a review of crop management applications. Multimedia Tools and Applications 2023. [CrossRef]
  9. Di, Y.; Gao, M.; Feng, F.; Li, Q.; Zhang, H. A New Framework for Winter Wheat Yield Prediction Integrating Deep Learning and Bayesian Optimization. Agronomy 2022, 12. [CrossRef]
  10. Teixeira, I.; Morais, R.; Sousa, J.J.; Cunha, A. Deep Learning Models for the Classification of Crops in Aerial Imagery: A Review. Agriculture-Basel 2023, 13. [CrossRef]
  11. Bali, N.; Singla, A. Deep Learning Based Wheat Crop Yield Prediction Model in Punjab Region of North India. Applied Artificial Intelligence 2021, 35, 1304-1328. [CrossRef]
  12. van Klompenburg, T.; Kassahun, A.; Catal, C. Crop yield prediction using machine learning: A systematic literature review. Computers and Electronics in Agriculture 2020, 177. [CrossRef]
  13. Thakur, A.; Venu, S.; Gurusamy, M. An extensive review on agricultural robots with a focus on their perception systems. Computers and Electronics in Agriculture 2023, 212. [CrossRef]
  14. Abebe, A.M.; Kim, Y.; Kim, J.; Kim, S.L.; Baek, J. Image-Based High-Throughput Phenotyping in Horticultural Crops. Plants-Basel 2023, 12. [CrossRef]
  15. Alkhaled, A.; Townsend, P.A.A.; Wang, Y. Remote Sensing for Monitoring Potato Nitrogen Status. American Journal of Potato Research 2023, 100, 1-14. [CrossRef]
  16. Pokhariyal, S.; Patel, N.R.; Govind, A. Machine Learning-Driven Remote Sensing Applications for Agriculture in India-A Systematic Review. Agronomy-Basel 2023, 13. [CrossRef]
  17. Joshi, A.; Pradhan, B.; Gite, S.; Chakraborty, S. Remote-Sensing Data and Deep-Learning Techniques in Crop Mapping and Yield Prediction: A Systematic Review. Remote Sensing 2023, 15. [CrossRef]
  18. Muruganantham, P.; Wibowo, S.; Grandhi, S.; Samrat, N.H.; Islam, N. A Systematic Literature Review on Crop Yield Prediction with Deep Learning and Remote Sensing. Remote Sensing 2022, 14. [CrossRef]
  19. Ren, Y.; Li, Q.; Du, X.; Zhang, Y.; Wang, H.; Shi, G.; Wei, M. Analysis of Corn Yield Prediction Potential at Various Growth Phases Using a Process-Based Model and Deep Learning. Plants 2023, 12. [CrossRef]
  20. Zhou, S.; Xu, L.; Chen, N. Rice Yield Prediction in Hubei Province Based on Deep Learning and the Effect of Spatial Heterogeneity. Remote Sensing 2023, 15. [CrossRef]
  21. Darra, N.; Anastasiou, E.; Kriezi, O.; Lazarou, E.; Kalivas, D.; Fountas, S. Can Yield Prediction Be Fully Digitilized? A Systematic Review. Agronomy-Basel 2023, 13. [CrossRef]
  22. Shahi, T.B.; Xu, C.-Y.; Neupane, A.; Guo, W. Recent Advances in Crop Disease Detection Using UAV and Deep Learning Techniques. Remote Sensing 2023, 15. [CrossRef]
  23. Istiak, A.; Syeed, M.M.M.; Hossain, S.; Uddin, M.F.; Hasan, M.; Khan, R.H.; Azad, N.S. Adoption of Unmanned Aerial Vehicle (UAV) imagery in agricultural management: A systematic literature review. Ecological Informatics 2023, 78. [CrossRef]
  24. Fajardo, M.; Whelan, B.M. Within-farm wheat yield forecasting incorporating off-farm information. Precision Agriculture 2021, 22, 569-585. [CrossRef]
  25. Yli-Heikkilä, M.; Wittke, S.; Luotamo, M.; Puttonen, E.; Sulkava, M.; Pellikka, P.; Heiskanen, J.; Klami, A. Scalable Crop Yield Prediction with Sentinel-2 Time Series and Temporal Convolutional Network. Remote Sensing 2022, 14. [CrossRef]
  26. Safdar, L.B.; Dugina, K.; Saeidan, A.; Yoshicawa, G.V.; Caporaso, N.; Gapare, B.; Umer, M.J.; Bhosale, R.A.; Searle, I.R.; Foulkes, M.J.; et al. Reviving grain quality in wheat through non-destructive phenotyping techniques like hyperspectral imaging. Food and Energy Security 2023, 12. [CrossRef]
  27. He, L.; Fang, W.; Zhao, G.; Wu, Z.; Fu, L.; Li, R.; Majeed, Y.; Dhupia, J. Fruit yield prediction and estimation in orchards: A state-of-the-art comprehensive review for both direct and indirect methods. Computers and Electronics in Agriculture 2022, 195. [CrossRef]
  28. Tende, I.G.; Aburada, K.; Yamaba, H.; Katayama, T.; Okazaki, N. Development and Evaluation of a Deep Learning Based System to Predict District-Level Maize Yields in Tanzania. Agriculture 2023, 13. [CrossRef]
  29. Leukel, J.; Zimpel, T.; Stumpe, C. Machine learning technology for early prediction of grain yield at the field scale: A systematic review. Computers and Electronics in Agriculture 2023, 207. [CrossRef]
  30. Elangovan, A.; Duc, N.T.; Raju, D.; Kumar, S.; Singh, B.; Vishwakarma, C.; Gopala Krishnan, S.; Ellur, R.K.; Dalal, M.; Swain, P.; et al. Imaging Sensor-Based High-Throughput Measurement of Biomass Using Machine Learning Models in Rice. Agriculture 2023, 13. [CrossRef]
  31. Hassanzadeh, A.; Zhang, F.; van Aardt, J.; Murphy, S.P.; Pethybridge, S.J. Broadacre Crop Yield Estimation Using Imaging Spectroscopy from Unmanned Aerial Systems (UAS): A Field-Based Case Study with Snap Bean. Remote Sensing 2021, 13. [CrossRef]
  32. Sanaeifar, A.; Yang, C.; Guardia, M.d.l.; Zhang, W.; Li, X.; He, Y. Proximal hyperspectral sensing of abiotic stresses in plants. Science of the Total Environment 2023, 861. [CrossRef]
  33. Li, K.-Y.; Sampaio de Lima, R.; Burnside, N.G.; Vahtmäe, E.; Kutser, T.; Sepp, K.; Cabral Pinheiro, V.H.; Yang, M.-D.; Vain, A.; Sepp, K. Toward Automated Machine Learning-Based Hyperspectral Image Analysis in Crop Yield and Biomass Estimation. Remote Sensing 2022, 14. [CrossRef]
  34. Elavarasan, D.; Vincent, P.M.D. Crop Yield Prediction Using Deep Reinforcement Learning Model for Sustainable Agrarian Applications. IEEE Access 2020, 8, 86886-86901. [CrossRef]
  35. Teodoro, P.E.; Teodoro, L.P.R.; Baio, F.H.R.; da Silva Junior, C.A.; dos Santos, R.G.; Ramos, A.P.M.; Pinheiro, M.M.F.; Osco, L.P.; Gonçalves, W.N.; Carneiro, A.M.; et al. Predicting Days to Maturity, Plant Height, and Grain Yield in Soybean: A Machine and Deep Learning Approach Using Multispectral Data. Remote Sensing 2021, 13. [CrossRef]
  36. Fei, S.; Hassan, M.A.; Xiao, Y.; Su, X.; Chen, Z.; Cheng, Q.; Duan, F.; Chen, R.; Ma, Y. UAV-based multi-sensor data fusion and machine learning algorithm for yield prediction in wheat. Precision Agriculture 2022, 24, 187-212. [CrossRef]
  37. Ayankojo, I.T.T.; Thorp, K.R.R.; Thompson, A.L.L. Advances in the Application of Small Unoccupied Aircraft Systems (sUAS) for High-Throughput Plant Phenotyping. Remote Sensing 2023, 15. [CrossRef]
  38. Zualkernan, I.; Abuhani, D.A.; Hussain, M.H.; Khan, J.; ElMohandes, M. Machine Learning for Precision Agriculture Using Imagery from Unmanned Aerial Vehicles (UAVs): A Survey. Drones 2023, 7. [CrossRef]
  39. Zhang, Z.; Zhu, L. A Review on Unmanned Aerial Vehicle Remote Sensing: Platforms, Sensors, Data Processing Methods, and Applications. Drones 2023, 7. [CrossRef]
  40. Gonzalez-Sanchez, A.; Frausto-Solis, J.; Ojeda-Bustamante, W. Predictive ability of machine learning methods for massive crop yield prediction. Spanish Journal of Agricultural Research 2014, 12. [CrossRef]
  41. Yang, W.; Nigon, T.; Hao, Z.; Dias Paiao, G.; Fernández, F.G.; Mulla, D.; Yang, C. Estimation of corn yield based on hyperspectral imagery and convolutional neural network. Computers and Electronics in Agriculture 2021, 184. [CrossRef]
  42. Danilevicz, M.F.; Bayer, P.E.; Boussaid, F.; Bennamoun, M.; Edwards, D. Maize Yield Prediction at an Early Developmental Stage Using Multispectral Images and Genotype Data for Preliminary Hybrid Selection. Remote Sensing 2021, 13. [CrossRef]
  43. Kumar, C.; Mubvumba, P.; Huang, Y.; Dhillon, J.; Reddy, K. Multi-Stage Corn Yield Prediction Using High-Resolution UAV Multispectral Data and Machine Learning Models. Agronomy 2023, 13. [CrossRef]
  44. Yu, D.; Zha, Y.; Sun, Z.; Li, J.; Jin, X.; Zhu, W.; Bian, J.; Ma, L.; Zeng, Y.; Su, Z. Deep convolutional neural networks for estimating maize above-ground biomass using multi-source UAV images: a comparison with traditional machine learning algorithms. Precision Agriculture 2022, 24, 92-113. [CrossRef]
  45. Marques Ramos, A.P.; Prado Osco, L.; Elis Garcia Furuya, D.; Nunes Gonçalves, W.; Cordeiro Santana, D.; Pereira Ribeiro Teodoro, L.; Antonio da Silva Junior, C.; Fernando Capristo-Silva, G.; Li, J.; Henrique Rojo Baio, F.; et al. A random forest ranking approach to predict yield in maize with uav-based vegetation spectral indices. Computers and Electronics in Agriculture 2020, 178. [CrossRef]
  46. Mia, M.S.; Tanabe, R.; Habibi, L.N.; Hashimoto, N.; Homma, K.; Maki, M.; Matsui, T.; Tanaka, T.S.T. Multimodal Deep Learning for Rice Yield Prediction Using UAV-Based Multispectral Imagery and Weather Data. Remote Sensing 2023, 15. [CrossRef]
  47. Bellis, E.S.; Hashem, A.A.; Causey, J.L.; Runkle, B.R.K.; Moreno-García, B.; Burns, B.W.; Green, V.S.; Burcham, T.N.; Reba, M.L.; Huang, X. Detecting Intra-Field Variation in Rice Yield With Unmanned Aerial Vehicle Imagery and Deep Learning. Frontiers in Plant Science 2022, 13. [CrossRef]
  48. Bian, C.; Shi, H.; Wu, S.; Zhang, K.; Wei, M.; Zhao, Y.; Sun, Y.; Zhuang, H.; Zhang, X.; Chen, S. Prediction of Field-Scale Wheat Yield Using Machine Learning Method and Multi-Spectral UAV Data. Remote Sensing 2022, 14. [CrossRef]
  49. Han, Y.; Tang, R.; Liao, Z.; Zhai, B.; Fan, J. A Novel Hybrid GOA-XGB Model for Estimating Wheat Aboveground Biomass Using UAV-Based Multispectral Vegetation Indices. Remote Sensing 2022, 14. [CrossRef]
  50. Zhou, X.; Kono, Y.; Win, A.; Matsui, T.; Tanaka, T.S.T. Predicting within-field variability in grain yield and protein content of winter wheat using UAV-based multispectral imagery and machine learning approaches. Plant Production Science 2020, 24, 137-151. [CrossRef]
  51. Sharma, P.; Leigh, L.; Chang, J.; Maimaitijiang, M.; Caffé, M. Above-Ground Biomass Estimation in Oats Using UAV Remote Sensing and Machine Learning. Sensors 2022, 22. [CrossRef]
  52. Wang, F.; Yang, M.; Ma, L.; Zhang, T.; Qin, W.; Li, W.; Zhang, Y.; Sun, Z.; Wang, Z.; Li, F.; et al. Estimation of Above-Ground Biomass of Winter Wheat Based on Consumer-Grade Multi-Spectral UAV. Remote Sensing 2022, 14. [CrossRef]
  53. Roy Choudhury, M.; Das, S.; Christopher, J.; Apan, A.; Chapman, S.; Menzies, N.W.; Dang, Y.P. Improving Biomass and Grain Yield Prediction of Wheat Genotypes on Sodic Soil Using Integrated High-Resolution Multispectral, Hyperspectral, 3D Point Cloud, and Machine Learning Techniques. Remote Sensing 2021, 13. [CrossRef]
  54. Fu, Y.; Yang, G.; Song, X.; Li, Z.; Xu, X.; Feng, H.; Zhao, C. Improved Estimation of Winter Wheat Aboveground Biomass Using Multiscale Textures Extracted from UAV-Based Digital Images and Hyperspectral Feature Analysis. Remote Sensing 2021, 13. [CrossRef]
  55. Tanabe, R.; Matsui, T.; Tanaka, T.S.T. Winter wheat yield prediction using convolutional neural networks and UAV-based multispectral imagery. Field Crops Research 2023, 291. [CrossRef]
  56. Li, Z.; Chen, Z.; Cheng, Q.; Duan, F.; Sui, R.; Huang, X.; Xu, H. UAV-Based Hyperspectral and Ensemble Machine Learning for Predicting Yield in Winter Wheat. Agronomy 2022, 12. [CrossRef]
  57. Li, R.; Wang, D.; Zhu, B.; Liu, T.; Sun, C.; Zhang, Z. Estimation of grain yield in wheat using source–sink datasets derived from RGB and thermal infrared imaging. Food and Energy Security 2022, 12. [CrossRef]
  58. Sharifi, A. Yield prediction with machine learning algorithms and satellite images. Journal of the Science of Food and Agriculture 2020, 101, 891-896. [CrossRef]
  59. Ma, J.; Liu, B.; Ji, L.; Zhu, Z.; Wu, Y.; Jiao, W. Field-scale yield prediction of winter wheat under different irrigation regimes based on dynamic fusion of multimodal UAV imagery. International Journal of Applied Earth Observation and Geoinformation 2023, 118. [CrossRef]
  60. Maimaitijiang, M.; Sagan, V.; Sidike, P.; Hartling, S.; Esposito, F.; Fritschi, F.B. Soybean yield prediction from UAV using multimodal data fusion and deep learning. Remote Sensing of Environment 2020, 237. [CrossRef]
  61. Zhou, J.; Zhou, J.; Ye, H.; Ali, M.L.; Chen, P.; Nguyen, H.T. Yield estimation of soybean breeding lines under drought stress using unmanned aerial vehicle-based imagery and convolutional neural network. Biosystems Engineering 2021, 204, 90-103. [CrossRef]
  62. Yoosefzadeh-Najafabadi, M.; Tulpan, D.; Eskandari, M. Using Hybrid Artificial Intelligence and Evolutionary Optimization Algorithms for Estimating Soybean Yield and Fresh Biomass Using Hyperspectral Vegetation Indices. Remote Sensing 2021, 13. [CrossRef]
  63. Yoosefzadeh-Najafabadi, M.; Earl, H.J.; Tulpan, D.; Sulik, J.; Eskandari, M. Application of Machine Learning Algorithms in Plant Breeding: Predicting Yield From Hyperspectral Reflectance in Soybean. Frontiers in Plant Science 2021, 11. [CrossRef]
  64. Shi, Y.; Gao, Y.; Wang, Y.; Luo, D.; Chen, S.; Ding, Z.; Fan, K. Using Unmanned Aerial Vehicle-Based Multispectral Image Data to Monitor the Growth of Intercropping Crops in Tea Plantation. Frontiers in Plant Science 2022, 13. [CrossRef]
  65. Ji, Y.; Chen, Z.; Cheng, Q.; Liu, R.; Li, M.; Yan, X.; Li, G.; Wang, D.; Fu, L.; Ma, Y.; et al. Estimation of plant height and yield based on UAV imagery in faba bean (Vicia faba L.). Plant Methods 2022, 18. [CrossRef]
  66. Ji, Y.; Liu, R.; Xiao, Y.; Cui, Y.; Chen, Z.; Zong, X.; Yang, T. Faba bean above-ground biomass and bean yield estimation based on consumer-grade unmanned aerial vehicle RGB images and ensemble learning. Precision Agriculture 2023, 24, 1439-1460. [CrossRef]
  67. Liu, Y.; Feng, H.; Yue, J.; Fan, Y.; Jin, X.; Zhao, Y.; Song, X.; Long, H.; Yang, G. Estimation of Potato Above-Ground Biomass Using UAV-Based Hyperspectral images and Machine-Learning Regression. Remote Sensing 2022, 14. [CrossRef]
  68. Sun, C.; Feng, L.; Zhang, Z.; Ma, Y.; Crosby, T.; Naber, M.; Wang, Y. Prediction of End-Of-Season Tuber Yield and Tuber Set in Potatoes Using In-Season UAV-Based Hyperspectral Imagery and Machine Learning. Sensors 2020, 20. [CrossRef]
  69. Xu, W.; Chen, P.; Zhan, Y.; Chen, S.; Zhang, L.; Lan, Y. Cotton yield estimation model based on machine learning using time series UAV remote sensing data. International Journal of Applied Earth Observation and Geoinformation 2021, 104. [CrossRef]
  70. Poudyal, C.; Costa, L.F.; Sandhu, H.; Ampatzidis, Y.; Odero, D.C.; Arbelo, O.C.; Cherry, R.H. Sugarcane yield prediction and genotype selection using unmanned aerial vehicle-based hyperspectral imaging and machine learning. Agronomy Journal 2022, 114, 2320-2333. [CrossRef]
  71. de Oliveira, R.P.; Barbosa Júnior, M.R.; Pinto, A.A.; Oliveira, J.L.P.; Zerbato, C.; Furlani, C.E.A. Predicting Sugarcane Biometric Parameters by UAV Multispectral Images and Machine Learning. Agronomy 2022, 12. [CrossRef]
  72. He, Z.; Wu, K.; Wang, F.; Jin, L.; Zhang, R.; Tian, S.; Wu, W.; He, Y.; Huang, R.; Yuan, L.; et al. Fresh Yield Estimation of Spring Tea via Spectral Differences in UAV Hyperspectral Images from Unpicked and Picked Canopies. Remote Sensing 2023, 15. [CrossRef]
  73. Feng, L.; Zhang, Z.; Ma, Y.; Du, Q.; Williams, P.; Drewry, J.; Luck, B. Alfalfa Yield Prediction Using UAV-Based Hyperspectral Imagery and Ensemble Learning. Remote Sensing 2020, 12. [CrossRef]
  74. Wengert, M.; Wijesingha, J.; Schulze-Brüninghoff, D.; Wachendorf, M.; Astor, T. Multisite and Multitemporal Grassland Yield Estimation Using UAV-Borne Hyperspectral Data. Remote Sensing 2022, 14. [CrossRef]
  75. Pranga, J.; Borra-Serrano, I.; Aper, J.; De Swaef, T.; Ghesquiere, A.; Quataert, P.; Roldán-Ruiz, I.; Janssens, I.A.; Ruysschaert, G.; Lootens, P. Improving Accuracy of Herbage Yield Predictions in Perennial Ryegrass with UAV-Based Structural and Spectral Data Fusion and Machine Learning. Remote Sensing 2021, 13. [CrossRef]
  76. Li, K.-Y.; Burnside, N.G.; Sampaio de Lima, R.; Villoslada Peciña, M.; Sepp, K.; Yang, M.-D.; Raet, J.; Vain, A.; Selge, A.; Sepp, K. The Application of an Unmanned Aerial System and Machine Learning Techniques for Red Clover-Grass Mixture Yield Estimation under Variety Performance Trials. Remote Sensing 2021, 13. [CrossRef]
  77. Tatsumi, K.; Igarashi, N.; Mengxue, X. Prediction of plant-level tomato biomass and yield using machine learning with unmanned aerial vehicle imagery. Plant Methods 2021, 17. [CrossRef]
  78. Ballesteros, R.; Intrigliolo, D.S.; Ortega, J.F.; Ramírez-Cuesta, J.M.; Buesa, I.; Moreno, M.A. Vineyard yield estimation by combining remote sensing, computer vision and artificial neural network techniques. Precision Agriculture 2020, 21, 1242-1262. [CrossRef]
  79. Chen, R.; Zhang, C.; Xu, B.; Zhu, Y.; Zhao, F.; Han, S.; Yang, G.; Yang, H. Predicting individual apple tree yield using UAV multi-source remote sensing data and ensemble learning. Computers and Electronics in Agriculture 2022, 201. [CrossRef]
  80. Tang, M.; Sadowski, D.L.; Peng, C.; Vougioukas, S.G.; Klever, B.; Khalsa, S.D.S.; Brown, P.H.; Jin, Y. Tree-level almond yield estimation from high resolution aerial imagery with convolutional neural network. Frontiers in Plant Science 2023, 14. [CrossRef]
  81. Bebie, M.; Cavalaris, C.; Kyparissis, A. Assessing Durum Wheat Yield through Sentinel-2 Imagery: A Machine Learning Approach. Remote Sensing 2022, 14. [CrossRef]
  82. Kamir, E.; Waldner, F.; Hochman, Z. Estimating wheat yields in Australia using climate records, satellite image time series and machine learning methods. ISPRS Journal of Photogrammetry and Remote Sensing 2020, 160, 124-135. [CrossRef]
  83. Liu, Y.; Wang, S.; Wang, X.; Chen, B.; Chen, J.; Wang, J.; Huang, M.; Wang, Z.; Ma, L.; Wang, P.; et al. Exploring the superiority of solar-induced chlorophyll fluorescence data in predicting wheat yield using machine learning and deep learning methods. Computers and Electronics in Agriculture 2022, 192. [CrossRef]
  84. Son, N.-T.; Chen, C.-F.; Cheng, Y.-S.; Toscano, P.; Chen, C.-R.; Chen, S.-L.; Tseng, K.-H.; Syu, C.-H.; Guo, H.-Y.; Zhang, Y.-T. Field-scale rice yield prediction from Sentinel-2 monthly image composites using machine learning algorithms. Ecological Informatics 2022, 69. [CrossRef]
  85. Abreu Júnior, C.A.M.d.; Martins, G.D.; Xavier, L.C.M.; Vieira, B.S.; Gallis, R.B.d.A.; Fraga Junior, E.F.; Martins, R.S.; Paes, A.P.B.; Mendonça, R.C.P.; Lima, J.V.d.N. Estimating Coffee Plant Yield Based on Multispectral Images and Machine Learning Models. Agronomy 2022, 12. [CrossRef]
  86. Bhumiphan, N.; Nontapon, J.; Kaewplang, S.; Srihanu, N.; Koedsin, W.; Huete, A. Estimation of Rubber Yield Using Sentinel-2 Satellite Data. Sustainability 2023, 15. [CrossRef]
  87. Filippi, P.; Whelan, B.M.; Vervoort, R.W.; Bishop, T.F.A. Mid-season empirical cotton yield forecasts at fine resolutions using large yield mapping datasets and diverse spatial covariates. Agricultural Systems 2020, 184. [CrossRef]
  88. Desloires, J.; Ienco, D.; Botrel, A. Out-of-year corn yield prediction at field-scale using Sentinel-2 satellite imagery and machine learning methods. Computers and Electronics in Agriculture 2023, 209. [CrossRef]
  89. Liu, F.; Jiang, X.; Wu, Z. Attention Mechanism-Combined LSTM for Grain Yield Prediction in China Using Multi-Source Satellite Imagery. Sustainability 2023, 15. [CrossRef]
  90. Tang, Y.; Qiu, J.; Zhang, Y.; Wu, D.; Cao, Y.; Zhao, K.; Zhu, L. Optimization strategies of fruit detection to overcome the challenge of unstructured background in field orchard environment: a review. Precision Agriculture 2023, 24, 1183-1219. [CrossRef]
  91. Darwin, B.; Dharmaraj, P.; Prince, S.; Popescu, D.E.; Hemanth, D.J. Recognition of Bloom/Yield in Crop Images Using Deep Learning Models for Smart Agriculture: A Review. Agronomy 2021, 11. [CrossRef]
  92. Abbas, A.; Zhang, Z.; Zheng, H.; Alami, M.M.; Alrefaei, A.F.; Abbas, Q.; Naqvi, S.A.H.; Rao, M.J.; Mosa, W.F.A.; Abbas, Q.; et al. Drones in Plant Disease Assessment, Efficient Monitoring, and Detection: A Way Forward to Smart Agriculture. Agronomy-Basel 2023, 13. [CrossRef]
  93. Massah, J.; Asefpour Vakilian, K.; Shabanian, M.; Shariatmadari, S.M. Design, development, and performance evaluation of a robot for yield estimation of kiwifruit. Computers and Electronics in Agriculture 2021, 185. [CrossRef]
  94. Zhang, Y.; Ta, N.; Guo, S.; Chen, Q.; Zhao, L.; Li, F.; Chang, Q. Combining Spectral and Textural Information from UAV RGB Images for Leaf Area Index Monitoring in Kiwifruit Orchard. Remote Sensing 2022, 14. [CrossRef]
  95. Guo, Y.; Wang, H.; Wu, Z.; Wang, S.; Sun, H.; Senthilnath, J.; Wang, J.; Robin Bryant, C.; Fu, Y. Modified Red Blue Vegetation Index for Chlorophyll Estimation and Yield Prediction of Maize from Visible Images Captured by UAV. Sensors 2020, 20. [CrossRef]
  96. Zhang, M.; Zhou, J.; Sudduth, K.A.; Kitchen, N.R. Estimation of maize yield and effects of variable-rate nitrogen application using UAV-based RGB imagery. Biosystems Engineering 2020, 189, 24-35. [CrossRef]
  97. Saddik, A.; Latif, R.; Abualkishik, A.Z.; El Ouardi, A.; Elhoseny, M. Sustainable Yield Prediction in Agricultural Areas Based on Fruit Counting Approach. Sustainability 2023, 15. [CrossRef]
  98. Liu, W.; Li, Y.; Liu, J.; Jiang, J. Estimation of Plant Height and Aboveground Biomass of Toona sinensis under Drought Stress Using RGB-D Imaging. Forests 2021, 12. [CrossRef]
  99. Rodriguez-Sanchez, J.; Li, C.; Paterson, A.H. Cotton Yield Estimation From Aerial Imagery Using Machine Learning Approaches. Frontiers in Plant Science 2022, 13. [CrossRef]
  100. Gong, L.; Yu, M.; Cutsuridis, V.; Kollias, S.; Pearson, S. A Novel Model Fusion Approach for Greenhouse Crop Yield Prediction. Horticulturae 2022, 9. [CrossRef]
  101. Kamilaris, A.; Prenafeta-Boldú, F.X. A review of the use of convolutional neural networks in agriculture. The Journal of Agricultural Science 2018, 156, 312-322. [CrossRef]
  102. Chin, R.; Catal, C.; Kassahun, A. Plant disease detection using drones in precision agriculture. Precision Agriculture 2023, 24, 1663-1682. [CrossRef]
  103. Jiang, Y.; Li, C. Convolutional Neural Networks for Image-Based High-Throughput Plant Phenotyping: A Review. Plant Phenomics 2020, 2020. [CrossRef]
  104. Sanaeifar, A.; Guindo, M.L.; Bakhshipour, A.; Fazayeli, H.; Li, X.; Yang, C. Advancing precision agriculture: The potential of deep learning for cereal plant head detection. Computers and Electronics in Agriculture 2023, 209. [CrossRef]
  105. Buxbaum, N.; Lieth, J.H.; Earles, M. Non-destructive Plant Biomass Monitoring With High Spatio-Temporal Resolution via Proximal RGB-D Imagery and End-to-End Deep Learning. Frontiers in Plant Science 2022, 13. [CrossRef]
  106. Lu, H.; Cao, Z. TasselNetV2+: A Fast Implementation for High-Throughput Plant Counting From High-Resolution RGB Imagery. Frontiers in Plant Science 2020, 11. [CrossRef]
  107. Mota-Delfin, C.; López-Canteñs, G.d.J.; López-Cruz, I.L.; Romantchik-Kriuchkova, E.; Olguín-Rojas, J.C. Detection and Counting of Corn Plants in the Presence of Weeds with Convolutional Neural Networks. Remote Sensing 2022, 14. [CrossRef]
  108. Liu, Y.; Cen, C.; Che, Y.; Ke, R.; Ma, Y.; Ma, Y. Detection of Maize Tassels from UAV RGB Imagery with Faster R-CNN. Remote Sensing 2020, 12. [CrossRef]
  109. Jia, H.; Qu, M.; Wang, G.; Walsh, M.J.; Yao, J.; Guo, H.; Liu, H. Dough-Stage Maize (Zea mays L.) Ear Recognition Based on Multiscale Hierarchical Features and Multifeature Fusion. Mathematical Problems in Engineering 2020, 2020, 1-14. [CrossRef]
  110. Maji, A.K.; Marwaha, S.; Kumar, S.; Arora, A.; Chinnusamy, V.; Islam, S. SlypNet: Spikelet-based yield prediction of wheat using advanced plant phenotyping and computer vision techniques. Frontiers in Plant Science 2022, 13. [CrossRef]
  111. Nevavuori, P.; Narra, N.; Linna, P.; Lipping, T. Crop Yield Prediction Using Multitemporal UAV Data and Spatio-Temporal Deep Learning Models. Remote Sensing 2020, 12. [CrossRef]
  112. Qiu, R.; He, Y.; Zhang, M. Automatic Detection and Counting of Wheat Spikelet Using Semi-Automatic Labeling and Deep Learning. Frontiers in Plant Science 2022, 13. [CrossRef]
  113. Zhaosheng, Y.; Tao, L.; Tianle, Y.; Chengxin, J.; Chengming, S. Rapid Detection of Wheat Ears in Orthophotos From Unmanned Aerial Vehicles in Fields Based on YOLOX. Frontiers in Plant Science 2022, 13. [CrossRef]
  114. Zang, H.; Wang, Y.; Ru, L.; Zhou, M.; Chen, D.; Zhao, Q.; Zhang, J.; Li, G.; Zheng, G. Detection method of wheat spike improved YOLOv5s based on the attention mechanism. Frontiers in Plant Science 2022, 13. [CrossRef]
  115. Zhao, F.; Xu, L.; Lv, L.; Zhang, Y. Wheat Ear Detection Algorithm Based on Improved YOLOv4. Applied Sciences 2022, 12. [CrossRef]
  116. Lin, Z.; Guo, W. Sorghum Panicle Detection and Counting Using Unmanned Aerial System Images and Deep Learning. Frontiers in Plant Science 2020, 11. [CrossRef]
  117. Guo, Y.; Li, S.; Zhang, Z.; Li, Y.; Hu, Z.; Xin, D.; Chen, Q.; Wang, J.; Zhu, R. Automatic and Accurate Calculation of Rice Seed Setting Rate Based on Image Segmentation and Deep Learning. Frontiers in Plant Science 2021, 12. [CrossRef]
  118. Han, J.; Shi, L.; Yang, Q.; Chen, Z.; Yu, J.; Zha, Y. Rice yield estimation using a CNN-based image-driven data assimilation framework. Field Crops Research 2022, 288. [CrossRef]
  119. Zhou, Z.; Song, Z.; Fu, L.; Gao, F.; Li, R.; Cui, Y. Real-time kiwifruit detection in orchard using deep learning on Android™ smartphones for yield estimation. Computers and Electronics in Agriculture 2020, 179. [CrossRef]
  120. Xiong, J.; Liu, Z.; Chen, S.; Liu, B.; Zheng, Z.; Zhong, Z.; Yang, Z.; Peng, H. Visual detection of green mangoes by an unmanned aerial vehicle in orchards based on a deep learning method. Biosystems Engineering 2020, 194, 261-272. [CrossRef]
  121. Santos, T.T.; de Souza, L.L.; dos Santos, A.A.; Avila, S. Grape detection, segmentation, and tracking using deep neural networks and three-dimensional association. Computers and Electronics in Agriculture 2020, 170. [CrossRef]
  122. Shen, L.; Su, J.; He, R.; Song, L.; Huang, R.; Fang, Y.; Song, Y.; Su, B. Real-time tracking and counting of grape clusters in the field based on channel pruning with YOLOv5s. Computers and Electronics in Agriculture 2023, 206. [CrossRef]
  123. Cecotti, H.; Rivera, A.; Farhadloo, M.; Pedroza, M.A. Grape detection with convolutional neural networks. Expert Systems with Applications 2020, 159. [CrossRef]
  124. Palacios, F.; Melo-Pinto, P.; Diago, M.P.; Tardaguila, J. Deep learning and computer vision for assessing the number of actual berries in commercial vineyards. Biosystems Engineering 2022, 218, 175-188. [CrossRef]
  125. Chen, S.; Song, Y.; Su, J.; Fang, Y.; Shen, L.; Mi, Z.; Su, B. Segmentation of field grape bunches via an improved pyramid scene parsing network. International Journal of Agricultural and Biological Engineering 2021, 14, 185-194. [CrossRef]
  126. Olenskyj, A.G.; Sams, B.S.; Fei, Z.; Singh, V.; Raja, P.V.; Bornhorst, G.M.; Earles, J.M. End-to-end deep learning for directly estimating grape yield from ground-based imagery. Computers and Electronics in Agriculture 2022, 198. [CrossRef]
  127. Sozzi, M.; Cantalamessa, S.; Cogato, A.; Kayad, A.; Marinello, F. Automatic Bunch Detection in White Grape Varieties Using YOLOv3, YOLOv4, and YOLOv5 Deep Learning Algorithms. Agronomy 2022, 12. [CrossRef]
  128. Palacios, F.; Bueno, G.; Salido, J.; Diago, M.P.; Hernández, I.; Tardaguila, J. Automated grapevine flower detection and quantification method based on computer vision and deep learning from on-the-go imaging using a mobile sensing platform under field conditions. Computers and Electronics in Agriculture 2020, 178. [CrossRef]
  129. Sun, L.; Hu, G.; Chen, C.; Cai, H.; Li, C.; Zhang, S.; Chen, J. Lightweight Apple Detection in Complex Orchards Using YOLOV5-PRE. Horticulturae 2022, 8. [CrossRef]
  130. Apolo-Apolo, O.E.; Pérez-Ruiz, M.; Martínez-Guanter, J.; Valente, J. A Cloud-Based Environment for Generating Yield Estimation Maps From Apple Orchards Using UAV Imagery and a Deep Learning Technique. Frontiers in Plant Science 2020, 11. [CrossRef]
  131. Murad, N.Y.; Mahmood, T.; Forkan, A.R.M.; Morshed, A.; Jayaraman, P.P.; Siddiqui, M.S. Weed Detection Using Deep Learning: A Systematic Literature Review. Sensors 2023, 23. [CrossRef]
  132. Quan, L.; Li, H.; Li, H.; Jiang, W.; Lou, Z.; Chen, L. Two-Stream Dense Feature Fusion Network Based on RGB-D Data for the Real-Time Prediction of Weed Aboveground Fresh Weight in a Field Environment. Remote Sensing 2021, 13. [CrossRef]
  133. Moon, T.; Kim, D.; Kwon, S.; Ahn, T.I.; Son, J.E. Non-Destructive Monitoring of Crop Fresh Weight and Leaf Area with a Simple Formula and a Convolutional Neural Network. Sensors 2022, 22. [CrossRef]
  134. Lu, W.; Du, R.; Niu, P.; Xing, G.; Luo, H.; Deng, Y.; Shu, L. Soybean Yield Preharvest Prediction Based on Bean Pods and Leaves Image Recognition Using Deep Learning Neural Network Combined With GRNN. Frontiers in Plant Science 2022, 12. [CrossRef]
  135. Riera, L.G.; Carroll, M.E.; Zhang, Z.; Shook, J.M.; Ghosal, S.; Gao, T.; Singh, A.; Bhattacharya, S.; Ganapathysubramanian, B.; Singh, A.K.; et al. Deep Multiview Image Fusion for Soybean Yield Estimation in Breeding Applications. Plant Phenomics 2021, 2021. [CrossRef]
  136. Sandhu, K.; Patil, S.S.; Pumphrey, M.; Carter, A. Multitrait machine- and deep-learning models for genomic selection using spectral information in a wheat breeding program. The Plant Genome 2021, 14. [CrossRef]
  137. Vinson Joshua, S.; Selwin Mich Priyadharson, A.; Kannadasan, R.; Ahmad Khan, A.; Lawanont, W.; Ahmed Khan, F.; Ur Rehman, A.; Junaid Ali, M. Crop Yield Prediction Using Machine Learning Approaches on a Wide Spectrum. Computers, Materials & Continua 2022, 72, 5663-5679. [CrossRef]
  138. Wolanin, A.; Mateo-García, G.; Camps-Valls, G.; Gómez-Chova, L.; Meroni, M.; Duveiller, G.; Liangzhi, Y.; Guanter, L. Estimating and understanding crop yields with explainable deep learning in the Indian Wheat Belt. Environmental Research Letters 2020, 15. [CrossRef]
  139. Gong, L.; Yu, M.; Jiang, S.; Cutsuridis, V.; Pearson, S. Deep Learning Based Prediction on Greenhouse Crop Yield Combined TCN and RNN. Sensors 2021, 21. [CrossRef]
  140. de Oliveira, G.S.; Marcato Junior, J.; Polidoro, C.; Osco, L.P.; Siqueira, H.; Rodrigues, L.; Jank, L.; Barrios, S.; Valle, C.; Simeão, R.; et al. Convolutional Neural Networks to Estimate Dry Matter Yield in a Guineagrass Breeding Program Using UAV Remote Sensing. Sensors 2021, 21. [CrossRef]
  141. Meng, Y.; Xu, M.; Yoon, S.; Jeong, Y.; Park, D.S. Flexible and high quality plant growth prediction with limited data. Frontiers in Plant Science 2022, 13. [CrossRef]
  142. Oikonomidis, A.; Catal, C.; Kassahun, A. Deep learning for crop yield prediction: a systematic literature review. New Zealand Journal of Crop and Horticultural Science 2022, 51, 1-26. [CrossRef]
Table 1. Crop yield calculation methods and comparison of advantages and disadvantages.
| Calculation method | Implementation | Advantages | Disadvantages |
| --- | --- | --- | --- |
| Manual field survey | Manual counting and measurement with simple tools | Low technical threshold, simple operation, broad applicability | Labor-intensive and error-prone; some crops require destructive sampling |
| Meteorological model | Correlate meteorological factors with yield and build statistical or simulation models | Strong regularity; valuable guidance for crop production | Requires large amounts of accumulated historical data; suited only to large-scale planting |
| Growth model | Mine large volumes of growth data to describe the entire crop growth cycle digitally | Mechanistic, highly interpretable, and accurate | Numerous parameters that are difficult to obtain; applicable only to specific varieties and regions, which limits adoption |
| Remote sensing calculation | Acquire multispectral, hyperspectral, and other remote sensing data and build regression models | Captures internal and external crop characteristics that reflect agronomic traits | Applicable only to specific regions, environments, and large-scale planting |
| Image detection | Count targets via image segmentation or object detection | Low cost and high precision | Requires large numbers of sample images; occlusion remains difficult to resolve |
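The remote sensing approach in the table above, building a regression model from spectral data, can be illustrated with a minimal sketch. The example below is not drawn from any of the reviewed studies: it uses synthetic per-plot red and near-infrared reflectances (stand-ins for UAV multispectral measurements), derives the standard NDVI vegetation index, and fits a random forest regressor to a simulated yield response. All variable names and the linear NDVI-yield assumption are illustrative.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)

# Synthetic per-plot band reflectances (stand-ins for UAV multispectral data).
n_plots = 200
red = rng.uniform(0.05, 0.25, n_plots)
nir = rng.uniform(0.30, 0.60, n_plots)

# NDVI = (NIR - Red) / (NIR + Red), a widely used vegetation index.
ndvi = (nir - red) / (nir + red)

# Simulated yield (t/ha): assumed roughly linear in NDVI plus noise,
# purely for illustration.
yield_t_ha = 2.0 + 8.0 * ndvi + rng.normal(0.0, 0.3, n_plots)

# Stack band values and the derived index as model features.
X = np.column_stack([red, nir, ndvi])
X_train, X_test, y_train, y_test = train_test_split(
    X, yield_t_ha, random_state=0
)

model = RandomForestRegressor(n_estimators=200, random_state=0)
model.fit(X_train, y_train)

# Coefficient of determination on held-out plots.
r2 = model.score(X_test, y_test)
```

In the reviewed literature, the feature set typically includes many vegetation indices, texture metrics, and canopy structure variables rather than a single index, and feature screening precedes model fitting.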
Table 2. Yield calculation methods for main crop varieties.
| Classification | Variety | Crop characteristics | Yield calculation indicators |
| --- | --- | --- | --- |
| Food crops | Corn | Important grain crop with strong adaptability, planted in many countries; also an important feed source | Plant count, empty-stalk rate, kernels per ear |
| | Wheat | Food crop with the world's largest sown area, yield, and distribution; high planting density with severe mutual occlusion | Ear count, grains per ear, thousand-grain weight |
| | Rice | One of the world's most important food crops, accounting for over 40% of total global food production | Panicle count, grains per panicle, seed setting rate, thousand-grain weight |
| Economic crops | Cotton | One of the world's important economic crops; key industrial raw material and strategic supply | Cotton plants per unit area, bolls per plant, seed-cotton weight per boll |
| | Soybean | One of the world's important economic crops, widely used for food, feed, and industrial raw materials | Pod count, seeds per plant, hundred-seed weight |
| | Potato | The world's fourth-largest food crop after wheat, corn, and rice | Tuber weight and tuber set rate |
| | Sugarcane | Important economic crop grown globally; key raw material for sugar | Single-stalk weight and stalk count |
| | Sunflower | Important economic and oil crop | Flower head (disk) size and seed count |
| | Tea | Important beverage raw material | Number and density of tender leaves |
| | Apple | The world's third-largest fruit crop | Plants per mu (667 m²), fruits per plant, fruit weight |
| | Grape | Consumed fresh and used for winemaking; high social and economic impact | Plant count, cluster count, berry count |
| | Orange | The world's largest fruit category; a leading industry in many countries | Plants per mu, fruits per plant, fruit weight |
| | Tomato | One of the main greenhouse vegetable crops; also an important raw material for sauces | Trusses per plant, fruit count, fruit weight |
| | Almond | Common food and traditional Chinese medicine raw material | Plants per mu, fruits per plant, fruit weight |
| | Kiwifruit | One of the world's most consumed fruits, renowned as the "King of Fruits" and "World Treasure Fruit" | Plants per mu, fruits per plant, fruit weight |
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
Copyright: This open access article is published under a Creative Commons CC BY 4.0 license, which permits free download, distribution, and reuse, provided that the author and preprint are cited in any reuse.