Preprint
Review


A Comprehensive Review on Detection of Horticulture Fruits Disease Using Machine Learning and Deep Learning Approaches

Submitted: 24 March 2025
Posted: 26 March 2025


Abstract
Identifying diseases in horticulture fruits is crucial for maintaining quality, reducing losses, and enhancing sustainable agricultural practices. Deep learning (DL) and machine learning (ML) techniques have enabled proficient and precise identification of these diseases. This paper consolidates the use of ML and DL approaches in horticultural fruit disease detection, covering innovative models such as convolutional neural networks (CNNs), Vision Transformers, and hybrid systems. It also reviews preprocessing and feature extraction for hyperspectral and multispectral imaging. Publicly available datasets and real-world case studies are analyzed to demonstrate practical implementation and obstacles, including dataset quality, required computational resources, and model interpretability. Furthermore, the paper elaborates on GAN-based data augmentation, deployment of lightweight models on resource-constrained devices, and real-time IoT monitoring. Future directions aim at the utilization of explainable artificial intelligence, scaling up the models, and increasing the sustainability of disease detection systems. The reviewed literature establishes this study as a point of reference for researchers and practitioners and aims to inspire the development of intelligent horticultural disease management systems.
Keywords: 

1. Introduction

1.1. Background and Importance of Disease Detection in Horticulture:

From an international perspective, horticulture makes substantial contributions to food security, economic growth, and nutritional demands. However, pest damage such as worm infestation in fruits and vegetables and other horticultural crop diseases pose serious challenges, leading to declining output, deteriorating quality, and increasing economic losses. The development and implementation of sustainable farming practices require efficient and timely diagnosis of plant ailments to limit the use of pesticides and other chemicals that harm the environment and human health. Traditional methods of detecting plant diseases are often based on static, qualitative visual inspection of the crops conducted manually, which is laborious and prone to mistakes, especially when symptoms are subtle. Scalable and precise automated disease detection systems are therefore required. The development of machine learning (ML) and deep learning (DL) technologies has transformed horticultural disease management. These techniques use sophisticated algorithms and imaging systems to allow earlier, faster, and more accurate diagnosis, classification, and quantification of diseases. Integrating such systems into agricultural practice creates unprecedented opportunities to decrease crop losses and improve productivity within the context of sustainable development. The global adoption of intelligent systems for plant disease detection is an important milestone in equipping modern agriculture to meet the needs of an expanding international population.

1.2. Overview of Machine Learning (ML) and Deep Learning (DL) Applications:

The utilization of deep learning (DL) and machine learning (ML) is critical in overcoming the limitations of traditional approaches to disease detection in horticulture by employing appropriate computational models. ML methods such as Support Vector Machines (SVM), K-means clustering, and decision trees enable classification, prediction, and clustering tasks; for instance, the Naive Bayes classifier has been utilized to detect diseases affecting papaya and apple crops. Convolutional Neural Networks (CNNs) and other deep learning (DL) models offer unparalleled proficiency in image-based disease detection by spotting complex features in plant stems, fruits, and leaves. Architectures such as ResNet, Faster R-CNN, and Vision Transformers have achieved the best results in the identification and classification of diseases. More recent efforts include the detection of tomato diseases using YOLOv3 as well as GAN-based augmentation to boost discrimination in citrus disease models. The performance of hybrid models such as CNN-LSTM is further improved by the addition of contextual or temporal information. Other tasks, including estimating disease severity, pest detection, and fruit grading, are accomplished by combining DL and ML, frequently alongside advanced imaging methods such as hyperspectral and multispectral imaging. These techniques offer an unprecedented level of ease for users while significantly enhancing the precision and effectiveness of disease detection for more productive horticultural activities.
Figure 1. Distribution of references across sections.

1.3. Challenges and Research Gaps:

Though machine learning (ML) and deep learning (DL) have made considerable strides in the detection of horticultural diseases, numerous issues persist. One of the primary challenges is the lack of quality datasets. Most research is based on small, distorted, or synthetic datasets that limit model generalization and performance in practical settings. In addition, the wide variation in how diseases manifest, such as multiple diseases sharing similar visual features, and external factors such as lighting or background noise, can complicate accurate detection and classification. The use of computationally expensive, resource-demanding DL models is also problematic for the edge devices typically used in agriculture, further limiting their scalability and accessibility for real-time use.
Additionally, models trained using deep learning still lack the level of explainability and interpretability required to build trust in their workings. Most systems available today operate as black boxes, which makes it extremely difficult for both researchers and practitioners to trust the results they produce. Though newer technologies such as IoT and hyperspectral imaging hold a lot of promise, their adoption in large-scale, affordable disease detection systems is still nascent, and the cost of the imaging devices and computing infrastructure required remains a challenge for smallholder farmers. Explainable deep learning models designed to operate in real-world scenarios and targeted toward small farmers should therefore be developed; these models need to be lightweight so that they remain economically viable. Advanced technologies complemented with robust datasets will increase the scope of machine learning and deep learning in horticultural disease management.

2. Machine Learning and Deep Learning Techniques for Disease Detection

2.1. Traditional Machine Learning Techniques:

Conventional ML approaches such as Naive Bayes, SVM, k-means clustering, and decision trees have been widely applied to classification, segmentation, and grading tasks in horticultural disease detection. However, like most classical methods, they struggle to capture complex disease patterns, and in real-world use cases deep learning models have proven more effective, as shown by Dubey et al. [1] on apple.

2.1.1. Naive Bayes:

A Naive Bayes classifier is a probabilistic classifier that assigns a sample to a class using Bayes’ Theorem with a strong independence assumption between the features. Naive Bayes is generally used for classification tasks, that is, determining the category into which a sample falls based on the values of its attributes. The classifier computes the conditional probability of every feature given the sample class, assuming that each feature is independent of the others given the class. Treating every feature as conditionally independent given the class reduces the complexity of the classifier and simplifies computation, which improves the practicality of the model.
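As an illustration of this workflow, the following minimal sketch (not drawn from any of the reviewed papers) trains a Gaussian Naive Bayes classifier on hypothetical colour/texture feature vectors extracted from fruit images; the feature dimensions and labels are placeholder assumptions.

# A minimal sketch of Gaussian Naive Bayes on hypothetical fruit-image features.
import numpy as np
from sklearn.naive_bayes import GaussianNB
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

# Hypothetical data: each row is a feature vector (e.g. GLCM texture + colour
# statistics) for one fruit image; labels 0 = healthy, 1 = scab, 2 = rot.
X = np.random.rand(300, 12)          # placeholder features
y = np.random.randint(0, 3, 300)     # placeholder labels

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

clf = GaussianNB()                   # assumes class-conditional feature independence
clf.fit(X_train, y_train)
print("accuracy:", accuracy_score(y_test, clf.predict(X_test)))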
Moraes et al. [2] constructed Yolo-Papaya, a papaya fruit disease detector based on YOLOv7 customized with a Convolutional Block Attention Module (CBAM). The authors addressed fruit disease detection by creating a new dataset of 23,158 images covering nine classes of papaya diseases. The Yolo-Papaya model achieved a mean average precision (mAP) of 86.2% and over 98% accuracy on the healthy-fruit and Phytophthora blight classes. The system proves robust for early-stage detection of fruit diseases and assists with quality control in post-harvest operations. Supriyatna et al. [3] proposed another technique for automating the detection and classification of apple scab, rot, and apple blotch; by combining Naive Bayes with texture-based GLCM feature extraction, the method achieved a classification accuracy of 96.43%, suggesting that traditional feature extraction and Naive Bayes can be effectively merged for improved apple disease diagnosis.
Sari et al. [4] built an expert system incorporating fuzzy reasoning (a triangular fuzzy number membership function) coupled with a Naive Bayes classifier for studying diseases in papaya. The fuzzy Naive Bayes classifier (FNBC) achieved 88% accuracy, which rose to 90% with forward chaining. This development is a useful resource for farmers seeking early and effective intervention without relying on field experts for disease detection. Huddar and Sujatha [5] discuss a system for predicting diseases that infect fruits or tender leaves using a Naive Bayes-based classification algorithm; the system detects, examines, and processes disease states from images in MATLAB. This work demonstrates the value of the algorithm for addressing agricultural disease issues and enhancing both the quantity and quality of food produced.

2.1.2. SVM (Support Vector Machine):

A Support Vector Machine (SVM) is a supervised machine learning algorithm for both classification and regression tasks whose goal is to find the optimal hyperplane that maximizes the margin between classes. By incorporating kernel functions, SVM can perform both linear and non-linear separations effectively. In horticulture, SVM is important for disease diagnosis, crop classification, and quality control: it can analyze images of plants and fruits, identify illnesses, assess produce quality, and support crop identification at scale. With the help of SVM, farmers obtain strong results in precision agriculture, which improves productivity, supports early problem detection, and ensures effective resource allocation; research in this area is still limited, however, and merits consideration in future work.
Figure 2. Workflow of Support Vector Machine.
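As a concrete example of this workflow, the following minimal sketch (an illustration under assumed inputs, not any reviewed system) fits an RBF-kernel SVM to hypothetical feature vectors, such as HOG or GLCM descriptors, extracted from fruit images.

# A minimal SVM sketch on placeholder fruit-image features.
import numpy as np
from sklearn.svm import SVC
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import cross_val_score

X = np.random.rand(400, 20)          # placeholder feature vectors (e.g. HOG or GLCM)
y = np.random.randint(0, 4, 400)     # placeholder labels: healthy, scab, blotch, rot

# Standardize features, then separate classes with an RBF-kernel SVM.
svm = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=10.0, gamma="scale"))
scores = cross_val_score(svm, X, y, cv=5)
print("mean CV accuracy:", scores.mean())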
Yasmeen et al. [6] used ResNet18, Inception V3, and an Improved Genetic Algorithm (ImGA) for feature selection to develop a deep learning framework designed specifically for the detection of diseased citrus, achieving up to 99.5% accuracy and greatly improving classification performance. Only a small number of research works have applied SVM to this task, which opens up a pool of opportunities.
The integration of SVM-based methodologies in fruit disease detection has shown remarkable advancements, enabling effective and accurate classification of diseased fruits. Dubey and Jalal [159] proposed an adaptive approach using K-means clustering for defect segmentation and SVM for multi-class classification, achieving 93% classification accuracy in detecting apple diseases such as scab, blotch, and rot. Similarly, Alhwaiti et al. [160] introduced a hybrid algorithm integrating Histogram of Oriented Gradients (HOG) with SVM to detect late blight in tomatoes, demonstrating significant improvements in precision and recall compared to other methods such as decision trees and KNN.
Alagu et al.,[161] developed a computer vision system for detecting apple fruit diseases using K-Means clustering for feature extraction and Multi-class SVM for classification. This method achieved a classification accuracy of up to 99%, facilitating efficient sorting of healthy and diseased fruits. Dewliya and Singh [162] explored an approach combining shape approximation and histogram chain codes for feature extraction, with SVM achieving 98% accuracy when using a radial basis kernel for apple disease classification.
Lastly, Anu et al. [163] employed a Gray-Level Co-occurrence Matrix (GLCM) for feature extraction and SVM for classification; this system not only identified fruit diseases but also suggested suitable fertilizers for affected plants, enhancing practical applications in agriculture. These studies highlight the robustness of SVM-based models in improving fruit disease detection and classification accuracy. Haider et al. [195] used deep learning (EfficientNet-B0) together with an enhanced Path Finder Algorithm (PFA) based measure for image feature selection in a machine learning-based technique. The apple, grape, and citrus leaf datasets were used to train the model, with both original and noisy data considered for robustness.

2.1.3. K-Means Clustering:

K-means clustering is an unsupervised machine learning method that requires no prior information about the data set and is designed to sort the data into a number of groups or clusters according to their similarities. It works by assigning each data point to the closest centroid and then moving each centroid to the new center of its assigned cluster until the variance within each cluster is minimized. In agriculture, K-means clustering can be used to classify plant species, identify patterns in crop growth, or segment crop images for disease diagnosis. Because K-means clustering groups similar data together, it is valuable for tasks such as crop classification, yield monitoring, and growth stage identification that support better farm management actions.
Figure 3. Pictorial Representation of k-Means.
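The following minimal sketch shows how K-means colour clustering might be used to segment candidate lesion regions in a fruit image, loosely in the spirit of the pixel-clustering approaches reviewed below; the image path, colour space, and cluster count are illustrative assumptions.

# A minimal K-means colour-segmentation sketch for locating candidate defects.
import numpy as np
import cv2
from sklearn.cluster import KMeans

img = cv2.imread("apple.jpg")                      # hypothetical input image
lab = cv2.cvtColor(img, cv2.COLOR_BGR2LAB)         # Lab colour space is common for segmentation
pixels = lab.reshape(-1, 3).astype(np.float32)

kmeans = KMeans(n_clusters=3, n_init=10, random_state=0).fit(pixels)
labels = kmeans.labels_.reshape(img.shape[:2])

# Keep the cluster whose mean colour deviates most from the global mean,
# treating it as the candidate defect/lesion region.
global_mean = pixels.mean(axis=0)
defect_cluster = np.argmax(np.linalg.norm(kmeans.cluster_centers_ - global_mean, axis=1))
mask = (labels == defect_cluster).astype(np.uint8) * 255
cv2.imwrite("defect_mask.png", mask)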
Dharmasiri and Jayalal [7] focus on developing a computer-based system for identifying passion fruit diseases with the aid of image processing. The method includes image capture, pre-processing, segmentation using K-means clustering, feature extraction using Local Binary Patterns, and classification with Support Vector Machines; different color models (RGB, Lab, HSV, Grey) are tested for segmentation effectiveness. The article “Fruit Disease Detection Using Colour, Texture Analysis, and ANN” by Ashwini Awate et al. devises an automated method for detecting and classifying fruit diseases with the help of image processing as a form of smart farming. The proposed system addresses the challenges of manual supervision, which is inefficient and highly unreliable, by capturing images and evaluating external fruit disease markers such as color, shape, texture, and patterns of holes.
Dubey et al. [8] apply K-means color clustering segmentation to apples in two stages. In the first stage, pixels are clustered according to their color and spatial characteristics; next, the image is divided into blocks that are merged into larger regions to increase processing speed, since pixel-wise feature extraction is far more expensive than block-wise processing. In tests on apples, basic segmentation improved, with an inverse relationship observed between defect segmentation precision and segmentation computation time [9]. Rao et al. [167] propose a cucumber disease recognition method using K-means clustering for segmentation, shape and color feature extraction, and sparse representation classification, achieving an 85.7% recognition rate. Lamani et al. [168] detect bacterial blight and scab on pomegranate fruit using image-processing techniques, applying K-means segmentation and classifiers such as PNN, KNN, and SVM, and reporting 99% accuracy with high-resolution imaging. A systematic review of automated methods for the detection and classification of plant diseases and pests covers 48 studies, reporting the main crops, datasets, algorithms used, accuracy rates, and trends in automated disease identification (Francisco et al. [169]). Besides these studies, others have also contributed to enhancements of K-means [170,171].

2.2. Deep learning:

Deep Learning (DL) is a class of machine learning that utilizes multi-layer neural networks (also called deep neural networks) to hierarchically learn features from data. Its effectiveness is evident in problems with large amounts of complex data such as image and speech recognition. In agriculture, DL can be used for plant disease detection, fruit classification, and yield prediction from images or sensor data. This review covers techniques such as Convolutional Neural Networks (CNNs) for image-based disease detection, Recurrent Neural Networks (RNNs) for the analysis of temporal data, and hybrid deep learning approaches that increase classification accuracy. These technologies help automate plant disease detection and diagnosis, enhancing agricultural productivity and sustainability while lowering the demand on human resources.
Figure 4. Structure of Deep-Learning.

2.2.1. CNN (Convolution Neural Network):

Convolutional neural networks (CNNs) are deep learning algorithms used for automatic detection of objects, edges, and patterns in images. In agriculture, CNNs are applied to plant disease detection, fruit classification, and quality control, improving the precision and automation of agricultural tasks.
Figure 5. Workflow of CNN.
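For orientation, the following minimal PyTorch sketch (illustrative only, not any of the reviewed architectures) defines a small CNN that maps fruit images to a handful of disease classes; the layer sizes and number of classes are arbitrary assumptions.

# A minimal CNN sketch for fruit disease classification.
import torch
import torch.nn as nn

class SmallFruitCNN(nn.Module):
    def __init__(self, num_classes: int = 4):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),                     # global average pooling
        )
        self.classifier = nn.Linear(64, num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = self.features(x)
        return self.classifier(x.flatten(1))

model = SmallFruitCNN(num_classes=4)
dummy = torch.randn(8, 3, 224, 224)                      # a batch of RGB images
print(model(dummy).shape)                                 # torch.Size([8, 4])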
The study entitled “A Novel Fusion of Deep Learning and Android Application for Real-Time Mango Fruits Disease Detection” by Vani Ashok and D. S. Vinod [10], published online on 11 August 2020 in the book series “Advances in Intelligent Systems and Computing,” discusses a deep convolutional neural network (CNN) trained on pictures of mangoes with various diseases. The model uses transfer learning with a set of diseased and healthy mangoes captured in controlled environments. Importantly, the prediction model is embedded in a mobile application, allowing offline disease diagnosis without an internet connection. This method achieved a training accuracy of 98.6% and a validation accuracy of 96.4%. Syed-Ab-Rahman et al. [11] formulated a two-stage CNN model for identifying diseases on citrus leaves: the first stage employs a region proposal network to pinpoint potential disease areas, and the second stage classifies the proposed regions into specific disease types. This model achieved an accuracy of 94.37% and an average precision of 95.8%, successfully detecting citrus black spot, citrus bacterial canker, and Huanglongbing. With the aid of this model, farmers can make better decisions to detect and manage crop diseases, increasing agricultural productivity and limiting crop losses.
Uğuz et al. [12] proposed a CNN model called CitrusNet for classifying defective and diseased citrus fruits, trained on 5,149 citrus fruit images collected from Turkey’s Antalya region. In the study, CitrusNet and ResNet50 achieved the best classification performance compared to other models. Further experiments focused on images of Alternaria alternata and Thrips damage, using a dataset of 3,582 images; the YOLOv5 and Mask R-CNN detection models performed best, with an average precision of AP = 0.99. This indicates that such models advance early disease detection and defect classification, directly addressing economic losses relative to market demands. Li et al. [196] used R-CNN for automatic recognition and counting. Khan et al. [13] built a fully automatic segmentation and classification system for diseased fruits using correlation coefficient segmentation and deep CNN features: image contrast was first enhanced, a VGG16 or AlexNet model was used to extract and fuse features, a genetic algorithm optimized feature selection, and a multi-class SVM performed the final classification, achieving 98.60% accuracy on public datasets. The authors argue that this method resolves many precision and classification-accuracy issues faced in the industry.
Janakiramaiah et al. [14] developed a Multi-level Capsule Network (CapsNet) for identifying mango leaf diseases such as anthracnose and powdery mildew. Compared with CNNs, CapsNet handles rotation and spatial variation better; on a mango leaf dataset the system achieved an accuracy of 98.5%, superior to SVM, CNN, and other conventional models. Liu and Wang’s [15] investigation is directed toward the recognition and classification of tomato diseases and pests using an improved YOLOv3 convolutional neural network. The study enhances the existing YOLOv3 model by adding an image pyramid for multi-scale feature detection, improving the system’s accuracy and speed. The system tackles real-time identification of tomato diseases and pests such as early blight and whiteflies; this technique can greatly facilitate early prediction and prevention, helping curb crop losses and reduce pesticide use for more sustainable tomato crop management.
Malathy et al. [16] present a fruit disease detection system that uses Convolutional Neural Networks (CNNs) to classify and identify various fruit diseases from images. The system accepts input images and analyzes the extracted information to distinguish between diseases; it is implemented in Python and achieves 97% accuracy, making it a technique farmers can depend on to improve crop yield and welfare. Dhiman et al. [17] proposed an effective approach for detecting diseases in citrus fruits with a model based on a CNN and LSTM combined with edge computing. This method improves feature extraction with a down-sampling process and feature fusion to facilitate disease recognition on resource-scarce devices. The model achieved accuracy above 97% on a dataset of 2,950 citrus fruit images (97.18% with pruning, 98.25% with pruning and post-quantization). This research addresses accurate and efficient real-time detection of citrus fruit diseases using deep learning on edge devices, which is critical for integrated quality management after harvest.
Dhiman et al. [18] describe PFDI, a fruit disease identification model using the Faster R-CNN architecture with fusion of NIFR and RGB image data in edge computing environments. The model incorporates feature modifications for citrus fruit disease detection, with pruning at different levels of sparsity and post-quantization to improve accuracy and reduce model size. It provides high accuracy (97%) for canker and black spot, showing that these diseases can be detected quickly and reliably even by low-powered agricultural devices; transfer learning and multi-model fusion make real-time disease detection fast, efficient, and easy to scale. Kavya et al. [19] created a model capable of identifying apple diseases such as scab, rot, and blotch using well-known CNN architectures and deep learning. The study uses a dataset of 2,000 pictures, allocating 80% for training and 20% for evaluation, with data augmentation used to balance the dataset. On metrics of accuracy, precision, recall, and F1 score, MobileNet outperformed the rest with an admirable score of 98%. These results shed light on the promise of deep learning models for automated disease detection, as opposed to manual inspection of apple orchards, which is far less efficient. Azgomi et al. [20] suggest a cost-efficient method for detecting apple diseases with the help of image processing and a multi-layer perceptron artificial neural network. The system classifies apples into four categories: scab, bitter rot, black rot, and healthy. Photographs are analyzed to obtain color and texture features; the network is trained with 60% of the dataset, and the rest is kept for testing. A maximum accuracy of 73.7% was obtained with a two-layer network holding eight neurons in each layer. This technique offers a low-budget option for diagnosing apple disease instead of manual inspection, which is slow and exhausting work. Pathmanaban et al. [21] built a deep convolutional neural network (CNN) model to identify injuries and diseases of guava fruit using thermal and digital imaging. The research incorporated 4,129 thermal images of guava fruits at different maturity levels, measuring surface temperatures via thermal imaging. The CNN model performed outstandingly, achieving 99.92% accuracy in classifying the quality of damaged and diseased guava fruits; the thermal images were fundamental in distinguishing immature from mature fruits and provided highly accurate predictions of fruit quality, specifically for dehydrated and diseased samples.
Gupta and Tripathi [22] survey the current state of detection and classification of vegetable and fruit diseases. The review covers the application of machine learning, deep learning, and IoT in smart farming, as well as existing research on disease diagnostics with an emphasis on early-stage recognition, analyzing 99 studies and identifying issues, research gaps, and further prospects. The authors also evaluate the publicly available fruit and vegetable disease datasets and recommend directions for future research, providing substantial information for researchers looking to solve the problem of disease diagnosis and monitoring to enhance the sustainability of the agro-industry.
Pappu Kumar Yadav et al. [23] utilize hyperspectral imaging and a convolutional neural network (CNN) for citrus disease detection, examining eight different peel conditions on citrus fruit and determining optimal PCA-selected spectral bands for accurate disease classification; the model achieved commendable classification accuracy, sensitivity, and specificity. Hasan Basri et al. [24] describe a novel automated system for classifying fruit defects that employs the Faster R-CNN (FRCNN) architecture, classifying defects in mango, lime, and pitaya with performance rates of 88%, 83%, and 99%, respectively; the work combines object detection with video motion-capture algorithms to provide real-time defect assessment, making quality control possible without extra sensor devices. Sultanabanu Kazi and Kazi Kutubuddin [25] present automatic grading and disease detection in pomegranates employing image processing: the developed system uses Python with machine learning technologies (SVM, CNN) to classify fruits by color and size and to diagnose diseases such as Cercospora, fruit rot, and bacterial blight. The method aims at more practical fruit quality evaluation in the face of climate change, a factor that greatly impacts agricultural production and fruit disease; automating fruit grading and disease detection greatly improves post-harvest management and supports fair prices for farmers. Antor Mahamudul Hashan et al. [26] developed a relatively new method that employs a CNN to classify guava fruit diseases, with the primary goal of improving production and reducing economic losses in Asia. The model has eleven layers that extract features from the image dataset and is trained with Nesterov’s Accelerated Gradient (NAG) and a Linearly Scored Categorical Cross-entropy (LSCCE) loss function; overfitting during training is mitigated through data augmentation and noise removal. Among the diseases the system identifies are phytophthora, root, and scab. The system reaches 98% training accuracy and 93% testing accuracy, surpassing AlexNet on the same dataset.
Lalitha and Nageswararao [27] present a deep learning algorithm for apple disease identification that employs a CNN based on a residual neural network. The model assists farmers in accurate real-time disease identification, reducing the labour and expertise needed compared to manual detection. The dataset is relatively small, comprising 505 apple images, of which 385 are used for training and 120 for testing; the model achieves an accuracy of 78.76% and a loss value of 0.6818. Early detection allows timely intervention, helping farmers take pre-emptive steps toward better crop management. In the study by Muhammad et al. [28], a foliar disease classification model uses CNN-derived features from apple leaves and classifies them with an LSTM-based network. The model effectively handled overfitting, class imbalance, and exploding gradients, and on a dataset covering multiple apple leaf diseases achieved 98% accuracy, 95% specificity, 96% sensitivity, and 94% AUC, making it the most precise among its peers in classifying affected and healthy apple leaves and showing high potential for early disease detection and efficient crop management. Bhookya et al. [29] developed a quasi-3D convolutional neural network called SECNN and used it to detect and classify five chili leaf diseases. Chili leaf disease classification was also performed with 12 different pre-trained deep learning models on a custom chili leaf dataset; without augmentation VGG19 scored only 83.54%, while DarkNet53 with augmentation achieved 98.82%. The proposed SECNN model achieved 98.63% accuracy on the augmented dataset and 99.12% without augmentation, outperforming all pre-trained networks. Tested for robustness against the PlantVillage dataset, the model reached 99.28% accuracy for classifying 43 plant leaf disease classes.
Utilizing the PlantVillage dataset, Vishnoi et al. [30] created a lightweight CNN for identifying and classifying the apple leaf diseases scab, black rot, and cedar rust. Through augmentation (shifts, shears, scaling, zoom, flips, and other techniques), the CNN achieved 98% classification accuracy while using less storage and computation than existing models, making it ideal for deployment on handheld devices. The model addresses limited datasets, complex computations, and overfitting; its efficiency and low resource usage make it a practical solution for detecting fungal diseases of apple crops. Barman et al. [31], working with smartphone images, studied the classification of citrus leaf diseases using MobileNet and a Self-Structured Convolutional Neural Network (SSCNN). Both models were built on a custom dataset containing pronounced plant disease features. MobileNet reached 98% training accuracy and 92% validation accuracy, while SSCNN surpassed these results with 98% training accuracy and 99% validation accuracy; SSCNN also required less computation time, making it more efficient and inexpensive for detecting citrus tree diseases. The study emphasizes the affordable and precise disease identification potential of smartphone-based SSCNN diagnosis, offering a solution where farmers have limited access to affordable diagnostic facilities.
Jung et al. [32] used hyperspectral imaging to identify gray mold disease in strawberry leaves with 3D convolutional neural network (CNN) models. The study classified healthy, infected, and asymptomatic regions of interest from 16×16×150 hyperspectral data cubes. The 3D CNN achieved better classification than the 2D CNN, raising accuracy from 0.74 to 0.84 by making better use of spatial and spectral data; for asymptomatic regions, smoothing and spectral derivatives increased accuracy from 0.73 to 0.77. The 3D CNN methodology offers efficient agricultural disease diagnosis by simple means, providing a remarkable way to overcome barriers to timely intervention for effective crop disease management. Iftikhar et al. [33] provide an affordable deep neural network (DNN) approach for classifying apple crop diseases on resource-limited devices, targeting rural areas with poor internet connectivity. Basic CNN, AlexNet, and EfficientNet Lite were found to be the most efficient models for deployment, and the final model was built using transfer learning, with EfficientNet Lite achieving 85% test accuracy. This strategy simultaneously handles computational cost, power consumption, and latency, making it a practical and effective system for farmers; the research highlights the need to improve agricultural practices in underdeveloped areas through on-device mobile solutions.
To identify and categorize diseases affecting apple leaves, Yadav et al. [34] developed a CNN-based approach that achieves 98% accuracy on a dataset of 400 images by using contrast stretching together with fuzzy c-means clustering for preprocessing and segmentation. The combination of these methods makes the technique effective and easy to use with minimal training data for early disease detection.
Mehta et al. [35] developed a CNN-based approach for detecting diseases in vegetables and fruits using augmented datasets and digital twin modeling. It achieves an accuracy of 96.85%-99.39%, better than both InceptionNet and EfficientNetB0, but has difficulty with closely related diseases such as Marssonina leaf blotch. Nirgude et al. [36] focused on an automated pomegranate fruit disease detection system powered by convolutional neural networks and Grad-CAM; the diseases, drawn from real field datasets, include bacterial blight, anthracnose, and Fusarium wilt. ResNet50 performed best among the tested architectures with an accuracy of 98.55%. The Grad-CAM interpretation, which highlights disease-affected regions to guide treatment, increases the system’s usefulness and supports more sustainable pomegranate farming by keeping disease losses in check.
Alhazmi [37] uses the VGG-16 convolutional neural network (CNN) architecture and deep learning techniques to detect watermelon diseases. Initial results gave 0.7576 recall and only 7% true positives, with overfitting a significant challenge; further tuning of hyperparameters such as the weight initializer and optimizer improved performance immensely, culminating in 0.9394 recall and 98% true positives. This highlights the potential of CNN models for disease detection and for giving farmers timely information to enhance agricultural productivity.
In this work, Sajitha et al. [38] present an approach that utilizes a graph convolutional neural network (GCNN) for detecting and classifying banana diseases. The approach aims to separate highly complex image data into disease categories with minimal error; the GCNN processes images using a graph representation that captures relevant spatial and contextual information. The experimental findings improve classification results, illustrating the agricultural potential of GCNNs for real-time management of banana diseases.
In a study, Lanjewar et al. [39] reported the use of CNNs and transfer learning methods to recognize citrus leaf diseases on a Platform as a Service (PaaS) cloud, facilitating mobile-based real-time classification. The five classes identified are black spot, melanose, canker, greening, and healthy leaf. Pre-trained ResNet152V2, InceptionResNetV2, DenseNet121, DenseNet201, and a lightweight CNN model were tested. Data augmentation drastically enhanced performance, with reported accuracy, precision, recall, and F1 scores of 98% to 99%. The lightweight CNN model (1.68 MB, 15 layers) was deployed in the cloud and can be accessed instantly on mobile devices, demonstrating effective and useful scalability for precision agriculture.
Shreshtha et al. [40] present deep learning-based plant disease detection using a CNN applied to 15 classes, 12 diseased and 3 healthy plant leaf classes. The method uses image processing to identify infected areas and examines time complexity; the model reached 88.80% accuracy on sample images, with performance metrics evaluated on a validation set.
Manzoor et al. [41] review the current state of disease detection in apple crops, noting the shift from traditional methods such as physical examination to advanced techniques such as non-destructive light, thermal, and hyperspectral imaging. They consider machine learning and deep learning models that automatically classify and diagnose diseases in high-resolution images with greater accuracy, evaluate the advantages and disadvantages of these approaches, and offer recommendations for further research to improve detection efficiency and effectiveness in apples, addressing significant problems of apple production and export quality. Mitkal et al. [42] proposed an automatic method for pomegranate fruit grading and disease detection using CNN, K-means clustering, and image processing. The system analyzes color, shape, and texture to classify fruits as infected or non-infected, monitors growth stages, and detects illnesses, aiding farmers in maximizing fruit yield and quality through precise classification and timely detection. Ahmad et al. [43] describe a convolutional neural network (CNN) based framework for field-based detection of diseases in plum fruits. Rather than using publicly available datasets, the study was conducted on real-world imagery accounting for angle, scale, and environment. The authors performed intensive data augmentation to create a more challenging dataset and applied transfer learning with scale-sensitive models, notably Inception-v3. Accuracy in identifying healthy and diseased fruits and leaves surpasses 92%, and the model is optimized through parameter quantization for resource-constrained mobile devices, presenting an effective approach to agricultural disease detection.
In this study, Mostafa et al. [44] explain how the guava disease identification problem is addressed with deep convolutional neural networks (DCNNs). Guava is an important fruit in countries such as Pakistan, and the research aims to prevent the spread of disease and the financial losses that can follow misdiagnosis. The authors enhance the data by applying color histogram equalization and unsharp masking, and rotate images to nine different angles for augmentation. Five architectures were analyzed (AlexNet, SqueezeNet, GoogLeNet, ResNet-50, and ResNet-101) on a locally collected guava disease dataset; the best performance was 97.74% accuracy by ResNet-101, validating the value of deep learning in plant disease detection. Dhakate and Ingole [45] propose a neural-network-based approach for identifying pomegranate diseases caused by fungi and bacteria under varying climatic conditions, including bacterial blight, fruit spot, fruit rot, and leaf spot. The system combines image processing, where the k-means algorithm is used for clustering and segmentation, with GLCM texture feature extraction; the extracted features are fed to an artificial neural network that classifies them with 90% accuracy. This automated system offers an accurate and dependable diagnosis of diseases, which can increase the quantity and quality of crops, and is a better option than manual inspection. Suji et al. [46], using Convolutional Neural Networks (CNNs), designed a system that autonomously detects disease in citrus fruit and leaves, serving as a tool for early detection of bacterial infections and helping improve productivity. The CNN model is trained to identify the spread of disease through a multi-stage process using automated imaging devices or drones to capture images; once trained, it enables real-time disease detection, supporting preventative measures, early intervention, and improved crop yield, providing a strong approach to disease management in citriculture.
Kumar et al. [47] developed a CNN model for automatic classification of pear tree diseases, achieving high precision, recall, and F1 scores for diseases such as rust, scab, fire blight, leaf spot, and brown rot. The model distinguishes unhealthy and healthy trees with a weighted average performance of 92.23% and an accuracy of 83.4%. Trained on a dataset of 9,216 images, it provides a reliable tool for early disease detection, assisting farmers and horticulturists in improving crop yield, reducing losses, and enhancing sustainability in pear farming, which in turn supports global food security and effective disease management. Assuncao et al. [48] proposed a decision-support system that employs CNNs to classify healthy peaches and three peach diseases, using transfer learning and data augmentation to deal with insufficient data. The system is mobile, facilitating real-time disease detection with high accuracy (macro-average F1 score of 0.96) and no misclassification, and can assist farmers in the efficient control of fruit diseases under field conditions. Rao et al. [164] examine deep learning methodologies, specifically transfer learning with AlexNet, to detect and identify diseases in grape and mango leaves. A dataset of 8,438 images (from PlantVillage and local sources) is used to train a CNN model, allowing automatic feature extraction and disease classification; grape leaves are classified with 99% accuracy and mango leaves with 89%. An Android app, “JIT CROPFIX,” connects to the system and provides real-time disease identification and decision-making support for farmers; the research positions AI-based systems as a key means of reducing food security risk and improving crop quality. Dhanajayan et al. [165] compare different CNN-based citrus disease detectors on the annotated CCL’20 dataset, identifying the speed-oriented Scaled YOLOv4 P7 and the high detection accuracy of CenterNet2 for early disease recognition. Shin et al. [172] utilize deep learning models to identify powdery mildew from RGB images of strawberry leaves. Among the CNNs tested (ResNet-50, AlexNet, VGG-16, VGG-19, Inception v3, DenseNet), ResNet-50 gave the best accuracy, 98.11%; the dataset was expanded to 11,600 images via augmentation. AlexNet was the fastest at processing images, SqueezeNet-MOD2 was the most memory-efficient for deployment, and ResNet-50 achieved the highest classification accuracy. The work aims to optimize disease detection to reduce fungicide use and reliance on field scouting. Yadav et al. [173] employ CNN-based deep learning and image processing to diagnose bacteriosis in peach leaves, using gray-level slicing in preprocessing and achieving 98.75% accuracy; the method enables early disease detection, with plans to integrate UAVs for real-time monitoring. Momeny et al. [174] describe the fine-tuning of deep CNN models for both the detection of black spot disease and the assessment of ripeness levels in oranges.
Image augmentation is crucial for increasing model robustness by improving the diversity of samples conditional on their labels; a learning-to-augment framework is introduced that employs Bayesian optimization over a dataset of 1,896 images. Among all classifiers, ResNet50 gave the best results, with 99.5% accuracy and a 100% F-measure.
Zhu et al. [175] develop an automated pest identification system for finding and identifying pests on fruits such as longan and lychee using a Raspberry Pi. Autonomous detection using a knowledge graph and a VGG-16 model (94.9% accuracy) minimizes the manual effort needed, leading to improved fruit quality. Ganesh et al. [180] used a deep learning approach (Mask R-CNN) for detecting and segmenting oranges in images; the method enhances fruit segmentation by utilizing multi-modal input data (RGB and HSV images).

2.2.2. RNN (Recurrent Neural Network) & DNN (Deep Neural Network):

Both Recurrent Neural Networks (RNNs) and Deep Neural Networks (DNNs) are essential building blocks of deep learning. RNNs are built for sequential data: they feature feedback loops that retain information over time, making them useful for tasks like time series forecasting, speech recognition, and natural language understanding. DNNs, in contrast, are multi-layered networks that learn intricate representations from a data set, making them effective in more complicated tasks such as image classification and object recognition. Although RNNs and DNNs differ in structure, the former emphasize the temporal aspects of the data while the latter specialize in feature learning and non-sequential data classification.
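To make the sequential setting concrete, the following minimal PyTorch sketch (an illustration under assumed shapes, not the model of any reviewed study) feeds weekly weather features such as rainfall, temperature, and humidity into a GRU that outputs a disease-severity score, in the spirit of the weather-driven fruit-rot forecasting reviewed below.

# A minimal GRU sketch for weather-driven disease-severity regression.
import torch
import torch.nn as nn

class SeverityGRU(nn.Module):
    def __init__(self, n_features: int = 3, hidden: int = 32):
        super().__init__()
        self.gru = nn.GRU(n_features, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)                  # regression: severity score

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        _, h = self.gru(x)                                # h: (1, batch, hidden)
        return self.head(h[-1])

model = SeverityGRU()
weather = torch.randn(16, 12, 3)                          # batch of 12-week weather sequences
print(model(weather).shape)                               # torch.Size([16, 1])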
Krishna et al. [49] develop recurrent neural network models for predicting fruit rot disease in areca nut crops, where the severity and distribution of the disease are affected by weather parameters such as rainfall and temperature. Different RNN variants (vanilla GRU, stacked GRU, bidirectional GRU, and bidirectional LSTM) are combined with weight optimizers such as Adam, Adagrad, RMSprop, and genetic algorithms for fruit rot prediction. The fine-tuned vanilla GRU model with Adam optimization achieved the lowest mean squared error (MSE 0.0009), and the bidirectional LSTM optimized with RMSprop also showed a low root mean square error (RMSE 0.033). The precision of these models was further refined to reduce fungicide use and improve yield prediction accuracy.
Rathnayake et al. [50] focus on strategies that use information technology for disease detection in guava plants, specifically targeting anthracnose, powdery mildew, and bacterial blight. Their review compares visual inspection methods with image processing and machine learning, which offer much better precision and accuracy. The research also puts forward new advances in the field, including thermal and hyperspectral imaging, toward more consistent and precise classification of the diseases. Classification accuracy has greatly improved when machine learning algorithms are trained on large image and spectral datasets. This study highlights the need to adopt systems thinking for effective and sustainable disease management in guava farming.
Dhiman et al. [51] outline a two-step methodology in which a deep learning model first detects disease in citrus fruits and then classifies its severity. The system is trained on a labeled citrus fruit dataset, applying selective search for region proposals and transfer learning with VGGNet for multi-class classification. The model divides symptoms into four severity levels, with accuracies of 99% (high), 98% (medium), 97% (low), and 96% (healthy). This approach offers practical tools for monitoring citrus fruits and ensuring a high yield of premium-quality produce.

2.2.3. Transformers:

Transformers are self-attention-based deep learning models proficient in processing sequential data. Unlike RNNs, transformers can process entire sequences at once, which increases speed. In agriculture, transformers can improve image classification, disease detection, and crop prediction, especially when working with large data sets.
Figure 6. Workflow of Transformers.
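As a sketch of how a Vision Transformer treats an image as a sequence, the following minimal PyTorch example (illustrative, with arbitrary patch size, embedding width, and class count) splits an image into patches, adds a classification token and positional embeddings, and runs a small transformer encoder.

# A minimal Vision-Transformer-style sketch built from torch.nn components.
import torch
import torch.nn as nn

class TinyViT(nn.Module):
    def __init__(self, img_size=224, patch=16, dim=128, depth=4, heads=4, num_classes=5):
        super().__init__()
        n_patches = (img_size // patch) ** 2
        self.patch_embed = nn.Conv2d(3, dim, kernel_size=patch, stride=patch)
        self.cls_token = nn.Parameter(torch.zeros(1, 1, dim))
        self.pos_embed = nn.Parameter(torch.zeros(1, n_patches + 1, dim))
        layer = nn.TransformerEncoderLayer(d_model=dim, nhead=heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=depth)
        self.head = nn.Linear(dim, num_classes)

    def forward(self, x):
        x = self.patch_embed(x).flatten(2).transpose(1, 2)        # (B, N, dim) patch tokens
        cls = self.cls_token.expand(x.size(0), -1, -1)
        x = torch.cat([cls, x], dim=1) + self.pos_embed
        x = self.encoder(x)
        return self.head(x[:, 0])                                  # classify from the CLS token

model = TinyViT()
print(model(torch.randn(2, 3, 224, 224)).shape)                    # torch.Size([2, 5])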
Agha et al. [52] developed machine learning models, including Vision Transformers (ViT), MobileNetV2, and ResNet18, for detection of strawberry diseases and quality. The three models address class imbalance through augmentation to enhance performance. Image monitoring is passive: farmers can monitor crop health and yield through an always-on camera system. The ViT model achieved the highest accuracy, 98.4%, showing great promise for strawberry monitoring. With the help of Vision Transformers (ViTs), Parmar et al. [53] analyze the diagnosis of papaya diseases; the study makes clear that ViTs perform better than CNNs in both classification and computation speed, providing evidence that early disease detection in agriculture is possible with ViTs, increasing agricultural production while making it more environmentally friendly. Zala et al. [54] developed a fruit classifier using deep learning: a customized CNN and a Vision Transformer model were trained on a dataset with 12 fruit classes, and the Vision Transformer achieved an accuracy of 98.05%. The research seeks further development of automated fruit classification and its application as an advanced quality-control measure. Analyzing a Vision Transformer model for the identification of strawberry diseases, Nguyen et al. [55] find that the model performs best in disease classification with a transfer learning approach, achieving an accuracy and F1 score of 0.927; the study indicates scope for strawberry disease detection and quality assessment using Vision Transformer models.
To perform automatic peach tree disease recognition and segmentation, Chen et al. [56] developed a framework that merges images and sensor data using transformer-attention-based semantic segmentation with a tiny-feature attention and alignment head module. Experiments showed the model achieving 94% precision and 92% accuracy, surpassing more traditional models and indicating that the new approach is far more effective. To improve mango leaf disease detection for anthracnose and powdery mildew, Shereesha et al. [57] propose the use of a Compact Convolutional Transformer (CCT); the CCT outperformed VGG16 not only in accuracy but also in F1-score and precision, by 3-5%. This work is important in precision agriculture, facilitating accurate mango disease detection for efficient crop management. Christakakis et al. [58] aim to improve AI-assisted early detection of Botrytis cinerea in Cucurbitaceae crops by employing Vision Transformer (ViT) models; the study refines segmentation models with a Cut-and-Paste method, achieving 92% accuracy and infection detection from 2 days post-inoculation, advancing the detection of agricultural diseases using deep learning techniques.
Tiwari et al. [59] suggest a hybrid Transformer-CNN model for pomegranate fruit disease diagnosis, classifying fruits as healthy, bacterial blight, anthracnose, Cercospora fruit spot, or Alternaria fruit spot. The CNN captures local visual information while the Transformer captures global context, allowing robust classification through complementary feature fusion. Optimized with the Adam algorithm, the model achieved 96.45% accuracy, surpassing architectures such as ResNet, Inception, VGG, and MobileNet and enabling early and accurate identification of lesions in pomegranate fruits. Li et al. [60] introduced ConvViT, a vision transformer integrated with a classical convolutional neural network, to handle environmental variation in kiwifruit disease detection; the model captures both global and local features, obtaining 98.78% accuracy on a kiwifruit dataset and outperforming ResNet and ViT models, and its efficiency makes it suitable for real-world deployment. To tackle dataset imbalance and computational constraints, Liu et al. [61] present NanoSegmenter, a lightweight transformer model for detecting tomato diseases with high precision; employing an inverted bottleneck, sparse attention, and quantization, the model records 98% precision, 97% recall, and an mIoU of 95%, and with an inference speed of 37 frames per second it caters to real-time applications on edge devices, making it highly scalable. Alshammari et al. [62] designed a hybrid deep ensemble learning algorithm that integrates Vision Transformer and CNN models for the classification of olive leaf diseases; the model dealt remarkably well with symptom variability and pathogen diversity, achieving 96% multiclass accuracy and 97% binary classification accuracy, exemplifying the effectiveness of combining CNNs and Transformers for agricultural disease detection. Lastly, Liu et al. [63] proposed FTR-YOLO, a real-time lightweight model for grape disease detection; using a modified VoVNet backbone with additional transformer layers, the model achieves state-of-the-art results of 90.67% mAP at 44 FPS.

2.2.4. Auto Encoders:

Autoencoders are deep learning architectures used in an unsupervised manner for compressing data and reducing noise. An autoencoder is generally composed of an encoder, which compresses the input data into a latent space, and a decoder, which reconstructs the output from it. In horticulture, autoencoders can be applied to anomaly identification in plant images, enhancing disease detection and plant monitoring by capturing subtle changes or anomalies.
Figure 7. Workflow of Auto Encoders.
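For illustration, the following is a minimal sketch of a convolutional autoencoder in PyTorch; the layer widths, input size, and the reconstruction-error anomaly score are illustrative assumptions rather than the configuration of any reviewed study.

```python
import torch
import torch.nn as nn

class ConvAutoencoder(nn.Module):
    """Minimal convolutional autoencoder for leaf/fruit image reconstruction."""
    def __init__(self):
        super().__init__()
        # Encoder: compress a 3x128x128 image into a small latent map
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
        )
        # Decoder: reconstruct the image from the latent representation
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 3, stride=2, padding=1, output_padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, 16, 3, stride=2, padding=1, output_padding=1), nn.ReLU(),
            nn.ConvTranspose2d(16, 3, 3, stride=2, padding=1, output_padding=1), nn.Sigmoid(),
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))

model = ConvAutoencoder()
x = torch.rand(8, 3, 128, 128)           # placeholder batch of healthy-plant images
recon = model(x)
# Anomaly score: per-image reconstruction error; lesions the model has never
# seen tend to reconstruct poorly and therefore score higher.
score = ((x - recon) ** 2).mean(dim=(1, 2, 3))
```

In an anomaly-detection setting, the model is trained on healthy images only, and a threshold on the reconstruction error flags candidate diseased samples.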
Durairaj and Surianarayanan[64] address plant disease detection with deep autoencoders, with the objective of increasing detection accuracy. Pre-processed images of plant leaves are first segmented using the FCM algorithm; features are then extracted from the segmented images using the Discrete Wavelet Transform (DWT) and Gray-Level Co-occurrence Matrix (GLCM), and the autoencoder detects diseases with higher accuracy than conventional methods. By offering a deeper neural network capable of handling complex problems, the method aims to support optimal agriculture and plant disease detection. Pardede et al.,[65] present a plant disease detection approach based on deep convolutional autoencoders trained without supervision. The method requires no hand-crafted features, since the autoencoder learns relevant features from unlabeled data; the autoencoder output then provides the feature representation for SVM classifiers. By achieving better results than conventional autoencoders with larger numbers of hidden layers, this technique offers a cheap and effective way of detecting plant diseases for resource-poor farmers with little or no access to costly laboratory diagnostics. Boukhris et al.,[66] propose an autoencoder-based disease detection network called Autoencoder Latent Space-Neural Network (ALS-NN), which uses an autoencoder for dimensionality reduction and a neural network for classification. On a benchmark of 10 crops from the PlantVillage dataset, the encoder maps images into a latent space, effectively lowering dimensionality and increasing processing speed; the compressed representation is then fed into a neural network that classifies the diseases, achieving 90% accuracy in testing and validation. The hybrid ALS-NN model combines the autoencoder's feature extraction and compression abilities with the neural network's classification abilities, yielding a faster and more efficient design with fewer parameters.
Huddar et al.,[67] suggest a novel method for detecting plant leaf diseases that uses wavelet analysis, color and texture features, autoencoder-based denoising, and SVM classification. Wavelet analysis provides multi-resolution features, while color and texture provide additional distinguishing visual cues; the autoencoder improves the feature representation through noise reduction, and the SVM classifier learns highly complex patterns. On the PlantVillage dataset, this hybrid method markedly increased disease detection performance to 98.60% accuracy, 97.25% precision, 96.89% recall, and a 97.20% F-score, surpassing traditional methods.

2.2.5. GAN (Generative Adversarial Network):

Generative Adversarial Networks (GANs) have transformed many areas of technology. A GAN generates data using two critical parts: a generator, which creates synthetic data such as images, and a discriminator, which judges whether a given sample is real or generated. Despite early training difficulties, GANs are steadily broadening the reach of deep learning models and are now widely used for image synthesis and data augmentation.
Figure 8. Workflow of Generative Adversarial Network.
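To make the generator-discriminator interplay concrete, the sketch below shows a minimal DCGAN-style training step in PyTorch for augmenting lesion-image datasets; the architectures, image size, and hyperparameters are illustrative assumptions, not those of the cited works.

```python
import torch
import torch.nn as nn

# DCGAN-style generator and discriminator; layer widths are illustrative assumptions.
G = nn.Sequential(
    nn.ConvTranspose2d(100, 256, 4, 1, 0), nn.BatchNorm2d(256), nn.ReLU(),
    nn.ConvTranspose2d(256, 128, 4, 2, 1), nn.BatchNorm2d(128), nn.ReLU(),
    nn.ConvTranspose2d(128, 64, 4, 2, 1), nn.BatchNorm2d(64), nn.ReLU(),
    nn.ConvTranspose2d(64, 32, 4, 2, 1), nn.BatchNorm2d(32), nn.ReLU(),
    nn.ConvTranspose2d(32, 3, 4, 2, 1), nn.Tanh(),      # noise -> 3x64x64 synthetic image
)
D = nn.Sequential(
    nn.Conv2d(3, 32, 4, 2, 1), nn.LeakyReLU(0.2),
    nn.Conv2d(32, 64, 4, 2, 1), nn.LeakyReLU(0.2),
    nn.Conv2d(64, 128, 4, 2, 1), nn.LeakyReLU(0.2),
    nn.Conv2d(128, 1, 8, 1, 0), nn.Sigmoid(),            # real/fake probability
)

bce = nn.BCELoss()
opt_g = torch.optim.Adam(G.parameters(), lr=2e-4, betas=(0.5, 0.999))
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4, betas=(0.5, 0.999))

real = torch.rand(16, 3, 64, 64)       # stand-in for a batch of real lesion images
z = torch.randn(16, 100, 1, 1)

# Discriminator step: tell real lesion images from generated ones
fake = G(z).detach()
loss_d = bce(D(real).view(-1), torch.ones(16)) + bce(D(fake).view(-1), torch.zeros(16))
opt_d.zero_grad(); loss_d.backward(); opt_d.step()

# Generator step: try to fool the discriminator
loss_g = bce(D(G(z)).view(-1), torch.ones(16))
opt_g.zero_grad(); loss_g.backward(); opt_g.step()
```

Once trained, images sampled from the generator can be mixed into the under-represented disease classes, which is the augmentation idea behind the DCGAN and CycleGAN studies discussed below.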
Alshammari et al.,[68] address this data gap using CycleGAN, a network designed for image-to-image translation between domains, as the name suggests. Their study demonstrates CycleGAN's effectiveness with a 7% improvement in disease classification as well as faster processing, issues that have traditionally posed great difficulty for practitioners in the field. Zeng et al.[69] discuss the capability of GANs to assist in classifying citrus leaf images infected with HLB: augmenting the dataset with DCGAN-generated images raised the Inception_v3 model's accuracy from 74.38% to 92.60%, showing once again how effective GAN-based augmentation can be. Xiao et al.,[70] suggest an approach to improve the diagnosis of citrus greening disease (Huanglongbing) using the Texture Reconstruction Loss CycleGAN (TRL-GAN) for data augmentation, addressing the limited availability of diseased leaf data. The technique improves variety by employing Mask R-CNN for background cleaning and then transferring pathological traits onto healthy leaves through CycleGAN; the augmented dataset is used to train a ResNeXt101 model, which produced a classification accuracy of 97.45%. TRL-GAN achieved this accuracy by creating synthetic data that depict real pathological features, and the findings are valuable for disease detection and plant phenotype image analysis in citrus, representing a significant advancement.
Ghadekar et al.,[71] devise an early detection method for bacterial blight in pomegranates using image processing techniques. Diagnosis through segmentation, feature extraction, and classification is performed on pomegranate leaf images and is both cost-effective and non-invasive. The study compares ensemble learning with CycleGAN, an unsupervised Generative Adversarial Network that translates image data from one domain to another without paired training images. The results demonstrate that the proposed methods achieve better accuracy and detection rates than methods relying on traditional techniques, helping farmers curb yield losses due to bacterial blight.
To help overcome issues such as an aging workforce and high labor costs, Noguchi et al.,[72] propose an automated mandarin orange sorting system as a step toward smart agriculture. The system uses a CNN to find damage or disease on the oranges and CycleGAN to enlarge the dataset, since defective samples were far fewer than non-defective ones. It automates the visual inspection typically done by farmers by sorting fruit on a conveyor belt, reducing the need for human decision-making, increasing efficiency, and pointing toward responsive smart agriculture systems that are economically feasible for farmers. Chen et al.,[73] designed an automated surface defect identification system for golden diamond pineapple based on CycleGAN and YOLOv4. The model achieved an average precision of 84.86% after adding CycleGAN-generated pseudo-defects, demonstrating how well CycleGAN models defect variety and increases the detector's capability.

2.2.6. Res-Net:

By implementing the concept of residual learning, ResNet improves the training of very deep networks and their performance on image classification and object detection. This is achieved through residual connections, known as skip connections, which allow the network to bypass certain layers so that each block only needs to learn a residual correction to its input. As a result, ResNet is considerably more effective on complex visual recognition tasks than standard image classification networks.
Figure 9. Workflow of Res-Net.
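As a minimal illustration of residual learning and of the transfer-learning pattern used by several of the studies below, the following PyTorch sketch defines a skip-connection block and adapts a pretrained ResNet-50 to a placeholder number of disease classes (assuming torchvision ≥ 0.13 for the weights API).

```python
import torch
import torch.nn as nn
from torchvision import models

class ResidualBlock(nn.Module):
    """Basic residual block: output = F(x) + x (the skip connection)."""
    def __init__(self, channels):
        super().__init__()
        self.conv1 = nn.Conv2d(channels, channels, 3, padding=1)
        self.conv2 = nn.Conv2d(channels, channels, 3, padding=1)
        self.bn1, self.bn2 = nn.BatchNorm2d(channels), nn.BatchNorm2d(channels)

    def forward(self, x):
        out = torch.relu(self.bn1(self.conv1(x)))
        out = self.bn2(self.conv2(out))
        return torch.relu(out + x)   # skip connection bypasses the two conv layers

# Transfer learning: reuse a pretrained ResNet-50 and swap the classifier head
num_classes = 7                      # placeholder, e.g. six diseases plus healthy
net = models.resnet50(weights=models.ResNet50_Weights.DEFAULT)
net.fc = nn.Linear(net.fc.in_features, num_classes)
```

Because the skip connection passes the input through unchanged, gradients flow directly to earlier layers, which is what makes very deep disease classifiers trainable in practice.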
Wu et al.,[74] presented a strawberry plant disease identification system built on a deep learning framework for an embedded electronic system. Their ResNet-9-SE model, featuring Squeeze-and-Excitation (SE) blocks, raises classification accuracy to 99.7% while minimizing parameters and memory. This approach, in which plant images are transmitted to cloud servers for subsequent analysis, is efficient, delivers excellent results, and consumes few resources, qualifying it for use in embedded systems. Senthilkumar et al.,[75] reported an automated disease detection model for citrus fruits using deep learning with Inception ResNet V2 and a Random Forest (RF) classifier. The approach consists of multi-phase processing including pre-processing, Otsu segmentation, feature extraction with Inception ResNet V2, and classification with RF; the model reached 99.13% accuracy on the Citrus Image Gallery dataset, demonstrating its ability to detect and classify citrus diseases for better fruit quality and productivity. Kumar et al.[76] studied ResNet models for classifying tomato leaf diseases with deep convolutional neural networks (CNNs). Their dataset consisted of 6,594 tomato leaf images across six disease classes and one healthy class. ResNet-50 attained 96.35% accuracy with a 50/50 training-testing split, while ResNet-18 was more efficient, with a faster average processing time of 12.46 minutes on a 70:30 split; ResNet-50 thus offers higher accuracy, whereas ResNet-18 is preferable for real-time efficiency. Li and Rai[77] used datasets covering grey spot, black star, and cedar rust to identify and classify apple leaf diseases and healthy leaves. The study compared multiple models, including SVM for image segmentation and ResNet and VGG convolutional networks; an 18-layer ResNet performed best with 98.5% accuracy, suggesting that strong recognition performance can be obtained with a less complex architecture.
Mohinai et al.,[78] introduced an approach for diagnosing plant diseases across fruit and vegetable varieties using the ResNet algorithm. With a PlantVillage dataset containing 38 classes of diseased and healthy leaves across 14 crops, they reached up to 99.2% accuracy in disease detection. This approach is faster and more reliable than traditional manual inspection: farmers can acquire images and readily identify blight, rot, and leaf curl. Alongside the classifier, the model uses image segmentation, allowing more efficient disease detection and benefiting agriculture as a whole. Upadhyay and Saxena[79] developed a specialized deep-learning classifier for tomato leaf disease recognition and classification using an enhanced ResNet-50 model that incorporates sophisticated data augmentation, transfer learning, and deep CNNs, achieving more than 95% accuracy. The tool is reliable for early diagnosis and monitoring in precision agriculture, adapts to varying conditions for more robust application, supports sustainable crop management, and is being extended to a range of other plants for broader agricultural use.

3. Hybrid Models:

In hybrid models, two or more models are combined to increase efficiency and, in turn, accuracy, yielding a model tailored to the task at hand. Multiple machine learning techniques or algorithms are combined so that the weaknesses of one are offset by the strengths of another. Such models improve accuracy and scalability while increasing efficiency in solving intricate problems.
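A recurring hybrid pattern in the studies below is a CNN feature extractor feeding a classical classifier such as an SVM or Random Forest. The following is a hedged sketch of that pattern using a pretrained MobileNetV2 backbone and scikit-learn; the image tensors, labels, and class count are placeholders, and torchvision ≥ 0.13 is assumed.

```python
import numpy as np
import torch
import torch.nn as nn
from torchvision import models
from sklearn.svm import SVC

# 1. A frozen, pretrained CNN acts as the feature extractor
backbone = models.mobilenet_v2(weights=models.MobileNet_V2_Weights.DEFAULT)
backbone.classifier = nn.Identity()          # drop the ImageNet head, keep 1280-d features
backbone.eval()

@torch.no_grad()
def extract_features(images):                # images: tensor of shape (N, 3, 224, 224)
    return backbone(images).numpy()

# 2. A classical classifier (here an SVM) is trained on the CNN features
X_train = extract_features(torch.rand(32, 3, 224, 224))   # placeholder images
y_train = np.random.randint(0, 4, size=32)                  # placeholder disease labels
clf = SVC(kernel="rbf", C=10).fit(X_train, y_train)

# 3. Inference combines both stages
pred = clf.predict(extract_features(torch.rand(4, 3, 224, 224)))
```

The design rationale is that the CNN supplies learned visual features while the classical classifier remains cheap to retrain on small, crop-specific datasets.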
Jadhav et al.[80] report project work on disease detection in banana crops using a deep learning approach, combining image processing, Convolutional Neural Networks (CNNs), and other machine and deep learning algorithms to identify banana plant diseases. The goal is to promote early disease detection, minimizing disease spread and significant economic losses for farmers.
Arora et al.,[81] present a hybrid model combining Recurrent Neural Networks (RNNs) and Random Forests to estimate the severity of Watermelon Mosaic Virus (WMV) in watermelons. It classifies WMV levels across roughly 16,000 images and exceeds 99.8% accuracy with strong recall and F1 scores, performing exceptionally well for early disease detection. This research illustrates novel applications of hybrid deep-learning approaches in agriculture for managing diseases and protecting crops, and such solutions set the bar for further work against agricultural viruses. Pydipati et al.,[82] explored machine vision and AI for diagnosing early citrus diseases such as greasy spot, melanose, and scab. They developed classification algorithms using the Mahalanobis minimum distance method and neural network classifiers, with features extracted via texture analysis using the Color Co-occurrence Method (CCM). Using hue and saturation features, the Mahalanobis classifier achieved over 95% accuracy, while the back-propagation neural network achieved a mean accuracy of 95%; both classifiers work comparably well and can be used in actual citrus grove settings to improve disease management, with real-world deployment planned as the next step. Palei et al.,[83] review machine-vision-based detection of citrus fruit diseases and fruit grading published from 2010 to 2021. The review underscores the debilitating impact of diseases affecting both leaves and fruits, which degrade fruit quality; citrus fruits are graded according to skin tone and size, which are pivotal for proper packaging and grade estimation. The paper highlights approaches for predicting citrus diseases and post-harvest fruit grading, current issues and achievements, and matters needing further research to improve citrus quantity and quality.
To enhance manual monitoring of apple crops, Agarwal et al.,[84] propose an automatic detection system for fruit diseases. K-means clustering is used to segment the images, 13 features are obtained using the gray-level co-occurrence matrix (GLCM), and a multi-class SVM identifies and classifies the disease with a reported accuracy of 98.387%, helping manage yield losses caused by fruit diseases. Doh et al.,[85] describe a machine learning technique for diagnosing citrus fruit diseases based on the physical characteristics of the fruit. The method uses K-means clustering for image segmentation together with an ANN and an SVM; images are assigned to disease categories based on standard phenotypical details such as texture, color, and form. In this study, the ANN performs more accurate diagnosis with less preprocessing and no manually crafted filters, and an SVM is added for classification, giving accurate detection of canker, greasy spot, and black spot with minimal human intervention. Deng et al.,[86] implement pest detection in an improved Faster R-CNN framework, with ResNet-101 as the backbone for feature extraction, multi-size feature map fusion to detect pests of various sizes, and the soft-NMS algorithm to improve detection accuracy under occlusion. With federated learning (FL), the model's average pest detection accuracy is 90.27%, while training cost and communication overhead are reduced.
Banerjee et al.,[87] proposed a methodology integrating CNN and SVM techniques to detect powdery mildew on mango leaves and classify it into four severity levels. The methodology processes 2,559 mango leaf images, uses CNNs to extract features from the preprocessed images, and classifies them with SVMs whose hyperparameters are tuned through k-fold cross-validation. The hybrid model obtained a classification accuracy of 89.29% and a macro-averaged F1 score of 90.10%; as with many machine learning models, performance drops on under-represented classes when the data are imbalanced, which is reflected in a lower micro-averaged F1 score. The method focuses on assessing disease severity, enabling timely measures for effective control of mango diseases and good management practices. Vasumathi et al.,[88] developed a deep learning architecture combining CNN and LSTM to classify pomegranate fruits as normal or abnormal based on color, spots, and shape. The model achieved 98.17% accuracy, 98.65% specificity, 97.77% sensitivity, and a 98.39% F1-score, enabling early disease detection; CNN-based feature extraction replaces inefficient traditional detection methods, and the study shows that deep learning can reduce human error in fruit classification and disease detection through automation. Majid et al.,[89] designed an integrated deep-learning framework for automatic classification of fruit diseases on seven fruit types: apple, cherry, blueberry, grape, peach, citrus, and strawberry. The approach fuses deep features extracted by a pre-trained transfer-learning model with classical texture and color features; the feature vectors are optimized with a harmonic-threshold-based genetic algorithm and then classified with multiple classifiers. On the PlantVillage dataset, this framework outperformed the other techniques with 99% accuracy, addressing problems such as irrelevant features and high dimensionality in fruit disease identification.
Masuda et al.,[90] explore the use of deep learning for non-invasive diagnosis of internal characteristics of persimmon fruit, namely seedlessness. Using 599 images of 'Fuyu' persimmons, they performed binary classification with four convolutional neural networks (CNNs) and attained 89% accuracy with the VGG16 model. The work was supplemented with explainable AI tools such as Grad-CAM, which revealed that seedless fruits can be diagnosed by analyzing the top part of the fruit. This represents a simple diagnostic method for internal fruit structure from an ordinary RGB image, opening further possibilities for agricultural application. Nyarko et al.,[91] describe an enhanced SSD algorithm for detecting tomato fruit diseases, using an SSD with a 15-convolutional-layer backbone to better distinguish disease-free from diseased tomato fruits. The CNN-SSD surpassed well-known models such as ResNet-50, AlexNet, and VGG16, attaining 98.87% accuracy and demonstrating the potential of deep learning for automating plant disease recognition and raising food quality.
Gill et al.,[92] introduce a deep-learning-based approach for fruit recognition and classification that mitigates issues of accuracy and quantitative assessment. They employ Type-II Fuzzy Logic with TLBO optimization alongside CNN, RNN, and LSTM models for fruit image segmentation, recognition, and classification; by avoiding handcrafted features, the model improves both accuracy and computational efficiency, enhancing automatic fruit quality assessment and grading in agriculture. Masoud Shakiba et al.,[93] investigate small, efficient deep-learning models for diagnosing tomato diseases by integrating recurrent neural networks (RNNs) and convolutional neural networks (CNNs). The objective is to promote smart, sustainable agriculture through fast and efficient disease diagnosis: the combined RNN-CNN models give high accuracy while limiting computation, so the method can run in real time on resource-constrained devices, and the lightweight models can easily be deployed in agricultural settings for better disease control and sustainable crop yields. Jaafar Alghazo et al.,[94] developed a machine-learning approach to flag date fruit diseases at early stages using a combination of feature extraction methods and common classifiers. Using Lab color features, statistical features, and Discrete Wavelet Transform (DWT) texture features, it detects diseases in date fruits on a dataset of 871 images categorized as healthy, initial-stage diseased, malnourished, and parasite-infected.
Radhika Gupta et al.,[95] proposed a hybrid CNN-SVM model trained on a dataset of healthy and infected lemon leaves to detect lemon scab, Septoria spot, sooty mould, Armillaria, and Huanglongbing. The disease classification model attained an accuracy of 89.6%, making it a useful tool for early-stage disease identification that can improve lemon yield and quality and advance sustainable agriculture. In a similar spirit, Alekhya et al.,[96] address mango disease detection with a hybrid model combining MobileNetV2 and LSTM: MobileNetV2 handles feature extraction, while dense LSTM layers capture feature dependencies and perform precise classification. Trained on over 1,700 images of the Mango Fruit-DDS dataset, the model tackles the more complex problem of multi-class recognition and, alongside recent advances in agriculture, can contribute significantly to economic growth and development in mango production. Khattak et al.,[97] designed a deep neural network model for identifying and automatically monitoring disease infestation in citrus fruits and leaves, including black spot, canker, scab, greening, and melanose. The model applies a convolutional neural network (CNN) that differentiates healthy from diseased fruits and leaves by extracting relevant features across multiple layers. Experimental results show a test accuracy of 94.55% on the Citrus and PlantVillage datasets; since this exceeds other state-of-the-art models, farmers can use the CNN model for rapid disease detection and decision support.
Mohanapriya et al.,[98] present a machine learning approach for early classification of fruit disease based on texture and skin color. Using KNN, Decision Tree, and Random Forest classifiers, features were extracted with Haralick descriptors, Hu moments, and color histograms. Random Forest was the best classifier with 99% accuracy, followed by KNN at 98.67% and Decision Trees at 97.75%. These smart-farming strategies can increase productivity, lower human effort, and aid early disease detection, providing a viable solution for effective fruit yield management. Similarly, Kamarasan et al.,[99] focused on a hybrid approach to tomato fruit disease detection that combines the Sparrow Search Algorithm (SSA) with deep learning models: SSA optimizes the deep network's hyperparameters, increasing the efficiency of classifying the different diseases affecting tomato fruits. The model improves on classic approaches by recognizing and diagnosing diseases at their early stages in tomatoes, supporting effective yield and quality management.
Seetharaman et al.,[100] tackle banana fruit disease detection and classification with a novel approach that uses Gabor-based binary patterns along with a convolutional recurrent neural network (CRNN). Managing banana crop health and reducing the impact of diseases is crucial, and such techniques support early detection. The method processes images to extract relevant features and delineates granular severity levels of crop disease, proficiently classifying coarse- and fine-grained disease stages in banana fruits on a dataset of 17,312 images. This provides more refined disease marking and helps improve timely responses, enhancing banana crop management. Gang Xue et al.,[101] describe a novel fruit classification method based on a hybrid deep learning framework, Attention-based Densely Connected Convolutional Networks with Convolution Autoencoder (CAE-ADN). A convolutional autoencoder pre-trains on the fruit images without supervision and initializes the weights of an attention-based DenseNet (ADN) that performs feature extraction, so the framework benefits from combining unsupervised and supervised learning. Experiments on two fruit datasets showed the model's effectiveness in increasing the efficiency of fruit sorting, which can substantially reduce costs in fresh supply chains, factories, and supermarkets.
In the detection of apple diseases, Sharma et al.,[102] employed the deep learning methodology illustrated earlier, and in the paper Deep Learning Models for Automated Identification of Guava Fruit Diseases, Tewari et al. implemented shallow transfer learning via convolutional neural networks. Their methods included ViNet, Inception v4, DenseNet, MobileNet, SqueezeNet, and ResNet architectures. Using DenseNet169, the authors achieved a test accuracy of 96.76% on 42,926 processed flower images, while on 10,139 processed images of 102 apple varieties the test accuracy was 99.62%. The developed approaches also surpassed older methods, proving the capability of deep learning as an advanced tool for effective plant disease detection; the guava study was limited to four common diseases: Phytophthora, Red Rust, Scab, and Styler/Root. Deep learning for image classification is a rapidly developing area, particularly within AI-based automated recognition and computer vision. Shanmugapriya Sankaran et al.,[103] developed an integrated system for automated detection of citrus diseases, termed CitrusDiseaseNet, which employs a Convolutional Neural Network (CNN) with a kernel extreme learning machine (KELM) classifier. The CNN extracts relevant features from citrus images, which the KELM classifier then uses for classification. The system produced strong results, with 98.9% accuracy, 98.3% precision, 98% recall, a 97.9% F1 score, and 98.2% specificity, outperforming previous approaches and providing an effective, adaptable solution for rapid detection of diseases in citrus crops. Yang et al.,[104] greatly improve the diagnosis of citrus Huanglongbing (HLB) with a multi-modal feature fusion network that combines soft attention mechanisms for hyperspectral data reduction, bilinear fusion for feature integration, and auxiliary classifiers for improved hyperspectral feature extraction. With this multi-modal approach, the system achieved 97.89% accuracy, easily surpassing single-modality networks using either RGB images or hyperspectral data alone, which achieved 87.98% and 89%, respectively. This shows that fusing multi-source data is effective for detecting citrus HLB and deficiency symptoms.
Archana et al.,[105] introduce a multimodal framework for fruit disease classification and segmentation that uses thermal imaging and a streamlined pipeline to cope with disease variability while saving computational power and time. The model uses entropy-based saliency maps for segmentation, followed by transformation into multiple mask domains for multi-domain feature capture via the Frequency, Z, S, and Gabor transforms. A notable feature is the use of Coot Optimization (CO) for feature selection, improving efficiency by reducing redundancy. Classification is performed by a graph-based Generative Adversarial Network (Graph GAN), which is reported to outperform other approaches, improving accuracy by 9.4% along with precision, recall, and processing time. The approach offers new capabilities for large-scale, real-time disease monitoring in agriculture, which has been difficult with existing systems. Yunong Tian et al.,[106] present a deep learning approach for detecting anthracnose lesions on apples in which the challenge of insufficient image data is addressed through image synthesis: data augmentation with a Cycle-Consistent Adversarial Network (CycleGAN) is combined with feature extraction using YOLO-V3 and DenseNet. The results indicate real-time detection capability and better performance than Faster R-CNN, YOLO-V3, and other benchmark models; as a proof of concept for orchard detection of apple lesions, it marks a significant advance toward robust, automated disease diagnosis in agriculture. Jongwook Si et al.,[107] tackle disease diagnosis in chili pepper crops with an image reconstruction technique that uses GrabCut for background removal and a Generative Adversarial Serial Autoencoder (GASA). The model builds on a GAN structure and concentrates on image generation, so that images deviating from the normal appearance can be classified as diseased; it effectively discriminates normal from diseased plants, yields better image-scoring results than other techniques applied to this problem, and is a step forward in smart farming for detecting chili pepper ailments. Bhavini et al.,[108] propose an image-processing-based methodology to identify apple diseases such as apple scab, rot, and blotch. The system integrates color and texture features, uses a random forest classifier for disease categorization and K-means clustering to segment affected areas, and shows that feature-level fusion can improve classification accuracy.
Nandi et al.,[109] proposed another deep-learning method for detecting diseases of guava fruits and leaves. The study applies five machine learning models and optimizes them using Float16 and dynamic range quantization, enabling their use on end-user devices. Experimental results show that the GoogleNet model achieved 97% accuracy at a model size of 0.143 MB, while EfficientNet achieved 99% accuracy at 4.2 MB, clearly demonstrating the real-world applicability of lightweight, high-accuracy models. Chung et al.,[110] integrate machine learning (ML) and deep learning (DL) techniques to detect plant diseases effectively. Their hybrid framework combines eight EfficientNet (B0-B7) architectures as feature extractors with five ML classifiers (k-Nearest Neighbors, AdaBoost, Random Forest, Logistic Regression, and Stochastic Gradient Boosting), with hyperparameter tuning performed using the Optuna framework. The method is validated on the IARI-TomEBD dataset, achieving up to 100% accuracy, and further tested on the PlantVillage-TomEBD and PlantVillage-BBLS datasets; Friedman statistical tests confirm the superior performance of the EfNet-B3-ADB and EfNet-B3-SGB models, providing a practical solution for early crop disease detection. Patel and Patil[111] proposed a Convolutional Neural Network (CNN) methodology that enhances feature extraction through color, shape, and other important surface metrics for disease detection. The SSDAE-SVM approach, together with dropout, optimizes fruit grading and reduces postharvest losses; the model achieves 97.25% accuracy, 95.62% specificity, a 98.81% F1-score, and a 98.98% recall rate, reducing inefficiencies in fruit disease detection and grading and enhancing overall postharvest quality.

4. Machine Vision and Image Processing:

Machine vision employs cameras and computer systems to capture and evaluate visual data, whereas image processing refers to the algorithms that accomplish the “thinking” behind the analyzed images. In horticulture, it is possible to diagnose plant diseases by interpreting high-resolution images showing discolorations, lesions, and other symptomatic features. It facilitates early diagnosis and precise categorization of diseases, thereby enhancing crop health management and reducing pesticide application.
Figure 10. Workflow of Machine Vision and Image Processing.
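Several of the pipelines reviewed below follow a common recipe: color-based K-means segmentation of lesion regions followed by GLCM texture features fed to a classical classifier. The sketch below illustrates that recipe with OpenCV and scikit-image (the graycomatrix naming assumes scikit-image ≥ 0.19); the file path, cluster count, and darkest-cluster lesion heuristic are illustrative assumptions.

```python
import cv2
import numpy as np
from skimage.feature import graycomatrix, graycoprops

# Hypothetical input: a BGR image of a fruit or leaf (placeholder path)
img = cv2.imread("leaf.jpg")
lab = cv2.cvtColor(img, cv2.COLOR_BGR2LAB)

# 1. K-means clustering on colour to segment candidate lesion regions
pixels = lab.reshape(-1, 3).astype(np.float32)
criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 20, 1.0)
_, labels, centers = cv2.kmeans(pixels, 3, None, criteria, 5, cv2.KMEANS_PP_CENTERS)
mask = labels.reshape(img.shape[:2]) == np.argmin(centers[:, 0])  # darkest cluster as lesion

# 2. GLCM texture descriptors computed over the segmented region
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
roi = np.where(mask, gray, 0).astype(np.uint8)
glcm = graycomatrix(roi, distances=[1], angles=[0], levels=256, symmetric=True, normed=True)
features = [graycoprops(glcm, p)[0, 0]
            for p in ("contrast", "homogeneity", "energy", "correlation")]

# 3. 'features' would then be fed to a classical classifier (SVM, RF, k-NN)
```

The split between hand-crafted features and a separate classifier is what distinguishes these machine-vision pipelines from the end-to-end deep models discussed earlier.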
To detect diseases of passion fruit, Dharmasiri and Jayalal[7] developed an image-processing-based technique for identifying woodiness and scab. Mavridou et al.,[112] presented the latest trends in using machine vision in precision agriculture, specifically in crop farming. Essential topics include fruit grading, counting, yield assessment, plant health analysis (weed, insect, and disease recognition), and farming robotics such as vehicle guidance systems and harvesting robots. The review also stresses how machine vision and machine learning can enhance efficiency and accuracy and mitigate the challenges of traditional manual farming.
Laura et al.,[113] created an AI-based aerial remote sensing system for automated disease and pathogen surveillance on broccoli crops. The system consists of an autonomous, GPS-equipped drone that flies planned grid paths and captures high-resolution, geotagged images every two seconds. The images are processed by a YOLOv5x deep learning algorithm tuned to increase accuracy and reduce false positives in pathogen detection, and the gathered information is stored in a central database to further train and improve the algorithm. This rigorous crop surveillance system delivers precise, real-time updates, enabling timely action and quality control, which is crucial for Ecuador's broccoli export market.
In their research, Mahmud et al.,[114] used machine vision to identify strawberry powdery mildew outbreaks under an artificial cloud lighting system they developed. The system combined a mobile platform with two cameras, custom software, and real-time kinematic GPS, achieving detection recall of 95.26%, precision of 95.45%, and an F-measure of 95.37% under artificial cloud lighting, exceeding the 81.54%, 72%, and 75.95% obtained under natural lighting. The artificial cloud lighting aided disease detection at real-time image acquisition speeds of 1.5 km/h, and the system's 300 mm depth proved suitable for field conditions. For apple fruit disease detection, Abd El-Aziz et al.[115] suggested a machine-learning-based system that uses K-means segmentation and multiple feature extraction techniques, with classification by Support Vector Machine, K-NN, multi-class SVM, and multi-label KNN. The model greatly outperforms existing methods, achieving 97.5% disease classification accuracy and 99% health classification accuracy on apples, supporting the agricultural industry's aim of improving the accuracy and efficacy of fruit disease detection.
Habib et al.,[116] proposed a machine-vision-based agro-medical expert system for detecting papaya diseases, in which images are captured with a mobile or handheld device, preprocessed, segmented into disease regions via K-means clustering, and classified with a Support Vector Machine (SVM). The system achieved over 90% classification accuracy, offering considerable value to remote Bangladeshi farmers with little access to expert diagnosis for managing papaya diseases and reducing post-harvest losses. From another perspective, Firouz and Sardari[117] note that machine vision (MV) integrated with image processing (IP) is a good approach to detecting fruit and vegetable damage caused by insects and rot. Images are first captured under controlled illumination and then processed; low-quality products are recognized as defective and categorized with algorithms, increasing the efficiency of post-harvest inspection, sustaining agricultural output, and improving defect detection and classification for agricultural markets.
Mehra et al.,[118] carried out a comprehensive examination of the effectiveness of computer vision in assessing tomato maturity and fungal lesions using thresholding, k-means segmentation, and fungus analysis. Athiraja et al.,[119] defined automated image processing and machine learning algorithms for early-stage banana disease detection, with color, shape, and texture feature extraction, ANFIS as the learning base, and case-based reasoning (CBR) as the reasoning base. Powdery mildew in strawberry crops was detected with a real-time machine vision system by Mahmud et al.,[120]; the system processed continuous image streams captured at field locations in Nova Scotia using texture analysis based on color co-occurrence matrices and artificial neural network classifiers. The machine vision system performed reliably with high accuracy and very low detection error rates, although some variation in performance was noted due to wind conditions and overlapping leaves in the field.
Hadipour-Rokni et al.,[121] applied a machine vision system using CNNs with transfer learning to detect Mediterranean fruit fly infestation in citrus fruits. They compared transfer-learning models (ResNet-50, GoogleNet, VGG-16, and AlexNet) combined with three optimization methods (SGDm, RMSProp, and Adam); VGG-16 with SGDm achieved 98.33% accuracy at early infestation stages, while AlexNet with SGDm achieved 99.33% at later stages. Such a system can help improve pest control measures in farming. Mia et al.,[122] employed computer vision to detect uncommon indigenous fruits in Bangladesh, which are poorly recognized despite their cultural value. Images of six uncommon local fruits were collected, preprocessed, and segmented for feature extraction, and a Support Vector Machine (SVM) classifier achieved up to 94.79% accuracy. Computer vision is thus used effectively to preserve heritage and raise awareness of nutrition.
An automated system for identifying jackfruit diseases, aimed at aiding farmers in remote regions, was developed by Habib et al.,[123] using computer vision. The system captures images with handheld devices, uses k-means clustering for segmentation and feature extraction, and classifies the images with Support Vector Machines (SVMs), giving an overall classification accuracy of 90%. The system enhances the detection and classification of crop health problems, improving agricultural practice in Bangladesh.
Haque et al.,[124] constructed a CNN-based system for the detection of guava diseases which include anthracnose, fruit rot, and canker along with prescriptive early treatment options. Their system has an accuracy of 95.61%, and it assists in reducing the economic impact on farmers in Bangladesh by helping them receive timely treatment diagnoses and advice on preventive measures. In this study, the model was evaluated on multiple performance metrics including precision, recall, and F1 score.
In their work, Bhange and Hingoliwala[125] described a web-based tool for detecting pomegranate diseases from uploaded fruit images. The system uses image processing for greyscale conversion, color feature extraction such as morphology and the color centroid vector (CCV), and k-means clustering for assessing disease severity. Infected fruits are classified using morphological features with an 82% success rate, the highest among the evaluated feature sets. The tool helps farmers prevent the disease and consequently raise agricultural production.
Nithya et al.,[126] developed a computer vision system for mango defect detection and grading using a deep convolutional neural network. Trained on a publicly available mango database, the system achieved 98% accuracy; it pre-processes input images, extracts pertinent features, and classifies surface defects, obviating the tedious and subjective manual grading process and raising the quality grading of mangoes for export. Another study analyzes the application of computer vision and sparse coding for identifying different pests and diseases, including the apple capsid, codling moth, pear lace bug, and misshapen apple russeting. The sparse coding technique permits efficient recognition of afflicted Golden Delicious and Red Delicious apples using digital image processing methods [127]; accuracy ranged from 67% to 100% depending on the particular combination of pest and disease. Most remarkably, once the dictionary was created, detection took only 0.175 seconds, which shows great promise for rapid initial screening. Habib et al.,[128] provide a thorough review of emerging computer vision and machine learning techniques for fruit and vegetable disease recognition, surveying the methods employed in the agriculture industry with a particular focus on performance metrics and offering methodological insights, active research domains, and possible future work for researchers in agriculture.
In Deshpande et al.,[129] a novel system for automatic grading and detection of bacterial blight in pomegranate is described. Working from captured images, the system identifies and catalogues the disease and performs grading in far less time than manual methods. The captured images undergo enhancement and segmentation to determine the affected parts of leaves and fruits, and the diagnosis of bacterial blight is aided by key features such as yellowed margins around leaf spots and crumbling on fruit spots. The paper argues for incorporating ICT into agricultural disease management for better diagnosis and control, supporting precision agriculture. Kamala et al.,[130] present a machine learning and image processing method for detecting disease in apples grown with hydroponic farming technology, and also describe the productivity and growth-rate advantages of hydroponics over traditional soil-based farming. The developed system concentrates on detecting apple scab and powdery mildew using a Support Vector Classifier (SVC) with 94.1% accuracy; early detection enables better productivity, higher produce quality, and fewer losses, underlining the need for modern technology in contemporary agriculture for efficient monitoring and control of crop diseases. Durmus et al.,[131] argue that machine vision and image processing have considerably improved the identification and classification of plant diseases. In their work, tomato plant diseases are detected using deep learning and RGB cameras to track changes in the leaves; instead of hand-crafted feature extraction, modern deep learning models such as AlexNet and SqueezeNet were trained on the PlantVillage dataset, which contains ten tomato leaf classes including healthy leaves. The models were tested and deployed on the Nvidia Jetson TX1 for real-time disease detection, and the system can be used autonomously on robotic platforms or with greenhouse sensors, demonstrating advanced precision agriculture technology. Ali et al.,[166] propose an automated technique using Delta E color difference, histograms, and texture descriptors to detect and classify citrus diseases, achieving 99.9% accuracy and emphasizing economic benefits for farmers. Wise et al.,[178] use machine-learning-based image color analysis to predict strawberry fruit maturity, length, and weight; based on 1,685 images, their regression models predicted harvest quality up to 22 days in advance, supporting precision farming automation. Hasan et al.,[179] likewise applied color analysis, using color processing and semantic segmentation to achieve a precision of 90%.

5. Applications of Advanced Imaging Techniques:

Hyperspectral and multispectral imaging capture information at different wavelengths for highly detailed analysis. Multispectral cameras work with a limited number of broad spectral bands, while hyperspectral cameras capture hundreds of narrow, continuous bands, offering much more detailed information. In precision agriculture, these tools are invaluable for monitoring crop health, disease, stress, and quality, supporting precise and adequate measures for increasing yield and productivity.
Figure 11. Process Flow of Imaging Techniques.
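Many of the hyperspectral studies cited here reduce hundreds of correlated bands before classifying pixels. The following is a minimal sketch of that idea using PCA and a per-pixel classifier; the cube dimensions and the random labels are placeholders standing in for a real annotated hyperspectral image.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.ensemble import RandomForestClassifier

# Hypothetical hyperspectral cube: 100x100 pixels, 200 spectral bands (e.g. 450-930 nm)
cube = np.random.rand(100, 100, 200)
pixels = cube.reshape(-1, 200)

# 1. PCA compresses the correlated bands into a few informative components
pca = PCA(n_components=10)
scores = pca.fit_transform(pixels)

# 2. A per-pixel classifier separates lesion spectra from healthy peel;
#    labels here are random placeholders standing in for annotated training pixels
labels = np.random.randint(0, 2, size=scores.shape[0])
clf = RandomForestClassifier(n_estimators=100).fit(scores, labels)

# 3. The predicted map highlights candidate lesion pixels across the fruit surface
lesion_map = clf.predict(scores).reshape(100, 100)
```

Band selection or PCA of this kind is what makes per-pixel classification tractable before the spatial-spectral deep models discussed later are applied.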
Qin et al.,[132] applied hyperspectral reflectance imaging to detect citrus canker lesions on grapefruit. From a 450–930 nm imaging spectrometer they derived reflectance images and applied the SID method to separate canker from normal fruit and other peel conditions; with the optimized SID threshold, classification accuracy was 96.2%, and the threshold was slightly adjusted to obtain a zero false-detection rate. The study shows that hyperspectral imaging can be a valid method for specific citrus disease detection and a possible solution for the agriculture industry. Jain et al.,[133] utilized deep learning models to identify guava diseases, namely leaf spot, guava rust, and guava canker. The model combines convolutional neural networks (CNNs) and long short-term memory (LSTM) networks, achieving 95.90% accuracy on a database of 6,000 images and providing a powerful solution for disease classification that can raise guava quality and crop production. Bulanon et al.,[134] captured multispectral images of citrus fruits in their natural environment to support fruit detection for robotic harvesting. The study used 12-bit monochrome cameras fitted with six optical band-pass filters; the images were segmented with pattern recognition methods such as linear discriminant classifiers and neural-network-based segmentation, and principal component analysis (PCA) was used to restrict the selection of fruit-detection wavelengths for fruit masked under the canopy. The study addresses some of the difficulties of fruit detection in commercial robotic harvesting, and another study mentioned below uses similar techniques [24]. Combining hyperspectral imaging (380–1020 nm) with machine learning, Abdulridha et al.,[181] detected and classified powdery mildew (PM) in squash by disease stage; employing unmanned aerial vehicles (UAVs) and radial basis function (RBF) classifiers, they identified significant spectral bands (388 nm, 975 nm) for PM detection. Likewise, Qin et al.,[182] developed a hyperspectral reflectance imaging approach with PCA-based image classification for detecting citrus canker, separating infected and healthy fruits using wavelengths between 400–900 nm. Also focusing on hyperspectral disease detection, Bagheri et al.,[183] performed a similar spectral analysis on pear trees infected with fire blight, identifying several wavelengths that differ between infected and non-infected leaves and presenting this early-monitoring methodology for integration into orchard management practices. To further improve hyperspectral disease detection, Zhao et al.[184] studied the impact of fruit harvest time on citrus canker detection accuracy, showing that image acquisition timing can greatly affect precision and that optimized imaging schedules must be considered to ensure consistency.
Beyond disease detection, hyperspectral imaging has been widely applied to fruit quality assessment, as reviewed by Lorente et al.,[185], with applications in monitoring freshness, detecting internal defects, and post-harvest quality control. The authors of [186] studied early decay detection in fruits, providing evidence that spectral bands related to spoilage can indicate the state of perishable products before any visible symptoms appear, a method that could benefit supply chain management and reduce post-harvest losses. Likewise, Sighicelli et al.,[187] demonstrated the utility of fluorescence and reflectance hyperspectral imaging for post-harvest monitoring of disease in orange fruit, affirming the practicality of these techniques for quantifying microbial penetration and decay processes, which in turn supports storage and distribution strategies.
Deep learning has also advanced disease classification with hyperspectral imaging; for example, Jung et al.,[188] created a 3D convolutional neural network (CNN) model for diagnosing gray mold disease in strawberry leaves, and by integrating spatial and spectral information their model greatly increased classification accuracy. To improve apple quality, Mehl et al.,[189] developed a hyperspectral imaging approach for detecting surface defects and contamination in apples, supporting the use of hyperspectral imaging for contamination screening. Applying image processing to the detection of fungal diseases in crops, Pujari et al.,[190] used object segmentation and classification techniques to accurately recognize diseases on foliage.
Genangeli et al.,[191] proposed a new hyperspectral imaging technique for early detection of moldy core in apples, showing that spectral analysis can separate internally healthy fruit from decayed fruit and thereby addressing early detection and food waste. Qin et al.,[192] instead focused on optimizing multispectral band selection for citrus canker detection, improving spectral efficiency for large-scale agricultural monitoring. UAV-based hyperspectral imaging has further been applied to large-scale disease detection: Pansy and Murali[193] developed machine learning models (MD-FCM and XCS-RBFNN) using aerial hyperspectral imagery to detect diseases and pests affecting mango plants in large orchards. Finally, Fernández et al.,[194] used close-range multispectral images to detect infected cucumber plants, showing that multispectral indices can successfully separate healthy and diseased plants and potentially enhance precision agriculture techniques.

6. Ensemble Learning

Ensemble learning is an advanced machine learning approach that combines multiple models to improve classification accuracy, robustness, and generalization. Instead of relying on a single classifier, ensemble methods combine the predictions of several models to reduce errors and improve performance. This is useful in fruit disease detection, where a model must distinguish between several diseases under different environmental conditions.
Figure 12. Workflow of Ensemble Learning.
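As a minimal illustration of the stacking idea discussed below, the following scikit-learn sketch combines three base classifiers with a logistic-regression meta-learner; the synthetic feature matrix is a placeholder for features extracted from fruit images.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, StackingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC

# Placeholder feature matrix standing in for features extracted from fruit images
X, y = make_classification(n_samples=300, n_features=20, n_classes=3, n_informative=10)

# Stacking: base learners' out-of-fold predictions are combined by a meta-learner
stack = StackingClassifier(
    estimators=[
        ("svm", SVC(probability=True)),
        ("rf", RandomForestClassifier(n_estimators=100)),
        ("knn", KNeighborsClassifier(n_neighbors=5)),
    ],
    final_estimator=LogisticRegression(max_iter=1000),
    cv=5,
)
stack.fit(X, y)
print(stack.score(X, y))
```

Bagging and boosting variants differ only in how the base learners are trained and weighted; the common thread is that combining diverse classifiers tends to reduce misclassification of visually similar diseases.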
Li et al.[197] developed a stacking ensemble learning model for diagnosing fruit tree diseases by combining base classifiers to improve classification accuracy across disease categories. Similarly, Mehmood et al.,[198] combined a deep learning approach and a traditional classifier in an ensemble methodology to improve the automated identification of crop diseases, with a significant impact on precision agriculture. Yousuf and Khan[199] used an ensemble classifier to detect plant disease and compared various ensemble techniques such as bagging and boosting. Javidan et al.,[200] used weighted ensemble learning to integrate tomato leaf images and classify diseases with image processing and prediction algorithms, achieving higher accuracy than the separate models. Likewise, Nader et al.[201] applied transfer learning and ensemble learning to classify grape leaf diseases, demonstrating how deep feature-extraction models can be improved with ensemble strategies. In general, integrating popular classification methods reduces the rate of misclassification and thus improves the reliability of fruit disease diagnosis. Ensemble techniques such as stacking, bagging, and boosting have been implemented for most fruit crops, and ensemble learning will continue to play an essential role in precision agriculture and automated disease detection.

7. Data and Case Studies-Based Research

Some research papers also contribute datasets and case studies, which provide further detail on the domain in question; however, relatively few such case studies have been carried out so far.
The citrus fruits and leaves dataset covers lemon, orange, and grapefruit and includes images of healthy plants as well as of diseases such as Black Spot, Canker, Scab, Greening, and Melanose. The dataset was compiled in collaboration with the Citrus Research Center from the Sargodha region of Pakistan [135]. It is a valuable resource for researchers working on machine learning and computer vision algorithms, as it supports early detection of plant diseases and can reduce the economic burden on farmers (Rauf et al., 2019). The dataset is hosted on Mendeley and is publicly accessible.
The “Tomato-Village” dataset was created for tomato disease detection with the intention of filling gaps left by existing datasets such as PlantVillage, which lack photographs from real field scenarios. It captures the most common tomato diseases found in the Jodhpur and Jaipur districts of Rajasthan, India, including leaf miner, spotted wilt virus, and nutritional deficiency. The dataset has three variants: multiclass, multilabel, and object detection. The authors also report preliminary experiments with several CNN architectures on this dataset and provide baseline results; the dataset and source code are publicly available on GitHub (Gehlot et al. [137]). Mahendran and Seetharaman [136] designed a new feature extraction and classification method for detecting disease in banana fruit using specially designed neural networks. The methodology starts with image pre-processing, followed by segmentation using a boundary optimization technique; features are then extracted with the GLCM and classified with a CDNN. This technique is beneficial for disease detection in bananas, which is important for mitigating the severe postharvest losses caused by viruses, fungi, and bacteria, and it improves both the precision and the speed of identifying infected fruit. Saleem et al. [176] introduce the NZDLPlantDisease-v1 dataset for detecting diseases in New Zealand’s key horticultural crops; an optimized Region-Based Fully Convolutional Network (RFCN) model achieved 93.80% mean average precision, outperforming the default models by 19.33%. Another study presents CitDet, a dataset for detecting citrus fruit and estimating yield in Huanglongbing (HLB)-affected orchards; it includes 579 high-resolution images with 32,000 annotated fruit instances and enhances object detection methods, contributing to yield estimation and disease impact assessment (James et al. [177]).
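For readers who want to reproduce the kind of CNN baselines reported on datasets such as Tomato-Village or the citrus dataset, the following is a minimal, hypothetical PyTorch/torchvision sketch that fine-tunes a pretrained ResNet-18; it uses FakeData as a runnable stand-in, and the class count and the commented ImageFolder path are placeholder assumptions rather than details from the cited papers.

```python
import torch
import torch.nn as nn
from torch.utils.data import DataLoader
from torchvision import datasets, models, transforms

NUM_CLASSES = 5   # hypothetical number of disease classes

preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

# Stand-in data so the sketch runs; with a real dataset one would instead use
# datasets.ImageFolder("path/to/train", transform=preprocess) with one folder per class.
dataset = datasets.FakeData(size=128, image_size=(3, 224, 224),
                            num_classes=NUM_CLASSES, transform=transforms.ToTensor())
loader = DataLoader(dataset, batch_size=32, shuffle=True)

# Start from ImageNet weights and replace the final layer for our disease classes.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, NUM_CLASSES)

optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()

model.train()
for epoch in range(2):                       # very short run, for illustration only
    for images, labels in loader:
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()
    print(f"epoch {epoch}: last batch loss = {loss.item():.3f}")
```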

8. Future Trends and Research Directions:

8.1. The Convergence of IoT with Edge Computing:

IoT components such as sensors and cameras can be attached to edge computing systems to enable real-time disease detection in agricultural fields. Because inference is executed on the edge devices themselves, data does not have to be sent to the cloud for processing, which greatly reduces latency and bandwidth consumption. Combining IoT with edge computing therefore allows rapid and efficient responses to plant diseases and increases the autonomy of agricultural monitoring systems.
Continued efforts will concentrate on disease detection with low-power yet high-performing edge devices that can be incorporated into IoT systems for real-time monitoring and swift decision-making. Several works already point in this direction: Albanese et al. [138] proposed automated pest detection with DNNs running on edge hardware, and Rumy et al. [139] proposed an IoT-based system with edge intelligence for rice leaf disease detection. Since the wide spread of disease is a major cause of losses in both quality and quantity, integrating such models with IoT makes it possible to automate monitoring across large field areas. The work of Tsai et al. [140] aligns with this direction, and further contributions have been made by Kalbande et al. [141] and Khan et al. [142].
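As a rough illustration of this edge-plus-IoT pattern (not a reproduction of any cited system), the sketch below runs a quantized TensorFlow Lite classifier on-device and publishes only the predicted label over MQTT; the model file, label set, topic name, and broker address are hypothetical placeholders.

```python
import numpy as np
import paho.mqtt.client as mqtt
from tflite_runtime.interpreter import Interpreter  # lightweight runtime for edge devices

MODEL_PATH = "fruit_disease_int8.tflite"   # hypothetical quantized model file
BROKER, TOPIC = "farm-gateway.local", "orchard/block3/disease"  # hypothetical IoT endpoint
LABELS = ["healthy", "canker", "black_spot"]                    # hypothetical classes

interpreter = Interpreter(model_path=MODEL_PATH)
interpreter.allocate_tensors()
inp = interpreter.get_input_details()[0]
out = interpreter.get_output_details()[0]

def classify(frame: np.ndarray) -> str:
    """Run one camera frame (already resized to the model's input shape) on-device."""
    interpreter.set_tensor(inp["index"], frame.astype(inp["dtype"]))
    interpreter.invoke()
    scores = interpreter.get_tensor(out["index"])[0]
    return LABELS[int(np.argmax(scores))]

client = mqtt.Client()
client.connect(BROKER, 1883)

# In a real deployment this frame would come from the camera attached to the node.
dummy_frame = np.zeros(inp["shape"], dtype=np.uint8)
label = classify(dummy_frame)
client.publish(TOPIC, label)   # only the small prediction leaves the device, not the image
```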

8.2. Resource-Constrained Environments’ Lightweight Models:

Lightweight machine learning and deep learning models that are less computationally intensive are required in low-resource settings such as small farms and rural regions. Research will target compact, low-power models that are efficient enough to run on mobile devices and single-board computers such as the Raspberry Pi.
To remove the resource-intensive components of disease detection models while maintaining precision, methods such as model pruning, quantization, and knowledge distillation will be employed. Lightweight machine learning and deep learning models have already shown great potential in resource-constrained environments such as small farms and rural areas. As emphasized by Kim et al. [143] and Liang et al. [144], pruning, quantization, and knowledge distillation are key to optimizing these models so that they retain accuracy while gaining efficiency. For example, Li et al. [145] proposed PMVT, a lightweight vision transformer for plant disease identification on mobile devices, which can facilitate monitoring in the field. Guan et al. [147] proposed a lightweight deep learning approach for detecting plant pests and diseases as a more efficient alternative for smart agriculture, and Borhani et al. [146] implemented a lightweight, low-power vision transformer for automatic plant disease classification.
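The sketch below shows, in a generic and hedged way, how the pruning and post-training quantization steps discussed in [143,144] can be applied to an off-the-shelf CNN in PyTorch; the 30% sparsity level and the choice of ResNet-18 are illustrative assumptions, not values taken from the cited papers.

```python
import torch
import torch.nn as nn
import torch.nn.utils.prune as prune
from torchvision import models

model = models.resnet18(weights=None)   # placeholder classifier; pretrained weights omitted

# 1) Unstructured L1 pruning: zero out the 30% smallest-magnitude weights
#    of every convolutional layer (sparsity level chosen only for illustration).
for module in model.modules():
    if isinstance(module, nn.Conv2d):
        prune.l1_unstructured(module, name="weight", amount=0.3)
        prune.remove(module, "weight")   # make the pruning permanent

# 2) Post-training dynamic quantization of the fully connected layers to int8,
#    shrinking the model and speeding up CPU inference on small devices.
quantized = torch.quantization.quantize_dynamic(
    model, {nn.Linear}, dtype=torch.qint8
)

# The compressed model is used exactly like the original one.
dummy = torch.randn(1, 3, 224, 224)
quantized.eval()
with torch.no_grad():
    print(quantized(dummy).shape)   # torch.Size([1, 1000])
```

Knowledge distillation would add a training loop in which this small model is fitted to the soft predictions of a larger teacher network.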

8.3. Sustainable Agriculture Applications:

In the future, as sustainability becomes an increasingly important concern, the integration of disease detection models into sustainable farming practices will be a major focus of research. This entails precision agriculture that reduces the amount of pesticide applied, enhances water-use efficiency, and limits the overuse of resources. Eco-friendly farming practices can be established by integrating AI into disease diagnostics, which supports early diagnosis and limits the need for chemical interventions.
Research will also target the application of AI to crop management systems that promote sustainable and regenerative agriculture. The alliance between AI-powered disease detection models and sustainable farming techniques will be instrumental in driving agriculture forward as it meets the increasing demands of sustainability. This involves precision agriculture technologies that improve resource utilization, reduce pesticide use, and increase water-use efficiency. AI systems help detect disease at early stages, thereby reducing the need for chemical interventions and promoting eco-friendly farming practices [148]. Such systems, acting through predictive analytics built on IoT and machine learning, are crucial for safeguarding crop health in the future, especially in view of the threat posed by climate change [149].
Moreover, AI advances in smart farming, such as intelligent disease detection models for crops like potato, illustrate systematic approaches to improving crop monitoring accuracy and mitigating the drawbacks of existing technologies [150]. Deep learning and imaging-based systems for plant disease detection represent another expanding research area that enables rapid, digitized farming solutions for large-scale operations, as described in [151]. Existing initiatives such as Plant Guard exemplify the application of AI in sustainable agriculture, encompassing advanced agricultural advisory services and disease detection systems designed for real-time diagnosis and resource-efficient intervention [152]. By incorporating such innovations, these applications promote the goals of regenerative agriculture, balancing food security with sustainable land and resource management.

8.4. Explainable AI for Better Decision-Making:

One important gap in deploying AI technologies in agriculture is the limited trust placed in deep learning algorithms that behave as black boxes. Future work will focus on developing agriculture-specific explainable AI (XAI) systems with interpretable decision-making processes. Such systems will increase farmers’ confidence in AI tools by explaining the rationale behind each prediction provided by the model.
Using XAI alongside disease detection models will improve the rationality of farmers’ decision-making by providing useful information and justifiable explanations, especially in complicated agricultural cases that need quick resolution. In recent years, the role of explainable AI in agricultural disease detection systems has become pivotal for understanding model decisions. Sagar et al. [154] surveyed leaf-based plant disease detection and showed that XAI improves the interpretability of AI models and helps farmers understand predictions. Mahmud et al. [155] examined model interpretability and XAI in tomato leaf disease detection, showing how interpreting the factors that influence a model’s decisions improves transparency and supports better decision-making.
Khandaker et al. [156] designed a deep learning framework that incorporates explainable artificial intelligence, comparing different CNN architectures and producing clear visual explanations of the model’s decision-making in pumpkin leaf disease detection. Similarly, Ashoka et al. [157] developed an XAI-based framework for banana disease detection that enhanced diagnostic accuracy and provided actionable, interpretable diagnostics. Furthermore, Khan et al. [158] proposed an explainable AI model called Enhanced Attention-CNN (EA-CNN), capable of classifying fruits and vegetables while justifying its classifications.
These results highlight the need to incorporate XAI into agricultural disease detection systems, which would help farmers trust and adopt AI-based tools in scenarios where they must make complex decisions.
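One widely used XAI technique for the CNN-based detectors discussed above is Grad-CAM, which highlights the image regions that drove a prediction. The following is a minimal, generic PyTorch sketch on a pretrained ResNet-18; the model choice and the random input stand in for the setups used in [154,155,156,157,158] and are only illustrative.

```python
import torch
import torch.nn.functional as F
from torchvision import models

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT).eval()

# Capture activations and gradients of the last convolutional block.
acts, grads = {}, {}
def fwd_hook(_, __, output): acts["feat"] = output
def bwd_hook(_, grad_in, grad_out): grads["feat"] = grad_out[0]
layer = model.layer4
layer.register_forward_hook(fwd_hook)
layer.register_full_backward_hook(bwd_hook)

image = torch.randn(1, 3, 224, 224)          # placeholder for a preprocessed leaf/fruit image
logits = model(image)
cls = int(logits.argmax())
model.zero_grad()
logits[0, cls].backward()                     # gradient of the predicted class score

# Grad-CAM: weight each feature map by its average gradient, sum, then ReLU.
weights = grads["feat"].mean(dim=(2, 3), keepdim=True)               # (1, C, 1, 1)
cam = F.relu((weights * acts["feat"]).sum(dim=1, keepdim=True))
cam = F.interpolate(cam, size=image.shape[-2:], mode="bilinear", align_corners=False)
cam = (cam - cam.min()) / (cam.max() - cam.min() + 1e-8)              # normalize to [0, 1]
print(cam.shape)   # torch.Size([1, 1, 224, 224]) heat map overlaid on the input image
```

The resulting heat map can be overlaid on the original image so that an agronomist can check whether the model attends to lesions rather than background.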
Table 1. Challenges & Impacts.
Challenge | Impact on detection | Proposed solutions | References
Imbalanced datasets | Poor generalization of models | GAN-based data augmentation | [Zeng et al.]
Model interpretability | Lack of trust in predictions | Incorporation of explainable AI techniques | [Dhiman et al.]
High computational costs | Limited deployment on edge devices | Development of lightweight models | [Iftikhar et al.]
Similar symptoms across diseases | Misclassification of diseases | Advanced imaging and hybrid models | [Li et al.]
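To illustrate the GAN-based augmentation listed in the table (in the spirit of [Zeng et al.], though not that paper's architecture), the following is a minimal, hypothetical PyTorch sketch that samples synthetic minority-class images from a small DCGAN-style generator so that an imbalanced training set can be rebalanced; the generator here is untrained and its dimensions are arbitrary assumptions.

```python
import torch
import torch.nn as nn

class TinyGenerator(nn.Module):
    """Toy DCGAN-style generator mapping a latent vector to a 64x64 RGB image."""
    def __init__(self, latent_dim: int = 100):
        super().__init__()
        self.net = nn.Sequential(
            nn.ConvTranspose2d(latent_dim, 128, 4, 1, 0), nn.BatchNorm2d(128), nn.ReLU(),  # 4x4
            nn.ConvTranspose2d(128, 64, 4, 2, 1), nn.BatchNorm2d(64), nn.ReLU(),           # 8x8
            nn.ConvTranspose2d(64, 32, 4, 2, 1), nn.BatchNorm2d(32), nn.ReLU(),            # 16x16
            nn.ConvTranspose2d(32, 16, 4, 2, 1), nn.BatchNorm2d(16), nn.ReLU(),            # 32x32
            nn.ConvTranspose2d(16, 3, 4, 2, 1), nn.Tanh(),                                  # 64x64
        )

    def forward(self, z: torch.Tensor) -> torch.Tensor:
        return self.net(z.view(z.size(0), -1, 1, 1))

# Assume (hypothetically) a generator already trained on images of the rare disease class.
gen = TinyGenerator().eval()
with torch.no_grad():
    z = torch.randn(64, 100)                 # 64 latent samples
    synthetic = gen(z)                       # (64, 3, 64, 64) synthetic minority-class images

# These synthetic images would be appended to the real minority-class training data
# so the downstream classifier sees a more balanced class distribution.
print(synthetic.shape)
```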

9. Conclusion

This review has covered the application of several machine learning and deep learning techniques for disease detection and classification in horticultural fruits and vegetables. It also analysed developments in imaging techniques, especially multispectral and hyperspectral imaging, and in machine learning models, especially convolutional neural networks and support vector machines, that have been used to improve the accuracy of disease detection. The papers discussed demonstrate the growing interest in hybrid image processing methodologies that combine AI models to improve the accuracy and scalability of disease detection systems in agriculture. Furthermore, the review highlighted the necessity of dataset diversity and the obstacles to collecting data for real-world problems.
The analysis of the reported performance metrics, including accuracy, precision, recall, and F1-score, showed a clear tendency towards deep learning models when high accuracy is the goal, although computational cost and model interpretability remain open issues. We also reviewed how readily these models can be adopted in resource-limited environments, which is extremely important for low-cost, broad-based adoption on small-scale farms.
Looking ahead, the combination of IoT and edge computing with AI disease detection systems, together with the creation of explainable AI models, will be fundamental to the progression of agriculture. These technologies will strengthen disease detection while further promoting sustainability by reducing pesticide use and improving crop management practices.
To sum up, current technologies are tremendously promising, but additional effort must be directed towards improving model precision, model explainability, and real-time deployment in agricultural settings. This will usher in a new era of adaptive agricultural intelligence for effective disease management.

References

  1. Dubey, S.R.; Jalal, A.S. Detection and classification of apple fruit diseases using complete local binary patterns. IEEE 2012, 978-0-7695-4872-2/12.
  2. de Moraes, J.L.; de Oliveira Neto, J.; Badue, C.; Oliveira-Santos, T.; de Souza, A.F. Yolo-Papaya: A Papaya Fruit Disease Detector and Classifier Using CNNs and Convolutional Block Attention Modules. Electronics 2023, 12, 2202. [Google Scholar] [CrossRef]
  3. Sumanto, Sumanto & Sugiarti, Yuni & Supriyatna, Adi & Carolina, Irmawati & Amin, Ruhul & Yani, Ahmad. (2021). Model Naïve Bayes Classifiers For Detection Apple Diseases. 1-4. [CrossRef]
  4. Sari, Wahyuni & Kurniawati, Yulia Ery & Santosa, Paulus. (2020). Papaya Disease Detection Using Fuzzy Naïve Bayes Classifier. 42-47. [CrossRef]
  5. Huddar, P.D.; Sujatha, S. Fruits and leaf disease detection using Naive Bayes algorithm. International Journal of Engineering Research in Electronics and Communication Engineering (IJERECE) 2017, 4, 89. [Google Scholar]
  6. Yasmeen, U., Khan, M. A., Tariq, U., Khan, J. A., Yar, M. A. E., Hanif, C. A., ... & Nam, Y. (2021). Citrus diseases recognition using deep improved genetic algorithm. Comput. Mater. Contin, 71(2).
  7. Dharmasiri, S.B.D.H.; Jayalal, S. (2019, March). Passion fruit disease detection using image processing. In 2019 International Research Conference on Smart Computing and Systems Engineering (SCSE) (pp. 126–133). IEEE.
  8. Awate, A.; Deshmankar, D.; Amrutkar, G. Bagul and S. Sonavane, “Fruit disease detection using color, texture analysis and ANN,” 2015 International Conference on Green Computing and Internet of Things (ICGCIoT), Greater Noida, India, 2015, pp. 970–975. [CrossRef]
  9. Dubey, S.R.; Dixit, P.; Singh, N.; Gupta, J.P. Infected fruit part detection using K-means clustering segmentation technique. International Journal of Artificial Intelligence and Interactive Multimedia 2013, 2. [Google Scholar]
  10. Ashok, V.; Vinod, D.S. (2021). A Novel Fusion of Deep Learning and Android Application for Real-Time Mango Fruits Disease Detection. In: Satapathy, S.; Bhateja, V.; Janakiramaiah, B.; Chen, YW. (eds) Intelligent System Design. Advances in Intelligent Systems and Computing, vol 1171. Springer, Singapore. [CrossRef]
  11. Syed-Ab-Rahman, S.F.; Hesamian, M.H. & Prasad, M. Citrus disease detection and classification using end-to-end anchor-based deep learning model. Appl Intell 2022, 52, 927–938. [Google Scholar] [CrossRef]
  12. Uğuz, Sinan & Şikaroğlu, Gülhan & Yağız, Abdullah. (2022). Disease detection and physical disorders classification for citrus fruit images using convolutional neural network. Journal of Food Measurement and Characterization. [CrossRef]
  13. Khan, M.A.; Akram, T.; Sharif, M.; Awais, M.; Javed, K.; Ali, H.; Saba, T. CCDF: Automatic system for segmentation and recognition of fruit crops diseases based on correlation coefficient and deep CNN features. Computers and electronics in agriculture 2018, 155, 220–236. [Google Scholar]
  14. Janakiramaiah, B.; Kalyani, G.; Prasad, L.V.; Karuna, A.; Krishna, M. Intelligent system for leaf disease detection using capsule networks for horticulture. Journal of Intelligent & Fuzzy Systems 2021, 41, 6697–6713. [Google Scholar]
  15. Liu, J.; Wang, X. Tomato Diseases and Pests Detection Based on Improved Yolo V3 Convolutional Neural Network. Frontiers in Plant Science 2020, 11. [Google Scholar] [CrossRef]
  16. Malathy, S.; Karthiga, R.R.; Swetha, K.; Preethi, G. (2021, January). Disease detection in fruits using image processing. In 2021 6th International Conference on Inventive Computation Technologies (ICICT) (pp. 747–752). IEEE.
  17. Dhiman, P.; Kaur, A.; Hamid, Y.; Alabdulkreem, E.; Elmannai, H.; Ababneh, N. Smart disease detection system for citrus fruits using deep learning with edge computing. Sustainability 2023, 15, 4576. [Google Scholar] [CrossRef]
  18. Dhiman, P.; Manoharan, P.; Lilhore, U.K.; et al. PFDI: a precise fruit disease identification model based on context data fusion with faster-CNN in edge computing environment. EURASIP J. Adv. Signal Process. 2023, 72 (2023). [Google Scholar] [CrossRef]
  19. Kavya, P.; Nischitha, S.; Nivedita, N.S.; Prabhu, A. (2024, June). Deep Analysis: Apple Fruit Disease Detection Using Deep Learning. In 2024 IEEE International Conference on Information Technology, Electronics and Intelligent Communication Systems (ICITEICS) (pp. 1–9). IEEE.
  20. Azgomi, H.; Haredasht, F.R.; Motlagh, M.R.S. Diagnosis of some apple fruit diseases by using image processing and artificial neural network. Food Control 2023, 145, 109484. [Google Scholar]
  21. Pathmanaban, P.; Gnanavel, B.K.; Anandan, S.S. Guava fruit (Psidium guajava) damage and disease detection using deep convolutional neural networks and thermal imaging. The Imaging Science Journal 2022, 70, 102–116. [Google Scholar]
  22. Gupta, S.; Tripathi, A.K. Fruit and vegetable disease detection and classification: Recent trends, challenges, and future opportunities. Engineering Applications of Artificial Intelligence 2024, 133, 108260. [Google Scholar]
  23. Yadav, P.K.; Burks, T.; Frederick, Q.; Qin, J.; Kim, M.; Ritenour, M.A. Citrus disease detection using convolution neural network generated features and Softmax classifier on hyperspectral image data. Frontiers in Plant Science 2022, 13, 1043712. [Google Scholar]
  24. Basri, H.; Syarif, I.; Sukaridhoto, S.; Falah, M.F. Intelligent system for automatic classification of fruit defect using faster region-based convolutional neural network (faster r-CNN). Jurnal Ilmiah Kursor 2019, 10. [Google Scholar]
  25. Kazi, S. Fruit grading, disease detection, and an image processing strategy. Journal of Image Processing and Artificial Intelligence 2023, 9, 17–34. [Google Scholar]
  26. Hashan, A.M.; Rahman, S.M.T.; Islam, R.M.R.U.; Avinash, K.; Shekhor, S.; Iftakhairul, S.M. (2023, December). Smart Horticulture Based on Image Processing: Guava Fruit Disease Identification. In 2023 IEEE 21st Student Conference on Research and Development (SCOReD) (pp. 270–274). IEEE.
  27. K. Nageswararao, A.S.L. Apple Disease Detection Using Convolutional Neural Networks. International Journal of Intelligent Systems and Applications in Engineering 2024, 12(21s), 466–470. Retrieved from https://ijisae.org/index.php/IJISAE/article/view/5442.
  28. Haruna, A.A.; Badi, I.A.; Muhammad, L.J.; Abuobieda, A.; Altamimi, A. (2023, January). CNN-LSTM learning approach for classification of foliar disease of apple. In 2023 1st International Conference on Advanced Innovations in Smart Cities (ICAISC) (pp. 1–6). IEEE.
  29. Naik, B.N.; Malmathanraj, R.; Palanisamy, P. Detection and classification of chilli leaf disease using a squeeze-and-excitation-based CNN model. Ecological Informatics 2022, 69, 101663. [Google Scholar]
  30. Vishnoi, V.K.; Kumar, K.; Kumar, B.; Mohan, S.; Khan, A.A. Detection of apple plant diseases using leaf images through convolutional neural network. IEEE Access 2022, 11, 6594–6609. [Google Scholar]
  31. Barman, U.; Choudhury, R.D.; Sahu, D.; Barman, G.G. Comparison of convolution neural networks for smartphone image based real time classification of citrus leaf disease. Computers and Electronics in Agriculture 2020, 177, 105661. [Google Scholar]
  32. Jung, D.H.; Kim, J.D.; Kim, H.Y.; Lee, T.S.; Kim, H.S.; Park, S.H. A hyperspectral data 3D convolutional neural network classification model for diagnosis of gray mold disease in strawberry leaves. Frontiers in Plant Science 2022, 13, 837020. [Google Scholar]
  33. Iftikhar, S.; Khattak, H.A.; Saadat, A.; Ameer, Z.; Zakarya, M. Efficient fruit disease diagnosis on resource-constrained agriculture devices. Journal of the Saudi Society of Agricultural Sciences 2020.
  34. Yadav, D.; Yadav, A.K. A Novel Convolutional Neural Network Based Model for Recognition and Classification of Apple Leaf Diseases. Traitement du Signal 2020, 37. [Google Scholar]
  35. Mehta, S.; Singh, G.; Bhale, Y.A. (2024). Precision Agriculture: An Augmented Datasets and CNN Model-Based Approach to Diagnose Diseases in Fruits and Vegetable Crops. Simulation Techniques of Digital Twin in Real-Time Applications: Design Modeling and Implementation, 215-242.
  36. Nirgude, V.; Rathi, S. Improving the accuracy of real field pomegranate fruit diseases detection and visualisation using convolution neural networks and grad-CAM. International Journal of Data Analysis Techniques and Strategies 2023, 15, 57–75. [Google Scholar]
  37. Alhazmi, S. (2023). Different Stages of Watermelon Diseases Detection Using Optimized CNN. In Soft Computing: Theories and Applications: Proceedings of SoCTA 2022 (pp. 121–133). Singapore: Springer Nature Singapore.
  38. Sajitha, P.; Andrushia, A.D. (2022, April). Banana Fruit Disease Detection and Categorization Utilizing Graph Convolution Neural Network (GCNN). In 2022 6th International Conference on Devices, Circuits and Systems (ICDCS) (pp. 130–134). IEEE.
  39. Lanjewar, M.G.; Parab, J.S. CNN and transfer learning methods with augmentation for citrus leaf diseases detection using PaaS cloud on mobile. Multimedia Tools and Applications 2024, 83, 31733–31758. [Google Scholar]
  40. Shrestha, G. ; Deepsikha; Das, M.; Dey, N. “Plant Disease Detection Using CNN,” 2020 IEEE Applied Signal Processing Conference (ASPCON), Kolkata, India, 2020, pp. 109–113. [CrossRef]
  41. Manzoor, E.S.; Malhotra, R.; Bhat, R.; Shekhar, S. (2024, July). Apple Detection: A CNN Approach for Diseases Detection. In 2024 Second International Conference on Advances in Information Technology (ICAIT) (Vol. 1, pp. 1–5). IEEE.
  42. Mitkal, P.S.; Jagadale, A. Grading of pomegranate fruit using CNN. age 2023, 3. [Google Scholar]
  43. Ahmad, J.; Jan, B.; Farman, H.; Ahmad, W.; Ullah, A. Disease detection in plum using convolutional neural network under true field conditions. Sensors 2020, 20, 5569. [Google Scholar] [CrossRef]
  44. Mostafa, A.M.; Kumar, S.A.; Meraj, T.; Rauf, H.T.; Alnuaim, A.A.; Alkhayyal, M.A. Guava disease detection using deep convolutional neural networks: A case study of guava plants. Applied Sciences 2021, 12, 239. [Google Scholar]
  45. Rana, H.S.; Manjunatha, N.; Pokhare, S.S.; Marathe, R.A.; Rajan, J. (2024, October). Convolutional Neural Network Based Approach for Automatic Detection of Diseases from Pomegranate Plants. In 2024 IEEE International Conference on Distributed Computing, VLSI, Electrical Circuits and Robotics (DISCOVER) (pp. 66–72). IEEE.
  46. Suji, A.; Gopi, R.; Danalakshmi, D.; Govindasamy, R. (2024, July). An Automatic Detection of Citrus Fruits and Leaves Diseases Using CNN. In 2024 2nd International Conference on Sustainable Computing and Smart Systems (ICSCSS) (pp. 1552–1557). IEEE.
  47. Kumar, V.; Banerjee, D.; Chauhan, R.; Rawat, R.S.; Gill, K.S. (2023, December). Revolutionizing Pear Tree Disease Detection: An In-Depth Investigation into CNN-Based Approaches. In 2023 Global Conference on Information Technologies and Communications (GCITC) (pp. 1–6). IEEE.
  48. Assunção, E.; Diniz, C.; Gaspar, P.D.; Proença, H. Decision-making support system for fruit diseases classification using deep learning. 2020 International Conference on Decision Aid Sciences and Application (DASA) 2020, 652–656. [Google Scholar] [CrossRef]
  49. Krishna, R.; Prema, K.V. (2023). Constructing and Optimising RNN models to predict fruit rot disease incidence in areca nut crop based on weather parameters. IEEE Access.
  50. Rathnayake, G.; Rupasinghe, S.; Weerathunga, I.; Akalanka, E.D.K.S.; Sankalana, P.; Zoysa, A.K.T.D. Diseases Detection and Quality Detection of Guava Fruits and Leaves Using Image Processing. International Research Journal of Innovations in Engineering and Technology 2023, 7, 511. [Google Scholar]
  51. Dhiman, P.; Kukreja, V.; Manoharan, P.; Kaur, A.; Kamruzzaman, M.M.; Dhaou, I.B.; Iwendi, C. A novel deep learning model for detection of severity level of the disease in citrus fruits. Electronics 2022, 11, 495. [Google Scholar] [CrossRef]
  52. Aghamohammadesmaeilketabforoosh, K.; Nikan, S.; Antonini, G.; Pearce, J.M. Optimizing Strawberry Disease and Quality Detection with Vision Transformers and Attention-Based Convolutional Neural Networks. Foods 2024, 13, 1869. [Google Scholar] [CrossRef] [PubMed]
  53. Parmar, M.; Degadwala, S. Deep learning for accurate papaya disease identification using vision transformers. International Journal of Scientific Research in Computer Science, Engineering and Information Technology 2024, 10. [Google Scholar] [CrossRef]
  54. Zala, S.; Goyal, V.; Sharma, S.; Shukla, A. Transformer based fruits disease classification. Multimedia Tools and Applications 2024, 1–21. [Google Scholar]
  55. Nguyen, H.T.; Tran, T.D.; Nguyen, T.T.; Pham, N.M.; Nguyen Ly, P.H.; Luong, H.H. Strawberry disease identification with vision transformer-based models. Multimedia Tools and Applications 2024, 1–26. [Google Scholar]
  56. Chen, K.; Lang, J.; Li, J.; Chen, D.; Wang, X.; Zhou, J.; Liu, X.; Song, Y.; Dong, M. Integration of Image and Sensor Data for Improved Disease Detection in Peach Trees Using Deep Learning Techniques. Agriculture 2024, 14, 797. [Google Scholar] [CrossRef]
  57. Shereesha, M.; Hemavathy, C.; Teja, H.; Reddy, G.M.; Kumar, B.V.; Sunitha, G. (2023). Precision Mango Farming: Using Compact Convolutional Transformer for Disease Detection. In A. Abraham, A. Bajaj, N. Gandhi, A.M. Madureira, & C. Kahraman (Eds.), Innovations in Bio-Inspired Computing and Applications (Vol. 649, pp. 623–633). Lecture Notes in Networks and Systems. Springer, Cham. [CrossRef]
  58. Christakakis, P.; Giakoumoglou, N.; Kapetas, D.; Tzovaras, D.; Pechlivani, E.M. Vision Transformers in Optimization of AI-Based Early Detection of Botrytis cinerea. AI 2024, 5, 1301–1323. [Google Scholar] [CrossRef]
  59. Tiwari, R.G.; Misra, A.; Maheshwari, H.; Agarwal, A.K.; Sharma, M. (2024, August). Hybrid Transformer-CNN Model for Automated Diagnosis of Pomegranate Fruit Diseases. In 2024 10th International Conference on Electrical Energy Systems (ICEES) (pp. 1–5). IEEE.
  60. Li, X.; Chen, X.; Yang, J.; Li, S. Transformer helps identify kiwifruit diseases in complex natural environments. Computers and Electronics in Agriculture 2022, 200, 107258. [Google Scholar]
  61. Liu, Y.; Song, Y.; Ye, R.; Zhu, S.; Huang, Y.; Chen, T.; Lv, C. High-Precision Tomato Disease Detection Using NanoSegmenter Based on Transformer and Lightweighting. Plants 2023, 12, 2559. [Google Scholar]
  62. Alshammari, H.; Gasmi, K.; Ben Ltaifa, I.; Krichen, M.; Ben Ammar, L.; Mahmood, M.A. Olive disease classification based on vision transformer and CNN models. Computational Intelligence and Neuroscience 2022, 2022, 3998193. [Google Scholar]
  63. Liu, Y.; Yu, Q.; Geng, S. Real-time and lightweight detection of grape diseases based on Fusion Transformer YOLO. Frontiers in Plant Science 2024, 15, 1269423. [Google Scholar] [CrossRef]
  64. Durairaj, V.; Surianarayanan, C. Disease detection in plant leaves using segmentation and autoencoder techniques. Malaya Journal 2020. [CrossRef] [PubMed]
  65. Pardede, H.F.; Suryawati, E.; Sustika, R.; Zilvan, V. (2018, November). Unsupervised convolutional autoencoder-based feature learning for automatic detection of plant diseases. In 2018 international conference on computer, control, informatics and its applications (IC3INA) (pp. 158–162). IEEE.
  66. Boukhris, A.; Jilali, A.; Asri, H. Deep Learning and Machine Learning Based Method for Crop Disease Detection and Identification Using Autoencoder and Neural Network. Revue d’Intelligence Artificielle 2024, 38. [Google Scholar] [CrossRef]
  67. Huddar, S.; Prabhushetty, K.; Jakati, J.; Havaldar, R.; Sirdeshpande, N. Deep autoencoder based image enhancement approach with hybrid feature extraction for plant disease detection using supervised classification. International Journal of Electrical & Computer Engineering (2088-8708) 2024, 14. [Google Scholar]
  68. Alshammari, K.; Alshammari, R.; Alshammari, A.; et al. An improved pear disease classification approach using cycle generative adversarial network. Sci Rep 2024, 14, 6680. [Google Scholar] [CrossRef]
  69. Zeng, Q.; Ma, X.; Cheng, B.; Zhou, E.; Pang, W. Gans-based data augmentation for citrus disease severity detection using deep learning. IEEE Access 2020, 8, 172882–172891. [Google Scholar] [CrossRef]
  70. Xiao, D.; Zeng, R.; Liu, Y.; Huang, Y.; Liu, J.; Feng, J.; Zhang, X. Citrus greening disease recognition algorithm based on classification network using TRL-GAN. Computers and Electronics in Agriculture 2022, 200, 107206. [Google Scholar] [CrossRef]
  71. Ghadekar, P.; Shaikh, U.; Ner, R.; Patil, S.; Nimase, O.; Shinde, T. (2023, November). Early Phase Detection of Bacterial Blight in Pomegranate Using GAN Versus Ensemble Learning. In International Conference on Data Science, Computation and Security (pp. 125–138). Singapore: Springer Nature Singapore.
  72. Noguchi, K.; Takemura, Y.; Tominaga, M.; Ishii, K. “Mandarin Orange Anomaly Detection with Cycle-GAN,” 2024 IEEE/SICE International Symposium on System Integration (SII), Ha Long, Vietnam, 2024, pp. 644–649. [CrossRef]
  73. Chen, S.H.; Lai, Y.W.; Kuo, C.L.; Lo, C.Y.; Lin, Y.S.; Lin, Y.R.; Tsai, C.C. A surface defect detection system for golden diamond pineapple based on CycleGAN and YOLOv4. Journal of King Saud University-Computer and Information Sciences 2022, 34, 8041–8053. [Google Scholar]
  74. Wu, J.; Abolghasemi, V.; Anisi, M.H.; Dar, U.; Ivanov, A.; Newenham, C. “Strawberry Disease Detection Through an Advanced Squeeze-and-Excitation Deep Learning Model,” in IEEE Transactions on AgriFood Electronics, vol. 2, no. 2, pp. 259–267, Sept.-Oct. 2024. [CrossRef]
  75. Senthilkumar, C.; Kamarasan, M. An effective citrus disease detection and classification using deep learning based inception resnet V2 model. Turkish Journal of Computer and Mathematics Education 2021, 12, 2283–2296. [Google Scholar]
  76. Kumar, S.; Pal, S.; Singh, V.P.; Jaiswal, P. Performance evaluation of ResNet model for classification of tomato plant disease. Epidemiologic Methods 2023, 12, 20210044. [Google Scholar] [CrossRef]
  77. Li, X.; Rai, L. (2020, November). Apple leaf disease identification and classification using resnet models. In 2020 IEEE 3rd International Conference on Electronic Information and Communication Technology (ICEICT) (pp. 738–742). IEEE.
  78. Mohinani, H.; Chugh, V.; Kaw, S.; Yerawar, O.; Dokare, I. (2022, February). Vegetable and fruit leaf diseases detection using ResNet. In 2022 Interdisciplinary Research in Technology and Management (IRTM) (pp. 1–7). IEEE.
  79. Upadhyay, L.; Saxena, A. Evaluation of Enhanced Resnet-50 Based Deep Learning Classifier for Tomato Leaf Disease Detection and Classification. Journal of Electrical Systems 2024, 20(3s), 2270–2282. [Google Scholar]
  80. Jadhav, S.; Gandhi, S.; Joshi, P.; Choudhary, V. Banana Crop Disease Detection Using Deep Learning Approach. *International Journal for Research in Applied Science and Engineering Technology* 2023, *11*(5), 2061–2066. [CrossRef]
  81. Arora, D.; Mehta, K.; Kumar, A.; Lamba, S. (2024, March). Evaluating Watermelon Mosaic Virus Seriousness with Hybrid RNN and Random Forest Model: A Five-Degree Approach. In 2024 11th International Conference on Reliability, Infocom Technologies and Optimization (Trends and Future Directions)(ICRITO) (pp. 1–5). IEEE.
  82. Pydipati, R.; Burks, T.F.; Lee, W.S. Statistical and neural network classifiers for citrus disease detection using machine vision. Transactions of the ASAE 2005, 48, 2007–2014. [Google Scholar] [CrossRef]
  83. Palei, S.; Behera, S.K.; Sethy, P.K. A systematic review of citrus disease perceptions and fruit grading using machine vision. Procedia Computer Science 2023, 218, 2504–2519. [Google Scholar] [CrossRef]
  84. Agarwal, A.; Sarkar, A.; Dubey, A.K. (2019). Computer vision-based fruit disease detection and classification. In Smart Innovations in Communication and Computational Sciences: Proceedings of ICSICCS-2018 (pp. 105–115). Springer Singapore.
  85. Doh, B.; Zhang, D.; Shen, Y.; Hussain, F.; Doh, R.F.; Ayepah, K. (2019, September). Automatic citrus fruit disease detection by phenotyping using machine learning. In 2019 25th International conference on automation and computing (ICAC) (pp. 1–5). IEEE.
  86. Deng, F.; Mao, W.; Zeng, Z.; Zeng, H.; Wei, B. Multiple diseases and pests detection based on federated learning and improved faster R-CNN. IEEE Transactions on Instrumentation and Measurement 2022, 71, 1–11. [Google Scholar] [CrossRef]
  87. Banerjee, D.; Kukreja, V.; Hariharan, S.; Jain, V. (2023, April). Enhancing Mango Fruit Disease Severity Assessment with CNN and SVM-Based Classification. In 2023 IEEE 8th International Conference for Convergence in Technology (I2CT) (pp. 1–6). IEEE.
  88. Vasumathi, M.T.; Kamarasan, M. An effective pomegranate fruit classification based on CNN-LSTM deep learning models. Indian Journal of Science and Technology 2021, 14, 1310–1319. [Google Scholar] [CrossRef]
  89. Majid, A.; Khan, M.A.; Alhaisoni, M.; E. yar, M.A.; Tariq, U. et al. An integrated deep learning framework for fruits diseases classification. Computers Materials & Continua 2022, 71, 1387–1402. [CrossRef]
  90. Masuda, K.; Suzuki, M.; Baba, K.; Takeshita, K.; Suzuki, T.; Sugiura, M.; Akagi, T. Noninvasive diagnosis of seedless fruit using deep learning in persimmon. The Horticulture Journal 2021, 90, 172–180. [Google Scholar]
  91. Nyarko, B.N.E.; Bin, W.; Jinzhi, Z.; Odoom, J. (2023). Tomato fruit disease detection based on improved single shot detection algorithm. Journal of Plant Protection Research.
  92. Gill, H.S.; Murugesan, G.; Khehra, B.S.; Sajja, G.S.; Gupta, G.; Bhatt, A. Fruit recognition from images using deep learning applications. Multimedia Tools and Applications 2022, 81, 33269–33290. [Google Scholar] [CrossRef]
  93. Le, A.T.; Shakiba, M.; Ardekani, I. Tomato disease detection with lightweight recurrent and convolutional deep learning models for sustainable and smart agriculture. Frontiers in Sustainability 2024, 5, 1383182. [Google Scholar] [CrossRef]
  94. Latif, G.; Alghazo, J.; Ben Brahim, G.; Alnujaidi, K. (n.d.). Dates fruit disease recognition using machine learning. Prince Mohammad Bin Fahd University.
  95. Gupta, R.; Kaur, M.; Garg, N.; Shankar, H.; Ahmed, S. (2023, May). Lemon Diseases Detection and Classification using Hybrid CNN-SVM Model. In 2023 Third International Conference on Secure Cyber Computing and Communication (ICSCCC) (pp. 326–331). IEEE.
  96. Alekhya, J.L.; Nithin, P.S.; Enosh, P.; Devika, Y. (2024, August). Mango Fruit Disease Detection by Integrating MobileNetV2 and Long Short-Term Memory. In 2024 International Conference on Electrical Electronics and Computing Technologies (ICEECT) (Vol. 1, pp. 1–6). IEEE.
  97. Khattak, A.; Asghar, M.U.; Batool, U.; Asghar, M.Z.; Ullah, H.; Al-Rakhami, M.; Gumaei, A. Automatic detection of citrus fruit and leaves diseases using deep neural network model. IEEE access 2021, 9, 112942–112954. [Google Scholar] [CrossRef]
  98. Mohanapriya, S.; Efshiba, V.; Natesan, P. (2021, September). Identification of Fruit Disease Using Instance Segmentation. In 2021 Third International Conference on Inventive Research in Computing Applications (ICIRCA) (pp. 1779–1787). IEEE.
  99. Sundaramoorthi, K.; Kamarasan, M. (2024, May). Integrating Sparrow Search Algorithm with Deep Learning for Tomato Fruit Disease Detection and Classification. In 2024 4th International Conference on Pervasive Computing and Social Networking (ICPCSN) (pp. 184–190). IEEE.
  100. Seetharaman, K.; Mahendran, T. Detection of Disease in Banana Fruit using Gabor Based Binary Patterns with Convolution Recurrent Neural Network. Turkish Online Journal of Qualitative Inquiry 2021, 12. [Google Scholar]
  101. Xue, G.; Liu, S.; Ma, Y. A hybrid deep learning-based fruit classification using attention model and convolution autoencoder. Complex & Intelligent Systems 2020, 1–11. [Google Scholar]
  102. Tewari, V.; Azeem, N.A.; Sharma, S. Automatic guava disease detection using different deep learning approaches. Multimedia Tools and Applications 2024, 83, 9973–9996. [Google Scholar]
  103. Sankaran, S.; Subbiah, D.; Chokkalingam, B.S. CitrusDiseaseNet: An integrated approach for automated citrus disease detection using deep learning and kernel extreme learning machine. Earth Science Informatics 2024, 1–18. [Google Scholar]
  104. Yang, D.; Wang, F.; Hu, Y.; Lan, Y.; Deng, X. Citrus huanglongbing detection based on multi-modal feature fusion learning. Frontiers in plant science 2021, 12, 809506. [Google Scholar]
  105. SAID, Archana Ganesh; JOSHI, Bharti. Advanced multimodal thermal imaging for high-precision fruit disease segmentation and classification. Journal of Autonomous Intelligence, [S.l.], v. 7, n. 5, p. 1618, may 2024. ISSN 2630-5046.
  106. Tian, Y.; Yang, G.; Wang, Z.; Li, E.; Liang, Z. Detection of apple lesions in orchards based on deep learning methods of CycleGAN and YOLOV3-dense. Journal of Sensors 2019, 2019, 7630926. [Google Scholar]
  107. Si, J.; Kim, S. Chili Pepper Disease Diagnosis via Image Reconstruction Using GrabCut and Generative Adversarial Serial Autoencoder. arXiv 2023, arXiv:2306.12057. [Google Scholar]
  108. Samajpati, B.J.; Degadwala, S.D. (2016, April). Hybrid approach for apple fruit diseases detection and classification using random forest classifier. In 2016 International conference on communication and signal processing (ICCSP) (pp. 1015–1019). IEEE.
  109. Nandi, R.N.; Palash, A.H.; Siddique, N.; Zilani, M.G. (2023). Device-friendly guava fruit and leaf disease detection using deep learning. In M. S. Satu, M.A. Moni, M.S. Kaiser, & M. S. Arefin (Eds.), Machine intelligence and emerging technologies (Vol. 490, pp. 55–66). Lecture Notes of the Institute for Computer Sciences, Social Informatics, and Telecommunications Engineering. Springer. [CrossRef]
  110. Chug, A.; Bhatia, A.; Singh, A.P.; Singh, D. A novel framework for image-based plant disease detection using hybrid deep learning approach. Soft Computing 2023, 27, 13613–13638. [Google Scholar]
  111. H. B. Patel and N. J. Patil, “Enhanced CNN for Fruit Disease Detection and Grading Classification Using SSDAE-SVM for Postharvest Fruits,” in IEEE Sensors Journal, vol. 24, no. 5, pp. 6719–6732, 1 March1, 2024. [CrossRef]
  112. Dharmasiri, S.B.D.H.; Jayalal, S. “Passion Fruit Disease Detection using Image Processing,” 2019 International Research Conference on Smart Computing and Systems Engineering (SCSE), Colombo, Sri Lanka, 2019, pp. 126–133. [CrossRef]
  113. Laura, D.; Urrutia, E.P.; Salazar, F.; Ureña, J.; Moreno, R.; Machado, G.; Cazorla-Logroño, M.; Altamirano, S. Aerial remote sensing system to control pathogens and diseases in broccoli crops with the use of artificial vision. Smart Agricultural Technology 2025, 10, 100739. ISSN 2772-3755. [CrossRef]
  114. Mahmud, M.S.; Zaman, Q.U.; Esau, T.J.; Price, G.W.; Prithiviraj, B. Development of an artificial cloud lighting condition system using machine vision for strawberry powdery mildew disease detection. Computers and electronics in agriculture 2019, 158, 219–225. [Google Scholar]
  115. Abd El-aziz, A.A.; Darwish, A.; Oliva, D.; Hassanien, A.E. (2020). Machine Learning for Apple Fruit Diseases Classification System. In: Hassanien, AE.; Azar, A.; Gaber, T.; Oliva, D.; Tolba, F. (eds) Proceedings of the International Conference on Artificial Intelligence and Computer Vision (AICV2020). AICV 2020. Advances in Intelligent Systems and Computing, vol 1153. Springer, Cham. [CrossRef]
  116. Habib, M.T.; Majumder, A.; Jakaria, A.Z.M.; Akter, M.; Uddin, M.S.; Ahmed, F. Machine vision based papaya disease recognition. Journal of King Saud University-Computer and Information Sciences 2020, 32, 300–309. [Google Scholar]
  117. Soltani Firouz, M.; Sardari, H. Defect detection in fruit and vegetables by using machine vision systems and image processing. Food Engineering Reviews 2022, 14, 353–379. [Google Scholar]
  118. Mehra, Tanvi, Vinay Kumar, and Pragya Gupta. “Maturity and disease detection in tomato using computer vision.” In 2016 Fourth international conference on parallel, distributed and grid computing (PDGC), pp. 399–403. IEEE, 2016.
  119. Athiraja, A.; Vijayakumar, P. RETRACTED ARTICLE: Banana disease diagnosis using computer vision and machine learning methods. Journal of Ambient Intelligence and Humanized Computing 2021, 12, 6537–6556. [Google Scholar]
  120. Mahmud, M.S.; Zaman, Q.U.; Esau, T.J.; Chang, Y.K.; Price, G.W.; Prithiviraj, B. Real-time detection of strawberry powdery mildew disease using a mobile machine vision system. Agronomy 2020, 10, 1027. [Google Scholar] [CrossRef]
  121. Hadipour-Rokni, R.; Asli-Ardeh, E.A.; Jahanbakhshi, A.; Sabzi, S. Intelligent detection of citrus fruit pests using machine vision system and convolutional neural network through transfer learning technique. Computers in Biology and Medicine 2023, 155, 106611. [Google Scholar] [CrossRef] [PubMed]
  122. Mia, M.R.; Mia, M.J.; Majumder, A.; Supriya, S.; Habib, M.T. Computer vision based local fruit recognition. Int. J. Eng. Adv. Technol 2019, 9, 2810–2820. [Google Scholar]
  123. Habib, M.T.; Mia, M.R.; Mia, M.J.; Uddin, M.S.; Ahmed, F. (2020). A computer vision approach for jackfruit disease recognition. In Proceedings of International Joint Conference on Computational Intelligence: IJCCI 2019 (pp. 343–353). Springer Singapore.
  124. Al Haque, A.F.; Hafiz, R.; Hakim, M.A.; Islam, G.R. (2019, December). A computer vision system for guava disease detection and recommend curative solution using deep learning approach. In 2019 22nd International Conference on Computer and Information Technology (ICCIT) (pp. 1–6). IEEE.
  125. Bhange, M.; Hingoliwala, H.A. Smart farming: Pomegranate disease detection using image processing. Procedia computer science 2015, 58, 280–288. [Google Scholar]
  126. Nithya, R.; Santhi, B.; Manikandan, R.; Rahimi, M.; Gandomi, A.H. Computer vision system for mango fruit defect detection using deep convolutional neural network. foods 2022, 11, 3483. [Google Scholar] [CrossRef]
  127. Abbaspour-Gilandeh, Y.; Aghabara, A.; Davari, M.; Maja, J.M. Feasibility of using computer vision and artificial intelligence techniques in detection of some apple pests and diseases. Applied Sciences 2022, 12, 906. [Google Scholar]
  128. Habib, M.T.; Arif, M.A.I.; Shorif, S.B.; Uddin, M.S.; Ahmed, F. Machine vision-based fruit and vegetable disease recognition: A review. Computer Vision and Machine Learning in Agriculture 2021, 143–157. [Google Scholar]
  129. Deshpande, T.; Sengupta, S.; Raghuvanshi, K.S. Grading & identification of disease in pomegranate leaf and fruit. International Journal of Computer Science and Information Technologies 2014, 5, 4638–4645. [Google Scholar]
  130. Kamala, K.L.; Alex, S.A. “Apple Fruit Disease Detection for Hydroponic plants using Leading edge Technology Machine Learning and Image Processing,” 2021 2nd International Conference on Smart Electronics and Communication (ICOSEC), Trichy, India, 2021, pp. 820–825. [CrossRef]
  131. Durmuş, H.; Güneş, E.O.; Kırcı, M. Disease detection on the leaves of the tomato plants by using deep learning. 2017 6th International Conference on Agro-Geoinformatics 2017, 1–5. [Google Scholar] [CrossRef]
  132. Qin, J.; Burks, T.; Ritenour, M.; Bonn, W.G. Detection of citrus canker using hyperspectral reflectance imaging with spectral information divergence. *Journal of Food Engineering* 2009, *93*(2), 183–191. [CrossRef]
  133. Jain, R.; Singla, P.; Sharma, R.; Kukreja, V.; Singh, R. (2023, April). Detection of Guava Fruit Disease through a Unified Deep Learning Approach for Multi-classification. In 2023 IEEE International Conference on Contemporary Computing and Communications (InC4) (Vol. 1, pp. 1–5). IEEE.
  134. Bulanon, D.; Burks, T.; Alchanatis, V. A Multispectral Imaging Analysis for Enhancing Citrus Fruit Detection. *Environmental Control in Biology* 2010, *48*(2), 81–91. [CrossRef]
  135. Rauf, H.T.; Saleem, B.A.; Lali, M.I.U.; Khan, M.A.; Sharif, M.; Bukhari, S.A.C. A citrus fruits and leaves dataset for detection and classification of citrus diseases through machine learning. Data in brief 2019, 26, 104340. [Google Scholar] [PubMed]
  136. Mahendran, T.; Seetharaman, K. (2023, January). Feature extraction and classification based on pixel in banana fruit for disease detection using neural networks. In 2023 Third International Conference on Advances in Electrical, Computing, Communication and Sustainable Technologies (ICAECT) (pp. 1–7). IEEE.
  137. Gehlot, M.; Saxena, R.K. & Gandhi, G.C. “Tomato-Village”: a dataset for end-to-end tomato disease detection in a real-world environment. Multimedia Systems 2023, 29, 3305–3328. [Google Scholar] [CrossRef]
  138. Albanese, A.; Nardello, M.; Brunelli, D. “Automated Pest Detection With DNN on the Edge for Precision Agriculture,” in IEEE Journal on Emerging and Selected Topics in Circuits and Systems, vol. 11, no. 3, pp. 458–467, Sept. 2021. [CrossRef]
  139. Rumy, S.M.S.H.; Hossain, M.I.A.; Jahan, F.; Tanvin, T. “An IoT based System with Edge Intelligence for Rice Leaf Disease Detection using Machine Learning,” 2021 IEEE International IOT, Electronics and Mechatronics Conference (IEMTRONICS), Toronto, ON, Canada, 2021, pp. 1–6. [CrossRef]
  140. Tsai, Y.-H.; Hsu, T.-C. (2024). An effective deep neural network in edge computing enabled Internet of Things for plant diseases monitoring. Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision Workshops, 695–704. Hsuan Chuang University. https://openaccess.thecvf.com/WACV2024W_paperlist.
  141. Kalbande, K.; Patil, W.; Deshmukh, A.; Joshi, S.; Titarmare, A.S.; Patil, S.C. Novel Edge Device System for Plant Disease Detection with Deep Learning Approach. International Journal of Intelligent Systems and Applications in Engineering 2024, 12, 610–618. [Google Scholar]
  142. Khan, A.T.; Jensen, S.M.; Khan, A.R.; Li, S. Plant disease detection model for edge computing devices. Frontiers in Plant Science 2023, 14. [Google Scholar] [CrossRef]
  143. Kim, J.; Chang, S.; Kwak, N. PQK: Model compression via pruning, quantization, and knowledge distillation. arXiv 2021, arXiv:2106.14681. [Google Scholar] [CrossRef]
  144. Liang, T.; Glossner, J.; Wang, L.; Shi, S.; Zhang, X. Pruning and quantization for deep neural network acceleration: A survey. Journal of Systems Architecture 2021, 117, 102137. [Google Scholar] [CrossRef]
  145. Li, G.; Wang, Y.; Zhao, Q.; Yuan, P.; Chang, B. PMVT: A lightweight vision transformer for plant disease identification on mobile devices. Frontiers in Plant Science 2023, 14, 1256773. [Google Scholar] [CrossRef]
  146. Borhani, Y.; Khoramdel, J.; Najafi, E. A deep learning based approach for automated plant disease classification using vision transformer. Sci Rep 2022, 12, 11554. [Google Scholar] [CrossRef]
  147. Guan, H.; Fu, C.; Zhang, G.; Li, K.; Wang, P.; Zhu, Z. A lightweight model for efficient identification of plant diseases and pests based on deep learning. Frontiers in Plant Science 2023, 14, 1227011. [Google Scholar] [CrossRef]
  148. Delfani, P.; Thuraga, V.; Banerjee, B.; et al. Integrative approaches in modern agriculture: IoT, ML and AI for disease forecasting amidst climate change. Precision Agric 2024, 25, 2589–2613. [Google Scholar] [CrossRef]
  149. Egon, A.; Bell, C. (n.d.). AI in agriculture: Revolutionizing crop monitoring and disease management through precision technology. ResearchGate. Retrieved from https://www.researchgate.net/publication/385940131_AI_IN_AGRICULTURE_REVOLUTIONIZING_CROP_MONITORING_AND_DISEASE_MANAGEMENT_THROUGH_PRECISION_TECHNOLOGY.
  150. Kaur, A.; et al. , “Artificial Intelligence Driven Smart Farming for Accurate Detection of Potato Diseases: A Systematic Review,” in IEEE Access, vol. 12, pp. 193902–193922, 2024. [CrossRef]
  151. Jafar A, Bibi N, Naqvi RA, Sadeghi-Niaraki, A., Jeong, D. Revolutionizing agriculture with artificial intelligence: plant disease detection methods, applications, and their limitations. Front Plant Sci. 2024 Mar 13; 15:1356260. [CrossRef] [PubMed]
  152. Arulmurugan, S.; Bharathkumar, V.; Gokulachandru, S.; Yusuf, M.M. “Plant Guard: AI-Enhanced Plant Diseases Detection for Sustainable Agriculture,” 2024 International Conference on Inventive Computation Technologies (ICICT), Lalitpur, Nepal, 2024, pp. 726–730. [CrossRef]
  153. Tariq, M.; Ali, U.; Abbas, S.; Hassan, S.; Naqvi, R.A.; Khan, M.A.; Jeong, D. Corn leaf disease: Insightful diagnosis using VGG16 empowered by explainable AI. Frontiers in Plant Science 2024, 15. [Google Scholar] [CrossRef]
  154. Sagar, S.; Javed, M.; Doermann, D.S. (n.d.). Leaf-based plant disease detection and explainable AI. Indian Institute of Information Technology, Allahabad, & University at Buffalo, NY, USA.
  155. Mahmud, T.; et al. , “Explainable AI for Tomato Leaf Disease Detection: Insights into Model Interpretability,” 2023 26th International Conference on Computer and Information Technology (ICCIT), Cox’s Bazar, Bangladesh, 2023, pp. 1–6. [CrossRef]
  156. Khandaker, M.A.A.; Raha, Z.S.; Islam, S.; Muhammad, T. (2025). Explainable AI-Enhanced Deep Learning for Pumpkin Leaf Disease Detection: A Comparative Analysis of CNN Architectures. ArXiv. Retrieved from https://arxiv.org/abs/2501.05449.
  157. Ashoka, S.B.; Pramodha, M.; Muaad, A.Y.; Nyange, R. (2024). Explainable AI-based framework for banana disease detection. Research Square. [CrossRef]
  158. Khan ZA, Waqar M, Cheema KM, Bakar Mahmood AA, Ain Q, Chaudhary NI, Alshehri A, Alshamrani SS, Zahoor Raja MA. EA-CNN: Enhanced attention-CNN with explainable AI for fruit and vegetable classification. Heliyon. 2024 Nov 30;10:e40820. [CrossRef] [PubMed]
  159. Dubey, S.R.; Jalal, A.S. (2014). Adapted approach for fruit disease identification using images. arXiv.
  160. Alhwaiti, Y.; Ishaq, M.; Siddiqi, M.H.; Waqas, M.; Alruwaili, M.; Alanazi, S.; Khan, A.; Khan, F. (2024). Early detection of late blight tomato disease using histogram oriented gradient based support vector machine. arXiv. https://arxiv.org/abs/2306.08326.
  161. Alagu, S. (2020). Apple Fruit disease detection using Multiclass SVM classifier and IP Webcam APP.
  162. Dewliya, S.; Singh, M.P. (2015). Detection and classification for apple fruit diseases using support vector machine and chain code.
  163. Anu, S.; Nisha, T.; Ramya, R.; Rizuvana, M. Fruit Disease Detection Using GLCM And SVM Classifier. International Journal of Scientific Research in Computer Science, Engineering and Information Technology 2019, 365-371. [CrossRef]
  164. Sanath Rao, U.; Swathi, R.; Sanjana, V.; Arpitha, L.; Chandrasekhar, K.; Chinmayi, & Naik, P.K. Deep learning precision farming: Grapes and mango leaf disease detection by transfer learning. Global Transitions Proceedings 2021, 2, 535–544. [CrossRef]
  165. Dananjayan, S.; Tang, Y.; Zhuang, J.; Hou, C.; Luo, S. Assessment of state-of-the-art deep learning-based citrus disease detection techniques using annotated optical leaf images. Computers and Electronics in Agriculture 2022, 193, 106658. [Google Scholar] [CrossRef]
  166. Ali, H.; Lali, M.I.; Nawaz, M.Z.; Sharif, M.; Saleem, B.A. Symptom-based automated detection of citrus diseases using color histogram and textural descriptors. Computers and Electronics in Agriculture 2017, 138, 92–104. [Google Scholar] [CrossRef]
  167. Zhang, S.; Wu, X.; You, Z.; Zhang, L. Leaf image-based cucumber disease recognition using sparse representation classification. Computers and Electronics in Agriculture 2017, 134, 135–141. [Google Scholar] [CrossRef]
  168. Lamani, S. B. (2018). Pomegranate fruits disease classification with K-means clustering. International Journal for Research Trends and Innovation, 3(3), 74-79. https://www.ijrti.org/papers/IJRTI1803012.pdf.
  169. Doh, B.; Zhang, D.; Shen, Y.; Hussain, F.; Doh, R.F.; Ayepah, K. , “Automatic Citrus Fruit Disease Detection by Phenotyping Using Machine Learning,” 2019 25th International Conference on Automation and Computing (ICAC), Lancaster, UK, pp. 1–5. [CrossRef]
170. Tiwari, R.; Chahande, M. Apple Fruit Disease Detection and Classification Using K-Means Clustering Method. In Advances in Intelligent Computing and Communication; Das, S., Mohanty, M.N., Eds.; Lecture Notes in Networks and Systems, Vol. 202; Springer: Singapore, 2021.
171. Devi, P.K.; Rathamani. Image Segmentation K-Means Clustering Algorithm for Fruit Disease Detection Image Processing. In Proceedings of the 2020 4th International Conference on Electronics, Communication and Aerospace Technology (ICECA), Coimbatore, India, 2020; pp. 861–865.
172. Shin, J.; Chang, Y.K.; Heung, B.; Nguyen-Quang, T.; Price, G.W.; Al-Mallahi, A. A deep learning approach for RGB image-based powdery mildew disease detection on strawberry leaves. Computers and Electronics in Agriculture 2021, 183, 106042.
173. Yadav, S.; Sengar, N.; Singh, A.; Singh, A.; Dutta, M.K. Identification of disease using deep learning and evaluation of bacteriosis in peach leaf. Ecological Informatics 2021, 61, 101247.
174. Momeny, M.; Jahanbakhshi, A.; Hadipour-Rokni, R.; Zhang, Y.-D.; Neshat, A.A.; Ampatzidis, Y. Detection of citrus black spot disease and ripeness level in orange fruit using learning-to-augment incorporated deep networks. Ecological Informatics 2022, 72, 101829.
175. Zhu, D.; Xie, L.; Chen, B.; Tan, J.; Deng, R.; Zheng, Y.; Hu, Q.; Mustafa, R.; Chen, W.; Yi, S.; Yung, K.; Ip, A.W.H. Knowledge graph and deep learning-based pest detection and identification system for fruit quality. Internet of Things 2023, 21, 100649.
176. Saleem, M.; Arif, K.M.; Potgieter, J. A performance-optimized deep learning-based plant disease detection approach for horticultural crops of New Zealand. IEEE Access 2022, 10, 3201104.
177. James, J.A.; Manching, H.K.; Mattia, M.R.; Bowman, K.D.; Hulse-Kemp, A.M.; Beksi, W.J. CitDet: A benchmark dataset for citrus fruit detection. arXiv 2023, arXiv:2309.05645.
178. Wise, K.; Wedding, T.; Selby-Pham, J. Application of automated image colour analyses for the early-prediction of strawberry development and quality. Scientia Horticulturae 2022, 305, 111316.
179. Hasan, R.I.; Alzubaidi, L.; Yusuf, S.M.; Rahim, M.S.M. Automated masks generation for coffee and apple leaf infected with single or multiple diseases-based color analysis approaches. Informatics in Medicine Unlocked 2021, 27, 100837.
180. Ganesh, P.; Volle, K.; Burks, T.F.; Mehta, S.S. DeepOrange: Mask R-CNN-based orange detection and segmentation. IFAC-PapersOnLine 2019, 52, 70–75.
181. Abdulridha, J.; Ampatzidis, Y.; Roberts, P.; Kakarla, S.C. Detecting powdery mildew disease in squash at different stages using UAV-based hyperspectral imaging and artificial intelligence. Biosystems Engineering 2020, 197, 48–60.
182. Qin, J.; Burks, T.F.; Kim, M.S.; Chao, K.; Ritenour, M.A. Citrus canker detection using hyperspectral reflectance imaging and PCA-based image classification method. Sensing and Instrumentation for Food Quality and Safety 2008, 2, 168–177.
183. Bagheri, N.; Mohamadi-Monavar, H.; Azizi, A.; Ghasemi, A. Detection of Fire Blight disease in pear trees by hyperspectral data. European Journal of Remote Sensing 2018, 51, 1–10.
184. Zhao, X.; Burks, T.F.; Qin, J.; Ritenour, M.A. Effect of fruit harvest time on citrus canker detection using hyperspectral reflectance imaging. Sensing and Instrumentation for Food Quality and Safety 2010, 4, 126–135.
185. Lorente, D.; Aleixos, N.; Gómez-Sanchis, J.; Cubero, S.; García-Navarrete, O.L.; Blasco, J. Recent advances and applications of hyperspectral imaging for fruit and vegetable quality assessment. Food and Bioprocess Technology 2012, 5, 1121–1142.
186. Min, D.; Zhao, J.; Bodner, G.; Ali, M.; Li, F.; Zhang, X.; Rewald, B. Early decay detection in fruit by hyperspectral imaging – Principles and application potential. Food Control 2023, 152, 109830.
187. Sighicelli, M.; Colao, F.; Lai, A.; Patsaeva, S. Monitoring post-harvest orange fruit disease by fluorescence and reflectance hyperspectral imaging. In Proceedings of the I International Symposium on Horticulture in Europe 817, February 2008; pp. 277–284.
188. Jung, D.H.; Kim, J.D.; Kim, H.Y.; Lee, T.S.; Kim, H.S.; Park, S.H. A hyperspectral data 3D convolutional neural network classification model for diagnosis of gray mold disease in strawberry leaves. Frontiers in Plant Science 2022, 13, 837020.
189. Mehl, P.M.; Chen, Y.R.; Kim, M.S.; Chan, D.E. Development of hyperspectral imaging technique for the detection of apple surface defects and contaminations. Journal of Food Engineering 2004, 61, 67–81.
190. Pujari, J.D.; Yakkundimath, R.; Byadgi, A.S. Identification and classification of fungal disease affected on agriculture/horticulture crops using image processing techniques. In Proceedings of the 2014 IEEE International Conference on Computational Intelligence and Computing Research, December 2014; pp. 1–4.
191. Genangeli, A.; Allasia, G.; Bindi, M.; Cantini, C.; Cavaliere, A.; Genesio, L.; Gioli, B. A novel hyperspectral method to detect moldy core in apple fruits. Sensors 2022, 22, 4479.
192. Qin, J.; Burks, T.F.; Zhao, X.; Niphadkar, N.; Ritenour, M.A. Multispectral detection of citrus canker using hyperspectral band selection. Transactions of the ASABE 2011, 54, 2331–2341.
193. Pansy, D.L.; Murali, M. UAV hyperspectral remote sensor images for mango plant disease and pest identification using MD-FCM and XCS-RBFNN. Environmental Monitoring and Assessment 2023, 195, 1120.
194. Fernández, C.I.; Leblon, B.; Wang, J.; Haddadi, A.; Wang, K. Detecting infected cucumber plants with close-range multispectral imagery. Remote Sensing 2021, 13, 2948.
195. Haider, I.; Khan, M.A.; Nazir, M.; Kim, T.; Cha, J.-H. An artificial intelligence-based framework for fruits disease recognition using deep learning. Computer Systems Science & Engineering 2024, 48, 1–15.
196. Li, J.; Zhu, Z.; Liu, H.; Su, Y.; Deng, L. Strawberry R-CNN: Recognition and counting model of strawberry based on improved Faster R-CNN. Ecological Informatics 2023, 75, 102210.
197. Li, H.; Jin, Y.; Zhong, J.; Zhao, R. A fruit tree disease diagnosis model based on stacking ensemble learning. Complexity 2021, 2021, 6868592.
198. Mehmood, A.; Ahmad, M.; Ilyas, Q.M. On precision agriculture: enhanced automated fruit disease identification and classification using a new ensemble classification method. Agriculture 2023, 13, 500.
199. Yousuf, A.; Khan, U. Ensemble classifier for plant disease detection. International Journal of Computer Science and Mobile Computing 2021, 10, 14–22.
200. Javidan, S.M.; Banakar, A.; Vakilian, K.A.; Ampatzidis, Y. Tomato leaf diseases classification using image processing and weighted ensemble learning. Agronomy Journal 2024, 116, 1029–1049.
201. Nader, A.; Khafagy, M.H.; Hussien, S.A. Grape leaves diseases classification using ensemble learning and transfer learning. International Journal of Advanced Computer Science and Applications 2022, 13, 563–571.
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
Copyright: This open access article is published under a Creative Commons CC BY 4.0 license, which permits free download, distribution, and reuse, provided that the author and preprint are cited in any reuse.