This section summarizes studies that use artificial intelligence to classify brain tumors, in addition to the brain tumor segmentation techniques already highlighted.
5.3.1. MRI Brain tumor segmentation
This section discusses the machine learning, deep learning, region-growing, and thresholding strategies for brain tumor segmentation proposed in the literature.
To segment brain tumors, Gordillo et al. [80] utilized a fuzzy logic system built from features extracted from MR images and from expert knowledge. The system is fully automatic and learns in an unsupervised manner. In trials conducted on two forms of brain tumor, meningioma and glioblastoma multiforme, the segmentation results were satisfactory, with accuracy ranging from a minimum of 71% to a maximum of 93%.
Rajendran [81] presented a fuzzy-logic-based approach employing fuzzy c-means clustering on MRI for segmenting brain tumors. The tumor-class output of the fuzzy clustering was used to initialize a region-based technique that iteratively evolves towards the final tumor boundary. The approach was tested on 15 MR images with manual segmentation ground truth, and the overall result was suitable, with a sensitivity of 96.37% and an average Jaccard coefficient of 83.19%.
Kishore et al. [82] applied an SVM classifier to categorize tumor pixels using feature vectors extracted from MR images, such as mean intensity and LBP. Level sets and region-growing techniques were used for the segmentation. Their experiments used MR images with manually delineated tumor regions from 11 participants, and the method proved effective with a DSC of 0.69.
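As a rough illustration of this kind of feature-plus-SVM pixel pipeline, the sketch below computes a mean intensity and an LBP histogram per patch and trains an SVM on synthetic patches; the patch size, LBP parameters, and toy data are assumptions for illustration, not the setup of [82].

```python
# Sketch: patch-wise LBP + mean-intensity features fed to an SVM, in the
# spirit of [82]. Patch size, LBP parameters, and the synthetic labels
# are illustrative assumptions.
import numpy as np
from skimage.feature import local_binary_pattern
from sklearn.svm import SVC

def patch_features(patch, n_points=8, radius=1):
    """Mean intensity + uniform-LBP histogram for one image patch."""
    lbp = local_binary_pattern(patch, n_points, radius, method="uniform")
    hist, _ = np.histogram(lbp, bins=n_points + 2, range=(0, n_points + 2), density=True)
    return np.concatenate([[patch.mean()], hist])

# Toy data: random "tumor" and "non-tumor" patches stand in for real MR data.
rng = np.random.default_rng(0)
patches = [rng.normal(loc=mu, scale=0.1, size=(16, 16)) for mu in (0.3, 0.7) for _ in range(50)]
labels = [0] * 50 + [1] * 50

X = np.array([patch_features(p) for p in patches])
clf = SVC(kernel="rbf").fit(X, labels)
print("training accuracy:", clf.score(X, labels))
```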
Figure 12 shows a block schematic illustrating brain tumor segmentation.
Abbasi and Tajeripour [39] presented a framework for segmenting tumors in 3D MRI volumes. In the first phase, the input image's contrast is improved using bias field correction. In the second phase, the data are reduced using the multi-level Otsu technique. In the third, feature-extraction stage, LBP in three orthogonal planes and an enhanced image histogram are employed. Lastly, a random forest is used as the classifier for distinguishing tumorous areas, since it handles large inputs well and delivers high segmentation accuracy. The overall outcome was acceptable, with a mean Jaccard value of 87% and a DSC of 93%.
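A minimal sketch of the staged idea in [39], multi-level Otsu quantization followed by a random-forest pixel classifier, is given below; the per-pixel features and the synthetic image are assumptions, and the LBP-TOP and histogram-enhancement steps are omitted.

```python
# Sketch: multi-level Otsu quantization followed by a random-forest pixel
# classifier, loosely following the staged pipeline in [39]. Feature choice
# and the synthetic image are assumptions for illustration.
import numpy as np
from skimage.filters import threshold_multiotsu
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(1)
image = rng.random((64, 64))                        # stand-in for a bias-corrected MR slice
mask = np.zeros_like(image, dtype=int)
mask[20:40, 20:40] = 1                              # pretend tumor region
image[mask == 1] += 0.8                             # brighter "tumor"

thresholds = threshold_multiotsu(image, classes=3)  # multi-level Otsu
quantized = np.digitize(image, bins=thresholds)     # reduced intensity levels

# Per-pixel features: raw intensity + quantized level.
X = np.stack([image.ravel(), quantized.ravel()], axis=1)
y = mask.ravel()

rf = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)
print("pixel accuracy:", rf.score(X, y))
```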
Almahfud et al. [83] suggested a technique that combines K-means and FCM clustering to segment human brain MRI images and identify brain cancers. K-means is more sensitive to color variations, so it can quickly and effectively find optima and local outliers. The K-means results were then clustered again with FCM to classify the convex contour based on the boundary, giving better clusters with a simpler calculation procedure. Morphology and noise-reduction operations were also applied to increase accuracy. The study used 62 brain MRI scans and reported an accuracy of 91.94%.
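The combined-clustering idea can be sketched as K-means initialization followed by a fuzzy c-means refinement, as below; the small NumPy FCM and the toy intensity data are simplified assumptions rather than the authors' implementation.

```python
# Sketch: K-means followed by fuzzy c-means refinement, echoing the combined
# clustering idea in [83]. The FCM implementation and synthetic data are
# simplified assumptions.
import numpy as np
from sklearn.cluster import KMeans

def fuzzy_cmeans(X, c, m=2.0, n_iter=50, init_centers=None, seed=0):
    rng = np.random.default_rng(seed)
    centers = init_centers if init_centers is not None else rng.choice(X, size=c, replace=False)
    for _ in range(n_iter):
        dist = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) + 1e-9
        u = 1.0 / np.sum((dist[:, :, None] / dist[:, None, :]) ** (2 / (m - 1)), axis=2)
        centers = (u.T ** m @ X) / np.sum(u.T ** m, axis=1, keepdims=True)
    return u, centers

rng = np.random.default_rng(2)
pixels = np.concatenate([rng.normal(0.2, 0.05, (500, 1)),
                         rng.normal(0.8, 0.05, (200, 1))])   # toy pixel intensities

km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(pixels)
u, centers = fuzzy_cmeans(pixels, c=2, init_centers=km.cluster_centers_)
labels = np.argmax(u, axis=1)
print("fuzzy cluster centers:", centers.ravel())
```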
Pereira et al. [69] proposed an automated segmentation technique based on a CNN architecture that explores small 3×3 kernels. Using small kernels reduces the number of weights in the network, which enables deeper, more intricate architectures and helps prevent overfitting. They also investigated intensity normalization as a preprocessing step, which, combined with data augmentation, proved highly effective for segmenting brain tumors in MRI images. Their proposal was validated on the BRATS database, yielding Dice Similarity Coefficient values of 0.88, 0.83, and 0.77 on the Challenge data set for the whole, core, and enhancing regions, respectively.
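A minimal sketch of a patch classifier built from stacked 3×3 convolutions, in the spirit of this design, is shown below; the layer widths, patch size, and number of classes are illustrative assumptions.

```python
# Sketch: a small patch classifier built from stacked 3x3 convolutions,
# in the spirit of Pereira et al. [69]. Layer widths, patch size, and the
# number of classes are illustrative assumptions.
import torch
import torch.nn as nn

class SmallKernelCNN(nn.Module):
    def __init__(self, in_channels=4, n_classes=5):   # 4 MRI modalities, 5 tissue labels (assumed)
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(in_channels, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(64 * 8 * 8, 128), nn.ReLU(), nn.Dropout(0.5),
            nn.Linear(128, n_classes),
        )

    def forward(self, x):                  # x: (batch, modalities, 32, 32) patches
        return self.classifier(self.features(x))

model = SmallKernelCNN()
dummy = torch.randn(8, 4, 32, 32)          # batch of normalized patches
print(model(dummy).shape)                  # -> torch.Size([8, 5])
```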
In [84], a distinctive approach for segmenting brain tumors based on the properties of separated local squares was suggested. The procedure essentially consists of three parts. In the first stage, a superpixel segmentation technique divides the image into homogeneous regions of roughly comparable properties and size. The second phase extracts gray-level statistical features and textural information. In the final, model-building phase, superpixels are labeled as tumor or non-tumor regions using an SVM. The technique was tested on 20 images from the BRATS dataset, where a DSC of 86.12% was attained.
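The superpixel-plus-SVM pipeline can be sketched as follows with SLIC superpixels and simple gray-level statistics; the superpixel count, features, and toy labels are assumptions.

```python
# Sketch: SLIC superpixels with gray-level statistics per superpixel,
# classified by an SVM, loosely mirroring the pipeline in [84]. Superpixel
# count, features, and the toy labels are assumptions.
import numpy as np
from skimage.segmentation import slic
from sklearn.svm import SVC

rng = np.random.default_rng(3)
image = rng.random((128, 128))
image[40:80, 40:80] += 0.8                   # brighter pretend-tumor block
image = np.clip(image, 0, 1)

segments = slic(image, n_segments=100, compactness=0.1, channel_axis=None)

feats, labels = [], []
for sp in np.unique(segments):
    region = image[segments == sp]
    feats.append([region.mean(), region.std(), region.max()])
    labels.append(int(region.mean() > 0.8))  # crude stand-in for expert labels

clf = SVC(kernel="rbf").fit(np.array(feats), labels)
print("superpixels:", len(np.unique(segments)), "training acc:", clf.score(np.array(feats), labels))
```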
The CAD system suggested by Gupta et al. [85] offers a non-invasive method for accurate segmentation and detection of gliomas. The system exploits the combined properties of superpixels and the FCM clustering technique. The suggested CAD method reported 98% accuracy for glioma detection in both low-grade and high-grade tumors.
Cui et al. [67] proposed brain tumor segmentation based on passing CNN outputs to an SVM classifier. Their algorithm consists of two cascaded phases. In the first, they trained a CNN to learn the mapping from the image region to the tumor label region. In the testing phase, the test image and the CNN's predicted label output were passed to an SVM classifier for precise segmentation. Tests and evaluations show that the suggested structure outperforms separate SVM-based or CNN-based segmentation, achieving a DSC of 86.12%.
The two-pathway-group CNN (2PG-CNN) architecture described by Razzak et al. [86] is a novel approach for brain tumor segmentation that simultaneously exploits local and global contextual features, as shown in Figure 13. The model enforces equivariance through parameter sharing, preventing instability and overfitting. A cascade architecture is included in which the output of a basic CNN is treated as an additional source and combined at the last layer of the 2PG-CNN. When the group CNN was embedded into the two-pathway architecture and validated on BRATS datasets, the results were a DSC of 89.2%, precision of 88.22%, and sensitivity of 88.32%.
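A minimal sketch of a two-pathway patch classifier, with a local and a global branch concatenated before the output layer, is given below; the filter sizes and widths are assumptions, and the group-equivariance component of 2PG-CNN is omitted for brevity.

```python
# Sketch: a two-pathway patch classifier with a local and a global receptive
# field whose outputs are concatenated before the final layer, echoing the
# two-path idea in [86]. Exact filter sizes and widths are assumptions.
import torch
import torch.nn as nn

class TwoPathwayCNN(nn.Module):
    def __init__(self, in_channels=4, n_classes=5):
        super().__init__()
        self.local_path = nn.Sequential(            # small kernels: fine local detail
            nn.Conv2d(in_channels, 64, kernel_size=7), nn.ReLU(),
            nn.Conv2d(64, 64, kernel_size=3), nn.ReLU(),
        )
        self.global_path = nn.Sequential(           # large kernel: wider context
            nn.Conv2d(in_channels, 160, kernel_size=13), nn.ReLU(),
        )
        self.head = nn.Conv2d(64 + 160, n_classes, kernel_size=21)

    def forward(self, x):                            # x: (batch, modalities, 33, 33)
        local_out = self.local_path(x)               # -> (batch, 64, 25, 25)
        global_out = self.global_path(x)             # -> (batch, 160, 21, 21)
        local_out = local_out[:, :, 2:-2, 2:-2]      # crop to match global path
        fused = torch.cat([local_out, global_out], dim=1)
        return self.head(fused).flatten(1)           # -> (batch, n_classes)

model = TwoPathwayCNN()
print(model(torch.randn(2, 4, 33, 33)).shape)        # torch.Size([2, 5])
```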
A semantic segmentation model for segmenting brain tumors from multi-modal 3D MRIs of the BraTS dataset was published in [87]. After experimenting with several normalization techniques, the authors found that group norm and instance norm performed equally well. They also tested more advanced data augmentation methods, such as random histogram pairing, linear image transformations, rotations, and random image filtering, but these did not show any significant benefit. Increasing the network depth did not improve performance, whereas increasing the number of filters consistently produced better results. Their Dice values on the BraTS final testing dataset were 0.826, 0.882, and 0.837 for the enhancing tumor, whole tumor, and tumor core, respectively.
Karayegen and Aksahin [88] used CNNs to offer a semantic segmentation approach for autonomously segmenting brain tumors on BraTS image datasets comprising four imaging modalities (T1, T1C, T2, and FLAIR). The technique was applied effectively, and images were displayed in sagittal, coronal, and axial planes to determine the precise tumor location and parameters such as height, width, and depth. The evaluation findings of the semantic segmentation network are highly encouraging for tumor prediction: mean IoU and mean prediction ratio were calculated as 86.946 and 91.718, respectively.
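For reference, the IoU (Jaccard) and Dice scores quoted throughout this section can be computed on binary masks as in the short sketch below; it is a generic illustration, not any paper's evaluation code.

```python
# Sketch: IoU (Jaccard) and Dice scores on binary segmentation masks.
import numpy as np

def iou(pred, target):
    pred, target = pred.astype(bool), target.astype(bool)
    inter = np.logical_and(pred, target).sum()
    union = np.logical_or(pred, target).sum()
    return inter / union if union else 1.0

def dice(pred, target):
    pred, target = pred.astype(bool), target.astype(bool)
    inter = np.logical_and(pred, target).sum()
    total = pred.sum() + target.sum()
    return 2 * inter / total if total else 1.0

pred = np.zeros((64, 64)); pred[10:40, 10:40] = 1   # predicted tumor mask
gt = np.zeros((64, 64));   gt[15:45, 15:45] = 1     # ground-truth mask
print(f"IoU={iou(pred, gt):.3f}  Dice={dice(pred, gt):.3f}")
```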
Table 5. MRI brain tumor segmentation.
Ref | Scan | Year | Technique | Method | Performance metric | Result
[80] | MRI | 2010 | Region-based | FCM | Acc | 93.00%
[81] | MRI | 2011 | Region-based | FCM | Jaccard | 83.19%
[82] | MRI | 2012 | NN | LBP with SVM | DSC | 69.00%
[69] | MRI | 2016 | DL | CNN | DSC | 88.00%
[84] | MRI | 2017 | NN | GLCM with SVM | DSC | 86.12%
[39] | MRI | 2017 | NN | LBP with RF | Jaccard & DSC | 87% & 93%
[85] | MRI | 2018 | Region-based | FCM | Acc | 98.00%
[83] | MRI | 2018 | Region-based | FCM and k-means | Acc | 91.94%
[67] | MRI | 2019 | DL & NN | CNN with SVM | DSC | 88.00%
[86] | MRI | 2019 | DL | Two-path CNN | DSC | 89.20%
[87] | MRI | 2019 | DL | Semantic | Acc | 88.20%
[88] | MRI | 2021 | DL | Semantic | IoU | 91.72%
5.3.2. MRI brain tumor classification using ML
The automated classification of brain cancers from MRI images has been the subject of much research. Data cleaning, feature extraction, and feature selection are the basic steps of the machine learning (ML) process used for this purpose; building an ML model from labeled samples is the final step.
A NN-based technique to categorize a given MR brain image as either normal or abnormal is presented in [89]. In this method, features were first extracted from the images using the wavelet transform, and the dimensionality of the features was then reduced using PCA. The reduced features were passed to a back-propagation NN trained with scaled conjugate gradient (SCG) to determine the optimal weights. The technique was applied to 66 images, 18 normal and 48 abnormal, and the classification accuracy on training and test images was 100%.
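A minimal sketch of this wavelet-PCA-neural-network pipeline is given below; the wavelet family, number of PCA components, and synthetic images are assumptions, and scikit-learn's MLPClassifier stands in for the scaled-conjugate-gradient network used in [89].

```python
# Sketch: 2D wavelet features reduced with PCA and fed to a back-propagation
# neural network, loosely following [89]. Wavelet family, PCA size, and the
# synthetic images are assumptions; MLPClassifier stands in for the SCG network.
import numpy as np
import pywt
from sklearn.decomposition import PCA
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline

def wavelet_features(img, wavelet="db4", level=2):
    coeffs = pywt.wavedec2(img, wavelet=wavelet, level=level)
    arr, _ = pywt.coeffs_to_array(coeffs)
    return arr.ravel()

rng = np.random.default_rng(4)
normals = [rng.normal(0.4, 0.1, (64, 64)) for _ in range(20)]     # toy "normal" slices
abnormals = [rng.normal(0.6, 0.1, (64, 64)) for _ in range(20)]   # toy "abnormal" slices
X = np.array([wavelet_features(i) for i in normals + abnormals])
y = np.array([0] * 20 + [1] * 20)

model = make_pipeline(PCA(n_components=10),
                      MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0))
model.fit(X, y)
print("training accuracy:", model.score(X, y))
```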
An automated and efficient CAD method based on ensemble classifiers was proposed by Arakeri and Reddy [36] for classifying brain tumors on MRI images as benign or malignant. Texture, shape, and border properties of the tumor were extracted as its representation, and ICA was used to select the most significant features. An ensemble classifier consisting of SVM, ANN, and kNN classifiers was then trained on these features to describe the tumor. A dataset of T1- and T2-weighted MR images from 550 patients was used for the experiments. With an accuracy of 99.09% (sensitivity 100% and specificity 98.21%), the experimental findings demonstrated that the suggested approach achieves strong agreement with the combined classifier and is highly successful in identifying brain tumors.
Figure 14. CAD method based on ensemble classifiers [36].
In [90], a novel wavelet-energy-based method was suggested for automatically classifying MR images of the human brain as normal or abnormal. The classifier was an SVM, and biogeography-based optimization (BBO) was utilized to tune the SVM's weights. The method achieved 99% precision and 97% accuracy.
Amin et al. [91] suggested an automated technique to distinguish between malignant and benign brain MRI images. A variety of methodologies were used to segment candidate lesions, and a feature set covering shape, texture, and intensity was then selected for every candidate lesion. An SVM classifier was applied to this feature set, and the framework's precision was compared under various cross-validation schemes. Three benchmark datasets, Harvard, RIDER, and a local dataset, were used to verify the suggested technique. Average accuracy was 97.1%, the area under the curve was 0.98, sensitivity was 91.9%, and specificity was 98.0%.
A suitable CAD approach to classifying brain tumors is proposed in [92]. The database includes the primary brain tumors meningioma and astrocytoma, along with normal brain regions. Radiologists selected 20 × 20 regions of interest (ROIs) for every image in the dataset, and 371 intensity and texture features were extracted from these ROIs altogether. The three classes were separated using an ANN classifier, with an overall classification accuracy of 92.43%.
A varied dataset of 428 T1-weighted MR images from 55 individuals was used for multiclass brain tumor classification in [93]. A content-based active contour model extracted 856 ROIs, from which 218 intensity and texture features were computed. PCA was employed to reduce the dimensionality of the feature space, and an ANN was then used to classify the six categories, reaching a classification accuracy of 85.5%.
A unique strategy for classifying brain tumors in MRI images was proposed in [94], employing an improved structural descriptor and a hybrid kernel SVM. To better classify the image and improve texture feature extraction using statistical parameters, GLCM and histogram descriptors were used to derive texture features from every region. To enhance the classification process, different kernels were combined to create a hybrid kernel SVM classifier. They applied this technique only to axial T1-weighted brain MRI images, and the suggested strategy achieved 93% accuracy.
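A hybrid-kernel SVM of this kind can be sketched as a convex combination of standard kernels applied to GLCM texture features, as below; the mixing rule, kernel choices, and toy patches are assumptions rather than the exact formulation of [94].

```python
# Sketch: GLCM texture features with a hand-combined (hybrid) SVM kernel,
# in the spirit of [94]. The mixing weight, kernel choices, and toy patches
# are assumptions.
import numpy as np
from skimage.feature import graycomatrix, graycoprops
from sklearn.svm import SVC
from sklearn.preprocessing import StandardScaler
from sklearn.metrics.pairwise import rbf_kernel, polynomial_kernel

def glcm_features(patch):
    glcm = graycomatrix(patch, distances=[1], angles=[0], levels=256, symmetric=True, normed=True)
    return [graycoprops(glcm, p)[0, 0] for p in ("contrast", "homogeneity", "energy", "correlation")]

rng = np.random.default_rng(5)
patches = [rng.integers(0, 120, (32, 32), dtype=np.uint8) for _ in range(30)] + \
          [rng.integers(100, 256, (32, 32), dtype=np.uint8) for _ in range(30)]
y = np.array([0] * 30 + [1] * 30)
X = StandardScaler().fit_transform(np.array([glcm_features(p) for p in patches]))

def hybrid_kernel(A, B, alpha=0.5):
    """Convex combination of RBF and polynomial kernels (assumed mixing rule)."""
    return alpha * rbf_kernel(A, B, gamma=0.5) + (1 - alpha) * polynomial_kernel(A, B, degree=2)

clf = SVC(kernel=hybrid_kernel).fit(X, y)
print("training accuracy:", clf.score(X, y))
```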
A hybrid system composed of two ML techniques was suggested in [95] for classifying brain tumors. A total of 70 brain MR images (60 abnormal, 10 normal) were considered. DWT was used to extract features from the images, and PCA reduced the number of features. After feature extraction, a feed-forward back-propagation ANN (FP-ANN) and a KNN classifier were applied individually to the reduced features; FP-ANN updates its weights via back-propagation learning. This technique achieves accuracies of 97% with KNN and 98% with FP-ANN.
Figure 15 illustrates the proposed method's process model.
Table 6. MRI brain tumor classification using ML.
Ref | Scan | Year | Feature extraction | Feature selection | Classification | Acc
[95] | MRI | 2010 | GLCM | PCA | ANN and KNN | 98% and 97%
[89] | MRI | 2011 | Wavelet | PCA | Back-propagation NN | 100.00%
[93] | MRI | 2013 | Intensity and texture | PCA | ANN | 85.50%
[94] | MRI | 2014 | GLCM | - | SVM | 93.00%
[36] | MRI | 2015 | Texture and shape | ICA | SVM | 99.09%
[90] | MRI | 2015 | Wavelet | - | SVM | 97.00%
[91] | MRI | 2017 | Texture and shape | - | SVM | 97.10%
[92] | MRI | 2017 | Intensity and texture | - | ANN | 92.43%
5.3.3. MRI brain tumor classification using DL
Despite encouraging developments in ML algorithms for classifying brain tumors into their different types, difficulties remain in categorizing brain cancers from an MRI scan. These difficulties stem mostly from ROI detection and from the ineffectiveness of typical, labor-intensive feature extraction methods [96]. Owing to the nature of deep learning, brain tumor categorization has become a data-driven problem rather than one based on manually crafted features [97]. CNN is one of the deep learning models frequently utilized in brain tumor classification tasks and has produced significant results [98].
According to a study [99], a CNN can be used to grade gliomas both into two categories (low severity or high severity) and into multiple grades of severity (Grade II, Grade III, and Grade IV). The classifier reached accuracy rates of 71% and 96%.
Sultan et al. [100] proposed a DL approach based on a CNN to classify different kinds of brain tumors using two publicly available datasets. A block diagram of the proposed method is presented in Figure 16. The first task divides tumors into three categories: meningioma, pituitary, and glioma. The second distinguishes between Grade II, Grade III, and Grade IV gliomas. The first and second datasets contain 3064 and 516 T1-weighted images from 233 and 73 patients, respectively. The suggested network configuration achieves significant performance, with best overall accuracies of 96.13% and 98.7% for the two studies.
Figure 16. A block schematic of the suggested approach [100].
Similarly, study [101] showed how to classify brain MRI scans into malignant and benign using CNN algorithms together with data augmentation and image processing. The authors compared their CNN model against pre-trained VGG-16, Inception-v3, and ResNet-50 models using transfer learning. Although the experiment was carried out on a relatively small dataset, the results show that their model's accuracy is very strong with a very low complexity rate, reaching 100% accuracy compared to VGG-16's 96%, ResNet-50's 89%, and Inception-v3's 75%. The structure of the suggested CNN architecture is shown in Figure 17.
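A minimal transfer-learning sketch of the kind compared in [101] is shown below: a pre-trained VGG-16 with its convolutional backbone frozen and a new two-class head; the frozen layers, learning rate, and dummy inputs are assumptions (torchvision ≥ 0.13 API).

```python
# Sketch: fine-tuning a pre-trained VGG-16 with a two-class head
# (benign vs. malignant), in the spirit of the comparison in [101].
# Frozen layers, learning rate, and the dummy data are assumptions.
import torch
import torch.nn as nn
from torchvision import models

model = models.vgg16(weights=models.VGG16_Weights.IMAGENET1K_V1)
for param in model.features.parameters():
    param.requires_grad = False                  # freeze the convolutional backbone

model.classifier[6] = nn.Linear(4096, 2)         # replace the final layer for 2 classes

optimizer = torch.optim.Adam(model.classifier[6].parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()

# One illustrative training step on dummy data standing in for MRI slices
images = torch.randn(4, 3, 224, 224)             # MRI slices replicated to 3 channels
labels = torch.tensor([0, 1, 0, 1])
optimizer.zero_grad()
loss = criterion(model(images), labels)
loss.backward()
optimizer.step()
print("loss:", loss.item())
```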
For accurate glioma grade prediction, a customized CNN-based deep learning model was developed in [102] and its performance was compared with AlexNet, GoogLeNet, and SqueezeNet fine-tuned via transfer learning. The models were trained and evaluated on 104 clinical glioma patients (50 LGGs and 54 HGGs). The training data were expanded using a variety of data augmentation methods, and a five-fold cross-validation procedure was used to assess each model. According to the study's findings, the custom deep CNN model matched or outperformed the pretrained models, with accuracy, sensitivity, F1 score, specificity, and AUC values of 0.971, 0.980, 0.970, 0.963, and 0.989, respectively.
A novel transfer learning-based active learning paradigm for classifying brain tumors was proposed by Ruqian et al. [103]. Figure 18 describes the active learning workflow. Using a 2D slice-based technique, they trained and fine-tuned their model on an MRI training dataset of 203 patients and a baseline validation dataset of 66 patients. The suggested approach allowed the model to reach an area under the ROC curve of 82.89%. To further investigate the robustness of their strategy, they built a balanced dataset and ran the same process on it; the model's AUC was 82%, compared to the baseline's 78.48%.
A total of 131 patients with glioma were enrolled in [104]. Tumor images were segmented with a rectangular ROI containing around 80% of the tumor, and 20% of the patient-level data was then randomly selected as the test dataset. Models pre-trained on the large natural image database ImageNet were applied to the MRI images, and AlexNet and GoogLeNet were both trained from scratch and fine-tuned. Five-fold cross-validation (CV) on the patient-level split was used to evaluate the classification task. The averaged validation accuracy, test accuracy, and test AUC over the five folds for GoogLeNet were 0.867, 0.909, and 0.939, respectively.
An intelligent medical decision-support system was proposed by Hamdaoui et al. [105] for identifying and categorizing brain tumors from MR images. They employed deep transfer learning to get around the scarcity of training data required to construct a CNN model. For this, they selected seven CNN architectures pre-trained on ImageNet and carefully fine-tuned them on brain tumor MRI data gathered from the BraTS database, as shown in Figure 19. To increase the accuracy of the model, only the prediction with the highest score among the seven pre-trained CNNs is retained. They evaluated their primary two-class model, covering LGG and HGG brain cancers, using 10-fold cross-validation. The test accuracy, F1 score, test precision, and test sensitivity of their model were 98.67%, 98.06%, 98.33%, and 98.06%, respectively.
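The "keep the highest-scoring prediction" rule can be sketched as below on softmax outputs from several models; the toy probabilities stand in for the seven fine-tuned CNNs of [105].

```python
# Sketch: max-confidence ensembling over several models' softmax outputs,
# echoing the highest-score selection rule described in [105]. The toy
# probabilities are placeholders for real CNN outputs.
import torch

def max_confidence_ensemble(prob_list):
    """prob_list: list of (batch, n_classes) softmax tensors, one per model.
    Returns the predicted class from whichever model is most confident."""
    probs = torch.stack(prob_list)                       # (n_models, batch, n_classes)
    best_model = probs.max(dim=2).values.argmax(dim=0)   # most confident model per sample
    batch_idx = torch.arange(probs.shape[1])
    return probs[best_model, batch_idx].argmax(dim=1)

# Toy example: three "models", four samples, two classes (e.g., LGG vs. HGG)
p1 = torch.tensor([[0.6, 0.4], [0.2, 0.8], [0.55, 0.45], [0.7, 0.3]])
p2 = torch.tensor([[0.9, 0.1], [0.4, 0.6], [0.3, 0.7], [0.5, 0.5]])
p3 = torch.tensor([[0.5, 0.5], [0.1, 0.9], [0.6, 0.4], [0.8, 0.2]])
print(max_confidence_ensemble([p1, p2, p3]))             # predicted class per sample
```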
A new AI diagnosis model based on EfficientNetB0 was created by Khazaee et al. [106] to assess and categorize human brain gliomas using MR image sequences. They used a common dataset (BraTS 2019) to validate the new AI model and showed that the AI components, CNN and transfer learning, provided outstanding performance for categorizing and grading glioma images, with 98.8% accuracy.
In [70], a model using transfer learning and a pre-trained ResNet18 was developed to more accurately identify basal ganglia germinomas. In this retrospective analysis, 73 patients with basal ganglia germinoma were enrolled. Brain tumors were manually segmented on both T1 and T2 data, and the T1 sequence was used to create the tumor classification model. Transfer learning and a 2D convolutional network were used. The model was trained with 5-fold cross-validation and achieved a mean AUC of 88%.
An effective hyperparameter optimization method for CNNs based on Bayesian optimization was suggested in [107]. The method was assessed by categorizing 3064 T1-weighted images into three different types of brain cancer (glioma, pituitary, and meningioma), and the optimized CNN's performance was compared with five popular deep pre-trained models using transfer learning. Their CNN achieved 98.70% validation accuracy after applying Bayesian optimization.
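A sketch of this style of hyperparameter search is given below using Optuna, whose TPE sampler is a Bayesian-optimization flavour; to stay self-contained, a small scikit-learn MLP on synthetic data stands in for the CNN of [107], and the search space is an assumption.

```python
# Sketch: Bayesian-style hyperparameter search, in the spirit of [107].
# Optuna's TPE sampler stands in for the paper's Bayesian optimizer, and a
# small MLP on synthetic data stands in for the CNN to keep this runnable.
import optuna
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(6)
X = rng.normal(size=(200, 20))
y = (X[:, 0] + X[:, 1] > 0).astype(int)             # toy stand-in for image features

def objective(trial):
    lr = trial.suggest_float("learning_rate", 1e-4, 1e-1, log=True)
    hidden = trial.suggest_int("hidden_units", 8, 128)
    alpha = trial.suggest_float("alpha", 1e-6, 1e-2, log=True)
    clf = MLPClassifier(hidden_layer_sizes=(hidden,), learning_rate_init=lr,
                        alpha=alpha, max_iter=500, random_state=0)
    return cross_val_score(clf, X, y, cv=3).mean()   # maximize CV accuracy

study = optuna.create_study(direction="maximize")
study.optimize(objective, n_trials=20)
print("best params:", study.best_params, "best CV accuracy:", study.best_value)
```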
A newly generated transfer DL model was developed by Alanazi et al. [108] for the early diagnosis of brain cancers into categories such as meningioma, pituitary, and glioma. Several standalone CNN models were first constructed from scratch and tested on brain MRI images. The weights of the neurons of the 22-layer standalone CNN model were then updated using transfer learning to categorize brain MRI images into tumor subclasses. The resulting transfer-learned model reached an accuracy of 95.75%.
Rizwan et al. [109] suggested a method to identify various brain tumor classes using a Gaussian-CNN on two datasets (Figure 20). One dataset is used to categorize lesions into pituitary, glioma, and meningioma; the other distinguishes between the three glioma grades (II, III, and IV). The first and second datasets contain 3064 and 516 T1-weighted contrast-enhanced images from 233 and 73 patients, respectively. The suggested method achieved accuracies of 99.8% and 97.14% on the two datasets.
A seven-layer CNN was suggested in [110] to assist with the three-class categorization of brain MR images. Separable convolution was used to decrease computation time. The suggested separable CNN model achieves 97.52% accuracy on a publicly available dataset of 3064 images. Figure 21 illustrates the proposed method.
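The computational saving of separable convolution can be sketched by comparing parameter counts, as below; the channel counts are illustrative assumptions, not the exact layers of [110].

```python
# Sketch: a depthwise-separable convolution block of the kind used to cut
# computation in the seven-layer network of [110]. Channel counts are
# illustrative assumptions.
import torch
import torch.nn as nn

class SeparableConv2d(nn.Module):
    """3x3 depthwise convolution followed by a 1x1 pointwise convolution."""
    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.depthwise = nn.Conv2d(in_ch, in_ch, kernel_size=3, padding=1, groups=in_ch)
        self.pointwise = nn.Conv2d(in_ch, out_ch, kernel_size=1)

    def forward(self, x):
        return self.pointwise(self.depthwise(x))

standard = nn.Conv2d(64, 128, kernel_size=3, padding=1)
separable = SeparableConv2d(64, 128)
count = lambda m: sum(p.numel() for p in m.parameters())
print("standard conv params: ", count(standard))     # ~74k
print("separable conv params:", count(separable))    # ~9k
x = torch.randn(1, 64, 56, 56)
print(separable(x).shape)                             # torch.Size([1, 128, 56, 56])
```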
Several pre-trained CNNs were utilized in [111], including GoogLeNet, AlexNet, ResNet50, ResNet101, VGG-16, VGG-19, InceptionResNetV2, and Inception-v3. The final few layers of these networks were modified to accommodate the new image categories. Data from clinical, Harvard, and Figshare repositories were used to assess these models, with the dataset split into training and testing portions at a 60:40 ratio. Validation on the test set demonstrated that AlexNet with transfer learning offered the best performance in the shortest time compared to the other models. The method obtained accuracies of 100%, 94%, and 95.92% on the three datasets and is more general because it does not require any manually crafted features.
The framework suggested in [112] describes three experiments that classified brain malignancies such as meningiomas, gliomas, and pituitary tumors using three CNN designs (AlexNet, VGGNet, and GoogLeNet). Using the MRI slices of the Figshare brain tumor dataset, each experiment then investigates transfer learning approaches such as fine-tuning and freezing. Data augmentation is applied to the MRI slices to generalize the results, increase the number of dataset samples, and minimize the risk of overfitting. The fine-tuned VGG16 architecture attained the best classification accuracy of 98.69% in the proposed studies.
Table 7. MRI brain tumor classification using DL.
Ref | Scan | Year | Technique | Method | Result | Performance metric
[99] | MRI | 2015 | DL | Custom CNN | 96.00% | Acc
[100] | MRI | 2019 | DL | Custom CNN | 98.70% | Acc
[101] | MRI | 2020 | DL | VGG-16, Inception-v3, ResNet-50 | 96%, 75%, 89% | Acc
[102] | MRI | 2021 | DL | AlexNet, GoogLeNet, SqueezeNet | 97.10% | Acc
[103] | MRI | 2021 | DL | Custom CNN | 82.89% | AUC
[104] | MRI | 2018 | DL | AlexNet | 90.90% | Test Acc
[105] | MRI | 2021 | DL | Multi-CNN structure | 98.67%, 98.06%, 98.33%, 98.06% | Acc, F1 score, precision, sensitivity
[106] | MRI | 2022 | DL | EfficientNetB0 | 98.80% | Acc
[70] | MRI | 2022 | DL | ResNet18 | 88.00% | AUC
[107] | MRI | 2022 | DL | Custom CNN | 98.70% | Acc
[108] | MRI | 2022 | DL | Custom CNN | 95.75% | Acc
[109] | MRI | 2022 | DL | Gaussian-CNN | 99.80% | Acc
[110] | MRI | 2020 | DL | Seven-layer CNN | 97.52% | Acc
[111] | MRI | 2021 | DL | AlexNet | 100.00% | Acc
[112] | MRI | 2019 | DL | VGG16 | 98.69% | Acc
5.3.4. Hybrid techniques
Hybrid strategies use multiple approaches to achieve high accuracy; they emphasize the benefits of each approach while minimizing its drawbacks. Typically, a first method segments the infected part of the brain and a second method performs the classification.
An integrated SVM- and ANN-based classification method is presented in [113]. The brain MRI images are first segmented with an FCM method whose updated membership and k value diverge from the standard algorithm. Two types of characteristics are then extracted from the segmented images to distinguish and categorize tumors. The first category, statistical features, was used with an SVM to differentiate normal from abnormal brain MRI images; this SVM stage has an accuracy of 97.44%. Additional criteria, such as area, perimeter, orientation, and eccentricity, were used to distinguish the tumor and the malignant stages I through IV. A back-propagation ANN then classifies the tumor categories and malignant stages, and this stage has a 97.37% accuracy rate.
A hybrid segmentation strategy using an ANN was suggested in [114] to enhance brain tumor classification results. First, the tumor region was segmented using skull stripping and thresholding. The segmented tumor was then delineated using the Canny algorithm, and features of the identified tumor cell region were used as the input of the ANN for classification. An accuracy of 98.9% can be attained with this strategy.
Ramdlon et al. [52] proposed a system that can detect tumors in T1 and T2 image sequences and classify their type. Only the axial sections of the MRI results, divided into three classes (glioblastoma, astrocytoma, and oligodendroglioma), are used for data analysis. Basic image processing techniques, including image enhancement, binarization, morphology, and watershed, were used to identify the tumor region. After segmentation and shape feature extraction, a KNN classifier was used to classify the tumors, and 89.5% of tumors were correctly classified.
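A minimal sketch of thresholding, morphology, and marker-based watershed of this kind is given below; the synthetic image and marker heuristics are assumptions, not the preprocessing of [52].

```python
# Sketch: thresholding, morphology, and marker-based watershed to isolate a
# bright region before shape features are extracted, in the spirit of [52].
# The synthetic image and marker heuristics are assumptions.
import numpy as np
from scipy import ndimage as ndi
from skimage.filters import threshold_otsu
from skimage.morphology import binary_opening, disk
from skimage.segmentation import watershed

rng = np.random.default_rng(7)
image = rng.normal(0.3, 0.05, (128, 128))
yy, xx = np.mgrid[:128, :128]
image[(yy - 64) ** 2 + (xx - 64) ** 2 < 20 ** 2] = 0.9   # bright pretend-tumor disk

binary = binary_opening(image > threshold_otsu(image), disk(3))   # binarize + clean up
distance = ndi.distance_transform_edt(binary)
markers, _ = ndi.label(distance > 0.6 * distance.max())           # inner markers
labels = watershed(-distance, markers, mask=binary)               # split touching regions

print("segmented regions:", labels.max(), "tumor pixels:", int((labels > 0).sum()))
```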
Gurbina et al. [115] described an integrated DWT and SVM classification method. Initial segmentation of the brain MRI images was performed using Otsu's approach. DWT features were obtained from the segmented images to identify and categorize tumors, and an SVM classifier divided the brain MRI images into benign and malignant categories with a 99% accuracy rate.
The objective of the study in [116] is multi-level segmentation for effective feature extraction and brain tumor classification from MRI data. After pre-processing the MRI data, the authors used thresholding, the watershed algorithm, and morphological methods for segmentation. Features are then extracted with a CNN, after which an SVM classifies the tumor images as cancerous or non-cancerous. The proposed algorithm has an overall accuracy of 87.4%.
The classification of brain tumors into three types, glioblastoma, sarcoma, and metastatic, was proposed by the authors of [117]. They first used FCM clustering to segment the brain tumor and DWT to extract features, which were then reduced with PCA. Classification was completed using a six-layer DNN, and the suggested method achieved 98% accuracy.
Table 8. Hybrid techniques.
Ref | Year | Segmentation method | Feature extraction | Classifier | Accuracy
[113] | 2017 | FCM | Shape and statistical | SVM and ANN | 97.44% and 97.37%
[117] | 2017 | FCM | DWT and PCA | DNN | 98.00%
[52] | 2019 | Watershed | Shape | KNN | 89.50%
[115] | 2019 | Otsu's | DWT | SVM | 99.00%
[116] | 2020 | Thresholding and watershed | CNN | SVM | 87.40%
[114] | 2020 | Canny | GLCM and Gabor | ANN | 98.90%
5.3.5. Various segmentation and classification methods employing CT images.
In [118], Wavelet Statistical Texture features (WST) and Wavelet Co-occurrence Texture features (WCT) were combined to automatically segment brain tumors in CT images. After GA was used to choose the best texture features, two different NN classifiers, shown in Figure 22, were tested for segmenting the tumor region. This approach provides good outcomes, with an accuracy rate above 97%.
A novel dominant feature extraction methodology for segmenting and classifying tumors in brain CT images using an SVM with GA-based feature selection was presented in [119]. FCM and K-means were used during the segmentation step, and GLCM and WCT during the feature extraction stage. This approach provides positive results, with an accuracy rate above 98%.
An improved semantic segmentation model for CT images was suggested in [120], with classification also included in the proposed work. In the suggested architecture, a semantic segmentation network with several convolutional and pooling layers first segments the brain image. The GoogleNet model then divides the tumors into three distinct groups: meningioma, glioma, and pituitary tumor. The overall accuracy achieved with this strategy was 99.6%.
A unique correlation learning mechanism (CLM) utilizing CNN and ANN was proposed by Woźniak et al. [121]. The CNN uses the support neural network to determine the best filters for its convolution and pooling layers. As a consequence, the main neural classifier learns faster and more efficiently. Results indicated that the CLM model can achieve 96% accuracy, 95% precision, and 95% recall.
Nanmaran et al. [122] examined the contribution of image fusion to an enhanced brain tumor classification framework; such a fusion-based tumor classification model can be applied more successfully to personalized therapy. A discrete cosine transform (DCT)-based fusion technique is used to combine MRI and SPECT images of benign and malignant brain tumors. SVM, KNN, and decision tree classifiers were tested on features extracted from the fused images; the SVM outperformed the others with an overall accuracy of 96.8%, specificity of 93%, recall of 94%, precision of 95%, and F1 score of 91%.
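A DCT-domain fusion step of this kind can be sketched with a max-magnitude coefficient selection rule, as below; the rule and the synthetic inputs are assumptions rather than the exact method of [122].

```python
# Sketch: DCT-domain fusion of two co-registered images, roughly like the
# fusion step described in [122], keeping whichever transform coefficient
# has the larger magnitude. The rule and synthetic inputs are assumptions.
import numpy as np
from scipy.fft import dctn, idctn

rng = np.random.default_rng(8)
mri = rng.normal(0.5, 0.1, (64, 64))      # stand-in for an MRI slice
spect = rng.normal(0.5, 0.1, (64, 64))    # stand-in for a co-registered SPECT slice
spect[20:40, 20:40] += 0.6                # functional hot spot

A = dctn(mri, norm="ortho")
B = dctn(spect, norm="ortho")
fused_coeffs = np.where(np.abs(A) >= np.abs(B), A, B)   # max-magnitude selection rule
fused = idctn(fused_coeffs, norm="ortho")

print("fused image shape:", fused.shape, "range:", float(fused.min()), float(fused.max()))
```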
Table 9. Various segmentation and classification methods employing CT images.
Ref | Year | Type | Segmentation | Feature extraction | Feature selection | Classification | Result
[118] | 2011 | CT | NN | WCT and WST | GA | - | 97.00%
[119] | 2011 | CT | FCM & k-means | GLCM and WCT | GA | SVM | 98.00%
[120] | 2020 | CT | Semantic | - | - | GoogleNet | 99.60%
[121] | 2021 | CT | - | - | - | CNN | 96.00%
[122] | 2022 | SPECT/MRI | - | DCT | - | SVM | 96.80%