Preprint
Review

A Review of Brain Tumor Diagnosis and Segmentation

This version is not peer-reviewed

Submitted: 29 June 2023
Posted: 30 June 2023

Abstract
Uncontrolled and rapid cell proliferation is the cause of brain tumors. Early cancer detection is vitally important to save many lives. Brain tumors can be divided into several categories depending on their kind, place of origin, pace of development, and stage of progression; as a result, tumor classification is crucial for targeted therapy. The aim of brain tumor segmentation is to accurately delineate the tumorous regions of the brain. A specialist with a thorough understanding of brain illnesses must manually identify the proper type of brain tumor. Additionally, processing a large number of images is time-consuming and tiresome. Therefore, automatic segmentation and classification techniques are required to speed up and enhance the diagnosis of brain tumors. Tumors can be quickly and safely detected by brain scans using imaging modalities, including computed tomography (CT), magnetic resonance imaging (MRI), and others. Machine learning (ML) and artificial intelligence (AI) have shown promise in the development of algorithms that aid in automatic classification and segmentation utilizing various imaging modalities. This review discusses various types of brain tumors, publicly accessible datasets, enhancement methods, segmentation, feature extraction, classification, machine learning techniques, deep learning, and transfer learning for the study of brain tumors.
Keywords: 
Subject: Computer Science and Mathematics - Artificial Intelligence and Machine Learning

1. Introduction

The human brain, which serves as the control center for all of the body's organs, is a highly developed organ that enables a person to adapt to and withstand a variety of environmental situations [1]. The human brain gives a person the ability to express themselves in words, carry out activities, and express thoughts and feelings. Cerebrospinal fluid (CSF), white matter (WM), and grey matter (GM) are the three major tissue components of the human brain. The gray matter, which regulates brain activity, is composed of neurons and glial cells. The cerebral cortex is connected to other brain areas through white matter fibers, which are made up of many myelinated axons. The corpus callosum, a substantial band of white matter fibers, connects the left and right hemispheres of the brain [2]. A brain tumor is an uncontrolled and aberrant growth of brain cells. Because the human skull is a rigid and volume-restricted structure, any unanticipated growth may affect human functioning, depending on the area of the brain involved. Additionally, it might spread to other organs, further jeopardizing human functions [3]. Early detection of cancer makes it possible to plan effective treatment, which is crucial for the healthcare sector [4]. Cancer is difficult to cure, and the odds of survival are significantly reduced if it spreads to nearby cells. There is no doubt that many lives could be preserved if cancer were detected at its earliest stage using quick and affordable diagnostic methods. Both invasive and non-invasive approaches may be utilized to diagnose brain cancer. An incision is made during a biopsy to extract a sample of the lesion for analysis. It is regarded as the gold standard for cancer diagnosis: pathologists examine several cell characteristics of the tumor specimen under a microscope to verify the malignancy.
Non-invasive techniques include physical inspections of the body and imaging modalities employed for imaging the brain [5]. In comparison to brain biopsy, imaging modalities such as CT scans and MRI are more rapid and secure. Radiologists use these imaging techniques to identify brain problems, evaluate the development of diseases, and plan surgeries [6]. However, the interpretation of brain scans to diagnose illnesses is prone to inter-reader variability, and accuracy is dependent on the medical practitioner's competency [5]. In order to reduce diagnostic errors, it is crucial to identify the type of brain disorder with accuracy. Utilizing computer-aided diagnostic (CAD) technologies can improve accuracy. The fundamental idea behind CAD is to offer a computer result as an additional guide to help radiologists interpret images and shorten the reading time for images. This enhances the accuracy and stability of radiological diagnosis [7]. Several CAD-based artificial intelligence techniques, such as machine learning (ML) and deep learning (DL), are described in this review for diagnosing tissues and segmenting tumors.
The review is structured as follows: Types of brain tumors are described in Section 2. The imaging modalities utilized in brain imaging are discussed in Section 3. The algorithms utilized in the study are provided in Section 4. A review of relevant state-of-the-art work is provided in Section 5. The review is discussed in Section 6. The work's conclusion is presented in Section 7.

2. Types of brain tumors

The main three parts of the brain are the brain stem, cerebrum, and cerebellum [1]. The cerebellum is the second-largest component of the brain. It manages bodily motor activities, including balance, posture, walking, and general coordination of movements. It is positioned at the back of the brain and connected to the brain stem. Internal white matter, small but deeply positioned volumes of gray matter, and a very thin gray matter outer cortex can all be found in the cerebellum and cerebrum. The brainstem links to the spinal cord. It is situated at the brain's base. Vital bodily processes, including motor, sensory, cardiac, respiratory, and reflex functions, are all under the control of the brainstem. The medulla oblongata, pons, and midbrain are its three structural components [2]. Brain tumor is the medical term for an unexpected growth of brain cells [8]. Scientists have categorized brain tumors according to the tumor's location, the kind of tissue involved, whether they are malignant or benign, the location of origin (primary or secondary), and additional contributing elements [9]. The World Health Organization (WHO) categorizes brain tumors into 120 kinds. This categorization is based on the cell's origin and behavior, which ranges from less aggressive to more aggressive. Certain tumor forms are also graded, with grade I being the least malignant (e.g., meningiomas, pituitary tumors) and grade IV being the most malignant. Despite differences in grading systems that depend on the kind of tumor, the grade denotes the pace of growth [10]. The most frequent type of brain tumor in adults is a glioma, which may be classified into two types: high-grade glioma (HGG) and low-grade glioma (LGG). The WHO further categorizes LGG as grade I-II tumors and HGG as grade III-IV. To reduce diagnostic errors, accurate identification of the specific type of brain disorder is crucial for treatment planning.
Table 1. Types of Brain Tumors.
Basis | Type | Comment
Nature | Benign | Less aggressive and grows slowly.
Nature | Malignant | Life-threatening and rapidly expanding.
Origin | Primary tumor | Originates in the brain directly.
Origin | Secondary tumor | Develops in another area of the body, such as the lung or breast, before migrating to the brain.
Grading | Grade I | Cells are basically regular in shape, and the tumor develops slowly.
Grading | Grade II | Cells appear abnormal, but the tumor still grows slowly.
Grading | Grade III | These tumors grow more quickly than grade II tumors.
Grading | Grade IV | Reproduce at the greatest rate.
Progression stage | Stage 0 | Malignant but do not invade neighboring cells.
Progression stage | Stage 1 | Malignant and quickly spreading.
Progression stage | Stage 2 |
Progression stage | Stage 3 |
Progression stage | Stage 4 | The malignancy invades every part of the body.

3. Imaging Modalities

For many years, the detection of brain abnormalities has involved the use of several medical imaging methods. The two types of brain imaging approaches are structural scanning and functional scanning [11]. Structural imaging provides measurements relating to brain anatomy, tumor location, traumas, and other brain illnesses [12]. Functional imaging methods pick up finer-scale metabolic alterations, lesions, and visualizations of brain activity. Techniques including CT, MRI, single-photon emission computed tomography (SPECT), positron emission tomography (PET), functional MRI (fMRI), and ultrasound (US) have been utilized to localize brain tumors and characterize their size, location, shape, and other properties [13].

3.1. MRI

MRI is a noninvasive procedure which utilizes non-ionizing, safe radiation [14] to display the 3D anatomical structure of any region of the body without the need for cutting the tissue. To acquire images, it employs RF pulses and an intense magnetic field [15].
Figure 1. MRI scanner [15].
The body is positioned within an intense magnetic field. The water molecules of the human body are initially in their equilibrium state when the magnets are off. The magnetic field is then activated by moving the magnets. The body's water molecules align with the magnetic field's direction under the effect of this powerful magnetic field [16]. Protons are stimulated to spin against the magnetic field and realign by the application of a high RF energy pulse to the body in the magnetic field's direction. When the RF energy pulse is stopped, the water molecules return to their state of equilibrium and line up with the magnetic field once more [16]. This causes the water molecules to produce RF energy, which the scanner detects and transforms into visual images [17]. The tissue structure determines how much RF energy the water molecules give off. As we can see in Figure 2, a healthy brain has white matter (WM), gray matter (GM), and CSF, according to a structural MRI scan [18]. The primary difference between these tissues in a structural MRI scan is the amount of water they contain, with WM composed of about 70% water and GM of about 80% water, while the CSF is almost entirely water, as shown in Figure 2.
Figure 3 illustrates the fundamental MRI planes used to visualize the anatomy of the brain: axial, coronal, and sagittal. T1, T2, and FLAIR MRI sequences are most often employed for brain analysis [16]. A T1-weighted scan can distinguish between gray and white matter. T2-weighted imaging is sensitive to water content and is therefore ideally suited to conditions where water accumulates within the tissues of the brain.
In pathology, FLAIR is utilized to differentiate between CSF and abnormalities in the brain. During an MRI scan, gray-level intensity values in pixel space form an image. The gray-level intensity values depend on the cell density. On T1 and T2 images of a tumorous brain, the intensity level of the tumorous tissues differs [17].
Most tumors show low or medium gray intensity on T1-weighted images, while on T2-weighted images the majority of tumors exhibit bright intensity [18]. Examples of MRI tumor intensity levels are shown in Figure 4.
Table 2. Properties of various MRI sequences.
Tissue | T1 | T2 | FLAIR
White matter | Bright | Dark | Dark
Grey matter | Grey | Dark | Dark
CSF | Dark | Bright | Dark
Tumor | Dark | Bright | Bright

3.2. CT

CT scanners provide finely detailed images of the interior of the body using a revolving X-ray beam and a row of detectors. On a computer, specific algorithms process the images captured from various angles to create cross-sectional images of the entire body [19]. A CT scan can offer more precise images of the skull, spine, and other bone structures close to a brain tumor. Patients typically receive contrast injections to look for aberrant tissues. The patient may occasionally take dye to improve the image. When an MRI is unavailable, or when the patient has an implant such as a pacemaker, a CT scan may be performed to diagnose a brain tumor. The benefits of CT scanning are low cost, improved detection of tissue classes, quick imaging, and more widespread availability. However, the radiation dose of a CT scan is 100 times greater than that of a standard X-ray examination [19].

3.3. PET

Positron emission tomography (PET) is a nuclear medicine technique that analyzes the metabolic activity of biological tissues [20]. To help with the evaluation of the tissue being studied, a small amount of a radioactive tracer is utilized throughout the procedure. Fluorodeoxyglucose (FDG) is a popular PET agent for imaging the brain. In order to provide more conclusive information on malignant (cancerous) tumors and other lesions, PET may also be utilized in conjunction with other diagnostic procedures such as CT or MRI. PET scans an organ or tissue by utilizing a scanning device to find photons released by a radionuclide within it [20]. The tracer used in PET scans is created by combining a radioactive atom with chemical compounds that the specific organ or tissue normally utilizes during its metabolic process.
Figure 6. PET brain tumor.

3.4. SPECT

A nuclear imaging examination called SPECT combines CT with a radioactive tracer. The tracer is what enables medical professionals to observe the blood flow to tissues and organs [21]. A tracer is injected into the patient's bloodstream prior to the SPECT scan. Because the tracer is radiolabeled, it emits gamma rays that the scanner can detect. Gamma ray information is gathered by the computer and shown on the CT cross-sections. A 3D representation of the brain can be created by adding these cross-sections back together [21].

3.5. Ultrasound

An ultrasound is a specialized imaging technique that provides details that can be useful for cancer diagnosis, especially of soft tissues. It is frequently employed as the initial step in the typical cancer diagnostic procedure [22]. One advantage of ultrasound is that a test can be completed swiftly and affordably without subjecting the patient to radiation. However, ultrasound cannot independently confirm a cancer diagnosis and is unable to generate images with the level of resolution or detail of a CT or MRI scan. During a conventional ultrasound examination, a medical expert gently moves a transducer over the patient's skin across the region of the body being examined. A succession of high-frequency sound waves is generated by the transducer and "bounces off" the patient's internal organs. The resulting echoes return to the ultrasound device, which then transforms the sound waves into a 2-D image that may be observed in real time on a monitor. According to [22], US probes have been applied in brain tumor resection. Depending on the density of the tissue being assessed, the shape and strength of ultrasonic echoes can change. An ultrasound can detect tumors that may be malignant because solid masses and fluid-filled cysts bounce sound waves differently.

4. Classification and segmentation method

As was stated in the introduction, brain tumors are a leading cause of death worldwide. Computer-aided detection and diagnosis refers to software that utilizes DL, ML, and computer vision for analyzing radiological and pathological images. It has been created to assist radiologists in the diagnosis of human disease in a variety of body regions, including applications for brain tumors. This review explores different CAD-based artificial intelligence approaches, including ML and DL, for the automatic classification and segmentation of tumors.

4.1. Classification methods

Classification is an approach in which related data sets are grouped together according to common features. A classifier in classification is a model created for predicting the unique features of a class label. Predicting the desired class for each type of data is the fundamental goal of classification. Deep learning and machine learning techniques are used for the classification of medical images. The approach for obtaining the features used in the classification process is the key distinction between the two types.

4.1.1. Machine learning

ML is a branch of AI that allows computers to learn without being explicitly programmed. Classifying medical images, including lesions, into various groups using input features has become one of the latest applications of ML. There are two types of ML algorithms: supervised learning and unsupervised learning [23]. In supervised learning, ML algorithms learn from labeled data. Unsupervised learning is the process by which ML systems attempt to comprehend the inter-data relationships using unlabeled data. ML has been employed to analyze brain cancers in the context of brain imaging [24]. The main stages of ML classification are image preprocessing, feature extraction, feature selection, and classification. Figure 7 illustrates the process architecture.
1. Data Acquisition
As previously said, brain cancer images can be collected using several imaging modalities, such as MRI, CT, and PET, which effectively visualize aberrant brain tissue.
2. Preprocessing
Preprocessing is a very important stage in the medical field. Image enhancement and noise reduction are normally carried out during preprocessing. Noise significantly reduces the quality of medical images, making them diagnostically inefficient. In order to properly classify medical images, the preprocessing stage must be effective enough to eliminate as much noise as possible without affecting essential image components [25]. This procedure is carried out using a variety of approaches, including cropping, image scaling, histogram equalization, median filtering, and image adjustment [26].
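As an illustration of the median filtering mentioned above, the following is a minimal pure-Python sketch; the 3 x 3 toy image is invented for illustration, and a real pipeline would use an image-processing library.

```python
def median_filter(img, k=3):
    """Apply a k x k median filter to a 2D list of pixel intensities.

    Border pixels are handled by clamping the window to the image."""
    h, w = len(img), len(img[0])
    r = k // 2
    out = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            window = [img[j][i]
                      for j in range(max(0, y - r), min(h, y + r + 1))
                      for i in range(max(0, x - r), min(w, x + r + 1))]
            window.sort()
            out[y][x] = window[len(window) // 2]
    return out

# A 'salt' noise pixel (255) in a flat region is removed by the median.
noisy = [[10, 10, 10],
         [10, 255, 10],
         [10, 10, 10]]
print(median_filter(noisy)[1][1])  # -> 10
```

Unlike mean filtering, the median discards the outlier entirely, which is why it is popular for impulse noise in medical images.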
3. Feature extraction
In the medical field, the process of converting images into features based on several image characteristics is known as feature extraction. These features carry the same information as the original images, but in a different representation. This technique has the advantages of enhancing classifier accuracy, decreasing overfitting risk, allowing users to analyze data, and speeding up training [27]. Texture, contrast, brightness, shape, the gray level co-occurrence matrix (GLCM) [28], Gabor transforms [29], wavelet-based features [30], 3D Haralick features [31], and histograms of local binary patterns (LBP) [32] are some examples of the various types of features.
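As a sketch of one of the features listed above, the following pure-Python code builds a gray level co-occurrence matrix for a single pixel offset; the toy image and the horizontal (1, 0) offset are illustrative choices, and texture statistics (contrast, homogeneity, etc.) would then be derived from this matrix.

```python
def glcm(img, levels, dx=1, dy=0):
    """Gray level co-occurrence matrix for a 2D image with the given
    number of gray levels and a (dx, dy) pixel offset.

    Entry m[a][b] counts how often a pixel of level a has a neighbor
    of level b at the chosen offset."""
    m = [[0] * levels for _ in range(levels)]
    h, w = len(img), len(img[0])
    for y in range(h):
        for x in range(w):
            x2, y2 = x + dx, y + dy
            if 0 <= x2 < w and 0 <= y2 < h:
                m[img[y][x]][img[y2][x2]] += 1
    return m

img = [[0, 0, 1],
       [0, 1, 1],
       [2, 2, 2]]
m = glcm(img, levels=3)
print(m[0][0], m[0][1], m[2][2])  # -> 1 2 2
```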
4. Feature selection
This technique attempts to arrange the features in descending order of importance or relevance, with the top features being the ones mostly employed in classification. Multiple feature selection techniques are needed to reduce redundant information and discriminate between relevant and irrelevant features [33], such as PCA [34], the genetic algorithm (GA) [35], and ICA [36].
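The methods named above (PCA, GA, ICA) are too involved for a short sketch, so as a deliberately simplified stand-in, the following pure-Python example ranks features by their variance, a basic filter-style selection criterion; the toy feature matrix is invented for illustration.

```python
def rank_by_variance(samples):
    """Rank feature indices by variance across samples (descending).

    A constant or near-constant feature carries little discriminative
    information and lands at the bottom of the ranking."""
    n = len(samples)
    n_feat = len(samples[0])
    variances = []
    for j in range(n_feat):
        col = [s[j] for s in samples]
        mean = sum(col) / n
        variances.append(sum((v - mean) ** 2 for v in col) / n)
    return sorted(range(n_feat), key=lambda j: variances[j], reverse=True)

# Feature 1 varies the most; feature 2 is constant.
X = [[1.0, 0.0, 5.0],
     [1.1, 4.0, 5.0],
     [0.9, 8.0, 5.0]]
print(rank_by_variance(X))  # -> [1, 0, 2]
```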
5. ML algorithm
The objective of machine learning is to divide the input information into separate groups based on common features or patterns of behavior. KNN [37], ANN [38], RF [39], and SVM [40] are examples of supervised methods. These techniques include two stages: training and testing. During training, the data are manually labeled with human involvement. The model is first constructed in this step, after which it is used to determine the classes of unlabeled data in the testing stage. The KNN algorithm works by finding the points that are closest to each other, computing the distance between them using one of several different measures, including the Hamming, Manhattan, Euclidean, and Minkowski distances [37].
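As a concrete illustration of the distance-based voting just described, the following is a minimal pure-Python KNN classifier using Euclidean distance; the toy 2-D feature vectors and class labels are invented for illustration.

```python
import math
from collections import Counter

def knn_predict(train, query, k=3):
    """Classify `query` by majority vote among its k nearest
    training points, using Euclidean distance.

    `train` is a list of (feature_vector, label) pairs."""
    by_dist = sorted(train, key=lambda p: math.dist(p[0], query))
    votes = Counter(label for _, label in by_dist[:k])
    return votes.most_common(1)[0][0]

# Toy 2-D feature vectors, e.g. (mean intensity, contrast).
train = [((1.0, 1.0), "benign"), ((1.2, 0.8), "benign"),
         ((4.0, 4.2), "malignant"), ((4.1, 3.9), "malignant"),
         ((3.8, 4.0), "malignant")]
print(knn_predict(train, (4.0, 4.0)))  # -> malignant
```

Swapping `math.dist` for a Manhattan or Minkowski distance function changes only the sort key, which is why KNN is often presented as a family of distance measures.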
The support vector machine (SVM) technique is frequently employed for classification tasks. In this approach, every feature represents a coordinate, so each sample forms a data point in an n-dimensional space. The objective of the SVM method is to identify a boundary across this space, referred to as a hyperplane, that separates the classes [40]. There are numerous ways to draw different hyperplanes, but the one with the maximum margin is the best. The margin is the distance between the hyperplane and the closest data points of each class, known as the support vectors.
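The maximum-margin idea can be illustrated numerically. The sketch below does not train an SVM; it only evaluates the quantity an SVM maximizes, the geometric margin of a given hyperplane w . x + b = 0 over a toy 2-D dataset (the points and hyperplane are invented).

```python
import math

def margin(w, b, points):
    """Geometric margin of the hyperplane w . x + b = 0: the distance
    from the closest point (a support vector) to the hyperplane."""
    norm = math.sqrt(sum(wi * wi for wi in w))
    return min(abs(sum(wi * xi for wi, xi in zip(w, x)) + b)
               for x in points) / norm

# Two classes in 2-D, separated by the line x0 + x1 - 3 = 0.
pos = [(3.0, 2.0), (4.0, 3.0)]   # w . x + b > 0
neg = [(0.0, 1.0), (1.0, 0.0)]   # w . x + b < 0
w, b = (1.0, 1.0), -3.0
print(round(margin(w, b, pos + neg), 3))  # -> 1.414
```

Training an SVM amounts to searching over (w, b) for the separating hyperplane that makes this value as large as possible.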

4.1.2. Extreme Learning Machine (ELM)

Another newer approach that uses less computation than deep neural networks is the extreme learning machine (ELM). It is based on a real-time classification and regression technique known as the single-layer feed-forward neural network (SLFFNN). The input-to-hidden layer weights in the ELM are initialized randomly, whereas the hidden-to-output layer weights are trained using the Moore-Penrose inverse method [41] to obtain a least-squares solution. As a result, classification accuracy is increased while network complexity and training time are reduced and learning is faster.
Additionally, the hidden-layer weights give the network a classification capacity comparable to other ML techniques such as KNN, SVM, and Bayesian networks. As Figure 8 shows, the ELM network is composed of three fully connected layers. Only the weights between the hidden and output layers vary during training; the weights between the input and hidden layers are fixed at random initially and remain so throughout.
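A minimal sketch of the ELM training recipe described above, assuming a tanh hidden activation and two hidden units so the least-squares output weights can be solved with Cramer's rule on the 2 x 2 normal equations (a stand-in for the general Moore-Penrose pseudo-inverse); the toy 1-D data and labels are invented.

```python
import math, random

def elm_train(X, y, n_hidden=2, seed=0):
    """Train a minimal extreme learning machine: random fixed input
    weights, tanh hidden layer, output weights solved in closed form."""
    rng = random.Random(seed)
    W = [[rng.uniform(-1, 1) for _ in range(n_hidden)]
         for _ in range(len(X[0]))]
    b = [rng.uniform(-1, 1) for _ in range(n_hidden)]
    H = [[math.tanh(sum(x[i] * W[i][j] for i in range(len(x))) + b[j])
          for j in range(n_hidden)] for x in X]
    # Solve (H^T H) beta = H^T y for 2 hidden units via Cramer's rule.
    a11 = sum(h[0] * h[0] for h in H)
    a12 = sum(h[0] * h[1] for h in H)
    a22 = sum(h[1] * h[1] for h in H)
    c1 = sum(h[0] * t for h, t in zip(H, y))
    c2 = sum(h[1] * t for h, t in zip(H, y))
    det = a11 * a22 - a12 * a12
    beta = [(c1 * a22 - c2 * a12) / det, (a11 * c2 - a12 * c1) / det]

    def predict(x):
        return sum(math.tanh(sum(x[i] * W[i][j]
                                 for i in range(len(x))) + b[j]) * beta[j]
                   for j in range(n_hidden))
    return predict

# Binary labels (-1 / +1) for linearly separable 1-D inputs.
X = [[-2.0], [-1.0], [1.0], [2.0]]
y = [-1.0, -1.0, 1.0, 1.0]
predict = elm_train(X, y)
print(predict([1.5]) > 0, predict([-1.5]) < 0)  # -> True True
```

Only `beta` is learned; the random W and b never change, which is exactly what makes ELM training a single linear solve rather than an iterative optimization.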

4.1.3. Deep learning (DL)

In recent years, deep learning, a branch of machine learning, has been utilized extensively to create automatic, semi-automatic, and hybrid models that can accurately detect and segment tumors in the shortest time possible [42]. DL is capable of learning the features that are significant for a problem by utilizing a training corpus of sufficient diversity and quality. Deep learning [43] has achieved excellent success in tackling the issues of ML by combining the feature extraction and selection phases into the training process [44]. Deep learning is motivated by an understanding of the neural networks within the human brain. DL models are often represented as a sequence of layers, each generated by a weighted sum of the information from the previous layer. The data is represented by the first layer, while the output is represented by the last layer [45]. Deep learning models are able to tackle extremely difficult problems while often requiring less human interaction than conventional ML techniques, because the multiple layers make it possible to approximate complex mapping functions.
The most common DL model used for the categorization and segmentation of images is the convolutional neural network (CNN). The CNN analyzes the spatial relationships of pixels in a hierarchical manner. Convolving the images with learned filters creates a hierarchy of feature maps. This convolution is performed across a number of layers so that the features are translation- and distortion-invariant and hence accurate to a high degree [46]. Figure 9 illustrates the main process in DL.
Preprocessing is primarily used to eliminate unnecessary variation from the input image and to make the job of training the model easier. To get around the limits of neural network models, further actions are required, such as resizing and normalization. All images must be resized before being entered into CNN classification models, since DL models require inputs of a constant size [47]. Images that are larger than the desired size can be reduced either by downscaling with interpolation or by cropping away background pixels [47].
A large number of images is required for CNN-based classification. Data augmentation is one of the most important data strategies for addressing unequal class distribution and data scarcity [48].
A CNN's architecture is composed of three primary layer types: convolutional, pooling, and fully connected. The convolutional layer is the main layer able to extract image features such as edges and boundaries. Based on the desired prediction results, this layer may automatically learn a large number of filters in parallel from the training dataset. The convolutional layer creates features, while the pooling layer is in charge of data reduction, which minimizes the size of those features and reduces the demand on computing resources. Every neuron in the final, fully connected layer is coupled to every neuron in the previous layer. This layer serves as a classifier for the feature vector produced by the preceding layers [49]. Figure 10 illustrates the main layers of the CNN network.
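The roles of the convolutional and pooling layers can be sketched in a few lines of pure Python; the edge filter and toy image below are invented for illustration, and in a real CNN the filter weights would be learned, not hand-written.

```python
def conv2d_valid(img, kernel):
    """'Valid' 2-D convolution (cross-correlation, as used in CNNs)."""
    kh, kw = len(kernel), len(kernel[0])
    h, w = len(img) - kh + 1, len(img[0]) - kw + 1
    return [[sum(img[y + j][x + i] * kernel[j][i]
                 for j in range(kh) for i in range(kw))
             for x in range(w)] for y in range(h)]

def max_pool(fmap, size=2):
    """Non-overlapping max pooling: keeps the strongest response
    per window, shrinking the feature map."""
    return [[max(fmap[y + j][x + i]
                 for j in range(size) for i in range(size))
             for x in range(0, len(fmap[0]) - size + 1, size)]
            for y in range(0, len(fmap) - size + 1, size)]

# A vertical-edge filter responding to the intensity step in the image.
img = [[0, 0, 9, 9]] * 4
edge = [[-1, 1], [-1, 1]]
fmap = conv2d_valid(img, edge)   # 3 x 3 feature map
print(max_pool(fmap))  # -> [[18]]
```

The convolution lights up only where the edge is present, and pooling then reduces the 3 x 3 map to a single strongest response, mirroring the feature-then-reduce pipeline described above.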
A CNN works similarly to other neural networks in that it continually adjusts its weights by propagating the error from the output back through the network in order to improve the filters and weights. In addition, a CNN normalizes the output using a softmax function [51]. There are many types of CNN architecture, including ResNet, AlexNet, and cascade-CNN.

4.2. Segmentation method

The purpose of segmentation in tumor classification is to detect the tumor location in brain scans, improve representation, and allow quantitative evaluation of image structures during the feature extraction step [52]. Brain tumor segmentation can be done in two ways: manually and fully automatically [53].
Manual tumor segmentation from brain scans is a difficult and time-consuming procedure. Furthermore, artifacts created during the imaging procedure result in poor-quality images that are difficult to analyze. Additionally, due to uneven lesions, spatial variability, and unclear borders, manual detection of brain tumors is challenging. This section discusses several automated brain tumor segmentation strategies to help radiologists overcome these issues.

4.2.1. Region-Based segmentation

A region in an image is a collection of related pixels that comply with specific homogeneity requirements, such as shape, texture, and pixel intensity values [54]. In region-based segmentation, the image is divided into disjoint areas in order to precisely identify the target region [55]. When grouping pixels together, region-based segmentation takes into consideration pixel values, such as gray-level variance and difference, as well as spatial closeness, such as the Euclidean distance or region density. K-means clustering [56] and fuzzy c-means (FCM) [56] are the techniques most often used in this method.
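As a sketch of the clustering behind region-based segmentation, the following is a minimal pure-Python k-means (Lloyd's algorithm) on scalar pixel intensities; the intensity values are invented to mimic a dark background and a bright lesion.

```python
def kmeans_1d(values, k=2, iters=20):
    """Cluster scalar intensities into k groups (Lloyd's algorithm).
    Returns the sorted cluster centers."""
    # Spread the initial centers across the sorted value range.
    centers = sorted(values)[::max(1, len(values) // k)][:k]
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for v in values:
            idx = min(range(k), key=lambda c: abs(v - centers[c]))
            clusters[idx].append(v)
        centers = [sum(c) / len(c) if c else centers[i]
                   for i, c in enumerate(clusters)]
    return sorted(centers)

# Pixel intensities from two tissue types (dark background, bright lesion).
pixels = [10, 12, 11, 9, 200, 198, 202, 199]
print(kmeans_1d(pixels))  # -> [10.5, 199.75]
```

Assigning each pixel to its nearest final center yields the two regions; FCM differs mainly in giving each pixel a soft membership in every cluster instead of a hard assignment.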

4.2.2. Thresholding methods

The thresholding approach is a straightforward and effective way to separate the necessary region [57], but finding an optimum threshold in low-contrast images might be challenging.
Threshold values are chosen using histogram analysis based on image intensity [58]. There are two types of thresholding techniques: local and global. The global thresholding approach is the best choice for segmentation if the objects and the background have highly uniform brightness or intensity. The Gaussian distribution approach may be used to obtain the ideal threshold value [59]. Otsu thresholding [39] is a popular method of this type.
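Otsu's method can be sketched directly from its definition: choose the threshold that maximizes the between-class variance of the two groups the histogram is split into. A minimal pure-Python version follows; the bimodal toy intensities are invented.

```python
def otsu_threshold(pixels, levels=256):
    """Otsu's method: choose the threshold that maximizes the
    between-class variance of the two resulting intensity groups."""
    hist = [0] * levels
    for p in pixels:
        hist[p] += 1
    total = len(pixels)
    best_t, best_var = 0, -1.0
    for t in range(1, levels):
        w0 = sum(hist[:t])          # pixels below the threshold
        w1 = total - w0             # pixels at or above it
        if w0 == 0 or w1 == 0:
            continue
        mu0 = sum(i * hist[i] for i in range(t)) / w0
        mu1 = sum(i * hist[i] for i in range(t, levels)) / w1
        var = w0 * w1 * (mu0 - mu1) ** 2
        if var > best_var:
            best_t, best_var = t, var
    return best_t

# Bimodal intensities: background near 10, bright region near 200.
pixels = [10, 11, 12, 10, 200, 201, 199, 200]
t = otsu_threshold(pixels)
print(10 < t <= 200)  # the threshold lands between the two modes
```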

4.2.3. Watershed techniques

The intensities in the image are analyzed using watershed techniques [60]. The topological watershed [61], marker-based watershed [62], and IFT watershed [63] are a few examples of watershed algorithms.

4.2.4. Morphological-Based Method

The morphology technique relies on the morphology of image features. It is mostly used for extracting details from images based on shape representation. Dougherty [64] defines dilation and erosion as the two basic operations. Dilation expands the regions in an image, while erosion shrinks them.
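Dilation and erosion can be sketched on a binary mask; this minimal pure-Python version assumes a 3 x 3 square structuring element with the neighborhood clamped at the image border.

```python
def dilate(img):
    """Binary dilation with a 3 x 3 square structuring element:
    a pixel becomes 1 if any neighbor (or itself) is 1."""
    h, w = len(img), len(img[0])
    return [[1 if any(img[j][i]
                      for j in range(max(0, y - 1), min(h, y + 2))
                      for i in range(max(0, x - 1), min(w, x + 2)))
             else 0 for x in range(w)] for y in range(h)]

def erode(img):
    """Binary erosion: a pixel stays 1 only if its whole (clamped)
    3 x 3 neighborhood is 1."""
    h, w = len(img), len(img[0])
    return [[1 if all(img[j][i]
                      for j in range(max(0, y - 1), min(h, y + 2))
                      for i in range(max(0, x - 1), min(w, x + 2)))
             else 0 for x in range(w)] for y in range(h)]

mask = [[0, 0, 0],
        [0, 1, 0],
        [0, 0, 0]]
print(sum(map(sum, dilate(mask))), sum(map(sum, erode(mask))))  # -> 9 0
```

A single foreground pixel grows to the full 3 x 3 mask under dilation and vanishes under erosion, which is why erosion-then-dilation (opening) is a common way to remove small specks from a segmentation mask.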

4.2.5. Edge-Based Method.

Edge detection is based on variations in image intensity. Pixels at an edge are those where the image's intensity function changes abruptly. Edge-based segmentation techniques include those by Sobel, Roberts, Prewitt, and Canny [65]. The authors of [66] offer an enhanced edge detection approach for tumor segmentation, in which an automated image-dependent thresholding method is combined with the Sobel operator to identify the edges of the brain tumor.
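As a sketch of the Sobel operator named above, the following pure-Python code computes the gradient magnitude at interior pixels of a toy image containing a vertical step edge (the image is invented for illustration).

```python
import math

SOBEL_X = [[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]]
SOBEL_Y = [[-1, -2, -1], [0, 0, 0], [1, 2, 1]]

def sobel_magnitude(img):
    """Gradient magnitude at interior pixels using the Sobel kernels;
    border pixels are left at 0."""
    h, w = len(img), len(img[0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            gx = sum(SOBEL_X[j][i] * img[y + j - 1][x + i - 1]
                     for j in range(3) for i in range(3))
            gy = sum(SOBEL_Y[j][i] * img[y + j - 1][x + i - 1]
                     for j in range(3) for i in range(3))
            out[y][x] = math.hypot(gx, gy)
    return out

# A vertical step edge between columns 1 and 2.
img = [[0, 0, 9, 9]] * 4
mags = sobel_magnitude(img)
print(mags[1])  # strong responses only next to the step
```

Thresholding these magnitudes (e.g., with an automated threshold, as in [66]) turns the response map into a binary edge map.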

4.2.6. Neural networks based methods

Neural-network-based segmentation techniques employ computer models of artificial neural networks made up of weighted connections between processing units (called neurons). The weights act as multipliers at the connections, and training is necessary to acquire the coefficient values. A variety of neural network designs have been used for the segmentation of medical images and in other fields. Some of the techniques utilized in the segmentation process include the multilayer perceptron (MLP), Hopfield neural networks (HNN) [68], the back-propagation learning algorithm, SVM-based segmentation [67], and self-organizing map (SOM) neural networks [68].

4.2.7. DL-based segmentation

The primary strategy in DL-based segmentation of brain tumors is to pass an image through a series of deep learning structures and then perform segmentation of the input image based on the deep features [69]. Many deep learning methods, such as deep CNNs and related architectures, have been suggested for segmenting brain tumors.
A deep learning system called semantic segmentation [70] arranges pixels in an image according to semantic categories. The objective is to create a dense pixel-by-pixel segmentation map of the image, and each pixel is given an assigned category or entity.

4.3. Performance evaluation

An important component of every research work is evaluating the classification and segmentation performance. The primary goal of this evaluation is to measure and analyze the model's capability for segmentation or diagnostic purposes. Segmentation is a crucial step in improving the diagnostic process, as mentioned before, but for this to occur, the segmentation process must be as accurate as feasible. The diagnostic approach must also be evaluated while taking complexity and time into account [71].
True positives (TP), true negatives (TN), false positives (FP), and false negatives (FN) are the four main elements of any evaluation of a segmentation or classification algorithm. In a segmentation method, a TP is a pixel correctly predicted as belonging to the specified class based on the ground truth, and a TN is a pixel correctly predicted as not belonging to it. In contrast, an FP occurs when the model wrongly predicts a pixel as belonging to a specific class, and an FN occurs when the model wrongly predicts a pixel of that class as not belonging to it [71].
TP in classification tasks refers to an image that is accurately categorized into a positive category based on the ground truth. Similarly, a TN result occurs when the model properly classifies an image into the negative category. In contrast, an FP result occurs when the model wrongly assigns an image to the positive class while the actual data is in the negative category, and an FN result occurs when the model misclassifies an image that belongs in the positive category. From the four elements mentioned above, different performance measures can be derived for further analysis.
Accuracy (ACC) measures a model's ability to correctly categorize all pixels/classes, whether positive or negative. Sensitivity (SEN) shows the percentage of accurately predicted positive images/pixels among all actual positive samples; it evaluates a model's ability to recognize relevant samples or pixels. The percentage of actual negatives that are correctly predicted is known as specificity (SPE); it indicates the model's ability to recognize negative classes or pixels [71].
The precision (PR), or positive predictive value (PPV), measures how frequently the model's positive predictions are correct. It gives the percentage of the model's positive predictions that are actually positive. The most often used statistic that combines SEN and precision is the F1-score [72], which is the harmonic mean of the two.
The Jaccard index (JI), also known as intersection over union (IoU), calculates the percentage of overlap between the model's prediction output and the annotation ground truth mask.
The spatial overlap between the model's segmented region and the ground truth tumor region is measured by the dice similarity coefficient (DSC). A DSC value of zero means there is no spatial overlap between the model's result and the actual tumor location, whereas a value of one means complete spatial overlap. The receiver operating characteristic (ROC) curve, which plots SEN against the false positive rate, is summarized by the area under the curve (AUC) as a measure of a classifier's ability to discriminate between classes.
The similarity between the segmentation produced by the model and the expert-annotated ground truth is known as the similarity index (SI). It describes how the tumor region's identification is comparable to that of the input image.[71]
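As an illustrative sketch (not taken from the reviewed systems), the two overlap measures above can be computed directly from binary masks with NumPy; the toy masks below are hypothetical:

```python
import numpy as np

def dice_coefficient(pred, truth):
    """Dice similarity coefficient: 2*|A∩B| / (|A| + |B|)."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    intersection = np.logical_and(pred, truth).sum()
    return 2.0 * intersection / (pred.sum() + truth.sum())

def jaccard_index(pred, truth):
    """Jaccard index (IoU): |A∩B| / |A∪B|."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    intersection = np.logical_and(pred, truth).sum()
    union = np.logical_or(pred, truth).sum()
    return intersection / union

# Toy 4x4 masks: predicted tumor region vs. ground-truth annotation.
pred  = np.array([[0,0,0,0],[0,1,1,0],[0,1,1,0],[0,0,0,0]])
truth = np.array([[0,0,0,0],[0,1,1,1],[0,1,1,1],[0,0,0,0]])
print(dice_coefficient(pred, truth))  # 0.8  (2*4 / (4 + 6))
print(jaccard_index(pred, truth))     # 0.666...  (4 / 6)
```

A DSC or Jaccard of 1.0 is reached only when the predicted and ground-truth masks coincide exactly.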
Table 3. Performance Equation.
Parameter   Equation
ACC         (TP + TN) / (TP + TN + FP + FN)
SEN         TP / (TP + FN)
SPE         TN / (TN + FP)
PR          TP / (TP + FP)
F1-Score    (2 × PR × SEN) / (PR + SEN)
DSC         (2 × TP) / (2 × TP + FP + FN)
Jaccard     TP / (TP + FP + FN)
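For reference, the formulas in Table 3 can be collected into a small, self-contained Python helper (an illustrative sketch, not part of any reviewed system); note that the F1-score and DSC coincide when computed from the same confusion counts:

```python
def performance_metrics(tp, tn, fp, fn):
    """Compute the measures of Table 3 from the four confusion-matrix counts."""
    acc = (tp + tn) / (tp + tn + fp + fn)
    sen = tp / (tp + fn)                  # sensitivity / recall
    spe = tn / (tn + fp)                  # specificity
    pr  = tp / (tp + fp)                  # precision / PPV
    f1  = (2 * pr * sen) / (pr + sen)     # harmonic mean of PR and SEN
    dsc = (2 * tp) / (2 * tp + fp + fn)   # Dice similarity coefficient
    jaccard = tp / (tp + fp + fn)
    return {"ACC": acc, "SEN": sen, "SPE": spe, "PR": pr,
            "F1": f1, "DSC": dsc, "Jaccard": jaccard}

# Hypothetical example: 80 TP, 90 TN, 10 FP, 20 FN.
m = performance_metrics(tp=80, tn=90, fp=10, fn=20)
print(m["ACC"])  # 0.85
print(m["SEN"])  # 0.8
```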

5. Literature Review

5.1. Article Selection

The major goal of this study is to review and understand the brain tumor classification and detection strategies published worldwide between 2010 and 2022, and to examine how successful CAD systems are in this process.
We did not target any one publisher specifically, but utilized articles from a variety of sources to account for the diversity of knowledge in the field. We collected appropriate articles from a number of online scientific research libraries, searching for pertinent publications using IEEE Xplore, MEDLINE, ScienceDirect, Google Scholar, and ResearchGate.
Each time, the year filter (2010 to 2022) was applied so that only papers from the chosen period were presented. Most frequently, we used terms like "detection of MRI images using deep learning," "classification of brain tumor from ct/mri images using deep learning," "detection and classification of brain tumor using deep learning," "CT brain tumor," "PET brain tumor," etc. This study offers an analysis of 46 chosen publications.

5.2. Publicly available datasets

A number of publicly available datasets, most of them MRI datasets, are used by researchers to assess the suggested techniques. Table 4 presents a brief summary of the dataset names.

5.3. Related work

This section presents a summary of studies that use artificial intelligence to classify brain tumors, in addition to the several brain tumor segmentation techniques already highlighted.

5.3.1. MRI Brain tumor segmentation

This section discusses the brain tumor segmentation strategies proposed in the literature, including machine learning, deep learning, region growing, and thresholding approaches.
In order to segment brain tumors, Gordillo et al. [80] utilized a fuzzy logic system built from features extracted from MR images and expert knowledge. The system learns unsupervised and is fully automated. In trials conducted on two different forms of brain tumors, meningioma and glioblastoma multiforme, the segmentation results were satisfactory, with accuracy ranging from a minimum of 71% to a maximum of 93%.
Rajendran [81] presented a fuzzy-clustering-based approach for segmenting brain tumors, employing fuzzy c-means on MRI. The tumor output of the fuzzy clustering was used to initialize a region-based technique that iteratively progresses towards the final tumor border. The approach was tested on 15 MR images with manual segmentation ground truth available; the overall result was suitable, with a sensitivity of 96.37% and an average Jaccard coefficient of 83.19%.
Kishore et al. [82] applied an SVM classifier to categorize tumor pixels using feature vectors extracted from MR images, such as mean intensity and LBP. Level sets and region-growing techniques were used for the segmentation. The experiments used MR images with manually delineated tumor regions from 11 different participants, and the suggested methods proved effective, with a DSC score of 0.69. Figure 12 shows a block schematic illustrating the segmentation of brain cancer.
Abbasi and Tajeripour [39] presented a framework for segmenting tumors in 3D MRI images. In the first phase, the input image's contrast is improved using bias field correction. In the second phase, the data volume is reduced using the multi-level Otsu technique. The third stage, feature extraction, employs LBP in three orthogonal planes and an enhanced histogram of images. Lastly, a random forest is employed as the classifier for distinguishing tumorous areas, since it works well with large inputs and achieves high segmentation accuracy. With a mean Jaccard value of 87% and a DSC of 93%, the overall outcome was acceptable.
Almahfud et al. [83] suggested a technique for segmenting human brain MRI images to identify brain cancers by combining K-Means and FCM clustering. K-Means is sensitive to color variations and can rapidly and effectively find local optima and outliers, so the K-Means results were clustered once more with FCM to refine the convex contour based on the border, yielding better clusters with a simpler calculation procedure. Morphology and noise-reduction procedures were also applied to increase accuracy. The study used 62 brain MRI scans, and the accuracy rate was 91.94%.
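As a rough sketch of the K-means stage of such intensity-based clustering pipelines (the FCM refinement step of [83] is omitted, and the data below are synthetic, not from the study):

```python
import numpy as np

def kmeans_intensity(pixels, k=3, iters=20):
    """Plain K-means on a 1-D array of pixel intensities.
    Centroids are initialized at spread-out quantiles for stability;
    the FCM refinement used in [83] is not included in this sketch."""
    centroids = np.quantile(pixels, np.linspace(0.05, 0.95, k))
    for _ in range(iters):
        # Assign each pixel to its nearest centroid.
        labels = np.argmin(np.abs(pixels[:, None] - centroids[None, :]), axis=1)
        # Move each centroid to the mean of its assigned pixels.
        for j in range(k):
            if np.any(labels == j):
                centroids[j] = pixels[labels == j].mean()
    return labels, centroids

# Synthetic "scan" intensities: dark background, mid-grey tissue, bright lesion.
rng = np.random.default_rng(1)
img = np.concatenate([np.full(100, 10.0), np.full(80, 120.0), np.full(20, 230.0)])
img += rng.normal(0, 5, img.size)
labels, centroids = kmeans_intensity(img, k=3)
print(np.round(np.sort(centroids)))  # roughly the three intensity modes
```

In a real pipeline the brightest cluster would be taken as the tumor candidate and passed on to refinement and morphological post-processing.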
Pereira et al. [69] proposed an automated segmentation technique based on a CNN architecture that explores small 3 × 3 kernels. Using small kernels reduces the number of weights in the network, which enables deeper, more intricate architectures and helps prevent overfitting. Additionally, they examined intensity normalization as a pre-processing step, which, combined with data augmentation, proved highly effective for segmenting brain tumors in MRI images. Their proposal was validated on the BRATS database, yielding Dice similarity coefficients of 0.88, 0.83, and 0.77 on the Challenge data set for the whole, core, and enhancing regions.
In [84], the authors suggested a unique approach for segmenting brain tumors based on the properties of separated local squares. The procedure consists of three parts. In the first stage, a superpixel segmentation technique divides the image into homogeneous regions of roughly comparable properties and size. The second phase extracts gray-level statistical features and textural information. In the final phase of building the segmentation model, an SVM labels each superpixel as a tumor or non-tumor region. They tested the technique on 20 images from the BRATS dataset, attaining a DSC of 86.12%.
The CAD system suggested by Gupta et al. [85] offers a non-invasive method for accurate segmentation and detection of gliomas. The system exploits the combined properties of superpixels and the FCM clustering technique, and recorded 98% accuracy for glioma detection in both low-grade and high-grade tumors.
Cui et al. [67] proposed brain tumor segmentation using a CNN whose output is passed to an SVM classifier. Their algorithm comprises two cascaded phases. In the first step, they trained a CNN to learn the mapping from the image region to the tumor label region. In the testing phase, the test image and the CNN's predicted label output were passed to an SVM classifier for precise segmentation. Tests and evaluations show that the suggested structure outperforms separate SVM-based or CNN-based segmentation, with a DSC of 86.12%.
The two-pathway-group CNN architecture described by Razzak et al. [86] is a novel approach to brain tumor segmentation that simultaneously exploits local and global contextual features, as shown in Figure 13. To prevent instability and overfitting, the approach imposes equivariance through parameter sharing in the 2PG-CNN model. The output of a basic CNN is treated as an extra source and merged at the last layer of the 2PG-CNN, where the cascade architecture is included. Validating the model on BRATS datasets, the group CNN embedded in the two-path architecture achieved a DSC of 89.2%, PR of 88.22%, and SEN of 88.32%.
A semantic segmentation model for segmenting brain tumors from multi-modal 3D MRIs of the BraTS dataset was published in [87]. After experimenting with several normalization techniques, the authors found that group-norm and instance-norm performed equally well. They also tested more advanced data augmentation methods, such as random histogram matching, linear image transformations, rotations, and random image filtering, but these showed no significant benefit. Increasing the network depth further had no positive effect on performance, whereas increasing the number of filters consistently produced better results. Their final BraTS test-set Dice values were 0.826, 0.882, and 0.837 for the enhancing tumor core, whole tumor, and tumor core, respectively.
Karayegen and Aksahin [88] used CNNs to offer a semantic segmentation approach for autonomously segmenting brain tumors on BraTS image datasets that include four distinct imaging modalities (T1, T1C, T2, and FLAIR). The technique was applied effectively, and images were shown in sagittal, coronal, and axial planes to determine the precise tumor location and parameters such as height, breadth, and depth. In terms of tumor prediction, the evaluation findings of the semantic segmentation are highly encouraging: mean IoU and mean prediction ratio were calculated to be 86.946 and 91.718, respectively.
Table 5. MRI Brain tumor segmentation.
Ref Scan year technique Method Performance Metrics result
[80] MRI 2010 region-based FCM Acc 93.00%
[81] MRI 2011 region-based FCM Jaccard 83.19%
[82] MRI 2012 NN LBP with SVM DSC 69.00%
[69] MRI 2016 DL CNN DSC 88.00%
[84] MRI 2017 NN GLCM with SVM DSC 86.12%
[39] MRI 2017 NN LBP with RF Jaccard & DSC 87% & 93%
[85] MRI 2018 region-based FCM Acc 98.00%
[83] MRI 2018 region-based FCM and k-mean Acc 91.94%
[67] MRI 2019 DL & NN CNN with SVM DSC 88.00%
[86] MRI 2019 DL Two-path CNN DSC 89.20%
[87] MRI 2019 DL semantic Acc 88.20%
[88] MRI 2021 DL semantic IoU 91.72%

5.3.2. MRI brain tumor classification using ML

Several studies have addressed the automated classification of brain cancers using MRI images. Data cleaning, feature extraction, and feature selection are the basic steps in the machine learning (ML) process used for this purpose; building an ML model from labeled samples is the final step.
An NN-based technique to categorize a given MR brain image as either normal or abnormal is presented in [89]. In this method, features were first extracted from images using the wavelet transform, and the dimensionality of the features was then reduced using PCA. The reduced features were fed to a back-propagation NN using scaled conjugate gradient (SCG) to determine the optimal weights. The technique was applied to 66 images, 18 normal and 48 abnormal, and the classification accuracy was 100% on both training and test images.
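The PCA reduction step used in [89] and several later pipelines can be sketched with a plain SVD; the "wavelet features" here are simulated with random data purely to show the shapes involved:

```python
import numpy as np

def pca_reduce(X, n_components):
    """Project feature vectors onto the top principal components via SVD,
    mirroring the wavelet-feature reduction step described for [89]."""
    Xc = X - X.mean(axis=0)               # center each feature
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:n_components].T       # scores in the reduced space

# Hypothetical setup: 66 images, 1024 wavelet coefficients each, reduced to 7.
rng = np.random.default_rng(0)
X = rng.normal(size=(66, 1024))
Z = pca_reduce(X, 7)
print(Z.shape)  # (66, 7)
```

The reduced matrix Z, whose columns are ordered by decreasing variance, would then be fed to the back-propagation NN in place of the raw wavelet coefficients.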
An automated and efficient CAD method based on ensemble classifiers was proposed by Arakeri and Reddy [36] for classifying brain cancers on MRI images as benign or malignant. The texture, shape, and border properties of a tumor were extracted and used as its representation, and an ICA approach was used to select the most significant features. An ensemble classifier consisting of SVM, ANN, and kNN classifiers was then trained on these features to describe the tumor. A dataset of T1- and T2-weighted MR images of 550 patients was used for the experiments. With an accuracy of 99.09% (sensitivity 100%, specificity 98.21%), the experimental findings demonstrated that the suggested approach achieves strong agreement with the combined classifier and is extremely successful in identifying brain tumors.
Figure 14. CAD method based on ensemble classifiers [36].
In [90], the authors suggested a novel wavelet-energy-based method for automatically classifying MR images of the human brain as normal or abnormal. The classifier was an SVM, and biogeography-based optimization (BBO) was utilized to tune the SVM's weights. They achieved 99% precision and 97% accuracy.
Amin et al. [91] suggested an automated technique to distinguish between malignant and benign brain MRI images. A variety of methodologies were used to segment candidate lesions, and a feature set covering shape, texture, and intensity was then selected for every candidate lesion. An SVM classifier was applied to the feature set to compare the proposed framework's precision under various cross-validations. Three benchmark datasets, Harvard, Rider, and Local, were used to verify the suggested technique, which achieved an average accuracy of 97.1%, area under the curve of 0.98, sensitivity of 91.9%, and specificity of 98.0%.
A suitable CAD approach to classifying brain tumors is proposed in [92]. The database includes the primary brain tumors Meningioma and Astrocytoma along with normal brain regions. The radiologists selected 20 × 20 regions of interest (ROIs) for every image in the dataset, from which a total of 371 intensity and texture features were extracted. An ANN classifier separated the three classes, with an overall classification accuracy of 92.43%.
A varied dataset of 428 T1 MR images from 55 individuals was used for multiclass brain tumor classification in [93]. A content-based active contour model extracted 856 ROIs, from which 218 intensity and texture features were extracted. PCA was employed to reduce the size of the feature space, and an ANN was then used to classify the six categories. The classification accuracy reached 85.5%.
A unique strategy for classifying brain tumors in MRI images was proposed in [94], employing an improved structural descriptor and a hybrid kernel SVM. To better classify the image and improve texture feature extraction using statistical parameters, the authors used GLCM and histogram measures to derive texture features from every region. To enhance the classification process, different kernels were combined to create a hybrid kernel SVM classifier. They applied this technique only to axial T1 brain MRI images, and the suggested strategy achieved 93% accuracy.
A hybrid system composed of two ML techniques has been suggested in [95] for classifying brain tumors. A total of 70 brain MR images (60 abnormal, 10 normal) were considered. DWT was used to extract features from the images, and the number of features was reduced using PCA. Following feature extraction, a feed-forward back-propagation ANN (FP-ANN) and KNN were applied individually to the reduced features; FP-ANN updates its weights through back-propagation learning. This technique achieves accuracies of 97% with KNN and 98% with FP-ANN. Figure 15 illustrates the proposed method's process model.
Table 6. MRI brain tumor classification using ML.
Ref Scan year feature extraction feature selection classification Acc
[95] MRI 2010 GLCM PCA ANN and KNN 98% and 97%
[89] MRI 2011 Wavelet PCA Back Propagation NN 100.00%
[93] MRI 2013 Intensity and texture PCA ANN 85.50%
[94] MRI 2014 GLCM - SVM 93.00%
[36] MRI 2015 Texture and shape ICA SVM 99.09%
[90] MRI 2015 Wavelet - SVM 97.00%
[91] MRI 2017 Texture and shape - SVM 97.10%
[92] MRI 2017 Intensity and texture - ANN 92.43%

5.3.3. MRI brain tumor classification using DL

Despite encouraging developments in ML algorithms for classifying brain tumors into their different types, difficulties remain in categorizing brain cancers from an MRI scan. These difficulties mostly stem from ROI detection, and typical labor-intensive feature extraction methods are ineffective [96]. Owing to the nature of deep learning, brain tumor categorization is now a data-driven problem rather than a challenge based on manually created features [97]. CNN is one of the deep learning models frequently utilized in brain tumor classification tasks and has produced significant results [98].
A study [99] showed that a CNN algorithm can grade gliomas either into two categories (low or high severity) or into multiple grades of severity (Grade II, Grade III, and Grade IV). The classifier reached accuracy rates of 71% and 96%.
Sultan et al. [100] proposed a DL approach based on a CNN to classify different kinds of brain tumors using two publicly available datasets. The first divides tumors into three categories: meningioma, pituitary, and glioma. The other distinguishes between Grade II, Grade III, and Grade IV gliomas. The first and second datasets contain 3064 and 516 T1 images from 233 and 73 patients, respectively. The suggested network configuration achieves significant performance, with best overall accuracies of 96.13% and 98.7% for the two studies. A block diagram of the proposed method is presented in Figure 16.
Figure 16. A block schematic of the suggested approach [100].
Similarly, study [101] showed how to classify brain MRI scans into malignant and benign using CNN algorithms in conjunction with data augmentation and image processing. The authors compared the effectiveness of their CNN model with pre-trained VGG-16, Inception-v3, and ResNet-50 models using transfer learning. Even though the experiment was carried out on a relatively small dataset, the results reveal that their model's accuracy is quite strong with a very low complexity rate: it obtained 100% accuracy, compared to VGG-16's 96%, ResNet-50's 89%, and Inception-v3's 75%. The structure of the suggested CNN architecture is shown in Figure 17.
For accurate glioma grade prediction, the authors of [102] developed a customized CNN-based deep learning model and compared its performance with AlexNet, GoogLeNet, and SqueezeNet via transfer learning. They trained and evaluated the models on 104 clinical glioma patients (50 LGGs and 54 HGGs), expanding the training data with a variety of augmentation methods. A five-fold cross-validation procedure was used to assess each model. According to the study's findings, the custom deep CNN model matched or outperformed the pretrained models, with accuracy, sensitivity, F1 score, specificity, and AUC values of 0.971, 0.980, 0.970, 0.963, and 0.989, respectively.
A novel transfer learning-based active learning paradigm for classifying brain tumors was proposed by Ruqian et al. [103]; Figure 18 describes the active learning workflow. Using a 2D slice-based technique, they trained and fine-tuned their model on an MRI training dataset of 203 patients and a baseline validation dataset of 66 patients. The suggested approach allowed the model to obtain an area under the ROC curve of 82.89%. To further investigate the robustness of the strategy, they built a balanced dataset and ran the same process on it; the model's AUC was 82%, compared to the baseline's 78.48%.
A total of 131 glioma patients were enrolled in [104]. Tumor images were segmented with a rectangular ROI containing around 80% of the tumor, and 20% of the patient-level data was randomly selected as the test dataset. AlexNet and GoogLeNet, pre-trained on the large natural-image database ImageNet, were fine-tuned on the MRI images and also trained from scratch. Five-fold cross-validation (CV) on the patient-level split was used to evaluate the classification task. The averaged validation accuracy, test accuracy, and test AUC from the five-fold CV of GoogLeNet were 0.867, 0.909, and 0.939, respectively.
An intelligent medical decision-support system was proposed by Hamdaoui et al. [105] for the identification and categorization of brain tumors from MR images. They employed deep transfer learning to get around the scarcity of training data required to construct a CNN model, selecting seven CNN architectures pre-trained on ImageNet and carefully fine-tuning them on brain tumor MRI data gathered from the BraTS database, as shown in Figure 19. To increase model accuracy, only the prediction receiving the highest score among those made by the seven pre-trained CNNs is produced. They evaluated their primary 2-class model, covering LGG and HGG brain cancers, using 10-fold cross-validation: the test accuracy, F1 score, test precision, and test sensitivity of the suggested model were 98.67%, 98.06%, 98.33%, and 98.06%, respectively.
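The "highest score among the seven CNNs" selection rule described above can be sketched as a max-confidence ensemble over softmax outputs (a toy two-model, two-class example, not the authors' code):

```python
import numpy as np

def max_score_ensemble(prob_matrices):
    """Keep, per sample, the prediction of whichever model is most confident,
    mirroring the 'highest score among the seven CNNs' rule described in [105].
    prob_matrices: list of (n_samples, n_classes) softmax output arrays."""
    stacked = np.stack(prob_matrices)        # (n_models, n_samples, n_classes)
    top_conf = stacked.max(axis=2)           # each model's peak confidence
    best_model = top_conf.argmax(axis=0)     # most confident model per sample
    per_model_pred = stacked.argmax(axis=2)  # each model's class choice
    return per_model_pred[best_model, np.arange(stacked.shape[1])]

# Two hypothetical models, three samples, two classes (e.g. LGG vs. HGG).
m1 = np.array([[0.9, 0.1], [0.6, 0.4], [0.2, 0.8]])
m2 = np.array([[0.7, 0.3], [0.1, 0.9], [0.55, 0.45]])
print(max_score_ensemble([m1, m2]))  # [0 1 1]
```

For sample 0 the first model is more confident (0.9 vs. 0.7) so its class wins; for sample 1 the second model's 0.9 beats the first model's 0.6.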
A new AI diagnosis model based on EfficientNetB0 was created by Khazaee et al. [106] to assess and categorize human brain gliomas using MR image sequences. They validated the new AI model on a common dataset (BraTS-2019) and showed that the AI components, CNN and transfer learning, provided outstanding performance for categorizing and grading glioma images, with 98.8% accuracy.
In [70], the authors developed a model using transfer learning and a pre-trained ResNet18 to more accurately identify basal ganglia germinomas. In this retrospective analysis, 73 patients with basal ganglia germinoma were enrolled. Brain tumors were manually segmented on both T1 and T2 data, and the T1 sequence was used to create the tumor classification model. Transfer learning and a 2D convolutional network were used; the model was trained with 5-fold cross-validation and achieved a mean AUC of 88%.
In [107], the authors suggested an effective hyperparameter optimization method for CNNs based on Bayesian optimization. The method was assessed by categorizing 3064 T1 images into three different types of brain cancer (glioma, pituitary, and meningioma), and the optimized CNN's performance was compared with five popular deep pre-trained models using transfer learning. Their CNN achieved 98.70% validation accuracy after applying Bayesian optimization.
A novel generated transfer DL model was developed by Alanazi et al. [108] for the early diagnosis of brain cancers into categories such as meningioma, pituitary, and glioma. Several standalone CNN models were first constructed from scratch and tested on brain MRI images. The weights of the 22-layer isolated CNN model's neurons were then updated using transfer learning to categorize brain MRI images into tumor subclasses. The resulting transfer-learned model reached an accuracy of 95.75%.
Rizwan et al. [109] suggested a Gaussian-CNN method to identify various brain tumor classes on two datasets (Figure 20). One dataset is used to categorize lesions into pituitary, glioma, and meningioma; the other distinguishes between the three glioma grades (II, III, and IV). The datasets comprise 3064 and 516 T1 contrast-enhanced images from 233 and 73 patients, respectively. The suggested method attains accuracies of 99.8% and 97.14% on the two datasets.
A seven-layer CNN was suggested in [110] to assist with three-class categorization of brain MR images. Separable convolution was used to decrease computing time. The suggested separable CNN model achieves 97.52% accuracy on a publicly available dataset of 3064 images. Figure 21 illustrates the proposed method.
Several pre-trained CNNs were utilized in [111], including GoogLeNet, AlexNet, ResNet50, ResNet101, VGG-16, VGG-19, InceptionResNetV2, and Inception-v3. The final few layers of these networks were modified to accommodate the new image categories. Data from the clinical, Harvard, and Figshare repositories were used to assess these models, with the dataset divided into training and testing splits in a 60:40 ratio. Validation on the test set demonstrated that AlexNet with transfer learning achieved the best performance in the shortest time compared to the other models. The suggested method obtains accuracies of 100%, 94%, and 95.92% on the three datasets and is more generic because it does not require any manually created features.
The suggested framework of [112] describes three experiments that classified brain malignancies such as meningiomas, gliomas, and pituitary tumors using three CNN designs (AlexNet, VGGNet, and GoogLeNet). Using the MRI slices of the Figshare brain tumor dataset, each study investigates transfer learning approaches such as fine-tuning and freezing. To generalize results, increase dataset samples, and minimize the risk of over-fitting, data augmentation approaches were applied to the MRI slices. The fine-tuned VGG16 architecture attained the best classification accuracy of 98.69%.
Table 7. MRI brain tumor classification using DL.
Ref Scan year technique Method result Performance Metrics
[99] MRI 2015 DL Custom-CNN 96.00% Acc
[100] MRI 2019 DL Custom-CNN 98.70% Acc
[101] MRI 2020 DL VGG-16, Inception-v3, ResNet-50 96% 75% 89% Acc
[102] MRI 2021 DL AlexNet, GoogLeNet, SqueezeNet 97.10% Acc
[103] MRI 2021 DL Custom-CNN 82.89% ROC
[104] MRI 2018 DL AlexNet 90.90% test acc
[105] MRI 2021 DL multi-CNN structure 98.67% 98.06% 98.33% 98.06% accuracy, F1 score, precision, sensitivity
[106] MRI 2022 DL EfficientNetB0 98.80% Acc
[70] MRI 2022 DL ResNet18 88.00% AUC
[107] MRI 2022 DL Custom-CNN 98.70% Acc
[108] MRI 2022 DL Custom-CNN 95.75% Acc
[109] MRI 2022 DL Gaussian-CNN 99.80% Acc
[110] MRI 2020 DL seven-layer CNN 97.52% Acc
[111] MRI 2021 DL Alexnet 100.00% Acc
[112] MRI 2019 DL VGG16 98.69% Acc

5.3.4. Hybrid techniques

Hybrid strategies use multiple approaches to achieve high accuracy; they emphasize the benefits of each approach while minimizing its drawbacks. Typically, a first method employs a segmentation technique to identify the infected part of the brain, and a second method performs the classification.
An integrated SVM- and ANN-based classification method is proposed in [113]. The FCM method, with updated membership and k value diverging from the standard algorithm, is first used to segment the brain MRI images. Two types of characteristics were then extracted from the segmented images to distinguish and categorize tumors. Using SVM, the first category of statistical features was used to differentiate between normal and abnormal brain MRI images; this SVM technique has an accuracy rate of 97.44%. Additional criteria, such as area, perimeter, orientation, and eccentricity, were utilized to distinguish the tumor and the malignant stages I through IV; the tumor categories and stages of malignant tumors were classified through the ANN back-propagation technique, with an accuracy of 97.37%.
A hybrid segmentation strategy using an ANN was suggested in [114] for enhancing brain tumor classification outcomes. First, the tumor region was segmented utilizing skull stripping and thresholding. The segmented tumor was subsequently delineated using the Canny algorithm, and the features of the identified tumor cell region were then used as the input of the ANN for classification. The provided strategy attains 98.9% accuracy.
Ramdlon et al. [52] proposed a system that can detect, identify, and categorize tumor types in T1 and T2 image sequences. Only the axial sections of the MRI results, divided into three classes (Glioblastoma, Astrocytoma, and Oligodendroglioma), are used for data analysis. Basic image processing techniques, including image enhancement, binarization, morphology, and watershed, were used to identify the tumor region. Following shape feature extraction from the segmentation, a KNN classifier was used to classify tumors; 89.5% of tumors were correctly classified.
Gurbina et al. [115] described an integrated DWT and SVM classification method. The initial segmentation of the brain MRI images was performed using Otsu's approach. DWT features were obtained from the segmented images in order to identify and categorize tumors, and an SVM classifier divided brain MRI images into benign and malignant categories, with a 99% accuracy rate.
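Otsu's thresholding, used as the segmentation step in [115] (and, in its multi-level form, in [39]), picks the threshold that maximizes the between-class intensity variance. A minimal single-threshold NumPy sketch on synthetic bimodal data (not the authors' implementation):

```python
import numpy as np

def otsu_threshold(image, bins=256):
    """Otsu's method: choose the threshold maximizing between-class variance."""
    hist, edges = np.histogram(image, bins=bins)
    p = hist / hist.sum()                       # intensity probabilities
    centers = (edges[:-1] + edges[1:]) / 2
    w0 = np.cumsum(p)                           # background class weight
    w1 = 1 - w0                                 # foreground class weight
    mu = np.cumsum(p * centers)                 # cumulative mean
    mu_t = mu[-1]                               # global mean
    with np.errstate(divide="ignore", invalid="ignore"):
        between = (mu_t * w0 - mu) ** 2 / (w0 * w1)
    between[~np.isfinite(between)] = 0          # empty classes score zero
    return centers[np.argmax(between)]

# Synthetic bimodal "scan": background around 30, bright lesion around 200.
rng = np.random.default_rng(0)
img = np.concatenate([rng.normal(30, 10, 900), rng.normal(200, 10, 100)])
t = otsu_threshold(img)
print(round(t, 1))  # lands between the two intensity modes
```

The binary mask `img > t` would then separate the lesion candidate from the background before feature extraction.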
The objective of the study in [116] is multi-level segmentation for effective feature extraction and brain tumor classification from MRI data. After pre-processing the MRI data, the authors used thresholding, the watershed algorithm, and morphological methods for segmentation. Features are extracted through a CNN, after which an SVM classifies the tumor images as cancerous or non-cancerous. The proposed algorithm has an overall accuracy of 87.4%.
The authors of [117] proposed classifying brain tumors into three types: glioblastoma, sarcoma, and metastatic. They first used FCM clustering to segment the brain tumor, then DWT to extract features, and PCA to reduce them. Classification was completed using a six-layer DNN, and the suggested method displays 98% accuracy.
Table 8. Hybrid techniques.
Ref year Segmentation Method Feature Extraction Classifier Accuracy
[113] 2017 FCM shape and statistical SVM and ANN 97.44% and 97.37%
[117] 2017 FCM DWT and PCA CNN 98.00%
[52] 2019 watershed shape KNN 89.50%
[115] 2019 Otsu's DWT SVM 99.00%
[116] 2020 thresholding and watershed CNN SVM 87.40%
[114] 2020 canny GLCM and Gabor ANN 98.90%

5.3.5. Various segmentation and classification methods employing CT images

In [118], Wavelet Statistical Texture features (WST) and Wavelet Co-occurrence Texture features (WCT) were combined to automatically segment brain tumors in CT images. After utilizing a GA to choose the best texture features, two different NN classifiers, as shown in Figure 22, were tested to segment the tumor region. This approach provides good outcomes, with an accuracy rate above 97%.
A novel dominant feature extraction methodology was presented in [119] for the segmentation and classification of cancers in brain CT images utilizing SVM with GA feature selection. FCM and K-means were used during the segmentation step, and GLCM and WCT during feature extraction. This approach provides positive results, with an accuracy rate above 98%.
An improved semantic segmentation model for CT images was suggested in [120]; the suggested work also performs classification. In the proposed architecture, a semantic segmentation network comprising a number of convolutional and pooling layers was first used to segment the brain image. Then, using the GoogleNet model, the tumor was classified into three distinct groups: meningioma, glioma, and pituitary tumor. The overall accuracy achieved with this strategy was 99.6%.
A unique correlation learning mechanism (CLM) utilizing CNN and ANN was proposed by Woźniak et al. [121]. The CNN used a support neural network to determine the best filters for its convolution and pooling layers. As a consequence, the main classification network became more efficient and learned more quickly. Results indicated that the CLM model can achieve 96% accuracy, 95% precision, and 95% recall.
The contribution of image fusion to an enhanced brain tumor classification framework was examined by Nanmaran et al. [122]; this new fusion-based tumor categorization model can be applied more successfully to personalized therapy. A discrete cosine transform (DCT)-based fusion technique was utilized to combine MRI and SPECT images of benign- and malignant-class brain tumors. SVM, KNN, and decision tree classifiers were put to the test with the features extracted from fused images. When using these features, the SVM classifier outperformed the KNN and decision tree classifiers, with an overall accuracy of 96.8%, specificity of 93%, recall of 94%, precision of 95%, and F1 score of 91%.
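A DCT-based fusion step of the kind used in [122] can be sketched by transforming both co-registered images, keeping the larger-magnitude coefficient at each frequency, and inverting. The max-magnitude rule and image sizes are illustrative assumptions; the paper's exact fusion rule may differ.

```python
import numpy as np
from scipy.fft import dctn, idctn

def dct_fuse(img_a, img_b):
    """Fuse two co-registered images by keeping, per frequency, the
    larger-magnitude DCT coefficient of the pair."""
    A = dctn(img_a, norm="ortho")
    B = dctn(img_b, norm="ortho")
    F = np.where(np.abs(A) >= np.abs(B), A, B)
    return idctn(F, norm="ortho")

# Random stand-ins for co-registered MRI and SPECT slices.
rng = np.random.default_rng(2)
mri = rng.random((16, 16))
spect = rng.random((16, 16))
fused = dct_fuse(mri, spect)
print(fused.shape)
```

Features for the SVM/KNN/decision-tree comparison would then be extracted from `fused` rather than from either modality alone.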
Table 9. Various segmentation and classification methods employing CT images.
| Ref   | Year | Type      | Segmentation    | Feature Extraction | Feature Selection | Classification | Result |
| [118] | 2011 | CT        | NN              | WCT and WST        | GA                | -              | 97.00% |
| [119] | 2011 | CT        | FCM and k-means | GLCM and WCT       | GA                | SVM            | 98.00% |
| [120] | 2020 | CT        | Semantic        | -                  | -                 | GoogleNet      | 99.60% |
| [121] | 2021 | CT        | -               | -                  | -                 | CNN            | 96.00% |
| [122] | 2022 | SPECT/MRI | -               | DCT                | -                 | SVM            | 96.80% |

6. Discussion

This review presented the majority of brain tumor segmentation and classification strategies and covered the quantitative efficiency of numerous conventional ML- and DL-based algorithms. Figure 23 displays the total number of publications used in this review that were published between 2010 and 2022.
Brain tumor segmentation began with traditional image segmentation methods such as region growing, whose biggest challenges are noise, low image quality, and the choice of the initial seed point. In the second generation of segmentation methods, the classification of pixels into multiple classes was accomplished using unsupervised ML, such as FCM and k-means; these techniques are, nevertheless, quite noise-sensitive. To overcome this difficulty, pixel-level classification-based segmentation approaches utilizing conventional supervised ML have been presented. These techniques are frequently used in conjunction with feature engineering, which extracts tumor-descriptive pieces of information for the model's training, and post-processing further improves the results of supervised ML segmentation. The deep learning-based approach accomplishes an end-to-end segmentation of tumors from an MRI image through the pipeline of its component parts. These models frequently eliminate the requirement for manually built features by automatically extracting tumor-descriptive information. However, their application in the medical domain is limited by the need for a large training dataset and the difficulty of interpreting the models.
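The second-generation, unsupervised pixel-clustering approach described above can be illustrated with k-means on a toy intensity distribution of three "tissue" classes; the class means and sample counts below are arbitrary choices for the demo.

```python
import numpy as np
from sklearn.cluster import KMeans

# Unsupervised pixel clustering: group intensities into k tissue classes.
# No labels are needed, but the method is sensitive to noise in the intensities.
rng = np.random.default_rng(3)
pixels = np.concatenate([rng.normal(m, 0.02, 500) for m in (0.2, 0.5, 0.9)])
km = KMeans(n_clusters=3, n_init=10, random_state=0).fit(pixels.reshape(-1, 1))
centers = np.sort(km.cluster_centers_.ravel())
print(centers.round(2))   # ≈ [0.2, 0.5, 0.9]
```

Each pixel is then assigned the tissue class of its nearest center; FCM differs in assigning soft (fuzzy) memberships instead of hard labels.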
In addition to segmentation of the tumor region from the MRI scan, classification of the tumor into its appropriate type is crucial for diagnosis and treatment planning, which in today's medical practice necessitates a biopsy. Several approaches using shallow ML and DL have been put forth for classifying brain tumors. Typical shallow ML techniques frequently include preprocessing, ROI identification, and feature extraction steps. Extracting descriptive information is a difficult task because of the inherent noise sensitivity of MRI image acquisition as well as differences in the shape, size, and position of tumor tissue cells. As a result, deep learning algorithms are currently the most advanced method for classifying many types of brain tumors, including astrocytomas, gliomas, meningiomas, and pituitary tumors. This review has covered a number of such classification approaches.
The noisy nature of MRI images is one of the most frequent difficulties in ML-based segmentation and classification of brain tumors. Noise estimation and denoising of MRI images are therefore vital pre-processing operations for increasing the precision of segmentation and classification models. A number of methods have been suggested for denoising MRI images, including the median filter [113], the Wiener filter and DWT [115], and DL-based methods [116].
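The median and Wiener filters mentioned above can be compared on a synthetic noisy image with a few lines of SciPy; the smooth gradient "anatomy" and the noise level are illustrative stand-ins for real MRI data.

```python
import numpy as np
from scipy.ndimage import median_filter
from scipy.signal import wiener

rng = np.random.default_rng(4)
clean = np.tile(np.linspace(0, 1, 64), (64, 1))   # smooth stand-in "anatomy"
noisy = clean + rng.normal(0, 0.1, clean.shape)   # additive Gaussian noise

den_med = median_filter(noisy, size=3)            # median filter, as in [113]
den_wie = wiener(noisy, mysize=3)                 # Wiener filter, as in [115]

mse = lambda a: float(np.mean((a - clean) ** 2))  # mean squared error vs. clean
print(mse(noisy), mse(den_med), mse(den_wie))
```

Both filters should reduce the MSE against the clean reference here; on real scans the choice depends on the noise model (Rician for magnitude MRI) and on how much edge detail must be preserved.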
Large amounts of data are needed for DL models to operate effectively, but sufficiently large datasets are rarely available. Data augmentation helps to expand small datasets and build a well-generalized model. A common augmentation method for MRI images has not yet been established: although researchers have presented many methods, their primary goal is to increase the number of images, and most of the time they ignore spatial and textural relationships. A standardized augmentation technique is required as a foundation for comparative analysis.
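A minimal spatial augmentation routine of the kind discussed above, using only flips and 90-degree rotations, illustrates both the ease of enlarging a dataset and the limitation noted in the text: purely geometric transforms encode no spatial-texture modeling.

```python
import numpy as np

def augment(img, rng):
    """Basic spatial augmentations: random horizontal flip and 90° rotation.
    These enlarge a dataset cheaply but do not model the spatial-texture
    relationships of real MRI variation."""
    if rng.random() < 0.5:
        img = np.flip(img, axis=1)               # horizontal flip
    img = np.rot90(img, k=rng.integers(4))       # random multiple-of-90° rotation
    return img

rng = np.random.default_rng(5)
slice_ = rng.random((32, 32))                    # stand-in for one MRI slice
batch = [augment(slice_, rng) for _ in range(8)] # 8 augmented views of one slice
print(len(batch), batch[0].shape)
```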

7. Conclusions

The review's primary goal is to present the state of the art in the field of brain cancer, including the pathophysiology of the disease, imaging technologies, WHO classification standards for tumors, primary methods of diagnosis, and present CAD algorithms for brain tumor classification using ML and DL techniques. Automating the segmentation and categorization of brain tumors using deep learning techniques has many advantages over region-growing and shallow ML systems, owing primarily to DL algorithms' powerful feature learning capabilities. This study reviewed 46 studies that used ML and DL to classify brain tumors based on MRI, examined the challenges and obstacles that CAD brain tumor classification techniques currently face in practical application and advancement, and provided a thorough examination of the variables that might affect classification accuracy.

Funding

This research received no external funding.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Watson C., Kirkcaldie M., and Paxinos G., The Brain: An Introduction to Functional Neuroanatomy. 2010. [Online]. Available: http://ci.nii.ac.jp/ncid/BB04049625.
  2. Jellinger K., “The Human Nervous System Structure and Function, 6th edn,” European Journal of Neurology, vol. 16, no. 7, p. e136, May 2009. [CrossRef]
  3. DeAngelis, L.M., “Brain tumors,” New England Journal of Medicine, vol. 344, no. 2, pp. 114–123, 2001. [CrossRef]
  4. Louis, D.N.; Perry, A.; Wesseling, P.; Brat, D.J.; Cree, I.A.; Branger, D.F.; Hawkins, C.; Ng, H.K.; Pfister, S.M.; Reifenberger, G.; et al. The 2021 WHO classification of tumors of the central nervous system: A summary. Neuro-Oncology 2021, 23, 1231–1251. [Google Scholar] [CrossRef]
  5. Hayward, R.M.; Patronas, N.; Baker, E.H.; Vézina, G.; Albert, P.S.; Warren, K.E. Inter-observer variability in the measurement of diffuse intrinsic pontine gliomas. J. Neuro-Oncol. 2008, 90, 57–61. [Google Scholar] [CrossRef] [PubMed]
  6. Mahaley, M.S., Jr.; Mettlin, C.; Natarajan, N.; Laws, E.R., Jr.; Peace, B.B. National survey of patterns of care for brain-tumor patients. J. Neurosurg. 1989, 71, 826–836. [Google Scholar] [CrossRef] [PubMed]
  7. Sultan, H. H., Salem, N. M. and Al-Atabany, W. (2019) “Multi-Classification of Brain Tumor Images Using Deep Neural Network,” IEEE Access, vol. 7, pp. 69215–69225, Jan. 2019. [CrossRef]
  8. Johnson, Derek R., Julie B. Guerin, Caterina Giannini, Jonathan M. Morris, Lawrence J. Eckel, and Timothy J. Kaufmann. "2016 updates to the WHO brain tumor classification system: what the radiologist needs to know." Radiographics 37, No. 7 (2017): 2164-2180. [CrossRef]
  9. Buckner, J.C., et al., “Central Nervous System Tumors,” Mayo Clinic Proceedings, Vol. 82, No. 10, 2007, pp. 1271–1286. [CrossRef]
  10. World Health Organization: WHO, “Cancer,” www.who.int, Jul. 2019. [Online]. Available: https://www.who.int/health-topics/cancer (accessed on 10 May 2023).
  11. Amyot, F.; Arciniegas, D.B.; Brazaitis, M.P.; Curley, K.C.; Diaz-Arrastia, R.; Gandjbakhche, A.; Herscovitch, P.; Hindsll, S.R.; Manley, G.T.; Pacifico, A.; et al. A review of the effectiveness of neuroimaging modalities for the detection of traumatic brain injury. J. Neurotrauma 2015, 32, 1693–1721. [Google Scholar] [CrossRef] [PubMed]
  12. Pope, W.B. Brain metastases: Neuroimaging. Handb. Clin. Neurol. 2018, 149, 89–112. [Google Scholar] [PubMed]
  13. Abd-Ellah, Mahmoud Khaled, Ali Ismail Awad, Ashraf AM Khalaf, and Hesham FA Hamed. "A review on brain tumor diagnosis from MRI images: Practical implications, key achievements, and lessons learned." Magnetic resonance imaging 61 (2019): 300-318. [CrossRef]
  14. Ammari, S.; Pitre-Champagnat, S.; Dercle, L.; Chouzenoux, E.; Moalla, S.; Reuze, S.; Talbot, H.; Mokoyoko, T.; Hadchiti, J.; Diffetocq, S.; et al. Influence of Magnetic Field Strength on Magnetic Resonance Imaging Radiomics Features in Brain Imaging, an In Vitro and In Vivo Study. Front. Oncol. 2021, 10. [Google Scholar] [CrossRef] [PubMed]
  15. L. Sahoo, L. Sarangi, B. R. Dash, and H. K. Palo, “Detection and Classification of Brain Tumor Using Magnetic Resonance Images,” in Lecture notes in electrical engineering, Springer Science+Business Media, 2020. [CrossRef]
  16. Ammari, S.; Pitre-Champagnat, S.; Dercle, L.; Chouzenoux, E.; Moalla, S.; Reuze, S.; Talbot, H.; Mokoyoko, T.; Hadchiti, J.; Diffetocq, S.; et al. Influence of Magnetic Field Strength on Magnetic Resonance Imaging Radiomics Features in Brain Imaging, an In Vitro and In Vivo Study. Front. Oncol. 2021, 10. [Google Scholar] [CrossRef] [PubMed]
  17. Kaur, R. and Doegar A., ‘Localization and Classification of Brain Tumor using Machine Learning & Deep Learning Techniques’, Int. J. Innov. Technol. Explor. Eng., vol. 8, no. 9S, pp. 59–66, Aug. 2019. [CrossRef]
  18. “The Radiology Assistant: Multiple Sclerosis 2.0,” Dec. 01, 2021. https://radiologyassistant.nl/neuroradiology/multiple-sclerosis/diagnosis-and-differential-diagnosis-3#mri-protocol-ms-brain-protocol (accessed 22 May 2023).
  19. Luo, Q.; Li, Y.; Luo, L.; Diao, W. Comparisons of the accuracy of radiation diagnostic modalities in brain tumor. Medicine 2018, 97, e11256. [Google Scholar] [CrossRef] [PubMed]
  20. “Positron Emission Tomography (PET),” Johns Hopkins Medicine, Aug. 20, 2021. https://www.hopkinsmedicine.org/health/treatment-tests-and-therapies/positron-emission-tomography-pet (accessed 20 May 2023).
  21. Mayfield Brain & Spine, “SPECT scan,” 2022. https://mayfieldclinic.com/pe-spect.
  22. Sastry R., A. et al., “Applications of Ultrasound in the Resection of Brain Tumors,” Journal of Neuroimaging, vol. 27, no. 1, pp. 5–15, Jan. 2017. [CrossRef]
  23. Nasrabadi, N.M. Pattern recognition and machine learning. J. Electron. Imaging 2007, 16, 049901. [Google Scholar] [CrossRef]
  24. Erickson, B.J.; Korfiatis, P.; Akkus, Z.; Kline, T.L. Machine learning for medical imaging. Radiographics 2017, 37, 505–515. [Google Scholar] [CrossRef] [PubMed]
  25. Mohan, MR Maneesha, C. Helen Sulochana, and T. Latha. "Medical image denoising using multistage directional median filter." 2015 International Conference on Circuits, Power and Computing Technologies [ICCPCT-2015]. IEEE, 2015. [CrossRef]
  26. Borole, Vipin Y., Sunil S. Nimbhore, and Dr Seema S. Kawthekar. "Image processing techniques for brain tumor detection: A review." International Journal of Emerging Trends & Technology in Computer Science (IJETTCS) 4.5 (2015): 2.
  27. Ziedan, R. H., Mead, M. A., & Eltawel, G. S. Selecting the Appropriate Feature Extraction Techniques for Automatic Medical Images Classification. International Journal, 1.‏ (2016).
  28. Amin, J., Sharif, M., Yasmin, M., & Fernandes, S. L. A distinctive approach in brain tumor detection and classification using MRI. Pattern Recognition Letters, (2020). 139, 118-127. [CrossRef]
  29. Islam, A., Reza, S. M., & Iftekharuddin, K. M. Multifractal texture estimation for detection and segmentation of brain tumors. IEEE transactions on biomedical engineering, 2013. 60(11), 3204-3215. [CrossRef]
  30. Gurbină, M., Lascu, M., & Lascu, D. Tumor detection and classification of MRI brain image using different wavelet transforms and support vector machines. In 2019 42nd International Conference on Telecommunications and Signal Processing (TSP) July, 2019. pp. 505-508. IEEE. [CrossRef]
  31. X. Xu et al., “Three-dimensional texture features from intensity and high-order derivative maps for the discrimination between bladder tumors and wall tissues via MRI,” International Journal of Computer Assisted Radiology and Surgery, vol. 12, no. 4, pp. 645–656, Jan. 2017. [CrossRef]
  32. Kaplan, K.; Kaya, Y.; Kuncan, M.; Ertunç, H.M. Brain tumor classification using modified local binary patterns (LBP) feature extraction methods. Med. Hypotheses 2020, 139, 109696. [CrossRef] [PubMed]
  33. Afza, F., Khan M. S., Sharif M., and Saba T., “Microscopic skin laceration segmentation and classification: A framework of statistical normal distribution and optimal feature selection,” Microscopy Research and Technique, vol. 82, no. 9, pp. 1471–1488, Jun. 2019. [CrossRef]
  34. Lakshmi, A., Arivoli T., and Rajasekaran M. P., “A Novel M-ACA-Based Tumor Segmentation and DAPP Feature Extraction with PPCSO-PKC-Based MRI Classification,” Arabian Journal for Science and Engineering, vol. 43, no. 12, pp. 7095–7111, Nov. 2017. [CrossRef]
  35. Adair, J., Brownlee A. E. I., and Ochoa G., “Evolutionary Algorithms with Linkage Information for Feature Selection in Brain Computer Interfaces,” in Advances in intelligent systems and computing, Springer Nature, 2016, pp. 287–307. [CrossRef]
  36. Arakeri M., P. and Reddy G. R. M., “Computer-aided diagnosis system for tissue characterization of brain tumor on magnetic resonance images,” Signal, Image and Video Processing, vol. 9, no. 2, pp. 409–425, Feb. 2015. [CrossRef]
  37. Adair, J., Brownlee, A. E. I., and Ochoa, G., “Evolutionary Algorithms with Linkage Information for Feature Selection in Brain Computer Interfaces,” in Advances in intelligent systems and computing, Springer Nature, 2016, pp. 287–307. [CrossRef]
  38. Wang, S., Zhang, Y., Dong, Z., Du, S., Ji, G., Yan, J.,... & Phillips, P. Feed-forward neural network optimized by hybridization of PSO and ABC for abnormal brain detection. International Journal of Imaging Systems and Technology, 2015. 25(2), 153-164. [CrossRef]
  39. Abbasi, S. and Tajeripour F., Detection of brain tumor in 3D MRI images using local binary patterns and histogram orientation gradient, Neurocomputing, vol. 219, pp. 526–535, Jan. 2017. [CrossRef]
  40. Zöllner, F. G., Emblem, K. E., & Schad, L. R. SVM-based glioma grading: Optimization by feature reduction analysis. Zeitschrift Fur Medizinische Physik, 2012. 22(3), 205–214. [CrossRef]
  41. Huang, G.B., Zhu, Q.Y., Siew, C.K. Extreme learning machine: Theory and applications. Neuro-computing 2006, 70, 489–501. [CrossRef]
  42. Bhatele, K. R., & Bhadauria, S. S. Brain structural disorders detection and classification approaches: a review. Artificial Intelligence Review, 2019. 53(5), 3349–3401. [CrossRef]
  43. Schmidhuber, J. Deep learning in neural networks: An overview. Neural Networks, 2015. 61, 85–117. [CrossRef]
  44. Hu, A., & Razmjooy, N. (2021). Brain tumor diagnosis based on metaheuristics and deep learning. International Journal of Imaging Systems and Technology, 31(2), 657–669. [CrossRef]
  45. Tandel, G. M., Balestrieri, A., Jujaray, T., Khanna, N. N., Saba, L., & Suri, J. S. Multiclass magnetic resonance imaging brain tumor classification using artificial intelligence paradigm. Computers in Biology and Medicine, 2020. 122, 103804. [CrossRef]
  46. Sahaai, M. B. Brain tumor detection using DNN algorithm. Turkish Journal of Computer and Mathematics Education (TURCOMAT), 2021.12(11), 3338-3345.
  47. Hashemi, M. (2019). Enlarging smaller images before inputting into convolutional neural network: zero-padding vs. interpolation. Journal of Big Data, 6(1). [CrossRef]
  48. Miotto, R., Wang, F., Wang, S., Jiang, X., & Dudley, J. T. (2018). Deep learning for healthcare: review, opportunities and challenges. Briefings in Bioinformatics, 19(6), 1236–1246. [CrossRef]
  49. Gorach, T. Deep convolutional neural networks-a review. International Research Journal of Engineering and Technology (IRJET), 5(07), (2018). 439.
  50. Ogundokun, R. O., Maskeliunas, R., Misra, S., & Damaševičius, R. Improved CNN Based on Batch Normalization and Adam Optimizer. In Computational Science and Its Applications–ICCSA 2022 Workshops: Malaga, Spain, July 4–7, 2022, Proceedings, Part V (pp. 593-604). Cham: Springer International Publishing. [CrossRef]
  51. Ismael, S. a. A., Mohammed, A., & Hefny, H. A. An enhanced deep learning approach for brain cancer MRI images classification using residual networks. Artificial Intelligence in Medicine, (2020).102, 101779. [CrossRef]
  52. Ramdlon, R. H., Kusumaningtyas, E. M., & Karlita, T. Brain Tumor Classification Using MRI Images with K-Nearest Neighbor Method. (2019). [CrossRef]
  53. Gurusamy, R., & Subramaniam, V. A machine learning approach for MRI brain tumor classification. Computers, Materials and Continua, 53(2), (2017). 91-109. [CrossRef]
  54. Pohle, R., & Toennies, K. D. Segmentation of medical images using adaptive region growing. In Medical Imaging 2001: Image Processing (Vol. 4322, pp. 1337-1346). SPIE. [CrossRef]
  55. Dey, N., & Ashour, A. S. Computing in medical image analysis. In Soft computing based medical image analysis (pp. 3-11). Academic Press.
  56. Hooda, H., Verma, O. P., & Singhal, T. Brain tumor segmentation: A performance analysis using K-Means, Fuzzy C-Means and Region growing algorithm. In 2014 IEEE International Conference on advanced communications, control and computing technologies (pp. 1621-1626). IEEE. (2014, May). [CrossRef]
  57. Sharif, M., Tanvir, U., Munir, E. U., Khan, M. A., & Yasmin, M. Brain tumor segmentation and classification by improved binomial thresholding and multi-features selection. Journal of ambient intelligence and humanized computing, 1-20.‏ (2018). [CrossRef]
  58. Shanthi, K. J., & Kumar, M. S. Skull stripping and automatic segmentation of brain MRI using seed growth and threshold techniques. In 2007 International conference on intelligent and advanced systems (pp. 422-426). (2007, November). IEEE. [CrossRef]
  59. Yao, J. Image Processing in Tumor Imaging, New Techniques in Oncologic Imaging. Zhang, F., & Hancock, ER Zhang.(2010). New Riemannian techniques for directional and tensorial image data. Pattern Recognition, 43(4), 1590-1606. [CrossRef]
  60. Singh, N. P., Dixit, S., Akshaya, A. S., & Khodanpur, B. I. Gradient Magnitude Based Watershed Segmentation for Brain Tumor Segmentation and Classification. In Advances in intelligent systems and computing (pp. 611–619). (2017). [CrossRef]
  61. Couprie, M., & Bertrand, G. Topological gray-scale watershed transformation. In Vision Geometry VI (Vol. 3168, pp. 136-146). (1997, October). SPIE. [CrossRef]
  62. Khan, M. S., Lali, M. I. U., Saba, T., Ishaq, M., Sharif, M., Saba, T., Zahoor, S., & Akram, T. Brain tumor detection and classification: A framework of marker-based watershed algorithm and multilevel priority features selection. Microscopy Research and Technique, 82(6), (2019). 909–922. [CrossRef]
  63. De Alencar Lotufo, R., Falcão, A. X., & De Assis Zampirolli, F. IFT-Watershed from gray-scale marker.(2003). [CrossRef]
  64. Dougherty, E. R. An introduction to morphological image processing. In SPIE. (1992). Optical Engineering Press.
  65. Kaur, D., & Kaur, Y. Various image segmentation techniques: a review. International Journal of Computer Science and Mobile Computing, (2014). 3(5), 809-814.
  66. Aslam, A., Khan, E., & Beg, M. S. Improved edge detection algorithm for brain tumor segmentation. Procedia Computer Science, (2015). 58, 430-437. [CrossRef]
  67. Cui, B., Xie, M., & Wang, C. A deep convolutional neural network learning transfer to SVM-based segmentation method for brain tumor. In 2019 IEEE 11th International Conference on Advanced Infocomm Technology (ICAIT) (2019, October). (pp. 1-5). IEEE. [CrossRef]
  68. Egmont-Petersen, M., de Ridder, D., & Handels, H. Image processing with neural networks—a review. Pattern recognition, (2002). 35(10), 2279-2301. [CrossRef]
  69. Pereira, S.; Pinto, A.; Alves, V.; Silva, C.A. Brain Tumor Segmentation Using Convolutional Neural Networks in MRI Images. IEEE Trans. Med. Imaging 2016, 35, 1240–1251. [Google Scholar] [CrossRef] [PubMed]
  70. Ye, N., Yu, H., Chen, Z., Teng, C., Liu, P., Liu, X., Xiong, Y., Lin, X., Li, S., & Li, X. Classification of Gliomas and Germinomas of the Basal Ganglia by Transfer Learning. Frontiers in Oncology, 12 (2022). [CrossRef]
  71. Biratu, E. S., Schwenker, F., Ayano, Y. M., & Debelee, T. G. A survey of brain tumor segmentation and classification algorithms. Journal of Imaging, (2021). 7(9), 179. [CrossRef]
  72. Wikipedia contributors. F score. Wikipedia. https://en.wikipedia.org/wiki/F-score (2023).
  73. Brain Tumor Segmentation (BraTS) Challenge. Available online: http://www.braintumorsegmentation.org/.
  74. RIDER NEURO MRI - The Cancer Imaging Archive (TCIA) Public Access - Cancer Imaging Archive Wiki. https://wiki.cancerimagingarchive.net/display/Public/RIDER+NEURO+MRI.
  75. Harvard Medical School Data. Available online: http://www.med.harvard.edu/AANLIB/.
  76. The Cancer Genome Atlas. TCGA. Available online: https://wiki.cancerimagingarchive.net/display/Public/TCGA-GBM.
  77. The Cancer Genome Atlas. TCGA-LGG. Available online: https://wiki.cancerimagingarchive.net/display/Public/TCGA-LGG.
  78. figshare. Brain tumor dataset (2017). https://figshare.com/articles/dataset/brain_tumor_dataset/1512427/5.
  79. IXI Dataset – Brain Development. https://brain-development.org/ixi-dataset/.
  80. Gordillo, N., Montseny, E., Sobrevilla, P., A New Fuzzy Approach to Brain Tumor Segmentation, 2010 IEEE International Conference on Fuzzy Systems (FUZZ), 18–23 July 2010, pp. 1–8. [CrossRef]
  81. Rajendran and, R. Dhanasekaran, A hybrid Method Based on Fuzzy Clustering and Active Contour Using GGVF for Brain Tumor Segmentation on MRI Images, European Journal of Scientific Research, Vol. 61, No. 2, 2011, pp. 305-313.
  82. Reddy, K.K., et al., Confidence Guided Enhancing Brain Tumor Segmentation in Multi-Parametric MRI, 9th IEEE International Symposium on Biomedical Imaging, 2012, pp. 366–369. [CrossRef]
  83. Almahfud, M.A.; Setyawan, R.; Sari, C.A.; Setiadi, D.R.I.M.; Rachmawanto, E.H. An Effective MRI Brain Image Segmentation using Joint Clustering (K-Means and Fuzzy C-Means). In Proceedings of the 2018 International Seminar on Research of Information Technology and Intelligent Systems (ISRITI), Yogyakarta, Indonesia, 21-22 Nov. 2018; pp. 11–16. [Google Scholar]
  84. Chen, W.; Qiao, X.; Liu, B.; Qi, X.; Wang, R.; Wang, X. Automatic brain tumor segmentation based on features of separated local square. In Proceedings of the 2017 Chinese Automation Congress (CAC), Jinan, China, 20–22 October 2017. [Google Scholar] [CrossRef]
  85. Gupta, N., Mishra S., and Khanna P., Glioma identification from brain MRI using superpixels and FCM clustering. 2018. [CrossRef]
  86. Razzak, M.I.; Imran, M.; Xu, G. Efficient Brain Tumor Segmentation With Multiscale Two-Pathway-Group Conventional Neural Networks. IEEE J. Biomed. Health Inform. 2019, 23, 1911–1919. [Google Scholar] [CrossRef] [PubMed]
  87. Myronenko, A. and Hatamizadeh A., Robust Semantic Segmentation of Brain Tumor Regions from 3D MRIs, in Springer eBooks, 2019, pp. 82–89. [CrossRef]
  88. Karayegen, G. and Aksahin M. F., Brain tumor prediction on MR images with semantic segmentation by using deep learning network and 3D imaging of tumor region, Biomedical Signal Processing and Control, vol. 66, p. 102458, Apr. 2021. [CrossRef]
  89. Zhang, Y., Dong Z., Wu L., and Wang S., “A hybrid method for MRI brain image classification,” Expert Systems With Applications, vol. 38, no. 8, pp. 10049–10053, Aug. 2011. [CrossRef]
  90. G. Yang et al., “Automated classification of brain images using wavelet-energy and biogeography-based optimization,” Multimedia Tools and Applications, vol. 75, no. 23, pp. 15601–15617, 2016. [CrossRef]
  91. Amin, J., Sharif M., Yasmin M., and Fernandes S. L., “A distinctive approach in brain tumor detection and classification using MRI,” Pattern Recognition Letters, vol. 139, pp. 118–127, Oct. 2017. [CrossRef]
  92. Tiwari, P., Sachdeva J., Ahuja C. K., and Khandelwal N., “Computer Aided Diagnosis System-A Decision Support System for Clinical Diagnosis of Brain Tumours,” International Journal of Computational Intelligence Systems, vol. 10, no. 1, p. 104, Jan. 2017. [CrossRef]
  93. Sachdeva J., Kumar V., Gupta I. R., Khandelwal N., and Ahuja C. K., “Segmentation, Feature Extraction, and Multiclass Brain Tumor Classification,” Journal of Digital Imaging, vol. 26, no. 6, pp. 1141–1150, May 2013. [CrossRef]
  94. Jayachandran and, R. Dhanasekaran, “Severity Analysis of Brain Tumor in MRI Images Using Modified Multi-texton Structure Descriptor and Kernel-SVM,” Arabian Journal for Science and Engineering, vol. 39, no. 10, pp. 7073–7086, Aug. 2014. [CrossRef]
  95. El-Dahshan, E.S.A.; Hosny, T.; Salem, A.B.M. Hybrid intelligent techniques for MRI brain images classification. Dig. Signal Process 2010, 20, 433–441. [Google Scholar] [CrossRef]
  96. Kang, J.; Ullah, Z.; Gwak, J. MRI-Based Brain Tumor Classification Using Ensemble of Deep Features and Machine Learning Classifiers. Sensors 2021, 21, 2222. [Google Scholar] [CrossRef] [PubMed]
  97. Díaz-Pernas, F. J., Martínez-Zarzuela, M., Antón-Rodríguez, M., & González-Ortega, D. A Deep Learning Approach for Brain Tumor Classification and Segmentation Using a Multiscale Convolutional Neural Network. Healthcare, 9(2), 153. (2021). [CrossRef]
  98. Badža, M. M., & Barjaktarović, M. Classification of Brain Tumors from MRI Images Using a Convolutional Neural Network. Applied Sciences, 10(6), (2020). [CrossRef]
  99. Ertosun, M. G. and Rubin, D. L., “Automated grading of gliomas using deep learning in digital pathology images: A modular approach with ensemble of convolutional neural networks,” in Proc. AMIA Annu. Symp., vol. 2015, Nov. 2015, pp. 1899–1908. [PubMed]
  100. Sultan H., H., Salem N. M., and Al-Atabany,W. “Multi-Classification of Brain Tumor Images Using Deep Neural Network,” IEEE Access, vol. 7, pp. 69215–69225, 2019. [CrossRef]
  101. Khan, H., Jue W., Mushtaq M., and Mushtaq M., “Brain tumor classification in MRI image using convolutional neural network,” Mathematical Biosciences and Engineering, vol. 17, no. 5, pp. 6203–6216, Jan. 2020. [CrossRef]
  102. Özcan, H.; Emiro˘ glu, B.G.; Sabuncuo˘ glu, H.; Özdo˘gan, S.; Soyer, A.; Saygı, T. A comparative study for glioma classification using deep convolutional neural networks. Math. Biosci. Eng. MBE 2021, 18, 1550–1572. [Google Scholar] [CrossRef] [PubMed]
  103. Hao, R., Namdar, K., Liu, L., and Khalvati, F., “A Transfer Learning–Based Active Learning Framework for Brain Tumor Classification,” Frontiers in Artificial Intelligence, vol. 4, 2021. [CrossRef]
  104. Yang, Y., Yan, L., Zhang, X., Han, Y., Nan, H., Hu, Y., Hu, B., Yan, S., Zhang, J. Z., Cheng, D., Ge, X., Cui, G., Zhao, D., & Wang, W. Glioma Grading on Conventional MR Images: A Deep Learning Study With Transfer Learning. Frontiers in Neuroscience, 12. (2018). [CrossRef]
  105. Hamdaoui, H. E., Benfares, A., Boujraf, S., Chaoui, N. E. H., Alami, B., Maaroufi, M., & Qjidaa, H. High precision brain tumor classification model based on deep transfer learning and stacking concepts. Indonesian Journal of Electrical Engineering and Computer Science. 2021, 24(1), 167. [CrossRef]
  106. Khazaee, Z., Langarizadeh, M., & Ahmadabadi, M. R. N. Developing an Artificial Intelligence Model for Tumor Grading and Classification, Based on MRI Sequences of Human Brain Gliomas. International Journal of Cancer Management, (2022). 15(1). [CrossRef]
  107. Amou, M. A., Xia, K., Kamhi, S., & Mouhafid, M. A Novel MRI Diagnosis Method for Brain Tumor Classification Based on CNN and Bayesian Optimization. Healthcare, 10(3), 494. (2022). [CrossRef]
  108. Alanazi, M., Ali, M., Hussain, J., Zafar, A., Mohatram, M., Irfan, M., AlRuwaili, R., Alruwaili, M., Ali, N. T., & Albarrak, A. M. (2022). Brain Tumor/Mass Classification Framework Using Magnetic-Resonance-Imaging-Based Isolated and Developed Transfer Deep-Learning Model. Sensors, 22(1), 372. [CrossRef]
  109. Rizwan, M., Shabbir, A., Javed, A. R., Shabbr, M., Baker, T., & Al-Jumeily, D. Brain Tumor and Glioma Grade Classification Using Gaussian Convolutional Neural Network. IEEE Access, 2022. 10, 29731–29740. [CrossRef]
  110. Isunuri, B. V., & Kakarla, J. Three-class brain tumor classification from magnetic resonance images using separable convolution based neural network. Concurrency and Computation: Practice and Experience, 34(1). (2021). [CrossRef]
  111. Kaur, T., & Gandhi, T. K. Deep convolutional neural networks with transfer learning for automated brain image classification. Journal of Machine Vision and Applications, 31(3). (2020). [CrossRef]
  112. Rehman, A., Naz, S., Razzak, M. I., Akram, F., & Imran, M. A Deep Learning-Based Framework for Automatic Brain Tumors Classification Using Transfer Learning. Circuits Systems and Signal Processing, (2019). 39(2), 757–775. [CrossRef]
  113. Ahmmed, R., Swakshar, A. S., Hossain, M. F., & Rafiq, M. A. Classification of tumors and it stages in brain MRI using support vector machine and artificial neural network. (2017). [CrossRef]
  114. Sathi, K., & Islam, S. Hybrid Feature Extraction Based Brain Tumor Classification using an Artificial Neural Network. (2020). [CrossRef]
  115. Gurbina, M., Lascu, M., & Lascu, D. Tumor Detection and Classification of MRI Brain Image using Different Wavelet Transforms and Support Vector Machines. (2019). [CrossRef]
  116. Islam, R., Imran, S., Ashikuzzaman, M., & Khan, M. A. Detection and Classification of Brain Tumor Based on Multilevel Segmentation with Convolutional Neural Network. Journal of Biomedical Science and Engineering, (2020). 13(04), 45–53. [CrossRef]
  117. Mohsen, H., El-Dahshan, E. A., El-Horbaty, E. M., & Salem, A. M. (2017). Classification using deep learning neural networks for brain tumors. Future Computing and Informatics Journal, 3(1), 68–71. [CrossRef]
  118. Padma, A., and Sukanesh R. "A wavelet based automatic segmentation of brain tumor in CT images using optimal statistical texture features." International Journal of Image Processing 5.5 (2011): 552-563.
  119. Padma, A., & Sukanesh, R. Automatic Classification and Segmentation of Brain Tumor in CT Images using Optimal Dominant Gray level Run length Texture Features. International Journal of Advanced Computer Science and Applications, (2011). 2(10). [CrossRef]
  120. Ruba, T., Tamilselvi, R., Parisabeham, M., & Aparna, N. Accurate Classification and Detection of Brain Cancer Cells in MRI and CT Images using Nano Contrast Agents. Biomedical and Pharmacology Journal, (2020). 13(03), 1227–1237. [CrossRef]
  121. Woźniak, M., Siłka, J., & Wieczorek, M. W. Deep neural network correlation learning mechanism for CT brain tumor detection. Neural Computing and Applications, (2021). 35(20), 14611–14626. [CrossRef]
  122. Nanmaran, R., et al. "Investigating the role of image fusion in brain tumor classification models based on machine learning algorithm for personalized medicine." Computational and Mathematical Methods in Medicine 2022 (2022). [CrossRef]
Figure 2. Brain MRI image [18].
Figure 3. MRI planes a) Coronal, b) Sagittal, c) Axial.
Figure 4. MRI brain tumor. a) FLAIR image, b) T1 image, and c) T2 image [18].
Figure 5. CT brain tumor [19].
Figure 7. ML block diagram.
Figure 8. Extreme learning machine.
Figure 9. DL block diagram.
Figure 10. CNN architecture [50].
Figure 11. The most popular CNN architectures.
Figure 12. Overview of the suggested framework for tumor detection [82].
Figure 13. 2PG-CNN architecture [86].
Figure 15. The proposed technique's methodology [95].
Figure 17. Proposed method [101].
Figure 18. Workflow of the suggested active learning framework based on transfer learning [103].
Figure 19. Proposed process for deep transfer learning [105].
Figure 20. GCNN framework [109].
Figure 21. Separable CNN model [110].
Figure 22. Architecture of NN [118].
Figure 23. Number of articles published from 2010 to 2022.
Figure 24. Number of articles published that perform classification, segmentation, or both.
Table 4. Summary of the datasets.
Dataset  | MRI sequences  | Source
BraTS    | T1, T2, FLAIR  | [73]
RIDER    | T1, T2, FLAIR  | [74]
Harvard  | T2             | [75]
TCGA     | T1, T2, FLAIR  | [76],[77]
Figshare | T1             | [78]
IXI      | T1, T2         | [79]
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
Copyright: This open access article is published under a Creative Commons CC BY 4.0 license, which permits free download, distribution, and reuse, provided that the author and preprint are cited in any reuse.