1 Introduction

Brain tumor segmentation is a complex and challenging problem in medical imaging; it aims to delineate brain tumor regions accurately with appropriately placed masks. In recent years, deep learning algorithms have shown satisfactory accuracy on various computer vision problems, such as image classification, object recognition, and detection. Many deep learning variants have been used to segment brain tumors with excellent performance [1]. Deep Learning (DL) computational models comprise several processing layers that represent data at different levels of abstraction. DL is used in almost every field, especially medical imaging and biostatistics; as a result, DL algorithms have significantly improved identification, prediction, and diagnostic testing methods in various medical fields [2]. Forecasting brain tumors and patient survival remains an open problem for researchers. MRI images enable new research directions in brain cancer, such as prediction, segmentation, and survival analysis.

Brain tumors are classified as either benign or malignant. MRI data should be used to differentiate and categorize tumor types (gliomas, meningiomas, pituitary tumors) to assist physicians and avoid risky histology procedures. Gliomas are the most frequent type of brain tumor: they account for most brain tumors and contain unchecked proliferating cells that, although they rarely spread to the spinal cord or other body organs, can grow and invade nearby healthy tissue. Meningiomas are another essential group. Unlike benign tumors of other organs, brain tumors can cause life-threatening conditions. Some tumors (like meningiomas) have only a slight chance of becoming cancerous, and the likelihood of complete surgical removal is higher because they typically do not spread into the nearby brain tissue. Pituitary tumors originate in the pituitary gland, which regulates hormones and bodily processes. At the same time, improving the quality and precision of diagnosis remains challenging, and various approaches have been presented for this objective [3].

The advancement of new deep learning-based algorithms and artificial intelligence has significantly impacted medical imaging, specifically illness diagnosis. Convolutional neural network (CNN) models are the most widely used deep learning models. The five layers of the CNN architecture are the input layer, the convolutional layer, the pooling layer, the fully connected layer, and the classification layer [4,5,6]. Among classical methods, the most widely used and highly accurate algorithms are Support Vector Machines (SVM), K-Nearest Neighbors (KNN), and Artificial Neural Networks (ANN). At the same time, improving brain tumor classification will require growing the available data in the field and developing new artificial-neural-network-based techniques, known as deep learning. The most notable distinction among the three tumor types is that meningiomas are usually benign, whereas gliomas are usually malignant. CNNs are neural networks that help visualize, interpret, and explore enormous volumes of data in medical imaging. The suggested autonomous computer diagnostic system's performance is evaluated using accuracy as the parameter.

The application of CNN's integrated feature extraction and classification to recognize and identify brain tumors is suggested in the papers [7,8,9,10]. A convolutional neural network serves as the classification model for tumor detection systems; CNNs are superior classifiers when large amounts of data must be processed. A three-layer convolutional neural network with an activation function is used in the implementation of this work, and the proposed work is 98% accurate. A multiscale CNN-based deep learning method was proposed for brain tumor classification and segmentation. Each window is processed through three convolution passes using kernels of three scales (large, medium, and small) to extract features. Each pass includes two convolution stages with ReLU correction and a 3 × 3 max pooling kernel with a stride of 2. The input image is processed at three different spatial scales along separate processing paths, which is one of the contrasts between this proposal and past work. It includes a CNN-based multi-task classification system for tumor identification and prediction.

Tumor segmentation in a CNN-based model is used to localize brain cancers. Rather than creating a separate model for each classification assignment, the method employs a single model to categorize a range of brain MRI classification data; this brain tumor classification model employs a multi-task classifier [11]. Machine learning methods for classifying brain tumors require many extracted features to be effective. As a result, various machine learning approaches classify tumors on normal brain MRI images using individual feature extraction methods. These approaches for extracting and organizing features are not automated.

Expectation–Maximization (EM), Long Short-Term Memory (LSTM), Finite Element Method (FEM), FCM, SVM, ANN, CNN-based segmentation, and other techniques have been utilized to diagnose brain tumors from MRI images in recent years [12]. Extensive research is being conducted into the detection, segmentation, and classification of brain tumors using MRI images. This research work proposes a novel DCNN-LuNet for brain tumor categorization. The proposed LuNet model offers a new way to automatically classify brain tumors based on magnetic resonance imaging (MRI).

The main limitations of these diagnostic procedures are their invasiveness, long turnaround time, and increased vulnerability to sampling errors. Clinics and radiologists use computer-aided detection and diagnosis to assist professionals in making rapid and accurate judgments, increasing diagnostic capability and saving diagnostic time [3, 4]. Machine learning algorithms make detection and classification more accessible and accurate for medical image analysis. For the detection and classification of meningioma, CNN deep-network-based image analysis approaches with excellent accuracy and classification speed have been developed. This research uses common medical imaging sub-modules such as pretreatment, feature extraction, categorization, and segmentation [5]. The capabilities of a brain tumor detection system based on a hybrid machine learning algorithm are presented in this research work.

1.1 Problem statement

A review of several research articles on deep learning for detecting tumors in brain MRI showed that deep models have many layers. Existing algorithms complicate the training parameters, mainly when dealing with small data: as the model's complexity grows, simulation time increases. As a result, we propose a solution that reduces model complexity on small datasets. The proposed LuNet model has few layers, does not require many iterations, takes very little running time, and provides better results for all test parameters on the test dataset.

1.2 Major contribution

A four-layer deep LuNet model is used for classification. Traditional methods cannot achieve locational inheritance and authenticity, so this research work presents an enhanced automatic classification method based on a hybrid DCNN and a LuNet classifier. The proposed method is used to preprocess, classify, and segment adult primary brain tumors. The proposed hybrid architecture extracts features from augmented images and classifies them internally as normal or abnormal tumor images. FCM and GMM use local morphological and functional approaches to classify tumor regions.

The proposed rating system employs a hybrid classification approach for quantitative and qualitative evaluation. Measurements such as sensitivity and specificity show that segmentation and classification accuracy reach 99.4% and 99.5%, respectively. Furthermore, experimental results confirm this by establishing the accuracy and F-score on authentic images. The deep LuNet model is easier to apply to tumor diagnosis than conventional machine learning algorithms.

The rest of this paper is structured as follows: Sect. 2 discusses the literature. The proposed work is described in Sect. 3. Section 4 discusses the result analysis. The research work is concluded in Sect. 5.

2 Literature survey

The proposed strategy is highly reliant on brain tumor classification. Deep learning (DL) algorithms have become popular for classification in recent years. Previous processes and methods for segmentation and ML-based categorization of brain tumors on MRI are described in this part. Manikandan et al. [13] proposed a hybrid ML algorithm employing the K-Nearest Neighbor (KNN), Decision Tree (DT), and Random Forest (RF) methods (KNN-RF-DT). The Cancer Genome Atlas Glioblastoma Multiforme (TCGA-GBM) [14] data collection was used to perform the experimental calculations of the suggested methodology; this open canonical Glioblastoma Multiforme dataset represents the primary types of brain tumors. The proposed method is evaluated on a dataset of 2,556 images split 85:15 between training and testing, producing a good accuracy of 97.305%. Otsu's threshold approach was used to segment the data at first. For feature extraction, Stationary Wavelet Transform (SWT), Gray Level Co-occurrence Matrix (GLCM), and Principal Component Analysis (PCA) were utilized, yielding 13 feature types. Rather than using deep learning, the researchers strive to increase the performance of existing classifiers; for small training sets and limited computational budgets, traditional classifiers can be superior to deep learning.

Yilam Shazadi et al. [14] implemented a hybrid CNN with a Long Short-Term Memory (LSTM) network to detect brain tumors. The system was tested using data from the BraTS 2015 dataset. According to the results, the features extracted from VGG-16 yield greater classification accuracy than those extracted from AlexNet and ResNet.

Wu Wentao et al. [15] suggested a support vector machine approach based on deep convolutional neural networks (DCNN-F-SVM), running each model on the BraTS dataset and a custom dataset to segment brain tumors. The segmentation results demonstrate that the proposed model performs significantly better than ensemble SVM classifiers and deep convolutional neural networks. The proposed brain tumor segmentation model has three essential steps. First, a DCNN is trained to learn the mapping from image space to tumor-marker space. In the second step, the test image is combined with the trained deep convolutional neural network's prediction labels, and the result is fed to the SVM classifier. The third step cascades the DCNN and an ensemble support vector machine to train a deep classifier. On both datasets, the model outperforms deep convolutional neural networks and ensemble SVM classifiers in how well it separates regions into groups.

Javaria Amin et al. [16] suggested a Wiener filter with multiple wavelet frequency bands to remove noise from and improve the input slices. Two publicly available datasets and one locally collected dataset are used to validate the proposed technique. The local dataset includes 86 images from Nishtar Hospital in Multan, Pakistan: 49 tumor images and 37 non-tumor images. A portion of the tumor pixels can be clustered using PF (potential field) clustering. In addition, tumor areas were isolated from FLAIR and T2 MRI using global thresholds and different mathematical operations. The LBP (Local Binary Pattern) and GWT (Gabor Wavelet Transform) features work together to achieve good classification. To classify tumor/non-tumor MR slices, multiple classifiers use the proposed mixed texture feature for each segmented portion; a thorough performance evaluation showed that feature fusion with KNN outperformed the other classifiers. Another paper suggests and develops a novel Genetic Algorithm based on the Seed Corrected Region Growing (GFSMRG) approach and a Back Propagation Neural Network (BPNN) with fuzzy initialization [17]. The introduced work has four stages: preprocessing, segmentation, feature extraction, and classification. It also specifies the accuracy and temporal complexity of the GFSMRG algorithm. Performance indicators such as the similarity index, Jaccard index, sensitivity, specificity, and accuracy have been used to validate the method's performance statistically and qualitatively.

Jaeyong Kang et al. [18] applied the transfer learning principle with several pre-trained CNNs to extract deep features from brain magnetic resonance (MR) images. Different machine learning classifiers assess the extracted deep features. The top three deep feature sets that operate well with various machine learning classifiers are selected, concatenated into an ensemble of deep features, fed to various machine classification models, and used to predict the final result. The finished model's validity is evaluated across different combinations of pre-trained deep feature extractors, machine learning classifiers, and deep feature sets on three publicly available MRI datasets. Experimental results show that the ensemble of deep features can significantly improve performance; in most cases, Support Vector Machines (SVMs) with radial basis function kernels outperform the other machine learning classifiers.

Shanaka Ramesh Gunasekara et al. [19] proposed a triple deep learning architecture. First, a deep convolutional CNN implements the classifier. Then, for decision-making, tumor regions in the classified image are identified using a region-based convolutional network. As the third and final step, the segmentation algorithm delineates the focused tumor boundaries. The boundary produced by the proposal is compared with the gold-standard boundary delineated by the targeted expert to determine the peak signal-to-noise ratio (PSNR).

Baza et al. [20] developed a new CNN architecture to classify three types of brain tumors. T1-weighted contrast-enhanced magnetic resonance images were used to test the new network, which is less complicated than a pre-trained network. Hari Mohan Rai et al. [21] demonstrated a new deep neural network based on U-Net (LuNet) tailored for tumor identification; it is simpler and has fewer layers. The task was to identify brain MRI scans as normal or pathological from a dataset of 253 high-pixel images. The MRI images are first resized, cropped, preprocessed, and scaled so that the deep neural models train swiftly and adequately. The suggested LuNet deep CNN model for detecting brain tumors on MRI images is straightforward, fast, and efficient. For this task, an efficient CNN architecture dubbed "LuNet for medical picture segmentation" was created; downsampling and upsampling are the two main components of the design. Five statistical evaluation metrics were used to evaluate and compare the performance of the LuNet models: precision, recall, specificity, F-score, and accuracy.

Chong Zhang et al. [22] denoise images and extract brain tissue using adaptive Wiener filtering and morphological operations, which significantly lowers the method's sensitivity to noise. A fuzzy C-means algorithm and K-means++ clustering segment the image; this clustering enhances the algorithm's stability while lowering its sensitivity to the clustering settings. Finally, the retrieved images were post-processed using morphological operations and median filtering to picture the brain tumor accurately. The suggested technique was also compared to various segmentation algorithms currently in use; the results show that it beats the other algorithms in accuracy, sensitivity, specificity, and recall.

Huseyin Kutlu et al. [23] presented a hybrid CNN-DWT-LSTM technique for classifying CT images of tumor-bearing livers and magnetic resonance (MR) images of tumor-bearing brains. The suggested method divides liver tumor images into benign and malignant categories and brain tumor images into meningioma, glioma, and pituitary tumors. The hybrid CNN-DWT-LSTM approach extracts the images' feature vectors using the pre-trained AlexNet CNN architecture, reduces and improves them with a single-stage 1D discrete wavelet transform (1-D DWT), and then trains and classifies them in an LSTM network. The suggested method outperforms classifiers such as the K-nearest neighbor method (KNN) and SVM.

The Fuzzy C-Means (FCM) partitioning methodology was applied by Aneza and Rawat [24]. Segmentation performance is evaluated by cluster validation power, processing time, and convergence speed; an error rate of 0.537% was achieved using the FCM approach. Wasule and Sonar [25] extracted features using the GLCM method and used SVM and K-nearest neighbors (KNN) to categorize gliomas as malignant or benign and as high-grade (HG) or low-grade (LG). The clinical dataset distinguishes between malignant and benign gliomas, whereas the BRATS 2012 dataset distinguishes between high- and low-grade gliomas.

Saleck et al. [26] proposed a reliable and accurate FCM splitting technique in which the malignant mass is extracted from the MRI. The suggested method avoids troublesome estimation by utilizing FCM clusters as input data. To identify the optimal threshold for splitting pixels into groups other than the selected group, GLCM is used to extract texture properties, which has a significant bearing on precision. M. Rashid et al. [27] looked into techniques to improve the sharpness of MRI pictures and tumor localization. The technology takes MRI brain scans as input, removes noise with an anisotropic filter, and then uses SVMs, applying morphological changes to the fragments after segmentation.

Len et al. [28] created a classification system for brain cancers. First, histogram smoothing removes unnecessary information from the image. Three categorization methods were developed: FCM, kernel-based FCM, and weighted fuzzy kernel clustering. The system has a 2.36% lower misclassification rate than other algorithms. According to Mohamed Tallow [29], deep transfer learning algorithms should be used to separate MRI brain images into normal and pathological categories. The pre-trained CNN model uses the ResNet34 technique, and a data-expansion technique extends the database. This method was validated on MRI data from Harvard Medical School [13].

Deepak et al. [30] presented a GoogLeNet-based brain tumor detection approach covering three forms of brain cancer: gliomas, meningiomas, and pituitary tumors. Brain tumor classification is difficult because substantial changes in size and shape frequently occur, affecting classification; this problem is especially perplexing for typical machine learning algorithms. Transfer learning is used to solve it and obtains higher accuracy than earlier models; significant improvement is achieved even with tiny datasets. This method adapts GoogLeNet, modified at the softmax level, into a tumor classification system for the various tumor types. The CNN-centric GoogLeNet technique improves accuracy from the 92.3% of multiclass SVMs to 97.8%.

Ardhendu Sekhar et al. [31] proposed an IoMT-enabled CAD system. This study divides glioma, meningioma, and pituitary tumors into three groups using a transfer learning approach. A pre-trained CNN named GoogLeNet extracts features from brain MRI images, and the features are then classified using classifiers such as K-Nearest Neighbors (K-NN), Softmax, and Support Vector Machine (SVM). The suggested model was trained on datasets from the Harvard Medical Knowledge Base and CE-MRI Figshare; the experimental outcomes perform better than other current models, assessed using performance metrics such as accuracy, specificity, and F1-score. Veeramuthu et al. [32] suggested the combined feature and image-based classification (CFIC) method for classifying brain tumor images. The proposed classifier is trained and tested on the Kaggle Brain Tumor Detection 2020 dataset. Among the various classifiers that have been suggested, CFIC performs noticeably better than existing classification techniques, with sensitivity, specificity, and accuracy of 98.86%, 97.14%, and 98.97%, respectively.

Iliass Zine-Dine et al. [33] classify brain tumors using a combination of VGG-16 and several classifiers. The data used in this study (brain MRI images for brain tumor detection) were obtained from a Kaggle competition (Rakotomamonjy, 2008). After the VGG-16 tuning step, the extracted features are sent to the classifier. In terms of precision (98.7%), recall (98.7%), and F1 score (98.7%), the suggested method outperforms some cutting-edge studies in the field of brain tumors [34].

3 Proposed research work

The main goal of this work is to locate the tumor and classify brain tumors as glioma or meningioma. To begin, the extended LuNet algorithm is used to segment the data. For preprocessing, a Laplacian of Gaussian (LoG) filter is used. On MRI scans, a DCNN deep network is used to diagnose and classify meningioma cancers. All database images with a resolution of 512 × 512 pixels are resized to 256 × 256 pixels so that every image has the same size. The suggested CNN deep-network classifier is applied to evaluate whether the preprocessed brain MRI images are normal or pathological. A connected-component method with a global threshold is used to isolate the tumor region. Figure 1 shows the proposed brain tumor classification process.

Fig. 1 Block diagram of proposed work

3.1 Datasets

The distribution of data is also important in image classification. The whole dataset must be divided into a training set, a test set, and a validation set, with the largest share (about 70% or more) reserved for training so that the model can learn the validation and test data well. The datasets in this work are likewise divided into three categories: training, test, and validation. About 70% (173 images) of the total data is reserved for training, 30 MR images are reserved for testing, and 50 images are reserved for validation. Validation data is primarily used to validate and predict training data; it is not used in the training process, allowing an unbiased evaluation of the proposed model. These training, testing, and validation datasets are subdivided into tumor and non-tumor data. The training set contains 64 non-tumor and 109 tumor images, the test set contains 15 tumor and 15 non-tumor images, and the validation set contains 19 non-tumor and 31 tumor images (Figs. 2, 3, 4).
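The split above can be reproduced in a few lines. The following is an illustrative sketch only (the experiments in this paper were run in MATLAB 2021a); the folder name and file extension are assumptions:

```python
# Illustrative train/test/validation split (173/30/50) described above.
# The dataset folder name and file extension are hypothetical.
import random
from pathlib import Path

random.seed(42)                                   # reproducible shuffling
images = sorted(Path("brain_mri_dataset").glob("*.png"))
random.shuffle(images)

train_set = images[:173]                          # ~70% of the 253 images
test_set  = images[173:203]                       # 30 MR images for testing
val_set   = images[203:253]                       # 50 images for unbiased validation
```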

Fig. 2 Overview of proposed work

Fig. 3 Proposed hybrid DCNN with LuNet classifier architecture

Fig. 4 MRI images of brain tumor dataset-I & II

Before training the model, the data is divided into three components: training data, test data, and validation data. The images are then resized to a width of 224, a height of 224, and three channels (RGB). Standard parameters are employed to test the full CNN model; the MR image's input size is 224 × 224 × 3. Figure 5 shows the preprocessing and segmentation output of the proposed algorithm, and Table 1 shows the distribution of tumor classifications.

Fig. 5 Segmented output–dataset I & II

Table 1 Distribution of Non-tumor and tumor classifications in the database

3.2 Preprocessing

For preprocessing, a Laplacian of Gaussian (LoG) filter is used. The preprocessing phase aims to enhance the required image features and eliminate undesired distortion in preparation for later processing. Image enhancement is a preprocessing technique for transforming a less-than-ideal image into a better one. In the pretreatment stage, the original image is combined with the sharpened image for extra effect. The MRI scan is transformed into a greyscale image with a resolution of 256 × 256 when stored in the system. Because noise harms image quality, these images are then denoised; the high-pass filter produces images with excellent resolution and no noise for feature extraction and sharpening. Data expansion is one of the preprocessing approaches used, turning the brain image source into a homogeneous three-dimensional image by vertically flipping and tilting the image. As a result, data expansion helps the suggested hybrid DCNN-LuNet architecture achieve high accuracy and precision in evaluation (Figs. 6, 7).
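As a rough illustration of this stage, the sketch below applies grayscale LoG sharpening and flip/rotation data expansion; the filter sigma and blend weight are assumptions, not values from the paper (whose pipeline was implemented in MATLAB):

```python
# Sketch of LoG sharpening and flip/rotate data expansion; sigma and weight
# are illustrative values, not the paper's settings.
import numpy as np
from scipy.ndimage import gaussian_laplace

def sharpen_log(img: np.ndarray, sigma: float = 2.0, weight: float = 0.5) -> np.ndarray:
    """Combine the original grayscale slice with its LoG edge response."""
    img = img.astype(np.float32)
    edges = gaussian_laplace(img, sigma=sigma)     # Laplacian of Gaussian filter
    return np.clip(img - weight * edges, 0, 255)   # sharpened image

def expand(img: np.ndarray):
    """Yield augmented variants (vertical/horizontal flips, 90-degree rotation)."""
    yield img
    yield np.flipud(img)
    yield np.fliplr(img)
    yield np.rot90(img)
```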

Fig. 6 Comparison of proposed algorithm

Fig. 7 Comparison of proposed algorithm

3.3 Segmentation using hybrid FCM-GMM

To further classify and predict brain tumors, brain tumor segmentation is used to extract tumor regions from the images. Various machine learning and deep learning approaches have been developed for segmenting tumor cells. Some of these machine learning algorithms are trained using manually segmented images, an expensive, time-consuming procedure that requires medical competence. Here, a deep neural network model with hybrid FCM-GMM is used to determine where the tumor lies on the MRI.

3.3.1 ROI segmentation

A region of interest (ROI) is a portion of an image or dataset selected for a specific purpose from a raw sample. On T1-weighted MRI, the ROI in this work is the boundary of the brain tumor. Annotation masks for tumors are provided in the brain tumor dataset: the tumor area carries the label "1", while everything else carries the label "0". Using the mask corresponding to the pixels, the specific tumor was retrieved from the brain MRI sample. Because tumor sizes differ between samples, the tumor ROI image was resized and zero-padded to meet the proposed model's input geometry; after ROI segmentation, each image is 256 × 256 pixels. Hybrid procedures combine several methods or techniques to attain high accuracy, emphasizing each method's benefits while minimizing its drawbacks. For example, studies that combine FCM and GMM have been proposed to segment brain-related disorders: FCM is utilized in the first step to identify diseased areas in the brain, while the second method classifies them. To extract features, the authors used a gray-level run-length matrix. According to the study's authors, the FCM approach can categorize tumor tissue more accurately than the K-means method, while the latter completes the task faster. As a result, each classifier in this study can exploit this advantage to perform classification in less time and produce better results [7].
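A small sketch of this masking step, under the stated labeling convention (1 = tumor, 0 = background); the helper name and the assumption that the crop fits within 256 × 256 are illustrative:

```python
# ROI extraction sketch: crop the tumor with the binary annotation mask and
# zero-pad to the 256 x 256 input size (assumes the crop fits in 256 x 256).
import numpy as np

def extract_roi(image: np.ndarray, mask: np.ndarray, size: int = 256) -> np.ndarray:
    rows, cols = np.where(mask == 1)                 # tumor pixel coordinates
    crop = image[rows.min():rows.max() + 1, cols.min():cols.max() + 1]
    out = np.zeros((size, size), dtype=image.dtype)  # zero padding
    r0 = (size - crop.shape[0]) // 2                 # center the crop
    c0 = (size - crop.shape[1]) // 2
    out[r0:r0 + crop.shape[0], c0:c0 + crop.shape[1]] = crop
    return out
```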

3.3.2 FCM-GMM

Segmentation is crucial when splitting an image into various pieces, since only the useful parts then need to be considered; this step reduces the work required in later stages. If features had to be extracted from the tumor area alone rather than the entire image, feature extraction would be easy. K-means is an unsupervised, iterative clustering algorithm. The centroids of all clusters are initialized at random, and K-means converges by finding a local minimum of the cost function. The distance between a centroid and the data points is calculated using the Euclidean distance. Because K-means is a hard clustering technique, each data point belongs to exactly one cluster, and points are assigned to clusters solely by their distance to the randomly initialized centroids. The proposed approach employs a Gaussian mixture model (GMM) segmentation technique [12]. GMM is a more advanced algorithm than K-means; it uses expectation maximization (EM) to divide the brain into distinct areas and reduce the data. In Fig. 8, only the brightest pixels were gathered, and the tumor region was extracted.
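The sketch below illustrates the GMM/EM idea on pixel intensities using scikit-learn (an assumption; the paper's implementation was in MATLAB): fit a mixture, then keep the component with the brightest mean as the tumor candidate.

```python
# Illustrative GMM intensity segmentation: fit a Gaussian mixture with EM and
# keep the brightest component, mirroring the "brightest pixels" step above.
import numpy as np
from sklearn.mixture import GaussianMixture

def gmm_segment(img: np.ndarray, n_components: int = 3) -> np.ndarray:
    pixels = img.reshape(-1, 1).astype(np.float64)
    gmm = GaussianMixture(n_components=n_components, random_state=0).fit(pixels)
    labels = gmm.predict(pixels).reshape(img.shape)
    brightest = int(np.argmax(gmm.means_.ravel()))   # highest-mean component
    return (labels == brightest).astype(np.uint8)    # binary tumor-candidate mask
```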

Fig. 8 Accuracy with cropped MR images using the LuNet model


Fuzzy C-means algorithm: a pixel set \(X = \{{X}_{1}, {X}_{2}, \ldots ,{X}_{N}\}\) is partitioned into C fuzzy clusters. Each point belongs to every cluster to a degree given by its membership value, so points can belong to multiple clusters. The algorithm iteratively updates the C cluster centres so as to minimize the objective function associated with the fuzzy membership matrix U:

$$J = \sum\nolimits_{i = 1}^{N} {\sum\nolimits_{j = 1}^{C} {U_{ij}^{m} \left\| {X_{i} - C_{j} } \right\|^{2} } }$$
(1)

where \({U}_{ij}\) is the membership matrix, m is the cluster fuzziness factor, and \(\left\| {X}_{i} - {C}_{j} \right\|\) is the Euclidean distance. Data points near a cluster's centre have a higher degree of membership than data points at the edges [27]. FCM calculates the centre of each cluster and assigns each point a membership grade for every cluster. The cluster centre is then moved to the correct place by repeatedly updating it from the data set. The membership captures the ambiguity of the image and the information contained within it.

In classic image segmentation, FCM clusters the pixel sample set directly; however, this is computationally intensive. As a result, selecting an appropriate initial cluster centre is essential: with a good initial centre, the technique converges swiftly to the actual cluster centre. FCM algorithms have been successfully used to address various real-world issues. Compared to more complicated segmentation techniques, the FCM segmentation method has a significant advantage because it preserves more information from the original image. The FCM algorithm consists of the steps in Eqs. 2, 3, 4. Fuzzy segmentation is achieved by iteratively optimizing the above objective function, updating the membership \({U}_{ij}\) and the cluster centres \({C}_{j}\) as follows:

  • Step 1: Initialise the membership matrix

    $$U = \left[ {U_{ij} } \right]\;matrix,\; U^{\left( 0 \right)}$$
    (2)
  • Step 2: At step k, calculate the centre vectors \(c^{\left( k \right)} = \left[ {C_{j} } \right] \,with\, U^{\left( k \right)}\)

    $$C_{j} = \frac{{\mathop \sum \nolimits_{i = 1}^{N} U_{ij}^{m} X_{i} }}{{\mathop \sum \nolimits_{i = 1}^{N} U_{ij}^{m} }}$$
    (3)
  • Step 3: Update \(U^{\left( k \right)}\) to \(U^{{\left( { k + 1} \right)}}\)

    $$U_{ij} = \frac{1}{{\mathop \sum \nolimits_{k = 1}^{C} \left( {\frac{{\left\| {X_{i} - C_{j} } \right\|}}{{\left\| {X_{i} - C_{k} } \right\|}}} \right)^{{\frac{2}{m - 1}}} }}$$
    (4)
  • Step 4: If \(\left\| {U^{{\left( { k + 1} \right)}} - U^{\left( k \right)} } \right\| < \epsilon\), STOP; otherwise go back to Step 2.

    The term "k" refers to an iteration step. This process converges to a local minimum or a saddle point of \(J_{m}\).

3.4 Feature extraction using VGG-16

In conventional approaches, GLCM has been used to extract the 13 categories of features; in this research work, VGG-16 performs the feature extraction instead. Many CNN models now offer better performance and deeper architectures. On the other hand, deeper networks are challenging to train due to their high data requirements and millions of parameters; a large, well-labeled dataset is critical for more accurate and generalized models, and such datasets are not available for medical imaging challenges. Transfer learning techniques are used to address this issue [28]. For feature extraction, a VGG-16 network is used. VGG-16 comprises 13 convolutional layers using 3 × 3 filters with a stride of one, plus three fully connected (FC) layers. The network stacks multiple small kernels whose filters increase the network's depth, allowing it to extract more complicated features at a lower cost.
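A hedged sketch of this transfer-learning step using torchvision's pre-trained VGG-16 (the paper does not name a framework for this step, so PyTorch and the file path here are assumptions):

```python
# VGG-16 feature-extraction sketch: drop the classifier head and use the
# convolutional stack as a fixed feature extractor (ImageNet weights).
import torch
from PIL import Image
from torchvision import models, transforms

vgg = models.vgg16(weights=models.VGG16_Weights.IMAGENET1K_V1).eval()
extractor = torch.nn.Sequential(vgg.features, vgg.avgpool, torch.nn.Flatten())

preprocess = transforms.Compose([
    transforms.Resize((224, 224)),                     # input size from Sect. 3.1
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],   # ImageNet statistics
                         std=[0.229, 0.224, 0.225]),
])

mri = Image.open("slice_0001.png").convert("RGB")      # hypothetical file
with torch.no_grad():
    features = extractor(preprocess(mri).unsqueeze(0)) # shape (1, 25088)
```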

3.5 Classification using enhanced LUNET

The LuNet network receives the pre-trained VGG-16 features. By freeing memory, LuNet can mitigate the degradation problem. LuNet is a more advanced version of a CNN with three gates (input, output, and forget); the network has two hidden layers, each with 100 nodes, and uses these gates to learn long-term dependencies over time. The suggested deep CNN with LuNet for detecting brain tumors from MRI images is straightforward, quick, and effective. This project develops the highly effective CNN architecture LuNet for classifying medical images. Downsampling and upsampling are the two components of its overall architecture. The downsampling section has only two layers, each followed by a max-pooling operation. The dimensions of the input image are 224 × 224 × 3. Figure 2 depicts the overview of the proposed brain tumor classification process. The second LuNet layer comprises two 64-filter ConvNets and a pooling layer with a 3 × 3 filter size. Upsampling, the second half, is likewise a two-layer process carried out with a transpose layer and two ConvNets. During the upsampling step, the high-resolution features from the preceding part are merged with the upsampled data to recover image detail. The following characteristics distinguish the improvements proposed in this paper from the traditional LuNet network classification algorithm:

  • We chose to place two successive convolutional layers before each pooling layer in order to build better data representations without quickly losing all the spatial information.

  • Unlike the traditional LuNet model, our model has only one fully connected layer, fixed between the last convolution and the output layer. This choice reduces the number of parameters, which keeps the model light and lowers computational complexity.

Successive ConvNets with transposed layers learn to compose a highly accurate output (Ronneberger, Fischer, and Brox, 2015). The upsampling section of LuNet has a transpose layer and two convolutional layers: the first transpose layer has a 2 × 2 filter with 64 filters and a stride of two, and the two ConvNets have the same 3 × 3 size with 64 filters. The second transpose layer has a 2 × 2 filter with 32 filters and a stride of two, followed by two ConvNets with the same number of filters. After the upsampling layers, the output image has the same resolution as the input image. An ELU activation function in each LuNet layer prevents negative pixel values from being dropped. In the final section, the fully connected layers combine the gathered information under a sigmoidal activation function.
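The sketch below assembles these pieces into a runnable PyTorch model. It is an interpretation of the description above, not the authors' code: the first block's filter width and the pooling padding are assumptions, and the single fully connected head follows the bullet list above.

```python
# Hedged PyTorch sketch of the described LuNet-style classifier: two
# downsampling blocks, two transposed-convolution upsampling blocks, ELU
# activations, one fully connected layer, sigmoid output.
import torch
import torch.nn as nn

class LuNetSketch(nn.Module):
    def __init__(self):
        super().__init__()
        def conv_block(cin, cout):                   # two successive 3x3 convs + ELU
            return nn.Sequential(
                nn.Conv2d(cin, cout, 3, padding=1), nn.ELU(),
                nn.Conv2d(cout, cout, 3, padding=1), nn.ELU())
        self.down1 = nn.Sequential(conv_block(3, 32), nn.MaxPool2d(3, stride=2, padding=1))
        self.down2 = nn.Sequential(conv_block(32, 64), nn.MaxPool2d(3, stride=2, padding=1))
        self.up1 = nn.Sequential(nn.ConvTranspose2d(64, 64, 2, stride=2), conv_block(64, 64))
        self.up2 = nn.Sequential(nn.ConvTranspose2d(64, 32, 2, stride=2), conv_block(32, 32))
        self.head = nn.Sequential(nn.AdaptiveAvgPool2d(1), nn.Flatten(),
                                  nn.Linear(32, 1), nn.Sigmoid())  # single FC layer

    def forward(self, x):                            # x: (B, 3, 224, 224)
        x = self.down2(self.down1(x))                # 224 -> 112 -> 56
        x = self.up2(self.up1(x))                    # 56 -> 112 -> 224
        return self.head(x)                          # tumor probability in [0, 1]

model = LuNetSketch()
print(model(torch.randn(1, 3, 224, 224)).shape)      # torch.Size([1, 1])
```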

Figure 3 depicts the proposed hybrid DCNN with LuNet classifier architecture. As the figure shows, the suggested LuNet model has only two encoder layers, two decoder layers, and two fully connected layers leading to the sigmoid activation function. This structure is based on the U-Net model, but the model is unique, simple, and fast because it has only six layers. A shortcoming of this model is that large datasets may not produce very promising findings.

4 Result analysis

This section presents the proposed methodology's quantitative and qualitative evaluation. The system is implemented in 64-bit MATLAB 2021a. Metrics such as sensitivity, specificity, precision, accuracy, F-score, and DSI support objective and subjective comparison of classification and segmentation. The proposed hybrid DCNN and LuNet analysis reports high classification accuracy, measured as the percentage of correctly classified objects out of the total number of objects used in the method. The introduced hybrid DCNN-LuNet brain tumor detection system also considers the following performance metrics. Sensitivity and specificity represent the correlation between correctly categorized pixels; in segmented brain MRI images, accuracy refers to the percentage of correctly recognized pixels, identifying healthy pixels free of malignancies. These variables are expressed in % and range from 0 to 100. Precision, recall, F-score, and accuracy are all evaluated using quantitative analysis, expressed mathematically as:

$$Sensitivity\;(Sen) = TP/(TP + FN)$$
$$Specificity\;(Sp) = TN/(TN + FP)$$
$$Accuracy\;(ACC) = (TP + TN)/(TP + FN + TN + FP)$$
$$Precision\;(Pr) = TP/(TP + FP)$$
$$F - score = 2*Sen*Pr/(Pr + Sen)$$
$$Dice\;Similarity\;(DSI) = 2*TP/(2*TP + FP + FN)$$

The rating scale for the performance analysis of the suggested method is a confusion matrix, with TP and TN values indicating correctly identified tumor and non-tumor pixels, and FP and FN values indicating pixels mistaken for tumor or non-tumor. A true positive (TP) is a benign tumor identified as benign (or a glioma identified as a glioma). A true negative (TN) is a malignant tumor recognized as malignant. A false positive (FP) occurs when a benign condition is misdiagnosed as malignant (or a glioma is identified as a meningioma). A false negative (FN) is a malignant tumor misdiagnosed as benign (or a meningioma identified as a glioma). Figure 4 shows the sample input brain tumor datasets for the proposed classifier algorithm.
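These definitions translate directly into code; the sketch below computes every metric listed above from confusion-matrix counts (the counts in the example call are placeholders, not the paper's results).

```python
# Evaluation metrics from confusion-matrix counts, as defined above.
def metrics(tp: int, tn: int, fp: int, fn: int) -> dict:
    sen = tp / (tp + fn)                      # sensitivity (recall)
    sp = tn / (tn + fp)                       # specificity
    acc = (tp + tn) / (tp + fn + tn + fp)     # accuracy
    pr = tp / (tp + fp)                       # precision
    f = 2 * sen * pr / (pr + sen)             # F-score
    dsi = 2 * tp / (2 * tp + fp + fn)         # Dice similarity index
    return {"Sen": sen, "Sp": sp, "ACC": acc, "Pr": pr, "F": f, "DSI": dsi}

print(metrics(tp=95, tn=90, fp=5, fn=10))     # placeholder counts
```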

Table 2 compares the suggested simulation results with other standard methods on the same dataset images. These comparisons show that the suggested technique, employing the deep CNN classifier, delivers higher simulated values on brain MRI images of the same dataset than other methods [4]. Figures 6 and 7 contain the accuracy, sensitivity, specificity, precision, and recall measurements for all CNN models. According to the table data, the accuracy of the proposed hybrid DCNN with LuNet classifier has improved to 99.7%, GoogLeNet is in second place with 97.50% accuracy, and the SVM model has the lowest performance at 96.21%. The enhanced LuNet model achieves accuracy, sensitivity, specificity, precision, recall, and F-score of 99.7%, 98.2%, 98.6%, 99.4%, 99.8%, and 98.82%, respectively.

Table 2 A comparison of the proposed technique and the conventional approach

Hence, the proposed hybrid algorithm performs well on all evaluation parameters. With an accuracy rate of 99.7%, the suggested model outperforms all other cutting-edge techniques, demonstrating its superiority.

The proposed hybrid DCNN with the LuNet classifier, used in the final experiments, performs well on both training and testing data in terms of accuracy and loss (Figs. 8, 9). Figure 10 depicts the LuNet model's training and test accuracy, including the training and validation dropouts. The classification rate between normal and tumor-affected images was 99.5%. As a result, the proposed method has an average classification rate of 99.5% and a verification accuracy of 99.4%.

Fig. 9 Loss variations with cropped MRI using LuNet algorithm

Fig. 10 Loss curves for classification

The suggested CNN deep network's sensitivity, specificity, accuracy, and F-score metrics outperform those of existing machine learning methods. Of the 1278 benign tumors, 1249 are classified as benign and 29 as malignant; likewise, 1239 of the 1278 malignant tumors are classified as malignant, whereas 39 are classified as benign. Overall, the proposed technique achieves a high accuracy of 99.705%. The proposed method performs better on the various evaluation parameters above than existing methods. As a result, the proposed method for classifying benign and malignant brain tumors is both new and effective. Figure 11 shows the confusion matrix for the classifications; a confusion matrix illustrates the performance of the classification model against the ground truth.

Fig. 11 Confusion matrix for classification

Figures 12 and 13 show the confusion matrices for testing and training, respectively. In each confusion matrix, green squares indicate the TP and TN values, bright orange squares the FP and FN values, and blue squares the positive predicted values for the training dataset and test samples. The values for sensitivity (Sn), specificity (Sp), positive predictive value (PPV), and negative predictive value (NPV) are displayed clockwise from upper right to lower left. The purple square represents the overall rate of correct classification (accuracy rate). The classification errors for the training and test sets were 7.69% and 6.42%, respectively. Table 3 summarizes the classification procedure.

Fig. 12 Confusion matrix for testing

Fig. 13 Confusion matrix for training

Table 3 Comparative analysis of simulation time

Computational and structural complexities are also compared based on simulation time. Table 3 and Fig. 14 compare each model's simulation time (in seconds) on the same dataset and its number of parameters. The table shows that the CNN model has the longest simulation time at 421.6 s, the GoogLeNet model takes 253.6 s, and the proposed DCNN-LuNet model has the shortest simulation time at 225.9 s.

Fig. 14 Comparative analysis of proposed method to conventional method

5 Conclusion

This research work describes a hybrid algorithm for detecting brain tumors and classifying brain MRIs into malignant and benign tumors and into glioma and meningioma. Preprocessing procedures are used to detect brain tumors, followed by skull stripping and brain tumor segmentation; this algorithm can segment brain tumors from MRI images. The hybrid DCNN classifier with a LuNet classifier is implemented using the MATLAB tool. Performance metrics such as recall, F-score, specificity, and total accuracy are used to evaluate all the CNN models. The research demonstrates that the proposed algorithm performs better than other CNN models, with an overall accuracy of 99.7%. The experimental outcomes show that the DCNN with LuNet classifier correctly diagnoses both high- and low-grade tumors compared to previous techniques. The proposed algorithm achieves accuracy (99.7%), sensitivity (98.2%), specificity (98.6%), precision (99.4%), F-score (98.2%), and recall (99.8%), respectively. In future work, a novel hybrid deep learning model with bio-inspired optimization will be proposed to improve the performance of the brain tumor segmentation and classification process.