Introduction

The brain is the most complex and massive organ of the human body, consisting of almost 100 billion nerve cells. It lies at the center of the nervous system and controls all of its activities [1]. Hence, abnormalities in the brain endanger human health. Among such abnormalities, the most dangerous is the brain tumor, which is defined as the unnatural and uncontrolled growth of cells in the brain. Brain tumors fall into two categories: primary and secondary. Tumors that originate in the brain tissue are called primary tumors. When tumor cells from other parts of the body travel to the brain tissue through the bloodstream and form tumors, they are called secondary tumors [2, 3]. The symptoms of brain tumors vary with the location and type of the tumor, and include unusual behavior, memory issues, seizures, changes in vision, balance issues, and confusion [4]. To treat, monitor, diagnose, and analyze the human body, various medical imaging approaches are successfully used, such as X-rays, ultrasound imaging (UI), computerized tomography (CT), and MRI. Among these imaging technologies, MRI is widely used for classification and detection owing to its better resolution [5] and is also commonly used for identifying brain tumors. MRI is preferred for identifying tumors because the scan uses non-ionizing radiation and can acquire various images by applying numerous imaging parameters [6, 7]. Tumor diagnosis has four MRI modalities, and the images produced show different tissue variations in every modality. Hence, MRI is more suitable than other techniques for segmenting and classifying tumors in the brain [6].

Exact brain tumor segmentation and classification lead to better treatment [8]. The main motive of segmenting a tumor is to separate tumorous brain tissues, like the necrotic core, edema, and active cells, from regular brain tissues, like cerebrospinal fluid (CSF) and gray matter (GM). Recently, segmentation approaches have been categorized on the basis of numerous principles, and the three major categories are fully automatic, semi-automatic, and manual segmentation [9, 10]. The classification of tumors in the brain helps doctors attain an exact diagnosis, which plays a major role in efficient treatment for patients [11]. Manual and automatic classification are the two types of brain tumor classification, of which the manual process is a challenging and tricky task. In the manual process, classifying MRIs with similar appearances and structures requires a radiologist's expertise to identify and classify the tumors [12]. In the past years, various techniques have been developed for effective classification with the help of high-resolution brain MRI images with sensible contrast [5].

The advancement of machine learning and artificial intelligence techniques is offering a better impact on the medical area, and these techniques are significant support tools in medical departments. To support the decisions of radiologists, numerous automatic learning schemes are applied for segmentation and classification [34,35,36]. However, because of high intra- and inter-shape variation, contrast variations, and texture, classification is difficult for traditional approaches [2, 32, 33]. Supervised techniques need expertise for extracting the best features and selecting the technique for brain tumor classification [13, 14]. Recently, unsupervised techniques [15] have been used by many researchers because of their better performance, automatic feature generation, and minimized error rate. Also, deep learning (DL)-based techniques are essential in healthcare image analysis, like segmentation [16], reconstruction [17], and classification [18]. Moreover, these DL techniques automatically extract meaningful features for obtaining better results.

Accordingly, an efficient optimization-enabled deep learning technique is developed in this research for segmenting and classifying brain tumors. Here, the MRI input is passed to the pre-processing phase, where the image quality is enhanced by removing noise using normalization. Then, DeepMRSeg [19] is used for segmentation and is trained using CHBA. The CHBA is the integration of the HBA [20] and the chronological concept. After segmentation, the CNN features [21] are extracted for the next step. Then, data augmentation is achieved by means of randomized left or right flipping, rotation, brightness or contrast adjustment, random translation, and the inclusion of Gaussian noise. After augmenting the data, classification is performed using the DRN [22], which is trained using the proposed CJHBA. The proposed CJHBA is the integration of the Jaya algorithm [23], the HBA, and the chronological concept.

The contributions of this research are as follows:

  • Proposed CHBA-Based DeepMRSeg for Segmentation: DeepMRSeg is used for segmentation and is trained using CHBA. The CHBA is the integration of the HBA and the chronological concept.

  • Proposed CJHBA-Based DRN for Classification: The classification is performed using the DRN classifier, which is trained using the proposed CJHBA. The proposed CJHBA is the integration of the Jaya algorithm, HBA, and chronological concept.

The remaining manuscript parts are structured as follows: Section 2 reviews previous studies related to segmentation and classification of tumors in the brain using MRIs, and Sect. 3 describes the proposed technique. The results and discussion of the techniques are described in Sect. 4. The research conclusion is provided in Sect. 5.

Motivation

The rapid increase of cells in the brain is called a brain tumor, which causes death if not treated well. MRI is the most common method for brain structure analysis. Many methods have been devised previously to classify brain tumors effectively. However, due to the varied symptoms and structures of tumor cells, the classification task is very difficult. The issues faced by previously devised techniques inspired us to develop a novel optimization technique for segmenting and classifying tumors in the brain.

Literature Survey

Eight recent related techniques are reviewed in this part. Francisco Javier Díaz-Pernas et al. [6] implemented an automatic approach for segmenting and classifying tumors in the brain. Here, a multiscale deep convolutional neural network was devised for identifying three kinds of tumors. It had good accuracy and was applicable to various imaging problems in the medical field. However, this scheme was not appropriate for larger datasets. Yurong Guan et al. [5] implemented computer-assisted diagnosis (CAD) for classifying tumors in the brain. Also, an agglomerative clustering-based scheme was employed for identifying tumor locations. This scheme had less computational complexity and more accuracy. Furthermore, a data augmentation approach was employed to reduce over-fitting issues. However, the cost of computation was high for larger datasets. Jaeyong Kang et al. [2] developed transfer learning–based deep convolutional neural networks for better performance in segmentation and classification. Here, the best three deep features were evaluated and given to the classifier for improved performance. Thus, this ensemble feature–based classifier offered better performance on the larger dataset. However, real-time data analysis was difficult with this technique. Javaria Amin et al. [24] developed a CNN model for efficiently classifying tumor and non-tumor regions. Here, texture and structural information were fused using the discrete wavelet transform (DWT) with a Daubechies wavelet kernel. Segmentation was done using a global thresholding approach. This technique had good segmentation accuracy, but the important features were not determined for the evaluation. Ahmad M. Sarhan [4] developed a CAD scheme for effective classification of tumors in the brain. Here, the DWT was applied for extracting the required features and for dimensionality reduction, which boosted the accuracy of the model. Also, this model was reliable and robust, but it required more time to complete the entire process. Isselmou Abd El Kader et al. [14] implemented a differential deep-CNN technique for identifying various brain tumor types using MRIs. Here, additional differential feature maps were derived from the normal feature maps. Also, contrast calculations of pixel directional patterns were used to improve the classification ability and accuracy. However, a minimal number of parameters was used, which may reduce the speed of convergence. Wadhah Ayadi et al. [25] implemented a support vector machine (SVM)-based technique to classify cerebral tumors. In this approach, normalization and feature extraction were carried out before classification. The SVM classifier was used in the classification process, and the produced results were accurate with less computational time. However, robustness was low because fewer image modalities were used. Asmita Dixit and Aparajita Nanda [8] implemented an improved whale optimization algorithm (IWOA)-based radial basis neural network (RBNN) for classifying brain tumors. Here, the tumor area was identified using fuzzy c-means (FCM) clustering, and principal component analysis (PCA) was used for feature extraction. This technique had high accuracy with less computational time. However, the tumor substructures could not be detected with this technique.

Challenges

The issues that are identified in the previous brain tumor segmentation and classification techniques are listed as follows:

  • The RBNN-IWOA technique obtained accurate classification results with less computational time, but the substructure identification of tumors was not possible [8].

  • The identification of strong and meaningful features by experts required more time, which led to errors when handling a large amount of data.

  • Lesion complexity was another important drawback in the previous studies. Due to this complexity, early tumor identification and accuracy are affected [25].

  • The robustness of the system was affected due to the use of fewer modalities of brain MRIs.

  • Because of high intra- and inter-shape variation, contrast variations, and texture, classification is difficult for traditional approaches [2].

Proposed Chronological Jaya-Honey Badger Algorithm–Based Deep Residual Network for the Classification of Brain Tumors

This section shows the development procedure of the implemented brain tumor classification technique. The developed technique is useful for early and exact identification of tumors in the brain, which leads to efficient treatment for patients. Therefore, in this research, an efficient optimization-enabled deep learning technique is developed for classifying and segmenting tumors in the brain. The phases involved in the research are pre-processing, segmentation, feature extraction, data augmentation, and classification. Here, the MRI input is given to the pre-processing phase, where the image quality is improved by removing noise using normalization. Then, DeepMRSeg [19] is used for segmentation and is trained using CHBA. The CHBA is the integration of the HBA and the chronological concept. After segmentation, the CNN features [21] are extracted for the next step. Then, data augmentation is achieved by means of randomized left or right flipping, rotation, brightness or contrast adjustment, random translation, and the inclusion of Gaussian noise. After augmenting the data, classification is performed using the DRN [22], which is trained using the proposed CJHBA. The proposed CJHBA is the integration of the Jaya algorithm [23], the HBA, and the chronological concept. Figure 1 signifies the block diagram of the implemented model, and a minimal runnable sketch of this flow is given after Fig. 1.

Fig. 1 Block diagram of the implemented model
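For orientation, the following minimal, runnable Python sketch traces the stages of Fig. 1 end to end, following the ordering of the subsections below (augmentation is applied to the segmented images before CNN feature extraction). Every component here is a deliberately simplified stand-in (thresholding for the CHBA-trained DeepMRSeg, mean-pooling for the CNN features, a random linear layer for the CJHBA-trained DRN), not the authors' implementation.

```python
import numpy as np

def min_max_normalize(img):
    return (img - img.min()) / (img.max() - img.min() + 1e-12)   # pre-processing, Eq. (2)

def segment(img):
    return (img > 0.5).astype(float)              # stand-in for CHBA-trained DeepMRSeg

def augment(img):
    return [img, np.fliplr(img), np.rot90(img)]   # subset of the augmentations used

def cnn_features(img):
    return img.reshape(4, -1).mean(axis=1)        # stand-in for CNN feature extraction

def drn_classify(feats, rng=np.random.default_rng(0)):
    logits = feats @ rng.normal(size=(feats.shape[1], 2))  # stand-in for CJHBA-trained DRN
    return logits.mean(axis=0).argmax()           # 0: non-tumorous, 1: tumorous

mri = np.random.default_rng(1).random((8, 8))     # toy "MRI slice"
views = augment(segment(min_max_normalize(mri)))
feats = np.stack([cnn_features(v) for v in views])
print(drn_classify(feats))
```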

Data Acquisition

Data gathering happens in this step; in the implemented technique, the brain MRIs are gathered from the BRATS 2018 and Figshare datasets. Assume the dataset is \(O\) and the brain MRIs in the dataset are indicated as \(R\); the notation is expressed as,

$$O=\left\{{R}_{1},{R}_{2},\ldots,{R}_{n},\ldots,{R}_{h}\right\}$$
(1)

where the dataset is indicated as \(O\), \(h\) is the total number of MRIs in the dataset \(O\), and \({R}_{n}\) is the \(n\)th MRI in dataset \(O\).

Pre-Processing

Pre-processing is a necessary step, which ensures performance enhancement by improving the quality of the data. Also, in deep learning-based models, noisy and unreliable data may affect the training time of the process, which is effectively handled using data pre-processing. Here, the normalization technique is used for pre-processing the input MRIs. The main aim of normalization is to change the range of pixel intensity values, and it is useful for increasing the contrast of the images. Normalization is also known as histogram stretching or contrast stretching. Min–max normalization is used in this research to pre-process the input data. In this normalization approach, the maximum feature value is transformed into 1, the minimum value is transformed into 0, and every other value is transformed into a decimal between 0 and 1. In the implemented model, every image \({R}_{n}\) is fed to pre-processing, and the normalization [26] is given as,

$${z}_{j}=\frac{{y}_{j}-\mathrm{min}(y)}{\mathrm{max}(y)-\mathrm{min}(y)}$$
(2)

where \({y}_{j}\) is the intensity at position \(j\), with \(j=1,2,3,\ldots,m\); the normalized intensity value at position \(j\) is denoted as \({z}_{j}\); and the maximum and minimum intensity values over the image are indicated as \(\mathrm{max}(y)\) and \(\mathrm{min}(y)\). The outcome of the pre-processing is signified as \({J}_{q}\), which is forwarded for further processing.
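A direct numpy rendering of Eq. (2) follows; the guard for constant images is an added assumption, since the paper does not discuss this edge case.

```python
import numpy as np

def min_max_normalize(image):
    """Min-max normalization of Eq. (2): maps intensities to [0, 1]."""
    y_min, y_max = image.min(), image.max()
    if y_max == y_min:                       # guard against a constant image
        return np.zeros_like(image, dtype=np.float64)
    return (image - y_min) / (y_max - y_min)

# Example: an 8-bit slice is rescaled so 0 -> 0.0 and 255 -> 1.0.
slice_ = np.array([[0, 128], [64, 255]], dtype=np.uint8)
print(min_max_normalize(slice_))
```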

Brain Tumor Segmentation

The output of the pre-processing phase \({J}_{q}\) is the input of DeepMRSeg, which is the segmentation classifier. For segmenting the tumors in the brain, the DeepMRSeg classifier is trained with the proposed CHBA.

DeepMRSeg Architecture

Figure 2 shows the structure of the DeepMRSeg classifier [19], in which \({J}_{q}\) is the input of the classifier. It is a DL-based segmentation approach that is applicable to various types of segmentation tasks. Also, this classifier is mainly used in computer vision and healthcare image segmentation [19]. Here, multi-scale features are extracted by considering multiple convolution filter sizes using a modified U-Net. This classifier is used for effective segmentation of white matter lesions, deep structures of the brain, and the hippocampus. Encoding and decoding are the two paths in the classifier. These paths in the U-Net are followed by a voxel-wise multi-class soft-max, which produces independent class probabilities for every voxel. The projection layer is the initial layer, which takes \(a\) feature maps and changes them into the desired number of feature maps \(b\); these are fed to the pre-encoding block, which contains ResNet blocks. The extraction of numerous features from the input MRIs is done using ResNet, which creates the U-Net input. The encode blocks are available in the encoding path. These blocks are useful for operating on feature maps at various resolutions. In each layer, the features are sub-sampled using the "transition down" operation and given to the ResInc block. As the receptive field increases, the feature map sizes decrease in every layer. The decoding section contains up-sampling operations along with ResInc blocks. The outcome of the segmentation process is \({J}_{t}\).

Fig. 2 Structure design of DeepMRSeg

DeepMRSeg Training Using the Proposed CHBA

For effective tumor segmentation in the brain, the DeepMRSeg is trained using the proposed CHBA, which is obtained by integrating the chronological concept with the HBA. The chronological concept is useful for capturing how things vary over time and for retaining historical details. The HBA [20] is inspired by the characteristics of the honey badger, a mammal with black and white fluffy fur. The prey location is identified with the help of its smelling skills. The two cases in the honey badger's behavior are the digging mode and the honey mode. In the digging mode, it utilizes its smelling capability to identify the prey location and choose the proper place for catching and digging out the prey. In the honey mode, the honeyguide bird directly guides it to the location of the beehive. This algorithm is useful for solving optimization challenges. Besides, it converges very fast and has a good exploration–exploitation balance. Therefore, integrating the chronological concept with the HBA is useful for effective segmentation of tumors in the brain. The algorithmic procedure of the CHBA is discussed below:

  • Step 1: Initialization

    The total number of honey badgers in the population is signified as \(P\). The position of the \(k\)th honey badger in the population is initialized as given in the below equation.

    $${c}_{k}=o{d}_{k}+{u}_{1}\times \left(x{d}_{k}-o{d}_{k}\right)$$
    (3)

    where the honey badger in the kth location of the population is signified as \({c}_{k}\), the search domain lower bound is denoted as \(o{d}_{k}\), and the search domain upper bound is given as \(x{d}_{k}\). A random number in the range of 0 to 1 is signified as \({u}_{1}\).

  • Step 2: Fitness Evaluation

    The best results are obtained using the fitness evaluation, in which the mean square error (MSE) is determined for obtaining the best solution. The minimum MSE is chosen as the best solution, and it is formulated as,

    $$MSE=\frac{1}{h}\sum_{i=1}^{h}\left[{\psi }_{i}-{J}_{t}\right]^{2}$$
    (4)

    where the expected output is symbolized as \({\psi }_{i}\) and the output of DeepMRSeg is denoted as \({J}_{t}\); the total number of samples is given as \(h\), such that \(1\le i\le h\).

  • Step 3: Describing Intensity \(K\)

    The intensity is the major factor for identifying the concentration ability of the prey and the gap between the kth honey badger and the prey. The smell intensity of the prey is denoted as \({K}_{k}\). The inverse square law is used for describing the intensity. Based on this law, when the smell intensity is high, the motion toward the prey is fast; otherwise, the motion is slow. The intensity formula is denoted as,

    $${K}_{k}={u}_{2}\times \frac{V}{4\pi {f}_{k}^{2}}$$
    (5)

    where \({u}_{2}\) is a random number in the range of 0 to 1, \(V={\left({c}_{k}-{c}_{k+1}\right)}^{2}\), and \({f}_{k}={c}_{prey}-{c}_{k}\). The source strength or concentration ability is denoted as \(V\), and the space between the kth badger and the prey is signified as \({f}_{k}\).

  • Step 4: Density Factor Update

    The density factor \(\lambda\) is used to handle time-varying randomization, guaranteeing a smooth transition from exploration to exploitation. As the iterations proceed, the density factor \(\lambda\) decreases, and it is calculated as,

    $$\lambda =E\times \mathrm{exp}\left(\frac{-w}{{w}_{\mathrm{max}}}\right)$$
    (6)

    where \(w\) is the current iteration, the total count of iterations is signified as \({w}_{\mathrm{max}}\), and the constant \(E\) is equal to or greater than 1. In general, the value of \(E\) used is 2.

  • Step 5: Escaping from Local Optimum

    The search direction is modified based on the flag \(H\), which gives the search agents greater opportunity to scan the search space thoroughly.

  • Step 6: Agent’s Position Update

    The new location of the agent is denoted as \({c}_{k+1}\), and the position update is divided into two phases, namely the "digging phase" and the "honey phase."

  • Step 6.1: Digging Mode

    The cardioid shape function is used for the operation in the digging mode, and the position update in the digging mode is denoted as,

    $$\begin{aligned}c_l\left(k+1\right)=&\ c_{lprey}+H\times\alpha\times K\times c_{lprey}+H\times u_3\times\lambda\times f_k\\&\times\left|\cos\left(2\pi u_4\right)\times\left[1-\cos\left(2\pi u_5\right)\right]\right|\end{aligned}$$
    (7)

    where the prey position, i.e., the best position found so far, is signified as \({c}_{lprey}\), and the value of \(\alpha\) is equal to or greater than 1; most commonly, the value of \(\alpha\) is set to 6. The food-getting ability of the honey badger is indicated by \(\alpha\). The three random numbers, signified as \({u}_{3}\), \({u}_{4}\), and \({u}_{5}\), are in the range of 0 to 1. The flag \(H\) is determined based on the equation provided below.

    $$H=\begin{cases}1 & \text{if}\;{u}_{6}\le 0.5\\ -1 & \text{otherwise}\end{cases}$$
    (8)

    where \({u}_{6}\) is the random value in the range of 0 to 1.

    In the digging mode, the honey badger's smell intensity \(K\) depends on the prey position \({c}_{prey}\), and the density factor \(\lambda\) varies with time. Moreover, any disturbance occurring in the digging mode pushes the badger to find an even better prey location.

  • Step 6.2: Honey Mode

    The case in which the honey badger identifies food by following the honeyguide bird is called the honey mode, and the formula for this step is,

    $${c}_{l}\left(k+1\right)={c}_{lprey}+H\times {u}_{7}\times \lambda \times {f}_{k}$$
    (9)
    $${f}_{k}={c}_{prey}-{c}_{l}\left(k\right)$$
    (10)

    When substituting Eq. (10) in Eq. (9),

    $${c}_{l}\left(k+1\right)={c}_{prey}+H\times {u}_{7}\times \lambda \times \left({c}_{prey}-{c}_{l}\left(k\right)\right)$$
    (11)
    $${c}_{l}\left(k+1\right)={c}_{prey}+H\times {u}_{7}\times \lambda \times {c}_{prey}-H\times {u}_{7}\times \lambda \times {c}_{l}\left(k\right)$$
    (12)
    $${c}_{l}\left(k+1\right)={c}_{prey}\left[1+H\times {u}_{7}\times \lambda \right]-H\times {u}_{7}\times \lambda \times {c}_{l}\left(k\right)$$
    (13)

    Following the chronological concept, the updated position is taken as the average of two successive formulations of \({c}_{l}\left(k+1\right)\), obtained here from Eqs. (13) and (18):

    $${c}_{l}\left(k+1\right)=\frac{{c}_{l}\left(k+1\right){\big|}_{\mathrm{Eq}.\left(13\right)}+{c}_{l}\left(k+1\right){\big|}_{\mathrm{Eq}.\left(18\right)}}{2}$$
    (14)

    Writing Eq. (13) at iteration \(k\),

    $$c_l\left(k\right)=c_{prey}\left[1+H\times u_7\times\lambda\right]-H\times u_7\times\lambda\times c_l\left(k-1\right)$$
    (15)

    When substituting Eq. (15) in Eq. (13),

    $$\begin{aligned}{c}_{l}\left(k+1\right)=&\ {c}_{prey}\left[1+H\times {u}_{7}\times \lambda \right]-H\times {u}_{7}\times \lambda \\& \left[{c}_{prey}\left(1+H\times {u}_{7}\times \lambda \right)-H\times {u}_{7}\times \lambda \times {c}_{l}\left(k-1\right)\right]\end{aligned}$$
    (16)
    $$\begin{aligned}{c}_{l}\left(k+1\right)=&\ {c}_{prey}\left[1+H\times {u}_{7}\times \lambda \right]-H\times {u}_{7}\times \lambda \\&\times{c}_{prey}\left(1+H\times {u}_{7}\times \lambda \right)+{c}_{l}\left(k-1\right){H}^{2}\times {u}_{7}^{2}\times {\lambda }^{2}\end{aligned}$$
    (17)
    $$\begin{aligned}{c}_{l}\left(k+1\right)=&\ {c}_{prey}\left(1+H\times {u}_{7}\times \lambda \right)\left[1-H\times {u}_{7}\times \lambda \right]\\&+{c}_{l}\left(k-1\right){H}^{2}\times {u}_{7}^{2}\times {\lambda }^{2}\end{aligned}$$
    (18)

    When substituting Eqs. (18) and (13) in Eq. (14),

    $${c}_{l}\left(k+1\right)=\frac{{c}_{prey}\left[1+H\times {u}_{7}\times \lambda \right]-H\times {u}_{7}\times \lambda \times {c}_{l}\left(k\right)+{c}_{prey}\left(1+H\times {u}_{7}\times \lambda \right)\left[1-H\times {u}_{7}\times \lambda \right]+{c}_{l}\left(k-1\right){H}^{2}\times {u}_{7}^{2}\times {\lambda }^{2}}{2}$$
    (19)
    $${c}_{l}\left(k+1\right)=\frac{{c}_{prey}\left(1+H\times {u}_{7}\times \lambda \right)\left[1+\left(1-H\times {u}_{7}\times \lambda \right)\right]-{c}_{l}\left(k\right)H\times {u}_{7}\times \lambda +{c}_{l}\left(k-1\right){H}^{2}\times {u}_{7}^{2}\times {\lambda }^{2}}{2}$$
    (20)
    $$\begin{aligned}{c}_{l}\left(k+1\right)=&\ \frac{1}{2}\left[{c}_{prey}\left(1+H\times {u}_{7}\times \lambda \right)\left[1+\left(1-H\times {u}_{7}\times \lambda \right)\right]\right.\\&\left.-{c}_{l}\left(k\right)H\times {u}_{7}\times \lambda +{c}_{l}\left(k-1\right){H}^{2}\times {u}_{7}^{2}\times {\lambda }^{2}\right]\end{aligned}$$
    (21)
    $$\begin{aligned}{c}_{l}\left(k+1\right)=&\ \frac{1}{2}\left[{c}_{prey}\left(1+H\times {u}_{7}\times \lambda \right)\left(2-H\times {u}_{7}\times \lambda \right)\right.\\&\left.-{c}_{l}\left(k\right)H\times {u}_{7}\times \lambda +{c}_{l}\left(k-1\right){H}^{2}\times {u}_{7}^{2}\times {\lambda }^{2}\right]\end{aligned}$$
    (22)

    where \({c}_{l}\left(k+1\right)\) denotes the lth solution at iteration \(k+1\), \({c}_{l}\left(k\right)\) denotes the lth solution at iteration \(k\), \({c}_{l}\left(k-1\right)\) denotes the lth solution at iteration \(k-1\), and the location of the prey is specified as \({c}_{prey}\).

  • Step 7: Re-evaluation of Fitness

    After obtaining the update in Eq. (22), the fitness value is evaluated again to choose the best results.

  • Step 8: Termination

    The algorithm terminates when the maximum number of iterations \({k}_{\mathrm{max}}\) is reached. Algorithm 1 denotes the pseudo-code of the devised CHBA, and a minimal sketch of one CHBA iteration is given after Algorithm 1.

Algorithm 1 Pseudo-code of the devised CHBA
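The following is a minimal Python sketch of one CHBA iteration, not the authors' code: each agent is updated through the digging mode (Eq. (7)) or the honey mode with the chronological update (Eq. (22)), followed by a greedy selection against the fitness of Eq. (4). The 50/50 mode switch and the greedy selection are assumptions, and `fitness` is any callable returning one value per agent.

```python
import numpy as np

def chba_step(pos, prev_pos, prey, fitness, w, w_max, E=2.0, alpha=6.0):
    """One CHBA update for a population `pos` (shape: P x dim);
    `prev_pos` holds positions from iteration k-1 (chronological concept)."""
    P, dim = pos.shape
    lam = E * np.exp(-w / w_max)                       # density factor, Eq. (6)
    new_pos = np.empty_like(pos)
    for k in range(P):
        H = 1.0 if np.random.rand() <= 0.5 else -1.0   # direction flag, Eq. (8)
        f_k = prey - pos[k]                            # distance to prey
        u = np.random.rand(7)
        if np.random.rand() < 0.5:                     # digging mode, Eq. (7)
            V = np.sum((pos[k] - pos[(k + 1) % P]) ** 2)
            K = u[1] * V / (4 * np.pi * np.sum(f_k ** 2) + 1e-12)  # intensity, Eq. (5)
            new_pos[k] = (prey + H * alpha * K * prey + H * u[2] * lam * f_k
                          * abs(np.cos(2 * np.pi * u[3]) * (1 - np.cos(2 * np.pi * u[4]))))
        else:                                          # honey mode + chronological averaging, Eq. (22)
            g = H * u[6] * lam
            new_pos[k] = 0.5 * (prey * (1 + g) * (2 - g) - g * pos[k] + g ** 2 * prev_pos[k])
    keep = fitness(new_pos) < fitness(pos)             # keep the lower-MSE position
    return np.where(keep[:, None], new_pos, pos)

# Toy usage: minimize the sphere function as a stand-in for the MSE of Eq. (4).
fit = lambda X: np.sum(X ** 2, axis=1)
pos = np.random.default_rng(0).uniform(-5, 5, size=(10, 3))
prev = pos.copy()
for w in range(1, 51):
    prey = pos[np.argmin(fit(pos))]
    pos, prev = chba_step(pos, prev, prey, fit, w, w_max=50), pos
print(fit(pos).min())
```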

Data Augmentation

Data augmentation is useful for increasing the amount of data by including slightly modified copies of already existing data or by newly creating synthetic data from previously available data. Data augmentation acts as a regularizer and helps reduce overfitting during the training of the classifier. The segmentation outcome \({J}_{t}\) is considered the input of data augmentation. The augmented outcome is obtained using image flipping, image translation, image rotation, noise addition, and brightness adjustment, as sketched below. In flipping, the image orientation is flipped in the vertical or horizontal direction, and the outcome is denoted as \({J}_{g}\). The translated image is obtained by moving the image in various directions, and \({J}_{u}\) is the translated image outcome. An image rotated by a particular degree is called a rotated image, and \({J}_{s}\) is used to represent the result of rotation. To obtain more training data, noise is added to the image, and the output of the noise-added image is denoted as \({J}_{o}\). The image contrast is improved in the brightness adjustment, and the output is indicated as \({J}_{d}\). The data augmentation outcome is provided as \({R}_{i}=\left\{{J}_{t},{J}_{g},{J}_{u},{J}_{s},{J}_{o},{J}_{d}\right\}\).
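A compact numpy sketch of the five operations is given below. The shift amount, rotation angle, noise level, and brightness factor are illustrative choices, not values stated in the paper, and the input is assumed to be a normalized image in [0, 1].

```python
import numpy as np

def augment(image, rng=np.random.default_rng()):
    """Produce the augmented set R_i: flipped, translated, rotated,
    noise-added, and brightness-adjusted copies of a segmented image."""
    flipped = np.fliplr(image)                                   # J_g
    translated = np.roll(image, shift=(5, -5), axis=(0, 1))      # J_u
    rotated = np.rot90(image)                                    # J_s (90-degree example)
    noisy = image + rng.normal(0.0, 0.05, image.shape)           # J_o (Gaussian noise)
    bright = np.clip(image * 1.2, 0.0, 1.0)                      # J_d
    return [image, flipped, translated, rotated, noisy, bright]
```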

Feature Extraction

Feature extraction is the process of reducing the dimension of the image. Here, to make the process easier, the raw input data is organized into various groups. In general, large datasets require more variables and a large number of computing resources, so the optimal features are extracted. Besides, in this process, the dimension of the data is highly reduced, which enhances the accuracy of the model. In the devised model, the CNN features are extracted from the augmented data. The CNN features [21] are denoted as \(G=\left\{{g}_{1},{g}_{2},\ldots,{g}_{r}\right\}\). The CNN is a variety of artificial neural network, which is commonly devised for extracting features and for classifying high-dimensional data. Also, it is useful for recognizing two-dimensional shapes under scaling, translation, distortion, and skewing. The processes involved in the CNN are feature extraction, feature mapping, and layer sub-sampling. The various layers present in the CNN are convolutional layers, sub-sampling layers, and fully connected output layers. The backpropagation algorithm is applied for training the CNN model. Figure 3 represents the CNN feature extraction.

Fig. 3 Extraction of CNN features

Tumor Classification in Brain Using the Proposed CJHBA-Based DRN

The tumors in the brain are classified using the CNN features \(G\), which are provided as the input to the DRN classifier. The DRN is trained using the implemented CJHBA, which tunes the weights and biases of the network.

DRN Architecture

The DRN classifier [22] is mainly used in image recognition and pattern recognition tasks. Compared with other deep learning techniques, the DRN has a higher training speed and simple gradient transmission. Also, it obtains good results in classification and regression tasks. Moreover, the computational efficiency of this classifier is high, and it avoids overfitting issues.

Convolutional Layer

In deep learning methods, the general 2D convolutional layer (Conv2d) is applied to reduce the number of free parameters used in the training procedure. Also, it has the advantage of weight sharing in the local receptive field. Here, a series of filters, called kernels, is used for processing the input. The process of the convolutional layer is given as,

$$I2d\left(G\right)=\sum_{y=0}^{F-1}\sum_{z=0}^{F-1}{U}_{y,z} \bullet {G}_{\left(c+y\right),\left(d+z\right)}$$
(23)
$$I1d\left(G\right)=\sum_{s=0}^{{C}_{in}-1}{S}_{s}*G$$
(24)

where the input image of the convolutional layer is denoted as \(G\); the coordinates used for recording are signified as \(c\) and \(d\); the \(F\times F\) kernel matrix is represented as \(U\), which is also referred to as a learnable parameter; and the kernel matrix location indices are denoted as \(y\) and \(z\). Furthermore, the kernel of the sth input channel is denoted as \({S}_{s}\), \({C}_{in}\) is the number of input channels, and the cross-correlation operator is denoted as \(*\). A small numerical sketch of Eq. (23) is given below.
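As an illustration of Eq. (23), the following numpy sketch performs a valid (no padding, stride 1) cross-correlation of a single-channel image with one kernel; the multi-channel case of Eq. (24) would simply sum such maps over the input channels.

```python
import numpy as np

def conv2d_single_channel(G, U):
    """Valid cross-correlation of Eq. (23): slides an F x F kernel U
    over image G and sums element-wise products at every (c, d)."""
    F = U.shape[0]
    H, W = G.shape
    out = np.zeros((H - F + 1, W - F + 1))
    for c in range(out.shape[0]):
        for d in range(out.shape[1]):
            out[c, d] = np.sum(U * G[c:c + F, d:d + F])
    return out

# Example: a 3x3 averaging kernel applied to a 5x5 image gives a 3x3 map.
G = np.arange(25, dtype=float).reshape(5, 5)
U = np.full((3, 3), 1.0 / 9.0)
print(conv2d_single_channel(G, U).shape)   # (3, 3)
```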

Pooling Layer

This layer is used to control overfitting issues and minimize the feature map's spatial size. This layer is present next to the convolutional layer. Here, average pooling is selected, which operates on every feature map, and the values in each pooling window are replaced by their average.

$${y}_{out}=\frac{{y}_{in}-{S}_{y}}{\eta }+1$$
(25)
$${z}_{out}=\frac{{z}_{in}-{S}_{z}}{\eta }+1$$
(26)

where the width and height of the input matrix are signified as \({y}_{in}\) and \({z}_{in}\), the output width and height are \({y}_{out}\) and \({z}_{out}\), the kernel width and height are signified as \({S}_{y}\) and \({S}_{z}\), and \(\eta\) denotes the stride.
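Assuming \(\eta\) is the pooling stride (the paper does not name it explicitly), Eqs. (25)–(26) reduce to the following small helper:

```python
def pool_output_size(y_in, z_in, S_y, S_z, stride):
    """Output width/height of a pooling layer per Eqs. (25)-(26)."""
    y_out = (y_in - S_y) // stride + 1
    z_out = (z_in - S_z) // stride + 1
    return y_out, z_out

# Example: a 32x32 feature map with a 2x2 window and stride 2 -> 16x16.
print(pool_output_size(32, 32, 2, 2, 2))   # (16, 16)
```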

Activation Function

The complex and non-linear features are learned using the activation function, which is useful for enhancing the non-linearity of the mined features. The activation used for the MRI processing is the rectified linear unit (ReLU), which is given below,

$$\mathrm{Re}LU\left(G\right)=\begin{cases}0 & ;G<0\\ G & ;G\ge 0\end{cases}$$
(27)

Batch Normalization

In deep learning techniques, to obtain a better tradeoff between computational complexity and convergence, the training data is divided into smaller divisions known as mini-batches, and training is carried out on these mini-batches. Here, normalization is applied to the layer inputs by scaling and shifting the activations to increase the consistency and speed of training.

Residual Blocks

A residual block uses a shortcut connection between its input and output. If the sizes of the residual block input and output are identical, then the input and output are directly linked to each other. Otherwise, a dimension matching factor is required for linking the input and output.

$$B=\nu \left(G\right)+G$$
(28)
$$B=\nu \left(G\right)+{\varpi }_{G}G$$
(29)

where the input feature is denoted as \(G\), the output of the residual block is indicated as \(B\), the mapping relationship is indicated as \(\nu\), and \({\varpi }_{G}\) signifies the dimension matching factor.
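A toy sketch of Eqs. (28)–(29), where `mapping` stands in for the stacked operations \(\nu\) and `W_G` is the optional dimension matching matrix (both names are illustrative):

```python
import numpy as np

def residual_block(G, mapping, W_G=None):
    """Residual connection of Eqs. (28)-(29): add the input G to the
    mapped features; W_G matches dimensions when the shapes differ."""
    out = mapping(G)
    if out.shape == G.shape:
        return out + G                 # Eq. (28): identity shortcut
    return out + W_G @ G               # Eq. (29): projection shortcut

# Example with a toy linear mapping that preserves the shape.
G = np.ones(4)
print(residual_block(G, lambda x: 0.5 * x))   # identity shortcut path
```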

Linear Classifier: It is used to determine the classification outputs. It is the combination of the softmax function and a fully connected (FC) layer. The operation of the FC layer is the same as a multi-layer perceptron, in which the neurons in different layers are linked to each other. The softmax activation function is utilized to normalize the input vector into a probability vector, and the class with the highest probability is the final result.

$$T=\delta B+\varsigma$$
(30)

Here, \(\delta\) indicates the weight matrix, the bias is indicated as \(\varsigma\), and \(T\) signifies the DRN output. Figure 4 represents the architectural design of DRN.

Fig. 4 DRN architecture
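The following sketch combines the FC layer of Eq. (30) with a soft-max; the feature dimension, class count, and random weights are illustrative assumptions only.

```python
import numpy as np

def linear_classifier(B, delta, sigma):
    """FC layer of Eq. (30), T = delta * B + sigma, followed by a
    soft-max; returns the probabilities and the winning class."""
    T = delta @ B + sigma
    exp_T = np.exp(T - T.max())        # numerically stable soft-max
    probs = exp_T / exp_T.sum()
    return probs, int(np.argmax(probs))

# Example: 3-dimensional features mapped to 2 classes (tumorous / non-tumorous).
B = np.array([0.2, 1.1, -0.4])
delta = np.random.default_rng(0).normal(size=(2, 3))
sigma = np.zeros(2)
print(linear_classifier(B, delta, sigma))
```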

DRN Training Using the Developed CJHBA

The DRN classifier is trained using the proposed hybrid optimization approach, namely CJHBA, in which the weights and biases of the classifier are tuned with the developed CJHBA to determine the optimal solution. The implemented CJHBA is the integration of the chronological concept, the Jaya algorithm, and the HBA. The historical details of quantities that change over time are captured using the chronological concept. The HBA is useful for solving optimization challenges. Besides, it converges very fast and has a good exploration–exploitation balance. The Jaya algorithm needs no algorithm-specific parameters; only the simple general parameters are used for optimization. Because of this benefit, this algorithm is useful for solving various real-world optimization issues, like automatic clustering, mechanical design issues, and optimum power flow. However, it has difficulty with complex multi-dimensional problems, which leads to slow convergence. Hence, integrating the chronological concept and the HBA with the Jaya algorithm is useful for effective classification of tumors in the brain. The steps are discussed below:

Fitness Evaluation

To determine the fitness, the mean square error (MSE) is evaluated for every solution, and the solution with minimum MSE is considered the optimal solution. The formula used for the fitness evaluation is given as,

$$MSE=\frac{1}{h}\sum_{i=1}^{h}\left[{\xi }_{i}-T\right]^{2}$$
(31)

where the expected output of the DRN is symbolized as \({\xi }_{i}\), the actual output of the DRN is indicated as \(T\), and \(h\) signifies the total number of samples, such that \(1\le i\le h\).

Let us consider Eq. (22), which is the hybridization of the chronological concept and the HBA. Along with this, the Jaya algorithm is integrated to obtain the proposed CJHBA. The position update of the Jaya algorithm is given as,

$${c}_{l}\left(k+1\right)={c}_{l}\left(k\right)+{w}_{1}\left(k\right)\left({c}_{best}\left(k\right)-{c}_{l}\left(k\right)\right)-{w}_{2}\left(k\right)\left({c}_{worst}\left(k\right)-{c}_{l}\left(k\right)\right)$$
(32)
$$\begin{aligned}{c}_{l}\left(k+1\right)=&\ {c}_{l}\left(k\right)+{w}_{1}\left(k\right){c}_{best}\left(k\right)-{w}_{1}\left(k\right){c}_{l}\left(k\right)\\&-{w}_{2}\left(k\right){c}_{worst}\left(k\right)+{w}_{2}\left(k\right){c}_{l}\left(k\right)\end{aligned}$$
(33)
$${c}_{l}\left(k+1\right)={c}_{l}\left(k\right)\left[1-{w}_{1}\left(k\right)+{w}_{2}\left(k\right)\right]+{w}_{1}\left(k\right){c}_{best}\left(k\right)-{w}_{2}\left(k\right){c}_{worst}\left(k\right)$$
(34)
$${c}_{l}\left(k\right)\left[1-{w}_{1}\left(k\right)+{w}_{2}\left(k\right)\right]={c}_{l}\left(k+1\right)-{w}_{1}\left(k\right){c}_{best}\left(k\right)+{w}_{2}\left(k\right){c}_{worst}\left(k\right)$$
(35)
$${c}_{l}\left(k\right)=\frac{{c}_{l}\left(k+1\right)-{w}_{1}\left(k\right){c}_{best}\left(k\right)+{w}_{2}\left(k\right){c}_{worst}\left(k\right)}{\left[1-{w}_{1}\left(k\right)+{w}_{2}\left(k\right)\right]}$$
(36)

While substituting Eq. (36) in Eq. (22),

$$\begin{aligned}{c}_{l}\left(k+1\right)=&\ \frac{1}{2}\left[{c}_{prey}\left(1+H\times {u}_{7}\times \lambda \right)\left(2-H\times {u}_{7}\times \lambda \right)\right.\\&\left.-\left(\frac{{c}_{l}\left(k+1\right)-{w}_{1}\left(k\right){c}_{best}\left(k\right)+{w}_{2}\left(k\right){c}_{worst}\left(k\right)}{\left[1-{w}_{1}\left(k\right)+{w}_{2}\left(k\right)\right]}\right)\right.\\&\left.\left(H\times {u}_{7}\times \lambda \right)+{c}_{l}\left(k-1\right){H}^{2}\times {u}_{7}^{2}\times {\lambda }^{2}\right]\end{aligned}$$
(37)
$$\begin{aligned}&{c}_{l}\left(k+1\right)+\frac{{c}_{l}\left(k+1\right)H\times {u}_{7}\times \lambda }{2\left[1-{w}_{1}\left(k\right)+{w}_{2}\left(k\right)\right]}\\&\quad= \frac{1}{2}\left[{c}_{prey}\left(1+H\times {u}_{7}\times \lambda \right)\left(2-H\times {u}_{7}\times \lambda \right)\right.\\&\qquad\left.+\left(\frac{{w}_{1}\left(k\right){c}_{best}\left(k\right)-{w}_{2}\left(k\right){c}_{worst}\left(k\right)}{\left[1-{w}_{1}\left(k\right)+{w}_{2}\left(k\right)\right]}\right)\left(H\times {u}_{7}\times \lambda \right)\right.\\&\qquad\left.+{c}_{l}\left(k-1\right){H}^{2}\times {u}_{7}^{2}\times {\lambda }^{2}\right]\end{aligned}$$
(38)
$$\begin{aligned}&\frac{{c}_{l}\left(k+1\right)\times 2\left[1-{w}_{1}\left(k\right)+{w}_{2}\left(k\right)\right]+{c}_{l}\left(k+1\right)H\times {u}_{7}\times \lambda }{2\left[1-{w}_{1}\left(k\right)+{w}_{2}\left(k\right)\right]}\\&\quad=\frac{1}{2}\left[{c}_{prey}\left(1+H\times {u}_{7}\times \lambda \right)\left(2-H\times {u}_{7}\times \lambda \right)\right.\\&\qquad\left.+\left(\frac{{w}_{1}\left(k\right){c}_{best}\left(k\right)-{w}_{2}\left(k\right){c}_{worst}\left(k\right)}{\left[1-{w}_{1}\left(k\right)+{w}_{2}\left(k\right)\right]}\right)\left(H\times {u}_{7}\times \lambda \right)\right.\\&\qquad\left.+{c}_{l}\left(k-1\right){H}^{2}\times {u}_{7}^{2}\times {\lambda }^{2}\right]\end{aligned}$$
(39)
$$\begin{aligned}&{c}_{l}\left(k+1\right)\left[\frac{2\left[1-{w}_{1}\left(k\right)+{w}_{2}\left(k\right)\right]+H\times {u}_{7}\times \lambda }{2\left[1-{w}_{1}\left(k\right)+{w}_{2}\left(k\right)\right]}\right]\\&\quad=\frac{1}{2}\left[{c}_{prey}\left(1+H\times {u}_{7}\times \lambda \right)\left(2-H\times {u}_{7}\times \lambda \right)\right.\\&\qquad\left.+\left(\frac{{w}_{1}\left(k\right){c}_{best}\left(k\right)-{w}_{2}\left(k\right){c}_{worst}\left(k\right)}{\left[1-{w}_{1}\left(k\right)+{w}_{2}\left(k\right)\right]}\right)\left(H\times {u}_{7}\times \lambda \right)\right.\\&\qquad\left.+{c}_{l}\left(k-1\right){H}^{2}\times {u}_{7}^{2}\times {\lambda }^{2}\right]\end{aligned}$$
(40)

The final equation is

$$\begin{aligned}{c}_{l}\left(k+1\right)=&\ \left[{c}_{prey}\left(1+H\times {u}_{7}\times \lambda \right)\left(2-H\times {u}_{7}\times \lambda \right)\right.\\&\left.+\left(\frac{{w}_{1}\left(k\right){c}_{best}\left(k\right)-{w}_{2}\left(k\right){c}_{worst}\left(k\right)}{\left[1-{w}_{1}\left(k\right)+{w}_{2}\left(k\right)\right]}\right)\left(H\times {u}_{7}\times \lambda \right)\right.\\&\left.+{c}_{l}\left(k-1\right){H}^{2}\times {u}_{7}^{2}\times {\lambda }^{2}\right]\times \left[\frac{\left[1-{w}_{1}\left(k\right)+{w}_{2}\left(k\right)\right]}{2\left[1-{w}_{1}\left(k\right)+{w}_{2}\left(k\right)\right]+H\times {u}_{7}\times \lambda }\right]\end{aligned}$$
(41)

The algorithm is terminated when the stopping criterion is satisfied. The pseudo-code of the devised CJHBA is given in Algorithm 2, and a worked sketch of the final update in Eq. (41) follows it.

Algorithm 2 Pseudo-code of the proposed CJHBA
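For concreteness, the solved update of Eq. (41) can be sketched as follows; scalar positions are used for readability, whereas in the model each position encodes DRN weights and biases, and all argument values in the toy call are hypothetical.

```python
import numpy as np

def cjhba_update(c_prev, c_prey, c_best, c_worst, w1, w2, H, u7, lam):
    """Final CJHBA position update of Eq. (41): the chronological HBA
    honey-mode update combined with the Jaya best/worst terms."""
    g = H * u7 * lam                                   # shared factor H * u7 * lambda
    D = 1.0 - w1 + w2                                  # Jaya denominator [1 - w1 + w2]
    jaya_term = (w1 * c_best - w2 * c_worst) / D
    numer = c_prey * (1 + g) * (2 - g) + jaya_term * g + (g ** 2) * c_prev
    return numer * D / (2.0 * D + g)

# Toy call on scalar positions (c_prev is the solution at iteration k-1).
print(cjhba_update(c_prev=0.3, c_prey=0.9, c_best=0.8, c_worst=0.1,
                   w1=0.6, w2=0.3, H=1.0, u7=0.5, lam=1.2))
```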

Thus, tumorous and non-tumorous cases are determined through the integration of the chronological concept, the HBA, and the Jaya algorithm.

Results and Discussion

This section contains the details regarding the experimentation and discussion of the implemented CJHBA-based DRN. The setup details, dataset description, metrics, and results are explained in the following sections.

Experimental Setup

The experimentation of the implemented CJHBA-based DRN is carried out in MATLAB on a Windows 10 machine with 4 GB RAM and an Intel i3 processor.

Dataset Description

The efficiency of the implemented CJHBA-based DRN is evaluated using two datasets, namely BRATS 2018 [27] and the Figshare dataset [28].

BRATS 2018 Dataset (dataset-A)

This dataset contains multi-institutional MRI scans of pre-operative brain tumors. It concentrates on the segmentation of intrinsically heterogeneous (in histology, shape, and appearance) brain tumors, namely gliomas. Moreover, this dataset is used to pinpoint the clinical relevance of segmentation and concentrates on overall survival prediction using integrative analyses of radiomic features and machine learning algorithms.

Figshare Dataset (dataset-B)

It is a brain tumor dataset with 3064 T1-weighted contrast-enhanced images. It contains three brain tumor types: glioma, meningioma, and pituitary. Here, 930 slices belong to pituitary tumors, 1426 slices to gliomas, and 708 slices to meningiomas.

Performance Metrics

The testing accuracy, specificity, sensitivity, and receiver operating characteristic (ROC) curve metrics are considered for evaluating the implemented scheme.

Accuracy: It is defined as the proportion of inputs correctly classified by the implemented brain tumor classification method, and it is formulated as,

$$Accuracy=\frac{TPos+TNeg}{FNeg+FPos+TPos+TNeg}$$
(42)

where true negative, true positive, false negative, and false positive are denoted as \(TNeg\), \(TPos\), \(FNeg\), and \(FPos\), respectively.

Specificity

It is determined as the proportion of correct identification of true negatives, and the formula is given as,

$$Specificity=\frac{TNeg}{TNeg+FPos}$$
(43)

Sensitivity

It is determined as the proportion of correct identification of true positives, and the formula is given as,

$$Sensitivity=\frac{TPos}{TPos+FNeg}$$
(44)

ROC

It represents the classification aptitude of the proposed brain tumor classification system. It plots the true positive rate (TPR) against the false positive rate (FPR) under various threshold settings.
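The three metrics of Eqs. (42)–(44) can be computed from the confusion counts as follows; the counts in the example are hypothetical.

```python
def classification_metrics(t_pos, t_neg, f_pos, f_neg):
    """Accuracy, sensitivity, and specificity per Eqs. (42)-(44)."""
    accuracy = (t_pos + t_neg) / (t_pos + t_neg + f_pos + f_neg)
    sensitivity = t_pos / (t_pos + f_neg)
    specificity = t_neg / (t_neg + f_pos)
    return accuracy, sensitivity, specificity

# Example confusion counts from a hypothetical test split.
print(classification_metrics(t_pos=90, t_neg=85, f_pos=10, f_neg=15))
```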

Experimental Results

The experimental results using dataset-A (three images) are given in Fig. 5. The input MRIs are represented in Fig. 5a, and the pre-processed results are provided in Fig. 5b. Then, the segmented MRIs are given in Fig. 5c. The augmented results are given in Fig. 5d (flipped), Fig. 5e (rotated), Fig. 5f (translated), and Fig. 5g (noise-added).

Fig. 5 Experimental results using dataset-A: a input images, b pre-processed images, c segmented images, d flipped images, e rotated images, f translated images, and g noise-added images

Figure 6 represents the experimental results using dataset-B (three images). The input MRIs are represented in Fig. 6a, and the pre-processed results are provided in Fig. 6b. Then, the segmented MRIs are given in Fig. 6c. The augmented results are given in Fig. 6d (flipped), Fig. 6e (rotated), Fig. 6f (translated), and Fig. 6g (noise-added).

Fig. 6 Experimental results using dataset-B: a input images, b pre-processed images, c segmented images, d flipped images, e rotated images, f translated images, and g noise-added images

Segmentation Analysis

The segmentation accuracy is analyzed using dataset-A and dataset-B, as depicted in Fig. 7. The segmentation accuracy using dataset-A is shown in Fig. 7a, which is evaluated by varying the training data. The comparison methods used for evaluating the segmentation accuracy of the implemented DeepMRSeg are SegNet, U-Net, and DeepJoint. When the training data = 70%, the segmentation accuracy of SegNet, U-Net, DeepJoint, and the implemented DeepMRSeg is 0.8289, 0.8578, 0.8630, and 0.9052, respectively. The segmentation accuracy of the DeepMRSeg is higher because the DeepMRSeg is trained with the hybrid optimization technique CHBA. Similarly, the segmentation accuracy of the implemented DeepMRSeg determined using dataset-B is given in Fig. 7b. For training data = 60%, the segmentation accuracy of the implemented DeepMRSeg is 0.9023, whereas SegNet, U-Net, and DeepJoint obtain segmentation accuracies of 0.8405, 0.8517, and 0.8842, respectively. When the percentage of training data increases, the segmentation accuracy improves, and the maximum accuracy is achieved at 90% training data.

Fig. 7 Analysis based on segmentation accuracy: a dataset-A and b dataset-B

Comparative Methods

The classification performance of the implemented CJHBA-based DRN is compared with previously implemented techniques, such as transfer learning [26], Bayesian fuzzy clustering (BFC) [29], deep neural network (DNN) [30], and CNN [31].

Comparative Analysis

The classification performance of the implemented technique is evaluated using two datasets, which are discussed below.

Comparative Analysis using Dataset-A

The classification performance of the implemented CJHBA-based DRN using dataset-A is presented in Fig. 8. The analysis is based on adjusting the training percentage to examine accuracy, specificity, sensitivity, and ROC. The testing accuracy evaluation is shown in Fig. 8a. The accuracy of the implemented CJHBA-based DRN is 0.8934 for 60% of training data. For the same training percentage, the transfer learning, BFC, DNN, and CNN methods have accuracies of 0.8211, 0.8322, 0.8433, and 0.8754, respectively. Figure 8b shows the sensitivity analysis of the model. When 70% of the training data is used for the evaluation, the proposed method offers a sensitivity of 0.9089, and the improvement over transfer learning is 5.02%, over BFC is 3.84%, over DNN is 3.07%, and over CNN is 2.94%. The specificity analysis is depicted in Fig. 8c. The specificities of the models, such as transfer learning, BFC, DNN, CNN, and the implemented CJHBA-based DRN, are 0.8682, 0.8838, 0.8983, 0.9075, and 0.9239, respectively, for 80% of training data. The ROC analysis is given in Fig. 8d, in which the TPR is plotted against the FPR. When the TPR is 0.4, the FPR of the models, such as transfer learning, BFC, DNN, CNN, and the implemented CJHBA-based DRN, is 0.7968, 0.8124, 0.8699, 0.8515, and 0.8728, respectively. Considering the classification performance analysis, the developed CJHBA-based DRN attains maximum performance at 90% training data, which happens due to the effective training of the DRN classifier with the implemented CJHBA.

Fig. 8 Comparative analysis using dataset-A: a accuracy, b sensitivity, c specificity, and d ROC

Comparative Analysis Using Dataset-B

Figure 9 shows the classification performance of the implemented CJHBA-based DRN using dataset-B. The analysis is based on adjusting the training percentage to examine accuracy, specificity, sensitivity, and ROC. The testing accuracy determination is depicted in Fig. 9a. The testing accuracies of transfer learning, BFC, DNN, CNN, and the implemented CJHBA-based DRN are 0.7949, 0.8284, 0.8584, 0.8731, and 0.9110, respectively, for 80% of training data. The sensitivity evaluation is shown in Fig. 9b. The sensitivity of the implemented approach is 0.9043 for 60% of training data. For the same training percentage, the transfer learning, BFC, DNN, and CNN methods have sensitivities of 0.7954, 0.8190, 0.8231, and 0.8640, respectively. Figure 9c shows the specificity analysis of the model. When 70% of the training data is considered for the evaluation, the proposed method offers a specificity of 0.9109, which is 11.27% higher than transfer learning, 7.80% higher than BFC, 9.08% higher than DNN, and 3.44% higher than CNN. Figure 9d shows the ROC analysis. The FPR of transfer learning is 0.8823, of BFC is 0.8809, of DNN is 0.8876, of CNN is 0.8612, and of the implemented approach is 0.8976, when considering a TPR of 0.6. The classification performance is enhanced because of the hybridization of the chronological concept, the Jaya algorithm, and the HBA.

Fig. 9 Comparative analysis using dataset-B: a accuracy, b sensitivity, c specificity, and d ROC

Algorithmic Analysis

The algorithmic analysis is carried out to show the efficiency of the implemented CJHBA + DRN relative to other optimization algorithms. Here, the optimization algorithms considered for the evaluation are competitive swarm optimization (CSO) + DRN, JOA + DRN, HBA + DRN, and the proposed CJHBA + DRN.

Algorithmic Analysis Using Dataset-A

The algorithmic analysis using dataset-A is shown in Fig. 10. The evaluation is done using various metrics. The algorithmic analysis of accuracy is depicted in Fig. 10a by varying the training data. When 70% of the training data is used for evaluation, the accuracies of the methods CSO + DRN, JOA + DRN, HBA + DRN, and the implemented CJHBA + DRN are 0.8594, 0.8713, 0.8788, and 0.8898, respectively. Similarly, the sensitivity and specificity algorithmic analyses are given in Fig. 10b and c, respectively. In the sensitivity analysis, for 80% of training data, the sensitivity of the implemented CJHBA + DRN is 0.9038, while those of the other methods, CSO + DRN, JOA + DRN, and HBA + DRN, are 0.8726, 0.8837, and 0.8868, respectively. In the same way, for the same training percentage, the specificity of the CJHBA + DRN is 0.9061, whereas that of CSO + DRN is 0.8749, JOA + DRN is 0.8860, and HBA + DRN is 0.8891. On dataset-A, the implemented algorithm shows effective outcomes, which is because of the hybridization of the chronological concept, the Jaya algorithm, and the HBA.

Fig. 10 Algorithmic analysis using dataset-A: a accuracy, b sensitivity, and c specificity

Algorithmic Analysis Using Dataset-B

Figure 11 depicts the algorithmic analysis using dataset-B. The accuracy-based algorithmic analysis is provided in Fig. 11a. The accuracies of the methods CSO + DRN, JOA + DRN, and HBA + DRN are 0.7926, 0.8410, and 0.8426, respectively. Similarly, the sensitivity analysis is shown in Fig. 11b. For 60% training data, the sensitivity of the implemented CJHBA + DRN is 0.8608, while those of the other methods, CSO + DRN, JOA + DRN, and HBA + DRN, are 0.7919, 0.8026, and 0.8132, respectively. The specificity analysis is depicted in Fig. 11c. Likewise, for the same training percentage, the specificity of the implemented algorithm is 0.8695, whereas that of CSO + DRN is 0.7846, JOA + DRN is 0.8097, and HBA + DRN is 0.8108. The performance is maximum at the highest training data percentage.

Fig. 11 Algorithmic analysis using dataset-B: a accuracy, b sensitivity, and c specificity

Comparative Discussion

Table 1 shows the comparative analysis of brain tumor classification; the values given in the table are attained at 90% training data. Considering dataset-A, the testing accuracy of the implemented model is 0.9210, which is 3.88%, 3.49%, 2.53%, and 2.03% higher than that of the other models, namely transfer learning, BFC, DNN, and CNN, respectively. Among the previously developed comparative methods, the DNN attained the maximum sensitivity of 0.9110, but the proposed CJHBA-based DRN improves on the existing DNN by 2.18%. Similarly, the specificity of the implemented model is 0.9284, which is comparably higher than that of the previously developed techniques, namely transfer learning, BFC, DNN, and CNN. The comparative discussion on dataset-B is as follows: the maximum accuracy attained by the implemented technique is 0.9184, which is 10.90%, 8.06%, 5.66%, and 2.75% higher than the existing transfer learning, BFC, DNN, and CNN, respectively. The sensitivity of the implemented model is 0.9155, which is comparably higher than that of the previously developed techniques. Also, among the previously developed comparative methods, the CNN attained the maximum specificity of 0.9186, but the proposed CJHBA-based DRN improves on the existing CNN by 1.69%.

Table 1 Comparative analysis

The discussion of the algorithmic analysis of brain tumor classification is depicted in Table 2, and the values given in Table 2 are attained at 90% training data. From this table, it is proven that the implemented CJHBA + DRN attains the maximum accuracy, specificity, and sensitivity. Considering dataset-A, the implemented CJHBA + DRN attains a maximum accuracy of 0.9104, sensitivity of 0.9166, and specificity of 0.9189. Similarly, the maximum accuracy, sensitivity, and specificity obtained on dataset-B are 0.9089, 0.9114, and 0.9031, respectively.

Table 2 Algorithmic analysis

Conclusion

The brain tumor is the most dangerous abnormality, defined as the unnatural and uncontrolled growth of brain cells. Exact and early diagnosis plays a main role in the treatment of patients. Hence, in this research, an efficient optimization-enabled deep learning approach is devised for segmenting and classifying tumors in the brain. In the devised technique, the MRI is initially pre-processed using a normalization scheme, and then segmentation is achieved using the CHBA-based DeepMRSeg. After that, the CNN features are extracted to effectively classify the tumors in the brain. Then, data augmentation is carried out, after which classification is performed using the DRN. The DRN is trained using the proposed CJHBA, which is obtained by integrating the Jaya algorithm, the HBA, and the chronological concept. Thus, the proposed method effectively categorizes tumorous and non-tumorous patients. The efficiency of the implemented classification technique is evaluated using two publicly available datasets, namely BRATS 2018 and Figshare, on which the maximum specificity, sensitivity, and accuracy are attained using the BRATS dataset with values of 0.9284, 0.9313, and 0.9210, respectively. In the future, more features will be considered to enhance the performance of the developed model. Also, an effective method will be implemented for classifying the types of tumor and non-tumor tissues.