Abstract
Globally, lung cancer is among the most prominent causes of death and one of the most malignant tumors affecting human health. Hence, automatically detecting or diagnosing lung disease from computerized tomography (CT) scan images is a necessary application. Numerous cancer classification systems have been developed for this. However, it is not easy to determine the presence of tumors in small nodules. Hence, a novel Aquila-optimized mish dropout-deep convolutional neural network (AmiD-DCNN) cancer classification system is proposed in this paper. Noise is first eliminated by utilizing an adaptive median filter, and the Chi-square distribution adapted contrast limited adaptive histogram equalization is utilized to elevate the contrast. The residual unity AlexNet is utilized to segment the lung regions from the preprocessed image; also, the Jaccard similarity and quadratic kernel-induced profuse clustering are employed to extract the cancerous region. The features are then extracted and fed to the AmiD-DCNN classifier to classify cancer. The experiments are evaluated and compared with the benchmark models. The experimental outcomes demonstrated the proposed model's efficient performance.
1 Introduction
With roughly 8.2 million deaths per year, cancer is the leading cause of mortality globally, and lung cancer (LC) tops this list with 1.69 million fatalities annually (Perez and Arbelaez 2020). LC is a lung disease that can be either primary (originating in the lungs) or metastatic (originating in other organs). LC is divided into small cell lung cancer (SCLC) and non-small cell lung cancer (NSCLC). SCLC, the most malignant form, occurs in 15% of cases, whereas NSCLC occurs in the remaining 85%. Adenocarcinoma, large-cell carcinoma, and squamous-cell carcinoma are the subcategories of NSCLC. Since the true cause of cancer and its complete therapy have not been discovered, lung cancer detection (LCD) continues to be a challenge for medical practitioners. However, cancer that is detected early enough can be treated (Akter et al. 2021) and survival rates can be elevated.
This is why accurate classification of lung illness is necessary; LCD systems can be created using medical imaging techniques such as chest X-rays, MRI scans, computerized tomography (CT) scans, etc. Since CT scans create cross-sectional body images using X-rays, which are crucial for diagnosing the medical condition, they are more appropriate for imaging human body organs like the lungs (Asuntha and Srinivasan 2020). Manually segmenting LC lesions, which calls for the drawing skill of a radiologist, is the conventional method for identifying LC in scan images. For a variety of reasons, it is difficult to produce strong and effective outcomes this way (Avanzo et al. 2020). Additionally, the sort, shape, size, and location of lung abnormalities may vary from what lung specialists predict. Thus, a computer-aided design (CAD) module was developed to apply morphological operators for pre-processing lung nodules (Firdaus Abdullah et al. 2020). For the early detection and classification of LC in CT scan images, several machine learning (ML) and deep learning (DL) algorithms were developed.
Graph convolution networks (GCN), artificial neural networks (ANN), deep neural networks (DNN), support vector machines (SVM), convolutional neural networks (CNN) or deep CNNs (DCNN), and hybrid DNNs with optimization algorithms are a few ML techniques utilized for LC categorization (Firdaus et al. 2020). Since DL networks, also known as DCNNs, accept images as input and can be trained end-to-end utilizing a supervised methodology while learning highly discriminative image features, the DCNN is picked as the accurate lung cancer classifier among these models (Gao et al. 2021). Owing to learning numerous features of the input CT scan images, the DCNNs, however, have a temporal complexity. Hence, a novel AmiD-DCNN-based LC classification framework is proposed to tackle this issue.
1.1 Problem statement
Certain limitations still need to be solved for better classification even though several ML- and DL-based models are available for LC classification. A few limitations of the prevailing models are framed as follows:
-
In the prevailing methodologies, grading the images grounded on the degree of the pulmonary nodule was not evaluated, which is of great significance for the LC diagnosis and treatment in clinical applications.
-
The captured CT images contain numerous irregular details along with low-quality pixels that diminish the LC prediction accuracy. The classification process' accuracy was significantly influenced by this.
By analyzing these downsides, the proposed scheme’s objective is to engender an accurate classifier with a reliable segmentation and patch extraction technique.
This paper is structured as follows: Sect. 2 describes the recent related works on LC classification. Section 3 explains the proposed methodology. Section 4 analyzes the proposed model's experimental outcomes. Finally, Sect. 5 concludes this paper and suggests future work.
2 Related works
Jena and George (2020) investigated an LCD model grounded on morphological feature extraction (FE) and kernel-based non-Gaussian CNN (KNG-CNN) classification of CT images. The false positives in the work were diagnosed by utilizing the KNG computation. Consequently, the KNG-CNN technique obtained an accuracy of 87.3%. But the input image quality was not pondered, which would affect the classifier's accuracy.
Karthiga and Rekha (2020) developed a DL-assisted prediction of LC on CT images grounded on the adaptive hierarchical heuristic mathematical model (AHHMM). Image acquisition, pre-processing, binarization, thresholding and segmentation, FE, and detection by DNN were the various stages comprised in this system. The test evaluation exhibited that the presence or absence of LC was detected by the AHHMM with 96.67% accuracy. However, the accuracy was analyzed only for ten images. Hence, when executed with enormous numbers of images, the accuracy could deviate.
Kavithaa et al. (2021) presented a k-nearest neighbor (KNN) method for LC prognosis. To elevate the KNN algorithm's accuracy, the features were chosen in the presented scheme by the genetic algorithm (GA). The presented approach's implementation on an LC database exposed that better accuracy was attained by the suggested model. However, the GA could not achieve the repetitive fitness evaluation for complex data. Hence, the k value had to be chosen accurately.
Kumbhar et al. (2022) proposed an approach grounded on the improved Naive Bayes classifier (I-NBC) algorithm for early LC prediction on CT images. An accelerated wrapper-based binary artificial bee colony (AWB-ABC) algorithm was utilized in this model for effective feature selection. A superior trade-off between the prevailing and the engendered techniques was exhibited by the experiential outcomes. However, the linear data that limits the model's performance was not classified by the I-NBC model.
Majidpourkhoei et al. (2021) developed a DL framework for the detection of lung nodules on CT images. The features from the lung images were automatically extracted by engendering CNN layers centered on the LeNet-5 model, succeeded by the suspicious regions' classification as either nodule or non-nodule objects. The experiential outcomes demonstrated that better accuracy than the prevailing works was attained by the developed CNN model. However, an overfitting issue was caused by LeNet-5 for a few images, which could affect the outcomes.
Maleki et al. (2021) presented an autonomous decision support scheme for LC detection and classification. A multidimensional region-based fully convolutional network (mRFCN) algorithm was developed in this model for that purpose. When contrasted with the benchmark nodule detection/classification models, better detection performance was exhibited by the experiential outcomes. However, more false positives were provided by the model on the clinical dataset than the compared model.
Marentakis et al. (2021) developed a CNN-based framework for LC diagnosis. An input image was classified by the developed CNN into one of three classes (benign, malignant, and non-cancerous). A classification accuracy of 93.9% with diminished false positives was achieved by the suggested CNN. Yet, better sensitivity than the compared model was not achieved.
Masood et al. (2020) presented an improved DNN (IDNN) and ensemble classifier for automatic LC identification from CT images. The noise was diminished with a multilevel brightness-preserving approach, and the affected region was segmented with the IDNN. Regarding logarithmic loss, the discussed model elevated the LC prediction rate. Nevertheless, predicting the ensemble classifier's output was hard, which could cause computational complexity. Nageswaran et al. (2022) introduced a deep Gaussian mixture model in a region-centered CNN (DGMM-RBCNN), which may have reduced computing cost, although it remained difficult to anticipate the ensemble classifier's output. For the elimination of over-parameterized solutions in the RBCNN model, DGMM-based dimensionality reduction was introduced in the given model. In the experiential analysis, better accuracy was achieved by the DGMM-RBCNN model than the prevailing system. In the initial stages, the model's precision was poor, which could diminish the average precision and affect the model's performance.
Nanglia et al. (2021) presented elastic deformation (ED)-based ML to augment the subtype classification of NSCLC. For the output prediction, two classifier models were trained on the original and augmented datasets. The developed model's performance was evaluated regarding accuracy, sensitivity, specificity, and F1-score. The model was made by processing more features, which takes more time to train the parameters and could create time complexity.
Perez and Arbelaez (2020) presented a methodology for LC detection with enhanced segmentation accuracy. Here, the segmented data was classified by a neuro-fuzzy classifier. The suggested model concluded with 90% accuracy and a diminished false alarm rate. As the model analyzed outcomes only for a limited number of images, the outcomes might deviate when processed with more images.
Raoof et al. (2020) developed a DL-based approach for LC detection and classification. The optimal features were selected by the fuzzy particle swarm optimization (FPSO) algorithm. It was verified from the experimental outcomes that better performance than the other techniques was attained by the developed technique. Yet, more accurate outcomes than the surveyed works were not produced by the work.
Rehman et al. (2021) presented a scheme of LC histology classification from CT images grounded on radiomics and DL models. Four diverse families of techniques were investigated for that. The outcomes exhibited that Inception and LSTM + Inception were the best CNNs, generating better performance than the other models. However, more time was taken to evaluate all models, which caused memory and time complexity.
Sori et al. (2021) introduced a DL-centered methodology for LCD from denoised CT scan images. The introduced DL model was a "denoising first" two-path CNN called the denoising first detection network (DFD-Net). Regarding accuracy, sensitivity, and specificity, better LCD outcomes than the recently introduced approaches were revealed by the DFD-Net. However, there was a loss of a few significant features, which aroused the overfitting issue.
An early LCD model for lung image segmentation and analysis is discussed in (Surendar 2021). For lung lobe segmentation, global thresholding and guided 3D watershed transform were the models employed. It can be claimed from the obtained outcomes that more accuracy was attained by the suggested model. But better segmentation cannot be executed for noisy images, which degraded the model.
3 Proposed lung cancer classification system
For the efficient identification and classification of LC, a novel AmiD-DCNN classifier is proposed in this paper. The lung region is first segmented in this scheme. The features are extracted from the segmented regions and fed into the AmiD-DCNN, which classifies cancer. Figure 1 depicts the proposed model's block diagram.
3.1 Preprocessing
The input chest CT image (I), gathered from publicly available datasets, is used by the proposed work for categorizing the LCs. The input image is induced into the preprocessing step owing to the input CT image's lower quality. Noise removal and contrast enhancement are the two steps in preprocessing.
3.1.1 Noise removal
Utilizing an adaptive median filter (AMF), the noise is eliminated from the input image. The drawbacks faced by the standard median filter are eliminated by the AMF design. The filter performs in two steps.
Step 1: After choosing the input images’ window size, the median value \(\left( {G_{{{\text{med}}}} } \right)\) for the image is computed.
where \(G_{\min } \in {\rm I}\) is the minimum gray level value, \(G_{\max } \in {\rm I}\) is the maximum gray level value in the selected window \(W\), and the parameters of the gray level value in step 1 are notated as \(L_{{{\text{lev}}\,1}}^{1}\) and \(L_{{{\text{lev}}\,1}}^{2}\).
Here, the elevated window size is signified as \(W_{{{\text{increased}}}}\).
Step 2: Here, it is checked whether the current pixel value in the input image is corrupted by noise or not. The pixel is replaced with the median if it is corrupted; otherwise, the grayscale pixel value is retained.
where the parameters of the gray level value in step 2 are signified as \(l_{{{\text{lev}}\,2}}^{1}\) and \(l_{{{\text{lev}}\,2}}^{2}\). The output from these two processes can be obtained as \(I_{{{\text{removed}}}}\).
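The two-step AMF described above can be sketched as follows. This is a minimal illustrative sketch, not the authors' implementation: the initial and maximum window sizes (`w_init`, `w_max`) and the edge-padding strategy are assumptions.

```python
import numpy as np

def adaptive_median_filter(img, w_init=3, w_max=9):
    """Two-step adaptive median filter sketch.

    Step 1 grows the window until the median is not an impulse
    (G_min < G_med < G_max); Step 2 keeps the original pixel value
    unless the pixel itself is an impulse, in which case the median
    replaces it. Window sizes are illustrative assumptions.
    """
    pad = w_max // 2
    padded = np.pad(img, pad, mode="edge")
    out = img.copy()
    rows, cols = img.shape
    for r in range(rows):
        for c in range(cols):
            w = w_init
            while w <= w_max:
                half = w // 2
                win = padded[r + pad - half: r + pad + half + 1,
                             c + pad - half: c + pad + half + 1]
                g_min, g_med, g_max = win.min(), np.median(win), win.max()
                if g_min < g_med < g_max:          # Step 1: median is not noise
                    pix = img[r, c]
                    if not (g_min < pix < g_max):  # Step 2: pixel is corrupted
                        out[r, c] = g_med
                    break
                w += 2                             # enlarge the window and retry
            else:
                out[r, c] = np.median(win)         # fall back to the last median
    return out
```

Applied to a salt-and-pepper-corrupted CT slice, this removes the impulse pixels while leaving uncorrupted gray levels untouched, which is the drawback the standard median filter cannot avoid.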
3.1.2 Contrast intensification
The Chi-square distribution adapted contrast limited adaptive histogram equalization (Chi-CLAHE) algorithm is utilized to enhance the filtered CT lung image \(I_{{{\text{removed}}}}\) in this section. CLAHE is a variant of the adaptive histogram equalization technique applied over all neighborhood pixels to enhance the image's contrast. The control parameters to adjust the contrast are chosen randomly in the conventional CLAHE algorithm, which may impact the contrast enhancement performance and elevate the number of iterations. Therefore, to overcome this drawback, the Chi-square distribution function is utilized in this proposed work to compute the control parameter. The control parameters in a single test are computed by the distribution without elevating the error probability.
The filtered image is initially partitioned into contextual regions with an equal number of pixels, named tiles. The histogram is computed from the pixels present in each tile. Then, the average number of pixels per gray level \(G_{{{\text{avg}}}}\) is computed as,
where the number of pixels in the \(s\) direction of the contextual region is notated as \(P_{s}\), the number of pixels in the \(t\) direction of the contextual region is represented as \(P_{t}\), and the number of gray levels is notated as \(N_{g}\). Using the Chi-square distribution, the clip limit \(cl_{{{\text{chi}}\;{\text{sq}}}}\) is set grounded on the average gray level and it is articulated as,
Here, a random number is denoted by \(R\). Then, the pixels over the control parameter are pondered as excess pixels and are redistributed for each gray level. To remove induced boundaries in the input image, it merges all the neighboring tiles using bilinear interpolation after executing the equalization. Hence, \(I_{{{\text{pre}}}}\) denotes the image after the contrast enhancement.
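A single-tile sketch of the Chi-CLAHE idea follows. It is illustrative only: the clip limit is derived from the tile's average gray-level count scaled by a chi-square-distributed sample (the degrees of freedom `dof` is an assumption), and the bilinear merging of neighboring tiles is omitted.

```python
import numpy as np

def chi_clahe_tile(tile, n_gray=256, dof=4):
    """Per-tile clipped histogram equalization with a chi-square-derived
    clip limit, sketching the Chi-CLAHE control-parameter idea above."""
    p_s, p_t = tile.shape
    g_avg = (p_s * p_t) / n_gray                 # average pixels per gray level
    rng = np.random.default_rng(0)
    chi_sq = rng.chisquare(dof)                  # chi-square sample replaces the random R
    clip_limit = max(1, int(g_avg * (1 + chi_sq)))
    hist, _ = np.histogram(tile, bins=n_gray, range=(0, n_gray))
    excess = np.maximum(hist - clip_limit, 0).sum()
    hist = np.minimum(hist, clip_limit) + excess // n_gray  # redistribute excess pixels
    cdf = np.cumsum(hist).astype(float)
    cdf = (cdf - cdf.min()) / max(cdf.max() - cdf.min(), 1) * (n_gray - 1)
    return cdf[tile.astype(int)].astype(np.uint8)
```

In the full algorithm, each tile is equalized this way and the tiles are then merged with bilinear interpolation to remove induced boundaries, yielding \(I_{{{\text{pre}}}}\).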
3.2 Lung segmentation
Here, the residual unity AlexNet (RU-AlexNet) is utilized to segment the lung image from \(I_{{{\text{pre}}}}\). A graphics processing unit (GPU) is utilized by AlexNet, a DCNN model, to accelerate image classification. However, AlexNet requires more time to segment the lung region, which may lead to errors. The proposed work adapts a residual network along with a unity normalization function to tackle these issues. The residual module, which fits a residual mapping, is the most vital part of the residual network: the convolutional layer's outcome is not provided directly; instead, the residual mapping is chosen. Moreover, unity normalization is utilized to normalize the image pixels. The proposed segmentation algorithm is called RU-AlexNet owing to the addition of the residual network (R) and unity normalization (U).
Here, the preprocessed lung image is provided as input to the RU-AlexNet. Also, the kernels are utilized to convolute the image and are articulated as,
where the number of kernels is denoted as \(\omega_{n}\). In the mathematical representation, the convolution for the given input data \(C\) is expressed as,
Unity normalization (U), which provides nonlinearity in the neural network, modifies the network structure. The gradient is also enhanced by the function. The normalization can be defined as,
Further, the convoluted features are provided in the residual network containing a number of residual blocks. Every residual block is followed by a unit normalization layer, convolution, and a ReLU activation function. The input is added directly to the activation function (skip connection) in each residual block.
The convolution's output is directly input to the first residual block. Convolution and unit normalization are executed as stated earlier. The output is injected directly into the ReLU activation function, which may aid in learning the complex patterns in the data. The residual block's output can be articulated as,
Now, the output with the introduction of skip connection is changed to,
This process is repeated until the last residual block. The residual network's output is transferred into the max pooling operation, which diminishes the number of features. The pooling operation \(\Im\) can be formulated as,
where the kernel stride is denoted as \(\omega_{{{\text{stride}}}}\). This process is continued until the last layer.
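A single residual block with unity normalization and a skip connection, as described above, can be sketched as follows. This is an illustrative NumPy sketch under assumed shapes (one channel, "same"-padded 2-D convolution), not the authors' network definition.

```python
import numpy as np

def unity_norm(x):
    # Unity normalization: scale activations into [0, 1]
    rng = x.max() - x.min()
    return (x - x.min()) / rng if rng > 0 else np.zeros_like(x)

def residual_block(x, kernel):
    """One residual block: convolution -> unity normalization,
    then ReLU applied to (normalized output + skip connection)."""
    pad = kernel.shape[0] // 2
    xp = np.pad(x, pad)                  # zero padding keeps the spatial size
    conv = np.zeros_like(x)
    k = kernel.shape[0]
    for i in range(x.shape[0]):
        for j in range(x.shape[1]):
            conv[i, j] = np.sum(xp[i:i + k, j:j + k] * kernel)
    # Skip connection: the block input is added back before the ReLU
    return np.maximum(unity_norm(conv) + x, 0)
```

Stacking several such blocks, followed by max pooling, mirrors the residual part of RU-AlexNet described above.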
All the features are flattened and provided to the fully connected layer (FCL) after the pooling operation. The last FCL is limited to a number of output nodes identical to the number of classes. To obtain class probabilities, the softmax activation function is utilized by the RU-AlexNet to normalize the output real values from the last FCL into the range \(\left[ {0,1} \right]\). The softmax equation \(S\) is signified as follows,
where the FCL output at the kth node is signified as \(\Im_{k}\), and \(b\) is the total number of output nodes. The segmented lung obtained from this process is denoted as \(I_{{{\text{seg}}}}\).
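The softmax normalization at the last fully connected layer can be sketched as follows (a standard numerically stable form; the subtraction of the maximum logit is a common implementation detail, not stated in the text):

```python
import numpy as np

def softmax(logits):
    """Map the b output-node values to class probabilities in [0, 1]
    that sum to 1, as used at the last fully connected layer."""
    z = logits - np.max(logits)   # stabilize the exponentials
    e = np.exp(z)
    return e / e.sum()
```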
3.3 Patch extraction
Here, the Jaccard similarity and quadratic kernel-induced profuse clustering (JQPC) is utilized to extract the abnormal (cancerous) regions from \(I_{{{\text{seg}}}}\). As per the pixel similarity, the lung region is inspected by the traditional profuse clustering (PC). For predicting the affected region, the images are segmented into several subimages by the PC. Here, the Jaccard similarity index groups similar superpixels into the same cluster. Also, k-means in PC has a distance computation issue during the clustering process; the quadratic kernel function is applied to overcome this. The extraction proceeds in the following steps.
Step 1: Take the lung segmented region \(I_{{{\text{seg}}}}\).
Step 2: Simple Linear Iterative Clustering (SLIC) is utilized to compute superpixels. To diminish the operational complexity, similar value pixels are grouped utilizing this algorithm.
Step 3: The k-means algorithm is utilized to cluster all the pixels present in the image. The centroid is chosen randomly for this purpose. Then, the distance \({\text{Dis}}_{q\ker }\) between the centroids \(\cup_{k}\) and the selected pixel \(p_{i}\) is computed as,
Here, the kernel function is represented as \(\phi\).
Step 4: For each pixel, the cluster whose centroid has the minimal distance to the pixel is chosen.
Step 5: Fuse superpixels.
Step 6: This process is repeated until all superpixels are processed. The output of this process is the normal region \(\beta\) and the abnormal region \(\alpha\). The abnormal region is utilized for further processing. The proposed JQPC's pseudocode is,
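The kernel-induced distance of Step 3 and the Jaccard grouping of superpixels can be sketched with the helper functions below. These are illustrative helpers, not the authors' pseudocode; the kernel offset `c` and the set-based superpixel representation are assumptions.

```python
import numpy as np

def quadratic_kernel(a, b, c=1.0):
    # Quadratic (degree-2 polynomial) kernel replacing the plain Euclidean distance
    return (np.dot(a, b) + c) ** 2

def kernel_distance(p, centroid):
    """Squared distance in the kernel-induced feature space:
    ||phi(p) - phi(u)||^2 = k(p,p) - 2 k(p,u) + k(u,u)."""
    return (quadratic_kernel(p, p)
            - 2 * quadratic_kernel(p, centroid)
            + quadratic_kernel(centroid, centroid))

def jaccard(a, b):
    """Jaccard similarity between two superpixels, each given as an
    iterable of pixel indices; used to fuse similar superpixels."""
    a, b = set(a), set(b)
    return len(a & b) / len(a | b) if a | b else 0.0
```

In a full JQPC loop, each pixel would be assigned to the centroid minimizing `kernel_distance`, and superpixels with high `jaccard` similarity would be fused (Step 5).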
3.4 Feature extraction
Here, the acquired input is the cancerous region \(\alpha\), which is passed on to the FE stage. Intensity histogram, histogram of oriented gradients, Gabor filter, entropy filter, grayscale contrast, grayscale correlation, grayscale energy, grayscale homogeneity, standard deviation, Haar wavelet, etc., are the several spectral features derived by the FE stage. The extracted features are articulated as,
where the number of extracted features is denoted as \(\varepsilon_{n}\).
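The grayscale texture features named above (contrast, correlation, energy, homogeneity) are commonly computed from a gray-level co-occurrence matrix; a sketch for a single horizontal offset follows. The quantization to `levels` bins and the (0, 1) offset are illustrative assumptions, not the paper's stated settings.

```python
import numpy as np

def glcm_features(img, levels=8):
    """Gray-level co-occurrence features for offset (0, 1):
    contrast, correlation, energy, and homogeneity."""
    q = (img * (levels - 1) / max(img.max(), 1)).astype(int)  # quantize gray levels
    glcm = np.zeros((levels, levels))
    for a, b in zip(q[:, :-1].ravel(), q[:, 1:].ravel()):
        glcm[a, b] += 1                      # count horizontal neighbor pairs
    p = glcm / glcm.sum()                    # normalize to joint probabilities
    i, j = np.indices((levels, levels))
    mu_i, mu_j = (i * p).sum(), (j * p).sum()
    s_i = np.sqrt(((i - mu_i) ** 2 * p).sum())
    s_j = np.sqrt(((j - mu_j) ** 2 * p).sum())
    return {
        "contrast": ((i - j) ** 2 * p).sum(),
        "correlation": (((i - mu_i) * (j - mu_j) * p).sum() / (s_i * s_j))
                       if s_i > 0 and s_j > 0 else 1.0,
        "energy": (p ** 2).sum(),
        "homogeneity": (p / (1 + np.abs(i - j))).sum(),
    }
```

Collecting such values over the cancerous region \(\alpha\), alongside the other listed descriptors, yields the feature vector \(\varepsilon_{n}\) fed to the classifier.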
3.5 Classification
For labeling the lung stages, the AmiD-DCNN further examines the extracted features \(\varepsilon_{n}\). The DCNN is the type most commonly utilized to identify patterns in images and video. However, owing to the elevated number of layers between the input and output layers, along with the random weights generated at each layer by active neurons, the traditional DCNN has a computational complexity issue. By replacing the ReLU activation with the mish activation function, a dropout connection is added to the DCNN network to execute effectively by overcoming such limitations. The adaptive learning strategy-centered Aquila optimizer (ALSAQ) algorithm is utilized to optimize the weight values in the DCNN. The proposed AmiD-DCNN's architecture is exhibited in Fig. 2.
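The mish-dropout replacement for ReLU described above can be sketched as follows. This is an illustrative sketch: the use of inverted dropout and the keep probability `prob` are assumptions, not the authors' stated configuration.

```python
import numpy as np

def mish(x):
    # Mish activation: x * tanh(softplus(x)), a smooth alternative to ReLU
    return x * np.tanh(np.log1p(np.exp(x)))

def mish_dropout(x, prob=0.5, training=True, rng=None):
    """Mish activation followed by (inverted) dropout, as substituted
    for ReLU in the AmiD-DCNN; `prob` is the keep probability."""
    y = mish(x)
    if not training:
        return y                       # dropout disabled at inference
    rng = rng or np.random.default_rng(0)
    mask = rng.random(y.shape) < prob
    return y * mask / prob             # inverted dropout preserves the expectation
```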
Weight initialization: The adaptive learning strategy-based Aquila optimizer (ALSAQ) algorithm is utilized to compute the proposed model's weights. Generally, the Aquila optimizer (AQ) is a new population-centered optimizer classified as a metaheuristic optimization technique. However, the AQ may fall readily into a local optimal solution and exhibits a very low convergence rate. For distributing the Aquila's position in the search space, an adaptive learning model is introduced in the proposed model to elevate the convergence speed and prevent the solution from falling into local optima.
Initially, a set of random numbers (number of Aquila) are initialized for this. It can be articulated as,
The novel algorithm updates the weights utilizing these initialized numbers. Utilizing the adaptive learning strategy, each Aquila updates its position between an upper limit (UL) and a lower limit (LL). It is expressed as,
where the total population size is notated as \(m\,\), each Aquila's dimension size is signified as \(d\), \(\aleph_{pq}\) represents the pth Aquila at the qth dimension, the random value is specified as \(R\), and \(UL_{q}\) and \(LL_{q}\) represent the upper and lower bounds at dimension \(q\).
To determine the position of the prey, the Aquila explores the area of the search space from the sky. The prey areas are identified by the Aquila, which chooses the best area for hunting. The exploration is computed as,
where the solution of the next iteration of \(\tau\) is represented as \(\aleph_{n}^{{{\text{exploration}}}} \left( {\tau + 1} \right)\), the best-obtained solution until the \(\tau\)th iteration is signified as \(\aleph_{{{\text{best}}}} \left( \tau \right)\), which describes the prey's exact position, the current solutions' average value at the \(\tau\)th iteration is notated as \(\aleph_{{{\text{avg}}}} \left( \tau \right)\), the parameter to control the search process is signified as \(\left( {1 - \frac{\tau }{t}} \right)\), \(\aleph_{m} \left( \tau \right)\) is the mth Aquila at the \(\tau\)th iteration, and the total number of iterations is denoted as \(t\).
Subsequently, the area of the prey, found at a high altitude, is chosen by the Aquila. The Aquila circles in the clouds, gets into position, and prepares to attack the prey. The step \(\aleph_{n}^{{{\text{prepare}}}}\) is expressed mathematically as,
where the random Aquila between \(\left[ {1 - m} \right]\) is symbolized as \(\aleph_{r} \left( \tau \right)\), \(s_{1}\) and \(s_{2}\) describe the spiral shape in the search space, and the distribution function of Lévy flights at dimension \(d\) is notated as \(L\left( d \right)\). The distribution function is articulated as,
where the fixed constant values of 0.01 and 1.5 are represented by \(f\) and \(\ell\), respectively, the random values between 0 and 1 are denoted as \(R_{1}\) and \(R_{2}\), and the constant value is notated as \(\hbar\).
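One iteration of the exploration-plus-adaptive-learning update described above can be sketched as follows. This is an illustrative sketch, not the authors' full ALSAQ: only the expanded-exploration move and the bound-keeping adaptive step are shown, and the fitness function, bounds, and greedy replacement rule are assumptions.

```python
import numpy as np

def alsaq_step(pop, fitness, t, t_max, ul=1.0, ll=-1.0, rng=None):
    """One sketch iteration: each Aquila moves toward the best solution
    with a shrinking factor (1 - t/T) plus a random pull toward the
    population average; the adaptive-learning step keeps the candidate
    inside [LL, UL]; a candidate replaces its parent only if fitter."""
    rng = rng or np.random.default_rng(0)
    best = pop[np.argmin([fitness(x) for x in pop])]
    avg = pop.mean(axis=0)
    new_pop = []
    for x in pop:
        step = best * (1 - t / t_max) + (avg - best) * rng.random()
        cand = np.clip(step, ll, ul)            # adaptive learning: stay in bounds
        new_pop.append(cand if fitness(cand) < fitness(x) else x)
    return np.array(new_pop)
```

Iterating such steps until the stopping criterion gives the weight vector used to initialize the DCNN, in the same way the Aquila converges on its prey.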
Consequently, the Aquila is in an exploitation position approaching the prey along with providing a pre-emptive attack. This behavior can be articulated as,
where the adjustment parameters are denoted as \(\lambda\) and \(\zeta\).
Consequently, the Aquila takes the prey by walking on the ground. The last location at which the Aquila attacked the prey is computed as,
where a quality function is signified by \(Q\), and the motion employed during tracking the best solution is notated as \(\upsilon_{1}\), which is articulated as,
\(\upsilon_{2}\) is a value decreasing from 2 to 0, and it is computed as,
The weights are updated in the developed network in the same way the Aquila attacks its prey among the number of prey. The updated weights \(\varphi_{n}\) are articulated as,
The input features are subjected to the developed neural network's convolution layer after updating the weights. For the given input data \(\Psi\), the convolution is mathematically expressed as,
After the convolution operation, the activation function is applied to the convolutional layers. By employing activation functions, the proposed work avoids learning trivial linear combinations of the inputs. Hence, dropout with a mish activation function is proposed in this paper. The activation function is mathematically represented as,
Here, the mish activation function is signified as \(f\left( \Psi \right)\), and the softplus function is notated as \(\Omega\). To prevent the model from overfitting, the dropout function after the activation function is articulated as,
Here, the probability function is represented as \(prob\), and the dropout function's output is denoted as \(D\). The obtained output is passed to the pooling layer, whose goal is to aggregate the information by downsampling it. The pooling process's outcome \(\Delta\) is:
where the kernel stride is denoted as \(S\). The acquired features are flattened and transferred into the FCL, where the entire features are converted into the one-dimensional array \(\Gamma\). To obtain class probabilities, the array is provided to the softmax function, which normalizes the output real values from the last FCL into the range \(\left[ {0,1} \right]\). The softmax function \(\Gamma_{{{\text{soft}}}}\) is signified as follows,
where the FCL output at the mth node is denoted as \(\Gamma_{m}\), and the total number of output nodes is illustrated as \(h\). Thus, LC can be classified with higher accuracy by this process. The proposed AmiD-DCNN's pseudocode is,
4 Results and discussions
In comparison with the benchmark techniques, the proposed LCD model's performance is analyzed experimentally in this section. The experiments are executed on the Python working platform. The data are gathered from the publicly available chest CT scan images dataset for experiential analysis. Figure 3 expresses a few sample image outcomes on the chest CT scan images dataset by the proposed model.
4.1 Performance analysis
Here, patch extraction and classification are the two segments in which the proposed models’ performance analysis is executed.
4.1.1 Performance analysis of the patch extraction
Regarding mean square error (MSE), root MSE (RMSE), clustering time, and clustering accuracy, the proposed JQPC algorithm’s performance is comparatively analyzed with the prevailing PC algorithms.
In comparison with the prevailing PC, K-Medoid, and FCM algorithms, the experimental outcomes of the MSE and RMSE metrics for the proposed JQPC algorithm are demonstrated in Table 1. Here, an MSE of 9.98 was produced by the proposed model, which is lower than the prevailing PC (15.89), K-Medoid (25.82), and FCM (28.20). The RMSE values are the square roots of the MSE values. Abnormal and normal regions are partitioned with the least error values by utilizing the proposed JQPC algorithm, as proved in Table 1.
To partition image regions into normal and abnormal regions, the time taken by the algorithm is the clustering time. It is verified from Fig. 4 that clustering times 4051 ms, 12,972 ms, and 15,959 ms less than the existing PC, K-Medoid, and FCM algorithms were attained by the proposed JQPC. This exhibits that the regions were clustered faster than the prevailing algorithms by the proposed JQPC.
Figure 5 gives the comparative analysis of the proposed and the prevailing models' clustering accuracy. The clustered output's quality was determined by evaluating the clustering accuracy. Here, among the prevailing algorithms, PC gives more accurate (94.57%) results than the existing K-Medoid (91.31%) and FCM (89.72%). However, 3.53% more accurate results than the PC algorithm were attained by the proposed algorithm. Implementing the Jaccard similarity and quadratic kernel in the PC algorithm caused this improvement.
4.1.2 Performance analysis of classification
Regarding training time, precision, recall, f-measure, sensitivity, specificity, and accuracy, the proposed AmiD-DCNN's experiential analysis is conducted against the prevailing CNN, recurrent neural network (RNN), deep belief network (DBN), and DNN.
The time taken by the classifier to train all the input parameters is termed the training time. Table 2 depicts the results obtained during the experimental analysis. Here, 40,007 ms is the proposed AmiD-DCNN's training time, which is less than the CNN (45,005 ms), RNN (50,010 ms), DBN (55,002 ms), and DNN (60,011 ms). This exhibits that fewer time computations were taken by the proposed classifier.
Regarding precision, the proposed classifier’s experimental analysis and its comparison with existing algorithms are signified in Fig. 6. The precision value determines whether the LC stages are predicted correctly or incorrectly. Here, poor outcomes than the other classifiers were provided by the DNN (86.46%). Followed by CNN, the DBN and RNN perform slightly better than the DNN. By attaining a precision of 96.88%, the other baseline techniques were dominated by the proposed AmiD-DCNN. Owing to the utilization of Aquila-optimized mish dropout in the DCNN classifier, a better value is obtained.
The quality of the output results is evaluated by the metrics sensitivity and specificity. Figure 7 pictorially represents the experimental outcomes of sensitivity and specificity. It is evident from Fig. 7 that better performance than the baseline classifiers was attained by the proposed AmiD-DCNN. The proposed classifier's sensitivity is 96.88%, which is higher than the prevailing CNN (91.91%), DBN (89.4%), and DNN (86.46%). The proposed model's specificity is 96.70%, which is better than the other algorithms. This analysis exhibits that higher-quality output than the state-of-the-art classifiers was given by the proposed AmiD-DCNN.
The accuracy outcomes obtained during the experimental analysis for the proposed and the existing classifiers are given in Table 3. As the accuracy determines how correctly the outcomes are predicted, the system's accuracy should be as high as possible. Here, 96.79% is the proposed AmiD-DCNN's accuracy, which is higher than the CNN, RNN, and DNN by 5.22%, 6.61%, and 11.39%, respectively. This exhibits that the proposed classifier is more appropriate than the baseline classifiers for LC stage classification.
Figure 8 illustrates the f-measure values obtained by the proposed and existing classifier algorithms. For a scheme to give accurate outcomes, its f-measure value should be high. The AmiD-DCNN's f-measure is 96.88%, followed by the CNN (91.91%), the RNN (90.83%), and so on. This proves that the proposed classifier predicts the output more reliably than the prevailing baseline classifiers.
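The f-measure is the harmonic mean of precision and recall; since the reported precision and sensitivity are both 96.88%, an f-measure of 96.88% is internally consistent. A minimal check, assuming the standard F1 definition:

```python
def f_measure(precision_pct, recall_pct):
    """F1 score: harmonic mean of precision and recall (both in percent)."""
    return 2.0 * precision_pct * recall_pct / (precision_pct + recall_pct)

# When precision equals recall, their harmonic mean equals that same value.
print(f"{f_measure(96.88, 96.88):.2f}%")  # 96.88%
```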
4.2 Comparative analysis with existing works
The proposed model's performance is now analyzed in terms of accuracy and specificity, in comparison with prevailing works such as CNN (Yu et al. 2020), IDNN, FPSOCNN, and DFD-Net.
Table 4 presents the accuracy and specificity analysis of the proposed AmiD-DCNN algorithm against the prevailing algorithms. It is seen from Table 4 that the proposed AmiD-DCNN attained 1.22%, 3.29%, and 10.23% more accurate results than the existing FPSOCNN, IDNN, and DFD-Net models, respectively. Thus, it can be concluded that the proposed model detects LC more effectively than the prevailing works.
5 Conclusion
In this paper, a novel AmiD-DCNN-based LC classification framework was proposed. Here, the RU-AlexNet segmented the tumor regions, and the JQPC algorithm partitioned the patches. Finally, the AmiD-DCNN classified the data into the stages of LC. The experimental outcomes proved the proposed model to be better than the prevailing algorithms. For instance, the proposed JQPC achieved 94.57%, which was far better than the compared algorithms. The proposed AmiD-DCNN classifier's performance was proved by achieving an average accuracy of 96.79% with a lower training time of 40,007 ms. In the comparative analysis with recent works, the proposed model also showed improved performance, and the analyses exhibited its dominance over the other models for LC prediction. The proposed model successfully classified the LC stages. However, the risk rate was not predicted. In the future, an advanced risk prediction approach will be developed alongside the proposed model, grounded on the cancer-affected region.
Data availability
Data included in article/supplementary material referenced in article.
References
Akter O, Moni MA, Islam MM, Quinn JMW, Kamal AHM (2021) Lung cancer detection using enhanced segmentation accuracy. Appl Intell 51(6):3391–3404. https://doi.org/10.1007/s10489-020-02046-y
Asuntha A, Srinivasan A (2020) Deep learning based lung cancer detection and classification. Multimed Tools Appl 79:1–32. https://doi.org/10.1088/1757-899X/994/1/012026
Avanzo M, Stancanello J, Pirrone G, Sartor G (2020) Radiomics and deep learning in lung cancer. Strahlenther Onkol 196(10):879–887. https://doi.org/10.1007/s00066-020-01625-9
Firdaus Abdullah M, Noraini Sulaiman S, Khusairi Osman M, Karim NKA, Lutfi Shuaib I, Danial Irfan Alhamdu M (2020) Classification of lung cancer stages from CT scan images using image processing and k-nearest neighbours. In: 2020 11th IEEE control and system graduate research colloquium, ICSGRC 2020—Proceedings, pp 68–72. https://doi.org/10.1109/ICSGRC49013.2020.9232492
Firdaus Q, Sigit R, Harsono T, Anwar A (2020) Lung cancer detection based on CT-scan images with detection features using gray level co-occurrence matrix (GLCM) and support vector machine (SVM) methods. In: IES 2020—international electronics symposium: the role of autonomous and intelligent systems for human life and comfort, pp 643–648. https://doi.org/10.1109/IES50839.2020.9231663
Gao Y, Song F, Zhang P, Liu J, Cui J, Ma Y, Zhang G, Luo J (2021) Improving the subtype classification of non-small cell lung cancer by elastic deformation based machine learning. J Digit Imaging 34(3):605–617. https://doi.org/10.1007/s10278-021-00455-0
Jena SR, George ST (2020) Morphological feature extraction and KNG-CNN classification of CT images for early lung cancer detection. Int J Imaging Syst Technol 30(4):1324–1336. https://doi.org/10.1002/ima.22445
Karthiga B, Rekha M (2020) Feature extraction and I-NB classification of CT images for early lung cancer detection. Mater Today Proc 33:3334–3341. https://doi.org/10.1016/j.matpr.2020.04.896
Kavithaa G, Balakrishnan P, Yuvaraj SA (2021) Lung cancer detection and improving accuracy using linear subspace image classification algorithm. Interdiscip Sci Comput Life Sci. https://doi.org/10.1007/s12539-021-00468-x
Kumbhar VB, Chavan MS, Prasad SR, Rayjadhav SB (2022) A novel method of CT chest image segmentation and analysis for early lung cancer detection. J Inst Eng (india). https://doi.org/10.1007/s40031-022-00808-5
Majidpourkhoei R, Alilou M, Majidzadeh K, Babazadehsangar A (2021) A novel deep learning framework for lung nodule detection in 3d CT images. Multimed Tools Appl 80(20):30539–30555. https://doi.org/10.1007/s11042-021-11066-w
Maleki N, Zeinali Y, Niaki STA (2021) A k-NN method for lung cancer prognosis with the use of a genetic algorithm for feature selection. Expert Syst Appl 164:1–7. https://doi.org/10.1016/j.eswa.2020.113981
Marentakis P, Karaiskos P, Kouloulias V, Kelekis N, Argentos S, Oikonomopoulos N, Loukas C (2021) Lung cancer histology classification from CT images based on radiomics and deep learning models. Med Biol Eng Comput 59(1):215–226. https://doi.org/10.1007/s11517-020-02302-w
Masood A, Sheng B, Yang P, Li P, Li H, Kim J, Feng DD (2020) Automated decision support system for lung cancer detection and classification via enhanced RFCN with multilayer fusion RPN. IEEE Trans Ind Inf 16(12):7791–7801. https://doi.org/10.1109/TII.2020.2972918
Nageswaran S, Arunkumar G, Bisht AK, Mewada S, Kumar JNVRS, Jawarneh M, Asenso E (2022) Lung cancer classification and prediction using machine learning and image processing. Biomed Res Int. https://doi.org/10.1155/2022/1755460
Nanglia P, Kumar S, Mahajan AN, Singh P, Rathee D (2021) A hybrid algorithm for lung cancer classification using SVM and neural networks. ICT Express 7(3):335–341. https://doi.org/10.1016/j.icte.2020.06.007
Perez G, Arbelaez P (2020) Automated lung cancer diagnosis using three-dimensional convolutional neural networks. Med Biol Eng Comput 58(8):1803–1815. https://doi.org/10.1007/s11517-020-02197-7
Raoof SS, Jabbar MA, Fathima SA (2020) Lung cancer prediction using machine learning: a comprehensive approach. In: 2nd International conference on innovative mechanisms for industry applications, ICIMIA 2020—Conference proceedings, pp 108–115. https://doi.org/10.1109/ICIMIA48430.2020.9074947
Rehman A, Kashif M, Abunadi I, Ayesha N (2021) Lung cancer detection and classification from chest CT scans using machine learning techniques. In: 2021 1st International conference on artificial intelligence and data analytics, CAIDA 2021, pp 101–104. https://doi.org/10.1109/CAIDA51941.2021.9425269
Sori WJ, Feng J, Godana AW, Liu S, Gelmecha DJ (2021) DFD-Net: lung cancer detection from denoised CT scan image using deep learning. Front Comp Sci 15(2):1–13. https://doi.org/10.1007/s11704-020-9050-z
Surendar P (2021) Diagnosis of lung cancer using hybrid deep neural network with adaptive sine cosine crow search algorithm. J Comput Sci 53:1–16. https://doi.org/10.1016/j.jocs.2021.101374
Yu H, Zhou Z, Wang Q (2020) Deep learning assisted predict of lung cancer on computed tomography images using the adaptive hierarchical heuristic mathematical model. IEEE Access 8:86400–86410. https://doi.org/10.1109/ACCESS.2020.2992645
Funding
Not applicable.
Ethics declarations
Conflict of interest
The authors have no conflicts of interest to declare relevant to this article’s content.
Human and animal rights
This research does not involve any human participants and/or animals; hence, any informed consent or statement on the welfare of animals does not apply to this research.
Cite this article
Reddy, T.P.K., Bharathi, P.S. Lung cancer detection using novel residual unity AlexNet-based optimized mish dropout-deep convolutional neural network. Soft Comput (2023). https://doi.org/10.1007/s00500-023-08970-8
Keywords
- Adaptive median filter (AMF)
- Chi-square distribution adapted contrast limited adaptive histogram equalization (Chi-CLAHE) algorithm
- Residual unity AlexNet (RU-AlexNet)
- Jaccard similarity and quadratic kernel-induced profuse clustering (JQPC)
- Aquila-optimized mish dropout-deep convolutional neural network (AmiD-DCNN)