1 Introduction

Object detection and classification are challenging computer vision (CV) tasks that assign objects to their class labels. The domain receives much attention owing to its numerous applications, including video surveillance, target recognition, face detection, optical character recognition, video stabilization, image watermarking, plant disease recognition, and automated pedestrian detection [24, 48, 51, 52]. Promising results have recently been achieved for simple images with a plain background, but cases involving complex backgrounds, multiple shapes, and congested scenes [46] still require further improvement.

Researchers have worked in this domain for the last two decades, addressing the core challenges of object detection and classification: complex backgrounds, feature extraction, best feature selection, execution time, and accurate classification. Several feature-extraction-based methods have been introduced to detect and classify complex objects with standard classifiers. The most widely used features for object classification include color [39, 46], shape (Histogram of Oriented Gradients (HOG)) [49], texture (Local Binary Patterns (LBP)) [59], local and global descriptors (SIFT, PCA), Bag of Features (BoF) [40], and deep features (convolutional neural networks (CNN)) [3]. Lazebnik et al. [28] addressed the limitations of BoF with Spatial Pyramid Matching (SPM), which divides the image into spatial sub-regions and computes a histogram for each sub-region, later used to build a spatially sensitive feature vector. In addition to feature extraction, several researchers adopt fusion strategies to exploit the distinct patterns of different descriptors, which increases classification accuracy [31, 34, 35]. The most common fusion methods are serial and parallel fusion, used in domains such as medical imaging, video surveillance, and biometrics. Stochastic discriminant analysis (SDA) [22], fusion of low-level and mid-level features, and transfer-based feature fusion are prominent algorithms in this domain that enhance recognition accuracy [37].

To overcome the limits of conventional handcrafted feature-extraction algorithms, deep learning is a suitable candidate for this domain owing to its representational ability. CNNs are a subtype of deep architecture [5] and have shown improved performance for classification and recognition, with successful applications in machine learning and pattern recognition [41]. Several pre-trained deep CNN (DCNN) models have been introduced, such as VGG [55], AlexNet [27], ResNet [19], the YOLO model, and GoogleNet [58]. These models have been applied in several directions, such as image classification [47], action recognition [36], medical imaging [10], agricultural plants, and a few more [25]. Jun et al. [38] presented an image classification method based on a Group Sparse Deep Stack Network (GS-DSN). The method consists of two modules: the first captures the interdependencies among hidden units by splitting them into merged groups, whereas the second splits the image description into sub-groups so that each sample can be clustered, with gradient descent used to estimate the weights. Pre-trained VGG CNN models are used for feature extraction, and a group sparse network module (GSNM) performs classification. Wei et al. [60] presented an image classification method based on an intra-class CNN feature pyramid, where the lower-level layers capture structural information and the higher-level layers capture semantic information. AlexNet and VGG16 pre-trained CNN models are used for feature extraction and yield improved performance on the Caltech-101 dataset.

Recently, feature reduction and selection techniques have gained much attention in the ML domain because a large number of features reduces classification accuracy and increases computation time [26]. The main aim of the feature reduction step is to address the curse of dimensionality. Prominent existing feature reduction methods include entropy-based feature reduction [16], the Genetic Algorithm (GA) [8], Particle Swarm Optimization (PSO) [2, 44], and Canonical Correlation Analysis (CCA) [42]. Jinjoo et al. [57] introduce a technique for dimensionality reduction using Structured Sparse Principal Component Analysis (SSPCA): features are extracted as SIFT points, and the optimal features are obtained through SSPCA. Yongsheng et al. [45] describe a decomposition technique for encoding frequency and spatial information, in which the input image is broken down into sub-regions using Spatial Pyramid Matching (SPM). SIFT features are extracted from the smaller regions in the initial stage, global features are then extracted with a codebook, and irrelevant features are finally reduced using K-means clustering, yielding a maximum classification accuracy of 85.78% on the Caltech-101 dataset. Shuangshuang et al. [6] suggested a sampling-based method for object classification with three sampling steps: random, saliency-based, and dense sampling. Objects are categorized into semantic groups using these sampling methods, and a supervised dimensionality reduction approach then removes irrelevant features and selects only the best features for classification; the method achieves a classification accuracy of 67% on the STL-10 dataset. In [29], a dynamic weighted discrimination power analysis method is introduced to select the most discriminant coefficients and achieve the best recognition accuracy; the coefficients are selected according to their discrimination strength. Lu et al. [30] introduce a dynamic weighted discriminant power analysis (DWDPA) approach for selecting the best DCT features. No pre-masking window is required in DWDPA because the approach selects features with high discrimination power; experiments on three datasets show significant recognition accuracy. Several other feature reduction techniques have also been introduced in the literature, such as random projection (RP) [32], 2-dimensional RP [33], and a few more [4, 53, 54]. The major advantage of feature reduction and optimal feature selection is to achieve the maximum recognition rate in minimum computational time. The reduced features are finally classified by supervised and unsupervised learning methods such as Linear Support Vector Machine (L-SVM) [15, 56], Cubic-SVM (C-SVM), Quadratic-SVM (Q-SVM), Fine K-Nearest Neighbor (F-KNN), Cubic-KNN (C-KNN), deep learning, Ensemble Subspace-KNN (EKNN) [23, 48], Bayesian models, Random Forest, and Naive Bayes classifiers [17].

The above-discussed methods do not work well on larger datasets containing hundreds of classes. Preprocessing is an important step that is rarely performed in existing object classification studies, although it improves classification accuracy by mitigating background factors such as illumination. Moreover, we notice that [45, 57] extract SIFT features from the input images but achieve a maximum accuracy of only 85.78% on the Caltech101 dataset. Furthermore, fusing two pre-trained DCNN models is rarely attempted because each model has a different input size, which complicates fusion, even though fusing the patterns of two DCNN models provides better classification performance than an individual model. Motivated by these observations, this article proposes a new DCNN-based method for object classification from static images. The proposed method is implemented in two parallel steps. In the first step, an improved saliency-based method is proposed and SIFT point features are extracted. Then, VGG and AlexNet pre-trained DCNN models are used to extract deep CNN features by applying activation on a fully connected (FC) layer. Thereafter, a Rényi entropy-controlled method is proposed and applied to the DCNN and SIFT point feature matrices to select the best features. Because the feature dimensions of the models differ, which poses a problem for fusion, augmentation is performed to make both matrices equal in size. Both feature matrices are then fused by a serial-based method and stored in a new matrix, which is finally fed to an ensemble classifier for classification.

2 Materials and methods

In this research, we use three well-known datasets, namely Caltech101, PASCAL 3D+, and the Barkley 3D dataset, to deal with complex object detection and classification. These datasets contain hundreds of object classes and thousands of images. To overcome their challenges, such as illumination, color, and similarity among object classes, we propose a new method for object classification based on DCNN feature extraction along with SIFT points. The proposed method consists of two major steps, which are executed in parallel. In the first step, SIFT point features are extracted from the mapped RGB segmented objects. In the second step, DCNN features are extracted through pre-trained CNN models, namely AlexNet and VGG. The SIFT point and DCNN features are then combined into one matrix by a serial-based fusion method, and the best features are selected for final classification. Each step is described in detail in Sections 2.1 to 2.5, and the comprehensive flow diagram is presented in Fig. 1.

Fig. 1 Flow diagram of the proposed object classification method

2.1 Improved saliency method

An improved saliency method is proposed for single-object detection by building on an existing saliency approach, HDCT saliency estimation, which is used to extract a single object from an image. The idea behind the improvement is to apply a color-space transformation before the image is given to the saliency method. The LAB color transformation is used for this purpose; it describes color in three dimensions, where L* denotes lightness and a* and b* denote the green-red and blue-yellow color components, respectively. The L* component ranges from 0 (black) to 100 (white), whereas the a* and b* channels encode the chromatic content of the RGB image. This transformation is defined as follows:

Let U(i, j) denote an input RGB image of size N × M. For RGB to LAB conversion, an RGB to XYZ conversion is performed first, as given in Eqs. 1–10:

$$ \left[\begin{array}{l}\varphi (X)\\ \varphi (Y)\\ \varphi (Z)\end{array}\right]=\left[M\times N\right]\left[\begin{array}{l}{\varphi}^r\\ {\varphi}^g\\ {\varphi}^b\end{array}\right] $$
(1)

where φ(X), φ(Y), and φ(Z) denote the X, Y, and Z channels, which are computed from the red (φr), green (φg), and blue (φb) channels. The φr, φg, and φb channels are defined as:

$$ {\varphi}^r=\sum \limits_{k=1}\frac{\varphi_k}{\triangle_k},\kern1em k=\mathrm{Red} $$
(2)
$$ {\varphi}^g=\sum \limits_{k=2}\frac{\varphi_k}{\triangle_k},\kern1em k=\mathrm{Green} $$
(3)
$$ {\varphi}^b=\sum \limits_{k=3}\frac{\varphi_k}{\triangle_k},\kern1em k=\mathrm{Blue} $$
(4)

Then LAB conversion is defined as:

$$ \left({\varphi}^L={\beta}_1{f}_y-16\right),{\beta}_1=116 $$
(5)
$$ \left({\varphi}^{\ast A}={\beta}_2\left({f}_x-{f}_y\right)\right),{\beta}_2=500 $$
(6)
$$ \left({\varphi}^{\ast B}={\beta}_3\left({f}_y-{f}_z\right)\right),{\beta}_3=200 $$
(7)

where fx, fy, and fz are linear functions, which are computed as:

$$ {f}_x=\begin{cases}\sqrt[3]{x_r} & {x}_r>\epsilon \\ \frac{k{x}_r+16}{116} & \text{otherwise}\end{cases},\kern1em {x}_r=\frac{X}{X_r} $$
(8)
$$ {f}_y=\begin{cases}\sqrt[3]{y_r} & {y}_r>\epsilon \\ \frac{k{y}_r+16}{116} & \text{otherwise}\end{cases},\kern1em {y}_r=\frac{Y}{Y_r} $$
(9)
$$ {f}_z=\begin{cases}\sqrt[3]{z_r} & {z}_r>\epsilon \\ \frac{k{z}_r+16}{116} & \text{otherwise}\end{cases},\kern1em {z}_r=\frac{Z}{Z_r} $$
(10)
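For illustration, a minimal Python sketch of the RGB-to-LAB step is given below, using scikit-image's rgb2lab as a stand-in for the explicit chain of Eqs. 1–10; the function and variable names are ours, not the authors', and the random array stands in for an actual N × M × 3 input U(i, j).

```python
import numpy as np
from skimage import color

def rgb_to_lab(rgb):
    """RGB -> XYZ -> LAB chain of Eqs. 1-10 via scikit-image.
    L* lies in [0, 100]; a* (green-red) and b* (blue-yellow) are signed."""
    return color.rgb2lab(rgb)

lab = rgb_to_lab(np.random.rand(64, 64, 3))   # placeholder for an N x M x 3 image
L, a, b = lab[..., 0], lab[..., 1], lab[..., 2]
```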

Thereafter, we employ a saliency approach for salient object detection. The salient region detection technique detects the salient region of an image by utilizing a high-dimensional color transform. In this work, superpixel saliency features are used to detect the initial salient regions of the input images. The superpixels of the LAB image are formulated by Eq. 11:

$$ Y=\left\{{p}_1,....{p}_N\right\} $$
(11)

For low computational cost and good performance, we utilize SLIC superpixels [1] with a total of N = 400 superpixels. The color features are computed from the LAB color space using mean, variance, standard deviation, and skewness. These color features are concatenated with histogram features, which are effective for the saliency approach. The Euclidean distance between the extracted color features is calculated by Eq. 12:

$$ \overrightarrow{D}=\overrightarrow{D}(A)={\left\Vert {l}_i-{l}_j\right\Vert}_2^2 $$
(12)

where li and lj denote the ith and jth features in the given matrix A. In this work, the global contrast/color statistics of objects are used to define the saliency values of the pixels with a histogram-based method. The saliency value of a pixel is defined by Eq. 13:

$$ S\left({\varphi}_k\right)={\sum}_{\forall {\varphi}_i\in I}\overrightarrow{D}(A) $$
(13)

where \( \overrightarrow{D}(A) \) is the color distance between the features li and lj in the LAB color space. By rearranging the above equation, the saliency value for each color is obtained by Eq. 14:

$$ S\left({\varphi}_k\right)=\sum \limits_{l=1}^n{f}_lD\left({c}_j,{c}_l\right) $$
(14)

where n, cj, and fl denote the total number of distinct pixel colors, the color value of pixel φk, and the frequency of the color pixel, respectively. HOG and SFTA texture features are utilized as shape and texture features. After the feature vector of each superpixel is computed, random forest regression is used to estimate the salient degree of each region. To further identify the most salient pixels in the initial saliency map, a Trimap is constructed using adaptive thresholding. First, the input image is divided into 2 × 2, 3 × 3, and 4 × 4 patches, and Otsu thresholding is applied to each patch individually. Finally, the Trimap is obtained by global thresholding, which is formulated by Eq. 15:

$$ T(i)=\begin{cases}1 & T(i)\ge \tau \\ 0 & T(i)\le \tau \\ \text{unknown} & \text{otherwise}\end{cases} $$
(15)

where τ denotes the global threshold value. After obtaining the optimal coefficient α (the estimate for the saliency map), the saliency map is constructed as follows:

$$ {S}_{LS}\left({X}_i\right)=\sum \limits_{j=1}^l{K}_{ij}{\alpha}_j,i=1,2,.....,N $$
(16)

where K denotes the high-dimensional vector representing the color of the input image. The final map is obtained by adding the spatial and color-based saliency maps through Eq. 17:

$$ {S}_{final}\left({X}_i\right)={S}_{LS}\left({X}_i\right)+{S}_S\left({X}_i\right),i=1,2,....,N $$
(17)

The final spatial saliency map is defined by Eq. 18:

$$ {S}_S\left({X}_i\right)=\exp \left(-K\frac{\min_{j\in f}d\left({P}_i,{P}_j\right)}{\min_{j\in \beta }d\left({P}_i,{P}_j\right)}\right) $$
(18)

where K = 0.5, and minj ∈ β d(Pi, Pj) and minj ∈ f d(Pi, Pj) are the Euclidean distances from the ith pixel to the definite background pixels and to the definite foreground pixels, respectively. The effects of the improved saliency method are shown in Fig. 2, where the first row shows the input images, the second row presents the LAB transformation, the third row shows the improved saliency image in binary form, and the last row depicts the mapped RGB image.
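A hedged Python sketch of the superpixel color-statistics step described above (SLIC with N = 400 superpixels, then per-superpixel mean, variance, standard deviation, and skewness in LAB) is given below; the helper name and library calls are stand-ins for the authors' implementation, and the histogram, HOG, and SFTA cues are omitted.

```python
import numpy as np
from scipy.stats import skew
from skimage.segmentation import slic

def superpixel_color_stats(lab, n_segments=400):
    """Per-superpixel LAB statistics used as color cues in Sec. 2.1:
    mean, variance, standard deviation, and skewness of each channel."""
    labels = slic(lab, n_segments=n_segments, convert2lab=False)  # input is already LAB
    feats = []
    for sp in np.unique(labels):
        pix = lab[labels == sp]                                   # (n_pixels, 3)
        feats.append(np.hstack([pix.mean(0), pix.var(0), pix.std(0), skew(pix, 0)]))
    return labels, np.asarray(feats)                              # (n_superpixels, 12)

# labels, color_feats = superpixel_color_stats(lab_image)
# color_feats would then be concatenated with histogram, HOG, and SFTA features.
```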

Fig. 2 Proposed improved saliency method results

2.2 SIFT features

The Scale Invariant Feature Transform (SIFT) was originally introduced in 2004 [43] and has since proven to be a robust descriptor for object detection and recognition. SIFT features are computed in four steps. In the first step, local key points that are salient and stable in the given image are determined. Features are then extracted around each key point to describe the local image region relative to its scale-space coordinates. In the second step, weak features are removed using a specific threshold value. In the third step, orientations are assigned to each key point based on local image gradient directions. Finally, a 1 × 128 dimensional feature vector is extracted, and bi-linear interpolation is performed to improve the robustness of the features. This procedure is defined through Eqs. 19–21:

$$ \xi \left(\mu, \nu, \sigma \right)={\psi}_G\left(\mu, \nu, \sigma \right)\otimes {S}_{final}\left({X}_i\right) $$
(19)
$$ {\psi}_G\left(\mu, \nu, \sigma \right)=\frac{1}{2\pi {\sigma}^2}{e}^{-\frac{1}{2}\left(\frac{\mu^2+{\nu}^2}{2{\sigma}^2}\right)} $$
(20)
$$ D\left(\mu, \nu, \sigma \right)=\left({\psi}_G\left(\mu, \nu, k\sigma \right)-{\psi}_G\left(\mu, \nu, \sigma \right)\right)\otimes {S}_{final}\left({X}_i\right)=\xi \left(\mu, \nu, k\sigma \right)-\xi \left(\mu, \nu, \sigma \right) $$
(21)

where ξ(μ, ν, σ) is the scale space of the image, ψG(μ, ν, σ) denotes the variable-scale Gaussian, k is a multiplicative factor, and D(μ, ν, σ) denotes the difference of Gaussians convolved with the segmented image.
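A hedged OpenCV sketch of the SIFT point feature extraction on the saliency-segmented image is shown below; OpenCV implements the same DoG pipeline as Eqs. 19–21, and averaging the 128-dimensional descriptors into a single 1 × 128 vector is our assumption, since the paper does not state how the per-keypoint descriptors are pooled.

```python
import cv2
import numpy as np

def sift_point_vector(segmented_bgr):
    """SIFT keypoints/descriptors on the saliency-segmented image (Eqs. 19-21).
    Each descriptor is 1 x 128; they are averaged into a single 1 x 128 vector
    so the later fusion step (Sec. 2.5) receives a fixed-size input."""
    gray = cv2.cvtColor(segmented_bgr, cv2.COLOR_BGR2GRAY)
    sift = cv2.SIFT_create()                      # DoG scale space + orientation assignment
    _, desc = sift.detectAndCompute(gray, None)
    if desc is None:                              # no stable key points found
        return np.zeros(128, dtype=np.float32)
    return desc.mean(axis=0)                      # 1 x 128 SIFT point feature

vec = sift_point_vector(np.random.randint(0, 255, (128, 128, 3), dtype=np.uint8))
```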

2.3 Deep CNN features

Recently, in the domains of computer vision, machine learning, and pattern recognition, deep learning has shown improved performance for image classification on large datasets [20]. Deep learning architectures such as deep CNNs and recurrent neural networks have been employed for human action recognition, speech recognition, document classification, agricultural plants, medical imaging, and many other areas, and show superior performance. In object classification, CNNs attract much attention owing to their ability to automatically determine appropriate contextual features for image categorization problems. A simple CNN model consists of four types of layers. Initially, an input image is passed to a convolution layer, whose neurons are connected to local regions of the input; each neuron computes a dot product between its weights and the small region it is connected to in the input volume. Thereafter, activation is performed using a ReLU layer, which does not change the size of the input. A pooling layer is then applied to reduce noise effects in the extracted features. Finally, high-level features are computed by a fully connected (FC) layer.
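As an illustration of these four layer types, a minimal PyTorch sketch is given below; the layer sizes and class count are arbitrary and this toy model is not one of the pre-trained networks used later in the paper.

```python
import torch
import torch.nn as nn

class SimpleCNN(nn.Module):
    """Toy network with the four layer types described above."""
    def __init__(self, num_classes=10):
        super().__init__()
        self.conv = nn.Conv2d(3, 16, kernel_size=3, padding=1)  # neurons over local regions
        self.relu = nn.ReLU()                                    # keeps the spatial size
        self.pool = nn.MaxPool2d(2)                              # 2x2 max pooling
        self.fc   = nn.Linear(16 * 112 * 112, num_classes)       # high-level features

    def forward(self, x):                          # x: (batch, 3, 224, 224)
        x = self.pool(self.relu(self.conv(x)))
        return self.fc(torch.flatten(x, 1))

y = SimpleCNN()(torch.randn(1, 3, 224, 224))       # -> logits of shape (1, 10)
```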

In this article, we employ two pre-trained deep CNN models, VGG19 and AlexNet, for feature extraction. These models incorporate convolution, pooling, normalization, ReLU, and FC layers. As discussed above, the convolution layer extracts local features from an image, which is formulated by Eq. 22:

$$ {g_i}^{(L)}={b_i}^{(L)}+{\sum}_{j=1}^{{m_1}^{\left(L-1\right)}}{\psi_{i,j}}^{(L)}\times {h_j}^{\left(L-1\right)} $$
(22)

where gi(L) denotes the output of layer L, bi(L) is the bias value, ψi, j(L) denotes the filter connecting the jth feature map, and hj(L − 1) denotes the output of layer L − 1. The pooling layer is then defined; it extracts the maximum responses from the lower convolutional layer with the objective of reducing irrelevant features. Max pooling also mitigates overfitting, and 2 × 2 pooling is mostly performed on the extracted matrix. Mathematically, max pooling is described through Eqs. 23–25:

$$ {m}_1^{(L)}={m}_1^{\left(L-1\right)} $$
(23)
$$ {m}_2^{(L)}=\frac{m_2^{\left(L-1\right)}-F(L)}{S^L}+1 $$
(24)
$$ {m}_3^{(L)}=\frac{m_3^{\left(L-1\right)}-F(L)}{S^L}+1 $$
(25)

where SL denotes the stride, F(L) the filter size (e.g., 2 × 2 or 3 × 3), and m1(L), m2(L), and m3(L) the dimensions of the resulting feature map. The other layers, ReLU and fully connected (FC), are defined as:

$$ {\operatorname{Re}}_i^{(l)}=\max \left(h,{h}_i^{\left(l-1\right)}\right) $$
(26)
$$ F{c}_i^{(l)}=f\left({z}_i^{(l)}\right)\kern0.3em with\kern0.3em {z}_i^{(l)}=\kern0.5em \sum \limits_{j=1}^{m_1^{\left(l-1\right)}}\sum \limits_{r=1}^{m_2^{\left(l-1\right)}}\sum \limits_{s=1}^{m_3^{\left(l-1\right)}}{w}_{i,j,r,s}^{(l)}{\left(F{c}_i^{\left(l-1\right)}\right)}_{r,s} $$
(27)

where \( {\mathit{\operatorname{Re}}}_i^{(l)} \) denotes the ReLU layer and \( {Fc}_i^{(l)} \) denotes the FC layer. The FC layer follows the convolution and pooling layers; it is similar to a convolution layer, and most researchers perform activation on the FC layer for deep feature extraction.
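A small worked example of Eqs. 23–25 is given below, computing the spatial size left after a convolution or pooling layer; the sample numbers are illustrative only.

```python
def output_size(m_prev, f, stride):
    """Spatial size after a conv/pooling layer (Eqs. 24-25):
    m^(L) = (m^(L-1) - F(L)) / S^L + 1."""
    return (m_prev - f) // stride + 1

# 2 x 2 max pooling with stride 2 on a 224 x 224 feature map:
print(output_size(224, 2, 2))   # 112, i.e. 224x224 -> 112x112 (channels unchanged, Eq. 23)
```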

2.4 Pre-trained deep CNN networks

In this research, we use two pre-trained deep CNN models, VGG and AlexNet, for deep feature extraction. The AlexNet deep CNN model was designed by Krizhevsky et al. [27] using the ImageNet dataset. The network contains five convolution layers, three pooling layers, and three FC layers along with a softmax classification function, and is trained on input images of size 227 × 227 × 3.

The VGG-19 CNN network was proposed by Simonyan and Zisserman [20]; it contains 16 convolution layers and 3 FC layers, for a total of 19 learnable weight layers, along with a softmax function. The network is trained on the ImageNet dataset and shows excellent performance. It also uses dropout regularization in the FC layers and applies the ReLU activation function after all convolution layers. The size of the training input images is 224 × 224 × 3.

2.5 Features extraction and fusion

In this section, we present the proposed feature extraction and fusion strategy. Features are extracted from pre-trained deep CNN models using different layers. In this work, two pre-trained models, VGG19 and AlexNet, are used for feature extraction. The major aim of extracting deep CNN features from two models is to improve classification accuracy, because each model has distinct characteristics and produces different features. Taking advantage of this, we extract features by performing activation on the FC7 layer and applying max pooling to remove noise factors. Thereafter, an entropy-controlled method is applied for feature reduction. The proposed feature extraction and reduction architecture is shown in Fig. 3. As illustrated, three types of features are extracted: AlexNet deep CNN, VGG19 CNN, and SIFT. For AlexNet and VGG19, the convolution layer is employed as the input layer, and activation is then performed on the FC7 layer of both networks to extract deep CNN features. The size of the deep CNN features at the FC7 output layer is 1 × 4096 for both networks. Because this dimensionality is high, we perform max pooling with a filter size of 2 × 2, which removes noise effects and selects the maximum-valued feature within each filter.
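A hedged PyTorch sketch of this FC7 step follows, using torchvision's pre-trained AlexNet and VGG-19 as stand-ins for the MATLAB models used in the paper; the classifier-layer indices, the one-dimensional size-2 max pooling used to halve 4096 to 2048, and the dummy input tensor are assumptions made for illustration (the ImageNet weights download on first use, and the `pretrained` argument name may differ across torchvision versions).

```python
import torch
import torch.nn.functional as F
from torchvision import models

alexnet = models.alexnet(pretrained=True).eval()   # ImageNet-pretrained weights
vgg19   = models.vgg19(pretrained=True).eval()

def fc7_features(model, x, n_cls_layers):
    """Activation on FC7 (1 x 4096), then size-2 max pooling -> 1 x 2048."""
    with torch.no_grad():
        f = model.features(x)
        f = torch.flatten(model.avgpool(f), 1)
        f = model.classifier[:n_cls_layers](f)        # stop right after FC7 + ReLU
    return F.max_pool1d(f.unsqueeze(1), kernel_size=2).squeeze()

x = torch.randn(1, 3, 224, 224)            # placeholder for a preprocessed input image
a_feat = fc7_features(alexnet, x, 6)       # AlexNet FC7 -> tensor of shape (2048,)
v_feat = fc7_features(vgg19,  x, 5)        # VGG-19 FC7  -> tensor of shape (2048,)
```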

Fig. 3 Proposed deep CNN and SIFT features fusion and reduction method for object classification

After max pooling, new feature vectors of size 1 × 2048 are obtained, which are further refined by an entropy-controlled feature reduction method. The extracted feature vectors can produce good results, but they increase the execution time; our focus is therefore to improve classification accuracy while decreasing execution time. This problem is resolved by an entropy-controlled method. Entropy provides knowledge about the randomness of a signal by characterizing the disorder of the system [50]. Owing to its capacity to describe system behavior, entropy gives valuable information that can be employed in feature design [7]. Among several variants, we use the Rényi entropy for feature reduction. In the context of fractal dimension estimation, the Rényi entropy forms the basis of the theory of generalized dimensions, and the fractal dimension estimates the change patterns of the given feature space. The Rényi entropy is defined as follows:

Let f1, f2, …, fn denote the A feature space after max pooling, g1, g2, g3, …, gn denote the B feature space after max pooling, and ξ1, ξ2, …, ξn denote the ξ feature space, where A is the AlexNet DCNN feature vector, B is the VGG19 DCNN feature vector, and ξ is the SIFT point feature vector. The dimensions of these feature spaces are 1 × 2048, 1 × 2048, and 1 × 128, respectively. The entropy is formulated by Eq. 28:

$$ {E}_{\alpha }(X)=\frac{1}{1-\alpha}\log \left(\sum \limits_{i=1}^n{p}_i^{\alpha}\right) $$
(28)

where α ≥ 0 and α ≠ 1, X ∈ (fn, gn, ξn), and pi denotes the probability of the ith feature in the extracted feature spaces A, B, and ξ, defined by pi = Pr(X = i), with n the length of each feature space. The entropy function yields a new N × M feature vector that controls the randomness of each feature space. Each N × M feature vector is then sorted in ascending order, and the top 1000 features are selected from the A and B vectors and 100 features from the ξ vector. Mathematically, this process is described by Eq. 29:

$$ E(A)=\Phi \left({f}_n,\varrho \right),E\left(B\ \right)=\Phi \left({g}_n,\varrho \right),E\left(\xi \right)=\Phi \left({\xi}_n,\varrho \right) $$
(29)

where E(A), E(B), and E(ξ) denote the entropy information of feature spaces A, B, and ξ, respectively, Φ denotes the sorting function, and ϱ denotes the ascending-order operation. Thereafter, the E(A) and E(B) entropy-selected features are fused into one matrix by a simple serial-based method, which returns a feature vector of size 1 × 2000; this vector is further fused with the SIFT point features by the serial-based method, as shown in Fig. 3 and expressed in Eqs. 30–31:

$$ \prod (Fused)=\left(N\times 1000\right)+\left(N\times 1000\right)+\left(N\times 100\right) $$
(30)
$$ \prod (Fused)=N\times {f}_i $$
(31)
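A hedged NumPy sketch of the entropy-controlled selection (Eq. 28) and the serial fusion of Eqs. 29–31 is given below; how the probabilities pi are estimated from the features is not specified in the paper, so normalized feature magnitudes and α = 0.5 are used here purely as assumptions, along with random placeholder vectors.

```python
import numpy as np

def renyi_select(feat, keep, alpha=0.5):
    """Keep the `keep` features contributing most to the Renyi entropy of Eq. 28.
    Normalized feature magnitudes stand in for the probabilities p_i
    (an assumption; the paper does not specify the estimator)."""
    p = np.abs(feat) / (np.abs(feat).sum() + 1e-12)
    score = p ** alpha                        # per-feature term inside Eq. 28
    return feat[np.argsort(score)[-keep:]]    # highest-contribution features

def serial_fuse(alex_2048, vgg_2048, sift_128):
    """Serial (concatenation) fusion of Eqs. 30-31: 1000 + 1000 + 100 = 2100."""
    return np.concatenate([renyi_select(alex_2048, 1000),
                           renyi_select(vgg_2048, 1000),
                           renyi_select(sift_128, 100)])

fused = serial_fuse(np.random.rand(2048), np.random.rand(2048), np.random.rand(128))
print(fused.shape)   # (2100,)
```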

The size of the final feature vector is 1 × 2100, which is fed to an ensemble classifier for classification. The ensemble classifier is a supervised learning method that requires training data for prediction; an ensemble combines the outputs of several classifiers to produce a better system. The formulation of the ensemble method is given below.

Let the extracted features and their corresponding labels be ((f1, y1), (f2, y2), …, (fn, yn)), where fi denotes the extracted features, typically vectors of the form (fi + 1, fi + 2, …, fi + n), and the unknown function is y = f(x). An ensemble classifier is a set of classifiers whose individual decisions are combined into one classifier by weighting and voting. The ensemble classifier is thus formulated as:

$$ \widehat{Y}= Sign\left(\sum \limits_{k=1}^K{\widehat{w}}_k\kern0.1em {h}_k(x)\right) $$
(32)

where hk(x) = h1(x), h2(x), …, hk(x) are the individual classifiers and \( {\widehat{w}}_k={\widehat{w}}_1,{\widehat{w}}_2,\dots {\widehat{w}}_k \) are their weights. The proposed method is tested on three datasets, Caltech101, PASCAL 3D+, and the Barkley 3D dataset. Sample labeled results are shown in Figs. 4 and 5.
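As a hedged illustration of the weighted voting in Eq. 32, the sketch below uses scikit-learn's AdaBoost over decision stumps, which learns both the weak classifiers hk and their weights wk; it is only an open-source analogue of the MATLAB Ensemble Boosted Tree classifier used in the experiments, not the authors' exact configuration.

```python
from sklearn.ensemble import AdaBoostClassifier

# Eq. 32: sign( sum_k w_k * h_k(x) ) -- boosting fits the weak learners h_k
# (shallow decision trees by default) and their weights w_k jointly.
ensemble = AdaBoostClassifier(n_estimators=100)

# ensemble.fit(X_train, y_train)     # rows of X_*: fused 1 x 2100 feature vectors
# y_pred = ensemble.predict(X_test)  # weighted-vote prediction
```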

Fig. 4 Proposed labeled classification results for the 3D dataset and Caltech101 dataset

Fig. 5 Proposed labeled classification results for PASCAL 3D+ dataset

3 Experimental results

The proposed method is evaluated on three publicly available datasets: Caltech 101, PASCAL 3D+, and the Barkley 3D dataset. The Caltech-101 dataset [14] consists of 9144 images from 102 distinct object classes, each class containing approximately 31–800 images. The dataset contains both RGB and grayscale images, which is a major issue because color features do not perform well on grayscale images when objects are recognized by their color. The PASCAL 3D+ dataset [11] is another challenging database used for object classification. It combines PASCAL VOC 2012 and ImageNet and contains a total of 22,394 images from 12 unique classes; the classes common to PASCAL VOC 2012 and ImageNet are merged into the new database, called PASCAL 3D+. The Barkley 3D object dataset [21] consists of 6604 images from 10 object classes, including bicycle, car, cellphone, head, iron, monitor, mouse, shoe, stapler, and toaster, with 474–721 images per class. A brief description of each dataset is given in Table 1. For classification, we use an Ensemble Boosted Tree (EBT) classifier and compare its performance with Linear SVM (LSVM), Quadratic SVM (QSVM), Cubic SVM (CSVM), Fine KNN (FKNN), Cubic KNN (CKNN), Decision Tree (DT), and Weighted KNN (WKNN). The performance of each classifier is assessed by three measures: accuracy, false negative rate (FNR), and execution time. All results are computed on a 3.4 GHz Core i7 7th-generation desktop computer with 16 GB of RAM and an NVIDIA GeForce 1070 GPU (8 GB, 256-bit), using MATLAB 2017b.

Table 1 Description of selected datasets

3.1 Caltech101 dataset classification results

In this section, we discuss the detailed results of our method on the selected datasets. For the Caltech 101 dataset, we define experiments on distinct numbers of classes: 20, 34, 50, and 102. The experiments are: a) classification of the selected classes using AlexNet DCNN features with the entropy-based selection method; b) classification of the selected classes using VGG-19 DCNN features with entropy-based feature selection; and c) fusion of deep CNN and SIFT features along with the entropy-controlled selection method. Classification is performed for each setting, and the performance on all 102 classes is finally compared, in terms of accuracy and execution time, with that on 20, 34, and 50 classes. For classification, a 50:50 approach is adopted and 10-fold cross-validation is employed; the 50:50 approach means that 50% of the images from each class are used for training the classifier and the remaining 50% for testing. The detailed results are explained in the sections below.
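A hedged sketch of this evaluation protocol in scikit-learn follows, assuming the fused 1 × 2100 vectors are stacked into a matrix X with labels y (random placeholders below); AdaBoost again stands in for the MATLAB EBT classifier, and the exact split and cross-validation settings are assumptions for illustration.

```python
import numpy as np
from sklearn.ensemble import AdaBoostClassifier
from sklearn.model_selection import train_test_split, cross_val_score

# Placeholder data: in practice X holds the fused 1 x 2100 vectors, y the class labels.
X = np.random.rand(400, 2100)
y = np.repeat(np.arange(10), 40)

# 50:50 split, stratified so every class keeps the same train/test balance.
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.5,
                                          stratify=y, random_state=0)

clf = AdaBoostClassifier(n_estimators=100)        # stand-in for the EBT classifier
cv_scores = cross_val_score(clf, X_tr, y_tr, cv=10)   # 10-fold cross-validation
test_acc = clf.fit(X_tr, y_tr).score(X_te, y_te)      # accuracy on the held-out 50%
```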

3.1.1 AlexNet deep CNN with entropy-controlled selection

In the first step, we extract DCNN features of the top 20 object classes with the pre-trained AlexNet model and select the best features using the entropy-based method. The selected features are fed to the classifiers, and the best classification accuracy of 86.5% is achieved with the Ensemble Boosted Tree (EBT). The classification accuracy of the EBT classifier is given in Table 2. The testing time of the EBT classifier is 105.00 s, the best among the compared classification methods; the second-best execution time is 114.25 s for Quadratic SVM, which achieves a classification accuracy of 83.70% with an FN rate of 16.30%. In the second step, classification is performed on 34 classes, and a maximum classification accuracy of 84.6% is obtained with the EBT classifier, with an FN rate of 15.4%. Classification is also performed with other methods, and the second-highest accuracy of 78.2% is achieved with Cubic SVM, as given in Table 2. The best execution time for the classification of 34 classes is 172.63 s, which shows that the execution time increases as more classes are added. In the third step, classification is performed on 50 classes, and a maximum classification accuracy of 83.5% is obtained with the EBT classifier, about 1% lower than for 20 and 34 classes; this degradation occurs as the number of object classes increases.

Table 2 Classification accuracy for Caltech101 dataset using AlexNet deep CNN along entropy-controlled features selection

Moreover, the best execution time for 50 object classes is 193.00 s, which is better than the other classification methods shown in Table 2 but higher than for 20 and 34 classes. Finally, classification is performed on 100 classes, and a maximum correct classification rate of 71.7% is obtained with the ensemble classifier. However, the FN rate increases to 28.3%, which is higher than for 20, 34, and 50 classes. The execution time of the ensemble classifier on 100 classes is 620.42 s, which is better than the other classification methods given in Table 2, but the overall execution time for 20 object classes remains the best, showing that increasing the number of classes affects both classification accuracy and execution time.

3.1.2 VGG-19 deep CNN with entropy-controlled selection

In this experiment, classification is performed on 20, 34, 50, and 100 object classes using VGG-19 deep CNN features with the entropy-controlled best feature selection approach. In the first step, 20 object classes are randomly selected and classified. For 20 object classes, the best classification accuracy of 92.0% with an FN rate of 8.0% is achieved with the EBT classifier. The results of the EBT classifier are also compared with other supervised learning methods; the second-best accuracy of 91.1% with an FN rate of 8.9% is obtained with CSVM, as presented in Table 3. The execution time of the ensemble classifier is also calculated, and the best testing time of 88.129 s is obtained, which compares favorably with the other classification methods such as LSVM and QSVM in Table 3. In the second step, 34 object classes are selected randomly and classified. The best classification accuracy of 84.6% with an FN rate of 15.4% is achieved with the EBT classifier, as presented in Table 3. The performance of the EBT classifier is compared with seven other supervised learning methods, and the second-best accuracy of 78.2% is achieved with CSVM.

Table 3 Classification accuracy for Caltech101 dataset using VGG deep CNN features with the entropy-controlled method

Moreover, the best execution time for the classification of 34 object classes is 172.63 s with EBT, which is significantly better than the other methods, whereas the worst execution time for 34 object classes is 327 s with Fine KNN. Thereafter, 50 object classes are randomly selected for classification. Increasing the number of classes affects both the classification accuracy and the execution time; however, using VGG deep CNN features with the entropy-controlled method, the best classification accuracy of 86.0% is achieved, an increase of up to 2.55% compared with the AlexNet deep features. The computation time on VGG deep features is also improved, with a best time of 168.66 s, better than the AlexNet deep features and the other supervised learning methods given in Table 3. Finally, classification is performed on all 102 object classes, and the best classification accuracy of 73.8% is achieved with EBT, executed in 454.270 s. The classification accuracy of EBT increases by up to 1.8% compared with the AlexNet model, but its execution time is higher than that of WKNN, which is 341.83 s, as presented in Table 3. From the above discussion, it is clear that the entropy-controlled selection method performs well with VGG-19 deep CNN features. Moreover, the execution time for object recognition with VGG features is improved compared with AlexNet features for 20, 50, and 102 object classes.

3.1.3 VGG-19 deep CNN and AlexNet CNN features fusion and selection

Feature fusion is an important step in machine learning because each feature extraction technique has unique characteristics. Therefore, in this study we use two pre-trained deep CNN models for feature extraction and select the best features from each model with the entropy-controlled method. Thereafter, SIFT features are extracted from the RGB silhouette image and fused with the selected deep CNN features. Finally, the fused features are fed to the classifiers for recognition. The best classification accuracies for 20, 34, 50, and 100 classes are 86.5%, 93.8%, 93.5%, and 89.7%, respectively, with the EBT classifier, as presented in Table 4. The classification accuracy of the EBT classifier on 34, 50, and 100 classes is significantly improved compared with the individual AlexNet and VGG-19 deep CNN features with the entropy-based selection approach. However, the execution time of the proposed method with the EBT classifier increases compared with Tables 2 and 3. The classification performance is further supported by the confusion matrices given in Fig. 6.

Table 4 Classification accuracy for Caltech101 dataset using a fusion of CNN features and selection with the entropy-controlled method
Fig. 6 Confusion matrix for 20, 34, 50, and 100 object classes using Caltech101 dataset

Finally, we compare our results with existing methods in Table 5. Jun et al. [38] propose a deep stack network (DSN) for object classification and achieve a classification accuracy of 89%. In [57], a sparse structured PCA method based on SIFT features and an SVM classifier is presented, reporting a classification accuracy of 83.9% on the Caltech101 dataset. Qing et al. [38] use a combination of YCbCr transformation and Extreme Learning (EL) for object classification and obtain an accuracy of 78% on the Caltech101 dataset. Yongsheng et al. [45] use K-means-based reduction of SIFT descriptors and achieve 85.78% accuracy. Our proposed method shows improved performance in both accuracy and execution time, achieving classification accuracies of 86.5%, 93.8%, 93.5%, and 89.7% for 20, 34, 50, and 100 object classes on the Caltech101 dataset. The execution time of the proposed method is plotted in Fig. 7.

Table 5 Comparison with existing methods
Fig. 7 Comparison of the execution time of all defined experiments on the Caltech101 dataset

3.2 Pascal3D + v1.1 dataset results

In this section, we present the results of the proposed algorithm on the PASCAL 3D+ dataset. The results are computed in four steps: a) AlexNet deep CNN feature extraction with entropy-controlled feature selection, b) VGG feature extraction with entropy-controlled selection, c) fusion of VGG and AlexNet deep CNN features with the selection method, and d) fusion of SIFT and deep CNN features with the entropy-controlled method. Three parameters (accuracy, FNR, and time) are used to measure the performance of each classifier. As discussed above, this dataset consists of 22,394 images of 12 unique object classes. For validation on this dataset, a 50:50 approach is adopted for training and testing, and it is followed in each step. The best classification accuracy for AlexNet deep CNN features with the entropy-controlled selection method is 76.8% with the ensemble classifier; the FN rate is 23.2% and the testing time is 154.5 s. The recognition results of the ensemble classifier are compared with other state-of-the-art classification methods in Table 6. In the second step, classification is performed using VGG-19 deep CNN features, achieving a maximum classification accuracy of 81.8%, an improvement over the AlexNet features. However, the execution time for VGG-19 deep features with the selection method increases on the ensemble classifier, and the best execution time of 240.86 s is achieved with the decision tree, as given in Table 6. In the third step, the selected AlexNet and VGG DCNN features are fused by a serial-based method and classified. The best classification accuracy of 87.4% is achieved with the ensemble classifier, a significant improvement after DCNN feature fusion. The execution time of the ensemble classifier in step 3 is 230.2 s, which is higher than that of FKNN, as presented in Table 6.

Table 6 Classification accuracy on PASCAL 3D dataset

Finally, the SIFT point and DCNN features are fused and classified. Using the EBT method, a maximum accuracy of 88.6% with an FN rate of 11.4% is obtained, a significant improvement over steps 1, 2, and 3. Moreover, the best execution time of 111.99 s is achieved with EBT, as given in Table 6. In Table 6, the performance of the EBT classifier is compared with several other supervised learning methods, such as LSVM, WKNN, and FKNN; these methods also perform well with the proposed feature fusion and selection method, which supports its validity. Moreover, the classification performance of the ensemble classifier is confirmed by the confusion matrix in Table 7.

Table 7 Confusion matrix for PASCAL 3D dataset using proposed features fusion and selection method

Finally, the results of the proposed method on the PASCAL 3D+ dataset are compared with existing methods in Table 8. Chi et al. [12] extract deep CNN features for object classification and report a classification accuracy of 81.8%. In [18], CNN-based features are extracted for object classification, and experiments on the PASCAL 3D+ dataset achieve an accuracy of 83.92%. Our proposed method shows improved performance on the PASCAL 3D+ dataset, achieving a classification accuracy of 88.60%.

Table 8 Classification accuracy comparison with state-of-the-art techniques on the Pascal3D+ dataset

3.3 Barkley 3D dataset

The Barkley 3D dataset consists of 6604 images from 10 object classes, including bicycle, car, cellphone, head, iron, monitor, mouse, shoe, stapler, and toaster, with 474–721 images per class. For validation of the proposed method on this dataset, a 50:50 approach is adopted for training and testing the classifier. To analyze the performance of the proposed method, we run four distinct experiments. In the first experiment, AlexNet deep CNN features are extracted and the best features are selected with the entropy method. The best classification accuracy for the first experiment is 97.90% with the EBT classifier, with an FN rate of 2.1%. The execution time of EBT for experiment 1 is 978.00 s, higher than the other classifiers, whereas the best execution time for experiment 1 is 245.68 s, as given in Table 9. In the second experiment, the VGG deep CNN features are extracted and the best features are selected by the entropy-controlled method. Ten-fold cross-validation is performed for testing, and the best accuracy of 97.5% with an FN rate of 2.5% is achieved. The execution time of the ensemble classifier for the VGG features is 900.5 s, as given in Table 9, whereas FKNN is the fastest, executing in 113.5 s. In the third experiment, to improve classification accuracy, we fuse the VGG and AlexNet deep CNN features and achieve a classification accuracy of 98.8% with the ensemble classifier. The fused matrix improves the classification accuracy compared with the individually selected deep CNN features, as presented in Table 9; however, the execution time of the fused approach increases to 5342 s on the ensemble classifier. To resolve this issue, in experiment 4 we fuse SIFT features with the deep CNN features and achieve a classification accuracy of 99.7% with an FN rate of 0.3%. The execution time of the proposed method on the ensemble classifier is also reduced, with a testing time of 177.49 s. Moreover, the classification accuracy of the ensemble classifier for experiment 4 is confirmed by the confusion matrix in Table 10.

Table 9 Proposed classification results on the 3D dataset
Table 10 Confusion matrix of proposed method results on the 3D dataset

3.4 Graphical discussion

In this section, the classification results are presented graphically. In Fig. 8a, the overall comparison of the three experiments on the Caltech-101 dataset is plotted, showing the minimum, average, and maximum accuracy achieved across all experiments in Section 3.1. The minimum accuracy over all classification methods is 52.5% with CSVM, the average accuracy is above 60%, and the maximum accuracy is 89.7% with the EBT classifier. The EBT classifier achieves its minimum accuracy of 71.7% with the AlexNet features and entropy-controlled selection approach, and its maximum accuracy of 89.7% with the proposed fusion and selection approach. Similarly, Fig. 8b reports the classification results for the PASCAL 3D+ dataset, computed over the four steps explained in Section 3.2 and presented as minimum, average, and maximum values. Finally, the classification results on the Barkley 3D dataset are computed over the four steps explained in Section 3.3. The EBT classifier outperforms the other classifiers on all datasets with the proposed fusion and selection method.

Fig. 8 Overall range of classification accuracy of all datasets. a Caltech 101 dataset, b PASCAL 3D Plus dataset, and c Barkley 3D dataset

4 Conclusion

A DCNN and SIFT point feature fusion and selection approach is proposed in this article. The proposed method works in two parallel steps. In the first step, the improved saliency method is implemented and SIFT point features are extracted from the RGB mapped image. In the second step, DCNN features are extracted using pre-trained CNN models. Max pooling is performed on the extracted feature matrices to remove noisy information. Thereafter, a Rényi entropy-controlled method is proposed, which controls the randomness of the extracted features and selects the best features. The selected features are finally fed to an ensemble classifier for object classification. The proposed method automatically detects and labels objects in a large number of sample images with minimal human intervention. The proposed approach performs classification in a supervised manner and achieves maximum classification accuracies of 93.8%, 88.6%, and 99.7% on the Caltech101, PASCAL 3D+, and Barkley 3D datasets, showing exceptional performance compared with existing methods. Moreover, the proposed method efficiently reduces computation time, which shows the importance of the selection method. In the future, we will implement a new generic method for multiple object detection and classification using deep learning and apply the method to real-time object classification.